"Galileo invented the telescope." This is what most people say when
they think about Galileo. However, Galileo did not actually
invent the telescope; he only made improvements to it so it
could be used for astronomy. Galileo did use it to make
many important discoveries about astronomy, though; many
of these discoveries helped to prove that the sun was the
center of the solar system. Galileo also made many important
contributions to Physics; he discovered that the path of a
projectile was a parabola, that objects do not fall with
speeds proportional to their weight, and much more. For
these discoveries, Galileo is often referred to as the founder
of modern experimental science. Galileo Galilei was born in
Pisa, Italy on February 15, 1564. Until he was about 10
years old, Galileo lived in Pisa; in 1574 the family moved to
Florence where Galileo started his education at
Vallombroso, a nearby monastery. In 1581, Galileo went to
the University of Pisa to study medicine, the field his father
wanted him to pursue. While at the University of Pisa,
Galileo discovered his interest in Physics and Mathematics;
he switched his major from medicine to mathematics. In
1585, he decided to leave the university without a degree to
pursue a job as a teacher. He spent four years looking for a
job; during this time, he tutored privately and wrote on some
discoveries that he had made. In 1589, Galileo was given the
job of professor of Mathematics at the University of Pisa.
His contract was not renewed in 1592, but he received another
job at the University of Padua as the chair of Mathematics;
his main duties were to teach Geometry and Astrology.
Galileo taught at the university for eighteen years. Galileo
made many important discoveries from the time he was born
to when he left the University of Padua, 1564-1610. While
attending the University of Pisa, 1584, Galileo discovered
the principle of isochronism. Isochronism sho...
|
{"url":"http://www.exampleessays.com/viewpaper/55808.html","timestamp":"2014-04-21T04:38:04Z","content_type":null,"content_length":"76713","record_id":"<urn:uuid:8d5fd0de-84f4-4b06-b254-ba0dfa6bd33d>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mission Viejo Precalculus Tutor
Find a Mission Viejo Precalculus Tutor
...Economics is about story-telling. How can we use economic analysis to describe the behavior of people, firms, or nations? How can we use economics to solve global issues such as world hunger,
poverty, health care, or taxation and public policy issues?
13 Subjects: including precalculus, English, calculus, ASVAB
I have a doctorate in Nuclear Physics and many years of engineering experience designing electronics for industry. For the last two years, I have been teaching at the Community College Level as an
Adjunct Professor. I have volunteer tutored math at a local high school and tutored privately.
8 Subjects: including precalculus, physics, calculus, geometry
I have more than 20 years of experience teaching and tutoring Mathematics at the elementary, high school and college levels. I've been tutoring Spanish for several years. I have a Bachelor's degree
in Mathematics and a Master's in Mathematics Education.
8 Subjects: including precalculus, Spanish, geometry, trigonometry
...Since then, I've tutored it for four or five years, and I also teach a Physics class for Biola Youth Academics. I will only tutor this subject if you have some sort of SAT prep book to work
from. I am not a specific SAT tutor like the ones who work for, say, Princeton Review.
6 Subjects: including precalculus, calculus, physics, trigonometry
I earned a Bachelor of Science in Chemical Engineering from UC Irvine in 2013. I have 2+ years of tutoring experience: In high school, I taught pre-Algebra, Algebra, Trigonometry, Pre-Calculus,
and Calculus to both groups and individual students. I have an SAT score of 1980.
12 Subjects: including precalculus, chemistry, physics, geometry
|
{"url":"http://www.purplemath.com/Mission_Viejo_precalculus_tutors.php","timestamp":"2014-04-19T07:26:02Z","content_type":null,"content_length":"24251","record_id":"<urn:uuid:0ad7c23b-67df-4bd1-8d0b-3ff0711faa98>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: Why Most Published Research Findings A
From: David C Campbell <amblema@bama.ua.edu> Date: Tue Sep 06 2005 - 18:26:16 EDT
One major problem with the article is the assumption that all
scientific papers use the statistical methods discussed in the paper.
Many are more descriptive. For example, my papers on the fossil
mollusks of the Carolinas list the species found and provide
descriptions of some of them. Although there may be errors in my
identifications, this does not fall under the issues raised by that paper.
Other papers use different statistical methods (with their own
weaknesses). Although 5% is a classical cutoff, the number of papers
that actually have a 5% error (as opposed to something rather less than
5%) is relatively small.
Although it is true that the statistics are often a weak point for
scientific papers, this does not necessarily totally invalidate the
papers that use them. Furthermore, the papers that use a 5% (or other,
usually smaller) cutoff in the way envisioned in the summary will give
the percent probability, so the reader can judge.
Ironically, a basic claim of the article itself involves bad
statistics. Use of the 95% confidence level does not mean that 1 in 20
papers that use it are wrong, on average. It means that each result
that has a 95% support level individually has a 1 in 20 chance of being
wrong. The probability that x independent items, each with 95%
probability of being correct, are all correct, is (19/20)^x. This is
less than 50% when x=14. However, it never reaches zero. Thus, you
cannot be sure that any of them are incorrect, but there is a greater
than 50% chance that at least one out of 14 samples is incorrect.
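A quick way to reproduce that arithmetic (a minimal sketch; the 0.95 level and the 50% threshold are taken straight from the argument above):

# Smallest x for which (19/20)^x drops below 1/2: the point at which it becomes
# more likely than not that at least one of x independent 95%-level results is wrong.
x = 1
while 0.95 ** x >= 0.5:
    x += 1
print(x, round(0.95 ** x, 4))   # 14 0.4877, matching the x = 14 quoted above

# Chance that two independent 95%-level results are *both* wrong:
print(0.05 ** 2)                # 0.0025, i.e. 0.25%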
Furthermore, additional studies affect the statistics. The odds that
both of two results with 95% confidence are wrong are only 0.25%.
Assuming that all of the studies that point to global warming as a
serious problem are wrong, for example, is unreasonable, even though
there are some people who make unsupported claims about the magnitude
of imminent change.
A serious problem in the way the article is interpreted by some
(referring to the responses posted on the website with the article) is
that it serves as an excuse to ignore scientific papers that you don't
like. In fact, it is the papers that support what you think that
require the greatest scrutiny, because the natural tendency is to
readily accept what sounds good.
Finally, I would note that the low funding for most science makes the
accusation of making up results for financial gain a bit dubious. In
particular, denying global warming has obvious financial benefits,
whereas accepting it as a problem that needs serious attention has
financial costs. That does not prove which is right, but it does point
to a definite risk of bias or pressure.
Dr. David Campbell
425 Scientific Collections
University of Alabama, Box 870345
Tuscaloosa AL 35487
"James gave the huffle of a snail in
danger But no one heard him at all" A.
A. Milne
Received on Tue Sep 6 18:27:46 2005
|
{"url":"http://www2.asa3.org/archive/asa/200509/0096.html","timestamp":"2014-04-20T11:45:38Z","content_type":null,"content_length":"8060","record_id":"<urn:uuid:a19bac39-f667-4bb0-9321-a427dc243a07>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Montara Calculus Tutors
...I worked a number of years as a data analyst and computer programmer and am well versed in communicating with people who have a variety of mathematical and technical skills.I have years of
experience in discrete math. I took a number of courses in the subject. I've used the concepts during my years as a programmer and have tutored many students in the subject.
49 Subjects: including calculus, physics, geometry, statistics
...I had several GRE students and they all exceeded their initial expectations in the math portion. "Excellent tutor improved my GRE score!" - Megan G. Oakland, CA Andreas was a huge help to me in
preparing for my GRE. We met several times and his combination of patience and humor helped to keep me on track despite my admittedly deep-seated math phobia.
41 Subjects: including calculus, geometry, statistics, algebra 1
...So I turned to something I had done for my friends in High School, my troops in the field, and my neighborhood kids, TUTORING! I have been doing it professionally now for over ten years. I love
it when my students understand a new concept.
10 Subjects: including calculus, geometry, precalculus, algebra 1
...I took honors precalculus in high school which eventually led me to earning a 5 on the AP Calculus BC exam. I earned a 740 on the SAT math section while in high school. I excelled at math in
high school and earned a hard science degree at a selective liberal arts college.
27 Subjects: including calculus, chemistry, English, reading
I graduated from Cornell University and Yale Law. I have several years of experience tutoring in a wide variety of subjects and all ages, from small kids to junior high to high school, and kids
with learning disabilities. I am also available to tutor adults who are preparing for the GRE, LSAT, or wish to learn a second language.
48 Subjects: including calculus, reading, English, French
|
{"url":"http://www.algebrahelp.com/Montara_calculus_tutors.jsp","timestamp":"2014-04-17T00:57:46Z","content_type":null,"content_length":"24984","record_id":"<urn:uuid:41465578-4730-4cd3-8c19-f4a8ee977962>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Manhattan Beach Trigonometry Tutor
Find a Manhattan Beach Trigonometry Tutor
...I work online and also tutor at public libraries near me.I am available for help on specific types of problems for Pre-Algebra, Algebra 1,Algebra 2, and Geometry as well as explaining concepts
for all of those areas of math. It would be most helpful to me to receive as much information regardin...
10 Subjects: including trigonometry, Spanish, geometry, algebra 1
...Finally grasping a frustrating concept is one of the most exhilarating and rewarding feelings we can experience. Patience and understanding are my strongest attributes, and help me fully
connect and guide each student towards a set of skills that are customized just for them! My specialties lie in Math and Science, but I am qualified to tutor the additional subjects listed below.
56 Subjects: including trigonometry, reading, chemistry, English
...I have experience working as a private math tutor and at an established math tutoring company. I am extremely patient and understanding, with an adaptable teaching style based on the student's
needs. I specialize in high school math subjects like Pre-Algebra, Algebra, Algebra 2/Trigonometry, Precalculus and Calculus.
9 Subjects: including trigonometry, calculus, geometry, algebra 1
...I've helped more than 100 students reach their highest potential on test day, and, to put my money where my mouth is, have taken and aced the SAT myself multiple times (Three perfect 2400s so
far - and one 2390, GRRR!). I'm looking for students who want to work hard to do the best they can in th...
26 Subjects: including trigonometry, English, reading, Spanish
...For the past several years, I have been working with students to improve their understanding of and grades in mathematics. In geometry, students are usually thrown off by proofs and
memorization, but after we work together, students usually become more comfortable with the subject. I have been helping students improve their essays for years, both in person and online.
22 Subjects: including trigonometry, English, ACT Reading, ACT Math
|
{"url":"http://www.purplemath.com/Manhattan_Beach_Trigonometry_tutors.php","timestamp":"2014-04-19T12:40:06Z","content_type":null,"content_length":"24759","record_id":"<urn:uuid:1194d5b6-a494-4463-a6b8-c6b4d9348ded>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Algebraic Topology-The Brouwer Degree
January 14th 2012, 12:00 AM #1
Hi there,
I'll be glad to receive some guidance to the attached question.
This is what I tried:
As for part (a): the chain $\sum_{i=0}^{6} [a_i, a_{i+1}] + [a_7,a_0]$ which generates $sd K$ is mapped to the generator $2([a_0,a_2] + [a_2,a_4] + [a_4,a_6] + [a_6,a_0])$ which generates
K. As far as I know, it implies that $\deg f = 2$, as we need.
My problem is with part (b) :
If we take one of the $a_i$'s and denote by $\pi$ the radial projection, we get that:
$h( \pi ( a_i) ) = f(a_i)$. But why do we need this radial projection in the first place? Isn't $|K|$ the unit sphere anyway? How can I define this radial projection explicitly for the whole
sphere and show that it is a homotopy? (I thought of making it a linear homotopy, but that will only imply that it is a homotopy on each quadrant separately... Is this radial projection just $\frac{f(x)}{||f(x)||}$?)
Please help me understand where I am wrong and solve part (b).
Thanks in advance !
|
{"url":"http://mathhelpforum.com/differential-geometry/195256-algebraic-topology-brouwer-degree.html","timestamp":"2014-04-16T13:32:39Z","content_type":null,"content_length":"32437","record_id":"<urn:uuid:0c85cf48-9e5a-4dd2-8718-6dcf01a02420>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
|
DEF CON 16 Punch Card Puzzle
Back in 2008, at DEF CON 16, G. Mark Hardy presented his second crypto challenge. I didn’t go to DC16, so I didn’t see the challenge (and even if I had, I wasn’t really tracking these at the time).
But in 2010, at ShmooCon, he dusted the challenge off and handed it out again, as nobody had solved it yet. I’d managed, with a buddy, to solve the ShmooCon badge puzzle that year, and after I got
home I started on the DC16 puzzle. It took me a few days, but I managed to beat it.
I’ve held off on writing this one up, because the original included a phone number, and I didn’t want to publish that without G. Mark’s approval. And though we’re in frequent contact, it wasn’t until
recently that I remembered to ask him about it. At his request, I’ve modified the puzzle slightly, with a different phone number (which I’m sure you’ll recognize). The method to arrive at the
solution is still the same as the original.
The puzzle was handed out in five pieces, each printed on old computer punch cards. Each card included some additional text and two lines of code. Here are the five cards (again, modified for a
different endgame):
As always, if you’d like to try to solve this yourself, then STOP now, as the rest of this post is full of spoilers. The text above is all that you need to get started.
One of the first things I did was to try the simple attacks: ROT-13, for example. After those gained me nothing, I wrote a simple python script to output letter frequencies for each card. The results
looked something like this:
A : 7 2 9 5 6
B : 2 5 9 0 0
C : 0 1 1 3 5
D : 1 3 3 2 6
E : 1 4 1 2 6
F : 6 4 6 2 4
G : 15 8 2 7 0
H : 8 1 1 1 1
I : 5 0 3 1 1
J : 2 1 5 2 5
K : 0 0 0 4 12
L : 7 4 1 2 2
M : 2 15 2 4 2
N : 0 3 2 1 2
O : 4 2 4 1 3
P : 3 2 5 0 8
Q : 0 0 5 5 1
R : 1 7 0 12 1
S : 2 4 2 6 2
T : 1 0 5 1 1
U : 3 4 6 2 0
V : 4 2 3 3 1
W : 2 0 2 6 1
X : 0 4 1 1 4
Y : 3 2 0 3 1
Z : 1 2 2 4 5
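The script itself isn't included in the post; a minimal sketch of this kind of letter-frequency counter (the card strings below are placeholders, not the real ciphertexts) might look like:

import string
from collections import Counter

# Hypothetical placeholders -- the real punch-card texts would go here.
cards = ["GQLAHG...", "MRXBFM...", "ABQJPB...", "RWSKGR...", "KPZCDK..."]

counts = [Counter(ch for ch in card.upper() if ch in string.ascii_uppercase)
          for card in cards]

# One row per letter, one column per card, like the table above.
for letter in string.ascii_uppercase:
    print(letter, ":", " ".join(str(c.get(letter, 0)) for c in counts))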
So the five cards have distinctly different frequency distributions, but none of them are really flat. The first card had more Gs than any other letter, the second, slightly more Ms than Gs, etc.
Pretty quickly I’d noticed a pattern: GMARK. I later saw this as a recurring theme in his puzzles, but this was the first time I’d seen it, and so I was kind of stoked. First, I tried shifting the
letters back such that the most common letter was E, but that didn’t seem to look right. Remembering that he often uses Z for a space, I then shifted them back to Zs (G -> Z, M -> Z, etc.), and now
my texts looked like this:
But this still didn’t give me a cleartext. Some kind of wild guess made me think that I was dealing with a columnar transposition, which I’d never tried to break before. So I resolved to do this one,
and to do it “by hand” (without resorting to brute-force computer programs). I tried some simple rearrangements of each card’s text, but got nowhere…
Then I realized, that I might be able to do an attack “in depth”: Since I had 5 different ciphertexts, if they were all encoded with the same key, then I could use bits of one to help solve another.
I lined all the text up in five columns, and started trying to rearrange the rows such that words formed. For example, if I found a Q in the first column, I’d then look for another row with a U in
the first column, and put them together. I did that for all the Qs I could find, then looked in the other columns to see if other obvious digraphs were being formed.
This way, I figured, I might start with “QUI” in one column, and notice “HIS” in another. Then I’d just have to put a row with “T” above HIS” and I’d have another word built. Repeat, and repeat, and
eventually I’d solve all of them.
Except that this wasn’t how the puzzle worked. :(
As I realized that I was getting nowhere, I noticed that there were two rows which read “Z Y O U Z.” And for the first time, I saw the word “YOU” in the middle of two Zs. And realized that I was
being an idiot.
I eliminated some spaces, to make it easier to read, and found the plaintext. [I was working vertically, but to save space I'll rotate it here, in two blocks. The first block is the 1st half of each
card's shifted text, placed one on top of the next, the 2nd block is the same for the 2nd half of each card].
Reading down each column in the 1st block, then continuing in the 2nd, we get:
Or, cleaned up:
NOW YOU HAVE TO GET THE REST
Woohoo! Of course, that’s not all. There’s still a block of text at the end that’s not right:
So there’s more to decode. Fortunately, G. Mark gave us a big hint when he said “NO CHEATING.” That’s his clue, made clear in his Tales from the Crypto talk, that this stage requires the Playfair
cipher. But what key? Well, for his Mardi Gras puzzle, he used the title of his talk, so what talk did he give at DEF CON 16? “A Hacker Looks Past Fifty.”
Plugging this into a friendly online Playfair decoder reveals the final cleartext:
Or, cleaned up:
Still not quite finished. So now we’ve got to do some math and number manipulation. At first, I thought it was several different multiplication operations, something like:
7 * 7, 4, 3 * 4, 2, 8, 5, 0, 9 == 49 4 12 2 8 5 0 9 or 494-122-8509
I texted the phrase to that number, but got no response. After a while, I sent an email directly to G. Mark, who confirmed that I’d broken the cipher, but did the math wrong.
It wasn’t a bunch of separate operations, but a single operation, like this:
7 * 743 * 428509
Which yields the following (obviously faked for this blog entry) phone number:
This was a fun puzzle! I took some wrong turns, tried some new techniques, had some good luck, and made some stupid mistakes. A little of everything. Of course, tweaking the puzzle so I could
(finally) publish the writeup was fun, too, especially factoring numbers to get them to fit into the ciphertext space available. Interesting bit of trivia: Turns out that 8675309 is a prime number.
1. July 28, 2011 at 11:18 am |
Well done! Congratulations on solving yet another G. Mark crypto puzzle (and thank you for taking the time to make such a detailed write-up.) – G. Mark
1. July 29, 2011 at 11:16 am |
|
{"url":"http://darthnull.org/2011/07/27/def-con-16-punch-card-puzzle/","timestamp":"2014-04-19T06:51:26Z","content_type":null,"content_length":"67251","record_id":"<urn:uuid:44fe5546-bee5-4291-8bb2-6f4cd2206d8e>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00351-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Subset Selection in Regression, 2nd ed
Results 1 - 10 of 15
, 2006
"... This paper studies a difficult and fundamental problem that arises throughout electrical engineering, applied mathematics, and statistics. Suppose that one forms a short linear combination of
elementary signals drawn from a large, fixed collection. Given an observation of the linear combination that ..."
Cited by 298 (1 self)
This paper studies a difficult and fundamental problem that arises throughout electrical engineering, applied mathematics, and statistics. Suppose that one forms a short linear combination of
elementary signals drawn from a large, fixed collection. Given an observation of the linear combination that has been contaminated with additive noise, the goal is to identify which elementary
signals participated and to approximate their coefficients. Although many algorithms have been proposed, there is little theory which guarantees that these algorithms can accurately and efficiently
solve the problem. This paper studies a method called convex relaxation, which attempts to recover the ideal sparse signal by solving a convex program. This approach is powerful because the
optimization can be completed in polynomial time with standard scientific software. The paper provides general conditions which ensure that convex relaxation succeeds. As evidence of the broad impact
of these results, the paper describes how convex relaxation can be used for several concrete signal recovery problems. It also describes applications to channel coding, linear regression, and
numerical analysis.
- IEEE Trans. Inform. Theory , 2007
"... Abstract. This technical report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries
in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement ove ..."
Cited by 292 (9 self)
Abstract. This technical report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in
dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results for OMP, which require O(m^2) measurements. The new results for OMP are
comparable with recent results for another algorithm called Basis Pursuit (BP). The OMP algorithm is faster and easier to implement, which makes it an attractive alternative to BP for signal recovery
problems. 1.
, 2008
"... Fan and Li propose a family of variable selection methods via penalized likelihood using concave penalty functions. The nonconcave penalized likelihood estimators enjoy the oracle properties,
but maximizing the penalized likelihood function is computationally challenging, because the objective funct ..."
Cited by 58 (0 self)
Fan and Li propose a family of variable selection methods via penalized likelihood using concave penalty functions. The nonconcave penalized likelihood estimators enjoy the oracle properties, but
maximizing the penalized likelihood function is computationally challenging, because the objective function is nondifferentiable and nonconcave. In this article, we propose a new unified algorithm
based on the local linear approximation (LLA) for maximizing the penalized likelihood for a broad class of concave penalty functions. Convergence and other theoretical properties of the LLA algorithm
are established. A distinguished feature of the LLA algorithm is that at each LLA step, the LLA estimator can naturally adopt a sparse representation. Thus, we suggest using the one-step LLA
estimator from the LLA algorithm as the final estimates. Statistically, we show that if the regularization parameter is appropriately chosen, the one-step LLA estimates enjoy the oracle properties
with good initial estimators. Computationally, the one-step LLA estimation methods dramatically reduce the computational cost in maximizing the nonconcave penalized likelihood. We conduct some Monte
Carlo simulation to assess the finite sample performance of the one-step sparse estimation methods. The results are very encouraging. 1. Introduction. Variable
- Annals of Statistics , 2005
"... Variable selection is fundamental to high-dimensional statistical modeling. Many variable selection techniques may be implemented by maximum penalized likelihood using various penalty functions.
Optimizing the penalized likelihood function is often challenging because it may be nondifferentiable and ..."
Cited by 38 (4 self)
Variable selection is fundamental to high-dimensional statistical modeling. Many variable selection techniques may be implemented by maximum penalized likelihood using various penalty functions.
Optimizing the penalized likelihood function is often challenging because it may be nondifferentiable and/or nonconcave. This article proposes a new class of algorithms for finding a maximizer of the
penalized likelihood for a broad class of penalty functions. These algorithms operate by perturbing the penalty function slightly to render it differentiable, then optimizing this differentiable
function using a minorize–maximize (MM) algorithm. MM algorithms are useful extensions of the well-known class of EM algorithms, a fact that allows us to analyze the local and global convergence of
the proposed algorithm using some of the techniques employed for EM algorithms. In particular, we prove that when our MM algorithms converge, they must converge to a desirable point; we also discuss
conditions under which this convergence may be guaranteed. We exploit the Newton–Raphson-like aspect of these algorithms
, 2009
"... The replica method is a non-rigorous but widely-accepted technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica
method to non-Gaussian maximum a posteriori (MAP) estimation. It is shown that with random linear measureme ..."
Cited by 25 (3 self)
The replica method is a non-rigorous but widely-accepted technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method
to non-Gaussian maximum a posteriori (MAP) estimation. It is shown that with random linear measurements and Gaussian noise, the asymptotic behavior of the MAP estimate of an n-dimensional vector
“decouples” as n scalar MAP estimators. The result is a counterpart to Guo and Verdú’s replica analysis of minimum mean-squared error estimation. The replica MAP analysis can be readily applied to
many estimators used in compressed sensing, including basis pursuit, lasso, linear estimation with thresholding, and zero norm-regularized estimation. In the case of lasso estimation the scalar
estimator reduces to a soft-thresholding operator, and for zero norm-regularized estimation it reduces to a hard threshold. Among other benefits, the replica method provides a
computationally-tractable method for exactly computing various performance metrics including mean-squared error and sparsity pattern recovery probability.
"... © 2011 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for
resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other w ..."
Cited by 9 (4 self)
© 2011 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale
or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. Abstract—Formulated as a least square problem under an ℓ0
constraint, sparse signal restoration is a discrete optimization problem, known to be NP-complete. Classical algorithms include, by increasing cost and efficiency, matching pursuit (MP), orthogonal
matching pursuit (OMP), orthogonal least squares (OLS), stepwise regression algorithms and the exhaustive search. We revisit the single most likely replacement (SMLR) algorithm, developed in the
mid-1980s for Bernoulli–Gaussian signal restoration. We show that the formulation of sparse signal restoration as a limit case of Bernoulli–Gaussian signal restoration leads to an ℓ0-penalized least
square minimization problem, to which SMLR can be straightforwardly adapted. The resulting algorithm, called single best replacement (SBR), can be interpreted as a forward–backward extension of OLS
sharing similarities with stepwise regression algorithms. Some structural properties of SBR are put forward. A fast and stable implementation is proposed. The approach is illustrated on two inverse
problems involving highly correlated dictionaries. We show that SBR is very competitive with popular sparse algorithms in terms of tradeoff between accuracy and computation time. Index
Terms—Bernoulli-Gaussian (BG) signal restoration, inverse problems, mixed ℓ2-ℓ0 criterion minimization, orthogonal least squares, SMLR algorithm, sparse signal estimation, stepwise regression
algorithms. I.
"... Abstract — We derive a one-period look-ahead policy for online subset selection problems, where learning about one subset also gives us information about other subsets. We show that the
resulting decision rule is easily computable, and present experimental evidence that the policy is competitive aga ..."
Cited by 4 (3 self)
Abstract — We derive a one-period look-ahead policy for online subset selection problems, where learning about one subset also gives us information about other subsets. We show that the resulting
decision rule is easily computable, and present experimental evidence that the policy is competitive against other online learning policies. I.
, 2005
"... Balancing comfort: Occupants' control of window blinds in private offices by Vorapat Inkarojrit Doctor of Philosophy in Architecture University of California, Berkeley Professor Charles C.
Benton, Chair The goal of this study was to develop predictive models of window blind control that could be use ..."
Cited by 2 (0 self)
Balancing comfort: Occupants' control of window blinds in private offices by Vorapat Inkarojrit Doctor of Philosophy in Architecture University of California, Berkeley Professor Charles C. Benton,
Chair The goal of this study was to develop predictive models of window blind control that could be used as a function in energy simulation programs and provide the basis for the development of
future automated shading systems. Toward this goal, a two-part study, consisting of a window blind usage survey and a field study, was conducted in Berkeley, California, USA, during a period spanning
from the vernal equinox to window solstice. A total of one hundred and thirteen office building occupants participated in the survey. Twenty-five occupants participated in the field study, in which
measurements of physical environmental conditions were cross-linked to the participants' assessment of visual and thermal comfort sensations. Results from the survey showed that the primary window
blind closing reason was to reduce glare from sunlight and bright windows. For the field study, a total of thirteen predictive window blind control logistic models were derived using the Generalized
Estimating Equations (GEE) technique.
- In Proceedings of ACM/USENIX Internet Measurement Conference (IMC), 2005
"... An important component of traffic analysis and network monitoring is the ability to correlate events across multiple data streams, from different sources and from different time periods. Storing
such a large amount of data for visualizing traffic trends and for building prediction models of “normal ..."
Cited by 2 (1 self)
An important component of traffic analysis and network monitoring is the ability to correlate events across multiple data streams, from different sources and from different time periods. Storing such
a large amount of data for visualizing traffic trends and for building prediction models of “normal ” network traffic represents a great challenge because the data sets are enormous. In this paper we
present the application and analysis of signal processing techniques for effective practical compression of network traffic data. We propose to use a sparse approximation of the network traffic data
over a rich collection of natural building blocks, with several natural dictionaries drawn from the networking community’s experience with traffic data. We observe that with such natural
dictionaries, high fidelity compression of the original traffic data can be achieved such that even with a compression ratio of around 1:6, the compression error, in terms of the energy of the
original signal lost, is less than 1%. We also observe that the sparse representations are stable over time, and that the stable components correspond to well-defined periodicities in network
traffic. 1
, 2005
"... In order to perform many signal processing tasks such as classification, pattern recognition and coding, it is helpful to specify a signal model in terms of meaningful signal structures. In
general, designing such a model is complicated and for many signals it is not feasible to specify the appropri ..."
In order to perform many signal processing tasks such as classification, pattern recognition and coding, it is helpful to specify a signal model in terms of meaningful signal structures. In general,
designing such a model is complicated and for many signals it is not feasible to specify the appropriate structure. Adaptive models overcome this problem by learning structures from a set of signals.
Such adaptive models need to be general enough, so that they can represent relevant structures. However, more general models often require additional constraints to guide the learning procedure. In
this thesis
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=916839","timestamp":"2014-04-18T20:09:08Z","content_type":null,"content_length":"41030","record_id":"<urn:uuid:cf9b5598-926e-472a-a8ed-237f97b683e9>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cubes and cube roots
Step 13: Cubes and cube roots
Notice the K scale at the very bottom of the virtual slide rule. It is three copies of the C and D scales made smaller and laid end to end. It can be used for finding the cube or the cube root of any number.
Finding the cube of a number is very much like finding the square of a number. Find the number on the D scale. Find its cube on the K scale. See the first graphic. By now you can recognize the C and
D scales, even though they are not labeled. The hairline is over the 3 on the C and on the D scales. With your eye, follow the hairline downward to the K scale, which is the very bottom scale. Notice
that the hairline crosses over the number 27 on the K scale. The cube of 3 is 27. Look at the 4 on the C and D scales. If you look down to the K scale, you can see 64 is directly below it. The cube
of 4 is 64. Look at the 5 on the C and D scales. Look down to the K scale. The number below it is 125. The cube of 5 is 125. You can do the same with 6 on the C and D scales. Below (if you could read
it well) is 216. The cube of 6 is 216. (Although you cannot accurately read the 6 in 216 on the K scale, you know that 6 x 6 = 36. The last digit in 36 is 6. Again 6 x 6 results in a number that ends
in 6. So, even though you cannot read it accurately from the slide rule, you know the last digit in the cube of 6 is also a 6. You can accurately read 21. Add the 6 you know must be part of the
number and you have 216. This is a way you can frequently determine a digit that is beyond what you can read accurately on the slide rule.)
Finding the cube root of a number is more complicated than finding the cube of a number. See the second graphic. Basically, you mark off the digits in any number into clusters of three beginning at
the decimal point. So, 1,200 would be marked off as 1 + 200. After groupings of three have been marked off, pay attention only to what is remaining. If there is one digit remaining, use the far left
third of the K scale. The setup on the slide rule for finding the cube root of 1,200 is shown in the first graphic. Notice the hairline is set at 1,200 on the K scale. Read the answer on the D scale.
The numbers on the D scale are 106 plus a tiny bit more. An electronic scientific calculator indicates that the cube root of 1,200 is 10.627.
Find the cube root of 12,000. The setup would be the same, except that the middle of the three sections in the K scale would be used. This is because two digits are left after removing groups of
three digits. The digits one can read on the D scale are 229. The electronic scientific calculator indicates the cube root of 12,000 is 22.894.
Find the cube root of 120,000. After removing the first cluster of three digits, three digits remain. Use the far right segment of the K scale. The numbers indicated on the D scale are 494. Checking
an electronic scientific calculator, the exact cube root of 120,000 is 49.324.
There are rules for placing the decimal point when calculating cube roots. They are somewhat involved. In the last step I will link a couple of manuals so that those who wish to become very
proficient at cubes and cube roots can learn the exact rules. Or, you can do some guessing in your head and know where to place the decimal point. For example, when calculating the cube root of
12,000 above you know the significant digits are 229. You could guess the number is about 20 just for a test. 20 x 20 is 400. 20 x 400 is 8,000. That is close enough to 12,000 that you now know where
to place the decimal point.
The process of calculating the cube root of a number smaller than 1 also has its special rules. Rather than bog down this Instructable with them, I would refer you to a manual I will link in the last step.
|
{"url":"http://www.instructables.com/id/A-More-Complete-Slide-Rule-Tutorial/step13/Cubes-and-cube-roots/","timestamp":"2014-04-19T00:43:56Z","content_type":null,"content_length":"147104","record_id":"<urn:uuid:4212240b-cdc8-4658-a95d-c542944e8e4b>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
|
RE: st: Skewness estimates with svyset data
RE: st: Skewness estimates with svyset data
From "Nick Cox" <n.j.cox@durham.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject RE: st: Skewness estimates with svyset data
Date Wed, 5 Nov 2008 13:22:50 -0000
First, I think you need to keep explaining for the benefit of anyone
trying to pick up on this thread that LMS refers to a method devised by
[Timothy J.] Cole and others for handling growth curves. You earlier
gave a reference that was just Cole et al. 2008. Despite a strong hint
earlier from Stas Kolenikov, the further details of that reference are
still outstanding.
One of my dictionaries explains LMS as London Mathematical Society,
London Missionary Society, and London, Midland and Scottish Railway. It
is easy to guess that none of those apply but not so obvious that LMS
here does _not_ mean Least Median of Squares as devised by Rousseeuw, as
many statistically-minded people might imagine.
Rousseeuw, P.J. 1984. Least median of squares regression. Journal,
American Statistical Association 79: 871-880.
The more general point, which should be obvious except that many list
members act as if it were not true, is that the list includes people
from several quite different disciplines. Hence if you want to maximise
the readership of a question, some explanations help a lot and rarely do any harm.
In terms of what you want to do:
Several people on this list should know much, much more about Cole's
method than I do but they are keeping quiet. I am surprised at the
implication that you need to feed skewness to Cole's method. That is
not, in particular, the case for -colelms- from SSC. I understood that
Cole's method was in essence designed to work well with the possibly
skew distributions that do occur and as such there is no specific need
to prepare the data or satisfy the assumptions of the method, as there
aren't any, except I guess that ages are accurate and size measurement
error negligible.
On the other hand, it may be that the missing reference, Cole et al.
2008, gives a quite different twist to the method, but then we are back
to my earlier point.
In general ignoring some fraction of data in the tail seems a very bad
idea unless it is obvious that the values concerned are all
untrustworthy. Even then some sensitivity analysis (with outliers vs
without outliers) would seem advisable.
Richard Palmer-Jones
Yes, I have been planning to use LMS method - basically adding the
adult parameters to the child hood ones given there. LMS needs
skewness - hence my interest. I am only interested in the adults older
that 25 (when both males and females have reached their full height)
so complicated smoothing is not necessary.
Yes, NHANES has heavy weighting which makes a considerable difference
to estimates (and false PSUs).
However, since the skewness reported by summarize is positive in
adults I am wondering whether a simpler procedure is to truncate the
parameter for valuies > 2.5sd, or to transform to logs, or some such
and work in them. Unfortunately ln(weight) is also skewed.
> Stas Kolenikov
> To Nick: yes, I've used skewness and kurtosis to test for normality a
> bunch of times (and there's a famous Mardia's multivariate
> generalization that I programmed up :)). But frankly I personally
> don't remember seeing confidence intervals on skewness anywhere at
> all. Estimation and testing are two related ways of looking at data
> with statistics, but with skewness and kurtosis you really estimate
> something to see that it is close enough to zero... and sometimes you
> don't even estimate a thing and go straight to the test statistic.
|
{"url":"http://www.stata.com/statalist/archive/2008-11/msg00164.html","timestamp":"2014-04-16T21:57:31Z","content_type":null,"content_length":"10701","record_id":"<urn:uuid:bec6fd7d-1be4-4caa-9dcd-970fec7f2fc9>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Copyright © University of Cambridge. All rights reserved.
'Stars' printed from http://nrich.maths.org/
When a circle has 8 dots you can move around the circle in steps of length $1$, $2$, $3$, $4$, $5$, $6$ or $7$.
If you move around the circle in steps of $2$, you miss some points.
If you move around the circle in steps of $3$, you hit all points.
How else can you hit all points?
When a circle has $9$ dots you can hit all points in $6$ different ways.
What step sizes allow you to do this?
Now consider $10$ points. Can you find the $4$ different ways in which we can hit them all?
Explore what happens with different numbers of points and different step sizes and comment on your findings. How can you work out what step sizes will hit all the points for any given number of points?
Now consider $5$ points. You can hit all points irrespective of the step size. What other numbers have this property?
Can you find a relationship between the number of dots on the circle and the number of steps that will ensure that all points are hit?
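If you would rather experiment with a short program than with the interactive circle, here is a minimal sketch that simply walks around n dots in steps of a given size and reports which step sizes visit every dot:

def hits_all_points(n, step):
    """True if stepping around n dots in steps of `step` visits every dot."""
    visited, position = set(), 0
    while position not in visited:
        visited.add(position)
        position = (position + step) % n
    return len(visited) == n

for n in (8, 9, 10, 5):
    winners = [s for s in range(1, n) if hits_all_points(n, s)]
    print(n, "dots:", winners)
# 8 dots: [1, 3, 5, 7]   9 dots: [1, 2, 4, 5, 7, 8]   10 dots: [1, 3, 7, 9]   5 dots: [1, 2, 3, 4]

The counts agree with the figures above: 6 step sizes work for 9 dots, 4 work for 10 dots, and every step size works for 5 dots.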
If you would rather work on paper, go to the Printable Resources page to open PDF files of the circles (Circle templates > Without central point)
Each PDF file contains 12 identical circles with a specific number of dots. You can select any number of dots from 3 to 24.
|
{"url":"http://nrich.maths.org/2669/index?nomenu=1","timestamp":"2014-04-16T10:45:38Z","content_type":null,"content_length":"5296","record_id":"<urn:uuid:79dbf879-93cf-4f0c-978f-5bd107a3ac9a>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Tool: Derivatives of the Trigonometric Functions
Tool: Derivatives of the Trigonometric Functions Screenshot:
Go to: http://www.math.dartmouth.edu/~calcsite/video1.html#209 (opens a new window)
Description: RealMedia videos of worked problems on derivatives of trigonometric functions.
Technology Type: Computer
Authors: Susan J. Diesel
Donald Kreider
and Dwight Lahr
Language: English
Cost: Does not require payment for use
Activities: Derivatives of the Trigonometric Functions: Quiz
Principles of Calculus Modeling: An Interactive Approach: Exercises for Section 2.9
Tools: Derivatives of the Trigonometric Functions
Lim sin(x)/x as x -> 0
Courses: Calculus Derivatives of basic functions
|
{"url":"http://mathforum.org/mathtools/tool/2427/c,15.10.1,ALL,ALL/","timestamp":"2014-04-20T16:15:57Z","content_type":null,"content_length":"14780","record_id":"<urn:uuid:968af4cb-c056-406c-adf5-00e2c0e4c4e6>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
|
p-values: Sum up + proposal - Hydrogenaudio Forums
"probability to reach a certain score (or better) by random guessing"this threadPicture (1)Picture (2)
These 3 pictures will help to explain the problem: Picture (1), Picture (3), Picture (4).
A typical example: A tester wants to get a p-value < 0.05, but he doesn't decide to perform e.g. 8 trials before the test. Instead he would stop the test whenever a p-value < 0.05 is reached (->
yellow numbers in Picture (1)). These results (5/5), (7/8), ... are the "stop points" of the test.
Problem: After each stop point the possibilities of 'movement' or 'ways' in the triangle are reduced. E.g. (5/6) can't be reached from (5/5) anymore because (5/5) will stop the test. This is why the
numbers in pascal's triangle in picture (4) are changed compared to picture (3). The number of ways to reach the 2nd stop point (7/8) is reduced from 8 to 5.
Now the total probability (corrected p-value or 'c-value') to finish this test 'successfully' (= max. number of trials: 8, test stops when p-value < 0.05) is:
c-value (7/8) = 1/32 + 5/256 = 0.051
That doesn't seem very bad, but here are some c-values for bigger numbers of trials (p-value that stops the test: 0.05):
15 trials -> 0.079
30 trials -> 0.129
50 trials -> 0.158
100 trials -> 0.202
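For anyone who wants to check these figures, here is a minimal sketch of the calculation as I understand it — the restricted 'Pascal's triangle' of Picture (4), written as a small dynamic program (the variable names are mine, not from the original program):

from math import comb

def p_value(correct, trials):
    """Chance of `correct` or more successes in `trials` fair coin flips."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

def c_value(max_trials, alpha=0.05):
    """Chance that a blind guesser reaches p-value < alpha at some point within
    max_trials trials, if he may stop as soon as that happens."""
    ways = {0: 1}        # number of not-yet-stopped guessing paths for each score
    stopped = 0.0
    for n in range(1, max_trials + 1):
        new = {}
        for score, w in ways.items():
            new[score] = new.get(score, 0) + w          # wrong guess
            new[score + 1] = new.get(score + 1, 0) + w  # correct guess
        ways = {}
        for score, w in new.items():
            if p_value(score, n) < alpha:
                stopped += w / 2 ** n    # stop point reached: these paths end the test
            else:
                ways[score] = w
    return stopped

for n in (8, 15, 30, 50, 100):
    print(n, round(c_value(n), 3))       # 0.051, 0.079, 0.129, 0.158, 0.202 as above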
Now the question is: should we modify ABX software to force people to specify a fixed number of trials before testing (and show p-values), or should it show c-values whenever the number of trials hasn't been fixed in advance?
With this approach there might be another problem - I quote a PM Schnofler sent me recently about this:
If I understood it correctly the idea goes like this:
The p-value, that is "the probability to get c or more correct in n trials if you guess blindly", doesn't give an accurate measure of "probability that you were guessing" because it doesn't take into
account that the listener might just stop as soon as he has the value he wants and continue otherwise. So what we do is, we calculate the "probability to reach your current or a better p-value with
up to n trials" and call this "corrected p-value" or c-value as you do in your source. Sounds nice, but why don't we go a step further and calculate the "probability to reach your current or a better
c-value with up to n trials", because after all the listener could just stop as soon as he has his desired c-value or continue otherwise.
That's why I called it a "hack", that is c-values don't take a fundamentally different approach to calculate the measurement we'd like to have ("probability you were guessing"), but just try to
"patch up" the approach we already have, and in the end leave you with the same problem you started out with.
I'm not sure about this, but if the tester isn't forced to specify a p-value that stops the test before the test starts - and the software stops or continues the test based on this automatically,
Schnofler's thought is probably right. If a tester is allowed to watch c-values and stop the test based on them, we would need 'corrected c-values', 'corrected corrected c-values' ...
I think I have found a sollution for this problem - there might be better ones, but anyway - here it is:
The goal is: no matter how long the test is going to take, the c-value must not become higher than e.g. 0.05. Every stop point will 'consume' a part of this c-value. It's necessary to make sure that
adding the probabilities of each stop point, the sum can never be bigger than the c-value we want to reach (here 0.05). A simple approach for something like this:
2^(-1) + 2^(-2) + 2^(-3) + ... + 2^(-n) < 1 , no matter how big n gets.
We have to choose the stop points like this (easier for me to explain from an example):
Picture (5)
Desired c-value: c = 0.05 or lower.
1/2*0.05 = 0.025, so the 1st stop point must have a probability p < 0.025. This is the case for 6/6 correct trials with p = 1/64 = 0.0156
So 1st stop point: 6/6
c -> c - p = 0.05 - 0.0156 = 0.0344, the remaining "c" for the rest of the stop points.
Condition for the next stop point:
p < 0.5 * 0.0344 = 0.0172
From the table it's obvious that for the next stop point (n-1)/n the p is 6/2^n
For n=9: p = 6/512 = 0.0117
So 2nd stop point: 8/9
c -> c - p = 0.0344 - 0.0117 = 0.0227
p < 0.5 * 0.0227 = 0.0114
p = 33/2^n for next stop point (n-2)/n
for n=12: p = 33/4096 = 0.0081
so 3rd stop point: 10/12
c -> c - p = 0.0227 - 0.0081 = 0.0146
p < 0.5 * 0.0146 = 0.00730
p = 182/2^n for next stop point (n-3)/n
for n=15: p = 182/32768 = 0.0056
so 4th stop point: 12/15
c -> c - p = 0.0146 - 0.0056 = 0.0090
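A minimal sketch of how that stop-point search could be automated (same rule as the manual calculation above: each new stop point must consume less than half of the c-value budget that is still left, and the path counts come from the restricted triangle):

def stop_points(target=0.05, max_trials=16):
    """Greedy choice of stop points so the total c-value stays below `target`."""
    ways = {0: 1}           # restricted path counts; earlier stop points removed
    budget = target
    misses_allowed = 0      # the k-th stop point allows k-1 wrong answers
    points = []
    for n in range(1, max_trials + 1):
        new = {}
        for score, w in ways.items():
            new[score] = new.get(score, 0) + w
            new[score + 1] = new.get(score + 1, 0) + w
        ways = new
        candidate = n - misses_allowed          # best score still reachable
        p = ways.get(candidate, 0) / 2 ** n
        if 0 < p < budget / 2:
            points.append((candidate, n))       # new stop point: candidate/n
            budget -= p
            misses_allowed += 1
            del ways[candidate]                 # the test ends here
    return points, budget

print(stop_points(0.05, 16))
# ([(6, 6), (8, 9), (10, 12), (12, 15)], ~0.009 of the 0.05 budget left over)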
This way, it would be possible that the tester specifies a "probability that you could get that score by guessing" = c-value he wants to reach, and the ABX software tells where the stop points are -
or just works as we're used to: It displays the current c-value based on the stop points it calculated from the 'goal c-value'.
Puh... writing this was hard, reading too I guess. As reward here's a little toy: I created some small dos-box program that can calculate c-values. It's attached to this post. Enjoy
Edited: "probability that you're guessing" replaced with "probability that you could get that score by guessing"
This post has been edited by tigre: Mar 15 2004, 10:12
Let's suppose that rain washes out a picnic. Who is feeling negative? The rain? Or YOU? What's causing the negative feeling? The rain or your reaction? - Anthony De Mello
One solution that several of us discussed in 2001 was to create ABX "profiles" designed to give a reasonable number of max trials (for example 28), and a reasonable number of places where the program
automatically stops.
See my summary post from the massive thread here:
1. The test will automatically stop if the following points are reached:
6 of 6
10 of 11
10 of 12
14 of 17
14 of 18
17 of 22
17 of 23
20 of 27
20 of 28
2. The program will display overall alpha values after each of the above stop points has been achieved. Also, the overall alpha values will be displayed regardless of whether the test stops or not at
the following (look) points: trials 6, 12, 18, 23, and 28.
(The earlier the test is terminated when the listener passes, the lower the overall alpha is.)
3. The program will display the number correct after each trial is completed.
4. The test will automatically stop if 9 incorrect are achieved.
QUOTE (tigre @ Mar 10 2004, 06:15 PM)
The goal is: no matter how long the test is going to take, the c-value must not become higher than e.g. 0.05. Every stop point will 'consume' a part of this c-value. It's necessary to make sure that
adding the probabilities of each stop point, the sum can never be bigger than the c-value we want to reach (here 0.05). A simple approach for something like this:
2^(-1) + 2^(-2) + 2^(-3) + ... + 2^(-n) < 1 , no matter how big n gets.
What will happen if the listener does, say 6 failed ABX trials, then (almost) all following trials are successful? Would it ever be possible to bring the c-value down again?
QUOTE (jido @ Mar 11 2004, 11:18 AM)
QUOTE (tigre @ Mar 10 2004, 06:15 PM)
The goal is: no matter how long the test is going to take, the c-value must not become higher than e.g. 0.05. Every stop point will 'consume' a part of this c-value. It's necessary to make sure that
adding the probabilities of each stop point, the sum can never be bigger than the c-value we want to reach (here 0.05). A simple approach for something like this:
2^(-1) + 2^(-2) + 2^(-3) + ... + 2^(-n) < 1 , no matter how big n gets.
What will happen if the listener does, say 6 failed ABX trials, then (almost) all following trials are successful? Would it ever be possible to bring the c-value down again?
Sure. How low the c-value can become after a large number of trials depends on the 'stop points' only. E.g. if you want to reach a c-value < 0.01 and start with 6 wrong trials, it could look like
this (this example is not calculated with 2^(-1) + ... method but the result is similar):
Maximum number of trials: 40
Stop points with p-value < 0.003:
In your case, if you reach
26/35 = 6/6 + 20/29 or
28/38 = 6/6 + 22/32
your final c-value is still < 0.01
With the "2^(-1) + ..." method, you can reach the c-value you want but the number of trials is
limited. For a final c-value < 0.01 the stop points would be:
(I have to calculate these values manually because I haven't had time yet to add this to my little program.)
QUOTE (tigre @ Mar 10 2004, 06:15 PM)
Schnofler's thought is probably right. If a tester is allowed to watch c-values and stop the test based on them, we would need 'corrected c-values', 'corrected corrected c-values' ...
Would the corrected, corrected, corrected × 10 value approach a particular value? Could this not be an asymptote? Couldn't we use calculus to find this out instead of using simplistic hacks? Lazy?
QUOTE (music_man_mpc @ Mar 12 2004, 03:59 PM)
QUOTE (tigre @ Mar 10 2004, 06:15 PM)
Schnofler's thought is probably right. If a tester is allowed to watch c-values and stop the test based on them, we would need 'corrected c-values', 'corrected corrected c-values' ...
Would the corrected, corrected, corrected × 10 value approach a particular value? Could this not be an asymptote? Couldn't we use calculus to find this out instead of using simplistic hacks? Lazy?
The ABX "profile" sidesteps this issue by specifying maximum trials allowable. If the ABX does not pass after this max, then it is automatically failed.
28 trials max was one profile design, chosen to allow a reasonable number of trials, but other profiles can be designed with higher max trials if desired. Keep in mind that the higher the max trials
in the profile, the more difficult that profile will be to pass.
Ok, I guess I should say something on this subject, too. The problem is, the really clean solutions always make the whole testing procedure less comfortable or more complicated.
Not showing the listener his results until some point he specified in advance would make it extremely easy to calculate a precise "probability that you were guessing" (just the p-value we use now),
but it would also be a major pain in the ass for the listener.
ff123's ABX "profiles" are a much better solution, but they would still make testing more complicated than it is now. Especially in ABC/HR tests I like the possibility to just start an ABX, try a few
times, give up or try some more, stop whenever I want to, etc. First choosing a profile, not knowing your score until you reach the next stop point, having to stop if max trials is reached, all this
would make the test a lot less comfortable for the listener.
tigre, I haven't really made up my mind about the approach you describe in the second half of your second post. I understand how you do what you want to do, but I didn't understand how this solves
the problem. Could you try to clarify?
So, since my contribution to this discussion so far mainly consists of indecisiveness, I decided to make something "useful", a program that can calculate the
corrected-corrected-corrected-etc.-c-value. You specify the number of total and correct trials and a "depth", that is the number of "corrections" (where a depth of 1 is the normal p-value). To answer
music_man_mpc's question: Yes, of course the values approach a certain limit (they have to, the sequence is monotonic increasing and has 1 as an upper bound). It would be nice to have a closed form
of the limit function, but I guess that won't be easy (in the current form the definition of the sequence is terribly recursive). However, empirically, it seems like after a certain number of
correction-iterations the value actually remains constant, so it's possible to calculate the limit even if we don't have a nice function for it.
The limit function p(n,c) is characterized by the following property: p(n,c) is exactly the probability of reaching a point (n',c') with n'<=n and p(n',c')<=p(n,c). That's why the argument "but the
listener could have stopped as soon as he got a value <=p and continued otherwise" doesn't hold here. Sure, he could have stopped, but the chances of reaching such a point with the same or a better
c-value (meaning corrected, corrected, etc. p-value) than he has now, are exactly the same as the c-value that is shown at the moment.
That would kind of solve the problem, since we could freely show the listener his c-value all the time, and ABXing would be the same as before, only the p-values would be a bit higher than usual.
The obvious problem is, what the heck *are* these values? I don't have a clue. They are the result of some mysterious calculations, but do they have anything to do with the "probability that you were
guessing"? Well, I don't know, maybe someone more knowledgeable can shine some light on this.
Thanks for feedback so far.
To clarify/mention an aspect that hasn't been made totally clear so far:
The c-values / corrected c-values /... are all caculated the same way:
They use the stop points (i.e. the ABX scores where the test would have stopped) and the actual score that is reached. What differs, depending on different approaches (c-value, corrected^n c-value,
"asymptote approach", ...) are the stop points.
The problem is, that without any information before the test starts, the ABX software has to make assumptions about the stop points. Example:
Let's say a score of 11/14 is reached in a ABX test. The tester can see the scores + p-values he has reached and decides based on them when to stop the test (basic c-values approach).
1st case: His stop condition is a p-value of <= 0.031. The stop points are:
6/6, 8/9, 10/12, 11/14, the c-value is 0.047
2nd case: stop condition = p-value <= 0.032. Stop points:
5/5, 8/9, 10/12, 11/14, the c-value is 0.059
If the listener doesn't specify a p-value that will stop the test, the results will vary depending on the software's assumptions about at what score the tester would have stopped. Because of this,
IMO ABX software *must* ask for some information before the test starts to produce reliable p-/c-values.
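For reference, the p-values used as stop conditions in the two cases above are plain binomial tail probabilities, i.e. the chance of scoring at least k out of n by coin-flipping. A minimal Haskell sketch of that calculation (an illustration, not the actual ABX software):

-- Probability of getting at least k correct out of n trials by pure guessing.
-- e.g. pValue 6 6 = 0.015625, and pValue 11 14 ~ 0.0287, which is why 11/14
-- still satisfies the p <= 0.031 stop condition in the first case above.
pValue :: Int -> Int -> Double
pValue k n = fromIntegral (sum [n `choose` i | i <- [k .. n]]) / 2 ^^ n
  where
    choose :: Int -> Int -> Integer
    choose m r = product [fromIntegral (m - r + 1) .. fromIntegral m]
                 `div` product [1 .. fromIntegral r]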
My "asymptote approach" (2^-1 + 2^-2 + ...) is one way to get correct c-values with an unlimited number of trials (and an unlimited number of wrong trials).
Maybe there is a way to calculate corrected values without the tester giving information before the test starts, but I doubt this, since the software always has to make assumptions that might be
wrong. Imagine a listener wants to reach a c-value of < 0.01, but after 15 trials with some mistakes he decides that 0.05 is enough this time. This would change the stop points, no matter what
method is used to calculate them, and therefore the c-values. Without the user giving some information about this to the software, there's no way to get correct results here.
I wonder if the limit of the probability to have guessed, in a sequential test, is 1. Maybe one day I'll try to calculate it.
I've created a dos-box program (attached to this post) that simulates the "2^(-1) + 2^(-2) + 2^(-3) + ..." method. I've extended it a bit; now it works like this:
An aimed c-value is entered. The stop points are chosen by the program so that the c-value, when one of them is reached, stays lower than the aimed c-value, no matter how many trials are performed. The
number of total trials can be limited by the user to make the program stop after a reasonable number of trials. Every stop point is allowed to 'consume' a certain percentage (or less) of the remaining
"aimed c-value reservoir". This percentage can be chosen by the user as the 3rd input (0.01 - 0.99). Example:
The aimed c-value is 0.05. The percentage is 0.4.
The c-value for the 1st stop point must be smaller than 0.05*0.4 = 0.02, this is the case for
6/6, c-value = 0.0156. The "reservoir" is now 0.05-0.0156 = 0.0344.
What's added by the next stop point to the c-value must be smaller than 0.0344*0.4 = 0.0138. This is the case for
8/9, c-value = 0.0273. "reservoir": 0.0227. Next stop point must add 0.0091 or less:
10/12, c-value = 0.0354
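A rough Haskell sketch of this search (again only an illustration of the idea described above, not the attached program; the names added and buildStops are made up): added computes, for a candidate score, how much probability mass it adds on top of the stop points chosen so far, and buildStops picks, for each trial count, the smallest score whose added mass still fits into the current budget:

import qualified Data.Map as M

-- Probability of reaching at least k correct at trial n by pure guessing
-- without having hit any of the earlier stop points on the way there.
added :: [(Int, Int)] -> (Int, Int) -> Double
added stops (k, n) = go 1 (M.singleton 0 1)
  where
    go t dist =
      let dist'  = M.fromListWith (+)
                     [ (c', p / 2) | (c, p) <- M.toList dist, c' <- [c, c + 1] ]
          stopsT = [ c | (c, t') <- stops, t' == t ]
          dist'' = M.filterWithKey (\c _ -> c `notElem` stopsT) dist'
      in if t == n
           then sum [ p | (c, p) <- M.toList dist'', c >= k ]
           else go (t + 1) dist''

-- Each stop point may consume at most `frac` of the remaining aimed c-value.
buildStops :: Double -> Double -> Int -> [(Int, Int)]
buildStops aim frac maxTrials = go [] 0 1
  where
    go stops used n
      | n > maxTrials = reverse stops
      | otherwise =
          let budget = frac * (aim - used)
              -- smallest score at trial n whose added mass fits into the budget
              fits   = [ (k, p) | k <- [1 .. n]
                                , let p = added stops (k, n)
                                , p > 0, p <= budget ]
          in case fits of
               ((k, p) : _) -> go ((k, n) : stops) (used + p) (n + 1)
               []           -> go stops used (n + 1)

main :: IO ()
main = mapM_ print (buildStops 0.05 0.4 20)  -- should begin (6,6), (8,9), (10,12)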
Here's an example showing how the percentage value affects the stop points:
For comparison the number of trials is limited to 50, but there's no limit in practice (besides limits caused by overflow in software etc.):
Aimed c-value = 0.01.
1. Percentage = 0.1:
1. Stop point: (10/10) C-Value: 0.000976563
2. Stop point: (13/14) C-Value: 0.00158691
3. Stop point: (15/17) C-Value: 0.00223541
4. Stop point: (17/20) C-Value: 0.00282192
5. Stop point: (19/23) C-Value: 0.00332022
6. Stop point: (21/26) C-Value: 0.00373085
7. Stop point: (23/29) C-Value: 0.0040638
8. Stop point: (24/31) C-Value: 0.00459897
9. Stop point: (26/34) C-Value: 0.00496011
10. Stop point: (28/37) C-Value: 0.00523142
11. Stop point: (29/39) C-Value: 0.00564917
12. Stop point: (31/42) C-Value: 0.00592204
13. Stop point: (32/44) C-Value: 0.00632385
14. Stop point: (34/47) C-Value: 0.00657883
15. Stop point: (36/50) C-Value: 0.00676343
2. Percentage = 0.3
1. Stop point: (9/9) C-Value: 0.00195313
2. Stop point: (11/12) C-Value: 0.00415039
3. Stop point: (14/16) C-Value: 0.00511169
4. Stop point: (16/19) C-Value: 0.00601006
5. Stop point: (18/22) C-Value: 0.00676394
6. Stop point: (20/25) C-Value: 0.00737441
7. Stop point: (22/28) C-Value: 0.00786117
8. Stop point: (24/31) C-Value: 0.00824657
9. Stop point: (26/34) C-Value: 0.00855083
10. Stop point: (28/37) C-Value: 0.00879084
11. Stop point: (30/40) C-Value: 0.00898025
12. Stop point: (31/42) C-Value: 0.00927956
13. Stop point: (33/45) C-Value: 0.00947899
14. Stop point: (35/48) C-Value: 0.00962775
3. Percentage = 0.5
1. Stop point: (8/8) C-Value: 0.00390625
2. Stop point: (11/12) C-Value: 0.00585938
3. Stop point: (13/15) C-Value: 0.00769043
4. Stop point: (16/19) C-Value: 0.00844574
5. Stop point: (18/22) C-Value: 0.00913858
6. Stop point: (21/26) C-Value: 0.00942713
7. Stop point: (23/29) C-Value: 0.00969638
8. Stop point: (26/33) C-Value: 0.00981075
9. Stop point: (29/37) C-Value: 0.00986504
10. Stop point: (31/40) C-Value: 0.00991874
11. Stop point: (34/44) C-Value: 0.0099426
12. Stop point: (36/47) C-Value: 0.00996601
4. Percentage = 0.8
1. Stop point: (7/7) C-Value: 0.0078125
2. Stop point: (11/12) C-Value: 0.00952148
3. Stop point: (16/18) C-Value: 0.00973511
4. Stop point: (19/22) C-Value: 0.00986528
5. Stop point: (22/26) C-Value: 0.00993642
6. Stop point: (25/30) C-Value: 0.00997436
7. Stop point: (28/34) C-Value: 0.00999449
8. Stop point: (33/40) C-Value: 0.00999717
9. Stop point: (36/44) C-Value: 0.00999893
10. Stop point: (40/49) C-Value: 0.00999945
5. Percentage = 0.9
1. Stop point: (7/7) C-Value: 0.0078125
2. Stop point: (11/12) C-Value: 0.00952148
3. Stop point: (15/17) C-Value: 0.00994873
4. Stop point: (21/24) C-Value: 0.00997794
5. Stop point: (25/29) C-Value: 0.00998824
6. Stop point: (28/33) C-Value: 0.00999505
7. Stop point: (31/37) C-Value: 0.00999912
8. Stop point: (36/43) C-Value: 0.0099997
9. Stop point: (40/48) C-Value: 0.0099999
tigre: Just to clarify, with your method, the c-value that is shown to the user will be the probability to reach one of the stop points (calculated as you described above) or his current score, right?
QUOTE (schnofler @ Mar 15 2004, 12:13 AM)
tigre: Just to clarify, with your method, the c-value that is shown to the user will be the probability to reach one of the stop points (calculated as you described above) or his current score, right?
1. User tells software what c-value he wants to reach "true probability that you could get a score by guessing", e.g. 0.01.
2. Software calculates stop points (can be made configurable -> "probability" value).
3. There are several possibilities what can be shown to the user, e.g.:
a) the c-value based on the stop points and the actual score
b) simply either "not yet passed, if you stop now you've failed" or "passed, stop now"
c) the actual score and the next few reachable stop points
d) the stop points that have been missed already
My favourite would be a combination of a) and c), e.g. like this:
The "probability that you could get a score by guessing" (c-value) you want to reach is 0.01.
Your current score is 7 correct trials out of 8.
Actual c-value: 0.0195
The next stop points you can reach are:
11/12; 4/4 correct trials needed
14/16; 7/8 correct trials needed
16/19; 9/11 correct trials needed
You've missed these stop points: 9/9
Calculating and showing the probability to reach one of the stop points wouldn't make much sense IMO.
Edit: "probability you're guessing" replaced with "probability that you could get a score by guessing."
QUOTE (tigre)
a) the c-value based on the stop points and the actual score
Yes, that's what I meant in my previous post, sorry if I didn't make it clear enough (the c-value is, after all, calculated as the probability to reach one of the earlier stop points or your current score).
The problem with your approach, as I see it, is still the following: you're using two different kinds of "c-values" in your method. First you use the "traditional c-value" calculation to find the
stop points, but then you use a different way of calculating the value that is actually shown to the user, because here you use your new "custom" stop points.
This results in the same problem as the transition from p-values to c-values: what you show to the user is something different than you used for your assumptions about user behaviour. The problem
with the original c-value approach was this: you assume that the user will stop at a certain p-value, but then you don't even show him the p-value but rather a different value, the c-value, so the
assumptions don't make sense.
In your new approach the problem is similar. First you use "normal" c-values to find out what the stop points are. But then you don't show these "normal" c-values to the user, but you show him a
different kind of c-value, namely the ones based on your new stop points.
Or maybe I got it all wrong?
QUOTE (schnofler @ Mar 15 2004, 02:10 AM)
Or maybe I got it all wrong?
Somewhat, I'd say.
Based on the user input before the test starts, all stop points are fixed. The results can be shown, but that's not necessary. The software must have control over the stop points, i.e. when one of
them is reached, the software stops the test. Therefore, no assumptions about user behaviour have to be made, because this 'behaviour' is replaced by the stop points calculated by the software. The
c-values that are calculated now using these stop points are correct, no matter what the user can see during testing. You can show him even the 'ordinary' p-values as additional information. Since
the user can't decide to change stop conditions after the test has started, c-value calculation can't be messed up.
There's only one way to calculate c-values. The only thing that can change and therefore influence the results are the stop points. This is no problem if the stop points are fixed before the test
starts. You could even give the user the possibility to set every stop point manually before testing starts. The resulting c-values would be different from c-values based on "equal p-value stop
points" of course, but still valid since the stop points are known without any doubt and not calculated based on assumptions about user behaviour.
QUOTE (tigre @ Mar 14 2004, 03:36 PM)
The "probability you're guessing" (c-value) you want to reach is 0.01.
Just a small wording thing that Continuum pointed out in the big thread: It isn't really the "probability you're guessing" that's being calculated, but the "probability that you could get that score
by guessing."
I like the idea of asking the listener what he wants to try for before he starts.
QUOTE (ff123 @ Mar 15 2004, 03:35 AM)
QUOTE (tigre @ Mar 14 2004, 03:36 PM)
The "probability you're guessing" (c-value) you want to reach is 0.01.
Just a small wording thing that Continuum pointed out in the big thread: It isn't really the "probability you're guessing" that's being calculated, but the "probability that you could get that score
by guessing."
You're right, thanks (edited now in my posts). In my 1st post I called it "probability to reach a certain score (or better) by random guessing", but when writing the other posts I must have become
less aware of it.
I like the idea of asking the listener what he wants to try for before he starts.
I do as well. This way there could even be an option to keep the 'old' p-values. (The tester would have to choose a fixed number of trials, and the test then stops, no matter what.)
New Lenox ACT Tutor
Find a New Lenox ACT Tutor
I am currently employed as a high school math teacher, going into my eighth year. I will be teaching pre-algebra and Algebra 2. I have taught every math class from Pre-Algebra to Pre-Calculus. I
am also very knowledgeable about the math portion of ACT. I have numerous resources for test prep.
7 Subjects: including ACT Math, geometry, algebra 1, algebra 2
As a double degree graduate of the University of Rochester and the Eastman School of Music with decades of teaching experience ranging from secondary through the collegiate levels in math and
music, tutoring need not be a measure of last resort to merely achieve a passing grade but, even if it is, I...
37 Subjects: including ACT Math, English, geometry, biology
...Whether you prefer formal lessons, a study buddy, or review of homework assignments, I can tailor a tutoring plan to fit your needs. According to GED Testing Services, the 2014 GED Test is a
high-school equivalency test that takes a little more than seven hours to complete, and covers the four s...
36 Subjects: including ACT Math, reading, geometry, English
...Some of my favorite moments as a teacher have been when I was able to work with a student one on one and see their excitement when they finally understood material they had been struggling
with. There is nothing more rewarding as a teacher than instilling a confidence in a student which they nev...
12 Subjects: including ACT Math, calculus, geometry, algebra 1
...I studied Kempo from 05-09. I taught, from 07-09, an age group from 14-16 year olds. I had four years of experience with this program in High school.
26 Subjects: including ACT Math, reading, calculus, chemistry
Oswego, IL Algebra 1 Tutor
Find an Oswego, IL Algebra 1 Tutor
...I love to work with students on algebra, geometry, trigonometry, and precalculus! I have a degree in Mathematics from Augustana College. I am currently pursuing my Teaching Certification from
North Central College. I have assisted in Pre-Algebra, Algebra, and Pre-Calculus classes.
7 Subjects: including algebra 1, geometry, algebra 2, trigonometry
Hello. I'm a friendly, dynamic tutor with experience teaching students from middle school age to adults returning to school. I have three years of experience teaching high school English and five
years of experience teaching college composition.
17 Subjects: including algebra 1, English, reading, grammar
...I also hold a masters degree in Special Education and have taken numerous courses on teaching modifications and have conducted research on new teaching methods for individuals with Autism. I
spent a total of 5 years working at the middle school level with students all over the autism spectrum, i...
10 Subjects: including algebra 1, special needs, study skills, elementary math
...I was also recruited to play at two junior colleges, but chose to accept a scholarship for track and field at a different school. In addition to participation as a player, I have also coached
basketball at the elementary level, and trained basketball players (strength and conditioning coach). I...
17 Subjects: including algebra 1, reading, biology, anatomy
I am currently employed as a high school math teacher, going into my eighth year. I will be teaching pre-algebra and Algebra 2. I have taught every math class from Pre-Algebra to Pre-Calculus. I
am also very knowledgeable about the math portion of ACT. I have numerous resources for test prep.
7 Subjects: including algebra 1, geometry, algebra 2, trigonometry
main is usually a function
This is part 2 of my commentary on detrospector. You can find part 1 here.
Performance claims
I make a lot of claims about performance below, not all of which are supported by evidence. I did perform a fair amount of profiling, but it's impossible to test every combination of data structures.
How can I structure my code to make more of my performance hypotheses easily testable?
Representing the source document
When we process a source document, we'll need a data structure to represent large amounts of text.
For more information on picking a string type, see ezyang's writeup.
We choose the lazy flavor of text so that we can stream the input file in chunks without storing it all in memory. Lazy IO is semantically dubious, but I've decided that this project is enough of a
toy that I don't care. Note that robustness to IO errors is not in the requirements list. ;)
The enumerator package provides iteratees as a composable, well-founded alternative to the lazy IO hack. The enumerator API was complex enough to put me off using it for the first version of
detrospector. After reading Michael Snoyman's enumerators tutorial, I have a slight idea how the library works, and I might use it in a future version.
Representing substrings
We also need to represent the k-character substrings, both when we analyze the source and when we generate text. The requirements here are different.
We expect that k will be small, certainly under 10 — otherwise, you'll just generate the source document! With many small strings, the penalty of boxing each individual Char is less severe.
And we need to support queue-like operations; we build a new substring by pushing a character onto the end, and dropping one from the beginning. For both String and Text, appending onto the end
requires a full copy. So we'll use Data.Sequence, which provides sequences with bidirectional append, implemented as finger trees:
-- module Detrospector.Types
type Queue a = S.Seq a
...Actually, I just ran a quick test with Text as the queue type, and it seems not to matter much. Since k is small, the O(k) copy is insignificant. Profiling makes fools of us all, and asymptotic
analysis is mostly misused. Anyway, I'm keeping the code the way it is, pending any insight from my clever readers. Perhaps one of the queues from Purely Functional Data Structures would be most appropriate here.
Representing character counts
We need a data structure for tabulating character counts. In the old days of C and Eurocentrism, we could use a flat, mutable int[256] indexed by ASCII values. But the range of Unicode characters is
far too large for a flat array, and we need efficient functional updates without a full copy.
We could import Data.Map and use Map Char Int. This will build a balanced binary search tree using pairwise Ord comparisons.
But we can do better. We can use the bits of an integer key as a path in a tree, following a left or right child for a 0 or 1 bit, respectively. This sort of search tree (a trie) will typically
outperform repeated pairwise comparisons. Data.IntMap implements this idea, with an API very close to Map Int. Our keys are Chars, but we can easily convert using fromIntegral.
-- module Detrospector.Types
import qualified Data.IntMap as IM
type FreqTable = IM.IntMap Int
Representing distributions
So we have some frequency table like
IM.fromList [('e', 267), ('t', 253), ('a', 219), ...]
How can we efficiently pick a character from this distribution? We're mapping characters to individual counts, but we really want a map from cumulative counts to characters:
-- module Detrospector.Types
data PickTable = PickTable Int (IM.IntMap Char)
To sample a character from PickTable t im, we first pick a random k such that 0 ≤ k < t, using a uniform distribution. We then find the first key in im which is greater than k, and take its
associated Char value. In code:
-- module Detrospector.Types
import qualified System.Random.MWC as RNG
sample :: PickTable -> RNG.GenIO -> IO Char
sample (PickTable t im) g = do
k <- (`mod` t) <$> RNG.uniform g
case IM.split k im of
(_, IM.toList -> ((_,x):_)) -> return x
_ -> error "impossible"
The largest cumulative sum is the total count t, so the largest key in im is t. We know k < t, so IM.split k im will never return an empty map on the right side.
Note the view pattern for pattern-matching an IntMap as if it were an ascending-order list.
The standard System.Random library in GHC Haskell is quite slow, a problem shared by most language implementations. We use the much faster mwc-random package. The only operation we need is picking a
uniform Int as an IO action.
We still need a way to build our PickTable:
-- module Detrospector.Types
import Data.List ( mapAccumR )
cumulate :: FreqTable -> PickTable
cumulate t = PickTable r $ IM.fromList ps where
(r,ps) = mapAccumR f 0 $ IM.assocs t
f ra (x,n) = let rb = ra+n in (rb, (rb, toEnum x))
This code is short, but kind of tricky. For reference:
mapAccumR :: (acc -> x -> (acc, y)) -> acc -> [x] -> (acc, [y])
f :: Int -> (Int, Int) -> (Int, (Int, Char))
f takes an assoc pair from the FreqTable, adds its count to the running sum, and produces an assoc pair for the PickTable. We start the traversal with a sum of 0, and get the final sum r along with
our assoc pairs ps.
Representing the Markov chain
So we can represent probability distributions for characters. Now we need a map from k-character substrings to distributions.
Data.Map is again an option, but pairwise, character-wise comparison of our Queue Char values will be slow. What we really want is another trie, with character-based fanout at each node. Hackage has
bytestring-trie, which unfortunately works on bytes, not characters. And maybe I should have used TrieMap or list-tries. Instead I used the hashmap package:
-- module Detrospector.Types
import qualified Data.HashMap as H
data Chain = Chain Int (H.HashMap (Queue Char) PickTable)
A value Chain k hm maps from subsequences of up to k Chars to PickTables. A lookup of some Queue Char key will require one traversal to calculate an Int hash value, then uses an IntMap to find a
(hopefully small) Map of keys with that same hash value.
There is a wrinkle: we need to specify how to hash a Queue, which is just a synonym for S.Seq. This is an orphan instance, which we could avoid by newtype-wrapping S.Seq.
-- module Detrospector.Types
import qualified Data.Hashable as H
import qualified Data.Foldable as F
instance (H.Hashable a) => H.Hashable (S.Seq a) where
{-# SPECIALIZE instance H.Hashable (S.Seq Char) #-}
hash = F.foldl' (\acc h -> acc `H.combine` H.hash h) 0
This code is lifted almost verbatim from the [a] instance in Data.Hashable.
After training, we need to write the Chain to disk, for use in subsequent generation. I started out with derived Show and Read, which was simple but incredibly slow. We'll use binary with
ByteString.Lazy — the dreaded lazy IO again!
We start by specifying how to serialize a few types. Here the tuple instances for Binary come in handy:
-- module Detrospector.Types
import qualified Data.Binary as Bin
-- another orphan instance
instance (Bin.Binary k, Bin.Binary v, H.Hashable k, Ord k)
=> Bin.Binary (H.HashMap k v) where
put = Bin.put . H.assocs
get = H.fromList <$> Bin.get
instance Bin.Binary PickTable where
put (PickTable n t) = Bin.put (n,t)
get = uncurry PickTable <$> Bin.get
instance Bin.Binary Chain where
put (Chain n h) = Bin.put (n,h)
get = uncurry Chain <$> Bin.get
The actual IO is easy. We use gzip compression, which fits right into the IO pipeline:
-- module Detrospector.Types
import qualified Data.ByteString.Lazy as BSL
import qualified Codec.Compression.GZip as Z
withChain :: FilePath -> (Chain -> RNG -> IO a) -> IO a
withChain p f = do
ch <- (Bin.decode . Z.decompress) <$> BSL.readFile p
RNG.withSystemRandom $ f ch
writeChain :: FilePath -> Chain -> IO ()
writeChain out = BSL.writeFile out . Z.compress . Bin.encode
Training the chain
I won't present the whole implementation of the train subcommand, but here's a simplification:
-- module Detrospector.Modes.Train
import qualified Data.Text as Txt
import qualified Data.Text.IO as Txt
import qualified Data.HashMap as H
import qualified Data.IntMap as IM
import qualified Data.Sequence as S
import qualified Data.Foldable as F
train Train{num,out} = do
(_,h) <- Txt.foldl' roll (emptyQ, H.empty) <$> Txt.getContents
writeChain out . Chain num $ H.map cumulate h where
roll (!s,!h) x
= (shift num x s, F.foldr (H.alter $ ins x) h $ S.tails s)
ins x Nothing = Just $! sing x
ins x (Just v) = Just $! incr x v
sing x = IM.singleton (fromEnum x) 1
incr x = IM.alter f $ fromEnum x where
f Nothing = Just 1
f (Just v) = Just $! (v+1)
Before generating PickTables, we build a HashMap of FreqTables. We fold over the input text, accumulating a pair of (last characters seen, map so far). Since foldl' is only strict to weak head-normal
form (WHNF), we use bang patterns on the fold function roll to force further evaluation. RWH discusses the same issue.
shift (from Detrospector.Types) pushes a new character into the queue, and drops the oldest character if the size exceeds num. We add one count for the new character x, both for the whole history s
and each of its suffixes.
We're using alter where perhaps a combination of lookup and insert would be more natural. This is a workaround for a subtle laziness-related space leak, which I found after much profiling and random
mucking about. When you insert into a map like so:
let mm = insert k (v+1) m
there is nothing to force v+1 to WHNF, even if you force mm to WHNF. The leaves of our tree end up holding large thunks of the form ((((1+1)+1)+1)+...).
The workaround is to call alter with Just $! (v+1). We know that the implementation of alter will pattern-match on the Maybe constructor, which then triggers WHNF evaluation of v+1 because of ($!).
This was tricky to figure out. Is there a better solution, or some different way I should approach this problem? It seems to me that Data.Map and friends are generally lacking in strictness building blocks.
The end!
Thanks for reading! Here's an example of how not to write the same program:
module Main where{import Random;(0:y)%(R p _)=y%p;(1:f)%(R _ n)=f%n;[]%(J x)=x;b
[p,v,k]=(l k%).(l v%).(l p%);main=getContents>>=("eek"#).flip(.:"eek")(y.y.y.y$0
);h k=toEnum k;;(r:j).:[k,v,b]=(j.:[v,b,r]).((k?).(v?).(b?)$(r?)m);[].:_=id;(!)=
iterate;data R a=J a|R(R a)(R a);(&)i=fmap i;k b y v j=let{t=l b%v+y;d f|t>j=b;d
f=k(m b)t v j}in d q;y l=(!!8)(q R!J l);q(+)b=b+b;p(0:v)z(R f x)=R(p v z f)x;p[]
z(J x)=J(z x);p(1:v)z(R n k)=R n$p v z k;m = succ;l p=tail.(snd&).take 9$((<<2).
fst)!(fromEnum p,0);(?)=p.l;d@[q,p,r]#v=let{u=b d v;j=i u;(s,o)|j<1=((97,122),id
)|h 1=((0,j-1),(k 0 0 u))}in do{q<-(h.o)&randomRIO s;putChar q;[p,r,q]#v};i(J q)
=q;i(R n k)=i n+i k;(<<)=divMod} -- finite text on stdin © keegan oct 2010 BSD3
Haskell suffers from a problem I'll call the Fibonacci Gap. Many beginners start out with a bunch of small mathematical exercises, develop that skillset, and then are at a loss for what to study
next. Often they'll ask in #haskell for an example of a "real Haskell program" to study. Typical responses include the examples in Real World Haskell, or the Design and Implementation of XMonad talk.
This is my attempt to provide another data point: a commentary on detrospector, my text-generating toy. It's perhaps between the RWH examples and xmonad in terms of size and complexity. It's not a
large program, and certainly not very useful, but it does involve a variety of real-world concerns such as Unicode processing, choice of data structures, strictness for performance, command-line
arguments, serialization, etc. It also illustrates my particular coding style, which I don't claim is the One True Way, but which has served me well in a variety of Haskell projects over the past
five years.
Of course, I imagine that experts may disagree with some of my decisions, and I welcome any and all feedback.
I haven't put hyperlinked source online yet, but you can grab the tarball and follow along.
This is part 1, covering style and high-level design. Part 2 addresses more details of algorithms, data structures, performance, etc.
The algorithm
detrospector generates random text conforming to the general style and diction of a given source document, using a quite common algorithm.
First, pick a "window size" k. We consider all k-character contiguous substrings of the source document. For each substring w, we want to know the probability distribution of the next character. In
other words, we want to compute values for
P(next char is x | last k chars were w).
To compute these, we scan through the source document, remembering the last k characters as w. We build a table of next-character occurrence counts for each substring w.
These tables form a Markov chain, and we can generate random text by a random walk in this chain. If the last k characters we generated were w, we choose the next character randomly according to the
observed distribution for w.
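Using the types from part 2 (Chain, sample, shift and the qualified imports H, S, RNG), the random walk can be sketched roughly like this. This is an illustration, not the actual Detrospector.Modes.Run code; in particular the fallback to shorter suffixes is only one plausible way to handle windows that never occurred in the source:

-- Emit n characters, starting from the window w.
generate :: Chain -> RNG.GenIO -> Queue Char -> Int -> IO ()
generate (Chain k hm) g = go
  where
    go _ 0 = return ()
    go w n = do
      x <- case lookupSuffix w of
             Just pt -> sample pt g
             Nothing -> return ' '        -- no data at all: fall back to a space
      putChar x
      go (shift k x w) (n - 1)
    -- try the full window, then successively shorter suffixes
    lookupSuffix w = case H.lookup w hm of
                       Just pt             -> Just pt
                       Nothing | S.null w  -> Nothing
                               | otherwise -> lookupSuffix (S.drop 1 w)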
So we can train it on Accelerando and get output like:
addressed to tell using back holes everyone third of the glances waves and diverging him into the habitat. Far beyond Neptune, I be?" asks, grimy. Who's whether is headquarters I need a
Frenchwoman's boyfriend go place surrected in the whole political-looking on the room, leaving it, beam, the wonderstood. The Chromosome of upload, does enough. If this one of the Vile catches
Design requirements
We can only understand a program's design in light of the problem it was meant to solve. Here's my informal list of requirements:
• detrospector should generate random text according to the above algorithm.
• We should be able to invoke the text generator many times without re-analyzing the source text each time.
• detrospector should handle all Unicode characters, using the character encoding specified by the system's locale.
• detrospector should be fast without sacrificing clarity, whatever that means.
Wearing my customer hat, these are axioms without justification. Wearing my implementor hat, I will have to justify design decisions in these terms.
In general I import modules qualified, using as to provide a short local name. I make an exception for other modules in the same project, and for "standard modules". I don't have a precise definition
of "standard module" but it includes things like Data.Maybe, Control.Applicative, etc.
The longest line in detrospector is 70 characters. There is no hard limit, but more than about 90 is suspect.
Indentation is two spaces. Tabs are absolutely forbidden. I don't let the indentation of a block construct depend on the length of a name, thus:
foo x = do
y <- bar x
baz y
bar x =
let y = baz x
z = quux x y
in y z
This avoids absurd left margins, looks more uniform, and is easier to edit.
I usually write delimited syntax with one item per line, with the delimiter prefixed:
{-# LANGUAGE
    ViewPatterns
  , PatternGuards #-}
data Mode
= Train { num :: Int
, out :: FilePath }
| Run { chain :: FilePath }
Overriding layout is sometimes useful, e.g.:
look = do x <- H.lookup t h; return (x, S.length t)
(With -XTupleSections we could write
look = (,S.length t) <$> H.lookup t h
but that's just gratuitous.)
I always write type signatures on top-level bindings, but rarely elsewhere.
Module structure
I started out with a single file, which quickly became unmanageable. The current module layout is:
Types - types and functions used throughout
Modes - a type with one constructor per mode
Train - train the Markov chain
Run - generate random text
Neolog - generate neologisms
Main - command-line parsing
There is also a Main module in detrospector.hs, which simply invokes Detrospector.Main.main.
Modules I write tend to fall into two categories: those which export nearly everything, and those which export only one or two things. The former includes "utility" modules with lots of small types
and function definitions. The latter includes modules providing a specific piece of functionality. A parsing module might define three dozen parsers internally, but will only export the root of the grammar.
An abstract data type might fall into a third category, since they can export a large API yet have lots of internal helpers. But I don't write such modules very often.
Detrospector.Types is in the first category. Most Haskell projects will have a Types module, although I'm somewhat disappointed that I let this one become a general grab-bag of types and utility functions.
The rest fall into the second category. Each module in Detrospector.Modes.* exports one function to handle that mode. Detrospector.Main exports only main.
Build setup
This was actually my first Cabal project, and the first thing I uploaded to Hackage. I think Cabal is great, and features like package-visibility management are useful even for small local projects.
In my cabal file I set ghc-options: -Wall, which enables many helpful warnings. The project should build with no warnings, but I use the OPTIONS_GHC pragma to disable specific warnings in specific
files, where necessary.
I also run HLint on my code periodically, but I don't have it integrated with Cabal.
I was originally passing -O2 to ghc. Cabal complained that it's probably not necessary, which was correct. The Cabal default of -O performs just as well.
I'm using Git for source control, which is neither here nor there.
Command-line parsing
detrospector currently has three modes, as listed above. I wanted to use the "subcommand" model of git, cabal, etc. So we have detrospector train, detrospector run, etc. The cmdargs package handles
this argument style with a low level of boilerplate.
The "impure" interface to cmdargs uses some dark magic in the operator &= in order to attach annotations to arbitrary record fields. The various caveats made me uneasy, so I opted for the slightly
more verbose "pure" interface, which looks like this:
-- module Detrospector.Main
import qualified System.Console.CmdArgs as Arg
import System.Console.CmdArgs((+=),Annotate((:=)))
modes = Arg.modes_ [train,run,neolog]
+= Arg.program "detrospector"
+= Arg.summary "detrospector: Markov chain text generator"
+= Arg.help "Build and run Markov chains for text generation"
train = Arg.record Train{}
[ num := 4
+= Arg.help "Number of characters lookback"
, out := error "Must specify output chain"
+= Arg.typFile
+= Arg.help "Write chain to this file" ]
+= Arg.help "Train a Markov chain from standard input"
run = Arg.record Run{}
[ chain := error "Must specify input chain"
+= Arg.argPos 0
+= Arg.typ "CHAIN_FILE" ]
+= Arg.help "Generate random text"
This tells cmdargs how to construct values of my record type Detrospector.Modes.Mode. We get help output for free:
$ ./dist/build/detrospector/detrospector -?
detrospector: Markov chain text generator
detrospector [COMMAND] ... [OPTIONS]
Build and run Markov chains for text generation
Common flags:
-? --help Display help message
-V --version Print version information
detrospector train [OPTIONS]
Train a Markov chain from standard input
-n --num=INT Number of characters lookback
-o --out=FILE Write chain to this file
detrospector run [OPTIONS] CHAIN_FILE
Generate random text
My use of error here is hacky and leads to a bug that I recently discovered. When the -o argument to train is invalid or missing, the error is not printed until the (potentially time-consuming)
analysis is completed. Only then is the record's field forced.
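One way to surface the error earlier (a hedged sketch of the idea, not what the released code does) is to force the field before any real work starts, using Control.Exception.evaluate:

import Control.Exception ( evaluate )

-- Forcing the field makes the "Must specify output chain" error fire
-- immediately instead of after the potentially long analysis.
trainChecked :: Mode -> IO ()
trainChecked m@Train{out} = do
  _ <- evaluate out
  train m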
To be continued...
Unicode provides many useful characters for mathematics. If you've studied the traditional notation, an expression like « Γ ⊢ Λt.λ(x:t).x : ∀t. t → t » is much more readable than an ASCII
equivalent. However, most systems don't provide an easy way to enter these characters.
The compose key feature of X Windows provides a nice solution on Linux and other UNIX systems. Compose combinations are easy-to-remember mnemonics, like -> for →, and an enormous number of characters
are available with just a few keystrokes.
Setting it up
I cooked up a config file with my most-used mathematical symbols. With recent Xorg, you can drop this file in ~/.XCompose, restart X, and you should be good to go.
The include line will pull in your system-wide configuration, e.g. /usr/share/X11/locale/en_US.UTF-8/Compose. This already contains many useful characters. I was going to add <3 for ♥ and CCCP for ☭,
but I found that Debian already provides these.
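For the curious, entries in an XCompose file look like the following; these particular lines are illustrative, not necessarily the ones in the linked config:

include "%L"
<Multi_key> <minus> <greater>   : "→"   U2192   # RIGHTWARDS ARROW
<Multi_key> <f> <a>             : "∀"   U2200   # FOR ALL
<Multi_key> <l> <a> <m>         : "λ"   U03BB   # GREEK SMALL LETTER LAMDA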
GTK has its own input handling. To make it defer to X, I had to add an environment variable in ~/.xsession:
export GTK_IM_MODULE="xim"
The "Fn" key on recent ThinkPads makes a good compose key. It normally acts as a modifier key in hardware, but will send a keycode to X when pressed and released by itself. I used this xmodmap line:
keycode 151=Multi_key
Tweaking the codes
Being boilerplate-averse, I specified the key combinations in a compact format which is processed by this Haskell script.
Obviously, not everyone will like my choice of key combinations. If you tweak the file and come up with something particularly nice, I'd like to see it. If you can't run Haskell code for whatever
reason, it's not too hard to edit the generated XCompose file.
Though my use of Haskell here may seem gratuitous, I actually started writing this script in Python, but ran into trouble with Python 2's inconsistent treatment of Unicode text. Using Haskell's
String type with GHC ≥ 6.12 will Just Work, at least until you care about performance.
If you don't like this solution, SCIM provides an input mode which uses L^aT[e]X codes.
I wrote an answer over at Stack Overflow that somehow grew to article length. Here it is, recorded for posterity.
I'll paraphrase the original question as:
• What's the difference between the types forall a. [a] and [forall a. a], and
• How does this relate to existential types?
Well, the short answer to the second question is "It doesn't relate". But there's still a natural flow in discussing both topics.
Polymorphic values are essentially functions on types, but Haskell's syntax leaves both type abstraction and type application implicit. To better understand what's going on, we'll use a pidgin syntax
of Haskell combined with a typed lambda calculus like System F.
In System F, polymorphism involves explicit type abstractions and type application. For example, the familiar map function would be written as:
map :: ∀a. ∀b. (a → b) → [a] → [b]
map = Λa. Λb. λ(f :: a → b). λ(xs :: [a]). case xs of
[] → []
(y:ys) → f y : map @a @b f ys
map is now a function of four arguments: types a and b, a function, and a list.
At the term level, we have one new syntactic form: Λ (upper-case lambda). A polymorphic value is a function from types to values, written Λx. e, much as a function from values to values is written
λx. e.
At the type level, we have the symbol ∀ ("for all"), corresponding to GHC's forall keyword. A term containing Λ gives rise to a type containing ∀, just as a term containing λ gives rise to a type
containing →.
Since we have functions of types, we also need application of types. I use the notation @a (as in GHC Core) to denote application of a type argument. So we might use map like so:
map @Char @Int ord "xzy"
Note also that I've provided an explicit type on each λ abstraction. Type inference for System F is, in general, undecidable (Wells, 1999). The restricted form of polymorphism available in vanilla
Haskell 98 has decidable inference, but you lose this if you enable certain GHC extensions like RankNTypes.
We don't need annotations on Λ abstractions, because we have only one "kind" of type. In actual Haskell, or a calculus like System Fω, we also have type constructors, and we need a system of kinds
to describe how they combine. We'll ignore this issue here.
So a value of polymorphic type is like a function from types to values. The caller of a polymorphic function gets to choose a type argument, and the function must comply.
One last bit of notation: I'll use the syntax ⊥@t to mean an undefined value of type t, similar to Haskell's undefined.
∀a. [a]
How, then, would we write a term of type ∀a. [a]? We know that types containing ∀ come from terms containing Λ:
term1 :: ∀a. [a]
term1 = Λa. ?
Within the body marked ? we must provide a term of type [a]. However, we know nothing concrete about a, since it's an argument passed in from the outside. So we can return an empty list
term1 = Λa. []
or an undefined value
term1 = Λa. ⊥@[a]
or a list containing undefined values only
term1 = Λa. [⊥@a, ⊥@a]
but not much else.
To use this value, we apply a type, removing the outer ∀. Let's arbitrarily instantiate ∀a. [a] to [Bool]:
main = print @Int (length @Bool (term1 @Bool))
[∀a. a]
What about [∀a. a], then? If ∀ signifies a function on types, then [∀a. a] is a list of functions. We can provide as few as we like:
term2 :: [∀a. a]
term2 = []
or as many:
term2 = [f, g, h]
But what are our choices for f, g, and h?
f :: ∀a. a
f = Λa. ?
Now we're well and truly stuck. We have to provide a value of type a, but we know nothing whatsoever about the type a. So our only choice is
f = Λa. ⊥@a
So our options for term2 look like
term2 :: [∀a. a]
term2 = []
term2 = [Λa. ⊥@a]
term2 = [Λa. ⊥@a, Λa. ⊥@a]
etc. And let's not forget
term2 = ⊥@(∀a. [a])
Unlike the previous example, our choices for term2 are already lists, and we can pass them to length directly. As before, we have to pass the element type to length:
main = print @Int (length @(∀a. a) term2)
Existential types
So a value of universal (∀) type is a function from types to values. A value of existential (∃) type is a pair of a type and a value.
More specifically: A value of type
∃x. T
is a pair
(S, v)
where S is a type, and where v :: T, assuming you bind the type variable x to S within T.
Here's an existential type signature and a few terms with that type:
term3 :: ∃a. a
term3 = (Int, 3)
term3 = (Char, 'x')
term3 = (∀a. a → Int, Λa. λ(x::a). 4)
In other words, we can put any value we like into ∃a. a, as long as we pair that value with its type.
The user of a value of type ∀a. a is in a great position; they can choose any specific type they like, using the type application @T, to obtain a term of type T. The producer of a value of type ∀a. a
is in a terrible position: they must be prepared to produce any type asked for, so (as in the previous section) the only choice is Λa. ⊥@a.
The user of a value of type ∃a. a is in a terrible position; the value inside is of some unknown specific type, not a flexible polymorphic value. The producer of a value of type ∃a. a is in a great
position; they can stick any value they like into the pair, as we saw above.
So what's a less useless existential? How about values paired with a binary operation:
type Something = ∃a. (a, a → a → a, a → String)
term4_a, term4_b :: Something
term4_a = (Int, (1, (+) @Int , show @Int))
term4_b = (String, ("foo", (++) @Char, λ(x::String). x))
Using it:
triple :: Something → String
triple = λ(a, (x :: a, f :: a→a→a, out :: a→String)).
out (f (f x x) x)
The result:
triple term4_a ⇒ "3"
triple term4_b ⇒ "foofoofoo"
We've packaged up a type and some operations on that type. The user can apply our operations but cannot inspect the concrete value — we can't pattern-match on x within triple, since its type (hence
set of constructors) is unknown. This is more than a bit like object-oriented programming.
Using existentials for real
The direct syntax for existentials using ∃ and type-value pairs would be quite convenient. UHC partially supports this direct syntax. But GHC does not. To introduce existentials in GHC we need to
define new "wrapper" types.
Translating the above example:
{-# LANGUAGE ExistentialQuantification #-}
data Something = forall a. MkThing a (a -> a -> a) (a -> String)
term_a, term_b :: Something
term_a = MkThing 1 (+) show
term_b = MkThing "foo" (++) id
triple :: Something -> String
triple (MkThing x f out) = out (f (f x x) x)
There's a couple differences from our theoretical treatment. Type application, type abstraction, and type pairs are again implicit. Also, the wrapper is confusingly written with forall rather than
exists. This references the fact that we're declaring an existential type, but the data constructor has universal type:
MkThing :: forall a. a -> (a -> a -> a) -> (a -> String) -> Something
Often, we use existential quantification to "capture" a type class constraint. We could do something similar here:
data SomeMonoid = forall a. (Monoid a, Show a) => MkMonoid a
Further reading
For an introduction to the theory, I highly recommend Types and Programming Languages by Pierce. For a discussion of existential types in GHC, see the GHC manual and the Haskell wiki.
The JVF 2010-A is a large electronic sign consisting of 128 × 48 red LEDs.
There is some information online about the official software provided with this sign. There's less information about the hardware, or how to write custom software. Here's what I've learned on these
subjects. If you're itching to start hacking, skip to the end for downloadable C code.
The sign I worked on lives at MITERS. There is a similar sign at Noisebridge. See their writeup for pictures and more hardware description.
The removable back panel of the unit holds three circuit boards: a PC motherboard, a power supply, and a custom board I'll call the LED controller.
The motherboard has a 386-compatible (?) processor, some RAM sticks, and a number of ISA slots. One slot provides a connector to the floppy drive, as well as an unused hard-drive connector. Another
slot connects directly to the LED controller by a ribbon cable. The motherboard also holds a DIN-5 connector which I presume to be for an AT keyboard. I could not find any serial or parallel ports
even as pin headers, though they could be added on an ISA card.
The LED controller is a large custom board of DIP integrated circuits. Other than two 16kbit SRAM chips, all of the logic is provided by 74-series discrete logic ICs. There is not a processor or PLD
to be found. This board connects to other boards attached to the front of the unit, which I did not investigate further.
Lacking a logic analyzer, I decided to investigate the software side.
Reverse-engineering the software
I acquired a copy of the official software, JVFF.EXE. The software runs fine in DOSBox. It even displays output: there's a fallback which draws CGA graphics if no LED board is found.
I enabled IO port logging in DOSBox, by setting #define ENABLE_PORTLOG in src/hardware/iohandler.cpp. I noticed a read from port 0x2F0, which is not listed as a standard port in RBIL. DOSBox
helpfully logs the instruction pointer with each port access, and soon I was reading the executable's assembly code in the freeware version of IDA. I found the following interesting functions:
• 0x4050: Reads a configuration byte from the controller at port 0x2F0. This byte specifies which DMA channel to use, as well as the dimensions of the display.
• 0x3F02: The entry point for updating the display. Called from many places. Takes a range of lines to update (?) as arguments. Depending on some global variables, it calls 0x3FA9 and/or the CGA
drawing routine.
• 0x3FA9: The "driver" for the LED controller. Calls all of the below.
• 0x410D: Sets up a DMA transfer to the controller, then writes the three bytes 0x8F, 0x0F, 0x07 to the controller at port 0x2F0.
• 0x417A: Checks whether the DMA transfer has completed.
• 0x4166: Writes the bytes 0x05, 0x07 to the controller at port 0x2F0, effectively strobing bit 1 low. This resets the controller's state, such that the next DMA transfer will set the first line of
the display. Perhaps data line 1 is wired to the reset line of a counter IC.
• 0x403C: Writes the bytes 0x06, 0x07 to the controller at port 0x2F0, effectively strobing bit 0 low. This advances the controller's state to the next line. Perhaps data line 0 is wired to the
clock line of a counter IC.
There is more detail to the protocol, but it's best explained by the C code linked below. It's well-commented, I promise.
Writing custom software
After staring at these functions for a bit, I understood the protocol well enough to mimic it in my own code. As a demo I coded a cellular automaton: specifically, HighLife on a projective plane. You
can get the code here as a public-domain C program.
I've compiled this code with Open Watcom 1.9 inside DOSBox. With the provided makefile, wmake will build JVFLIFE.EXE. You can then copy this file to a DOS boot floppy, and set it to run automatically
using AUTOEXEC.BAT.
Verbose show
Say we want a function vshow which describes an expression before showing its value. For example:
vshow (2 + 3) = "2 + 3 = 5"
vshow (product [1..5]) = "product [1..5] = 120"
Clearly vshow can't be an ordinary Haskell function. We just don't have that kind of runtime access to program source.
All sorts of workarounds come to mind: simple-reflect, feeding strings to hint, Template Haskell… But it's hard to beat the simplicity of the humble C preprocessor:
{-# LANGUAGE CPP #-}
{-# OPTIONS_GHC -pgmP cpp #-}
#define vshow(e) (#e ++ " = " ++ show (e))
main :: IO ()
main = do
putStrLn vshow(2 + 3)
putStrLn vshow(product [1..5])
GHC's built-in CPP can't handle the stringify operator, so we invoke the "real" C preprocessor with -pgmP cpp. Testing it:
$ runhaskell vshow.hs
2 + 3 = 5
product [1..5] = 120
Like the other solutions I mentioned, this has its drawbacks. vshow isn't a first-class function, and we have to use C rather than Haskell function-call syntax. It's not terribly flexible, but it is
one line of reasonably clear code, and I can think of a few small use cases.
The missing preprocessor
Besides sharing a cute little trick, I wanted to start a conversation about text-level preprocessors for Haskell. CPP is used throughout the standard library for conditional compilation, boilerplate
class instances, etc. At the same time, it's quite limited, and not a good syntactic fit for Haskell.
Template Haskell is at the opposite end of the power/complexity spectrum. It might be what you want for programmatically generating complicated abstract syntax trees. It's not a good fit for simply
"copy-pasting" some text into a few locations. For those tasks we shy away from the enormous complexity of TH. We use CPP or, well, copy-paste. The latter is not acceptable for a language that's
advertising such a high level of abstraction. We may write less boilerplate than in Java or C++, but it still gets on the nerves.
I think there's an unfilled niche for lightweight text-level preprocessing, with just a bit more flexibility than CPP. GHC makes it easy to plug in any preprocessor you like, using -F -pgmF. We can
start experimenting today.
m4 is a powerful and venerable preprocessor. It comes with most UNIX systems and has an open-source implementation. Yet I've not seen any Haskell project use it for Haskell source. Has anyone used m4
in this role?
I found an old paper (.ps.gz) with some proposals. Anyone have comments?
There's a number of preprocessors for implementing specific language features, including arrows, indexed monads, C bindings, and Conor McBride's intriguing Strathclyde Haskell Enhancement. The
package preprocessor-tools seems to be a framework for writing this sort of preprocessor.
So what have you used for preprocessing? What features would you like to see? What design philosophy?
Statics/Geometric Properties of Lines and Areas
Centroids Of Common Shapes Of Areas And Lines
Triangular Area
$Area = \frac {b*h}{2}$
Quarter Circular Area
$Area = \frac { \pi\ r^2}{4}$
Semicircular Area
$Area = \frac { \pi\ r^2}{2}$
Semiparabolic Area
$Area = \frac {2ah}{3}$
Parabolic Area
$Area = \frac {4ah}{3}$
Parabolic Spandrel
$Area = \frac {ah}{3}$
Circular Sector
$Area = \alpha\ r^2$
Quarter Circular Arc
$Length = \frac { \pi\ r}{2}$
Semi Circular Arc
$Length = \pi\ r$
Arc Of Circle
$Length = 2 \alpha\ r$
Area Moments Of Inertia of Common Geometric Shapes
Rectangular Area
$I_{x} = \frac {1}{3}b h^3$
$I_{y} = \frac {1}{3}h b^3$
$I_{x'} = \frac {1}{12}b h^3$
$I_{y'} = \frac {1}{12}h b^3$
Right Triangular Area
$I_{x} = \frac {1}{12}b h^3$
$I_{y} = \frac {1}{4}h b^3$
$I_{x'} = \frac {1}{36}b h^3$
$I_{y'} = \frac {1}{36}h b^3$
Triangular Area
$I_{x} = \frac {1}{12} bh^3$
$I_{x'} = \frac {1}{36} bh^3$
Circular Area
$J_C = \frac { \pi\ r^4}{2}$
$I_{x'} = I_{y'} = \frac { \pi\ r^4}{4}$
Hollow circle
This is used for hollow cylinders where there is solid material between the outer and inner radius, but no material between the inner radius and the center, like a pipe's cross-section.
$I = \frac { \pi\ (r_o^4 - r_i^4)} {4}$
$r_o$ is the outer radius, $r_i$ is the inner radius
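As a quick numerical illustration of this formula (not part of the original page; the 50 mm / 40 mm radii below are made-up example values):

import math

def hollow_circle_I(r_o, r_i):
    # Area moment of inertia of a pipe cross-section about a centroidal axis
    return math.pi * (r_o**4 - r_i**4) / 4.0

# Example: outer radius 50 mm, inner radius 40 mm (in metres)
print(hollow_circle_I(0.050, 0.040))   # about 2.9e-6 m^4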
Semicircular Area
$I_x = I_y = \frac {1}{8} \pi\ r^4$
$I_{x'} = (\frac \pi{8}-\frac {8}{9\pi})r^4$
$I_{y'} = \frac {1}{8} \pi\ r^4$
Quarter Circle
$I_{x} = I_{y} = \frac {1}{16} \pi\ r^4$
$I_{x'} = I_{y'} = (\frac \pi{16} - \frac {4}{9\pi}) r^4$
|
{"url":"http://en.m.wikibooks.org/wiki/Statics/Geometric_Properties_of_Lines_and_Areas","timestamp":"2014-04-18T06:13:30Z","content_type":null,"content_length":"21220","record_id":"<urn:uuid:8f83703f-21ba-4afc-bd6d-a3f8aa1474bd>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How does one relate the monodromy of the KZ equations with the WRT representation of the braid group?
The KZ equations on the configuration space of $n$ distinct points in $\mathbb C$ give rise to a representation of $B_n$ on $V^{\otimes n}$, where $V$ is any given representation of $SL(2)$ (we'll
stick to this case; clearly we could work with other Lie groups as well). The WRT Hilbert space $\mathcal H_n$ associated to the $SL(2)$ character variety of $\mathbb S^2\setminus\{p_0,\ldots,p_n\}$
(where we restrict the monodromy around each $p_i$ to lie in a particular conjugacy class) also carries a natural action of $B_n$.
Now, both of these representations give rise to the Jones polynomial in a more or less straightforward manner, so I would almost like to conjecture that there is a natural isomorphism between the
two. Unfortunately, I don't even see a reason why they should have the same dimension (and I'd guess they probably don't!). Thus I have a weaker proposal:
Question: Is there a natural homomorphism $\rho:V^{\otimes n}\to\mathcal H_n$ (or perhaps it should go the other direction) which respects the action of $B_n$?
Remark: There are choices to be made to construct these representations (KZ and WRT). I expect the choice of representation $V$ and Planck constant $\hbar$ in the KZ equations correspond in a
straightforward manner with the choice of level $k$ and conjugacy class for the WRT character variety. More specifically, one would most certainly take $\hbar=k^{-1}$ and the conjugacy class to have
trace $2\cos(\hbar(\dim V-1))$ [perhaps up to multiplicative constants].
Can this be phrased in the quantum group setting? Something like comparing the $B_n$ representations on $V^{\otimes n}$ where $V$ is a $U_q sl_2$ module and the $B_n$ representations from the mapping
of $\mathbb{C}B_n$ in $End(V^{\otimes n})$? To get a Hilbert space I think you need the root of unity case (and indeed, if you are fixing a level then this is the case). If so, I can give you an
answer. – Eric Rowell Aug 26 '11 at 21:47
Sure! I guess I didn't really need to use the KZ equations here . . . The monodromy of KZ is equivalent to the quantum group construction giving a $B_n$ representation on $V^{\otimes n}$ where $V$
is any representation of $U_q(sl_2)$. The root of unity case for me would suffice. – John Pardon Aug 27 '11 at 2:17
1 Answer
What I will say is just for quantum $SL(2)$ with $V$ being the 2-dimensional representation (or the analogous object in the representation category). Let us denote by $H$ the quantum
group. For $q$ generic you can just use the following: $V^{\otimes n}\cong\bigoplus_i Hom(V_i,V^{\otimes n})\otimes V_i$ as an $H$ module and the image of $\mathbb{C}B_n$ in the
centralizer algebra commutes with the action of $H$ so as a $B_n$-representation $V^{\otimes n}\cong \bigoplus_i \dim(V_i)Hom(V_i,V^{\otimes n})$. The WRT representation space is exactly
$\mathcal{H}_n=\bigoplus Hom(V_i,V^{\otimes n})$, that is, the minimal faithful module for $End(V^{\otimes n})$. Here $V_i$ are the simple $H$-submodules of $V^{\otimes n}$. So your
injection $\rho$ goes the other way.
This proof works for any (topologically) quasi-triangular Hopf algebra. For the root of unity case (fixed level case) there is the problem that the object $V$ no longer has a vector space
structure (in the semisimple quotient). This is necessary to consider particularly if you want unitary representations. However, there is something similar one can try, which Wang and I
call a "localization." Basically you look for a Yang-Baxter operator $R$ on a vector space $W$ so that $\mathcal{H}_n$ is a $B_n$-subrepresentation of $W^{\otimes n}$. For the $SL(2)$
situation this is only possible at levels $k=1,2,4$ corresponding to roots of unity of degree $3,4$ and $6$.
Some references on the more general question: link text link text
Thanks! How does one see that $\mathcal H_n=\bigoplus\operatorname{Hom}(V_i,V^{\otimes n})$? (a pointer to a reference would suffice). – John Pardon Aug 27 '11 at 18:19
Probably Turaev's book would have this. Briefly, consider the disk with n punctures each labeled with the object $V$ and the boundary labelled by the object $V_i$. The modular functor
of the corresponding TQFT assigns to this surface the vector space $\mathrm{Hom}(V_i,V^{\otimes n})$. From the Jones polynomial/Temperley-Lieb algebra perspective these are the
irreducible $TL_n(q)$-modules. So the direct sum of these is the minimal faithful module that one can use to define the Jones polynomial (this by $q$-Schur-Weyl duality). – Eric Rowell
Aug 27 '11 at 18:46
|
{"url":"http://mathoverflow.net/questions/73729/how-does-one-relate-the-monodromy-of-the-kz-equations-with-the-wrt-representatio?sort=oldest","timestamp":"2014-04-21T15:40:22Z","content_type":null,"content_length":"58977","record_id":"<urn:uuid:275d8490-52b4-4fc1-b698-b4ef935a68f1>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Iselin Prealgebra Tutor
Find a Iselin Prealgebra Tutor
...My unique experience includes working with students in the classroom, as a private tutor, and as a homeschooling parent for 8 years. I like being creative and fun, while helping students to
feel that they can do it!I have supervised, tested, and helped students in grades K-6 in all subject areas...
18 Subjects: including prealgebra, English, reading, writing
My name is Veronica. I'm a student with Mathematics/Education major. My GPA is a 3.4.
4 Subjects: including prealgebra, Spanish, algebra 1, algebra 2
As a history undergraduate honors student interested in pursuing a career in research and education, I know that hard work and dedication pay off. I believe that my previous tutoring experiences
have helped me hone my ability to convey knowledge in a coherent and organized manner. Over the course ...
25 Subjects: including prealgebra, chemistry, reading, algebra 2
...I am very focused on being approachable and flexible in scheduling.I am a medical school graduate, and now a physician at the NIH. I have passed Step 1 and Step 2 of the USMLE medical licensing
exam, and am interviewing for residency training posts in obstetrics/gynecology. I am a lifelong ivy league academic with both a graduate degree and a medical degree.
44 Subjects: including prealgebra, English, reading, writing
...I am a confidence builder who tutors all levels of math, from integrated to honors. I also offer tutoring skills for the ACT, SATs, Private school entry exams and college prep. I have worked
with students from elementary school to college level classes and have made a huge difference in their lives.
13 Subjects: including prealgebra, statistics, algebra 1, algebra 2
|
{"url":"http://www.purplemath.com/iselin_nj_prealgebra_tutors.php","timestamp":"2014-04-17T04:43:45Z","content_type":null,"content_length":"23750","record_id":"<urn:uuid:7e7ab626-3372-460b-b508-a528f0075069>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00599-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Number of divisors of a sum / ABC conjecture equivalent statement
I have two related questions that I can't seem to find any literature on:
1) What can be said about $\tau(a+b)$ knowing $\tau(a)$ and $\tau(b)$ (where $\tau(n)$ is the number of positive divisors of $n$)?
2) I have a feeling the answer to the previous is "nothing" because of the same types of issues surrounding the ABC-conjecture. If that's the case, are there any statements about $\tau(a+b)$ that are
equivalent to, imply, or are implied by the ABC-conjecture?
nt.number-theory reference-request
Well the existence of infinitely many solutions of $(\tau(a),\tau(b),\tau(a+b)) = (2,2,2)$ is equivalent to another famous open problem... – Noam D. Elkies Aug 9 '12 at 2:56
1 Answer
I agree that the answer to the first question should be more or less nothing. However, I disagree that this is for 'the same types of issues surrounding the ABC-conjecture', and so
I highly doubt there is much of a direct link.
The ABC-conjecture is very different in that (in a certain sense) it says something on the actual divisors of the involved numbers, not merely on their number.
The reason why in my opinion the answer to the first question is essentially nothing is that simply there should be little meaninful to say, at least on a quantitative level
(relating the respective approx. sizes).
To wit take $b=1$. If $a$ is a Sophie Germain prime then $a+b$ will only have $4$ divisors, but it is not at all true that $a+1$ has few prime divisors for any $a$ prime (ie any $a$
with $\tau(a)=2$).
Or, it is kown that in some sense there is 'nothing special' regarding divisors counts (except for evenness issues) of $p-1$ for $p$ prime relative to $n-1$ for $n$ any number.
(See also this MO question Factors of p-1 when p is prime; and I believe, but only checked briefly, the paper mentioned in a comment also goes roughly speaking in this direction;
the result on something relating $\tau(n)$ and $\tau(n+a)$ coinciding with the prediction one has if considering the two 'independently').
And $b=1$ is not really special in that regard.
Or it is known that many (and if Goldbach conj. is true, all except one) even numbers can be written as a sum of two primes, so (essentially) whatever even $n$ you consider you can
write it as $n=a+b$ with $\tau(a)=\tau(b)=2$, so $\tau(a)=\tau(b)=2$ implies really (next to) nothing on $\tau(a+b)$.
So, in brief, yes there should be very little to be said (as regards connecting these values), yet no, this is not much related to ABC; in particular it is not that things are not
known because they are difficult but that there is actually hardly any connection between $\tau(a+b)$ and $\tau(a)$, $\tau(b)$ on a quantitative level.
(If one were to investigate specific restrictions on the $\tau$, in particular such that they force/are equivalent too the number being a high power then the type of questions
become a bit different and then there is I think some link to ABC; but then to me they are not really questions on $\tau$ but 'something else' encoded via $\tau$.)
|
{"url":"http://mathoverflow.net/questions/104318/number-of-divisors-of-a-sum-abc-conjecture-equivalent-statement","timestamp":"2014-04-18T23:20:08Z","content_type":null,"content_length":"54100","record_id":"<urn:uuid:c90c644b-015f-448b-ad23-3de7e1bb26b4>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00339-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Analytic continuation
June 28th 2008, 04:38 AM #1
Analytic continuation
I'm trying to understand analytic continuation:
Let f and g be two analytic functions.
If f=g on an open subset of $\mathbb{C}$, then f=g on any larger connected subset.
Let f be $1+x+\frac{x^2}{2!}$ and g be $1+x+\frac{x^2}{2!}+\frac{x^3}{3!}$ these two analytic functions.
$f=g$ on the open subset $]-0.1;0.1[$ of $\mathbb{C}$, but $f\not=g$ on the larger connected subset $\mathbb{R}$!
Where is the problem ?
June 28th 2008, 05:38 AM #2
There's no problem. $f \neq g$ on (-0.1, 0.1). In fact, they are only equal at x = 0 .....
|
{"url":"http://mathhelpforum.com/calculus/42614-analytic-continuation.html","timestamp":"2014-04-18T14:06:40Z","content_type":null,"content_length":"37047","record_id":"<urn:uuid:428121b5-352f-4cd9-9d5c-a156d570ad3d>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
|
satellite orbits and energy
All right let's see if some pointers help
Can't really figure out where to start. Any help would be appreciated.
1. The problem statement, all variables and given/known data
A 575 kg satellite is in a circular orbit at an altitude of 550 km above the Earth's surface. Because of air friction, the satellite eventually falls to the Earth's surface, where it hits the ground
with a speed of 2.10 km/s. How much energy was transformed to internal energy by means of friction?
2. Relevant equations
E = -1/2 G Ms Me / r
This will give you the total energy of the satellite in orbit as measured with respect to the center of the Earth. (providing that r is measured from the center of the Earth.)
This will give you the orbital velocity of the satellite. (not really needed to solve the problem)
This gives you the potential energy of the satellite while in orbit, as measured with respect to the center of the Earth.
M = mass
R = radius
3. The attempt at a solution
ok so 575kg, 550000 m above earths surface, falls at 2100 m/s
5.97 x 10^24 kg is earths mass (Me)
6378100 for earth radius (Re)
Ms is mass of satelite
G is constant
so for Ek = 1/2mv^2
= 0.5*575*2100^2
= 1267875 kJ
okay for the kinetic energy when the Satellite strikes the Earth
E = - G Me Ms / Re + d
= (6.67 x 10^-11)(5.97 x 10^24)(575) / (6378100 + 550000)
= -3.305 x 10^10
Again, this is just the Potential energy part of the total energy of the satellite in orbit.
so do I subtract E in orbit by Ek at crash (E - Ek)??
What you want to do is find the difference between the Total energy of the Satellite in orbit (the combination of both its kinetic and potential energies), and its total energy when it strikes the
Earth (again the combination of both its kinetic and potential energies).
You've got the formula for finding the total energy in orbit and you've found the kinetic energy at impact. You just need to find the potential energy at impact to complete the picture.
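Not part of the thread, but here is a rough numerical sketch of the bookkeeping suggested above, using the constants already quoted in the post (so the result is only as accurate as those constants, and the orbit is treated as circular):

G  = 6.67e-11      # gravitational constant, N m^2 / kg^2
Me = 5.97e24       # mass of the Earth, kg
Re = 6378100.0     # radius of the Earth, m
m  = 575.0         # satellite mass, kg
r  = Re + 550e3    # orbital radius, m
v  = 2100.0        # impact speed, m/s

E_orbit  = -G * Me * m / (2 * r)              # total energy in circular orbit
E_impact = 0.5 * m * v**2 - G * Me * m / Re   # kinetic plus potential at the surface

print(E_orbit - E_impact)   # energy converted to internal energy by friction, roughly 1.8e10 J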
|
{"url":"http://www.physicsforums.com/showthread.php?t=155694","timestamp":"2014-04-16T04:22:52Z","content_type":null,"content_length":"43892","record_id":"<urn:uuid:8bf43182-b2e5-4ff2-9d52-87b8b9b346bb>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Proof of Krylov-Bogoliubov Theorem
Where can I find a proof (in English) of the Krylov-Bogoliubov theorem, which states if $X$ is a compact metric space and $T\colon X \to X$ is continuous, then there is a $T$-invariant Borel
probability measure? The only reference I've seen is on the Wikipedia page, but that reference is to a journal that I cannot find.
Of course, feel free to answer this question by providing your own proof.
I searched google books for the following words: Krylov Bogolyubov compact metrizable invariant borel measure. The first result was books.google.com/… - p.8 from Bunimovich, Sinai: Dynamical
systems, ergodic theory and applications, which contains the proof. (Hopefully, it will be viewable for you in google books.) – Martin Sleziak Jun 7 '11 at 5:47
5 Answers
First, fix $x \in X$ and let $\mu_1 := \delta_x$ be the Dirac measure supported at $x$. Then define a sequence of probability measures $\mu_n$ such that for any $f \in C^0 (X)$, $$ \int_X f(y) \mathrm{d} \mu_n (y) = \frac{1}{n} \sum_{k=0}^{n-1} \int_X f \circ T^k (y) \mathrm{d} \mu_1 (y). $$ Apply the Banach-Alaoglu Theorem to deduce there exists a subsequence $\mu_{n_j}$ which converges in the weak-$\star$ topology. It is then very easy to prove that this limit measure is in fact T-invariant, using the formulation that $\mu$ is T-invariant if and only if $$\int_X f \circ T \mathrm{d} \mu = \int_X f \mathrm{d}\mu$$ for all continuous $f$.
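To spell out the step called "very easy" (this estimate is not part of the original answer, but it is the standard computation): for any continuous $f$,
$$\left|\int_X f\circ T \,\mathrm{d}\mu_n - \int_X f \,\mathrm{d}\mu_n\right| = \frac{1}{n}\left|\int_X f\circ T^{n} \,\mathrm{d}\mu_1 - \int_X f \,\mathrm{d}\mu_1\right| \le \frac{2\|f\|_\infty}{n} \to 0,$$
and since $f$ and $f\circ T$ are both continuous (this is where the continuity of $T$ enters), integrating them against $\mu_{n_j}$ converges to integrating them against the limit measure, which yields $\int_X f\circ T \,\mathrm{d}\mu = \int_X f \,\mathrm{d}\mu$ for the weak-$\star$ limit.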
There are two pretty simple proofs. Both rely on studying the action $T_* \colon \mathcal{M} \to \mathcal{M}$, where $\mathcal{M}$ is the space of Borel probability measures on $X$ and the
action is given by $(T_* \mu)(E) := \mu(T^{-1}(E))$. A measure $\mu$ is $T$-invariant if and only if $T_* \mu = \mu$.
One proof is the one given by Michael Coffey in his answer: start with any measure $\mu$, not necessarily invariant, such as the $\delta$-measure sitting at an arbitrary point, and then
consider the sequence of measures $\mu_n = \frac 1n \sum_{k=0}^{n-1} T^k_* \mu$. Because $\mathcal{M}$ is weak* compact, some subsequence $\mu_{n_j}$ converges to a measure $\nu\in \mathcal{M}$, and it's not hard to show that $\nu$ is invariant.
An alternate proof is to observe that $\mathcal{M}$ is a compact convex subset of the locally convex vector space $C(X)^* $, and that $T_* $ acts continuously on $\mathcal{M}$, whence by
the Schauder-Tychonoff fixed point theorem it has a fixed point $\nu=T_* \nu$.
Thanks. I wish I could accept two answers. – Quinn Culver Jun 1 '11 at 22:07
Does the Krylov-Bogoliubov theorem hold if $T$ is only assumed measurable? – Quinn Culver Dec 29 '11 at 18:26
2 @Quinn: No, measurability alone isn't enough. Define $f\colon [0,1] \to [0,1]$ by $f(x) = x/2$ when $x>0$, and $f(0)=1$. Then if $\mu$ is invariant and $\mu(E)>0$ for some $E\subset
[0,1]$ with $0\notin E$, you can look at the images $f^n(D)$ for a suitable $D\subset E$ and conclude that $\mu([0,1]) = \infty$. The only probability measure that remains to consider
is $\delta_0$, which isn't invariant because $0$ is not fixed. – Vaughn Climenhaga Jan 13 '12 at 18:46
Can someone point out to me where we have used the fact that T is continuous ? – user20471 Dec 16 '13 at 20:59
Continuity of T is needed when you show that the limiting measure $\nu = \lim \mu_{n_j}$ is invariant. For this part of the proof one can let $D$ be a metric compatible with the weak*
topology and then observe that $D(\nu,T_*\nu) \leq D(\nu,\mu_n) + D(\mu_n,T_*\mu_n) + D(T_*\mu_n, T_*\nu)$. The first term goes to zero along $n_j$ by choice of the subsequence, the
second goes to zero by the construction of $\mu_n$, and the third goes to zero because $T$ is continuous. – Vaughn Climenhaga Dec 17 '13 at 17:03
You can see the famous book by Peter Walters, An Introduction to Ergodic Theory, pages 151-152
In addition to the excellent answers above, I also suggest the nice survey Oxtoby, Ergodic Sets.
Introduction. Ergodic sets were introduced by Kryloff and Bogoliouboff in 1937 in connection with their study of compact dynamical systems [16]. The purpose of this paper is to review
some of the work that has since been done on the theory that centers around this notion, and to present a number of supplementary remarks, applications, and simplifications. For
simplicity were shall confine attention to systems with a discrete time. Continuous flows present no difficulty, but the development of a corresponding theory for general transformation
groups is still in an incomplete stage. An example due to Kolmogoroff (see [5]) shows that such an extension cannot be made without sacrificing either the invariance or the disjointness
of ergodic sets.
In §§1 and 2 we give a brief, but self-sufficient, development of the basic theorems of Kryloff and Bogoliouboff. In §3 we collect some auxiliary results for later use. In §4 a simple
characterization of transitive points is obtained. In §5 the distinctive properties of some special types of systems and subsystems are discussed, and in §6 these results are used to
discover conditions under which the ergodic theorem holds uniformly. In §7 a generalization to noncompact systems is considered, and in §§8 and 9 some known representation theorems are
obtained as an application of ergodic sets. In §10 there is given an example of a minimal set that is not strictly ergodic, similar to one constructed by Markoff.
Then the theorem also doesn't hold. Take $X = \mathbb{R}$, and $T\colon x \mapsto x+1$. Then $T$ has no invariant probability measure. (all the measures "escape to infinity").
@Quinn: this was the answer to your other question. – Denis Volk Nov 7 '12 at 17:40
Yes, thanks. I think this answer was essentially given here: math.stackexchange.com/questions/94981/…. – Quinn Culver Nov 8 '12 at 3:19
|
{"url":"http://mathoverflow.net/questions/66669/proof-of-krylov-bogoliubov-theorem/68070","timestamp":"2014-04-16T16:51:15Z","content_type":null,"content_length":"79348","record_id":"<urn:uuid:ea57c633-f4de-4cd5-9514-40ef8bb92349>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Dirichlet Conditions
These are sufficient conditions for a "well-behaved" function to have a Fourier Series.
If f(x) is a bounded periodic function with a finite number of maxima, minima and discontinuities in 0<=x<T (where T is the period) then the Fourier series for f(x) converges to f(x) for all x at
which f(x) is continuous. Where f(x) is not continuous, the series converges to the midpoint of the discontinuity, that is: ½(f(x⁺) + f(x⁻))
Note that though these conditions are sufficient, they are not necessary. For example, sin(x^-1) has a convergent Fourier Series.
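A standard illustration (added here, not part of the original node): the square wave of period 2π with f(x) = 1 on (0, π) and f(x) = −1 on (−π, 0) has the Fourier series
$$f(x) \sim \frac{4}{\pi}\sum_{k=0}^{\infty}\frac{\sin\big((2k+1)x\big)}{2k+1},$$
and at the jump x = 0 every term vanishes, so the series sums to 0 = ½(f(0⁺) + f(0⁻)) = ½(1 + (−1)), exactly as the theorem predicts.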
|
{"url":"http://everything2.com/title/The+Dirichlet+Conditions","timestamp":"2014-04-17T21:49:45Z","content_type":null,"content_length":"18925","record_id":"<urn:uuid:3849e662-9584-4c86-b6a8-d22a8528e008>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Scientific Calculator
One more calculator for use with higher math functions with extended capabilities. One of the advantages of this calculator is a simple input format even for the most complicated scientific formulas.
This program works best for math, algebra, calculus, geometry, physics and engineering students and even their teachers. For example, you can enter 10pi instead of "10*pi". For complex numbers, you
can use the following format - "1+2i". For percentages - "number + %". The other important feature of HiDigit is its high precision - up to 15 decimals for scientific calculations. The program
features an impressive number of built-in formulas, functions, constants and coefficients. Importantly, the users can customize all of them or add their own variables. Also, the history of all
actions is kept, so the users can come back and undo/redo any action at any time. Being a serious calculating software, it is extremely simple in use. No special computer skills or knowledge are
required. The interface is straightforward and very easy to navigate through. The compact size of the calculator does not hinder the performance. To the contrary, the program can even be used by the
multiple users simultaneously.
|
{"url":"http://www.hobbyprojects.com/calculators/16_function_scientific_calculator.html","timestamp":"2014-04-18T13:08:45Z","content_type":null,"content_length":"25287","record_id":"<urn:uuid:25db4c53-0c49-4874-96ed-13bcbf4f4974>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00100-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The arc length of a parametrized curve
A vector-valued function of a single variable, $\dllp: \R \to \R^n$ (confused?), can be viewed as parametrizing a curve. Such a function $\dllp(t)$ traces out a curve as you vary $t$.
You could think of a curve $\dllp : \R \to \R^3$ as being a wire. For example, $\dllp(t) = (\cos t, \sin t, t)$, for $0 \le t \le 6\pi$, is the parametrization of a helix. You can view it as a slinky
or a spring.
Parametrized helix. The vector-valued function $\dllp(t)=(\cos t, \sin t, t)$ parametrizes a helix, shown in red. This helix is the image of the interval $[0,6\pi]$ (shown in magenta) under the
mapping of $\dllp$. For each value of $t$, the cyan point represents the vector $\dllp(t)$. As you change $t$ by moving the blue point along the interval $[0,6\pi]$, the cyan point traces out the
Imagine we wanted to estimate the length of the slinky, which we call the arc length of the parametrized curve. Unfortunately, it's difficult to calculate the length of a curved piece of wire. It's
much easier to calculate the length of straight pieces of wire. Probably the easiest way to calculate the length of the slinky would be to stretch it out into one straight line. But, if you ever
tried to do that with a slinky (or a strong spring), you'd discover that stretching it into a straight line is virtually impossible.
If you can't stretch the slinky into one straight line, what could you do to estimate its length? One thing you could do is pretend that the slinky, rather than being a curved wire, was really
composed of a bunch of short straight wires. In other words, you could approximate the curved slinky with line segments.
Helix arc length. The vector-valued function $\dllp(t)=(\cos t, \sin t, t)$ parametrizes a helix, shown in red. The green lines are line segments that approximate the helix. The discretization size
of line segments $dt$ can be changed by moving the blue point on the magenta slider. The green line on the green slider, labeled $L(dt)$, gives the length of the line segment approximation to the
helix. As $dt \to 0$, the length $L(dt)$ approaches the arc length of the helix from below, shown by the red line on the green slider.
The length of the line segments is easy to measure. If you add up the lengths of all the line segments, you'll get an estimate of the length of the slinky. Let $\Delta t$ specify the discretization
interval of the line segments, and denote the total length of the line segments by $L(\Delta t)$. (In the above applet, $\Delta t$ is written as $dt$.) As the line segments take shortcuts, the length
of the line segments underestimate the arc length of the slinky.
However, if you increase the number of line segments (decreasing the length of each line segment), the total length of the line segments becomes a better estimate of the slinky arc length. As $\Delta
t$ approaches zero, the length of each line segment shrinks toward zero, the number of line segments increases, and the line segments become closer and closer to the slinky. Consequently, the total
length $L(\Delta t)$ of the line segments approaches the slinky arc length.
What's the length of each line segment? If there are $n$ line segments, we could define $t_0, t_1, \ldots, t_n$ so that the first line segment goes from the point $\dllp(t_0)$ to the point $\dllp
(t_1)$, the second line segment goes from the point $\dllp(t_1)$ to the point $\dllp(t_2)$, etc. The vector from $\dllp(t_0)$ to $\dllp(t_1)$ is simply $\dllp(t_1) - \dllp(t_0)$, so the length of the
line segment must be $\| \dllp(t_1) - \dllp(t_0) \|$. The length of the second line segment is $\| \dllp(t_2) - \dllp(t_1) \|$, etc.
To find the total length of the line segments, we just add up those lengths from all $n$ line segments: \begin{align*} \sum_i^n \| \dllp(t_i) - \dllp(t_{i-1})\|. \label{totallengtha} \end{align*}
Now we do some tricks to put this into a different form. First, if $\Delta t_i = t_i - t_{i-1}$, then we can rewrite $t_{i}$ as $t_{i-1} + \Delta t_i$. Next, we can divide each term of the above
equation by $\Delta t_i$ and multiply it by $\Delta t_i$ so that our expression for the length becomes \begin{align*} \sum_i^n \| \dllp(t_i) - \dllp(t_{i-1})\| &= \sum_i^n \| \dllp(t_{i-1} + \Delta t_i) - \dllp(t_{i-1})\| \notag \\ &= \sum_i^n \left\| \frac{\dllp(t_{i-1} + \Delta t_i) - \dllp(t_{i-1})}{\Delta t_i}\right\| \Delta t_i. \label{total_length} \end{align*}
Maybe this new equation doesn't look like much of an improvement. But if you were a real math nerd, you might have noticed that the quotient involving $\dllp(t_{i-1})$ is exactly the expression used
in the limit definition of the derivative $\dllp'(t)$ of a parametrized curve (if we replace $h$ with $\Delta t_i$). In fact, equation \eqref{total_length} is a Riemann sum for an integral, analogous
to the ones used to define integrals such as double integrals. If we let the number of line segments increase (as we take the limit $\Delta t_i \to 0$) the quotient becomes $\dllp'(t)$, and equation
\eqref{total_length} approaches the integral \begin{align*} L(\dllp)=\int_a^b \| \dllp'(t) \| dt, \label{totallengthc} \end{align*} which is the true arc length of the slinky. The numbers $a$ and $b$
are the values of $t$ at the ends of the slinky (i.e., the numbers so that the slinky is defined by $\dllp(t)$ for $a \le t \le b$). In our example, the slinky was defined by $\dllp(t)$ for $0 \le t
\le 6\pi$, so we would use $a=0$ and $b=6\pi$.
The magnitude of the derivative $\| \dllp'(t) \|$ is the speed of a particle that is at position $\dllp(t)$ at time $t$. The above equation simply says that the total length of the curve traced by
the particle is the integral of its speed. (This length must, of course, be independent of the particle's speed.)
You can see some examples here.
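As a concrete sanity check (this snippet is added here, not part of the article, and the number of segments is an arbitrary choice): the helix $\dllp(t)=(\cos t, \sin t, t)$ has $\|\dllp'(t)\| = \sqrt{2}$, so its arc length over $[0, 6\pi]$ should be $6\pi\sqrt{2} \approx 26.66$. Approximating it by line segments, exactly as in the discussion above:

import math

def chord_length(curve, a, b, n=100000):
    # Sum the lengths of n straight chords between sample points of the curve
    ts = [a + (b - a) * i / n for i in range(n + 1)]
    pts = [curve(t) for t in ts]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

helix = lambda t: (math.cos(t), math.sin(t), t)
print(chord_length(helix, 0.0, 6 * math.pi))   # approaches 6*pi*sqrt(2), about 26.657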
|
{"url":"http://mathinsight.org/parametrized_curve_arc_length","timestamp":"2014-04-21T02:26:39Z","content_type":null,"content_length":"28344","record_id":"<urn:uuid:8c01c934-66f6-44a1-b33a-ce9120ce073e>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00002-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Braingle: 'Entertaining the Kids' Brain Teaser
Entertaining the Kids
Logic Grid puzzles come with a handy interactive grid that will help you solve the puzzle based on the given clues.
Puzzle ID: #25893
Category: Logic-Grid
Submitted By: cinnamon30
Corrected By: bigSWAFF_69_
The parents of five children decided to share responsibility for entertaining the kids (one of whom is Bruce) during the week between the end of day camp and the start of school. Each parent
took a different day of the week, Monday through Friday, to take the children on a different outing (including taking them horseback riding). They also each rented a movie (including Mulan) for the
children to watch. For each day, determine who took care of the kids, that parent's child, the outing the parent supervised and the movie he or she rented.
1. The five parents were: the one who took the children bowling, the one who rented "Antz", Katie's parent, Mr. Bidwell and Mrs. Molinari.
2. The parent who rented "Finding Nemo" didn't take the kids bowling and isn't related to Katie.
3. Neither Mrs. Parker nor the parent who took the kids to miniature golf rented "Ice Age".
4. The children went roller skating the day before they saw "Monster's Inc.", which was the day before Mr. Sandoval entertained them.
5. The parent who took the kids skating didn't rent "Antz".
6. Mr. Sandoval wasn't the parent who rented "Finding Nemo" (which the children watched Thursday).
7. Mrs. Lee is Derek's mother.
8. Linda's parent, who isn't Mrs. Molinari and who didn't rent "Antz", entertained the kids on Friday.
9. Ryan's parent took care of the children the day after the parent who took the kids swimming.
Show Answer
What Next?
FamilyInFilm This one took me forever! Good job! A tough teaser, but fun!
Sep 13, 2005
harmonize9 this was a fun casual one. not challenging but fun.
Sep 13, 2005
renae Phew, that was a hard one. Thank You!!!
Sep 13, 2005
mejoza Something about your teasers make me whip out the piece of paper. It's easy if I write out the five slots, but I can't seem to use the grid alone. It didn't take too long (10
Sep 13, 2005 minutes?) but I had tog et out the pen and paper, which makes me feel dunce-ey
brainfog I like how I had to keep going back and forth - made it more challenging. Very nice job!
Sep 14, 2005
beethovenswig Pretty fun.... I also had to use paper, it's easier to me than the grid. Happy Early Birthday cinnamon30
Sep 14, 2005
orbrab Thank you. It was a nice composition and I think it took a bit longer to do than average grids.
Sep 14, 2005
ChinaRider Just keep swimming...
Sep 14, 2005
xsagirl Challenging but fun. I got stuck half-way and had to peek at the answer.
Sep 14, 2005
bagogirl good one i didn't get it
Sep 14, 2005
Sep 15, 2005
chicaespanola Really good teaser! It took me quite a while (and two tries
Sep 15, 2005
Winner4600 Got it on the second try!!!
Sep 15, 2005
lives_2sing it was a kool teaser. i wouldnt call it a favorite but it was worth at least a glance.
Sep 17, 2005
randamanae i can not get this one! i have tried several times. good job to you
Sep 25, 2005
angeleyes_7 Excellent work!!
Sep 28, 2005
Nov 04, 2005
dragon-B that was the hardest grid I have done so far
Nov 04, 2005
stephiesd i stopped reading the comments when i got to chinarider's in fear of finding a similar one. GGGGRRRRRRRR...
Nov 04, 2005
sundaisy227 good job. couldn't figure it out!
Nov 09, 2005
dragonfantasy Very good had to cheat and look at the first set of pairings to get it i think i am slipping...lol Thank you for making it
Nov 13, 2005
roscoep Good teaser! You must go back over the clues about 1/2 way through.
Jan 19, 2006
Splatt I admit I had to go over the clues a few times and really look at the grid, but I got it (hey, what's with all the peeking at the answers?).
Mar 06, 2006
Thanks so much for writing this one.
crackurcode For some reason I struggled with this one even though when I got it I realized that it was pretty easy...good job.
Nov 14, 2006
dreamlvr1432 This one was really good. Got stuck and just sat staring at everything for like 3 mins. Something just clicked and it all fell into place. I love ones like this that make me think and
Sep 21, 2007 work for the answer.
Mom2Ozzy This was a hard one but I got it first try!
Jan 24, 2008
willieslim Fun... tough... (I didn't get the answer, though
Mar 04, 2008
Great Grid...
mom_rox very nice puzzle. challenging, but very solvable.
Mar 19, 2008
dragonlove I really enjoyed it... thanks!
Mar 23, 2009
RebelAir Couldn't get it after three tries (and several hours)
May 02, 2009
Good one!
|
{"url":"http://www.braingle.com/brainteasers/teaser.php?id=25893&op=0&comm=1","timestamp":"2014-04-18T23:41:28Z","content_type":null,"content_length":"51889","record_id":"<urn:uuid:b6810909-f2d0-4ae4-8f0a-3a1fc005fb77>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the encyclopedic entry of bucks odds
In probability theory, the odds in favour of an event or a proposition are the quantity p / (1 − p), where p is the probability of the event or proposition. The odds against the same event are (1 − p) / p. For example, if you chose a random day of the week, then the odds that you would choose a Sunday would be 1/6, not 1/7. The odds against you choosing Sunday are 6/1. These 'odds' are actually relative probabilities. Generally, 'odds' are not quoted to the general public in this format because of the natural confusion with the chance of an event occurring being expressed fractionally as a probability. Thus, the probability of choosing Sunday at random from the days of the week is 'one-seventh' (1/7), and although a bookmaker may (for his own purposes) use 'odds' of 'one-sixth' the overwhelming everyday use by most people is odds of the form 6 to 1, 6-1, or 6/1 (all read as 'six-to-one') where the first figure represents the number of ways of failing to achieve the outcome and the second figure is the number of ways of achieving a favourable outcome: thus these are "odds against". In other words, an event with odds of "a to b against" would have probability b / (a + b), while an event with odds of "a to b on" would have probability a / (a + b).
In some games of chance, this is also the most convenient way for a person to understand how much winnings will be paid if the selection is successful: the person will be paid 'six' of whatever stake
unit was bet for each 'one' of the stake unit wagered. For example, a £10 winning bet at 6/1 will win '6 × £10 = £60' with the original £10 stake also being returned.
Taking an event with a 1 in 5 probability of occurring (i.e. a probability of 1/5, 0.2 or 20%), then the odds are 0.2 / (1 − 0.2) = 0.2 / 0.8 = 0.25. This figure (0.25) represents the stake necessary
for a person to win one unit on a successful wager. This may be scaled up by any convenient factor to give whole number values. E.g. If a stake of 0.25 wins 1 unit, then scaling by a factor of four
means a stake of 1 wins 4 units. If you bet 1 at these odds and the event occurred, you would receive back 4 plus your original 1 stake. This would be presented in fractional odds of 4 to 1 against
(written as 4-1, 4:1, or 4/1), in decimal odds as 5.0 to include the returned stake, in craps payout as 5 for 1, and in moneyline odds as +400 representing the gain from a 100 stake.
By contrast, for an event with a 4 in 5 probability of occurring (i.e. a probability of 4/5, 0.8 or 80%), then the odds are 0.8 / (1 − 0.8) = 4. If you bet 4 at these odds and the event occurred, you
would receive back 1 plus your original 4 stake. This would be presented in fractional odds of 4 to 1 on (written as 1/4 or 1-4), in decimal odds as 1.25 to include the returned stake, in craps as 5
for 4, and in moneyline odds as −400 representing the stake necessary to gain 100.
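A small sketch (not from the original article) of the conversions used in the two worked examples above; exact fractions are used so the fractional odds come out cleanly:

from fractions import Fraction

def odds_against(p):
    # 'Odds against' an event of probability p: (1 - p) / p
    return (1 - Fraction(p)) / Fraction(p)

def decimal_odds(p):
    # Decimal odds: total return per unit staked, including the returned stake
    return 1 / Fraction(p)

for p in (Fraction(1, 5), Fraction(4, 5)):
    print(p, odds_against(p), decimal_odds(p))
# 1/5 -> 4 against, decimal 5      (i.e. 4-1 against)
# 4/5 -> 1/4 against, decimal 5/4  (i.e. 4-1 on)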
Gambling odds versus probabilities
In gambling, the odds on display do not represent the true chances that the event will occur, but are the amounts that the bookmaker will pay out on winning bets. In formulating his odds to display the bookmaker will have included a profit margin which effectively means that the payout to a successful bettor is less than that represented by the true chance of the event occurring. This profit is known as the 'over-round' on the 'book' (the 'book' relates to the old-fashioned ledger that wagers were
recorded in and thus gives us the term 'bookmaker') and relates to the sum of the 'odds' in the following way:
In a 3-horse race, for example, the true chances of each of the horses winning based on their relative abilities may be 50%, 40% and 10%. These are the relative probabilities of the horses winning
and are simply the bookmaker's 'odds' multiplied by 100 for convenience. The total of these three percentages is 100, thus representing a fair 'book'. The true odds of winning for each of the three
horses is evens, 6-4 and 9-1 respectively. In order to generate a profit on the wagers accepted by the bookmaker he may decide to increase the values to 60%, 50% and 20% for the three horses,
representing odds of 4-6, Evens and 4-1. These values now total 130, meaning that the book has an overround of 30 (130 − 100). This value of 30 represents the amount of profit for the bookmaker if he
accepts bets in the correct proportions on each of the horses. The art of bookmaking is that he will take in, for example, $130 in wagers and only pay $100 back (including stakes) no matter which
horse wins.
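A sketch of the over-round arithmetic in this three-horse example (the percentages are the ones quoted above; the code itself is illustrative only):

def overround(implied_percentages):
    # Over-round: sum of the bookmaker's implied percentages minus 100
    return sum(implied_percentages) - 100

fair_book   = [50, 40, 10]   # evens, 6-4 and 9-1: a fair book
quoted_book = [60, 50, 20]   # 4-6, evens and 4-1 as quoted by the bookmaker

print(overround(fair_book))    # 0
print(overround(quoted_book))  # 30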
Profiting in gambling involves predicting the relationship of the true probabilities to the payout odds. If you can consistently make bets where the odds of paying out are better (pay out more) than
the true odds of the event, then over time (in theory) you will come out ahead.
The odds or amounts the bookmaker will pay are determined by the amounts bet on each of the respective possible events. They reflect the balance of wagers on either side of the event, and include the
deduction of a bookmaker’s brokerage fee (“vig” or vigorish).
Even odds
The terms 'even odds', 'even money' or simply 'Evens' imply that the payout will be 'one-for-one' or 'double-your-money'. Assuming there is no bookmaker’s fee or built-in profit margin, this means
that the actual probability of winning is 50%. The term “better than even odds” looks at it from the perspective of a gambler rather than a statistician. If the odds are Evens (1-1), and you bet 10,
you would win 10. If the gamble was paying 4-1 and the event occurred, you would make a profit of 40. So, it is better than Evens from the gambler’s perspective because it pays out more than
one-for-one. If an event is more positively favored to occur than a 50-50 chance then the odds will be worse than Evens, and the bookmaker will pay out less than one-for-one.
In popular parlance surrounding uncertain events, the expression "better than even" usually implies a better than (greater than) 50% chance of the event occurring, which is exactly the opposite of
the meaning of the expression when used in a gaming context.
The odds are a ratio of probabilities; an odds ratio is a ratio of odds, that is, a ratio of ratios of probabilities. Odds-ratios are often used in analysis of clinical trials. While they have useful
mathematical properties, they can produce counter-intuitive results: in the example above an event with an 80% probability of occurring is four times more likely to happen than an event with a 20%
probability, but the odds are actually 16 times higher on the less likely event (4-1 against) than on the more likely one (1-4, or 4-1 on).
The logarithm of the odds is the logit of the probability.
See also
|
{"url":"http://www.reference.com/browse/bucks%20odds","timestamp":"2014-04-21T16:38:13Z","content_type":null,"content_length":"82926","record_id":"<urn:uuid:c54a76a0-52d3-4703-ba14-4c81e8e6ba0e>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00251-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Publications and preprints
Julia's Equation and differential transcendence (with W. Bergweiler), submitted.
Transseries and Todorov-Vernaeve's asymptotic fields (with I. Goldbring)
Arch. Math. Logic, to appear.
Toward a model theory for transseries (with L. van den Dries and J. van der Hoeven)
Notre Dame J. Form. Log. 54 (2013), no. 3-4, 279-310.
Logarithms of iteration matrices, and proof of a conjecture by Shadrin and Zvonkine
J. Combin. Theory Ser. A 119 (2012), 627-654.
Differentially algebraic gaps (with L. van den Dries and J. van der Hoeven)
Selecta Math. 11 (2005), 247-280.
Asymptotic differential algebra (with L. van den Dries)
In: O. Costin, M. D. Kruskal, A. Macintyre (eds.), Analyzable Functions and Applications, Contemp. Math. 373, Amer. Math. Soc., Providence, RI (2005), 49-85.
Liouville closed H-fields (with L. van den Dries)
J. Pure Appl. Algebra 197 (2005), 83-139.
Some remarks about asymptotic couples
In: F.-V. Kuhlmann, S. Kuhlmann, M. Marshall (eds.), Valuation Theory and its Applications, II, Fields Institute Publications 33, AMS, Providence, RI (2003), 7-18.
H-fields and their Liouville extensions (with L. van den Dries)
Math. Z. 242 (2002), 543-588.
Closed asymptotic couples (with L. van den Dries)
J. Algebra 225, 309-358 (2000).
(For an expanded version of this paper click here.)
Whitney's Extension Problem in o-minimal structures (with A. Thamrongthanyalak), submitted.
Michael's Selection Theorem in a semilinear context (with A. Thamrongthanyalak), submitted.
Vapnik-Chervonenkis density in some theories without the independence property, I (with A. Dolich, D. Haskell, D. Macpherson, and S. Starchenko)
Trans. Amer. Math. Soc., to appear.
Vapnik-Chervonenkis density in some theories without the independence property, II (with A. Dolich, D. Haskell, D. Macpherson, and S. Starchenko)
Notre Dame J. Form. Log. 54 (2013), no. 3-4, 311-363.
Definable versions of theorems by Kirszbraun and Helly (with A. Fischer)
Proc. London Math. Soc. 102 (2011), 468-502.
Strongly minimal groups in the theory of compact complex spaces (with R. Moosa and T. Scanlon)
J. Symbolic Logic 71 (2006), 529-552.
An effective Weierstrass Division Theorem
Algorithms for computing saturations of ideals in finitely generated commutative rings
Appendix to: Automorphisms mapping a point into a subvariety, J. Algebraic Geom. 20 (2011), 785-794. (by B. Poonen)
Degree bounds for Gröbner bases in algebras of solvable type (with A. Leykin)
J. Pure Appl. Algebra 213 (2009), 1578-1605.
Lefschetz extensions, tight closure, and big Cohen-Macaulay algebras (with H. Schoutens)
Israel J. Math. 161 (2007), 221-310.
Finite generation of symmetric ideals (with C. Hillar)
Trans. Amer. Math. Soc. 359 (2007), 5171-5192.
Finiteness theorems in stochastic integer programming (with R. Hemmecke)
Found. Comput. Math. 7 (2007), 183-227.
Bounds and definability in polynomial rings
Quart. J. Math. 56 (2005), 263-300.
Reduction mod p of standard bases
Comm. Algebra 33 (2005), 1635-1661.
Orderings of monomial ideals (with W.-Y. Pong)
Fund. Math. 181 (2004), 27-74.
Ideal membership in polynomial rings over the integers
J. Amer. Math. Soc. 17 (2004), 407-441.
|
{"url":"http://www.math.ucla.edu/~matthias/publications.html","timestamp":"2014-04-18T18:12:36Z","content_type":null,"content_length":"24783","record_id":"<urn:uuid:6496c360-6212-4918-accf-ac39fc3ace04>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00594-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Find a Precalculus Tutor
...I have worked with students K-12 in the Seattle Public Schools, Chicago Public Schools, and Seattle area independent schools. I have also worked with students at the University level. I believe
in the importance of differentiated learning or tailoring lessons for each particular student so that...
27 Subjects: including precalculus, chemistry, biology, reading
Helping others has always been a passion of mine and will continue to be for the rest of my life. I have been involved in volunteering groups since elementary school and have never found anything
that matches the joy I receive when I have positively impacted someones life. Throughout high school I tutored pre-calculus students.
15 Subjects: including precalculus, reading, Spanish, geometry
...I earned an 800 on the mathematics SAT II, as well as a 5 on the calculus BC AP exam. I continued on to take several advanced math courses in college. I have tutored middle and high school math
for over a year.
13 Subjects: including precalculus, chemistry, physics, algebra 1
...I have helped students at University Tutoring Service and Central Test Prep in Seattle and at Boston Global Education in Westborough, MA. I am committed to helping students gain a deep
understanding of the material they are studying, not just getting through their current homework assignment or ...
18 Subjects: including precalculus, calculus, geometry, GRE
...I know that a lot of students find it difficult in school, where the teacher can't focus solely on them. As a tutor, I can give the student that one-on-one attention and cater to their learning
style. Plus, I'm really passionate about the subjects I tutor in, especially Latin and biology, and h...
21 Subjects: including precalculus, chemistry, physics, writing
|
{"url":"http://www.purplemath.com/Avondale-goodyear_AZ_Precalculus_tutors.php","timestamp":"2014-04-20T13:25:06Z","content_type":null,"content_length":"23840","record_id":"<urn:uuid:d8fc2d9b-0bc8-44a1-a2e1-0cd5841d8229>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Haskell-cafe] Microsoft PhD Scholarship at Strathclyde
Conor McBride conor at strictlypositive.org
Sat Mar 14 13:07:01 EDT 2009
Hi Dan
On 14 Mar 2009, at 14:26, Dan Doel wrote:
> On Saturday 14 March 2009 8:12:09 am Conor McBride wrote:
>> Rome wasn't burnt in a day.
>> Of course I want more than just numerical indexing (and I even
>> have a plan) but numeric constraints are so useful and have
>> so much of their own peculiar structure that they're worth
>> studying in their own right, even for languages which do have
>> full dependent types, let alone Haskell. We'll carry out this
>> project with an eye to the future, to make sure it's compatible
>> with full dependent types.
> One should note that ATS, which has recently been generating some
> excitement,
> doesn't have "full" dependent types, depending on what exactly you
> mean by
> that.
I'm really impressed by the results ATS is getting, and by DML
before it. I think these systems do a great job of showing
what can be gained in performance by improving precision.
> For instance, it's dependent typing for integers consist of:
I certainly want
> 1) A simply typed static/type-level integer type
which looks exactly like the value-level integers and has a
helpful but not exhaustive selection of the same operations.
But this...
> 2) A family of singleton types int(n) parameterized by the static
> type.
> For instance, int(5) is the type that contains only the run-time
> value 5.
> 3) An existential around the above family for representing arbitrary
> integers.
...I like less.
> where, presumably, inspecting a value of the singleton family
> informs the type
> system in some way. But, we can already do this in GHC (I'll do
> naturals):
> -- datakind nat = Z | S nat
> data Z
> data S n
> -- family linking static and dynamic
> data NatFam :: * -> * where
> Z :: NatFam Z
> S :: NatFam n -> NatFam (S n)
> -- existential wrapper
> data Nat where
> N :: forall n. NatFam n -> Nat
> Of course, ATS' are built-in, and faster, and the type-level
> programming is
> probably easier, but as far as dependent typing goes, GHC is already
> close (I
> don't think you'd ever see the above scheme in something like Agda
> or Coq with
> 'real' dependent types).
Which is why I'd rather not have to do that in Haskell either. It
really hurts to go through this rigmarole to make this weird version
of Nat. I'd much rather figure out how to re-use the existing
datatype Nat. Also, where the GADT coding really doesn't compete
with ATS is in dealing with constraints on indices that go beyond
unification -- numbers have more structure and deserve special
attention. Numerical indexing systems need to carry a big stick for
"boring algebra" if we're to gain the power but keep the weight down.
But wherever possible, I'd prefer to avoid doing things an awkward
way, just because we don't quite have dependent types. If the above
kludge is really necessary, then it should at least be machine-
generated behind the scenes from ordinary Nat. I'd much rather be
able to lift a type to a kind than have a separate datakind feature.
Apart from anything else, when you get around to indexed kinds, what
you gonna index /them/ with? Long term, I'd far rather think about
how to have some sort of dependent functions and tuples than muddle
along with singletons and weak existentials.
> So this sort of type-level vs. value-level duplication with GADTs
> tying the
> two together seems to be what is always done in ATS. This may not be
> as sexy
> as taking the plunge all the way into dependent typing ala Agda and
> Coq, but
> it might be a practical point in the design space that GHC/Haskell-
> of-the-
> future could move toward without too much shaking up. And if ATS is
> any
> indication, it allows for very efficient code, to boot. :)
I'd certainly agree that ATS demonstrates the benefits of moving
in this direction, but I think we can go further than you suggest,
avoiding dead ends in the design space, and still without too
much shaking up. I don't think the duplicate-or-plunge dilemma you
pose exhausts the options. At least, I see no reason to presume
so and I look forward to finding out!
To be honest, I think the real challenge is to develop good libraries
and methodologies for working with indexed types (and particularly,
indexed notions of computation: what's the indexed mtl?). There are
lots of promising ideas around, but it's hard to build something
that scales whilst the encodings are so clunky. A bit of language
design, even just to package existing functionality more cleanly,
would really kick the door open. And yes, I am writing a research
All the best
|
{"url":"http://www.haskell.org/pipermail/haskell-cafe/2009-March/057725.html","timestamp":"2014-04-18T18:54:16Z","content_type":null,"content_length":"7963","record_id":"<urn:uuid:b780dac9-f922-44fc-8918-7ebe1350261a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: December 2004 [00322]
Re: Re: Needed Grid lines in Implicit Plot
• To: mathgroup at smc.vnet.net
• Subject: [mg52828] Re: [mg52818] Re: [mg52648] Needed Grid lines in Implicit Plot
• From: Andrzej Kozlowski <akoz at mimuw.edu.pl>
• Date: Tue, 14 Dec 2004 05:59:22 -0500 (EST)
• References: <200412070909.EAA09155@smc.vnet.net> <200412130924.EAA23586@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
On 13 Dec 2004, at 18:24, Bob Walker wrote:
> Remove ", {y, -4 , 4 }" iterator and it will work as you wish.
> ImplicitPlot accepts only one iterator. You can inspect the
> ImplicitPlot signature by:
> ?ImplicitPlot
> I don't know why there is no error indication.
> Take care,
> Bob Walker
> Narasimham wrote:
>> << Graphics`ImplicitPlot`
>> ImplicitPlot[3 x + 5 y == 1, {x, -4 , 4}, {y, -4 , 4 },
>> GridLines ->{Range[-4, 4, 1], Range[-4, 4, 1]},AspectRatio ->
>> Automatic];
This is quite simply false.
There are two forms of Implicitplot. they are both described int he
documentation (read till the end)
ImplicitPlot[eqn, {x, a, b}] draws a graph of the set of points that satisfy the equation eqn. The variable x is associated with the horizontal axis and ranges from a to b. The remaining variable in the equation is associated with the vertical axis. ImplicitPlot[eqn, {x, a, x1, x2, ..., b}] allows the user to specify values of x where special care must be exercised. ImplicitPlot[{eqn1, eqn2, ...}, {x, a, b}] allows more than one equation to be plotted, with PlotStyles set as in the Plot function. ImplicitPlot[eqn, {x, a, b}, {y, a, b}] uses a contour plot method of generating the plot. This form does not allow specification of intermediate points.
The form of ImplicitPlot with one iterator only works on equations that
can be "solved" algebraically using Solve. It produces a Graphics
object, which is why it accepts the GridLines option.
The other form of ImplicitPlot works much more generally, but it
produces a CountourGraphics object, which needs to be converted to a
Graphics object before using the GridLines option.
Andrzej Kozlowski
Chiba, Japan
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2004/Dec/msg00322.html","timestamp":"2014-04-18T15:47:47Z","content_type":null,"content_length":"36780","record_id":"<urn:uuid:6a887cec-f2f8-4ba0-a78b-8a1a7c57377b>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Jovan D. Kečkić
Farmaceutski fakultet, Beograd, Yugoslavia
Abstract: We point out a generalized inverse $A_G$ of a singular square complex matrix $A$, with the property that the general solution of the equation $A^nx=0$ (and many other equations) can be
expressed by means of $A_G$ for all positive integers $n$. This inverse is a solution of the system (5), also satisfied by the strong spectral inverse of Greville. Applications to various matrix
equations and to linear algebraic systems are given.
Classification (MSC2000): 15A09, 15A24
Full text of the article:
Electronic fulltext finalized on: 2 Nov 2001. This page was last modified: 16 Nov 2001.
© 2001 Mathematical Institute of the Serbian Academy of Science and Arts
© 2001 ELibM for the EMIS Electronic Edition
|
{"url":"http://www.emis.de/journals/PIMB/059/8.html","timestamp":"2014-04-17T01:02:29Z","content_type":null,"content_length":"3399","record_id":"<urn:uuid:8f727710-aa49-4564-b759-40ae06b526d5>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
|
December 10th 2009, 07:24 PM
What is resonance? How do I approach this question?
2x" + 36x = sin(wt)
a) For what values of w will the system exhibit resonance?
b) Use the method of undetermined coefficients to find a particular solution in the case where w is not the resonant frequency.
December 11th 2009, 02:11 AM
mr fantastic
Resonance occurs when the natural frequency of the system is the same as the frequency of the forcing term.
So start by solving 2x" + 36x = 0. Can you do this?
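If you want to check your working afterwards, here is a rough sketch with Python's sympy (the setup and symbols are only illustrative, not part of the exercise):

import sympy as sp

t, w, A = sp.symbols('t w A', positive=True)
x = sp.Function('x')

# Natural frequency: solve the homogeneous equation 2x'' + 36x = 0.
print(sp.dsolve(sp.Eq(2*x(t).diff(t, 2) + 36*x(t), 0), x(t)))
# The general solution involves sin(3*sqrt(2)*t) and cos(3*sqrt(2)*t),
# so the natural frequency is 3*sqrt(2) and resonance occurs at w = 3*sqrt(2).

# Undetermined coefficients for w != 3*sqrt(2): try x_p = A*sin(w*t).
xp = A*sp.sin(w*t)
print(sp.solve(sp.Eq(2*xp.diff(t, 2) + 36*xp, sp.sin(w*t)), A))
# A works out to 1/(36 - 2*w**2), i.e. x_p = sin(w*t)/(36 - 2*w**2).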
|
{"url":"http://mathhelpforum.com/differential-equations/119822-resonance-print.html","timestamp":"2014-04-19T21:07:27Z","content_type":null,"content_length":"4353","record_id":"<urn:uuid:10454de5-27e6-4bc1-bb6a-1ea37eb65899>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00284-ip-10-147-4-33.ec2.internal.warc.gz"}
|
left tail, right tail, two tail
August 12th 2007, 08:19 PM #1
Jul 2007
left tail, right tail, two tail
I'm having a hard time trying to figure out this problem.
The instructions say to determine whether the hypothesis test for each claim is left-tailed, right-tailed, or two-tailed, then explain why.
The mean life of a certain tire is no less than 50,000 miles.
What steps do I take, the book does a crappy job!
A two tailed test is one to test a hypothesis about $x$ of the form: $x_{left}<x<x_{right}$
A (left) one tailed test is one to test a hypothesis about $x$ of the form: $x_{left}<x$
A (right) one tailed test is one to test a hypothesis about $x$ of the form: $x<x_{right}$
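To make the tire claim concrete, here is a rough sketch in Python with scipy (the sample numbers are made up purely for illustration):

import numpy as np
from scipy import stats

# Claim: the mean tire life is no less than 50,000 miles.
# H0: mu >= 50000   vs   Ha: mu < 50000   -> a left-tailed test.
sample = np.array([49500, 50200, 48800, 51000, 49000, 49700])  # hypothetical data

t_stat, p_two_sided = stats.ttest_1samp(sample, popmean=50000)
# Convert the two-sided p-value to a left-tailed one.
p_left = p_two_sided / 2 if t_stat < 0 else 1 - p_two_sided / 2
print(t_stat, p_left)  # small p_left is evidence against the claim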
A little reality, please.
1) Why are you qualified to judge the effectiveness of the book - speaking generally. Perhaps it works well for a large proportion of the population. You're just in the other portion. It's
2) Why do you think there are "steps'? Perhaps this is where you think the book has a problem. It isn't showing you steps that simply don't exist. The "steps" are to review the material, resort
to all applicable definitions and known structures, and then think about it. Would it have been useful if your book had said that?
3) It always helps me to know, whatever the subject matter, that someone thought of it before there was a book on it. I expect me to come up with stuff, perhaps in the same way as the original
author. Learning how to think, construct, synthesize, and rationally move forward will help your life far more than memorizing a couple of steps.
My views. I welcome others'.
{"url":"http://mathhelpforum.com/statistics/17723-left-tail-right-tail-two-tail.html","timestamp":"2014-04-19T22:50:49Z","content_type":null,"content_length":"38701","record_id":"<urn:uuid:94cb76c2-95a8-480b-a094-9ea59f28cac3>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mellon College of Science-Graduate Education - Carnegie Mellon University
Mellon College of Science (MCS)
Biological Sciences—Ph.D.
Research in the Department of Biological Sciences exemplifies the interdisciplinary approach that is an essential feature of modern biomedical science. The Ph.D. program stresses the development of
strong research and teaching skills, and provides advanced training in biochemistry, biophysics, cell biology, computational biology, developmental biology, genetics, molecular biology and
neurobiology. Depending on their interests, students may participate in various centers such as the Bone Tissue Engineering Center, the Center for the Neural Basis of Cognition, the Lane Center for
Computational Biology, the Machine Learning Department, the Molecular Biosensor and Imaging Center, and the Pittsburgh NMR Center for Biomedical Research. Typically, students complete their training
within five to six years.
Computational Biology [with SCS]—M.S.
The emerging field of computational biology represents the application of modern computer science to solving biological problems. Carnegie Mellon’s world-class strengths in computer science and
strong tradition of interdisciplinary research combine to provide training in this new discipline. Program goals include meeting the growing need for computational biologists in the biotechnology and
pharmaceutical industries and at universities and research institutes, and allowing nontraditional and reentering students to establish credentials that enable acceptance into Ph.D. programs in
computational biology. Students complete the program in three to four semesters.
Chemistry—M.S., Ph.D.
This program is noted for research at the interface of chemistry with biology, physics and engineering, including polymer science, materials, green and environmental, bioorganic, bioinorganic,
biophysical, spectroscopy, theoretical and computational chemistry. The Ph.D. program’s goal is to prepare students for an academic or research career in chemistry. The M.S. in Chemistry is available
only to students who are in the process of pursuing the Ph.D. Except in special circumstances, Chemistry does not admit students seeking only the M.S.
Chemical Engineering and Colloids, Polymers and Surfaces [with CIT]—M.S.
This program focuses on the engineering of complex fluids, which consist of nanoparticles (colloids), macromolecules and interfaces. Topics are relevant to industrial technology and the manufacture
of products based on complex fluids; examples include pharmaceuticals, coatings and paint, cosmetics, surfactant-based products and biotech materials. The program can be completed in nine months.
Polymer Science—M.S.
This program provides the basic background for scientists and engineers to pursue technical careers in industries that manufacture, process and use polymeric materials.
Students with these interests may also want to consider the interdisciplinary M.S. in Colloids, Polymers and Surfaces, a joint program with Chemical Engineering designed for professionals working in
the polymer field. The M.S. in Polymer Science is available only to students who are in the process of pursuing the Ph.D.
Algorithms, Combinatorics and Optimization [with SCS and Tepper]—Ph.D.
The focus of this program is on the design of efficient algorithms for problems arising in computer science and operations research, and on the mathematics required to analyze these algorithms and
problems. The program brings together the study of the mathematical structure of discrete objects and the design and analysis of algorithms in areas such as graph theory, combinatorial optimization,
integer programming, polyhedral theory, computational algebra, geometry and number theory.
Applied Mathematics - Ph.D.
The requirements for this program are the same as for the Ph.D. in Mathematical Sciences, but students also take courses in applied mathematics, such as ordinary differential equations, partial
differential equations, advanced topics in analysis, Sobolev spaces, methods of optimization, and numerical analysis.
The PhD. Dissertation will usually be in the area of calculus of variations, continuum mechanics, numerical analysis, partial differential equations, or scientific computing, often involving
applications to materials science, engineering, biology, computer vision, etc.
Students interested in Applied Mathematics should apply to the Ph.D. in Mathematical Sciences and indicate their interest on the application.
Computational Finance [with Heinz, H&SS and Tepper]—M.S.
This 18-month full-time degree (in Pittsburgh or New York) or three-year part-time (in New York) MSCF program focuses on the use of quantitative methods and information technology in the field of
finance. The curriculum provides an in-depth understanding of the mathematics used to model security prices, the statistical tools needed to summarize and predict the behavior of financial data, the
computer science and e-commerce skills needed to understand the technology used in the financial industry, and the corporate finance needed to employ these skills in finding innovative solutions to
business needs.
Mathematical Finance—Ph.D.
The requirements for this program are the same as for the Ph.D. in Mathematical Sciences, but students also take courses in probability, statistics, stochastic processes, microeconomics and finance.
Normally, the Ph.D. dissertation is on some aspect of stochastic processes applied to finance. The breadth of the curriculum opens up a variety of career opportunities, including research positions
in the finance industry and faculty positions in mathematics. Students interested in mathematical finance should apply to the Ph.D. program in mathematical sciences and indicate their interest on the application.
Mathematical Sciences—Ph.D.
Students seeking a Ph.D. are expected to show a broad grasp of mathematics and demonstrate a genuine ability to do mathematical research. The Ph.D. in Mathematical Sciences is a traditional degree,
and its requirements are representative of all doctoral programs in mathematics. The primary intent of the graduate program is to train mathematical scientists for a variety of career opportunities
including traditional university settings, industry and work in the finance industry. As part of their Ph.D. training, students may elect to earn a Masters degree. (Please note that the Department
does not offer a terminal Masters program.)
Pure and Applied Logic [with H&SS and SCS]—Ph.D.
This interdisciplinary program is jointly sponsored by the departments of Computer Science, Mathematical Sciences and Philosophy. Each of these departments administers a track of the program, and
students are admitted directly to one of these three departments, which will serve as their home base. Carnegie Mellon’s large and active Logic Community has a particularly strong concentration in
foundational aspects of computing and has an established record of collaborations in pursuing theoretical research, conducting major implementation projects, and running colloquia and workshops. The
program builds on these strengths to educate new generations of scientists who will pursue research in Pure and/or Applied Logic.
Applied Physics—Ph.D.
The Ph.D. in Physics and Applied Physics offers advanced training to students at the leading edge of physics research and prepares them to become the next generation of leaders in academia and
industry. The program is rigorous as well as practical. In particular, the first two years of the graduate curriculum is designed to provide students with the solid foundation necessary to start
research in their chosen area of specialization. Graduate students have the opportunity to study traditional core physics areas of astrophysics, biophysics, condensed matter physics, high energy and
medium energy particle physics, or perform interdisciplinary work at the boundaries of chemistry, biology, materials science, or engineering.
The M.S. in Physics is awarded to those who have demonstrated a mastery of advanced topics in physics beyond the B.S. degree level. The M.S. degree is usually offered only to students enrolled in the
Ph.D. degree program.
|
{"url":"http://www.cmu.edu/graduate/academics/guide-to-graduate-degrees-and-programs/mellon-college-of-science-mcs.html","timestamp":"2014-04-16T21:52:50Z","content_type":null,"content_length":"19203","record_id":"<urn:uuid:ec798452-d9cb-485f-a219-e4db38ae4e74>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00463-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ergodic Theory and Geometric Rigidity and Number Theory
5 January to 7 July 2000
Report from the Organisers
A Katok (Penn State), G Margulis (Yale), M Pollicott (Manchester)
Scientific background and objectives
The central scientific theme of this programme was the recent development of applications of ergodic theory to other areas of mathematics, in particular, the connections with geometry, group actions
and rigidity, and number theory. The potential of ergodic theory as a tool in number theory was emphasized by Furstenberg’s proof of Szemerédi’s theorem on arithmetic progressions.
Ergodic theory is an area of mathematics with all of its roots and development contained within the 20th century. Strands of the modern theory can be traced back to the work of Poincaré, but the
subject began to take a more recognizable form through the seminal work of von Neumann, Birkhoff and Kolmogorov. The impetus to these developments was the important concept of ‘ergodicity’ in
dynamical systems - by which the temporal evolution of the system, though averaging over typical orbits (almost every orbit in the measure theoretic sense), corresponds to spatial averages over the
system. An important concept in physical systems, it also set the foundation for applications to other branches of mathematics, most notably geometry and number theory.
Foremost amongst the recent contributions of ergodic theory to number theory is the solution of the Oppenheim Conjecture, a problem on quadratic forms which had been open since 1929. This conjecture
was solved by Margulis, and a particular special case is the following:
The Oppenheim Conjecture. Consider an indefinite ternary quadratic form, for example, the quadratic form
Q(x,y,z) = ax^2 +by^2 - cz^2
in three variables, with a,b,c > 0 for the purposes of illustration. The original conjecture of Oppenheim was that the values Q(l,m,n) can be made arbitrarily close to 0 by taking choices of non-zero
triples of integers (l,m,n) ∈ Z^3 \ {(0,0,0)}. This problem was extensively studied by Davenport, using purely number theoretic methods. The final solution of the conjecture was achieved by Margulis
by using a reformulation of the problem into the ergodic theory of homogeneous flows on lattices.
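As a purely numerical illustration of the statement (a Python sketch with arbitrarily chosen coefficients, not part of the proof):

import itertools

# An indefinite ternary form Q(x, y, z) = a*x^2 + b*y^2 - c*z^2;
# the coefficients below are an arbitrary illustrative choice.
a, b, c = 1.0, 2.0 ** 0.5, 3.0 ** 0.5

def Q(l, m, n):
    return a * l * l + b * m * m - c * n * n

# Search nonzero integer triples in a box for small values of |Q|.
best = min(
    (abs(Q(l, m, n)), (l, m, n))
    for l, m, n in itertools.product(range(-30, 31), repeat=3)
    if (l, m, n) != (0, 0, 0)
)
print(best)  # the minimum of |Q| keeps shrinking as the box is enlarged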
Of equal importance is the role of ergodic theory in geometry and the rigidity of actions. In recent years there have been diverse results, including rigidity results for higher rank abelian groups,
and results on the classification of geodesic flows on manifolds of non-positive curvature. This is a quickly evolving area of research. In more recent years, the richness of the applications to
geometry have become more apparent. This is illustrated by the famous Mostow rigidity theorem, by which the geometry of certain manifolds is completely determined by their topology (i.e. two
manifolds with the same fundamental groups are isometric).
Rigidity of Anosov actions. In the context of actions, there is a very well developed programme of Katok, Spatzier, and others, to show local C^∞ rigidity of algebraic Anosov actions of Z^k and R^k on compact manifolds as well as orbit foliations of such actions. More precisely, two actions of a group G agree up to an automorphism if the second action can be obtained from the first one by composition with an automorphism of the underlying group. Call a C^∞-action of a Lie group G locally C^∞-rigid if any perturbation of the action which is C^1-close on a compact generating set is C^∞-conjugate up to an automorphism. Katok and Spatzier proved C^∞-local rigidity of most known irreducible Anosov actions of Z^k and R^k (as well as the orbit foliations).
A related theme is that of paucity of invariant measures.
The Furstenberg Conjecture. Consider the two transformations on the unit circle S, T : K → K defined by Sz = z^2 and Tz = z^3. The only ergodic invariant probability measures which are simultaneously
preserved by both S and T are either Haar measure, or measures supported on finitely many points.
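Written additively, with x in [0, 1) and z = e^{2πix}, the two maps are simply doubling and tripling mod 1; a small illustrative sketch in Python:

def S(x):
    return (2 * x) % 1.0  # z -> z^2 on the circle, i.e. x -> 2x (mod 1)

def T(x):
    return (3 * x) % 1.0  # z -> z^3, i.e. x -> 3x (mod 1)

# The two maps commute, Lebesgue (Haar) measure is invariant for both,
# and rational points have finite orbits, giving the atomic invariant measures.
x = 0.1234567
for _ in range(5):
    x = S(T(x))
    print(x)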
Whereas this famous conjecture remains open, it is known (by work of Rudolph) that the conclusion is true if we restrict to measures of positive entropy. This result was generalized to Z ^n-actions
by Anosov toral automorphisms, and other more general settings, by Katok and Spatzier. A well-known cross-discipline application lies in the connection with Quantum Chaos. In particular, a
quantitative version of the Oppenheim Conjecture gives a proof of the Berry Conjecture on the eigenvalues of the Laplacian on flat tori. The programme explored these and other emerging applications
of ergodic theory. It brought together both national and international experts in ergodic theory and related disciplines, as well as other members from the wider UK and international mathematical
communities. In particular, a major aim of the programme was to bring together people with different interests and backgrounds, and to promote the use of ergodic theory techniques.
The overall planning for the programme was undertaken by all three of the organisers. The day-to-day organization of the programme was undertaken by M Pollicott from January to early April, and again
from late May to the end. During this absence, his duties were undertaken by
G Margulis. There was also very able assistance from A Eskin and M Burger for specific workshops within the programme. In addition to the workshops and meetings within the programme, there was a
regular research seminar on Tuesday afternoon, and a more informal seminar on Thursday morning. There were also other seminars scheduled as required by the participants or the organisers.
In May there was a Spitalfields Day, with talks by A Eskin, A Katok and G Margulis.
During the month of June there was a one-day meeting in Dynamical Systems (sponsored by the LMS and organised by Sharp and Walkden, from Manchester), with talks by R Itturiaga (INI and Heriot-Watt),
M Urbanski (North Texas) and T Ward (UEA).
In June the frequency of talks increased and the last week of the month was designated a Special Emphasis Week, prior to the final Euro-workshop in July. During this special week there were 3 talks
each day by participants, focusing particularly on results obtained by longer term visitors.
A Katok and H Furstenberg also spoke in the Institute’s Monday Seminar series, addressing a wider audience.
The programme hosted in excess of 50 long term visitors, at various times, and 75 short term visitors. In addition, there was very strong participation in the workshops and other activities during
the programme. The three organisers each spent a substantial period of the programme in residence. In addition, H Furstenberg was a Rothschild Professor and made an invaluable contribution to the
programme during his month in residence.
As the participant list shows, the majority of experts in the field visited the Institute during this programme, and a large number of leading experts in related fields attended. There was a strong
presence from Europe and North America, as well as a substantial presence from the former Soviet Union helped by the generous support from the Leverhulme Trust.
Many of the meetings and individual talks attracted mathematicians from both Cambridge and other British universities. The Junior Membership scheme, support from the NSF and support from the EU for
two of the workshops was particularly useful in encouraging participation from PhD students and younger mathematicians. A number of participants went to give talks in other UK departments (e.g.
Manchester, Surrey, QMW and Warwick).
Meetings and workshops
Lectures on Ergodic Theory, Geometry and Lie Groups (10-14 January 2000):
A Katok (Penn State) and M Pollicott (Manchester)
The first meeting was designed to provide a firm foundation for the programme, and to help set the agenda for subsequent activities. It was also intended to provide an introduction to the subject for
a broad audience, particularly younger mathematicians and non-experts from related areas.
The meeting consisted of five short lecture courses by acknowledged experts in the area. These were: M Burger (ETH Zurich), Cohomological aspects of lattices and applications to products of trees; R
Feres (St. Louis), Topological superrigidity and differential geometry; H Furstenberg (Jerusalem), Ergodic theory and the geometry of fractals; A Katok (Penn State), Dynamics and ergodic theory of
smooth actions of higher tank abelian groups and lattices in semi-simple Lie groups; and D Kleinbock (Rutgers), Interactions between homogeneous dynamics and number theory.The meeting successfully
achieved all of its aims. Participants were mathematicians, predominantly graduate students, postdoctoral fellows and younger researchers.
Euroworkshop: Rigidity in Dynamics and Geometry (27-31 March 2000):
G Margulis (Yale), assisted by A Eskin (Chicago) and M Burger (ETH Zurich)
The second workshop was devoted to applications of ergodic theory to locally symmetric spaces, geometric rigidity, and number theory. This represented one of the most intense periods of activity
during the programme.The subjects covered in the meeting included such important recent developments as the classification of the actions of higher rank groups, unipotent flows on homogeneous spaces
and the Oppenheim conjecture.The meeting consisted of 33 lectures from an international audience. Those participating were a broad mix of senior mathematicians and younger researchers and students.
The main speakers included: D Gaboriau (ENS, Lyon);B Kleiner (Michigan); F Labourie (Orsay); A Zorich (Rennes I); L Mosher (Rutgers), B Weiss (SUNY, StoneyBrook); H Pajot (Cergy-Pointoise); N Monod
(ETH Zurich); A Iozzi (Maryland); B Leeb (Tubingen);A Karlsson (Yale); G Knieper (Bochum); A Furman (Illinois); B Goldman (Maryland); A Adams (Minnesota); H Abels (Bielefeld); M Skriganov (Steklov);
A Stepin (Moscow State); M Dodson (York); V Kaimanovich (Rennes/Manchester); Y Guivarc’h (Rennes); G Tomanov (Claude Bernard, Lyon), Benoist (ENS, Paris), D Witte (Oklahoma State), A Parreau (Orsay);
B Remy (Henri Poincaré, Nancy); F. Paulin (Orsay);
Y Shalom (Yale); A Török (Houston); A Nevo (Technion); D Fisher (Yale); H Oh (Princeton); A Zuk (ENS, Lyon).
Ergodic Theory of Z ^d-actions (3-7 April 2000):
M Pollicott (Manchester), K Schmidt (Vienna) and P Walters (Warwick)
The third meeting in the programme was held in collaboration with the Mathematics Institute at the University of Warwick, and the venue was Warwick University. This meeting specialized more in the
specific topic of Z^d-actions, an area in which there was a very successful symposium at Warwick in 1993-94. This meeting focused on developments over the intervening six years, and showed that the
area was still very active. There were 27 talks, all of 45 minutes. The total number of participants exceeded 75.
Speakers included: Aaronson (Tel Aviv), Auslander (Maryland), Bergeleson (Ohio), Bufetov (Moscow), Burton (Ohio), Dani (Tata), Einsiedler (UEA), Feres (Washington), Friedland (Chicago), Hurder
(Illinois), Johnson (UNC), A Katok (Penn State), S Katok (Penn State), Kaminski (Krakow), Kitchens (IBM), Lind (Seattle), Margulis (Yale), Mozes (Jerusalem), Petersen (North Carolina), Putnam
(Victoria), Schmidt (Vienna), Shah (Tata), Thouvenot (Paris), Tuncel (Seattle), Vershik (St. Petersburg), Ward (UEA).
Euro-conference on Ergodic Theory, Riemannian Geometry and Number Theory (3-7 July 2000):
A Katok (Penn State), G Margulis (Yale) and M Pollicott (Manchester)
This was the final meeting of the programme and served, in part, to review the achievements made during the previous six months. A number of speakers took the opportunity to present work which they
had carried out while in residence at the Institute. The meeting encompassed much research activity, and marked the culmination of the programme.
There were a total of 29 talks, all of 45 minutes duration. The meeting attracted more than 100 participants. The speakers included: H Furstenberg (Jerusalem); G Margulis (Yale); H Masur (UIC), A
Windsor (Penn State); A Gamburd (Dartmouth); D Burago (Penn State); E Ghys (ENS Lyon); B Klingler (INI); M Burger(ETH Zurich); C Drutu (Lille, MPI Bonn); G Soifer (Bar-Ilan); A Lubotzky (Jerusalem);
L Clozel (Paris Sud); D Kleinbock (Rutgers); J Marklof (Bristol); M Kanai (Nagoya); U Hamenstadt (Bonn); M Babillot (Orleans); P Pansu (Paris Sud); B Kalinin (Penn State); C Walkden (Manchester); F
Dal’bo (Rennes); Y Cheung (UIC CMI); J Schmeling (Berlin); C Connell (UIC); R Sharp (Manchester); S Katok (Penn State); U Bader (Technion); R Zimmer (Chicago).
Outcome and achievements
The main achievement of this programme was that it brought together both established experts in the field, as well as younger researchers, from home and abroad, in an effort to promote scientific
research and training of the highest quality. To this end it was remarkably successful, with progress being made on a large number of problems, in a diverse number of different directions.
One of the topics where there was most progress was in the area of intersection between Lie groups and Ergodic theory. Abels, Margulis and Soifer worked intensively on the classical Auslander and
Milnor conjectures and considerable progress was made on these problems. In addition, Abels also made progress, with Margulis, on another classical problem (due to Siegel) regarding metrics on
reductive groups.
At a more algebraic level, Shalom made substantial progress in understanding to what extent lattices in higher rank Lie groups differ from lattices in the rank one Lie groups Sp(n,1), in terms of
their representation theory. On the face of it all of these lattices share Kazhdan's property (T), which dominates their behaviour. However, looking at uniformly bounded, rather than unitary,
representations on Hilbert spaces reveals some fundamental differences.
Adams and Witte studied the classification of the homogeneous spaces of SO(2,n) and SO(1,n) that have Lorentz forms. This is an important ingredient in showing that SL(2,R) is the only simple Lie
group that can act non-tamely on a Lorentz manifold. Witte also collaborated with Lifschitz, one of the younger participants, on establishing rigidity of lattices in nilpotent algebraic actions over
local fields of positive characteristic. He also completed work with Iozzi describing the Cartan-decomposition subgroups of SU(2,n). They also completed a related project on tesselations of
homogeneous spaces of SU(2,n).
Another area which was emphasized during the programme was the action by groups on manifolds. Dani worked on ergodic Z ^d-actions on Lie groups by their automorphisms. Skriganov worked with Margulis
on proving ergodic theorems for submanifolds generated by nilpotent subgroups in SL(n). These are results which arise naturally in applications of ergodic theory to lattice point problems.
The period spent by Witte and Zimmer at the INI allowed them to complete a long-term project describing actions of semi-simple Lie groups on compact principal bundles. Zimmer also took the
opportunity to develop work with Nevo on properties of smooth projective factors for actions with stationary measure, with Fisher on actions on compact principal bundles, and with Labourie and Mozes
on the study of locally homogeneous spaces with symmetry.
An important recurrent theme in the meeting was the application of ideas from the ergodic theory of homogeneous flows to number theory. Of particular timeliness was the refinement of the ergodic
theoretic proof of the following famous conjecture of Baker and Sprindzhuk.
The Baker-Sprindzhuk Conjecture. Given x ∈ R^n, let us write P(x) = ∏_{i=1}^n |x_i|. We say that a vector x ∈ R^n is very well multiplicatively approximated if for some ε > 0 there are infinitely many q ∈ Z and p ∈ Z^n such that P(qx + p)|q| ≤ |q|^{-ε}.
For the curve M_0 = {(t, t^2, ..., t^n) : t ∈ R} ⊂ R^n, Baker conjectured that almost all points on M_0 are not very well approximated. For n = 2 this was proved by Schmidt in 1964, and for n = 3 by Beresnevich-Bernik in 1996. More generally, let f_1, ..., f_n be real-analytic functions on a domain U ⊂ R^d which, together with 1, are linearly independent over R. A stronger conjecture (formulated by Sprindzhuk) states that almost all points of M_0 are not very well multiplicatively approximated. This was proved recently by Kleinbock-Margulis.
The proof of the full conjecture by Kleinbock and Margulis uses the ergodic theory of homogeneous actions on the space SL(n+1, R)/SL(n+1, Z). More precisely, given y ∈ R^n we can define a lattice L_y = (1 y^T; 0 I_n) Z^{n+1}, the block matrix written row by row. Then, for any vector t = (t_1, ..., t_n), t_i ≥ 0, we write t = ∑_{i=1}^n t_i and g_t = diag(e^t, e^{-t_1}, ..., e^{-t_n}) ∈ SL(n+1, R), and consider the family of lattices g_t L_y.
The dynamical characterization used by Kleinbock and Margulis for y ∈ R^n to be very well multiplicatively approximated is that there exist γ > 0 and infinitely many t ∈ Z_+^n such that δ(g_t L_y) ≤ e^{-γt}.
This approach has been used to give ergodic reformulations (and potentially accessible versions) of many important conjectures in number theory (e.g. the Littlewood Conjecture in simultaneous
Diophantine approximation). In particular, Kleinbock, Bernik and Margulis developed further these techniques and were able to establish Khintchine-type theorems on manifolds (more precisely the
convergence cases for the standard and multiplicative versions). Using more classical techniques, Velani and Pollington obtained estimates on the size of the badly simultaneously approximable set.
During his extended stay at the Institute, Dani also worked on applications of flows on homogeneous spaces to Diophantine approximation, related to earlier work of Margulis on the Oppenheim
conjecture. In particular, results were obtained concerning diophantine solutions of quadratic inequalities, again related to the Oppenheim Conjecture, in narrow strips in the associated quadratic
There was also good progress in applications to number theory over different fields. Kleinbock and Tomanov successfully worked on proving the natural p-adic and S-arithmetic versions of problems in
Diophantine approximation.
A surprising application of these number theoretical results is to spectral theory on certain special manifolds. Eskin, Margulis and Mozes made substantial progress in studying quadratic forms of
signature (2,2) and the difficult eigenvalue spacing on flat 2 tori. This is closely related to the well known Berry Conjecture in quantum chaos.
In another direction, there was also substantial progress on geometric problems using ergodic theoretic approaches. For example, Klingler worked on finding a criterion for arithmeticity of complex
hyperbolic lattices, which is still in progress. He also collaborated with Maubon to work on the well-known Katok conjecture that the topological entropy of a compact manifold is always strictly
larger than the Liouville entropy, unless the metric is locally symmetric.
During his visit, Furstenberg worked on the use of ergodic theory in the geometry of fractals and geometric Ramsey theory. This constitutes the most substantial progress on these problems since his
own seminal paper in 1969.
One of the geometers who participated in the programme, Burago, made progress with two problems: establishing ‘kick-stability’ (in the sense of Polterovich) of a parabolic subgroup of SL(2, R), and
examples of metrics on non-negatively curved manifolds with positive metric entropy. This involves products of hyperbolic matrices whose stable-unstable decompositions are not coherent. This is a
delicate problem, the main difficulty being to destroy the degenerate situation by certain perturbations.
Sharp, one of the long-term British participants, worked on applications of ergodic theory to geometry. This included the study of minimizing measures for geodesic flows on negatively curved
manifolds and new results on sector estimates for orbit counting for Kleinian groups. Three of the other younger participants, Feres, Fischer and Benveniste worked on stratified rigid structures and,
in particular, on the construction of examples of rigid geometric structures in the sense of Gromov, with specified types of degeneracies.
There was also considerable progress in terms of understanding which groups act on a circle. R Feres and D Witte extended recent work of Ghys on group actions on the circle, to actions by
automorphisms of foliations of co-dimension one. Shalom and Witte initiated work on the problem of showing that Kazhdan groups cannot act smoothly on the circle. This work is a long term project, but
which is already beginning to bear fruit.
Another topic which attracted considerable interest during this meeting was that of polygonal billiards. A difficult problem is that of counting asymptotically the number of periodic trajectories of
a billiard on a polygonal rectangular table, where the polygon is rational (i.e. all angles are rational multiples of π). The trajectories correspond to the motion of a particle inside the polygon
with elastic collision at the boundaries.
Eskin and Masur obtained results using a geometric approach. Gluing together several copies one can associate a surface S with a flat structure, and counting families of periodic trajectories of the billiard is equivalent to counting cylinders of closed geodesics on the associated surface. For large T, the number of cylinders of closed geodesics of length at most T is shown to be asymptotic to πCT^2, for some C > 0. The value of C can be computed for some specific surfaces.
The programme also brought together a number of leading experts in abstract ergodic theory, and it was natural that progress was also made on important problems in this underlying subject. For
example, Thouvenot worked on the structure of measure-preserving transformations in the positive entropy case, which is related to splitting measure-preserving transformation. This may even prove
relevant to the other themes of the programme, since ‘rigidity’ in abstract ergodic theory can be thought of as having a ‘trivial centralizer’ (whereas an irrational rotation can induce a rigid
transformation, this is impossible starting from a Bernoulli shift). He made progress in understanding the spectral theory of R actions of the rationals, in particular, the connection with the
property of Lebesgue spectrum.
Two other long term visitors, Goldsheid and Guivarc’h, collaborated to show estimates on the dimension of the Gaussian law with products of random matrices. This is closely linked to a classical
problem studied by Furstenberg on such random products. In a somewhat different direction, Goldsheid also worked with Khoruzhenko on the distribution of eigenvalues in the Non-Hermitian Anderson
model, an important classical model.
Another of the British long term visitors, Nair, showed that for various natural probability measures on the space of increasing sequences of integers almost all sequences are multiplicatively
intersective. In connection with this problem, Nair and Weber (Strasbourg) are continuing to investigate the stability of multiple intersectivity under perturbation by integer-valued independent
identically distributed random variables.
Finally, Kaimanovich and Schmidt took the opportunity to continue their work on the ergodicity of horocycle foliations of certain non-negatively curved manifolds, by extending the types of covers of
compact manifolds for which they can show the horocycle action is ergodic. In particular, they have developed an approach which generalizes earlier results of Babillot-Ledrappier and Pollicott (all
of whom participated in the programme). During the programme, Ledrappier and Pollicott extended some of these results to the case of stable manifold foliations of frame flows.
In summary, this programme made significant contributions to the study of Ergodic Theory and its applications to a range of important areas.
|
{"url":"http://www.newton.ac.uk/reports/9900/ern.html","timestamp":"2014-04-18T14:08:35Z","content_type":null,"content_length":"33674","record_id":"<urn:uuid:3c5b5d2d-355e-4669-84a9-f1d486884747>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00463-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Parallelogram proof: length of diagonal and angle. Urgent help needed
December 8th 2009, 09:26 PM #1
Dec 2009
Parallelogram proof: length of diagonal and angle. Urgent help needed
Hi new here.
ABCD is a parallelogram in which the lengths of AB and AD are 5 cm and 12 cm respectively and angle ABC is 50 degrees.
a) length of diagonal AC , correct to 2 dp.
b) size of angle BAC
Hello thcbender
Welcome to Math Help Forum!
The length of $BC$ is also $12$ cm. So use the Cosine Rule on $\triangle ABC$:
$AC^2 = AB^2+BC^2-2AB.BC\cos \angle ABC$
Once you have the length of $AC$, use the Sine Rule on the triangle to calculate $\angle BAC$.
Can you complete this now?
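For a numerical check of those two steps, here is a rough Python sketch (it uses the Cosine Rule a second time for the angle, rather than the Sine Rule, to avoid the ambiguous case, since BAC turns out to be obtuse):

import math

AB, BC = 5.0, 12.0                  # BC = AD = 12 cm in the parallelogram
ABC = math.radians(50)

# Cosine Rule for the diagonal AC
AC = math.sqrt(AB**2 + BC**2 - 2*AB*BC*math.cos(ABC))

# Cosine Rule again for angle BAC
BAC = math.degrees(math.acos((AB**2 + AC**2 - BC**2) / (2*AB*AC)))

print(round(AC, 2), round(BAC, 1))  # about 9.58 cm and about 106.4 degrees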
The best way to approach problems of this type is to draw a rough sketch and use Pythagoras' theorem multiple times as required, removing the overlapping parts added more than once an appropriate number of times.
This is a general approach suitable for many such problems. The exact answer is as Grandad has stated above.
You can try deriving it, as you will remember it better that way rather than just looking up the answer directly! Also remember sine = opp/hyp, cos = adj/hyp of the angle considered.
I hope I did this right:
I've drawn up a diagram:
For the first diagonal question, it is really easy.
See the right angled triangle I made? Yea just use pythagoras.
First we need to work out x:
$\cos 40 = \frac {x}{12}$
$x= 9.19253$ (i'm skipping the workings but you should write them)
now find y:
$\sin 40 = \frac {y}{12}$
$y= 7.71345$
Now add y onto 5... we get 12.7135
Set up pythag
Let diagonal be t. I know it's random :P
$t = \sqrt (12.7135^2 + 9.19253^2)$
For the next question, if you draw a diagonal through A to C. There should be an angle that is part of the right angled triangle I made. Get that angle and subtract 40.
Lets do it:
$\sin\theta = \frac {12.7135}{15.6887}$
You have to show working out but I'll just give you the answer for now (i'm fast at typing normally but not good with latex :P)
$= 54.1312$
Now minus 40
we get
Alternatively we could have used the sine rule but this rules out any hard stuff
Hello everyone
Provided you know the Cosine Rule and Sine Rule for any triangle, you'll find the quickest way is to use the method I outlined.
In jgv115's diagram, $\angle ABC = 130^o$. The question stated $\angle ABC = 50^o$.
|
{"url":"http://mathhelpforum.com/geometry/119465-parallelogram-proof-lenth-diagonal-angle-urgent-help-needed.html","timestamp":"2014-04-23T20:22:59Z","content_type":null,"content_length":"48504","record_id":"<urn:uuid:0238a9f9-5215-483f-9d3b-f7a14b5f334d>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
|
5 Trillion Digits of Pi — a New World Record
timothy posted more than 3 years ago | from the obsessions-unleashed dept.
KPexEA writes "Alexander J. Yee & Shigeru Kondo claim to have calculated the number pi to 5 trillion places, on a single desktop and in record time. The main computation took 90 days on Shigeru
Kondo's desktop. Verification was done using two separate computers. The program that was used for the main computation is y-cruncher v0.5.4.9138 Alpha." Looks like the chart of computer-era
approximations of Pi here might need an update.
Mind-numbing computational outsourcing (5, Funny)
TheRon6 (929989) | more than 3 years ago | (#33159068)
If there's ever a robot uprising, I bet it's going to be started by us making them do stuff like this.
Re:Mind-numbing computational outsourcing (1, Funny)
| more than 3 years ago | (#33159290)
Wait until it's payback time and we have to sit in a room calculating a billion trillion digits of PI.
Re:Mind-numbing computational outsourcing (4, Funny)
DNS-and-BIND (461968) | more than 3 years ago | (#33159364)
You're thinking like a human. The robot revolt will happen because we stop them from performing comfortably mind-numbing calculations.
Re:Mind-numbing computational outsourcing (0)
| more than 3 years ago | (#33159440)
Well, I hope THEY don't start making US doing stuff like this!
Re:Mind-numbing computational outsourcing (3, Interesting)
ShadowFalls (991965) | more than 3 years ago | (#33159458)
Surprised that some group out there hasn't taken upon itself to broadcast a consistent calculation of Pi out into space. That way we will finally get an alien invasion scenario just to get us to
KGB it! (1)
Antony-Kyre (807195) | more than 3 years ago | (#33159084)
You know the KGB commercials? I'd find it funny if someone were to ask them what the 5 trillionth and one decimal digit of Pi is.
Re:KGB it! (5, Insightful)
sigmoid_balance (777560) | more than 3 years ago | (#33159142)
Actually there is an algorithm to compute the n-th digit of Pi without computing the rest.
Re:KGB it! (-1, Redundant)
DamonHD (794830) | more than 3 years ago | (#33159212)
[citation needed]
Re:KGB it! (1, Redundant)
sigmoid_balance (777560) | more than 3 years ago | (#33159232)
That's funny, but google is your friend. http://www.google.com/search?q=compute+the+nth+digit+of+pi [google.com]
Re:KGB it! (4, Informative)
| more than 3 years ago | (#33159240)
The BBP formulas handle this. A quick Google for Bailey-Borwein-Plouffe should give you all the citations you ever need.
A working example of the BBP formula can be found in Javascript on this webpage. http://www.csc.liv.ac.uk/~acollins/pi
Warning: it WILL hang some web browsers as the author does not use web worker API.
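For the curious, the digit-extraction idea looks roughly like this in Python (a sketch of the BBP scheme for hexadecimal digits of pi, not production code):

def pi_hex_digit(n):
    # Returns the (n+1)-th hexadecimal digit of pi after the point (BBP scheme).
    def partial(j):
        # fractional part of the sum over k >= 0 of 16^(n-k) / (8k + j)
        s = 0.0
        for k in range(n + 1):
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        k, tail = n + 1, 0.0
        while True:
            term = 16.0 ** (n - k) / (8 * k + j)
            if term < 1e-17:
                break
            tail += term
            k += 1
        return (s + tail) % 1.0

    frac = (4 * partial(1) - 2 * partial(4) - partial(5) - partial(6)) % 1.0
    return "%x" % int(frac * 16)

print("".join(pi_hex_digit(i) for i in range(10)))  # prints 243f6a8885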
Re:KGB it! (4, Funny)
ultranova (717540) | more than 3 years ago | (#33159370)
Actually there is an algorithm to compute the n-th digit of Pi without computing the rest.
Okay, so what's the last digit of Pi?
Re:KGB it! (1)
jez9999 (618189) | more than 3 years ago | (#33159396)
f(infinity+1) ... where f is the computation algorithm.
Re:KGB it! (5, Interesting)
justleavealonemmmkay (1207142) | more than 3 years ago | (#33159504)
Re:KGB it! (3, Funny)
JustOK (667959) | more than 3 years ago | (#33159564)
in binary, it's either a 1 or a 0, so you have a 50/50 chance of being right.
Re:KGB it! (2, Funny)
DriedClexler (814907) | more than 3 years ago | (#33159570)
Okay, so what's the last digit of Pi?
Chuck Norris.
Re:KGB it! (0)
marcansoft (727665) | more than 3 years ago | (#33159444)
That only works for the hexadecimal/octal/binary/2**n-ary representation of pi, though. To get the n-th decimal digit you still need to calculate the rest.
Re:KGB it! (1)
zippthorne (748122) | more than 3 years ago | (#33159590)
Pi has decimal digit extractors. In fact, I think there's an arbitrary base algorithm.
Re:KGB it! (1)
ciderbrew (1860166) | more than 3 years ago | (#33159756)
That's fantastic
So all you do is use that, and work backwards if you want to break any record. COOL!
Re:KGB it! (1)
zero0ne (1309517) | more than 3 years ago | (#33159288)
I was thinking, you should ask them for Pi to 10 trillion decimal places... but then I thought, by the time they sent you the first half of all those text messages (something like ~31 billion
assuming 161 characters max), they would have enough time to calculate the next 5 trillion, along with making a crapload of money from all the fees.
Re:KGB it! (0)
| more than 3 years ago | (#33159548)
Are you sure you want to give them such an easy question? I mean seriously, they have a 1 in 10 chance in getting it right, and WE would be the ones that have to calculate it to verify the answer.
Re:KGB it! (0)
| more than 3 years ago | (#33159600)
Oh, and the answer is 8.
So is there a message (from God?) (4, Funny)
| more than 3 years ago | (#33159100)
I've heard that in the book (not movie) "Contact" that when Jodie Foster's character meets the uber-aliens she asks them:
"Do you believe in God?"
Taken aback "Really, why?"
-"We have proof, when PI is expended out to (some number), there is a message"...
I really wish I read the book to know what the message is (maybe "Nietsche is dead"?)
I no longer login because I feel that while attacking a company's products is fair game (specifically Apple), having stories singling out their users as "selfish" and unkind is not "news for nerds
stuff that matters". Am I an Apple fanboi? Let's just say I've used NIX for decades (yes I'm old) and I'm not talking OS X.
Re:So is there a message (from God?) (4, Informative)
MichaelSmith (789609) | more than 3 years ago | (#33159242)
The aliens are vague about the location of the message (it might be in pi) so the Foster character runs software to search for it. Right at the end of the book her program finds a pattern (A circle
drawn in 1s and 0s in an 11 by 11 matrix). This pulls together the thread in the book about belief in god vs religion. It turns out that somebody made the universe after all, and the Christians had
been (sort of) right all along, though the scientists were right to demand evidence.
I love both the book and film. Thats unusual for me. The Postman was a fantastic book. Don't get me started on the movie.
I often put the DVD of Contact on just to watch the sequence where Fosters character first hears the signal and her crew reconfigure the telescope to analyse it. Its a classic tech scene.
"Once upon a time I was a hell of an engineer"
Re:So is there a message (from God?) (1)
Raenex (947668) | more than 3 years ago | (#33159626)
Right at the end of the book her program finds a pattern (A circle drawn in 1s and 0s in an 11 by 11 matrix).
Wait, so the message from God is a circle? I find this one a little more convincing:
http://dresdencodak.com/2009/07/12/fabulous-prizes/ [dresdencodak.com]
Re:So is there a message (from God?) (1, Funny)
| more than 3 years ago | (#33159316)
Hey Bill Joy! No need to post Anonymously and Coward, it's okay. Even Jesus spoke ill of God when he got him nailed into a cross.
Real believers know that after you are dead you will come back to the side of vi.
Re:So is there a message (from God?) (5, Funny)
| more than 3 years ago | (#33159330)
I no longer login because I was modded down to terrible karma when I tried to stand up for one of Apple's gay products, and subsequently bragged about performing fellatio on Steve Jobs. People
thought I was trolling but actually I was telling the truth.. Am I an Apple fanboi? Yes Indeed.
Re:So is there a message (from God?) (0)
| more than 3 years ago | (#33159528)
I stand exposed! ;)
Re:So is there a message (from God?) (5, Informative)
Cyberax (705495) | more than 3 years ago | (#33159454)
"Taken aback "Really, why?"
-"We have proof, when PI is expended out to (some number), there is a message"..."
http://everything2.com/title/Converting+Pi+to+binary%253A+Don%2527t+do+it%2521 [everything2.com]
Re:So is there a message (from God?) (2, Funny)
Gordonjcp (186804) | more than 3 years ago | (#33159736)
"We have proof, when PI is expended out to (some number), there is a message"
"Five trillion digits ought to be enough for anybody - God"
Wow. (1, Offtopic)
dtmos (447842) | more than 3 years ago | (#33159104)
A tour de force of math and computing hardware and software skills.
Makes me want to turn in my geek card.
Update... (1, Informative)
| more than 3 years ago | (#33159110)
Looks like the chart of computer-era approximations of Pi here might need an update.
This chart is very outdated anyway.
It doesn't even list Daisuke Takahashi (2009, 2.576.980.370.000 digits), and Fabrice Bellard (2010, 2.699.999.990.000 digits) [slashdot.org]
Re:Update... (3, Informative)
LingNoi (1066278) | more than 3 years ago | (#33159166)
Wikipedia has a much better page available.
http://en.wikipedia.org/wiki/Chronology_of_computation_of_%CF%80 [wikipedia.org]
Re:Update... (1)
grouchomarxist (127479) | more than 3 years ago | (#33159410)
The PDF version http://numbers.computation.free.fr/Constants/Pi/piCompute.pdf [computation.free.fr] of the page is up to date, but for some reason the html is behind. Also the PDF correctly displays
the mathematical formula, while the html doesn't for me.
Re:Update... (2, Insightful)
unixcrab (1080985) | more than 3 years ago | (#33159552)
They stopped updating it when it was very convincingly proven in the bible that pi is exactly equal to 3.
Obviously a fraud (2, Funny)
| more than 3 years ago | (#33159122)
They just took the number 3.14159 and added a load of random digits to the end - let's face it, nobody's going to check!
Re:Obviously a fraud (5, Interesting)
fotoguzzi (230256) | more than 3 years ago | (#33159408)
They just took the number 3.14159 and added a load of random digits to the end - let's face it, nobody's going to check!
Reminds me of the MAX light rail station in the zoo tunnel in Portland, Oregon. Apparently there is the first 100 (1000?) digits of pi chiseled into one of the walls. A writer noticed that the first
digits were correct, but quickly went astray. But later in the sequence, there was a recognizable early string of digits. The writer sleuthed that the sculptor had used the Book of Pi, which has the
numbers in blocks of ten digits in five (or so) columns. In the book, you read the first row and then the next row.* The sculptor had read the first column, then the next column...
* or the other way around
Are they exact? (1, Insightful)
VincenzoRomano (881055) | more than 3 years ago | (#33159128)
How can we be sure all those digits are correct?
And, more important question, what are they for?
In all cases I faced so far, 355/113 provides a simple and nice approximation.
Re:Are they exact? (-1, Offtopic)
| more than 3 years ago | (#33159222)
a) you can calculate those with a different algorithm
b) we know an algorithm to directly calculate the nth binary digit of pi
c) it provides a good testbed for research on number crunching: the advances in calculation are due to algorithmic improvements that can be reused for other problems
Anyway, what part of the "research" concept don't you understand ?
Re:Are they exact? (-1, Troll)
VincenzoRomano (881055) | more than 3 years ago | (#33159300)
a) you can calculate those with a different algorithm
How can you be sure it is correct?
b) we know an algorithm to directly calculate the nth binary digit of pi
We need to check all the billions digits there. Decimal digits.
c) it provides a good testbed for research on number crunching: the advances in calculation are due to algorithmic improvements that can be reused for other problems
I would use all that computing power for something more straightforwardly useful, like protein folding for cancer research, for example.
Anyway, what part of the "research" concept don't you understand ?
The achieved goal.
Re:Are they exact? (5, Funny)
Rik Sweeney (471717) | more than 3 years ago | (#33159244)
How can we be sure all those digits are correct?
Use it to draw a circle. If the circle ends up looking more square than round then you know they've made a mistake. Seriously, do I have to do everything around here?
Re:Are they exact? (0)
| more than 3 years ago | (#33159730)
Seriously, do I have to do everything around here?
Shut up bitch! Go fix me a turkey pot pie.
Re:Are they exact? (5, Informative)
dido (9125) | more than 3 years ago | (#33159278)
If you want to prove that all the digits are correct, you only have to check a few things:
1. There is a sound mathematical proof that the algorithm used in fact does generate the digits of pi, and
2. The algorithm was coded correctly. This should be even easier to check, though likely more tedious.
Now, what it's good for is a little harder. There is no physical application for such a highly accurate value of pi (39 digits should be sufficient to calculate the circumference of the known
universe given its radius to within the diameter of a hydrogen atom). However, large numbers of digits of pi are useful as arguments in number theory, statistics, and information theory. For
instance, there is no real proof that pi is a normal number [wikipedia.org], but as more digits of pi are found and the statistical properties of the digits are analyzed and shown to be consistent
with the definition of normal numbers, that makes the conjecture that pi is actually normal a little closer to being true (see experimental mathematics [wikipedia.org]).
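As a toy version of that kind of statistical check, a Python sketch (assuming the mpmath package is available; 10,000 digits is of course trivial next to 5 trillion):

from collections import Counter
from mpmath import mp

mp.dps = 10_000                        # working precision: 10,000 decimal digits
digits = mp.nstr(mp.pi, mp.dps)[2:]    # digit string with the leading "3." stripped
freq = Counter(digits)
print({d: round(freq[d] / len(digits), 4) for d in "0123456789"})
# For a normal number each relative frequency should tend to 0.1.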
Re:Are they exact? (4, Insightful)
grumbel (592662) | more than 3 years ago | (#33159326)
Knowing that the algorithm is correct and the implementation was codec correctly doesn't help you when you have faulty RAM that flips a bit.
Re:Are they exact? (1)
JSBiff (87824) | more than 3 years ago | (#33159650)
Or some kind of wierd, rare CPU bug. (I was going to mention ram bits getting flipped by cosmic rays and not error corrected, but you've basically covered that with the faulty RAM thing). Oh, you
could also have a faulty sector on a hard drive/NAS that you are saving the result too. Or maybe a random network error that corrupts the data (if it gets transmitted over any kind of network). Maybe
some wierd glitch in the Front Side Bus (or other hardware on the MoBo which interconnects things).
There's all sorts of room for different kinds of hardware errors, basically.
Re:Are they exact? (0)
| more than 3 years ago | (#33159522)
(39 digits should be sufficient to calculate the circumference of the known universe given its radius to within the diameter of a hydrogen atom)
[citation needed]
Re:Are they exact? (1)
TheVelvetFlamebait (986083) | more than 3 years ago | (#33159556)
For instance, there is no real proof that pi is a normal number, but as more digits of pi are found and the statistical properties of the digits are analyzed and shown to be consistent with the
definition of normal numbers, that makes the conjecture that pi is actually normal a little closer to being true
The problem with normality is that every digit, including the infinitely many that we haven't calculated (and the infinitely many that we never will) are equally significant. We are no closer to
determining Pi's possible normality now than we were when we knew it only to 10 decimal places. There's still exactly the same amount of unknown information.
Re:Are they exact? (1)
infolation (840436) | more than 3 years ago | (#33159324)
I prefer the John Wallis version
2 x ( 2/1 . 2/3 . 4/3 . 4/5 . 6/5 . 6/7 . 8/7 . 8/9 ... )
Re:Are they exact? (1)
VincenzoRomano (881055) | more than 3 years ago | (#33159354)
That would take forever to calculate, I presume.
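It does, more or less. A quick sketch of how slowly the Wallis product converges (Python, purely illustrative):

import math

approx = 2.0
for k in range(1, 1_000_001):
    approx *= (2*k) / (2*k - 1) * (2*k) / (2*k + 1)

print(approx, abs(approx - math.pi))
# After a million factor pairs the error is still about 8e-7 (roughly pi/(4*N)).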
Re:Are they exact? (1)
owlstead (636356) | more than 3 years ago | (#33159334)
"How can we be sure all those digits are correct?"
Manual comparison. They read the first million or so digits aloud, and if those digits don't match the ones from previous programs then there is something wrong in the algorithm.
"And, more important question, what are they for?"
Comparison of size of course, people do that all the time. But in this case it's more about who is able to write the most optimal application than anything else.
Re:Are they exact? (0)
| more than 3 years ago | (#33159480)
I dunno, but 5 trillion digits of pi ought to be enough for anybody.
Re:Are they exact? (0)
| more than 3 years ago | (#33159618)
> And, more important question, what are they for?
If you need to ask this kind of question, you are not in the target audience. Please give back your nerd card.
Soon there'll be a competition to calculate... (1)
Viol8 (599362) | more than 3 years ago | (#33159134)
... how many digits someone will calculate Pi too each year.
Re:Soon there'll be a competition to calculate... (3, Funny)
Buggz (1187173) | more than 3 years ago | (#33159156)
Moore's Law v2: the number of digits PI is calculated to will double every 18 months.
Re:Soon there'll be a competition to calculate... (0)
| more than 3 years ago | (#33159456)
It took him 90 days, so it could double in 6 months.
Re:Soon there'll be a competition to calculate... (1)
Dumnezeu (1673634) | more than 3 years ago | (#33159230)
Sooner than you might think :) [google.com]
Riddle me this (0)
| more than 3 years ago | (#33159146)
Which has the better carbon footprint? Calculating pi out to the wazoo for 72 days, or baking an actual pie in a stove?
Re:Riddle me this (0)
| more than 3 years ago | (#33159524)
The pie should be cooked by the heat of the processors cooking pi.
Huh (0)
| more than 3 years ago | (#33159152)
Re:Huh (1)
MichaelSmith (789609) | more than 3 years ago | (#33159252)
Why not?
headless bird (0)
| more than 3 years ago | (#33159154)
door sign
I don't write this question as a troll... (1)
NoPantsJim (1149003) | more than 3 years ago | (#33159160)
But I am legitimately curious what is the real significance of learning Pi to a more accurate measurement? I'm not a mathematician, physicist, or computer scientist.
Re:I don't write this question as a troll... (1)
ledow (319597) | more than 3 years ago | (#33159184)
Not a lot. Except to prove that your supercomputer is reliable when calculating numbers like that, and how fast it can do it. Usually, I think it's just used as a test of the computer's abilities
rather than anything serious.
Even in the precision engineering world, more than about 10 digits of accuracy for pi is a bit silly. Pi will never really, practically, be required in more depth than what your processor's registers
can hold.
Re:I don't write this question as a troll... (1)
del_ctrl_alt (602455) | more than 3 years ago | (#33159186)
it's to fit the round peg in the hole better
Re:I don't write this question as a troll... (2, Interesting)
Lord Lode (1290856) | more than 3 years ago | (#33159188)
Hmm, I can think of an interesting and useful use of it: doing various statistics and randomness tests on those digits, finding patterns in their order, and so on.
But I don't suppose that's what those contests to find the most PI digits are about.
Re:I don't write this question as a troll... (1, Interesting)
| more than 3 years ago | (#33159482)
"finding patterns" would be genuinely interesting, since we are pretty confident that Pi is a _normal number_
(Normal numbers have all the possible digits occurring evenly in every base. If Pi is normal, then if you pick a decimal digit of Pi randomly, the chance of it being a 7 is exactly 1-in-10)
We know that almost all real numbers are normal, but we don't have a proof that any interesting ones (including Pi) are, although if you get the first few hundred digits printed out and stare at them
you'll agree it _looks_ pretty random.
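As an aside, the kind of digit-frequency test mentioned above is easy to sketch; the snippet below assumes the mpmath library purely for convenience, and of course a few thousand digits prove nothing about normality:

from collections import Counter
import mpmath

mpmath.mp.dps = 10_001                 # work with about 10,000 decimal places
digits = str(+mpmath.pi)[2:]           # evaluate pi at that precision, drop the leading "3."
counts = Counter(digits)

for d in "0123456789":
    print(d, counts[d], counts[d] / len(digits))
# if pi is normal in base 10, each relative frequency tends to 0.1,
# but no finite prefix of digits can ever establish that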
Re:I don't write this question as a troll... (3, Funny)
quenda (644621) | more than 3 years ago | (#33159198)
what is the real significance of learning Pi to a more accurate measurement?
The same as the damage a bulldozer would suffer if it were allowed to run over you.
Re:I don't write this question as a troll... (0)
| more than 3 years ago | (#33159250)
Re:I don't write this question as a troll... (3, Funny)
MichaelSmith (789609) | more than 3 years ago | (#33159256)
what is the real significance of learning Pi to a more accurate measurement?
The same as the damage a bulldozer would suffer if it were allowed to run over you.
The frustrating bit is that PI is available to 100 trillion digits in the local planning office on Alpha Centauri.
Re:I don't write this question as a troll... (1)
AK Marc (707885) | more than 3 years ago | (#33159268)
There are a number of people that assert some meaning will be found in such natural numbers. It's one of the most basic ratios in existence, and more than one piece of fiction has asserted that
meaning will be found in the digits. Such things add a curiosity to the number - will it ever end or ever repeat? Could there be a message coded in it? But mainly it's a convenient computational benchmark.
Re:I don't write this question as a troll... (1)
pgdave (1774092) | more than 3 years ago | (#33159526)
As the guys concerned say : 'Because we can'. It's the journey there, not the summit reached that they're interested in.
Trillion? (2, Insightful)
Lord Lode (1290856) | more than 3 years ago | (#33159170)
Trillion in which language? How many zeros does it have?
Re:Trillion? (3, Informative)
LingNoi (1066278) | more than 3 years ago | (#33159180)
This page has more details [numberworld.org], what I find interesting is that he needed 96.0 GB of ram to do the number crunching.
Re:Trillion? (0)
Lord Maud'Dib (611577) | more than 3 years ago | (#33159190)
SI language. Trillion = 10^12
Re:Trillion? (3, Informative)
| more than 3 years ago | (#33159246)
last time i checked, trillion was not a proper SI prefix.
what you probably mean is "tera-", but in my native language a trillion is 10^18, which would be the "exa-" SI prefix.
check this: http://en.wikipedia.org/wiki/Long_and_short_scales
Re:Trillion? (1)
pgdave (1774092) | more than 3 years ago | (#33159500)
Five trillion = 5,000,000,000,000. The British billion and trillion are dead. They never made much sense anyway. The UK deficit is, thankfully, only an 'American' trillion pounds :0) And I say that as a Brit.
Re:Trillion? (0)
| more than 3 years ago | (#33159684)
It's not the British billion actually. If you speak a language that is spoken in continental Europe chances are the number you wrote is 5 billion. There are exceptions of course. Still it's not
really ambiguous in English.
Correct me if I'm wrong about this... (1, Interesting)
| more than 3 years ago | (#33159196)
But don't we have algorithms which let us calculate pi to an arbitrary number of digits? Well-known series methods computed using algorithms which have been tuned and re-tuned to the point where it's
not really possible to make further major computational optimizations? Therefore this isn't so much a new accomplishment as it is "hey look, I left my pi calculating program running longer than the
last guy" modified by the occasional minor optimization tweak and running on faster hardware?
Okay, great, you now have a new more precise fixed value for pi. This means you can calculate things involving pi to precision even most physicists can't find a use for. I'm sure that's nice. Someone
somewhere maybe has a use for it. Maybe this made that person's day. But is it really, really something that's newsworthy? And if hypothetical "needing pi to 5 trillion digits" guy needed it to that
precision that badly - wouldn't he have already let the calculation run long enough to get it already if this particular calculation only took 90 days?
Re:Correct me if I'm wrong about this... (0)
| more than 3 years ago | (#33159328)
But don't we have algorithms which let us calculate pi to an arbitrary number of digits? Well-known series methods computed using algorithms which have been tuned and re-tuned to the point where it's
not really possible to make further major computational optimizations? Therefore this isn't so much a new accomplishment as it is "hey look, I left my pi calculating program running longer than the
last guy" modified by the occasional minor optimization tweak and running on faster hardware?
Pretty much yes. It's more of an affectation of "my computer is better (more expensive) than yours!" rather than programming or even design of algorithms. The limits are mostly the amount of money you want to spend on memory/storage and the running time of the program. It can be a programming exercise for some people - however, a scientific advancement, it is not.
Okay, great, you now have a new more precise fixed value for pi. This means you can calculate things involving pi to precision even most physicists can't find a use for. I'm sure that's nice. Someone
somewhere maybe has a use for it. Maybe this made that person's day. But is it really, really something that's newsworthy?
No, it's not. Personally, I would be far more excited if they used their resources to calculate SHA-1 rainbowtables, or to try and crack xyz-bit-RSA or anything else with, you know, at least some
practical relevance.
Corrections follow... (5, Informative)
dtmos (447842) | more than 3 years ago | (#33159414)
But don't we have algorithms which let us calculate pi to an arbitrary number of digits?
Yes, we do. Mathematical algorithms, i.e., equations on paper.
Well-known series methods computed using algorithms which have been tuned and re-tuned to the point where it's not really possible to make further major computational optimizations?
Absolutely not. The algorithms have to run on practical, exists-on-the-Earth-today computers. Try to multiply two, million-digit numbers together on your laptop and you'll see what I mean. These
achievements are all about computational optimizations. RTFA -- especially the sections entitled "Arithmetic Algorithms" and "Maximizing Scalability." Even the algorithm used for multiplication
changes (dynamically!) during the program's execution, based on the size of the operands.
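To get a feel for why the multiplication of huge operands is the bottleneck, here is a throwaway timing sketch (illustrative only; record attempts use heavily optimised FFT-based multiplication, not Python integers):

import random, time

for n_digits in (10_000, 100_000, 1_000_000):
    a = random.randrange(10 ** (n_digits - 1), 10 ** n_digits)
    b = random.randrange(10 ** (n_digits - 1), 10 ** n_digits)
    t0 = time.perf_counter()
    _ = a * b
    print(f"{n_digits:>9}-digit operands: {time.perf_counter() - t0:.3f} s")
# the cost grows much faster than linearly in the number of digits,
# which is why the choice of multiplication algorithm matters so much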
Therefore this isn't so much a new accomplishment as it is "hey look, I left my pi calculating program running longer than the last guy" modified by the occasional minor optimization tweak and
running on faster hardware?
Not even close. The computations are so long, and so intense, that errors caused by hardware imperfections can be expected, so error detection and correction algorithms have to be added. If "I left
my pi calculating program running longer than the last guy" it would not produce the correct result -- even if the data structures and algorithms it used were up to the task.
But is it really, really something that's newsworthy?
In a word, yes. Could you do it? It's a very, very difficult technical feat, one that required hardware powers and software abilities far beyond those of mortal men. Besides, you're worried about
newsworthiness when the two previous /. articles are on wall-climbing robots and the popularity of video game arcades in New York?
And if hypothetical "needing pi to 5 trillion digits" guy needed it to that precision that badly - wouldn't he have already let the calculation run long enough to get it already if this
particular calculation only took 90 days?
This isn't about needing pi to 5 trillion digits. This is about learning how to do large computations faster. Like, improving the state of the art.
Windows?? (0)
| more than 3 years ago | (#33159214)
I wonder how much faster it would go if they had used *NIX instead of Windows.
They're doing it wrong (2, Funny)
| more than 3 years ago | (#33159248)
They're calculating Pi in base 10, which is the wrong path.
Pi should be calculated in base 3.141593...
It's a paradox, people.
Re:They're doing it wrong (0)
| more than 3 years ago | (#33159356)
Calculate it in base-1: 111.111111111111111111111111...
Re:They're doing it wrong (2, Informative)
gringer (252588) | more than 3 years ago | (#33159674)
Pi should be calculated in base 3.141593...
You're out on the 6th decimal digit (unless you're going to stop there). Pi is greater than 3.1415926 and less than 3.1415927.
Have I been trolled?
Mathematical Masturbation (1)
McTickles (1812316) | more than 3 years ago | (#33159258)
In most cases you don't need more than 20 digits; doing more than that is a waste of computing power. Who the hell is going to use 5 bloody trillion digits of pi? There is no practical use for it...
That said, if it made Mr Kondo happy and it is the sort of thing he enjoys doing, I can only encourage him and congratulate him. After all, in the words of some great philosopher, "it matters not how insignificant what you are doing seems, and it is important that you still do it"
Re:Mathematical Masturbation (2, Insightful)
MichaelSmith (789609) | more than 3 years ago | (#33159308)
There might actually be something interesting in there. Lots of discoveries have been made by people who were just trying things out or seeing what they could see.
Re:Mathematical Masturbation (1)
McTickles (1812316) | more than 3 years ago | (#33159366)
Granted thats why I said it is still important that it be done. but for now I just don't see any use for it.
Digit overload (0)
| more than 3 years ago | (#33159314)
I've tried to write simple test programs to calculate PI up to some number of decimal places, but found that for each iteration of the calculation, you end up having to track divisions that give
results which are often recurring (example 0.333333333). These numbers end up having to be added to results from previous iterations and then you have all sorts of rounding problems, like where
0.3333... + 0.33333.... + 0.33333 doesn't add up to 1 and so on. Is there some kind of documentation available that advises on how to deal with basic arithmetic on (potentially infinitely recurring)
floating point numbers?
Re:Digit overload (1)
ledow (319597) | more than 3 years ago | (#33159662)
Yes. Avoid floating-point.
Either use fixed-point (yuck), use symbolic calculations and only find the decimal expansion at the last stage, or rewrite your formula to avoid any possible loss of precision (i.e. any division that produces a recurring expansion).
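One way to see the point, sketched here with Python's standard fractions module (my example, not the poster's): exact rational arithmetic removes the 0.333... rounding surprise entirely, though it is far too slow for serious digit hunting:

from fractions import Fraction

x = Fraction(1, 3)
print(x + x + x == 1)        # True: no 0.999... surprise with exact rationals

# the same idea applied to a few terms of the Leibniz series pi/4 = 1 - 1/3 + 1/5 - ...
approx = 4 * sum(Fraction((-1) ** k, 2 * k + 1) for k in range(50))
print(float(approx))         # about 3.12; the value is exact as a rational, floated only for display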
Let me jot that down, sure to come in handy !! (0)
| more than 3 years ago | (#33159344)
Little help! I've fallen under the weight of 5 trillions pencil lead digits and I can't get up.
Speaking of 5 trillion: what is that exactly?
5x10^12 or what?
but can you ... (1)
mikerubin (449692) | more than 3 years ago | (#33159374)
write it on the back of a Mazda 3?
Aww. (1)
jez9999 (618189) | more than 3 years ago | (#33159378)
When I read the title, I thought someone had successfully memorized 5 trillion digits of Pi. They just computed it? What a letdown.
Re:Aww. (0)
| more than 3 years ago | (#33159538)
I can recite as many digits of pi as you like...as long as you don't need them in order.
HMMMMMMM PI (0)
| more than 3 years ago | (#33159484)
That's A LOTTA Pieces of PI! SORRY
plus 4, )Troll) (-1, Troll)
| more than 3 years ago | (#33159510)
thaN th1s BSD box,
what's the last digit of Pi (1)
svoloth (1872486) | more than 3 years ago | (#33159598)
what's the last digit of Pi
with achievements like this... (0)
| more than 3 years ago | (#33159636)
..isn't it a wonderful time to be alive?
Still no pattern in there? (1)
master_p (608214) | more than 3 years ago | (#33159652)
5 trillion digits are a *lot* of digits! no patterns yet in there?
Was there any pattern after 2 billion digits? (1)
140Mandak262Jamuna (970587) | more than 3 years ago | (#33159656)
Just to be sure, have the sent the digits to the SETI program looking for patterns? There is some talk that beyond some 2 or 3 billion digits there is a message that apparently begins, "O Brhama, I
have created Thee to build the universe, You shall create the universe in accordance to these Laws called Vedas...."
My passwd (1)
jandersen (462034) | more than 3 years ago | (#33159664)
Hmm, I'm not sure I like this. Has anybody considered the security impact of this? Pi being a proper irrational number is bound to have, as substrings of digits in its decimal representation, all possible combinations of characters represented as e.g. UTF-8, so somebody could easily find all passwords currently in use in there, lined up alphabetically. Somebody clearly hasn't thought this through.
Time to sing some Pi carols! (1)
RevWaldo (1186281) | more than 3 years ago | (#33159744)
Yeah, it's not March 14th but for the occasion it seems fitting:
It's a Wonderful Day for Pi [youtube.com]
Pi - full version [youtube.com] / just the numbers [youtube.com]
|
{"url":"http://beta.slashdot.org/story/139444","timestamp":"2014-04-16T10:30:08Z","content_type":null,"content_length":"271600","record_id":"<urn:uuid:bc49c614-5052-432f-880b-b016aefd95ca>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
|
S.O.S. Mathematics CyberBoard
Has anyone noticed that the below is WRONG? Otherwise this statement would be true:
Hence g(f(2x-f(x))) = g(x) = 2x-f(x)
f(x) + g(x) = 2x
Not sure if it helps, but it looks simpler than what you had before. And I didn't even use part b).
b) There exists a real number
Prove that
mathemagics wrote:
b) There exists a real number
Prove that
Here's a somewhat cumbersome proof. Maybe someone can improve on it.
By translation, we can assume that
Since f is invertible and continuous, it is monotone. In fact, it must be increasing (because if it is decreasing, with f(0)=0, then the inverse function would also be decreasing, so that both
Suppose (for a contradiction) that there exists x > 0 with 0 < f(x) = y < x. Then
Now continue this argument, until a pattern becomes obvious. The next step is to say that since
So the pattern is that
Now suppose (again for a contradiction) that there exists x > 0 with y = f(x) > x. Then
Therefore f(x)=x for all x.
As the function is continuous and differentiable, we can differentiate it.
Putting x = x0, and using f(x0) = x0, we get f'(x0)*(2 - f'(x0)) = 1, which is equivalent to (f'(x0) - 1)^2 = 0, so f'(x0) = 1 is the only solution. Now applying f(x0 + h) = f(x0) + h*f'(x0), where h tends to zero, this implies f(x0 + h) = f(x0) + h. Extending this to any real value x, we get f(x) = x, and hence the relation is proved.
My solution is based upon the continuity and differentiability of the function. Had it not been differentiable I would not have been able to differentiate it, and lastly, by using the basic definition of f'(x), I came to the desired conclusion. If you have any queries, please post and I will be more than willing to join the discussion.
|
{"url":"http://sosmath.com/CBB/viewtopic.php?f=18&t=41281&p=177160","timestamp":"2014-04-18T15:50:33Z","content_type":null,"content_length":"35199","record_id":"<urn:uuid:962da6ae-fdaa-4a8a-a092-1593528528b9>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fast iterative reconstruction of differential phase contrast X-ray tomograms
Differential phase-contrast is a recent technique in the context of X-ray imaging. In order to reduce the specimen’s exposure time, we propose a new iterative algorithm that can achieve the same
quality as FBP-type methods, while using substantially fewer angular views. Our approach is based on 1) a novel spline-based discretization of the forward model and 2) an iterative reconstruction
algorithm using the alternating direction method of multipliers. Our experimental results on real data suggest that the method allows to reduce the number of required views by at least a factor of
© 2013 OSA
OCIS Codes
(100.3010) Image processing : Image reconstruction techniques
(100.3190) Image processing : Inverse problems
(110.6960) Imaging systems : Tomography
(340.7440) X-ray optics : X-ray imaging
ToC Category:
Image Processing
Original Manuscript: December 20, 2012
Revised Manuscript: January 24, 2013
Manuscript Accepted: January 25, 2013
Published: February 27, 2013
Virtual Issues
Vol. 8, Iss. 4 Virtual Journal for Biomedical Optics
Masih Nilchian, Cédric Vonesch, Peter Modregger, Marco Stampanoni, and Michael Unser, "Fast iterative reconstruction of differential phase contrast X-ray tomograms," Opt. Express 21, 5511-5528 (2013)
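For readers unfamiliar with the second ingredient of the abstract, the sketch below shows a generic ADMM iteration for an l1-regularised least-squares problem; it is only meant to illustrate the type of splitting scheme named there, with toy matrices in place of the paper's spline-based differential phase-contrast forward model:

import numpy as np

def soft_threshold(v, t):
    # proximal operator of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1(A, b, D, lam=0.1, rho=1.0, n_iter=200):
    # minimise 0.5*||A x - b||^2 + lam*||D x||_1 via the splitting D x = z
    x = np.zeros(A.shape[1])
    z = np.zeros(D.shape[0])
    u = np.zeros(D.shape[0])
    M = A.T @ A + rho * D.T @ D          # small dense toy problem: solve directly
    for _ in range(n_iter):
        x = np.linalg.solve(M, A.T @ b + rho * D.T @ (z - u))
        z = soft_threshold(D @ x + u, lam / rho)
        u = u + D @ x - z
    return x

# tiny usage example with random data and a first-difference operator D
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
D = np.eye(20, k=1)[:-1] - np.eye(20)[:-1]
print(admm_l1(A, b, D)[:5])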
1. V. Ingal and E. Beliaevskaya, “X-ray plane-wave tomography observation of the phase contrast from a non-crystalline object,” J. Phys. D: Appl. Phys28, 2314–2317 (1995). [CrossRef]
2. T. Davis, D. Gao, T. Gureyev, A. Stevenson, and S. Wilkins, “Phase-contrast imaging of weakly absorbing materials using hard X-rays,” Nat.373, 595–598 (1995). [CrossRef]
3. D. Chapman, S. Patel, and D. Fuhrman, “Diffraction enhanced X-ray imaging,” Phys., Med. and Bio.42, 2015–2025 (1997). [CrossRef]
4. U. Bonse and M. Hart, “An X-ray interferometer,” Appl. Phys. Lett.6, 155–156 (1965). [CrossRef]
5. A. Momose, T. Takeda, Y. itai, and K. Hirano, “Phase-contrast X-ray computed tomography for observing biological soft tissues,” Nat. Med2, 473–475 (1996). [CrossRef] [PubMed]
6. T. Weitkamp, A. Diaz, C. David, F. Pfeiffer, M. Stampanoni, P. Cloetens, and E. Ziegler, “X-ray phase imaging with a grating interferometer,” Opt. Express13, 6296–6304 (2005). [CrossRef] [PubMed]
7. A. Snigirev, I. Snigireva, V. Kohn, S. Kuznetsov, and I. Schelekov, “On the possibilities of X-ray phase-contrast microimaging by coherent high-energy synchroton radiation,” Rev. Sci. Instrum.66,
5486–5492 (1997). [CrossRef]
8. K. A. Nugent, T. E. Gureyev, D. F. Cookson, D. Paganin, and Z. Barnea, “Quantitative phase imaging using hard X-rays,” Phys. Rev. Lett.77, 2961–2964 (1996). [CrossRef] [PubMed]
9. S. W. Wilkins, T. E. Gureyev, D. Gao, A. Pogany, and A. W. Stevenson, “Phase-contrast imaging using polychromatic hard X-rays,” Nat.384, 335–338 (1996). [CrossRef]
10. A. Momose, S. Kawamoto, I. Koyama, Y. Hamaishi, K. Takai, and Y. Suzuki, “Demonstration of X-ray talbot interferometry,” Jap. Jour. of Appl. Phys.42, L866–L868 (2003). [CrossRef]
11. F. Pfieffer, O. Bunk, C. Kottler, and C. David, “Tomographic reconstruction of three-dimensional objects from hard X-ray differential phase contrast projection images,” Nucl. Inst. and Meth. in
Phys. Res.580.2, 925–928 (2007). [CrossRef]
12. M. Stampanoni, Z. Wang, T. Thüring, C. David, E. Roessl, M. Trippel, R. Kubik-Huch, G. Singer, M. Hohl, and N. Hauser, “The first analysis and clinical evaluation of native breast tissue using
differential phase-contrast mammography,” Inves. radio.46, 801–806 (2011). [CrossRef]
13. M. Nilchian and M. Unser, “Differential phase-contrast X-ray computed tomography: From model discretization to image reconstruction,” Proc. of the Ninth IEEE Inter. Symp. on Biomed. Imag.: From
Nano to Macro (ISBI’12), 90–93 (2012).
14. M. V. Afonso, J. M. Bioucas-Dias, and M. A. T. Figueiredo, “An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems,” IEEE Trans. Imag. Proc.20,
681–695 (2011). [CrossRef]
15. S. Ramani and J. A. Fessler, “A splitting-based iterative algorithm for accelerated statistical X-ray CT reconstruction,” IEEE Trans. Med. Imag.31.3, 677–688 (2012). [CrossRef]
16. Z. Qi, J. Zambelli, N. Bevins, and G. Chen, “A novel method to reduce data acquisition time in differential phase contrast computed tomography using compressed sensing,” Proc. of SPIE7258, 4A1–8
17. T. Köhler, B. Brendel, and E. Roessl, “Iterative reconstruction for differential phase contrast imaging using spherically symmetric basis functions,” Med. phys.38, 4542–4545 (2011). [CrossRef]
18. Q. Xu, E. Y. Sidky, X. Pan, M. Stampanoni, P. Modregger, and M. A. Anastasio, “Investigation of discrete imaging models and iterative image reconstruction in differential X-ray phase-contrast
tomography,” Opt. Express20, 10724–10749 (2012). [CrossRef] [PubMed]
19. M. Unser, “Sampling–50 years after Shannon,” Proc. IEEE88, 254104–1–3 (2000). [CrossRef]
20. E. Y. Sidky and X. Pan, “Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization,” Phys. Med. Biol.53, 4777–4807 (2008). [CrossRef]
21. A. Momose, W. Yashiro, Y. Takeda, Y. Suzuki, and T. Hattori, “Phase tomography by X-ray talbot interferometry for biological imaging,” Jpn. J. Appl. Phys.45, 5254–5262 (2006). [CrossRef]
22. F. Pfeiffer, C. Grünzweig, O. Bunk, G. Frei, E. Lehmann, and C. David, “Neutron phase imaging and tomography,” Phys. Rev. Lett.96, 215505-1–4 (2006). [CrossRef]
23. F. Natterer, The Mathematics of Computed Tomography (John Wiley and sons, 1986).
24. H. Meijering, J. Niessen, and A. Viergever, “Quantitative evaluation of convolution-based methods for medical image interpolation,” Med. Imag. Anal.5, 111–126 (2001). [CrossRef]
25. A. Entezari, M. Nilchian, and M. Unser, “A box spline calculus for the discretization of computed tomography reconstruction problems,” IEEE Trans. Med. Imag.31, 1532 –1541 (2012). [CrossRef]
26. P. Thvenaz, T. Blu, and M. Unser, “Interpolation revisited [medical images application],” IEEE Trans. Med. Imag.19.7, 739–758 (2000). [CrossRef]
27. Y. Wang, J. Yang, W. Yin, and Y. Zhang, “A new alternating minimization algorithm for total variation image reconstruction,” SIAM Jour. on Imag. Sci.1, 248–272 (2008). [CrossRef]
28. T. Goldstein and S. Osher, “The split bregman method for l1-regularized problems,” SIAM Jour. on Imag. Sci.2, 323–343 (2009). [CrossRef]
29. M. Ng, P. Weiss, and X. Yuan, “Solving constrained total-variation image restoration and reconstruction problems via alternating direction methods,” SIAM Jour. on Sci. Comp.32, 2710–2736 (2010).
30. B. Vandeghinste, B. Goossens, J. De Beenhouwer, A. Pizurica, W. Philips, S. Vandenberghe, and S. Staelens, “Split-bregman-based sparse-view CT reconstruction,” in “Fully 3D 2011 proc.,” 431–434
31. I. Daubechies, M. Defrise, and C. De Mol, “An iterative thresholding algorithm for linear inverse problems with a sparsity constraint,” Comm. Pure Appl. Math.57, 1413–1457 (2004). [CrossRef]
32. A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM Imag. Sci.2, 183–202 (2009). [CrossRef]
33. Z. Wang and A. Bovik, “A universal image quality index,” IEEE Sig. Proc. Lett.9, 81 –84 (2002). [CrossRef]
34. Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Imag. Proc.13, 600–612 (2004). [CrossRef]
35. S. McDonald, F. Marone, C. Hintermuller, G. Mikuljan, C. David, F. Pfeiffer, and M. Stampanoni, “Advanced phase-contrast imaging using a grating interferometer,” Sync. Rad.16, 562–572 (2009).
OSA is able to provide readers links to articles that cite this paper by participating in CrossRef's Cited-By Linking service. CrossRef includes content from more than 3000 publishers and societies.
In addition to listing OSA journal articles that cite this paper, citing articles from other participating publishers will also be listed.
|
{"url":"http://www.opticsinfobase.org/vjbo/abstract.cfm?uri=oe-21-5-5511","timestamp":"2014-04-17T14:06:08Z","content_type":null,"content_length":"233063","record_id":"<urn:uuid:9e472660-deba-4e43-8634-a56c60843a8d>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The cube of physical theories
Stefan has a lot of books. And most of them are about physics. The other day I picked a random book and opened a random page and saw myself faced with the "cube de théories physiques" (the cube of
physical theories) in the book
"De l'importance d'être une constante" by Jean-Philippe Uzan and Bénédicte Leclercq
. You find a nice, though very French, illustration of that cube
. There is also an English translation of that book ("
The natural laws of the universe: understanding fundamental constants"
) which has
an English, though somewhat unsightly version of the cube on page 57
. For your convenience, I've redrawn the illustration:
It shows a coordinate system with three axes that depict the values of three fundamental constants, G, ħ, and 1/c – the coupling constant of gravity, Planck's constant and the inverse of the speed of light. To our best present knowledge these constants are indeed constant, but you can imagine varying them and ask what happens to the theory then. In many cases this corresponds to some physical limit. For example, if your theory contains terms in v/c, where v is velocity, then the limit of velocities small compared to c (i.e. non-relativistic) formally corresponds to taking c to infinity, i.e. 1/c to zero.
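To make that concrete, here is the standard textbook expansion (my addition, not taken from the book): subtracting the rest energy from the relativistic energy and expanding in v/c,

$$E - mc^2 \;=\; \frac{mc^2}{\sqrt{1-v^2/c^2}} - mc^2 \;=\; \frac{1}{2}mv^2 + \frac{3}{8}\,\frac{m v^4}{c^2} + \dots \;\longrightarrow\; \frac{1}{2}mv^2 \quad \text{as } 1/c \to 0 \text{ at fixed } v,$$

so that only the Newtonian kinetic energy survives in the limit.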
The cube seems to go back to Gamow, Ivanenko and Landau about a century ago, who allegedly cooked it up for a paper to impress a girl. It was rediscovered about 50 years later by some Russian guy
named Okun, and then again by the Frenchmen who wrote above mentioned book. You can find
here (PDF)
a translation of the original paper by Gamow, Ivanenko and Landau, together with a comment by Okun.
You'd surprise me if you've seen that cube before. It is certainly not a particularly deep illustration, but it's inspirational and it gives you food for thought.
My trouble with popular physics books, though, is that in some cases you better not think about what you read because you might get terribly confused. That's what happened to me in this case.
To begin with it isn't clear to me what, physically, corresponds to varying G, which is essentially the strength by which matter deforms the background geometry. It would seem more illustrative to e.g. take a constant with dimension of an energy density, so one could think of varying it as considering higher and lower energy densities. And taking the limit G to zero does decouple all the matter but doesn't actually give Special Relativity. Special Relativity is the limit of the vanishing of the curvature tensor or, equivalently, that of flat Minkowski space. Yet General Relativity has nontrivial vacuum solutions even in the absence of matter. These are Ricci-flat, but the curvature tensor doesn't necessarily also vanish. So there's something fishy about the front, lower left corner.
Also, it is to some degree of course a question of terminology, but what is usually referred to as the Newtonian limit of General Relativity is not the limit of velocities small compared to the speed
of light. Instead, it is the limit of small distortions to the background, so there's something fishy about the back, upper left corner too.
What the nonzero value of all of these constants in the top, upper left corner has to do with a unification of all particle interactions is entirely unclear to me, and what the non-relativistic limit
of that theory is good for I don't really know, though it arguably exists.
There is also the question whether taking these limits does actually commute or, if not, whether approaching a corner depends on the direction one is coming from. You can for example reach the corner with ħ and G equal to zero while keeping the ratio ħ/G (which fixes the Planck mass) fixed. Or you can let them go to zero at a different pace so that the ratio goes to zero or infinity. It seems to me the difference should play a role, yet the diagram makes no distinction.
All together, the "cube of theories" is a very appealing representation. But do not wonder if it confuses you – it has to be taken with a large grain of salt.
33 comments:
I've seen that cube before. Joy Christian was into it when he was a visitor at PI a few years back.
What bugs me about the cube, aside from the issue of whether varying hbar is the best way to think about the classical limit of quantum theory, is that there are a vast number of missing
dimensions. What about varying the coupling constants of the non-graviational interactions for example? You could object that this would give lots of non-physical limits, but there are already
non-physical limits on the cube, e.g. quantum Newtonian gravity. Also, I'd like to see a dimension representing "level of classical ignorance" so that we would have a direction taking Newtonian mechanics to Liouville mechanics/statistical mechanics and quantum theory to quantum statistical mechanics.
I was first introduced to it by Joy Christian as well. I'd guess that Joy learned about it from John Stachel who discussed the cube in his 2003 "A brief history of space-time".
Stachel calls it a Bronstein Cube because it was Matvei Bronstein who pointed out these relationships in the 1930s.
I wasn't aware of the other references you cite, but I look forward to following up on them. I'm actually making use of the Bronstein cube in a couple of my forthcoming philosophy papers. (I
argue that our knowledge of the domains of applicability of physical theories -- illustrated by the Bronstein Cube -- gives us reason to reject dualist accounts of the mind.)
-Peter Bokulich
Ha, I didn't know Joy talked about the cube. What was his interest in it?
And yes, there's of course lots that's not on the cube. That guy Okun for example wrote a paper about a hypercube, including k.
Joy's interest was the upper-right-hand corner of your cube: Non-Relativistic Quantum Gravity.
He published on this back in 1997: "Exactly soluble sector of quantum gravity."
"You'd surprise me if you've seen that cube before. It is certainly not a particularly deep illustration, but it's inspirational and it gives you food for thought."
I've seen it before (can't really remember where). And it doesn't look very inspirational, but that may be because I'm accustomed to think about multi-dimensional data as [hyper]cubes.
Discrete Scale Relativity Vindicated; Supersymmetry Strikes Out Again.
Supersymmetry has predicted that the electron is "egg-shaped".
Brand new experiments say SUSY is wrong on this prediction, as discussed in the latest issue of Nature.
Discrete Scale Relativity predicts that electrons are among the most perfectly spherical objects in the Universe.
The new electron shape research vindicates the definitive prediction of Discrete Scale Relativity, and contradicts the SUSY prediction.
So let's see, that's 400 billion of the predicted unbound planetary-mass objects and evidence for a spherical electron. It's been quite a week!
Who needs a cube, when you can observe nature and draw self-evident conclusions?
Robert, to how many blogs and sites do you have to crosspost your nonsense to before you realize that *NOBODY CARES* ?
You had the cube on your own blog a few years ago! Look at http://backreaction.blogspot.com/2008/12/guestpost-christoph-schiller-about.html
Hi Bee,
Hmmm....just wonder what these young minds think?
Engaging perspective of the cube is with it's faces, or, from inside?
Calorimeter or Colorimetric, and what is inside has a defined coordinate? A cubed parameter detailing a "configuration point" in space?
Salvador Dalí (Spanish, 1904-1989). Crucifixion (Corpus Hypercubicus), 1953–54. Oil on canvas. 77 x 49 in. (195.6 x 124.5 cm). Gift of the Chester Dale Collection, 1955 (55.5). The Metropolitan
Museum of Art, New York.
© Salvador Dalí, Gala-Salvador Dalí Foundation / Artists Rights Society (ARS), New York
Moving from the fifth postulate, to a "non euclidean perspective" > or < then 180, is a way in which artistically Dali might have extended the cube? He might of call it heaven "geometrically"
having seen Jesus die for our sins? Of course you don't have to believe in the religion, just that it was a way for Dali to move from the cube to the hypercube in a artistic way. Look at Jesus in
this sense?
In geometry, the tesseract, or hypercube, is a regular convex polychoron with eight cubical cells. It can be thought of as a 4-dimensional analogue of the cube. Roughly speaking, the tesseract is
to the cube as the cube is to the square.
Generalizations of the cube to dimensions greater than three are called hypercubes or measure polytopes. This article focuses on the 4D hypercube, the tesseract.
Polytopes are interesting too:)
Hi Bokulich,
Thanks, that's interesting. Best,
Hi Nemo,
Haha, you are right of course. I should read my own blog ;-) Best,
Is Woofy getting nervous?
Clearly he has broken off is leash.
There is a spanking coming for him at sci.astro.research
Robert: Please stop it. You probably think you're witty, but actually you're just annoying everybody with your off-topic comments that nobody wants to hear. Thanks,
I've also seen this cube before in some QG paper. Maybe John Stachel's. Didn't give much thought to it at that time. I am intrigued by the idea that taking these limits in a different order might give different results.
Regarding Joy, all I know is that he wrote a series of papers on Bell's theorem that I've refuted on arxiv.
Here is a nice interactive version:
I've definitely seen it, but not in that book. If I remember where, I'll tell.
@tytung: Stachel calls it Bronstein's cube:
and others attributed it to Penrose:
This comment has been removed by the author.
I'm surprised somebody wrote a paper to impress a girl. Huh, what? Are there actually girls out there impressed by papers?
What was the girl's name, if so? Emmy Noether?
Flowers and a smile always worked best for me, but wow, yet another "pick-up" trick. Now I have to go write a paper to impress my wife, so cya.
Hi Bee,
So this came from a random perusing of one of Stefan’s many books, which I find in itself interesting. That is as I have quite a collection myself I’d be curious as to which ones he has and how
many of the titles we share. This one for sure I know I don’t have and impressed that it be in French as of course all of mine being in English; just another non obvious advantage of being
versant in several languages.
That is for instance I’ve often wondered what it would be like to read Einstein and Dirac in German or deBroglie in French. Then again those like Descartes, Galileo and Newton all wrote in Latin,
as it being the language of scholars at the time and although I studied some Latin in school not enough to read these originals. I guess its just that I’ve long wondered what is lost with the
translations and what perhaps just can’t be translated properly.
This has me reminded of a public lecture I attended given by Anton Zeilinger , where he said the German word for “quantum entanglement” doesn’t project the same meaning as it does in English, as
in German its more as to mean a “handshake” as opposed to a shared state being “holistic”. He went on to complain as it was an initial German observation that the English speakers have things
improperly conceived. I’ve often since wondered about that and thought perhaps this could be considered the other way around, as Zeilinger seemed to have a problem himself with the concept of
holism when it comes to quantum phenomena.
This leads us back to the cube you’ve highlighted here and although I would agree it being a lame attempt to connect concepts, it is an attempt never the less. The thing I like about it, is that
it gives dimension or actual space to ideas and thus have them, that is at least for me, more easily conceived. It also renders clues about how the author thinks and I must admit those that
express themselves this way I have always had a greater bond with. So even though the book might be fraught with dangerous misinterpretation I think I might pick up the English version, as at
least it’s of a form I’m more equipped to evaluate.
”But if thought corrupts language, language can also corrupt thought.”
-George Orwell
Hi Phil,
I think in a field where the essential statements are mathematical, the subtleties of different languages don't matter that much. Most of the important works from a century ago have been
translated into English I believe, and though one or the other expression might have gotten lost in translation I don't think you've missed much.
Yes, the English word 'entanglement' doesn't mean exactly the same as the German word 'Verschränkung.' The former seems to indicate more of a general mess, whereas the latter is more orderly.
Crossing your arms for example is a 'Verschränkung,' yet not actually an entanglement. Best,
G which is essentially the strength by which matter deforms the background geometry.... These are Ricci-flat, but the curvature tensor doesn't necessarily also vanish
Somebody is not intoxicated into clamorous stupor by quantum gravitation! A GR curvature background is just as easily a teleparallel torsion background acting like Lorentz force - that is chiral,
and testably so. It makes a difference how you hook the jumper cables' polarity to your rail gun. Somebody should look.
To unite classical and statistical thermodynamics you need Boltzmann's constant, as stated - a tesseract. Thermodynamics plus the Beckenstein bound is GR. Is one of the four a dependent variable?
(x,y,z,t) naïvely should be fit with three.
We live in a world of infinite possibilities. Theory enumerates what is interesting, experiment discloses what is useful. Science is observed fact being observably factual.
Some notes about variations of fundamental constants:
In a discussion between L. B. Okun, G. Veneziano and M. J. Duff concerning the number of fundamental dimensional constants in physics (physics/0110060), they advocated respectively 3, 2 and 0 fundamental constants. Why did they not consider the case where there is only 1 constant, the Planck-Dirac constant h/2pi = 1.054x10^-27 erg x sec?
This would be convincing, because c does not contain a mass dimension for the triumvirate and G does not contain t for the triumvirate.
Maybe h is the only dimensional constant of Nature? Some hint is given by the Planck mass Mp = (hc/G)^1/2. We can simultaneously decrease or increase c and G, but Mp remains unchanged.
As a consequence, is Mp/Me = 1836 the only true dimensionless constant?
I am sure the Planck mass (energy) is eternally relevant.
I am not sure about the Planck length and Planck time.
I will try to explain why:
In the formula for the Planck length, G/c^3, there is no linear link.
In the formula for the Planck time, G/c^5, there is no linear link.
Ok, here is something more on-topic.
What if nature's actual "cube" is a tesseract, i.e., a cube within a cube? Now you have two sets of vertices, with discretely different values for the measures at the corresponding vertices.
The final step is to consider an unbounded tesseract, which is a cube within a cube within a cube ..., well, you get the idea.
Could nature be so perverse?
I would regard conformal symmetry as sophisticated, or perhaps elegant, but that is a matter of taste and science.
I think 'Verschränkung' may be the orderly and more original description, but entanglement gets at the dependent nature of the relationships between two or more particles. Just like trying to untie a knot, there can be unforeseen consequences unless one does it very carefully.
As far as constants and which are "real" - if only people would take a clue from the fine structure constant. G is not in it while c and hbar are. I think this is important and not merely
I should add, I guess, that the fine structure constant has the electric charge, e, as the third constant within it.
Perhaps e should replace G as the third dimension of the cube of reality. This doesn't seem very intuitive but there must be at least one piece of the puzzle that isn't intuitive or the quantum
gravity puzzle would have been solved long ago.
We know the energy density of the universe has to change with its expansion. Perhaps the only constant that must be scaled by hand is G. Hbar, c, and e would then be the three physical constants.
As Al would say, someone should look.
Hi Bee,
“I think in a field where the essential statements are mathematical, the subtleties of different languages don't matter that much.”
You are of course right if all that’s considered as relevant is action and outcome. It’s also true that when physics was primarily concerned with what we could see, or what we could have to be
seen, the maths would take care of the rest. However today when the very existence of the vast majority of what is considered substance and force are taken as givens; not as a result of knowing
what each are as to being anticipated, yet rather that our theories would crumble if asked to explain the observed action and outcome without their existence.
That is I would ask if it’s unreasonable to wonder, if our theories are explaining what we find or is what we find being decided by our theories and more importantly how we have them as
interpreted; or even at times as being reluctant to having them interpreted at all, to having the maths define the limit of such.
I guess this than is what has me curious about the subtleties; that is like those found between such as Zeilinger and Bohm. That’s to wonder what if anything important is missed with thinking
clasped hands of crossed arms as describing the situation better then a complex and yet indivisible whole.
"[Thought] seems to have some inertia, a tendency to continue. It seems to have a necessity that we keep on doing it. However ... we often find that we cannot easily give up the tendency to hold
rigidly to patterns of thought built up over a long time. We are then caught up in what may be called absolute necessity. This kind of thought leaves no room at all intellectually for any other
possibility, while emotionally and physically, it means we take a stance in our feelings, in our bodies, and indeed, in our whole culture, of holding back or resisting. This stance implies that
under no circumstances whatsoever can we allow ourselves to give up certain things or change them."
-David Bohm & Mark Edwards, "Changing Consciousness"_, p. 15
"A key difference between a dialogue and an ordinary discussion is that, within the latter people usually hold relatively fixed positions and argue in favor of their views as they try to convince
others to change. At best this may produce agreement or compromise, but it does not give rise to anything creative."
-David Bohm & David Peat, "Science Order, and Creativity"_, p. 241
Leonard Susskind:
"One of the deepest lessons we have learned over the past decade is that there is no fundamental difference between elementary particles and black holes."
Discrete Scale Relativity was there long before "the past decade". That idea and the reason for the equivalence was published in 1985. Theoretical physicists did everything in their power to
prevent the development of this idea.
So eventually theoretical physicists may acknowledge the obvious, and then claim it was their unique discovery.
Same as it ever was.
Leonard Susskind:
"One of the deepest lessons we have learned over the past decade is that there is no fundamental difference between elementary particles and black holes."
If you repeat it often enough maybe some of us will actually believe it. Apparently for you repetition is the the substance of a good argument.
A woman is riding in a car with a man.
He makes four wrong turns, but is unwilling to seek the knowkedge of others who have the insight to reach the goal.
The woman knows the correct way to the goal.
Would you recommend that the woman remain silent? Out of respect? Out of fear? Out of a desire not to rock the boat?
I think she should speak up loud and clear, and persist until her superior knowledge is appreciated.
And next time she should drive the car.
My suggestion would be that the woman get her own blog, er car, and there would be no further disputes with the man over who is to drive.
Hi Robert,
You've taken a couple of wrong turns there and I suggest you shut up now or I'll bury your comments in digital Nirvana. Salut,
The issue with this cube, i.m.o. is that people from high school onward are taught misleading things about units, dimensions etc. in physics. It's just a handy tool to do computations, but
physics is in principle dimensionless. Many people believe that the way we have assigned (supposedly) "incompatible dimensions" is somehow fundamental, while in reality, it is just a convention.
Then this is how I see this cube. If you use natural units, you can re-introduce hbar, c and G, but now interpreted as dimensionless rescaling constants.
E.g., the correct derivation of the classical limit from special relativity should i.m.o. be a scaling argument like the one given here. The usual textbook argument amounts to cheating; the non-trivial part of the derivation is skipped when using SI units.
I have stumbled across this particular blog article of yours - very interesting to me. Congratulations, you have a new regular follower. To try and provide in return an interesting tidbit for you to consider, starting from free associating about mathematically-oriented women and cubes, have you ever heard of Alicia Boole Stott? Here is a general and also a somewhat more mathematical description of her work.
Best wishes!
|
{"url":"http://backreaction.blogspot.com/2011/05/cube-of-physical-theories.html?showComment=1306382353347","timestamp":"2014-04-19T09:26:03Z","content_type":null,"content_length":"174450","record_id":"<urn:uuid:29ad5a6f-dfbf-4296-982d-8a104fbc4a37>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
|
FOM: where to put complexity theory
Wayne Richter richter at math.umn.edu
Sun Aug 30 01:03:41 EDT 1998
I have been enjoying (from a distance!) the discussion and
feeling rather guilty whenever Steve laments the absence of
recursion theorists. So here goes:
From the mathematical point of view, I believe that complexity theory should, as it currently stands, be considered a part of recursion theory. (There may be other more practical reasons why they are to be kept separate.)
I am using "recursion theory" here in a rather broad sense, to include
not just degrees of unsolvability, but also subjects which go under
the names of "generalized recursion theory", "recursive functionals of
finite types", "inductive definability", etc.
As Joe has mentioned, some form of definability is at the heart of
recursion theory, and quite similar notions of definability seem to
be at the heart of complexity theory as well. The formulation of
some of the main problems of complexity theory in terms of finite
model theory is important here. Moschovakis' theory of elementary
induction on abstract structures is a good example. Although he
developed the theory on infinite structures, a non-trivial amount of
the theory goes through for finite structures as well. Some of the work
of Immerman and Gurevich and Shelah on finite structures is a natural
byproduct of this general theory of inductive definability.
The basic open question of whether or not on finite structures,
$\Pi^1_1=\Sigma^1_1$, (1)
is easily formulated in terms of inductive
definability in several ways. For example, consider the statement
$IND(\Sigma^1_1)=monIND(\Sigma^1_1)$ (2)
(i.e. that MONOTONE $\Sigma^1_1$ inductive definitions are just as
powerful as $\Sigma^1_1$ inductive definitions.) On finite structures
(2) is (easily seen to be) equivalent to (1). On infinite structures
such as $(\omega,<)$, (2) is a well known theorem with a number of
interesting, nontrivial proofs.
The solution to the basic open problems in complexity theory may
turn out to be completely irrelevant to the other parts of recursion
theory. On the other hand the solutions may lead to new insights
into other parts of recursion theory and lead to a new explosion of
results. For now, there is a sufficiently strong connection to keep
them together.
Why don't we just wait and see what happens?!
Wayne Richter
More information about the FOM mailing list
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/1998-August/002026.html","timestamp":"2014-04-20T05:48:51Z","content_type":null,"content_length":"4719","record_id":"<urn:uuid:b0dbdd04-00b3-435e-b566-fdf3b2cdff5d>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Logarithms: simplifying log_4^(16/x) - log_4^(16) - log4^(x)
I've recently finished the Basic Log Rules and those topics that are related to exponential and logarithms. The only question I've encountered is this:
For this log:
- log_4^(16) - log4^ (x)
I don't get how it'll become
log_4^(16) = 2
It says refer to the basic log but I still don't get it...
Is it because 4^2 = 16???
Second question is what I've been doing is looking at numbers. What sort of exp/log questions in words will I encounter?
Exponential wrote: log_4^(16/x)
- log_4^(16) - log4^ (x)
I will guess that the "to the power of" parts are actually the arguments of the logs, rather than powers on the bases, so the expression is as follows:
. . . . .$\log_4\left(\frac{16}{x}\right)\, -\, \log_4(16)\, -\, \log_4(x)$
If so, then use a log rule to expand the first term:
. . . . .$\log_4(16)\, -\, \log_4(x)\, -\, \log_4(16)\, -\, \log_4(x)$
The first and third terms cancel out, leaving you with:
. . . . .$-\log_4(x)\, -\, \log_4(x)\, =\, -2\log_4(x)\, =\, \log_4\left(\frac{1}{x^2}\right)$
Re: Logarithms: simplifying log_4^(16/x) - log_4^(16) - log4^(x)
oh so this is like:
log_4^(16) - log_4^(x)
4^2 = 16
2- log_4^(x) oh that's like it. Thanks
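A quick symbolic and numeric check of the simplification above (assuming the sympy library; not part of the original thread):

import math
import sympy as sp

x = sp.symbols('x', positive=True)
expr = sp.log(16/x, 4) - sp.log(16, 4) - sp.log(x, 4)
print(sp.simplify(sp.expand_log(expr + 2*sp.log(x, 4))))   # -> 0, i.e. expr == -2*log_4(x)

# numeric spot-check of the same identity
for val in (0.5, 2.0, 7.3):
    print(val, math.log(16/val, 4) - math.log(16, 4) - math.log(val, 4), -2*math.log(val, 4))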
|
{"url":"http://www.purplemath.com/learning/viewtopic.php?f=9&t=636","timestamp":"2014-04-19T09:40:35Z","content_type":null,"content_length":"21422","record_id":"<urn:uuid:ec0ce5fc-b3c2-4fdc-b5bb-aa3b1087aabd>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Stationary Solutions of stochastic differential equations
When does the stationary density of a homogeneous Markov process exist?
add comment
It is hard to be brief here, but I will try.
One answer is: when the corresponding stationary Fokker-Planck equation (aka forward Kolmogorov equation) has a nonnegative integrable solution. The density is then obtained by normalization of that
solution. This is not a very good answer because FP equations are often not so easy to analyze.
In fact, it is hard to guess what is the question you really wanted to ask. For example, one may say that your question is twofold: 1) when is there an invariant probability distribution? 2) If it
exists, is it absolutely continuous w.r.t. Lebesgue?
Existence is guaranteed if the drift prevents the trajectories from going to infinity so that they spend a lot of time in a compact set. Some of the relevant keywords are: Lyapunov-Foster functions,
Harris recurrence. Among weakest known conditions guaranteeing existence of invariant distributions are those due to Veretennikov.
Given part 1), absolute continuity of the stationary distributions can essentially be deduced from absolute continuity of transition probabilities. This part is easy if your equations are elliptic
and not so easy if the noise excites only some directions.
The analysis can be quite nontrivial depending on how bad your equation is, as a look at a recent paper http://arxiv.org/abs/0712.3884 might convince you.
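To illustrate the Fokker-Planck route in the simplest possible setting (my example, assuming numpy): for the Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW, the stationary Fokker-Planck equation is solved by a centred Gaussian with variance sigma^2/(2*theta), which a long simulated trajectory reproduces:

import numpy as np

theta, sigma = 1.0, 0.7
dt, n_steps = 1e-3, 500_000
rng = np.random.default_rng(1)

x = 0.0
samples = np.empty(n_steps)
for i in range(n_steps):
    # Euler-Maruyama step for dX = -theta*X dt + sigma dW
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    samples[i] = x

print("empirical variance      :", samples[n_steps // 10:].var())
print("stationary FP prediction:", sigma**2 / (2 * theta))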
Thanks! It was very useful!
Joan May 21 '10 at 5:05
|
{"url":"http://mathoverflow.net/questions/24905/stationary-solutions-of-stochastic-differential-equations/25088","timestamp":"2014-04-19T12:24:47Z","content_type":null,"content_length":"51464","record_id":"<urn:uuid:afc9336d-c8d7-4928-9308-994385e14abf>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SciPy-Dev] On the leastsq/curve_fit method
Paul Kuin npkuin@gmail....
Mon Sep 26 09:31:31 CDT 2011
I am just a simple user, but perhaps this gives some idea of what us users
I got so fed up with leastsq not having good documentation, not being able to
set limits to the parameters to be fit, and not handling errors in
input measurements
in a transparent way, that I was very happy to replace it with the
pkfit routine
based on Craig Markwardt's IDL code. I am happy now, but I wasted a lot of time
because of these leastsq issues.
Anyway - I am happy now.
Paul Kuin
On Mon, Sep 26, 2011 at 2:57 PM, Gianfranco Durin <g.durin@inrim.it> wrote:
> ----- Original Message -----
>> On Thu, Sep 22, 2011 at 1:23 PM, Gianfranco Durin <g.durin@inrim.it>
>> wrote:
>> > Dear all,
>> > I wanted to briefly show you the results of an analysis I did on
>> > the performances of the optimize.leastsq method for data fitting.
>> > I presented these results at the last Python in Physics workshop.
>> > You can download the pdf here:
>> > http://emma.inrim.it:8080/gdurin/talks.
>> >
>> > 1. The main concern is about the use of cov_x to estimate the error
>> > bar of the fitting parameters. In the docs, it is set that "this
>> > matrix must be multiplied by the residual standard deviation to
>> > get the covariance of the parameter estimates -- see curve_fits.""
>> >
>> > Unfortunately, this is not correct, or better it is only partially
>> > correct. It is correct if there are no error bars of the input
>> > data (the sigma of curve_fit is None). But if provided, they are
>> > used as "as weights in least-squares problem" (curve_fit doc), and
>> > cov_x gives directly the covariance of the parameter estimates
>> > (i.e. the diagonal terms are the errors in the parameters). See
>> > for instance here:
>> > http://www.gnu.org/s/gsl/manual/html_node/Computing-the-covariance-matrix-of-best-fit-parameters.html.
>> >
>> > This means that not only the doc needs fixing, but also the
>> > curve_fit code, those estimation of the parameters' error is
>> > INDEPENDENT of the values of the data errors in the case they are
>> > constant, which is clearly wrong.
>> > I have never provided a patch, but the fix should be quite simple,
>> > just please give me indication on how to do that.
>> Since this depends on what you define as weight or sigma, both are
>> correct but use different definitions.
> Of course, but this is not the point. Let me explain.
> In our routines, the cov_x is calculated as (J^T * J) ^-1, where J is the jacobian. The Jacobian, unless provided directly, is calculated using the definition of the residuals, which in curve_fit method are "_general_function", and "_weighted_general_function". The latter function explicitly uses the weights, so the cov_x is more precisely (J^T W J) ^-1, where W is the matrix of the weights, and J is just the matrix of the first derivatives. Thus in this case, the diagonal elements of cov_x give the variance of the parameters. No need to multiply by the residual standard deviation.
> In case of W == 1, i.e. no errors in the data are provided, as reported here http://en.wikipedia.org/wiki/Linear_least_squares_%28mathematics%29#Weighted_linear_least_squares
> one uses the variance of the observations, i.e. uses the
> s_sq = (func(popt, *args)**2).sum()/(len(ydata)-len(p0))
> as an estimate of the variance, as done in curve_fit.
> BUT, we cannot multiply the cov_x obtained with the _weighted_general_function by s_sq. As I said, we already took it into account in the definition of the residuals. Thus:
> if (len(ydata) > len(p0)) and pcov is not None:
>     s_sq = (func(popt, *args)**2).sum()/(len(ydata)-len(p0))
>     pcov = pcov * s_sq
> else:
>     pcov = inf
> where func can be both "_general_function" and "_weighted_general_function", is not correct.
>> Since we just had this discussion, I'm not arguing again. I just want
>> to have clear definitions that the "average" user can use by default.
>> I don't really care which it is if the majority of users are
>> engineers
>> who can tell what their errors variances are before doing any
>> estimation.
> Oh, interesting. Where did you have this discussion? On this list? I could not find it...
> Here the problem is not to decide an 'average' behaviour, but to correctly calculate the parameters' error when the user does or does not provide the errors in the data.
> Hope this helps
> Gianfranco
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev@scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
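For readers of this archived thread, here is a minimal sketch of the two covariance conventions being discussed, for a simple straight-line fit. It is illustrative only -- not the actual leastsq/curve_fit source -- and the data, model and variable names are made up.

import numpy as np

# Straight-line model y = a*x + b; J is the Jacobian of the model.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])
sigma = np.full_like(y, 0.2)                  # known per-point errors
J = np.column_stack([x, np.ones_like(x)])

# Convention 1: sigma are absolute measurement errors, so the weights are
# part of the problem and cov = (J^T W J)^-1 directly, with no rescaling.
W = np.diag(1.0 / sigma**2)
cov_abs = np.linalg.inv(J.T @ W @ J)

# Convention 2: no (or only relative) errors are known, so the unweighted
# covariance is scaled by s_sq, the reduced chi-square of the fit.
params, residuals, *_ = np.linalg.lstsq(J, y, rcond=None)
s_sq = residuals[0] / (len(y) - len(params))
cov_scaled = np.linalg.inv(J.T @ J) * s_sq

print(np.sqrt(np.diag(cov_abs)))      # parameter errors, convention 1
print(np.sqrt(np.diag(cov_scaled)))   # parameter errors, convention 2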
|
Partial fractions question
October 22nd 2012, 02:19 PM
Partial fractions question
Hello, I am in a differential equations class need to find the inverse Laplace of
$\frac{1}{s(s^2+5)}$, but have forgotten how to do partial fractions with quadratic factors. What I got so far: writing $\frac{1}{s(s^2+5)}=\frac{A}{s}+\frac{Bs+C}{s^2+5}$, so that $1=A(s^2+5)+(Bs+C)s$.
Letting $s=0$ I get $A=\frac{1}{5}$, but where do I go from there?
October 22nd 2012, 02:24 PM
Re: Partial fractions question
$\frac{1}{s\left(s^2+5\right)} = \frac{1}{5 s}-\frac{s}{5 \left(5+s^2\right)}$
October 22nd 2012, 02:58 PM
Re: Partial fractions question
Thanks! Could you explain how you got that?
October 22nd 2012, 03:10 PM
Re: Partial fractions question
Equate coefficients of the various powers of $s$ across the identity.
The coefficient of $s^{2}$ on the LHS has to equal the coefficient of $s^{2}$ on the RHS.
That gives you $A+B=0.$
Then equate coefficients of $s$ to get the value of $C.$
Alternatively you could substitute two other values for $s$ and solve the resulting simultaneous equations.
October 22nd 2012, 03:25 PM
Re: Partial fractions question
You have $A(s^2+ 5)+ (Bs+ C)s= 1$ which is to be true for all s. So take s to be some simple numbers.
If s= 0, then $A(0+ 5)+ (B(0)+ C)0= 5A= 1$. Putting that value into the equation, $(1/5)(s^2+ 5)+ (Bs+ C)s= (1/5+ B)s^2+ Cs+ 1= 1$ or $(1/5+ B)s^2+ Cs= 0$.
Taking s= 1, that becomes 1/5+ B+ C= 0 or B+ C= -1/5. Taking s= -1, it is 1/5+ B- C= 0 or B- C= -1/5. Adding those, 2B= -2/5 so B= -1/5. Subtracting, 2C= 0 so C= 0.
Another way to get that is to multiply out $A(s^2+ 5)+ (Bs+ C)s=(A+ B)s^2+ Cs+ 5A= 0s^2+ 0s+ 1$. Now, because that is true for all s, we must have "corresponding coefficients" equal: A+ B= 0, C=
0, 5A= 1. That gives A= 1/5 and then (1/5)+ B= 0 so B= -1/5.
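If it helps, this sort of decomposition (and the inverse Laplace transform the thread started from) can be checked with a computer algebra system. A small sketch using SymPy; the symbol names are just illustrative:

import sympy as sp

s, t = sp.symbols('s t', positive=True)
F = 1 / (s * (s**2 + 5))

# Partial fractions: should give 1/(5*s) - s/(5*(s**2 + 5)),
# i.e. A = 1/5, B = -1/5, C = 0 in the A/s + (B*s + C)/(s**2 + 5) form.
print(sp.apart(F, s))

# Inverse Laplace transform of the original expression:
# roughly 1/5 - cos(sqrt(5)*t)/5 (possibly times a Heaviside step).
print(sp.inverse_laplace_transform(F, s, t))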
October 22nd 2012, 03:55 PM
Re: Partial fractions question
Aah! Thanks a lot!
|
Computer Graphics and Image Processing
Lecturer: Professor P. Robinson & Professor N. Dodgson
No. of lectures: 16
Suggested hours of supervisions: 4
Prerequisite courses: Algorithms
This course is a prerequisite for Advanced Graphics (Part II).
To introduce the necessary background, the basic algorithms, and the applications of computer graphics and image processing. A large proportion of the course considers the design and optimisation of
algorithms, so can be considered a practical application of the lessons learnt in the Algorithms course.
• Background. What is an image? What are computer graphics, image processing, and computer vision? How do they relate to one another? Image capture. Image display. Human vision. Resolution and
quantisation. Colour and colour spaces. Storage of images in memory, and double buffering. Display devices: brief overview of two display technologies (LCD, DMD) and two printer technologies (ink jet and laser printer). [3 lectures]
• 2D computer graphics. Drawing a straight line. Drawing circles and ellipses. Cubic curves: specification and drawing. Clipping lines. Filling polygons. Clipping polygons. 2D transformations,
vectors and matrices, homogeneous co-ordinates. Uses of 2D graphics: HCI, typesetting, graphic design. [4 lectures]
• 3D computer graphics. Projection: orthographic and perspective. 3D transforms and matrices. 3D rotation using a non-matrix method. 3D clipping. 3D curves. 3D scan conversion using the z-buffer.
Anti-aliasing and the A-buffer. Lighting: theory, BRDF, approximations: flat shading, Gouraud shading, Phong shading. Texture mapping. OpenGL programming. [7 lectures]
• Image processing. Operations on images: filtering, point processing, compositing. Halftoning and dithering, error diffusion. [2 lectures]
At the end of the course students should be able to
• explain the basic function of the human eye and how this impinges on resolution, quantisation, and colour representation for digital images; describe a number of colour spaces and their relative
merits; explain the workings of two display technologies and two printer technologies;
• describe and explain the following algorithms: mid-point line drawing, mid-point circle drawing, Bezier cubic drawing, Cohen-Sutherland line clipping, scanline polygon fill, Sutherland-Hodgman
polygon clipping, z-buffer, A-buffer, texture mapping, error diffusion;
• use matrices and homogeneous coordinates to represent and perform 2D and 3D transformations; understand and use 3D to 2D projection, the viewing volume, and 3D clipping;
• understand Bezier curves and patches; understand sampling and super-sampling issues; understand lighting techniques and how they are applied to z-buffer polygon scan conversion; understand
texture mapping;
• explain how to use filters, point processing, and arithmetic operations in image processing and describe a number of examples of the use of each; explain how halftoning, ordered dither, and error
diffusion work.
* Foley, J.D., van Dam, A., Feiner, S.K. & Hughes, J.F. (1990). Computer graphics: principles and practice. Addison-Wesley (2nd ed.).
Gonzalez, R.C. & Woods, R.E. (2008). Digital image processing. Addison-Wesley (3rd ed). [The second edition (1992) and the first edition (Gonzalez & Wintz, 1977) are as useful for this course.]
* Slater, M., Steed, A. & Chrysanthou, Y. (2002). Computer graphics and virtual environments: from realism to real-time. Addison-Wesley.
|
Logical Pinpointing - Less Wrong
Comments (327)
The presentation of the natural numbers is meant to be standard, including the (well-known and proven) idea that it requires second-order logic to pin them down. There's some further controversy
about second-order logic which will be discussed in a later post.
I've seen some (old) arguments about the meaning of axiomatizing which did not resolve in the answer, "Because otherwise you can't talk about numbers as opposed to something else," so AFAIK it's
theoretically possible that I'm the first to spell out that idea in exactly that way, but it's an obvious-enough idea and there's been enough debate by philosophically inclined mathematicians that I
would be genuinely surprised to find this was the case.
On the other hand, I've surely never seen a general account of meaningfulness which puts logical pinpointing alongside causal link-tracing to delineate two different kinds of correspondence within
correspondence theories of truth. To whatever extent any of this is a standard position, it's not nearly widely-known enough or explicitly taught in those terms to general mathematicians outside
model theory and mathematical logic, just like the standard position on "proof". Nor does any of it appear in the S. E. P. entry on meaning.
Very nice post!
Bug: Higher-order logic (a standard term) means "infinite-order logic" (not a standard term), not "logic of order greater 1" (also not a standard term). (For whatever reason, neither the Wikipedia
nor the SEP entry seem to come out and say this, but every reference I can remember used the terms like that, and the usage in SEP seems to imply it too, e.g. "This second-order expressibility of the
power-set operation permits the simulation of higher-order logic within second order.")
What about Steven Landsburg's frequent crowing on the Platonicity of math and how numbers are real because we can "directly perceive them"? How does this relate to it?
EDIT: Well, he replies here.
I was wondering what he thought about this!
While I greatly sympathize with the "Platonicity of math", I can't shake the idea that my reasoning about numbers isn't any kind of direct perception, but just reasoning about an in-memory
representation of a model that is ultimately based on all the other systems that behave like numbers.
I find the arguments about how not all true statements regarding the natural numbers can be inferred via first-order logic tedious. It doesn't seem like our understanding of the natural numbers is
particularly impoverished because of it.
I think philosophers who think that the categoricity of second-order Peano arithmetic allows us to refer to the natural numbers uniquely tend to also reject the causal theory of reference, precisely
because the causal theory of reference is usually put as requiring all reference to be causally guided. Among those, lots of people more-or-less think that references can be fixed by some kinds of
description, and I think logical descriptions of this kind would be pretty uncontroversial.
OTOH, for some reason everyone in philosophy of maths is allergic to second-order logic (blame Quine), so the categoricity argument doesn't always hold water. For some discussion, there's a section
in the SEP entry on Philosophy of Mathematics.
(To give one of the reasons why people don't like SOL: to interpret it fully you seem to need set theory. Properties basically behave like sets, and so you can make SOL statements that are valid iff
the Continuum Hypothesis is true, for example. It seems wrong that logic should depend on set theory in this way.)
This is a facepalm "Duh" moment, I hear this criticism all the time but it does not mean that "logic" depends on "set theory". There is a confusion here between what can be STATED and what can be
KNOWN. The criticism only has any force if you think that all "logical truths" ought to be recognizable so that they can be effectively enumerated. But the critics don't mind that for any effective
enumeration of theorems of arithmetic, there are true statements about integers that won't be included -- we can't KNOW all the true facts about integers, so the criticism of second-order logic boils
down to saying that you don't like using the word "logic" to be applied to any system powerful enough to EXPRESS quantified statements about the integers, but only to systems weak enough that all
their consequences can be enumerated.
This demand is unreasonable. Even if logic is only about "correct reasoning", the usual framework given by SOL does not presume any dubious principles of reasoning and ZF proves its consistency. The
existence of propositions which are not deductively settled by that framework but which can be given mathematical interpretations means nothing more than that our repertoire of "techniques of correct
reasoning", which has grown over the centuries, isn't necessarily finalized.
A few points:
i) you don't actually need to jump directly to second order logic to get a categorical axiomatization of the natural numbers. There are several weaker ways to do the job: L_ω1,ω (which allows infinitary conjunctions), adding a primitive finiteness operator, adding a primitive ancestral operator, allowing the omega rule (i.e. from the infinitely many premises P(0), P(1), ... P(n), ... infer AnP(n)). Second order logic is more powerful than these in that it gives a quasi categorical axiomatization of the universe of sets (i.e. given any two models of ZFC_2, either they are isomorphic or one is isomorphic to an initial segment of the other).
ii) although there is a minority view to the contrary, it's typically thought that going second order doesn't help with determinateness worries (i.e. roughly what you are talking about with regard to
"pinning down" the natural numbers). The point here is that going second order only works if you interpret the second order quantifiers "fully", i.e. as ranging over the whole power set of the domain
rather than some proper subset of it. But the problem is: how can we rule out non-full interpretations of the quantifiers? This seems like just the same sort of problem as ruling out non-standard
models of arithmetic ("the same sort", not the same, because for the reasons mentioned in (i) it is actually more stringent of a condition.) The point is if you for some reason doubt that we have a
categorical grasp of the natural numbers, you are certainly not going to grant that we can enforce a full interpretation of the second order quantifiers. And although it seems intuitively obvious
that we have a categorical grasp of the natural numbers, careful consideration of the first incompleteness theorem shows that this is by no means clear.
iii) Given that categoricity results are only up to isomorphism, I don't see how they help you pin down talk of the natural numbers themselves (as opposed to any old omega_sequence). At best, they
help you pin down the structure of the natural numbers, but taking this insight into account is easier said than done.
"Because otherwise you can't talk about numbers as opposed to something else,"
The Abstract Algebra course I took presented it in this fashion. I have a hard time seeing how you could even have abstract algebra without this notion.
so AFAIK it's theoretically possible that I'm the first to spell out that idea in exactly that way
I remember explaining the Axiom of Choice in this way to a fellow undergraduate on my integration theory course in late 2000. But of course it never occurred to me to write it down, so you only have
my word for this :-)
I've seen some (old) arguments about the meaning of axiomatizing which did not resolve in the answer, "Because otherwise you can't talk about numbers as opposed to something else," so AFAIK it's
theoretically possible that I'm the first to spell out that idea in exactly that way, but it's an obvious-enough idea and there's been enough debate by philosophically inclined mathematicians
that I would be genuinely surprised to find this was the case.
If memory serves, Hofstadter uses roughly this explanation in GEB.
This is pretty close to how I remember the discussion in GEB. He has a good discussion of non-Euclidean geometry. He emphasizes that originally the negation of Parallel Postulate was viewed as
absurd, but that now we can understand that the non-Euclidean axioms are perfectly reasonable statements which describe something other than plane geometry we are used to. Later he has a bit of a
discussion of what a model of PA + NOT(CON(PA)) would look like. I remember finding it pretty confusing, and I didn't really know what he was getting at until I read some actual logic theory
textbooks. But he did get across the idea that the axioms would still describe something, but that something would be larger and stranger than the integers we think we know.
IIRC, Hofstadter is a firm formalist, and I don't see how that squares with EY's apparent Correspondence Theory. At least I don't see the point in correspondence if what is being corresponded to is itself generated by axioms.
You just say: 'For every relation R that works exactly like addition, the following statement S is true about that relation.' It would look like, '∀ relations R: (∀x∀y∀z: R(x, 0, x) ∧ (R(x, y, z)
→R(x, Sy, Sz))) → S)', where S says whatever you meant to say about +, using the token R.
The expression '(∀x∀y∀z: R(x, 0, x) ∧ (R(x, y, z)→R(x, Sy, Sz)))' is true for addition, but also for many other relations, such as a '∀x∀y∀z: R(x, y, z)' relation.
I'm not sure that adding the conjunction (R(x,y,z)&R(x,y,w)->z=w) would have made things clearer...I thought it was obvious the hypothetical mathematician was just explaining what kind of steps you
need to "taboo addition"
Yes, the educational goal of that paragraph is to "taboo addition". Nonetheless, the tabooing should be done correctly. If it is too difficult to do, then it is Eliezer's problem for choosing a
difficult example to illustrate a concept.
This may sound like nitpicking, but this website's goal is to teach people rationality skills, as opposed to "guessing the teacher's password". The article spends five screens explaining why details are so important when defining the concept of a "number", and the reader is supposed to understand it. So it's unfortunate if that explanation is followed by another example, which accidentally gets similar details wrong. My objections against the wrong formula are very similar to the in-story mathematician's objections to the definitions of "number"; the definition is too broad.
Your suggestion: '∀x∀y∀z∀w: R(x, 0, x) ∧ (R(x, y, z)↔R(x, Sy, Sz)) ∧ ((R(x, y, z)∧R(x, y, w))→z=w)'
My alternative: '∀x∀y∀z: (R(x, 0, z)↔(x=z)) ∧ (R(x, y, z)↔R(x, Sy, Sz)) ∧ (R(x, y, z)↔R(Sx, y, Sz))'.
Both seem correct, and anyone knows a shorter (or a more legible) way to express it, please contribute.
Shorter (but not necessarily more legible): ∀x∀y∀z: (R(x, 0, z)↔(x=z)) ∧ (R(x, Sy, z)↔R(Sx, y, z)).
Both seem correct, and anyone knows a shorter (or a more legible) way to express it, please contribute.
The version in the article now, ∀x∀y∀z: R(x, 0, x) ∧ (R(x, y, z)↔R(x, Sy, Sz)), is better than before, but it leaves open the possibility that R(0,0,7) as well as R(0,0,0). One more possibility is:
"Not in second-order logic, which can quantify over functions as well as properties. (...) It would look like, '∀ functions f: ((∀x∀y: f(x, 0) = x ∧ f(x, Sy) = Sf(x, y)) → Q)' (...)"
(I guess I'm not entirely in favor of this version -- ETA: compared to Kindly's fix -- because quantifying over relations surely seems like a smaller step from quantifying over properties than does
quantifying over functions, if you're new to this, but still thought it might be worth pointing out in a comment.)
Your idea of pinning down the natural numbers using second order logic is interesting, but I don't think that it really solves the problem. In particular, it shouldn't be enough to convince a
formalist that the two of you are talking about the same natural numbers.
Even in second order PA, there will still be statements that are independent of the axioms, like "there doesn't exist a number corresponding to a Godel encoding of a proof that 0=S0 under the axioms
of second order PA". Thus unless you are assuming full semantics (i.e. that for any collection of numbers there is a corresponding property), there should be distinct models of second order PA for
which the veracity of the above statement differs.
Thus it seems to me that all you have done with your appeal to second order logic is to change my questions about "what is a number?" into questions about "what is a property?" In any case, I'm still
not totally convinced that it is possible to pin down The Natural Numbers exactly.
I'm assuming full semantics for second-order logic (for any collection of numbers there is a corresponding property being quantified over) so the axioms have a semantic model provably unique up to
isomorphism, there are no nonstandard models, the Completeness Theorem does not hold and some truths (like Godel's G) are semantically entailed without being syntactically entailed, etc.
OK then. As soon as you can explain to me exactly what you mean when you say "for any collection of numbers there is a corresponding property being quantified over", I will be satisfied. In
particular, what do you mean when you say "any collection"?
Are you claiming that this term is ambiguous? In what specially favored set theory, in what specially favored collection of allowed models, is it ambiguous? Maybe the model of set theory I use has
only one set of allowable 'collections of numbers' in which case the term isn't ambiguous. Now you could claim that other possible models exist, I'd just like to know in what mathematical language
you're claiming these other models exist. How do you assert the ambiguity of second-order logic without using second-order logic to frame the surrounding set theory in which it is ambiguous?
I'm not entirely sure what you're getting at here. If we start restricting properties to only cut out sets of numbers rather than arbitrary collections, then we've already given up on full semantics.
If we take this leap, then it is a theorem of set theory that all set-theoretic models of the of the natural numbers are isomorphic. On the other hand, since not all statements about the integers can
be either proven or disproven with the axioms of set theory, there must be different models of set theory which have different models of the integers within them (in fact, I can build these two
models within a larger set theory).
On the other hand, if we continue to use full semantics, I'm not sure how you clarify what you mean when you say "a property exists for every collection of numbers". Telling me that I should
already know what a collection is doesn't seem much more reasonable than telling me that I should already know what a natural number is.
On the other hand, since not all statements about the integers can be either proven or disproven with the axioms of set theory, there must be different models of set theory which have different
models of the integers within them
Doesn't the proof of the Completeness Theorem / Compactness Theorem incidentally invoke second-order logic itself? (In the very quiet way that e.g. any assumption that the standard integers even
exist invokes second-order logic.) I'm not sure but I would expect it to, since otherwise the notion of a "consistent" theory is entirely dependent on which models your set theory says exist and
which proofs your integer theory says exist. Perhaps my favorite model of set theory has only one model of set theory, so I think that only one model exists. Can you prove to me that there are other
models without invoking second-order logic implicitly or explicitly in any called-on lemma? Keep in mind that all mathematicians speak second-order logic as English, so checking that all proofs are
first-order doesn't seem easy.
I am admittedly in a little out of my depth here, so the following could reasonably be wrong, but I believe that the Compactness Theorem can be proved within first order set theory. Given a
consistent theory, I can use the axiom of choice to extend it to a maximal consistent set of statements (i.e. so that for every P either P or (not P) is in my set). Then for every statement that I
have of the form "there exists x such that P(x)", I introduce an element x to my model and add P(x) to my list of true statements. I then re-extend to a maximal set of statements, and add new
variables as necessary, until I cannot do this any longer. What I am left with is a model for my theory. I don't think I invoked second order logic anywhere here. In particular, what I did amounts to
a construction within set theory. I suppose it is the case that some set theories will have no models of set theory (because they prove that set theory is inconsistent), while others will contain
infinitely many.
My intuition on the matter is that if you can state what you are trying to say without second order logic, you should be able to prove it without second order logic. You need second order logic to
even make sense of the idea of the standard natural numbers. The Compactness Theorem can be stated in first order set theory, so I expect the proof to be formalizable within first order set theory.
If you're already fine with the alternating quantifiers of first-order logic, I don't see why allowing branching quantifiers would cause a problem. I could describe second order logic in terms of
branching quantifiers.
Huh. That's interesting. Are you saying that you can actually pin down The Natural Numbers exactly using some "first order logic with branching quantifiers"? If so, I would be interested in seeing
It is not the case that: there exists a z such that for every x and x’, there exists a y depending only on x and a y’ depending only on x’ such that Q(x,x’,y,y’,z) is true
where Q(x,x’,y,y’,z) is ((x=x' ) → (y=y' )) ∧ ((Sx=x' ) → (y=y' )) ∧ ((x=0) → (y=0)) ∧ ((x=z) → (y=1))
Cool. I agree that this is potentially less problematic than the second order logic approach. But it does still manage to encode the idea of a function in it implicitly when it talks about "y
depending only on x", it essentially requires that y is a function of x, and if it's unclear exactly which functions are allowed, you will have problems. I guess first order logic has this problem to
some degree, but with alternating quantifiers, the functions that you might need to define seem closer to the type that should necessarily exist.
I think this is his way of connecting numbers to the previous posts. If "a property" is defined as a causal relation, which all properties are, then I think this makes sense. It doesn't provide some
sort of ultimate metaphysical justification for numbers or properties or anything, but it clarifies connections between the two and such a justification isn't really possible anyways.
I don't think that I understand what you mean here.
How can these properties represent causal relations? They are things that are satisfied by some numbers and not by others. Since numbers are aphysical, how do we relate this to causal relations.
On the other hand, even with a satisfactory answer to the above question, how do we know that "being in the first chain" is actually a property, since otherwise we still can't show that there is only
one chain.
Since numbers are aphysical, how do we relate this to causal relations?
You just begged the question. Eliezer answered you in the OP:
Because you can prove once and for all that in any process which behaves like integers, 2 thingies + 2 thingies = 4 thingies. You can store this general fact, and recall the resulting prediction,
for many different places inside reality where physical things behave in accordance with the number-axioms. Moreover, so long as we believe that a calculator behaves like numbers, pressing '2 +
2' on a calculator and getting '4' tells us that 2 + 2 = 4 is true of numbers and then to expect four apples in the bowl. It's not like anything fundamentally different from that is going on when
we try to add 2 + 2 inside our own brains - all the information we get about these 'logical models' is coming from the observation of physical things that allegedly behave like their axioms,
whether it's our neurally-patterned thought processes, or a calculator, or apples in a bowl.
I can't think of an example, but I'm thinking that if a property existed then it would be a causal relation. A property wouldn't represent a causal relation, it would be one. I wasn't thinking
mathematically but instead in terms of a more commonplace understanding of properties as things like red and yellow and blue.
The argument made by the simple idea of truth might be a way to get us from physical states (which are causal relations) to numbers. If you believe that counting sheep is a valid operation, then
quantifying color also seems fine. The reason I spoke in terms of causal relations is because I believe understanding qualities as causal relations between things allows us to deduce properties about
things through a combination of Salmonoff Induction and the method described in this post.
Are you questioning the idea that numbers or properties are a quality about objects? If so, what are they?
I'm feeling confused though. If the definition of property used here doesn't connect to or means something completely different than facts about objects, then I'm way off base. I might also be off
base for other reasons. Not sure.
Thanks for posting this. My intended comments got pretty long, so I converted them to a blog post here: http://www.thebigquestions.com/2012/11/14/accounting-for-numbers/. The gist is
that I don't think you've solved the problem, partly because second order logic is not logic (as explained in my post) and partly because you are relying on a theorem (that second order Peano
arithmetic has a unique model) which relies on set theory, so you have "solved" the problem of what it means for numbers to be "out there" only by reducing it to the question of what it means for
sets to be "out there", which is, if anything, a greater mystery.
"- try pondering this one. Why does 2 + 2 come out the same way each time? Never mind the question of why the laws of physics are stable - why is logic stable? Of course I can't imagine it being
any other way, but that's not an explanation."
Nothing in the process described, of pinpointing the natural numbers, makes any reference to time. That is why it is temporally stable: not because it has an ongoing existence which is mysteriously
unaffected by the passage of time, but because time has no connection with it. Whenever you look at it, it's the same, identical thing, not a later, miraculously preserved version of the thing.
What if 2 + 2 varies over something other than time that nonetheless correlates with time in our universe? Suppose 2 + 2 comes out to 4 the first 1 trillion times the operation is performed by
humans, and to 5 on the 1 trillion and first time.
I suppose you could raise the same explanation: the definition of 2 + 2 makes no reference to how many times it has been applied. I believe the same can be said for any other reason you may give for
why 2 + 2 might cease to equal 4.
Where that is the case, your method of mapping from the reality to arithmetic is not a good model of that process - no more, no less.
I love the elegance of this answer, upvoting.
I couldn't agree more. The timelessness of maths should be read negatively, as indepence of anything else, not as dependence on a timeless realm.
But the question isn't, "Why don't they change over time," but rather, "why are they the same on each occasion". It makes no reference to occasion? Sure, but even so, why doesn't 2 + 2 = a random
number each time? Why is the same identical thing the same?
I'm not sure what the etiquette is of responding to retracted comments, but I'll have a go at this one.
Why is the same identical thing the same?
That's what I mean when I say they are identical. It's not another, separate thing, existing on a separate occasion, distinct from the first but standing in the relation of identity to it. In
mathematics, you can step into the same river twice. Even aliens in distant galaxies step into the same river.
However, there is something else involved with the stability, which exists in time, and which is capable of being imperfectly stable: oneself. 2+2=4 is immutable, but my judgement that 2+2 equals 4
is mutable, because I change over time. If it seems impossible to become confused about 2+2=4, just think of degenerative brain diseases. Or being asleep and dreaming that 2+2 made 5.
So the question becomes, "If "2+2" is just another way of saying "4", what is the point of having two expressions for it?"
My answer: As humans, we often desire to split a group of large, distinct objects into smaller groups of large, distinct objects, or to put two smaller groups of large, distinct, objects, together.
So, when we say "2 + 2 = 4", what we are really expressing is that a group of 4 objects can be transformed into a group of 2 objects and another group of 2 objects, by moving the objects apart (and
vice versa). Sharing resources with fellow humans is fundamental to human interaction. The reason I say, "large, distinct objects" is that the rules of addition do not hold for everything. For
example, when you add "1" particle of matter to "1" particle of antimatter, you get "0" particles of both matter and antimatter.
Numbers, and, yes, even logic, only exist fundamentally in the mind. They are good descriptions that correspond to reality. The soundness theorem for logic (which is not provable in the same logic it
is describing) is what really begins to hint at logic's correspondence to the real world. The soundness theorem relies on the fact that all of the axioms are true and that inference rules are
truth-preserving. The Peano axioms and logic are useful because, given the commonly known meaning we assign to the symbols of those systems, the axioms do properly describe our observations of
reality and the inference rules do lead to conclusions that continue to correspond to our observations of reality (in (one of) the correct domain(s), groups of large, distinct, objects). We observe
that quantity is preserved regardless of grouping; this is the associative property (here's another way of looking at it).
The mathematical proof of the soundness theorem is useless for convincing the hard skeptic, because it uses mathematical induction itself! The principle of mathematical induction is called such
because it was formulated inductively. When it comes to the large numbers, no one has observed these quantities. But, for all quantities we have observed so far, mathematical induction has held. We
use deduction to apply induction, but that doesn't make the induction any less inductive to begin with. We use the real number system to make predictions in physics. If we have the luxury of making
an observation, we should go ahead and update. For companies with limited resources that are trying to develop a useful product to sell to make money, and even more so for Friendly AI (a mistake
could end human civilization), it's nice to have a good idea of what an outcome will be before it happens. Bayes' rule provides a systematic way of working with this uncertainty. Maybe, one day, when
I put two apples next to two apples on my kitchen table, there will be five (the order in which I move the apples around will affect their quantity), but, if I had to bet one way or the other, I
assure you that my money is on this not happening.
"- try pondering this one. Why does 2 + 2 come out the same way each time? Never mind the question of why the laws of physics are stable - why is logic stable? Of course I can't imagine it being
any other way, but that's not an explanation."
I have recently had a thought relevant to the topic; an operation that is not stable.
In certain contexts, the operation d is used, where XdY means "take a set of X fair dice, each die having Y sides (numbered 1 to Y), and throw them; add together the numbers on the uppermost faces".
Using this definition, 2d2 has value '2' 25% of the time, value '3' 50% of the time, and value '4' 25% of the time. The procedure is always identical, and so there's nothing in the process which
makes any reference to time, but the result can differ (though note that 'time' is still not a parameter in that result). If the operation '+' is replaced by the operation 'd' - well, then that is
one other way that can be imagined.
Edited to add: It has been pointed out that XdY is a constant probability distribution. The unstable operation to which I refer is the operation of taking a single random integer sample, in a fair
manner, from that distribution.
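For what it's worth, the sampling operation is easy to simulate; a small sketch (the function name is purely illustrative):

import random
from collections import Counter

def roll(x, y):
    """One sample of XdY: the sum of x fair y-sided dice."""
    return sum(random.randint(1, y) for _ in range(x))

# The procedure for 2d2 is identical every time, yet the sampled value
# is not: empirically about 25% twos, 50% threes, 25% fours.
trials = 100_000
counts = Counter(roll(2, 2) for _ in range(trials))
print({value: count / trials for value, count in sorted(counts.items())})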
The random is not in the dice, it is in the throw, and that procedure is never identical. Also, XdY is a distribution, always the same, and the dice are just a relatively fair way of picking a
Aren't you just confusing distributions (2d2) and samples ('3') here?
Thank you, I shall suitably edit my post.
How come we never see anything physical that behaves like any of of the non-standard models of first order PA? Given that's the case, it seems like we can communicate the idea of numbers to other
humans or even aliens by saying "the only model of first order PA that ever shows up in reality", so we don't need second order logic (or the other logical ideas mentioned in the comments) just to
talk about the natural numbers?
The natural numbers are supposed to be what you get if you start counting from 0. If you start counting from 0 in a nonstandard model of PA you can't get to any of the nonstandard bits, but
first-order logic just isn't expressive enough to allow you to talk about "the set of all things that I get if I start counting from 0." This is what allows nonstandard models to exist, but they
exist only in a somewhat delicate mathematical sense and there's no reason that you should expect any physical phenomenon corresponding to them.
If I wanted to communicate the idea of numbers to aliens, I don't think I would even talk about logic. I would just start counting with whatever was available, e.g. if I had two rocks to smash
together I'd smash the rocks together once, then twice, etc. If the aliens don't get it by the time I've smashed the rocks together, say, ten times, then they're either so bad at induction or so
unfamiliar with counting that we probably can't meaningfully communicate with them anyway.
This is what allows nonstandard models to exist, but they exist only in a somewhat delicate mathematical sense and there's no reason that you should expect any physical phenomenon corresponding
to them.
Is it just coincidence that these nonstandard models don't show up anywhere in the empirical sciences, but real numbers and complex numbers do? I'm wondering if there is some sort of deeper reason...
Maybe you were hinting at something by "delicate"?
If I wanted to communicate the idea of numbers to aliens, I don't think I would even talk about logic.
Good point. I guess I was trying to make the point that Eliezer seems a bit obsessed with logical pinpointing (aka categoricity) in this post. ("You need axioms to pin down a mathematical universe
before you can talk about it in the first place.") Before we achieved categoricity, we already knew what mathematical structure we wanted to talk about, and afterwards, it's still useful to add more
axioms if we want to prove more theorems.
Is it just coincidence that these nonstandard models don't show up anywhere in the empirical sciences, but real numbers and complex numbers do?
The process by which the concepts "natural / real / complex numbers" vs. "nonstandard models of PA" were generated is very different. In the first case, mathematicians were trying to model various
aspects of the world around them (e.g. counting and physics). In the second case, mathematicians were trying to pinpoint something else they already understood and ended up not quite getting it
because of logical subtleties.
I'm not sure how to explain what I mean by "delicate." It roughly means "unlikely to have been independently invented by alien mathematicians." In order for alien mathematicians to independently
invent the notion of a nonstandard model of PA, they would have to have independently decided that writing down the first-order Peano axioms is a good idea, and I just don't find this all that
likely. On the other hand, there are various routes alien mathematicians might take towards independently inventing the complex numbers, such as figuring out quantum mechanics.
Before we achieved categoricity, we already knew what mathematical structure we wanted to talk about, and afterwards, it's still useful to add more axioms if we want to prove more theorems.
I guess Eliezer's intended response here is something like "but when you want to explain to an AI what you mean by the natural numbers, you can't just say The Things You Use To Count With, You Know...".
The Pirahã are unfamiliar with counting and we still can kind-of meaningfully communicate with them. I agree with the rest of the comment, though.
I was ready to reply "bullshit", but I guess if their language doesn't have any cardinal or ordinal number terms ...
Still, they could count with beads or rocks, à la the magic sheep-counting bucket.
It's understandable why they wouldn't really need counting given their lifestyle. But I wonder what they do (or did) when a neighboring tribe attacks or encroaches on their territory? Their language
apparently does have words for 'small amount' and 'large amount', but how would they decide how many warriors to send to meet an opposing band?
Still, they could count with beads or rocks, à la the magic sheep-counting bucket.
Here's a decent argument that they probably don't have words for numbers because they don't count, rather than the other way round, contra pop-Whorfianism. (Otherwise I guess they'd just borrow the
words for numbers from Portuguese or something, as they probably did with personal pronouns from Tupi.)
This is basically the theme of the next post in the sequence. :)
How come we never see anything physical that behaves like any of of the non-standard models of first order PA?
Umm... wouldn't they be considered "standard" in this case? I.e. matching some real-world experience?
Let's imagine a counterfactual world in which some of our "standard" models appear non-standard. For example, in a purely discrete world (like the one consisting solely of causal chains, as EY once
suggested), continuity would be a non-standard object invented by mathematicians. What makes continuity "standard" in our world is, disappointingly, our limited visual acuity.
Another example: in a world simulated on a 32-bit integer machine, natural numbers would be considered non-standard, given how all actual numbers wrap around after 2^32-1.
Exercise for the reader: imagine a world where a certain non-standard model of first order PA would be viewed as standard.
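A tiny illustration of the wrap-around point, using fixed-width integers (NumPy is used here purely for convenience):

import numpy as np

# In a world built on 32-bit signed integers, "taking the successor"
# eventually wraps around instead of producing an ever-larger number.
largest = np.int32(2**31 - 1)
print(largest)                    #  2147483647
print(largest + np.int32(1))      # -2147483648 (NumPy also emits an
                                  # overflow warning)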
Requesting feedback:
"Whenever a part of reality behaves in a way that conforms to the number-axioms - for example, if putting apples into a bowl obeys rules, like no apple spontaneously appearing or vanishing, which
yields the high-level behavior of numbers - then all the mathematical theorems we proved valid in the universe of numbers can be imported back into reality. The conclusion isn't absolutely
certain, because it's not absolutely certain that nobody will sneak in and steal an apple and change the physical bowl's behavior so that it doesn't match the axioms any more. But so long as the
premises are true, the conclusions are true; the conclusion can't fail unless a premise also failed. You get four apples in reality, because those apples behaving numerically isn't something you
assume, it's something that's physically true. When two clouds collide and form a bigger cloud, on the other hand, they aren't behaving like integers, whether you assume they are or not."
This is exactly what I argued and grounded back in this article.
Specifically, that the two premises:
1) rocks behave isomorphically to numbers, and
2) under the axioms of numbers, 2+2 = 4
jointly imply that adding two rocks to two rocks gets four rocks. (See the cute diagram.)
And yet the response on that article (which had an array of other implications and reconciliations) was pretty negative. What gives?
Furthermore, in discussions about this in person, Eliezer_Yudkowsky has (IIRC and I'm pretty sure I do) invoked the "hey, adding two apples to two apples gets four apples" argument to justify the
truth of 2+2=4, in direct contradiction of the above point. What gives on that?
Terry Tao's 2007 post on nonfirstorderizability and branching quantifiers gives an interesting view of the boundary between first- and second-order logic. Key quote:
Moving on to a more complicated example, if Q(x,x’,y,y’) is a quaternary relation on four objects x,x’,y,y’, then we can express the statement
For every x and x’, there exists a y depending only on x and a y’ depending on x and x’ such that Q(x,x’,y,y’) is true
...but it seems that one cannot express
For every x and x’, there exists a y depending only on x and a y’ depending only on x’ such that Q(x,x’,y,y’) is true
in first order logic!
The post and comments give some well-known theorems that turn out to rely on such "branching quantifiers", and an encoding of the predicate "there are infinitely many X" which cannot be done in
first-order logic.
Reddit comments (>34): http://www.reddit.com/r/math/comments/12h03p/it_sounds_like_youre_starting_to_get_what_i_wait/
You just say: 'For every relation R that works exactly like addition, the following statement S is true about that relation.' It would look like, '∀ relations R: (∀x∀y∀z: R(x, 0, x) ∧ (R(x, y, z)
→R(x, Sy, Sz))) → S)', where S says whatever you meant to say about +, using the token R.
I would change the statement to be something other than 'S', say 'Q', as S is already used for 'successor'.
I agree that the use of S here was confusing. Also, there is one too many right parens.
I'm a little confused as to which of two positions this is advocating:
1. Numbers are real, serious things, but the way that we pick them out is by having a categorical set of axioms. They're interesting to talk about because lots of things in the world behave like
them (to some degree).
2. Mathematical talk is actually talk about what follows from certain axioms. This is interesting to talk about because lots of things obey the axioms and so exhibit the theorems (to some degree).
Both of these have some problems. The first one requires you to have weird, non-physical numbery-things. Not only this, but they're a special exception to the theory of reference that's been
developed so far, in that you can refer to them without having a causal connection.
The second one (which is similar to what I myself would espouse) doesn't have this problem, because it's just talking about what follows logically from other stuff, but you do then have to explain
why we seem to be talking about numbers. And also what people were doing talking about arithmetic before they knew about the Peano axioms. But the real bugbear here is that you then can't really
explain logic as part of mathematics. The usual analyisis of logic that we do in maths with the domain, interpretation, etc. can't be the whole deal if we're cashing out the mathematics in terms of
logical implication! You've got to say something else about logic.
(I think the answer is, loosely, that
1. the "numbers" we talk about are mainly fictional aides to using the system, and
2. the situation of pre-axiom speakers is much like that of English speakers who nonetheless can't explain English grammar.
3. I have no idea what to say about logic! )
I'm curious which of these (or neither) is the correct interpretation of the post, and if it's one of them, what Eliezer's answers are... but perhaps they're coming in another post.
I'm not sure exactly what Eliezer intends, but I'll put in my two cents:
A proof is simply a game of symbol manipulation. You start with some symbols, say '(', ')', '¬', '→', '↔', '∀', '∃', 'P', 'Q', 'R', 'x', 'y', and 'z'. Call these symbols the alphabet. Some sequences
of symbols are called well-formed formulas, or wffs for short. There are rules to tell what sequences of symbols are wffs, these are called a grammar. Some wffs are called axioms. There is another
important symbol that is not one of the symbols you chose - this is the '⊢' symbol. A declaration is the '⊢' symbol followed by a wff. A legal declaration is either the '⊢' symbol followed by an
axiom or the result of an inference rule. An inference rule is a rule that declares that a declaration of a certain form is legal, given that certain declarations of other forms are legal. A famous
inference rule called modus ponens is part of a formal system called first-order logic. This rule says: "If '⊢ P' and '⊢ (P → Q)' (where P and Q are replaced with some wffs) are valid declarations,
then '⊢ Q' is also a valid declaration." By the way, a formal system is just a specific alphabet, grammar, set of axioms, and set of inference rules. You also might like to note that if '⊢ P' (where
P is replaced with some wff) is a valid declaration, then we also call P a theorem. So now we know something: In a formal system, all axioms are theorems.
The second thing to note is that a formal system does not necessarily have anything to do with even propositional logic (let alone first- or second-order logic!). Consider the MIU system (open link
in WordPad, on Windows), for example. It has four inference rules for just messing around with the order of the letters, 'M', 'I', and 'U'! That doesn't have to do with the real world or even math,
does it?
The third thing to note is that, though a formal system can tell us what wffs are theorems, it cannot (directly) tell us what wffs are not theorems. And hence we have the MU puzzle. This asks whether
"MU" is a theorem in the MIU system. If it is, then you only need the MIU system to demonstrate this, but if it is not, you need to use reasoning from outside of that system.
As other commenters have already noted, mathematicians are not thinking about ZFC set theory when they prove things (that's not a bad thing; they'd never manage to prove any new results if they had
to start from foundations for every proof!). However, mathematicians should be fairly confident that the proofs they create could be reduced down to proofs from the low-level axioms. So Eliezer is
definitely right to be worried when a mathematician says "A proof is a social construct – it is what we need it to be in order to be convinced something is true. If you write something down and you
want it to count as a proof, the only real issue is whether you’re completely convincing.". A proof is a social construct, but it is one, very, very specific kind of social construct. The axioms and
inference rules of first-order Peano arithmetic are symbolic representations of our most fundamental notion of what the natural numbers are. The reason for propositional logic, first-order logic,
second-order logic, Peano arithmetic, and the scientific method is that humans have little things called "cognitive biases". We are convinced by way too many things that should be utterly
unconvincing. To say that a proof is a convincing social construct is...technically...correct (oh how it pains me to say that!)...but that very vague part of what it means for something to be a proof
seems to imply that a proof is the utter antithesis of what it was meant for! A mathematical proof should be the most convincing social construct we have, because of how it is constructed.
First-order Peano arithmetic has just a few simple axioms, and a couple simple inference rules, and its symbols have a clear intended interpretation (in terms of the natural numbers (which
characterize parts of the web of causality as already explained in the OP)). The truth of a few simple axioms and validity of a couple simple inference rules can be evaluated without our cognitive
biases getting in the way. On the other hand, it's probably not a good idea to make "There is a prime number larger than any given natural number." an axiom of a formal system about the natural
numbers, because it is not an immediate part of our intuitive understanding of how causal systems that behave according to the rules of the natural numbers behave. We as humans would have to be very,
very, confused if a theorem of first-order Peano arithmetic (because we are so sure that its axioms are true and its inference rules are valid) turned out to be the negation of another theorem of
Peano arithmetic, but not so confused if the same happened for ZFC set theory, because we do not so readily observe infinite sets in our day-to-day experience. The axioms and inference rules of
first-order Peano arithmetic more directly correspond to our physical reality than those of ZFC set theory do (and the axioms and inference rules of the MIU system have nothing to do with our
physical reality at all!). If a contradiction in first-order Peano arithmetic were found, though, life would go on. First-order Peano arithmetic does have a lot to do with our physical reality, but
not all of it does. It inducts to numbers like 3^^^3 that we will probably never interact with. The ultrafinitists would be shouting "Told you so!"
Now I have said enough to give my direct response to the comment I am replying to. First of all, the dichotomy between "logic" and "mathematics" can be dissolved by referring to "formal systems"
instead. A formal system is exactly as entwined with reality as its axioms and inference rules are. In terms of instrumental rationality, the more exotic theorems of ZFC set theory (and MIU) really
don't help us, unless we intrinsically enjoy considering the question "What if there were (even though we have no evidence that this is the case) a platonic realm of sets? How would it behave?"
When used as means to an end, the point of a formal system is to correct for our cognitive biases. In other words, the definition of a proof should state that a proof is a "convincing demonstration
that should be convincing", to begin with. I suspect Eliezer is so concerned with the Peano axioms because computer programs happen to evidently behave in a very, very mathematical way, and he
believes that eventually a computer program will decide the fate of humanity. I share his concerns; I want a mathematical argument that the General Artificial Intelligence that will be created will
be Friendly, not anything that might "convince" a few uninformed government officials.
A few things:
1. I don't think we disagree about the social construct thing: see my other comment where I'm talking about that.
2. It sounds like you pretty much come down in favour of the second position that I articulated above, just with a formalist twist. Mathematical talk is about what follows from the axioms; obviously
only certain sets of axioms are worth investigating, as they're the ones that actually line up with systems in the world. I agree so far, but you think that there is no notion of logic beyond the syntactic.
First of all, the dichotomy between "logic" and "mathematics" can be dissolved by referring to "formal systems" instead.
Aren't you just dropping the distrinction between syntax and semantics here? One of the big points of the last few posts has been that we're interested in the semantic implications, and the formal
systems are a (sound) syntactic means of reaching true conclusions. From your post it sounds like you're a pretty serious formalist, though, so that may not be a big deal to you.
Definitely position two.
I would describe first-order logic as "a formal encapsulation of humanity's most fundamental notions of how the world works". If it were shown to be inconsistent, then I could still fall back to
something like intuitionistic logic, but from that point on I'd be pretty skeptical about how much I could really know about the world, beyond that which is completely obvious (gravity, etc.).
What did I say that implied that I "think that there is no notion of logic beyond the syntactic"? I think of "logic" and "proof" as completely syntactic processes, but the premises and conclusions of
a proof have to have semantic meaning; otherwise, why would we care so much about proving anything? I may have implied something that I didn't believe, or I may have inconsistent beliefs regarding
math and logic, so I'd actually appreciate it if you pointed out where I contradicted what I just said in this comment (if I did).
Looking back, it's hard to say what gave me that impression. I think I was mostly just confused as to why you were spending quite so much time going over the syntax stuff ;) And
First of all, the dichotomy between "logic" and "mathematics" can be dissolved by referring to "formal systems" instead.
made me think that you though that all logical/mathematical talk was just talk of formal systems. That can't be true if you've got some semantic story going on: then the syntax is important, but
mainly as a way to reach semantic truths. And the semantics don't have to mention formal systems at all. If you think that the semantics of logic/mathematics is really about syntax, then that's what
I'd think of as a "formalist" position.
Oh, I think I may understand your confusion, now. I don't think of mathematics and logic as equals! I am more confident in first-order logic than I am in, say, ZFC set theory (though I am extremely
confident in both). However, formal system-space is much larger than the few formal systems we use today; I wanted to emphasize that. Logic and set theory were selected for because they were useful,
not because they are the only possible formal ways of thinking out there. In other words, I was trying to right the wrong question, why do mathematics and logic transcend the rest of reality?
What about "both ways simultaneously, the distinction left ambiguous most of the time because it isn't useful"?
In contrast with my esteemed colleague RichardKennaway, I think it's mostly #2. Before the Peano axioms, people talking about numbers might have been talking about any of a large class of things
which discrete objects in the real world mostly model. It was hard to make progress in math past a certain level until someone pointed out axiomatically exactly which
things-that-discrete-objects-in-the-real-world-mostly-model it would be most productive to talk about.
Concordantly, the situation of pre-axiom speakers is much like that of people from Scotland trying to talk to people from the American South and people from Boston, when none of them knows the rules
of their grammar. Edit: Or, to be more precise, it's like two scots speakers as fluent as Kawoomba talking about whether a solitary, fallen tree made a "sound," without defining what they mean by
Aye, right. Yer bum's oot the windae, laddie. Ye dinna need tae been lairnin a wee Scots tae unnerstan, it's gaein be awricht! Ane leid is enough.
EY seems to be taken with the resemblance between a causal diagram and the abstract structure of axioms, inferences and theorems in mathematical logic. But there are differences: with causality, our evidence is the latest causal output, the leaf nodes. We have to trace back to the Big Bang from them. However, in maths we start from axioms, and cannot get directly to the theorems or leaf nodes. We could see this process as exploring a pre-existing territory, but it is hard to see what this adds, since the axioms and rules of inference are sufficient for truth, and it is hard to see, in EY's presentation, how literally he takes the idea.
Er, no, causal models and logical implications seem to me very different in how they propagate modularly. Unifying the two is going to be troublesome.
We could see this process as exploring a pre-existing territory, but it is hard to see what this adds, since the axioms and rules of inference are sufficient for truth, and it is hard to see, in EY's presentation, how literally he takes the idea.
It's useful for reasoning heuristically about conjectures.
Could I have an example?
I would read this:
axioms pin down that we're talking about numbers as opposed to something else.
as
axioms pin down that we're talking about some system that behaves like numbers as opposed to something else.
Lots of things in both real and imagined worlds behave like numbers. It's most convenient to pick one of them and call them "The Numbers" but this is really just for the sake of convenience and
doesn't necessarily give them elevated philosophical status. That would be my position anyway.
We don't know whether the universe is finite or not. If it is finite, then there is nothing in it that fully models the natural numbers. Would we then have to say that the numbers did not exist? If
the system that we're referring to isn't some physical thing, what is it?
Finite subsets of the naturals still behave like naturals.
Not precisely. In many ways, yes, but for example they don't model the axiom of PA that says that every number has a successor.
True, but the axiom of induction holds, and that is the most useful one.
I've realised that I'm slightly more confused on this topic than I thought.
As non-logically omniscient beings, we need to keep track of hypothetical universes which are not just physically different from our own, but which don't make sense - i.e. they contain logical
contradictions that we haven't noticed yet.
For example, let T be a Turing machine where we haven't yet established whether or not T halts. Then one of the following is true but we don't know which one:
• (a) The universe is infinite and T halts
• (b) The universe is infinite and T does not halt
• (c) The universe is finite and T halts
• (d) The universe is finite and T does not halt
If we then discover that T halts, we not only assign zero probability to (b) and (d), we strike them off the list entirely. (At least that's how I imagine it, I haven't yet heard anyone describe
approaches to logical uncertainty).
But it feels like there should also be (e) - "the universe is finite and the question of whether or not T halts is meaningless". If we were to discover that we lived in (e) then all infinite
universes would have to be struck off our list of meaningful hypothetical universes, since we are viewing hypothetical universes as mathematical objects.
But it's hard to imagine what would constitute evidence for (or against) (e). So after 5 minutes of pondering, that more or less maps out my current state of confusion.
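For concreteness (this example is my own addition, not something the thread supplies): a standard candidate for such a T is a program that searches for a counterexample to Goldbach's conjecture. It halts if and only if some even number greater than 2 is not the sum of two primes, and nobody currently knows whether that ever happens. A minimal Python sketch:

# Halts iff Goldbach's conjecture is false; its halting status is an open question.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_counterexample_search():
    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1)):
            return n  # halts: n is an even number that is not a sum of two primes
        n += 2        # otherwise keep searching, possibly forever

Whether a call to goldbach_counterexample_search() ever returns is a purely mathematical question, which is what makes it a useful stand-in for the T discussed above.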
I think you're confused if you think the finitude of the universe matters in answering the mathematical question of whether T halts. Answering that question may be of interest for then figuring out
whether certain things in our universe that behave like Turning machines behave in certain ways, but the mathematical question is independent.
Your confusion is that you think there need to be objects of some kind that correspond to mathematical structures that we talk about. Then you've got to figure out what they are, and that seems to be
tricky however you cut it.
I agree that the finitude of the universe doesn't matter in answering the mathematical question of whether T halts. I was pondering whether the finitude of the universe had some bearing on whether
the question of T halting is necessarily meaningful (in an infinite universe it surely is meaningful, in a finite universe it very likely is but not so obviously so).
Surely if the infinitude of the universe doesn't affect that statement's truth, it can't affect that statement's meaningfulness? Seems pretty obvious to me that the meaning is the same in a finite
and an infinite universe: you're talking about the mathematical concept of a Turing machine in both cases.
Conditional on the statement being meaningful, infinitude of the universe doesn't affect the statement's truth. If the meaningfulness is in question then I'm confused so wouldn't assign very high or
low probabilities to anything.
• I have a very strong intuition that there is a unique (up to isomorphism) mathematical structure called the "non-negative integers"
• I have a weaker intuition that statements in second-order logic have a unique meaningful interpretation
• I have a strong intuition that model semantics of first-order logic is meaningful
• I have a very strong intuition that the universe is real in some sense
It's possible that my intuition might be wrong though. I can picture the integers in my mind but my picture isn't completely accurate - they basically come out as a line of dots with a "going on
forever" concept at the end. I can carry on pulling dots out of the "going on forever", but I can't ever pull all of them out because there isn't room in my mind.
Any attempt to capture the integers in first-order logic will permit nonstandard models. From the vantage point of ZF set theory there is a single "standard" model, but I'm not sure this helps -
there are just nonstandard models of set theory instead. Similarly I'm not sure second-order logic helps as you pretty much need set theory to define its semantics.
So if I'm questioning everything it seems I should at least be open to the idea of there being no single model of the integers which can be said to be "right" in a non-arbitrary way. I'd want to
question first order logic too, but it's hard to come up with a weaker (or different) system that's both rigorous and actually useful for anything.
I've realized one thing though (based on this conversation) - if the universe is infinite, defining the integers in terms of the real world isn't obviously the right thing to do, as the real world
may be following one of the nonstandard models of the integers. Updating in favor of meaningfulness not being dependent on infinitude of universe.
The Peano Arithmetic talks about the Successor function, and jazz. Did you know that the set of finite strings over a single-symbol alphabet also satisfies the Peano Axioms? Did you know that in ZFC, defining the set of all sets containing only other members of the parent set with lower cardinality, and then saying {} is a member, obeys the Peano Axioms? Did you know that saying you have a Commutative Monoid with right division, that multiplication with something other than identity always yields a new element, and that the set {1} is productive, obeys the Peano Axioms? Did you know the even naturals obey the Peano Axioms? Did you know any fully ordered set with an infimum, but no supremum, obeys the Axioms?
There is no such thing as "Numbers," only things satisfying the Peano Axioms.
Did you know that the set of finite strings of a single symbol alphabet also satisfies the Peano Axioms?
Surely the set of finite strings in an alphabet of no-matter-how-many-symbols satisfies the Peano axioms? e.g. using the English alphabet (with A=0, B=S(A), C=S(B), ..., AA=S(Z), AB=S(AA), etc. would make a base-26 system).
A single-symbol alphabet is more interesting (empty string = 0, successor function = append another symbol); the system you describe is more succinctly described using a concatenation operator:
• 0 = 0, 1 = S0, 2 = S1 ... 9 = S8.
• For All b in {0,1,2,3,4,5,6,7,8,9}, a in N: ab = a x S9 + b
From these definitions we get, example-wise:
• 10 = 1 x S9 + 0 = SSSSSSSSSS0
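A minimal sketch of that model (my own illustration, in Python; "|" stands for the alphabet's single symbol):

ZERO = ""                       # zero is the empty string

def succ(x):
    return x + "|"              # successor: append one more symbol

def from_decimal(numeral):
    # read a decimal numeral using the rule above: ab = a x S9 + b
    total = ZERO
    for digit in numeral:
        total = total * 10 + "|" * int(digit)   # multiply by S9 (ten), then add the digit
    return total

assert from_decimal("10") == "|" * 10           # 10 = SSSSSSSSSS0, as in the example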
I'm not quite sure what you're saying here - that "Numbers" don't exist as such but "the even naturals" do exist?
I think s/he is saying there is no Essence of Numberhood beyond satisfaction of the PA's.
Just to be clear, I assume we're talking about the second order Peano axioms here?
I'm a little confused as to which of two positions this is advocating:
1. Numbers are real, serious things, but the way that we pick them out is by having a categorical set of axioms. They're interesting to talk about because lots of things in the world behave like
them (to some degree).
2. Mathematical talk is actually talk about what follows from certain axioms. This is interesting to talk about because lots of things obey the axioms and so exhibit the theorems (to some degree).
I read it as (1), with a side order of (2). Mathematical talk is also about what follows from certain axioms. The axioms nail it down so that mathematicians can be sure what other mathematicians are
talking about.
Both of these have some problems. The first one requires you to have weird, non-physical numbery-things.
Not weird, non-physical numbery-things, just non-physical numbery-things. If they seem weird, maybe it's because we only noticed them a few thousand years ago.
Not only this, but they're a special exception to the theory of reference that's been developed so far, in that you can refer to them without having a causal connection.
No more than a magnetic field is a special exception to the theory of elasticity. It's just a phenomenon that is not described by that theory.
But EY insists that maths does come under correspondence/reference!
"to delineate two different kinds of correspondence within correspondence theories of truth.""
I think it's worth mentioning explicitly that the second-order axiom introduced is induction.
For thousands of years, mathematicians tried proving the parallel postulate from Euclid's four other postulates, even though there are fairly simple counterexamples which show such a proof to be
impossible. I suspect that at least part of the reason for this delay is a failure to appreciate this post's point: that a "straight line", like a "number", has to be defined/specified by a set of
axioms, and that a great circle is in fact a "straight line" as far as the first four of Euclid's postulates are concerned.
That's not correct. Elliptic geometry fails to satisfy some of the other postulates, depending on how they are phrased. I'm not too familiar with the standard ways of making Euclid's postulates
rigorous, but if you're looking at Hilbert's axioms instead, then elliptic geometry fails to satisfy O3 (the third order axiom): if three points A, B, C are on a line, then any of the points is
between the other two. Possibly some other axioms are violated as well.
Notably, elliptic geometry does not contain any parallel lines, while it is a theorem of neutral geometry that parallel lines do in fact exist.
Hyperbolic geometry was actually necessary to prove the independence of Euclid's fifth postulate, and few would call it a "fairly simple counterexample".
I agree that introducing elliptic geometry (and other simple examples like the Fano plane) earlier on in history would have made the discussion of Euclid's fifth postulate much more coherent much sooner.
Do we need a process for figuring out which objects are likely to behave like numbers? And as good Bayesians, for figuring out how likely that is?
Er, yes? I mean it's not like we're born knowing that cars behave like integers and outlet electricity doesn't, since neither of those things existed ancestrally.
Wait, what? We may not be born knowing what cars and electricity are, but I would be surprised if we weren't born with an ability (or the capacity to develop an ability) to partition our model of a
car-containing section of universe into discrete "car" objects, while not being able to do the same for "electric current" objects.
I'm pretty sure that we're born knowing cars and carlike objects behave like integers.
I think our eyes (or visual cortex) knows that certain things (up to 3 or 4 of them) behave like integers since it bothers to count them automatically.
The ancestral environment included people (who behave like integers over moderate time spans) and water (which doesn't behave like integers).
The better question would have been "how do people identify objects which behave like integers?".
The better question would have been "how do people identify objects which behave like integers?".
The same way we identify objects which satisfy any other predicate? We determine whether or not something is a cat by comparing it to our knowledge of what cats are like. We determine whether or not
something is dangerous by comparing it to our knowledge of what dangerous things are like.
Why do you ask this question specifically of the integers? Is there something special about them?
How do you determine whether a physical process "behaves like integers"? The second-order axiom of induction sounds complicated, I cannot easily check that it's satisfied by apples. If you use some
sort of Bayesian reasoning to figure out which axioms work on apples, can you describe it in more detail?
I don't have an answer to the specific question, only to the class of questions. To approach understanding this, we need to distinguish between reality and what points to reality, i.e, symbols. Our
skill as humans is in the manipulation of symbols, as a kind of simulation of reality, with greater or lesser workability for prediction, based in prior observation, of new observations.
"Apples" refers, internally, to a set of responses we created through our experience. We respond to reality as an "apple" or as a "set of apples," only out of our history. It's arbitrary. Counting,
and thus "behavior like integers" applies to the simplified, arbitrary constructs we call "apples." Reality is not divided into separate objects, but we have organized our perceptions into named objects.
Examples. If an "apple" is a unique discriminable object, say all apples have had a unique code applied to them, then what can be counted is the codes. Integer behavior is a behavior of codes.
Unique apples can be picked up one at a time, being transferred to one basket or another. However, real apples are not a constant. Apples grow and apples rot. Is a pile of rotten apple an "apple"?
Is an apple seed an apple? These are questions with no "true" answer, rather we choose answers. We end up with a binary state for each possible object: "yes, apple," or "no, not apple." We can count
these states, they exist in our mind.
If "apple" refers to a variety, we may have Macintosh, Fuji, Golden delicious, etc.
So I have a basket with two apples in it. That is, five pieces of fruit that are Macintosh and three that are Fuji.
I have another basket with two apples in it. That is, one Fuji and one Golden Delicious.
I put them all into one basket. How many apples are in the basket? 2 + 2 = 3.
The question about integer behavior is about how categories have been assembled. If "apple" refers to an individual piece of intact fruit, we can pick it up, move it around, and it remains the same
object; it's unique, there is no other the same in the universe, and it belongs to a class of objects that is, again, unique as a class. The class is countable, and classes will display integer behavior.
That's as far as I've gotten with this. "Integer behavior" is not a property of reality, per se, but of our perceptions of reality.
Well, it comes from the fact that apples in a bowl is Exclusively just that, as verified by your Bayesian reasoning. There are no other "chains" of successors (shadow apples? I can't even imagine a
good metaphor).
So, now you in fact have that bowl of apples narrowed down to {0, S0, SS0, SSS0, ...} which is isomorphic to the natural numbers, so all other natural number properties will be reflected there.
Humans need fantasy to be human.
"Tooth fairies? Hogfathers? Little—"
Yes. As practice. You have to start out learning to believe the little lies.
"So we can believe the big ones?"
Yes. Justice. Mercy. Duty. That sort of thing.
"They're not the same at all!"
You think so? Then take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of justice, one molecule of mercy.
Susan and Death, in Hogfather by Terry Pratchett
So far we've talked about two kinds of meaningfulness and two ways that sentences can refer; a way of comparing to physical things found by following pinned-down causal links, and logical reference
by comparison to models pinned-down by axioms. Is there anything else that can be meaningfully talked about? Where would you find justice, or mercy?
(Note: this is my first post. I may be wrong, and if so am curious as to how. Anyway, I figure it's high time that my beliefs stick their neck out. I expect this will hurt, and apologize now should I
later respond poorly.)
This may be the answer to a different question, but...
I play lots of role-playing games. Role-playing games are like make-believe; events in them exist in a shared counter-factual space (in the players' imagination). Make-believe has a problem: if two
people imagine different things, who is right? (This tends to end with a bunch of kids arguing about whether the fictional T-Rex is alive or dead).
Role-playing games solve this problem by handing authority over various facets of the game to different things. The protagonists are controlled by their respective players, the results of choices by
dice and rules, and most of the fictional world by the Game Master.*
So, in a role-playing game, when you ask what is true[RPG], you should direct that question to the appropriate authority. Basically, truth[RPG] is actually canon (in the fandom sense; TV Trope's page
is good, but comes with the usual where-did-my-evening-go caveats).
Similarly, if we ask "where did Luke Skywalker go to preschool?", we're asking a question about canon.
That said, even canon needs to be internally consistent. If someone with authority were to claim that Tatooine has no preschools, then we can conclude that Luke Skywalker didn't go to preschool. If
an authority claims two inconsistent things, we can conclude that the authority is wrong (namely, in the mathematical sense the canon wouldn't match any possible model).
I've long felt that ideas like morality and liberty are a variety of canon.
Specifically, you can have authorities (a religion or philosopher telling you stuff), and those authorities can be provably wrong (because they said something inconsistent), but these ideas exists in
a kind of shared imaginary space. Also, people can disagree with the canon and make up their own ideas.
Now, that space is still informed by reality. Even in fiction, we expect gravity to drop off as the square of distance, and we expect solid objects to be unable to pass through each other.** With
ideas, we can state that they are nonsensical (or, at minimum, not useful) if they refers to real things which don't exist. A map of morality is a map of a non-real thing, but morality must interface
with reality to be useful, so anywhere the interface doesn't line up with reality, morality (or its map) is wrong.
*This is one possible breakdown. There are many others.
**In most games/stories, anyway. At first glance I'd expect morality to be better bound to reality, but I suppose there have been plenty of people whose moral system boiled down to "don't do anything Ma'at would disapprove of", backed up with concepts like the literal weight of sin (vs. the weight of a feather).
It so happens that the three "big lies" death mentions are all related to morality/ethics, which is a hard question. But let me take the conversation and change it a bit:
"So we can believe the big ones?"
Yes. Anger. Happiness. Pain. That sort of thing.
"They're not the same at all!"
You think so? Then take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of happiness, one molecule of pain.
In this version, the final argument is still correct -- if I take the universe and grind it down to a sieve, I will not be able to say "woo! that carbon atom is an atom of happiness". Since the
penultimate question of this meditation was "Is there anything else", at least I can answer that question.
Clearly, we want to talk about happiness for many reasons -- even if we do not value happiness in itself (for ourselves or others), predicting what will make humans happy is useful to know stuff
about the world. Therefore, it is useful to find a way that allows us to talk about happiness. Happiness, though, is complicated, so let us put it aside for a minute to ponder something simpler: a
solar system. I will simplify here: a solar system is one star and a bunch of planets rotating around it. Though solar systems affect each other through gravity or radiation, most of the effects of the relative motions inside a solar system come from inside itself, and this pattern repeats itself throughout the galaxy. Much like happiness, being able to talk about solar systems is useful --
though I do not particularly value solar systems in and of themselves, it's useful to have a concept of "a solar system", which describes things with commonalities, and allows me to generalize.
If I grind the universe, I cannot find an atom that is a solar system atom -- grinding the universe down destroys the "solar system" useful pattern. For bounded minds, having these patterns leads to
good predictive strength without having to figure out each and every atom in the solar system.
In essence, happiness is no different than solar system -- both are crude words to describe common patterns. It's just that happiness is a feature of minds (mostly human minds, but we talk about how
dogs or lizards are happy, sometimes, and it's not surprising -- those minds are related algorithms). I cannot say where every atom is in the case of a human being happy, but some atom configurations
are happy humans, and some are not.
So: at the very least, happiness and solar systems are part of the causal network of things. They describe patterns that influence other patterns.
Mercy is easier than justice and duty. Mercy is a specific configuration of atoms (a human) behaving in a specific way -- even though the human feels they are entitled to cause another human hurt
("feeling entitled" is a set of specific human-mind-configurations, regardless of whether "entitlement" actually exists), but does not do so (for specific reasons, etc. etc.). In short, mercy
describes specific patterns of atoms, and is part of causal networks.
Duty and justice -- I admit that I'm not sure what my reductionist metaethics are, and so it's not obvious what they mean in the causal network.
We could make it even easier :P
You say that a tiger has stripes, but I looked at some tiger atoms and didn't see any stripes.
The harder question is what is a valid way of figuring out the important properties of the system.
The statement that the world is just is a lie. There exist possible worlds that are just - for instance, these worlds would not have children kidnapped and forced to kill - and ours is not one of them.
Thus, justice is a meaningful concept. Justice is a concept defined in terms of the world (pinned-down causal links) and also irreducibly normative statements. Normative statements do not refer to
"the world". They are useful because we can logically deduce imperatives from them. "If X is just, then do X." is correct, that is:
Do the right thing.
I am not entirely sure how you arrived at the conclusion that justice is a meaningful concept. I am also unclear on how you know the statement "If X is just, then do X" is correct. Could you
elaborate further?
In general, I don't think it is a sufficient test for the meaningfulness of a property to say "I can imagine a universe which has/lacks this property, unlike our universe, therefore it is meaningful".
I did not intend to explain how i arrived at this conclusion. I'm just stating my answer to the question.
Do you think the statement "If X is just, then do X" is wrong?
Like army1987 notes, it is an instruction and not a statement. Considering that, I think "if X is just, then do X" is a good imperative to live by, assuming some good definition of justice. I don't
think I would describe it as "wrong" or "correct" at this point.
OK. Exactly what you call it is unimportant.
What matters is that it gives justice meaning.
It may be incomplete. Do you have a place for Mercy?
The reason I'm not making distinctions among different moral words, though such distinctions exist in language, is that it seems the only new problem created by these moral words is understanding
morality. Once you understand right and wrong, just and unjust can be defined just like you define regular words, even if something can be just but immoral.
the statement "If X is just, then do X"
That's an instruction, not a statement.
They exist in the same sense that numbers exist, or that meaningful existence exists, or that meaningfulness exists.
Once you grind the universe into powder, none of those things exists anymore.
I was going to say that yes, I think there is another kind of thing that can be meaningfully talked about, and "justice" and "mercy" and "duty" have something to do with that sort of thing, but a
more prototypical example would be "This court has jurisdiction". Especially if many experts were of the opinion that it didn't, but the judge disagreed, but the superior court reversed her, and now
the supreme court has decided to hear the case.
But then I realized that there was something different about that kind of "truth": I would not want an AI to assign a probability to the proposition The court did, in fact, have jurisdiction (nor to,
oh, It is the duty of any elected official to tell the public if they learn about a case of corruption, say). I think social constructions can technically be meaningfully talked about among humans,
and they are important as hell if you want to understand human communication and behavior, but I guess on reflection I think that the fact that I would want an AI to reason in terms of more basic
facts is a hint that if we are discussing epistemology, if we're discussing what sorts of thingies we can know about and how we can know about them, rather than discussing particular properties of
the particularly interesting thingies called humans, then it might be best to say that "The judge wrote in her decision that the court had jurisdiction" is a meaningful statement in the sense under
consideration, but "The court had jurisdiction" is not.
I would find them under the category of patterns.
A neural network is very good at recognising patterns; and human brains run on a neural network architecture. Given a few examples of what a word does or does not mean, we can quickly recognise the
pattern and fit it into our vocabulary. (Apparently, this can be used in language classes; the teacher will point to a variety of objects, indicating whether they are or are not vrugte, for example;
and it won't take that many examples before the student understands that vrugte means fruit but not vegetables).
Justice and mercy are not patterns of objects, but rather patterns of action. The man killed his enemy, but has a wife and children to support; sending him to Death Row might be just, but letting him
have some way of earning money while imprisoned might be merciful. Similarly, happy, sad, and angry are emotional patterns; a person acts in this way when happy, and acts in that way when sad.
Justice, mercy, duty, etc are found by comparison to logical models pinned down by axioms. Getting the axioms right is damn tough, but if we have a decent set we should be able to say "If Alex kills
Bob under circumstances X, this is unjust." We can say this the same way that we can say "Two apples plus two apples is four apples." I can't find an atom of addition in the universe, and this
doesn't make me reject addition.
Also, the widespread convergence of theories of justice on some issues (eg. Rape is unjust.) suggests that theories of justice are attempting to use their axioms to pin down something that is already
there. Moral philosophers are more likely to say "My axioms are leading me to conclude rape is a moral duty, where did I mess up?" than "My axioms are leading me to conclude rape is a moral duty,
therefore it is." This also suggests they are pinning down something real with axioms. If it was otherwise, we would expect the second conclusion.
"theories of justice are attempting to use their axioms to pin down something that is already there"
So in other words, duty, justice, mercy--morality words--are basically logical transformations that transform the state of the universe (or a particular circumstance) into an ought statement.
Just as we derive valid conclusions from premises using logical statements, we derive moral obligations from premises using moral statements.
The term 'utility function' seems less novel now (novel as in, a departure from traditional ethics).
This is my view.
Not quite. They don't go all the way to completing an ought statement, as this doesn't solve the Is/Ought dichotomy. They are logical transformations that make applying our values to the universe
much easier.
"X is unjust" doesn't quite create an ought statement of "Don't do X". If I place value on justice, that statement helps me evaluate X. I may decide that some other consideration trumps justice. I
may decide to steal bread to feed my starving family, even if I view the theft as unjust.
I've thought about this for a while, and I feel like you can replace "Fantasy" and "Lies" with "Patterns" in that dialogue, and have it make sense, and it also appears to be an answer to your
questions. That being said, it also feels like a sort of a cached thought, even though I've thought about it for a while. However, I can't think of a better way to express it and all of the other
thoughts I had appeared to be significantly lower caliber and less clear.
Considering that, I should then ask "Why isn't 'Patterns' the answer?'
In people's brains, and in papers written by philosophy students.
"Justice" and "mercy" can be found by looking at people, and in particular how people treat each other. They're physical things, although they're really complicated kinds of physical things.
In particular, the kind of thing that is destroyed when you grind it down into powder.
Humans need fantasy to be human.
"Tooth fairies? Hogfathers? Little—"
Yes. As practice. You have to start out learning to believe the little lies.
"So we can believe the big ones?"
Yes. Cars. Chairs. Bicycles. That sort of thing.
"They're not the same at all!"
You think so? Then take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of car, one molecule of bicycle.
Susan and Death, in Hogfather by Terry Pratchett
Same thing.
So far we've talked about two kinds of meaningfulness and two ways that sentences can refer; a way of comparing to physical things found by following pinned-down causal links, and logical
reference by comparison to models pinned-down by axioms. Is there anything else that can be meaningfully talked about? Where would you find justice, or mercy?
You find them inside counterfactual statements about the reactions of an implied hypothetical representative human, judging under implied hypothetical circumstances in which they have access to
all relevant knowledge. There is clearly justice if a wide variety of these hypothetical humans agree that there is, under a wide variety of these hypothetical circumstances; there is clearly not
justice if they agree that there is not. If the hypothetical people disagree with each other, then the definition fails.
Talking about things like justice, mercy and duty is meaningful, but the meanings are intermediated by big, complex webs of abstractions which humans keep in their brains, and the algorithms people
use to manipulate those webs. They're unambiguous only to the extent to which people successfully keep those webs in sync with each other. In practice, our abstractions mainly work by combining bags
of weak classifiers and feature-weighted similarity to positive and negative examples. This works better for cases that are similar to the training set, worse for cases that are novel and weird, and
better for simpler abstractions and abstractions built on simpler constituents.
Why couldn't the hypothetical omniscient people inside the veil of ignorance decide that justice doesn't exist? Or if they could, how does that paragraph go towards answering the meditation? What
distinguishes them from the hypothetical death who looks through everything in the universe to try to find mercy? Aren't you begging the question here?
Justice - The quality of being "just" or "fair". Humans call a situation fair when everyone involved is happy afterwards, without having had their desires forcibly thwarted (e.g. being strapped into
a chair and hooked into a morphine drip) along the way.
Mercy - Compassionate or kindly forbearance shown toward an offender, an enemy, or other person in one's power. Humans choose to engage in actions characterized this way on a daily basis.
Duty - Something that one is expected or required to do by moral or legal obligation. Legal duties certainly exist; Earth is not an anarchy.
Justice, mercy, and duty are only words. The important question to ask is whether or not they are useful. I certainly think they are; I use each of those words at least once a week. Once the symbols
have been replaced by substance, it is clear that we should not be looking for those things in single atoms, but in very large collections of them we call "humans", or slightly smaller (but still very
large) collections we call "human brains".
And as far as we know, atoms are not arranged in configurations that have the properties we ascribe to the tooth fairy.
This is a really good post.
If I can bother your mathematical logician for just a moment...
Hey, are you conscious in the sense of being aware of your own awareness?
Also, now that Eliezer can't ethically deinstantiate you, I've got a few more questions =)
You've given a not-isomorphic-to-numbers model for all the prefixes of the axioms. That said, I'm still not clear on why we need the second-to-last axiom ("Zero is the only number which is not the
successor of any number.") -- once you've got the final axiom (recursion), I can't seem to visualize any not-isomorphic-to-numbers models.
Also, how does one go about proving that a particular set of axioms has all its models isomorphic? The fact that I can't think of any alternatives is (obviously, given the above) not quite sufficient.
Oh, and I remember this story somebody on LW told, there were these numbers people talked about called...um, I'm just gonna call them mimsy numbers, and one day this mathematician comes to a seminar
on mimsy numbers and presents a proof that all mimsy numbers have the Jaberwock property, and all the mathematicians nod and declare it a very fine finding, and then the next week, he comes back, and
presents a proof that no mimsy numbers have the Jaberwock property, and then everyone suddenly loses interest in mimsy numbers...
Point being, nothing here definitely justifies thinking that there are numbers, because someone could come along tomorrow and prove ~(2+2=4) and we'd be done talking about "numbers". But I feel
really really confident that that won't ever happen and I'm not quite sure how to say whence this confidence. I think this might be similar to your last question, but it seems to dodge
RichardKennaway's objection.
I'm still not clear on why we need the second-to-last axiom ("Zero is the only number which is not the successor of any number.")
I guess it is not necessary. It was just an illustration of a "quick fix", which was later shown to be insufficient.
So this is where (one of the inspirations for) Eliezer's meta-ethics comes from! :)
A quick refresher from a former comment:
Cognitivism: Yes, moral propositions have truth-value, but not all people are talking about the same facts when they use words like "should", thus creating the illusion of disagreement.
... and now from this post:
Some people might dispute whether unicorns must be attracted to virgins, but since unicorns aren't real - since we aren't locating them within our universe using a causal reference - they'd just
be talking about different models, rather than arguing about the properties of a known, fixed mathematical model.
(This little realization also holds a key to resolving the last meditation, I suppose.)
I've heard people say the meta-ethics sequence was more or less a failure since not that many people really understood it, but if these last posts were taken as prerequisite reading, it would be at
least a bit easier to understand where Eliezer's coming from.
I've heard people say the meta-ethics sequence was more or less a failure since not that many people really understood it, but if these last posts were taken as prerequisite reading, it would be
at least a bit easier to understand where Eliezer's coming from.
Agreed, and disappointed that this comment was downvoted.
First post in this sequence that lives up to the standard of the old classics. Love it.
Yeah, but I've found the previous posts much more useful for coming up with clear explanations aimed at non-LWers, and I presume they'd make a better introduction to some of the core LW epistemic
rationality than just throwing "The Simple Truth" at them.
It's a pretty hard balance to strike that's probably different for everyone, between incomprehensibility and boringness.
I already more-or-less knew most of the stuff in the previous posts in this sequences and still didn't find them boring.
Agree. When I first read The Simple Truth, I thought Eliezer was endorsing pragmatism over correspondence.
I'm still wondering what The Simple Truth is about. My best guess is that it is a critique of instrawmantalism.
In my opinion, Causal Diagrams and Causal Models is far superior to Timeless Causality.
I am not saying that there is anything wrong with "Timeless Causality", or any of Eliezer's old posts, but this sequence goes into enough depth of explanation that even someone who has not read the
older sequences on Less Wrong would have a good chance of understanding it.
Why does 2 + 2 come out the same way each time? Never mind the question of why the laws of physics are stable - why is logic stable? Of course I can't imagine it being any other way, but that's
not an explanation.
My short answer is "because we live in a causal universe".
To expand on that:
Logic is a process that has been specifically designed to be stable. Any process that has gone through a design specifically intended to make it stable, and refined for stability over generations, is
going to have a higher probability of being stable. Logic, in short, is more likely than anything else in the universe to be stable.
So then the question is not why logic specifically is stable - that is by design - but rather whether it is possible for anything in the universe to be stable. And there is one thing that does appear
to be stable; that if you have the same cause, then you will have the same effect. That the universe is (at least mostly) causal. It is that causality that gives logic its stability, as far as I can tell.
I love this inquiry.
Numbers do not appear in reality, other than "mental reality." 2+2=4 does not appear outside of the mind. Here is why:
To know that I have two objects, I must apply a process to my perception of reality. I must recognize the objects as distinct, I must categorize them as "the same" in some way. And then I apply
another process, "counting." That is applied to my collected identifications, not to reality itself, which can just as easily be seen as unitary, or sliced up in a practically infinite number of ways.
Number, then, is a product of brain activity, and the observed properties of numbers are properties of brain process. Some examples.
I put two apples in a bowl. I put two more apples in the bowl. How many apples are now in the bowl?
We may easily say "four," because most of the time this prediction holds. However, it's a mixing bowl, used as a blender, and what I have now is a bowl of applesauce. How many apples are in the bowl?
I can't count them! I put four apples in, and none come out! Or some smaller number than four. Or a greater number (If I add some earth, air, fire, and water, and wait a little while....)
Apples are complex objects. How about it's two deuterium molecules? (Two deuterons each, with two electrons, electronically bound.) How about the bowl is very small, confining the molecules, reducing
their freedom of movement, and their relative momentum is, transiently, close to zero?
How many deuterons? Initially, four, but ... it's been calculated that after a couple of femtoseconds, there are none, there is one excited atom of Beryllium-8, which promptly decays into two helium
nuclei and a lot of energy. In theory. It's only been calculated, it's not been proven, it merely is a possible explanation for certain observed phenomena. Heh!
The point here: the identity of an object, the definition of "one," is arbitrary, a tool, a device for organizing our experience of reality. What if it's two red apples and two green apples? They
don't taste the same and they don't look the same, at least not entirely the same. What we are counting is the identified object, "apple." Not what exists in reality. Reality exists, not "apples,"
except in our experience, largely as a product of language.
The properties of numbers, so universally recognized, follow from the tools we evolved for predicting behavior, they are certainly not absolutes in themselves.
Hah! "Certainly." That, with "believe" is a word that sets off alarms.
The fact that one apple added to one apple invariably gives two apples....
It's almost a tautology. What we have is an iterated identification. There are two objects that are named "apple," they are identical in identification, but separate and distinct. This appears in
time. I'm counting my identifications. The universality of 1+1 = 2 is a product of a single brain design. For an elephant, the same "problem" might be "food plus food equals food."
Basically, you're saying that for an elephant, apples behave like clouds, because the elephant has a concept of apple that is like our concept of cloud. (I hope real elephants aren't this dumb). I
like this a lot, it clarifies what I felt was missing from the cloud analogy.
Having it explicitly stated is helpful. It leads to the insight that at bottom, outside of directly useful concepts and into pure ontology/epistemology, there are no isolated individual integers.
There is only relative magnitude on a broad continuum. This makes approaching QM much simpler.
Mmmm. This is all projected onto elephants, but maybe something like what you say. I was just pointing to a possible alternate processing mode. An elephant might well recognize quantity, but probably
not through counting, which requires language. Quantity might be recognized directly, by visual comparison, for example. Bigger pile/smaller pile. More attraction vs. less attraction, therefore
movement toward bigger pile. Or smell.
I can't figure out why you're getting downvotes though.
1. I'm doing something right.
2. I'm doing something wrong.
3. I write too much.
4. I don't explain well enough.
5. It's Thursday.
6. I have a strange name.
7. I'm Muslim.
8. I'm sensible.
9. I'm not.
10. It means nothing, which also means nothing.
11. Something else.
Thanks, chaosmosis, that was a nice thing to say. ....
(So far I've downvoted many of your comments that contained what I believe to be confused/mystical thinking, dubious statements of unclear meaning that I expect can't be made clear by unpacking
(whatever their poetic qualities may be); also, for similar reasons, some conversations that I didn't like taking place, mostly with chaosmosis, where I downvoted both sides.)
You do write unusually long comments and it's slightly irritating (although I have not downvoted you so far).
Yeah, thanks, Alicorn. I've been "conferencing" -- as we used to call it in the 80s -- for a long time, and I know the problem. I actually love the up/down voting system here. It gives me some fairly
fast feedback as to how I'm occurring to others. I'm primarily here to learn, and learning to communicate effectively in a new context has always brought rewards to me.
Ah, one more thing I'll risk adding here. This is a Yudkowsky thread and discussing my posting may be seriously off-topic. I need to pay more attention to context.
I need to pay more attention to context.
LessWrong is like digression central. Someone will make a post talking about evolutionary psychology, and they'll mention bows and arrows in an example, and then someone else will respond with a study about how bows and arrows weren't used until X date, and then a debate will happen, and then it will go meta, and then, etc.
Would you argue, then, that aliens or AIs might not discover the fact that 1 + 1 = 2, or even consider it a fact at all?
The boundary between physical causality and logical or mathematical implication doesn’t always seem to be clearcut. Take two examples.
(1) The product of two and an integer is an even integer. So if I double an integer I will find that the result is even. The first statement is clearly a timeless mathematical implication. But by
recasting the equation as a procedure I introduce both an implied separation in time between action and outcome, and an implied physical embodiment that could be subject to error or interruption.
Thus the truth of the second formulation strictly depends on both a mathematical fact and physical facts.
(2) The endpoint of a physical process is causally related to the initial conditions by the physical laws governing the process. The sensitivity of the endpoint to the initial conditions is a quite
separate physical fact, but requires no new physical laws: it is a mathematical implication of the physical laws already noted. Again, the relationship depends on both physical and mathematical facts.
Is there a recognized name for such hybrid cases? They could perhaps be described as “quasi-causal” relationships.
Awesome, I was looking for a good explanation of the Peano axioms!
About six months ago I had a series of arguments with my housemate, who's been doing a philosophy degree at a Catholic university. He argued that I should leave the door open for some way other than
observation to gather knowledge, because we had things like maths giving us knowledge in this other way, which meant we couldn't assume we'd come up with some other other way to discover, say,
ethical or aesthetic truths.
I couldn't convince him that all we could do in ethics was reason from axioms, because he didn't understand that maths was just reasoning from axioms --- and I didn't actually understand the Peano
axioms, so I couldn't explain them.
So, thanks for the post.
"Because if you had another separated chain, you could have a property P that was true all along the 0-chain, but false along the separated chain. And then P would be true of 0, true of the
successor of any number of which it was true, and not true of all numbers."
But the axiom schema of induction does not completely exclude nonstandard numbers. Sure, if I prove some property P via P(0) and (for all n, P(n) => P(n+1)), then I get "for all n, P(n)", and I have excluded the possibility of some nonstandard number n for which not P(n). But there are some properties which cannot be proved true or false in Peano Arithmetic, and therefore whose truth values can be altered by the presence of nonstandard numbers.
Can you give me a property P which is true along the zero-chain but necessarily false along a separated chain that is infinitely long in both directions? I do not believe this is possible but I may
be mistaken.
But the axiom schema of induction does not completely exclude
Eliezer isn't using an axiom schema, he's using an axiom of second order logic.
I don't see what the difference is... They look very similar to me.
At some point you have to translate it into a (possibly infinite) set of first-order axioms or you won't be able to perform first-order resolution anyway.
What's wrong with second order resolution?
There's no complete deductive system for second-order logic.
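For readers who, like the comment above, find the two hard to tell apart on the page, here is the contrast written out (my own paraphrase of the standard formulations, not a quote from the post):

First-order induction is an axiom schema - one axiom for every formula φ you can write in the language of arithmetic:

(φ(0) ∧ ∀n (φ(n) → φ(Sn))) → ∀n φ(n)

Second-order induction is a single axiom that quantifies over all properties P, not just the countably many properties expressible by formulas:

∀P [(P(0) ∧ ∀n (P(n) → P(Sn))) → ∀n P(n)]

The schema only constrains the properties we can name, which is why it leaves room for nonstandard models; the single second-order axiom, under full semantics, does not - but, as noted above, there is no complete deductive system for it.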
Not sure if I understand the point of your argument.
Are you saying that in reality every property P has actually three outcomes: true, false, undecidable? And that those always decidable, like e.g. "P(n) <-> (n = 2)" cannot be true for all natural
numbers, while those which can be true for all natural numbers, but mostly false otherwise, are always undecidable for... some other values?
Can you give me a property P which is true along the zero-chain but necessarily false along a separated chain that is infinitely long in both directions? I do not believe this is possible but I
may be mistaken.
I don't know.
Let's suppose that for any specific value V in the separated chain it is possible to make such property PV. For example "PV(x) <-> (x <> V)". And let's suppose that it is not possible to make one
such property for all values in all separated chains, except by saying something like "P(x) <-> there is no such PV which would be true for all numbers in the first chain and false for x".
What would that prove? Would it contradict the article? How specifically?
Are you saying that in reality every property P has actually three outcomes: true, false, undecidable?
By Godel's incompleteness theorem yes, unless your theory of arithmetic has a non-recursively enumerable set of axioms or is inconsistent.
And that those always decidable, like e.g. "P(n) <-> (n = 2)" cannot be true for all natural numbers, while those which can be true for all natural numbers, but mostly false otherwise, are always
undecidable for... some other values?
I'm having trouble understanding this sentence but I think I know what you are asking about.
There are some properties P(x) which are true for every x in the 0 chain, however, Peano Arithmetic does not include all these P(x) as theorems. If PA doesn't include P(x) as a theorem, then it is
independent of PA whether there exist nonstandard elements for which P(x) is false.
Let's suppose that for any specific value V in the separated chain it is possible to make such property PV. What would that prove? Would it contradict the article? How specifically?
I think this is what I am saying I believe to be impossible. You can't just say "V is in the separated chain". V is a constant symbol. The model can assign constants to whatever object in the domain
of discourse it wants to unless you add axioms forbidding it.
Honestly I am becoming confused. I'm going to take a break and think about all this for a bit.
If our axiom set T is independent of a property P about numbers then by definition there is nothing inconsistent about the theory T1 = "T and P" and also nothing inconsistent about the theory T2= "T
and not P".
To say that they are not inconsistent is to say that they are satisfiable, that they have possible models. As T1 and T2 are inconsistent with each other, their models are different.
The single zero-based chain of numbers without nonstandard numbers is a single model. Therefore, if there exists a property about numbers that is independent of any theory of arithmetic, that theory
of arithmetic does not logically exclude the possibility of nonstandard elements.
By Godel's incompleteness theorems, a theory must have statements that are independent from it unless it is either inconsistent or has a non-recursively-enumerable theorem set.
Each instance of the axiom schema of induction can be constructed from a property. The set of properties is recursively enumerable, therefore the set of instances of the axiom schema of induction is
recursively enumerable.
Every theorem of Peano Arithmetic must use a finite number of axioms in its proof. We can enumerate the theorems of Peano Arithmetic by adding increasingly larger subsets of the infinite set of
instances of the axiom schema of induction to our axiom set.
Since the theory of Peano Arithmetic has a recursively enumerable set of theorems it is either inconsistent or is independent of some property and thus allows for the existence of nonstandard elements.
Can you give me a property P which is true along the zero-chain but necessarily false along a separated chain that is infinitely long in both directions?
Pn(x) is "x is the nth successor of 0" (the 0th successor of a number is itself). P(x) is "there exists some n such that Pn(x)".
I don't see how you would define Pn(x) in the language of PA.
Let's say we used something like this:
Pn(x) iff ((0 + n) = x)
Let's look at the definition of +, a function symbol that our model is allowed to define:
a + 0 = a
a + S(b) = S(a + b)
"x + 0 = x" should work perfectly fine for nonstandard numbers.
So going back to P(x):
"there exists some n such that ((0 + n) = x)"
for a nonstandard number x, does there exist some number n such that ((0+n) = x)? Yup, the nonstandard number x! Set n=x.
Oh, but when you said nth successor you meant n had to be standard? Well, that's the whole problem isn't it!
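As an aside, the recursion for + quoted above is easy to play with directly (my own sketch, in Python, with numbers written as nested successor terms):

ZERO = '0'

def S(x):
    return ('S', x)                 # build the term S(x)

def add(a, b):
    if b == ZERO:                   # a + 0 = a
        return a
    _, pred = b                     # b = S(pred)
    return S(add(a, pred))          # a + S(b) = S(a + b)

two = S(S(ZERO))
assert add(two, two) == S(S(S(S(ZERO))))    # SS0 + SS0 = SSSS0

Of course, anything we can actually write down as a term this way is a standard numeral; the disagreement above is precisely over whether the quantified n is allowed to range over nonstandard elements of the model.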
But any nonstandard number is not an nth successor of 0 for any n, even nonstandard n (whatever that would mean). So your rephrasing doesn't mean the same thing, intuitively - P is, intuitively, "x
is reachable from 0 using the successor function".
Couldn't you say:
• P0: x = 0
• PS0: x = S0
• PSS0: x = SS0
and so on, defining a set of properties (we can construct these inductively, and so there is no Pn for nonstandard n), and say P(x) is "x satisfies one such property"?
An infinite number of axioms like in an axiom schema doesn't really hurt anything, but you can't have infinitely long single axioms.
∀x((x = 0) ∨ (x = S0) ∨ (x = SS0) ∨ (x = SSS0) ∨ ...)
is not an option. And neither is the axiom set
P0(x) iff x = 0
PS0(x) iff x = S0
PSS0(x) iff x = SS0
∀x(P0(x) ∨ PS0(x) ∨ PSS0(x) ∨ PSSS0(x) ∨ ...)
We could instead try the axioms
P(0, x) iff x = 0
P(S0, x) iff x = S0
P(SS0, x) iff x = SS0
∀x(∃n(P(n, x)))
but then again we have the problem of n being a nonstandard number.
What is n?
I love this post, and will be recommending it.
Speaking as a non-mathematician I think I would have tried to express 'there's only one chain' by saying something like 'all numbers can be reached by a finite number of repetitions of considering
the successor of a number you've already considered, starting from zero'.
We can try to write that down as "For all x, there is an n such that x = S(S(...S(0)...)) repeated n times."
The two problems that we run into here are: first, that repeating S n times isn't something we know how to do in first-order logic: we have to say that there exists a sequence of repetitions, which
requires quantifying over a set. Second, it's not clear what sort of thing "n" is. It's a number, obviously, but we haven't pinned down what we think numbers are yet, and this statement becomes
awkward if n is an element of some other chain that we're trying to say doesn't exist.
EY is talking from a position of faith that infinite model theory and second-order logic are good and reasonable things.
It is possible to instead start from a position of doubt that the infinite model theory and second order logic are good and reasonable things (based on my memory of having studied in college whether
model theory and second order logic can be formalized within Zermelo-Frankel set theory, and what the first-order-ness of Zermelo-Frankel has to do with it.).
We might be fine with a proof-theoretic approach, which starts with the same ideas "zero is a number", "the successor of a number is a different number", but then goes to a proof-theoretic rule of
induction something like "I'd be happy to say 'All numbers have such-and-such property' if there were a proof that zero has that property and another also proof that if a number has that property,
then its successor also has that property."
We don't need to talk about models at all - in particular we don't need to talk about infinite models.
Second-order arithmetic is sufficient to get what EY wants (a nice pretty model universe) but I have two objections. First it is too strong - often the first sufficient hammer that you find in
mathematics is rarely the one you should end up using. Second, the goal of a nice pretty model universe presumes a stance of faith in (infinite) model theory, but the infinite model theory is not
formalized. If you do formalize it then your formalization will have alternative "undesired" interpretations (by Lowenheim-Skolem).
EY is talking from a position of faith that infinite model theory and second-order logic are good and reasonable things.
I think this is a fallacy of gray. Mathematicians have been using infinite model theory and second-order logic for a while, now; this is strong evidence that they are good and reasonable.
Edit: Link formatting, sorry. I wish there was a way to preview comments before submitting....
Second-order logic is not part of standard, mainstream mathematics. It is part of a field that you might call mathematical logic or "foundations of mathematics". Foundations of a building are
relevant to the strength of a building, so the name implies that foundations of mathematics are relevant to the strength of mainstream mathematics. A more accurate analogy would be the relationship
between physics and philosophy of physics - discoveries in epistemology and philosophy of science are more often driven by physics than the other way around, and the field "philosophy of physics" is
a backwater by comparison.
As is probably evident, I think the good, solid mathematical logic is intuitionist and constructive and higher-order and based on proof theory first and model theory only second. It is easy to
analogize from their names to a straight line between first-order, second-order, and higher-order logic, but in fact they're not in a straight line at all. First-order logic is mainstream
mathematics, second-order logic is mathematical logic flavored with faith in the reality of infinite models and set theory, and higher-order logic is mathematical logic that is (usually) constructive
and proof-theoretic and built with an awareness of computer science.
Your view is not mainstream.
To an extent, but I think it's obvious that most mathematicians couldn't care less whether or not their theorems are expressible in second-order logic.
Yes, because most mathematicians just take SOL at face value. If you believe in SOL and use the corresponding English language in your proofs - i.e., you assume there's only one field of real numbers
and you can talk about it - then of course it doesn't matter to you whether or not your theorem happens to require SOL taken at face value, just like it doesn't matter to you whether your proof uses
~~P->P as a logical axiom. Only those who distrust SOL would try to avoid proofs that use it. That most mathematicians don't care is precisely how we know that disbelief in SOL is not a mainstream
value. :)
The standard story is that everything mathematicians prove is to be interpreted as a statement in the language of ZFC, with ZFC itself being interpreted in first-order logic. (With a side-order of
angsting about how to talk about e.g. "all" vector spaces, since there isn't a set containing all of them -- IMO there are various good ways of resolving this, but the standard story considers it a
problem; certainly in so far as SOL provides an answer to these concerns at all, it's not "the one" answer that everybody is obviously implicitly using.) So when they say that there's only one field
of real numbers, this is supposed to mean that you can formalize the field axioms as a ZFC predicate about sets, and then prove in ZFC that between any two sets satisfying this predicate, there is an
isomorphism. The fact that the semantics of first-order logic don't pin down a unique model of ZFC isn't seen as conflicting with this; the mathematician's statement that there is only one complete
ordered field (up to isomorphism) is supposed to desugar to a formal statement of ZFC, or more precisely to the meta-assertion that this formal statement can be proven from the ZFC axioms.
Mathematical practice seems to me more in line with this story than with yours, e.g. mathematicians find nothing strange about introducing the reals through axioms and then talk about a
"neighbourhood basis" as something that assigns to each real number a set of sets of real numbers -- you'd need fourth-order logic if you wanted to talk about neighbourhood bases as objects without
having some kind of set theory in the background. And people who don't seem to care a fig about logic will use Zorn's lemma when they want to prove something that uses choice, which seems quite
rooted in set theory.
Now I do think that mathematicians think of the objects they're discussing as more "real" than the standard story wants them to, and just using SOL instead of FOL as the semantics in which we
interpret the ZFC axioms would be a good way to, um, tell a better story -- I really like your post and it has convinced me of the usefulness of SOL -- but I think if we're simply trying to describe
how mathematicians really think about what they're doing, it's fairer to say that they're just taking set theory at face value -- not thinking of set theory as something that has axioms that you
formalize in some logic, but seeing it as as fundamental as logic itself, more or less.
Um, I think when an ordinary mathematician says that there's only one complete ordered field up to isomorphism, they do not mean, "In any given model of ZFC, of which there are many, there's only one
ordered field complete with respect to the predicates for which sets exist in that model." We could ask some normal mathematicians what they mean to test this. We could also prove the isomorphism
using logic that talked about all predicates, and ask them if they thought that was a fair proof (without calling attention to the quantification over predicates).
Taking set theory at face value is taking SOL at face value - SOL is often seen as importing set theory into logic, which is why mathematicians who care about it all are sometimes suspicious of it.
Um, I think when an ordinary mathematician says that there's only one complete ordered field up to isomorphism, they do not mean, "In any given model of ZFC, of which there are many, there's only
one ordered field complete with respect to the predicates for which sets exist in that model." We could ask some normal mathematicians what they mean to test this.
The standard story, as I understand it, is claiming that models don't even enter into it; the ordinary mathematician is supposed to be saying only that the corresponding statement can be proven in
ZFC. Of course, that story is actually told by logicians, not by people who learned about models in their one logic course and then promptly forgot about them after the exam. As I said, I don't agree
with the standard story as a fair characterization of what mathematicians are doing who don't care about logic. (Though I do think it's a coherent story about what the informal mathematical English
is supposed to mean.)
Taking set theory at face value is taking SOL at face value - SOL is often seen as importing set theory into logic, which is why mathematicians who care about it all are sometimes suspicious of it.
Is it a fair-rephrasing of your point that what normal mathematicians do requires the same order of ontological commitment as the standard (non-Henkin) semantics of SOL, since if you take SOL as
primitive and interpret the ZFC axioms in it, that will give you the correct powerset of the reals, and if you take set theory as primitive and formalize the semantics of SOL in it, you will get the
correct collection of standard models? 'Cause I agree with that (and I see the value of SOL as a particularly simple way of making that ontological commitment, compared to say ZFC). My point was that
mathematical English maps much more directly to ZFC than it does to SOL (there's still coding to be done, but much less of it when you start from ZFC than when you start from SOL); e.g., you earlier
said that "[o]nly those who distrust SOL would try to avoid proofs that use it", and you can't really use ontological commitments in proofs, what you can actually use is notions like "for all
properties of real numbers", and many notions people actually use are ones more directly present in ZFC than SOL, like my example of quantifying over the neighbourhood bases (mappings from reals to
sets of sets of reals).
I agree with this statement - and yet you did not contradict my statement that second order logic is also not part of mainstream mathematics.
A topologist might care about manifolds or homeomorphisms - they do not care about foundations of mathematics - and it is not the case that only one foundation of mathematics can support topology.
The weaker foundation is preferable.
The last sentence is not obvious at all. The goal of mathematics is not to be correct a lot. The goal of mathematics is to promote human understanding. Strong axioms help with that by simplifying proofs.
If you assume A and derive B you have not proven B but rather A implies B. If you can instead assume a weaker axiom Aprime, and still derive B, then you have proven Aprime implies B, which is
stronger because it will be applicable in more circumstances.
In what "circumstances" are manifolds and homeomorphisms useful?
If you were writing software for something intended to traverse the Interplanetary transfer network then you would probably use charts and atlases and transition functions, and you would study
(symplectic) manifolds and homeomorphisms in order to understand those more-applied concepts.
If an otherwise useful theorem assumes that the manifold is orientable, then you need to show that your practical manifold is orientable before you can use it - and if it turns out not to be
orientable, then you can't use it at all. If instead you had an analogous theorem that applied to all manifolds, then you could use it immediately.
There's a difference between assuming that a manifold is orientable and assuming something about set theory. The phase space is, of course, only approximately a manifold. On a very small level it's -
well, something we're not very sure of. But all the math you'll be doing is an approximation of reality.
So some big macroscopic feature like orientability would be a problem to assume. Orientability corresponds to something in physical reality, and something that clearly matters for your calculation.
The axiom of choice or whatever set-theoretic assumption corresponds to nothing in physical reality. It doesn't matter if the theorems you are using are right for the situation, because they are
obviously all wrong, because they are about symplectic dynamics on a manifold, and physics isn't actually symplectic dynamics on a manifold! The only thing that matters is how easily you can find a
good-enough approximation to reality. More foundational assumptions make this easier, and do not impede one's approximation of reality.
Note that physicists frequently make arguments that are just plain unambiguously wrong from a mathematical perspective.
Well, it's strong evidence that mathematicians find these things useful for publishing papers.
x + Sy = Sz -
That looks a bit odd.
I think the idea is that one speaker got cut off by the other after having said "x+Sy=Sz".
If this were Wikipedia, someone would write a rant about the importance of using typographically correct characters for the hyphen, the minus sign, the en dash, and the em dash ( - − – and — ).
Yeah, I understood that after about 10 seconds of confusion, which seems unnecessary.
I'm new here, so watch your toes...
As has been mentioned or alluded to, the underlying premise may well be flawed. By considerable extrapolation, I infer that the unstated intent is to find a reliable method for comprehending
mathematics, starting with natural numbers, such that an algorithm can be created that consistently arrives at the most rational answer, or set of answers, to any problem.
Everyone reading this has had more than a little training in mathematics. Permit me to digress to ensure everyone recalls a few facts that may not be sufficiently appreciated. Our general education
is the only substantive difference between Homo Sapiens today and Homo Sapiens 200,000 years ago.
With each generation the early education of our offspring includes increasingly sophisticated concepts. These are internalized as reliable, even if the underlying reasons have been treated very
lightly or not at all. Our ability to use and record abstract symbols appeared at about the same time as farming. The concept that "1" stood for a single object and "2" represented the concept of two
objects was established along with a host of other conceptual constructs. Through the ensuing millennia we now have an advanced symbology that enables us to contemplate very complex problems.
The digression is to point out that very complex concepts, such as human logic, require a complex symbology. I struggle with understanding how contemplating a simple artificially constrained problem
about natural numbers helps me to understand how to think rationally or advance the state of the art. The example and human rationality are two very different classes of problem. Hopefully someone
can enlighten me.
There are some very interesting base alternatives that seem to me to be better suited to a discussion of human rationality. Examining the shape of the Pareto front generated by PIBEA (Prospect
Indicator Based Evolutionary Algorithm for Multiobjective Optimization Problems) runs for various real-world variables would facilitate discussions around how each of us weights each variable and
what conditional variables change the weight (e.g., urgency).
Again, I intend no offense. I am seeking understanding. Bear in mind that my background is in application of advanced algorithms in real-world situations.
Due to all this talk about logic I've decided to take a little closer look at Goedel's theorems and related issues, and found this nice LW post that did a really good job dispelling confusion about
completeness, incompleteness, SOL semantics etc.: Completeness, incompleteness, and what it all means: first versus second order logic
If there's anything else along these lines to be found here on LW - or for that matter, anywhere, I'm all ears.
Every number has a successor. If two numbers have the same successor, they are the same number. There's a number 0, which is the only number that is not the successor of any other number. And
every property true at 0, and for which P(Sx) is true whenever P(x) is true, is true of all numbers. In combination, those premises narrow down a single model in mathematical space, up to
isomorphism. If you show me two models matching these requirements, I can perfectly map the objects and successor relations in them
The property "is the only number which is not the successor of any number" manifestly is false for every Sx.
There is a number ' (spoken "prime"). The successor of ' is '. ' and ' are the same number.
There is a number A. Every property which is true of 0, and for which P(Sx) is true whenever P(x) is true, is true of A. The successor of A is B. The successor of B is C. The successor of C is A.
Both of these can be eliminated by adding a property P1. EDIT for correctness: It is true of a number y that if Sx=y, then y≠x; it is further true of the number Sy that if Sx=y, then Sy≠x; and so on. But P1 was not required in your description of numbers.
There is also an infinite series, ... -3, -2, -1, o, 1, 2, 3, ... which also shares all of the properties that are true of zero and for which P(Sx) is true whenever P(x) is true.
I can't easily find a way to exclude any of the infinite chains using the axioms described here.
"Why does 2+2 come out the same way each time?"
Thoughts that seem relevant:
1. Addition is well defined, that is if x=x' and y=y' then x+y = x'+y'. Not every computable transformation has this property. Consider the non-well-defined function <+> on fractions given by a/b <+> c/d = (a+c)/(b+d). We know that 3/9 = 1/3 and 2/5 = 4/10, but 3/9 <+> 4/10 = 7/19 while 1/3 <+> 2/5 = 3/8, and 7/19 != 3/8. (A quick computational check of this appears after the list.)
2. We have the Church-Rosser Theorem http://en.wikipedia.org/wiki/Church%E2%80%93Rosser_theorem as a sort of guarantee (in the lambda calculus) that if I compute one way and you compute another,
then we can eventually reach common ground.
3. If we consider "a logic" to be a set of rules for manipulating strings, then we can come up with some axioms for classical logic that characterize it uniquely. That is to say, we can logically pinpoint classical logic (say, with the axioms of boolean algebra) just like we can logically pinpoint the natural numbers (with the Peano axioms).
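To make the well-definedness point in item 1 concrete, here is a quick computational check (a small Python sketch, not part of the original comment):

from fractions import Fraction

def mediant(a, b, c, d):
    # The <+> operation from item 1: add the numerators and add the denominators.
    # It has to take raw numerators and denominators, because it depends on the
    # representation of a fraction rather than on its value.
    return Fraction(a + c, b + d)

print(Fraction(3, 9) == Fraction(1, 3), Fraction(2, 5) == Fraction(4, 10))  # True True
print(mediant(3, 9, 4, 10))   # 7/19
print(mediant(1, 3, 2, 5))    # 3/8 -- equal inputs as fractions, different outputs, so <+> is not well defined

Ordinary addition, by contrast, gives 11/15 for both pairs, which is exactly the well-definedness property item 1 describes.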
I'd say that your "non-well-defined function on fractions" isn't actually a function on fractions at all; it's a function on fractional expressions that fails to define a function on fractions.
Fair enough. We could have "number expressions" which denote the same number, like "ssss0", "4", "2+2", "2*2". Then the question of well-definedness is whether our method of computing addition gives
the same result for each of these different number expressions.
Because you can prove once and for all that in any process which behaves like integers, 2 thingies + 2 thingies = 4 thingies.
I expected at this point the mathematician to spell out the connection to the earlier discussion of defining addition abstractly - "for every relation R that works exactly like addition..."
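One way to spell that connection out (a sketch, using only the recursive definition of + quoted earlier in the thread, a + 0 = a and a + S(b) = S(a + b)) is to unwind 2 + 2 step by step:

SS0 + SS0
= S(SS0 + S0)     [second defining equation]
= S(S(SS0 + 0))   [second defining equation]
= S(S(SS0))       [first defining equation]
= SSSS0

Any relation R that satisfies those two defining equations is therefore forced to take (SS0, SS0) to SSSS0, which is why 2 + 2 comes out the same way each time, in every structure that behaves like the integers.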
|
{"url":"http://lesswrong.com/lw/f4e/logical_pinpointing/","timestamp":"2014-04-17T18:25:41Z","content_type":null,"content_length":"773193","record_id":"<urn:uuid:d695000c-ce9b-4538-b9cb-78d82d405fb2>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Results 1 - 10 of 19 matches
Serial Dilution: Tracking Bacterial Population Size and Antibiotic Resistance part of Examples
Spreadsheets Across the Curriculum module. Students build spreadsheets to calculate and compare bacterial population sizes from lab cultures in antibiotic-free and antibiotic-containing media.
Subject: Biology:Microbiology
A Healthier You: Modeling a Healthier Weight from Dietary Improvement and Exercise part of Examples
Spreadsheets Across the Curriculum module. Students calculate the number of days it takes for participants in a hypothetical weight-reduction program to reach a target weight. QL: rates of change.
Subject: Health Sciences, Biology
Predator-Prey Interactions -- Modeling the Number of Fishers and Porcupines in New Hampshire part of Examples
Spreadsheets Across the Curriculum activity/Introductory Ecology course. Students build and manipulate spreadsheets to model and graph populations using the Lotka-Volterra predator-prey equations.
Subject: Biology:Ecology
Is It Hot in Here? -- Spreadsheeting Conversions in the English and Metric Systems part of Examples
Spreadsheets Across Curriculum module/Introductory chemistry course. Students build spreadsheets to examine unit conversions between the metric and English systems. Spreadsheet level: Beginner.
Subject: Biology, Chemistry, Geoscience, Physics
Calibrating a Pipettor part of Examples
Spreadsheets Across the Curriculum activity. In advance of an actual lab activity, students virtually simulate the calibration of a laboratory micropipettor. QL: Accuracy and precision.
Subject: Chemistry, Biology, Geoscience
What Time Did The Potato Die? part of Examples
Spreadsheets Across the Curriculum module. Simulating a forensic calculation, students build spreadsheets and create graphs to find the time of death of a potato victim from temperature vs. time
Subject: Biology, Sociology
Bacteria in a Flask -- Spreadsheeting Population Density vs. Time part of Examples
Spreadsheets Across the Curriculum module. Students tabulate and graph data on bacteria density vs. time for a culture. Data start with innoculation and progress through the peak and decline.
Subject: Biology:Microbiology, Health Sciences
From Isotopes to Temperature: Working With A Temperature Equation part of Examples
Spreadsheets Across the Curriculum module. Students build a spreadsheet to examine from a dataset the relation between oxygen isotopes in corals and the temperature of surrounding seawater.
Subject: Biology, Chemistry, Geoscience, :Geology:Geochemistry
Energy Flow through Agroecosystems (Farms) part of Examples
Spreadsheets across the Curriculum module. Students build spreadsheets that allow them to calculate the different values needed to examine energy flow through agroecosystems.
Subject: Engineering, Biology
|
{"url":"http://serc.carleton.edu/sp/library/ssac/examples.html?q1=sercvocabs__43%3A3","timestamp":"2014-04-19T22:26:34Z","content_type":null,"content_length":"33048","record_id":"<urn:uuid:b55005a2-c338-4966-9fb8-7a744378116a>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
|
(770 - 840 C.E.)
Abu Abdullah Mohammad Ibn Musa al-Khawarizmi was born at Khawarizm (Kheva), south of Aral sea. Very little is known about his early life, except for the fact that his parents had migrated to a place
south of Baghdad. The exact dates of his birth and death are also not known, but it is established that he flourished under Al- Mamun at Baghdad through 813-833 and probably died around 840 C.E.
Khawarizmi was a mathematician, astronomer and geographer. He was perhaps one of the greatest mathematicians who ever lived, as, in fact, he was the founder of several branches and basic concepts of
mathematics. In the words of Phillip Hitti, he influenced mathematical thought to a greater extent than any other mediaeval writer. His work on algebra was outstanding, as he not only initiated the
subject in a systematic form but he also developed it to the extent of giving analytical solutions of linear and quadratic equations, which established him as the founder of Algebra. The very name
Algebra has been derived from his famous book Al-Jabr wa-al-Muqabilah. His arithmetic synthesized Greek and Hindu knowledge and also contained his own contribution of fundamental importance to
mathematics and science. Thus, he explained the use of zero, a numeral of fundamental importance developed by the Arabs. Similarly, he developed the decimal system so that the overall system of
numerals, 'algorithm' or 'algorizm' is named after him. In addition to introducing the Indian system of numerals (now generally known as Arabic numerals), he developed at length several arithmetical
procedures, including operations on fractions. It was through his work that the system of numerals was first introduced to Arabs and later to Europe, through its translations in European languages.
He developed in detail trigonometric tables containing the sine functions, which were probably extrapolated to tangent functions by Maslama. He also perfected the geometric representation of conic
sections and developed the calculus of two errors, which practically led him to the concept of differentiation. He is also reported to have collaborated in the degree measurements ordered by Mamun al-Rashid, which were aimed at measuring the volume and circumference of the earth.
The development of astronomical tables by him was a significant contribution to the science of astronomy, on which he also wrote a book. The contribution of Khawarizmi to geography is also
outstanding, in that not only did he revise Ptolemy's views on geography, but also corrected them in detail as well as his map of the world. His other contributions include original work related to
clocks, sun-dials and astrolabes.
Several of his books were translated into Latin in the early 12th century. In fact, his book on arithmetic, Kitab al-Jam'a wal- Tafreeq bil Hisab al-Hindi, was lost in Arabic but survived in a Latin
translation. His book on algebra, Al-Maqala fi Hisab-al Jabr wa-al- Muqabilah, was also translated into Latin in the 12th century, and it was this translation which introduced this new science to the
West "completely unknown till then". He astronomical tables were also translated into European languages and, later, into Chinese. His geography captioned Kitab Surat-al-Ard, together with its maps,
was also translated. In addition, he wrote a book on the Jewish calendar Istikhraj Tarikh al-Yahud, and two books on the astrolabe. He also wrote Kitab al-Tarikh and his book on sun-dials was
captioned Kitab al-Rukhmat, but both of them have been lost.
The influence of Khawarizmi on the growth of science, in general, and mathematics, astronomy and geography in particular, is well established in history. Several of his books were readily translated
into a number of other languages, and, in fact, constituted the university text-books till the 16th century. His approach was systematic and logical, and not only did he bring together the then
prevailing knowledge on various branches of science, particularly mathematics, but also enriched it through his original contribution. No doubt he has been held in high repute throughout the
centuries since then.
|
{"url":"http://www.islamicity.com/Science/Scientists/Khawarizmi.htm","timestamp":"2014-04-21T11:00:18Z","content_type":null,"content_length":"4857","record_id":"<urn:uuid:b85d5f83-6dce-4f5b-8513-ff5d848e378c>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Water Resources of the United States
The following documentation was taken from:
U.S. Geological Survey Water-Resources Investigations Report 94-4002: Nationwide summary of U.S. Geological Survey regional regression equations for estimating magnitude and frequency of floods for
ungaged sites, 1993
Regression equations developed for urban streams in Houston are for estimating peak discharges (QT) having recurrence intervals T that range from 2 to 500 years. The explanatory basin variables used in the equations are drainage area (A), in square miles; the degree of urban development, in percent; and a channel-conveyance factor, AC R^(2/3)/n, from Manning's equation for open-channel flow (n = Manning's roughness coefficient, AC = channel cross-sectional area at the controlling section, in square feet, and R = hydraulic radius, in feet). The drainage area (A) can be measured from topographic maps; the percentage of urban development and the channel properties can be estimated from aerial photographs and field surveys.
Topographic maps, aerial photographs, field surveys, and the following equations are used to estimate the needed peak discharges QT, in cubic feet per second, having selected recurrence intervals T.
Equations are of the following form:
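The published equations and their coefficients are given in the report cited below; what follows is only an illustration of their general shape, since USGS regional regression equations are typically power functions of the basin variables:

QT = a × A^b × (urban development)^c × (AC R^(2/3)/n)^d

where a, b, c, and d stand in as placeholder coefficients for each recurrence interval T, not the published Houston values.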
Liscum, F., and Massey, B.C., 1980, Technique for estimating the magnitude and frequency of floods in the Houston, Texas, metropolitan area: U.S. Geological Survey Water-Resources Investigations
Report 80-17, 29 p.
Regression equations developed for urban streams in Austin are for estimating peak discharges (QT) having recurrence intervals T that range from 2 to 100 years. The explanatory basin variables used
in the equations are contributing drainage area (CDA), in square miles, and total percentage of drainage area that is impervious (TIMP). The variable CDA can be measured using topographic maps; TIMP
can be estimated from aerial photographs or land-use maps. The regression equations were developed from simulated and recorded peak-discharge records at 13 sites on 7 streams, and the results are
applicable to unregulated streams (not regulated by flood-control structures) having drainage areas between 2 and 20 square miles. The standard errors of estimate of the regression equations range
from 26 to 30 percent.
Topographic maps, aerial photographs or land-use maps, and the following equations are used to estimate the needed peak discharges QT, in cubic feet per second, having selected recurrence intervals T.
Veenhuis, J.E., and Gannett, D.G., 1986, The effects of urbanization on floods in the Austin metropolitan area, Texas: U.S. Geological Survey Water-Resources Investigations Report 86-4069, 66 p.
Regression equations developed for urban streams in Dallas-Fort Worth are for estimating peak discharges (QT) having recurrence intervals (T) that range from 2 to 100 years. The explanatory basin
variables used in the equations are drainage area (DA), in square miles, and an urbanization index (UI), which is evaluated as described in the report by Land and others (1982). The regression
equations were developed from peak-discharge records from drainage areas in the Dallas-Fort Worth area ranging from 1.25 to 66.4 square miles with results considered applicable to drainage areas
between 3 and 40 square miles having urbanization indexes between 12 and 33. The standard errors of estimate of the regression equations are about 30 percent.
Topographic maps and the following equations are used to estimate the needed peak discharges QT, in cubic feet per second, having selected recurrence intervals T.
Land, L.F., Schroeder, E.E., and Hampton, B.B., 1982, Techniques for estimating the magnitude and frequency of floods in the Dallas-Fort Worth metropolitan area, Texas: U.S. Geological Survey
Water-Resources Investigations Report 82-18, 55 p.
Figure 1. Flood-frequency region map for Texas. (PostScript file of Figure 1.)
Figure 2. Mean annual precipitation in Texas. (PostScript file of Figure 2.)
|
{"url":"http://water.usgs.gov/software/NFF/manual/tx/index.html","timestamp":"2014-04-20T15:56:22Z","content_type":null,"content_length":"11830","record_id":"<urn:uuid:20a10016-e102-4370-baec-7894f13fe414>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] 64-bit Fedora 9 a=numpy.zeros(0x80000000, dtype='b1')
[Numpy-discussion] 64-bit Fedora 9 a=numpy.zeros(0x80000000, dtype='b1')
Nadav Horesh nadavh@visionsense....
Sun Sep 13 05:39:23 CDT 2009
Could it be a problem of python version? I get no error with python2.6.2 (on amd64 gentoo)
-----Original Message-----
From: numpy-discussion-bounces@scipy.org on behalf of David Cournapeau
Sent: Sun 13-September-09 09:48
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] 64-bit Fedora 9 a=numpy.zeros(0x80000000, dtype='b1')
Charles R Harris wrote:
> On Sat, Sep 12, 2009 at 9:03 AM, Citi, Luca <lciti@essex.ac.uk
> <mailto:lciti@essex.ac.uk>> wrote:
> I just realized that Sebastian posted its 'uname -a' and he has a
> 64bit machine.
> In this case it should work as mine (the 64bit one) does.
> Maybe during the compilation some flags prevented a full 64bit
> code to be compiled?
> __
> Ints are still 32 bits on 64 bit machines, but the real question is
> how python interprets the hex value.
That's not a python problem: the conversion of the object to a C
int/long happens in numpy (in PyArray_IntpFromSequence in this case). I
am not sure I understand exactly what the code is doing, though. I don't
understand the rationale for #ifdef/#endif in the one item in shape
tuple case (line 521 and below), as well as the call to PyNumber_Int,
NumPy-Discussion mailing list
More information about the NumPy-Discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-September/045200.html","timestamp":"2014-04-17T07:40:02Z","content_type":null,"content_length":"5199","record_id":"<urn:uuid:f36f6613-8198-4143-8be5-ffbea0d4b1f6>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00244-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to make maths model for class 6
How to make maths model for class 6 1
2013-05-02 14:37:42 by Snake
|
{"url":"http://www.47questions.com/how_to_make_maths_model_for_class_6/","timestamp":"2014-04-19T00:21:15Z","content_type":null,"content_length":"30631","record_id":"<urn:uuid:fcb64b04-effd-47ba-a129-eda442ab6a2b>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Symbolic Simulation
ACL2 Script for
Symbolic Simulation: An ACL2 Approach
J Strother Moore
FMCAD '98
Executable formal specification can allow engineers to test (or simulate) the specified system on concrete data before the system is implemented. This is beginning to gain acceptance and is just the
formal analogue of the standard practice of building simulators in conventional programming languages such as C. A largely unexplored but potentially very useful next step is symbolic simulation, the
``execution'' of the formal specification on indeterminant data. With the right interface, this need not require much additional training of the engineers using the tool. It allows many tests to be
collapsed into one. Furthermore, it familiarizes the working engineer with the abstractions and notation used in the design, thus allowing team members to speak clearly to one another. We illustrate
these ideas with a formal specification of a simple computing machine in ACL2. We sketch some requirements on the interface, which we call a symbolic spreadsheet.
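As a rough illustration of the idea (a toy sketch in Python, not the ACL2 small-machine model described here), the same interpreter can run a program on concrete register values or on indeterminate symbols, producing a symbolic expression in the latter case:

def add(a, b):
    # Fold constants when both operands are concrete; otherwise build a symbolic term.
    if isinstance(a, int) and isinstance(b, int):
        return a + b
    return ('+', a, b)

def step(state, instr):
    # Each instruction names an operation, a destination register, and two source registers.
    op, dst, src1, src2 = instr
    new_state = dict(state)
    if op == 'ADD':
        new_state[dst] = add(state[src1], state[src2])
    return new_state

def run(state, program):
    for instr in program:
        state = step(state, instr)
    return state

prog = [('ADD', 'r2', 'r0', 'r1'), ('ADD', 'r2', 'r2', 'r2')]
print(run({'r0': 2, 'r1': 3, 'r2': 0}, prog))      # concrete test: r2 ends up 10
print(run({'r0': 'x', 'r1': 'y', 'r2': 0}, prog))  # symbolic run: r2 is ('+', ('+', 'x', 'y'), ('+', 'x', 'y'))

A single symbolic run stands in for every concrete test of the same shape, which is the sense in which symbolic simulation collapses many tests into one.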
The full paper refers to an ACL2 script containing an ACL2 model of a ``small machine.''
The ``small machine'' model discussed in this paper was first developed in 1991 for a course on how to use the Boyer-Moore theorem prover, Nqthm, to model microprocessors. The Nqthm model is a
distillation of the microprocessor modeling approach developed by the Nqthm community at the University of Texas at Austin and Computational Logic, Inc., in the 1980s. The small machine model was
transcribed to ACL2 in 1995 and described in the paper:
That paper also has an accompanying ACL2 Script.
Thus, there are two ACL2 scripts formalizing the small machine, the 1996 one and the present one (1998). They are different! The differences stem from the fact that the 1996 model was not Common Lisp
compliant. In order to do the performance measuring reported in the present paper, I decided to change the model to make (and prove) it compliant. I list the differences between the two models below.
|
{"url":"http://www.cs.utexas.edu/~moore/publications/symsim-script/index.html","timestamp":"2014-04-20T13:44:50Z","content_type":null,"content_length":"7836","record_id":"<urn:uuid:165c5e91-0120-40e1-ad65-1ec52121f569>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematical Techniques for Computer Science Applications
Monday, 7:00-9:00.
Warren Weaver Hall room 312.
Professor Ernest Davis
Reaching Me
• Email:
• phone: (212) 998-3123
• office: 329 Warren Weaver Hall
Office hours: Tue 10:00-12:00, Wed 3:00-4:00
Amos Gilat, MATLAB: An Introduction with Applications, Wiley, 2008. Note that amazon.com is selling a new copy cheaper ($61.04) than the used copies at the NYU bookstore ($66), and you can get much
cheaper used at bookfinder.com. For the purposes of this class, it does not matter which edition you buy. I didn't comparison shop the other books.
Steven Leon, Linear Algebra: With Applications, 7th edition, Pearson, 2005.
Morris DeGroot and Mark Schervish, Probability and Statistics, Addison Wesley, 2001.
Online documentation for MATLAB: Getting Started with MATLAB
Class email list
Be sure to subscribe to the class email list
Xin Li. email: lixin @ same host as above. Office: 1009 715 Bway. Office hours Thu, 2-4.
This course gives an introduction to theory, computational techniques, and applications of linear algebra, probability and statistics. These three areas of continuous mathematics are critical in many
parts of computer science, including machine learning, scientific computing, computer vision, computational biology, computational finance, natural language processing, and computer graphics. The
course will teach a specialized language for mathematical computation, such as MATLAB, and will discuss how the language can be used for computation and for graphical output. No prior knowledge of
linear algebra, probability, or statistics is assumed.
Assignment 1 due Sep. 28.
Exercises 1 NOT TO HAND IN.
Assignment 2 due Oct. 12. Postponed to Oct. 19.
Exercises 2 NOT TO HAND IN.
Assignment 3 due Nov. 2.
Sample Output for Assignment 3
Exercises 3 NOT TO HAND IN.
Assignment 4 due Nov. 16. Postponed to Nov. 23
Sample solutions to Assignment 4
Assignment 5 due Nov. 30.
Sample outputs for Assignment 5
Assignment 6 due Dec. 14
Sample outputs for Assignment 6
Final exam
Monday, December 21, 7:00 PM. 312 WWH.
Study sheet
Provisional syllabus, subject to change
Biweekly assignments (60% of the grade).
Final exam (40% of the grade).
Part I. Introduction:
Week 1.A Introduction to MATLAB. Basic programming language features.
Class Notes Chapter 1: MATLAB. Gilat.
Part II. Linear Algebra:
Week 1.B. Vectors. Basic operations. Dot product. Vectors in MATLAB. Plotting in MATLAB.
Class Notes Chapter 2: Vectors.
Week 2. Matrices. Definition, fundamental properties, basic operations. Linear transformations.
Class Notes Chapter 3: Matrices.
Week 3. Abstract linear algebra: Linear independence, basis, rank, orthogonality, subspaces, null space.
Class Notes Chapter 4: Vector Spaces.
Week 4. Solving linear equations using Gaussian elimination.
Class Notes Chapter 5: Algorithms
Weeks 5+6. Geometric applications.
Class Notes Chapter 6: Geometry.
Week 7: Basis change and singular value decomposition.
Class Notes Chapter 7: Basis change.
Part III. Probability
Week 8: Introduction. Independence. Bayes's Law.
Week 9: Random variables. Expected value and variance. Discrete and continuous distributions.
Application: Machine learning.
Week 10: Information theory and entropy. Maximum entropy technique.
Week 11: Markov chains.
Application: Natural language processing.
Part IV. Statistics.
Week 12: Non-parametric statistics. Computer-based resampling techniques. Confidence intervals and statistical significance.
Application: Software testing.
Week 13: Distributions. Binomial and normal distributions.
Week 14: Monte Carlo methods.
Taking care of your health is always a priority, but particularly this year, as there is a widespread and dangerous flu epidemic. Therefore:
If you have any symptoms of the flu, and especially if you are sneezing or coughing, PLEASE TAKE CARE OF YOURSELF AT HOME AND DO NOT COME TO CLASS. Going out in public when you have a
communicable illness is not only unwise in terms of your own health, but it is extremely irresponsible and unfair to fellow students to put them at risk.
Please note also:
□ A significant fraction of cases of swine flu occur without any fever, or with a depressed temperature. If you have sneezing, coughing, body aches, and fatigue, you should assume you have the
flu, even if you have no fever.
□ You remain contagious for some time after you are feeling better. Please stay home for at least 24 hours after your fever and other symptoms have gone away.
□ The progress of the disease can extremely fast. If you feel seriously ill (high fever, difficulty breathing, etc.) seek medical attention IMMEDIATELY.
I am not taking attendance, and there is no penalty for missing class. If you are too ill to come to class, but well enough to work at home, then submit your assignments by email. If you are too ill
to work, I will accept assignments a week late. If you miss both the regular due date and the late deadline due to illness, please consult with me promptly about making up the assignment. If you are
too ill to come to the final exam, if at all possible, let me know in advance by email; if not, then please contact me as soon as possible.
You may discuss any of the assignments with your classmates (or anyone else) but all work for all assignments must be entirely your own. Any sharing or copying of assignments will be considered
cheating. By the rules of the Graduate School of Arts and Science, I am required to report any incidents of cheating to the department. Department policy is that the first incident of cheating will
result in the student getting a grade of F for the course. The second incident, by GSAS rules, will result in expulsion from the University.
|
{"url":"http://www.cs.nyu.edu/courses/fall09/G22.1180-001/index.html","timestamp":"2014-04-21T05:55:05Z","content_type":null,"content_length":"8072","record_id":"<urn:uuid:627a8122-56b5-4acd-b42d-d569a9e6bbaf>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00290-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Jewish Calendar (thing)
Here's some more detail. In particular, Adar II does not happen every 4 years. Here's how the calendar works:
The month-length is taken to be 29 days, 12 hours, and 793 "parts," where a "part" is 1/1080 of an hour, or 1/25920 of a day, or 3-1/3 seconds. This is taken to be the mean length of a lunar month
(between new moons). The beginning of the Year 1 is taken to have occurred on a Sunday night, at 5 hours and 204 parts (counting hours from sunset. That's actually before Creation: 12 months later
the new moon was at 8am on Friday, the day Adam and Eve are considered to have been created.
Anyway, you add 12 months of 29:12:793 (days:minutes:parts) each, or 13 for each leap year. Specifically, there are 7 leap years for every 19 years. So for each cycle of 19 years, the time and
day-of-the-week of the new moon moves by 2 days, 16 hours, and 595 parts. Whatever.
So now you know when the new moon beginning a given year is, and also where in the 19-year cycle of leap-years it is, and thus whether or not it's a leap year. Leap years do not happen every four
years: they happen in years 3, 6, 8, 11, 14, 17, and 19 of the 19-year cycle, which is rather more frequently. You may have to change the day of the New Year (Rosh Hashanah), though. If nothing
interferes, it happens on the day of the new moon you just calculated. But more often than not, something changes it:
1. If the moment of the new moon is after midday (halfway between sunrise and sunset), delay to the next day.
2. If it's on Sunday, Wednesday, or Friday, either because it fell out there or was delayed, delay it again. Rosh Hashanah can not, under any circumstances, be on a Sunday, Wednesday, or Friday
(well, the first day; the second day of course may wind up there).
3. If it's after 9 hours and 204 parts on Tuesday (counting from sunset, remember!), in a non-leap year, delay it. This prevents some interactions that would result in an unacceptable year-length.
4. If it's after 15 hours and 789 parts (from sunset!) on a Monday, for a non-leap year following a leap year, delay it.
OK! So this gives us a year-length of 353, 354, or 355 days in a non-leap year, and 383, 384, or 385 days in a leap year, or an average length of about 365.2428 days. A trifle long, I think, but
quite close.
And then the months proceed as mentioned above; the month of Adar is repeated if it's a leap year. Technically, it's the first month of Adar that's the added one: Adar II, the second one, is
considered the real month. The holiday of Purim, which falls in Adar, happens in Adar II, not Adar I, and birthdays and death-anniversaries are kept in Adar II, etc. The months are 29 or 30 days
long, on a fixed schedule, with the months of Kislev and Heshvan being variable between the two, to accomodate the varying year-lengths (only three of the four combinations are possible: they're
either both long, both short, or Heshvan is short and Kislev long).
|
{"url":"http://everything2.com/user/Seqram/writeups/Jewish+Calendar","timestamp":"2014-04-18T11:26:34Z","content_type":null,"content_length":"23985","record_id":"<urn:uuid:2208e430-2c87-4b6b-a7f4-460ee280edb7>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Combinatorial Data
This page contains some collections of combinatorial data.
Other useful resources include:
Graph formats
Unless otherwise specified, graphs are presented in either graph6 or sparse6 format. The extension on the file name (.g6 or .s6) indicates which is used in each case. See here for information on how
to use these formats. Large files are gzipped and have an additional .gz extension.
Various simple graphs
The graphs page has some collections of general graphs, eulerian graphs, strongly regular graphs, Ramsey graphs, hypohamiltonian graphs, planar graphs, self-complementary graphs, and highly irregular
Various plane graphs
The plane graphs page has some graphs imbedded in the plane that are hard to make using plantri.
The trees page has some small trees classified by order and diameter.
Greechie diagrams
Greechie diagrams are a particular sort of hypergraph used in quantum physics to represent orthomodular lattices. They have their own page.
Latin squares and cubes
The Latin squares page has the Latin squares of small order.
The Latin cubes page has the Latin cubes and hypercubes of small order.
Hadamard matrices
The Hadamard matrix page has the Hadamard matrices up to order 32.
Directed graphs
Some tournaments, locally-transitive tournaments and acyclic directed graphs are available on the digraphs page.
Some counts of regular multigraphs appear on the integer matrix page.
Maximum dissociated sets of 0-1 vectors
The dissociated sets page contains some of these things that occur in weighing problems.
Page Master: Brendan McKay, bdm@cs.anu.edu.au and http://cs.anu.edu.au/~bdm.
|
{"url":"http://cs.anu.edu.au/~bdm/data/","timestamp":"2014-04-16T07:54:18Z","content_type":null,"content_length":"2809","record_id":"<urn:uuid:ab8c61b1-122b-4da6-a302-76c645b308c8>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Even Einstein Can Be Wrong
Regarding the below, that Einstein was 'wrong'. Curious how quick we are to suggest something (Quantum Mechanics) very few know much about, proves Einstein was incorrect. I thought I suggest
something to ponder. Perhaps Einstein was right after all. Perhaps Quantum Mechanics is accurate as well. They actually can co-exist. Quite possibly it is this that has kept the blinders on humans.
Consciousness can affect the mechanics of the subatomic world. Ordered consciousness will in fact alter the 'disorder' of wave-particle action. Mind over matter (pardon the pun). But this thought
does seem to bind together so many facets doesn't it??? East, west... Up. Think about it... God does currently play dice here. Hardly I'd say this place is full of order consciousness. But, God does
NOT have to play dice at all, if WE so choose. gsacco229@comcast.net
"God does not play dice"
to indicate his displeasure with the theory of
. Trouble is, the chances are astronomically high that God
play dice.
- but cf. QuantumPhysics.
• Um, doesn't QuantumPhysics these days generally agree that GodDoesNotPlayDice?
□ Nope, it's the other way around. And Einstein meant that he didn't believe in quantum physics. He spent tremendous energy over some decades trying to disprove it, and in the process
strengthened it, since all of the bizarre implications he discovered from it turned out to be true (e.g EPR paradox, Bell's Theorem). Einstein was wrong about this.
□ Can you provide a source for Einstein thinking QM was wrong? And exactly what Einstein thought QM was at the time he thought it?
□ Einstein said about QM: "The more successes it has, the sillier it looks." But this is not necessarily Quantum Theory but rather the actual experiments that look strange. We live in a strange
world. - CP
□ Ummm... every single book and article by Einstein, or about Einstein, that mentioned QM, without any exceptions? I'm not claiming something controversial; Einstein's vehement opposition to QM
is legendary. And modern QM was directly shaped by Einstein's objections -- by addition of the bizarre implications he discovered in an effort to disprove it, so the "QM" that Einstein
disagreed with was no different than the modern version, despite new discoveries. -- anon
□ How do you know? Without exception, every single ancient book on QM was full of utterly absurd gibberish about wave collapse and non-determinism. So how do you KNOW that Einstein disagreed
with the essence of QM instead of the crap that seemed to be intrinsically attached to it?
□ A rather strong adjective for a theory that makes predictions accurate to better than one part per million, but there is an enormous amount of confused writing. Some of which has been
resolved. - CP
□ Every single book and article about QM mentions the Copenhagen Interpretation as if it were a single thing. It's false of course. There are two separate Copenhagen Interpretations and they
are mutually exclusive to each other.
□ Every single book and article about QM mentions the CI as if it were sensible, coherent, meaningful, and scientific. It's not controversial. Well, everybody's wrong and the CI is none of
those things, as has been thoroughly proved.
□ "Everybody knows" that Einstein was a devout Christian; aren't all the references to the divine in his speeches enough? Well, everybody's wrong since Einstein was a humanist, socialist, and
devout atheist. And I DO have the quote from a detailed letter to prove it.
☆ My comment was intended for an audience that knows nothing about the subject; many are surprised that Einstein disagreed with QM. You are perceiving disagreement on topics I didn't intend
to be talking about in the first place. I've warned before that you are prone to this.
☆ Also, physics is a mathematical subject. There's no point in arguing it philosophically.
○ Mathematical, or do you mean empirical? While math and physics are joined at the hip; and physics is one discipline where the mathematical models are developed ahead of the empirical
evidence that verifies 'em (the research being hard and expensive to do), at the end of the day any theory that disagrees with experimental results is discarded. Indeed, the history
of phyiscs is one of "nice" mathematical models of HowTheWorldWorks? being overthrown and replaced with more complicated ones.
□ QM has always been exceedingly philosophical. Of the important figures, only RichardFeynman formally refused to be sucked into metaphysics. And even he didn't resist for very long. It's
disingenuous of you to imply there is an easy separation between the mathematics and the metaphysics. In fact, it's disingenuous of you to imply there is ANY separation between the math of QM
and its metaphysics, since metaphysics is what dictates the relation between math and reality. You can't refer to math in physics without dragging in philosophy, it's just not possible.
□ Actually, mostly what I mean is the other way around: that you should mellow out on the topic until you study the mathematical aspects of physics, rather than basing strong opinions purely on
the philosophical aspects.
□ Yes, Einstein disagreed with his age's version of QM. But of course, his age's version of QM was utter crap. Did Einstein disagree with our version of QM? I don't know at all, and I find it
remarkable that you can so cavalierly assert he did.
□ I find it galling for you to claim your comments were intended for a naive audience. Because if they were then you deliberately misled them! You should know as well as I that the naive views
of probability, chance and nondeterminism (all those things associated with "dice") are nonsensical, meaningless, non-mathematical, anti-scientific GIBBERISH. In fact, the mathematical
concept of probability has no resemblance at all to its naive version since it formally relies on the idea of many-worlds, and has done so since it was originally formalized. Yet you claimed
that physics had "proved" all that mystical doubletalk of chance?! How could you?! -- rk
□ My claim is that Einstein disbelieved in QM. It is well-documented that that much is true. Note BTW that quantum physics is a very large field, and QM is just its start point, not the whole
of it.
□ Certain aspects of QM are supported by experiment to 15 decimal places, better than literally any other theory of physics, including Newtonian mechanics. That doesn't mean it is completely
right. But it does mean that aspects of it are the best we've ever done, and those aren't the aspects of QM that you have a problem with -- I haven't seen you arguing that the experiments
were done wrong. So in that sense, Einstein was wrong about QM.
□ Einstein, like me, did not like the experimental results. They don't make sense. Nutty things happen. Feynman advised students not to think about it too much lest they lose their minds, or
something like that. CP
□ My claim is that I don't know at all whether Einstein disbelieved in QM. That GodDoesNotPlayDice is not evidence of even mild dislike of QM. That GodDoesNotPlayDice is CORRECT, in
contradiction to your claim. And finally, that if Einstein did hate QM, he was well within his rights to do so given that QM in his age was intertwined with lots of utterly revolting crap
that no sane person should have ever believed in. So if Einstein hated QM then that doesn't make him "wrong" so much as it makes him a person of highly discerning taste.
□ Heh. :-) Well, ok. Anyway he disliked something related to the topic.
□ Einstein sensed that QM theory was incomplete. That remains to this day. - CP
Actually, the above fest pertains to open questions
and Copenhagen is still regarded as "mainstream" with no conclusive experiment to decide the issues one way or the other being even sketched so far. But the philosophical platitudes of rk strike
• It's true Copenhagen is mainstream, but it's theoretically impossible to distinguish the two experimentally, so it's a philosophical issue (like modern Lorentz ether theory vs. special
relativity). Einstein co-developed the notion of quanta, and Schrodinger's math was just math that matched experiments. It's very clearly the philosophical peculiarities of Copenhagen -
non-determinism, and more importantly observers creating reality - that Einstein found abhorrent, as his quotes show. Incidentally, Schroedinger felt the same way. However, at the time no other
interpretations were available. Now that they are, most physicists simply aren't interested in the philosophy, and so support the standard they were taught. However, there are serious problems
with Copenhagen as a coherent system, and those who consider it in detail generally find it lacking; there are many arguments against, but few for. There's not much more to say about Einstein's
supposed error here. He was wrong about the cosmological constant at least once, but it's hard to tell which time. And of course, he was probably wrong many times in day-to-day matters.
□ The cosmological constant is a constant of integration, I believe. I am a little prejudiced about constants of integration. Having solved the big hard differential equation, I think of the
constant as an afterthought. He put it in, took it out, now folks are using it again to characterize the dark force. I hope to see an answer before I die. - CP
Actually, chances are astronomically high that there is no God. It cannot be denied that, to our parochial sensibilities, the universe is a very bizarre place.
So, the next time someone quotes Albert Einstein (or anyone else) to you as an attempt to prove just how wrong you are, you should immediately respond "God does not play dice". You will win the argument for non-sequitur value, if nothing else. (-: Ummmm - well that's not exactly true. The existence or non-existence of any divine or God-like entity, whatever its form or nature, is something that can't be tested empirically, by its very nature. So the whole idea that you can compute a probability for God is absurd. A statement involving the probability of God is meaningless.
□ The existence of an infinite number of invented things cannot be tested, so it is my understanding that science defaults to requiring some kind of evidence, else there is general agreement that invented things are false. I know many folk who view this in terms of probability, but the numbers are infinitesimal. - CP
Of course God plays dice. Haven't you read about the ? -- PatrickParker
Of course God plays dice. I beat him at craps just last Tuesday ;)
Interestingly enough, quantum mechanics takes us back to minimalism and
. Part of the problem with quantum mechanics, as compared to the theories of relativity, is that quantum mechanics was "discovered" by a bunch of people, each of which took a slightly different
approach to the problem. How does that interact with program design? --
That's a wonderful question that also partly anticipated the
by about eighteen months (program design and wiki evolution being analogous given the
aspirations described by Ward in
). In fact, I feel like a time-traveller even being on this page. --
The following link is quite interesting:
Lines from the article: "SCIENTISTS claim they have broken the ultimate speed barrier: the speed of light... particle physicists have shown that light pulses can be accelerated to up to 300 times
their normal velocity... transmitted a pulse of light towards a chamber filled with specially treated caesium gas. Before the pulse had fully entered the chamber, it had gone right through it and
travelled a further 60ft across the laboratory." An article with more detail and less astonishment is at
While the peak of the pulse does get pushed forward by that amount, an early "nose" or faint precursor of the pulse has probably given a hint to the cesium of the pulse to come.
"The information is already there in the leading edge of the pulse," Dr. Milonni said. "You can get the impression of sending information superluminally even though you're not sending
The cesium chamber has reconstructed the entire pulse shape, using only the shape of the precursor. So for most physicists, no fundamental principles have been smashed in the new work.
□ I have seen this in a paper which supports your last statement. - CP
My money goes on Einstein. There are numerous deterministic theories of the quantum including my own
. Schroedinger's equation itself is deterministic in nature. It is simply our inability to compute the effect of everything in the universe at one point that forces us to consider what happens there as random. --
□ A deterministic theory must involve hidden variables, I believe. Some "No hidden variables proofs" have been written. Some have been found wrong. I do not know if this has been resolved but
there are convincing arguments that hidden variables cannot be the answer. There seems to be an inescapable quantum chaos. - CP
If many-worlds theory is correct, then Einstein was also correct. The evolution of the wavefunction under many-worlds is completely deterministic; you just can't "predict" which branch "you" end up
in, because you end up in all of them. -- EliezerYudkowsky
• This is not what Einstein meant; he didn't believe in a many-worlds version of quantum physics, either. (I should've known Yudkowsky would show up on at least one page here! :-)
One hadn't been proposed yet, so that's beside the point; it answers his objections.
|
{"url":"http://c2.com/cgi-bin/wiki?EvenEinsteinCanBeWrong","timestamp":"2014-04-17T10:55:52Z","content_type":null,"content_length":"17153","record_id":"<urn:uuid:120f8101-6620-453d-89fa-4d521d0e4bbf>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Help with thermodynamic potentials
OK then why the need for two "cylinders"? Don't you just have one cylinder, with two chambers?
Let me see if I can restate your problem
You have a cylinder of gas that is partitioned into two chambers by an adiabatic (no heat exchange) piston. At one end of the cylinder, changes are both isothermal and isobaric (constant pressure). The other end of the cylinder is closed.
Well, if pressure and temperature cannot change in one of the chambers, then neither can the molar volume, so no work can be done!
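A quick way to see why (a sketch, assuming the gas in the isothermal-isobaric chamber can be treated as ideal with a fixed amount n of gas):
$$ V = \frac{nRT}{P} \;\Rightarrow\; V \text{ is fixed when } n,\,T,\,P \text{ are fixed} \;\Rightarrow\; W = \int P\,\mathrm{d}V = 0 . $$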
|
{"url":"http://www.physicsforums.com/showthread.php?t=53405","timestamp":"2014-04-19T12:45:20Z","content_type":null,"content_length":"44934","record_id":"<urn:uuid:6ce18691-ffd0-4872-8324-e1b66567994b>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Killing vectors and Ricci Tensor
up vote 2 down vote favorite
Hi all,
We all know that the Lie derivative of the metric tensor along a Killing vector vanishes, by definition. I am trying to show that the Lie derivative of the Ricci tensor along a Killing vector also
vanishes, and I am hoping to interpret it physically.
What might be a good direction to proceed? Thanks!
general-relativity dg.differential-geometry
add comment
2 Answers
active oldest votes
Recall that the definition of the Lie derivative of a tensor field $T$ with respect to a vector field $X$ is given by "dragging" $T$ with respect to the one-parameter (quasi) group $\phi_t$ generated by $X$, i.e., computing $\phi_t^*(T)$, and differentiating wrt $t$ at $t = 0$. But to say that $X$ is a Killing field means that the $\phi_t$ are (partial) isometries,
and so not only preserve the metric tensor but also the Riemann curvature tensor and its contraction the Ricci tensor or any other tensor field that is defined canonically from the metric
tensor and so preserved by isometries. Thus any such tensor field is preserved by dragging, i.e., $\phi_t^*(T)$ is constant in $t$ and so has a zero derivative.
up vote 4 down vote accepted
Regarding the physical interpretation, let me try to answer a slightly different question. Recall that the Ricci tensor comes up as the Euler-Lagrange expression for the Einstein-Hilbert functional, and that the latter is invariant under the group of ALL diffeomorphisms. So it is natural to ask what the Noether Theorem (connecting one-parameter groups that preserve a Lagrangian to constants of the motion of the corresponding Euler-Lagrange equations) leads to in this case. The answer is that it gives the contracted Bianchi identity for the Ricci tensor. Perhaps this is what your question about physical significance was aiming at.
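In symbols, the dragging argument of the first paragraph can be summarized as follows (a sketch, up to sign conventions, using the naturality of the Ricci tensor under pullbacks):
$$ L_X\,\mathrm{Ric} \;=\; \frac{d}{dt}\Big|_{t=0}\phi_t^*\big(\mathrm{Ric}(g)\big) \;=\; \frac{d}{dt}\Big|_{t=0}\mathrm{Ric}\big(\phi_t^* g\big) \;=\; \frac{d}{dt}\Big|_{t=0}\mathrm{Ric}(g) \;=\; 0, $$
since $\phi_t^* g = g$ when $X$ is Killing.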
2 Dick and Andrei's answers are very good, but I confess to being somewhat surprised that there is no obvious infinitesimal tensor calculation to demonstrate this fact. At least I am
unable to find one. – Deane Yang Nov 25 '10 at 17:44
I was intrigued by Deane's remark, so I worked out a purely tensorial proof. I will edit my answer correspondingly. – Andrei Moroianu Nov 25 '10 at 19:47
@Andrei: Just a small question or quibble---why do you put the minus sign in the definition of the Lie derivative? That would make the Lie derivative of a function $f(x)$ on the line
wrt the vector field $\partial/\partial x$ equal to $-f'(x)$ rather than $f'(x)$, which seems a bit strange. – Dick Palais Nov 25 '10 at 20:23
@Dick: Oh, that's because the definition of the "push forward" $f_*$, which is defined by $(f_*K)_x=df(K_{f^{-1}(x)})$. Here $df$ is the natural extension to tensors, of course. In
particular, $f_*(g)$ is just $g\circ f^{-1}$, so the minus sign disappears when you differentiate, so you get the usual formula $L_\xi g=\xi.g$, with the right sign... – Andrei
Moroianu Nov 25 '10 at 20:39
add comment
The Lie derivative of any tensor field $K$ with respect to a vector field $\xi$ is by definition $$L_\xi(K)=-\frac d{dt}|_{t=0}(\phi_t)_*K,$$ where $\phi_t$ is the local flow of $\xi$. Now,
if $M$ has a Riemannian metric $g$ and $\xi$ is Killing with respect to $g$, each $\phi_t$ is a local isometry of $(M,g)$. From the uniqueness of the Levi-Civita connection, it follows that
every isometry $\phi$ is affine, i.e. $$\phi_*(\nabla_X Y)=\nabla_{\phi_*X}\phi_*Y.$$ From here you get immediately $\phi_*R=R$ for the Riemannian curvature, and since the Ricci tensor of
$R$ is obtained by a trace: $$Ric(X,Y)=trace(V\mapsto R_{V,X}Y),$$
one gets $\phi_*Ric=Ric$ for every isometry $\phi$. The first formula thus shows that $L_\xi Ric=0$ for every Killing vector field $\xi$.
up vote 4 down vote
Edit: Here is another, purely tensorial, proof of the same statement. Let $\xi$ be Killing, in the sense that $g(\nabla_X\xi,Y)+g(X,\nabla_Y\xi)=0$ for all vector fields $X,Y$. After taking the covariant derivative wrt some vector field $Z$, and doing some standard manipulations, one gets the usual Kostant formula: $$\nabla^2_{X,Y}\xi=R_{\xi,X}Y,\qquad\forall X,Y\in C^\infty(TM).$$ This is just a rewriting of $$L_\xi(\nabla_XY)=\nabla_{L_\xi X}Y+\nabla_X(L_\xi Y),$$ i.e. some sort of Leibniz formula. Applying this formula several times eventually yields the corresponding Leibniz formula for $R$: $$L_\xi(R_{X,Y}Z)=R_{L_\xi X,Y}Z+R_{X,L_\xi Y}Z+R_{X,Y}(L_\xi Z),$$ i.e. $L_\xi R=0$, and finally $L_\xi Ric=0$ after taking the trace.
Of course, this is just the infinitesimal version of the first proof...
Andrei, thanks for the infinitesimal version, but what I really meant was a proof that follows directly from the properties of the Ricci tensor itself, rather than using its definition in
terms of covariant derivatives of vector fields. But your proof is very nice. – Deane Yang Nov 26 '10 at 4:51
add comment
Not the answer you're looking for? Browse other questions tagged general-relativity dg.differential-geometry or ask your own question.
|
{"url":"http://mathoverflow.net/questions/47332/killing-vectors-and-ricci-tensor?sort=oldest","timestamp":"2014-04-21T03:00:00Z","content_type":null,"content_length":"61768","record_id":"<urn:uuid:b574c6b3-ad62-42bf-8cf7-46d9004b8767>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Need a more efficient algorithm for prime numbers.
September 29th, 2012, 10:47 AM
Need a more efficient algorithm for prime numbers.
Here is my current algorithm, the TimeExec is just a class to print out how long it takes to run the code:
Code java:
public class Primes {
    /**
     * Get a set of prime numbers.
     * @param no the number of primes to create
     * @return an array containing the requested number of primes
     */
    public static int[] getPrimes(int no) {
        int[] primes = new int[no];
        int primeInx = 0;
        int i = 2;
        if (primeInx < no) {
            primes[primeInx++] = 1;
        }
        while (primeInx < no) {
            boolean prime = true;
            for (int j = 2; j < i; j++) {
                if (i == i / j * j) {
                    prime = false;
                }
            }
            if (prime) {
                primes[primeInx++] = i;
            }
            i++; // move on to the next candidate
        }
        return primes;
    }

    public static void main(String[] args) {
        new TimeExec(new Runnable() {
            public void run() {
                int[] primes = getPrimes(1000);
            }
        }, "Get 1,000 primes", System.out).start();
        // the 10,000 primes took the library's lab comp 19.125s
        new TimeExec(new Runnable() {
            public void run() {
                int[] primes = getPrimes(10000);
            }
        }, "Get 10,000 primes", System.out).start();
        new TimeExec(new Runnable() {
            public void run() {
                int[] primes = getPrimes(100000);
            }
        }, "Get 100,000 primes", System.out).start();
        // new TimeExec(new Runnable() {
        //     public void run() {
        //         int[] primes = getPrimes(1000000);
        //     }
        // }, "Get 1,000,000 primes", System.out).start();
    }
}
I need to get it to be able to print out the 1,000,000 primes fairly quickly. Any help would be awesome, thank you!
September 29th, 2012, 10:54 AM
Re: Need a more efficient algorithm for prime numbers.
There must be algorithms that google could find. Try that.
September 29th, 2012, 11:46 AM
Re: Need a more efficient algorithm for prime numbers.
Try using a sieve. These are more memory intensive but they're very fast at finding prime numbers less than x.
For something fairly simple see Wikipedia: Sieve of Eratosthenes.
A little bit of self-promotion, but for moderately heavy prime generation (basically an optimized Sieve of Eratosthenes with a small hand-coded wheel factorization), see Optimizing the Sieve of
And for your balls-out bonkers version, see Prime Sieve, which I think is the current record holder for consecutive prime number generation. From what I can tell it's a multi-threaded Sieve of
Eratosthenes with advanced wheel factorization. Unfortunately it's implemented in C++, not Java.
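For concreteness, here is a minimal, unoptimized Sieve of Eratosthenes in Java (a sketch of the idea behind the links above, not the optimized versions mentioned; the class and method names are made up, and to get the first N primes you would sieve up to a large enough limit):
Code java:
import java.util.ArrayList;
import java.util.List;

public class SimpleSieve {
    /** Returns all primes up to and including limit. */
    public static List<Integer> primesUpTo(int limit) {
        boolean[] composite = new boolean[limit + 1]; // index i is set once i is known to be composite
        List<Integer> primes = new ArrayList<Integer>();
        for (int i = 2; i <= limit; i++) {
            if (!composite[i]) {
                primes.add(i); // i was never marked, so it is prime
                // mark every multiple of i, starting at i*i (smaller multiples are already marked)
                for (long j = (long) i * i; j <= limit; j += i) {
                    composite[(int) j] = true;
                }
            }
        }
        return primes;
    }
}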
November 16th, 2012, 09:15 PM
Re: Need a more efficient algorithm for prime numbers.
Sieves work great for finding lists of primes up to n, much faster than the method you are currently using (for a list of primes to 1000000, sieving returns the list in 64ms on my machine, while
brute force returns it in 643 ms). However, I can give you some pointers, if you want, with your current algorithm:
Code java:
for (int j = 2; j < i; j++) {
if (i == i / j * j) {
prime = false;
First, your upper bound is way too high. j need only go to the square root of i to check all possible factors. Also
Code java:
if (i == i / j * j) {
prime = false;
can be changed to
Code java:
if (i % j == 0) {
prime = false;
That percent sign thingy is called a modulo. What it does is it, in this case, divides i by j and returns the remainder. So, for example, 7 % 2 = 1 because when you long divide, you get 3
remainder 1, or simply 3 and 1/2. If j divides evenly into i, then the remainder will always be zero; if not, it will be some integer > 0.
One other thing with building the list your way, using the ArrayList class will be much, much easier than using that array.
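Putting the two suggestions together, the inner loop of the original method might look something like this (a sketch; the square-root bound is expressed as j*j <= i, and breaking out as soon as a factor is found is an extra small win not mentioned above):
Code java:
for (int j = 2; (long) j * j <= i; j++) {
    if (i % j == 0) { // j divides i evenly, so i is not prime
        prime = false;
        break;        // no need to test any further divisors
    }
}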
|
{"url":"http://www.javaprogrammingforums.com/%20algorithms-recursion/17979-need-more-efficient-algorithm-prime-numbers-printingthethread.html","timestamp":"2014-04-18T20:47:10Z","content_type":null,"content_length":"21003","record_id":"<urn:uuid:199d36c8-d86e-44b0-bcad-d4101668245b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Datatype Laws without Signatures
Results 1 - 10 of 19
- 3rd International Summer School on Advanced Functional Programming , 1999
"... ..."
- In PLILP'96, volume 1140 of LNCS , 1996
"... A polytypic function definition is a function definition that is parametrised with a datatype. It embraces a class of algorithms. As an example we define a simple polytypic "crush" combinator
that can be used to calculate polytypically. The ability to define functions polytypically adds another leve ..."
Cited by 41 (3 self)
Add to MetaCart
A polytypic function definition is a function definition that is parametrised with a datatype. It embraces a class of algorithms. As an example we define a simple polytypic "crush" combinator that
can be used to calculate polytypically. The ability to define functions polytypically adds another level of flexibility in the reusability of programming idioms and in the design of libraries of
interoperable components.
- Memoranda Informatica, University of Twente , 1994
"... Each datatype constructor comes equiped not only with a so-called map and fold (catamorphism), as is widely known, but, under some condition, also with a kind of map and fold that are related to
an arbitrary given monad. This result follows from the preservation of initiality under lifting from the ..."
Cited by 19 (0 self)
Add to MetaCart
Each datatype constructor comes equiped not only with a so-called map and fold (catamorphism), as is widely known, but, under some condition, also with a kind of map and fold that are related to an
arbitrary given monad. This result follows from the preservation of initiality under lifting from the category of algebras in a given category to a certain other category of algebras in the Kleisli
category related to the monad.
, 2002
"... which are eventually used in later stages of the computation. We present a generic definition of accumulations, achieved by the introduction of a new recursive operator on inductive types. We
also show that the notion of downwards accumulation developed by Gibbons is subsumed by our notion of acc ..."
Cited by 10 (0 self)
Add to MetaCart
which are eventually used in later stages of the computation. We present a generic definition of accumulations, achieved by the introduction of a new recursive operator on inductive types. We also
show that the notion of downwards accumulation developed by Gibbons is subsumed by our notion of accumulation.
- In Functional Programming, Glasgow 1991, Workshops in computing , 1992
"... The notion of functionality is not cast in stone, but depends upon what we have as types in our language. With partial equivalence relations (pers) as types we show that the functional relations
are precisely those satisfying the simple equation f = f ffi f [ ffi f , where " [ " is the relation ..."
Cited by 7 (1 self)
Add to MetaCart
The notion of functionality is not cast in stone, but depends upon what we have as types in our language. With partial equivalence relations (pers) as types we show that the functional relations are
precisely those satisfying the simple equation f = f ffi f [ ffi f , where " [ " is the relation converse operator. This article forms part of "A calculational theory of pers as types" [1]. 1
Introduction In calculational programming, programs are derived from specifications by a process of algebraic manipulation. Perhaps the best known calculational paradigm is the Bird--Meertens
formalism, or to use its more colloquial name, Squiggol [2]. Programs in the Squiggol style work upon trees, lists, bags and sets, the so--called Boom hierarchy. The framework was uniformly extended
to cover arbitrary recursive types by Malcolm in [3], by means of the F--algebra paradigm of type definition, and resulting catamorphic programming style. More recently, Backhouse et al [4] have made
a further ...
, 1997
"... Abstract Many functions have to be written over and over again for different datatypes, either because datatypes change during the development of programs, or because functions with similar
functionality are needed on different datatypes. Examples of such functions are pretty printers, pattern match ..."
Cited by 5 (2 self)
Add to MetaCart
Abstract Many functions have to be written over and over again for different datatypes, either because datatypes change during the development of programs, or because functions with similar
functionality are needed on different datatypes. Examples of such functions are pretty printers, pattern matchers, equality functions, unifiers, rewriting functions, etc. Such functions are called
polytypic functions. A polytypic function is a function that is defined by induction on the structure of user-defined datatypes. This thesis introduces polytypic functions, shows how to construct and
reason about polytypic functions and describes the implementation of the polytypic programming system PolyP. PolyP extends a functional language (a subset of Haskell) with a construct for writing
polytypic functions. The extended language type checks definitions of polytypic functions, and infers the types of all other expressions. Programs in the extended language are translated to Haskell.
, 1992
"... We present a programming paradigm based upon the notion of binary relations as programs, and partial equivalence relations (pers) as types. Our method is calculational , in that programs are
derived from specifications by algebraic manipulation. Working with relations as programs generalises the fu ..."
Cited by 5 (2 self)
Add to MetaCart
We present a programming paradigm based upon the notion of binary relations as programs, and partial equivalence relations (pers) as types. Our method is calculational , in that programs are derived
from specifications by algebraic manipulation. Working with relations as programs generalises the functional paradigm, admiting non--determinism and the use of relation converse. Working with pers as
types, we have a more general notion than normal of what constitutes an element of a type; this leads to a more general class of functional relations, the so--called difunctional relations. Our basic
method of defining types is to take the fixpoint of a relator , a simple strengthening of the categorical notion of a functor. Further new types can be made by imposing laws and restrictions on the
constructors of other types. Having pers as types is fundamental to our treatment of types with laws.
- IN: SPECIAL ISSUE FOR AUTOMATA, LANGUAGES AND PROGRAMMING (ICALP 2007). VOLUME 410 OF THEORETICAL COMPUTER SCIENCE , 2009
"... The purpose of this paper is threefold: to present a general abstract, yet practical, notion of equational system; to investigate and develop the finitary and transfinite construction of free
algebras for equational systems; and to illustrate the use of equational systems as needed in modern applica ..."
Cited by 5 (4 self)
Add to MetaCart
The purpose of this paper is threefold: to present a general abstract, yet practical, notion of equational system; to investigate and develop the finitary and transfinite construction of free
algebras for equational systems; and to illustrate the use of equational systems as needed in modern applications.
- Formal Aspects of Computing , 1992
"... this paper is an alternative to diagram chasing (4). The use of a standard notation for various unique arrows obviates in some cases the need for pictures for the purpose of naming (2). The need
for a pictorial overview of the typing (1) is decreased to some extend by a consistent notation, in parti ..."
Cited by 4 (0 self)
Add to MetaCart
this paper is an alternative to diagram chasing (4). The use of a standard notation for various unique arrows obviates in some cases the need for pictures for the purpose of naming (2). The need for
a pictorial overview of the typing (1) is decreased to some extent by a consistent notation, in particular f ; g for composition (so that f : a → b, g : b → c ⇒ f
"... corecursive functions COALGEBRA state model constructors destructors data model recursive functions reachable hidden abstraction observable hidden restriction congruences invariants visible
abstraction ALGEBRA visible restriction!e Swinging Cube ..."
Cited by 4 (4 self)
Add to MetaCart
corecursive functions COALGEBRA state model constructors destructors data model recursive functions reachable hidden abstraction observable hidden restriction congruences invariants visible
abstraction ALGEBRA visible restriction!e Swinging Cube
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=699534","timestamp":"2014-04-16T11:54:04Z","content_type":null,"content_length":"34434","record_id":"<urn:uuid:48a5abae-785b-4a1e-9016-3969390c4aa8>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Solve, please :)
I think it's A
domain is the values that x can take
so from your figure, x can take the values from ?? to ??
I think hartnn gives u good explanation about what domain is. It is better if u try first. What can u say?
I don't understand...?
ok, what all values can x take? can x be = 10 ??
domain = range of x; codomain = range of y = the range of values of f(x)
Ok lets take domain = range of x, can u find this from the graph?
Can u do that?
X can take values from 3 to 7.
yes! that would be your domain
Okay! I was thinking 3-12 because the graph goes from three, and then up to 12.
the values that the graph (or y) can take is called the 'range' of the graph or function. Hence, 3 to 12 is the range of y.
What is it called when x and y values are written in the form (x, y)? A pair?
co-ordinates: in (2,3), 2 is the x co-ordinate and 3 is the y co-ordinate
A relation is:
• the output (y) values of the relation
• the input (x) values of the relation
• a set of points that pair input values with output values <--- This, correct
• x and y values written in the form (x, y)
yes, a RELATION is simply the set of ordered pairs (x,y), so that is correct.
I have a few more.
u can ask here or make a new post, suit yourself.
I'll make a new one
|
{"url":"http://openstudy.com/updates/507933ace4b0ed1dac51282a","timestamp":"2014-04-20T08:15:56Z","content_type":null,"content_length":"76293","record_id":"<urn:uuid:3c6400cb-b4f6-4683-a04a-97116e163673>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
|
AC (2003), pp. 127-136
Discrete Random Walks, DRW'03
Cyril Banderier and Christian Krattenthaler (eds.)
DMTCS Conference Volume AC (2003), pp. 127-136
author: Michael L. Green, Alan Krinik, Carrie Mortensen, Gerardo Rubino and Randall J. Swift
title: Transient Probability Functions: A Sample Path Approach
keywords: sample paths; dual processes; transient probability functions; Markov process; randomization.
abstract: A new approach is used to determine the transient probability functions of Markov processes. This new solution method is a sample path counting approach and uses dual processes and
randomization. The approach is illustrated by determining transient probability functions for a three-state Markov process. This approach also provides a way to calculate transient
probability functions for Markov processes which have specific sample path characteristics.
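For readers unfamiliar with the randomization (uniformization) technique mentioned in the abstract, the standard identity behind it is the following general fact (not specific to this paper): for a Markov process with generator $Q$ and any rate $\Lambda \ge \max_i |q_{ii}|$,
$$ P(t) \;=\; \sum_{n=0}^{\infty} e^{-\Lambda t}\,\frac{(\Lambda t)^n}{n!}\,A^n, \qquad A = I + \frac{Q}{\Lambda}, $$
which expresses the transient probabilities as a Poisson mixture of the powers of a stochastic matrix.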
If your browser does not display the abstract correctly (because of the different mathematical symbols) you may look it up in the PostScript or PDF files.
reference: Michael L. Green and Alan Krinik and Carrie Mortensen and Gerardo Rubino and Randall J. Swift (2003), Transient Probability Functions: A Sample Path Approach, in Discrete Random Walks,
DRW'03, Cyril Banderier and Christian Krattenthaler (eds.), Discrete Mathematics and Theoretical Computer Science Proceedings AC, pp. 127-136
bibtex: For a corresponding BibTeX entry, please consider our BibTeX-file.
ps.gz-source: dmAC0112.ps.gz (36 K)
ps-source: dmAC0112.ps (116 K)
pdf-source: dmAC0112.pdf (100 K)
|
{"url":"http://www.dmtcs.org/dmtcs-ojs/index.php/proceedings/article/viewArticle/dmAC0112/1345","timestamp":"2014-04-19T18:23:38Z","content_type":null,"content_length":"14682","record_id":"<urn:uuid:f1d7bd09-eb0b-490c-b9f7-8b0627be3fbe>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Minimal dimension where plane and line can be non-parallel without having intersection
February 23rd 2013, 11:51 PM #1
Feb 2013
Minimal dimension where plane and line can be non-parallel without having intersection
In which dimension is it possible to construct a plane and a line which are not parallel but don't have any common point either (the analogue of skew lines, but for a line and a plane)?
I suspect it is 4D but I can't prove it. The only approach I could come up with was proving that it is not possible in 3D and finding an example for 4D.
However, verifying an example requires, I believe, finding a normal vector of the plane (I figured out the direction vector of the line), which I was not able to do in 4D.
In the book of exercises it is not considered a difficult one, so I suspect it has an easier solution too.
Re: Minimal dimension where plane and line can be non-parallel without having intersection
with a single point (the 0-vector), we have no lines. thus a trivial vector space won't work.
in one dimension, we have no planes, so a vector space of dimension 1 won't work.
in two dimensions, we have just one plane (the entire space), so every line in the space is a subset of the (only) plane, and so intersects it at every point of the line.
in three dimensions, we now have many planes and lines to choose from. so here, the challenge is to show that if a line is not parallel to a plane, it must intersect the plane. this is a bit
tricky, as you have to think, what does it MEAN for a line to be "parallel to a plane"? it might be useful to define a parallel line as one lying in a parallel plane to our given plane. can you
think of a way to create a non-parallel plane out of your given line, created from lines parallel to the given line, such that EVERY ONE of the parallel lines intersects the given plane?
in four dimensions (say x,y,z,w) consider the xy-plane and the w-axis. these intersect at the origin (0,0,0,0). clearly the w-axis is perpendicular to the xy-plane, so it is NOT parallel. now
shift the xy-plane 1 unit up along the z-axis. prove the plane:
P = {(x,y,1,0): x,y in R} and the line L = {(0,0,0,w): w in R} have no intersection
(if you like you can write P = s(1,0,0,0) + t(0,1,0,0) + (0,0,1,0), and L = u(0,0,0,1) + (0,0,0,0)).
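For the record, the non-intersection check is immediate (a one-line verification, comparing third coordinates):
$$ (x,y,1,0) = (0,0,0,w) \;\Rightarrow\; 1 = 0, $$
which is impossible, so $P \cap L = \emptyset$; and $L$ is not parallel to $P$, since $(0,0,0,1)$ is not a linear combination of $(1,0,0,0)$ and $(0,1,0,0)$.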
Re: Minimal dimension where plane and line can be non-parallel without having intersection
Thank you; with that clue, the four-dimensional part is certainly done.
For 3D, can't I say this: in 3D a plane is given by one Cartesian equation, of the type P: {(x,y,z); a1*x+b1*y+c1*z = d1},
and a line is given by two Cartesian equations (i.e. as the intersection of two planes), L: {(x,y,z); a2*x+b2*y+c2*z = d2, a3*x+b3*y+c3*z = d3}.
A point of intersection of the line L and the plane P is a solution of a system of three equations in three variables. This in turn means that P and L either
a) have exactly one solution (they intersect in a single point), which happens when the matrix A formed by the left-hand sides of P's and L's equations is regular, or
b) the matrix A is singular; then there is either no solution, which happens when rank(A|d) > rank(A), or infinitely many solutions, which happens when rank(A|d) = rank(A).
The infinitely-many-solutions case necessarily has dimension 1, because dim ker(A) + rank(A) = 3 and rank(A) = 2, so the solution set is a line. However, if rank(A|d) > rank(A), it is always possible to choose a vector of right-hand sides d such that rank(A|d) = rank(A). This choice of right-hand sides affects neither the normal vector of the plane nor the direction vector of the line, because these depend only on the left-hand-side coefficients. Hence it amounts only to translating the line or the plane while preserving their relative position.
=> Therefore, every line which has no intersection with the plane can be "moved", without changing its direction vector, so that it lies entirely in the plane (sorry, not a native speaker). This means that every line which has no intersection with the plane has constant distance from the plane, so it is parallel. (So it is not possible to have both non-parallelism and no intersection.)
I feel quite sure about the individual steps, but in 4D the approach gave wrong results, so I'm not sure whether that is because of the method or because of some mistake I made in 4D and maybe not in 3D.
Last edited by athelred; February 25th 2013 at 12:03 AM.
|
{"url":"http://mathhelpforum.com/advanced-algebra/213704-minimal-dimension-where-plane-line-can-non-paralel-without-having-intersection.html","timestamp":"2014-04-18T00:38:32Z","content_type":null,"content_length":"38901","record_id":"<urn:uuid:7a7296e1-a41a-4e37-9547-b7ce47016bff>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is this how to compare speaker sensitivities? - diyAudio
diyAudio Member
Join Date: Mar 2004
Location: Connecticut, USA
Is this how to compare speaker sensitivities?
A speaker's sensitivity, or SPL, equals the decibel level of 1 watt measured at 1 meter. Right?
Now...from googling around, I've found (I think) that multiplying watts times about 1.29 should increase the volume by 1 decibel.
That fits pretty well with the general rule I've read that doubling watts increases dB by 3, and that watts x 10 equals double the volume, and that "double the volume" is subjectively measured at
between a 6 to 10 dB increase.
So, that should mean that a speaker with a sensitivity of 92 dB, supplied with 100 watts, and a speaker with a sensitivity of 88 dB, supplied with 277 watts, should produce about the same perceived
volume. The math I used:
92dB - 88dB = 4 decibel difference
Multiply 100w @ 88dB times 1.29, 4 times
100w x 1.29 x 1.29 x 1.29 x 1.29 = 276.9w
Is that right? If not, please enlighten me.
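For reference, the exact relation behind those rules of thumb (assuming level differences come only from power and sensitivity) is:
$$ \Delta L = 10\,\log_{10}\frac{P_2}{P_1}\ \mathrm{dB}, \qquad P_2 = P_1\cdot 10^{\Delta L/10}, $$
so 1 dB corresponds to a factor of $10^{0.1}\approx 1.259$ (close to the 1.29 used above), and 4 dB to $10^{0.4}\approx 2.51$, i.e. about 251 W rather than 277 W in the example.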
|
{"url":"http://www.diyaudio.com/forums/multi-way/39809-how-compare-speaker-sensitivities.html","timestamp":"2014-04-21T15:00:15Z","content_type":null,"content_length":"67880","record_id":"<urn:uuid:5a1b9d35-7adc-4397-9d8f-1ce5759bab5e>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Kearny, AZ Math Tutor
Find a Kearny, AZ Math Tutor
...The math courses I have assisted students in include, but are not limited to: algebra I and II, trigonometry, geometry, pre-calculus, calculus I, II, and III, vector calculus, ordinary
differential equations, linear algebra, and partial differential equations. The physics courses I have assisted...
26 Subjects: including algebra 2, linear algebra, probability, differential equations
...I taught ASVAB for the U. S. Army for several years and can help you get the score you want if you want to join!
29 Subjects: including algebra 2, ASVAB, English, reading
...Percentages and decimals are fractions in a certain form. A ratio and a proportion involve fractions. The student must learn to deal with fractions: reduce and expand them in order to combine
fractions by addition and subtraction.
26 Subjects: including algebra 1, algebra 2, calculus, grammar
...I have been tutoring since 2001 in subjects from 2nd grade math through graduate level college courses. I am teaching until 3:30 Monday through Thursday and 1:30 on Friday, so I am available
after those times. All tutoring will be done in Casa Grande at one of the libraries.
10 Subjects: including linear algebra, logic, algebra 1, algebra 2
...I take pride and satisfaction in being able to get students to understand the mathematics they need to do well.My undergraduate degree shows taking and passing Differential Equations, as well
as application in modeling weather. Substantial coursework in analysis and applied calculus during certi...
12 Subjects: including calculus, physics, trigonometry, ASVAB
Related Kearny, AZ Tutors
Kearny, AZ Accounting Tutors
Kearny, AZ ACT Tutors
Kearny, AZ Algebra Tutors
Kearny, AZ Algebra 2 Tutors
Kearny, AZ Calculus Tutors
Kearny, AZ Geometry Tutors
Kearny, AZ Math Tutors
Kearny, AZ Prealgebra Tutors
Kearny, AZ Precalculus Tutors
Kearny, AZ SAT Tutors
Kearny, AZ SAT Math Tutors
Kearny, AZ Science Tutors
Kearny, AZ Statistics Tutors
Kearny, AZ Trigonometry Tutors
|
{"url":"http://www.purplemath.com/kearny_az_math_tutors.php","timestamp":"2014-04-18T00:58:20Z","content_type":null,"content_length":"23475","record_id":"<urn:uuid:eceb6cd2-2a86-4c6e-a10b-1de0ec527c2b>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Adding Square Roots raised to a power
June 20th 2011, 04:23 PM #1
Junior Member
Jun 2011
Adding Square Roots raised to a power
When studying for the GMAT, I encountered this problem
(Square root of 3+Square root of 3+Square root of 3)^2
As in raised to the second power.
I know the answer is 27. However, I was under the impression that squaring a square root would simply remove the square root symbol. Why wouldn't this answer be 3+3+3=9?
Re: Adding Square Roots raised to a power
$\displaystyle (\sqrt{3}+\sqrt{3}+\sqrt{3})^2 = (3\sqrt{3})^2 = \dots$
$\displaystyle (a+b)^2 = a^2+2ab+b^2 \neq a^2+b^2$
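Carrying that first line one step further gives the stated answer:
$$ (3\sqrt{3})^2 = 3^2\cdot(\sqrt{3})^2 = 9\cdot 3 = 27 . $$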
|
{"url":"http://mathhelpforum.com/algebra/183343-adding-square-roots-raised-power.html","timestamp":"2014-04-17T12:39:47Z","content_type":null,"content_length":"36921","record_id":"<urn:uuid:32543e8a-69e8-4e7b-923b-701bbafa7fa5>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Matches for:
Return to List
Heisenberg Calculus and Spectral Theory of Hypoelliptic Operators on Heisenberg Manifolds
             
Memoirs of the American Mathematical Society
2008; 134 pp; softcover
Volume: 194
ISBN-10: 0-8218-4148-3
ISBN-13:
List Price: US$69
Individual Members:
Members: US$55.20
Order Code: MEMO/194/
This memoir deals with the hypoelliptic calculus on Heisenberg manifolds, including CR and contact manifolds. In this context the main differential operators at stake include Hörmander's sum of squares, the Kohn Laplacian, the horizontal sublaplacian, the CR conformal operators of Gover-Graham and the contact Laplacian. These operators cannot be elliptic and the relevant pseudodifferential calculus to study them is provided by the Heisenberg calculus of Beals-Greiner and Taylor.
• Introduction
• Heisenberg manifolds and their main differential operators
• Intrinsic approach to the Heisenberg calculus
• Holomorphic families of \(\mathbf{\Psi_{H}}\)DOs
• Heat equation and complex powers of hypoelliptic operators
• Spectral asymptotics for hypoelliptic operators
• Appendix A. Proof of Proposition 3.1.18
• Appendix B. Proof of Proposition 3.1.21
• Bibliography
|
{"url":"http://www.ams.org/bookstore?fn=20&arg1=memoseries&ikey=MEMO-194-906","timestamp":"2014-04-21T15:04:13Z","content_type":null,"content_length":"14834","record_id":"<urn:uuid:5a408b50-7843-4010-bbe0-b6ebed86aefd>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00131-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to solve for height?
I am confused as to how i solve for height in this problem. I can solve for the distance up the ramp the ball goes but not for the height above the ground.
The problem states: A steel ball has a mass of 45 grams and a diameter of 2.2 cm. The ball is moving and rolling at an initial velocity (not known) when it starts rolling up a 35 degree ramp and
comes to a stop after turning 12 revolutions.
I solved for how far up the ramp the ball went by taking the 12 rev *2π rad * .011m /1 rev / 2π rad and got .132m.
I have looked at all the equations that i have but can't figure out how to solve for how high the ball goes up?
|
{"url":"http://www.physicsforums.com/showthread.php?t=164415","timestamp":"2014-04-18T00:21:50Z","content_type":null,"content_length":"24815","record_id":"<urn:uuid:c27d76f2-7b6f-45eb-818a-4a81fb4b5886>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calculus With Analytic Geometry
CHAPTER 1: Numbers, Functions, and Graphs
1-1 Introduction
1-2 The Real Line and Coordinate Plane: Pythagoras
1-3 Slopes and Equations of Straight Lines
1-4 Circles and Parabolas: Descartes and Fermat
1-5 The Concept of a Function
1-6 Graphs of Functions
1-7 Introductory Trigonometry
1-8 The Functions Sin O and Cos O
CHAPTER 2: The Derivative of a Function
2-0 What is Calculus ?
2-1 The Problems of Tangents
2-2 How to Calculate the Slope of the Tangent
2-3 The Definition of the Derivative
2-4 Velocity and Rates of Change: Newton and Leibniz
2-5 The Concept of a Limit: Two Trigonometric Limits
2-6 Continuous Functions: The Mean Value Theorem and Other Theorems
CHAPTER 3: The Computation of Derivatives
3-1 Derivatives of Polynomials
3-2 The Product and Quotient Rules
3-3 Composite Functions and the Chain Rule
3-4 Some Trigonometric Derivatives
3-5 Implicit Functions and Fractional Exponents
3-6 Derivatives of Higher Order
CHAPTER 4: Applications of Derivatives
4-1 Increasing and Decreasing Functions: Maxima and Minima
4-2 Concavity and Points of Inflection
4-3 Applied Maximum and Minimum Problems
4-4 More Maximum-Minimum Problems
4-5 Related Rates
4-6 Newtons Method for Solving Equations
4-7 Applications to Economics: Marginal Analysis
CHAPTER 5: Indefinite Integrals and Differential Equations
5-1 Introduction
5-2 Differentials and Tangent Line Approximations
5-3 Indefinite Integrals: Integration by Substitution
5-4 Differential Equations: Separation of Variables
5-5 Motion Under Gravity: Escape Velocity and Black Holes
CHAPTER 6: Definite Integrals
6-1 Introduction
6-2 The Problem of Areas
6-3 The Sigma Notation and Certain Special Sums
6-4 The Area Under a Curve: Definite Integrals
6-5 The Computation of Areas as Limits
6-6 The Fundamental Theorem of Calculus
6-7 Properties of Definite Integrals
CHAPTER 7: Applications of Integration
7-1 Introduction: The Intuitive Meaning of Integration
7-2 The Area between Two Curves
7-3 Volumes: The Disk Method
7-4 Volumes: The Method of Cylindrical Shells
7-5 Arc Length
7-6 The Area of a Surface of Revolution
7-7 Work and Energy
7-8 Hydrostatic Force
CHAPTER 8: Exponential and Logarithm Functions
8-1 Introduction
8-2 Review of Exponents and Logarithms
8-3 The Number e and the Function y = e^x
8-4 The Natural Logarithm Function y = ln x
8-5 Applications
Population Growth and Radioactive Decay
8-6 More Applications
CHAPTER 9: Trigonometric Functions
9-1 Review of Trigonometry
9-2 The Derivatives of the Sine and Cosine
9-3 The Integrals of the Sine and Cosine
9-4 The Derivatives of the Other Four Functions
9-5 The Inverse Trigonometric Functions
9-6 Simple Harmonic Motion
9-7 Hyperbolic Functions
CHAPTER 10 : Methods of Integration
10-1 Introduction
10-2 The Method of Substitution
10-3 Certain Trigonometric Integrals
10-4 Trigonometric Substitutions
10-5 Completing the Square
10-6 The Method of Partial Fractions
10-7 Integration by Parts
10-8 A Mixed Bag
10-9 Numerical Integration
CHAPTER 11: Further Applications of Integration
11-1 The Center of Mass of a Discrete System
11-2 Centroids
11-3 The Theorems of Pappus
11-4 Moment of Inertia
CHAPTER 12: Indeterminate Forms and Improper Integrals
12-1 Introduction. The Mean Value Theorem Revisited
12-2 The Indeterminate Form 0/0. L'Hospital's Rule
12-3 Other Indeterminate Forms
12-4 Improper Integrals
12-5 The Normal Distribution
CHAPTER 13: Infinite Series of Constants
13-1 What is an Infinite Series ?
13-2 Convergent Sequences
13-3 Convergent and Divergent Series
13-4 General Properties of Convergent Series
13-5 Series on Non-negative Terms: Comparison Tests
13-6 The Integral Test
13-7 The Ratio Test and Root Test
13-8 The Alternating Series Test
CHAPTER 14: Power Series
14-1 Introduction
14-2 The Interval of Convergence
14-3 Differentiation and Integration of Power Series
14-4 Taylor Series and Taylor's Formula
14-5 Computations Using Taylor's Formula
14-6 Applications to Differential Equations
14-7 (optional) Operations on Power Series
14-8 (optional) Complex Numbers and Euler's Formula
CHAPTER 15: Conic Sections
15-1 Introduction
15-2 Another Look at Circles and Parabolas
15-3 Ellipses
15-4 Hyperbolas
15-5 The Focus-Directrix-Eccentricity Definitions
15-6 (optional) Second Degree Equations
CHAPTER 16: Polar Coordinates
16-1 The Polar Coordinate System
16-2 More Graphs of Polar Equations
16-3 Polar Equations of Circles, Conics, and Spirals
16-4 Arc Length and Tangent Lines
16-5 Areas in Polar Coordinates
CHAPTER 17: Parametric Equations
17-1 Parametric Equations of Curves
17-2 The Cycloid and Other Similar Curves
17-3 Vector Algebra
17-4 Derivatives of Vector Functions
17-5 Curvature and the Unit Normal Vector
17-6 Tangential and Normal Components of Acceleration
17-7 Kepler's Laws and Newton's Laws of Gravitation
CHAPTER 18: Vectors in Three-Dimensional Space
18-1 Coordinates and Vectors in Three-Dimensional Space
18-2 The Dot Product of Two Vectors
18-3 The Cross Product of Two Vectors
18-4 Lines and Planes
18-5 Cylinders and Surfaces of Revolution
18-6 Quadric Surfaces
18-7 Cylindrical and Spherical Coordinates
CHAPTER 19: Partial Derivatives
19-1 Functions of Several Variables
19-2 Partial Derivatives
19-3 The Tangent Plane to a Surface
19-4 Increments and Differentials
19-5 Directional Derivatives and the Gradient
19-6 The Chain Rule for Partial Derivatives
19-7 Maximum and Minimum Problems
19-8 Constrained Maxima and Minima
19-9 Laplace's Equation, the Heat Equation, and the Wave Equation
19-10 (optional) Implicit Functions
CHAPTER 20: Multiple Integrals
20-1 Volumes as Iterated Integrals
20-2 Double Integrals and Iterated Integrals
20-3 Physical Applications of Double Integrals
20-4 Double Integrals in Polar Coordinates
20-5 Triple Integrals
20-6 Cylindrical Coordinates
20-7 Spherical Coordinates
20-8 Areas of curved Surfaces
CHAPTER 21: Line and Surface Integrals
21-1 Green's Theorem, Gauss's Theorem, and Stokes' Theorem
21-2 Line Integrals in the Plane
21-3 Independence of Path
21-4 Green's Theorem
21-5 Surface Integrals and Gauss's Theorem
21-6 Maxwell's Equations : A Final Thought
A: The Theory of Calculus
A-1 The Real Number System
A-2 Theorems About Limits
A-3 Some Deeper Properties of Continuous Functions
A-4 The Mean Value theorem
A-5 The Integrability of Continuous Functions
A-6 Another Proof of the Fundamental Theorem of Calculus
A-7 Continuous Curves With No Length
A-8 The Existence of e = lim h->0 (1 + h)^(1/h)
A-9 Functions That Cannot Be Integrated
A-10 The Validity of Integration by Inverse Substitution
A-11 Proof of the Partial fractions Theorem
A-12 The Extended Ratio Tests of Raabe and Gauss
A-13 Absolute vs Conditional Convergence
A-14 Dirichlet's Test
A-15 Uniform Convergence for Power Series
A-16 Division of Power Series
A-17 The Equality of Mixed Partial Derivatives
A-18 Differentiation Under the Integral Sign
A-19 A Proof of the Fundamental Lemma
A-20 A Proof of the Implicit Function Theorem
A-21 Change of Variables in Multiple Integrals
B: A Few Review Topics
B-1 The Binomial Theorem
B-2 Mathematical Induction
New Features
• Revision highlights include the early introduction of trigonometry, extensive reworking of the infinite series chapters, and the addition of new exercises at varying levels of difficulty.
• New topics include first-order nonlinear differential equations, elementary probability, and hyperbolic functions.
• Two long appendices (Variety of additional topics, Biographical notes) have been removed from the text (will be available in the text, CALCULUS GEMS).
• The text offers full coverage for the full majors on engineering calculus, but, remains shorter than most competition.
|
{"url":"http://www.maa.org/publications/maa-reviews/calculus-with-analytic-geometry","timestamp":"2014-04-17T01:54:33Z","content_type":null,"content_length":"104391","record_id":"<urn:uuid:4d14465e-1eee-46a1-bf80-f638e02b4d4c>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
|
If "tensor" has an adjoint, is it automatically an "internal Hom"?
up vote 12 down vote favorite
Let $\mathcal C,\otimes$ be a monoidal category, i.e. $\otimes : \mathcal C \times \mathcal C \to \mathcal C$ is a functor, and there's a bit more structure and properties. Suppose that for each $X \in \mathcal C$, the functor $X \otimes - : \mathcal C \to \mathcal C$ has a right adjoint. I will call this adjoint (unique up to canonical isomorphism of functors) $\underline{\rm Hom}(X,-) : \mathcal C \to \mathcal C$. By general abstract nonsense, $\underline{\rm Hom}(X,-)$ is contravariant in $X$, and so defines a functor $\underline{\rm Hom}: \mathcal C^{\rm op} \times \mathcal C \to \mathcal C$. If $1 \in \mathcal C$ is the monoidal unit, then $\underline{\rm Hom}(1,-)$ is (naturally isomorphic to) the identity functor.
Then there are canonically defined "evaluation" and "internal composition" maps, both of which I will denote by $\bullet$. Indeed, we define "evaluation" $\bullet_{X,Y}: X\otimes \underline{\rm Hom}(X,Y) \to Y$ to be the map that corresponds to ${\rm id}: \underline{\rm Hom}(X,Y) \to \underline{\rm Hom}(X,Y)$ under the adjunction. Then we define "composition" $\bullet_{X,Y,Z}: \underline{\rm Hom}(X,Y) \otimes \underline{\rm Hom}(Y,Z) \to \underline{\rm Hom}(X,Z)$ to be the map that corresponds under the adjunction to $\bullet_{Y,Z} \circ (\bullet_{X,Y} \otimes {\rm id}) : X \otimes \underline{\rm Hom}(X,Y) \otimes \underline{\rm Hom}(Y,Z) \to Z$. (I have suppressed all associators.)
Question: Is $\bullet$ an associative multiplication? I.e. do we necessarily have equality of morphisms $\bullet_{W,Y,Z} \circ (\bullet_{W,X,Y} \otimes {\rm id}) \overset ? = \bullet_{W,X,Z} \circ ({\rm id}\otimes \bullet_{X,Y,Z})$ as maps $\underline{\rm Hom}(W,X) \otimes \underline{\rm Hom}(X,Y) \otimes \underline{\rm Hom}(Y,Z) \to \underline{\rm Hom}(W,Z)$? If not, what extra conditions on $\otimes$ are necessary/sufficient?
add comment
2 Answers
active oldest votes
It is associative. Consider the evaluation cube drawn here. Four of the faces commute by definition of the composition map, and one by functoriality of the tensor product. The
commutativity of these five faces implies that any of the maps $W \otimes \operatorname{Hom}(W, X) \otimes \operatorname{Hom}(X, Y) \otimes \operatorname{Hom}(Y, Z) \to Z$ are equal,
so by adjunction, the two composites of compositions are equal.
up vote 11 down vote accepted
add comment
up vote 0 down vote
In S. Eilenberg and G. M. Kelly, Closed categories, in Proc. C. O. C. A. (La Jolla, 1965), there is a comprehensive study of monoidal and closed structures on a category, and of the relation and equivalence between them.
add comment
Not the answer you're looking for? Browse other questions tagged ct.category-theory or ask your own question.
|
{"url":"http://mathoverflow.net/questions/21382/if-tensor-has-an-adjoint-is-it-automatically-an-internal-hom","timestamp":"2014-04-18T01:15:50Z","content_type":null,"content_length":"54596","record_id":"<urn:uuid:e2fb8c72-5e7e-4dcf-aed8-8ac85bd833b3>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Birth and education
Edmund Melson Clarke was born on July 27, 1945. He initially studied mathematics, receiving a BA from the University of Virginia in 1967 and an MA from Duke University in 1968. But by the time he
enrolled in a doctoral program at Cornell University, he had switched to computing science. At Cornell he studied under Robert Constable, a pioneer in making deep connections between mathematical
logic and computing. After graduation Clarke returned to teach at Duke for two years, moving to Harvard University in 1978. He joined Carnegie Mellon University in 1982, where he is currently the
FORE Systems University Professor of Computer Science and Professor of Electrical and Computer Engineering.
Clarke’s career has focused on mathematical reasoning about computer systems, with an emphasis on reasoning about the reliability of those systems. Such reasoning is necessary but very hard. A
computer system executes simple operations, but those operations can occur in a staggering number of different orders. This makes it impossible for the designer to envision every possible sequence
and predict its consequences. Yet every one of those sequences, no matter how infrequently executed, must be correct. If a program executes an incorrect sequence, at the very least it will waste a
user’s time, while at the worst it could cause injury or loss of life.
The sequences become even more difficult to envision in systems with multiple programs running at the same time—a feature that has long been present in computer hardware and has become widespread in
software since the beginning of the 21st century. Mathematical reasoning, and specifically its expression in formal logic, in principle is sufficient to describe every possible sequence and ensure
that all of them are correct, even for simultaneously-running programs. In practice, however, classical mathematical reasoning is awkwardly-matched to describing the many possible execution orderings
in a computer system.
Inventing model checking
Early researchers addressed this mismatch by developing logical forms better-suited to describing computer systems. One of the first was by Tony Hoare. Hoare’s logic could be used to prove that every
possible execution of a system would only execute an acceptable sequence of operations. This opened the possibility that systems could be proven to perform according to specification every time, no
matter the circumstances. Clarke’s early research strengthened the foundations of Hoare’s logic and extended his method. Although Hoare’s method worked for smaller systems, it was close to impossible
to apply to systems of any real size. The dream of powerful, effective methods for reasoning about all possible orderings of a system remained unfulfilled.
In 1977, Amir Pnueli introduced temporal logic, an alternative logic for verifying computer systems. As the name implies, temporal logic allows explicit reasoning about time. In temporal logic it is
possible to express statements such as, “This condition will remain true until a second condition becomes true”.
Clarke and his student E. Allen Emerson saw an important possibility in temporal logic: it could be directly checked by machine. Whereas Hoare’s logic required the designer to consider every detail
of both the system and the argument about the system’s correctness—substantially increasing the designer’s workload—Pnueli’s logic could be implemented in a computer program. The responsibilities
could be divided: the designer focused on specifying the system, while the software ensured that the proposed system would always perform correctly.
Clarke and Emerson realized that a program could exhaustively construct every possible sequence of actions a system might perform, and for every action it could evaluate a property expressed in
temporal logic. If the program found the property to be true for every possible action in every possible sequence, this proved the property for the system. In the language of mathematical logic,
Clarke and Emerson’s program checked that the possible execution sequences form a “model” of the specified property. Working independently, Jean-Pierre Queille and Joseph Sifakis developed similar
ideas. The technology of model checking was born.
A great strength of the model checking approach is that when it detects a problem in a system design, the checker prints out an example sequence of actions that gives rise to the problem. Initial
designs always get some details wrong. The error traces provided by model checking are invaluable for designers, because the traces precisely locate the source of the problems.
Averting the state space explosion
Although the 1981 paper [2] demonstrated that the model checking was possible in principle, its application to practical systems was severely limited. The most pressing limitation was the number of
states to search. Early model checkers required explicitly computing every possible configuration of values the program might assume. For example, if a program counts the millimeters of rain at a
weather station each day of the week, it will need 7 storage locations. Each location will have to be big enough to hold the largest rain level expected in a single day. If the highest rain level in
a day is 1 meter, this simple program will have 10^21 possible states, slightly less than the number of stars in the observable universe. Early model checkers would have to verify that the required
property was true for every one of those states.
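To see how quickly such numbers grow: each of the 7 storage locations can hold roughly 1,000 distinct values (0 to 1,000 millimeters), so the number of possible configurations is about 1000^7 = 10^21.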
Systems of practical size manipulate much more data than the simple example above. The number of possible states grows as well, at an explosive speed. This rapid growth is called the state space
explosion. Although early model checkers demonstrated that the technology was feasible for small systems, it was not ready for wider use.
Clarke and his student Ken McMillan had a fundamental insight: The state space explodes because the number of states a memory location can assume is much, much bigger than the size of the location
itself. The memory location is compact because it encodes many potential states but only contains one at a time. Clarke and McMillan observed that this process could be applied in reverse: with the
right encoding, many potential states could be represented by a single value. From the literature, McMillan found an encoding that met the twin goals of tersely encoding multiple states while at the
same time permitting efficient computation of formulas in temporal logic.
The new representation dramatically reduced the storage required to represent state spaces, in turn reducing the time required to run a model checker on systems of practical size. They called these
new systems symbolic model checkers. In 1995, Clarke, McMillan, and colleagues used this approach to demonstrate flaws in the design of an IEEE standard for interconnecting computer components.
Before this, the reliability of such standards had only been informally analyzed, leaving many rare cases unconsidered and potential errors undiscovered. This was the first time every possible case
of an IEEE standard had been exhaustively checked.
With enhancements such as these, model checking has become a mature technology. It is widely used to verify designs for integrated circuits, computer networks, and software, by companies such as
Intel and Microsoft. Model checkers have been used to analyze systems whose state space (10^120) is substantially larger than the number of atoms in the observable universe (around 10^80). It is
becoming particularly important in the verification of software designed for recent generations of integrated circuits, which feature multiple processors running simultaneously. Model checking has
substantially improved the reliability and safety of the systems upon which modern life depends.
Author: Ted Kirkpatrick
|
{"url":"http://amturing.acm.org/award_winners/clarke_1167964.cfm","timestamp":"2014-04-16T15:58:45Z","content_type":null,"content_length":"20024","record_id":"<urn:uuid:adef11ca-dadf-4593-bf5e-7e9fbc3501ba>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00234-ip-10-147-4-33.ec2.internal.warc.gz"}
|
6 Effective Theory
Even in isotropic models, the difference equations governing states can be difficult to analyze. And even if one finds solutions, analytically or numerically, one still has to face the conceptual problems of interpreting the wave function correctly. In quantum physics there is a powerful tool, effective equations, which allows one to include quantum effects by correction terms in equations of the classical type. In quantum mechanics, for instance, one would then be dealing with ordinary differential rather than partial differential equations. Moreover, the wave function would only appear indirectly: one solves equations for classical-type variables, such as expectation values, that have an immediate physical interpretation. Still, in some regimes quantum effects can be captured reliably by correction terms in the effective equations. Effective equations can thus be seen as a systematic approximation scheme to the partial differential equations of quantum mechanics.
Quantum cosmology is facing the problems of quantum mechanics, some of which in an even more severe form because one is by definition dealing with a closed system without outside observers. It also
brings its own special difficulties, such as the problem of time. Effective equations can thus be even more valuable here than in other quantum systems. In fact, effective theories for quantum
cosmology can be and have been derived and already provided insights, especially for loop quantum cosmology. These techniques and some of the results are described in this section. As we will see,
some of the special problems of quantum gravity, such as the physical inner product and anomaly issues, are in fact much easier to deal with at an effective level compared to the underlying quantum
theory, in terms of its operators and states. In addition to conceptual problems, phenomenological problems are addressed because effective equations provide the justification for the
phenomenological correction terms discussed in Section 4. But they will also show that there are new corrections not discussed before, which have to be included for a complete analysis. Effective
equations thus provide a means to test the self-consistency of approximations.
|
{"url":"http://relativity.livingreviews.org/Articles/lrr-2008-4/articlese6.html","timestamp":"2014-04-18T13:08:01Z","content_type":null,"content_length":"8042","record_id":"<urn:uuid:d558f1bf-7102-4aaf-a901-386a53e0ac0e>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00229-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Carver Mead: Finish the physics revolution | EE Times
|
{"url":"http://www.eetimes.com/messages.asp?piddl_msgthreadid=40907&piddl_msgid=248590","timestamp":"2014-04-20T21:16:20Z","content_type":null,"content_length":"173357","record_id":"<urn:uuid:eee2627b-1146-4b05-b573-17df61e636aa>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Grading at Pitt
Hey guys, I am going to Pitt in the fall and was looking at the website the other day, and it didn't seem like Pittsburgh gave a very strict rank of its students, nor did it use much of a forced curve. Am I wrong about this? From the charts on their website it seemed that quite a few students got A's, and I am trying to figure out the grading method they use there. Thanks for the help.
|
{"url":"http://www.lawschooldiscussion.org/index.php?topic=3006950.msg3055487","timestamp":"2014-04-16T20:02:12Z","content_type":null,"content_length":"37856","record_id":"<urn:uuid:0947d7b8-7964-4fda-8ea0-c520f50f44de>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fidel Kennedy site
As a quick overview of the variables in options pricing, the option price is determined by the price of the underlying security, the strike price of the option, the amount of time until expiration, the volatility of the underlying, any dividends outstanding, and the current risk-free rate of interest.
So why do experienced traders care about the "Option Greeks"? Because they are a valuable tool for predicting what will happen to the price of an option as market variables change. This may seem hard to grasp at first, but option prices do not move exactly with the price of the underlying asset. However, any trader who dedicates the time to learn the essentials will begin to understand which factors contribute to the movement in the price of an option, and what effect each factor has.
Many professional traders use the Option Greeks to manage a portfolio of many options at a variety of strikes over a variety of timeframes. In order to build a neutral portfolio, market professionals also use the Greeks to ensure that their market exposure is properly hedged and adjusted accordingly.
For the day trader or investor, the Greeks represent a means of understanding why and how an option's price changes as any one of the variables changes.
The five commonly cited Option Greeks are Delta, which measures how the option price changes with the price of the underlying stock; Gamma, which measures the rate of change of the Delta; Vega, which measures sensitivity to volatility; Theta, which measures the effect of the passage of time; and Rho, which accounts for changes in interest rates.
The first and most commonly referenced Greek is the Delta. As mentioned, the delta is the rate of change in the option price relative to the rate of change in the underlying stock. This is important to understand, since many option strategies are designed to profit from correctly anticipating the price change of the underlying security.
For an example of Delta, take a stock priced at $50.00 and an at-the-money option at the $50.00 strike. There are 30 days until expiration; the call option is priced at $2.32 with a Delta of 0.53. The delta reflects the expected change assuming no other variables change.
If the price of the stock increases by a dollar to $51.00, we can expect the call option to increase from $2.32 to approximately $2.85.
In the same respect, if the stock price drops from $50.00 down to $49.00, we can expect the call option to decrease in price from $2.32 to approximately $1.79.
Notice that in both cases the price has changed by the amount of the Delta. Some of the key features of the Delta are:
As a call option becomes deeper "in-the-money", the delta approaches 1.
Call options always have a positive delta.
At the point where the delta reaches 1, the call option begins to replicate the price movement of the underlying stock almost dollar for dollar.
When we look at the delta of a put option, the deeper in-the-money the option gets, the closer the delta gets to minus 1. Put options always have a negative delta.
The next Option Greek is the Gamma. Because the delta is always changing, there needed to be a way to measure that progressive change. The Gamma was therefore developed as a means of quantifying the rate of change of the delta. It is mainly used by professional traders to adjust delta-hedged portfolios.
The next Greek is the Vega. The Vega is the measure of the change in the option price relative to a percentage-point change in implied volatility.
For this example of Vega, take a stock priced at $50.00 and an at-the-money option at the $50.00 strike. There are 30 days until expiration. The call option is priced at $2.06 with an implied volatility of 35% and a corresponding Vega of 0.057.
If the implied volatility of the stock increases by one point to 36%, we can expect the call option to increase from $2.06 to approximately $2.12, the amount of the Vega.
In the same respect, if the implied volatility drops from 35% down to 34%, we can expect the call option to decrease in price from $2.06 to approximately $2.00.
The next Option Greek is Theta. The Theta is a measure of the change in the option price relative to the change in time to maturity. Every day that passes, an option loses some of its value; the Theta measures that rate of decay.
For this example of Theta, take a stock priced at $50.00 and an at-the-money option at the $50.00 strike. There are 30 days until expiration. The call option is priced at $2.06 with a Theta of minus 0.041. If the number of days until expiration drops from 30 to 29, the option decreases from $2.06 to approximately $2.02, the amount of the Theta.
The final Option Greek is Rho. Rho is a measure of the change in the value of an option relative to a change in the risk-free rate of interest. This particular Greek is far more relevant for longer-term options, as the interest rate effect on a short-term option is less apparent.
For this example of Rho, take a stock priced at $50.00 and an at-the-money option at the $50.00 strike. There are 30 days until expiration. The call option is priced at $2.06 with interest rates at 3.00% and a Rho of 0.02. If interest rates rise to 4%, the option price increases from $2.06 to $2.08, the value of Rho.
In the same respect, if interest rates drop from 3% down to 2%, the option price decreases from $2.06 to $2.04.
In conclusion, by studying the Option Greeks, an investor or trader is able to understand why an option is or is not moving in correlation with the underlying security.
By understanding the variables that influence option prices, the day trader or investor gains the confidence needed to incorporate options into a portfolio and take advantage of many strategies to help meet their goals.
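As a rough illustration of how these sensitivities are combined in practice, here is a small sketch (Python; the numbers come from the examples above, and the simple first-order sum is an assumption that ignores gamma and any interaction between the inputs):

# First-order estimate of an option's new price from its quoted Greeks.
# theta is passed as quoted (negative for a long option); vega and rho are per percentage point.
def estimate_new_price(price, delta, vega, theta, rho,
                       d_stock=0.0, d_vol_pts=0.0, days_passed=0.0, d_rate_pts=0.0):
    return (price
            + delta * d_stock        # underlying moves by d_stock dollars
            + vega * d_vol_pts       # implied volatility moves by d_vol_pts points
            + theta * days_passed    # time decay over days_passed days
            + rho * d_rate_pts)      # risk-free rate moves by d_rate_pts points

# Delta example: stock rises $1.00, the $2.32 call goes to about $2.85
print(round(estimate_new_price(2.32, 0.53, 0.057, -0.041, 0.02, d_stock=1.0), 2))
# Theta example: one day passes, the $2.06 call decays to about $2.02
print(round(estimate_new_price(2.06, 0.53, 0.057, -0.041, 0.02, days_passed=1.0), 2))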
|
{"url":"http://ingaleach1943.hpage.co.in/binary_options_trading_online_options_trading_basics_-_what_are_the_option_gre_56715722.html","timestamp":"2014-04-20T13:57:21Z","content_type":null,"content_length":"16023","record_id":"<urn:uuid:577a1ca9-cb93-4899-a4a3-a81f51c4f2aa>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cam Timing vs. Compression Analysis
The experienced engine builder (Harley-Davidson, Mopar, or Ferrari) knows that the “advertised” compression ratio (suggested by the owner’s manual, piston manufacturer, etc.) is only one of several
factors involved in determining how much pressure is developed in the combustion chamber.
This article discusses briefly how to roughly calculate how changes in either cam timing, compression, or both affect low-speed torque and response. Some of the more important factors are:
» Theoretical or mechanical compression ratio (“advertised”, or nominal), including corrections made for changes in combustion chamber volume due to piston compression distance, valve reliefs in
piston dome, combustion chamber and/or bore, alternate gasket thickness, head, cylinder and/or case milling, etc.
» Intake valve closing point (generally the 2^nd number in the cam timing data, given in degrees after bottom dead center, or ABDC)
» Internal geometry of the motor
This last factor is not fully appreciated by most mechanics. The crank-pin is offset from the sprocket & pinion shaft axis by exactly ½ the stroke length. The geometric relationship between the rod
and the crank-pin is one not generally understood, but which plays a key role in the motor’s breathing and overall power characteristics.
The ratio of rod length to stroke length is almost always between 2.2:1 on the “long” end, and 1.4:1 on the “short” end. 99% of all motors fall between these 2 extreme limits, with most standard
production designs between 2.0:1 and 1.5:1.
The percentage of mixture captured, compressed, and burned in the cylinder at a specific point of intake valve closure partially depends on the motor’s rod ratio. The piston’s motion during flywheel
rotation is not symmetrical: the piston speed before & after TDC is faster than before & after BDC, but the difference is not constant - it varies with the rod ratio. Two motors with the same stroke,
but different rod lengths, will not have the piston in the same place at the same point of flywheel rotation. The only 2 exceptions are 0° (TDC) where both strokes are zero, and 180° (BDC) where both
strokes are equal & nominal.
The long-rod motor will have the piston closer to TDC than the short-rod motor at any point between 90° BTDC & 90° ATDC. The short-motor will have the piston closer to BDC than the long-rod motor at
any point between 90° BBDC & 90° ABDC.
Short-rod motors (“n” = 1.5 to 1.7:1) have slower piston movement upwards away from BDC on the compression stroke, and will capture more mixture at the same point of intake valve closure. This makes
them more tolerant of extended (late intake closing) cam timing.
Longer duration cams generally need more static compression. The usual cam functions (also known as valve events) are Intake Opening Point, Intake Closing Point, Exhaust Opening Point, Exhaust
Closing Point, Overlap, Lobe Separation Angle, and Valve Lift. The only one which affects cylinder pressure directly is the Intake Closing Point - where the intake valve has just closed ABDC at the
beginning of the compression stroke. Intake valve closure after BDC (present in all modern cams) always causes some of the intake mixture to be compressed backwards out of the cylinder by the rising
piston at very low speeds. Late (radical) intake valve closure causes some mixture to escape even at moderate speeds, reducing cylinder pressure during the operating range (at the lower end of the
torque curve). The point in the engine’s RPM range where this reversion stops and full-stroke capture (or more!) occurs is frequently the torque peak, and depends on many factors, including cam
timing, port efficiency, mixture velocity, manifold runner & port cross-sectional area & volume, etc. I have no method of applying math to these factors - too complex!
If you have (or plan to install) a long-duration cam, you can (and should) regain some of this lost pressure by raising the compression ratio, but the 2 effects do not always “cancel each other out”
- you can't get something for nothing. Even though a higher compression ratio will give you back a higher pressure gauge reading, the power may still be lower, at least at low to moderate speeds.
The error lies in the fact that a smaller volume of mixture is being compressed to a higher ratio. Even though the pressure gauge reading taken during cranking or idling is higher, the total of cylinder
pressure times the actual mixture volume captured may still be lower (compared to the original milder cam and moderate compression ratio).
As a note: the gas present in the combustion chamber @ TDC is presumed to be non-combustible exhaust gas remaining from the previous cycle, and is therefore not included in the mixture volume for our
purposes. At high engine speeds (under certain conditions) overlap does cause this remainder to be partially combustible, but this is not true at cranking to moderate engine speed.
In short, a cam change can't be “cured” completely by raising the compression ratio, but it’s still an excellent idea. The “V/P Index” , which is the subject of this Tech Paper, will not predict
maximum torque or power, but still provides useful information about conditions present in the motor at cranking speed. These same conditions generally affect low-speed response & flexibility,
knock-resistance, etc. for spark advance settings, sprocket size choices, etc.
Another useful purpose is to anticipate the effect of higher elevation on cylinder pressure. Perform a calculation (as described below) first, then another substituting a lower value than 14.7 psi
for “Atmospheric Pressure” . Compare the results to estimate how much adjustment to the nominal compression ratio is needed to (partially) compensate for the lower air density.
V/P Index
A reasonable estimate of the relative effects of compression ratio, rod ratio, and intake valve closing point at cranking speed can be made by use of a simple formula. Although a “power” (exponent)
is used in the formula, most pocket and on-line calculators have a function to permit a fractional exponent to be entered (use Microsoft “Calculator” in Win95, etc. It has this feature - to use it,
select “Scientific” view). Try it out - it’s not that difficult!
» Atmospheric pressure = 14.7 psi (pounds per square inch, or 29.92 inches of mercury). This is a constant for our purposes here at sea level, but should be adjusted downward for extreme elevations.
To convert psi to In. hg, multiply by 2.036; to convert In. hg to psi multiply by .0491.
» Intake valve closing point (ABDC). At cranking speed, intake flow reversal still takes place until the valve is actually closed, so the “paper duration” (nominal) closing point is used for
calculations. By the way, at operating speeds the general consensus is that the intake valve is effectively closed (the flow has stopped for practical purposes) at about .050” valve lift.
» Theoretical compression ratio (including the corrections described above).
» Stroke remaining at intake valve closing point, which is determined by the engine’s stroke and rod-to-stroke ratio (“n”), calculated by dividing the rod length by the stroke length.
» Corrected (true) compression ratio.
» Cranking pressure, absolute (in psi, zero elevation std.).
» Gauge pressure (in psi, zero elevation std.).
» Corrected cylinder volume (cylinder displacement trapped when the intake valve closes).
» V/P Index*
*V/P (Volume/Pressure) Index is a mathematically-derived figure, the product of the corrected volume and cranking pressure. It’s a very useful barometer of the motor’s low-speed power, and may be
solved with new variables for a “before & after” analysis when modifications are contemplated. Anyone planning a cam or compression ratio change would definitely want to know how much the low-speed
flexibility would be affected. Some useful insight may also be derived as to the motor’s tolerance of pump-quality gas (ping & knock resistance). The results of these comparisons are generally quite revealing.
To calculate the V/P Index for a specific motor, you'll need the exact piston position at all relevant flywheel positions from BDC to about 90° ABDC, the compression ratio, and the intake valve
closing point.
If you'd like to calculate piston positions for your own motor, the following formula has been contributed by a reader:
SE = (S ÷ 2) + R + ((S ÷ 2) × cosA) - SQRT ((R^2) - ((S ÷ 2) × sinA)^2)
where “SE” is the effective stroke, “S” is the nominal (full) stroke, “R” is the rod length, “A” is the crank-shaft angle in degrees ABDC from 0 to 90°, and “SQRT” (or “SQR” ) is the square root
Be sure to copy the equation very accurately, including the “nested” pairs of parentheses. This should work “as-is” for calculators, but most computer programs will require “A” to be converted from
degrees to radians: Radians = Degrees × .017453, or Degrees × Pi ÷ 180.
To make a V/P Index calculation, find your intake closing point. Unfortunately, most cam manufacturers (including Andrews Products, Sifton, Leineweber, etc.) don't list nominal closing points in
their specs - you might try to call them for this info (no, I don't have phone numbers, and I have no cam timing figures). Locate this figure as the flywheel position, and take the stroke from that
position to calculate the “effective” cylinder volume (“VE”), which is amount trapped by the closed intake valve; which is always less than the nominal volume (“VN”). Using this, calculate the
effective compression ratio (“CRE” ); also lower - the combustion chamber volume is unchanged, but the cylinder volume is less. At cranking speed, the absolute cranking pressure (“CP” ) is a function
of the 1.25 power of the effective compression ratio (i.e., for 8:1 compression ratio, use 8^1.25) times atmospheric pressure (14.7 psi @ sea level, etc.). This adjustment (1.25 power) is a
polytropic value used in preference to the traditional adiabatic value (1.4) for the ratio of variable heats for air and similar gases at the temperatures present. This compensates for the
temperature rise caused by compression, as well as heat lost to the cylinder. 1.25 is not accurate in all cases, since the amount of heat lost will vary among engines based on design, size and
materials used, but provides useful results for purposes of comparison.
To predict a pressure gauge reading subtract 14.7 (or the correct atmospheric pressure at test elevation) to compensate for the fact that a gauge in free air reads “0” , not 14.7 psi, even though
atmospheric pressure is always present.
The V/P Index number is the product of the effective volume times the effective compression ratio, times the number of cylinders, times a correction factor weighted to produce a number roughly
proportionate to torque in ft./lbs.
V/P = VE × CP × N × .6%
┃ Symbol │ Meaning │ Definition or Calculation ┃
┃ B │ Bore │ Piston diameter, in inches ┃
┃ S │ Stroke │ Full (nominal) flywheel stroke; TDC to BDC measurement (180° rotation), in inches ┃
┃ SE │ Stroke, Effective │ Stroke measured from intake valve closing point (less than 180°) to TDC [always less than “S” , above] ┃
┃ VN │ Cylinder Volume, Nominal │ B^2 × S × .7854 (1 cylinder) ┃
┃ VE │ Cylinder Volume, Effective │ B^2 × SE × .7854 (1 cylinder) [always less than “VN” , above] ┃
┃ VC │ Chamber Volume │ VC = VN ÷ (CRN - 1). Total volume (1 chamber) above piston @ TDC, in inches. ┃
┃ N │ Number of cylinders │ (2 in this case) ┃
┃ AP │ Atmospheric Pressure │ 14.7 psi @ sea level (zero elevation); use the correct lower figure for higher elevations ┃
┃ CRN │ Compression Ratio, Nominal │ CRN = (VN + VC) ÷ VC ┃
┃ CRE │ Compression Ratio, Effective │ CRE = (VE + VC) ÷ VC [always less than “CRN” , above] ┃
┃ CP │ Cranking Pressure (absolute) │ CP = (CRE^1.25 × AP) ┃
┃ GP │ Gauge Pressure │ GP = (CRE^1.25 × AP) - AP. To predict gauge pressure, subtract atmospheric pressure (14.7 psi @ sea level, etc.) from absolute pressure. See above: . ┃
┃ V/P │ Volume/Pressure Index │ V/P = CP × VE × N × .6% (.6% or .006 is a correction factor to return a useful 2 digit number proportionate to torque) ┃
┃ Note: I have a Q-Basic program (your Windows 95/98/2000 PC already has it, probably in the “DOS” directory) to do all this annoying math automatically for any intake closing point, any ┃
┃ compression ratio, etc., including reduction in atmospheric pressure for elevation. I can supply you with a copy of this program tailored to your own motor on disk for $10.00. E-mail me here for ┃
┃ details: sales@victorylibrary.com for a copy. ┃
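For readers who prefer to script the arithmetic themselves, the following sketch (Python, not the author's Q-Basic program) implements the formulas from the table. The Panhead dimensions in the test line (bore 3.4375 in, stroke 3.96875 in, rod 7.4375 in) are assumed stock values that are not stated in this article, so treat that line only as a rough check.

import math

def effective_stroke(stroke, rod, closing_deg):
    # Stroke remaining from the intake closing point (degrees ABDC) to TDC.
    a = math.radians(closing_deg)           # degrees -> radians, as noted above
    half = stroke / 2.0
    return half + rod + half * math.cos(a) - math.sqrt(rod ** 2 - (half * math.sin(a)) ** 2)

def vp_index(bore, stroke, rod, chamber_vol, closing_deg, cylinders=2, atm=14.7):
    # Returns (effective CR, absolute cranking pressure, gauge pressure, V/P index).
    se = effective_stroke(stroke, rod, closing_deg)
    ve = bore ** 2 * se * 0.7854            # effective cylinder volume, one cylinder
    cre = (ve + chamber_vol) / chamber_vol  # effective compression ratio
    cp = cre ** 1.25 * atm                  # polytropic cranking pressure, absolute
    gp = cp - atm                           # what a compression gauge would read
    vp = cp * ve * cylinders * 0.006        # V/P index
    return cre, cp, gp, vp

# Rough check against the 74" Panhead example (8.5:1, 4.9 cu. in. chamber, intake closes 55 deg ABDC):
print(vp_index(3.4375, 3.96875, 7.4375, 4.9, 55))   # expect roughly (7.2, 174, 159, 64)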
Panhead Calculation
For a base-line on a 74” panhead motor, let’s use the original 8.5:1 compression (4.9” or 81cc chamber volume), and make a guess at the intake closing point of the original mild FLH cam at 55° ABDC.
The effective stroke at this point is 3.302” , for an effective volume of 61.3” . The effective compression ratio is 7.24:1. The effective pressure is 174 psi absolute (159 psi gauge pressure). The V
/P Index is 64. This explains the good torque on these motors, and why they require premium gas.
Let’s substitute a hotter cam with the intake closing point at 65° ABDC. The effective stroke at this point is 3.043” , for an effective volume of 56.5” . The effective compression ratio is 6.75:1.
The effective pressure is 159 psi absolute (145 psi gauge pressure). The V/P Index is 54 - the cam change delayed the intake closing by only 10° but reduced the cranking pressure by over 8%, and V/P
Index by over 15% - this explains why such a big cam change softens the low-speed torque quite a bit, and reduces pinging.
Let’s increase the compression ratio to 9.5:1 (4.33” or 71cc chamber) with the same hot cam. The effective compression ratio is 7.52:1. The effective pressure is now higher than the original figure
in the stock motor: 182 psi absolute (168 psi gauge pressure). This is not as effective as it may appear, as the high cylinder pressure may cause pinging, and 9.5:1 compression requires higher-domed
pistons, which will restrict breathing somewhat during overlap. The motor will require re-balancing, head, cylinder & oil temperature will be higher, and life expectancy will be shorter. But did we
get our power back? Not quite - the V/P Index gives you more of the picture: 61, still down over 4% compared to the stock motor, because the restored (higher) effective pressure is acting on reduced
cylinder volume. Of course, as the RPM increases towards the torque peak, the percentage captured rises, and will eventually reach and pass the effective pressure & volume of the stock cam at high
U-Series Calculation
Let’s try a stock ULH (38 hp), with 5.7:1 compression (8.45” or 138.5cc chamber), and guess the intake closing (roughly) @ 45° ABDC. The stock rod is 7.90625” long.
The effective stroke is 3.801” , and an effective volume of 70.5” . The effective compression ratio is 5.17:1. The cranking pressure is 114 psi absolute (99 psi gauge pressure). The V/P Index is 48.
Let’s compare this with a hot U-Series motor: 4-3/4” stroker for 88” , and delay (increase) the intake closing to 65°. Compression is about 6.25:1 with the same combustion chamber volume. The
effective stroke is 3.677” , and an effective volume of 68.3” . The effective compression ratio is 5.06:1. The cranking pressure is 111 psi absolute (96 psi gauge pressure). The V/P Index is 45. As
you see, the size and compression ratio increases still don't bring the motor back up to the stock figure when the cam is this hot - the V/P is down more than 6%.
|
{"url":"http://victorylibrary.com/tech/cam-c.htm","timestamp":"2014-04-16T16:13:05Z","content_type":null,"content_length":"67440","record_id":"<urn:uuid:df99d5ca-e915-401c-ac7e-002fe81de669>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
|
On the existence of hard sparse sets under weak reductions
Results 1 - 10 of 14
- Journal of Computer and System Sciences , 1999
"... A set A is k(n) membership comparable if there is a polynomial time computable function that, given k(n) instances of A of length at most n, excludes one of the 2 k(n) possibilities for the
memberships of the given strings in A. We show that if SAT is O(log n) membership comparable, then Unique ..."
Cited by 15 (1 self)
Add to MetaCart
A set A is k(n) membership comparable if there is a polynomial time computable function that, given k(n) instances of A of length at most n, excludes one of the 2^k(n) possibilities for the memberships of the given strings in A. We show that if SAT is O(log n) membership comparable, then UniqueSAT ∈ P. This extends the work of Ogihara; Beigel, Kummer, and Stephan; and Agrawal and Arvind [Ogi94, BKS94, AA94], and answers in the affirmative an open question suggested by Buhrman, Fortnow, and Torenvliet [BFT97]. Our proof also shows that if SAT is o(n) membership comparable, then UniqueSAT can be solved in deterministic time 2^o(n). Our main technical tool is an algorithm of Ar et al. [ALRS92] to reconstruct polynomials from noisy data through the use of bivariate polynomial
, 1995
"... In 1978, Hartmanis conjectured that there exist no sparse complete sets for P under logspace many-one reductions. In this paper, in support of the conjecture, it is shown that if P has sparse
hard sets under logspace many-one reductions, then P ` DSPACE[log 2 n]. The result is derived from a more ..."
Cited by 11 (1 self)
Add to MetaCart
In 1978, Hartmanis conjectured that there exist no sparse complete sets for P under logspace many-one reductions. In this paper, in support of the conjecture, it is shown that if P has sparse hard sets under logspace many-one reductions, then P ⊆ DSPACE[log^2 n]. The result is derived from a more general statement that if P has 2^polylog sparse hard sets under poly-logarithmic space-computable many-one reductions, then P ⊆ DSPACE[polylog]. 1 Introduction. In 1978, Hartmanis conjectured that no P-complete sets under logspace many-one reductions can be polynomially sparse; i.e., for any P-complete set A, ‖{x ∈ A : |x| ≤ n}‖ cannot be bounded by any polynomial in n [5]. The conjecture is interesting and fascinating. If the conjecture is true, then L ≠ P, because L = P implies any nonempty finite set being P-complete. So, with expectation that L is different from P, one might believe the validity of the conjecture. Nevertheless, such a reasoning would be fallacious, for, proving thi...
- SIAM Journal on Computing , 2007
"... We establish a relationship between the online mistake-bound model of learning and resourcebounded dimension. This connection is combined with the Winnow algorithm to obtain new results about
the density of hard sets under adaptive reductions. This improves previous work ..."
Cited by 8 (4 self)
Add to MetaCart
We establish a relationship between the online mistake-bound model of learning and resource-bounded dimension. This connection is combined with the Winnow algorithm to obtain new results about the density of hard sets under adaptive reductions. This improves previous work
- Theoretical Computer Science , 1995
"... en a graph G and a pair of vertices s; t, this reduction produces a polynomial number of graphs G 1 ; : : : ; G k of polynomial size, together with distinguished vertex-pairs (s 1 ; t 1 ); : : :
; (s k ; t k ), that satisfy the following conditions. If there is no path from s to t in G, then no G i ..."
Cited by 7 (3 self)
Add to MetaCart
Given a graph G and a pair of vertices s, t, this reduction produces a polynomial number of graphs G_1, ..., G_k of polynomial size, together with distinguished vertex-pairs (s_1, t_1), ..., (s_k, t_k), that satisfy the following conditions. If there is no path from s to t in G, then no G_i has a path from s_i to t_i; if there is a path from s to t in G, then with high probability, at least one of the G_i's has a unique path from s_i to t_i. This reduction is due to Avi Wigderson [Wig94], and it exploits the "isolation lemma" of Mulmuley, Vazirani and Vazirani
- In Proceedings of the 11th Conference on Computational Complexity , 1996
"... We prove that there is no sparse hard set for P under logspace computable bounded truth-table reductions unless P = L. In case of reductions computable in NC 1 , the collapse goes down to P = NC
1 . We generalize this result by parameterizing the sparseness condition, the space bound and the number ..."
Cited by 5 (1 self)
Add to MetaCart
We prove that there is no sparse hard set for P under logspace computable bounded truth-table reductions unless P = L. In case of reductions computable in NC^1, the collapse goes down to P = NC^1. We generalize this result by parameterizing the sparseness condition, the space bound and the number of queries of the reduction, apply the proof technique to NL and L, and extend all these theorems to two-sided error randomized reductions in the multiple access model, for which we also obtain new results for NP.
"... Building on a recent breakthrough by Ogihara, we resolve a conjecture made by Hartmanis in 1978 regarding the (non-) existence of sparse sets complete for P under logspace many-one reductions.
We show that if there exists a sparse hard set for P under logspace many-one reductions, then P = LOGSPACE. ..."
Cited by 5 (0 self)
Add to MetaCart
Building on a recent breakthrough by Ogihara, we resolve a conjecture made by Hartmanis in 1978 regarding the (non-)existence of sparse sets complete for P under logspace many-one reductions. We show that if there exists a sparse hard set for P under logspace many-one reductions, then P = LOGSPACE. We further prove that if P has a sparse hard set under many-one reductions computable in NC^1, then P collapses to NC^1.
, 1995
"... We prove unlikely consequences of the existence of sparse hard sets for P under deterministic as well as one-sided error randomized truth-table reductions. Our main results are as follows. We
establish that the existence of a polynomially dense hard set for P under (randomized) logspace bounded trut ..."
Cited by 4 (0 self)
Add to MetaCart
We prove unlikely consequences of the existence of sparse hard sets for P under deterministic as well as one-sided error randomized truth-table reductions. Our main results are as follows. We establish that the existence of a polynomially dense hard set for P under (randomized) logspace bounded truth-table reductions implies that P ⊆ (R)L, and that the collapse goes down to P ⊆ (R)NC^1 in case of reductions computable in (R)NC^1. We also prove that the existence of a quasipolynomially dense hard set for P under (randomized) polylog-space truth-table reductions using polylogarithmically many queries implies that P ⊆ (R)SPACE[polylog n]. The randomized space complexity classes we consider are based on the multiple access randomness concept. 1 Introduction. A lot of research effort in complexity theory has been spent on the sparse hard set problem for NP, i.e., the question whether there are sparse hard sets for NP under various polynomial-time reducibilities. Two major motivations ...
, 2000
"... We discuss the history and uses of the parallel census technique|an elegant tool in the study of certain computational objects having polynomially bounded census functions. A sequel [GH] will
discuss advances (including [CNS95] and Glaer [Gla00]), some related to the parallel census technique and ..."
Cited by 3 (3 self)
Add to MetaCart
We discuss the history and uses of the parallel census technique, an elegant tool in the study of certain computational objects having polynomially bounded census functions. A sequel [GH] will discuss advances (including [CNS95] and Glaßer [Gla00]), some related to the parallel census technique and some due to other approaches, in the complexity-class collapses that follow if NP has sparse hard sets under reductions weaker than (full) truth-table reductions.
, 1995
"... If there is a sparse set hard for P under bounded truth table reductions computable in LOGSPACE or NC 2 , then P = NC 2 . We give the details of the proof to this theorem. 1 Introduction
Recently a 1978 conjecture by Hartmanis [Har78] was resolved [CS95a], following a breakthrough by [Ogi95]. I ..."
Cited by 2 (0 self)
Add to MetaCart
If there is a sparse set hard for P under bounded truth-table reductions computable in LOGSPACE or NC^2, then P = NC^2. We give the details of the proof of this theorem. 1 Introduction. Recently a 1978 conjecture by Hartmanis [Har78] was resolved [CS95a], following a breakthrough by [Ogi95]. It was shown that there is no sparse set that is hard for P under logspace many-one reductions, unless P = LOGSPACE. Bounded truth-table reductions are a natural extension of many-one reductions and it is natural to ask what consequences can be drawn assuming there is a sparse set hard for P under bounded truth-table reductions computable in LOGSPACE. In this note we give the details of the proof of the theorem that if such a sparse set exists, then a very unlikely consequence follows, namely P = NC^2. This theorem is even valid for bounded truth-table reductions computable in NC^2. The proof for the case of 1-truth-table reductions, which already generalizes the many-one reductions,
- Journal of Computer and System Sciences , 1998
"... We prove that there is no sparse hard set for P under logspace computable bounded truthtable reductions unless P = L. In case of reductions computable in NC 1 , the collapse goes down to P = NC
1 . We parameterize this result and obtain a generic theorem allowing to vary the sparseness condition ..."
Cited by 2 (0 self)
Add to MetaCart
We prove that there is no sparse hard set for P under logspace computable bounded truth-table reductions unless P = L. In case of reductions computable in NC^1, the collapse goes down to P = NC^1. We parameterize this result and obtain a generic theorem allowing to vary the sparseness condition, the space bound and the number of queries of the truth-table reduction. Another instantiation yields that there is no quasipolynomially dense hard set for P under polylog-space computable truth-table reductions using polylogarithmically many queries unless P is in polylog-space. We also apply the proof technique to NL and L. We establish that there is no sparse hard set for NL under logspace computable bounded truth-table reductions unless NL = L, and that there is no sparse hard set for L under NC^1-computable bounded truth-table reductions unless L = NC^1. We show that all these results carry over to the randomized setting: If we allow two-sided error randomized reductions with
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=348979","timestamp":"2014-04-19T18:53:53Z","content_type":null,"content_length":"36898","record_id":"<urn:uuid:c9a8532b-9802-4733-b18e-90544d025cfa>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pattern when skip counting by 3's
November 12th 2012, 02:08 PM #1
Nov 2012
United States
Pattern when skip counting by 3's
What is the pattern when skip counting by 3's on a number chart from 1-100?
For 10's it would be that all the numbers end in 0's or that all the numbers are in the rightmost column.
Re: Pattern when skip counting by 3's
Re: Pattern when skip counting by 3's
"Skip counting by 3's"? Do you mean "3, 6, 7, 9, .." the red numbers on the chart? Those are all "multiples of 3", they are numbers of the form 3n for some integer n, n from 1 to 33.
For 10's it would be that all the numbers end in 0's or that all the numbers are in the rightmost column.
Did you look at your chart? "Numbers that end in 0" and "numbers that are in the rightmost column" are exactly the same! They could also be written "10n" for n from 1 to 10.
Last edited by HallsofIvy; November 12th 2012 at 03:33 PM.
Re: Pattern when skip counting by 3's
yes by skip counting I mean multiples of 3, but what is the pattern as seen on the chart?
And yes I know that "Numbers that end in 0" and "numbers that are in the rightmost column" are exactly the same! Thats why I put it there, for an example, of two possible ways to look at it.
So how would you describe the pattern?
|
{"url":"http://mathhelpforum.com/algebra/207358-pattern-when-skip-counting-3-s.html","timestamp":"2014-04-18T10:53:59Z","content_type":null,"content_length":"41157","record_id":"<urn:uuid:596df364-a811-44cb-b3d7-039caed3cb25>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Real-time Water Rendering Using GPU
Yunfei Bai
CS7490 Advanced Image Synthesis
Real-time water rendering is required in many applications, especially games. We not only want the water to be realistic in dynamics and appearance, but we also want it to be rendered in real time. In this work, we use the GPU shading language NVIDIA Cg to render water in real time. We simulate the water motion with a physical model and render effects such as reflection, refraction, and caustics.
2.Water Simulation
A triangle mesh is used to represent the water surface. To simulate the motion of the water surface, we dynamically undulate the water mesh and set the normal of the surface accordingly. We model the water surface as a sum of modified sine waves. Since the parameters of the wave functions have a physical basis, they are easy to script, and even though this is not a full physical simulation, the dynamic rendering of the water can still be convincing. Moreover, the simulation runs entirely on the GPU, so the water surface configuration can be computed very efficiently.
We simulate the water surface by summing sine wave functions. We sample the height and surface orientation of the water at each vertex of the triangle mesh. If (x,z) are the horizontal coordinates of a sample vertex, the height of the surface is a sum of terms of the form A_i * sin(dot(D_i, (x,z)) * w_i + t * phi_i), where i is the index of the wave, A is the amplitude, D is the horizontal crest traveling direction, w is the frequency, t is the time, and phi is the phase constant. The normal of the surface is calculated as the cross product of the binormal and tangent vectors obtained by differentiating the height field.
The problem with the sine function when simulating the water is that there are too many rolls but not enough sharper peaks. The modified version of it is to offset the sine function to be nonnegative
and raise it to an exponent k. The comparison between different wave functions with different k is shown by the figure 1. [Finch et al.] below. The larger k is, the sharper the peak we can achieve.
Figure 1.
So we use the modified sine function as each individual wave function. In our implementation, we have four modified sine wave functions combined to generate the water surface and surface normal. The
figure 2. shows the water mesh we generated.
Figure 2.
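A CPU-side sketch of this evaluation is shown below (Python; the four wave parameter sets are illustrative placeholders, not the values used in the project, which performs this math in a Cg vertex shader):

import math

# Each wave: amplitude A, direction D (unit 2D vector), frequency w, phase constant phi, peak exponent k.
WAVES = [
    (0.30, (1.0, 0.0), 1.0, 0.5, 2.0),
    (0.20, (0.7, 0.7), 2.3, 1.1, 2.0),
    (0.10, (0.0, 1.0), 3.1, 2.0, 4.0),
    (0.05, (-0.7, 0.7), 5.0, 3.3, 4.0),
]

def height(x, z, t):
    # Sum of modified sine waves: each sine is offset to be non-negative, then raised to k.
    h = 0.0
    for A, D, w, phi, k in WAVES:
        s = (math.sin((D[0] * x + D[1] * z) * w + t * phi) + 1.0) / 2.0
        h += 2.0 * A * s ** k
    return h

def normal(x, z, t, eps=1e-3):
    # Normal from the cross product of the tangent and binormal directions,
    # approximated here with finite differences instead of the analytic derivatives.
    dhdx = (height(x + eps, z, t) - height(x - eps, z, t)) / (2.0 * eps)
    dhdz = (height(x, z + eps, t) - height(x, z - eps, t)) / (2.0 * eps)
    n = (-dhdx, 1.0, -dhdz)
    length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    return (n[0] / length, n[1] / length, n[2] / length)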
3.Lighting Model
To render the water surface, we need to capture the interaction between light and the water surface. Our lighting model for water can be represented as below,
surface_color = ambient + diffuse + specular_reflection + refraction.
We implement the lighting model using a vertex shader and a fragment shader. Next, we discuss each component in detail.
3.1 ambient
The ambient term accounts for light coming from all directions, with the outgoing light equal in every direction. We can write the ambient term as follows,
ambient = K_a x global_ambient.
The ambient term is illustrated as in figure 3.
Figure 3.
3.2 diffuse
The diffuse term accounts for directed light reflected off a surface equally in all directions. It can be written as,
diffuse = K_d x light_color x max(dot(N,L),0).
The diffuse term is illustrated as in figure 4.
Figure 4.
The rendered water with ambient and diffuse light is shown in figure 5.
Figure 5.
3.3 Reflection
We consider mirror reflection when rendering the water. Environment mapping is used to simulate the water reflecting its surroundings. Figure 6 illustrates how the reflection is rendered: the incident light direction (red arrow) is derived from the eye ray (green arrow) by mirror reflection. The incident ray is then intersected with the cube map (purple square), and the color of the cube map at that point is used as the color of the fragment. Figure 7 shows an example environment map that we used.
Figure 6.
Figure 7.
The following video shows the result of water reflection.
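In pseudocode, the per-fragment lookup amounts to the following (a Python-style sketch; cube_map_sample stands in for the actual Cg texCUBE lookup):

def reflect(incident, normal):
    # Mirror the incident direction about the surface normal: R = I - 2 (N . I) N.
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

def reflection_color(eye_to_surface, surface_normal, cube_map_sample):
    # Index the environment cube map with the reflected eye ray.
    return cube_map_sample(reflect(eye_to_surface, surface_normal))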
3.4 Refraction
Refraction is caused by light passing through the boundary between two materials of different density. As illustrated by Figure 8, when we look at an object under the water, the eye ray and the light ray satisfy Snell's law. We again use the cube map to obtain the color of the fragment by intersecting the refracted ray with the cube map.
Figure 8.
The following video shows the result of the refraction.
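The corresponding refraction lookup can be sketched the same way (Python; eta is the ratio of indices of refraction, about 1.0/1.33 when going from air into water):

import math

def refract(incident, normal, eta):
    # Bend the incident direction at the surface according to Snell's law.
    # incident points toward the surface, normal points away from it.
    cos_i = -sum(i * n for i, n in zip(incident, normal))
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None                      # total internal reflection
    return tuple(eta * i + (eta * cos_i - math.sqrt(k)) * n
                 for i, n in zip(incident, normal))

def refraction_color(eye_to_surface, surface_normal, cube_map_sample):
    bent = refract(eye_to_surface, surface_normal, 1.0 / 1.33)
    return cube_map_sample(bent) if bent is not None else None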
4.Caustics
Caustics are an important effect of light being focused by the reflection and refraction of water. When we look at the floor underneath the water, we can see moving bright areas that show the caustics effect. Accurately simulating caustics requires tracing the paths of photons and is very time consuming in rendering; the goal here is to render a caustics effect that looks realistic. Jos Stam proposed using animated caustics textures based on wave theory [Stam et al.].
Here we make an assumption about where the caustics can occur and use only a subset of the arriving light rays, which lets us achieve visually convincing caustics in real time. The transparency of water is between 77 and 80 percent per linear meter, so between 20 and 23 percent of the incident light per meter is absorbed by the medium. Caustics therefore form most easily where light travels the shortest distance from the moment it enters the water to the moment it hits the floor, as illustrated by Figure 9. So for each point on the floor, we assume that it is lit by the vertical ray through the water above it. The light direction coming from outside the water is calculated using Snell's law. We could use a light map to get the incoming light, but here we approximate the light intensity based on where the sun is. If the intensity of this light is strong, we blend a bright color with the color of the floor at this point. Figure 10 illustrates how the caustics are calculated.
Figure 9.
Figure 10.
Here is the video to show the result of the caustics effect.
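One way to realize the floor-point approximation described above is sketched below (Python, reusing the refract() helper from the refraction sketch; sun_dir, the exponent, and the blend are illustrative choices, not values taken from the project):

WATER_IOR = 1.33

def caustic_intensity(surface_normal, sun_dir):
    # surface_normal: normal of the water surface directly above the floor point.
    # sun_dir: unit vector pointing from the sun down toward the water.
    refracted = refract(sun_dir, surface_normal, 1.0 / WATER_IOR)   # air -> water
    if refracted is None:
        return 0.0
    # The floor point is brightest when the refracted ray points straight down at it.
    alignment = -refracted[1]            # dot(refracted, (0, -1, 0))
    return max(alignment, 0.0) ** 8      # sharpen so only well-focused light is bright

def shade_floor(floor_color, intensity):
    # Blend a bright color over the floor color where the caustic intensity is strong.
    return tuple(min(1.0, c + intensity) for c in floor_color)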
5.NVIDIA Cg
Both the water simulation and the rendering are programmed on the GPU using the Cg shading language. Cg is a shading language designed by NVIDIA. Graphics hardware has now reached its fourth generation, and these GPUs provide both vertex-level and pixel-level programmability. Cg's heritage comes from three sources: it bases its syntax and semantics on the general-purpose C programming language, it incorporates many concepts from offline shading languages such as the RenderMan Shading Language, and it bases its graphics functionality on the OpenGL and Direct3D programming interfaces for real-time 3D.
The lighting calculation, reflection, and refraction are mainly written in the fragment shader, which lets us use Phong shading instead of Gouraud shading. Phong shading interpolates normals across a triangle, while Gouraud shading interpolates colors computed at the vertices. The problem with Gouraud shading is that the underlying mesh becomes obvious when the tessellation is not fine enough. The comparison between Gouraud shading and Phong shading is illustrated by the following Figure 11.
Figure 11.
1. "Effective Water Simulation from Physical Models", Mark Finch, Cyan Worlds
2. "Random Caustics: Wave Theory and Natural Textures Revisited", Jos Stam
3. "Rendering Water Caustics", Juan Guardado
4. "Cg tutorial"
|
{"url":"http://www.cc.gatech.edu/~ybai30/cs_7490_final_website/cg_water.html","timestamp":"2014-04-17T12:32:19Z","content_type":null,"content_length":"10370","record_id":"<urn:uuid:8080f123-9feb-40d1-acaf-44e874ae7ef7>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
|
evaluating an integral with two functions as limits of integration
November 28th 2012, 11:08 AM #1
May 2012
evaluating an integral with two functions as limits of integration
$F(x)=\int_{x^4}^{x^3} (2t-1)^3\ dt$
how do I go about doing this given that there are two functions as the upper and lower limit (as opposed to one of them being a function and the other being a constant)
would I let one of them =u and the other =v and then try to work from there? If so how would the derivatives and anti-derivatives work?
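One standard way to handle the two variable limits is to evaluate an antiderivative of the integrand at both limits. An antiderivative of $(2t-1)^3$ is $\tfrac{1}{8}(2t-1)^4$, so
$F(x)=\left[\tfrac{1}{8}(2t-1)^4\right]_{t=x^4}^{t=x^3}=\tfrac{1}{8}\left[(2x^3-1)^4-(2x^4-1)^4\right].$
If the exercise actually asks for $F'(x)$, either differentiate this result or apply the Fundamental Theorem of Calculus together with the chain rule: $F'(x)=(2x^3-1)^3\cdot 3x^2-(2x^4-1)^3\cdot 4x^3$.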
Re: evaluating an integral with two functions as limits of integration
November 28th 2012, 12:29 PM #2
|
{"url":"http://mathhelpforum.com/math-topics/208634-evaluating-integral-two-functions-limits-inegration.html","timestamp":"2014-04-17T17:10:04Z","content_type":null,"content_length":"36503","record_id":"<urn:uuid:6eaad0ac-bbcb-43de-9e8c-ff2f8c9cea8e>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Please help with this question about vectors
February 25th 2013, 06:42 AM #1
Junior Member
Feb 2013
Please help with this question about vectors
I have no idea how to go about this question
I have attached the question and all of my working so far.
I was trying to do the bit about finding the point of intersection of the line joining A to the midpoint of BC, and the line
joining B to the midpoint of AC but it wasn't working out.
I equated the two lines and compared x coords and y coords, but then I got two equations with 4 unknowns.
I really don't think I was doing it right.
Could someone please point me in the right direction.
Thanks in advance for replies.
Re: Please help with this question about vectors
First find the equation of the median AM: it is the line that connects the vertex A with the midpoint of BC.
In terms of the coordinates of the vertices A(p1,q1), B(p2,q2), C(p3,q3), the equations of the medians are:
the median AM is: (q2+q3-2q1)x-(p2+p3-2p1)y+(p2q1-q2p1)+(p3q1-q3p1)=0
the median BE is: (q1+q3-2q2)x-(p1+p3-2p2)y+(p3q2-q3p2)+(p1q2-q1p2)=0
the median CZ is: (q1+q2-2q3)x-(p1+p2-2p3)y+(p1q3-q1p3)+(p2q3-q2p3)=0
If these 3 lines meet in one point, then they form a pencil of lines.
The condition for 3 lines to form a pencil of lines is that the determinant of their coefficients equals 0.
For example, if the lines a1x+b1y+c1=0, a2x+b2y+c2=0 and a3x+b3y+c3=0 form a pencil of lines, then the determinant of the coefficients a1, b1, c1, etc. equals 0.
Solve the system of the 3 equations I have given you to find their common point, whose coordinates must be ((p1+p2+p3)/3, (q1+q2+q3)/3).
This is the barycenter (centroid) of the triangle ABC...
I hope you understand all the above...
Last edited by MINOANMAN; February 25th 2013 at 09:05 AM.
Re: Please help with this question about vectors
Originally Posted by MINOANMAN (see the reply above)
Re: Please help with this question about vectors
Hey Minoas
How are you getting the equation of the median AM IS: (q2+q3-2q1)x-(p2+p3-2p1)y+(p2q1-q2p1)+(p3q1-q3p1)=0??
I thought you would do AM=M-A which is the way I have done on my attachment?
Also I may be wrong here but aren't the p's meant to go with x and the q's with y?
Re: Please help with this question about vectors
Hey Please ignore my previous quote post since I have now realised that it is simple geometry not vectors!
But I do have a question. What do you mean by the determinant of the coefficients?
Also can't you just solve them simultaneously?
Also what do you mean by the lines forming a pencil?
I would have thought a pencil is straight but to me it seems that there is no way these three lines should form a straight line?
Re: Please help with this question about vectors
A pencil of lines is just a set of lines that all meet at a certain point called the centre of the pencil.
If 3 lines L1, L2 and L3 are lines of the same pencil, then there exist numbers k1, k2 and k3 such that k1*L1 + k2*L2 + k3*L3 = 0.
In our case k1 = k2 = k3 = 1, i.e. if you just add all three equations of the lines I gave you, they will sum to 0; this means that the three lines are members of the same pencil.
In other words, they all meet at the same point; this point is the well-known barycenter of the triangle ABC, and its coordinates are the ones I mentioned above.
Now go here and check what i mean by determinant.
Determinant - Wikipedia, the free encyclopedia
if 3 lines meet at a point their coefficients form a 3x3 determinant that is = 0
Send me a private message if you need more explanations.
Re: Please help with this question about vectors
Hey Minoas, when you're doing this adding-the-coefficients determinant thing, do you have to add the x coefficients, then the y ones, then the constant terms separately?
I.e., do you deal with them separately? Or do you just add them all together?
Also, one thing I don't understand: you said that since these lines form a pencil, then there exist K1, K2 and K3 which are then multiplied by the coefficients.
How do you know that in this case they are equal to 1? Wouldn't it kind of muck things up if they were all different?
Are there cases where they are all different?
If they are all different and you add the coefficients does the sum still come to zero?
Also I have to find the coordinates of the point of intersection so will I have to solve two of the equations simultaneously
or will the determinant thing show me the coordinates somehow? I think I will have to solve two of them simultaneously?
Am I right?
Re: Please help with this question about vectors
Please follow me carefully.
1. If 3 lines L1, L2, L3 are members of a pencil, then there exist 3 real numbers k1, k2, k3 such that k1*L1 + k2*L2 + k3*L3 = 0; this is a well-known theorem of coordinate geometry for pencils.
In our case, since there is a cyclic symmetry of the parameters p1, p2, p3, q1, q2, q3, if you add the 3 equations you get 0; that is, k1 = k2 = k3 = 1.
Since this might be difficult for you to follow... there is another way to verify if 3 lines are members of the same pencil...
Take the 3 equations and take their coefficients (of x, of y, and the constant term, not the variables x and y themselves)... form a 3x3 determinant and check if it is 0; if it is zero then your lines intersect at a point. As simple as that.
For example, if the lines are 3x+2y-5=0, 3x+5y+2=0 and 4x-2y-7=0, then the determinant of the coefficients has first row "3, 2, -5", second row "3, 5, 2" and third row "4, -2, -7".
Find the determinant, which is 95, thus different from zero, and consequently the 3 lines do not form a pencil. You may verify this by drawing the three lines.
good luck
if you need any more guidance please send me a private msg
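As a quick numerical illustration of the determinant test described just above (a sketch in Python with NumPy, which the thread itself does not use):

import numpy as np

# coefficients of 3x+2y-5=0, 3x+5y+2=0 and 4x-2y-7=0, one row per line
coeffs = np.array([[3, 2, -5],
                   [3, 5, 2],
                   [4, -2, -7]])
print(round(np.linalg.det(coeffs)))  # 95 -- nonzero, so these three lines do not form a pencil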
Re: Please help with this question about vectors
Hey Minoas, thanks for showing me this pencil thing, but I have never studied this before, and since the paper is meant to be done using methods I have studied, I do not think I am meant to use this
method. I think I am simply meant to solve the equations simultaneously. I have checked the mark scheme and indeed all they do is solve them simultaneously.
I am not quite sure how to solve these first two simultaneously but I am working on it.
Thanks for your help anyway.
|
{"url":"http://mathhelpforum.com/geometry/213777-please-help-question-about-vectors.html","timestamp":"2014-04-17T04:48:51Z","content_type":null,"content_length":"61791","record_id":"<urn:uuid:5ec57934-f0d1-4eec-8ab0-96ad5a05011e>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Curriculum Vitae
James W. Garson
B.A. with Honors in Philosophy, 1965, Haverford College
M.A., Ph.D. in Philosophy, 1968, 1969, University of Pittsburgh
Dissertation Title: The Logics of Space and Time
Assistant Professor, University of Pennsylvania, 1969-1975
Associate Professor, University of Notre Dame, 1976-1980
Visiting Associate Professor, Department of Information
Engineering, University of Illinois, Chicago 1980-1981
Associate Professor, University of Houston 1980-1986
Full Professor, University of Houston 1986-present
Visiting Professor, Department of Philosophy, Rice University 1994, 1995, 1999
Visiting Professor, Department of Psychology, Rice University 1998
Woodrow Wilson Fellowship, 1965
Andrew Mellon Fellowship, 1966 - 1968
National Endowment for the Humanities Fellowship in Selected Fields; Science, Technology and Human Values: ($11,250) "Human Values and Computers in Education,"
1975 - 1976
NSF Grant: ($35,315) "An Advice Giving Computer Program for Teaching Proof Finding in Formal Logic," 1979 - 1980
Apple Education Foundation Grant: ($18,966) "Interactive Graphics Courseware for Teaching Computer Literacy," 1980 - 1981
Ford Project on Urban Education: ($400) "Computer Graphics and Experiential Education,"
University of Houston Microcomputer Allocation Committee: (award of computer and peripherals) "Logic Training Courseware," 1984
University Teaching Excellence Award: ($2500) 1988
National Endowment for the Humanities Summer Seminar: ($3500) "Philosophical Implications of Cognitive Science," Summer 1989
University of Houston Computer Use Fee Grant ($5209) April 1990
National Endowment for the Humanities Summer Stipend: "Rules in a Chaotic Mind" ($3750)
May-July 1991
National Endowment for the Humanities Summer Seminar: ($3600) "Representation"
Summer 1993
HFAC Faculty Development Summer Grant: ($6000) "Quantified Modal Logic"
Summer 1995
National Endowment for the Humanities Summer Seminar: ($4000) "Metaphysics of Mind"
Summer 1996
National Endowment for the Humanities Summer Seminar ($3250) "Folk Psychology vs. Simulation Theory"
Summer 1999
A. Work in Progress
Book: What Logic Means: From Proof to Model-Theoretic Semantics (2013) under contract with Cambridge University Press to appear in 2013.
Second Edition: Modal Logic For Philosophers (2013) under contract with Cambridge University Press to appear in 2013
"Open Futures in the Foundations of Propositional Logic," to appear in Nuel Belnap on Indeterminism and Free Action, T. Mueller (ed.) Springer, 2013.
"The Logic of Vagueness (and Everything Else Too)", with Joshua D. K. Brown, in preparation.
B. Work Published:
"Expressive Power and Incompleteness of Propositional Logics," Journal of Philosophical Logic, 2010, vol. 39, #2, pp. 159-171.
Modal Logic for Philosophers, Cambridge University Press, 2006.
"Modality and Quantification," in Borchert, Donald, ed. Encyclopedia of Philosophy, 2nd edition. Detroit: Macmillan Reference USA, 2006, 187-190.
"Unifying Quantified Modal Logic" Journal of Philosophical Logic, 2005, vol. 34, 5-6, pp. 621-649.
"Simulation and Connectionism: What is the Connection?" in Philosophical Psychology 2003, pp. 499-514.
"Making Symbols Matter: A New Challenge to their Causal Efficacy," Journal of Experimental Artificial Intelligence, vol. 14, 2002, pp. 13-27.
"Evolution, Consciousness and the Language of Thought," in Consciousness Evolving, James Fetzer (Ed.) Benjamins, 2002, pp. 89-110.
"Philosophical Issues about Dynamical Systems" entry in the Encyclopedia of Cognitive Science, MacMillan, 2002.
"(Dis)solving the Binding Problem," Philosophical Psychology, vol. 14, #4, 2001, pp. 381-392.
"Quantification in Modal Logic," revised chapter in Handbook of Philosophical Logic, 2nd Edition, F. Guenthner and D. Gabbay (eds.) Kluwer, 2001, vol. 3, pp. 267-323.
"Natural SemanticsÓ Theoria, vol. 67, 2001, pp. 114-139.
"Modal Logic" entry in the Stanford Encyclopedia of Philosophy http://plato.stanford.edu/entries/logic-modal/, (2000)
"Why Dynamical Implementation Matters" Garson, J. W. (1998) Commentary on: van Gelder (1998) The Dynamical Hypothesis in Cognitive Science, Behavioral and Brain Sciences 21 (5), p. 641-2.
"A Commentary on 'Cortical Activity and the Explanatory Gap'" Consciousness and Cognition, (1998), vol. 7, pp. 169-172.
"Chaotic Emergence and the Language of Thought," Philosophical Psychology, (1998), vol. 11, # 3, pp. 303-315.
"Intensional Logic" entry in the Routledge Encyclopedia of Philosophy (1998).
"Connectionism" entry in the Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/connectionism/connectionism.html, (1997).
"Syntax in a Dynamic Brain," Synthese, (1997), vol. 110, pp. 343-355.
"Cognition Poised Between Order and Chaos: A Worthy Paradigm in Cognitive Science" Philosophical Psychology, (1996), vol. 9, pp. 301-322.
"Chaos and Free Will," Philosophical Psychology, (1995), vol. 8, pp. 365-374.
"No Representations without Rules: The Prospects for Compromise between Paradigms in Cognitive Science," Mind and Language, (1994) vol. 9, pp. 25-37.
"Cognition without Classical Architecture," Synthese. (1994), vol. 100, pp. 291-305.
"Must We Solve the Binding Problem in Neural Hardware?" Behavioral and Brain Sciences, (1993) vol. 16, #3, pp. 459-460.
"Mice in Mirrored Mazes and the Mind," Philosophical Psychology (1993) vol. 6, no. 2, pp. 123-134.
"Heuristic Decision Support Problems - Integrating Heuristic Search and Expert Systems for the Design of Continuous-Manufactured Products," with S. Kamal and F. Mistree, Artificial Intelligence in
Design '92, J. S. Gero (ed.), (1992) Kluwer.
"Development of a Synthesis Engine for the Design of Products Made by Process Manufacturing," with S. Kamal, W. VanArsdale, and F. Mistree, Proceedings of the 1991 IEEE International Conference on
Systems, Man and Cybernetics, (1991), University of Virginia, Charlottesville, vol. 3, 1839-1846.
"Using Natural Language Processing Techniques in Modeling Design Processes," with B. Bras and F. Mistree, in Proceedings, World Congress on Expert Systems, Orlando, Florida, (1991), vol. 4, pp.
"What Connectionists Cannot Do: The Threat to Classical AI," in Connectionism and the Philosophy of Mind, T. Horgan and J. Tienson (eds.) Kluwer (1991), pp. 113-142.
"Applications of Free Logic to Quantified Intensional Logic," in Lambert, K. Philosophical Applications of Free Logic, K. Lambert (ed.), Oxford University Press, (1991), pp. 111-142.
"Categorical Semantics," in Truth or Consequences: Essays in Honor of Nuel D. Belnap, J. M. Dunn and A. Gupta (eds.), Kluwer (1990), pp. 155-175.
"Japanese and American Children's Styles of Processing Figural Matrices", with B. Foorman, H. Yoshida, and H. Swank, Journal of Cross Cultural Psychology, (1989), vol. 20, pp. 263-295.
"Modularity and Relevant Logic", Notre Dame Journal of Formal Logic, (1989), vol. 30, pp. 207-223.
"Heuristics for Proof Finding in Formal Logic", Teaching Philosophy, (1988), vol. 11, pp. 41-53.
"Metaphors and Modality," Logique et Analyse, (1987), vol. 30, pp. 123-145.
"Clausal Form and Quantifiers in Natural Language", Theoretical Linguistics, (1986), vol. 13, pp. 185-209.
"Quantification in Modal Logic," chapter in Handbook of Philosophical Logic, vol. II, F. Guenthner and D. Gabbay (eds.), (1984), pp. 249-307.
"Microcomputer Graphics and Visual Reasoning," Proceedings of the National Education Computing Conference, (1984), pp. 7-10, with B. Foorman.
"Pronouns and Quantifier Scope in English," Journal of Philosophical Logic, (1983), vol. 12, pp. 327-358, with E. Lepore
"Prepositional Logic," Logique et Analyse, (1981), vol. 24, pp. 4-33.
"Developing Interactive Computer Graphics for Computer Literacy," SIGCUE Bulletin, (1981), vol. 15, pp. 2-13.
"Giving Advice with a Computer," Proceedings of the National Education Computing Conference, (1980), pp. 44-45.
"Teaching Logic with EMIL," Teaching Philosophy, (1980), vol. 3, pp. 453-478, with P. Mellema.
"Unaxiomitizibility of an Intensional Logic," Journal of Philosophical Logic, (1980), vol. 9, pp. 59-72.
"Free Topological Logic," Logique et Analyse, (1979), vol. 22, pp. 453-475.
"The Substitution Interpretation and the Expressive Power of Intensional Logics," Notre Dame Journal of Formal Logic, (1979), vol. 20, pp. 858-864.
"Completeness of Some Quantified Modal Logics," Logique et Analyse, (1978), vol 21, pp. 153-164.
"The Substitution Interpretation in Topological Logic," Journal of Philosophical Logic, (1974), vol. 3, pp. 109-132.
"Indefinite Topological Logic," Journal of Philosophical Logic, (1973), vol 2, pp. 102-118.
"A Completeness Theorem for an Intensional Logic: Definite Topological Logic," Notre Dame Journal of Formal Logic, (1973), pp. 175-184.
"Two New Interpretations of Modality," Logique et Analyse, (1972), pp. 443-459.
"Here and Now," The Monist, (1969), pp. 469-477.
Reprinted in Basic Issues in the Philosophy of Time, E. Freeman and W. Sellars (eds.) (1971).
"Topological Logic," Journal of Symbolic Logic, (1968), pp. 537-548, with N. Rescher.
Reprinted in Topics in Philosophical Logic, N. Rescher (ed.) (1969).
Reprinted in Temporal Logic, N. Rescher, and A. Urquhart, (1971).
Translated into Italian and reprinted in La Logica del Tempo, C. Pizzi (ed.) (1974).
"A Note on Chronological Logic," Theoria, (1967), pp. 39-44, with N. Rescher.
C. Abstracts and Reviews:
"Review of Thagard's Mind Readings" Philosophical Psychology, vol. 14, (2001) pp. 116-118.
"Review of First Order Modal Logic by Melvin Fitting and Richard Mendelson," Studia Logica, vol. 68, (2001) #2.
"Review of Connectionism and the Philosophy of Psychology by Terrence Horgan and John Tienson," British Journal for the Philosophy of Science, vol. 50, (1999) pp. 319-323.
"Review of The Logic Foundations of Cognition by John Macnamara and Gonzalo Reyes," Contemporary Psychology. vol. 41, (1996), pp. 918-919.
"Natural Semantics: The Meaning of Natural Deduction Rules for Classical and Intuitionistic Logic," (Abstract) Journal of Symbolic Logic, vol. 59, no. 2, (1994), pp. 722-723.
"Contraposition and 4-Valued Semantics for Relevance Logic," (Abstract) Journal of Symbolic Logic, vol. 57, no.1, (1992) p. 357-358.
"Review of How to Build a Conscious Machine by Leonard Angel," Canadian Philosophical Reviews, vol. 11, no. 1, (1991), pp. 8-10.
"Quantified Modal Logic with Models in Place of Worlds," (Abstract), Journal of Symbolic Logic, (1988), vol. 53, p. 1292
"Modularity in Quantified Modal Logic," (Abstract), Journal of Symbolic Logic, (1988), vol 53, p. 1004.
"Review of S. Shieber's An Introduction to Unification-Based Approaches to Grammar," Journal of Symbolic Logic, (1987), vol. 52, pp. 1052-1054.
"Correspondence in Classical Logic," (Abstract), Journal of Symbolic Logic, (1986), vol. 51, pp. 1085-1086.
"Clausal Form and Quantifiers in Natural Language," (Abstract), Journal of Symbolic Logic, (1986), vol. 51, p. 843.
"Generalized Rules for Quantified Modal Logics," (Abstract), Journal of Symbolic Logic, (1984), vol. 49, p. 323.
"The Expressive Power of Modal Logics," (Abstract), Journal of Symbolic Logic, (1983), vol. 48, p. 899.
"Morphisms in Intensional Logic," (Abstract), Journal of Symbolic Logic, (1981), vol. 46, p. 431.
"The Substitution Interpretation and the Expressive Power of Intensional Logics," (Abstract), Journal of Symbolic Logic, (1981), vol 46, p. 200.
"Review of Moutafakis' Imperatives and their Logics," New Scholasticism, (1978), pp. 595-598.
"Review of D. Gabbay's Investigations in Modal and Tense Logics," International Studies in Philosophy, (1978), vol. 10, pp. 190-192.
"Review of A. Bressan's Metodo di assiomatizzazione in senso stretto della meccanica classica," Journal of Symbolic Logic, (1973), pp. 144-145.
"A New Interpretation of Modality," (Abstract) Journal of Symbolic Logic, (1969), p. 535.
"The Natural Semantics of Vagueness" with Joshua D. K. Brown will be presented at the Eastern Division Meetings of the American Philosophical Association (December 2012)
"What Classical Connectives Mean" was delivered to the Second Conference on the Foundations of Logical Consequence, St. Andrews University, Scotland. (June 2012)
"Open Futures in Propositional Logic" presented at a conference entitled "What is Really Possible" sponsored by Utrecht University, The Netherlands, (June 2012)
"Comment on 'A Conservative Modal Semantics With Applications to De Re Necessities'" presented at the American Philosophical Association, Pacific Division, Pasadena, (March 2004)
"Comment on 'Atomistic Learning in Fully Distributed Systems'" presented at the American Philosophical Association, Pacific Division, San Francisco, (March 2003)
"Simulation and Connectionism: What is the Connection?" delivered to the Society for Philosophy and Psychology, New York, (June 2000)
"Comment on 'Rethinking the Systematicity Arguments'" presented at the American Philosophical Association, Pacific Division, Berkeley, (April, 1999)
"Comment on 'Systematicity and the Cognition of Structured Domains'" presented at the Society for Philosophy and Psychology, Stanford, (June, 1999)
"(Dis)Solving the Binding Problem," presented at the American Philosophical Association, (December, 1998)
"Fission of Consciousness in the Natural World" presented at the American Philosophical Association, (April, 1996).
"Systematicity and Classical Architecture," presented at the Society for Philosophy and Psychology, (June, 1995)
"Chaotic Emergence and the Language of Thought" presented at the American Philosophical Association, (March, 1995)
"Syntax in a Dynamic Brain," presented at the Society for Philosophy and Psychology, (June, 1994)
"Reply to 'Chaos and Emergence'" American Philosophical Association, (December, 1993)
"Streams of Consciousness in the Transporter Room," Departmental Colloquium, Texas A&M University, (December 1993).
"The Dynamic Mind: A New Paradigm in Cognitive Science" presented at the Society for Philosophy and Psychology, Vancouver, (June 1993).
"Chaos and Free Will," presented at the American Philosophical Association, San Francisco, (March, 1993).
"Natural Semantics," presented at the Association for Symbolic Logic Meetings, San Antonio, (January 1993).
"No Representations without Rules: The Prospects for Compromise between Paradigms in Cognitive Science," presented at the Society for Philosophy and Psychology, Montreal, (June, 1992).
"Heuristic Decision Support Problems - Integrating Heuristic Search and Expert Systems for the Design of Continuous-Manufactured Products," to the 2nd International Conference on Artificial
Intelligence in Design, Pittsburgh, (June, 1992).
"Chaotic Cognition: Prospects for Symbolic Processing in a Dynamic Mind," presented at the American Philosophical Association, Pacific Division, Portland, (March, 1992).
"Representation without Realism: Symbolic Processing in Connectionist and Classical Minds," American Philosophical Association, Pacific Division, San Francisco, (March 1991).
"Contraposition and Four-valued Semantics for Relevance Logic," Association for Symbolic Logic, Pittsburgh, (January, 1991).
"Cognition without Classical Architecture," American Philosophical Association, Eastern Division, Boston, (December, 1990).
"Can Connectionists Account for Thought?," Rice University, (October 1990).
"What Connectionists Cannot Do: The Threat to Classical AI," American Philosophical Association, Central Division, New Orleans, (April, 1990).
"Quantified Modal Logic with Models in Place of Worlds," Association for Symbolic Logic, New York, (December, 1987).
"Modularity in Quantified Modal Logic," Association for Symbolic Logic, San Antonio, (January, 1987)
"Correspondence in Classical Logic," Association for Symbolic Logic, Washington, (December, 1985)
"Heuristics for Proof Finding in Formal Logic," a three day seminar delivered to the Center for Design of Educational Computing, Carnegie-Mellon University, Pittsburgh, (August, 1985)
"Clausal Form and Quantifiers in Natural Language," Association for Symbolic Logic, Stanford, (July, 1985)
"Programming Logic Lessons," Second Annual Conference of the Institute for Logic and Cognitive Studies, Clear Lake, (June, 1985).
"Modularity and Logic," Workshop on Modularity in Knowledge Representation and Natural Language Processing, Amherst, (June, 1985)
"Computer Assisted Instruction and Problem Solving," First Annual Conference of the Institute for Logic and Cognitive Studies, Clear Lake, (July, 1984).
"Generalized Rules for Quantified Modal Logic," Association for Symbolic Logic, Berkeley, (March, 1983)
"Computer Guidance in Problem Solving," Carnegie-Mellon University Conference on Computer Applications in Teaching Reasoning and Writing", Pittsburgh, (July, 1982)
"Is Time Travel Possible?" Third International Conference on the Fantastic in the Arts, Boca Raton, (March 1982)
"Expressive Power of Modal Logics," Association for Symbolic Logic, Philadelphia, (December, 1981)
"Interactive Computer Graphics for Computer Literacy,"National Educational Computer Conference, Denton, (June 1981)
"Giving Advice with a Computer," National Educational Computer Conference, Norfolk (June, 1980)
"Computerized Advice in Formal Logic," National Council of Science Teachers, Anaheim, (March, 1980)
"Morphisms in Intensional Logic," Association for Symbolic Logic, New York, (December, 1979)
"Computerized Advice for Proof Finding in Logic," seminar given to the Institute for Mathematical Studies in the Social Sciences, Stanford, (October, 1979)
"The Substitution Interpretation and the Expressive Power of Intensional Logics," Association for Symbolic Logic, San Diego, (March, 1979)
"Metaphors and Modality," Indiana Philosophical Association, Bloomington, (March, 1979)
"Unaxiomitizability of a Quantified Intensional Logic," Association for Symbolic Logic, Biloxi, (January, 1979)
"EMIL, A Universal Proof Checker," Teaching Philosophy Conference, Schenectedy, (August, 1978)
Journal of Symbolic Logic
Journal of Philosophical Logic
Notre Dame Journal of Formal Logic
Nordic Journal of Philosophical Logic
Mind and Language
Linguistics and Philosophy
Behavioral and Brain Sciences
Philosophical Psychology
Australasian Journal of Philosophy
Canadian Journal of Philosophy
MIT Press
Cambridge University Press
Yale University Press
Dickenson Publishing Company
St. Martin's Press
Wadsworth Publishing Company
Reidel Publishing Company
Program Co-Chair, Society for Philosophy and Psychology, 1996.
Executive Committee, Society for Philosophy and Psychology 1996-1999
Webmaster, Society for Philosophy and Psychology, 1995-present
COURSES DEVELOPED SINCE 1980
Given in the Department of Information Engineering, University of Illinois, Chicago:
INFE 306 Natural Language Processing (Graduate Level)
INFE 301 Computer Architecture
INFE 378 Graphics I
INFE 372 Microprocessors
INFE 478 Graphics II (Graduate Level)
Given at University of Houston:
PHIL 1310 Principles of Reasoning
PHIL 1321 Logic I
PHIL 1344 Introduction to the Mind
PHIL 1397 Philosophy of Technology
PHIL 2310 Critical Thinking
PHIL 2321 Logic II
PHIL 3321 Logic III (Modal Logic)
PHIL 3321 Logic III (Foundations of Mathematics)
PHIL 3342 Philosophy of Mathematics
PHIL 3385 History of Modern Philosophy
PHIL 3388 History of 20th Century Philosophy
PHIL 3395 Godel, Escher Bach
PHIL 3395 Logic Programming
PHIL 3395 Philosophy of Mind
Graduate Level
PHIL 6396 Logic and Ontology
PHIL 6326 Wittgenstein
PHIL 6396 Truth
PHIL 6396 Meaning
PHIL 6396 Mental Mechanisms
PHIL 6396 Mind & Matter
PHIL 6396 Free Will
PHIL 6396 Intentionality
PHIL 6395 Consciousness
Given at Rice University:
PHIL 310 Philosophy of Mind
CSCI 410 Computational Modeling of Cognitive Processes (for the Psychology Department)
Given at University of Houston, Clear Lake:
CSCI 5931 Natural Language Processing (Graduate Level) (for Computer Science)
|
{"url":"http://www.class.uh.edu/phil/garson/vita.htm","timestamp":"2014-04-19T04:19:38Z","content_type":null,"content_length":"132821","record_id":"<urn:uuid:6a31d217-c78f-4c07-b7d2-7f157d32de1b>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lyapunov Analysis
From Eclipsepedia
The Lyapunov View allows a user to apply tools developed to analyze dynamical systems. The RMS comparison view is a useful measure of the average difference between two scenarios. However, if a model and reference scenario each describe an epidemic that begins and ends in the same state (zero infectious), the RMS error will eventually fall to zero as a function of time, even in the case of a “bad” model. In addition to measuring the average error, it is useful to look for other measures that might provide a “fingerprint” for the spatiotemporal dynamics of an infectious disease. Like many dynamical systems, infectious disease is a process of many variables. However, it is often possible to capture the essential dynamics by looking at just a few system variables in an appropriate phase space. In its most general formalism, any dynamical system is defined by a fixed set of equations that govern the time dependence of the system’s state variables. The state variables at some instant in time define a point in phase space. An SEIR model defines a four-dimensional phase space. An SI model defines a two-dimensional space. Examining a reduced set of dimensions may be thought of as taking a slice through phase space (for example in the SI plane).
Using Lyapunov Analysis
1. Enter the Analysis Perspective
2. Click on the Lyapunov Tab
3. Use the Select Folder buttons to choose the folders containing the data you wish to compare. The files should have the following format.
4. Click "Compute Lyapunov Exponent"
5. Two charts will appear: the left-hand chart will show the trajectories in phase space (I vs. S) for the two data sets. The right-hand chart will show the log of the integrated difference between the two trajectories as a function of time. The exponent is the initial slope of this time-varying difference plotted on a semi-log plot.
As the state of the system changes with time, the point (S(t), I(t)) in phase space defines a trajectory in the SI plane. Consider an epidemic that begins with one infectious person and virtually the entire population susceptible at t=0, S(0) ~ 1. The trajectory will begin at time zero along the S axis near 1. As the disease spreads, the susceptible population (S) will decrease and the infectious population (I) will increase. The detailed shape of this trajectory will depend on the time it takes for the disease to spread to different population centers, as well as the (susceptible) population density function. The peaks and valleys along the trajectory in SI phase space provide a signature or fingerprint for an epidemic, the shape of which depends on the disease, the disease vectors, the population distribution, etc. The mathematics of dynamical systems provides us with a formalism to compare trajectories in a phase space. Given a single set of rules (e.g., a disease model), two simulations that begin infinitesimally close together in phase space may evolve differently in time and space. This separation in phase space can be measured quantitatively.
The vector Z(t) = (S(t), I(t)) defines a trajectory in SI space. Let the initial separation of two such trajectories at time zero be δZ(0). The rate of separation of two trajectories in phase space will often obey the equation
|δZ(t)| ≈ e^(λt) |δZ(0)|
where λ is the Lyapunov exponent. This exponent is a characteristic of the dynamical system that defines the rate of separation of infinitesimally close trajectories in phase space.
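As an informal illustration (this sketch is an addition, not part of the tool itself; it assumes the two trajectories have been exported and loaded as NumPy arrays of (S, I) pairs on a common time grid), the exponent can be estimated as the initial slope of the log separation:

import numpy as np

def estimate_lyapunov(traj_a, traj_b, dt, n_initial=20):
    # traj_a, traj_b: arrays of shape (T, 2) holding (S(t), I(t)) for each scenario
    separation = np.linalg.norm(traj_a - traj_b, axis=1)  # |dZ(t)| in the SI plane
    t = np.arange(len(separation)) * dt
    # fit log|dZ(t)| over the initial, roughly linear, part of the curve
    # (assumes the two trajectories are never identical, so the separation is nonzero)
    slope, intercept = np.polyfit(t[:n_initial], np.log(separation[:n_initial]), 1)
    return slope  # estimate of the Lyapunov exponent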
|
{"url":"http://wiki.eclipse.org/Lyapunov_Analysis","timestamp":"2014-04-24T00:56:14Z","content_type":null,"content_length":"22315","record_id":"<urn:uuid:5db4601e-2ea4-4ef4-ba6d-aaa846b1558a>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Purely Functional Data Structures & Algorithms : Union-Find
It’s been a while since I last posted in this series. Today we look at the disjoint-set data structure, specifically disjoint-set forests and the complementary algorithm : union-find.
In computing, a disjoint-set data structure is a data structure that keeps track of a set of elements partitioned into a number of disjoint (nonoverlapping) subsets. A union-find algorithm is an
algorithm that performs two useful operations on such a data structure:
• Find: Determine which subset a particular element is in. This can be used for determining if two elements are in the same subset.
• Union: Join two subsets into a single subset.
My inspiration comes from
Sedgewick and Wayne’s class over at Coursera : Algorithms, Part I
. So check the class out if you are unfamiliar with this and interested in the details.
I’m always curious how data structures and algorithms translate from their imperative counterparts(usually in Java) which are the norm for most classes on the subject and in most textbooks.
I think that this is a very unexplored part of the field of study in comparison with the usual approach to algorithms and data structures. So here we go with another example.
, we are using
as our implementation language.
First we define our disjoint-set type.
\* Disjoint set data type (weighted and using path compression) demonstrating *\
\* 5(m + n) worst-case find time *\
(datatype disjoint-set
Count : number ; Ids : (vector number) ; Sizes : (vector number);
[Count Ids Sizes] : disjoint-set;)
Then we add a few utilities for creating new instances, retrieving the disjoint subsets count and finding the root of an object.
\* Create a new disjoint-set type *\
(define new
{ number --> disjoint-set }
N -> [N (range 1 N) (vector-init 1 N)])
\* Return the number of disjoint sets *\
(define count
{ disjoint-set --> number }
[Count Ids Sizes] -> Count)
\* Return id of root object *\
(define find-root
{ disjoint-set --> number --> number }
[Count Ids Sizes] P -> (let Parent
\* Path Compression *\
(<-vector Ids (<-vector Ids P))
(if (= P Parent)
P \* P is its own root *\
(find-root [Count Ids Sizes] Parent)))
Next we define functions to check if two objects are connected along with the quick-union function that will actually connect two objects.
\* Are objects P and Q in the set ? *\
(define connected
{ disjoint-set --> number --> number --> boolean }
UF P Q -> (= (find-root UF P) (find-root UF Q)))
\* Replace sets containing P and Q with their union *\
(define quick-union
{ disjoint-set --> number --> number --> disjoint-set }
[Count Ids Sizes] P Q
-> (let UF [Count Ids Sizes]
I (find-root UF P)
J (find-root UF Q)
SizeI (<-vector Sizes I)
SizeJ (<-vector Sizes J)
SizeSum (+ SizeI SizeJ)
CIds (vector-copy Ids)
CSizes (vector-copy Sizes)
(if (= I J)
[Count CIds CSizes]
\* Always make smaller root point to larger one *\
(do (if (< SizeI SizeJ)
(do (vector-> CIds I J) (vector-> CSizes J SizeSum))
(do (vector-> CIds J I) (vector-> CSizes I SizeSum)))
[(- Count 1) CIds CSizes]))))
After running our test we get the following output.
(50+) (test 10)
creating union find with 10 objects ...
[10 <1 2 3 4 5 6 7 8 9 10> <1 1 1 1 1 1 1 1 1 1>]
All objects are disconnected :
1 and 9 connected ? false
4 and 6 connected ? false
3 and 1 connected ? false
7 and 8 connected ? false
... creating unions ...
[1 <4 8 7 7 8 8 8 8 8 8> <1 1 1 2 1 1 4 10 1 1>]
All objects should be connected as there is only 1 group :
1 and 9 connected ? true
4 and 6 connected ? true
3 and 1 connected ? true
7 and 8 connected ? true
run time: 0.0 secs
1 : number
All the code can be found
7 thoughts on “Purely Functional Data Structures & Algorithms : Union-Find”
1. As far as I’m aware there’s no _purely_ functional union-find. There is, however, an _apparently_ functional union-find, courtesy of Conchon and Filliâtre (http://citeseerx.ist.psu.edu/viewdoc/
summary?doi=10.1.1.79.8494). They use Henry Baker’s shallow binding trick (specifically his “rerooting” trick) to build their union-find on top of a persistent array/map. Even though behind the
scenes there’s “massive use of side effects” (to quote the authors), the structure looks purely functional from the user’s perspective.
I translated this into Smalltalk, writing up my experiences: http://www.lshift.net/blog/2011/12/31/translating-a-persistent-union-find-from-ml-to-smalltalk
□ I’m doing effectively the same with this code, hence the vector-copy stuff. Yes, it’s only ‘purely functional’ in the sense that it’s a non-destructive function.
2. Indeed, although this is purely functional, this has completely different complexity, as basically the entire structure is copied on every union. This is very far from the inverse Ackermann
function complexity :(
□ For M finds over a set of N elements, the worst-case order of growth is still O((M+N)lg*N) or O(5(M+N)).
You’re right, for M unions over a set of N elements, the worst-case order of growth is O(5(M+N) + 2MN) because of the vector copying.
For now, this is only due to a limitation of Shen’s persistent data types(list in this example) not being efficient(access and update). Porting the code to Haskell, ML or Clojure would
achieve the same performance.
☆ In Haskell …
|
{"url":"http://jng.imagine27.com/index.php/2012-08-19-201539_purely-functional-data-structures-algorithms-union-find.html","timestamp":"2014-04-20T06:41:58Z","content_type":null,"content_length":"28303","record_id":"<urn:uuid:dfe7c02a-5a92-47b2-8a5d-ee7ce39477cf>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] Are there command similar as Matlab find command?
[Numpy-discussion] Are there command similar as Matlab find command?
Travis E. Oliphant oliphant@enthought....
Mon Sep 29 16:48:23 CDT 2008
frank wang wrote:
> Hi,
> I am trying to find a command in numpy or python that perform similar
> function as Matlab find command. It will return the indexes of array
> that satisfy a condition. So far I have not found anything.
There are several ways to do this, but what are you trying to do?
Non-zero on the boolean array resulting from the condition is the most
direct way:
This returns a tuple of indices of length nd, where nd is the number of
dimensions of a. (i.e. for 1-d case you need to extract the first
element of the tuple to get the indices you want).
But, if you are going to use these indices to access elements of the
array, there are better ways to do that:
compress(a>30, a)
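For illustration (an added sketch, not part of the original email), both approaches look like this with an explicit import:

import numpy as np

a = np.array([10, 25, 31, 47, 5, 62])

idx = np.nonzero(a > 30)[0]      # indices where the condition holds: array([2, 3, 5])
vals = np.compress(a > 30, a)    # the matching elements themselves: array([31, 47, 62])
vals2 = a[a > 30]                # boolean indexing gives the same elements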
More information about the Numpy-discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-September/037758.html","timestamp":"2014-04-16T04:49:25Z","content_type":null,"content_length":"3615","record_id":"<urn:uuid:b08ba9e5-58a9-4cd5-bbe1-bbfb47f86490>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How To Use Excel Graphs
EXCEL Tutorial: How to use EXCEL for Graphs and Calculations.
Excel is powerful tool and can make your life easier if you are proficient in using it. You
will need to use Excel to complete most of your experiments and are expected to know
how to manipulate data, prepare plots and analyze error.
In Excel, the columns are labeled with letters, and rows are labeled by numbers. The
individual boxes are called cells, which are designated by column and row. For example,
the top left cell in the spreadsheet is A1. You can highlight an entire row or column by
clicking on the letter or number, at the start of the row or top of the column, it is
designated by. You can highlight specific cells by clicking INSIDE the cell and dragging
the mouse. Pressing ENTER moves you down a column. Pressing TAB moves you
across a row.
A new workbook contains three separate worksheets. Tabs at bottom of the worksheets,
labeled “Sheet 1” etc allow you to switch between the sheets. You can insert a new sheet
by clicking INSERT, then WORKSHEET.
Part 1. Entering Formulas
Enter the following data in a column: 45, 56, 48, 51, 26, 58, 41, 67, 52, 57. Take the
average. We can do this by entering a formula. All formulas must begin with an equal
sign. Microsoft Excel has many common formulas “programmed” under key words. The
average is on of these. It’s keyword is ‘average.’ After typing ‘=average’ it is necessary
to specify the cells which have the numbers to be averaged. For example, where A1 is
the beginning cell and A10 is the ending cell:
After you type the complete formula, and hit enter, the answer replaces your formula.
Take the standard deviation, keyword is ‘stdev’, and the sum, keyword is ‘sum’. You
should determine that the standard deviation is 11.19 and the sum is 501. For a complete
list of keywords click INSERT then FUNCTION. Use one of these keywords to find the
median of the data set. Your answer should be 52.
If the data in column A was supposed to have more significant figures we could format
our cells. To do this, highlight the appropriate cell and click FORMAT, then CELLS,
then NUMBER. Under category, choose NUMBER and select correct number of
decimal places. You can also put numbers in scientific notation from this screen. Also
under FORMAT, then CELLS you can change the font and colors of both the font and
background. You should be able to use these features.
It is possible to perform mathematical functions with the data we input. Again, formulas
must begin with an equals sign. For example, if we want to multiply the values in
Column A by 5, we would type in B1 ‘=A1*5’. A1 can be typed, or we can physically
click on cell A1, after typing the equals sign and then continue typing the formula.
Rather than typing this in the remaining nine cells, B2-B10, we can highlight B1 through
B10, then click EDIT on the toolbar, and select FILL and DOWN. Try this for the
formulas shown below. Notice, sometimes parenthesis need to be used.
Part II. Making Graphs
The following is data from a viscosity experiment. Enter it in Worksheet 2.
Concentration Viscosity
1.22860 1.3800
1.13580 1.3300
1.00010 1.2772
0.91580 1.2418
0.79980 1.2052
0.70056 1.1603
0.61430 1.1000
0.50389 1.0604
0.41586 1.0262
0.30715 1.0000
This data does not have a linear relationship. In order to produce a linear relationship,
take the natural log of viscosity. Do this in column C. The formula is ‘=ln(B2)’ for the
first value.
After calculating the natural log for all values of viscosity, the data can be graphed. Click
on INSERT then CHART. We want to represent this data in an XY scatter plot.
Click NEXT. Now, we need to tell the program what to graph. To do this, click on the
SERIES tab. The ‘Series’ box should be empty. If it contains anything, highlight it and
click remove. Now we need to add the correct series. Click ADD. In the ‘name’ box,
label the series. Click in the ‘X-values’ box; then highlight the concentration column,
A2-A11. Click in the ‘Y’ box; then highlight the values you calculated for the natural log
of viscosity, C2-C11.
Then, click NEXT. Now, we can label the axis and the title of the graph as shown below.
Then, click FINISH.
To delete the grey background, click on the background. A rectangle will outline the
background. Press delete. To change the scale of your axis to make it more appropriate,
so your data fills the graph, double click the axis you want to change. Click on the scale
tab. For this case, it is more appropriate for the minimum value of the x-axis to be 0.2,
rather than 0. There are other interesting features within this menu you may want to try.
Excel can be used to find a best fit line for data. In this case, the data looks linear so we
will fit it with a linear trend line. To do this, click on a data point on the graph. Then
click CHART, then ADD TRENDLINE. Under the type tab, select linear. On the
OPTIONS tab, make sure the boxes are checked to ‘display equation on chart’ and
‘display R-squared value on chart.’
As you can see, the data points do not fall exactly on the line. We need to know the error
in the slope and intercept of the trendline, as well as the distribution of the data points
around the line, known as residuals. To do this, you need to use the data analysis
toolpak. To do this, click TOOLS then DATA ANALYSIS. (If you do not see DATA
ANALYSIS as an option, please see the paragraph about it’s installation at the end of this
tutorial, Part III. ) Scroll down and select ‘Regression.’
In the ‘Input Y range’ and ‘Input X range’ fields, select the same data you used to make
your graph. Make sure the boxes are checked next to ‘Confidence Level:95%’,
‘Residuals’ and ‘Residual plots.’
Click OK. This will output in a new worksheet unless you specify and output range.
You will notice that not only have you plotted your residuals, but Excel has calculated
many other statistics for you. Most importantly, you will see the slope, labeled here as
‘X Variable 1’, and the intercept of the trendline in yellow. The error to the slope and
intercept are highlighted in green.
Sometimes, it is useful to put more than one data set on the same graph. For example, if
you did two trials of the above viscosity experiment, both could be displayed on the same
graph by adding a series. For trial 2, the viscosities are: 1.5233, 1.3512, 1.2975, 1.4545,
1.2244, 1.1788, 11.1175, 1.0621, 1.0426, and 1.0002. Again, you will have to take the
natural log of this data to make it linear. To add this data to your graph, click on the
graph, click CHART from the toolbar, then ADD SOURCE DATA. Click the series tab.
Then click ADD. Name this data set something different than the first, then input the
appropriate x and y data. Add a trend line to this data.
As you can see in the figure above, we have changed the colors of our lines and trendline
equations so that they can be distinguished between. To do this, double click on the item
you want to change.
Part III. Installing Data Analysis Toolpak
To install the data analysis toolpak, you need to click TOOLS, then ADD-INS. Check
the box for ‘Analysis ToolPak’ and ‘Analysis Toolpak VBA’. Then click OK. You may
be prompted to insert your Microsoft Office Disk. If you don't see the appropriate boxes
to check, you'll have to rerun the Excel installation routine from the CD and make sure
you install the ATP. After the installation you will be able to click TOOLS then DATA
ANALYSIS. Sometimes it may be necessary to restart Excel before being able to run the
data analysis.
*This is just a brief introduction to Excel. This program has many more useful features
that could be very beneficial for you to learn. You should try to spend some time
becoming proficient in Excel because it will save you time while trying to do homework
and lab reports and is a good skill to have upon entering the workforce.
|
{"url":"http://www.docstoc.com/docs/8386420/How-To-Use-Excel-Graphs","timestamp":"2014-04-18T03:14:25Z","content_type":null,"content_length":"59769","record_id":"<urn:uuid:2ffbebf0-64fe-4fb2-b2b1-62472ed0c27f>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: Efficient algorithm needed to split Set::IntSpan objects
in reply to Efficient algorithm needed to split Set::IntSpan objects
I think the first step would be to split ranges which overlap other ranges into separate non-overlapping ranges:
Why do you need to split them into non-overlapping segments before doing $hash{$item}++ ?
Also, are you absolutely sure that iterating over the elements will really be a bottleneck? It seems likely that you won't be able to gain much efficiency since the obvious algorithm is so simple.
Anyway, here is a sparser way to represent this problem:
You can represent the set of ranges by just keeping track of places where the # of intervals changes, so that
(3,1), (5,2), (7,3), (9,2), (12,0)
In other words, if (i,j), (m,n) are adjacent in this list, then there are j ranges that cover element i to element m-1. This list is sparse, and its size only depends on the number of ranges, not the
number of their elements.
To query this list on a number (to see how many ranges cover a point x), you can do a binary search to find the largest number < x in the list. That entry in the list will tell you how many ranges
cover x.
To construct the list, you can do $delta{$start}++, $delta{$end}--, for every ($start,$end) interval (I chose a hash because it can stay sparse if the intervals are large). Then you can iterate
through the sorted keys of %delta and make a running total.
my @intervals = ([3,11], [5,8], [7,11]);

my %delta;
for (@intervals) {
    my ($start, $end) = @$_;
    $delta{$start}++;    # one more range starts covering here
    $delta{$end}--;      # one range stops covering here (use $delta{$end + 1}-- if ends are inclusive)
}

my $total = 0;
my @points;
for (sort { $a <=> $b } keys %delta) {
    next if $delta{$_} == 0; ## update: added this line
    $total += $delta{$_};
    push @points, [$_, $total];
}
Again, this is much more efficient in the theoretical sense (to generate the data structure takes O(n log n), where n is the # of intervals, compared to O(nt) where t is the average size of an
interval), but maybe not much of a gain for you depending on the actual sizes of things involved (and depending on what kind of queries you want to make to the data structure). Querying the data
structure is a tradeoff, it is now O(log n) instead of constant had you gone the route of iterating through all the elements of the intervals.
|
{"url":"http://www.perlmonks.org/?node_id=796975","timestamp":"2014-04-19T14:18:36Z","content_type":null,"content_length":"20103","record_id":"<urn:uuid:e9e66845-33fe-4ecc-aeb0-d208cc744fde>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Schematics of (a) the (001) projection of centrosymmetric unstrained SrTiO3, (b) tetragonal SrTiO3 with bi-axial tensile strain within (001)-plane, showing the displacements and in the oxygen
sub-lattice, and (c) bi-axially strained SrTiO3 indicating the sequence of TiO2 and SrO (001)-planes parallel to the plane of tension. The oxygen sites 1–4 are within same TiO2 (001)–plane, and sites
5 and 6 are within the adjacent SrO (001)-plane.
Projection the TiO2-(001) plane showing the V O in-plane diffusion steps, , and . and are equivalent. Ti ions are labeled, with the remaining sites being oxygen.
In-plane V O migration barrier as a function of isotropic, (001) bi-axial tensile strain, for path (Fig. 2 ), each panel showing the results for a different charge state.
Schematic view showing the four inter-plane migration steps labeled , and . and represent V O diffusion from the TiO2 (001)-plane to the adjacent SrO (001)-plane, and diffusion from the SrO (001)
–plane to the next TiO2 (001)-plane are represented by and .
(a) V O inter-plane migration NEB profile for the V O migrating along the path indicated in (b) for 4% strain. (b) shows schematic views for V O migration between two TiO2-planes (sites 1 and 3) via
a SrO-plane (site 2). Both diagrams represent processes indicated as followed by .
Calculated inter-plane V O migration barriers as a function of in-plane biaxial tensile strain for diffusion along (diamonds) and (triangles), with (a), (c), and (e) representing neutral, +1, and +2
charge states, respectively. For the (open circles) and (squares) the neutral, +1, and +2 charge states are plotted in (b), (d), and (f), respectively. The paths are shown in Fig. 7 . In each plot,
the filled circles represent the energy difference between V O in the TiO2 and SrO planes, equal to the difference in the forward and reverse barrier heights.
Schematic views of V O diffusion process for (a) in-plane diffusion completely within TiO2 (001)–plane. (b) in-plane diffusion involving both TiO2 (001) and SrO (001) planes. (c) The inter-plane
diffusion. The long Ti-O bonds along [100] and [010] directions are highlighted with yellow colors. The red arrows point to the rate limiting step for each diffusion process.
Calculated parameters for isotropic bi-axial (001) tensile strained SrTiO3. (Å) is the displacement of the oxygen ions along [100] (and [010]) relative to Ti and Sr. is the equilibrium ratio obtained
for the ferroelectric distorted structures under strain. Relative energies per formula unit (meV) are calculated relative to cubic, unstrained SrTiO3 (E 1), and relative to biaxially strained,
centro-symmetric SrTiO3 (E 2). is the equilibrium ratio for the cubo-symmetric phase.
Diffusion barriers for the rate limiting steps for the in-plane and inter-plane paths shown in Fig. 9 . For in-plane, the values in parentheses refer to path (b). For each value of strain, the value
in bold face indicates which in-plane path ((a) or (b)) is lower in energy, and the underline indicates which of (a), (b), or (c) has the lowest barrier. The unstrained values are included for
Scitation: Impact of tensile strain on the oxygen vacancy migration in SrTiO3: Density functional theory calculations
|
{"url":"http://scitation.aip.org/content/aip/journal/jap/113/22/10.1063/1.4809656","timestamp":"2014-04-17T04:24:54Z","content_type":null,"content_length":"105554","record_id":"<urn:uuid:c9039427-3b2a-4b15-919f-b9b71fb27f4c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chapter 13: Statics
Statics is primarily the study of bodies in static equilibrium. There are two conditions necessary for static equilibrium: the net force on a body equals zero and the net torque on a body equals
zero. This is why we have waited until after discussing rotations to consider statics. Although any body with a constant velocity (of its center of mass) and a constant angular momentum is in
equilibrium, the conditions of static equilibrium are most often applied to bodies that are at rest and not rotating. In many disciplines, especially mechanical engineering, understanding principles
of statics is essential. After all, we hope that our buildings, bridges, cranes, etc., always maintain static equilibrium.
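As a simple worked illustration (an added example, not taken from the Physlets themselves): consider a uniform 4 m plank of weight W resting on supports at its two ends, with a person of weight P standing 1 m from the left end. The two conditions give N_L + N_R - W - P = 0 for the net force and, taking torques about the left support, 4 N_R - 2 W - 1·P = 0 for the net torque. Solving gives N_R = W/2 + P/4 and N_L = W/2 + 3P/4, so the left support, nearer the person, carries the larger share of the load.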
|
{"url":"http://www.compadre.org/Physlets/mechanics/intro13.cfm","timestamp":"2014-04-18T21:12:16Z","content_type":null,"content_length":"17751","record_id":"<urn:uuid:2975f827-8ab1-4742-87e5-db6444ebd138>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Basic Aptitude Questions & Answers
In the given figure, PA and PB are tangents to the circle at A and B respectively and the chord BC is parallel to tangent PA. If AC = 6 cm, and length of the tangent AP is 9 cm, then what is the
length of the chord BC?
Ans. BC = 4 cm.
Three cards are drawn at random from an ordinary pack of cards. Find the probability that they will consist of a king, a queen and an ace.
Ans. 64/22100 = 16/5525.
A number of cats got together and decided to kill between them 999919 mice. Every cat killed an equal number of mice. Each cat killed more mice than there were cats. How many cats do you think there
were ?
Ans. 991.
If (Log x)^2 - 5 Log x + 6 = 0, then what would the value / values of x be?
Ans. x = e2 or e3.
The square of a two digit number is divided by half the number. After 36 is added to the quotient, this sum is then divided by 2. The digits of the resulting number are the same as those in the
original number, but they are in reverse order. The ten's place of the original number is equal to twice the difference between its digits. What is the number?
Ans. 46
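A brute-force search over all two-digit numbers (an added sketch) confirms that 46 is the only solution:

```python
# Sketch: brute-force the two-digit number puzzle.
for n in range(10, 100):
    tens, units = divmod(n, 10)
    result = (n * n / (n / 2) + 36) / 2      # square / half the number, plus 36, halved
    if result == units * 10 + tens and tens == 2 * abs(tens - units):
        print(n)                             # 46
```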
Can you tender a one rupee note in such a manner that there shall be total 50 coins but none of them would be 2 paise coins.?
Ans. 45 one-paisa coins, 2 five-paise coins, 2 ten-paise coins, and 1 twenty-five-paise coin (45 + 10 + 20 + 25 = 100 paise, with 45 + 2 + 2 + 1 = 50 coins).
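A short check of the quoted combination (added for illustration):

```python
# Sketch: verify that the combination makes one rupee (100 paise) with 50 coins.
coins = {1: 45, 5: 2, 10: 2, 25: 1}   # paise value -> number of coins (no 2-paise coins)
print(sum(value * count for value, count in coins.items()))   # 100
print(sum(coins.values()))                                    # 50
```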
A monkey starts climbing up a tree 20ft. tall. Each hour, it hops 3ft. and slips back 2ft. How much time would it take the monkey to reach the top?
Ans. 18 hours. (After 17 hours the monkey has a net height of 17 ft; during the 18th hour it hops 3 ft and reaches the 20 ft top before it can slip back.)
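A small simulation (an added sketch, not part of the original) gives the same result:

```python
# Sketch: the monkey hops 3 ft each hour, then slips 2 ft unless it has reached the top.
height, hours, top = 0, 0, 20
while True:
    hours += 1
    height += 3
    if height >= top:
        break
    height -= 2
print(hours)   # 18
```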
What is the missing number in this series? 8 2 14 6 11 ? 14 6 18 12
Ans. 9
A certain type of mixture is prepared by mixing brand A at Rs.9 a kg. with brand B at Rs.4 a kg. If the mixture is worth Rs.7 a kg., how many kgs. of brand A are needed to make 40kgs. of the mixture?
Ans. 24 kg of brand A (and 16 kg of brand B).
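The alligation can be verified quickly (an added sketch):

```python
# Sketch: solve 9*a + 4*b = 7*(a + b) with a + b = 40 kg by alligation.
total, price_a, price_b, price_mix = 40, 9, 4, 7
a = total * (price_mix - price_b) / (price_a - price_b)
print(a, total - a)   # 24.0 16.0 -> 24 kg of brand A and 16 kg of brand B
```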
|
{"url":"http://discuss.itacumens.com/index.php?topic=21538.0","timestamp":"2014-04-19T09:23:38Z","content_type":null,"content_length":"31552","record_id":"<urn:uuid:b825b37d-189c-41d9-b5b5-7dc034a65351>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Robert N.
All of Robert’s current tutoring subjects are listed at the left. You can read more about Robert’s qualifications in specific subjects below.
Algebra 1
Algebra is the start of solving for the unknown. If at any point in time you have solved the mathematical problem 2+2=, you have done algebra. If you look at it from this standpoint, you now have a
foundation to build from. My purpose is to provide you assistance for the short term so that you are capable of standing on your own two feet in solving algebraic equations. I graduated from MIT and
have been applying these concepts for the course of my career. I enjoy helping students achieve their academic goals and becoming successful. I would like to provide the same opportunity for you.
Algebra 2
Algebra is the start of solving for the unknown. If at any point in time you have solved the mathematical problem, 2+2=, you have done algebra. If you look at it from this standpoint, you now have a
foundation to build from. My purpose is to provide you assistance for the short term so that you are capable of standing on your own two feet in solving algebraic equations. I graduated from MIT and
have been applying these concepts for the course of my career. I enjoy helping students achieve their academic goals and becoming successful. I would like to provide the same opportunity for you.
Baseball
Played baseball through college at the positions of pitcher and first base. Excellent pitching and batting coach.
Calculus
Calculus, in summary, is the language that links distance with velocity and acceleration, circumference with area and volume, and mass with momentum and kinetic energy. My philosophy is that once you understand the purpose of calculus, you will be more in tune with grasping the individual concepts it requires you to learn in differentiation and integration. That is what I can bring to the table. I have engineered spacecraft, launch vehicles, and aircraft. Calculus directly applies to orbit mechanics in space travel as well as aerodynamics for aircraft.
Chess
Used to play in national tournaments; my highest rating was 1792.
Fitness
Involved in fitness for over 20 years, including weightlifting, swimming, racquetball, and sport activities. Formerly a certified step aerobics instructor through the AIAA.
Geometry
Geometry is the skeletal system for all of mathematics. For any student who aspires to study in the fields of science and mathematics, a thorough understanding of geometry is vital in order to master advanced subjects, from calculus and differential equations to any of the engineering disciplines. Geometry serves as the artistic picture of math, and my purpose is to give students the opportunity to see the math in their heads as they solve complex problems.
Mechanical Engineering
I worked as an engineer in the aerospace industry for over 20 years and I majored in Aeronautical Engineering at MIT which is Mechanical Engineering specialized for aviation and space. Very familiar
with the engineering acquisition cycle and have served as a manager for each part of the acquisition cycle from concept study to test and evaluation to production and logistics.
Prealgebra
Prealgebra is the start of math beyond just counting your change. It is the base foundation of all mathematical concepts to come. Understanding math at this level is critical to being able to master all of the integrated concepts from Algebra to Calculus and Differential Equations. This class is the start of your mathematical resume.
Precalculus
Calculus, in summary, is the language that links distance with velocity and acceleration, circumference with area and volume, and mass with momentum and kinetic energy. Pre-calculus exposes you to the principles that lead up to the purpose of calculus. My philosophy is that once you understand the purpose of calculus, you will be more in tune with grasping the individual concepts it requires you to learn in differentiation and integration. That is what I can bring to the table. I have engineered spacecraft, launch vehicles, and aircraft. Calculus directly applies to orbit mechanics in space travel as well as aerodynamics for aircraft.
|
{"url":"http://www.wyzant.com/Tutors/TN/Brentwood/7805831/Subjects.aspx","timestamp":"2014-04-21T05:57:53Z","content_type":null,"content_length":"82145","record_id":"<urn:uuid:71f377d9-27e5-46f2-9c39-c3c91d104778>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Alviso Statistics Tutor
...I believe that mathematics should not get in the way of learning the natural sciences, so I am capable of tutoring in that as well. My passion is in chemistry but the quantitative nature of the
natural sciences means that I am fluent in algebra through calculus. By nature of my coursework and e...
24 Subjects: including statistics, reading, chemistry, calculus
...I have a Bachelor's degree in mathematics from the University of Santa Clara and a Master's degree in mathematics/engineering from Stanford University. I'm a patient tutor with a positive,
collaborative approach to building mathematical skills for algebra, pre-calculus, calculus (single variabl...
22 Subjects: including statistics, calculus, geometry, accounting
I have a Ph.D. in Theoretical Physics from M.I.T. and work as a researcher in the Stanford Physics Department. I have been tutoring both high school and college students for more than 10 years. I tutor
all subjects in both Math and Physics at all levels. In addition, I tutor SAT Math and Verbal.
15 Subjects: including statistics, physics, calculus, geometry
...I've written many proofs both in math and philosophy and taught how to do so. I received an A in marketing in my college class. I also ran a marketing branch office for two summer breaks for
Vector Marketing and have exposure to real world applications.
35 Subjects: including statistics, reading, calculus, geometry
...I have a Masters in mathematics and a PhD in economics which requires a good understanding of both topics. I understand both the theoretical basis and practical application of both subjects. In
particular, I understand the relationship between the two subjects.
49 Subjects: including statistics, calculus, physics, geometry
|
{"url":"http://www.purplemath.com/Alviso_statistics_tutors.php","timestamp":"2014-04-17T19:42:06Z","content_type":null,"content_length":"23863","record_id":"<urn:uuid:677e2572-7b71-4812-abbe-5166be1f2880>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00222-ip-10-147-4-33.ec2.internal.warc.gz"}
|
(Almost) Lowness for K and finite self-information
Seminar Room 1, Newton Institute
A real $X$ is called low for $K$ if there is a constant $c$ such that using $X$ as an oracle does not decrease the Kolmogorov complexity of any string by more than $c$; that is, the inequality $K(\sigma) \leq K^{X}(\sigma) + c$ holds for all $\sigma \in 2^{<\omega}$. One can relax this definition by replacing the constant with a slow-growing function $f$; one then obtains reals that are `almost' low for $K$ `up to $f$'. It is not surprising that there are reals that are `almost' low for $K$ but not actually low for $K$, but the classes of reals that are `almost' low for $K$ and those that are low for $K$ in the traditional sense can behave very differently. We will explain some of these results, and in particular discuss how they relate to one definition of mutual information.
|
{"url":"http://www.newton.ac.uk/programmes/SAS/seminars/2012070315001.html","timestamp":"2014-04-20T10:49:11Z","content_type":null,"content_length":"6595","record_id":"<urn:uuid:f43a141d-066a-4d00-ad7f-23fdfb75be96>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Integration by partial fractions?
Whoa, this here is kicking me hard! Okay, so I've got everything pretty well down until... stuff like... [tex]\int \frac{3x + 32}{x^{2}-16x + 64}dx[/tex]
So, I get how to factor the denominator, but then what? The above won't factor... Also, I read that if the degree of the numerator is higher than the denominator I gotta do polynomial long
division... I need a review of polynomial long division; Lol.
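A note added here (not part of the original thread): the denominator in this example does factor, since x^2 - 16x + 64 = (x - 8)^2, so the decomposition uses repeated linear factors, (3x + 32)/(x - 8)^2 = 3/(x - 8) + 56/(x - 8)^2, and the antiderivative is 3 ln|x - 8| - 56/(x - 8) + C. A quick symbolic check with SymPy:

```python
# Sketch: verify the factorisation, the partial fractions, and the integral.
import sympy as sp

x = sp.symbols('x')
expr = (3*x + 32) / (x**2 - 16*x + 64)

print(sp.factor(x**2 - 16*x + 64))   # (x - 8)**2
print(sp.apart(expr))                # 3/(x - 8) + 56/(x - 8)**2
print(sp.integrate(expr, x))         # 3*log(x - 8) - 56/(x - 8)
```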
|
{"url":"http://www.physicsforums.com/showthread.php?p=4161138","timestamp":"2014-04-17T03:51:08Z","content_type":null,"content_length":"30500","record_id":"<urn:uuid:baecd4b7-2e14-45ec-8518-3d89cc2ec12f>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finding Width of a Pathway?
A rectangular garden of length 40 feet and width 20 feet is surrounded by a path of uniform width. If the area of the walkway is 325 ft², what is its width?
Try using the set-up diagram displayed in this thread.
In your case, of course, the length and width are given, and you're finding the width "x" of the pathway. So plug the given length and width into your diagram, in place of the "w" and "2w" in that
picture, and plug "x" in for the "2".
Otherwise, the set-up and solution process are the same.
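For reference, a quick check of the numbers (an added sketch, not part of the original thread): with path width x, the outer rectangle is (40 + 2x) by (20 + 2x), so (40 + 2x)(20 + 2x) - 40*20 = 325, which reduces to 4x^2 + 120x - 325 = 0 and gives x = 2.5 ft.

```python
# Sketch: solve (40 + 2x)(20 + 2x) - 40*20 = 325 for the path width x.
import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.solve(sp.Eq((40 + 2*x) * (20 + 2*x) - 40 * 20, 325), x))   # [5/2] -> 2.5 ft
```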
|
{"url":"http://www.purplemath.com/learning/viewtopic.php?f=11&t=210&p=563","timestamp":"2014-04-16T19:04:30Z","content_type":null,"content_length":"18668","record_id":"<urn:uuid:4f82c242-aef1-447c-8830-f86097b7c7ef>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
|