Logic comment in Mumford's Red Book
In Mumford's "The red book of varieties and schemes" one of the examples (G on pg 74) is the space Spec $(\prod_{i=1}^\infty k)$, where $k$ is a field. He mentions that "Logicians assure us that we
can prove more theorems if we use these outrageous spaces".
Are there any examples of theorems proved using such spaces, or any references to logicians saying such a thing?
ag.algebraic-geometry lo.logic
5 Answers
At (about) the time Mumford was giving his lectures at Harvard, Ax was lecturing on his work with Kochen in which they proved a conjecture of Artin for almost all p by using
ultrafilters. This is clearly what Mumford was thinking of. The reference for the Ax-Kochen work is:
(Accepted answer.) MR0184930 Ax, James; Kochen, Simon. Diophantine problems over local fields. I. Amer. J. Math. 87 (1965), 605--630; ibid. 87 (1965), 631--648.
These things have to do with ultrafilters. This is not my field, but http://www.math.sc.edu/~nyikos/rings1.ps seems like an okay introduction to the area. I think that if k is a ring and X is an infinite set, Spec(k^X) can be identified with the set of ultrafilters on X. The theorem that every ideal is contained in a maximal ideal implies the existence of non-principal ultrafilters (which correspond to maximal ideals of k^X other than the obvious ones), which logicians use to make constructions of non-standard models of the reals and the like.
Of course, the existence of a non-principal ultrafilter uses the axiom of choice. So there's a very real sense in which you can prove "more theorems" using these ideas. – HJRW Nov 12 '09 at 6:15
I think you have to assume that k is a finite field. – Arno Kret Oct 5 '11 at 18:12
You're right that k should be a field, but I don't think it has to be finite. – Alison Miller Mar 31 '12 at 15:40
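For concreteness, the identification mentioned above can be spelled out. This is a standard sketch (the notation $\mathfrak{m}_{\mathcal F}$ is mine, not from the thread): an ultrafilter $\mathcal F$ on $X$ determines a maximal ideal of $k^X$, and for $k$ a field every prime ideal of $k^X$ arises this way.

```latex
% For an ultrafilter F on X, the functions vanishing "F-almost everywhere"
% form an ideal of k^X:
\mathfrak{m}_{\mathcal F}
  \;=\; \bigl\{\, f \in k^X \;:\; \{\, x \in X : f(x) = 0 \,\} \in \mathcal F \,\bigr\}.
% The quotient k^X / \mathfrak{m}_{\mathcal F} is the ultraproduct of copies of k
% along F, which is a field, so \mathfrak{m}_{\mathcal F} is maximal.
% Principal ultrafilters (all sets containing a fixed x_0) give the "obvious"
% maximal ideals \{ f : f(x_0) = 0 \}; non-principal ultrafilters give the rest.
```

This is the sense in which Spec of the product in the question is the set of ultrafilters on the index set.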
Like Alison said, one can identify $Spec(k^{\mathbb{N}})$ with the set of ultrafilters on $\mathbb{N}$. There is a canonical topology on this set, which makes it into the Stone-Cech compactification of $\mathbb{N}$, $S\mathbb{N}$: one takes as a basis the sets $U_A = \{F \in S\mathbb{N} : A \in F\}$, where $A \subset \mathbb{N}$. $S\mathbb{N}$ is universal among compactifications of $\mathbb{N}$, in the sense that every map from $\mathbb{N}$ to a compact $X$ extends uniquely to $S\mathbb{N}$. It's not hard to see that $S\mathbb{N}$ is homeomorphic to $(Spec(k^{\mathbb{N}}), \text{Zariski})$.

I think that what Mumford is pointing at are Stone-Cech compactifications in general rather than $Spec(k^{\mathbb{N}})$ in particular. There's quite a good brief definition and explanation of them in Steen & Seebach's "Counterexamples in Topology"; you could also check out the references on the Wikipedia page.
If you take a product of finite fields of infinitely many characteristics and divide by a maximal ideal, the result is called a pseudo-finite field. This has characteristic zero and a commutative Galois group: $\hat{\mathbb{Z}}$.

This is decades later than Mumford's book, but Tomasic constructed a Weil cohomology theory using pseudofinite fields as coefficients: http://www.maths.qmul.ac.uk/~ivan/psf-weil-coh.ps

The construction still uses etale cohomology, but only with finite coefficients. But I guess the verification that it satisfies the Weil axioms still makes use of the ell-adic theory.
Non-triviality of an infinite product is a reformulation of the Axiom of Choice. So possibly he is referring to the fact that (logicians say that) the Axiom of Choice is independent, and has some consequences that cannot be proved without it.

You don't need AC for the non-triviality of an infinite power. Of course AC does affect the size of that spectrum. – David Feldman Feb 7 '11 at 4:33
{"url":"http://mathoverflow.net/questions/5159/logic-comment-in-mumfords-red-book/5206","timestamp":"2014-04-18T00:35:08Z","content_type":null,"content_length":"70277","record_id":"<urn:uuid:3bbd9a6f-4c42-4428-b2a0-7a6bff3e7d72>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00510-ip-10-147-4-33.ec2.internal.warc.gz"}
If A is singular; solution space of Ax=b
Suppose it is known that A is singular. Then the system Ax=0 has infinitely many solutions by the Invertible Matrix theorem.
I am curious about the system Ax=b, for any column vector b. In general, i.e. for all vectors b, will this system be inconsistent, or will it have infinitely many solutions?
Surely there exists a vector b for which this system is inconsistent. For otherwise, if it were consistent for every vector b, it would necessarily be invertible (again by the IM theorem), but by
assumption it is not.
So here are a few questions I have begun to think about, but not fully able to explain:
Given that A is a singular square matrix:
1) Does there necessarily exist a vector b for which Ax=b has infinitely many solutions?
2) Does there necessarily exist a vector b for which Ax=b has a unique solution?
3) If b is a certain column vector, can one determine simply by inspection whether Ax=b is inconsistent, has a unique solution, or has infinitely many solutions?
I would appreciate an answer to these questions. I prefer to do the "proofs" myself, so don't give anything away. Thanks much! | {"url":"http://www.physicsforums.com/showthread.php?p=4196813","timestamp":"2014-04-17T07:25:29Z","content_type":null,"content_length":"30877","record_id":"<urn:uuid:9bf05844-a463-4efd-b60b-46d2215cec79>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
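The phenomena asked about above can be experimented with numerically before attempting the proofs. A minimal C++ sketch with a matrix of my own choosing (not from the thread): for the singular 2x2 matrix A = [1 2; 2 4], whose second row is twice its first, Ax = b is consistent exactly when b obeys the same proportion, and then every solution of x1 + 2*x2 = b1 works.

```cpp
#include <cassert>

// A = [1 2; 2 4] is singular: row 2 = 2 * row 1, so det(A) = 0.
// Ax = b reads x1 + 2*x2 = b1 and 2*x1 + 4*x2 = b2, hence the
// system is consistent iff b2 == 2*b1; when it is, every x with
// x1 + 2*x2 = b1 is a solution, i.e. infinitely many of them.
bool consistent(double b1, double b2) {
    return b2 == 2.0 * b1;
}

// For a consistent b, return one solution with the free variable x2 set to t.
void particular_solution(double b1, double t, double& x1, double& x2) {
    x2 = t;
    x1 = b1 - 2.0 * t;
}
```

For example, b = (3, 6) is consistent, with solutions (3 - 2t, t) for every t, while b = (3, 5) is inconsistent.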
Height And Distance
Mob.9835859669 JSUNIL TUTORIAL
Punjabi colony gali no. 01
Class X Applications of Trigonometry
1. A circus artist is climbing a 20 m long rope, which is tightly stretched and tied from the top of a vertical pole to the ground. Find the height of the pole, if the angle made by the rope with the
ground level is 30°.
2. A tree breaks due to storm and the broken part bends so that the top of the tree touches the ground making an angle 30° with it. The distance between the foot of the tree to the point where the
top touches the ground is 8 m. Find the height of the tree.
3. A contractor plans to install two slides for the children to play in a park. For the children below the age of 5 years, she prefers to have a slide whose top is at a height of 1.5 m, and is
inclined at an angle of 30° to the ground, whereas for elder children, she wants to have a steep slide at a height of 3m, and inclined at an angle of 60° to the ground. What should be the length of
the slide in each case?
4. The angle of elevation of the top of a tower from a point on the ground, which is 30 m away from the foot of the tower, is 30°. Find the height of the tower.
5. A kite is flying at a height of 60 m above the ground. The string attached to the kite is temporarily tied to a point on the ground. The inclination of the string with the ground is 60°. Find the
length of the string, assuming that there is no slack in the string.
6. A 1.5 m tall boy is standing at some distance from a 30 m tall building. The angle of elevation from his eyes to the top of the building increases from 30° to 60° as he walks towards the building.
Find the distance he walked towards the building.
7. From a point on the ground, the angles of elevation of the bottom and the top of a transmission tower fixed at the top of a 20 m high building are 45° and 60° respectively. Find the height of the tower.
8. A statue, 1.6 m tall, stands on the top of a pedestal. From a point on the ground, the angle of elevation of the top of the statue is 60° and from the same point the angle of elevation of the top
of the pedestal is 45°. Find the height of the pedestal.
9. The angle of elevation of the top of a building from the foot of the tower is 30° and the angle of elevation of the top of the tower from the foot of the building is 60°. If the tower is 50 m
high, find the height of the building.
10. Two poles of equal heights are standing opposite each other on either side of the road, which is 80 m wide. From a point between them on the road, the angles of elevation of the top of the poles
are 60° and 30°, respectively. Find the height of the poles and the distances of the point from the poles.
11. A TV tower stands vertically on a bank of a canal. From a point on the other bank directly opposite the tower, the angle of elevation of the top of the tower is 60°. From another point 20 m away
from this point on the line joining this point to the foot of the tower, the angle of elevation of the top of the tower is 30°. Find the height of the tower and the width of the canal.
12. From the top of a 7 m high building, the angle of elevation of the top of a cable tower is 60° and the angle of depression of its foot is 45°. Determine the height of the tower.
13. As observed from the top of a 75 m high lighthouse from the sea-level, the angles of depression of two ships are 30° and 45°. If one ship is exactly behind the other on the same side of the
lighthouse, find the distance between the two ships.
14. A 1.2 m tall girl spots a balloon moving with the wind in a horizontal line at a height of 88.2 m from the ground. The angle of elevation of the balloon from the eyes of the girl at any instant
is 60°. After some time, the angle of elevation reduces to 30°. Find the distance travelled by the balloon during the interval.
15. A straight highway leads to the foot of a tower. A man standing at the top of the tower observes a car at an angle of depression of 30°, which is approaching the foot of the tower with a uniform
speed. Six seconds later, the angle of depression of the car is found to be 60°. Find the time taken by the car to reach the foot of the tower from this point.
16. The angles of elevation of the top of a tower from two points at a distance of 4 m and 9 m from the base of the tower and in the same straight line with it are complementary. Prove that the
height of the tower is 6 m.
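As a spot check on two of the problems above, problems 1 and 4 each reduce to one line of trigonometry: the rope problem is h = L * sin(theta) (height is the side opposite the angle), and the tower problem is h = d * tan(theta) (opposite = adjacent times tangent). A small sketch:

```cpp
#include <cassert>
#include <cmath>

const double PI = 3.14159265358979323846;

double deg2rad(double d) { return d * PI / 180.0; }

// Problem 1: height of the pole, given a 20 m rope at 30 degrees to the ground.
double rope_pole_height(double rope_len, double angle_deg) {
    return rope_len * std::sin(deg2rad(angle_deg));  // opposite side
}

// Problem 4: height of the tower, seen at elevation 30 degrees from 30 m away.
double tower_height(double distance, double angle_deg) {
    return distance * std::tan(deg2rad(angle_deg));  // opposite = adjacent * tan
}
```

rope_pole_height(20, 30) gives 10 m, and tower_height(30, 30) gives 30/sqrt(3) = 10*sqrt(3), about 17.32 m.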
{"url":"http://cbsemathstudy.blogspot.com/2011/03/height-and-distance.html","timestamp":"2014-04-18T08:01:52Z","content_type":null,"content_length":"99563","record_id":"<urn:uuid:705da189-4505-459a-9911-9f7d033a6672>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
permutation matrix question
March 6th 2011, 11:31 AM
permutation matrix question
the question highlighted below is in two parts, really have no clue how to start it, any help would be greatly appreciated!!
a. Recall that an elementary permutation matrix is an n x n matrix which is In except that two rows of
In have been swapped. If $P_{i;j}$ is the elementary permutation matrix where rows i and j have been swapped
and $A$ is a matrix (with n rows), describe the relationship between $A$ and $P_{i;j}A$.
b. Let A be the matrix:
$A=\left[\begin{array}{ccc}1&2&3\\2&4&6\\1&3&5\end{array}\right]$
Find a permutation matrix P, a lower triangular matrix L and an upper triangular matrix U so that
$A = P^{T}LU$:
thank you!
March 6th 2011, 12:29 PM
the question highlighted below is in two parts, really have no clue how to start it, any help would be greatly appreciated!!
a. Recall that an elementary permutation matrix is an n x n matrix which is In except that two rows of
In have been swapped. If $P_{i;j}$ is the elementary permutation matrix where rows i and j have been swapped
and $A$ is a matrix (with n rows), describe the relationship between $A$ and $P_{i;j}A$.
Have you looked at an example? Write down any permutation matrix, P, any matrix A, and multiply them. Compare A and PA. The answer should be obvious.
b. Let A be the matrix:
$A=\left[\begin{array}{ccc}1&2&3\\2&4&6\\1&3&5\end{array}\right]$
Find a permutation matrix P, a lower triangular matrix L and an upper triangular matrix U so that
$A = P^{T}LU$:
thank you!
March 6th 2011, 01:36 PM
ok so am i right in saying that if i took any identity matrix and swapped any two of its rows, that would be a permutation matrix?
eg. $\left(\begin{array}{ccc}1&0&0\\0&0&1\\0&1&0\end{array}\right)$ would be an elementary permutation matrix obtained from the 3x3 identity matrix?
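The comparison the hints ask for can also be checked mechanically. A sketch of my own (not from the thread): left-multiply the 3x3 permutation matrix above, which swaps rows 2 and 3 of the identity, by a test matrix A (I reuse the matrix from part b) and inspect the rows of the product.

```cpp
#include <cassert>

// Multiply two 3x3 matrices: C = P * A.
void matmul3(const double P[3][3], const double A[3][3], double C[3][3]) {
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            C[i][j] = 0.0;
            for (int k = 0; k < 3; ++k)
                C[i][j] += P[i][k] * A[k][j];
        }
}

// P_{2,3}: the identity with rows 2 and 3 swapped (as in the post above).
const double P[3][3] = {{1, 0, 0}, {0, 0, 1}, {0, 1, 0}};
// A test matrix (the one from part b of the question).
const double A[3][3] = {{1, 2, 3}, {2, 4, 6}, {1, 3, 5}};
```

Computing PA shows it is exactly A with rows 2 and 3 interchanged: left-multiplying by an elementary permutation matrix swaps the corresponding rows (right-multiplying would swap columns instead).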
March 7th 2011, 01:59 AM
You're correct. However, your problem has asked you to compare the result when you take the matrix you exhibited in your last post and left-multiply it with A, versus just plain A itself. That
is, if P is the matrix in your last post, then what's the difference between A and PA? | {"url":"http://mathhelpforum.com/advanced-algebra/173655-permutation-matrix-question-print.html","timestamp":"2014-04-18T20:55:15Z","content_type":null,"content_length":"9585","record_id":"<urn:uuid:844cfdc5-4df9-455d-92f5-1ea9c0c55e06>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00261-ip-10-147-4-33.ec2.internal.warc.gz"} |
GISS July temp up 0.09°C.
The NASA GISS surface temp average anomaly for July is up 0.09°C, from 0.51°C to 0.60°C; an updated plot is available. The TempLS calculation gave a 0.003°C rise.
So far no spatial plot available - when it is I'll compare that with TempLS.
4 comments:
1. Here are the maps of UAH, RSS and GISS (250 and 1200 radius) for July 2011 with the same base period
Action seems to be at the poles, so coverage probably matters. Also interesting to see the tropics and some regions above oceans are warmer for UAH and RSS.
2. Thanks, MP
I'll write a new post with this comparison. If you want to write something on what you have done, I would be very happy to post it here.
3. Of course, it doesn't mean much for the July anomaly to be "0.09 up" on June. If temperatures were linearly increasing from 1979 with no noise, then the July anomaly would be exactly the same as
the June anomaly. All months of a given year would share the same anomaly. The month to month anomaly is mostly noise with possibly some change in the seasonal cycle.
4. drj,
Yes, that's an artefact of the fixing of anomaly periods in a calendar year. But it also means that there would be a sudden change in anomaly every January, expressing the whole year's change.
One could work out an anomaly base that avoided that. If m=0:11 are months, and y=0:30 are years after 1950, say, and T(m,y) is the temp, then the base for month i is
That would smooth it out. | {"url":"http://moyhu.blogspot.com/2011/08/giss-july-temp-up-009c.html","timestamp":"2014-04-20T03:09:45Z","content_type":null,"content_length":"97598","record_id":"<urn:uuid:786bb87e-cceb-4dbc-bdad-9178eeec5986>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00377-ip-10-147-4-33.ec2.internal.warc.gz"} |
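drj's observation about the anomaly arithmetic is easy to verify with a synthetic series: under a pure linear trend with a fixed calendar-year base period, every month of a given year gets the same anomaly, and the whole year's change shows up as a jump each January. The sketch below is my own construction, not the blog's TempLS code.

```cpp
#include <cassert>
#include <cmath>

const int YEARS = 31;    // base-period years, y = 0..30
const double C = 0.01;   // trend per month (arbitrary, noise-free)

// Pure linear trend: temperature in month m of year y.
double T(int m, int y) { return C * (12.0 * y + m); }

// Anomaly base for month m: mean of T(m, y) over the base years.
double base(int m) {
    double s = 0.0;
    for (int y = 0; y < YEARS; ++y) s += T(m, y);
    return s / YEARS;
}

// Monthly anomaly relative to that fixed base.
double anomaly(int m, int y) { return T(m, y) - base(m); }
```

With this setup the anomaly works out to 12*C*(y - 15), independent of the month, so the month-to-month change within a year is zero and the January step carries a full year's trend.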
Welcome to Henrik Shahgholian's home page; PDE & Potential Theory Group
The development of a "potential analysis" group in the department of mathematics at the Royal Institute of Technology (KTH) wasn't really the result of planning, it just evolved. The starting point
was the discovery of "quadrature domains" (q.d.) which (in one incarnation) were first encountered by Dov Aharonov and Harold S. Shapiro in work on variational problems for conformal maps in the
early seventies. The late Bo Kjellberg was enthusiastic about these ideas and encouraged H.S. Shapiro to pursue them with graduate students, and this advice, which was followed, seems to have borne
An interesting feature of q.d. is that they sit at a crossroads of nice ideas from complex analysis, potential theory, partial differential equations and (as was later discovered by Mihai Putinar)
moment problems, and operator theory in Hilbert space. On the one hand, a q.d. is characterized as a uniform lamina having very remarkable "gravitational" properties: it attracts bodies far away just
as if it were a finite collection of point masses. On the other hand, it is characterized by the solvability of a certain overdetermined Cauchy problem (for the Laplace operator), so the subject at
once led (especially in its multivariable generalizations) to new problems involving Newtonian gravitation (especially so-called inverse problems) and overdetermined p.d.e. In the planar case the
p.d.e. part was encapsulated in a "Schwarz function", originally envisaged for the elaboration of reflection principles w.r.t. analytic boundaries.
The spade work in sorting out all this was largely done in doctorate theses by Shapiro's students, especially (named in chronological order) Carina Ullemar, Bjoern Gustafsson, Gunnar Johnsson, Henrik
Shahgholian and Peter Ebenfelt. One can also name here Lavi Karp, who began his research in Stockholm but finished his doctoral studies in Israel, with Aharonov, and who has maintained close contacts
with our group. Furthermore, an expanding international constellation of mathematicians has pursued these studies, often in collaboration with our group. Their ranks include Boris Sternin and V.
Shatalov (Moscow), Makoto Sakai (Tokyo), Dmitri Khavinson (Arkansas), Dov Aharonov (Haifa), Alexander Solynin (St. Petersburg), Mihai Putinar (Santa Barbara), John McCarthy (St. Louis), Liming Yang
(Hawaii), Dmitry Yakubovich (St. Petersburg) , and Daoxing Xia (Nashville).
At the present time we have a small but active group at KTH, which is very well connected internationally. The fields of initial focus have spread further with the years and encompass variational
inequalities, complementarity problems, free and moving boundary problems, Hele-Shaw flows, geometric measure theory, symmetrization, .... Stan Richardson (Edinburgh), Darren Crowdy (London), Robert Millar (Vancouver), David Armitage (Belfast), Stephen Gardiner (Dublin), Mark Agranovsky (Israel).
DW State Space Model
As discussed in §E.2, the traveling-wave decomposition Eq. (E.4) defines a linear transformation Eq. (E.10) from the DW state to the FDTD state. Since this transformation is invertible, it defines a change of coordinates for the state space. Substituting Eq. (E.27) into the FDTD state space model Eq. (E.24) gives Eq. (E.28). Multiplying through Eq. (E.28) by the inverse of this transformation yields the DW state space model of §E.2.
To verify that the DW model derived in this manner is the computation diagrammed in Fig. E.2, we may write down the state transition matrix for one subgrid from the figure to obtain the permutation matrix and the displacement output matrix.
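The step carried out above is the standard state-space change of coordinates: if the new state is T times the old one, with T invertible, then the transition matrix A maps to T A T^{-1} (and an output matrix C maps to C T^{-1}). A generic 2x2 sketch of that similarity transform follows; the numbers in the usage note are illustrative, not the actual DW/FDTD matrices of the text.

```cpp
#include <cassert>
#include <cmath>

// C = X * Y for 2x2 matrices.
void matmul2(const double X[2][2], const double Y[2][2], double Z[2][2]) {
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            Z[i][j] = X[i][0] * Y[0][j] + X[i][1] * Y[1][j];
}

// Xi = X^{-1}, assuming det(X) != 0.
void inverse2(const double X[2][2], double Xi[2][2]) {
    double d = X[0][0] * X[1][1] - X[0][1] * X[1][0];
    Xi[0][0] =  X[1][1] / d;  Xi[0][1] = -X[0][1] / d;
    Xi[1][0] = -X[1][0] / d;  Xi[1][1] =  X[0][0] / d;
}

// Similarity transform: Anew = T * A * T^{-1}.
void similarity(const double T[2][2], const double A[2][2], double Anew[2][2]) {
    double Ti[2][2], TA[2][2];
    inverse2(T, Ti);
    matmul2(T, A, TA);
    matmul2(TA, Ti, Anew);
}
```

For example, with T = [[1, 1], [1, -1]] (a sum/difference pair, loosely reminiscent of a traveling-wave decomposition) and A = [[0, 1], [1, 0]], the transform gives diag(1, -1); the trace and determinant, and hence the dynamics, are preserved.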
{"url":"https://ccrma.stanford.edu/~jos/pasp/DW_State_Space_Model.html","timestamp":"2014-04-18T23:55:08Z","content_type":null,"content_length":"19335","record_id":"<urn:uuid:f62b6294-94bc-419f-9efb-ae76763fb7bc>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
undergraduate research
Undergraduate research is a high priority at Morehouse College and the Division of Science and Mathematics supports and encourages initiatives that aim to increase the number of its graduates
entering PhD programs. As such, Morehouse faculty regularly work with undergraduates on research projects. Most of these projects are supported through one of the College’s “future research
scholars” programs--John Hopps Research Scholars Program, MBRS-RISE Program; Ronald E. McNair Program; LSAMP; MARC U*STAR; Howard Hughes Medical Institute Program; HBCU-UP and NIMH-COR.
Consequently, I am thrilled to say that there are many opportunities for Morehouse students to engage in research experiences in their respective science and mathematics disciplines.
The research endeavor tends to be unique for any given student and project. The reason that it is important to me to supervise research at the undergraduate level is that it exposes students to what research in mathematics entails, so they may make informed decisions about their own future career paths. In pursuing such experiences, students will learn the language of mathematics, proof
techniques, and software; write mathematics; read mathematics; and give presentations. Such skills will increase their ability to hit the ground running in their future career or graduate studies.
A successful undergraduate researcher will begin to make their own conjectures, develop original approaches to discovery, construct examples and counterexamples, connect ideas, write proofs and
survey results. | {"url":"http://www.morehouse.edu/facstaff/uwilson/morehouse/undergrad_research.html","timestamp":"2014-04-21T12:09:40Z","content_type":null,"content_length":"7224","record_id":"<urn:uuid:f15c8ec2-4694-4373-8717-4b2372839de6>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00556-ip-10-147-4-33.ec2.internal.warc.gz"} |
Class problem
03-26-2006 #1
Registered User
Join Date
Oct 2005
Class problem
I have a program that should take some points that make a straight line and calculate a slope. I am getting some random garbage and I don't know why.
I have a class "Line":
class Line
Point p1;
Point p2;
double slope;
double yInt;
Point readPts(Point p1, Point p2);
void slopePts(Point p1, Point p2);
and a class "Point":
class Point
double dX;
double dY;
friend class Line;
In my Line.cpp file, I have:
Point Line::readPts(Point p1, Point p2)
cout <<"Enter the point's X coordinate: ";
cin >> p1.dX;
cout <<"And its Y coordinate: ";
cin >> p1.dY;
cout <<"Enter the point's X coordinate: ";
cin >> p2.dX;
cout <<"And its Y coordinate: ";
cin >> p2.dY;
void Line::slopePts(Point p1, Point p2)
slope = (p2.dY - p1.dY) / (p2.dX - p1.dX);
In my main.cpp, here is where I call the function to read in points:
Line AB, BC;
AB.readPts(p1, p2); //reading in points for line AB
CD.readPts(p1, p2); //reading in points for line CD
If I print "p1.dX" or "p1.dY" etc. in the "readPts" function, obviously the correctly entered numbers will come out. However, when I print these in the "slopePts" function, they will be garbage.
This is why my slope is comming out as garbage as well.
You are passing the points by value to the readPts function, so that function is getting a copy and reading data into that copy. The original variables p1 and p2 in main.cpp never change and are
probably just un-initialized.
You could pass the points by reference to the function, or you could make the read function a member of the Point class, or what probably makes the most sense is to remove the function parameters
completely and read into the member variables of the Line class.
dX and dY are private, don't they have to be protected for other classes to reference them via the friend statement?
Point Line::readPts(Point p1, Point p2)
Also notice that function doesn't return a point.
These two variables could be replaced with functions that retrieve the slope and yInt instead of storing them in memory:
double slope;
double yInt;
Last edited by theJ89; 03-26-2006 at 05:54 PM.
No, a friend is allowed access to a class's privates.
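Putting the replies together, a corrected sketch might look like the following: read into the Line's own members rather than into by-value copies, and compute the slope from those members on demand (as theJ89 suggested, dropping the stored slope and yInt). The interactive cin input is replaced by a setter so the example is self-contained; names that differ from the thread's are mine.

```cpp
#include <cassert>

class Line;

class Point {
    double dX;
    double dY;
    friend class Line;  // Line may access these private members
};

class Line {
    Point p1;
    Point p2;
public:
    // Store both endpoints in the Line's own members.
    // (The original readPts read into by-value parameter copies,
    // which were discarded when the function returned.)
    void setPts(double x1, double y1, double x2, double y2) {
        p1.dX = x1;  p1.dY = y1;
        p2.dX = x2;  p2.dY = y2;
    }

    // Compute the slope on demand instead of caching it in a member.
    double slope() const {
        return (p2.dY - p1.dY) / (p2.dX - p1.dX);
    }
};
```

With this version, setting the points (0, 0) and (2, 4) on a Line and calling slope() yields 2, with no garbage values, because the coordinates actually live inside the object being queried.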
{"url":"http://cboard.cprogramming.com/cplusplus-programming/77409-class-problem.html","timestamp":"2014-04-19T09:49:13Z","content_type":null,"content_length":"50807","record_id":"<urn:uuid:87259e43-7abb-42df-9f97-d0fe1acc4541>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
NOVAL DYAN S.
In general, a company is founded to earn a profit that sustains its existence; over the long term it must also work to improve and strengthen its position amid increasingly complex and tight business competition. When a company plans an expansion (in this case, the addition of machines), the decision requires careful consideration, since this kind of capital investment is not easily reversed once committed (it is unrevocable). Therefore, before adding plant assets, an investment evaluation is needed, with measures for judging whether the addition is feasible, so that capital is not committed to an activity that turns out to be unprofitable. To assess whether the addition of plant assets, in this case looms, is feasible, feasibility-study techniques can be applied.

The objectives of this research are to determine the feasibility of the company's investment in additional looms, and to determine the increase in company profit resulting from the addition.

The analysis methods used in this research are: the Payback Period method, the Net Present Value (NPV) method, the Internal Rate of Return (IRR) method, the Profitability Index (PI) method, and the Accounting Rate of Return (ARR) method.

Based on the Payback Period calculation, the time required to recover the investment is 10 months and 7 days. Compared with the investment horizon set by the economic life of the plant assets, namely 8 years, the investment proposal is acceptable (feasible). Based on the IRR calculation, the rate of return is 62.87%; since this exceeds the average cost of capital of 33.54%, the investment proposal is acceptable. Using the NPV method, which takes the difference between the present value of proceeds and the present value of outlays discounted at the cost of capital, gives a positive result of Rp 872.138.574; since the investment criterion states that a proposal is acceptable when NPV is equal to or greater than zero, the proposal is accepted. The Profitability Index is 1.66, meaning the ratio of the present value of proceeds to the present value of outlays is greater than 1, so the plant asset investment proposal is acceptable. Using the ARR method gives 97.77%, based on the comparison of EAT (earnings after tax) with profit after lease.
Keywords: Payback Period, Net Present Value, Internal Rate of Return, Profitability Index, ARR
Related link: http://skripsi.umm.ac.id/files/disk1/284/jiptummpp-gdl-s1-2008-novaldyans-14168-PENDAHUL-N.pdf
Write an equation for the problem and solve. There are 17 2/3 gallons of water in the tank. The tank is 3/4 full. How many gallons of water, G, can the tank hold? My answer was G = 13 1/4. I did the inverse and got 17 2/3 gallons back, so that seemed correct. But why is it that the tank holds less than the amount currently in it? Where did I go wrong?
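A sketch of one way to set the equation up (not the poster's work): "3/4 full with 17 2/3 gallons" says that three quarters of the capacity G equals 17 2/3, so G is recovered by dividing by 3/4, i.e. multiplying by 4/3, rather than multiplying by 3/4.

```latex
\tfrac{3}{4}\,G \;=\; 17\tfrac{2}{3}
\quad\Longrightarrow\quad
G \;=\; \tfrac{4}{3}\cdot\tfrac{53}{3} \;=\; \tfrac{212}{9} \;=\; 23\tfrac{5}{9}\ \text{gallons}.
% Check: (3/4)(212/9) = 53/3 = 17 2/3.
% Multiplying by 3/4 instead gives (3/4)(53/3) = 53/4 = 13 1/4, which answers
% a different question (how much water is in a 17 2/3-gallon tank that is
% 3/4 full), which is why that "capacity" came out smaller than the water
% already in the tank.
```

The inverse check passed because multiplying by 3/4 and by 4/3 undo each other regardless of which equation was actually set up.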
{"url":"http://openstudy.com/updates/4f3b174de4b0fc0c1a0ee6d9","timestamp":"2014-04-18T03:35:38Z","content_type":null,"content_length":"46940","record_id":"<urn:uuid:65e88625-0e84-451c-bd9e-96623c89828b>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
- User Profile for: ionvulcanesc_@_erizon.net
Most relevant results by forum
Results: 117
Results from geometry.announcements
1) The Antiquity of the Sumerian PI Value, Part 23 - THE VATICAN REPRESSED RELIGIOUS SECRET HISTORY OF THE PI VALUE 100%
Posted: Apr 14, 2014 8:13 AM, by: "ionvulcanescu"
2) The Antiquity of the Sumerian PI Value, Part 22 - THE DESIGN OF ARCHIMEDES 100%
Posted: Mar 10, 2014 12:13 AM, by: "ionvulcanescu"
3) The Antiquity of the Sumerian PI Value, Part 21 - The Jerusalem "Defector's" Code 115880. 100%
Posted: Feb 3, 2014 8:32 AM, by: "ionvulcanescu"
4) The Antiquity of the Sumerian PI Value, Part 20 - The Discovery of the Culture of the 1st Century BC Romanized Priests of Judea, Code 103972 100%
Posted: Jan 13, 2014 8:29 AM, by: "ionvulcanescu"
5) The Antiquity of the Sumerian PI Value, Part 20 - The Discovery of the Culture of the 1st Century BC Romanized Priests of Judea, Code 103972 100%
Posted: Jan 13, 2014 8:26 AM, by: "ionvulcanescu"
6) The Antiquity of the Sumerian PI Value, Part 19 - The writing of the TORAH took place in 1st cent. BC, not before, in 4th-3th century BC 100%
Posted: Dec 16, 2013 10:05 AM, by: "ionvulcanescu"
7) The Antiquity of the Sumerian PI Value, Part 18 - The Measurement of the Circle. 100%
Posted: Nov 4, 2013 8:10 AM, by: "ionvulcanescu"
8) The Antiquity of the Sumerian PI Value, Part 17- The Rediscovery of. ..THE PROPORTIONAL PI VALUE, and its Relation to the Designs of Swastika , and Cheops Pyramid. 100%
Posted: Sep 9, 2013 9:36 AM, by: "ionvulcanescu"
9) The Antiquity of the Sumerian PI Value, Part 16 - THE SWATIKA'S CUBE Mathematical Realations to the Sexagessimal and Decimal Systems, and to the PI Values 3.141592654, and 3.14626437 100%
Posted: Aug 19, 2013 8:10 AM, by: "ionvulcanescu"
10) The Antiquity of the Sumerian PI Value, Part 14 - THE FALSIFICATIONS OF THE HISTORY OF MATHEMATICS in 3rd-4th Cent. BC, and 17th-18th Cent. AD 100%
Posted: Mar 4, 2013 8:38 AM, by: "ionvulcanescu"
11) The Antiquity of the Sumerian PI Value, Part 13 - THE FALSIFICATIONS OF THE HISTORY OF MATHEMATICS in 3rd Cent. BC, and 17 Cent. AD 100%
Posted: Feb 4, 2013 8:20 AM, by: "ionvulcanescu"
12) The Antiquity of the Sumerian PI Value, Part 12 - THE FALSIFICATIONS OF THE HISTORY OF MATHEMATICS In 3rd Cent. BC, and in 17th Cent. AD 100%
Posted: Jan 8, 2013 9:25 AM, by: "ionvulcanescu"
13) The Antiquity of the Sumerian PI Value, Part 10 - HOMAGE TO ALFRED NOBEL 100%
Posted: Nov 12, 2012 8:30 AM, by: "ionvulcanescu"
14) The Antiquity of the Sumerian PI Value, Part 9 100%
Posted: Mar 12, 2012 9:04 AM, by: "ionvulcanescu"
15) The Antiquity of the Sumerian PI Value, Part 8 100%
Posted: Feb 20, 2012 9:36 AM, by: "ionvulcanescu"
{"url":"http://mathforum.org/kb/search!execute.jspa?userID=498025&forceEmptySearch=true","timestamp":"2014-04-19T07:55:55Z","content_type":null,"content_length":"34258","record_id":"<urn:uuid:403289e1-24cf-46c0-a896-c6854edd02b2>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
Can we write Rolle's Theorem this way?
Rolle's theorem:
If y =f(x) is a real valued function of a real variable such that:
1) f(x) is continuous on [a,b]
2) f(x) is differentiable on (a,b)
3) f(a) = f(b)
then there exists a real number c[itex]\in[/itex](a,b) such that f'(c)=0
What if f(x) is like the following graph:
Here there is a point c for which f'(c) = 0, but f(a) [itex]\neq[/itex] f(b).
So, to take such cases into consideration, can we change the last statement of Rolle's theorem to:
3) f(c) > [f(a), f(b)] or f(c) < [f(a), f(b)]
are there any exceptions to the above statement? | {"url":"http://www.physicsforums.com/showthread.php?s=9f21c8728e9d3e87e5447c0cfe8c4f2d&p=4646671","timestamp":"2014-04-25T06:01:00Z","content_type":null,"content_length":"31597","record_id":"<urn:uuid:059ec48f-ab63-41ed-845a-910808bf44b5>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00224-ip-10-147-4-33.ec2.internal.warc.gz"} |
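What the question describes is really the interior extremum theorem (Fermat's theorem on stationary points), not Rolle's theorem: any interior maximum or minimum of a differentiable function has f'(c) = 0, regardless of the endpoint values. A quick numeric illustration (my own example, not from the thread):

```python
# f(x) = x*(2 - x) on [0, 1.5]: f(0) = 0 and f(1.5) = 0.75 differ,
# yet the interior maximum at x = 1 still gives f'(1) = 0.
f = lambda x: x * (2 - x)

def df(x, h=1e-6):
    # central-difference approximation to f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

print(abs(df(1.0)) < 1e-6)  # True: derivative vanishes at the interior max
```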
x^2 + y^2 Is Composite?
Date: 01/30/2003 at 21:38:59
From: Allan
Subject: Discrete Mathematics
Prove or give a counterexample:
For all integers, if x + y is composite and x - y is composite,
then x^2 + y^2 is composite.
I know a composite number is an integer b such that a|b and 1<a<b. So
there is a number c such that a*c=b, by definition of divisibility. I
get this far and think, okay, so x+y is equal to a composite number so
that x+y= a*c and for the other x-y= g*h. This also holds true for
x^2+y^2= e*f. Now this is where I get stuck. I have six different
variables, which is probably incorrect. I know that this is probably a
very simple problem but I am completely confused. Any help would be
appreciated. Thank you, Dr. Math!
Date: 01/31/2003 at 17:30:59
From: Doctor Marshall
Subject: Re: Discrete Mathematics
Hi Allan,
Interesting question. I looked for a way to prove it for a
long time, then decided to play with some numbers to see if I might
understand it better. I came up with this:
for x=47, y=2
(47 + 2) = 7^2 is composite
(47 - 2) = 3^2 * 5 is composite
(47^2) + (2^2) = 2213 is prime.
(To check for primality, I used the list of The First 10,000 Primes
from the University of Tennessee at Martin.)
It's possible that many other combinations would work. Here's how I
chose this one.
I decided to look for a counter-example. That requires (x^2 + y^2) to
be prime, which requires that exactly one of the numbers (x,y) be odd.
(If they are both odd or even, then the sum of their squares is even,
and composite without a contest.)
Now, let's recognize that
x^2 + y^2 = x^2 - y^2 + 2y^2
= (x-y)(x+y) + 2y^2.
Now, if exactly one of the numbers (x,y) is odd, then both (x-y) and
(x+y) must be odd. Since (x-y) and (x+y) are assumed to be composite,
there exists a number n that divides (x-y) and a number m that divides
(x+y), such that neither n nor m is even.
Let's assume that n=m, i.e., that (x-y) and (x+y) share a common
divisor. Then n divides ((x+y) - (x-y)), which is equal to 2y. Hence
n, which is odd, divides y.
Now we can show that n divides (x^2 + y^2) because
x^2 + y^2 = (x-y)(x+y) + 2y^2
and n divides each of these terms. Therefore, if n=m, our number
(x^2+y^2) is composite.
So I found TWO ODD, COMPOSITE numbers WITHOUT a COMMON DIVISOR, and
it worked.
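The counterexample is easy to verify mechanically; here is a short check using a simple trial-division primality test:

```python
def is_prime(n):
    # trial division; fine for numbers this small
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

x, y = 47, 2
print(is_prime(x + y), is_prime(x - y))  # False False (49 and 45 are composite)
print(is_prime(x * x + y * y))           # True (2213 is prime)
```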
I hope this facilitates a better understanding of the problem;
however, some insight into why this might generally not be valid might
help you (and me) out more.
Is it possible, for example, that for EVERY pair of odd composites
(x-y) and (x+y) without a common divisor, it must be true that
(x^2 + y^2) is prime? I don't know.
I hope this helps (a little bit). Feel free to write back if you have
- Doctor Marshall, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/62170.html","timestamp":"2014-04-17T21:51:00Z","content_type":null,"content_length":"7869","record_id":"<urn:uuid:30398cfa-fb42-4676-969c-607f944e296b>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00081-ip-10-147-4-33.ec2.internal.warc.gz"} |
Reporting points in halfspaces
Results 1 - 10 of 53
, 1998
"... The nearest neighbor problem is the following: Given a set of n points P = {p_1, ..., p_n} in some metric space X, preprocess P so as to efficiently answer queries which require finding the point in P closest to a query point q ∈ X. We focus on the particularly interesting case of the d-dimens ..."
Cited by 715 (33 self)
Add to MetaCart
The nearest neighbor problem is the following: Given a set of n points P = {p_1, ..., p_n} in some metric space X, preprocess P so as to efficiently answer queries which require finding the point in P closest to a query point q ∈ X. We focus on the particularly interesting case of the d-dimensional Euclidean space where X = R^d under some l_p norm. Despite decades of effort, the current solutions are far from satisfactory; in fact, for large d, in theory or in practice, they provide little improvement over the brute-force algorithm which compares the query point to each data point. Of late, there has been some interest in the approximate nearest neighbors problem, which is: Find a point p ∈ P that is an ε-approximate nearest neighbor of the query q in that for all p' ∈ P, d(p, q) ≤ (1 + ε)d(p', q). We present two algorithmic results for the approximate version that significantly improve the known bounds: (a) preprocessing cost polynomial in n and d, and a trul...
, 1998
"... We address the problem of designing data structures that allow efficient search for approximate nearest neighbors. More specifically, given a database consisting of a set of vectors in some high
dimensional Euclidean space, we want to construct a space-efficient data structure that would allow us to ..."
Cited by 188 (9 self)
Add to MetaCart
We address the problem of designing data structures that allow efficient search for approximate nearest neighbors. More specifically, given a database consisting of a set of vectors in some high dimensional Euclidean space, we want to construct a space-efficient data structure that would allow us to search, given a query vector, for the closest or nearly closest vector in the database. We also address this problem when distances are measured by the L_1 norm, and in the Hamming cube. Significantly improving and extending recent results of Kleinberg, we construct data structures whose size is polynomial in the size of the database, and search algorithms that run in time nearly linear or nearly quadratic in the dimension (depending on the case; the extra factors are polylogarithmic in the size of the database).
, 1992
"... We survey the computational geometry relevant to finite element mesh generation. We especially focus on optimal triangulations of geometric domains in two- and three-dimensions. An optimal
triangulation is a partition of the domain into triangles or tetrahedra, that is best according to some cri ..."
Cited by 180 (8 self)
Add to MetaCart
We survey the computational geometry relevant to finite element mesh generation. We especially focus on optimal triangulations of geometric domains in two- and three-dimensions. An optimal
triangulation is a partition of the domain into triangles or tetrahedra, that is best according to some criterion that measures the size, shape, or number of triangles. We discuss algorithms both for
the optimization of triangulations on a fixed set of vertices and for the placement of new vertices (Steiner points). We briefly survey the heuristic algorithms used in some practical mesh
, 1997
"... Representing data as points in a high-dimensional space, so as to use geometric methods for indexing, is an algorithmic technique with a wide array of uses. It is central to a number of areas
such as information retrieval, pattern recognition, and statistical data analysis; many of the problems aris ..."
Cited by 169 (0 self)
Add to MetaCart
Representing data as points in a high-dimensional space, so as to use geometric methods for indexing, is an algorithmic technique with a wide array of uses. It is central to a number of areas such as
information retrieval, pattern recognition, and statistical data analysis; many of the problems arising in these applications can involve several hundred or several thousand dimensions. We consider
the nearest-neighbor problem for d-dimensional Euclidean space: we wish to pre-process a database of n points so that given a query point, one can efficiently determine its nearest neighbors in the
database. There is a large literature on algorithms for this problem, in both the exact and approximate cases. The more sophisticated algorithms typically achieve a query time that is logarithmic in
n at the expense of an exponential dependence on the dimension d; indeed, even the averagecase analysis of heuristics such as k-d trees reveals an exponential dependence on d in the query time. In
this wor...
, 1993
"... Given a set of n points in d-dimensional Euclidean space, S ⊆ E^d, and a query point q ∈ E^d, we wish to determine the nearest neighbor of q, that is, the point of S whose Euclidean distance to q is minimum. The goal is to preprocess the point set S, such that queries can be answered as effic ..."
Cited by 105 (10 self)
Add to MetaCart
Given a set of n points in d-dimensional Euclidean space, S ⊆ E^d, and a query point q ∈ E^d, we wish to determine the nearest neighbor of q, that is, the point of S whose Euclidean distance to q is minimum. The goal is to preprocess the point set S, such that queries can be answered as efficiently as possible. We assume that the dimension d is a constant independent of n. Although reasonably good solutions to this problem exist when d is small, as d increases the performance of these algorithms degrades rapidly. We present a randomized algorithm for approximate nearest neighbor searching. Given any set of n points S ⊆ E^d, and a constant ε > 0, we produce a data structure, such that given any query point, a point of S will be reported whose distance from the query point is at most a factor of (1 + ε) from that of the true nearest neighbor. Our algorithm runs in O(log^3 n) expected time and requires O(n log n) space. The data structure can be built in O(n^2)
"... Let P be a set of n points in R d (where d is a small fixed positive integer), and let \Gamma be a collection of subsets of R d , each of which is defined by a constant number of bounded degree
polynomials. We consider the following \Gamma-range searching problem: Given P , build a data structur ..."
Cited by 80 (22 self)
Add to MetaCart
Let P be a set of n points in R^d (where d is a small fixed positive integer), and let Γ be a collection of subsets of R^d, each of which is defined by a constant number of bounded degree polynomials. We consider the following Γ-range searching problem: Given P, build a data structure for efficient answering of queries of the form `Given a γ ∈ Γ, count (or report) the points of P lying in γ'. Generalizing the simplex range searching techniques, we give a solution with nearly linear space and preprocessing time and with O(n^{1−1/b+δ}) query time, where d ≤ b ≤ 2d − 3 and δ > 0 is an arbitrarily small constant. The actual value of b is related to the problem of partitioning arrangements of algebraic surfaces into constant-complexity cells. We present some of the applications of the Γ-range searching problem, including improved ray shooting among triangles in R³.
- Handbook of Computational Geometry , 1998
"... The arrangement of a finite collection of geometric objects is the decomposition of the space into connected cells induced by them. We survey combinatorial and algorithmic properties of
arrangements of arcs in the plane and of surface patches in higher dimensions. We present many applications of arr ..."
Cited by 78 (22 self)
Add to MetaCart
The arrangement of a finite collection of geometric objects is the decomposition of the space into connected cells induced by them. We survey combinatorial and algorithmic properties of arrangements
of arcs in the plane and of surface patches in higher dimensions. We present many applications of arrangements to problems in motion planning, visualization, range searching, molecular modeling, and
geometric optimization. Some results involving planar arrangements of arcs have been presented in a companion chapter in this book, and are extended in this chapter to higher dimensions. Work by P.A.
was supported by Army Research Office MURI grant DAAH04-96-1-0013, by a Sloan fellowship, by an NYI award, and by a grant from the U.S.-Israeli Binational Science Foundation. Work by M.S. was
supported by NSF Grants CCR-91-22103 and CCR-93-11127, by a Max-Planck Research Award, and by grants from the U.S.-Israeli Binational Science Foundation, the Israel Science Fund administered by the
Israeli Ac...
, 1996
"... Range searching is one of the central problems in computational geometry, because it arises in many applications and a wide variety of geometric problems can be formulated as a range-searching problem. A typical range-searching problem has the following form. Let S be a set of n points in R^d, an ..."
Cited by 70 (1 self)
Add to MetaCart
Range searching is one of the central problems in computational geometry, because it arises in many applications and a wide variety of geometric problems can be formulated as a range-searching problem. A typical range-searching problem has the following form. Let S be a set of n points in R^d, and let R be a family of subsets; elements of R are called ranges. We wish to preprocess S into a data structure so that for a query range R, the points in S ∩ R can be reported or counted efficiently. Typical examples of ranges include rectangles, halfspaces, simplices, and balls. If we are only interested in answering a single query, it can be done in linear time, using linear space, by simply checking for each point p ∈ S whether p lies in the query range.
- SIAM J. Comput
"... We study the question of finding a deepest point in an arrangement of regions, and provide a fast algorithm for this problem using random sampling, showing it sufficient to solve this problem
when the deepest point is shallow. This implies, among other results, a fast algorithm for solving linear pr ..."
Cited by 63 (11 self)
Add to MetaCart
We study the question of finding a deepest point in an arrangement of regions, and provide a fast algorithm for this problem using random sampling, showing it sufficient to solve this problem when
the deepest point is shallow. This implies, among other results, a fast algorithm for solving linear programming with violations approximately. We also use this technique to approximate the disk
covering the largest number of red points, while avoiding all the blue points, given two such sets in the plane. Using similar techniques imply that approximate range counting queries have roughly
the same time and space complexity as emptiness range queries. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1577279","timestamp":"2014-04-21T07:46:31Z","content_type":null,"content_length":"36959","record_id":"<urn:uuid:42d94718-84ef-4314-8cc3-b1c757093250>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00537-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: Re: Differencing two equations
Replies: 0
Re: Differencing two equations
Posted: Feb 11, 2013 4:37 AM
On 2/10/13 at 3:24 AM, g.c.b.at.home@gmail.com wrote:
>I'm trying to figure out how to difference two equations. Basically
>if I have: a==r
>I'd like to get: a-b == r-s
>What I'm getting is more like (a==r) - (b==s). I'm not sure how
>that's a useful result, but is there a function to do what I'm
>looking for?
>A quick search of the archives seem to bring up ways of doing this
>from using transformation rules to swap heads to unlocking the
>Equals operator and hacking its behavior. I'd like to avoid doing
>that kind of rewiring for a simple operation, and I'd like to keep
>the syntax clean.
Here is one way, but it may not meet your criteria for clean syntax
In[1]:= eq1 = a == r;
eq2 = b == s;
In[3]:= Equal @@ (Subtract @@ List @@@ {eq1, eq2})
Out[3]= a-b==r-s | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2434497","timestamp":"2014-04-19T15:22:38Z","content_type":null,"content_length":"14455","record_id":"<urn:uuid:b73c9e7c-0c2a-4354-bf65-9c2efaf7cbe2>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00183-ip-10-147-4-33.ec2.internal.warc.gz"} |
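For comparison, the same equation-differencing can be done in Python with SymPy (my own analogue of the Mathematica one-liner above, not from the thread): subtract the left-hand sides and right-hand sides separately and rebuild the equation.

```python
from sympy import Eq, symbols

a, b, r, s = symbols('a b r s')
eq1 = Eq(a, r)
eq2 = Eq(b, s)

# subtract the two equations side by side
diff = Eq(eq1.lhs - eq2.lhs, eq1.rhs - eq2.rhs)
print(diff)  # Eq(a - b, r - s)
```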
FOM: Inductive reasoning
Stephen G Simpson simpson at math.psu.edu
Wed Jan 19 18:12:29 EST 2000
Joe Shipman Wed Jan 19 14:37:19 2000 writes:
> Technical question here for logicians: in the Boolean-valued model
> approach to independence proofs, is there ever a way to "project" the
> algebra of "truth values" into the probability space [0,1] so that
> (some) statements with intermediate truth values can be assigned a
probability strictly between 0 and 1 in a consistent way?
The ``algebra of truth values'' is the complete Boolean algebra that
is being considered, right? And ``projections'' are homomorphisms,
right? So the question seems to amount to: Does there exist a
homomorphism of (some Boolean subalgebra of) some complete Boolean
algebra onto the measure algebra of a non-trivial probability space?
If this is the question, then the answer is, yes, of course. For
instance, consider the complete Boolean algebra M for adding a random
real to the universe. M is the standard atomless probability measure
algebra. Each Boolean value (i.e., each element of M) is an
equivalence class of measurable sets and can be assigned a probability
equal to its measure.
But I'm sure Joe Shipman knows all this very well, better than I in
fact, so maybe I am misunderstanding the question.
> Is there a Boolean-valued model in which the axioms of ZFC have
> value 1 but CH has a value strictly between 0 and 1?
Again, doesn't this question have the following easy affirmative
answer? By Cohen/Solovay, let B1 and B2 be complete Boolean algebras
such that [[ CH ]]_B1 = 1 and [[ CH ]]_B2 = 0. Let B = B1 x B2. Then
[[ CH ]]_B = (1,0). The Stone space of B is the disjoint union of the
Stone spaces of B1 and B2. There is an obvious homomorphism of B into
the 4-element Boolean algebra {0,1} x {0,1} which carries an obvious
probability measure, so you can consistently assign CH a probability
of 1/2.
-- Steve
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2000-January/003609.html","timestamp":"2014-04-20T16:36:27Z","content_type":null,"content_length":"4112","record_id":"<urn:uuid:320ca8fa-d5be-4db2-8c4e-bea159980c85>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00267-ip-10-147-4-33.ec2.internal.warc.gz"} |
1983 A note on Limit Identification of c-minimal Indices - E. B. Kinber
Editing By Example (Ph.D Thesis) - R. Nix
Densité et Dimension - P. Assouad
Solution to inductive inference problem P3 - J. Case and M. Fulk
Machine Learning: An Artificial Intelligence Approach - Ryszard S. Michalski and Jaime G. Carbonell and Tom M. Mitchell
Formal Learning Theory - D. Osherson and S. Weinstein
Algorithmic Program Debugging - E. Y. Shapiro
Inferno: A Cautious Approach to Uncertain Inference - J. R. Quinlan
On the Synthesis of Fastest Programs in Inductive Inference - Thomas Zeugmann
A Boolean Complete Neural Model of Adaptive Behavior - S. Hampson and D. Kibler
Inferring Unions of two Pattern Languages - Takeshi Shinohara
A-posteriori Characterizations in Inductive Inference of Recursive Functions - Thomas Zeugmann
Comparison of Identification Criteria for Machine Inductive Inference - John Case and Carl H. Smith
Why Should Machines Learn? - H. A. Simon
CONVINCE: A Conversational Inference Consolidation Engine - J. H. Kim
Zur algorithmischen Synthese von schnellen Programmen - Thomas Zeugmann
A Universal Prior for Integers and Estimation by Minimum Description Length - J. Rissanen
On the error correcting power of pluralism in BC-type inductive inference - Robert P. Daley
Note on a Central Lemma of Learning Theory - D. Osherson, M. Stob and S. Weinstein
April On the Application of Vector Quantization and Hidden Markov Models to Speaker-Independent, Isolated Word Recognition - L. R. Rabiner, S. E. Levinson and M. M. Sondhi
An Introduction to the Application of the Theory of Probabilistic Functions of a Markov Process to Automatic Speech Recognition - S. E. Levinson, L. R. Rabiner and M. M. Sondhi
June Inductive Rule Generation in the Context of the Fifth Generation - D. Michie
August A Method of Computing Generalized Bayesian Probability Values for Expert Systems - P. C. Cheeseman
September A survey of Inductive Inference: Theory and Methods - D. Angluin and C. H. Smith | {"url":"http://www.nzdl.org/gsdlmod?e=d-00000-00---off-0cltbibZz-e--00-1----0-10-0---0---0direct-10-TX%2CCR%2CBO%2CSO--4--Bshouty%2C%2C%2C-----0-1l--11-en-50---20-help-%5BBshouty%5D%3ATX+--01-3-1-00-0--4--0--0-0-11-10-0utfZz-8-00&a=d&cl=CL3.11","timestamp":"2014-04-18T20:51:25Z","content_type":null,"content_length":"32258","record_id":"<urn:uuid:e3db917c-d97e-4ba0-b11a-a10f866e4a94>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00524-ip-10-147-4-33.ec2.internal.warc.gz"} |
How Google Map Works
This is my analysis about how Google map works, and specially how the tiles are encoded. Google map uses pre-rendered tiles that can be obtained with a simple URL. This article explains how to build
the URL for a tile from its geo coordinates (latitude/longitude).
Map Tile Encoding
Google map uses two different algorithms to encode the location of the tiles.
For Google map, the URL of a tile looks like http://mt1.google.com/mt?n=404&v=w2.12&x=130&y=93&zoom=9 using x and y for the tile coordinates, and a zoom factor. The zoom factor goes from 17 (fully
zoomed out) to 0 (maximum definition). At a factor 17, the whole earth is in one tile where x=0 and y=0. At a factor 16, the earth is divided in 2x2 parts, where 0<=x<=1 and 0<=y<=1, and at each zoom
step, each tile is divided into 4 parts. So at a zoom factor Z, the number of horizontal and vertical tiles is 2^(17-z)
Algorithm to Find a Tile from a Latitude, a Longitude and a Zoom Factor
//correct the latitude to go from 0 (north) to 180 (south),
// instead of 90 (north) to -90 (south)
latitude = 90 - latitude;
//correct the longitude to go from 0 to 360, instead of -180 to 180
longitude = 180 + longitude;
//find tile size from zoom level
double latTileSize = 180 / (pow(2, (17 - zoom)));
double longTileSize = 360 / (pow(2, (17 - zoom)));
//find the tile coordinates
int tilex = (int)(longitude / longTileSize);
int tiley = (int)(latitude / latTileSize);
In fact this algorithm is theoretical as the covered zone doesn't match the whole globe.
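The same computation as a runnable sketch in Python (a direct port of the pseudocode above; the int() truncation reproduces the (int) casts):

```python
def map_tile(latitude, longitude, zoom):
    """Tile (x, y) for the theoretical Google-map tiling at this zoom level."""
    lat = 90 - latitude    # shift to 0 (north) .. 180 (south)
    lon = 180 + longitude  # shift to 0 .. 360
    lat_tile_size = 180 / (2 ** (17 - zoom))
    lon_tile_size = 360 / (2 ** (17 - zoom))
    return int(lon / lon_tile_size), int(lat / lat_tile_size)

print(map_tile(0, 0, 17))     # (0, 0): the single whole-earth tile
print(map_tile(45, -90, 16))  # (0, 0): the upper-left quadrant at zoom 16
```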
Google uses four servers to balance the load. These are mt0, mt1, mt2 and mt3.
Tile Size
Each tile is a 256x256 PNG picture.
For Satellite Images, the Encoding is a Bit Different
The URL looks like http://kh0.google.com/kh?n=404&v=8&t=trtqtt where the 't' parameters encode the image location. The length of the parameter indicates a zoom level.
To see the whole globe, just use 't=t'. This gives a single tile representing the earth. For the next zoom level, this tile is divided into 4 quadrants, called, clockwise from top left : 'q' 'r' 's'
and 't'. To see a quadrant, just append the letter of that quadrant to the image you are viewing. For example :'t=tq' will give the upper left quadrant of the 't' image. And so on at each zoom
Algorithm to Find a Tile from a Latitude, a Longitude and a Zoom Factor
//initialise the variables
double xmin = -180;
double xmax = 180;
double ymin = -90;
double ymax = 90;
double xmoy = 0;
double ymoy = 0;
string location = "t";
//Google uses a latitude divided by 2
double halflat = latitude / 2;
for (int i = 0; i < zoom; i++)
{
    xmoy = (xmax + xmin) / 2;
    ymoy = (ymax + ymin) / 2;
    if (halflat > ymoy) //upper part (q or r)
    {
        ymin = ymoy;
        if (longitude < xmoy)
        { /*q*/
            location += "q";
            xmax = xmoy;
        }
        else
        { /*r*/
            location += "r";
            xmin = xmoy;
        }
    }
    else //lower part (t or s)
    {
        ymax = ymoy;
        if (longitude < xmoy)
        { /*t*/
            location += "t";
            xmax = xmoy;
        }
        else
        { /*s*/
            location += "s";
            xmin = xmoy;
        }
    }
}
//here, the location should contain the string corresponding to the tile...
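A Python port of the quadrant-string algorithm (my own transcription; the quadrant letters follow the article's clockwise-from-top-left convention q, r, s, t):

```python
def sat_tile(latitude, longitude, zoom):
    xmin, xmax, ymin, ymax = -180.0, 180.0, -90.0, 90.0
    location = "t"
    halflat = latitude / 2  # Google halves the latitude
    for _ in range(zoom):
        xmoy = (xmax + xmin) / 2
        ymoy = (ymax + ymin) / 2
        if halflat > ymoy:           # upper half: q or r
            ymin = ymoy
            if longitude < xmoy:
                location, xmax = location + "q", xmoy
            else:
                location, xmin = location + "r", xmoy
        else:                        # lower half: t or s
            ymax = ymoy
            if longitude < xmoy:
                location, xmax = location + "t", xmoy
            else:
                location, xmin = location + "s", xmoy
    return location

print(sat_tile(40, -100, 1))  # tq (upper-left quadrant)
print(sat_tile(-40, 100, 1))  # ts (lower-right quadrant)
```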
Again, this algorithm is quite theoretical, as the covered zone doesn't match the full globe.
Google uses four servers to balance the load. These are kh0, kh1, kh2 and kh3.
Tile Size
Each tile is a 256x256 JPG picture.
Mercator Projection
Due to the Mercator projection, the above algorithm has to be modified. In Mercator projection, the spacing between two parallels is not constant. So the angle described by a tile depends on its
vertical position.
Here comes a piece of code to compute a tile's vertical number from its latitude.
/**<summary>Get the vertical tile number from a latitude
using Mercator projection formula</summary>*/
private int getMercatorLatitude(double lati)
double maxlat = Math.PI;
double lat = lati;
if (lat > 90) lat = lat - 180;
if (lat < -90) lat = lat + 180;
// conversion degrees => radians
double phi = Math.PI * lat / 180;
double res;
//double temp = Math.Tan(Math.PI / 4 - phi / 2);
//res = Math.Log(temp);
res = 0.5 * Math.Log((1 + Math.Sin(phi)) / (1 - Math.Sin(phi)));
double maxTileY = Math.Pow(2, zoom);
int result = (int)(((1 - res / maxlat) / 2) * (maxTileY));
return (result);
Covered Zone
Theoretically, latitude should go from -90 to 90, but in fact, due to the Mercator projection which sends the poles to infinity, the covered zone is a bit less than -90 to 90. In fact the maximum
latitude is the one that gives PI (3.1415926) on the Mercator projection, using the formula y = 1/2 * ln((1+sin(lat))/(1-sin(lat))) (see the link in the Mercator paragraph).
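Setting y = π in that formula and solving for the latitude gives the actual cutoff; a quick check in Python (assuming the natural log, as the C# code above uses Math.Log):

```python
import math

# invert y = 0.5 * ln((1 + sin(phi)) / (1 - sin(phi))) at y = pi
phi = 2 * math.atan(math.exp(math.pi)) - math.pi / 2
print(round(math.degrees(phi), 4))  # 85.0511, the familiar Mercator cutoff
```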
Google map uses a protection mechanism to keep a good quality of service. If one makes too many requests, Google map will add its IP address to a blacklist, and send a nice message:
Google Error
We're sorry... ... but your query looks similar to automated requests from a computer virus or spyware application. To protect our users, we can't process your request right now. We'll restore your
access as quickly as possible, so try again soon. In the meantime, if you suspect that your computer or network has been infected, you might want to run a virus checker or spyware remover to make
sure that your systems are free of viruses and other spurious software. We apologize for the inconvenience, and hope we'll see you again on Google.
To avoid being blacklisted, developers should use a caching mechanism if possible...
Sat Examples
See the whole globe at http://kh0.google.com/kh?n=404&v=8&t=t.
And the four corresponding quadrants: (note the 4 servers name to balance the load)
• http://kh0.google.com/kh?n=404&v=8&t=tq
• http://kh1.google.com/kh?n=404&v=8&t=tr
• http://kh2.google.com/kh?n=404&v=8&t=ts
• http://kh3.google.com/kh?n=404&v=8&t=tt
Map Examples
See the whole globe at http://mt1.google.com/mt?n=404&v=&x=0&y=0&zoom=17.
And the four corresponding quadrants:
• http://mt0.google.com/mt?n=404&v=&x=0&y=0&zoom=16
• http://mt1.google.com/mt?n=404&v=&x=1&y=0&zoom=16
• http://mt2.google.com/mt?n=404&v=&x=0&y=1&zoom=16
• http://mt3.google.com/mt?n=404&v=&x=1&y=1&zoom=16
Nice, isn't it?
For a sample code written in C#, see the download at the top of this article.
• Article edited: Google map has changed the v parameter for the maps. It was 2.12 when I wrote this article, but it's now 2.66. I suppose this is a version number or something like that... | {"url":"http://www.codeproject.com/Articles/14793/How-Google-Map-Works?msg=2676251","timestamp":"2014-04-23T10:32:36Z","content_type":null,"content_length":"182408","record_id":"<urn:uuid:6df16f7a-0a1b-49a3-bf79-24afbfdc43b0>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00125-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: August 2012 [00360]
Re: ContourPlot non rectangular evaluation?
• To: mathgroup at smc.vnet.net
• Subject: [mg127839] Re: ContourPlot non rectangular evaluation?
• From: Bob Hanlon <hanlonr357 at gmail.com>
• Date: Sun, 26 Aug 2012 02:50:22 -0400 (EDT)
• Delivered-to: l-mathgroup@mail-archive0.wolfram.com
• Delivered-to: l-mathgroup@wolfram.com
• Delivered-to: mathgroup-newout@smc.vnet.net
• Delivered-to: mathgroup-newsend@smc.vnet.net
• References: <20120825082641.60C04689F@smc.vnet.net>
Try the option Exclusions or using Boole
f[x_, y_] = Sin[x + y];
g[x_, y_] = x - y;
h[x_, y_] = f[x, y]/g[x, y];
ContourPlot[h[x, y],
 {x, -Pi, Pi}, {y, -Pi, Pi}]

ContourPlot[h[x, y],
 {x, -Pi, Pi}, {y, -Pi, Pi},
 Exclusions -> {g[x, y] == 0}]

ContourPlot[h[x, y]*
  Boole[g[x, y] != 0],
 {x, -Pi, Pi}, {y, -Pi, Pi}]
Bob Hanlon
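Bob Hanlon's Boole trick generalizes beyond Mathematica: mask the function wherever the denominator vanishes before contouring. A plain-Python sketch of the masking step (my own illustration, not from the thread):

```python
import math

def h(x, y):
    """sin(x + y)/(x - y), with None where the denominator vanishes."""
    g = x - y
    if abs(g) < 1e-9:
        return None  # contouring libraries skip None/NaN cells
    return math.sin(x + y) / g

# sample on a 13x13 grid; the singular line x == y gets masked
grid = [[h(i * 0.5, j * 0.5) for j in range(-6, 7)] for i in range(-6, 7)]
print(sum(row.count(None) for row in grid))  # 13: the diagonal is masked
```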
On Sat, Aug 25, 2012 at 4:26 AM, Sam McDermott <samwell187 at gmail.com> wrote:

> Hi,
> I'd like to make a plot of level curves of a function, but this function becomes
> singular and passes from infinity to negative infinity, which really uglifies
> the plot. It is easy for me to identify where the function becomes singular
> (i.e., when the denominator becomes 0!), but I'm having trouble telling
> Mathematica to stop evaluating before then, because this singularity is
> sensitive to both of the variables on the axes of my ContourPlot. Is there a
> simple way of telling Mathematica where to stop?
> In other words, I have some function
> h[x_,y_]:=f[x,y]/g[x,y]
> and I can numerically find the zeroes of g[x,y]. Can I make a ContourPlot such that
> ContourPlot[h[x,y],{x,xmin,xmax},{y,ymin,ymax}]
> does not evaluate below the curve
> g[x,y]=0
> ?
> I think I'm basically looking for some way of setting assumptions or evaluating
> an "If" conditional inside of ContourPlot but can't find a good way of doing it.
> Any help would be much appreciated!
> Thanks very much for your time!
> -Sam
• Follow-Ups:
• References: | {"url":"http://forums.wolfram.com/mathgroup/archive/2012/Aug/msg00360.html","timestamp":"2014-04-18T00:20:25Z","content_type":null,"content_length":"27677","record_id":"<urn:uuid:3040b2e2-da56-44fb-b1eb-481c37918930>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00601-ip-10-147-4-33.ec2.internal.warc.gz"} |
A 1-1 1-2 Alg Expressions & Order of Ops Ppt Presentation
A 1-1 1-2 Alg Expressions & Order Of Ops
Presentation Description
No description available.
By: debandahala (32 month(s) ago)
Can you pls share me your presentation it is very nice thank you.
By: lindaryland (34 month(s) ago)
I really enjoyed your presentation. Please allow me to use a copy with my students. linda.ryland@pgcps.org
By: satir (37 month(s) ago)
PLEASE IT IS VERY GOOD POWER POINT SO ALLOW TO ME | {"url":"http://www.authorstream.com/Presentation/Mr.Thomas-82461-1-2-alg-expressions-order-ops-algebraic-operations-education-ppt-powerpoint/","timestamp":"2014-04-18T18:13:19Z","content_type":null,"content_length":"187281","record_id":"<urn:uuid:fb13089d-1d11-4309-ae4c-4ed1681f98a3>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00232-ip-10-147-4-33.ec2.internal.warc.gz"} |
$1.10 dollar bill
The cashier was a
tad quirky in that
she enjoyed multiplying
the digits together of
the price of the item
being purchased, then
she would tell the
customer that the
last digit of the
product of the digits
of her change would
be the same number
as the last digit of the
product of her purchase
price digits!!!!
(stick to 2-digit numbers for this)
Last edited by John E. Franklin (2011-02-07 01:28:29) | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=173320","timestamp":"2014-04-21T05:44:14Z","content_type":null,"content_length":"13471","record_id":"<urn:uuid:8b6d4d63-8bf4-4ef4-ac1a-95b2ef12a88a>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00065-ip-10-147-4-33.ec2.internal.warc.gz"} |
Congressional ideology by state
October 25, 2012
By is.R()
In a recent post, I illustrated how to add a background geom to your ggplot. While that code worked, and the plot looked fine, it was pointed out to me that I was missing an important aspect of plot
layering with ggplot2. Namely, it is not, as I previously claimed, necessary to add extra NULL variables to the background data.frame.
Fortunately, I was put on the right path by the inimitable Hadley Wickham, who pointed out that There is, of course, a Function for That: mutate()
This Gist correctly builds a layered plot, shows how mutate() works, and plots DW-NOMINATE House ideology in two dimensions, by state, with an illustration of what I consider a very useful visualization technique: adding a reference distribution to each plot facet.
Legendre Polynomial and Rodrigues' Formula
I am reading Jackson's electrodynamics book. While going through the section on Legendre polynomials, I ran into a question.
The book states that from Rodrigues' formula we have
Consider only the odd terms
How does one obtain this equation, and how can I obtain the corresponding equation for the even terms?
Thanks in advance.
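For reference, Rodrigues' formula for the Legendre polynomials reads:

```latex
P_l(x) = \frac{1}{2^l \, l!} \, \frac{d^l}{dx^l} \left[ (x^2 - 1)^l \right]
```

Expanding $(x^2 - 1)^l$ with the binomial theorem and differentiating term by term yields a finite power series in $x$, which is the kind of expansion the odd/even-term question refers to.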
[Numpy-discussion] Assigning complex value to real array
Charles R Harris charlesr.harris@gmail....
Thu Oct 7 00:31:20 CDT 2010
On Wed, Oct 6, 2010 at 11:07 PM, Andrew P. Mullhaupt
> I came across this gem yesterday
> >>> from numpy import *
> >>> R = ones((2))
> >>> R[0] = R[0] * 1j
> >>> R
> array([ 0.,  1.])
> >>> R = ones((2), 'complex')
> >>> R[0] = R[0] * 1j
> >>> R
> array([ 0.+1.j,  1.+0.j])
> and I read that this behavior is actually
> intended for some reason about how Python wants relations between types to
> be such that this mistake is unavoidable.
It's because an element of a real array only has space for a real and you
can't fit a complex in there. Some other software which is less strict about
types may allow such things, but it comes at a cost.
> So can we have a new abstract floating type which is a complex, but is
> implemented so that the numbers where the imaginary part is zero represent
> and operate on that imaginary part implicitly? By containing all these
> semantics within one type, then presumably we avoid problems with ideas about
> relationships between types.
Short answer: no. If you want complex just use a complex array. Changing
types like you propose would require making a new copy or reserving space
ahead of time, which would be wasteful. It could also be done with lists or
objects, but then you would lose speed. Newer versions of numpy will warn
you that the imaginary part is going to be discarded in your first example.
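The storage argument can be checked against NumPy's own casting rules. This is a sketch on a recent NumPy; the exact warning-versus-error behavior of the assignment itself has varied across versions:

```python
import numpy as np

# Under NumPy's default "safe" casting rules, complex cannot be stored in
# a float64 slot, because the imaginary part would be lost:
print(np.can_cast(np.complex128, np.float64))                    # False
print(np.can_cast(np.complex128, np.float64, casting="unsafe"))  # True, imag discarded

# Reserving complex storage up front behaves as expected:
C = np.ones(2, dtype=complex)
C[0] = C[0] * 1j
print(C)   # [0.+1.j 1.+0.j]
```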
Wolfram Demonstrations Project
Dynamic Step Distance Transforms
This Demonstration explores various ways of measuring the distance of a cell from a set of initial cells. Each cell stores a distance and a color (black or white), and these values propagate out
from the initial set. The color of each cell is updated based on the colors of the neighboring cells at smaller distances. For example, the color might be set to white if the number of white,
smaller neighbors is 0 or 2; this is totalistic color rule
. Distances between cells are measured by a variation of the taxicab metric: as the sum of distance steps along a connected path, with the individual steps chosen by taking the corresponding
horizontal, vertical, or diagonal entry from one of the two sets of 3x3 values shown. The color of the cell is used to choose one of the two sets. Toggle the distance steps, then use the slider to
change their values.
Several of the snapshots and bookmarks show approximately circular patterns. Other snapshots show snowflake-like shapes, rectangular or hexagonal, and patterns that operate on two scales, such as
the "bow tie" example.
The statistics displayed under the images include two ways of measuring the circularity: the percentage variation in the radius and the number of sides of the polygon with the same radial variation.
The "symmetric" checkbox (checked) means that the initial cells are set to 1 (white) and surrounded by zeros (black). When unchecked, the right and diagonal lower-left values are also set to 1. Try
making the bow tie or "least circular" bookmarks symmetric.
Internally the color is saved as the least significant bit of the distance, which is why the distance steps are even numbers.
The smaller neighbor counts image has a value at each cell in the range 0-8, equal to the number of neighbors with smaller distances. Closely spaced parallel white lines radiating outwards in this
image are caused by distance contours with a fine sawtooth pattern, and always seem to indicate circularity generation or smoothing.
For example, the approximately circular rule 416 (Snapshot 2) with size set to the maximum (wait a few seconds) shows this, as does Snapshot 1. The most circular-appearing rule (20-sided polygon
bookmark) shows a more uniform pattern; at large scale it can be seen to be a polygon without much smoothing. At a size of approximately 1000, the slight instability on the diagonals blows up and
the circle grows four ears.
The rotationally symmetric rules are set up by taking the pattern of smaller neighbors as digits, cyclically rotating to a minimum position, then applying a rule to the first three or four color
bits, taken in clockwise order from that position.
The hexagonal rules (rule type 6) use the two least significant bits for the color (black/white) and to simulate the hexagonal neighborhood by switching the six neighbors on alternate rows. Rule
type 9, where the rule counts smaller neighbors (instead of smaller white neighbors), can achieve a similar effect. For example, try rule 348 with white distance steps (clockwise from top left) =
{2, 2, 2, 100, 2, 2, 2, 100} and black distance steps={0, 198, 0, 0, 0, 198, 0, 0}.
Other growth rules (A New Kind of Science, p. 928) can be simulated by setting the black steps to a large value (or to 0, meaning ignore this neighbor), so that growth occurs first for smaller (white) steps.
The worm rules (rule type 7) turn left or right at each step depending on the color, producing tightly twisting or spiralling patterns with long path lengths. The path length is the longest series
of decreasing distances back to the start.
With random selection of the color (rule type 8) it is interesting to vary the proportion of white/black and to compare the result with patterns generated by a color rule with the same percentage of
white cells.
These dynamic step distance transforms can be defined more generally as cellular automata on directed acyclic networks on a grid. As with other kinds of distance transforms (fixed step or
Euclidean), cells can be updated in parallel or sequentially in any order, with the same end result, although by updating in order of increasing distance, each cell need only be updated once.
They can be useful for image analysis and for modeling patterns where growth occurs along a front that sweeps across an area, leaving a fixed pattern behind. Examples include the time evolution of
1D cellular automata, and more general cases where the network connections and shape of the front are generated dynamically.
Inequality (Factorial Related)
Prove : $(n!)^2 > n^n$ when n is a +ve integer > 2
Hi! $n! = 1\times 2\times 3\cdots (n-1)\times n$, so
$$(n!)^2 = [1\times 2\times 3\cdots (n-1)\times n]^2 = 1\times 1\times 2\times 2\cdots (n-1)\times (n-1)\times n\times n$$
Each factor in this product appears twice, so we can pair the factors from opposite ends and rewrite it this way:
$$(n!)^2 = \underbrace{[1\cdot n]}_{1^{\text{st}}\text{ term}} \times \underbrace{[2\cdot (n-1)]}_{2^{\text{nd}}\text{ term}} \cdots \underbrace{[(n-1)\cdot 2]}_{(n-1)^{\text{th}}\text{ term}} \times \underbrace{[n\cdot 1]}_{n^{\text{th}}\text{ term}}$$
We want to show that $(n!)^2 > n^n$, that is, $\frac{(n!)^2}{n^n} > 1$ for $n > 2$. As $n^n = \underbrace{n\times n \cdots n \times n}_{n\text{ terms}}$ also contains $n$ terms, one can write
$$\frac{(n!)^2}{n^n} = \frac{1\cdot n}{n} \times \frac{2\cdot (n-1)}{n} \cdots \frac{(n-1)\cdot 2}{n} \times \frac{n\cdot 1}{n}$$
Now I claim that each fraction in this product is greater than or equal to 1, and that for $n > 2$ at least one fraction is strictly greater than 1. If you manage to show this, the conclusion follows. Good luck!
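The inequality, and the per-factor claim $k(n+1-k) \ge n$ behind it, are easy to sanity-check numerically. A quick sketch, not part of the proof:

```python
from math import factorial

# Numeric check of (n!)^2 > n^n for small n > 2, together with the
# per-factor claim k*(n+1-k) >= n (which holds since (k-1)*(n-k) >= 0):
for n in range(3, 21):
    assert factorial(n) ** 2 > n ** n
    assert all(k * (n + 1 - k) >= n for k in range(1, n + 1))
print("(n!)^2 > n^n holds for n = 3..20")
```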
Thanks a lot! I felt as if I were looking at a solution book! Great illustration!
A basic math programming question
I'm trying to learn some basic ideas about programming mathematical functions in C.
I would like the user to be able to input a number with decimals, for instance "2.123"
and then use this input to carry out an equation.
Here is how I've been doing it:
#include <stdio.h>
#include <stdlib.h>

int main (int argc, char * const argv[]) {
/* Variables */
float Xreal;
float Yreal;
float Zreal;
float Xforce;
float Yforce;
float Zforce;
float Tval;
char inXreal[8];
char inYreal[8];
char inZreal[8];
char inZforce[8];
/* Prompts */
/* Get coordinates of real-perspective point */
printf("Xreal value: \n");
gets(inXreal);
Xreal = atoi(inXreal);
printf("Yreal value: \n");
gets(inYreal);
Yreal = atoi(inYreal);
printf("Zreal value: \n");
gets(inZreal);
Zreal = atoi(inZreal);
/* Get Z coordinate of forced-perspective point */
printf("Zforce value: \n");
gets(inZforce);
Zforce = atoi(inZforce);
/* Computation */
/* Output Display */
printf("The point: (%.3f, %.3f, %.3f) was generated with a Tvalue of: %.3f.\n",Xforce,Yforce,Zforce,Tval);
return 0;
}
This works so long as the input values are whole numbers.
But I would also like to be able to perform these functions using decimals.
I understand that the problem must have something to do with using CHAR and ATOI, but I'm not certain of another way to get user input for decimals, etc.
Can anyone offer some advice?
by the way:
I am working in Xcode 3 on a mac running os x leopard
Since atoi stands for "alphanumeric to integer", it seems unlikely it will work for getting floating point values.
Whatever path you're following in your quest for C knowledge, if it took you past "atoi" it should have taken you past "atof" as well. (And "atof" stands for "alphanumeric to floating point" --
it returns a value of type double.)
>int main (int argc, char * const argv[]) {
If you're not going to use the parameters, don't include them. Also, be sure to return a value from main, even if it's always 0:
int main ( void )
{
    return 0;
}
Never use gets. It's an unsafe function, and it's impossible to make gets safe.
Even if you use it with integers, atoi is generally a bad idea unless you've validated the string first. I'd recommend strtol, or in the case of floating-point values, strtod.
Hi, thanks for your very quick response.
I suspected that was exactly the problem, and so I checked the index of the book I am using for other ato... functions, but believe it or not, they do not mention atof.
thanks for your help!
can you clarify:
I am getting a message in terminal that gets is unsafe...
How could I perform that same function without it?
printf("Input number: \n");
WHAT GOES HERE?
and, in the context of the above code... where do i put the "return", because my "{" then contains the rest of the program, im not sure where the return zero goes
thanks for your help.
im just getting started
have you read FAQ?
you can use fgets(buffer, sizeof buffer, stdin)
>How could I perform that same function without it?
Sorry about that. I was in so much of a hurry that I forgot to point you toward the fgets function.
>where do i put the "return"
Put the return in main when you're done. A good start is right at the end, just before the closing brace.
int main ( void )
{
    /* Your code here */
    return 0;
}
ok, i get the return. thanks
but, im not certain i understand the fgets.
if it is "fgets(buffer, sizeof buffer, stdin)"
what do I put for each: buffer, size of, stdin?
all i want is to have it understand an input such as "2.123"
buffer = where you want the answer to go.
sizeof buffer = size of the buffer. IOW, how many characters are in your char array? (Answer: 8.)
stdin = stdin. You know, the thing where people type, and it shows up on the screen. Short for "standard in". This should literally be the word "stdin".
yes, but how do i implement that?
Once again - have you read the FAQ?
I would use scanf
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    float a;
    printf("enter a float:\n");
    scanf("%f", &a);
    printf("Your float is %.3f\n", a);
    return EXIT_SUCCESS;
}
>I would use scanf
scanf has its own problems, and conveniently enough, your example exhibits one of them. What happens if I type "I'm a little teapot" instead of "123.456"?
[Numpy-discussion] Python vector and matrix formatter similar to Matlab
Alexander Schmolck a.schmolck@gmx....
Sat Mar 10 10:57:30 CST 2007
"Simon Wood" <sgwoodjr@gmail.com> writes:
> To all,
> I came across an old thread in the archives in which Alexander Schmolck gave
> an example of a Matlab like matrix formatter he authored for Python. Is this
> formatter still available some where?
Yup. I've still got it as part of a matrix class I wrote many years ago
(because I found the one in Numeric mostly useless and arrays syntactically to
awkward for linear algebra). Numpy sort of broke my design (which was flawed
anyway and partly rather incompetently implemented) and for various reasons I
ended up mostly coding numeric stuff in matlab for some time anyway, but as I
thankfully seem to be moving mostly back to python again, I do intend to
salvage, port and clean up at least the formatting code (possibly more, there
were some other things I liked about it).
With a bit of work this could either go into numpy/scipy if there's interest
or maybe make a useful addition to ipython (as the main benefit is that it's
really a much more handy way to display matrices (or indeed arrays) for
interactive work).
How urgently are you interested in this?
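Schmolck's formatter itself is not shown in the thread. As a rough, hypothetical illustration of the kind of MATLAB-like matrix display being discussed (not his code), a minimal fixed-width formatter might look like:

```python
import numpy as np

def matprint(a, fmt="{:10.4f}"):
    """Minimal MATLAB-style matrix display: fixed-width columns, one row
    per line. An illustrative sketch, not the formatter from the thread."""
    a = np.atleast_2d(np.asarray(a, dtype=float))
    return "\n".join("  ".join(fmt.format(v) for v in row) for row in a)

print(matprint([[1.0, 2.5], [3.25, 4.0]]))
```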
Fluid Simulation for Video Games (part 13)
By Dr. Michael J. Gourlay
Download Fluid Simulation for Games (part 13) [PDF 1.1MB]
Download MjgIntelFluidDemo_Part13.zip [RAR 2.4MB]
Figure 1. Convex polyhedra interacting with a vortex particle fluid
Convex Obstacles
Video games are compelling because they are interactive. Even visual effects should respond to other entities in the environment, especially those the user controls. Particle effects, including
fluids, should therefore respond to rigid bodies of any shape. Those shapes should include airfoils that can experience lift.
This article, the thirteenth in a series, describes how to augment the fluid particle system described earlier to interact with rigid bodies of any polyhedral shape and generate a lift-like force on
those bodies. Part 1 summarized fluid dynamics; part 2 surveyed fluid simulation techniques. Part 3 and part 4 presented a vortex-particle fluid simulation with two-way fluid–body interactions that
runs in real time. Part 5 profiled and optimized that simulation code. Part 6 described a differential method for computing velocity from vorticity, and part 7 showed how to integrate a fluid
simulation into a typical particle system. Part 8 explained how a vortex-based fluid simulation handles variable density in a fluid; part 9 described how to approximate buoyant and gravitational
forces on a body immersed in a fluid with varying density. Part 10 described how density varies with temperature, how heat transfers throughout a fluid, and how heat transfers between bodies and
fluid. Part 11 added combustion, a chemical reaction that generates heat. Part 12 explained how improper sampling caused unwanted jerky motion and described how to mitigate it.
Collision Detection
Detecting collisions between objects first entails computing the distance between them. Video games model most shapes with planar, polygonal faces and treat particles as though they have spherical
shape. You therefore need to compute the distance between planes and spheres.
The Math Behind Planes
A plane is a two-dimensional surface in a three-dimensional space. You can define a plane in various ways. One convenient representation uses a unit normal vector $\hat{n}$ and a scalar $d$, the distance of the plane from the origin. The plane equation has this form:
$$\hat{n} \cdot \vec{x} - d = 0$$
All points with coordinates $\vec{x}$ that satisfy this equation lie in the plane. You can use this representation to express a plane as a vector with four components (that is, a 4-vector plane):
$$\vec{p} = [\hat{n}_x,\ \hat{n}_y,\ \hat{n}_z,\ {-d}]$$
The distance, D, of a point $\vec{x}$ from the plane is then
$$D = \hat{n} \cdot \vec{x} - d = \vec{p} \cdot [x,\ y,\ z,\ 1]$$
Notice that this equation has the same form as the equation for the plane, except that instead of equating it to zero, the formula tells you the distance of the point to the plane. This is obviously consistent because if a point has zero distance to the plane, then the point lies in the plane.
But wait a moment! This formula could result in negative values. For example, take the origin as the query point for a plane with $d = 1$: the formula gives $D = -1$, so the point has negative distance from the plane. What the heck is negative distance?
A plane divides all of space into two halves. A half-space is the region of space on one side of a plane. The signed distance formula tells you whether a point lies in the positive half-space or the
negative half-space of a plane, as Figure 2 depicts.
Figure 2. Planes and half-spaces
Think of a plane as facing in the direction of its normal. Points behind the plane have negative distance. Points in front of the plane have positive distance. When a point lies behind a plane, we
call its distance (which is negative) the penetration depth.
The Plane class shows an implementation of the planar formulae that builds upon a 4-vector class (Vec4):
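A minimal sketch of such a Plane class (Python here rather than the article's C++, with hypothetical names; not Gourlay's actual Vec4-based implementation) might look like:

```python
class Plane:
    """Plane stored as a unit normal n plus offset d, with the plane
    defined by n . x - d = 0. Illustrative sketch only."""
    def __init__(self, normal, d):
        self.normal = normal   # (nx, ny, nz), assumed unit length
        self.d = d             # distance of the plane from the origin

    def distance(self, point):
        """Signed distance: positive in front of the plane, negative behind."""
        nx, ny, nz = self.normal
        x, y, z = point
        return nx * x + ny * y + nz * z - self.d

    def sphere_distance(self, center, radius):
        """Signed distance of a sphere from the plane: the point distance
        of its center, minus its radius."""
        return self.distance(center) - radius

# The plane z = 1, facing +z:
p = Plane((0.0, 0.0, 1.0), 1.0)
print(p.distance((0.0, 0.0, 0.0)))              # -1.0: the origin lies behind
print(p.sphere_distance((0.0, 0.0, 3.0), 1.0))  # 1.0
```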
You can compute the distance of a sphere from a plane by computing the distance of the sphere's center (a point) from the plane, then subtracting the sphere's radius.
Note: Technically, planes exist in other dimensions. For example, in a two-dimensional space, a plane is also a line. In a four-dimensional space, a plane is a three-dimensional hyperplane. But this
article discusses three-dimensional spaces, where planes are two dimensional.
Planes Make Convex Hulls
A polytope is a shape with flat sides. In two dimensions, polytopes are called polygons. In three dimensions, polytopes are called polyhedra.
A convex shape is one where any line segment between any two points in the shape is also in that shape. So, a convex polytope is such a shape with flat sides, as shown in Figure 3.
An array of planes can represent a convex polyhedron: you can describe each face, i, of the polyhedron using a plane, and the polyhedron is the intersection of the planes' negative half-spaces. This collection of planes is called a half-space representation (or H-representation).
Figure 3. Convex versus non-convex polytopes
To determine whether a point is inside a polyhedron, compute the distance of that point to each face plane of the polyhedron. If all distances are negative, the point lies inside the polyhedron.
Figure 4. Measuring distance or penetration depth between stationary spheres and polytopes
As Figure 4 depicts, computing the distance between a stationary point (or sphere) and the planes of a polytope does not always give an unambiguous measure of distance or penetration depth.
Sometimes, the best measure of distance could be from an edge or vertex of the polytope. But the distance formula will always correctly tell whether a point (or sphere) is inside, outside, or
overlapping a polytope.
Alternative Method
The point-to-plane method suffices when detecting collisions between particles and polytopes. Other algorithms exist to compute the distance between two convex shapes. One of the most famous and
useful, especially among game developers, is the Gilbert–Johnson–Keerthi (GJK) distance algorithm. Although its code is fast and simple, the concepts are not easy to explain and are not in the scope
of this article. Furthermore, determining penetration depth can be even more problematic and usually entails more sophisticated approaches, such as the Expanding Polytope Algorithm (EPA). See the
"For Further Study" section at the end of this article for more information.
Collision Detection
Detecting a collision between objects entails computing their separation distance or penetration depth. Also, when objects collide, you usually want to know the region of contact and contact normals
-that is, the direction along which to apply force or displacement to separate the objects.
Although the point-to-plane distance formula will tell you whether a point lies inside a polytope, it will not unambiguously tell you its distance or penetration depth. In addition to the edge cases
described earlier, determining penetration depth entails the relative direction of travel of the two objects. The correct answer depends on the configuration of objects before and after the
collision. If objects lie inside each other (or if a particle lies inside a polytope), then you have detected the collision after it occurred. This is called interpenetration, and it should be
avoided or corrected when it happens.
For visual effects involving hundreds or thousands of tiny particles, you can get adequate results by using the following simple algorithm:
1. Given a query point, a polytope, its position and orientation, initialize largestDistance to an extremely large negative value.
2. For each plane in the polytope:
1. Compute the distance between the query point and the plane.
2. If that distance exceeds largestDistance, then:
1. Assign largestDistance to that distance
2. Remember this plane index
3. Return the plane index and largestDistance.
For spheres, subtract their radius from the returned largest distance to get the separation distance. If that value is negative, the sphere interpenetrates the polytope.
From that information, you can compute a contact point and normal:
Given a query point, a plane, a polytope orientation, and the largest distance:
1. Reorient the plane normal to world space.
2. Scale the normal for the returned plane index by the largest distance to get a penetration vector.
3. Subtract the penetration vector from the query point to get the contact point.
Although this algorithm does not accurately measure distance for the edge cases, the distance and normal it returns yield sufficiently close results to work for collision response.
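The steps above translate almost directly into code. A sketch (Python; it skips the world-space reorientation step by assuming the plane normals are already in world space):

```python
def contact_distance(point, planes):
    """Return (largest signed distance, index of the corresponding plane).
    planes: list of (normal, d) pairs with unit normals, assumed already
    in world space (the article reorients them per body orientation)."""
    largest, index = float("-inf"), -1
    for i, (normal, d) in enumerate(planes):
        dist = sum(n * x for n, x in zip(normal, point)) - d
        if dist > largest:
            largest, index = dist, i
    return largest, index

def contact_point(point, planes, largest, index):
    """Scale the contact normal by the signed distance to get a
    penetration vector, then back the query point out along it."""
    normal = planes[index][0]
    return tuple(x - largest * n for x, n in zip(point, normal))

# Unit cube centered at the origin: six face planes with outward normals.
cube = [((1, 0, 0), 0.5), ((-1, 0, 0), 0.5),
        ((0, 1, 0), 0.5), ((0, -1, 0), 0.5),
        ((0, 0, 1), 0.5), ((0, 0, -1), 0.5)]

d, i = contact_distance((0.0, 0.0, 0.25), cube)
print(d, i)                                          # -0.25 4: inside, nearest +z face
print(contact_point((0.0, 0.0, 0.25), cube, d, i))   # (0.0, 0.0, 0.5)
```

All distances negative means the point is inside; the largest (least negative) one picks the face to push out through.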
The ConvexPolytope class implements these algorithms. (See the demonstration code that accompanies these articles for more details.)
The routine ContactPoint uses information computed by ContactDistance.
Broad Phase
The ContactDistance algorithm iterates over every face in a polyhedron. That process can get expensive. You can reduce that expense in a few ways:
• Only compute ContactDistance when the query point lies within a coarse bounding volume (such as a bounding sphere) that contains the polytope. That computation is much faster and can let you skip
the more expensive ContactDistance until the query point is somewhat near the polytope. The accompanying code uses this technique.
• If you only care about interpenetration, you can change ContactDistance to return immediately when it finds any distance-to-plane that is positive. The returned value will not necessarily be the
largest distance, but when positive, you don’t care. Note that if you want to compute the distance to a sphere instead of to a point, then you would have to pass in the sphere radius and take
that into account. The accompanying code includes a routine that uses this technique.
To further reduce CPU cost , but at the expense of memory and complexity you can:
• Remember the plane from the previous iteration, and reuse that for the next attempt. If it’s still positive, there is no collision, and you can bail out after testing one plane. Note that this
would entail storing another integer per particle. Because there can be tens of thousands of particles, that can add up.
• Store face connectivity information and only visit adjacent faces whose distance would increase largestDistance. Doing so can significantly reduce the number of faces visited. Also, game engines
often include such adjacency information. You might be able to exploit that information for particle collisions.
Deepening Penetration
If a particle penetrates an object, it could end up closer to the opposite side of the object rather than the side it penetrated, as Figure 5 shows. This could happen for thin objects or fast
particles. It is therefore useful to consider only those planes for which particles are moving farther behind.
Figure 5. Measuring collision depth between a moving spherical particle and polytope
The demo code accompanying this article contains a routine, ConvexPolytope::CollisionDistance, that implements this idea.
Continuous Collision Detection
The most accurate way to determine contacts would entail continuous collision detection (CCD), that is, detecting the collision just as it happens (instead of after the fact). CCD involves computing a time of impact (TOI) and either advancing the simulation up to that point or rewinding back to it. One way to estimate the TOI is to move into a reference frame in which one of the objects is stationary. The other object will still be in motion. Now, sweep that moving object across space to span the region it would occupy at all points during the test interval. If the swept shape intersects
with the stationary shape, the two objects probably collided during that interval.
Sweeping a shape is relatively easy if its motion is pure translation but more difficult if its motion includes rotation. Video games therefore either treat only linear motion for continuous
collision detection or simply use discrete collision detection and allow objects to interpenetrate.
For spherical shapes, like particles, the swept shape is a line segment with hemispherical caps, also known as a capsule or sausage. You can compute intersections between capsules and planes using
simple formulae. But particle effects for video games do not need that level of sophistication, and it takes longer to compute than most games budget for effects.
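For the plane case, the "simple formula" amounts to solving one linear equation for the time of impact. A sketch (assuming a unit plane normal n with the plane written as n·x − d = 0):

```python
def sphere_plane_toi(center, velocity, normal, d, radius):
    """Time when a sphere moving at constant velocity first touches the
    plane n . x - d = 0 (unit normal assumed). Solves
    n . (c + t*v) - d = radius for t. Returns None when the sphere is not
    approaching the plane's front side or the touch time is in the past."""
    n_dot_c = sum(n * x for n, x in zip(normal, center))
    n_dot_v = sum(n * x for n, x in zip(normal, velocity))
    if n_dot_v >= 0.0:
        return None
    t = (radius + d - n_dot_c) / n_dot_v
    return t if t >= 0.0 else None

# Sphere of radius 1 at z = 5 falling at 1 unit/s toward the plane z = 0:
print(sphere_plane_toi((0.0, 0.0, 5.0), (0.0, 0.0, -1.0),
                       (0.0, 0.0, 1.0), 0.0, 1.0))   # 4.0
```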
Concave Shapes
The technique described in this article applies directly to convex shapes. To apply to concave shapes, you can either compute the convex hull of that shape or decompose the shape into convex
components. See the "For Further Study" section for more information.
Collision Response
When the detection phase indicates that a particle interpenetrated an obstacle, the simulation must resolve the collision. In other words, it must push the particle outside the obstacle and adjust
the fluid flow to satisfy boundary conditions.
Part 1, part 2, and part 4 explain boundary conditions and one way to solve them approximately, so I will not repeat that here. This article only describes changes that facilitate particles
interacting with convex polyhedra.
Simplified Vorton Interaction with Planes
The routine SolveBoundaryConditions iterates through each rigid body and collides vortons and tracers with that body by calling CollideVortonsSlice and CollideTracersSlice. As of this article,
CollideVortonsSlice is a new routine, extracted from the SolveBoundaryConditions of previous articles.
The key change made to facilitate colliding with convex polytopes is that the code first checks whether the particle lies within a bounding sphere, regardless of whether the obstacle is a sphere or polytope. That is a broad-phase collision test.
The routines CollideTracersSlice and RemoveEmbeddedParticles have similar changes. See the demonstration code accompanying this article for details.
The routine CollideVortonsSlice was extracted from SolveBoundaryConditions to facilitate parallelizing it with Intel® Threading Building Blocks (Intel® TBB). In addition to extracting that code into
its own routine, other changes were made. Previously, the corresponding code directly applied changes to the rigid body’s temperature and momentum. The old code performed operations like this:
1. Read body temperature.
2. Compute heat exchange based on body temperature.
3. Write new body temperature.
But when run in parallel, such updates cause a race condition, as shown in Table 1.
Table 1. Parallel threads cause a race condition.
Thread 1 | Thread 2
Read body temperature. | Read body temperature.
Compute heat exchange based on body temperature. | Compute heat exchange based on body temperature.
– | Write new body temperature.
Write new body temperature. | –
Both threads update a value at the same address (temperature, in this example). Only one thread can "win."
You could solve this issue by synchronizing the code with mutex locks on the body temperature. But doing so would serialize that critical section of code, which would in turn defeat the purpose of
parallelizing it.
Instead, have each thread accumulate changes in a variable local to each thread. When the thread terminates, have the parent thread accumulate those changes and apply them to the body. This might
seem to be a perfect use case for Intel TBB’s parallel_reduce operation.
There is one more catch, however: that accumulation operation is not associative. (See Part 12 for details of a similar problem.) Even though addition is associative for real numbers, it is not for floating-point numbers, so the order in which partial results are combined changes the final value. Intel TBB's parallel_reduce does not split and join deterministically, so to keep this parallelized routine deterministic you must control the reduction yourself: use Intel TBB's parallel_invoke, spawn and join the worker tasks manually, and combine their partial results in a fixed order.
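The non-associativity is easy to demonstrate: grouping the same three values differently changes the IEEE 754 result.

```python
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c    # 0.6000000000000001
right = a + (b + c)   # 0.6
print(left == right)  # False
```

So if threads combine partial sums in a different order on each run, the totals differ slightly from run to run.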
Create a functor to run CollideVortonsReduce with Intel TBB's parallel_invoke.
For comparison, the demonstration code accompanying this article also includes code for using Intel TBB's parallel_reduce. It works, in the sense that it generates a usable result, but it is not deterministic, so using it would impede diagnosing issues.
Let’s replace some of the spheres in previous articles with polytopes.
Although the demonstration code uses boxes, the algorithms and data structures support any convex polytope, as depicted later in Figure 8.
Figure 6 shows a flat plate interacting with flames and smoke. Notice that particles move around the plate, and the plate causes vortices to shed from it.
Figure 6. Various views of a plate above flames
Figure 7 shows a flat plate moving horizontally through fluid, leaving a wake with vortices. The code accompanying this article also includes demonstrations with the obstacle rotating about
longitudinal and lateral axes, exhibiting the Magnus (curve ball) effect.
Figure 7. Flat plate moving through fluid
Figure 8 shows a polyhedral airfoil moving horizontally through fluid, leaving a wake with vortices. This demonstrates that the technique applies to shapes other than boxes and spheres.
Figure 8. Airfoil moving through fluid. This comes from Gourlay (2010), which used a similar formulation to that presented in this article.
Table 2 shows how the collision-detection and response routines perform for the scenario with flames and smoke passing by the flat plate. This scenario had, on average, 49,000 tracer particles and
981 vortex particles (per frame), two spheres, and one box. Tracers hit bodies 8876 times per frame, and vortons hit bodies 63 times per frame. The benchmark ran 6000 frames for each run. The
processor was a four-core (eight hardware threads with hyperthreading) Intel® Core™ i7-2600 running at 3.4 GHz.
Table 2. Collision-detection and response routines for the smoke and flame scenario (times in milliseconds per frame)
No. of threads Solve boundary conditions SBC tracers SBC vortons Sim update Total (including render)
1 2.42 2.346 0.057 4.12 25.7
2 1.606 1.563 0.0423 2.52 16.3
3 1.134 1.095 0.0378 2.15 11.7
4 1.142 1.107 0.0348 2.03 11.4
6 0.804 0.774 0.0312 1.81 9.65
8 0.784 0.75 0.0303 1.76 9.34
12 0.69 0.657 0.0303 1.61 8.38
16 0.718 0.684 0.0336 1.64 8.59
Figure 9 shows a plot of the data in the table.
Figure 9. Run times for the benchmark scenario
Nonrotating spheres do not generate lift. But lift occurs on asymmetric shapes like flat plates and airfoils.
The algorithm to solve boundary conditions generates lift-like impulses. A flat plate moving horizontally through the fluid at an appropriate angle of attack should encounter lift pushing the plate
upward. And indeed, this simulation generates a qualitatively similar result, but it's from deflecting particles that bounce off the obstacle with partially elastic collisions. In contrast,
simulations used in science and engineering calculate the pressure the fluid exerts on bodies, and that calculation can include lift. This simulation does not calculate pressure.
Furthermore, the collision response algorithm does not generate new vortons; it only reassigns values for existing vortons in contact with the object. To be more physically accurate, objects should
generate new vortons when necessary. For example, vorticity would be generated on the leeward side of the airfoil, and this would generate a low-pressure region behind and above the airfoil, the
vertical component of which would be lift. See the "For Further Study" section for more information about more physically accurate ways to calculate realistic pressure and aerodynamic forces on
objects immersed in a fluid.
Using a simple point-to-plane distance formula, you can make fluid effects interact with obstacles that have shapes commonly used to create models in a video game. The algorithm is easy to
parallelize using Intel TBB and runs in less than a millisecond for tens of thousands of particles.
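The core test can be sketched as follows (a hedged Python illustration with hypothetical names, not the demonstration code itself). A convex polytope is the intersection of half-spaces, so a point is inside exactly when it lies on the inner side of every face plane, and the point-to-plane distance is just a dot product and a subtraction.

```python
def signed_distance(point, plane):
    """plane = (nx, ny, nz, d) with unit normal; the plane equation is n.x = d.
    A positive result means the point is outside that face."""
    nx, ny, nz, d = plane
    return nx * point[0] + ny * point[1] + nz * point[2] - d

def inside_polytope(point, planes, margin=0.0):
    """Inside (or within `margin` of the surface) iff the point is on the
    inner side of every face plane of the convex polytope."""
    return all(signed_distance(point, p) <= margin for p in planes)

# Example: an axis-aligned unit cube centered at the origin, as six planes.
CUBE = [( 1.0, 0.0, 0.0, 0.5), (-1.0, 0.0, 0.0, 0.5),
        ( 0.0, 1.0, 0.0, 0.5), ( 0.0,-1.0, 0.0, 0.5),
        ( 0.0, 0.0, 1.0, 0.5), ( 0.0, 0.0,-1.0, 0.5)]
```

Any convex shape expressible as a list of face planes, a box, a plate, or a polyhedral airfoil, works with the same test.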
Future Articles
Liquids take the shape of their containers on all but one surface, so modeling liquids also implies modeling containers. Future articles will include extending boundary conditions to include
interiors, which will allow for creating containers. That will pave the way for a discussion of free surface tracking and surface tension, properties of liquids.
For Further Study
Related Articles
Fluid Simulation for Video Games (part 1)
Fluid Simulation for Video Games (part 2)
Fluid Simulation for Video Games (part 3)
Fluid Simulation for Video Games (part 4)
Fluid Simulation for Video Games (part 5)
Fluid Simulation for Video Games (part 6)
Fluid Simulation for Video Games (part 7)
Fluid Simulation for Video Games (part 8)
Fluid Simulation for Video Games (part 9)
Fluid Simulation for Video Games (part 10)
Fluid Simulation for Video Games (part 11)
Fluid Simulation for Video Games (part 12)
Fluid Simulation for Video Games (part 13)
Fluid Simulation for Video Games (part 14)
Fluid Simulation for Video Games (part 15)
Fluid Simulation for Video Games (part 16)
Fluid Simulation for Video Games (part 17)
About the Author
Dr. Michael J. Gourlay works as a senior software engineer on interactive entertainment. He previously worked at Electronic Arts Inc. (EA Sports) as the software architect for the Football Sports
Business Unit, as a senior lead engineer on Madden* NFL, on character physics and the procedural animation system used by EA on Mixed Martial Arts, and as a lead programmer on NASCAR*. He wrote the
visual effects system used in EA games worldwide and patented algorithms for interactive, high-bandwidth online applications. He also developed curricula for and taught at the University of Central
Florida, Florida Interactive Entertainment Academy, an interdisciplinary graduate program that teaches programmers, producers, and artists how to make video games and training simulations. Prior to
joining EA, he performed scientific research using computational fluid dynamics and the world’s largest massively parallel supercomputers. His previous research also includes nonlinear dynamics in
quantum mechanical systems and atomic, molecular, and optical physics. Michael received his degrees in physics and philosophy from Georgia Tech and the University of Colorado Boulder.
*Other names and brands may be claimed as the property of others.
Math Forum Discussions - Re: How teaching factors rather than multiplicand & multiplier confuses kids!
Date: Nov 12, 2012 7:58 AM
Author: Joe Niederberger
Subject: Re: How teaching factors rather than multiplicand & multiplier confuses kids!
Robert Hansen says:
>That "intuition" you speak of is not common sense, otherwise we would see it everywhere. I don't see a shortage of common sense in the world, so why is mathematics so difficult for most? The "intuition" you speak of is the underpinnings of analysis and formal reasoning.
Well, I didn't want to get hung up on this point, but it's OK. In the case of "continuity" there is a common sense meaning that is widespread enough - it means no gaps, breaks, or sudden jumps. A refined mathematical intuition, on the other hand, starts there and reasons to further not-so-common or not-so-easy-to-understand observations, correlations, and conclusions. All I'm saying is that much can be done without all the various formal definitions from analysis or topology. Euler and Gauss did OK.
But the starting point is pretty much the same for everyone. And Clyde's version was pretty far from the starting point.
Here's a question - how do you get a student to appreciate that a kind of "reversal" is needed in viewpoint to get to a satisfactory definition. What I'm talking about is the common sense viewpoint takes continuity for granted, and merely looks for "gaps" or "breaks" in the otherwise continuous (think of basic functions like 1/x). The formal definitions though, talk about continuity at a point. In fact one can define a function that is *only* continuous at a single point -- a kind of oxymoron.
Joe N
A SIMPLE
LOCAL PROJECTION SYSTEM
FOR SURVEY APPLICATIONS
THE PRO'S AND CON'S
OF USING
SUCH SYSTEMS
Jerry L. Wahl
Bureau of Land Management
Cadastral Survey
Eastern States, ES915
7450 Boston Boulevard
Springfield, VA 22153
(703) 440-1674
This paper discusses the development of simple local coordinate systems for use in Public Land Surveying applications. One of these projections was developed for field use without elaborate
computations. Another was developed to simplify specific computations. Both are of interest historically as illustrations of many of the features of coordinate systems and projections in general.
The discussion will include an exploration of the errors and limitations of these systems as a metaphor for the problems involved in any projection used for geodetic and/or land surveying purposes.
Strangely enough, after describing these systems, this paper will briefly argue against the use of specialized coordinate systems. At the same time that the use of computer technology makes
established systems more practical to use, there has been an increase in advocacy for proprietary or custom systems.
Coordinate systems are used by virtually every land surveyor. It is common to use local systems created on a project by project basis. Most often these systems are simple plane cartesian coordinate
systems with only rough north orientation. Virtually all surveying software is based upon this concept. Orientation is often arbitrary and is even modified based on subsequent information with a
project rotation. The general use is the 2 + 1 dimensional method. This "2 + 1" means computation of plane x y or N E coordinates, where elevations are collected and computed separately.
When moving into Public Land Survey System (PLSS) surveys or other large-scale surveys such as route surveys, the simple plane systems can become difficult to maintain, and in fact may become an
invalid approach. As a result specialized systems are needed which have certain geodetic features and/or are geodetically related. It is common practice to implement the use of State Plane
coordinates to meet these requirements, and in fact this is what they were designed for. For PLSS application, however, these may not be directly usable and there are some intermediate options.
Surveying in the PLSS is a geodetic or geodetic-like system. That is, it has an orientation according to the true meridian. This creates a system which is non- orthogonal (non-cartesian), and thus
such surveys cannot easily be computed directly in a plane coordinate system. This dilemma has led to the use of a number of enhanced coordinate systems for use in PLSS work. Before we get into
the specifics of those systems, let us define some characteristics of coordinate systems and projections.
GENERAL PROPERTIES OF COORDINATE SYSTEMS
Projection Method/Surface: Coordinate Systems may be defined as projections to a defined mathematical reference surface. State Plane systems are typical of these, where the reference systems are
conic or cylindrical (Lambert or Transverse Mercator). It is not necessary for a coordinate system to use a projection or a surface. The system can transform or map coordinates based upon a formula or other mathematical mapping.
Datum: If the system is based on geodetically related projection, then the specific geodetic datum is relevant. In the geodetic sense, this refers to the specific mathematical spheroid parameters in
use. In the state plane systems, the projections are defined differently in each datum. The two most common familiar datums are NAD27 and NAD83.
Orientation: The system has to have a consistent basis of direction. This can be arbitrary or "assumed", and is often stated.
Mapping Angle: The difference between geodetic north and grid north at a given place. The magnitude of the mapping angle increases as you move east or west. The mapping angle is referred to as
"theta" in this paper.
Central Meridian: The longitude or Easting value where grid north and geodetic north are the same.
Second Term: A small value which represents the difference between grid directions, i.e. angles, and geodetic angles at the same point.
Base Point: Some systems are based upon or defined with respect to a specific point.
Scale Factor: The ratio derived by dividing the distance between two points on the ellipsoid and the same two points in grid.
Elevation Factor: Many coordinate systems that are defined with a geodetic reference or projection must base this projection upon a specific elevation. This requires lines measured at ground
elevation to be converted to "sea-level". Thus any distance measured horizontal on the ground must be reduced first to sea-level then to grid based on the scale factor average over the line.
Conformal: Is a concept which describes how well the coordinate system maintains angular relationships and shapes. That is, a square on the ground will be very close to a square in the grid. Since
projections map an ellipsoidal surface onto a two dimensional surface, they can never truly avoid distortion but can be designed to minimize it.
Distortion: In addition to the small distortions caused by the projection process, large projection zones necessarily have increasing variations of scale at their extremities. This distortion is much
smaller near the center of the zone.
Stability: A term used by Von Meyer, but not defined. We assume stability refers to a system being well defined and available so that all users who implement and use it get the same results. Another
stable feature which a projection may have is independence from datum changes.
CUSTOM SYSTEMS FOR PLSS APPLICATIONS
WP Coordinates:
This system initially was applied to large scale PLSS surveys and was designed to be simple to setup and easy to use. It was designed to be used in the field with simple calculators and uses a
minimal set of computations. The initial design is termed algorithm 1. The system is not a projection, but merely a system which allows the user to relate a orthogonal local coordinate system to the
PLSS true bearing system. All distances are assumed to be ground measurements. The system was designed before the use of programmable calculators, and allowed conversion between true and project
bearings with simple mathematical operations. In addition, a crude lat/long could be computed for use in astronomic observations.
Algorithm 1 - WP Coordinates: In this system a base point is defined with assigned coordinates: The system is defined by astronomic observations at the base point which thus creates a central
meridian at the base point easting. The primary projection "constant" is simply the change in direction per change in easting. In BLM Cadastral survey work the typical units are in chains. Thus this
value would be computed one time per project based upon the average project latitude. Typical examples are on the order of 20 seconds of bearing per 40 chains of departure. There are a variety of formulas possible to compute this value from a given latitude; however, since it is only computed once, a full geodetic computation is not an encumbrance to the design. The example uses NAD27 ellipsoid values, although this is not required.
a = 6378206.4 meters Clark 66 Spheroid major axis meters
b = 6356583.8 meters Clark 66 Spheroid minor axis meters
e1 = 1 - b^2/a^2 Spheroid eccentricity factors
e2 = (a^2/b^2) - 1.0
Ac = a * 39.37 / 792.0 Spheroid major axis in chains
rad = Average Spheroid radius at project for example = 20,906,000 feet,
rough value, can be computed for project area
A base point is selected with a given northing, easting and latitude and longitude. The base values are:
LatBase = Latitude of base point
LonBase = Longitude of base point
N0 = False Northing assigned to base point
E0 = False Easting assigned to base point on central meridian
The following values are then computed for a project:
k1 = 1 - e1 * SIN^2(LatBase) Intermediate value
c = TAN(LatBase) * SQRT(k1) / 5533.707122 Curvature in Degrees/chain
For example for a latitude of 34 degrees 45 minutes:
k1 = 0.998019
c = 0.000125239 Degrees per chain departure
= 0.4509 Secs of bearing per chain departure
= 18.03 Secs of bearing per half mile departure
The base point is defined to be on the meridian where true azimuth equals grid azimuth. Therefore, with just this value of c, a theta or mapping angle computation can be performed at any point Ni, Ei:
theta = (Ei - E0) * c Mapping angle for point at Easting Ei
true az = grid azimuth + theta
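The constants above translate directly into code. A Python sketch of the algorithm-1 setup and mapping-angle computation (names hypothetical; coordinates in chains):

```python
import math

# Clarke 1866 spheroid, as in the text (meters).
a = 6378206.4
b = 6356583.8
e1 = 1.0 - b ** 2 / a ** 2

def curvature_per_chain(lat_base_deg):
    """c: change of bearing, in degrees, per chain of departure."""
    lat = math.radians(lat_base_deg)
    k1 = 1.0 - e1 * math.sin(lat) ** 2
    return math.tan(lat) * math.sqrt(k1) / 5533.707122

def mapping_angle(easting, e0, c):
    """theta, in degrees, for a point at `easting`; true az = grid az + theta."""
    return (easting - e0) * c

c = curvature_per_chain(34.75)            # base latitude 34 deg 45 min
theta = mapping_angle(2040.0, 2000.0, c)  # point 40 chains east of the base
```

With the example base latitude this gives roughly 0.45 seconds of bearing per chain of departure, matching the worked values in the text.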
This is obviously crude, but still within reasonable limits for the conversion of astronomic azimuths into the grid. The system definition can also include average m and p factors for the project area to make a crude computation of latitude and longitude possible. These additional factors can be computed from:
m(lat) = m factor for latitude lat in chains per degree
= pi / 180 * Ac * (1 - e1) / ((1 - e1 * SIN^2(lat))^1.5)
p(lat) = p factor for latitude lat in chains per degree
= -(pi / 180 * Ac * COS(lat) / SQRT(1 - e1 * SIN^2(lat)))
Full implementation thus requires determining the constants for the base point:
N0 = false Northing of base point
E0 = false Easting of the base point and central meridian
LatBase = Latitude of base point
LonBase = Longitude of base point
MBase = m factor based on LatBase
PBase = p factor based on LatBase
c = change in direction per change in easting unit
Implementation of the basic system over a several-township (+- 10 mile radius) area does not require a highly accurate computation of the c value. This system is adequate for the conversion between grid and
true bearings which is the dominant factor between any plane system and a geodetic system. However it has small inaccuracies due to the use of a constant value of c over the project, when this value
varies as a function of latitude. Also the quick option to compute field lat/long is crude. It is suitable for positional data to control astronomic azimuth observations and rough scaling, but not
useful for survey quality geodetic to plane conversions. Both of these problems were addressed by a modification called the second algorithm.
Algorithm 2 - WP Coordinates: This refinement makes the basic computation a little more complex but with the base computation still easily within the realm of the field calculator, adding only a few
trig functions. In fact, if the calculator has polar rectangular conversions the computation can be performed in one step. It is based upon a model of a conic tangent to the earth at the project base
point, with the apex over the pole. This is a model more than a projection, so the use of the system is still simple. The primary factor now being computed instead of the value c is the conic radius
at the base point, Rb. Since meridians mapped onto the conic converge at nearly the same rate as on the spheroid surface, corrections to the operation of the system can be computed using simple curve
and radius computations.
Rb = Ac * COT(LatBase) / SQRT(k1)
This refinement produces significantly greater accuracy in the lat/long computation, so much so that failure to correct the computations based on project elevation becomes the dominant error. Thus an
additional factor may be computed for average project elevation.
slfact = 1 - Elev / (Elev + rad)
sealevel dist = ground dist * slfact
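For example, at a project elevation of 5,280 feet with the rough radius above, the reduction is about one part in four thousand (a quick Python check of the formula):

```python
RAD = 20906000.0  # rough average spheroid radius from the text, in feet

def sea_level_factor(elev_ft, rad_ft=RAD):
    return 1.0 - elev_ft / (elev_ft + rad_ft)

slfact = sea_level_factor(5280.0)   # about 0.9997475
# A 1000-chain ground distance reduces to about 999.75 chains at sea level.
```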
This factor is then used in each conversion computation if needed. The following statements are now used to compute theta for a point Ni, Ei:
dn = (Ni - N0) * slfact grid northing difference from point to base
de = (Ei - E0) * slfact grid easting difference from point to base
theta = TAN^-1(de/(Rb - dn))
Conversion of a system coordinate Ni, Ei to latitude and longitude is performed accurately with the following process:
drad = Rb - dn
rpti = SQRT(de^2 + drad^2)
latr = Latbase + ((Rb - rpti)/Mbase) trial point latitude
The trial latitude is used through iterations to compute a mean latitude between the point and the base, which is then used to compute a new m factor that is more correct for that interval.
mfacm = m factor at mean latitude,
this is approximated by Mbase, then refined by iteration.
pfacm = p factor at mean latitude,
as determined in the iterations used to compute the above.
lati = LatBase + (Rb - rpti)/mfacm)
loni = LonBase + (theta * rpti)/pfacm)
Conversion of a lat long to system N, E is accomplished with the following:
mnlat = mean latitude between point and base
      = (lati + latbase) / 2.0
mfacm = m factor at mean latitude
pfaci = p factor at point
drad = difference in conic radius to point
     = mfacm * (lati - latbase)
radi = conic radius of point = Rb - drad
arci = length of parallel from central meridian to point
     = pfaci * (loni - lonbase)
theta = arci/radi in radians
Ni = N0 + (Rb - radi * COS(theta)) / slfact
Ei = E0 + (SIN(theta) * radi) / slfact
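Putting the algorithm-2 formulas together, here is a minimal Python sketch (names hypothetical). Two simplifications for a clean round trip: the p factor is evaluated at the recovered point latitude where the text uses the mean latitude (a negligible difference over a project area), and the negative sign on p, which in the text reflects a west-positive longitude convention, is dropped.

```python
import math

# Clarke 1866 constants from the text; distances in chains.
a, b = 6378206.4, 6356583.8
e1 = 1.0 - b ** 2 / a ** 2
Ac = a * 39.37 / 792.0        # major axis in chains (792 inches per chain)
DEG = math.pi / 180.0

def m_factor(lat_deg):
    """Chains per degree of latitude."""
    s2 = math.sin(lat_deg * DEG) ** 2
    return DEG * Ac * (1.0 - e1) / (1.0 - e1 * s2) ** 1.5

def p_factor(lat_deg):
    """Chains per degree of longitude along the parallel (sign dropped)."""
    lat = lat_deg * DEG
    return DEG * Ac * math.cos(lat) / math.sqrt(1.0 - e1 * math.sin(lat) ** 2)

def make_projection(lat0, lon0, n0=10000.0, e0=10000.0, slfact=1.0):
    """Forward/inverse pair for the tangent-cone model of algorithm 2."""
    k1 = 1.0 - e1 * math.sin(lat0 * DEG) ** 2
    rb = Ac / math.tan(lat0 * DEG) / math.sqrt(k1)   # conic radius at base

    def forward(lat, lon):
        mfacm = m_factor((lat + lat0) / 2.0)         # m at mean latitude
        radi = rb - mfacm * (lat - lat0)             # conic radius of point
        theta = p_factor(lat) * (lon - lon0) / radi  # mapping angle, radians
        return (n0 + (rb - radi * math.cos(theta)) / slfact,
                e0 + radi * math.sin(theta) / slfact)

    def inverse(n, e):
        dn, de = (n - n0) * slfact, (e - e0) * slfact
        theta = math.atan2(de, rb - dn)
        rpti = math.hypot(de, rb - dn)               # conic radius of point
        lat = lat0 + (rb - rpti) / m_factor(lat0)    # trial latitude
        for _ in range(6):                           # refine with mean-lat m
            lat = lat0 + (rb - rpti) / m_factor((lat + lat0) / 2.0)
        return lat, lon0 + theta * rpti / p_factor(lat)

    return forward, inverse

forward, inverse = make_projection(34.75, 94.0)      # example base point
```

A lat/long taken forward to N, E and back through the inverse iteration recovers the original position to well below survey tolerance.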
It should be noted that, in order to be much more accurate, this has now become a more complex computation, requiring evaluation of several m and p factors. This begins to move this operation beyond easy field manipulation. However, the mapping angle computation is still easy; it is the transformation between geodetic and local system coordinates that has become complex.
This system is still not a true projection, and thus has no scale factor. A number of further refinements could be made to this system, each of which would increase the accurate scope of coverage but also increase the complexity of computation. At some point there are diminishing returns.
Alaska variation
Alaska has implemented a local coordinate system for field use as an output from their GEO surveying software system developed by Thomas (Red) Wohlwend. This system is a very complete and powerful
system designed to perform the wide scope of surveys being performed in Alaska. In many small surveys the field crews, equipped with HP41C class calculators, need a system to perform closures, field
checks, and area computations, on the ground. As a result Red has developed a specialized output from GEO which produces a local coordinate system based upon a state plane coordinate base point. This
system involves computation of mapping angle and scale factor at the base point. Each point in a geodetic system can then be converted into a planar, ground distance, local system for temporary use
in the field.
T & T Coordinate System: Alaska has also developed another system which has very specific purposes. It is not a conformal system, nor is it trivial to compute, but it provides some very useful
characteristics. In the PLSS, near-cardinal lines are all important, and east and west lines are curved parallels of latitude, which makes the system non-orthogonal; as a result it is difficult to compute areas of PLSS parcels or intersections. The T&T system is not a projection, but a mathematical mapping of latitudes and longitudes to coordinate values (usually in chains) which
makes all east and west lines straight grid east and west. North and south lines end up converging slightly when they are off the central meridian defined by the base point. The system is
predominantly an aid to computations in the PLSS, and has minimal value for direct field use.
In the T&T system the north coordinate of a point is defined by the distance between the point's latitude and the latitude of the base point. This is computed by evaluating the m factor for the mean of the point and base latitudes, times the latitude difference. The east coordinate is defined by its distance at latitude along a parallel from the base point longitude. This is accomplished by evaluating the p factor for the latitude of the point, times the difference in longitude. Thus the T&T coordinates for a point Lati, Loni are given by:
mnlat = (Lati + LatBase) / 2
Easti = (Loni - LonBase) * p(Lat)
Northi = (Lati - LatBase) * m(MnLat)
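A Python sketch of the mapping (hypothetical names; as with the earlier sketch, the sign of the p factor is dropped, since in the text its negative sign reflects a west-positive longitude convention):

```python
import math

# Clarke 1866 constants as in the text; coordinates in chains.
a, b = 6378206.4, 6356583.8
e1 = 1.0 - b ** 2 / a ** 2
Ac = a * 39.37 / 792.0
DEG = math.pi / 180.0

def m_factor(lat_deg):
    s2 = math.sin(lat_deg * DEG) ** 2
    return DEG * Ac * (1.0 - e1) / (1.0 - e1 * s2) ** 1.5

def p_factor(lat_deg):
    lat = lat_deg * DEG
    return DEG * Ac * math.cos(lat) / math.sqrt(1.0 - e1 * math.sin(lat) ** 2)

def tandt(lat, lon, lat_base, lon_base):
    """T&T mapping: North from the meridional distance at the mean latitude,
    East along the point's own parallel.  East-west lines come out straight
    grid east-west, which is the point for PLSS computations; the mapping
    is not conformal."""
    mnlat = (lat + lat_base) / 2.0
    north = (lat - lat_base) * m_factor(mnlat)
    east = (lon - lon_base) * p_factor(lat)
    return north, east
```

Any point on the base parallel maps to North = 0, so a latitudinal PLSS boundary becomes a straight grid line, exactly what the system is for.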
As stated, this system has many special computational purposes, but it is not conformal: angles about a point are not preserved. The inverse operation from a T&T coordinate pair to latitude and longitude is computed by iterating between a trial latitude based upon the base m factor and a mean latitude used to determine a refined m factor, and so forth.
CONCEPTS OF LOCAL PROJECTIONS
Each of the above described systems was created for specialized purposes; however, in several recent papers the topic of creating much more complex local projections has been discussed. The WP system shares some of this rationale; however, it is the author's contention that these factors are no longer relevant due to the common availability of computing capability to surveyors. The rationale for these systems varies between providing:
Less distortion in the project area. This is true, as there is always less distortion near the origin of a well defined conformal projection. However, why this distortion creates any real problem at the graphic level of a GIS, or to properly programmed projection software, is unclear.
Grid distances close to ground distances. This is also often described as a desirable characteristic. If the projection is for GIS use, the idea of deriving survey quality from a GIS base seems obscure, though it is nice to think of survey- and design-quality layers in a GIS, CAD, or project COGO system. However, it is this author's opinion that these ends are better achieved by proper software based on well-known projections than by adopting a whole new specialized projection. Stakeout information can be readily converted to ground through application of simply computed scale and elevation factors.
One of the more dangerous of these concepts is the various methods of modifying the state plane coordinate systems to ground elevation. These adulterated systems are very popular but very dangerous: a project or description appears to have state plane coordinates on it, but nothing makes it clear that they are not. Thus they are highly subject to misinterpretation and use by subsequent users.
Facilitate GIS development. This reason for adopting a locally defined projection makes the least sense. As local governments, states, and numerous federal government agencies develop GIS/LIS systems, universal coordinate systems which allow for combining data over larger regions seamlessly seem to be an attribute which is now within grasp, but which would be frustrated greatly by the proliferation of local proprietary systems.
Most of the reasons given for using locally defined projections over well defined projections for mapping and GIS purposes are invalid when computers are available and used for most, if not all, computations. Furthermore, the arguments in favor seem to be self-serving, demanding specialized software and less universal systems at a time when the need for more universal systems is clear. On-the-ground surveying stakeout problems can be addressed through a number of simple software solutions rather than custom or adulterated projections. The author feels that the time has come to get off the dime and start implementing systems which properly use regional, state, or universally understood projections, rather than quick fixes which serve to further fragment data and datums and undermine the ability to educate the profession.
von Meyer, Nancy (1990), County Coordinate Systems, ACSM Bulletin, Number 126, pp. 51-52.
Jeyapalan, Kandiah, Surface State Plane Coordinate Systems, Proceedings of the 1993 ACSM-ASPRS Annual Convention, Vol. 1, pp. 270-279.
Burkholder, Earl F., A Case Study Comparing Use of Local Map Projection Coordinates with 3-D Geodetic Model Coordinates, Proceedings of the 1993 ACSM-ASPRS Annual Convention, Vol. 1, pp. 42-51.
Burkholder, Earl F., Design of a Local Coordinate System for Surveying, Engineering, and LIS/GIS, Journal of Surveying and Land Information Systems Vol. 53, No. 1, 1993, pp. 29-40.
Wahl, J.L. (1990), Cadastral Measurement Management User's Manual, 147 p., Bureau of Land Management, NSPS Board of Governors.
Wahl, J.L., Rodine, C.J., and R.J. Hintz (1991) Progress Report on the Development of an Integrated PLSS Cadastral Measurement Management and Retracement Survey Software System.
Hintz, Raymond J., Examination of the Validity of State Plane Projection Computations in PLSS Applications, Proceedings of the 1993 ACSM-ASPRS Annual Convention, Vol. 1, pp. 223-232.
converted 08/10/96
Steel Bank Common Lisp
On Jan 3, 2008 3:15 AM, Christophe Rhodes <csr21@...> wrote:
> Heh. I think that my patch is at a position that you should be able
> to apply just the x86-64 bits implementing 32-bit arithmetic from
> <http://thread.gmane.org/gmane.lisp.steel-bank.devel/9155/>; it's
> possible that bits and pieces might break, though. It's more because
> you have useful test cases for this new behaviour that I'm trying to
> rope you in :-)
I finally had time to look at this; I stuck your patch and my patch
together to produce a working SBCL. SB-MD5::UPDATE-MD5-BLOCK looks
right (almost; see below) and James's favorite testcase does the right
thing. I haven't tried more complicated variations on James's
testcase, but it seems plausible that they too will do the right thing.
The only problem with SB-MD5::UPDATE-MD5-BLOCK is that you get code like:
MOV R9, R8
SHL R9, 3
MOV R8, RCX
SHL R8, 3
MOV R8, RCX
OR R10, R9
MOV R8, RDX
SHL R8, 3
XOR R8, R10
SAR R8, 3
We want something more like:
OR R8D, ECX
XOR R8D, EDX
The problem here is that we've been doing ADD/ROL/NOT in 32-bit
registers, but the only way SBCL knows how to do modular
LOG{AND,IOR,XOR} operations is to do them on its usual triumvirate of
fixnum/signed-byte-64/unsigned-byte-64. These gymnastics are
converting from our 32-bit registers to fixnums (since fixnum
operations are "cheaper" than unsigned-byte-64 operations...sigh),
operating on fixnums, and then going back. So we can't rewrite the
unsigned-byte-64 vops to take UNSIGNED32-REG inputs, because that vop
will never be selected over the fixnum one. And we can't rewrite the
fixnum ones to take UNSIGNED32-REG inputs because that would just get
The simple-minded solution of (DEFINE-MODULAR-FUN LOGIOR ...) doesn't
work because LOGIOR is a :GOOD function.
I don't know what the right thing to do here is. We like having these
:GOOD functions because we're not making the user write even more
(LOGAND #xffffffff ...) forms. But we also want a way to specify that
these really should be done on a particular representation.
Which RAID solutions can handle 2 disk failures
up vote 2 down vote favorite
Out of RAID 0-5, RAID 0+1, and RAID 1+0, which ones would be able to handle two disk failures at the same time?
I have read online that RAID 0+1 cannot handle this.
Also, from my research it seems that RAID 1, 2, and 0+1 can handle this, but I am really not sure.
hardware raid
I think you will have more luck with this question on the serverfault or superuser site – Bassetassen Aug 13 '11 at 21:04
3 en.wikipedia.org/wiki/RAID has a table with the information you're after. – Mat Aug 13 '11 at 21:05
migrated from stackoverflow.com Aug 13 '11 at 21:04
This question came from our site for professional and enthusiast programmers.
4 Answers
This is very complex because many RAID levels can handle two disk failures but not ANY two disk failures. RAID 6 is the only one I know of that can handle ANY two disk failures. RAID 10 (mirrors
that are striped) can handle two disk failures as long as the disks are in different RAID 1 arrays. The same goes for RAID 50 (RAID 5 arrays that are striped). RAID 0+1 (stripes that are
mirrored) can also handle two disk failures but only if the disk failures are in the same RAID 0 array (opposite of RAID 10 and 50). It all depends on the cost benefit analysis for your
needs. And always remember that RAID is fault tolerance and is no substitute for backups and a plan for Disaster Recovery (DR) and Business Continuity (BC).
2 Agree. RAID is only first line of defense from data loss. One can't overstress the importance of offsite backups. – pepoluan Aug 13 '11 at 21:20
1 RAID-1 with 3 or more disks can handle ANY two disk failures. – womble Aug 13 '11 at 22:31
@womble..Thank you for the clarification. Yes it can and I was not aware of this before now. Definitely not a standard (well used) configuration but one I will consider in the future
rather than a RAID 1 with 2 disks and a hot spare. – murisonc Aug 13 '11 at 22:38
RAID-6 is the canonical answer, because it can handle exactly two disk failures. However, to cover your larger question, RAID-1 can (maybe) handle two disk failures -- technically, an N disk
RAID-1 array can handle N-1 failures, so whether a given RAID-1 will handle two failures depends on your configuration.
RAID-10 (or RAID-01, depending on your proclivities) can handle anywhere between M-1 and N/M failures (where N is the number of disks in the array, and M is the number of mirrors),
depending on which disks happen to fail. If the M disks that fail are all the mirrors of the same data, then you're toast. On the other hand, if the N/M disks that fail are all mirrors of
different data, you're OK. I'll leave the probabilistic analysis of the chances of catastrophic failure for given values of N and M for you to do if you're so interested.
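The probabilistic analysis left here as an exercise is easy to brute-force for the two-way-mirror case. A sketch (Python; it assumes disks arranged as consecutive mirrored pairs, and that any two disks are equally likely to be the ones that fail):

```python
from itertools import combinations

def survival_probability(n_disks):
    """Fraction of the C(n,2) possible two-disk failures that a RAID-10
    of two-way mirrors survives; a failure is fatal only when both
    failed disks belong to the same mirrored pair (0,1), (2,3), ..."""
    fatal = {(i, i + 1) for i in range(0, n_disks, 2)}
    all_failures = list(combinations(range(n_disks), 2))
    survived = sum(1 for f in all_failures if f not in fatal)
    return survived / len(all_failures)

# 6 disks in 3 pairs: 3 fatal combinations out of C(6,2) = 15.
print(survival_probability(6))  # → 0.8
```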
According to the Storage Networking Industry Association (SNIA), the definition of RAID 6 is:
> "Any form of RAID that can continue to execute read and write requests to all of a RAID array's virtual disks in the presence of any two concurrent disk failures."
Any RAID level doesn't replace backups (locate them in a separate building). For example, buggy software/drivers writing random trash to disk, hackers/viruses, etc. can trash your whole
array at once. For me, ENOUGH DATA SAFETY = backups + ((RAID 1 or any redundant level) + hot spare(s)) + RAID monitoring.
Beat me by 10 seconds! +1 – gWaldo Aug 13 '11 at 21:08
1 Hats off to SNIA for being wonderfully non-specific. I guess some of my RAID-1 arrays are actually RAID-6 arrays, and I never noticed. – womble Aug 13 '11 at 21:21
RAID 1+0 of (2xN) drives can handle up to N failed drives *as long as* no 2 drives belonging to the same pair fail at the same time.
An illustration:
+------------------RAID 0----------------------+
| +---RAID 1---+ +---RAID 1---+ +---RAID 1---+ |
| | D0-1 D0-2 | | D1-1 D1-2 | | D2-1 D2-2 | |
| +------------+ +------------+ +------------+ |
+----------------------------------------------+
We have 2x3 drives. If drives D0-1, D1-2, and D2-1 fail at the same time, the whole array still survives. But if ever D0-1 & D0-2 (or any two drives in the same RAID 1 pair) fail at
the same time, you'll lose the whole array.
5 I wish my grandmother would cross stitch something like that for me. – Wesley Aug 13 '11 at 22:03
Koch Snowflake
Koch Snowflake Source:
Koch Snowflake 1 iteration:
Koch Snowflake 2 iterations:
Koch Snowflake 3 iterations:
The Koch Snowflake fractal is, like the Koch curve, one of the first fractals to be described.
Basically the Koch Snowflake is just three Koch curves combined into a regular triangle. The construction rules are the same as those of the Koch curve.
Mathematical aspects:
Because the Koch Snowflake consists of just three Koch curves, its perimeter and its number of parts can be calculated from one of the constituent Koch curves and then multiplied by three.
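Concretely, if the initial triangle has side length $s$, each iteration replaces every segment by four segments one third as long, so the perimeter after $n$ iterations is:

```latex
P_n = 3s \cdot \left(\frac{4}{3}\right)^n
```

which grows without bound as $n \to \infty$, even though the snowflake encloses only a finite area.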
Invited Commentary: GE-Whiz! Ratcheting Gene-Environment Studies up to the Whole Genome and the Whole Exposome
One goal in the post-genome-wide association study era is characterizing gene-environment interactions, including scanning for interactions with all available polymorphisms, not just those showing
significant main effects. In recent years, several approaches to such “gene-environment-wide interaction studies” have been proposed. Two contributions in this issue of the American Journal of
Epidemiology provide systematic comparisons of the performance of these various approaches, one based on simulation and one based on application to 2 real genome-wide association study scans for type
2 diabetes. The authors discuss some of the broader issues raised by these contributions, including the plausibility of the gene-environment independence assumption that some of these approaches rely
upon, the need for replication, and various generalizations of these approaches.
Keywords: epidemiologic research design, genetic epidemiology, genome-wide association study, genotype-environment interaction, polymorphisms, single nucleotide
A superhero recently asked his nemesis how many cats she has. She answered with a riddle: "one-half of my cats plus three." How many cats does the nemesis have?
So the one half which she did not describe is the three she did describe - which means 3 = half of her cats
so that means she has 3.5 cats?
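If the riddle is read as "my total number of cats equals one-half of my cats plus three", the count comes out whole, not 3.5. A quick check of that reading (a Python sketch):

```python
def cats_from_riddle():
    # Read the riddle as: total = total/2 + 3  =>  total/2 = 3  =>  total = 6
    for total in range(1, 50):
        if total == total / 2 + 3:
            return total

print(cats_from_riddle())  # → 6
```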
From Wikibooks, open books for an open world
The Natural Numbers
Mathematics requires, in order to avoid confusion or absurdity, an unambiguous definition of vocabulary. While this is true of any science, in mathematics this is achieved absolutely through the
abstraction of concepts.
However, the full description of math as above requires time, and is nowhere elementary. Also, in order to have a true axiomatization, a math built up from the roots, we would have to use the strongest,
in the logical sense, statements possible. This makes them powerful for proofs but often sacrifices intuitiveness. Thus, we will concentrate on the latter, only rarely using rigour, as necessary.
To do mathematics we must start by finding a way to think about numbers that is as above, unambiguous, while at the same time obvious and natural. We thus come up with the natural numbers.
Intuitively, we define these as the set N = { 1, 2, 3, ... }. We will say a number is a natural number if it belongs to this set. Soon we will find this needs to be expanded even for elementary
treatment, but this set alone already has some interesting properties.
If A is a set such that whenever we pick an arbitrary element, call it x, of A we also have x ∈ N, we say A ⊆ N. In words, A is a subset of N.
By x ∈ A we mean that x is "in" A, or that x is an element of A.
While we have not defined properly a set, membership or inclusion, we have already a feel for what N ought to be like.
Mathematical induction
Principle of mathematical induction
Let A ⊆ N with the following properties:
(i) $1\in A$
(ii) $n\in A$ implies $n+1\in A$
Then A = N
The principle of mathematical induction is implied by the 'least element principle' (2.1). For, assume that 1.1 holds and let B be the set of all elements of N not in A. If B has any elements (that
is, is not empty) then it must have a least one. Call this least element n. Then, by (i), $n \ne 1$. Thus n - 1 is a natural number and is not in B and so is in A. By (ii), $n=(n-1)+1$ is a natural
number which is in A. But now n is in both A and B which is impossible. So B must have been empty. That is, A = N.
This serves as a fundamental property of the natural numbers, and we, further defining order, will call these numbers well-ordered, primarily because of this principle. In more rigorous terms, we
would have to identify a separate axiom, called the axiom of induction, which encompasses this property. However, for our current purposes, this is sufficient.
Principle of mathematical induction (modified)
Let A be a set of natural numbers which has the following properties:
(i) The natural number $n_0$ is in A
(ii) If $n\ge n_0$ and $n\in A$, then $n+1\in A$
Then A contains all natural numbers $n\ge n_0$
In this section, we shall describe some of the more commonly occurring types of numbers together with some of their properties.
Types of Numbers
Natural Numbers, N
As already mentioned, these are just ordinary counting numbers 1,2,3,4,...,29... . Note that zero has not been included in this set. This varies with different books or mathematicians and may include
zero as a natural number.
An important property of the natural numbers is the ordering. Note that natural numbers come with an idea of size so that we can talk about larger and smaller natural numbers.
Integers, Z
By the integers, we mean the natural numbers together with their negatives and zero. Although we all, presumably, have a reasonably clear practical idea of how to work with the integers, there are
definite problems as to what they actually are. Referring to such things as the 'number line' does not solve these problems because it relies heavily on our intuition.
One way that mathematicians solve this problem is to make an artificial construction which produces an artificial object with the properties we expect of the integers. Although we do not use this in
everyday mathematics, it gives a precise definition which we can use to justify our use of negative numbers and also provides a model we can use when we wish to develop quite new constructions which
are not so intuitively reasonable. We shall not describe this for the integers but will give a brief description next of a similar process starting with the integers and producing the rational
Rational Numbers, Q
With the integers, we have a number system which is closed under addition, multiplication and subtraction. The next step is to produce a collection numbers which is also closed under division (except
by zero). This is the rational numbers. Again we should all be able to manipulate rational numbers but there is some problem with what they actually mean. For example, what does it mean to say that $
\frac{2}{3}=\frac{4}{6}=\frac{6}{9}$? We usually want the equals sign to denote the fact that two things are identical and the symbols $\frac{2}{3}$ and $\frac{4}{6}$ are certainly not identical.
So we redefine what we mean by equality of fractions. Let us consider the set of ordered pairs of integers with the second integer non-zero; that is,
$\mathbb{Q}'=\{(a,b):a,b\in \mathbb{Z}$ and $b \ne 0\}$
(An ordered pair is just a pair in which it is specified which comes first and which second.) We want these ordered pairs to represent rational numbers but on a many-to-one basis. So we redefine
equality of these ordered pairs by
$(a,b)\equiv (c,d)$ if and only if ad = bc
(Read this as (a,b) is equivalent to (c,d).) This corresponds to the usual definition of equality of fractions. We can then think of one rational number as being represented by a collection of
ordered pairs of integers all equivalent to each other. Two ordered pairs represent the same rational number exactly when they are equivalent. WE can then go on to define, in terms of the ordered
pairs, the usual operations of addition, subtraction, multiplication and division. For example,
(a,b) + (c,d) = (ad+bc,bd)
We will define addition of rational numbers by taking an ordered pair for each number and then adding these ordered pairs. There is still a problem however. We need to be sure (without checking each
case) that, for example, because $(2,3)\equiv (4,6)\equiv (6,9)$, then also
$(2,3)+(5,6)\equiv (4,6)+(5,6)\equiv (6,9)+(5,6)$
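This well-definedness can at least be spot-checked for particular representatives. A sketch in Python, with rationals modelled as the text's ordered pairs of integers:

```python
def equiv(p, q):
    # (a,b) ≡ (c,d) if and only if ad = bc
    (a, b), (c, d) = p, q
    return a * d == b * c

def add(p, q):
    # (a,b) + (c,d) = (ad + bc, bd)
    (a, b), (c, d) = p, q
    return (a * d + b * c, b * d)

# Equivalent representatives of 2/3 give equivalent sums with (5,6):
assert equiv((2, 3), (4, 6)) and equiv((4, 6), (6, 9))
assert equiv(add((2, 3), (5, 6)), add((4, 6), (5, 6)))
assert equiv(add((4, 6), (5, 6)), add((6, 9), (5, 6)))
```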
What is order?
Again, as we try to find a way to define, based on the already established ideas of addition, and the intuitive idea of order, in what sense is a number larger than another. In what way can we
arrange the numbers?
With N this is easy, we say, if a, b are natural numbers, a < b if there is a natural number c such that a + c = b. From the principle of induction, we can thus order N in the following manner: 1, 2,
3, ... As has been done before with our intuition.
With Z, we run into a problem, where do we start? Since the principle of induction does not apply on Z as a whole, we have to write it as ..., -1, 0, 1, ..., leaving "..." on both sides. However, the
definition, as before, applies.
With Q, clearly $\frac{1}{2}$ > $\frac{1}{3}$, yet the required difference $\frac{1}{6}$ is not in N, and so our definition fails. The new definition would be
Definition: (Order on Q) Let a, b be rational numbers, then a < b if there is a positive rational number c such that a + c = b.
We call a rational c positive if it is equivalent to a rational $\frac{n}{m}$ such that n and m are both natural. In other words, if c = $\frac{p}{q}$, then c is positive if pq is a natural number. A
simple exercise would be to show these are equivalent. (Note: Statements are equivalent if each implies the other.)
Any set such that for each two elements in it we can definitely say a < b, a > b, or a = b is called totally ordered.
The least element property
Any non-empty subset of the natural numbers has a least element. The words 'any non-empty subset' mean that the subset we take should have at least one element in it.
Any set with the least element property is called well-ordered. Thus N is well-ordered as above, while Z and Q are not.
Order properties of natural numbers
(i) There is no largest natural number.
(ii) There are natural numbers which are nearest neighbours in the sense that there is no natural number strictly between them.
Not finished yet...
A proof is simply a mathematical argument designed to convince the reader of some fact. However, an intuitive proof differs from an inductive proof. The former is a fact that can be assumed to be
an observable constant from daily experience (e.g. when two test tubes of milk are poured into one beaker, then if we divide the final volume again into the two test tubes, we should get full test
tubes again (almost). This explains why ½+½ will still be 1. Since this follows from intuition/observation/experience alone it can be considered an intuitive proof.)
The latter, inductive proof, is of the type where many intuitive proofs can be combined to get the desired result. For example, we know that pouring two test tubes of milk gave us two units in the
beaker, so if the volume of the test tube is regarded as a unit volume, then n such test tubes will give us a total of n units of milk in the beaker.
Mathematical proof itself is based on natural logic and philosophy. Modern mathematics generally assumes most proofs to be absolute, not without reason, as this allows it to engage with complex
problems using purely inductive reasoning. However, the basis for all of that still remains natural philosophy and logic. This is reflected in Newton's relatively abstract mathematical treatment
of his work being titled 'Mathematical Principles of Natural Philosophy'.
Mathematical terms
and - 'A and B' means that both A is true and B is true
not - This has the usual meaning in English; note that not(not(A)) is the same as A
or - This is always what is called 'inclusive or'. That is 'A or B' means that either A is true or B is true or both.
for all - This has the usual meaning in English
there exists - This has the usual meaning in English
implies; if-then - These kinds of statements are at the centre of mathematical reasoning.
if and only if; equivalent - 'A implies B' and 'B implies A'
Theorem, Proposition, Lemma - They all mean roughly the same thing but are in decreasing order of importance.
Corollary - Something which follows easily from a Theorem but for which the statement is not entirely obvious from the statement of the Theorem.
Summation Notation
The summation notation is a convenient abbreviation for sums of several real numbers. If $a_1$,$a_2$,...,$a_n$ are reals, we define
$\sum_{k=1}^n a_k = a_1+a_2+\cdots+a_n$
The summation index k is often called a dummy index, as it can be replaced by any other letter:
$\sum_{k=1}^n a_k = \sum_{i=1}^n a_i = \sum_{j=1}^n a_j$, etc.
Sometimes it is convenient to start summation from 0 instead of 1, or from some other integral value. For instance,
$\sum_{k=0}^3 x_k = x_0+x_1+x_2+x_3$, or $\sum_{j=2}^5 j^3 = 2^3+3^3+4^3+5^3$, etc.
The most important properties of the summation notation can be summarized as
$\sum_{k=1}^n (a_k+b_k) = \sum_{k=1}^n a_k +\sum_{k=1}^n b_k$ (additive property)
$\sum_{k=1}^n (ca_k) = c \sum_{k=1}^n a_k$ (homogeneous property)
$\sum_{k=1}^n a_k = \sum_{k=0}^{n-1} a_{k+1} = \sum_{k=2}^{n+1} a_{k-1}$ (index translation)
$\sum_{k=1}^n (a_k-a_{k-1}) = a_n-a_0$ (telescoping)
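These properties are easy to sanity-check numerically; a small Python sketch:

```python
a = [3, 1, 4, 1, 5]   # plays the role of a_1, ..., a_n
b = [2, 7, 1, 8, 2]
c = 10

# Additive property: sum of (a_k + b_k) equals sum of a_k plus sum of b_k.
assert sum(x + y for x, y in zip(a, b)) == sum(a) + sum(b)

# Homogeneous property: constants factor out of the sum.
assert sum(c * x for x in a) == c * sum(a)

# Telescoping: the sum of consecutive differences collapses to a_n - a_0.
seq = [9] + a         # a_0, a_1, ..., a_n
assert sum(seq[k] - seq[k - 1] for k in range(1, len(seq))) == seq[-1] - seq[0]
```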
The fruits of string theory: The Shape of Inner Space
Physicists and mathematicians embrace in this book by Shing-Tung Yau.
by Matt Ford - Jul 14, 2012 4:03 pm UTC
String theory has its scientific origins in the late 1970s and early 1980s, but it was propelled into the full view of the public in 2000 thanks to Brian Greene's readily accessible and
scientifically accurate (if mathematically devoid) prose in The Elegant Universe. In the intervening decade, the basic ideas of string theory—that the Universe is potentially made up of little
strings—have become fairly well known to members of the public. However, with no real possible experimental test to directly probe for the existence of these tiny strings, many within and outside the
scientific community question the validity of a "theory of everything" that has no readily testable predictions.
One thing that, in my experience, constantly gets overlooked is that, in order to pursue the study of string theory, entire fields of mathematics have been revolutionized and brought back from what
was the dust bin of mathematical curiosity. Their resurgence and renewed interest exist completely independently of whether string theory is right or if it gets left alongside Ptolemaic epicycles in
the annals of scientific ideas.
This is a point that is driven at—although concretely mentioned too late, in my opinion—in The Shape of Inner Space: String Theory and the Geometry of the Universe's Hidden Dimensions by Shing-Tung
Yau and Steve Nadis. Anyone with even a passing familiarity with string theory should recognize the name Yau as the latter half of the ubiquitous Calabi-Yau manifold that appears throughout string
theory. The book recounts Yau's experience with how he came to solve the Calabi conjecture, and how his proof was later rediscovered by physicists to become the cornerstone in a new theory
researchers were playing with.
The book starts with the beginnings of geometry and its influence on physics by first discussing the Platonic solids and how the ancient Greeks viewed them in relation to the physical properties of
the real world. It then jumps forward a few thousand years, to where Yau briefly mentions his childhood in China and his budding interest in geometry. Since the book is not an autobiography, it
quickly shifts back to the mathematics that Yau has spent his career studying, including topology, algebraic geometry, and geometric analysis—the tools that laid the foundations for his later work.
Yau doesn't hold the reader's hand in the book, and he expects the reader to have an intuition capable of understanding some non-obvious mathematical and geometric ideas. For instance, the product of
two S^1 manifolds (circles in a plane) is a rectangle rolled up on itself twice (think about it for a minute). The book gets down to the core of its focus when Yau and Nadis introduce the Calabi
conjecture, the geometric problem that gave rise to Calabi-Yau manifolds. Calabi's question boils down to this:
Can a compact Kähler manifold with a vanishing first Chern class also have a Ricci-flat metric?
How this turned the world of physics on its head is hard to see at first, but Yau explains this conjecture in more physical terms. The question, while purely mathematical in nature, can be intimately
tied to Einstein's field equations for general relativity. Viewed through the lens of general relativity, this question could be rephrased as, "Could there be gravity in our Universe even if space
was totally devoid of matter?" As Yau puts it, if Calabi was correct, then curvature alone would make gravity possible, even in the absence of matter.
The next section of the book focuses on Yau's solution to the Calabi conjecture, and it raises an interesting point that highlights the difference between mathematicians and scientists/engineers.
Yau's proof of the Calabi conjecture in 1976 (published in two papers in 1977 and 1978) showed that a manifold that satisfied Calabi's requirements (compact, Kähler, Ricci-flat) exists... nothing
more. Nothing about the nature of it, what it looks like, or if a solution in the form of a realization can actually be computed. As an engineer this doesn't help me one bit, but to a mathematician,
as Yau puts it, the fact that an answer exists is the answer. There is no need to go further. It reminds me of an old joke:
In three separate locked, airtight, rooms sit a physicist, an engineer, and a mathematician. In each room is a sealed box with a key that will allow escape. Upon learning of their predicament,
all get to working on the problem at hand. The physicist works out the material properties and forces needed to exactly pry the box open with a small screwdriver, applies the minimal amount of
force needed, gets the key, and leaves. The engineer takes the brute force approach: he smashes the box to pieces, picks up the key, opens the door, and leaves. The mathematician is not heard
from for weeks, and upon going back to look for him, the engineer and physicist find him dead in the room. In front of him are pages upon pages of a mathematical proof. At the end it is written,
"a solution exists."
It wasn't until eight years later, in 1984, that physicists started realizing the importance of manifolds such as those described by Calabi and Yau in the study of string theory. As Yau explains it, it
was the need for supersymmetry that led to the mathematical concept of holonomy, which in turn led to Calabi-Yau manifolds. Once the connection was made, physicists started creating actual
realizations of Calabi-Yau manifolds and studying the properties of these spaces and whether or not they could predict the properties of the universe we see around us.
The remainder of the book discusses how the mathematics has advanced thanks to work of physicists and how physics has advanced thanks to the work of mathematicians. While the book focuses heavily on
the mathematics, it is only in one of the last chapters where Yau really hammers his key point home: no matter what happens with string theory, the mathematics developed to describe it will still be
correct and will form the foundation for future generations of mathematicians and geometers. The Shape of Inner Space proves to be an interesting read to anyone with an interest not only in string
theory, but in the mathematics that underlie this potential theory of everything.
Listing image by shapeofinnerspace.com
41 Reader Comments
1. elhombre Smack-Fu Master, in training
Thanks for this. I've owned this book for ages but could make no progress at all with it!
2. archtop Ars Centurion
So the intersection of two circles is a shape bordered by two arcs that touch at both ends. I've been trying to imagine how one could take a rectangle, "roll it up twice," and end up with that
shape. No luck here.
3. zeotherm Moderator et Subscriptor
archtop wrote:
So the intersection of two circles is a shape bordered by two arcs that touch at both ends. I've been trying to imagine how one could take a rectangle, "roll it up twice," and end up with that
shape. No luck here.
Roll the rectangle into a tube, then connect the ends of the tube. The resulting torus is the intersection of two circles (one tracing out another)
4. PeterWimsey Ars Scholae Palatinae
The book sounds interesting, but I think I need one with more handholding.
Maybe "The Shape of Inner Space For Dummies (tm)".
5. alansky Ars Scholae Palatinae
Calabi's question is like the old saw: If a tree falls in the forest, does it make a sound if there's no one there to hear it? As far as I'm concerned, the answer is, "What tree?"
If space were totally devoid of matter, there would be no way to detect the presence of gravity because there would be no matter for it to act upon. If gravity cannot be detected, there is no
empirical basis for asserting that it exists. What fun is that?
6. joshuardavis Smack-Fu Master, in training
I think you mean that the product (not intersection) of two S^1 manifolds (not S^2 manifolds), which are circles, gives a torus. An alternative way to construct a torus is to roll a rectangle up
into a cylinder, and then join the ends of the cylinder together.
Also, it's "Calabi", not "Calibi". Just thought you'd like to know. Thanks for the review.
7. abthree Smack-Fu Master, in training
The author is obviously entirely unfamiliar with the math involved in string theory, so it puzzles me why he would presume to be in the position to characterize it as living "in the dust bin,"
which indeed it does not. In addition, the author shouldn't have said "intersection," but cross-product of two circles (S^1 is just a fancy name for a circle, no need to name-drop and mention
8. emertonom Smack-Fu Master, in training
"For instance, the intersection of two S2 manifolds (circles in a plane) is a rectangle rolled up on itself twice (think about it for a minute)."
The first parenthetical is misleading. As archtop noted, the intersection of two circles in a plane is either a point, a circle, or a region bounded by two arcs. That's not what you're talking
about here, but it's what the parenthetical makes it sound like.
As JoshuaRDavis points out, an S2 manifold is a space topologically equivalent to the surface of a sphere; an S1 manifold is a circle in a plane. And yes, I think you're talking about the
product, not the intersection.
9. Ostracus Ars Legatus Legionis
The Shape of Inner Space proves to be an interesting read to anyone with an interest not only in string theory, but in the mathematics that underlie this potential theory of everything.
Kind of limits the readership right there.
10. yogaman Smack-Fu Master, in training
And as long as we're beating up the author for misdescribing the mathematics, let's also jump on the editor who doesn't know that the past tense of "lead" is spelled "led".
11. zeotherm Moderator et Subscriptor
joshuardavis wrote:
I think you mean that the product (not intersection) of two S^1 manifolds (not S^2 manifolds), which are circles, gives a torus. An alternative way to construct a torus is to roll a rectangle up
into a cylinder, and then join the ends of the cylinder together.
You are entirely correct, sorry for the brain fart, it is fixed now.
Also, it's "Calabi", not "Calibi". Just thought you'd like to know. Thanks for the review.
I am going to kick my spell checker in the nuts. That's fixed too.
12. danlock Smack-Fu Master, in training
Thank you for giving me a word to explain people who are skilled in geometry! _All_ these _years_ and I didn't know "geometer"...
ge·om·e·ter / Noun
Synonyms: geometrician, surveyor, land surveyor
1. A person skilled in geometry
2. A geometrid moth or its caterpillar
· a mathematician specializing in geometry
· caterpillars and moths in the family Geometridae; the caterpillars also are called measuring worms
(measuring worms? yes, inchworms is a synonym)
13. theseum Ars Tribunus Militum
abthree wrote:
The author is obviously entirely unfamiliar with the math involved in string theory, so it puzzles me why he would presume to be in the position to characterize it as living "in the dust bin,"
which indeed it does not.
For example, it's closely related to the proof of the Poincare conjecture, a mathematically important result independently of physical applications.
14. emertonom Smack-Fu Master, in training
I hope this didn't come across as "beating up the author"! It seemed important, because the author was saying that this was the sort of mathematical insight the reader would be required to grasp
in order to make it through the book, but the mistakes in the description made the claim not only obscure but actually inaccurate. People who weren't already familiar enough with topology to see
what he was trying to describe might have concluded that they weren't math-skilled enough to understand the claim, which could have repelled a lot of readers from the book.
For the sake of clarity, an S1 manifold is a one-dimensional space which is locally flat, but which wraps around. It's like Mario in the original Mario Brothers, walking straight across the
screen, not stopping at the right edge but just continuing straight on and reappearing on the left. Topologically it's equivalent to a circle, but using strictly local measurements, there's no
way of telling that the space is at all curved--it looks completely flat.
The product of two S1 manifolds is a two-dimensional space in which each dimension behaves like that, like the map in some old RPG games (some of the Phantasy Star or Final Fantasy games have
this, for instance). The insight mentioned in the article involves seeing how this space can be characterized.
Spoiler: show
In particular, the insight is that this is topologically equivalent to a *TORUS*. Most people who haven't thought about it sort of assume it's equivalent to a sphere. One way of seeing that this
isn't the case is to consider a sphere, and a point on its equator. From this point, a path going due north and a path going due east will both eventually wrap all the way around the sphere
and arrive back at the same point--but first they will intersect on the far side of the sphere. There's no point you can choose on the S1xS1 surface (or RPG map) where the same is true. Rather,
if you printed out the map as a rectangle, you'd need to make the entire top edge match up with the entire bottom edge, resulting in a tube...and then you'd need to make the entire right edge
(now a circle) match up with the entire left edge, requiring you to bend the tube into a loop...making a donut / inner tube / torus shape.
*I'm sorry, I can't figure out how to do the "spoiler" tag properly, and google isn't being helpful!
**Got it! Uses square brackets rather than angles.
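The Mario-style wraparound described above is easy to play with numerically: a point on S1 x S1 is just a pair of angles, and every move is taken modulo 2π. A minimal Python sketch (the 100-step grid is arbitrary):

```python
import math

TWO_PI = 2 * math.pi

def move(point, step):
    """Step around S1 x S1: each coordinate wraps like Mario
    walking off one edge of the screen and reappearing on the other."""
    theta, phi = point
    dtheta, dphi = step
    return ((theta + dtheta) % TWO_PI, (phi + dphi) % TWO_PI)

# Walk "due east" in 100 equal steps: one full trip around the first circle.
p = (0.0, 0.0)
for _ in range(100):
    p = move(p, (TWO_PI / 100, 0.0))

# Up to floating-point rounding, we are back at the start.
dist = min(p[0], TWO_PI - p[0])
print(dist < 1e-9)  # True
```

Unlike on a sphere, a "due north" walk and a "due east" walk from the same start never meet again until they return to it, which is one way to see the space is a torus rather than a sphere.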
15. theseumArs Tribunus Militum
For the sake of clarity, an S1 manifold is a one-dimensional space which is locally flat, but which wraps around.
ah, that clears it right up
maybe somebody should define a topological product? ok so a topological space is, roughly, just a shape of some kind, for example a circle. Note that I don't mean it has to be something you could
draw, it could be three dimensional, or even higher dimensional. So a solid cube is also a topological space.
With that out of the way, the product is pretty easy to define: For spaces X and Y, the product space X x Y is given by taking a copy of space X for every point in space Y, and gluing together
all the copies so that each point is glued to the same point in its neighboring copies.
So if X is a three inch long line, and Y is a beach ball, you'll put a copy of a three inch line at every point on the beach ball. After doing the first few thousand, what you'll get looks like a
porcupine. Keep doing it long enough though, and you won't be able to see the gaps between the lines any more. Eventually you just end up with a three inch thick beach ball.
If you were able to follow my vague description of how you glue the copies together, you should be able to convince yourself that X x Y = Y x X.
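For finite point sets, the gluing described above collapses to the plain Cartesian product, and X x Y = Y x X is just the coordinate swap. A quick sketch (finite sets only; a real topological product also carries the product topology, which this ignores):

```python
from itertools import product

X = {"a", "b", "c"}      # a three-point "space"
Y = {0, 1}               # a two-point "space"

XY = set(product(X, Y))  # a copy of X for every point of Y
YX = set(product(Y, X))

# Sizes multiply, and swapping coordinates is a bijection XY -> YX.
assert len(XY) == len(X) * len(Y)
assert {(y, x) for (x, y) in XY} == YX
print(len(XY))  # 6
```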
16. yogamanSmack-Fu Master, in training
I think "piling on" might have been a better characterization, but I did feel like the author deserved a bit of a thrashing for making me feel stupid for failing to understand what turned out to
be the author's wrong description.
Your comments are helpful and courteous. I apologize for my unintended slight.
But I still wish someone would fix the spelling of led.
17. Torbjörn Larsson, OMArs Scholae Palatinae
Needs more physics.
In fact, I thought the reason why supersymmetry is desirable is because it maximizes the symmetries a point particle can have: it gives a loophole for the Coleman-Mandula no-go theorem on mixing
external (spacetime) symmetries with internal symmetries. [ http://en.wikipedia.org/wiki/Coleman%E2 ... la_theorem ] And we know nature likes its laws (symmetries and symmetry breakings).
Then string theory is a natural way to implement supersymmetry, because the string degrees of freedom look like external degrees of freedom on a string world-"membrane" (the space it sweeps as
it goes).
alansky wrote:
If space were totally devoid of matter, there would be no way to detect the presence of gravity because there would be no matter for it to act upon. If gravity cannot be detected, there is no
empirical basis for asserting that it exists. What fun is that?
But that is not how standard cosmology works. General relativity is a property of the cosmology, whether or not you have passed reheating and got matter. In fact, in standard cosmology spacetime
and its curvature are observed before particles, since it is the space that inflation happens in.
I think standard cosmology makes much more sense than "quantum theories of gravity" and "theories of everything".
We know that galaxy structures are seeded by primordial fluctuations that we see in the CMB* and in galaxy cluster structure formation. But we also seem to observe via SN1986A photons that spacetime
is smooth on the Planck scale (whether it is Calabi-Yau wrapped with smaller or larger dimensions or not).
It makes so much more sense if inflation is the field that has the primordial fluctuations, since that is the only field we observe at that stage.
Of course, if something like eternal inflation is true we have pushed "blueshift" Planck energy singularities onto inflation instead, I think. But maybe it is much easier to take care of them in
a scalar field theory than in gravity when it comes to cosmology. The scalar higgs field seems nice enough. Black holes seems tough enough even when they happen within spacetime!
* Note that this is another way to observe that general relativity exists before particles exist. Inflation has blown up primordial fluctuations within spacetime, fluctuations that existed
before particles!
18. HonkyLipsWise, Aged Ars Veteran
PeterWimsey wrote:
The book sounds interesting, but I think I need one with more handholding.
Maybe "The Shape of Inner Space For Dummies (tm)".
I have a shelf full of them, and my favourite is "The trouble with physics today" by Lee Smolin. Although it is slightly dated, in that the LHC is in operation, the Higgs Boson has been discovered,
and some of the theorised alternatives to string theory discussed in the book have since been disproved, it is still the best summary of the last 100+ years of physics I have read.
http://www.amazon.com/The-Trouble-With- ... 61891868X/
19. Putrid PolecatArs Scholae Palatinae
Great review. I will buy the book tomorrow and try to get through it.
I particularly liked the rebuttal to those critics who say string theory is untestable. Not only has it broadened mathematics, but there is one other key point: it is inexpensive to study. I don't
understand what all the moaning is about when we are paying a few salaries and buying a few whiteboards.
20. Bill GrinderSmack-Fu Master, in training
Sitting here with my Pet Higgs Boson who tells me he's responsible for mass, henceforth gravity, I am reluctant to discuss with him the conjecture of gravity without mass.
I gently asked my Higgs if "Curvitrons" could be discovered that provide gravity through membrane curvature in a perfectly aligned Calabi-Yau Manifold.
He answered that we will need a super-symmetrical lead-aluminum (PbAl) collider operating at 60bTeV in nDimensional space (built in China) to even begin to approach the conjecture.
I agreed to buy him a Mercedes if he would just shut up.
21. elhombreSmack-Fu Master, in training
zeotherm wrote:
joshuardavis wrote:
I think you mean that the product (not intersection) of two S^1 manifolds (not S^2 manifolds), which are circles, gives a torus. An alternative way to construct a torus is to roll a rectangle up
into a cylinder, and then join the ends of the cylinder together.
You are entirely correct, sorry for the brain fart, it is fixed now.
Also, it's "Calabi", not "Calibi". Just thought you'd like to know. Thanks for the review.
I am going to kick my spell checker in the nuts. That's fixed too.
Great to see it is all fixed (and I now feel slightly less stupid for not being able to visualise the intersection turning into a cylinder) but as a professional journalist you really need to
drop the adolescent "brain fart", "kick in the nuts" shtick. You will likely find that the vast majority of the Ars Technica paying subscribers have not been 13 for a long time. Aside from that,
keep up the good work.
22. justageezerWise, Aged Ars Veteran
To physicists string theorists are simply rabble rousers,
To engineers the string is queer - (as it won't hold up their trousers),
A geometer can cope better, but a Buddhist knows what's what:
Just look inside and you will see, the Universe is Knot!
23. OstracusArs Legatus Legionis
elhombre wrote:
You will likely find that the vast majority of the Ars Technica paying subscribers have not been 13 for a long time. Aside from that, keep up the good work.
A reading of the Battlefront makes one wonder though.
24. Torbjörn Larsson, OMArs Scholae Palatinae
HonkyLips wrote:
I have a shelf full of them, and my favourite is "The trouble with physics today" by Lee Smolin. Although it is slightly dated, in that the LHC is in operation, the Higgs Boson has been discovered,
and some of the theorised alternatives to string theory discussed in the book have since been disproved, it is still the best summary of the last 100+ years of physics I have read.
That book has been universally panned by the physicists I know of.
And no wonder: at the core, Smolin is one of a small group of mathematicians supporting such an alternative a priori. The problem with his chosen physics is two-fold: generally, axiomatic methods
don't work in physics (say, quantization of field theories has never been axiomatized), and specifically, Loop Quantum Gravity neither obeys relativity nor can display dynamics (it can't predict a
simple harmonic oscillator to build dynamics out of). It is a non-starter.
So I wouldn't expect his book to be especially faithful to physics and its history.
@ elhombre:
I thought it was a refreshing variant! Anything is possible in love and text making, as long as you don't need to be faithful. (In the latter case, to the specific content. In the former case,
it's moral relativistic - can't help you there. (O.o))
Of course, if it becomes a habit it would eventually be boring. But has it?
25. FatAndrewArs Centurion
I got the Kindle version of this book and I have to admit that it's the best laid out maths book that I have on Kindle. (Maths books on Kindle always seem to have a million errors in them and all
manner of layout issues - too hard to proof read, I guess.)
But I just loved this book for the simple reason that it didn't water the good stuff down. It expects you to be able to follow a discussion in modern maths. It was a joy reading a science/maths
book that had lots of detail in it and didn't feel so simplified that it was skipping important stuff. I wish more popular science/maths books tried to be of this level.
The only minor problem that I did have was the book didn't really feel like it had a theme or guiding direction. Although the author avoided putting too much autobiographical material in the
book, the structure did come across as being a set of work that Yau studied through his life. Nothing wrong with that as what Yau studied was wonderful stuff and I loved hearing about it.
Makes me wish that I was a mathematician.
26. Voix des AirsArs Scholae Palatinae
Bill Grinder wrote:
Sitting here with my Pet Higgs Boson who tells me he's responsible for mass, henceforth gravity,
I know this is just a silly post, but I'm in a pedantic mood (sorry) for some reason so...
This (presently) isn't true. The Higgs field is responsible for inertial mass. Gravitational mass is (at this point at least) a different thing. They are proportional. The Higgs field and boson
are part of the Standard Model which does not incorporate General Relativity (gravity) - in fact General Relativity and the Standard Model are incompatible. Also, the Higgs field is only
responsible for the mass of some things, the vast majority of the mass of "ordinary stuff" does not result from Higgs.
Last edited by Voix des Airs on Sun Jul 15, 2012 10:42 pm
27. AatchArs Centurionet Subscriptor
elhombre wrote:
zeotherm wrote:
joshuardavis wrote:
You are entirely correct, sorry for the brain fart, it is fixed now.
I am going to kick my spell checker in the nuts. That's fixed too.
Great to see it is all fixed (and I now feel slightly less stupid for not being able to visualise the intersection turning into a cylinder) but as a professional journalist you really need to
drop the adolescent "brain fart", "kick in the nuts" shtick. You will likely find that the vast majority of the Ars Technica paying subscribers have not been 13 for a long time. Aside from that,
keep up the good work.
Oh come on, just because you don't like it doesn't mean that other people don't find it amusing. As long as it doesn't come through in the articles, and it's limited to abuse of abstract things
in the comments, then I don't see the problem.
"Professionalism" tends to be a euphemism for "politically correct", and I don't really want a politically correct science writer.
28. pferrelSmack-Fu Master, in training
One of the best reviews of a physics or math book I've ever read.
Disclaimer: don't know Matt, don't know anyone that works at Ars, etc.
29. elhombreSmack-Fu Master, in training
Oh come on, just because you don't like it doesn't mean that other people don't find it amusing.
I didn't say "all", I said only those over 13 years old.
"Professionalism" tends to be a euphemism for "politically correct", and I don't really want a politically correct science writer.
So correct grammar, punctuation and a tone in keeping with the subject matter is a form of social tyranny? Really??
30. Torbjörn Larsson, OMArs Scholae Palatinae
Voix des Airs wrote:
The Higgs field is responsible for inertial mass. Gravitational mass is (at this point at least) a different thing. They are proportional. The Higgs field and boson are part of the Standard Model
which does not incorporate General Relativity (gravity) - in fact General Relativity and the Standard Model are incompatible. Also, the Higgs field is only responsible for the mass of some
things, the vast majority of the mass of "ordinary stuff" does not result from Higgs.
Pedantry on the pedantic. =D This is mostly correct, and I believe you have to be a particle physicist to see why the Higgs field couples mostly linearly. (Not so with photons, gluons, neutrinos,
and apparently not linearly with its free boson. The Z & Ws not included, since they are composite* with some of the higgs field's bosons.)
But I don't think GR & SM are incompatible. SM is compatible with special relativity and is a quantizable gauge theory I think. GR is compatible with special relativity (well, duh) and is a gauge
theory I think. It too is quantizable by way of its Lagrangian, as expected. The graviton that is thus predicted is compatible with the SM in the sense that as a spin 2 fundamental particle it
doesn't conflict with any of the SM fundamental particles.
The problem is that neither GR nor SM are fundamental theories but known to be effective theories.
In the case of GR it shows up when it, and its quantization, breaks down at high energies. GR displays singularities and the quantization becomes meaningless by non-convergence. That isn't
supposed to happen in quantum field theories, I think. Blah blah non-renormalizable blah.
In the case of SM it shows up as an enormous finetuning. I don't know if it is true, but I once saw the claim that if you add all SM finetunings up, it is as severely finetuned as the
cosmological constant (a factor of ~ 10^120)! And probably for the same reason, I should think, while GR is coupled to cosmology so aren't SM and its vacuum CC predictions as of yet. Perhaps a
discovery of supersymmetry will start to edge particle physics towards that.
If you want to put a fine point to it, eventually SM with higgs breaks down too. If the 125-126 GeV higgs is a standard higgs it predicts a quasistable vacuum, currently @ 2 sigma. (LHC needs to
finesse SM parameters, maybe they can tell at 3 sigma sometime soon.)
So SM has a sort-of singularity "in time". Which is curious come to think about it, time is a conjugate pair with energy as in Noether's theorems I take it. So GR breaks down at high energies,
while SM w higgs breaks down at "low energies" (long times).
This is, by the way, in a handwavy way a global problem, in the sense that energy is a global property and a global symmetry in Noether's theorems (I take it), a "charge" conserved but not by
local symmetries as particle charges are. When energy measures break down because the system description does, if so locally, the physics of the global system is inconsistent. (Again, well,
duh.) Another connection with cosmology perhaps.
So for all practical concerns GR & SM are compatible and sort of akin, but rather separate. And evidently we need to marry cosmology (GR) with particle physics (SM) to make progress on both.
In the context I note that string theory, while being a good frame (theories for new particles, landscape for cosmologies), isn't strictly necessary for doing some of that.
* I don't think that is the technical term though. Hadrons like protons are composite in another and more dynamical sense, composed by dynamically interacting quarks and gluons. Z&Ws are more
like chimeras.
elhombre wrote:
I didn't say "all", I said only those over 13 years old.
That means all readers over 13 years and fewer below, which stereotypes individuals of both groups.
[Disclaimer: I happened to like Ford's humor. And I don't think stereotyping is always useful.]
31. elhombreSmack-Fu Master, in training
elhombre wrote:
I didn't say "all", I said only those over 13 years old.
That means all readers over 13 years and fewer below, which stereotypes individuals of both groups.
[Disclaimer: I happened to like Ford's humor. And I don't think stereotyping is always useful.]
Stereotyping is very important ! It lets you dislike whole groups of people without having to go to the trouble of getting to know them first. It is very efficient.
32. TupsterWise, Aged Ars Veteran
In certain mathematical philosophies, proving that something exists requires you to be able to show how it is built. Just proving that something can't not exist is seen as so much hocus pocus. To
prove that something exists you always have to show how to build it.
33. Torbjörn Larsson, OMArs Scholae Palatinae
elhombre wrote:
Stereotyping is very important ! It lets you dislike whole groups of people without having to go to the trouble of getting to know them first. It is very efficient.
When we seem to agree. Efficient when applicable, which is not always.
For example, I can dislike people (no one here, I rush to say) without stereotyping them. *That* is efficiency for ya'! =D
[ By the way, where are the instructions for coding in the editor? Trying this: [img src="http://arstechnica.com/civis/images/smilies/smile.png" alt=":)" title="Smile"], this: <img src="http://arstechnica.com/civis/images/smilies/smile.png" alt=":)" title="Smile"> , and C&P:
OK, but that smiley isn't what I wanted to emote. (O.o) ]
Last edited by Torbjörn Larsson, OM on Mon Jul 16, 2012 8:27 am
34. Torbjörn Larsson, OMArs Scholae Palatinae
Tupster wrote:
In certain mathematical philosophies, proving that something exists requires you to be able to show how it is built. Just proving that something can't not exist is seen as so much hocus pocus. To
prove that something exists you always have to show how to build it.
- As you say, constructivism is a philosophy. Not a very good one either, because using idealization (say, infinity instead of unboundedness) works better in both math and physics, as the results
cover a larger space.
Besides, on a physical level realism is built into both classical and quantum mechanics, which is what string theory lives in. Action-reaction and observation-observables embody the old Johnson
maxim of "when I hit a stone, it hits back", or in other words constrained reaction on constrained action. Anything observable with these theories exists by such a definition (when the
observations and the theories constraining them shake out so competitors are safely eliminated).
You may or may not have a problem with what it means to "exist" if it is an emergent phenomenon. A table isn't a solid; most of it is vacuum between atoms. It only looks and behaves like a solid
in an idealized sense. But it doesn't devalue the observable definition.
- String theory has an embarrassment of constructions. That is why the landscape, while being finite, is simply too large to efficiently cut down by math alone (I believe). Cosmology may help.
35. DarineSmack-Fu Master, in training
Geometry to this level I really struggle with. Part of my poor brain is always trying to visualise things, while another part is pointing out that it's often impossible to do that, and that it
only makes sense through the maths... which I don't know enough of
However, articles like this are food for thought, and generally lead me on a trawl through Wikipedia and the like in order to learn more about the subject. Thanks Ars, for keeping my grey matter
ticking over.
elhombre wrote:
...as a professional journalist you really need to drop the adolescent "brain fart", "kick in the nuts" shtick. You will likely find that the vast majority of the Ars Technica paying subscribers
have not been 13 for a long time. Aside from that, keep up the good work.
I'm 35, and heartily endorse the administration of virtual nut-kicking to intangible objects of frustration.
Of course, I'm not a paying subscriber (sorry Ars!) so maybe I just don't count.
36. hpmmfsArs Centurion
zeotherm wrote:
archtop wrote:
So the intersection of two circles is a shape bordered by two arcs that touch at both ends. I've been trying to imagine how one could take a rectangle, "roll it up twice," and end up with that
shape. No luck here.
Roll the rectangle into a tube; the rest is in the spoiler:
Spoiler: show
then connect the end of the tube. The resulting torus is the intersection of two circles (one tracing out another)
maybe my problem with this is language and not the geometry. Since I learned most of my math (as little as it is) and science in Spanish, I find that sometimes the words used in English, even if
they are normally familiar to me, don't make sense. Can you instead show a graphic of this?
I just am unable to use the words you have used to create a picture.
37. elhombreSmack-Fu Master, in training
I'm 35, and heartily endorse the administration of virtual nut-kicking to intangible objects of frustration.
Of course, I'm not a paying subscriber (sorry Ars!) so maybe I just don't count.
You are a potential subscriber so of course you count!
If the article had been about "Xtreme body piercing to the Max", or "10 hot tips for your orgasm workout" then the adolescent genitalia references would have been appropriate, but with string
theory it is just jarring .. in my humble opinion.
38. EvolutionArs Praefectus
When "nuts" and "fart" come out of an educated individual, people would always say, "Wow, you're cool, way to go, man!" But when it comes out of a homeless, drug-addicted low-life who has not
taken a shower in years, it always sounds as disgusting as he looks: "You're disgusting, get a life, asshole!"
Don't get me wrong, love to read about "nuts" and "fart". :-)
<Never thought I would comment on this science thread. But here I am. I've found a hole for me to get in.>
39. vw_fan17Ars Scholae Palatinae
I guess in an ideal (topological?) world, rolling up a rectangle and then connecting the ends into a torus works, but I just keep seeing a scrunched-up inner ring and stretched/torn outer ring,
since you can't take the same amount of matter and (easily) make it fold nicely into two different circumferences...
(i.e. - try it with newspaper, and see if you can get anything remotely smooth... guess I'm just an engineer :-)
Matt Ford / Matt is a contributing writer at Ars Technica, focusing on physics, astronomy, chemistry, mathematics, and engineering. When he's not writing, he works on realtime models of large-scale
engineering systems.
Math Forum Discussions
Topic: How precisely can a right angle be measured?
Replies: 9 Last Post: Apr 6, 2013 3:48 PM
Re: How precisely can a right angle be measured?
Posted: Apr 6, 2013 3:48 PM
On Wednesday, April 3, 2013 5:56:46 AM UTC-5, Tom Potter wrote:
> "Poutnik" <poutnik@privacy.invalid> wrote in message news:MPG.2bc53fb062451f1810a7@news.eternal-september.org...
> > Absolutely Vertical posted Tue, 02 Apr 2013 09:25:01 -0500
> > > > I know how to create an ORTHOGONAL standard
> > > > that is accurate to 1x10^10 or better,
> > >
> > > why do you think such a standard is required?
> >
> > I suppose it would be very challenging
> > wrt material, thermal and mechanical stability.
> >
> > Regarding wavelength multiples as Will mentioned....
> >
> > If Tom wants 10^10 precision ( accuracy is different story ),
> > the requirement is roughly
> > 1m versus 0.1 nm - 1 small atom, or
> > near 6 km versus 579 nm of sodium D line.
> >
> > Or multiple of above for multiples of WLs.
> >
> > --
> > Poutnik
>
> Actually you can establish an extremely precise ORTHOGONAL standard for very little money.
>
> --
> Tom Potter
> http://the-cloud-machine.tk
> http://tiny.im/390k
Tom Potter, we have to deal with these dysfunctionals from New Mexico!!
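Poutnik's figures above can be sanity-checked with the small-angle approximation: the angular error is roughly the transverse offset divided by the baseline, so 0.1 nm over 1 m and one sodium-D wavelength over about 6 km both come out near 1 part in 10^10. A quick check in Python:

```python
def angle_error(offset_m, baseline_m):
    # Small-angle approximation: angle (radians) ~ offset / baseline.
    return offset_m / baseline_m

print(angle_error(0.1e-9, 1.0))     # 1e-10
print(angle_error(579e-9, 6000.0))  # ~9.65e-11
```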
Date Subject Author
4/3/13 Re: How precisely can a right angle be measured? Tom Potter
4/6/13 Re: How precisely can a right angle be measured? HOPEINCHRIST
4/6/13 Re: How precisely can a right angle be measured? HOPEINCHRIST
4/3/13 Re: How precisely can a right angle be measured? Tom Potter
4/4/13 Re: How precisely can a right angle be measured? Tom Potter
4/5/13 Re: How precisely can a right angle be measured? Guest
4/6/13 Re: How precisely can a right angle be measured? Tom Potter
1: input: Let be the task currently being executed, and
be the task wants to preempt , current time be ,
be the conditional expected utility density of
at time , be the expected utility density of ,
and are the expected execution time of and ,
3:When a new task arrives or it is the preemption checking
4: If then
5: Check what is ’s worst case finish time;
6: If then
7: Preemption not allowed;
8: else
9: Preemption allowed;
10: end if
11: end if
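The listing above lost its mathematical symbols somewhere in extraction, but the control flow survives: preemption is considered only when the newcomer's expected utility density beats the running task's, and it is vetoed if the preempted task's worst-case finish time would violate its constraint. A hedged Python sketch of that shape — every name here (`utility_density`, `worst_case_finish`, `deadline`) is a placeholder, not the paper's notation:

```python
class Task:
    """Toy stand-in for the paper's task model (attributes assumed)."""
    def __init__(self, utility_density, worst_case_finish, deadline):
        self.utility_density = utility_density      # expected utility density
        self.worst_case_finish = worst_case_finish  # if preempted, then resumed
        self.deadline = deadline                    # latest acceptable finish

def allow_preemption(current, new):
    # Step 4: only consider preempting if the newcomer is more valuable.
    if new.utility_density > current.utility_density:
        # Steps 5-9: veto if the preempted task would then miss its constraint.
        if current.worst_case_finish > current.deadline:
            return False  # preemption not allowed
        return True       # preemption allowed
    return False

running = Task(utility_density=1.0, worst_case_finish=5.0, deadline=10.0)
arriving = Task(utility_density=2.0, worst_case_finish=3.0, deadline=4.0)
print(allow_preemption(running, arriving))  # True
```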
How much difference is there between drafts?
by Tony Villiotti
April 15, 2013
This is the time of year the experts tell us how good or how bad this year’s draft class is, which playing positions have a lot of depth, and so forth. By the time the outcome of the draft class can
truly be measured, though, the class has lost its identity and any discussion centers around anecdotal situations. (e.g., the 2004 QB class).
In this article DRAFTMETRICS sets out to measure the results of past draft classes in order to determine how results vary from year to year. The 1993 through 2006 draft classes were reviewed for this
purpose. The analysis ends in 2006 because that class has pretty much taken form by that time and players have had a reasonable amount of time to achieve the five-year milestones that DRAFTMETRICS
often employs. Classes after 2006 are still evolving and it may be at least somewhat premature to conduct a full review of their results, at least in regards to the five-year measures. The following
table summarizes the draft results by class.
This table shows that some years are indeed better than others. It is somewhat interesting that the gap between best and worst for five-year starters is lower, on a percentage basis, than any other
measure except for three-year careers.
The fluctuations are more pronounced when viewed by playing position. The following table shows the number of five-year starters by playing position from each draft class. Every position shows
fluctuations where the best year has at least double the number of five-year starters as the worst year. For example, the 1994 draft class produced only six five-year starters at defensive back while
the 1998 draft class ended up with 16 five-year starters.
Another way to compare the draft classes is by number of games started. This permits DRAFTMETRICS to track results on an intermediate basis and even the most recent years can be compared to other
years. The following table shows the cumulative number of games started for each draft class after each season (limited to 10 years for presentation purposes). The table indicates, for example, that
players from the 2003 draft class started a total of 703 games in their rookie season, a total of 1857 games for their first two seasons combined, and so on down the line.
This analysis supports the contention that there are good and bad years. The table also allows the reader to see which years are shaping up as good years before the final tally is in. 2006, for
example, looks to be a very strong year.
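The cumulative totals in the table are just running sums of the per-season start counts. A one-line reconstruction using the article's 2003-class figures (703 starts as rookies, 1857 cumulative after two seasons, so 1154 in year two):

```python
from itertools import accumulate

per_season = [703, 1154]  # 2003 class: rookie year, then second year
cumulative = list(accumulate(per_season))
print(cumulative)  # [703, 1857]
```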
Tony is the founder of DRAFTMETRICS.COM. He can be e-mailed at draftmetrics@gmail.com and followed on Twitter @draftmetrics.
Posts by
Total # Posts: 274
Melind wants to paint the exterior lateral surface of her house. It is nine feet high and has a perimeter of 140 feet. Each gallon of paint covers 300 square feet. How many one-gallon cans of paint
does she need to buy to paint her house?
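For the paint question above, the lateral surface is just perimeter times height, and cans are bought whole, so the answer rounds up:

```python
import math

area = 140 * 9                # perimeter (ft) x height (ft) = 1260 sq ft
cans = math.ceil(area / 300)  # each gallon covers 300 sq ft
print(area, cans)  # 1260 5
```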
Assume you are working on a house the length of the house is 50 feet, the width is 30 feet, and from the ground to the gable is 10 feet. The height of the triangle prism on top is 6.5 feet. If you
are going to shingle the roof of this house, how many square feet should you pla...
Not understanding how to answer this You pore over the present evaluation: two intoxicated college females, one DUI and the other trying to run from police officers, both with no prior arrest, and
both nursing majors at the local college. As judge, you must decide whether to s...
Two stones are thrown vertically up at the same time. The first stone is thrown with an initial velocity of 11.5 m/s from a 12th floor balcony of a building and hits hits the ground after 4.6
seconds. With what initial velocity should the second stone be thrown from a 4th floo...
A gas occupying a volume of 664 mL at a pressure of 0.970 atm is allowed to expand at constant temperature until its pressure reaches 0.541 atm. What is its final volume?
A gas occupying a volume of 664 mL at a pressure of 0.970 atm is allowed to expand at constant temperature until its pressure reaches 0.541 atm. What is its final volume?
language and art
How to write a character sketch?
7th Grade Language Arts
yes 1( C 2( C 3( B
Social Studies
All I know is that it is Harrison. Can't remember why but it is.
Just use (k*|q1|*|q2|)/r^2 It should come out as 1,638,427.5N
a pizza has a diameter of 18. what is the best estimate of the area of the pizza?
can anyone explain how they got their answer?
1.5%h=12. What is h?
A ferry boat is 6.2m wide and 7.1 m long. When a truck pulls onto it, the boat sinks 6.02 cm in the water. What us the weight of the truck? The acceleration of gravity is 9.81 m/s2. Answer in units
of N
College Algebra and Finance
Please help! Just need the answer... A lender gives you a choice between the following two 30-year mortgages of $200,000: Mortgage A: 6.65% interest compounded monthly, one point, monthly payment of
$1283.93 Mortgage B: 6.8% interest compounded monthly, no points, monthly paym...
A lender gives you a choice between the following two 30-year mortgages of $200,000: Mortgage A: 6.65% interest compounded monthly, one point, monthly payment of $1283.93 Mortgage B: 6.8% interest
compounded monthly, no points, monthly payment of $1303.85 Assuming that you can...
The difference between the two numbers is 14. The LCM of the two numbers is 16. Find the two numbers
you're completely wrong
A car traveling in a straight line has a velocity of +3.3 m/s. After an acceleration of 0.71 m/s2, the car's velocity is +8.8 m/s. In what time interval did the acceleration occur?
Find the least common multiple of the monomials. 5a squared, and 16a cubed. Also 17b squared and 3b cubed. Could you show me how to do these. Thank you, Sandy
what is 2056 less than 22010
Use breaking apart method to solve 5 times 5, I'm not sure what this means
A rectangle has a length that is 5 feet more than its width. The area of this rectangle is 84 square feet. What is the width and length of this rectangle?
I don't understand this question: How would you explain to your friend why the quotient of x*6 and x^2 is x^6? Remember your friend is questioning the rule, so you just cannot say it is the rule.
how do i factor the following equation? 3x^2=48
List the conditions when the sum of a positive number and several negative numbers will be positive. Include an example that represents your answer.
prealgebra/ check answer please
John who is an amateur golfer spends 3 hours every day practicing golf. What percentage of his waking hours does John spend practicing golf if he sleeps 9 hours a day? 24-9=15 3/15= 0.2 0.2 x 100 =
20% spent practicing
prealgebra/ check answer please
Thanks for checking :)
prealgebra/ check answer please
What single discount is equivalent to two successive discounts of 15% and 7%? x= 100 [1-(.85)(.93)] = 100 (1-.79) =100 (.21) =21%
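The reasoning above generalizes: successive percentage discounts multiply as retained fractions, so they never simply add. A small sketch (Python, purely for illustration):

```python
def combined_discount(*discounts):
    """Single discount (as a percentage) equivalent to applying
    several successive percentage discounts in turn."""
    retained = 1.0
    for d in discounts:
        retained *= 1 - d / 100  # fraction of the price kept after each discount
    return round(100 * (1 - retained), 2)

print(combined_discount(15, 7))  # 20.95, i.e. about 21%
```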
5 is 20% of what number? I am so lost
11 - 8 ÷ (1/3 + 1/5) I need to know how to to this please show me. Thank you
U.S. History
How would I persuade someone that the Vietnam war was the most devisive war in American History?
Express 55 minutes as a fraction of 11/3 hours. Please show work
Express 55 minutes as a fraction of 11/3 hours. please this is doen.
Subtract the sum of 81.6 and 4.07 from 129.4 Please show how its done.
the life of light bulbs is distributed normally. The variance of the life time is 225 and the mean lifetime of a bulb is 590 hours. Find the probability of a bulb lasting for at most 603 hours. Yeah,
WTF??? Why me!??!
I need some information on China's modern genocides and crimes against humanity during the 20th century.
Calculus grade 12
a) 2cos2x b) 2cosx*cosx - 2sinx*sinx = 2cos2x c) double angle identity
Chem Lab
Thank you, it should say 'used' not 'sued'.
Chem Lab
Can the NH3/NH4Cl system be sued to prepare a buffer of a pH 9.5? why or why not? Can it prepare a buffer of pH 5? There were two buffers, 0.2 M and 0.1M.
Tell me about Forced Sterilization and its use in the U.S. What was the total number of people in the U.S. sterilized and the reasons why. How was this later imitated by the Nazis?
Identify one harmful activity that humans do that might change the percentage of water that is available as a resource? Predict the possible effects this activity could cause to the food chains in
that area, the changes in the number and types of organisms, and on the populati...
Social Studies
What continent is located south of Europe in Medieval Europe?
given tanx= (1/2) and cscx<0, find cos2x
7th grade math
Toby is making a model of a battlefield. The actual area is 11 miles by 7.5 miles. He wants to put the model on a 3.25 ft by 3.25 ft square table with at least 3 inches on each side between the model
and table edges. What is the largest scale he can use?
7th grade math
Toby is making a model of a battlefield. The actual area is 11 miles by 7.5 miles. He wants to put the model on a 3.25 ft by 3.25 ft square table with at least 3 inches on each side between the model
and table edges. What is the largest scale he can use?
I need help comparing and contrasting rocks and minerals.
Why did the chicken cross the road?
There are two similar triangles. The angles are: from top to bottom left: DE, from bottom left to right: EF, And from Top to bottom right corner: DF. DE: 10.32 cm.EF: 8.88 cm.FD 12.24 cm. The angles
are: D,top,45 Degrees. E, bottom left corner, 79 Degrees. And F, Bottom right ...
grade 12 math
it is f(x)=x^3+8/x^2+x-6
grade 12 math
Given the function f(x)= x^3+8/x^2+-6, determine the eqaution of the asymptotes and state the end behaviours of the graph near the asymptotes.
When a sound wave passes from air into a solid object, which of the following wave properties change? -Frequency -Wave Speed -Amplitude -Wavelength all that apply
When a sound wave passes from air into a solid object, which of the following wave properties change? -frequency -wave speed -amplitude -wave length Can be more than one
Multiply and simplify (8v+4)(4x^2+9x+4)
A football bobble-head figure sits on your car dashboard. The 83-g head of the figure is attached to a spring with a spring constant of k = 21 N/m. You are driving along a straight road when you
notice a series of hoses lying across the road, perpendicular to your motion, that...
A ladder attached to the side of a ship has ten rungs, each one foot apart, visible above the water. If the water level rises 1.5 feet per hour, how many rungs will be visible after 3 hours?
A ball is dropped from the initial height h1 above the concrete floor and rebounds to a height of equal to 67% of the initial height h1. What is the ratio of the speed of the ball right after the
bounce to the speed of the ball just before the bounce? This ratio is called coef...
A Ferris wheel has a circumference of 93 m and it completes one rotation in 2.3 minutes without stopping. What is the percentage change in apparent weight (=weight difference/weight = W/W) of a
passenger between the highest and the lowest positions on this Ferris wheel?
so then what does f equal and what does a equal
The limit as x approaches 0 of ((4^x)-1)/x The limit above represents the derivative of some function f(x) at some number a. Find f and a.
The limit lim as x approaches 1 of (x^12-1)/(x-1) represents the derivative of some function f(x) at some number a. Find f and a.
Find the value of the constant C that makes the following function continuous on (-infinity, +inf) f(x)={cx+2 if x is in (-inf,2] cx^2-2 if x i in (2, inf)
Can you help me through this one? I don't want just the answer: A ball is thrown straight up and rises to a maximum height of 24 m above the ground. At what height is the speed of the ball equal to
half of its initial value? Assume that the ball starts at a height of 1.9 m...
I'm looking for information about the Black Death Plague without using Wikipedia as much as possible. Some questions that I need to keep in mind and answer are: How is religion relevant to the
plague? Was there a cure? How did it start?
Top fuel drag racer can reach the maximum speed of 304 mph at the end of the 1/4-mile (402 m) racetrack. (a) Assuming that the acceleration is constant during the race, calculate the average value of
the acceleration of the top fuel car. (b) What is the ratio of that accelerat...
Consider two masses m1 and m2 connected by a thin string. Assume the following values: m1 = 4.18 kg and m2 = 1.00 kg. Ignore friction and mass of the string. 1) what is the acceleration of the 2
masses? 2)What should be the value of mass m1 to get the largest possible value of...
A ball is thrown directly upward with an initial velocity of 14 m/s. If the ball is released from an initial height of 2.8 m above ground, how long is the ball in the air before landing on the
ground? Ignore air drag.
you my friend are a physics magician
In a tug-of-war between two athletes, each pulls on the rope with a force of 365 N. What is the absolute value of the horizontal force that each athlete exert against the ground?
If a car is travelling with a constant speed of 5.53 m/s in the Westward direction, what is the resultant force acting on it? Ignore friction
Which of the following represent examples of the following situation: An object that starts at the origin at t=0 and at some time later its displacement from origin is zero but the velocity is not
zero: 1)An object in a uniform circular motion around a circle passing through o...
so what you're saying is: forceonblock3-mu(m3)=m3*a forceonblock3-0(m3)=... forceonblock3=...
what does u stand for in the above equations?
Three blocks rest on a frictionless, horizontal table, with m1 = 9 kg and m3 = 16 kg. A horizontal force F = 104 N is applied to block 1, and the acceleration of all three blocks is found to be 3.3 m
/s2. 1) Find m2 2)What is the normal force between 2 and 3?
-3 is correct
Which of the following represent examples of the following situation: An object that starts at the origin at t=0 and at some time later its displacement from origin is zero but the velocity is not
zero: 1)An object in a uniform circular motion around a circle passing through o...
Vectors (Physics/Math)
Vector A has a magnitude of 18 (in some unspecified units) and makes an angle of 25 with the x axis, and a vector B has a length of 24 and makes an angle of 70 with the x axis. Find the components of
the vector C in the following: (a) C = A + B Cx = Cy = (b) C = A - B Cx= Cy=
Vector A has a magnitude of 18 (in some unspecified units) and makes an angle of 25 with the x axis, and a vector B has a length of 24 and makes an angle of 70 with the x axis. Find the components of
the vector C in the following: (a) C = A + B Cx = Cy = (b) C = A - B Cx= Cy=
duck= d gull= g pelican= p dgp, dpg, gdp, gpd, pdg, pgd
A cheetah runs a distance of 101 m at a speed of 24 m/s and then runs another 101 m in the same direction at a speed of 38 m/s. What is the cheetah's average speed?
causal in the scientific sense..
what is causal observation ?
What is casual observation and why is it different from scientific disipline?
What is the difference between scientific disipline and casual observation?
(a) A graph of y = L(x) is shown in the figure. As the weight of the plane increases, what can be said about the length of the runway required?
us history
doing a final review crossword and i cant figure these two out 1. Planned_____; creating products that would wear out over a given period of time ( _ _ _ _ _ _ n _ _ _ _ _ _) 13 letters 2. a clear
indication of nation-wide voter support ( _ a _ _ _ _ e) 7 letters please help i...
what is the approximate measure of angle A to the nearest tenth of a degree
Everyone i need your attention! I'm having a HUGE PE test on Monday, and I dont know how many players are on a basketball team!!!!! HELP??????
Heart Rate An athlete starts running and continues for 10 seconds. The polynomial calculates the heart rate of the athlete in beats per minute t seconds after beginning the run, where . (a) What is
the athlete s heart rate when the athlete first starts to run? (b) What is...
Consider the following electochemical cell. Pt | Fe3+ (0.0600 M), Fe2+ (6.50e-5 M) || Sn2+ (0.0620 M), Sn4+ (3.90e-4 M) | Pt (a) Calculate the voltage of the cell, Ecell, including the sign.
What is the basic principle that can be used to simplify a polynomial? What is the relevance of the order of operations in simplifying a polynomial?
Please help me! How much energy is released when125 grams of molten iron solidifies?
A breathing mixture used by deep-sea divers contains helium, oxygen, and carbon dioxide. What is the partial pressure of oxygen at 1.2 atm if PHe = 0.98 atm and PCO2 = 0.04 atm? a) 1.02 atm b) 0.12
atm c) 0.94 atm d) 0.18 atm
Explain why the addition method might be preferred over the substitution method for solving the system 2x-3y=5 5x+2y=6 Am lost on these.Please help
When solving a system of equations by the addition method, how do you know when the system has no solution?
you have75.0ml of 2.5 M Solution of Na2CrO4(aq). You also have 125ml of a 1.74M solution of AgNO3(aq)Calculate the concentration of NO3 when the two solutions are added together
If the density of a piece of wood and the density of water are exactly the same, how much of the piece of wood remains above the water if it has a thickness of 5cm?
A block moving to the right at 12 m/s is placed on a surface having a friction coefficient of 0.3. Find the time in seconds it takes for the block to stop.
How do I attenuate a WAV file by a given decibel value?
up vote 7 down vote favorite
If I wanted to reduce a WAV file's amplitude by 25%, I would write something like this:
for (int i = 0; i < data.Length; i++)
data[i] *= 0.75f;
A lot of the articles I read on audio techniques, however, discuss amplitude in terms of decibels. I understand the logarithmic nature of decibel units in principle, but not so much in terms of
actual code.
My question is: if I wanted to attenuate the volume of a WAV file by, say, 20 decibels, how would I do this in code like my above example?
Update: formula (based on Nils Pipenbrinck's answer) for attenuating by a given number of decibels (entered as a positive number e.g. 10, 20 etc.):
public void AttenuateAudio(float[] data, int decibels)
{
    float gain = (float)Math.Pow(10, (double)-decibels / 20.0);
    for (int i = 0; i < data.Length; i++)
    {
        data[i] *= gain;
    }
}
So, if I want to attenuate by 20 decibels, the gain factor is .1.
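That factor can be sanity-checked in a couple of lines; the sketch below mirrors the C# method in Python just to keep the arithmetic self-contained:

```python
def attenuation_gain(decibels):
    """Gain multiplier that reduces amplitude by `decibels` dB
    (positive argument = attenuation), mirroring the C# method above."""
    return 10 ** (-decibels / 20)

samples = [0.5, -0.25, 1.0]
gain = attenuation_gain(20)
attenuated = [s * gain for s in samples]
print(gain)  # 0.1
```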
audio wav
@sth: how dare you edit my question? Just for that, I'm giving you a mess of badges and a "k" after your rep. – MusiGenesis Jul 19 '09 at 2:29
2 :) – sth Jul 19 '09 at 4:11
4 Answers
I think you want to convert from decibel to gain.
The equations for audio are:
decibel to gain:
gain = 10 ^ (attenuation in db / 20)
or in C:
gain = powf(10, attenuation / 20.0f);
The equations to convert from gain to db are:
attenuation_in_db = 20 * log10 (gain)
Does he want voltage gain or power gain for the conversion? I can never remember. – Nosredna Jul 19 '09 at 2:42
2 I think it's voltage gain. Nils seems to be right. Source: sengpielaudio.com/calculator-gainloss.htm – Nosredna Jul 19 '09 at 2:43
Thanks, Nils. I always learn better from a good formula than anything else. – MusiGenesis Jul 19 '09 at 2:55
1 Digital audio is a representation of sound pressure (just like analog audio's voltage is a representation of sound pressure) so you want 20. – endolith Jul 29 '09 at
If you just want to adjust some audio, I've had good results with the normalize package from nongnu.org. If you want to study how it's done, the source code is freely available. I've also used wavnorm, whose home page seems to be out at the moment.
This is actually for a software synthesizer, for normalizing notes at different pitches. The normalize package in your link just uses RMS, which doesn't change significantly as I vary
the pitch (I have no idea what wavnorm does). I've found that attenuating the volume of a note by about 5 decibels (using the function from Nils) per octave above a base pitch results in
a constant perceived volume throughout the range of a scale. – MusiGenesis Jul 21 '09 at 2:53
One thing to consider: .WAV files have MANY different formats. The code above only works for WAVE_FORMAT_FLOAT. If you're dealing with PCM files, then your samples are going to be 8, 16, 24 or 32 bit integers (8 bit PCM uses unsigned integers from 0..255, 24 bit PCM can be packed or unpacked (packed == 3 byte values packed next to each other, unpacked == 3 byte values in a 4 byte package)).
And then there's the issue of alternate encodings - for instance in Win7, all the windows sounds are actually MP3 files in a WAV container.
It's unfortunately not as simple as it sounds :(.
Sorry, "WAV file" was just shorthand for sampled audio data, generically. I know all about WAV and MP3 file formats, although I have to say I have never encountered 24bit or 32bit PCM
files in the wild. – MusiGenesis Jul 21 '09 at 2:42
I'm trying to guess what the purpose of a 24 bit unpacked PCM WAV file would be. Meant to record output from a 24-bit mixer, I would guess? – MusiGenesis Jul 21 '09 at 2:58
I just finished reading a good chunk of your blog. As an audio programmer, I bow before you. :) – MusiGenesis Jul 21 '09 at 3:09
And obviously, I know less about the WAV file format than I thought. – MusiGenesis Jul 21 '09 at 3:23
MusiGenesis: I believe the point of the unpacked 24 bit PCM is ease of use - 24 bits gives you a LOT more precision than 16 bits but it's far easier to deal with the data as DWORD values instead of cracking the stream 3 bytes at a time. But you're right, there are relatively few files that are WAV files encoded with 24bit PCM. – Larry Osterman Jul 21 '09 at
Oops I misunderstood the question… You can see my python implementations of converting from dB to a float (which you can use as a multiplier on the amplitude like you show above)
and vice-versa
In a nutshell it's:
10 ^ (db_gain / 10)
so to reduce the volume by 6 dB you would multiply the amplitude of each sample by:
10 ^ (-6 / 10) == 10 ^ (-0.6) == 0.2512
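The thread's recurring question (see the comments under the accepted answer) is whether to divide by 10 or 20. Digital sample values are amplitudes, so scaling them uses the /20 rule; the /10 rule converts power ratios, which is what the 0.2512 figure above is. A small sketch making the difference concrete (the power ratio is always the square of the amplitude ratio):

```python
def amplitude_ratio(db):
    """Linear amplitude multiplier for a level change of `db` decibels."""
    return 10 ** (db / 20)

def power_ratio(db):
    """Linear power multiplier for the same level change."""
    return 10 ** (db / 10)

# -6 dB roughly halves the amplitude but quarters the power.
print(round(amplitude_ratio(-6), 4))  # 0.5012
print(round(power_ratio(-6), 4))      # 0.2512
```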
BCS and the formation of Copper Pairs
In my understanding, "deformation of lattice" is a classical description and may not be used in quantum mechanical explanation.
The phrase “lattice deformation” does not imply a system that is fully classical. The nuclei part of the lattice is much more massive than the valence electrons in the chemical bonds that hold the
lattice together. Therefore, one can use an approximation of quantum mechanics where the nuclei and core electrons of the lattice move according to classical dynamics, but the electrons are behaving
So an "explanation" can have "quantum" electrons and "classical" phonons. However, the composite particle called a Cooper pair is governed mostly by quantum mechanics. The Cooper pair is a quantum
mechanical object because it is so large compared to the spacing between the Cooper pairs. Basically, the size of the Cooper pair is an indeterminacy of its position. So if you want to treat the
Cooper pair as a quasiparticle, you have to take the quantum mechanics into account. So we have a "quantum" Cooper pair.
What you are possibly saying is that perturbation techniques aren't entirely valid under the conditions of “lattice deformation”. However, there are other approximations. The adiabatic approximation
and the WKB approximation are valid under the conditions of a “slowly moving” lattice.
The electrons are moving far faster than the nuclei under these conditions. The indeterminacy in the position of the nuclei is much smaller than the indeterminacy of the valence electrons.
Therefore, one can use “hybrid” mechanics where nuclei with core electrons are “classical particles” and both conduction electrons and valence electrons are “quantum waves”.
The distorted lattice, consisting of displaced nuclei, can be pictured as generating an electric field. A higher concentration of positive charges (the nuclei) is the source of an electric field that
is moving outward from the points of greatest concentration of nuclei. The conduction electrons are in a potential that is caused by this electric field. So the electrons can “move” coherently with
the deformed lattice.
Two electrons can move together toward the region where the positive charge density is greatest. However, the electrons are also pulling at the nuclei. They are making the regions of high density
nuclei less dense. Although this is a self consistent picture classically, one can get more accuracy by assuming that the electrons at least are quantum mechanical.
So basically two electrons are interacting through the lattice deformation, otherwise called a phonon.
Here is a link that claims that variational and WKB approximation are valid with Cooper pairs. You can assume that the electrons are quantum mechanical.
“An example of this phenomenon may be found in conventional superconductivity, in which the phonon-mediated attraction between conduction electrons leads to the formation of correlated electron pairs
known as Cooper pairs. When faced with such systems, one usually turns to other approximation schemes, such as the variational method and the WKB approximation. This is because there is no analogue
of a bound particle in the unperturbed model and the energy of a soliton typically goes as the inverse of the expansion parameter. However, if we "integrate" over the solitonic phenomena, the
nonperturbative corrections in this case will be tiny; of the order of exp(-1/g) or exp(-1/g²) in the perturbation parameter g. Perturbation theory can only detect solutions "close" to the unperturbed solution, even if
there are other solutions (which typically blow up as the expansion parameter goes to zero).”
“ The binding energy of superconducting electrons dominates the superconducting transition temperature in the corresponding material. Under an electric field, superconducting electrons move
coherently with lattice distortion wave and periodically exchange their excitation energy with chain lattice, that is, the superconducting electrons transfer periodically between their dynamic bound
state and conducting state, so the superconducting electrons cannot be scattered by the chain lattice, and supercurrent persists in time. Thus, the intrinsic feature of superconductivity is to
generate an oscillating current under a dc voltage.”
You mustn’t think that the Cooper pairs are individual particles. Actually, they are squashed together. Therefore, position of each electron is highly uncertain. So even if you think of the lattice
deformation as classical, the electrons are not classical.
“The binding energy of a Cooper pair turns out to be small, 10^-4 to 10^-3 eV, so low temperatures are needed to preserve the binding in spite of the thermal motion. According to Heisenberg’s Uncertainty
Principle a weak binding is equivalent to a large extension of the composite system, in this case the above-mentioned d = 100 to 1000 nm. As a consequence, the Cooper pairs in a superconductor overlap
each other. In the space occupied by a Cooper pair there are about a million other Cooper pairs. Figure 22 gives an illustration. The situation is totally different from other composite systems like
atomic nuclei or atoms which are tightly bound objects and well-separated from another. The strong overlap is an important prerequisite of the BCS theory because the Cooper pairs must change their
partners frequently in order to provide a continuous binding.”
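The uncertainty-principle link between the small binding energy and the large pair size quoted above can be checked to order of magnitude. In the sketch below, the Fermi velocity is an assumed typical metallic value (about 10^6 m/s) and xi_0 = hbar*v_F/(pi*Delta) is the standard BCS coherence-length estimate; neither number comes from the quoted text:

```python
import math

HBAR = 1.0546e-34   # reduced Planck constant, J*s
EV = 1.602e-19      # joules per electron-volt

def coherence_length_nm(gap_ev, fermi_velocity=1.0e6):
    """Order-of-magnitude BCS pair size xi_0 = hbar * v_F / (pi * Delta).
    The default Fermi velocity (~1e6 m/s) is an assumed typical metallic
    value, not a number taken from the quoted text."""
    gap_joules = gap_ev * EV
    return HBAR * fermi_velocity / (math.pi * gap_joules) * 1e9  # metres -> nm

# A binding scale at the top of the quoted range, ~1e-3 eV, gives a pair
# size of a few hundred nanometres, consistent with d = 100 to 1000 nm.
print(round(coherence_length_nm(1e-3)))  # about 210
```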
Having trouble with loans/future value
November 14th 2011, 01:20 PM #1
Nov 2011
Having trouble with loans/future value
Okay, so I have a few problems dealing with "buying a house." There are six parts to the question and I'm on the sixth part so I'll add some additional information that I have from other parts of
the question.
1F. Most people choose to put away 12 equal amounts in an escrow account to pay property taxes. How much would your monthly bill be to pay your mortgage and property taxes?
I feel like this would be a future value problem since you are paying many payments.
A= m*((((1+i)^N)-1)/i) where i=r/n and N=nt
The mortgage amount [I believe] is $213,800 and the property taxes are $5032.
n=12, t=1
However, while I know that I'm trying to find m, I cannot for the life of me figure out what r is [the % compounded monthly] since it's not given in the problem.
Any ideas/suggestions? Or anything I'm doing incorrectly?
November 14th 2011, 02:00 PM #2
MHF Contributor
Aug 2007
Re: Having trouble with loans/future value
You seem to be on the right track, but missing a bit of logic. You need more information. You are exactly correct that you cannot find the interest rate from the given information.
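As the reply notes, the interest rate and term cannot be recovered from the problem, so any concrete number rests on an assumption. Note also that a monthly loan payment comes from the amortization (present-value) formula rather than the future-value formula quoted in the question. A sketch of how the monthly bill would be computed once a rate is known; the 6% annual rate and 30-year term below are purely illustrative, not values from the problem:

```python
def monthly_mortgage_payment(principal, annual_rate, years):
    """Standard fixed-rate amortization payment: M = P*i / (1 - (1+i)^-N),
    where i is the monthly rate and N the number of monthly payments."""
    i = annual_rate / 12
    n = years * 12
    return principal * i / (1 - (1 + i) ** -n)

PRINCIPAL = 213_800        # mortgage amount from earlier parts of the problem
ANNUAL_TAXES = 5_032       # property taxes from earlier parts of the problem
RATE, YEARS = 0.06, 30     # ASSUMED for illustration; the problem never gives these

mortgage = monthly_mortgage_payment(PRINCIPAL, RATE, YEARS)
escrow = ANNUAL_TAXES / 12  # twelve equal escrow deposits for the taxes
print(round(mortgage, 2), round(escrow, 2), round(mortgage + escrow, 2))
# roughly 1281.84 + 419.33, i.e. about $1701/month under these assumed terms
```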
A new operational definition of cluster is proposed, and a fuzzy clustering algorithm with minimal biases is formulated by making use of the maximum entropy principle to maximize the entropy of the
centroids with respect to the data points (clustering entropy). The authors make no assumptions on the number of clusters or their initial positions. For each value of an adimensional scale parameter
β′, the clustering algorithm makes each data point iterate towards one of the cluster's centroids, so that both hard and fuzzy partitions are obtained. Since the clustering algorithm can
make a multiscale analysis of the given data set one can obtain both hierarchy and partitioning type clustering. The relative stability with respect to β′ of each cluster structure is
defined as the measurement of cluster validity. The authors determine the specific value of β′ which corresponds to the optimal positions of cluster centroids by minimizing the entropy of
the data points with respect to the centroids (clustered entropy). Examples are given to show how this least biased method succeeds in getting perceptually correct clustering results.
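The abstract's iteration can be caricatured in a few lines: at a fixed scale parameter, each point gets Boltzmann-weighted fuzzy memberships in the centroids, and centroids are re-estimated as membership-weighted means. This is only a 1-D illustration of that maximum-entropy idea, not the paper's algorithm (which also sweeps the scale parameter and scores cluster stability):

```python
import math

def soft_cluster(points, centroids, beta, iterations=50):
    """1-D sketch of maximum-entropy fuzzy clustering: memberships are
    Boltzmann weights exp(-beta * d^2) normalized per point, and centroids
    are membership-weighted means. NOT the paper's full multiscale method."""
    for _ in range(iterations):
        updated = []
        for c in centroids:
            num = den = 0.0
            for x in points:
                z = sum(math.exp(-beta * (x - ck) ** 2) for ck in centroids)
                p = math.exp(-beta * (x - c) ** 2) / z  # fuzzy membership of x in c
                num += p * x
                den += p
            updated.append(num / den)
        centroids = updated
    return centroids

data = [-0.1, 0.0, 0.1, 4.9, 5.0, 5.1]           # two obvious 1-D clusters
print(soft_cluster(data, [1.0, 4.0], beta=5.0))  # centroids end up near 0 and 5
```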
Index Terms:
entropy; fuzzy set theory; pattern recognition; least biased fuzzy clustering method; minimal biases; maximum entropy principle; clustering entropy; hard partitions; fuzzy partitions; multiscale
analysis; relative stability; cluster structure
G. Beni, X. Liu, "A Least Biased Fuzzy Clustering Method," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 9, pp. 954-960, Sept. 1994, doi:10.1109/34.310694
This graduate-level text is intended for initial courses in algebra that begin with first principles but proceed at a faster pace than undergraduate-level courses. It employs presentations and proofs
that are accessible to students, and it provides numerous concrete examples.
Exercises appear throughout the text, clarifying concepts as they arise; additional exercises, varying widely in difficulty, are included at the ends of the chapters. Subjects include groups, rings,
fields and Galois theory, modules, and structure of rings and algebras. Further topics encompass infinite Abelian groups, transcendental field extensions, representations and characters of finite
groups, Galois groups, and additional areas.
Based on many years of classroom experience, this self-contained treatment breathes new life into abstract concepts.
Product Details
• ISBN-13: 9780486439471
• Publisher: Courier Corporation
• Publication date: 6/17/2010
• Pages: 322
• Sales rank: 1,156,347
• Product dimensions: 5.50 (w) x 8.50 (h) x 0.67 (d)
Read an Excerpt
By Larry C. Grove
Dover Publications, Inc.
Copyright © 1983 Larry C. Grove
All rights reserved.
ISBN: 978-0-486-14213-5
CHAPTER 1
1. GROUPS, SUBGROUPS, AND HOMOMORPHISMS
A nonempty set with an associative binary operation is called a semigroup, and a semigroup S having an identity element 1 such that 1x = x1 = x for all x [member of] S is called a monoid. Most of the
algebraic systems discussed herein will be semigroups or monoids, but almost always with further requirements imposed, so the semigroup or monoid aspect will seldom be explicitly emphasized.
One trivial consequence of the definition of a monoid deserves mention.
Proposition 1.1. The identity element of a monoid S is unique.
Proof. Suppose 1 and e are identities in S. Then 1 = 1e = e.
A group is a set G with an associative binary operation (usually called multiplication) and an identity element 1 satisfying the further requirement that for each x [member of] G there is an inverse
element y [member of] G such that xy = yx = 1.
Proposition 1.2. If G is a group and x [member of] G, then x has a unique inverse element.
Proof. Let y and z be inverses for x. Then
y = y1 = y(xz) = (yx)z = 1z = z.
The unique inverse for x [member of] G is denoted by x-1. Note that (x-1)-1 = x.
Proposition 1.3. If G is a group and x, y [member of] G, then (xy)-1 = y-1x-1.
Proof. (xy)(y-1x-1) = ((xy)y-1)x-1 = (x(yy-1))x-1 = (x1)x-1 = xx-1 = 1, and similarly (y-1x-1)(xy) = 1.
As Coxeter [7] has pointed out, the "reversal of order" in Proposition 1.3 becomes clear when we think of the operations of putting on our shoes and socks.
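Proposition 1.3 is easy to check mechanically in a concrete group. The sketch below, written in plain Python, represents permutations of {0, 1, 2, 3} as tuples (p[i] is the image of i) and verifies (xy)-1 = y-1x-1 for every pair; the helper names are illustrative, not from the text.

```python
from itertools import permutations

def compose(p, q):
    """Composition product: q acts first, then p."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    """The unique inverse guaranteed by Proposition 1.2."""
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

# Proposition 1.3: (xy)^-1 = y^-1 x^-1, checked over all of Perm({0,1,2,3}).
for x in permutations(range(4)):
    for y in permutations(range(4)):
        assert inverse(compose(x, y)) == compose(inverse(y), inverse(x))
```

The loop finishing silently is the point: no pair violates the identity, mirroring the shoes-and-socks reversal.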
If the binary operation of a group G is written as addition, then the identity element is commonly denoted by 0 rather than 1, and the inverse of x by — x rather than x-1. It is customary to use
additive notation only if x + y = y + x for all x, y [member of] G.
In general, a group G (multiplicative again) is called abelian (or commutative) if xy = yx for all x, y [member of] G.
We write x0 = 1, x1 = x, x2 = xx, and in general xn = xn-1x for 1 ≤ n [member of] Z. Define x-n = (x-1)n, again for 1 ≤ n [member of] Z. It is easy to verify by induction that the usual
laws of exponents hold in any group, viz.,
xmxn = xm+n and (xm)n = xmn
for all x [member of] G, all m, n [member of] Z. The additive analog of xn is nx, so the additive analogs of the laws of exponents are mx + nx = (m + n)x and n(mx) = (mn)x.
Exercise 1.1. Verify the laws of exponents for groups.
1. Let G = {1, — 1} [subset or equal to] R, with multiplication as usual. Then G is a group.
2. Let G = Z, Q, R, or C, with the usual binary operation of addition. Then G is a group.
3. Let G = Q\{0}, the set of nonzero rational numbers, under multiplication. Then G is a group. Similarly this holds for R\{0} and C\{0}, but not for Z\{0}. (Why?)
4. Let S be a nonempty set. A permutation of S (sometimes called a bijection of S) is a 1-1 function φ from S onto S. Let G be the set of all permutations of S. If φ, θ [member of] G, we define φθ to
be their composition product, i.e., φθ(s) = φ(θ(s)) for all s [member of] S. Composition is a binary operation on G (verify), and it is associative, for if φ, θ, σ [member of] G and s [member of] S,
(φ(θσ))(s) = φ(θσ(s)) = φ [θ(σ(s))],
((φθ)σ)(s) = φθ(σ(s)) = φ[θ(σ(s))].
G has an identity element, the permutation 1 = 1s defined by 1(s) = s, all s [member of] S, and each φ [member of] G has an inverse φ-1 defined by φ-1(s1) = s2 if and only if φ(s2) = s1 (there are a
few details to be verified). Thus G is a group; we write G = Perm(S). This example is of considerable importance and will be pursued much further.
5. As a special case of the preceding example take S = {1, 2, 3, ..., n}. The group G of all permutations of S is called the symmetric group on n letters and is denoted by G = Sn. If φ [member of] Sn
, it is convenient to display the function φ explicitly in two-row form, with 1, 2, ..., n in the top row and the images φ(1), φ(2), ..., φ(n) directly beneath. For example, if n = 3, then the array
with top row 1, 2, 3 and bottom row 2, 3, 1 is the permutation that maps 1 to 2, 2 to 3, and 3 to 1. The notation makes it quite simple to carry out explicit
computations of the composition product. Suppose, for example, that n = 3 and [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. Note from the
definition of φθ in Example 4 that θ acts first and φ second. Thus θ maps 1 to 3 and φ then maps 3 to 1, and so the composite φθ maps 1 to 1. Similarly, φθ maps 2 to 3 and maps 3 to 2. Thus
Observe that φθ ≠ θφ,
so S3 is not an abelian group. It is easy to see that Sn is likewise not abelian for any n > 3, although S1 and S2 are abelian.
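The failure of commutativity in S3 can be confirmed by direct computation. In the sketch below, permutations of {0, 1, 2} are tuples; the particular pair φ, θ is an illustrative choice, not necessarily the pair used in the text's worked example.

```python
from itertools import permutations

def compose(p, q):
    """Composition as in Example 4: theta (q) acts first, phi (p) second."""
    return tuple(p[q[i]] for i in range(len(p)))

S3 = list(permutations(range(3)))
assert len(S3) == 6                     # |S3| = 3! = 6

# Hypothetical pair: phi is the 3-cycle 0->1->2->0, theta swaps 1 and 2.
phi, theta = (1, 2, 0), (0, 2, 1)
assert compose(phi, theta) == (1, 0, 2)
assert compose(theta, phi) == (2, 1, 0)
assert compose(phi, theta) != compose(theta, phi)   # S3 is not abelian
```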
6. Let T be an equilateral triangle in the plane with center O. Let D3 denote the set of symmetries of T, i.e., distance-preserving functions from the plane onto itself that carry T onto T (as a set
of points). The elements of D3 are called congruences of the triangle T in plane geometry. With composition as the binary operation, D3 is a group. Let us list its elements explicitly. There is, of
course, the identity function 1, with 1(x) = x for all x in the plane. There are two counterclockwise rotations, φ1 and φ2, about O as center through angles of 120° and 240°, respectively, and three
mirror reflections θ1, θ2, θ3 across the three lines passing through the vertices of T and through O (see Fig. 1).
It is edifying to cut a cardboard triangle, label the vertices, and determine composition products explicitly. The result is the "multiplication table" (Fig. 2) for D3.
A routine inspection of the table shows that each element has an inverse, and also (if enough time is spent) that the operation is associative. Associativity is also clear from the fact that each
element of D3 is a permutation of the points of the plane. Thus D3 is a group.
If we let S = {1, 2, 3} be the set of vertices of T, then each element of D3 gives rise to a permutation of S, i.e., to an element of the symmetric group S3. For example, [MATHEMATICAL EXPRESSION NOT
REPRODUCIBLE IN ASCII], [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], etc. The result is a 1-1 correspondence between the group D3 of symmetries of T and the symmetric group S3. It is
instructive to label the elements of S3 accordingly [e.g., [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] etc.], to write out the
multiplication table for S3 and to compare with the table above.
7. This time let T be a square in the plane, with center O, and let D4 be its set (in fact group) of symmetries. There are four rotations (one of them the identity, through 0°) and four reflections
(see Fig. 3). The multiplication table should be computed.
Again each element of D4 gives rise to a permutation of the set S = {1, 2, 3, 4} of vertices of T, i.e., to an element of S4. For example, the rotation φ1 through 90° counterclockwise about O gives
the permutation [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. Note in this case, however, that not all elements of S4 occur. For example, [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is
not the result of any symmetry of the square.
8. The quaternion group Q2 consists of 8 matrices ± 1, ±i, ±j, ±k under multiplication, where
and 1 denotes the 4 × 4 identity matrix. It is easy to verify that i2 = j2 = k2 = — 1 and that ij = k. All other products can be determined from those. For example, since ijk = k2 = -1 we have i2jk =
-jk = -i, and hence jk = i. The chief advantage of presenting Q2 as a set of matrices is that the associative law is automatically satisfied.
9. Klein's 4-group K consists of four 2 × 2 matrices:
Its multiplication table is Fig. 4.
10. Let T be a regular tetrahedron and let G be the set of all rotations of three-dimensional space that carry T to itself (as a set of points), i.e., all the rotational symmetries of T. Thus G
consists of the identity 1, rotations through angles of 180° about each of three axes joining midpoints of opposite edges, and rotations through 120° and 240° about each of four axes joining vertices
with centers of opposite faces. Thus |G| = 12.
Exercise 1.2. Let G be the set of 12 rotational symmetries of a regular tetrahedron.
(1) Verify that G is a group and write out its multiplication table.
(2) Each element of G gives rise to a permutation of the set of vertices of the tetrahedron, numbered 1, 2, 3, and 4. List the resulting permutations in S4.
(3) Each element of G also gives rise to a permutation of the set of 6 edges of the tetrahedron. List the resulting permutations in S6.
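For Exercise 1.2(2) it is a classical fact, not proved in the excerpt, that the 12 rotations of the tetrahedron induce exactly the even permutations of the four vertices (the alternating group A4). The count can be checked in a few lines; the parity test via inversion counting is a standard device and an addition of this sketch.

```python
from itertools import permutations

def is_even(p):
    """A permutation is even iff its inversion count is even."""
    n = len(p)
    inversions = sum(1 for a in range(n) for b in range(a + 1, n) if p[a] > p[b])
    return inversions % 2 == 0

# Rotational symmetries of the regular tetrahedron correspond to the even
# permutations of its 4 vertices, the alternating group A4.
A4 = [p for p in permutations(range(4)) if is_even(p)]
assert len(A4) == 12        # matches |G| = 12 in Example 10
```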
Exercise 1.3. Describe the groups of rotational symmetries of a cube (there are 24) and of a regular dodecahedron (there are 60). It will be helpful to have cardboard models.
Many more examples will appear as we continue. It will be convenient at this point to introduce some concepts, some terminology, and some elementary consequences of the definitions.
The cardinality |G| of a group G is called its order. If G is not finite we usually say simply that G has infinite order. An easy counting argument shows that the symmetric group Sn has order n!.
Excerpted from ALGEBRA by Larry C. Grove. Copyright © 1983 Larry C. Grove. Excerpted by permission of Dover Publications, Inc..
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Table of Contents
List of Symbols
I. Groups
II. Rings
III. Fields and Galois Theory
IV. Modules
V. Structure of Rings and Algebras
VI. Further Topics
Solving Unconstrained Global Optimization Problems via Hybrid Swarm Intelligence Approaches
Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 256180, 15 pages
Research Article
Solving Unconstrained Global Optimization Problems via Hybrid Swarm Intelligence Approaches
Department of Business Administration, Lunghwa University of Science and Technology, No. 300, Section 1, Wanshou Road, Guishan, Taoyuan County 33306, Taiwan
Received 7 September 2012; Revised 3 December 2012; Accepted 4 December 2012
Academic Editor: Baozhen Yao
Copyright © 2013 Jui-Yu Wu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly cited.
Stochastic global optimization (SGO) algorithms such as the particle swarm optimization (PSO) approach have become popular for solving unconstrained global optimization (UGO) problems. The PSO
approach, which belongs to the swarm intelligence domain, does not require gradient information, enabling it to overcome this limitation of traditional nonlinear programming methods. Unfortunately,
PSO algorithm implementation and performance depend on several parameters, such as cognitive parameter, social parameter, and constriction coefficient. These parameters are tuned by using trial and
error. To reduce the parametrization of a PSO method, this work presents two efficient hybrid SGO approaches, namely, a real-coded genetic algorithm-based PSO (RGA-PSO) method and an artificial
immune algorithm-based PSO (AIA-PSO) method. The specific parameters of the internal PSO algorithm are optimized using the external RGA and AIA approaches, and then the internal PSO algorithm is
applied to solve UGO problems. The performances of the proposed RGA-PSO and AIA-PSO algorithms are then evaluated using a set of benchmark UGO problems. Numerical results indicate that, besides their
ability to converge to a global minimum for each test UGO problem, the proposed RGA-PSO and AIA-PSO algorithms outperform many hybrid SGO algorithms. Thus, the RGA-PSO and AIA-PSO approaches can be
considered alternative SGO approaches for solving standard-dimensional UGO problems.
1. Introduction
An unconstrained global optimization (UGO) problem can generally be formulated as follows:

min f(x), x ∈ S,

where f(x) is an objective function and x = (x_1, x_2, ..., x_n) represents a decision variable vector. Additionally, S denotes the search space, which is n-dimensional and bounded by parametric constraints as follows:

x_j^l ≤ x_j ≤ x_j^u, j = 1, 2, ..., n,

where x_j^l and x_j^u are the lower and upper boundaries of the decision variable x_j, respectively.
Many conventional nonlinear programming (NLP) techniques, such as the golden search, quadratic approximation, Nelder-Mead, steepest descent, Newton, and conjugate gradient methods, have been used to
solve UGO problems [1]. Unfortunately, such NLP methods have difficulty in solving UGO problems when an objective function of an UGO problem is nondifferential. Many stochastic global optimization
(SGO) approaches developed to overcome this limitation of the traditional NLP methods include genetic algorithms (GAs), particle swarm optimization (PSO), ant colony optimization (ACO), and
artificial immune algorithms (AIAs). For instance, Hamzaçebi [2] developed an enhanced GA incorporating a local random search algorithm for eight continuous functions. Furthermore, Chen [3] presented
a two-layer PSO method to solve nine UGO problems. Zhao [4] presented a perturbed PSO approach for 12 UGO problems. Meanwhile, Toksari [5] developed an ACO algorithm for solving UGO problems.
Finally, Kelsey and Timmis [6] presented an AIA method based on the clonal selection principle for solving 12 UGO problems.
This work focuses on a PSO algorithm because it is effective, robust, and easy to use among SGO methods. Research on the PSO method has considered many critical issues, such as parameter selection, integration of the PSO algorithm with self-adaptation approaches, and integration with other intelligent optimizing methods [7]. This work surveys two of these issues: first, PSO approaches that integrate with other intelligent optimizing methods; second, parameter selection for use in a PSO approach.
Regarding the first issue, the conventional PSO algorithm lacks evolution operators of GAs, such as crossover and mutation operations. Therefore, PSO has premature convergence, that is, a rapid loss
of diversity during optimization [4]. To overcome this limitation, many hybrid SGO methods have been developed to create diverse candidate solutions to enhance the performance of a PSO approach.
Hybrid algorithms have some advantages; for instance, hybrid algorithms outperform individual algorithms in solving certain problems and thus can solve general problems more efficiently [8]. Kao and
Zahara [9] presented a hybrid GA and PSO algorithm to solve 17 multimodal test functions. Their study used the operations of GA and PSO methods to generate candidate solutions to improve solution
quality and convergence rates. Furthermore, Shelokar et al. [10] presented a hybrid PSO and ACO algorithm to solve multimodal continuous optimization problems. Their study used an ACO algorithm to
update the particle positions to enhance a PSO algorithm performance. Chen et al. [11] presented a hybrid PSO and external optimization based on the Bak–Sneppen model to solve unimodal and multimodal
benchmark problems. Furthermore, Thangaraj et al. [12] surveyed many algorithms that combine the PSO algorithm with other search techniques and compared the performances obtained using hybrid
differential evolution PSO (DE-PSO), adaptive mutation PSO (AMPSO), and hybrid GA and PSO (GA-PSO) approaches to solve nine conventional benchmark problems.
Regarding the second issue, a PSO algorithm has numerous parameters that must be set, such as cognitive parameter, social parameter, inertia weight, and constriction coefficient. Traditionally, the
optimal parameter settings of a PSO algorithm are tuned based on trial and error. The abilities of a PSO algorithm to explore and exploit are constrained to optimum parameter settings [13, 14].
Therefore, Jiang et al. [15] used a stochastic process theory to analyze the parameter settings (e.g., cognitive parameter, social parameter, and inertia weight) of a standard PSO algorithm.
This work focuses on the second issue related to the application of a PSO method. Fortunately, the optimization of parameter settings for a PSO algorithm can be viewed as an UGO problem. Moreover,
real-coded GA (RGA) and AIA are efficient SGO approaches for solving UGO problems. Based on the advantage of a hybrid algorithm [8], this work develops two hybrid SGO approaches. The first approach
is a hybrid RGA and PSO (RGA-PSO) algorithm, while the second one is a hybrid AIA and PSO (AIA-PSO) algorithm. The proposed RGA-PSO and AIA-PSO algorithms are considered as a means of solving the two
optimization problems simultaneously. The first UGO problem (optimization of cognitive parameter, social parameter, and constriction coefficient) is optimized using external RGA and AIA approaches,
respectively. The second UGO problem is then solved using the internal PSO algorithm. The performances of the proposed RGA-PSO and AIA-PSO algorithms are evaluated using a set of benchmark UGO
problems and compared with those of many hybrid algorithms [9, 10, 12].
The rest of this paper is organized as follows. Section 2 describes RGA, PSO, and AIA approaches. Section 3 then presents the proposed RGA-PSO and AIA-PSO methods. Next, Section 4 compares the
experimental results of the proposed RGA-PSO and AIA-PSO approaches with those of many hybrid methods. Conclusions are finally drawn in Section 5.
2. Related Works
The SGO approaches such as RGA, PSO, and AIA [16] are described as follows.
2.1. Real-Coded Genetic Algorithm
GAs are based on the concepts of natural selection and use three genetic operations, that is, selection, crossover, and mutation, to explore and exploit the solution space. In solving continuous
function optimization problems, the RGA method outperforms the binary-coded GA approach [17]. Therefore, this work describes the operators of an RGA method [18].
2.1.1. Selection Operation
A selection operation picks up strong individuals from a current population based on their fitness function values and then reproduces these individuals into a crossover pool. Many selection
operations have been developed, including the roulette wheel, ranking, and tournament methods [17, 18]. This work employs the normalized geometric ranking method as follows:

P_i = q′(1 − q)^(r_i − 1), with q′ = q / [1 − (1 − q)^ps[RGA]], i = 1, 2, ..., ps[RGA],

where P_i = probability of selecting individual i; q = probability of choosing the best individual; r_i = ranking of individual i based on fitness value, where 1 represents the best; and ps[RGA] = population size of the RGA method.
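The ranking probabilities of the normalized geometric method form a proper distribution over the population. A minimal sketch, in which the default value of q (the probability of choosing the best individual) is an illustrative choice rather than the paper's setting:

```python
def geometric_ranking_probs(ps, q=0.35):
    """Normalized geometric ranking: rank r = 1 is the best individual.
    Returns the selection probability for each rank 1..ps."""
    q_norm = q / (1 - (1 - q) ** ps)               # normalizing constant q'
    return [q_norm * (1 - q) ** (r - 1) for r in range(1, ps + 1)]

probs = geometric_ranking_probs(10)
assert abs(sum(probs) - 1.0) < 1e-12               # a genuine probability distribution
assert probs[0] == max(probs)                      # the best-ranked individual is favored
```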
2.1.2. Crossover Operation
While exploring the solution space by creating new offspring, the crossover operation randomly chooses two parents from the crossover pool and uses them to create two new offspring. This operation is repeated until ps[RGA]/2 offspring pairs have been produced. The whole arithmetic crossover is easily performed as follows:

x1′ = β x1 + (1 − β) x2,
x2′ = β x2 + (1 − β) x1,

where x1 and x2 = parents (decision variable vectors); x1′ and x2′ = offspring (decision variable vectors); and β = a uniform random number in the interval [0, 1].
2.1.3. Mutation Operation
Mutation operation can improve the diversity of individuals (candidate solutions). Multi-non-uniform mutation is described as follows:

x_j′ = x_j + (x_j^u − x_j)·f(gen) if a random binary digit is 0,
x_j′ = x_j − (x_j − x_j^l)·f(gen) if a random binary digit is 1,

with the perturbation factor f(gen) = [r·(1 − gen/gen_max[RGA])]^b,

where r = a uniform random variable in the interval [0, 1]; b = a shape parameter; gen_max[RGA] = maximum generation of the RGA method; gen = current generation of the RGA method; x_j = current decision variable j; and x_j′ = trial decision variable (candidate solution) j.
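The two variation operators can be sketched as follows. The coin-flip choice of mutation direction and the shape parameter b follow the standard multi-non-uniform mutation of the RGA literature and are assumptions, not values taken from the paper.

```python
import random

def arithmetic_crossover(x1, x2):
    """Whole arithmetic crossover: two convex combinations of the parents."""
    beta = random.random()                           # uniform in [0, 1]
    c1 = [beta * u + (1 - beta) * v for u, v in zip(x1, x2)]
    c2 = [beta * v + (1 - beta) * u for u, v in zip(x1, x2)]
    return c1, c2

def nonuniform_mutation(x, lo, hi, gen, max_gen, b=2.0):
    """Multi-non-uniform mutation: perturbations shrink as gen -> max_gen.
    The shape parameter b is an illustrative assumption."""
    y = list(x)
    for j in range(len(y)):
        f = (random.random() * (1 - gen / max_gen)) ** b   # perturbation factor
        if random.random() < 0.5:
            y[j] = y[j] + (hi[j] - y[j]) * f               # move toward upper bound
        else:
            y[j] = y[j] - (y[j] - lo[j]) * f               # move toward lower bound
    return y
```

Offspring stay inside the parents' coordinate-wise hull, and mutated points stay inside [lo, hi], so no repair step is needed.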
2.2. Particle Swarm Optimization
Kennedy and Eberhart [19] first presented a standard PSO algorithm, which is inspired by the social behavior of bird flocks and fish schools. Like GAs, a PSO method is a population-based algorithm; a population of candidate solutions is called a particle swarm. The particle velocities can be updated by (6) as follows:

v_ij(gen + 1) = v_ij(gen) + c1 r1 [p_ij(gen) − x_ij(gen)] + c2 r2 [p_gj(gen) − x_ij(gen)], i = 1, 2, ..., ps[PSO],

where v_ij(gen + 1) and v_ij(gen) = velocities of decision variable j of particle i at generations gen + 1 and gen; c1 = cognitive parameter; c2 = social parameter; x_ij(gen) = position of decision variable j of particle i at generation gen; r1 and r2 = independent uniform random numbers in the interval [0, 1]; p_ij(gen) = best local solution at generation gen; p_gj(gen) = best global solution at generation gen; and ps[PSO] = population size of the PSO algorithm.
The particle positions can be obtained using (7) as follows:

x_ij(gen + 1) = x_ij(gen) + v_ij(gen + 1).
Shi and Eberhart [20] introduced a modified PSO algorithm by incorporating an inertia weight w into (8) to control the exploration and exploitation capabilities of a PSO algorithm as follows:

v_ij(gen + 1) = w·v_ij(gen) + c1 r1 [p_ij(gen) − x_ij(gen)] + c2 r2 [p_gj(gen) − x_ij(gen)].
A constriction coefficient χ in (9) is used to balance the exploration and exploitation tradeoff [21–23] as follows:

v_ij(gen + 1) = χ{v_ij(gen) + c1 r1 [p_ij(gen) − x_ij(gen)] + c2 r2 [p_gj(gen) − x_ij(gen)]},

with

χ = 2 / |2 − φ − sqrt(φ² − 4φ)|, φ = c1 + c2, φ > 4,

where r1 and r2 = uniform random variables in the interval [0, 1].
This work considers the parameters χ and w together to modify the particle velocities in (11), in which gen_max[PSO] denotes the maximum generation of the PSO algorithm. According to (11), the optimal values of the parameters c1, c2, and χ are difficult to obtain through trial and error. This work thus optimizes these parameter settings by using RGA and AIA approaches.
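Equations (7) and (9) combine into a single per-particle update. The sketch below assumes the Clerc-Kennedy constriction coefficient with φ = c1 + c2 > 4; the defaults c1 = c2 = 2.05 are a common illustrative choice, not necessarily the tuned values the paper seeks.

```python
import random

def pso_step(x, v, pbest, gbest, c1=2.05, c2=2.05):
    """One constriction-type velocity/position update for a single particle."""
    phi = c1 + c2                                        # must exceed 4
    chi = 2.0 / abs(2.0 - phi - (phi ** 2 - 4.0 * phi) ** 0.5)
    new_v, new_x = [], []
    for j in range(len(x)):
        r1, r2 = random.random(), random.random()        # fresh draws per dimension
        vj = chi * (v[j] + c1 * r1 * (pbest[j] - x[j])
                         + c2 * r2 * (gbest[j] - x[j]))
        new_v.append(vj)
        new_x.append(x[j] + vj)                          # position update
    return new_x, new_v
```

With c1 = c2 = 2.05, χ ≈ 0.7298, the widely quoted constriction value.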
2.3. Artificial Immune Algorithm
Wu [24] presented an AIA approach based on clonal selection and immune network theories to solve constrained global optimization problems. The AIA method consists of selection, hypermutation,
receptor editing, and bone marrow operations. The selection operation is performed to reproduce strong antibodies (Abs). Also, diverse Abs are created using hypermutation, receptor editing, and bone
marrow operations, as described in the following subsections.
2.3.1. Ab and Ag Representation
In the human system, an antigen (Ag) has multiple epitopes (antigenic determinants), which can be recognized by many Abs with paratopes (recognizers), on its surface. In the AIA approach, an Ag
represents known parameters of a solved problem. The Abs are the candidate solutions (i.e., decision variable vectors) of the solved problem. The quality of a candidate solution is evaluated using an
Ab-Ag affinity that is derived from the value of an objective function of the solved problem.
2.3.2. Selection Operation
The selection operation controls the number of antigen-specific Abs. It is defined according to Ab-Ag and Ab-Ab recognition information in (12), where the quantity of interest is the probability that each Ab recognizes the best Ab, that is, the Ab with the highest Ab-Ag affinity in the current repertoire, and rs[AIA] = repertoire (population) size of the AIA approach. An Ab that recognizes the best Ab with a degree equivalent to or larger than the threshold degree of the AIA approach is reproduced to generate an intermediate Ab repertoire.
2.3.3. Hypermutation Operation
The somatic hypermutation operation can be expressed as follows: where = perturbation factor, = current generation of the AIA method, = maximum generation number of the AIA method, and and = uniform
random number in the interval .
This operation has two tasks, that is, a uniform search and local fine tuning.
2.3.4. Receptor-Editing Operation
A receptor-editing operation is developed based on the standard Cauchy distribution C(0, 1), in which the location parameter is zero and the scale parameter is one. Receptor editing is implemented using Cauchy random variables created from C(0, 1), owing to their ability to provide a large jump in the Ab-Ag affinity landscape, which increases the probability of escaping from a local region of that landscape. Cauchy receptor editing can be defined by (14), using a vector of Cauchy random variables together with a uniform random number in the interval [0, 1].
This operation is used in local fine-tuning and large perturbation.
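A Cauchy perturbation of this kind can be sketched as follows; the inverse-CDF sampler and the clipping to the box constraints are assumptions added to make the example self-contained.

```python
import math
import random

def cauchy_receptor_edit(x, lo, hi, scale=1.0):
    """Perturb each coordinate with a standard Cauchy sample C(0, 1);
    the heavy tails occasionally produce very large jumps."""
    edited = []
    for j in range(len(x)):
        c = math.tan(math.pi * (random.random() - 0.5))   # Cauchy(0,1) via inverse CDF
        y = x[j] + scale * c
        edited.append(min(max(y, lo[j]), hi[j]))          # keep within [lo, hi]
    return edited
```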
2.3.5. Bone Marrow Operation
The paratope of an Ab can be created by recombining gene segments [25]. Based on this metaphor, diverse Abs are synthesized using a bone marrow operation. This operation randomly selects two Abs from the intermediate repertoire and a recombination point within the gene segments of their paratopes. The selected gene segments are reproduced to create a gene-segment library and are then deleted from the paratopes. A new Ab is formed by inserting, at the recombination point, a gene segment from the library plus a random variable created from the standard normal distribution N(0, 1). The details of the implementation of the bone marrow operation can be found in [24].
3. Methods
This work develops the RGA-PSO and AIA-PSO approaches for solving UGO problems. The implementation of the RGA-PSO and AIA-PSO methods is described as follows.
3.1. RGA-PSO Algorithm
Figure 1 shows the pseudocode of the proposed RGA-PSO algorithm. The best parameter setting of the internal PSO algorithm is obtained by using the external RGA method. Benchmark UGO problems are
solved by using the internal PSO algorithm.
External RGA
Step 1 (initialize the parameter settings). Parameter settings such as the population size ps[RGA], the crossover probability of the RGA method, the mutation probability of the RGA approach, and the lower and upper boundaries of the parameters (c1, c2, and χ) of the internal PSO algorithm are given. The candidate solution (individual) of the RGA method represents the optimized parameters of the internal PSO algorithm. Figure 2 illustrates the candidate solution of the RGA method.
Step 2 (calculate the fitness function value). The fitness function value of the external RGA method is the best objective function value obtained from the best solution of each internal PSO
algorithm execution as follows:
The candidate solution of the external RGA method is incorporated into the internal PSO algorithm and, then, the internal PSO algorithm is used to solve an UGO problem. The internal PSO algorithm is
executed as follows.
Internal PSO Algorithm. (1) Generate an initial particle swarm. An initial particle swarm is created from the bounded search space [x^l, x^u] of an UGO problem. A particle represents a candidate solution of the UGO problem. (2) Compute the objective function value. The objective function value of a particle in the internal PSO algorithm is the objective function value of the UGO problem. (3) Update the particle velocity and position. Equations (7) and (11) are used to update the particle position and velocity. (4) Perform an elitist strategy. A new particle swarm is generated from internal step (3). The objective function value of each candidate solution (particle) in the new swarm is evaluated, and a pairwise comparison is made between the candidate solutions in the new particle swarm and those in the current particle swarm. When a candidate solution in the new particle swarm is better than its counterpart in the current particle swarm, the stronger candidate solution replaces the weaker one. The elitist strategy guarantees that the best candidate solution is always preserved in the next generation. The current particle swarm is then updated to the particle swarm of the next generation.

Internal steps (1) to (4) are repeated until the maximum generation number of the internal PSO algorithm is satisfied.
Step 3 (perform a selection operation). Equation (3) is used to select the parents into a crossover pool.
Step 4 (implement a crossover operation). The crossover operation performs a global search. The candidate solutions are created by using (4).
Step 5 (conduct a mutation operation). The mutation operation implements a local search. A solution space is exploited using (5).
Step 6 (perform an elitist strategy). This work presents an elitist strategy to update the population. A situation in which the fitness function value of a candidate solution in the new population is larger than that of the corresponding candidate solution in the current population suggests that a replacement of the weak candidate solution takes place. Additionally, a situation in which the fitness function value of a candidate solution in the new population is equal to or worse than that in the current population implies that the candidate solution in the current population survives. In addition to maintaining the strong candidate solutions, this strategy effectively eliminates weak candidate solutions.

External steps 2 to 6 are repeated until the maximum generation of the external RGA method is met.
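The nested structure of the RGA-PSO method (an external tuner evaluating parameter vectors by running the internal PSO to completion) can be sketched as follows. For brevity, the external RGA is replaced here by plain random search over (c1, c2, χ), and the inner PSO is heavily simplified; this illustrates only the two-level architecture, not the paper's exact algorithm, and all parameter ranges are assumptions.

```python
import random

def inner_pso(params, objective, bounds, swarm=20, gens=50):
    """Minimal inner PSO: params = (c1, c2, chi); returns the best f found.
    A simplified stand-in for the paper's internal PSO algorithm."""
    c1, c2, chi = params
    dim = len(bounds)
    X = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    P = [row[:] for row in X]                      # personal bests
    Pf = [objective(p) for p in P]
    g = P[Pf.index(min(Pf))][:]                    # global best
    for _ in range(gens):
        for i in range(swarm):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][j] = chi * (V[i][j] + c1 * r1 * (P[i][j] - X[i][j])
                                         + c2 * r2 * (g[j] - X[i][j]))
                X[i][j] += V[i][j]
            f = objective(X[i])
            if f < Pf[i]:
                P[i], Pf[i] = X[i][:], f
        g = P[Pf.index(min(Pf))][:]
    return objective(g)

def tune(objective, bounds, trials=10):
    """Outer tuner: random search over (c1, c2, chi) stands in for the RGA."""
    best_params, best_f = None, float("inf")
    for _ in range(trials):
        params = (random.uniform(0.5, 2.5), random.uniform(0.5, 2.5),
                  random.uniform(0.4, 0.9))
        f = inner_pso(params, objective, bounds)
        if f < best_f:
            best_params, best_f = params, f
    return best_params, best_f

sphere = lambda x: sum(t * t for t in x)           # a standard test function
params, f = tune(sphere, [(-5.0, 5.0)] * 2, trials=5)
print(params, f)
```

Each outer-loop fitness evaluation costs one full inner PSO run, which is why the paper fixes a budget of objective function evaluations for the internal algorithm.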
3.2. AIA-PSO Algorithm
Figure 3 shows the pseudocode of the proposed AIA-PSO algorithm. The external AIA method is used to optimize the parameter settings of the internal PSO method, which is employed to solve benchmark
UGO problems.
External AIA
Step 1 (initialize the parameter settings). Several parameters must be predetermined, including the repertoire (population) size rs[AIA]. An initial Ab repertoire (population) is randomly generated from the lower and upper boundaries of the parameters c1, c2, and χ. Figure 4 shows the Ab and Ag representation.
Step 2 (evaluate the Ab-Ag affinity).
Internal PSO Algorithm. The external AIA approach supplies the parameter settings c1, c2, and χ for the internal PSO algorithm. Subsequently, internal steps (1)–(4) of the PSO algorithm are implemented, and the internal PSO method returns the best fitness value found to the external AIA method. (1) Generate an initial particle swarm. An initial particle swarm is created from the bounded search space [x^l, x^u] of an UGO problem; a particle represents a candidate solution of the UGO problem. (2) Compute the fitness value. The fitness value of the internal PSO algorithm is the objective function value of the UGO problem. (3) Update the particle velocity and position. Equations (7) and (11) are used to update the particle position and velocity. (4) Perform an elitist strategy. A new particle swarm (population) is generated from internal step (3). The fitness value of each candidate solution (particle) in the new swarm is evaluated, and a pairwise comparison is made between the candidate solutions in the new particle swarm and those in the current particle swarm. The elitist strategy guarantees that the best candidate solution is always preserved in the next generation. The current particle swarm is then updated to the particle swarm of the next generation.

Internal steps (1) to (4) are repeated until the maximum generation number of the internal PSO algorithm is satisfied.
Consistent with the Ab-Ag affinity metaphor, an Ab-Ag affinity is determined using (16) as follows:
Following the evaluation of the Ab-Ag affinities of the Abs in the current Ab repertoire, the Ab with the highest Ab-Ag affinity is chosen to undergo clonal selection in external Step 3.
Step 3 (perform a clonal selection operation). To control the number of antigen-specific Abs, (12) is employed.
Step 4 (implement an Ab-Ag affinity maturation operation). The intermediate Ab repertoire created in external Step 3 is divided into two subsets. An Ab undergoes the somatic hypermutation operation of (13) when a uniform random number is 0.5 or less, and the receptor-editing operation of (14) when the random number exceeds 0.5.

Step 5 (introduce diverse Abs). Based on the bone marrow operation, diverse Abs are created to replace the Abs suppressed in external Step 3.
Step6 (update an Ab repertoire). A new Ab repertoire is generated from external Steps 3–5. The Ab-Ag affinities of the Abs in the generated Ab repertoire are evaluated. This work presents a strategy for updating the Ab repertoire. A situation in which the Ab-Ag affinity of an Ab in the new Ab repertoire exceeds that of the corresponding Ab in the current Ab repertoire implies that the strong Ab in the new repertoire replaces the weak Ab in the current repertoire. Conversely, a situation in which the Ab-Ag affinity of an Ab in the new Ab repertoire is equal to or worse than that in the current Ab repertoire implies that the Ab in the current repertoire survives. In addition to maintaining the strong Abs, this strategy eliminates nonfunctional Abs.
Repeat external Steps 2–6 until the termination criterion is satisfied.
4. Results
The proposed RGA-PSO and AIA-PSO algorithms were applied to a set of benchmark UGO problems taken from other studies [9, 10, 17, 26], as detailed in the Appendix. The proposed RGA-PSO and AIA-PSO
approaches were coded in MATLAB and run on a Pentium D 3.0 GHz personal computer. One hundred independent runs were conducted for each test problem (TP). To make the numerical results comparable, the accuracy was chosen based on the numerical results reported in [9, 10, 17, 26]. The numerical results were summarized in terms of the rate of successful minimizations (success rate, %); the best, mean, and worst objective values; the mean computational CPU time (MCCT); and the mean error (ME), the average gap between the objective function values calculated using the AIA-PSO and RGA-PSO solutions and the known global minimum value. Table 1 lists the parameter settings for the proposed RGA-PSO and AIA-PSO approaches. The table shows 20,000 objective function evaluations of the internal PSO approach for an UGO
problem with decision variables and 60,000 objective function evaluations of the internal PSO for an UGO problem with decision variables. Moreover, the external AIA and RGA methods stop when = 20 and
= 20 are met or the best fitness value of the RGA approach (or the best Ab-Ag affinity of the AIA method) does not significantly change for the past five generations.
4.1. Numerical Results Obtained Using the RGA-PSO and AIA-PSO Algorithms for Low-Dimensional UGO Problems ()
Table 2 lists the numerical results obtained using the proposed RGA-PSO algorithm. The numerical results indicate that the RGA-PSO algorithm can obtain the global minimum for each test UGO problem
since these MEs equal or closely approximate “0,” and the RGA-PSO algorithm has an acceptable MCCT for each TP. Table 3 lists the optimal parameter settings obtained using the proposed RGA-PSO
algorithm to solve 14 UGO problems.
Table 4 lists the numerical results obtained using the proposed AIA-PSO algorithm. Numerical results indicate that the AIA-PSO algorithm can obtain the global minimum for each test UGO problem since
these MEs equal or closely approximate “0,” and that the AIA-PSO algorithm has an acceptable MCCT for each TP. Table 5 lists the optimal parameter settings obtained using the proposed AIA-PSO
algorithm for solving 14 UGO problems.
4.2. Numerical Results Obtained Using the RGA-PSO and AIA-PSO Algorithms for a Standard-Dimensional UGO Problem ()
To investigate the effectiveness of the RGA-PSO and AIA-PSO methods for solving a standard-dimensional UGO problem, the Zakharov problem with 30 decision variables (ZA[30]), as described in the
Appendix, has been solved using the RGA-PSO and AIA-PSO approaches. Fifty independent runs were performed to solve the UGO problem. To increase the diversity of candidate solutions for use in the
external RGA method, the parameter was set from 0.15 to “1.” Table 6 lists the numerical results obtained using the RGA-PSO and AIA-PSO approaches. This table indicates that the two approaches
converge to the global optimum value, since the MEs closely approximate “0,” and the MCCT of the RGA-PSO method is larger than that of the AIA-PSO method. Moreover, the success rates of the proposed
RGA-PSO and AIA-PSO approaches are 100%. The Wilcoxon test is applied to the difference between the median values of the MEs obtained using the RGA-PSO and AIA-PSO methods. The value of the Wilcoxon test is 0.028, which is smaller than the significance level of 0.05, indicating that the performance of the RGA-PSO method is statistically different from that of the AIA-PSO method. Table 7 summarizes the optimal parameter settings obtained using the proposed RGA-PSO and AIA-PSO algorithms for the UGO problem ZA[30].
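The paired Wilcoxon comparison described above can be sketched as follows. (A sketch only: the per-run mean errors below are synthetic placeholders, not the paper's data, and the variable names are my own.)

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
# hypothetical per-run MEs for the 50 independent runs (NOT the paper's data)
me_rga = rng.normal(2.0e-6, 1.0e-6, size=50)
me_aia = rng.normal(1.0e-6, 1.0e-6, size=50)

# paired, nonparametric test of equal medians
stat, p = wilcoxon(me_rga, me_aia)
significant = p < 0.05   # the paper's 5% significance level
```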
The UGO problem ZA[50] was solved using the RGA-PSO and AIA-PSO methods. The RGA-PSO and AIA-PSO methods fail to solve the UGO problem, since the diversity of the particle swarm in the internal PSO
method cannot be maintained. Hence, future work will focus on improving the diversity of the particle swarm by applying mutation operations.
4.3. Comparison
Table 8 lists the results of the Wilcoxon test for the MEs obtained using the proposed RGA-PSO and AIA-PSO methods for 14 UGO problems. In this table, "**" indicates that the value of the Wilcoxon test cannot be obtained, because the MEs obtained using the RGA-PSO and AIA-PSO methods for a TP are identical. Moreover, the median values of the MEs obtained using the RGA-PSO and AIA-PSO methods for TP 10 are not statistically different, since their value is larger than the significance level of 0.05. Overall, the performances obtained using the RGA-PSO and AIA-PSO methods are statistically comparable.
Table 9 compares the numerical results obtained using the RGA-PSO and AIA-PSO methods with those obtained using the hybrid algorithms for 11 TPs. Specifically, the table lists the numerical results
obtained using the Nelder-Mead simplex search and PSO (NM-PSO) and GA-PSO taken from [9], those obtained using particle swarm ant colony optimization (PSACO), continuous hybrid algorithm (CHA) and
continuous tabu simplex search (CTSS) taken from [10], and those obtained using DE-PSO, AMPSO1, and AMPSO2 taken from [12]. Table 9 indicates that the RGA-PSO and AIA-PSO methods yield MEs with accuracy superior to those obtained using the NM-PSO, GA-PSO, DE-PSO, AMPSO1, AMPSO2, CHA, and CTSS approaches for TPs 2, 3, 4, 5, 7, 9, 11, and 12, and that the RGA-PSO and AIA-PSO approaches yield MEs with accuracy superior to those obtained using the PSACO method for TPs 4, 5, 7, 9, 10, 11, 12, 13, and 14. Table 10 compares the percentage success rates of the proposed RGA-PSO and AIA-PSO approaches and those of
the hybrid algorithms for 11 TPs, indicating that all algorithms except for the CHA and CTSS methods achieved identical performance (100% success rate) for all TPs.
4.4. Summary of Results
The proposed RGA-PSO and AIA-PSO algorithms have the following benefits.(1)Parameter manipulation of the internal PSO algorithm is based on the solved UGO problems. Owing to their ability to
efficiently solve UGO problems, the external RGA and AIA approaches are substituted for trial and error to manipulate the parameters (, , and ). (2)Besides obtaining the optimum parameter settings of
the internal PSO algorithm, the RGA-PSO and AIA-PSO algorithms can yield a global minimum for an UGO problem. (3)Besides outperforming some published hybrid SGO methods, the proposed RGA-PSO and AIA-PSO approaches reduce the parametrization of the internal PSO algorithm, despite being more complex than individual SGO approaches.
The proposed RGA-PSO and AIA-PSO algorithms are limited in that they cannot solve high-dimensional UGO problems (such as ). Future work will focus on increasing the diversity of the particle swarm of
the internal PSO method by applying mutation to solve high-dimensional UGO problems.
5. Conclusions
This work developed RGA-PSO and AIA-PSO algorithms. Performances of the proposed RGA-PSO and AIA-PSO approaches were evaluated using a set of benchmark UGO problems. Numerical results indicate that
the proposed RGA-PSO and AIA-PSO methods can converge to global minimum for each test UGO problem and obtain the best parameter settings of the internal PSO algorithm. Moreover, the numerical results
obtained using the RGA-PSO and AIA-PSO algorithms are superior to those obtained using many alternative hybrid SGO methods. The RGA-PSO and AIA-PSO methods can thus be considered efficient SGO
approaches for solving standard-dimensional UGO problems.
Appendix
Six-hump camel back (SHCB) (two variables) [17]
search domain: ; one global minimum at two different points: and , .
Goldstein-price (GP) (two variables) [9, 10]
search domain: , four local minima; one global minimum: , .
Easom (ES) (two variables) [9]
search domain: , several local minima (exact number unspecified in usual literature); one global minimum: , .
B2 (two variables) [9, 10, 17]
search domain: , several local minima (exact number unspecified in usual literature); one global minimum , .
De Jong (DJ) (three variables) [9, 10]
search domain: , one global minimum: , .
Booth (BO) (two variables) [26]
search domain: , one global minimum: , .
Branin RCOC (RC) (two variables) [9, 10]
search domain: , no local minimum;three global minima: , , , .
Rastrigin (RA) (two variables) [10]
search domain: ,50 local minima;One global minimum: ,.
Rosenbrock (RSn) (N variables) [9, 10]
Two functions were considered: RS[2] and RS[5]. Search domain: , several local minima (exact number unspecified in usual literature); global minimum: , .
Shubert (SH) (two variables) [9, 10]
search domain: , 760 local minima; 18 global minima;.
Zakharov (ZAn) (N variables) [9, 10]
Four functions were considered: ZA[2], ZA[5], ZA[10], and ZA[30]. Search domain: , several local minima (exact number unspecified in usual literature); global minimum: , .
Conflict of Interests
The author confirms that he does not have a conflict of interests with the MATLAB software.
The author would like to thank the National Science Council of the Republic of China, Taiwan for financially supporting this research under Contract no. NSC 100-2622-E-262-006-CC3.
References
1. W. Y. Yang, W. Cao, T.-S. Chung, and J. Morris, Applied Numerical Methods Using MATLAB, John Wiley & Sons, Hoboken, NJ, USA, 2005.
2. C. Hamzaçebi, “Improving genetic algorithms' performance by local search for continuous function optimization,” Applied Mathematics and Computation, vol. 196, no. 1, pp. 309–317, 2008.
3. C. C. Chen, “Two-layer particle swarm optimization for unconstrained optimization problems,” Applied Soft Computing Journal, vol. 11, no. 1, pp. 295–304, 2011.
4. X. Zhao, “A perturbed particle swarm algorithm for numerical optimization,” Applied Soft Computing Journal, vol. 10, no. 1, pp. 119–124, 2010.
5. M. D. Toksari, “Minimizing the multimodal functions with Ant Colony Optimization approach,” Expert Systems with Applications, vol. 36, no. 3, pp. 6030–6035, 2009.
6. J. Kelsey and J. Timmis, “Immune inspired somatic contiguous hypermutation for function optimisation,” in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '03), pp. 207–208, Chicago, Ill, USA, 2003.
7. M. Gang, Z. Wei, and C. Xiaolin, “A novel particle swarm optimization algorithm based on particle migration,” Applied Mathematics and Computation, vol. 218, no. 11, pp. 6620–6626, 2012.
8. H. Poorzahedy and O. M. Rouhani, “Hybrid meta-heuristic algorithms for solving network design problem,” European Journal of Operational Research, vol. 182, no. 2, pp. 578–596, 2007.
9. Y. T. Kao and E. Zahara, “A hybrid genetic algorithm and particle swarm optimization for multimodal functions,” Applied Soft Computing Journal, vol. 8, no. 2, pp. 849–857, 2008.
10. P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, “Particle swarm and ant colony algorithms hybridized for improved continuous optimization,” Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007.
11. M. R. Chen, X. Li, X. Zhang, and Y. Z. Lu, “A novel particle swarm optimizer hybridized with extremal optimization,” Applied Soft Computing Journal, vol. 10, no. 2, pp. 367–373, 2010.
12. R. Thangaraj, M. Pant, A. Abraham, and P. Bouvry, “Particle swarm optimization: hybridization perspectives and experimental illustrations,” Applied Mathematics and Computation, vol. 217, no. 12, pp. 5208–5226, 2011.
13. Z. H. Zhan, J. Zhang, Y. Li, and H. S. H. Chung, “Adaptive particle swarm optimization,” IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 39, no. 6, pp. 1362–1381, 2009.
14. A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, “Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.
15. M. Jiang, Y. P. Luo, and S. Y. Yang, “Stochastic convergence analysis and parameter selection of the standard particle swarm optimization algorithm,” Information Processing Letters, vol. 102, no. 1, pp. 8–16, 2007.
16. J. Y. Wu, “Solving constrained global optimization problems by using hybrid evolutionary computing and artificial life approaches,” Mathematical Problems in Engineering, vol. 2012, Article ID 841410, 36 pages, 2012.
17. Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer, New York, NY, USA, 1999.
18. C. R. Houck, J. A. Joines, and M. G. Kay, “A genetic algorithm for function optimization: a MATLAB implementation,” Tech. Rep. NSCU-IE TR 95-09, North Carolina State University, Raleigh, NC, USA, 1995.
19. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, WA, Australia, December 1995.
20. Y. Shi and R. Eberhart, “A modified particle swarm optimizer,” in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '98), pp. 69–73, Anchorage, Alaska, USA, May 1998.
21. M. Clerc, “The swarm and the queen: towards a deterministic and adaptive particle swarm optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1951–1957, Washington, DC, USA, 1999.
22. M. Clerc and J. Kennedy, “The particle swarm-explosion, stability, and convergence in a multidimensional complex space,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.
23. A. P. Engelbrecht, Fundamentals of Computational Swarm Intelligence, John Wiley & Sons, 2005.
24. J. Y. Wu, “Solving constrained global optimization via artificial immune system,” International Journal on Artificial Intelligence Tools, vol. 20, no. 1, pp. 1–27, 2011.
25. L. N. de Castro and F. J. Von Zuben, “Artificial Immune Systems—Part I—Basic Theory and Applications,” Technical Report, FEEC/Universidade Estadual de Campinas, Campinas, Brazil, 1999, ftp://ftp.dca.fee.unicamp.br
26. S. K. S. Fan and E. Zahara, “A hybrid simplex search and particle swarm optimization for unconstrained optimization,” European Journal of Operational Research, vol. 181, no. 2, pp. 527–548, 2007.
A Cannon Is Placed At The Edge Of A Cliff Of Height ... | Chegg.com
A cannon is placed at the edge of a cliff of height H overlooking the ocean. The cannon has a muzzle velocity V0 and is pointed at a positive angle THETA above the horizontal.
1. The kinetic energy of the projectile when it hits the water depends on ________________.
A: H only B: V0 only C: THETA only D: H and V0 only E: H and THETA only F: THETA and V0 only G: H, V0 and THETA
2. The kinetic energy of the projectile at its highest point depends on ______________.
A: H only B: V0 only C: THETA only D: H and V0 only E: H and THETA only F: THETA and V0 only G: H, V0 and THETA
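A short worked check (my own sketch, not Chegg's posted solution; the variable names and g = 9.8 m/s² are assumptions): by energy conservation, the kinetic energy at the water is ½mV0² + mgH, which depends only on H and V0, while at the highest point only the horizontal component V0·cos(THETA) remains, so that kinetic energy depends on V0 and THETA.

```python
import math

def ke_at_water(m, v0, theta, H, g=9.8):
    # projectile kinematics: vx is constant, and vy^2 = (v0*sin(theta))^2 + 2*g*H
    # after falling the extra height H to the water
    vx = v0 * math.cos(theta)
    vy2 = (v0 * math.sin(theta))**2 + 2 * g * H
    return 0.5 * m * (vx**2 + vy2)

def ke_at_apex(m, v0, theta):
    # vy = 0 at the highest point, so only the horizontal component contributes
    return 0.5 * m * (v0 * math.cos(theta))**2

m, v0, H, g = 2.0, 30.0, 40.0, 9.8
# KE at the water equals 0.5*m*v0**2 + m*g*H for EVERY launch angle (answer: H and V0 only)
for theta in (0.2, 0.7, 1.2):
    assert abs(ke_at_water(m, v0, theta, H, g) - (0.5*m*v0**2 + m*g*H)) < 1e-9
```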
Math Help
May 25th 2009, 05:47 AM #1
Mar 2006
Cramer Rao
hello there,
I need some help with this question.
( look picture attached )
find the Cramer Rao lower bound for the unbiased estimator of theta.
cheers !
Assuming the Xi are independent... This is how I've been taught. Dunno how you've learnt it...
We have the joint pdf of the Xi equal to :
$g(x,\theta)=\prod_{i=1}^n \exp\left(-(x_i-\theta)-\exp\left(-(x_i-\theta)\right)\right)=\prod_{i=1}^n \exp\left(\theta-x_i-\exp\left(\theta-x_i\right)\right)$
$=\exp\left(\sum_{i=1}^n [\theta-x_i-e^{\theta-x_i}]\right)$
$=\exp\left(n\theta-\sum_{i=1}^n x_i-\sum_{i=1}^n e^{\theta-x_i}\right)$
$\Rightarrow \log(g(x,\theta))=n\theta-\sum_{i=1}^n x_i-\sum_{i=1}^n e^{\theta-x_i}$
$\Rightarrow \frac{\partial \log(g(x,\theta))}{\partial \theta}=n-\sum_{i=1}^n e^{\theta-x_i}=n-e^{\theta} \sum_{i=1}^n e^{-x_i}$
An estimator $\hat{\theta}$ will be a solution of $\frac{\partial \log(g(x,\theta))}{\partial \theta}\Big|_{\theta=\hat{\theta}}=0$
Thus $e^{\hat{\theta}}\sum_{i=1}^n e^{-x_i}=n \Rightarrow \hat{\theta}=\log\left(\frac{n}{\sum_{i=1}^n e^{-x_i}}\right)$
Wow...now, that's ugly !
I've been taught to find Fisher's information, $I(\theta)$ and h such that $h(\theta)=\mathbb{E}(\hat{\theta})$
(if it's an unbiased estimator, then h=Id)
And then Cramer Rao's lower bound is $\frac{[h'(\theta)]^2}{I(\theta)}$
it is ugly !
anyone knows how to finish it ?
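For what it's worth, here is one way to finish (my sketch, not part of the original thread): the density $f(x;\theta)=\exp(-(x-\theta)-e^{-(x-\theta)})$ is a Gumbel location family, and with $Y=e^{\theta-X}\sim\text{Exp}(1)$ the per-observation score is $1-Y$, so $I_1(\theta)=\mathbb{E}[(1-Y)^2]=\operatorname{Var}(Y)=1$. Hence $I_n(\theta)=n$ and, since the estimator is unbiased ($h=\mathrm{Id}$, $h'=1$), the Cramér–Rao lower bound is $1/n$. A quick Monte Carlo sanity check (names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n_mc = 0.0, 2_000_000
# sample X from f(x; theta) = exp(-(x - theta) - e^{-(x - theta)}),
# i.e. a Gumbel location family: numpy's gumbel uses exactly this density
x = rng.gumbel(loc=theta, scale=1.0, size=n_mc)

score = 1.0 - np.exp(theta - x)   # d/dtheta log f, per observation
fisher_mc = np.mean(score**2)     # Monte Carlo estimate of I_1(theta); should be near 1
```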
st: RE: Problem putting enclosed brackets.
st: RE: Problem putting enclosed brackets.
From "Nick Cox" <n.j.cox@durham.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: Problem putting enclosed brackets.
Date Thu, 1 Apr 2010 11:53:34 +0100
The problem is with double quotation marks "", informally often known as (double) quotes.
(The word brackets even at its most generous doesn't I think extend beyond
round brackets or parentheses ()
[square] brackets []
and curly brackets or braces {}.)
The problem is that Stata uses " " in two ways, as string delimiters and as literal characters. First time round, the " " are being interpreted as delimiters and as such stripped.
Although I tried a few solutions using compound double quotes `" "' I also failed to get precisely what Amadou wants. So, I am tempted to change the question. Why do you want precisely this? I suspect that there are other ways of getting the same ultimate result.
Amadou B. DIALLO, PhD.
I want to have the following labels of my variables enclosed into
encapsulated brackets but the code fails:
Instead of having the following desired form:
"Augmenté" "Inchangé" "Diminué" "Non concerné"
I have :
Augmenté "Inchangé" "Diminué" "Non concerné"
I.e. stata keeps ignoring the first label value.
My code is as follows:
qui foreach i of local vars {
loc vall : val la `i'
if "`vall'" ~= "" {
levelsof `i', l(l)
foreach k of local l {
loc lab : lab `vall' `k'
loc names `names' "`lab'" // loc names `names' "`: lab `vall' `k''"
di `"`names'"'
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Hi Justin. This may not help, but who knows?
Define the bottom or top of the parabola at (d,D)
I'll rename your (x1,y1) to be (a,A), just because I feel like it.
And (x2,y2) is (b,B). And (x3,y3) is (c,C).
Also your "a" multiplier, I will call "N", as it makes the parabola narrower.
Now if you get the point (d,D) and find N, then the equation for the parabola will be
(y-D) = N (x-d)^2
Okay, let's begin:
There are distance relationships in a parabola to do with squares,
hence we can write these equations:
N(a-d)^2 = |A-D|
N(b-d)^2 = |B-D|
N(c-d)^2 = |C-D|
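An alternative route (my own sketch; the function name and the linear-algebra detour are mine, not part of the post above): fit y = N·x² + q·x + r through the three points with a linear solve, then read the vertex (d, D) off the coefficients, since d = -q/(2N) and D = r - q²/(4N).

```python
import numpy as np

def parabola_through(p1, p2, p3):
    """Fit y = N*x^2 + q*x + r through three points, then return (N, d, D)
    so the vertex form is y - D = N*(x - d)^2."""
    (a, A), (b, B), (c, C) = p1, p2, p3
    M = np.array([[a*a, a, 1.0],
                  [b*b, b, 1.0],
                  [c*c, c, 1.0]])
    N, q, r = np.linalg.solve(M, np.array([A, B, C], dtype=float))
    d = -q / (2.0 * N)           # x-coordinate of the vertex
    D = r - q*q / (4.0 * N)      # y-coordinate of the vertex
    return N, d, D

# e.g. three points lying on y - 1 = 2*(x - 3)^2:
N, d, D = parabola_through((2, 3), (3, 1), (5, 9))
```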
Line touches curve at only one point
February 25th 2011, 04:01 AM #1
Super Member
Dec 2009
Line touches curve at only one point
Sketch the curve $C$ with equation $y^2+4x^2=4+2y$(i have no problem with this part)
If the line $y=2x+a$ touches the curve $C$ at only one point, show that $a^2-2a-9=0$
Substitute $\displaystyle y = 2x + a$ into the equation for $\displaystyle C$ and simplify. This will give you a Quadratic Equation.
There will only be one intersection when the discriminant of this quadratic $\displaystyle = 0$.
1. Calculate the points of intersection. (Replace the y in the 1st equation by the expression for y from the 2nd equation.)
$(2x+a)^2+4x^2=4+2(2x+a)~\implies~8x^2 + 4x(a - 1) + a^2 - 2a -4 = 0$
2. Solve for x: (Use the quadratic formula)
$x = \dfrac{-4(a-1)\pm\sqrt{(4(a-1))^2-4 \cdot 8 \cdot (a^2-2a-4)}}{2 \cdot 8}$
Expand the brackets in the root:
$x = \dfrac{-4(a-1)\pm\sqrt{-16(a^2-2a-9)}}{2 \cdot 8}$
3. You'll get only one point of intersection (the tangent point!) if the term in the root equals zero.
1. Calculate the points of intersection. (Replace the y in the 1st equation by the expression for y from the 2nd equation.)
$(2x+a)^2+4x^2=4+2(2x+a)~\implies~8x^2 + 4x(a - 1) + a^2 - 2a -4 = 0$
2. Solve for x: (Use the quadratic formula)
$x = \dfrac{-4(a-1)\pm\sqrt{(4(a-1))^2-4 \cdot 8 \cdot (a^2-2a-4)}}{2 \cdot 8}$
Expand the brackets in the root:
$x = \dfrac{-4(a-1)\pm\sqrt{-16(a^2-2a-9)}}{2 \cdot 8}$
3. You'll get only one point of intersection (the tangent point!) if the term in the root equals zero.
thanks! but how do i show that $a^2-2a-9=0$?
As was pointed out twice, if there's only one point of intersection, then the discriminant of your resulting quadratic has to be 0.
The discriminant is all the stuff under the square root.
If the discriminant of the quadratic equation is negative, there will be no (real number) solution: the line will NOT intersect the curve. If the discriminant is positive, there will be two (real number) solutions: the line will intersect the curve twice. If the discriminant is zero, there will be exactly one (real number) solution: the line will intersect the curve exactly once. And that is what you were told should happen.
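The algebra can also be checked mechanically (a sketch using SymPy, not part of the original thread): substitute y = 2x + a into the curve, collect the quadratic in x, and verify the discriminant vanishes exactly when a² - 2a - 9 = 0.

```python
import sympy as sp

x, a = sp.symbols('x a')

# substitute y = 2x + a into y^2 + 4x^2 = 4 + 2y and move everything to one side
quadratic = sp.expand((2*x + a)**2 + 4*x**2 - 4 - 2*(2*x + a))  # 8x^2 + 4(a-1)x + a^2 - 2a - 4

disc = sp.discriminant(quadratic, x)
# tangency (exactly one intersection) <=> disc = 0 <=> a^2 - 2a - 9 = 0
factored = sp.factor(disc)   # -16*(a**2 - 2*a - 9)
```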
Defining the midpoint of a colormap in matplotlib
up vote 12 down vote favorite
I want to set the middle point of a colormap, ie my data goes from -5 to 10, i want zero to be the middle. I think the way to do it is subclassing normalize and using the norm, but i didn't find any
example and it is not clear to me, what exactly i have to implement.
python matplotlib
this is called a "diverging" or "bipolar" colormap, where the center point of the map is important and the data goes above and below this point. sandia.gov/~kmorel/documents/ColorMaps – endolith
May 31 '12 at 4:20
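(Editorial note, not part of the original thread: newer matplotlib, 3.2 and later if I recall correctly, ships matplotlib.colors.TwoSlopeNorm, which does exactly this without subclassing. A sketch for the -5…10 range in the question:)

```python
import matplotlib.colors as mcolors

# TwoSlopeNorm maps vmin..vcenter onto 0..0.5 and vcenter..vmax onto 0.5..1,
# so the colormap's midpoint lands on vcenter (here: zero)
norm = mcolors.TwoSlopeNorm(vmin=-5.0, vcenter=0.0, vmax=10.0)

low, mid, high = float(norm(-5.0)), float(norm(0.0)), float(norm(10.0))
# then pass it to the plot call, e.g.: plt.imshow(data, cmap='RdBu_r', norm=norm)
```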
4 Answers
I know this is late to the game, but I just went through this process and came up with a solution that is perhaps less robust than subclassing Normalize, but much simpler. I thought it'd be good to share it here for posterity.
The function
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import AxesGrid

def shiftedColorMap(cmap, start=0, midpoint=0.5, stop=1.0, name='shiftedcmap'):
    '''
    Function to offset the "center" of a colormap. Useful for
    data with a negative min and positive max when you want the
    middle of the colormap's dynamic range to be at zero.

    cmap : The matplotlib colormap to be altered.
    start : Offset from the lowest point in the colormap's range.
        Defaults to 0.0 (no lower offset). Should be between
        0.0 and 1.0.
    midpoint : The new center of the colormap. Defaults to
        0.5 (no shift). Should be between 0.0 and 1.0. In
        general, this should be 1 - vmax/(vmax + abs(vmin)).
        For example, if your data range from -15.0 to +5.0 and
        you want the center of the colormap at 0.0, `midpoint`
        should be set to 1 - 5/(5 + 15), or 0.75.
    stop : Offset from the highest point in the colormap's range.
        Defaults to 1.0 (no upper offset). Should be between
        0.0 and 1.0.
    '''
    cdict = {
        'red': [],
        'green': [],
        'blue': [],
        'alpha': []
    }

    # regular index to compute the colors
    reg_index = np.linspace(start, stop, 257)

    # shifted index to match the data
    shift_index = np.hstack([
        np.linspace(0.0, midpoint, 128, endpoint=False),
        np.linspace(midpoint, 1.0, 129, endpoint=True)
    ])

    for ri, si in zip(reg_index, shift_index):
        r, g, b, a = cmap(ri)
        cdict['red'].append((si, r, r))
        cdict['green'].append((si, g, g))
        cdict['blue'].append((si, b, b))
        cdict['alpha'].append((si, a, a))

    newcmap = matplotlib.colors.LinearSegmentedColormap(name, cdict)
    return newcmap
An example
biased_data = np.random.random_integers(low=-15, high=5, size=(37, 37))

orig_cmap = matplotlib.cm.coolwarm
shifted_cmap = shiftedColorMap(orig_cmap, midpoint=0.75, name='shifted')
shrunk_cmap = shiftedColorMap(orig_cmap, start=0.15, midpoint=0.75, stop=0.85, name='shrunk')

fig = plt.figure(figsize=(6, 6))
grid = AxesGrid(fig, 111, nrows_ncols=(2, 2), axes_pad=0.5,
                label_mode="1", share_all=True,
                cbar_location="right", cbar_mode="each",
                cbar_size="7%", cbar_pad="2%")

# normal cmap
im0 = grid[0].imshow(biased_data, interpolation="none", cmap=orig_cmap)
grid[0].set_title('Default behavior (hard to see bias)', fontsize=8)

im1 = grid[1].imshow(biased_data, interpolation="none", cmap=orig_cmap, vmax=15, vmin=-15)
grid[1].set_title('Centered zero manually,\nbut lost upper end of dynamic range', fontsize=8)

im2 = grid[2].imshow(biased_data, interpolation="none", cmap=shifted_cmap)
grid[2].set_title('Recentered cmap with function', fontsize=8)

im3 = grid[3].imshow(biased_data, interpolation="none", cmap=shrunk_cmap)
grid[3].set_title('Recentered cmap with function\nand shrunk range', fontsize=8)

for ax in grid:
    pass  # (loop body lost in extraction; it presumably hid ticks / attached colorbars)
Results of the example:
This should be added to matplotlib! – G M Apr 11 at 12:55
This is better than my own solution, thanks! – tillsten Apr 15 at 11:54
Not sure if you are still looking for an answer. For me, trying to subclass Normalize was unsuccessful. So I focused on manually creating a new data set, ticks and tick-labels to get the
effect I think you are aiming for.
I found the scale module in matplotlib that has a class used to transform line plots by the 'symlog' rules, so I use that to transform the data. Then I scale the data so that it goes from 0 to 1 (what Normalize usually does), but I scale the positive numbers differently from the negative numbers. This is because your vmax and vmin might not be the same, so .5 -> 1 might cover a larger positive range than .5 -> 0, the negative range, does. It was easier for me to create a routine to calculate the tick and label values.
Below is the code and an example figure.
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl       # NB: the original "import matplotlib.mpl as mpl" was removed from matplotlib long ago
import matplotlib.scale as scale

NDATA = 50
VMIN, VMAX, LINTHRESH = -5.0, 10.0, 1e-3   # these constants were lost in extraction; placeholder values

def makeTickLables(vmin, vmax, linthresh):
    '''
    make two lists, one for the tick positions, and one for the labels
    at those positions. The number and placement of positive labels is
    different from the negative labels.
    '''
    nvpos = int(np.log10(vmax)) - int(np.log10(linthresh))
    nvneg = int(np.log10(np.abs(vmin))) - int(np.log10(linthresh)) + 1
    ticks = []
    labels = []
    lavmin = np.log10(np.abs(vmin))
    lvmax = np.log10(np.abs(vmax))
    llinthres = int(np.log10(linthresh))
    # f(x) = mx+b
    # f(llinthres) = .5
    # f(lavmin) = 0
    m = .5 / float(llinthres - lavmin)
    b = (.5 - llinthres*m - lavmin*m) / 2
    for itick in range(nvneg):
        pass  # (per-tick tick/label appends lost in extraction)
    # add vmin tick
    ticks.append(b + lavmin*m)

    # f(x) = mx+b
    # f(llinthres) = .5
    # f(lvmax) = 1
    m = .5 / float(lvmax - llinthres)
    b = m * (lvmax - 2*llinthres)
    for itick in range(1, nvpos):
        pass  # (per-tick tick/label appends lost in extraction)
    # add vmax tick (the append itself was lost in extraction)
    return ticks, labels

data = (VMAX - VMIN) * np.random.random((NDATA, NDATA)) + VMIN

# define a scaler object that can transform to 'symlog'
scaler = scale.SymmetricalLogScale.SymmetricalLogTransform(10, LINTHRESH)
datas = scaler.transform(data)

# scale datas so that 0 is at .5
# so two separate scales, one for positive and one for negative
data2 = np.where(np.greater(data, 0),                          # NOTE: the two branches were lost in
                 .5 + .5*datas/datas.max(),                    # extraction; this is a plausible
                 .5 - .5*np.abs(datas)/np.abs(datas.min()))    # reconstruction of the intent

ticks, labels = makeTickLables(VMIN, VMAX, LINTHRESH)  # (call site lost in extraction; restored)

cmap = mpl.cm.jet
fig = plt.figure()
ax = fig.add_subplot(111)
im = ax.imshow(data2, cmap=cmap, vmin=0, vmax=1)
cbar = plt.colorbar(im, ticks=ticks)
Feel free to adjust the "constants" (eg VMAX) at the top of the script to confirm that it behaves well.
Thanks for your suggestion; as seen below, I had success in subclassing. But your code is still very useful for making the ticklabels right. – tillsten Oct 12 '11 at 20:27
Ok, I was able to subclass Normalize. The code from Yann is still very helpful to make the ticks of the colorbar right, because the automatic ticks don't work very well with this norm.
Thanks for everyone's help:

import numpy as np
import numpy.ma as ma
from matplotlib import cbook
from matplotlib.colors import Normalize

class myNorm(Normalize):
    def __init__(self, linthresh, vmin=None, vmax=None, clip=False):
        # (the body of __init__ was lost in extraction; presumably it stored
        # linthresh and delegated the rest to Normalize, as sketched here)
        self.linthresh = linthresh
        Normalize.__init__(self, vmin, vmax, clip)

    def __call__(self, value, clip=None):
        if clip is None:
            clip = self.clip

        result, is_scalar = self.process_value(value)

        vmin, vmax = self.vmin, self.vmax
        if vmin > 0:
            raise ValueError("minvalue must be less than 0")
        if vmax < 0:
            raise ValueError("maxvalue must be more than 0")
        elif vmin == vmax:
            result.fill(0)  # Or should it be all masked? Or 0.5?
        else:
            vmin = float(vmin)
            vmax = float(vmax)
            if clip:
                mask = ma.getmask(result)
                result = ma.array(np.clip(result.filled(vmax), vmin, vmax),
                                  mask=mask)
            # ma division is very slow; we can take a shortcut
            resdat = result.data
            resdat[resdat > 0] /= vmax
            resdat[resdat < 0] /= -vmin
            # NOTE: as extracted, this maps negatives to [-1, 0) and positives to (0, 1];
            # a recentering step (e.g. resdat = resdat/2. + .5) seems to have been lost
            result = np.ma.array(resdat, mask=result.mask, copy=False)

        if is_scalar:
            result = result[0]
        return result

    def inverse(self, value):
        if not self.scaled():
            raise ValueError("Not invertible until scaled")
        vmin, vmax = self.vmin, self.vmax
        # (this method arrived garbled in extraction; the scalar branch below
        # is kept as posted, and the array branch was lost)
        if cbook.iterable(value):
            val = ma.asarray(value)
            return val
        if value < 0.5:
            return 2 * value * (-vmin)
        return value * vmax
add comment
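For the common case of a diverging colormap centered on a data value, the two-slope mapping that myNorm implements can be sketched as a plain function. The names below are mine for illustration, not from tillsten's class (and newer matplotlib versions ship matplotlib.colors.TwoSlopeNorm for exactly this job):

```python
import numpy as np

def two_slope_norm(data, vmin, vmax, midpoint=0.0):
    """Map data to [0, 1] so that `midpoint` lands at 0.5.

    Values in [vmin, midpoint] map linearly onto [0, 0.5];
    values in [midpoint, vmax] map linearly onto [0.5, 1].
    """
    data = np.asarray(data, dtype=float)
    below = 0.5 * (data - vmin) / (midpoint - vmin)
    above = 0.5 + 0.5 * (data - midpoint) / (vmax - midpoint)
    return np.clip(np.where(data < midpoint, below, above), 0.0, 1.0)
```

Feeding the result to imshow with vmin=0, vmax=1 then puts the colormap's middle color exactly at the chosen midpoint, with separate linear scales on each side.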
It's easiest to just use the vmin and vmax arguments to imshow (assuming you're working with image data) rather than subclassing matplotlib.colors.Normalize.
import numpy as np
import matplotlib.pyplot as plt
data = np.random.random((10,10))
# Make the data range from about -5 to 10
data = 10 / 0.75 * (data - 0.25)
plt.imshow(data, vmin=-10, vmax=10)
1 Is it possible to have the example updated to a gaussian curve so we can better see the gradation of the color? – Dat Chu Sep 13 '11 at 15:34
I don't like this solution, because it doesn't use the full dynamic range of available colors. Also, I would like to see an example of subclassing Normalize to build a symlog kind of normalization. – tillsten Sep 13 '11 at 15:43
2 @tillsten - I'm confused, then... You can't use the full dynamic range of the colorbar if you want 0 in the middle, right? You're wanting a non-linear scale then? One scale for values above 0, one for values below? In that case, yeah, you'll need to subclass Normalize. I'll add an example in just a bit (assuming someone else doesn't beat me to it...). – Joe Kington Sep 13 '11 at 15:49
@Joe: You are right, it is not linear (more exactly, two linear parts). Using vmin/vmax, the color range for values smaller than -5 is not used (which makes sense in some applications, but not mine). – tillsten Sep 13 '11 at 16:00
Your example should use some kind of smoothly-varying function, not random data, and the high points should be farther from zero than the low points, to show how you moved the center. And you shouldn't use the jet colormap for pretty much anything, ever. jwave.vt.edu/~rkriz/Projects/create_color_table/color_07.pdf – endolith May 31 '12 at 4:23
EC6012, International Monetary Economics, Problem Set 1
Apologies for the delay in posting this problem set, I've changed the due date to compensate.
This problem set is worth 10% of your final mark. All questions carry equal marks, and the date for submission is Friday, March 6th, to the departmental office, KB 3-22a, by 4pm. Work in groups, but
submit alone.
Email me if you've questions on the problem set. It is a mix of theoretical, data work, and applied policy analysis, exactly the stuff we want you to learn in this course.
Update: Some sensible emails suggest I help you prepare for these questions a bit more, in order that I not break the class :), so here are some hints.
For question 1, read any introductory textbook, this and this.
For question 2, read the notes, and work through to the final equation with the $I(r)$ term included, then just see what happens to things when $r$ goes up.
For question 3, plug values of 1, 2, 3, 4, etc, into $u$ and $v$, and graph it. You'll find interesting equilibrium values. Helpful notes on the maths are here, here, and here. I'm not looking for an
algebraic solution though, just the graph will do.
For question 4, use your data skills from the trading floor and the notes from lecture 2.
For question 5, read the paper, tell me what you think.
I'll be on hand by email at any time to answer questions about the problem sets. Promise.
derivative question
For the function f(x) = abs value(x^2-3), why is the function not differentiable at x = (3)^.5? Thanks
Hey, it's not differentiable at 3^0.5 because: if x = 3^0.5, then f(x) = |(3^0.5)^2 - 3| = |3 - 3| = 0. If we plot the plain x^2 - 3, we find that it crosses the x-axis at this point, and is negative just to the left of it. Taking the absolute value flips the negative part of the graph upward, which leaves a corner at x = 3^0.5. To the left of the corner f(x) = 3 - x^2, so the derivative approaches -2(3^0.5) there; to the right f(x) = x^2 - 3, so it approaches +2(3^0.5). The limit of the derivative approaching from the left does not equal the limit of the derivative approaching from the right, and hence the derivative does not exist. Here is a graph of the absolute value function.
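The mismatch between the two one-sided slopes is easy to verify numerically. A quick sketch (my own check, not part of the original answer):

```python
import math

def f(x):
    return abs(x * x - 3)

a, h = math.sqrt(3), 1e-6
left = (f(a) - f(a - h)) / h    # backward difference: slope from the left
right = (f(a + h) - f(a)) / h   # forward difference: slope from the right

# Left of sqrt(3), f(x) = 3 - x^2, so the slope tends to -2*sqrt(3) ~ -3.46;
# right of sqrt(3), f(x) = x^2 - 3, so it tends to +2*sqrt(3) ~ +3.46.
print(left, right)
```

The two limits disagree, so the derivative at sqrt(3) does not exist, which is exactly the corner described above.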
Boyle Heights, CA Calculus Tutor
Find a Boyle Heights, CA Calculus Tutor
...I have been academically intimidated, and occasionally over-matched, having attended very competitive undergraduate and graduate schools myself. I also recognize which advisors were able to
help me and which ones were not. In turn, I've learned how to be patient while remaining resolute in jointly pursuing a scholastic goal.
58 Subjects: including calculus, English, reading, writing
...I have a lot of experience using SAS to deal with medical related problems. I have base and advanced certificate in SAS. I have a master's degree in Biostatistics and I am currently a doctoral
student in Biostatistics.
10 Subjects: including calculus, statistics, Chinese, SAT math
...My passion for mathematics is contagious, and I'm thrilled when my former students further pursue mathematics in education or profession.In addition to tutoring multiple students in Algebra 1,
I have taught the equivalent courses College Algebra, Elementary Algebra 1, and Elementary Algebra 2 at ...
12 Subjects: including calculus, geometry, algebra 1, algebra 2
...The example problems in a practice exam cover ALL of the material you will see on the test, (which, by the way, ONLY covers up to Algebra). So now, you just have to understand the limited
number of concepts that you will be tested on. If you choose me to tutor you in SAT Math, I recommend you bu...
18 Subjects: including calculus, chemistry, algebra 2, SAT math
...Right now, I'm a graduate in EE major in University of Southern California, and I specialize in tutoring math, engineering, and courses like Java, C languages, SQL and web Design. For standard
exams, I'm holding a great score in SAT, TOEFL, GRE and ASVAB (I acquired 94/100 score at last). I hold...
40 Subjects: including calculus, chemistry, English, writing
Question about computing the number of nodes in a Binary Tree
A Computer Science Professor at Stanford University used the following function to compute the number of nodes in a Binary Tree.
// Compute the number of nodes in a tree.
int size(struct node* node) {
    if (node == NULL) {
        return(0);
    } else {
        return(size(node->left) + 1 + size(node->right));
    }
}
Why use this method instead of doing something like a pre-order traversal?
Well, whichever way it's traversed, the result should be the same. Also, they all have to check each node one by one, so they are all "equally fast" in terms of order of magnitude.
Why use this method instead of doing something like a pre-order traversal?
You never said whether he/she said to use this one over a different method, so use whatever one you want! The only real difference is that this one uses recursion. Recursion has the overhead of
making many function calls, so it might or might not be a little slower than say, an in-order (or other type of) traversal using an iterative approach.
You don't care about the order, so why bother going in order?
Perhaps you could show what code you were thinking of for a preorder traversal. One might consider this a preorder traversal:
int size(struct node* node) {
    if (node == NULL) {
        return(0);
    } else {
        return(1 + size(node->left) + size(node->right));
    }
}
Note the moving of the 1.
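Since the thread brings up recursion overhead: the same count can also be done iteratively with an explicit stack, which avoids deep call stacks on degenerate trees. A sketch of the idea (in Python rather than C, with a made-up Node class, just to show the shape of it):

```python
class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def size_iterative(root):
    """Count the nodes of a binary tree without recursion."""
    count, stack = 0, [root]
    while stack:
        node = stack.pop()
        if node is None:
            continue
        count += 1           # every popped non-null node is counted once
        stack.append(node.left)
        stack.append(node.right)
    return count
```

As with the recursive version, the visit order doesn't matter for a count, so any traversal order gives the same answer.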
Low delay software-based bandpass filters (for dummies [me = dummy]).
There are 11 messages in this thread.
Low delay software-based bandpass filters (for dummies [me = dummy]). - 2007-04-03 16:01:00
I want to design a real time, multiband gate for audio signals, all in
software (as a VST module). I thought it would be relatively simple
but both the problem itself and the results of my experiments (through
my headphones) are making my ears bleed. So I have a number of
My initial thought was to use an FFT to split up the audio, apply the
gate effects, and take the inverse. FFTs are the only real filter
types I'm familiar with enough to use. Obviously this has some
problems, the biggest of which is the incredible output latency when I
use a block size large enough to get the frequency resolution I want.
So, consider a software multiband compressor (Waves C4, for example).
I watch these compressors work, and the latency is near 0. When
specifying bands for one of these effects, typically you just specify
passband start and stop frequencies for each band and a magic Q value
that determines how much each band overlaps (higher Q means the
frequency response approaches a square, sort of like increasing the
order of a FIR/IIR filter).
Additionally, one thing that boggles my mind, is that the multiband
compressors can separate the signal into multiple frequency bands, and
then when you mix the bands all back together, everything is correct
(for example, there are no "peaks" in frequency response in the
overlap between bands caused by the frequency response for the two
bands at that point adding up to something > 1, if that makes any
I've been digging into FIR and IIR filters not too long now (reading
stuff on the internet for the last couple of days), but I'm not having
much luck with the implementation of them... although I think one of
these seems like the way to go... and I've been leaning towards
Butterworth filters just because the frequency response graphs look
So, questions, about this stuff in general (these aren't exclusively
related to the gate effect I'm trying to design):
1) What type of filters are these effects using that gives them both
very low latency, and the ability to recombine separated bands without
distorting the frequency response?
2) I've never noticed any obvious phase distortions when going through
these types of effects (not just multiband compressors, but even
realtime simple low/band/high pass filters). I have a VST lowpass
filter, for example, that if I set the cutoff frequency as high as it
will go (i.e. the output should be identity) and run a square wave
through it, the output is still a square wave, no ripples. Again, what
kind of filter can do this?
3) Why does anybody care if a filter has linear phase response? It
seems to me that any non-constant phase response will distort the
output wave shape in some undesirable way, so I don't understand why
linear phase response is considered better than non-linear phase
response -- all non-constant phase response curves seem equally "bad"
in general to me.
4) I'm having a ton of trouble finding tips/hints/examples/
explanations for the implementation of non-hardware IIR filters, and
my translations from descriptions of hardware based ones to software
seem to be failing (not going for blinding speed just yet; just trying
to get something working to prove to myself that I know what's going
on). Inevitably, all my output signals quickly degenerate into
earsplitting Nyquist frequency tones. The impulse response of my
filters is always immediate Nyquist noise, converging to 0 after a
number of samples equal to the order of the filter (for my IIR filters
I'm just using some web applet to design them). Not asking what
specifically I'm doing wrong, since it could be anything, but just
saying, I have no idea what I'm doing... and it's frustrating because
the concept of applying an IIR filter to a signal -seems- simple.
Now from what I've been reading, in my case a big advantage of FIR
filters seems to be better phase response, while a big advantage of
IIR filters seems to be that they can accomplish the same thing with a
lower order, and therefore a lower output latency. But I don't know.
Any advice would be appreciated. If I happened to use correct
terminology for anything above, I assure you it was purely by
coincidence and by no means implies that I understand what I'm saying.
Re: Low delay software-based bandpass filters (for dummies [me = dummy]). - jason.cipriani@gmail.com - 2007-04-03 16:07:00
Oh, and another example of an audio effect that seems magic to me is a
realtime, software, parametric EQ. I have an EQ here that can have up
to 16 filters in it, and you can make them all any combination of low,
high, band, or notch pass filters. It runs with virtually 0 latency
and no phase distortions. I can't comprehend how this is possible;
it's the same deal as with the multiband compressors and simple
filters that I see. What are they doing here?
Re: Low delay software-based bandpass filters (for dummies [me = dummy]). - Rune Allnor - 2007-04-03 17:37:00
On 3 Apr, 22:01, jason.cipri...@gmail.com wrote:
> I want to design a real time, multiband gate for audio signals, all in
> software (as a VST module). I thought it would be relatively simple
> but both the problem itself and the results of my experiments (through
> my headphones) are making my ears bleed. So I have a number of
> questions.
> My initial thought was to use an FFT to split up the audio, apply the
> gate effects, and take the inverse. FFTs are the only real filter
> types I'm familiar with enough to use. Obviously this has some
> problems, the biggest of which is the incredible output latency when I
> use a block size large enough to get the frequency resolution I want.
What "frequency resolution" would this be? A well-designed filter
will work on continuous bands, not isolated frequency components.
> Additionally, one thing that boggles my mind, is that the multiband
> compressors can separate the signal into multiple frequency bands, and
> then when you mix the bands all back together, everything is correct
> (for example, there are no "peaks" in frequency response in the
> overlap between bands caused by the frequency response for the two
> bands at that point adding up to something > 1, if that makes any
> sense).
These are most probably filters that have been carefully
designed for this sort of perfect reconstruction.
> I've been digging into FIR and IIR filters not too long now (reading
> stuff on the internet for the last couple of days), but I'm not having
> much luck with the implementation of them...
Get Rick Lyons' book.
> 1) What type of filters are these effects using that gives them both
> very low latency, and the ability to recombine separated bands without
> distorting the frequency response?
The term "perfect reconstruction filter banks" comes to mind.
> 2) I've never noticed any obvious phase distortions when going through
> these types of effects (not just multiband compressors, but even
> realtime simple low/band/high pass filters). I have a VST lowpass
> filter, for example, that if I set the cutoff frequency as high as it
> will go (i.e. the output should be identity) and run a square wave
> through it, the output is still a square wave, no ripples. Again, what
> kind of filter can do this?
None. There is always some ripple in the frequency domain
or overshoot in time domain.
> 3) Why does anybody care if a filter has linear phase response? It
> seems to me that any non-constant phase response will distort the
> output wave shape in some undesirable way, so I don't understand why
> linear phase response is considered better than non-linear phase
> response -- all non-constant phase response curves seem equally "bad"
> in general to me.
Nope. Linear phase means all signal components are delayed
an equal amount through the filter. If a 1 Hz sine is delayed
by 1 s through a system, the phase delay corresponds to 360
degrees. If a 2 Hz sine is delayed by 1 second, this corresponds
to a phase delay of 720 degrees.
> 4) I'm having a ton of trouble finding tips/hints/examples/
> explanations for the implementation of non-hardware IIR filters, and
> my translations from descriptions of hardware based ones to software
> seem to be failing (not going for blinding speed just yet; just trying
> to get something working to prove to myself that I know what's going
> on).
Get Rick Lyons' book.
> Inevitably, all my output signals quickly degenerate into
> earsplitting Nyquist frequency tones. The impulse response of my
> filters is always immediate Nyquist noise, converging to 0 after a
> number of samples equal to the order of the filter (for my IIR filters
> I'm just using some web applet to design them). Not asking what
> specifically I'm doing wrong, since it could be anything, but just
> saying, I have no idea what I'm doing... and it's frustrating because
> the concept of applying an IIR filter to a signal -seems- simple.
Get Rick Lyons' book.
> Now from what I've been reading, in my case a big advantage of FIR
> filters seems to be better phase response, while a big advantage of
> IIR filters seems to be that they can accomplish the same thing with a
> lower order, and therefore a lower output latency. But I don't know.
Nope, not lower latency, lower computational load.
Meaning you can get the same job done with simpler/cheaper/
less power-hungry chips.
> Any advice would be appreciated. If I happened to use correct
> terminology for anything above, I assure you it was purely by
> coincidence and by no means implies that I understand what I'm saying.
Get Rick Lyons' book.
Re: Low delay software-based bandpass filters (for dummies [me = dummy]). - maury - 2007-04-03 17:49:00
On Apr 3, 2:07 pm, "jason.cipri...@gmail.com"
<jason.cipri...@gmail.com> wrote:
> Oh, and another example of an audio effect that seems magic to me is a
> realtime, software, parametric EQ. I have an EQ here that can have up
> to 16 filters in it, and you can make them all any combination of low,
> high, band, or notch pass filters. It runs with virtually 0 latency
> and no phase distortions. I can't comprehend how this is possible;
> it's the same deal as with the multiband compressors and simple
> filters that I see. What are they doing here?
> -Jason
Look up frequency sampling filters. I use them for sub-band
decomposition. The delay through the filter is one sample.
Maurice Givens
Re: Low delay software-based bandpass filters (for dummies [me = dummy]). - jason.cipriani@gmail.com - 2007-04-03 19:14:00
> Look up frequency sampling filters. I use them for sub-band
> decomposition. The delay through the filter is one sample.
> Maurice Givens
Thanks Maury. It seems I've opened up a new can of worms here, so
while I'm waiting on this Rick Lyons book looks like I'll be messing
around with this. I've got 5 PDFs printed right next to me so wish me
luck; this looks like exactly what I was looking for and also seems to
go hand in hand with perfect reconstruction filter banks as well.
Re: Low delay software-based bandpass filters (for dummies [me = dummy]). - jason.cipriani@gmail.com - 2007-04-03 19:30:00
Thanks to you Rune, I will have a new book soon, had a nice head-
slapping epiphany moment (wrt linear phase), and used a good half an
ink cartridge printing PDFs from the internet. :)
Before I order this book, just to make sure, you are referring to
"Understanding Digital Signal Processing" (I'm assuming you don't mean
"Making Miniature Furniture" by Richard -A- Lyons)? Is the 2nd edition
the most current? Also, does this book cover perfect reconstruction
filters and frequency sampling filter design?
Oh, and:
> What "frequency resolution" would his be? A well-designed filter
> will work on continuous bands, not isolated frequency components.
I want to be able to set crossover frequencies between the bands as
arbitrarily as possible, and I also want to minimize latency and have
the overlap between the bands be reasonably small as well. By
"minimize latency" I mean no more than a few tenths of a millisecond.
So let's say at a 44.1kHZ sampling rate, I use an FFT of size 16.
That's about 0.4 ms, which is acceptable, but then since the FFT is
size 16, that means that what, I can only measure frequencies down
to... something like 2.8kHZ? (Period is 16/44100 seconds so frequency
is 44100/16). I don't know exactly but that isn't low enough, and so
there's not much I can do about low frequency components being sucked
up into the constant term. Also it's important that the response is
fast for the purposes of the gate: let's say you have an FFT that's, I
dunno, 100ms long as an extreme example. If you have a short sound
that's < 100ms long, to be able to clip the end of it, it seems like
you'd have to do a lot of windowing just to find where the power goes
below a threshold with acceptable accuracy, and then some more magic
to make sure that an entire 100ms block following that point isn't
affected (because really the gate my be dropping much less than 100ms
of audio... with a 100ms FFT block size I can't imagine how you would
accurately gate two, say, 25ms sounds that both occur within 100ms of
The frequency sampling FIR design, while I haven't dug into it yet,
seems like it will let me pick a fairly arbitrary frequency response
curve and design a filter to that specification. Whereas with an FFT,
if the block size is small enough to give me the latency I want, then
the band width is too large, and coming up with a nice arbitrary curve
becomes harder, especially if the subbands have vastly different
widths. I think.
So I guess I'll be getting Rick Lyons book and seeing how far I get
with this other stuff in the mean time.
Thanks a lot!
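As a side note, the block-size arithmetic in the post above checks out (numbers taken straight from the post):

```python
fs = 44100.0   # sample rate in Hz
N = 16         # FFT block length in samples

latency_ms = 1000.0 * N / fs   # one block of delay: about 0.36 ms
bin_spacing_hz = fs / N        # spacing between FFT bins: 2756.25 Hz

print(latency_ms, bin_spacing_hz)
```

So a 16-point FFT keeps the block latency comfortably under half a millisecond, but cannot resolve anything much below roughly 2.8 kHz, which is exactly the trade-off the post describes.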
Re: Low delay software-based bandpass filters (for dummies [me = dummy]). - jason.cipriani@gmail.com - 2007-04-04 00:57:00
> Also it's important that the response is
> fast for the purposes of the gate...
Ignore what I said here. For some reason I had it in my head that I'd
operate on frequency domain output. If I were to use FFTs I'd use them
just as bandpass filters then invert to get time domain subbands, and
do the gating on those, then combine them all at the end. In any case,
though, I don't think I can balance block length vs. useful frequency
resolution. Also CPU time starts to become an issue, I think, more
than it would with IR filters... I'd have to do the FFT once, and then
one inverse for each band, so for, say, 8 bands, that's 9 FFTs, and
with windowing and overlap on top of that, it would start to push the
limit of my machine. Plus there's lots of other stuff going on at the
same time, so I can't afford to end up dedicating an entire CPU core
to this one silly effect.
Re: Low delay software-based bandpass filters (for dummies [me = dummy]). - Rune Allnor - 2007-04-04 04:27:00
On 4 Apr, 01:30, "jason.cipri...@gmail.com"
> Thanks to you Rune, I will have a new book soon, had a nice head-
> slapping epiphany moment (wrt linear phase), and used a good half an
> ink cartridge printing PDFs from the internet. :)
> Before I order this book, just to make sure, you are referring to
> "Understanding Digital Signal Processing" (I'm assuming you don't
> "Making Miniature Furniture" by Richard -A- Lyons)?
> Is the 2nd edition
> the most current?
> Also, does this book cover perfect reconstruction
> filters and frequency sampling filter design?
It does cover frequency sampling; I doubt it covers the
filter banks, as these are more advanced stuff, but
I might be wrong.
Re: Low delay software-based bandpass filters (for dummies [me = dummy]). - Marc Brooker - 2007-04-04 04:39:00
j...@gmail.com wrote:
> Thanks Maury. It seems I've opened up a new can of worms here, so
> while I'm waiting on this Rick Lyons book looks like I'll be messing
> around with this. I've got 5 PDFs printed right next to me so wish me
> luck; this looks like exactly what I was looking for and also seems to
> go hand in hand with perfect reconstruction filter banks as well.
> Jason
While you are waiting for Lyon's book (which is certainly worth waiting
for), you might want to have a look at http://www.dspguide.com/. It's a
whole book on DSP (aimed at the beginner) and can be viewed for free. If
you like it, then buy a copy.
Other books worth owning for the DSP beginner are "Oppenheim, Schafer
and Buck" and "Proakis and Manolakis", depending on your budget.
Re: Low delay software-based bandpass filters (for dummies [me = dummy]). - Rick Lyons - 2007-04-04 10:36:00
On 3 Apr 2007 16:30:57 -0700, "j...@gmail.com"
<j...@gmail.com> wrote:
>So I guess I'll be getting Rick Lyons book and seeing how far I get
>with this other stuff in the mean time.
>Thanks a lot!
Hi Jason,
I sure hope you benefit from the book.
Just thought I'd mention a few thoughts about the
phrase "frequency sampling filter".
Some people use the phrase "frequency sampling filter"
to mean a filter designed by way of:
* define your desired filter's freq-domain
response samples
* take the inverse DFT of those freq-domain
response samples
* use the time-domain inverse DFT samples
as the coefficients in a tapped-delay
line, nonrecursive, FIR filter structure.
However! The original meaning of the phrase
"frequency sampling filter" meant a filter designed
by way of:
* define your desired filter's freq-domain
response samples
* use those freq-domain response samples as
coefficients in a parallel-bank of simple
2nd-order recursive filters.
Even though recursive structures are used, the
overall filter is both FIR and linear phase.
I cover this second design method in a fair amount
of detail in my book. I did so because not only
is that topic educational from a DSP standpoint,
in some filter applications an FIR filter
designed using the 2nd "frequency sampling filter"
definition will be more computationally efficient
than FIR filters designed using the currently
more popular Parks-McClellan FIR filter design method.
Best of Luck Jason,
When your copy of the book arrives, send me an
E-mail and I'll send the errata to you.
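A minimal numpy sketch of the first design style Rick describes (sample the desired response, take the inverse DFT, use the result as FIR taps). The function name and the particular lowpass spec are mine, not from the book; N is kept odd so the linear-phase term leaves the taps real:

```python
import numpy as np

def freq_sampling_lowpass(N, cutoff_bin):
    """Linear-phase lowpass FIR from N frequency-domain samples (N odd).

    Bins 0..cutoff_bin and their negative-frequency mirrors are set to 1,
    the rest to 0; the complex exponential encodes a delay of (N-1)/2
    samples, which is what makes the filter linear phase.
    """
    assert N % 2 == 1
    k = np.arange(N)
    A = np.zeros(N)
    A[:cutoff_bin + 1] = 1.0
    A[N - cutoff_bin:] = 1.0                      # mirror (negative) bins
    H = A * np.exp(-1j * np.pi * k * (N - 1) / N)
    return np.fft.ifft(H).real                    # imaginary part is ~0
```

By construction the filter's frequency response passes exactly through the sampled points; between the samples the response ripples, which is why transition-band samples or windowing are usually added in practice.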
M12 Q17

Question Stats: 45% (medium) difficulty; 45% (00:59) correct; based on 111 sessions

durgesh79 (joined 27 May 2008):
How many roots does this equation have?
sqrt(x^2+1) + sqrt(x^2+2) = 2
(A) 0
(B) 1
(C) 2
(D) 3
(E) 4
Source: GMAT Club Tests - hardest GMAT questions
Spoiler: OA
abhijit_sen (joined 10 Sep 2007):
Square the original equation:
x^2+1 + x^2+2 + 2sqrt[(x^2+1)(x^2+2)] = 4
=> 2sqrt[(x^2+1)(x^2+2)] = 1 - 2x^2
Square again:
4(x^4+3x^2+2) = 1 + 4x^4 - 4x^2
=> 8x^2 = -7
=> x^2 = -7/8
As x^2 is -ve, this equation cannot have a solution for any real value of x. Although complex-number solutions can be obtained, these are not part of the regular GMAT.
Answer A.
durgesh79 (joined 27 May 2008):
abhijit_sen wrote:
Square the original equation:
x^2+1 + x^2+2 + 2sqrt[(x^2+1)(x^2+2)] = 4
=> 2sqrt[(x^2+1)(x^2+2)] = 1 - 2x^2
Square again:
4(x^4+3x^2+2) = 1 + 4x^4 - 4x^2
=> 8x^2 = -7
=> x^2 = -7/8
As x^2 is -ve, this equation cannot have a solution for any real value of x. Although complex-number solutions can be obtained, these are not part of the regular GMAT.
Answer A.

Thanks Abhijit,
technically the language of the question is wrong; it should be changed to
"How many real roots does this equation have"
IanStewart (GMAT Instructor, joined 24 Jun 2008, Toronto):
durgesh79 wrote:
How many roots does this equation have?
sqrt(x^2+1) + sqrt(x^2+2) = 2
(A) 0
(B) 1
(C) 2
(D) 3
(E) 4

I don't think we need to do any algebra here: x^2 >= 0, so sqrt(x^2+1) + sqrt(x^2+2) >= 1 + sqrt(2), which is larger than 2. Hence no (real) solutions for x.
Nov 2011: After years of development, I am now making my advanced Quant books and high-level problem sets available for sale. Contact me at ianstewartgmat at gmail.com for
Private GMAT Tutor based in Toronto
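Ian's no-algebra bound is easy to sanity-check numerically (my own quick check, not part of his post): the left-hand side is smallest at x = 0, where it already exceeds 2.

```python
import math

def lhs(x):
    return math.sqrt(x * x + 1) + math.sqrt(x * x + 2)

minimum = lhs(0)   # both terms grow with |x|, so x = 0 gives the minimum
print(minimum)     # 1 + sqrt(2), about 2.414: already larger than 2
```

Since the minimum of the left side is above 2, the equation has no real roots, matching answer (A).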
x2suresh (SVP, joined 07 Nov 2007, New York):
IanStewart wrote:
durgesh79 wrote:
How many roots does this equation have?
sqrt(x^2+1) + sqrt(x^2+2) = 2

I don't think we need to do any algebra here: x^2 >= 0, so sqrt(x^2+1) + sqrt(x^2+2) >= 1 + sqrt(2), which is larger than 2. Hence no (real) solutions for x.

Agreed .. good point.
+1 for you.
The question wording needs to be changed to "real roots" to make it clear.

Your attitude determines your altitude
Smiling wins more friends than frowning
dzyubam (joined 02 Oct 2007):
Thanks guys. We'll change the wording of the question.
+1 for everyone.
basik21 (Intern) wrote:

Square the original equation:
x^2+1 + x^2+2 + 2*sqrt[(x^2+1)(x^2+2)] = 4
=> 2*sqrt[(x^2+1)(x^2+2)] = 1 - 2x^2
Square again:
4(x^4+3x^2+2) = 1 + 4x^4 - 4x^2
=> 16x^2 = -7
=> x^2 = -7/16

As x^2 cannot be negative, this equation has no solution for any real value of x. (Complex solutions can be obtained, but these are not part of the regular GMAT.)

Answer A.
hrish88 wrote:

IanStewart wrote:
I don't think we need to do any algebra here: x^2 >= 0, so sqrt(x^2+1) + sqrt(x^2+2) >= 1 + sqrt(2), which is larger than 2. Hence no (real) solutions for x.

But even so, the answer is right, A. Sorry, but I didn't understand this; can anyone explain?
Bunuel (Math Expert) wrote:

x^2 >= 0 means that the lowest value of the LHS occurs when x=0 --> sqrt(x^2+1) + sqrt(x^2+2) = 1 + sqrt(2), which is about 2.4 > RHS = 2. Hence, as the lowest possible value of the LHS is still greater than the RHS, the equation has no real roots.
TheSituation (Manager) wrote:

I got A but solved it through a different (less clever) method. I think I may have answered correctly in spite of myself, though; can someone tell me if my solution is mathematically sound or if I arrived at the correct answer by luck?

Begin by squaring both sides:
(x^2+1) + (x^2+2) = 4
Drop brackets and solve:
2x^2 = 1
x^2 = 1/2
x = sqrt(1/2)
Therefore no real roots, therefore A.

Feedback?

G.T.L. - GMAT, Tanning, Laundry
Round 1: 05/12/10 handling-a-grenade-thesituation-s-official-debrief-94181.html
Round 2: 07/10/10 - This time it's personal.
Bunuel (Math Expert) wrote:

TheSituation wrote:
I got A but solved it through a different (less clever) method... Begin by squaring both sides: (x^2+1) + (x^2+2) = 4; drop brackets and solve; x = sqrt(1/2); therefore no real roots, therefore A.

This would be the longer way; plus, you'll need to square twice, not once, as you made a mistake while squaring the first time. (a+b)^2 = a^2 + 2ab + b^2, so when you square both sides you'll get:
x^2+1 + 2*sqrt((x^2+1)(x^2+2)) + x^2+2 = 4
2*sqrt((x^2+1)(x^2+2)) = 1-2x^2
At this point you should square again --> 16x^2 = -7, and no real x satisfies this equation.

There is one more problem with your solution. You got (though incorrectly) x^2 = 1/2 and then concluded that this equation has no real roots, which is not right. This quadratic equation has TWO real roots: x = sqrt(1/2) and x = -sqrt(1/2). Real roots doesn't mean that the roots must be integers; real roots means that the roots must not be complex numbers, which I think we shouldn't even mention, as the GMAT deals ONLY with real numbers. For example, x^2 = -1 has no real roots, and for the GMAT that means the equation has no roots; no need to consider complex roots and imaginary numbers.

Hope it's clear.
NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!!
PLEASE READ AND FOLLOW: 11 Rules for Posting!!!
RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math
Book; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. NEW!!!;
COLLECTION OF QUESTIONS:
PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and
Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12
Fresh Meat
DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With
Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS set.
What are GMAT Club Tests?
25 extra-hard Quant Tests
rbriscoe wrote:

hrish88 wrote:
Bunuel wrote:
x^2 >= 0 means that the lowest value of the LHS is when x=0 --> sqrt(x^2+1) + sqrt(x^2+2) = 1 + sqrt(2), about 2.4 > RHS = 2; hence, as the lowest possible value of the LHS is still greater than the RHS, the equation has no real roots. Hope it's clear.

Thanks Bunuel.

Hrish88, pardon my ignorance, but what are RHS and LHS?

My will shall shape the future. Whether I fail or succeed shall be no man's doing but my own. I am the force; I can clear any obstacle before me or I can be lost in the maze. My choice; my responsibility; win or lose, only I hold the key to my destiny - Elaine Maxwell
TheSituation wrote:

Bunuel wrote:
Hope it's clear.

Abundantly clear... thanks, +1. Your posts are incredibly helpful; thank you for taking the time. Just curious, have you written the GMAT yet? If so, how did you score? If you say anything less than 790 I won't believe you.

Thanks again!!!
TheSituation (Manager) wrote:

rbriscoe wrote:
Hrish88, pardon my ignorance, but what are RHS and LHS?

Ha, finally a question I can help out on!! RHS and LHS refer to the right-hand side and left-hand side of the equation.
hrish88 (Manager) wrote:

RHS means Right Hand Side and LHS means Left Hand Side of the given equation. Here the RHS is 2 and the LHS is sqrt(x^2+1) + sqrt(x^2+2).
rbriscoe (Manager) wrote:

I am so lost on this question. I solved it the same way as TheSituation, and I do not understand your method below. Although I am sure it is obvious to everyone, please clarify the rule, theorem, or postulate that you followed to solve the equation. Thanks in advance.

Bunuel wrote:
This would be the longer way; plus, you'll need to square twice, not once, as you made a mistake while squaring the first time... 2*sqrt((x^2+1)(x^2+2)) = 1-2x^2. At this point you should square again --> 4x^4+12x^2+8 = 1-4x^2+4x^4 --> 16x^2 = -7, and no real x satisfies this equation. ...
Bunuel (Math Expert) wrote:

rbriscoe wrote:
I am so lost on this question. I solved it the same way as TheSituation... please clarify the rule, theorem, or postulate that you followed to solve the equation. Thanks in advance.

Frankly speaking, I solved this question in a different manner, as shown in my first post in this thread. The above solution is also valid; however, it's quite time consuming.

We have: sqrt(x^2+1) + sqrt(x^2+2) = 2. Now, if you choose to square both sides, then when squaring the LHS, (sqrt(x^2+1) + sqrt(x^2+2))^2, you should apply the formula (a+b)^2 = a^2 + 2ab + b^2:
(sqrt(x^2+1))^2 + 2(sqrt(x^2+1))(sqrt(x^2+2)) + (sqrt(x^2+2))^2 = x^2+1 + 2*sqrt((x^2+1)(x^2+2)) + x^2+2
When squaring the RHS you'll get 2^2 = 4. So you'll get:
x^2+1 + 2*sqrt((x^2+1)(x^2+2)) + x^2+2 = 4
Rearrange --> 2*sqrt((x^2+1)(x^2+2)) = 1-2x^2
At this point you should square again --> (2*sqrt((x^2+1)(x^2+2)))^2 = (1-2x^2)^2
4(x^2+1)(x^2+2) = 1-4x^2+4x^4
4x^4+12x^2+8 = 1-4x^2+4x^4
Rearrange again; the 4x^4 terms cancel out --> 16x^2 = -7 --> x^2 = -7/16.
Now, x^2 cannot equal a negative number, which means that no x satisfies the given equation. So, no roots.

Hope it's clear.
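The chain of squaring steps above can be double-checked mechanically. A small sketch (the helper name is mine, not from the thread):

```python
# For every x, 4(x^2+1)(x^2+2) - (1-2x^2)^2 expands to 16x^2 + 7, which is why
# the equation collapses to 16x^2 = -7 and therefore has no real solution.
def step_difference(x):
    return 4 * (x**2 + 1) * (x**2 + 2) - (1 - 2 * x**2) ** 2

print(all(step_difference(x) == 16 * x**2 + 7 for x in range(-5, 6)))  # True
```

Integer inputs keep the arithmetic exact, and a degree-4 polynomial identity that holds at eleven points holds everywhere.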
(Manager, Joined: 26 Nov 2009) wrote:

At first glance I started substituting values and saw that the equation forces x^2 to be negative, so no value of x satisfies it. But I really didn't know what was meant by "real roots"; thanks, Bunuel, for the explanation. Now I know it is easy to identify that if x^2 < 0, the equation has no real solution.
basik21 (Intern) wrote:

I don't think we need hard algebra; just look at what's under the square roots: x^2+1 > 0 and x^2+2 > 0, i.e. x^2 > -1 and x^2 > -2, which always hold. Thus no real roots... answer is A) 0.
(Joined: 16 Nov 2010, Location: United States (IN)) wrote:

I tried it like this. Let x^2 + 1 = y. Then:
sqrt(y) + sqrt(y+1) = 2
y + y + 1 + 2*sqrt(y)*sqrt(y+1) = 4
2y + 2*sqrt(y)*sqrt(y+1) = 3
=> 4*y*(y+1) = 4y^2 + 9 - 12y
=> 4y^2 + 4y = 4y^2 + 9 - 12y
=> y = 9/16
=> x^2 + 1 = 9/16, not possible, so 0 roots. Answer is A.

Formula of Life -> Achievement/Potential = k * Happiness (where k is a constant)
the first resource for mathematics
Elliptic partial differential equations of second order. Reprint of the 1998 ed.
(English) Zbl 1042.35002
Classics in Mathematics. Berlin: Springer (ISBN 3-540-41160-7/pbk). xiii, 517 p. DM 69.00; öS 504.00; sFr. 61.00; £ 24.00; $ 34.95 (2001).
From the preface: This revision of the 1983 second edition (see Zbl 0562.35001) corresponds to the Russian edition, published in 1989 [Nauka, Moscow; Zbl 0691.35001], in which we essentially updated
the previous version to 1984. The additional text relates to the boundary Hölder derivative estimates of Nikolai Krylov, which provided a fundamental component of the further development of the
classical theory of elliptic (and parabolic), fully nonlinear equations in higher dimensions. In our presentation we adapted a simplification of Krylov’s approach due to Luis Caffarelli.
The theory of nonlinear elliptic second order equations has continued to flourish during the last fifteen years and, in a brief epilogue to this volume, we signal some of the major advances. Although
a proper treatment would necessitate at least another monograph, it is our hope that this book, most of whose text is now more than twenty years old, can continue to serve as background for these and
future developments.
Since our first edition (see the review in Zbl 0361.35003) we have become indebted to numerous colleagues, all over the globe. It was particularly pleasant in recent years to make and renew
friendships with our Russian colleagues, Olga Ladyzhenskaya, Nina Ural’tseva, Nina Ivochkina, Nikolai Krylov and Mikhail Safonov, who have contributed so much to this area. Sadly, we mourn the
passing away in 1996 of Ennio De Giorgi, whose brilliant discovery forty years ago opened the door to higher-dimensional nonlinear theory.
35-02 Research monographs (partial differential equations)
35J65 Nonlinear boundary value problems for linear elliptic equations
35B45 A priori estimates for solutions of PDE
35J25 Second order elliptic equations, boundary value problems
35B50 Maximum principles (PDE)
35B05 Oscillation, zeros of solutions, mean value theorems, etc. (PDE)
47H10 Fixed point theorems for nonlinear operators on topological linear spaces
Option Basics -- VIII
Butterfly tactics
Anup Menon
ASSUME you are an investor and you want to take a view on the volatility of the stock of say ABC company. Further assume that you are sure that either the volatility of the stock will increase or
that the volatility of the stock will decrease. In such circumstances, an investor can consider using a butterfly spread to profit and at the same time have a limited risk exposure. Buying a
butterfly would yield a profit when volatility is expected to come down and selling a butterfly would result in a profit when volatility is expected to go up.
A butterfly strategy requires three distinct transactions. In the case of a long butterfly, the investor buys two options with different strike prices and sells two options at the strike price midway between the two that have been bought.
Therefore, if the investor buys options with strike prices of 50 and 60, he has to sell two options with a strike of 55. The reverse applies for a short butterfly. A butterfly is by definition "contract-neutral": the number of contracts bought equals the number of contracts sold.
Say you create a long butterfly by buying two options with strike prices of Rs 90 and Rs 110 respectively, while selling two options with a strike price of Rs 100. Assume for the moment that the total premium paid for the options purchased is Rs 15 and the total premium received on the sale of the options is Rs 10. The net cost for the investor is therefore the difference in premium, which is equal to Rs 5.
At maturity, the maximum loss that can be suffered by the investor is the net difference in premium, which is equal to Rs 5. For instance, suppose the price of the asset closes at Rs 115. Then what happens? Both options that the investor has bought will be in the money, giving him a gross gain of Rs 30 (25+5). However, remember that he has also written options at Rs 100.
His loss there is two times Rs 15 per option, or Rs 30 in total. Since the gains and losses on the options cancel each other out, the maximum loss for the investor is always the premium paid.
Now the question is: when are his profits maximum? His profits are maximum if the stock closes at the strike at which the investor has sold options. For instance, consider what happens when the stock closes at Rs 100. The option he bought with a strike of Rs 90 gives him a payoff of Rs 10. The other option that he bought at Rs 110 expires worthless. What will happen to the options that have been written?
These options will not be exercised either, as the option buyer would not be able to cover even a fraction of the premium paid. This means that the maximum possible payoff for the investor from this strategy is Rs 10, or a net profit of Rs 5 after deducting the Rs 5 premium.
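The numbers in this example can be collected into a single payoff function. The sketch below assumes the article's long call butterfly (strikes of Rs 90/100/110, net premium of Rs 5); the function name and structure are mine, not the author's:

```python
# Profit/loss at expiry of a long call butterfly: long one call at the low
# strike, short two calls at the middle strike, long one call at the high
# strike, less the net premium paid to open the position.
def long_butterfly_pnl(price, k_low=90.0, k_mid=100.0, k_high=110.0, net_premium=5.0):
    call = lambda strike: max(price - strike, 0.0)
    return call(k_low) - 2 * call(k_mid) + call(k_high) - net_premium

print(long_butterfly_pnl(115))  # -5.0: Rs 30 gained on the longs, Rs 30 lost on the shorts
print(long_butterfly_pnl(100))  #  5.0: maximum net profit, at the middle strike
print(long_butterfly_pnl(85))   # -5.0: everything expires worthless; only the premium is lost
```

The maximum loss is capped at the Rs 5 premium everywhere, and the peak net profit of Rs 5 (the Rs 10 payoff less the premium) occurs at the Rs 100 strike, matching the text.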
Winnetka, CA Geometry Tutor
Find a Winnetka, CA Geometry Tutor
...As a writer, I've published ten books, one of which recently reached #1 in the Amazon Free Kindle store for a day and a half. This year, the Princeton Review hired me to write and edit an AP
European History test prep book for global distribution. To other writers, I provide various editorial services ranging from light proofreading to heavy developmental work.
25 Subjects: including geometry, English, writing, GRE
...This test and the tutoring I do is directed at students motivated to do more than short term intensive test prep. This section often requires students to review some math skills and I work with
them to be persistent in reviewing/practicing/grinding the numbers as well as how to be efficient in p...
50 Subjects: including geometry, English, reading, writing
...I am well-versed in a variety of math and science subjects, including Algebra 1 & 2, Geometry, Trigonometry, SAT Math 1 & 2, and Physics. I am a hardworking and passionate tutor who calls
himself the "Invictus Math Tutor", because I believe the key to success is an indomitable will and work ethi...
18 Subjects: including geometry, algebra 1, GRE, GED
...The most rewarding aspect of tutoring is working with a child and seeing them improve their grades. Some even learn to like the subject. However, most learn that they don't have to be
intimidated by difficult subjects.
22 Subjects: including geometry, English, reading, writing
...I have taught in LAUSD adult school for 5 years, teaching individually in history, English, and math. I taught basic math, Algebra and Geometry. We prepared students for either passing high
school courses, passing the exit exam, or passing the GED; and some students were preparing for the test for admission to the nursing program.
13 Subjects: including geometry, English, reading, writing
Winthrop Harbor Statistics Tutor
Find a Winthrop Harbor Statistics Tutor
...It can work as a stand alone application or it can be linked with a network. The program allows users to access emails from multiple sources, track schedules and tasks, enter contact
information, and utilize journal and note features. I am highly versed in multiple versions of this program, including the most recent version.
39 Subjects: including statistics, reading, English, calculus
...I worked as a Manufacturing Engineer for over ten years, and in that capacity I have had to train people on various machines and processes. I am an easy going person, and as long as you are
willing to listen and try to understand I'm sure we will make progress, because I don't mind repeating mys...
39 Subjects: including statistics, reading, English, physics
...I have 15 years of mathematics teaching under my belt and I can teach everything from 6th grade math to AP Calculus. If you are looking for a refresher course in ACT or SAT mathematics, then
you have found the right person! I look forward to hearing from you.Algebra 2 is a higher level algebra course that improves abstract thinking capabilities.
11 Subjects: including statistics, calculus, geometry, algebra 1
...Finally, trigonometry always finds its way into my day-to-day work, from teaching college-level physics concepts to building courses for professional auditors. I was an advanced math student,
completing calculus in high school and then taking statistics as part of the engineering curriculum in college. I tutored math through college to stay fresh.
12 Subjects: including statistics, geometry, algebra 2, GED
...While I specialize in high school and college level mathematics, I have had success tutoring elementary and middle school students as well. I have experience working with ACT College Readiness
Standards and have been successful improving the ACT scores of students. In first tutoring sessions wi...
19 Subjects: including statistics, calculus, geometry, algebra 1
Payroll and Wins
Over at the Wages of Wins, Stacey Brook discusses the issue of payroll and wins in baseball. I’m surprised at how many problems people are having with an argument that they make in their book:
payroll is not the largest determinant of wins. Using data from 1988-2006, Brook reports that payroll differences explain approximately 18% of the variance of wins across teams. This result come from
regressing wins on a measure of relative payroll. Now the point is that if 18% is responsible, then 82% of other stuff is playing a bigger role. It seems pretty straight forward, but some people are
getting caught up in the argument. Let me clarify a few things.
An R^2 of .18 does not mean that there is no correlation between payroll and wins. Quite the opposite: there is a strong and statistically significant correlation. In fact, the authors make this quite
clear in the book. The coefficient on salary is statistically significant. However, the point is that the sum of other factors appears to be more important. The results indicate it's quite a stretch to
say that success on the field is a product of financial determinism.
Using data on salaries and wins from 1985-2006 I estimated the impact of payroll on wins using linear regression, while correcting for a few problems in the data. I used the percent difference of a
team’s salary from the league mean to account for different values of players over time. Also, I used year dummies to capture influence of individual seasons. The result: every 10% above the mean in
payroll is worth about 1 win, and salary explains about 17% of wins. These estimates conform to what the WoW authors find: a majority of the explanation is determined by factors other than salary.
However, let’s take it a bit further. Having a payroll of one standard deviation (34%) above/below average is associated with 3.4 more/less wins. Is that a lot? Well, let’s see how well it predicts.
Below is a table of the percentage salary differences from the mean by franchise from 1985-2006 (excluding 1994). The table includes the average wins, the regression predicted wins above average
(based on the 10% ==> 1 win prediction) and the actual wins above average. To me, the data indicates a trend, but not a strong one. For example, the Royals have spent a little less than average, but
they’ve lost a lot more than average. The Braves have spent more than average, but won more above the average than predicted.
Club Wins % Diff Pred. Actual
ANA 82.05 9.93 0.98 1.05
ARI 80.89 13.30 1.31 -0.11
ATL 87.19 28.36 2.79 6.19
BAL 75.81 10.11 0.99 -5.19
BOS 86.67 33.68 3.31 5.67
CHC 77.38 11.15 1.10 -3.62
CHW 82.86 -7.53 -0.74 1.86
CIN 80.52 -8.62 -0.85 -0.48
CLE 80.33 -8.11 -0.80 -0.67
COL 74.77 -6.66 -0.65 -6.23
DET 73.62 -8.95 -0.88 -7.38
FLA 76.15 -33.16 -3.26 -4.85
HOU 84.24 -4.18 -0.41 3.24
KCR 73.81 -12.41 -1.22 -7.19
LAD 83.43 33.05 3.25 2.43
MIL 75.43 -24.52 -2.41 -5.57
MIN 79.90 -26.08 -2.56 -1.10
NYM 84.05 27.73 2.72 3.05
NYY 90.24 70.35 6.91 9.24
OAK 86.81 -13.39 -1.32 5.81
PHI 77.48 -5.26 -0.52 -3.52
PIT 74.52 -30.87 -3.03 -6.48
SDP 78.43 -11.21 -1.10 -2.57
SEA 79.90 -7.15 -0.70 -1.10
SFG 84.24 3.87 0.38 3.24
STL 85.71 9.78 0.96 4.71
TBD 64.33 -38.87 -3.82 -16.67
TEX 79.38 -0.27 -0.03 -1.62
TOR 84.00 2.63 0.26 3.00
WSN 77.95 -36.49 -3.58 -3.05
This figure makes a similar point.
There is clearly a positive trend between average payroll and average wins, but it also shows a wide variance in performance outside the prediction. The points are not clustered closely around the
line, and that is because other factors are affecting how teams perform. There is plenty of room for other factors to matter. That’s how the Marlins can compete for the Wild Card with $15 million and
the Mets can come in last with a $90 million payroll. Just compare the Cubs and the A’s in the graph. Does money have an effect on winning? Yes, but that impact isn’t as certain as it’s widely
believed to be. That’s the point. Winning isn’t as simple as spending.
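For readers who want to reproduce the trend line in the scatter, here is a rough by-hand least-squares fit on a subset of the franchise averages above. This is my own cross-sectional sketch, not the author's year-dummy panel regression, so the slope will not match the reported panel estimate exactly:

```python
pairs = [  # (% payroll diff from mean, actual wins above average), from the table
    (9.93, 1.05), (28.36, 6.19), (33.68, 5.67), (-12.41, -7.19),
    (70.35, 9.24), (-13.39, 5.81), (-38.87, -16.67), (-36.49, -3.05),
]
n = len(pairs)
mean_x = sum(x for x, _ in pairs) / n
mean_y = sum(y for _, y in pairs) / n
# Ordinary least-squares slope: cov(x, y) / var(x).
slope = (sum((x - mean_x) * (y - mean_y) for x, y in pairs)
         / sum((x - mean_x) ** 2 for x, _ in pairs))
print(f"slope: {slope:.3f} wins per percentage point of payroll")
```

The slope comes out positive but modest, which is the same qualitative story as the post: spending buys wins on average, with plenty of scatter around the line.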
Now, in an effort to offer full disclosure, let me say the following. I have met or corresponded with all of the authors of the book—we are economists working in the same field. The book was largely
written before I met the authors, though I did check a fact for them right before the book went to press. I also gave the book a positive review in the International Journal of Sport Finance, as I
really do like the book. If I thought they were wrong, I’d tell them. Heck, I’d write up my critique and submit it to the Journal of Sports Economics. I’d love the line on my vita. I’d also like to
ask for politeness among those arguing. The tone of the response has been too harsh to be productive. If we are all after the truth, it’s best to keep things civil.
10 Responses “Payroll and Wins”
1. JC,
What would we see (or what would you predict we would see) if the study was based on something slightly different like number of players paid above the league average, or something of that sort.
This is where I think the Braves’ winning can be explained: great expensive talent and then good cheap pieces added on. Any thoughts?
2. I don’t know, but I suspect it would predict much worse, since it doesn’t weight for the quality of the player.
3. I haven’t read the book, but the problem I have with this sort of argument is that it ignores the basic structure of baseball. When you have almost 40% of your total value coming from players who
are, by “law”, paid the minimum or close to, of course your correlation between salary and wins will be significantly less than one.
Depending on how the authors interpreted their finding, it can be misleading.
4. Having a high payroll gives you a better chance to land the best free agents and to keep your own young stars. But of course it’s also possible to spend $200 million on players like Carl Pavano
and Jaret Wright and then not make it to the World Series.
5. I agree that the reserve rules are one of the reasons the correlation is so low. The reserve clause is one of the big equalizers for small-market teams. Instead of going after big-name free
agents, the Marlins (who really don’t play in a small market, but pretend to) concentrate on finding young talent that is cheap. It also means that teams who spend a lot don’t necessarily win.
The rule contributes to dampening financial determinism.
6. I think what you can take from this (and I have always felt spending doesn't necessarily equal wins) is that to have a winning club you need a good mix of very good young/cheap talent and more
expensive pieces to fill the holes. In other words, being efficient: for example, this year's Mets team. This is why the Yankees have to spend so much money: they have no good cheap players on
their squad, and have to overspend dearly to get any kind of value. All in all, you should aim for a payroll around the average. This chart also shows how the Cubs are in trouble for the next
few years: they have no farm system, so the Cubs would need to spend about $140-150 million just to have an ample chance to post 82-84 wins, and even that isn't guaranteed.
7. I did a study here of 1999-2005, that basically corresponds to what JC is saying, that every 10% of payroll is 1 win:
Posts #6 and #7 give you the equation of how to convert payroll into wins.
The key point that you’ll find there is that the structure of baseball makes it so that it’s not an efficient market, and therefore, looking at payroll in isolation will tell you only a little
about what is happening.
8. Cyril Morong writes:
“Is there some kind of divisional effect? The Yankees and Red Sox both pay very high salaries, yet they play each other a lot (19 times each year?) when before it was 11-12. Before 2000 or 2001,
teams played a balanced schedule: you played every other team in the league about the same number of times. Now you play almost half your games against your own division. What if the top 4-5
teams in salaries were all in one division? If they play each other a lot, they can't all win. This would hold down their pct., and teams with low payrolls in other divisions would see increases in
their pct. So I wonder if somehow this analysis could be done division by division, maybe with some kind of variable for what division you play in. I don't know what that would be,
9. I look at it a little differently. It seems to me if you have one element that accounts for 18% of the variance, that means something. High payroll, by itself, doesn’t guarantee wins if you piss
it away, but it makes it a lot easier and gives you more leeway for mistakes. In the case of the Braves, it means the difference between keeping Andruw Jones and losing him. Clearly, the teams
with more resources to spend, all other things being equal, have an advantage over those that don’t. Of course, I guess the question is, are there gradations; i.e., obviously, a $200 mm payroll
will give you an advantage over an $80 mm payroll, but is there the same difference between $80 mm and, say, $95 mm?
10. I wonder if efficacy of payroll spending would be found to be higher if instead of using “wins” as the desired outcome one used “successfully qualifying for the post-season” as the desired
outcome. It is not clear to me that large payroll increases to get a team from say a .450 winning percentage to a .490 winning percentage are likely to be an efficient way for teams to spend
money, and much of the payroll/wins relationship for teams unlikely to compete seriously for post-season spots may be noise. As a hypothesis I would suggest that perhaps a significant portion of
additional payroll spending is spent to achieve additional percentages of probability of making the post-season rather than simply wins. Or to put it another way, not all wins are of equal value
to a team and part of the reason behind a finding of a relatively low payroll efficacy in achieving wins might be that it is not really wins that are being sought but high-value wins (i.e., wins
that achieve the highest increase in probability of making the post-season).
practical applications of eigenvalues and eigenvectors
MathOverflow is a question and answer site for professional mathematicians.
We're making a video presentation on the topic of eigenvectors and eigenvalues. Unfortunately we have only reached the theoretical part of the discussion. Any comments on
practical applications would be appreciated.
eigenvector
The problem of ranking the outcomes of a search engine like Google is solved in terms of an invariant measure on the net, seen as a Markov chain. Finding the invariant measure requires the spectral
analysis of the associated matrix.
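A minimal sketch of finding such an invariant measure by power iteration (the two-state chain below is illustrative; real PageRank also adds a damping term):

```python
def power_iteration(P, iters=100):
    """Find the stationary distribution pi with pi = pi P for a
    row-stochastic matrix P, by repeated multiplication."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Two-state chain; the stationary distribution is (1/3, 2/3):
P = [[0.5, 0.5], [0.25, 0.75]]
print(power_iteration(P))
```

The stationary vector is exactly the eigenvector of the transition matrix for eigenvalue 1, which is why the ranking problem is a spectral one.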
I would comment on Pietro's answer, but I don't have enough reputation; for a marvelously-titled explanation of Google's PageRank, see The $25,000,000,000 Eigenvector.
Google's PageRank system is most likely the most canonical example; however, others include:

-Dynamical systems If you are able to express a model in terms of a matrix acting on vectors, one can look at the iterations and ask what occurs. This can be done to model the life cycle of some
species in an environment (bacteria on a petri dish, wolf/sheep interaction, the Fibonacci sequence as the spread of a population of bunnies, etc.). These examples are fairly small; however, you can
certainly have massive systems to model, and if your matrix is diagonalizable, the iterations of this map correspond to iterations of a diagonal matrix (very easy to do!) instead of the standard
$m^{2}$ operations to multiply out an $m\times m$ matrix. Think about a $1\,000\,000 \times 1\,000\,000$ matrix $M$, where you're looking at whether a certain species will die out (i.e., iterating
$M^{n}$ and checking as $n\to\infty$. Quite the time saver!)

-Graph theory As an undergrad, one of my summer research projects looked into special graphs called (3,6)-fullerenes, where we were finding that, looking at the adjacency matrix of the graph, one
could pick 3 well-chosen eigenvalues and their corresponding eigenvectors and generate nice 3d plots of the graphs, whereas other choices would produce degenerate images, involving some twisted 2d
surface.

-Differential equations One can use eigenvalues and eigenvectors to express the solutions to certain differential equations, which is one of the main reasons the theory was developed in the first place!

I would highly recommend reading the Wikipedia article, as it covers many more examples than any one reply here will likely contain, with examples along the way! (Schrödinger equation, Molecular
Orbitals, Geology and Glaciology, Factor Analysis, Vibration Analysis, Eigenfaces, Tensor of Inertia, Stress Tensor, Eigenvalues of a Graph)
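To make the diagonalization remark concrete, here is a sketch (using numpy, which is my choice, not the answerer's) of computing $M^n$ through the eigendecomposition, with the Fibonacci step matrix as the toy dynamical system:

```python
import numpy as np

M = np.array([[1.0, 1.0], [1.0, 0.0]])   # Fibonacci step matrix
vals, vecs = np.linalg.eig(M)

def matrix_power_via_eig(n):
    """M^n = V diag(vals**n) V^{-1}: powering the diagonal replaces
    n full matrix multiplications."""
    return vecs @ np.diag(vals ** n) @ np.linalg.inv(vecs)

# Applying M^10 to (F_1, F_0) = (1, 0) yields (F_11, F_10) = (89, 55):
print(np.round(matrix_power_via_eig(10) @ np.array([1.0, 0.0])))
```

For a diagonalizable matrix, iterating the model forward many steps then costs one eigendecomposition plus cheap scalar powers, which is the time-saver the answer alludes to.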
All of Quantum Mechanics is based on the notion of eigenvectors and eigenvalues. Observables are represented by Hermitian operators Q, their determinate states are eigenvectors of Q, and a measurement
of the observable can only yield an eigenvalue of the corresponding operator Q. If you measure an observable in a system in the state $\psi$ and find as a result the eigenvalue $a$, the state of the
system just after the measurement will be the normed projection of $\psi$ onto the eigenvector associated to $a$. And so on and so forth.

Of course Quantum Physics is not mathematically trivial: the arena is infinite-dimensional Hilbert space (or more complicated functional-analytic structures like Gelfand triples), operators are not
bounded, etc. However, in the extremely fast-growing field of Quantum Computing, the algebra is mostly limited to finite-dimensional spaces and their operators.

Finally, let me mention that Frank Wilczek, a winner of the 2004 Nobel Prize in Physics, has interestingly reminisced that as a student he found Quantum Mechanics easier than Classical Mechanics
because of its nice axiomatization alluded to above.
For visual appeal, you should look into the area of pendulums. There is a good demonstration with swinging bottles, I recall, and this does depend on eigenvalues that are nearly equal. Do a Web
search on "coupled pendulums".
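A sketch of why the eigenvalues end up nearly equal: two identical pendulums joined by a weak spring give a symmetric matrix whose two eigenvalues (the squared normal-mode frequencies) differ only by the small coupling. The numbers below are illustrative:

```python
import numpy as np

w2, eps = 1.0, 0.05                      # pendulum frequency^2, weak coupling
K = np.array([[w2 + eps, -eps],
              [-eps, w2 + eps]])
vals, modes = np.linalg.eigh(K)          # modes: in-phase and out-of-phase
print(vals)                              # two nearly equal eigenvalues
```

The slow beat you see in the demonstration, with energy sloshing from one bottle to the other, comes from the small gap between those two eigenvalues.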
Principal Component Analysis is a way of identifying patterns in data, and expressing the data in such a way as to highlight their similarities and differences. It is very difficult to visualize data
in high-dimensional space, but PCA can be used there to analyze the data. From the data set a covariance matrix is formed, and then the eigenvalues and eigenvectors of that covariance matrix are
found. These eigenvalues and eigenvectors can then be compared to figure out the contribution of a particular feature in the data set. Thus PCA can be successfully applied to reduce the dimension of
the data.
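A minimal sketch of those steps (center the data, form the covariance matrix, take its top eigenvectors, project; numpy assumed):

```python
import numpy as np

def pca(X, k):
    """Project the rows of X onto the top-k eigenvectors of the
    covariance matrix, i.e. the k principal components."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues ascending
    top = vecs[:, np.argsort(vals)[::-1][:k]]   # top-k by eigenvalue
    return Xc @ top

# Four 2-d points lying near a line collapse to one dimension:
X = np.array([[1.0, 2.0], [2.0, 4.1], [3.0, 5.9], [4.0, 8.2]])
print(pca(X, 1).shape)  # -> (4, 1)
```

The eigenvalues measure how much variance each component captures, which is what justifies throwing away the low-eigenvalue directions.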
In telecommunications, the so-called "beam-forming" algorithm in the case of multiple antennas requires calculation of eigenvectors.
I think the book Spectra of Graphs: Theory and Applications by Dragos M. Cvetkovic, Michael Doob and Horst Sachs is a very good source for practical applications of eigenvalues and eigenvectors.

In communication theory, coding theory and cryptography, the minimum distance of codes is a very important parameter in decoding and is also very important in coding-based cryptography (for example
the McEliece cryptosystem). It is interesting that the second-largest eigenvalue of the graph related to a code can determine a good lower bound for the minimum distance of the code.
Another interesting application is rigid body rotation theory. No matter how complicated an object looks, there's always (at least) a set of three mutually orthogonal directions around which it can
rotate perfectly without precession.

Maybe not something you can base a whole lecture on, but it's a nice remark.
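A quick numerical check of that remark (the inertia-tensor entries are made up for illustration; its eigenvectors are the principal axes):

```python
import numpy as np

# A symmetric inertia tensor for some lopsided rigid body:
I = np.array([[2.0, 0.5, 0.0],
              [0.5, 3.0, 0.0],
              [0.0, 0.0, 1.0]])
vals, axes = np.linalg.eigh(I)   # columns of `axes` are principal axes
print(np.allclose(axes.T @ axes, np.eye(3)))  # mutually orthogonal -> True
```

Because the inertia tensor is symmetric, the spectral theorem guarantees three real eigenvalues and mutually orthogonal eigenvectors, which is exactly the set of precession-free rotation axes.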
Responses to "Jay Earley and his algorithm"
mark@hubcap.clemson.edu (Mark Smotherman)
Fri, 9 Jun 89 17:00:01 -0400
From comp.compilers
Summary of responses to "Jay Earley and his algorithm"
The three questions were:
1) What are significant improvements that have been made to Earley's
parsing algorithm since 1968?
2) What compilers use Earley's algorithm or variant thereof? (I am
aware of the Western Digital Ada compiler.)
3) Where is Earley these days, and what is he doing? (OK, make that
four questions!)
Many thanks to the following responders (who have been a great help!):
compilers@ima.isc.com (our fearless moderator)
lang@inria.inria.fr (Bernard Lang)
worley@compass.com (Dale Worley)
mauney@cscadm.ncsu.edu (Jon Mauney)
peterd@june.cs.washington.edu (Peter C. Damron)
Dick Grune <dick@cs.vu.nl>
Ron Rasmussen <ras%hpclras@sde.hp.com>
joop@cs.kun.nl (Joop Leo)
Edited responses follow:
From: compilers@ima.isc.com
[It's my impression that for most purposes LR argorithms have supplanted
Earley's, because they are so much faster and are adequate for the grammars
of most current languages. -John]
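For readers who have not seen it, the core of Earley's 1970 algorithm fits in a page: a chart of dotted items driven by predict/scan/complete steps. Here is a hedged sketch of the recognizer in Python; it is a toy for illustration (and it ignores the well-known subtlety with nullable rules), not tuned code:

```python
def earley_recognize(grammar, start, tokens):
    """Minimal Earley recognizer.  grammar maps each nonterminal to a
    list of right-hand sides (tuples of symbols); terminals are the
    symbols that never appear as keys.  An item is (head, rhs, dot, origin)."""
    chart = [set() for _ in range(len(tokens) + 1)]
    chart[0] = {(start, rhs, 0, 0) for rhs in grammar[start]}
    for i in range(len(tokens) + 1):
        changed = True
        while changed:                      # run predict/complete to a fixpoint
            changed = False
            for head, rhs, dot, origin in list(chart[i]):
                if dot < len(rhs) and rhs[dot] in grammar:       # predict
                    for alt in grammar[rhs[dot]]:
                        item = (rhs[dot], alt, 0, i)
                        if item not in chart[i]:
                            chart[i].add(item); changed = True
                elif dot == len(rhs):                            # complete
                    for h2, r2, d2, o2 in list(chart[origin]):
                        if d2 < len(r2) and r2[d2] == head:
                            item = (h2, r2, d2 + 1, o2)
                            if item not in chart[i]:
                                chart[i].add(item); changed = True
        if i < len(tokens):                                      # scan
            for head, rhs, dot, origin in chart[i]:
                if dot < len(rhs) and rhs[dot] == tokens[i]:
                    chart[i + 1].add((head, rhs, dot + 1, origin))
    return any(h == start and d == len(r) and o == 0
               for h, r, d, o in chart[-1])

# Ambiguous expression grammar: E -> E + E | n
g = {'E': [('E', '+', 'E'), ('n',)]}
print(earley_recognize(g, 'E', ['n', '+', 'n', '+', 'n']))  # -> True
```

Ambiguity is handled for free: both parses of `n + n + n` live in the same chart, which is what makes the algorithm attractive for natural language and why the responses below dwell on its cost.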
From: lang@inria.inria.fr (Bernard Lang)
1) significant improvements
There is much literature on this. A paper is coming up in the next
SIGPLAN'89 conference in Portland. Much of the work on this topic,
though not all of it has taken place in the Computational Linguistics
community (which to some extent rediscovered the algorithm).
Look for "chart parsing" in that community. The main conferences
are the ACL yearly meeting, and the COLING conference (every other year).
I contributed myself several papers.
The topics are the usual ones: optimization, formal analysis, error
detection and correction, incrementality, ...
The algorithm is related to others used in Database query evaluation,
and in logic programming.
2) compilers using Earley's algorithm or variant
Earley parsers are used in interfaces for programming environments.
3) Where is Earley
Last I heard of him (Early seventies), he was becoming something like a
psychologist. I have no idea where.
From: worley@compass.com (Dale Worley)
There is a paper on some new Earley-like parsers, "A parser generator
for finitely ambiguous context-free grammars" by J. Rekers (report
CS-R8712 from the Center for Mathematics and Computer Science,
Mathematisch Centrum, Amsterdam), which discusses Tomita's algorithm.
Tomita's algorithm acts like an LR parser until you get to a situation
that is not LR parsable, then it (reasonably efficiently) clones the
LR parser to keep track of each possible parse. It has almost the
generality of Earley's parser (the grammar is required to be "finitely
ambiguous"), but runs much faster for ordinary grammars. The paper
has a reasonably good bibliography.
From: mauney@cscadm.ncsu.edu (Jon Mauney)
The most important theoretical improvement to Earley's parser is
described in
S.L. Graham, M.A. Harrison, W.L. Ruzzo
"An improved context-free recognizer"
TOPLAS 2(3), pp415-462, July 1980
I don't have any information on use of Earley's algorithm in actual
compilers. The Graham-Harrison-Ruzzo algorithm was used in my
syntax-error repair algorithm
(SIGPLAN Symposium on Compiler Construction, June 1982)
(Mauney and Fischer, TOPLAS 10(3), July 1988)
From: peterd@june.cs.washington.edu (Peter C. Damron)
Try reading:
S. L. Graham, M. A. Harrison, W. L. Ruzzo
"An Improved Context-Free Recognizer"
ACM Trans. on Programming Languages and Systems
v2, n3, July 1980, pp. 415-462
The Earley algorithm has been used for parsing, code generation,
and natural language processing, but I think there are better ways
to do all three.
From: Dick Grune <dick@cs.vu.nl>
The original Earley algorithm, which featured a 1-token reduction look-ahead
was extended to k-token reduction and t-token prediction look-ahead by
%A M. Bouckaert
%A A. Pirotte
%A M. Snelling
%T Efficient parsing algorithms for general context-free parsers
%J Inf. Sci.
%V 8
%N 1
%D Jan. 1975
%P 1-26
They recommend using a 1-token prediction look-ahead rather than following
Earley's suggestion, and have simulations to prove their point.
The Earley and CYK algorithms were bent toward each other and combined into
an "almost" encompassing algorithm in
%A S.L. Graham
%A M.A. Harrison
%A W.L. Ruzzo
%T An improved context-free recognizer
%J TOPLAS
%V 2
%N 3
%D July 1980
%P 415-462
None of these beat the n^3 barrier for the general case. There are impractical
algorithms using Strassen matrix multiplication that break this barrier.
The algorithm was extended for context-sensitive grammars in
%A G.Sh. Vold'man
%T A parsing algorithm for context-sensitive grammars
%J Program. Comput. Software
%V 7
%D 1981
%P 302-307
I know it is sometimes used in natural language parsers, and there is at least
one attempt to use it in a Graham-Glanville code generator, in SIGPLAN
Notices, June 1984.
From: Ron Rasmussen <ras%hpclras@sde.hp.com>
I talked to some people from Interactive Software Engineering, the producers
of Eiffel (Bertrand Meyers), and their language sensitive editor and Eiffel
parser is based upon Earleys' algorithms. The person I talked to mentioned
some improvements etc. Try:
Jean-Marc Nerson @
270 Storke Road, Suite 7
Goleta, Ca.
He mentioned Earley update papers by "Charles Anres" and Jean-Marc Julia
in France.
From: joop@cs.kun.nl (Joop Leo)
In 1987 I wrote a paper titled
"A general CF parsing algorithm running in linear time on
every LR(k) grammar without using lookahead"
It appeared in the Proc. of the SION congress CSN 87 (Amsterdam,
2/3 november 1987) p.53-57.
I also submitted it to Theoretical Computer Science. Within a few months
I will have finished a revised version.
If you are interested I can send you a preliminary version.
From: lang@inria.inria.fr (Bernard Lang)
Here is a partial bibliography on Earley like algorithms.
(quickly extracted from my papers).
It is in LaTeX format.
I believe all these papers deal in some form with some variant of
Earley's algorithm, though I did not check and my memory may have
betrayed me in one or two cases.
I know of several others, but I do not really have time to computerize now
the complete bibliography I have.
I hope this will help you.
Billot, S. 1988
{\em Analyseurs Syntaxiques et Non-D\'{e}terminisme}.
Th\`{e}se de Doctorat, Universit\'{e} d'Orl\'{e}ans la Source,
Orl\'{e}ans (France).
Billot, S.; and Lang, B. 1989 % june 26-29
The structure of Shared Forests in Ambiguous Parsing.
INRIA Research Report 1038. % may 1989
To appear in {\em
Proc. of the $\rm 27^{th}$ Annual Meeting of the Association for Computational Linguistics},
Vancouver (British Columbia).
Bouckaert, M.; Pirotte, A.; and Snelling, M. 1975
Efficient Parsing Algorithms for General Context-Free Grammars.
{\em Information Sciences} 8(1): 1-26%january 1975
Cocke, J.; and Schwartz, J.T. 1970
{\em Programming Languages and Their Compilers}.
Courant Institute of Mathematical Sciences, New York University,
New York.
Earley, J. 1968
{\em An Efficient Context-Free Parsing Algorithm.}
Ph.D. thesis, Carnegie-Mellon University,
Pittsburgh, Pennsylvania.
Earley, J. 1970
An Efficient Context-Free Parsing Algorithm.
{\em Communications ACM}
13(2): 94-102.
Graham, S.L.; Harrison, M.A.; and Ruzzo W.L. 1980 %july
An Improved Context-Free Recognizer.
{\em ACM Transactions on Programming Languages and Systems} 2(3): 415-462.
Hays, D.G. 1962
Automatic Language-Data Processing.
In {\em Computer Applications in the Behavioral Sciences},
(H. Borko ed.), Prentice-Hall, pp. 394-423.
Kasami, J. 1965
{\em An Efficient Recognition and Syntax Analysis Algorithm for Context-Free Languages}.
Report of Univ. of Hawaii,
%from Valiant's paper
AFCRL-65-758, Air Force Cambridge Research Laboratory,
Bedford (Massachusetts),
%from 2nd dragon book
also 1966,
University of Illinois Coordinated Science Lab. Report, No. R-257.
%from Younger's paper
Kay, M. 1980
Algorithm Schemata and Data Structures in Syntactic Processing.
{\em Proceedings of the Nobel Symposium on Text Processing},
Knuth, D.E. 1965
On the Translation of Languages from Left to Right.
{\em Information and Control}, 8: 607-639.
Lang, B. 1971
Parallel Non-deterministic Bottom-up Parsing.
Proc. of International Symposium on Extensible Languages, Grenoble,
{\em SIGPLAN Notices} 6(12): 56-57.
Lang, B. 1974
Deterministic Techniques for Efficient Non-deterministic Parsers.
{\em Proc. of the $2^{nd}$ Colloquium on Automata, Languages and Programming},
J. Loeckx (ed.), Saarbr\"{u}cken,
Springer Lecture Notes in Computer Science 14: 255-269.
Also: Rapport de Recherche 72, IRIA-Laboria, Rocquencourt (France).
Lang, B. 1988 %22-27 August
Parsing Incomplete Sentences.
{\em Proc. of the $12^{th}$ Internat. Conf. on Computational Linguistics
(COLING'88)} Vol. 1 :365-371, D. Vargha (ed.), % De'nes Vargha
Budapest (Hungary).
Lang, B. 1988 %June 28-30
Datalog Automata.
{\em Proc. of the $3^{rd}$ Internat. Conf. on Data and Knowledge Bases},
C. Beeri, J.W. Schmidt, U. Dayal (eds.), Morgan Kaufmann Pub.,
pp. 389-404,
Jerusalem (Israel).
Lang, B. 1988
{\em Complete Evaluation of Horn Clauses, an Automata Theoretic Approach}.
INRIA Research Report 913.
Lang, B. 1988 %
{\em The Systematic Construction of Earley Parsers:
Application to the Production of ${\cal O}(n^6)$ Earley Parsers
for Tree Adjoining Grammars}.
In preparation.
Li, T.; and Chun, H.W. 1987 %june 23-27
A Massively Parallel Network-Based Natural Language Parsing System.
{\em Proc. of 2nd Int. Conf. on Computers and Applications}
Beijing (Peking), : 401-408.
Nakagawa, S. 1986
{\em Speaker-Independent Sentence Recognition by Phoneme-Based Word Spotting
and Time-Synchronous Context-Free Parsing}.
Technical Report CMU-CS-86-109, Carnegie-Mellon University,
Pittsburgh, Pennsylvania.
Nakagawa, S. 1987
Spoken Sentence Recognition by Time-Synchronous Parsing Algorithm of
Context-Free Grammar.
{\em Proc. ICASSP 87}, Dallas (Texas), Vol. 2 : 829-832.
Pereira, F.C.N.; and Warren, D.H.D. 1983 %june 15-17
Parsing as Deduction.
{\em Proceedings of the 21st Annual Meeting of the Association for
Computational Linguistics}: 137-144,
Cambridge (Massachusetts).
Phillips, J.D. 1986 %automn
A Simple Efficient Parser for Phrase-Structure Grammars.
{\em Quarterly Newsletter of the Soc. for the Study of Artificial
Intelligence (AISBQ)} 59: 14-19.
Pratt, V.R. 1975 %august
LINGOL --- A Progress Report.
In {\em Proceedings of the 4th IJCAI}: 422-428.
Rekers, J. 1987 %march
{\em A Parser Generator for Finitely Ambiguous Context-Free Grammars}.
Report CS-R8712,
Computer Science/Dpt. of Software Technology,
Centrum voor Wiskunde en Informatica,
Amsterdam (The Netherlands).
Sheil, B.A. 1976
%june 76 for tech report
Observations on Context Free Parsing.
%page numbers are from Tomita's PhD, I have not checked.
in {\em Statistical Methods in Linguistics}: 71-109, Stockholm (Sweden),
Proc. of Internat. Conf. on Computational Linguistics (COLING-76),
Ottawa (Canada).
Also: Technical Report TR 12-76,
Center for Research in Computing Technology,
Aiken Computation Laboratory,
Harvard Univ., Cambridge (Massachusetts).
Shieber, S.M. 1985
Using Restriction to Extend Parsing Algorithms for Complex-Feature-Based Formalisms.
{\em Proceedings of the 23rd Annual Meeting of the Association for
Computational Linguistics}: 145-152.
Thompson, H. 1983 %August 22-26
MCHART: A Flexible, Modular Chart Parsing System.
{Proc. of the National Conf. on Artificial Intelligence (AAAI-83)},
Washington (D.C.), pp. 408-410.
Tomita, M. 1985
{\em An Efficient Context-free Parsing Algorithm for
Natural Languages and Its Applications}.
Ph.D. thesis, Carnegie-Mellon University,
Pittsburgh, Pennsylvania.
Tomita, M. 1986
An Efficient Word Lattice Parsing Algorithm for Continuous Speech Recognition.
In {\em Proceedings of IEEE-IECE-ASJ International Conference on Acoustics,
Speech, and Signal Processing (ICASSP 86)}, Vol. 3: 1569-1572.
%Tokyo, April 1986
Tomita, M. 1987
An Efficient Augmented-Context-Free Parsing Algorithm.
{\em Computational Linguistics}
13(1-2): 31-46.
Tomita, M. 1988
Graph-structured Stack and Natural Language Parsing.
{\em Proceedings of the $26^{th}$ Annual Meeting of the Association for
Computational Linguistics}: 249-257.
%Kuniaki Uehara;
%Ryo Ochitani;
%Osamu Kakusho;
%Junichi Toyoda
Uehara, K.;
Ochitani, R.;
Kakusho, O.;
Toyoda, J.
1984 %february
A Bottom-Up Parser based on Predicate Logic:
A Survey of the Formalism and its Implementation Technique.
{\em 1984 Internat. Symp. on Logic Programming},
Atlantic City (New Jersey), : 220-227.
Valiant, L.G. 1975
General Context-Free Recognition in Less than Cubic Time.
{\em Journal of Computer and System Sciences}, 10: 308-315.
Villemonte de la Clergerie, E.; and Zanchetta, A. 1988
{\em Evaluateur de Clauses de Horn.}
Rapport de Stage d'Option, Ecole Polytechnique, Palaiseau (France).
Ward, W.H.; Hauptmann, A.G.; Stern, R.M.; and Chanak, T. 1988 % april 11-14
% Wayne Alexander Richard Thomas
Parsing Spoken Phrases Despite Missing Words.
In {\em Proceedings of the 1988 International Conference on Acoustics,
Speech, and Signal Processing (ICASSP 88)}, Vol. 1: 275-278.
%New York City (New York), April 11-14 1988
% Context-Free Language Processing in Time $n^{3}$.
%G.E. Res. Develop. Cr., Schenectady, N.Y. 1966.
% reference found in Cocke & Schwartz 1970 (Prog. Lang. and their Compilers).
Younger, D.H. 1967
Recognition and Parsing of Context-Free Languages in Time $n^{3}$.
{\em Information and Control}, 10(2): 189-208
[From mark@hubcap.clemson.edu (Mark Smotherman)]
Radiosity OverView Part 3
Reference: SIGGRAPH 1993 Education Slide Set, by Stephen Spencer
Slide 14 : Progressive Radiosity Variants.
Several variations on the basic progressive radiosity algorithm have been developed, in an effort to find the optimal method for producing the most pleasing results with minimum cost. Each variant
calculates the form factors from a point on one surface to all other surfaces.
The "gathering" variant collects light energy from all other surfaces in the environment, attenuated by the calculated form factors, and updates the "base" surface. In this variant, as well as the
"shooting" variant, the "base" surface is arbitrarily chosen.
The "shooting" variant distributes light energy from the "base" surface to all other surfaces in the environment, attenuated by the calculated form factors.
The "shooting and sorting" variant first calculates the surface with the greatest amount of unshot light energy, then uses this surface as the "base" surface in the "shooting" variant.
In addition to these, an initial "ambient" term can be approximated for the environment and adjusted at each iteration, gradually replaced by the true ambient contribution to the rendered image.
The "shooting and sorting" method is the most desirable, as it finds the surface with the greatest potential contribution to the intensity solution and updates all other surfaces in the environment
with its energy.
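The "shooting and sorting" loop described above can be sketched in a few lines. Patch areas are taken as equal and the form factors F[i][j] are assumed given, both simplifying assumptions:

```python
def shoot_and_sort(emission, reflectance, F, iterations=50):
    """Progressive radiosity, 'shooting and sorting' variant (sketch,
    unit-area patches).  F[i][j] is the form factor from patch i to j."""
    n = len(emission)
    radiosity = list(emission)
    unshot = list(emission)               # energy not yet distributed
    for _ in range(iterations):
        i = max(range(n), key=lambda k: unshot[k])   # "sorting" step
        if unshot[i] == 0.0:
            break                                    # fully converged
        for j in range(n):                           # "shooting" step
            if j != i:
                dB = reflectance[j] * unshot[i] * F[i][j]
                radiosity[j] += dB
                unshot[j] += dB
        unshot[i] = 0.0
    return radiosity

# One emitter (patch 0) and two mutually visible gray reflectors:
E = [1.0, 0.0, 0.0]
R = [0.0, 0.5, 0.5]
F = [[0.0, 0.4, 0.4], [0.4, 0.0, 0.3], [0.4, 0.3, 0.0]]
print(shoot_and_sort(E, R, F))
```

Each pass brightens the whole scene a little, which is why intermediate images from this variant already look plausible long before convergence.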
Slide 15 : Comparison of Progressive Variants.
This slide shows four variations on the basic progressive radiosity algorithm, each halted after one hundred iterations. The upper left image is the "gathering" variant, the upper right image the
"shooting" variant, the lower left image the "shooting and sorting" variant, and the lower right image is the "shooting and sorting and ambient" progressive radiosity variant.
The "shooting" variants show their superiority over the "gathering" variant here, as more of the scene is illuminated by them earlier. The "shooting and sorting" variant's concentration on those
surfaces which contribute most to the overall solution is also shown.
Slide 16: The Two-Pass Radiosity Solution.
Several important variations on the basic diffuse radiosity solution have been developed. The first is designed to relax the restriction on the diffuse-only nature of the basic radiosity solution, by
breaking the intensity solution into two steps: first, a pass with a traditional radiosity algorithm to calculate the diffuse intensity of the surfaces, followed by a pass with a ray tracing
algorithm, which collects the diffuse intensity information from the surfaces and adds to it specular information. This second pass is viewpoint-dependent: specular highlights on a surface are
dependent on the location of both the light and the viewer relative to the surface.
Slide 17 : Participating Media.
Another variation on the basic diffuse radiosity solution adds the contribution of light passing through a participating medium, such as smoke, fog, or water vapor in the air.
In this algorithm, light energy is sent through a three-dimensional volume representing a participating medium, which both attenuates the light energy and, potentially, adds to the intensity solution
through illumination of the participating medium.
Slide 18 : Advantages And Disadvantages.
The largest single advantage of the radiosity method for computer image generation is the highly realistic quality of the resulting images. No other method accurately calculates the diffuse
interreflection of light energy in an environment. Soft shadows and color bleeding are natural by-products of this method, just as hard shadows and mirror-like reflections are natural by-products of
a typical ray-tracing algorithm.
In addition to being visually pleasing, the method can be quite accurate in its treatment of energy transport between surfaces.
The viewpoint independence of the basic radiosity algorithm provides the opportunity for interactive "walkthroughs" of environments, as one intensity solution for an environment will serve as the
base for any particular view of the environment.
The costs associated with the radiosity method are substantial. The "full matrix" radiosity method requires a large amount of storage and long computation times for form factor calculation and matrix
solution. The "progressive" method must also calculate a large number of form factors, many more than once.
Accuracy in the resulting intensity solution requires preprocessing the environment, subdividing large surfaces into a set of smaller surfaces, and more surfaces means more storage and computation.
Slide 19 : State Of The Art / Future Work.
More recently, several new algorithms have been developed which help to alleviate the restrictions of the basic radiosity solution.
Ray tracing algorithms can be modified to handle the intricacies of accurate light transport between surfaces without explicit form factor calculation.
"Intelligent" pre-processing of environments can subdivide the surfaces of an environment based on the geometry of the environment and on the probable location of light-shadow boundaries, creating an
optimal subdivision.
As noted previously, the inherent diffuse nature of the basic radiosity algorithm has been relaxed with the development of multiple pass algorithms which incorporate both the diffuse and specular
components of light.
Current research efforts include more accurate modeling of the characteristics of lights and surfaces, through BRDFs (bidirectional reflectance distribution functions), concentration on minimizing
the cost of form factor calculation, and increasing the accuracy of form factor calculation.
Slide 20 : Consolation Room Image.
This image suggests one treatment of a consolation room in a hospital or physician's office. It is part of a research experiment comparing the effect of different lighting on the overall appearance
and perception of an environment.
Last changed April 01, 1998, G. Scott Owen, owen@siggraph.org
Operating cost for the lowest 4 percent of the airplanes
Assume that the mean hourly cost to operate a commercial airplane follows the normal distribution with a mean of $2,050 per hour and a standard deviation of $205.
What is the operating cost for the lowest 4 percent of the airplanes? (Round z value to 2 decimal places. Omit the "$" sign in your response.)
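The calculation can be sketched with Python's standard library; `NormalDist` is one convenient way to invert the normal CDF, and the z value and dollar figure follow from the stated mean and standard deviation:

```python
from statistics import NormalDist

mean, sd = 2050, 205
# z cutting off the lowest 4 percent of a standard normal, rounded to 2 places
z = round(NormalDist().inv_cdf(0.04), 2)   # -1.75
cost = mean + z * sd                        # 2050 - 1.75 * 205 = 1691.25
```

So under the stated normal model, the cheapest 4 percent of airplanes cost about 1691.25 per hour or less to operate.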
Longitudinal data (change scores)
Baseline adjustment is generally preferred to change scores because it is less constrained, allowing the data more freedom to fit itself. In calculation terms it is the difference between models
like these:
wt2 = 1*wt1 + error
wt2 = 0.973*wt1 + error
Interestingly, you can use change on the left side and get "identical" results - i.e. treatment effects will be identical but the wt1 coefficient will change its estimate/units.
wt2-wt1 = 0.973*wt1 + error
That makes sense because you are modeling two equations like these:
y = ax + b
y-x = ax + b (alternatively y = (a+1)x + b , a linear transformation)
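The "identical results" claim is easy to check numerically. A minimal sketch using only the standard library (the data and variable names are illustrative): regressing wt2 on wt1 and regressing wt2-wt1 on wt1 give the same intercept and residuals, with slopes differing by exactly 1.

```python
import random

def ols(x, y):
    # simple-regression slope and intercept via the normal equations
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

random.seed(1)
wt1 = [random.gauss(80, 10) for _ in range(500)]                # baseline weight
wt2 = [5 + 0.973 * w + random.gauss(0, 2) for w in wt1]          # follow-up weight

a_level, b_level = ols(wt1, wt2)                                 # wt2 on wt1
a_change, b_change = ols(wt1, [y - x for x, y in zip(wt1, wt2)]) # wt2-wt1 on wt1
# slopes differ by exactly 1; intercepts coincide
```

This is algebraic, not a coincidence: cov(y-x, x) = cov(y, x) - var(x), so the change-score slope is the level slope minus 1 while everything else is unchanged.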
On to a more complicated question...
Study design: Randomized trial of weight loss
Question: How to model weight loss maintenance from time 4 to time 5
Warning: I went out on a limb to answer this question
The specific question was whether to adjust for time 4 ("baseline") when modeling change from 4 to 5 and adjusting for change from 1 to 4. (Background: This is about maintenance and weight loss from
baseline to 4 may predict performance from 4 to 5. Do the successful people remain successful or do the people who lost have greater regain potential? Are there two effects here, and will the results
be influenced by whether completeness of data is related to early performance?)
In other words, choose between these 2 models:
wt(5-4) = wt(4-1) + error
wt(5-4) = wt(4-1) + wt4 + error
The latter is algebraically equivalent to wt5 = wt4 + wt1 + error and gives identical results.
The former is NOT equivalent to wt5 = wt1 + error, but it's close. Therefore I prefer the model with wt4 because I don't want wt4 to drop out of the equation. This is my intuition rather than
something I know to be correct.
"orthogonal" and "parallel" morphisms?
Let $\mathbf C$ a category with an initial object named $0$.
Is there a name for the pair of arrows $f,g\colon A\to B$ such that the unique arrow $0\to A$ is their equalizer? And dually, is there a special name for $f,g\colon A\to B$ such that the coequalizer
is $B\to 1$, when $1$ is the terminal object of $\mathbf C$? Finally, is it useful to name them? :)
I can figure out how it works in case $\mathbf C$ is concrete: I want to capture the fact that a pair of arrows is "everywhere equal" (if "coker(f,g) = the whole") or "nowhere equal" (if "ker(f,g) =
nothing unnecessary").
No ideas for the general situation + I'm searching for references (something makes me think of Lawvere but I'm not able to recover anything).
Thanks a lot!
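For intuition in the concrete case $\mathbf C = \mathbf{Set}$ (where $0 = \emptyset$), the equalizer of $f,g\colon A\to B$ is the subset of $A$ on which they agree, so the first condition says exactly that $f$ and $g$ agree nowhere. A throwaway sketch (my own illustration, not part of the question):

```python
def equalizer(f, g, domain):
    # in Set, the equalizer of f, g : A -> B is {a in A : f(a) = g(a)}
    return {a for a in domain if f(a) == g(a)}

f = lambda n: n % 2
g = lambda n: (n + 1) % 2
# f and g disagree everywhere, so the equalizer is empty (the initial object)
print(equalizer(f, g, range(10)))  # set()
```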
reference-request ct.category-theory
3 The former might be called "having disjoint images", at least if C is extensive. I think "everywhere equal" is the wrong intuition for the dual notion; for instance, $id_N\colon N\to N$ and $(+1)\
colon N\to N$ have both properties. – Mike Shulman Mar 17 '11 at 18:14
I agree with Mike's comment about the intuition "everywhere equal", and that the former is something to do with disjointness. In fact Diers calls a pair with the latter property "codisjointed". –
Steve Lack Mar 18 '11 at 12:19
For the first, surely "disjoint" (or even just "unequal" or "apart") would be enough: it's all about limits and no "image" need ever be formed. I wonder what happens under Stone duality if we
apply this to parallel homomorphisms that agree only on constants. – Paul Taylor Mar 19 '11 at 18:23
For the dual property, the situation in $\bf Set$ is a binary relation whose equivalence closure is the total relation, or "indiscriminate" as I called it in my book, so maybe we could call this
an indiscriminate pair. We could also think of the relation dynamically: it gets you from anywhere to anywhere else, which is called "transitive" in the theory of permutation groups. However, some
more examples in other categories would be useful before fixing a name. – Paul Taylor Mar 19 '11 at 18:27
1 Further to my objection to "disjoint *image*", consider the identity and swap maps $2\rightrightarrows 2$. These have the property of the question. However, their images are both $2\rightarrowtail 2$ and the intersection of these is again the whole of $2$, not $0$. – Paul Taylor Mar 19 '11 at 19:50
of knot
Results 1 - 10 of 21
- J. ACM , 1999
"... We consider the problem of deciding whether a polygonal knot in 3-dimensional Euclidean space is unknotted, capable of being continuously deformed without self-intersection so that it lies in a
plane. We show that this problem, unknotting problem is in NP. We also consider the problem, unknotting pr ..."
Cited by 55 (6 self)
Add to MetaCart
We consider the problem of deciding whether a polygonal knot in 3-dimensional Euclidean space is unknotted, capable of being continuously deformed without self-intersection so that it lies in a
plane. We show that this problem, the unknotting problem, is in NP. We also consider the splitting problem: determining whether two or more such polygons can be split, or continuously deformed
without self-intersection so that they occupy both sides of a plane without intersecting it. We show that it also is in NP. Finally, we show that the problem of determining the genus of a polygonal
knot (a generalization of the problem of determining whether it is unknotted) is in PSPACE. We also give exponential worstcase running time bounds for deterministic algorithms to solve each of these
problems. These algorithms are based on the use of normal surfaces and decision procedures due to W. Haken, with recent extensions by W. Jaco and J. L. Tollefson.
- Theoretical Computer Science , 2001
"... A fundamental issue in theoretical computer science is that of establishing unambiguous formal criteria for algorithmic output. This paper does so within the domain of computeraided geometric
modeling. For practical geometric modeling algorithms, it is often desirable to create piecewise linear appr ..."
Cited by 28 (14 self)
Add to MetaCart
A fundamental issue in theoretical computer science is that of establishing unambiguous formal criteria for algorithmic output. This paper does so within the domain of computeraided geometric
modeling. For practical geometric modeling algorithms, it is often desirable to create piecewise linear approximations to compact manifolds embedded in and it is usually desirable for these two
representations to be "topologically equivalent". Though this has traditionally been taken to mean that the two representations are homeomorphic, such a notion of equivalence suffers from a variety
of technical and philosophical difficulties; we adopt the stronger notion of ambient isotopy. It is shown here, that for any , compact, 2-manifold without boundary, which is embedded in , there
exists a piecewise linear ambient isotopic approximation. Furthermore, this isotopy has compact support, with specific bounds upon the size of this compact neighborhood. These bounds may be useful
for practical application in computer graphics and engineering design simulations. The proof given relies upon properties of the medial axis, which is explained in this paper. E-mail address:
amenta@cs.utexas.edu. This author acknowledges, with appreciation, funding from the National Science Foundation under grant number CCR-9731977, and a research fellowship from the Alfred P. Sloan
Foundation. The views expressed herein are those of the author, not of these sponsors.
, 1998
"... The research presented here examines topological drawing, a new mode of constructing and interacting with mathematical objects in three-dimensional space. In topological drawing, issues such as
adjacency and connectedness, which are topological in nature, take precedence over purely geometric issues ..."
Cited by 18 (1 self)
Add to MetaCart
The research presented here examines topological drawing, a new mode of constructing and interacting with mathematical objects in three-dimensional space. In topological drawing, issues such as
adjacency and connectedness, which are topological in nature, take precedence over purely geometric issues. Because the domain of application is mathematics, topological drawing is also concerned
with the correct representation and display of these objects on a computer. By correctness we mean that the essential topological features of objects are maintained during interaction. We have chosen
to limit the scope of topological drawing to knot theory, a domain that consists essentially of one class of object (embedded circles in three-dimensional space) yet is rich enough to contain a wide
variety of difficult problems of research interest. In knot theory, two embedded circles (knots) are considered equivalent if one may be smoothly deformed into the other without any cuts or
self-intersections. This notion of equivalence may be thought of as the heart of knot theory. We present methods for the computer construction and interactive manipulation of a
, 2000
"... Abstract. Let T be a triangulation of S 3 containing a link L in its 1-skeleton. We give an explicit lower bound for the number of tetrahedra of T in terms of the bridge number of L. Our proof
is based on the theory of almost normal surfaces. 1. ..."
Cited by 8 (6 self)
Add to MetaCart
Abstract. Let T be a triangulation of S 3 containing a link L in its 1-skeleton. We give an explicit lower bound for the number of tetrahedra of T in terms of the bridge number of L. Our proof is
based on the theory of almost normal surfaces. 1.
- electronic), arXiv: math.GT/0205057. MR MR2219001
"... Abstract. We show that the problem of deciding whether a polygonal knot in a closed three-dimensional manifold bounds a surface of genus at most g is NP-complete. We also show that the problem
of deciding whether a curve in a PL manifold bounds a surface of area less than a given constant C is NP-ha ..."
Cited by 6 (0 self)
Add to MetaCart
Abstract. We show that the problem of deciding whether a polygonal knot in a closed three-dimensional manifold bounds a surface of genus at most g is NP-complete. We also show that the problem of
deciding whether a curve in a PL manifold bounds a surface of area less than a given constant C is NP-hard. 1.
- Chaos, Solitons and Fractals , 1998
"... Algorithms are of interest to geometric topologists for two reasons. First, they have bearing on the decidability of a problem. Certain topological questions, such as finding a classification of
four dimensional manifolds, admit no solution. ..."
Cited by 6 (3 self)
Add to MetaCart
Algorithms are of interest to geometric topologists for two reasons. First, they have bearing on the decidability of a problem. Certain topological questions, such as finding a classification of four
dimensional manifolds, admit no solution.
- Results of the NFS Workshop on Computational Topology , 1999
"... Here we present the results of the NSF-funded Workshop on Computational Topology, which met on June 11 and 12 in Miami Beach, Florida. This report identifies important problems involving both
computation and topology. Author affiliations: Marshall Bern, Xerox Palo Alto Research Ctr., bern@parc. ..."
Cited by 5 (0 self)
Add to MetaCart
Here we present the results of the NSF-funded Workshop on Computational Topology, which met on June 11 and 12 in Miami Beach, Florida. This report identifies important problems involving both
computation and topology. Author affiliations: Marshall Bern, Xerox Palo Alto Research Ctr., bern@parc.xerox.com. David Eppstein, Univ. of California, Irvine, Dept. of Information & Computer Science,
eppstein@ics.uci.edu. Pankaj K. Agarwal, Duke Univ., Dept. of Computer Science, pankaj@cs.duke.edu. Nina Amenta, Univ. of Texas, Austin, Dept. of Computer Sciences, amenta@cs.utexas.edu. Paul Chew,
Cornell Univ., Dept. of Computer Science, chew@cs.cornell.edu. Tamal Dey, Ohio State Univ., Dept. of Computer and Information Science, tamaldey@cis.ohio-state.edu. David P. Dobkin, Princeton Univ.,
Dept. of Computer Science, dpd@cs.princeton.edu. Herbert Edelsbrunner, Duke Univ., Dept. of Computer Science, edels@cs.duke.edu. Cindy Grimm, Brown Univ., Dept. of Computer Science, cmg@cs.brown.edu.
, 1994
"... The subject of this paper arises from the study of equivalence classes of equivariant periodic tilings of euclidean three-space. An equivariant tiling is just an ordinary tiling, i.e. a
subdivision of a space into a set of topological balls, together with a group of symmetries which are compatible w ..."
Cited by 4 (4 self)
Add to MetaCart
The subject of this paper arises from the study of equivalence classes of equivariant periodic tilings of euclidean three-space. An equivariant tiling is just an ordinary tiling, i.e. a subdivision
of a space into a set of topological balls, together with a group of symmetries which are compatible with this subdivision. Such tilings are called equivalent or of the same type if there exists a
homeomorphism between the underlying spaces which respects the subdivisions on both sides and also maps one symmetry group bijectively onto the other. The special case we are interested in here are
tilings of euclidean three-space with associated symmetry groups which are discrete groups of euclidean motions with compact orbit spaces, i.e. which are socalled crystallographic space groups...
"... A census is presented of all closed non-orientable 3-manifold triangulations formed from at most seven tetrahedra satisfying the additional constraints of minimality and P 2-irreducibility. The
eight different 3-manifolds represented by these 41 different triangulations are identified and the combin ..."
Cited by 3 (0 self)
Add to MetaCart
A census is presented of all closed non-orientable 3-manifold triangulations formed from at most seven tetrahedra satisfying the additional constraints of minimality and P 2-irreducibility. The eight
different 3-manifolds represented by these 41 different triangulations are identified and the combinatorial structures of the triangulations are described in detail. Furthermore, infinite families of
triangulations are constructed that highlight the common features of the census triangulations and allow similar triangulations of additional 3-manifolds to be formed. Algorithms and techniques used
in constructing the census are included. 1
Floral Park Math Tutor
...Finally, I believe relating the subject matter being taught to their everyday experience helps to keep students interested. I graduated from Columbia University in 1986, and earned a PhD. in
Sociology from Binghamton University in 1999. I have five years of teaching experience at the college level.
19 Subjects: including prealgebra, algebra 1, algebra 2, SAT math
...I have been involved in the world of music and music education, since I was a child starting violin lessons at the age of 5 and piano lessons at the age of 8. I studied violin with Fudeko
Takahashi and piano with Clara Slater at the New England Conservatory Preparatory School and also participat...
37 Subjects: including SAT math, ACT Math, English, reading
...That program covered material from elementary through high school and even freshman college level work. I have held increasing levels of responsibility at my job and I am now a manager in
operations. That said, teaching has always been my first love and I am delighted to see the lights go on in kids' minds as they learn.
12 Subjects: including trigonometry, geometry, physics, precalculus
...My main area of focus has been middle school. I have had an enormous amount of success improving the math grades and state test scores of my students. I am flexible, easy to work with and enjoy
9 Subjects: including algebra 1, algebra 2, prealgebra, elementary (k-6th)
...As a teacher, I can be demanding at times, but I also try to help the student enjoy the subject matter and try to share my passion for learning math and science with them. As a student, I
enjoyed and excelled in Math and Science and competed regularly in Math and Science competitions. I have a BS degree in Biology from MIT and have worked for 5 years in the investment banking
24 Subjects: including algebra 1, ACT Math, geometry, SAT math
Math.PI Property
This property holds the number value for pi.
This property holds the value of pi, which is the ratio of the circumference of a circle to its diameter. This value is represented internally as approximately 3.14159265358979323846.
For examples, see Math.atan() Method, Math.atan2() Method, and Math.cos() Method.
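A short illustrative snippet (written in plain JavaScript syntax, which Siebel eScript's Math object follows for this property):

```javascript
// Use Math.PI to relate a circle's radius to its circumference and area.
function circumference(r) { return 2 * Math.PI * r; }
function area(r) { return Math.PI * r * r; }

var c = circumference(3); // approximately 18.8496
var a = area(3);          // approximately 28.2743
```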
Computing the Volume of Closed 3-Manifolds and the Geometrization Conjecture
My question is whether or not if I generalize Theorem 2(i) of "Contact Graphs of Unit Sphere Packings Revisited" [2012] by K. Bezdek and S. Reid (arXiv link) which states
The number of touching triplets (resp., quadruples) in an arbitrary packing of $n \geq 3$ (resp., $n \geq 4$) unit balls in $\mathbb{E}^3$ is at most $\frac{25}{3}n$ (resp., $\frac{11}{4}n$).
by replacing $\mathbb{E}^3$ by any of the other Thurston Geometries $\mathbb{S}^2 \times \mathbb{R}, \mathbb{H}^2 \times \mathbb{R}, \mathbb{S}^3, \mathbb{H}^3, \widetilde{SL_{2}(\mathbb{R})}$,
Nilgeometry, or Solvgeometry, if I can use the Geometrization Conjecture to say something about the volume of closed 3-manifolds.
A succint statement of the Geometrization Conjecture for my purposes would be that for any closed 3-manifold $\mathcal{M}$ there exists a decomposition (I think it is called the JSJ-torus
decomposition, denoted by $\otimes$) of $\mathcal{M}$ into prime 3-manifolds $\mathcal{N}_{i}$ (such a decomposition exists due to the Geometrization Conjecture recently proved by G. Perelman and
neatly presented in "Completion of the Proof of the Geometrization Conjecture" [2008] by John Morgan and Gang Tian)
$$\mathcal{M} = \bigotimes_{i=1}^{n} \mathcal{N}_{i}$$
where each $\mathcal{N}_{i}$ admits one of the eight Thurston Geometries
$\mathbb{S}^2 \times \mathbb{R}, \mathbb{H}^2 \times \mathbb{R}, \mathbb{E}^3, \mathbb{S}^3, \mathbb{H}^3, \widetilde{SL_{2}(\mathbb{R})}$, Nilgeometry, or Solvgeometry.
and is of a finite volume.
My idea now is that each of the prime 3-manifolds which $\mathcal{M}$ was decomposed into can have its volume approximated by determining the maximum number of regular 3-simplices in a
simplicial 3-complex $\mathcal{K}_{i}$ which can be embedded into the $i$-th prime 3-manifold in the decomposition. Then,
$$\text{vol}\left(\mathcal{M}\right) > \sum_{i=1}^{n} \text{vol}(\mathcal{K}_{i})$$
With a generalization of Theorem 2(i) to each of the Thurston Geometries, then I would be able to compute this bound by multiplying the maximum number of unit balls I can fit in the space by the
volume of the unit ball and dividing by the optimal known packing density (note that a regular 3-simplex corresponds to a touching quadruple of spheres, which is why Theorem 2(i) would be useful).
Does this general outline make sense? I don't know a lot about the decomposition of manifolds or the volume of manifolds, so any feedback on the idea or references would be appreciated. In
particular, my question is:
If I do all of the work to get a version of Theorem 2(i) in each Thurston Geometry, can I use the Geometrization Conjecture for studying (in this example, I was thinking volume computation of
3-manifolds) some interesting properties of 3-manifolds?
mg.metric-geometry gt.geometric-topology sphere-packing
Samuel: Except for the hyperbolic geometry, volumes of all other geometric 3-manifolds are easily computable by using, say, Gauss-Bonnet formula in the case of the geometries fibered over the
2 hyperbolic plane, etc. In the hyperbolic case, I do not see how your method would yield something interesting (comparing to Gromov-norm interpretation of volume or computation using an ideal
triangulation). However, maybe I am missing something here. Also, take a look at the work of Gabai, Meyerhoff and Milley on volumes of hyperbolic 3-manifolds, it is also based on packing bounds. –
Misha Feb 26 '13 at 20:10
Wolfram Demonstrations Project
The Birthday Problem and Some Generalizations
The birthday problem asks, "How many randomly selected people must there be in a room in order for the probability that two people share a birthday to exceed 0.5?" and has the well-known answer 23.
The following generalizations are illustrated here, along with answers:
1. The probability of 0.5 can be replaced by any value from 0.01 to 0.99, in increments of 0.01.
2. The number of days in a year can be any value from 2 through 5000, for the convenience of extraterrestrials.
3. The question "How many randomly selected people must there be in a room in order for the probability that two people share a birthday or have birthdays on consecutive days to exceed 0.5?" is
Any combination of these generalizations can be used simultaneously.
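The exact-match case (generalizations 1 and 2) reduces to a short loop over the standard product formula; the consecutive-days variant (generalization 3) requires a different recursion and is not shown here. A sketch:

```python
def min_people(threshold=0.5, days=365):
    # P(n people all have distinct birthdays) = prod_{i<n} (days - i) / days;
    # return the smallest n with P(some shared birthday) > threshold
    p_distinct, n = 1.0, 0
    while 1.0 - p_distinct <= threshold:
        p_distinct *= (days - n) / days
        n += 1
    return n

print(min_people())           # 23, the classical answer
print(min_people(0.99, 365))  # 57
```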
Contributed by:
Marc Brodie
(Wheeling Jesuit University)
Heath Jarrow and Morton Example One: Modeling Interest Rates with One Factor and Maturity-Dependent Volatility
The author wishes to thank Robert A. Jarrow for his encouragement and advice on this series of worked examples of the HJM approach. What follows is based heavily on Prof. Jarrow’s Modeling Fixed
Income Securities and Interest Rate Options (second edition, 2002), particularly chapters 4, 6, 8, 9 and 15.
In this blog series, we use data from the Federal Reserve statistical release H15 published on April 1, 2011. U.S. Treasury yield curve data was smoothed using Kamakura Risk Manager version 7.3 to
create zero coupon bonds via the maximum smoothness forward rate technique of Adams and van Deventer as documented in these two recent blog issues:
van Deventer, Donald R. “Basic Building Blocks of Yield Curve Smoothing, Part 10: Maximum Smoothness Forward Rates and Related Yields versus Nelson-Siegel,” Kamakura blog, www.kamakuraco.com,
January 5, 2010. Redistributed on www.riskcenter.com on January 7, 2010.
van Deventer, Donald R. “Basic Building Blocks of Yield Curve Smoothing, Part 12: Smoothing with Bond Prices as Inputs,” Kamakura blog, www.kamakuraco.com, January 20, 2010. Redistributed on
The smoothed U.S. Treasury yield curve and the implied forward yield curves monthly for ten years looks like this:
The continuous forward rate curve and zero coupon bond yield curve that prevailed as of the close of business on March 31, 2011 were as follows:
Objectives of the Example and Key Input Data
Following Jarrow (2002), we want to show how to use the HJM framework to model random interest rates and value fixed income securities using the simulated random rates. For all of the examples in
this blog series, we assume the following:
• Zero coupon bond prices for the U.S. Treasury curve on March 31, 2011 are the basic inputs.
• Interest rate volatility assumptions are based on the Dickler, Jarrow and van Deventer blog series on daily U.S. Treasury yields and forward rates from 1962 to 2011.
• The modeling period is 4 equal length periods of one year each.
• The HJM implementation is that of a “bushy tree” which we describe below
The HJM framework is usually implemented using Monte Carlo simulation or a “bushy tree” approach where a lattice of interest rates and forward rates is constructed. This lattice, in the general case,
does not “recombine” like the popular “binomial” or “trinomial” trees used to replicate Black-Scholes options valuation or simple 1 factor term structure models. In general, the bushy tree does not
recombine because the interest rate volatility assumptions imply a path-dependent interest rate model, not one that is path independent like the simplest one factor term structure model
implementations. Instead the bushy tree looks like this:
At each of the points in time on the lattice (time 0, 1, 2, 3 and 4) there are sets of zero coupon bond prices and forward rates. At time 0, there is one set of data. At time one, there are two sets
of data, the “up set” and the “down set.” At time two, there are four sets of data (up up, up down, down up, and down down), and at time three there are 8=2^3 sets of data.
In order to build a bushy tree like this, we need to specify how many factors we are modeling and what the interest volatility assumptions are. The number of ending nodes on a bushy tree for n
periods is 2^n for a 1 factor model, 3^n for a 2 factor model, and 4^n for a 3 factor model. For this example, we will build a one factor bushy tree.
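The state bookkeeping behind these counts is just path enumeration: because the tree does not recombine, each node at time t is identified with the full length-t sequence of moves that reaches it. A hypothetical sketch (the function name is mine) for the one factor case used in this example:

```python
from itertools import product

def bushy_levels(n_periods, moves="ud"):
    # level t of a non-recombining tree holds one node per length-t
    # sequence of branch moves, e.g. ('u', 'd', 'u') at time 3
    return [list(product(moves, repeat=t)) for t in range(n_periods + 1)]

levels = bushy_levels(4)            # one factor: 2 branches per node
print([len(lv) for lv in levels])   # [1, 2, 4, 8, 16]
```

With three moves per node (a 2 factor model), the same enumeration gives 3^n ending nodes, matching the counts in the text.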
The simplest interest rate volatility function one can assume is that the one year spot U.S. Treasury rate and the three one year forward rates (the one year forwards maturing at times 2, 3 and 4)
all have an identical interest rate volatility σ. This is the discrete time equivalent of the Ho and Lee model (1986). We start by examining the actual standard deviations of the daily changes from
1962 to 2011 for 10 interest rates: the continuously compounded 1 year zero coupon bond yield, the continuously compounded 1 year forward rate maturing in year 2, the continuously compounded 1 year
forward rate maturing in year 3, and so on for each one year forward rate out to the one maturing in year 10. The daily volatilities are as follows:
The one year spot rate’s volatility was 0.0832% over this period. The volatilities for the forward rates maturing in years 2, 3 and 4 were 0.0911%, 0.0981%, and 0.0912% respectively. We can reach two
conclusions immediately:
• Interest rate volatility is not constant over the maturity horizon (contrary to the Ho and Lee [1986] assumptions)
• Interest rate volatility is not declining over the maturity horizon (contrary to the Vasicek [1977] assumptions).
The HJM approach is so general that the Ho and Lee and Vasicek models can be replicated as special cases, but there is no need to do so since the data shows that the volatility assumptions of those
models are not consistent with actual market movements.
Instead, we assume that forward rates have their actual volatilities over the period from 1962-2011, but we need to adjust for the fact that we have measured volatility over a time period of 1 day
(which we assume is 1/250th of a year, based on 250 business days per year), whereas we need volatilities for a period whose length Δ is 1, not 1/250. We know that the variance scales with the length of
the time period:

σ(Δ)² = Δ σ(1)²

and therefore

σ(1) = σ(Δ) / Δ^(1/2)
Therefore the 1 year forward rate, which has a volatility of 0.00091149 over a one day period has an annual volatility of 0.00091149/[(1/250)^1/2]=0.01441187.
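The annualization step is simple enough to script. A minimal sketch (the function name is ours, not the article's):

```python
import math

def annualize_volatility(daily_sigma: float, days_per_year: int = 250) -> float:
    """Variance scales linearly with the period length, so volatility scales
    with its square root: sigma(1 year) = sigma(1 day) / sqrt(1/250)."""
    return daily_sigma / math.sqrt(1.0 / days_per_year)

# Daily volatility of the 1 year forward rate from the article:
print(round(annualize_volatility(0.00091149), 8))  # 0.01441192 (the article, rounding its inputs, reports 0.01441187)
```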
We use the annual volatilities from this chart in our HJM implementation:
We will use the zero coupon bond prices prevailing on March 31, 2011 as our other inputs:
Our final assumption is that there is one random factor driving the forward rate curve. This implies (since all of our volatilities are positive) that all forward rates move up or down together. We
relax this assumption when we move to 2 and 3 factor examples.
Key Implications and Notation of the HJM Approach
The Heath Jarrow and Morton conclusions are very complex to derive but their implications are very straightforward. Once the zero coupon bond prices and volatility assumptions are made, the mean of
the distribution of forward rates (in a Monte Carlo simulation) and the structure of a bushy tree are completely determined by the constraints that there be no arbitrage in the economy. Modelers who
are unaware of this insight would choose means of the distributions for forward rates such that valuation would return different prices for the March 31, 2011 zero coupon bonds than those used as input. This would create the appearance of an arbitrage opportunity, but it is in fact an error, one that rightly calls into question the validity of the entire calculation.
We show in this example that the zero coupon bond valuations are 100% consistent with the inputs. We now introduce our notation:
Δ Length of time period, which is 1 in this example
r(t,s[t]) The simple 1 period risk free interest rate as of current time t in state s[t]
R(t,s[t]) 1+r(t,s[t]), the total return, the value of $1 dollar invested at the risk free rate for 1 period
σ (t,T,s[t]) Forward rate volatility at time t for the forward rate maturing at T in state s[t ](a sequence of ups and downs)
P(t,T,s[t]) Zero coupon bond price at time t maturing at time T in state s[t ](i.e. up or down)
Index = 1 if the state is “up” and = -1 if the state is “down”
U(t,T,s[t]) = the total return that leads to bond price P(t+Δ,t,s[t]=up) on a t maturity bond in an upshift state
D(t,T,s[t]) = the total return that leads to bond price P(t+Δ, t, s[t]=down) on a t maturity bond in a downshift state
K(t,T,s[t]) is the sum of the forward volatilities as of time t for the 1 period forward rates from t+Δ to T-Δ, as shown here:
We will also see the rare appearance of a hyperbolic function in finance, the hyperbolic cosine cosh(x) = (e^x + e^(-x))/2, which is available in common spreadsheet software.
Note that the current times that will be relevant in building a bushy tree of zero coupon bond prices are current times t=0, 1, 2, and 3. We’ll be interested in maturity dates T=2, 3, and 4. We know
that at time zero, there are 4 zero coupon bonds outstanding. At time 1, only the bonds maturing at T = 2, 3, and 4 will remain outstanding. At time 2, only the bonds maturing at times T = 3 and 4
will remain, and at time 3 only the bond maturing at time 4 will remain. For each of the boxes below, we need to fill in the relevant bushy tree (one for each of the four zero coupon bonds) with each
up shift and down shift of the zero coupon bond price as we step forward one more period (by Δ = 1) on the tree:
In the interests of saving space, we’ll arrange the tree to look like a table by stretching the bushy tree as follows:
A completely populated zero coupon bond price tree would then be summarized like this where the final shift in price is color coded for up (green) or down (red):
The mapping of the sequence of up and down states is shown here, consistent with the stretched tree above:
In order to populate the trees with zero coupon bond prices and forward rates, there is one more piece of information which we need to supply.
Pseudo Probabilities
In Chapter 7 of Jarrow (2002), Prof. Jarrow shows that a necessary and sufficient condition for no arbitrage is that, at every node in the tree, the one period return on a zero coupon bond neither
dominates nor is dominated by a one period investment in the risk free rate. Written as the total return on a $1 investment,
U(t,T,s[t]) > R(t,s[t]) > D(t,T,s[t]) for all points in time t and states s[t].
Prof. Jarrow adds “This condition is equivalent to the following statement: there exists a unique number π(t,s[t]) strictly between 0 and 1 such that
R(t,s[t])= π(t,s[t])U(t,T,s[t]) + (1- π(t,s[t]))D(t,T,s[t])”
Solving for π(t,s[t]) gives us the relationship
π(t,s[t]) = [R(t,s[t]) - D(t,T,s[t])] / [U(t,T,s[t]) - D(t,T,s[t])]
If the computed π(t,s[t]) values are between 0 and 1 everywhere on the bushy tree, then the tree is arbitrage free.
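The pseudo probability and the no-arbitrage check can be sketched directly from the relationship above. The returns below are illustrative only, not taken from the article's tree:

```python
def pseudo_probability(R: float, U: float, D: float) -> float:
    """Risk-neutral (pseudo) probability of the up state: solves
    R = pi*U + (1 - pi)*D  =>  pi = (R - D) / (U - D)."""
    return (R - D) / (U - D)

def is_arbitrage_free(R: float, U: float, D: float) -> bool:
    """No arbitrage at a node requires U > R > D, i.e. 0 < pi < 1."""
    return 0.0 < pseudo_probability(R, U, D) < 1.0

# Hypothetical one-period total returns at a single node:
pi = pseudo_probability(R=1.02, U=1.05, D=1.00)
print(round(pi, 6), is_arbitrage_free(1.02, 1.05, 1.00))  # 0.4 True
```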
Professor Jarrow continues as follows:
“Note that each π(t,s[t]) can be interpreted as a pseudo probability of the up state occurring over the time interval (t, t+Δ)…We call these π(t,s[t])s pseudo probabilities because they are ‘false’
probabilities. They are not the actual probabilities generating the evolution of the money market account and the [T] period zero coupon bond prices. Nonetheless, these pseudo probabilities have an
important use…” in valuation.
Prof. Jarrow goes on to explain on page 129 that “risk neutral valuation” is computed by “taking the expected cash flow, using the pseudo probabilities, and discounting at the spot rate of interest.”
He adds “this is called risk neutral valuation because it is the value that the random cash flow ‘x’ would have in an economy populated by risk-neutral investors, having the pseudo probabilities as
their beliefs.”
We now demonstrate how to construct the bushy tree and use it for risk-neutral valuation.
The Formula for Zero Coupon Bond Price Shifts
Prof. Jarrow demonstrates that the values for the risk neutral probability of an upshift π(t,s[t]) are determined by a set of limit conditions for an assumed evolution of forward rates. Since these
limit conditions allow multiple solutions, without loss of generality, one can always construct a tree whose limit is the assumed forward rate evolution with π(t,s[t]) =1/2 for all points in current
time t and all up and down states s[t]. Using our notation above and remembering that the variable Index is 1 in an upstate and -1 in a downstate, the shifts in zero coupon bond prices can be written as follows:

P(t+Δ,T,s[t+Δ]) = [P(t,T,s[t]) / P(t,t+Δ,s[t])] × exp[Index × K(t,T,s[t])Δ] / cosh[K(t,T,s[t])Δ]     (1)

where for notational convenience cosh[K(t,T,s[t])Δ] ≡ 1 when T = t+Δ. This is equation 15.17 in Jarrow (2002, page 286). We now put this formula to use.
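Equation (1) can be checked numerically against the worked T=2 example that the article walks through next. A minimal sketch, taking R(0) = 1.003003 from the discount factor example later in the article (the function name is ours):

```python
import math

def shifted_price(P_tT: float, P_t_next: float, K: float,
                  index: int, delta: float = 1.0) -> float:
    """One step of equation (1): the zero coupon bond price after an up
    (index = +1) or down (index = -1) shift:
        P(t+d, T) = [P(t,T) / P(t,t+d)] * exp(index * K * d) / cosh(K * d)
    Dividing by cosh keeps the pseudo probability of an up move at 1/2."""
    return (P_tT / P_t_next) * math.exp(index * K * delta) / math.cosh(K * delta)

# T = 2 bond at t = 0: P(0,2) = 0.98411015, P(0,1) = 1/R(0) = 1/1.003003,
# K(0,2) = 0.01441187 (the single relevant forward volatility).
P01 = 1.0 / 1.003003
up = shifted_price(0.98411015, P01, 0.01441187, +1)
down = shifted_price(0.98411015, P01, 0.01441187, -1)
print(round(up, 6), round(down, 6))  # 1.00129 0.972841
# The no-arbitrage condition holds with pi = 1/2: the expected one-period
# return on the bond equals the risk free return.
assert abs(0.5 * (up + down) - 1.003003 * 0.98411015) < 1e-12
```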
Building the Bushy Tree for Zero Coupon Bonds Maturing at Time T=2
We now populate the bushy tree for the 2 year zero coupon bond. We calculate each element of equation (1). When t=0 and T=2, we know Δ=1 and
P(0,2,s[t]) = 0.98411015.
The one period risk free rate is R(0,s[t]) = 1/P(0,1,s[t]) = 1.003003. The scaled sum of sigmas K(t,T,s[t]) for t=0 and T=2 contains a single volatility term, and therefore K(0,2,s[t]) = (√1)(0.01441187) = 0.01441187.
Using formula 1 with these inputs and the fact that the variable Index=1 for an upshift gives
P(1,2,s[t] = up) = 1.001290
For a downshift we set Index = -1 and recalculate formula 1 to get
P(1,2,s[t] = down) = 0.972841
We have fully populated the bushy tree for the zero coupon bond maturing at T=2 (note values have been rounded to six decimal places for display only), since all of the up and down states at time t=2
result in a riskless pay-off of the zero coupon bond at its face value of 1. Note also that, as a result of our volatility assumptions, the price of the bond maturing at time T=2 after an upshift to
time t=1 has resulted in a value higher than the par value of 1. This implies negative interest rates. Negative nominal interest rates have appeared in Switzerland, Japan, Hong Kong and (selectively)
in the United States. The standard academic argument is that this cannot occur because one could costlessly store cash to avoid a negative interest rate. In practice, however, cash cannot be stored costlessly: in 2011, Bank of New York Mellon announced that it would charge a fee (i.e., create a negative return) on demand deposit balances in excess of $50 million.
We will return to this issue in later blogs.
Building the Bushy Tree for Zero Coupon Bonds Maturing at Time T = 4
We now populate the bushy tree for the 4 year zero coupon bond. We calculate each element of equation (1). When t=0 and T=4, we know Δ=1 and
P(0,4,s[t]) = 0.930855099.
The one period risk free rate is R(0,s[t]) = 1/P(0,1,s[t]) = 1.003003. The scaled sum of sigmas K(t,T,s[t]) for t=0 and T=4 contains three volatility terms, and therefore K(0,4,s[t]) = (√1)(0.01441187 + 0.015506259 + 0.01442681) = 0.044344939.
Using formula 1 with these inputs and the fact that the variable Index=1 for an upshift gives
P(1,4,s[t] = up) =0.975026
For a downshift we set Index = -1 and recalculate formula 1 to get
P(1,4,s[t] = down) = 0.892275
We have correctly populated the first two columns of the bushy tree for the zero coupon bond maturing at T=4 (note values have been rounded to six decimal places for display only).
Now we move to the third column, which displays the outcome of the T=4 zero coupon bond price after 4 scenarios: up-up, up-down, down-up, and down-down. We calculate P(2,4,s[t] = up) and P(2,4,s[t] =
down) after the initial “down” state as follows. When t=1, T=4, and Δ=1 then
P(1,4,s[t] = down) = 0.892275, as shown in the second row of the second column.
The one period risk free rate is R(1,s[t]=down) = 1/P(1,2,s[t]=down) = 1/0.972841 = 1.027917. The scaled sum of sigmas K(t,T,s[t]) for t=1 and T=4 contains two volatility terms, and therefore K(1,4,s[t]) = (√1)(0.01441187 + 0.015506259) = 0.029918.
Using formula 1 with these inputs and the fact that the variable Index=1 for an upshift gives
P(2,4,s[t] = up) = 0.944617
For a downshift we set Index = -1 and recalculate formula 1 to get
P(2,4,s[t] = down) = 0.889752
We have correctly populated the third and fourth rows of column 3 (t=2) of the bushy tree for the zero coupon bond maturing at T=4 (note values have been rounded to six decimal places for display
only). The remaining calculations are left to the reader.
In a similar way, the bushy tree for the price of the zero coupon bond maturing at time T=3 can be calculated as follows:
If we combine all of these tables, we can create a table of the term structure of zero coupon bond prices in each scenario:
At any point in time t, the continuously compounded yield to maturity at time T can be calculated as y(t,T) = -ln[P(t,T)]/(T-t). When we display the term structure of zero coupon yields in each
scenario, we can see that our volatility assumptions have indeed produced some implied negative yields in this particular simulation. We discuss how to remedy that problem in the next installment in
this blog series.
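The yield formula is a one-liner, checked here against the 4 year input price used elsewhere in the article (the function name is ours):

```python
import math

def zero_yield(P: float, t: float, T: float) -> float:
    """Continuously compounded yield to maturity: y(t,T) = -ln(P(t,T)) / (T - t)."""
    return -math.log(P) / (T - t)

# Yield implied by the 4 year zero coupon bond price input, P(0,4) = 0.930855099:
print(round(zero_yield(0.930855099, 0, 4), 6))  # 0.017913
```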
Finally, we can display the 1 year U.S. Treasury spot rates and the associated term structure of 1 year forward rates in each scenario.
Valuation in the Heath Jarrow and Morton Framework
Prof. Jarrow in a quote above described valuation as the expected value of cash flows using the risk neutral probabilities. Note that column 1 denotes the riskless 1 period interest rate in each
scenario. For scenario number 14 (three consecutive downshifts in zero coupon bond prices), cash flows at time T=4 would be discounted by the one year spot rate at time t=0, by the one year spot rate at time t=1 in scenario 2 ("down"), by the one year spot rate at time t=2 in scenario 6 ("down down"), and by the one year spot rate at time t=3 in scenario 14 ("down down down"). The discount
factor is
Discount Factor (0,4, down down down) = 1/[(1.003003)(1.027917)(1.054642)(1.081260)] = 0.850559
These discount factors are displayed here for each potential cash flow date:
When taking expected values, we can calculate the probability of each scenario coming about since the probability of an upshift is ½:
It is convenient to calculate the “probability weighted discount factors” for use in calculating the expected present value of cash flows:
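The scenario 14 discount factor and its probability weighting can be reproduced directly (function names are ours):

```python
from math import prod

def discount_factor(one_period_returns: list) -> float:
    """Path discount factor: reciprocal of the compounded risk free total returns."""
    return 1.0 / prod(one_period_returns)

def prob_weighted_df(one_period_returns: list) -> float:
    """With pi = 1/2 at every node, a path with n branch points has probability
    (1/2)**n; the rate at t=0 is known, so n = len(returns) - 1."""
    n = len(one_period_returns) - 1
    return 0.5 ** n * discount_factor(one_period_returns)

# Scenario 14 (down, down, down) one year spot total returns from the article:
path = [1.003003, 1.027917, 1.054642, 1.081260]
print(round(discount_factor(path), 6))   # 0.850559
print(round(prob_weighted_df(path), 6))  # 0.10632
```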
We now use the HJM bushy trees we have generated to value representative securities.
Valuation of a Zero Coupon Bond Maturing at Time T=4
A riskless zero coupon bond pays $1 in each of the 8 nodes of the bushy tree that prevail at time T=4:
When we multiply this vector of 1s times the probability weighted discount factors in the time T=4 column in the previous table and add them, we get a zero coupon bond price of 0.93085510, which is
the value we should get in a no-arbitrage economy, the value observable in the market and used as an input to create the tree.
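The whole construction can be verified end to end. The sketch below rebuilds the bushy tree from equation (1) with π = 1/2 and confirms that risk-neutral valuation recovers the input zero coupon bond prices exactly. One caveat: the input P(0,3) is not reproduced in this excerpt, so an illustrative value of 0.958 is assumed here; the recovery property holds for any inputs, and the printed T=4 tree entries, which do not depend on P(0,3), match the article's worked values.

```python
import math

# Inputs from the article; P(0,3) is NOT shown in this excerpt, so an
# illustrative value is assumed. The no-arbitrage recovery demonstrated
# below holds regardless of the inputs chosen.
VOLS = [0.01441187, 0.015506259, 0.01442681]  # annual forward rate volatilities
P0 = {1: 1 / 1.003003, 2: 0.98411015, 3: 0.958, 4: 0.930855099}  # 3: assumed

def K(t: int, T: int) -> float:
    """Scaled sum of forward volatilities for the relevant forwards (Delta = 1)."""
    return sum(VOLS[:T - t - 1])

def build_tree(horizon: int = 4) -> list:
    """tree[t][state][T] = P(t, T, state); state is a tuple of +1/-1 shifts."""
    tree = [{(): dict(P0)}]
    for t in range(horizon - 1):
        level = {}
        for state, prices in tree[t].items():
            for idx in (+1, -1):
                level[state + (idx,)] = {
                    T: prices[T] / prices[t + 1]
                       * math.exp(idx * K(t, T)) / math.cosh(K(t, T))
                    for T in prices if T > t + 1}
        tree.append(level)
    return tree

def risk_neutral_value(tree: list, T: int) -> float:
    """Value of $1 paid at T: equally weighted average over all paths of the
    path discount factor (each branch has pseudo probability 1/2)."""
    paths = [()]
    for _ in range(T):
        paths = [p + (i,) for p in paths for i in (+1, -1)]
    total = 0.0
    for p in paths:
        df = 1.0
        for t in range(T):
            df *= tree[t][p[:t]][t + 1]  # discount one period at the spot rate
        total += df
    return total / len(paths)

tree = build_tree()
# Valuation recovers every input zero coupon bond price (no arbitrage):
for T in (2, 3, 4):
    assert abs(risk_neutral_value(tree, T) - P0[T]) < 1e-12
# Article's worked T=4 tree entries: P(1,4,up) and P(2,4, down then up).
print(round(tree[1][(1,)][4], 6), round(tree[2][(-1, 1)][4], 6))  # 0.975026 0.944617
```

The assertions are the point: because equation (1) builds every node so that the expected one-period bond return equals the risk free return, discounted expectation telescopes back to the input prices, which is exactly the consistency check the article describes.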
Valuation of a Coupon-Bearing Bond Paying Annual Interest
Next we value a bond with no credit risk that pays $3 in interest at every scenario at times T=1, 2, 3, and 4 plus principal of 100 at time T=4. The valuation is calculated by multiplying each cash
flow by the matching probability weighted discount factor, to get a value of 104.70709974:
Valuation of a Digital Option on the 1 Year U.S. Treasury Rate
Now we value a digital option that pays $1 at time T=3 if (at that time) the one year U.S. Treasury rate (for maturity at T=4) is over 8%. If we look at the table of the term structure of one year
spot and forward rates above, this happens in only one scenario, scenario 14 (down down down). The cash flow can be input in the table below and multiplied by the probability weighted discount
factors to find that this option has a value of 0.11495946:
The Dickler, Jarrow and van Deventer studies of movements in U.S. Treasury yields and forward rates from 1962 to 2011 confirm that 5-10 factors are needed to accurately model interest rate movements.
Popular one factor models (Ho and Lee, Vasicek, Hull and White, Black Derman and Toy) cannot replicate the actual movements in yields that have occurred. The interest rate volatility assumptions in
these models (constant, constant proportion, declining, etc.) are also inconsistent with observed volatility.
In order to handle a large number of driving factors and complex interest rate volatility structures, the Heath Jarrow and Morton framework is necessary. This blog, the first in a series, shows how
to simulate zero coupon bond prices, forward rates and zero coupon bond yields in an HJM framework with maturity-dependent interest rate volatility. Monte Carlo simulation, an alternative to the
bushy tree framework, can be done in a fully consistent way.
In the next blog in this series, we enrich the realism of the simulation by allowing for interest rate volatility that is also a function of the level of interest rates.
Dickler, Daniel T., Robert A. Jarrow and Donald R. van Deventer, “Inside the Kamakura Book of Yields: A Pictorial History of 50 Years of U.S. Treasury Forward Rates,” Kamakura Corporation memorandum,
September 13, 2011.
Dickler, Daniel T. and Donald R. van Deventer, “Inside the Kamakura Book of Yields: An Analysis of 50 Years of Daily U.S. Treasury Forward Rates,” Kamakura blog, www.kamakuraco.com, September 14,
Dickler, Daniel T., Robert A. Jarrow and Donald R. van Deventer, “Inside the Kamakura Book of Yields: A Pictorial History of 50 Years of U.S. Treasury Zero Coupon Bond Yields,” Kamakura Corporation
memorandum, September 26, 2011.
Dickler, Daniel T. and Donald R. van Deventer, “Inside the Kamakura Book of Yields: An Analysis of 50 Years of Daily U.S. Treasury Zero Coupon Bond Yields,” Kamakura blog, www.kamakuraco.com,
September 26, 2011.
Dickler, Daniel T., Robert A. Jarrow and Donald R. van Deventer, “Inside the Kamakura Book of Yields: A Pictorial History of 50 Years of U.S. Treasury Par Coupon Bond Yields,” Kamakura Corporation
memorandum, October 5, 2011.
Dickler, Daniel T. and Donald R. van Deventer, “Inside the Kamakura Book of Yields: An Analysis of 50 Years of Daily U.S. Par Coupon Bond Yields,” Kamakura blog, www.kamakuraco.com, October 6, 2011.
Heath, David, Robert A. Jarrow and Andrew Morton, "Contingent Claims Valuation with a Random Evolution of Interest Rates," The Review of Futures Markets, 9 (1), 1990.
Heath, David, Robert A. Jarrow and Andrew Morton, "Bond Pricing and the Term Structure of Interest Rates: A Discrete Time Approximation," Journal of Financial and Quantitative Analysis, December 1990.
Heath, David, Robert A. Jarrow and Andrew Morton, "Bond Pricing and the Term Structure of Interest Rates: A New Methodology for Contingent Claims Valuation," Econometrica, 60(1), January 1992.
Jarrow, Robert A. Modeling Fixed Income Securities and Interest Rate Options, second edition, Stanford Economics and Finance, Stanford University Press, Stanford, California, 2002.
Jarrow, Robert A. and Stuart Turnbull. Derivative Securities, 1996, Southwestern Publishing Co., second edition, fall 2000.
van Deventer, Donald R. “Pitfalls in Asset and Liability Management: One Factor Term Structure Models,” Kamakura blog, www.kamakuraco.com, November 7, 2011. Reprinted in Bank Asset and Liability
Management Newsletter, January, 2012.
van Deventer, Donald R. “Pitfalls in Asset and Liability Management: One Factor Term Structure Models and the Libor-Swap Curve,” Kamakura blog, www.kamakuraco.com, November 23, 2011. Reprinted in
Bank Asset and Liability Management Newsletter, February, 2012.
Slattery, Mark and Donald R. van Deventer, “Model Risk in Mortgage Servicing Rights,” Kamakura blog, www.kamakuraco.com, December 5, 2011.
van Deventer, Donald R., Kenji Imai, and Mark Mesler, Advanced Financial Risk Management, John Wiley & Sons, 2004. Translated into modern Chinese and published by China Renmin University Press,
Beijing, 2007. Second edition forthcoming in 2012.
Donald R. van Deventer
Kamakura Corporation
Honolulu, March 2, 2012
© Copyright 2012 by Donald R. van Deventer. All rights reserved.
In recent years, considerable progress has been made in studying algebraic cycles using infinitesimal methods. These methods have usually been applied to Hodge-theoretic constructions such as the
cycle class and the Abel-Jacobi map. Substantial advances have also occurred in the infinitesimal theory for subvarieties of a given smooth variety, centered around the normal bundle and the
obstructions coming from the normal bundle's first cohomology group. Here, Mark Green and Phillip Griffiths set forth the initial stages of an infinitesimal theory for algebraic cycles.
The book aims in part to understand the geometric basis and the limitations of Spencer Bloch's beautiful formula for the tangent space to Chow groups. Bloch's formula is motivated by algebraic
K-theory and involves differentials over Q. The theory developed here is characterized by the appearance of arithmetic considerations even in the local infinitesimal theory of algebraic cycles. The
map from the tangent space to the Hilbert scheme to the tangent space to algebraic cycles passes through a variant of an interesting construction in commutative algebra due to Angéniol and
Lejeune-Jalabert. The link between the theory given here and Bloch's formula arises from an interpretation of the Cousin flasque resolution of differentials over Q as the tangent sequence to the
Gersten resolution in algebraic K-theory. The case of 0-cycles on a surface is used for illustrative purposes to avoid undue technical complications.
Motion in One Dimension
16. When a car's velocity is positive and its acceleration is negative, what is ... below, how does the acceleration at A compare with the acceleration at B? ... – PowerPoint PPT presentation
Peachtree City Algebra 2 Tutor
Find a Peachtree City Algebra 2 Tutor
...I definitely tutor students at a rate that they feel comfortable with, and have found great success in tutoring these subjects. I am a science teacher who absolutely loves science and math! I
have tutored algebra since high school to students on both the high school and collegiate levels.
15 Subjects: including algebra 2, chemistry, geometry, biology
...In my experience tutoring chemistry and mathematics, a 2-hour lesson is preferred over a 1-hour lesson to better accomplish understanding of the assignment. I also prefer to know ahead of time
the assignment topic so I can review my personal textbooks and bring appropriate background material to...
12 Subjects: including algebra 2, chemistry, algebra 1, GED
...I spend countless hours in my spare time just doodling with this program and enjoy every minute of it. When I am not using it for boredom, I create magnificent ads, posters, flyers, and images
for my business. I have a Bachelor's degree in Informatics and dual Associates degrees in web design, where I maintained A average in these subjects, even in advanced level courses.
14 Subjects: including algebra 2, geometry, algebra 1, public speaking
...I can teach individuals at an advanced as well as an elementary level depending on what route you choose to accomplish your goal. Despite my busy schedule, I am always available via mainly
email, phone if needed and I am a very friendly and easy going person to work with. My goal as a tutor is ...
18 Subjects: including algebra 2, reading, writing, English
Hello, My name is Abdul W. I am a sophomore at Georgia Tech and majoring in Electrical Engineering. I have deep love for mathematics and always feel excited about learning and teaching
mathematics. I also have an associate's degree in Mathematics.
10 Subjects: including algebra 2, calculus, geometry, algebra 1
Barrington, IL Math Tutor
Find a Barrington, IL Math Tutor
...However, my forte would be tutoring in sciences and math (especially SAT/ACT math prep) as I tutor by stripping concepts to the basics and building upon those basics; this way, a foundation is
formed and the "secret" to success is by recognizing these patterns, whether it be in the SAT/ACT or in ...
16 Subjects: including calculus, chemistry, precalculus, study skills
...This experience allowed me to learn how to work with kids in a way that allowed them to truly understand what they were learning, as shown by the improvement they showed throughout the class.
I have also spent lots of time babysitting my neighbors who are all currently in elementary school, and ...
8 Subjects: including algebra 1, reading, grammar, vocabulary
...I also provide enough practice problems for them to improve their skills. I make sure I understand the need of the students and help them with patience. This way I create an environment where
learning math becomes fun and productive.
11 Subjects: including algebra 1, algebra 2, calculus, ACT Math
I am a math tutor at a reputable college. I have my bachelors in engineering and I can help you improve your grades and even score better in an exam. I have worked with high school students as
well as college students.
14 Subjects: including algebra 1, algebra 2, Microsoft Excel, geometry
...I have spent over 10 years developing the study skills necessary for success in my students. Study skills are essential for acquiring good grades, and useful for learning throughout one's
life. I'm equipped to helping a child with the process of organizing, taking in new information, retaining information, or dealing with assessments.
19 Subjects: including prealgebra, reading, chemistry, biology
Math Be Brave
We had meetings all day yesterday. I got discouraged. Thinking about the past year has left me a bit deflated. I have grown more confident in my own thinking about teaching mathematics, but remain
discouraged about my ability to teach it well.
In the math department meeting yesterday, I heard my colleagues describing how our students simply don't think when we give them mathematics to do. Even when they have all the necessary skills, they
don't engage their minds. We were in agreement that this is not because they can't do it. We all believe they absolutely can. I believe that it's one of the things their brains are designed to do
naturally. My new idea is that somehow they are not deeply thinking in math class because they haven't found it useful to do so.
I know that I have a tendency to go too hard on myself and my colleagues in moments like this. I asked myself, "Do I want a revolutionary miracle in every class?" If I do, I am probably setting
myself up for failure. Is getting kids to use their naturally pattern-seeking powerful minds such a huge demand? What is the part that gets kids really deeply involved, not just taking a class for
the sake of passing this adolescent rite of passage?
How do I get kids to value the power of their own thinking?
I remember this day a few weeks ago when I was at a loss of how to teach finding the slope of a line in a way that demanded this from them. Of course I could just show them, and they could do it, no
problem. Maybe even some of them would think about what they were doing while they were doing it and notice some patterns and sense in the whole thing. Maybe even they would all be so successful that
if I gave them a quiz they would all ace it and I could pat myself on the back feeling very successful because my kids were successful.
I talked with Ben about it and he reminded me: "But it wouldn't be math."
Right. So I'm trying to teach math.
This last year I feel like in many ways I've been doing math for the first time. Truly discovering, playing, exploring and sense-making about actual phenomena. Maybe I just need a little more
practice to teach this way well.
I'm looking forward to next year. I will be better at classroom management, at cultivating positive open relationships with kids, at organizing and structuring transparent systems and routines in my
classroom, and at unit planning. So I'm excited that having those ducks more in a row might mean that I have more time to think about inviting my kids to really think, figuring out what they are
thinking and celebrating and honoring and valuing that so that they do it more.
2 comments:
1. Hey Jesse - I hate to respond with a book suggestion, I hate when people do that, but you might try What's Math Got to Do with It? by Jo Boaler. It's not like The Ultimate Answer or anything, but
it has me excited about the possibilities again for the first time in a while. And it might point you to some pertinent references about engaging kids in effective group work and dialog etc.
I'm also interested to hear your response to it, because I feel sometimes that she is kind of making stuff up. Like "We gave them some problem-based lessons, and bippity boppity boo, everybody
loved math!" That's not a direct quote but that's what it feels like sometimes.
2. I agree "What's Math Got to Do with It?" is an excellent book. In fact, I recommend it to my parents every year at curriculum night. In the book, Boaler presents the results of her research in
very accessible terms for the general public. Complex Instruction, the teaching methodology that her research focuses on, is very difficult to do well. I agree that when reading through case
studies and papers, it often seems like magic. What we don't get a good sense for is all of the set-up and training that makes that work possible. Problem-based lessons alone aren't going to do
the trick.
If you are interested in the more detailed ed journal articles that discuss the research methodology, I could provide pointers to those (or I'm sure someone else out here can). Some other
articles of interest might be "Fast kids, slow kids, lazy kids: Framing the mismatch problem in mathematics teachers' conversations", "Lessons learned from detracked mathematics departments",
and "Why do students drop advanced mathematics?" here.
Math Forum: Alejandre: Math 7
The main focus for this week will be addressing Number Sense 1.2 one of the Key Standards for the seventh grade as well as several Mathematical Reasoning Key Standards.
November 11 - Veteran's Day Holiday || November 12 - Non-Student Day
Main Activity
Interactive Mathematics Text
p. 44 Mountain Bike
This activity includes analyzing data and using mathematical tools and reasoning to defend responses.
Spreadsheet Activity - suggestions on how to use Claris Works spreadsheet files are included in this activity.
Alignment to CA Mathematics Standards:
Number Sense
1.2 add, subtract, multiply and divide rational numbers, integers, fractions and decimals and take rational numbers to whole number powers
Mathematical Reasoning
1.1 analyze problems by identifying relationships, discriminating relevant from irrelevant information, identifying missing information, sequencing and prioritizing information, and observing patterns
1.2 formulate and justify mathematical conjectures based upon a general description of the mathematical question or problem posed
2.8 make precise calculations and check the validity of the results from the context of the problem
MAC Text
pp. 90-123 - Statistics and Data Analysis
pp. 470-471 - Circle Graphs
Alignment to CA Mathematics Standards:
Statistics, Data Analysis and Probability
1.1 know various forms of display for data sets, including a stem-and-leaf plot or box-and-whisker plot; use them to display a single set of data or compare two sets of data
1.2 represent two numerical variables on a scatter plot and informally describe how the data points are distributed and whether there is an apparent relationship between the two variables (e.g., time spent on homework and grade level)
Read and interpret tally charts
1.3 understand the meaning of and be able to compute the minimum, the lower quartile, the median, the upper quartile and the maximum of a data set
Problem of the Week - pp. 66-72 - Unit 8 - data
On the first Wednesday of the month students will have a reading activity in class. They will read the different POWs in their text and select one to work on for that week.
Alignment to CA Mathematics Standards:
Mathematical Reasoning
1.1 analyze problems by identifying relationships, discriminating relevant from irrelevant information, identifying missing information, sequencing and prioritizing information and observing patterns
1.2 formulate and justify mathematical conjectures based upon a general description of the mathematical question or problem posed.
1.3 determine when and how to break a problem into simpler parts.
2.4 make and test conjectures using both inductive and deductive reasoning
2.5 use a variety of methods such as words, numbers, symbols, charts, graphs, tables, diagrams and models to explain mathematical reasoning.
2.6 express the solution clearly and logically using appropriate mathematical notation and terms and clear language, and support solutions with evidence, in both verbal and symbolic work.
3.1 evaluate the reasonableness of the solution in the context of the original situation
3.2 note method of deriving the solution and demonstrate conceptual understanding of the derivation by solving similar problems.
□ Students will complete a written report showing how they analyzed data and made conclusions using reasoning skills.
□ Students will make graphs and charts to display data.
□ Students will use correctly the terms: mean, mode, range, minimum, lower quartile, median, upper quartile and maximum.
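For quick teacher reference, the terms in the last item can be computed with Python's built-in statistics module. This snippet is illustrative only (the data values are invented, not from the lesson), and it uses the common textbook convention of taking the quartiles as medians of the lower and upper halves of the data:

```python
import statistics

# Invented sample data (e.g., minutes spent on an activity).
data = sorted([12, 15, 15, 18, 21, 24, 24, 24, 30, 33, 36, 40])
n = len(data)

mean = statistics.mean(data)
mode = statistics.mode(data)
data_range = max(data) - min(data)
minimum, maximum = min(data), max(data)
median = statistics.median(data)
# Quartiles as medians of the lower/upper halves (textbook convention).
lower_quartile = statistics.median(data[: n // 2])
upper_quartile = statistics.median(data[(n + 1) // 2:])

print(minimum, lower_quartile, median, upper_quartile, maximum)
```

Note that other quartile conventions exist (spreadsheets and calculators sometimes interpolate differently), so students' answers may vary slightly by method.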
The main focus for this week will be addressing Number Sense 1.2 one of the Key Standards for the seventh grade as well as several Mathematical Reasoning Key Standards.
November 25-26 - Thanksgiving Holiday
Main Activity
Interactive Mathematics Text
p. 17 That's Sum Triangle
This activity includes calculating sums and using mathematical reasoning in a setting using triangles.
Finding Sums Activity
Alignment to CA Mathematics Standards:
Number Sense
1.2 add, subtract, multiply and divide rational numbers, integers, fractions and decimals and take rational numbers to whole number powers
Mathematical Reasoning
1.1 analyze problems by identifying relationships, discriminating relevant from irrelevant information, identifying missing information, sequencing and prioritizing information, and observing patterns
1.2 formulate and justify mathematical conjectures based upon a general description of the mathematical question or problem posed
2.1 use estimation to verify the reasonableness of calculated results
2.2 apply strategies and results from simpler problems to more complex problems
2.4 make and test conjectures using both inductive and deductive reasoning
2.6 express the solution clearly and logically using appropriate mathematical notation and terms and clear language, and support solutions with evidence, in both verbal and symbolic work
MAC Text
p. 233 - arithmetic vocabulary
p. 204-206 - properties of addition
p. 8-10 - using rounding
Alignment to CA Mathematics Standards:
Number Sense
1.2 add, subtract, multiply and divide rational numbers, integers, fractions and decimals and take rational numbers to whole number powers
□ Students will explain the strategies that can be used to solve arithmetic problems involving sums.
Q: Two entangled particles approach a black hole, one falls in and the other escapes. Do they remain entangled? What about after the black hole evaporates?
Posted on December 28, 2012 by The Physicist
Physicist: If all you had access to was the remaining particle, you’d never know the difference.
The way entanglement is often described in popular media makes it sound like voodoo; there’s some kind of magical connection between two or more particles, and doing something to one instantly
affects the other. From this point of view it might seem as though you could learn something about the inside of black holes by dropping entangled particles into them, or maybe the outside particle
would start acting black-hole-ish.
In practice (reality), entangled particles don’t have any kind of connection with each other, they’re just correlated in a funny way. In quantum physics things can be in multiple states, such as
being in multiple places or multiple energy levels. This is called a “superposition“. In classical physics things can only be in one state (I can’t tell you where my keys are, but they’re
definitely not in more than one place).
So, say I’ve got some paper bags and one red and one blue marble. Classically, the marble is either red or blue, but not both. In quantum mechanics you could prepare a state that’s a superposition
of red and blue.
Oddly enough, you can’t describe the world by describing the state of each particle individually. You have to include states that involve multiple particles that are in multiple states. Bringing a
second bag into the picture, and putting one of the two marbles into each, we find that there are two possible states: red/blue and blue/red. In quantum mechanics, there’s no problem having a
combination of these two states as well. Notice that since there’s only one marble of each color, the bags are correlated; if you know what’s in one, then you know what’s in the other.
There’s more detail about what exactly entanglement is and how it behaves here, but suffice it to say, when things are correlated and in a superposition of states they’re entangled. So finally, with
that background we can take a look at what happens when you destroy or just take away half of an entangled pair.
A black hole, in addition to all of its other weird properties, does a really good job doing exactly that. For all practical (experimental) purposes, dropping a particle into a black hole destroys
it plenty. By watching the black hole afterward, it would be impossible to recover information about the one particle you dropped in. So, what you’re left with is a single particle that’s still in
a superposition of states.
This, by the way, is what you have even if you don't destroy the other particle. Without access to both halves of an entangled pair, there's no way to determine if they're entangled at all.
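One way to see this concretely (my addition, not from the original post): compute the reduced density matrix of the remaining particle by tracing out the one that's gone. For the entangled state (|01> + |10>)/sqrt(2), what's left is the maximally mixed state, statistically identical to a particle that was never entangled with anything:

```python
import numpy as np

# Entangled pair (|01> + |10>)/sqrt(2) in the basis |00>, |01>, |10>, |11>.
psi = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())            # density matrix of the pair

# Partial trace over the second particle (the one the black hole ate).
rho = rho.reshape(2, 2, 2, 2)              # indices (a, b, a', b')
rho_A = np.trace(rho, axis1=1, axis2=3)    # sum over b = b'

print(rho_A)   # 0.5 * identity: completely random, no sign of entanglement
```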
There was a big debate between some of physics' heavyweights over whether or not things that fall into black holes are genuinely destroyed, including all of their information, or if information is
somehow preserved. After a few decades of debate, the consensus today is: no, information is conserved. This has led to some new ideas like black hole entropy, Hawking radiation, holography theory
, and black hole computation (this last one is a little more far-fetched than the others). In theory, if you somehow managed to keep track of absolutely every detail of everything that fell into the
black hole, and then managed to collect most of the Hawking radiation produced by the black hole over its entire life time (which is much, much greater than the age of the universe), then you could
in theory have a pretty good chance of correctly guessing which “marble” fell in.
But, for all intents and purposes, if you have an entangled pair of particles and you drop one into a black hole, you’ll be left with one particle that is still technically entangled to the other,
but it doesn’t matter. It’ll behave like any other particle.
Answer gravy: There are buckets more to be said about breaking entanglement, and the statistics of entangled pairs (and halves of entangled pairs), including the exact nature of the superposition of
the remaining particle. Way too many buckets to fit into one article. However! There’s a great resource that goes into ludicrous detail here. The stuff relevant to this post is in chapter 4.
9 Responses to Q: Two entangled particles approach a black hole, one falls in and the other escapes. Do they remain entangled? What about after the black hole evaporates?
1. Crosse says:
Wait, I’m confused. How can the lifetime of the black hole be much greater than the age of the Universe?
2. The Physicist says:
That was poorly written on my part.
A run-of-the-mill black hole would take around 10^69 years to completely evaporate, which is about 100,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 times older
than the present age of the universe.
3. Luke says:
I’m confused as to how you can prove a superposition without a connection between two particles. I understand that they are not connected physically, but because they behave that way it makes no
sense to me that they are separate or distinguishable from eachother…
4. The Physicist says:
It’s a little subtle. Basically, you prepare a huge number of particles identically, and then look at their statistics. This is how almost all of the great experiments in quantum physics are
Particles in a “pure state” behave in a very predictable way (or at least a measurement can be found in which they behave predictably), and particles in a “mixed state” are always at least a
little random. In the case of a “maximally entangled pair” each particle, on its own, is in a mixed state and is completely random regardless of measurement, and yet taken together the two
particles form a pure state. This is the subtlety I wanted to avoid in the post.
By the way, none of that should make any sense at all, so don’t stress. The last link in the post does talk about it in detail (although, looking at it again, it helps a lot to have read chapters
2 and 3 already, and to have a little back ground in quantum mech).
5. Will says:
Whenever someone talks about Quantum Mechanics I always get a little weirded out, because they keep saying that it shouldn't make any sense at all, but it makes plenty of sense to me once the
terminology is explained, and I'm pretty sure that means I'm operating on some kind of serious misconception.
6. Mark Eichenlaub says:
As I understand it, I think this post is incorrect when it says that a particle in the superposition (|0> + |1>)/sqrt(2) is indistinguishable from having access to the first particle of the pair
(|01> + |10>)/sqrt(2).
To see this, subject the first particle of the pair to the unitary transformation
|0> -> (|0> + |1>)/sqrt(2)
|1> -> (|0> – |1>)/sqrt(2)
If we have a particle that is not entangled, so it is described simply by (|0> + |1>)/sqrt(2) and we apply this transformation, we get the state |0>, so that we are certain to find 0 as the
result of a measurement.
On the other hand, if our initial state is (|01> + |10>)/sqrt(2) and we apply the same transformation, our final state is
(|01> + |11> + |00> – |10>)/2
When we measure the first bit, we get 0 and 1 with equal probability. Thus, it is possible to determine whether the particle is entangled.
In fact, this entanglement is exactly what "making a measurement" means. Suppose we have a right and a left slit in a double slit experiment. If we send some photons towards the slits, we might call
going through the left slit 0 and the right slit 1. Then if we set it up right, the particle can have equal amplitudes for both slits and be in the state (|0> + |1>)/sqrt(2). Allowing the
particle to be detected on a screen beyond the slits acts in much the same way as our example unitary transformation. We see interference on the screen, which is analogous to always measuring 0.
However, if we have a detector at the slits that measure which slit the particle went through, that detector becomes entangled with the photon, and we now have the state |0L> + |1R>, where L and
R represent the state of the detector when it knows the photon has gone through the left or right slit respectively. So long as the detector has perfect information – as long as L and R are
orthogonal – we no longer see interference at the screen. This is how measurements destroy interference.
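The distinguishability claimed in this comment is easy to check numerically (this sketch is mine, not the commenter's; the transformation described above is the Hadamard gate):

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # the transformation above
I2 = np.eye(2)

# Lone qubit in (|0> + |1>)/sqrt(2): the transformation sends it to |0>.
single = np.array([1.0, 1.0]) / np.sqrt(2)
p0_single = abs((H @ single)[0]) ** 2                # P(measure 0)

# First half of the entangled pair (|01> + |10>)/sqrt(2).
pair = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)   # basis |00>,|01>,|10>,|11>
out = np.kron(H, I2) @ pair                          # act on the first bit only
p0_pair = abs(out[0]) ** 2 + abs(out[1]) ** 2        # P(first bit is 0)

print(p0_single, p0_pair)   # 1.0 vs 0.5, so the two cases are distinguishable
```

As the comment argues, the lone superposition always yields 0 after the transformation, while the first half of the entangled pair stays 50/50.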
7. Blake Sims says:
In regards to the previous comment by Mark Eichenlaub, I would love to know the Physicist's response to his ideas about the collapse of the wave function in the double slit experiment, and whether
it is in fact the particle entangling with the measuring device.
I know entanglement takes very specific situations to take place, but could this be an explanation for the collapse of the wave function?
It doesn’t make much sense though. If the detector is not there, then the photon can go through both slits and act like a wave. Once you put the detector there then the photon should still act
the same, unless you have retrocausality taking place whereby the photon goes through both slits, is measured and then entangles itself with the detector and then goes back to acting like a wave.
How can the photon KNOW to act differently?
8. The Physicist says:
There’s a post here that tries to talk about that a bunch.
9. marie ice says:
Outside a black hole there is a region of no return, i.e. the event horizon. If 2 particles said to be "tangled" (because particles can only be in interaction with one another at a single point)
had numerous paths available in space outside the event horizon, they may move in any direction, breaking their contact; but when either one of them crosses the event horizon (where all the paths
available in space will be directed towards the centre of the black hole), it will fall into it, not necessarily "tangled", due to the reason given above. Larger black holes are deformations in
spacetime and can't evaporate.
This entry was posted in -- By the Physicist, Physics, Quantum Theory.
I need help with 3 ODEs.
September 27th 2008, 12:49 PM
I'm stuck with these three ODEs:
$u'' - \frac{1}{x}u' - 4x^2u=0$
$x^2 y''(x) + xy'(x)-n^2y(x)=0$
$u''' + 2e^xu'' -u' -2e^xu=0$
Nothing I tried worked, could you please help me? A hint on which method to try would be really great.
Thank you!
September 27th 2008, 01:22 PM
The second one is Bessel's equation, I think. I'll Google for the solution.
If anyone has an idea for the other two..
September 27th 2008, 01:31 PM
September 27th 2008, 01:39 PM
I've found that Euler equation has the form:
$a x^2 y'' + bxy' +cy=0$, but I don't have a constant times y, but something dependent on x..?
Edit: I thought you were talking about the first one, sorry. :-(
Also, we haven't done series solutions in class yet, but I'll try to Google it. Thank you.
September 27th 2008, 01:45 PM
-n^2 is a constant as far as a function of x is concerned
i will think of other ways to do the problems. if you haven't done series solutions, it would be a lot to learn just to do these. so you probably aren't expected to do them that way
September 27th 2008, 01:51 PM
Yes, I know, I edited the post above, I thought you were talking about the first one. It's not an excuse, but it's late in this timezone. :-)
i will think of other ways to do the problems. if you haven't done series solutions, it would be a lot to learn just to do these. so you probably aren't expected to do them that way
Oh, thank you! And I will read about series solutions, maybe they aren't that hard to learn.
September 27th 2008, 01:57 PM
I solved the second one. In case someone has a similar problem, I've found this page Pauls Online Notes : Differential Equations - Euler Equations to be useful.
September 27th 2008, 02:55 PM
put $x=\sqrt{t}.$ then $u'=2\sqrt{t} \frac{du}{dt},$ and $u''=2\frac{du}{dt} + 4t \frac{d^2u}{dt^2}.$ then your differential equation becomes: $\frac{d^2u}{dt^2}-u=0,$ which is very easy to solve.
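A quick symbolic sanity check of this substitution (my own addition, using SymPy): with $t=x^2$, the general solution of $\frac{d^2u}{dt^2}-u=0$ becomes $u(x)=c_1e^{x^2}+c_2e^{-x^2}$, and plugging it back into the original equation gives zero.

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
# General solution of d^2u/dt^2 - u = 0, with t = x**2 substituted back.
u = c1 * sp.exp(x**2) + c2 * sp.exp(-x**2)

# Original equation: u'' - u'/x - 4x^2 u = 0
lhs = sp.diff(u, x, 2) - sp.diff(u, x) / x - 4 * x**2 * u
print(sp.simplify(lhs))   # -> 0
```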
$u''' + 2e^xu'' -u' -2e^xu=0$
put $u'' - u=v.$ then your equation becomes: $v'+2e^xv=0,$ which clearly has $v=e^{-2e^x}$ as a solution. so now you need to solve $u''-u=e^{-2e^{x}}.$
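This reduction can also be checked symbolically (my addition): for any function $u$, the third-order expression equals $v'+2e^xv$ with $v=u''-u$, and $v=e^{-2e^x}$ does solve that first-order equation.

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')
v = u(x).diff(x, 2) - u(x)                     # the substitution v = u'' - u

original = (u(x).diff(x, 3) + 2*sp.exp(x)*u(x).diff(x, 2)
            - u(x).diff(x) - 2*sp.exp(x)*u(x))
reduced = v.diff(x) + 2*sp.exp(x)*v            # claimed first-order form

print(sp.simplify(original - reduced))         # -> 0

# v = exp(-2*exp(x)) solves v' + 2*exp(x)*v = 0:
w = sp.exp(-2*sp.exp(x))
print(sp.simplify(w.diff(x) + 2*sp.exp(x)*w))  # -> 0
```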
September 28th 2008, 12:37 AM
Thank you so much!!!
September 28th 2008, 12:57 AM
Euler type.
Euler type equations have polynomial type solutions.
Just substitute $y(x)=x^{\lambda}$ and get a quadratic equation in $\lambda$; solve it and get two roots $\lambda_{1}$ and $\lambda_{2}$ (this is for second-order equations; for higher-order
equations you get a polynomial of degree $n$).
Then, the solution is
$y(x)=\begin{cases}c_{1}x^{\lambda_{1}}+c_{2}x^{\lambda_{2}},&\lambda_{1}\neq\lambda_{2},\\x^{\lambda_{1}}\big[c_{1}+c_{2}\ln(x)\big],&\lambda_{1}=\lambda_{2},\end{cases}$
where $c_{1}$ and $c_{2}$ are arbitrary constants.
Note. Note that $\bigg(\frac{d}{dx}\bigg)^{k}$ will decrease the power of $y(x)=x^{\lambda}$ by $k$ while the factor $x^{k}$ will increase it by $k$, hence you will get $x^{k}\bigg(\frac{d}{dx}\bigg)^{k}x^{\lambda}=\lambda(\lambda-1)\cdots(\lambda-k+1)x^{\lambda}$ for any $k\in\mathbb{N}$.
This is why we search solutions of polynomial type.
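As a worked example (my addition), applying this recipe to the second equation from the original post, $x^2y''+xy'-n^2y=0$:

```latex
\text{Substitute } y(x)=x^{\lambda}: \quad
\lambda(\lambda-1)x^{\lambda}+\lambda x^{\lambda}-n^{2}x^{\lambda}=0
\;\Longrightarrow\; \lambda^{2}-n^{2}=0
\;\Longrightarrow\; \lambda_{1,2}=\pm n.
```

For $n\neq 0$ the roots are distinct, so $y(x)=c_{1}x^{n}+c_{2}x^{-n}$; for $n=0$ the root $\lambda=0$ is repeated and $y(x)=c_{1}+c_{2}\ln(x)$.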
Joint Mathematics Meetings
Joint Mathematics Meetings Program by Day
Current as of Tuesday, April 12, 2005 15:09:56
Program | Deadlines | Inquiries: meet@ams.org
1999 Joint Mathematics Meetings
San Antonio, TX, January 13-16, 1999
Meeting #939
Associate secretaries:
Susan J Friedlander, AMS susan@math.northwestern.edu
James J Tattersall, MAA tat@providence.edu
Saturday January 16, 1999
• Saturday January 16, 1999, 7:30 a.m.-10:55 a.m.
MAA Session on Geometry in the Classroom in the Next Millennium, III
Colm K. Mulcahy, Spelman College colm@mathcs.emory.edu
David W. Henderson, Cornell University
Barry Schiller, Rhode Island College
□ 7:30 a.m.
Using Dynamic Geometry Software with Majors and Preservice Secondary Teachers, I.
□ 7:30 a.m.
Presenting Proofs Using Technology in Geometry .
Martin E. Flashman*, Humboldt State University
□ 7:30 a.m.
Using Technology to Connect the Empirical and Deductive Aspects of the College Geometry Course.
Jean J. McGehee*, University of Central Arkansas
□ 7:30 a.m.
Triangles, Evolutes, and Computers.
Alan McRae*, Washington and Lee University
□ 8:00 a.m.
□ 8:15 a.m.
Using Dynamic Geometry Software with Majors and Preservice Secondary Teachers, II.
□ 8:15 a.m.
Enhancing "Modern Geometries" with Technology .
Mary J. Winter*, Michigan State University
□ 8:15 a.m.
Designing a Dynamic Geometry Course.
James M. Parks*, SUNY-Potsdam
□ 8:15 a.m.
Forging a more connected, and scientific geometry curriculum through student investigations of locus problems and geometrical curve drawing devices using the computer software Geometer's
David Dennis*, Univ. of Texas at El Paso
□ 8:45 a.m.
□ 9:00 a.m.
Teaching Non-Euclidean Geometry with Technology.
□ 9:00 a.m.
Using polyhedral models and Java applets to teach spherical geometry.
John M Sullivan*, Univ. of Illinois
□ 9:00 a.m.
Exploring Hyperbolic Geometry with PoincareDraw .
Robert L. Foote*, Wabash College
□ 9:00 a.m.
Exploring Geometry through Java-based Software.
Michael D Hvidsten*, Gustavus Adolphus College
□ 9:00 a.m.
High and Low Technology in the Geometry Classroom.
Daniel H Steinberg*, Oberlin College
□ 9:40 a.m.
□ 10:00 a.m.
Teaching Non-Euclidean Geometry without Technology Emphasis.
□ 10:00 a.m.
A Second Geometry Course: Discussion-Based Classes and Informal Texts.
Sarah-marie Belcastro*, University of Northern Iowa
□ 10:00 a.m.
Analytic Taxicab Geometry.
Jerry D. Taylor*, Campbell University
□ 10:00 a.m.
Geometry of the surface of the sphere for K-12 teachers.
Alice Mae Guckin*, College of St.Scholastica
□ 10:00 a.m.
A course in hyperbolic geometry and the topology and geometry of surfaces.
David Gay*, University of Arizona
□ 10:40 a.m.
• Saturday January 16, 1999, 8:00 a.m.-10:40 a.m.
GAMM-SIAM Special Session on Geometry in Dynamics, III
Krystyna Kuperberg, Auburn University kuperkm@mail.auburn.edu
• Saturday January 16, 1999, 8:00 a.m.-10:50 a.m.
AMS Special Session on Several Complex Variables, III
Emil J. Straube, Texas A \& M University emil.straube@math.tamu.edu
Harold P. Boas, Texas A \& M University
Marshall A. Whittlesey, Texas A \& M University mwhittle@math.tamu.edu
• Saturday January 16, 1999, 8:00 a.m.-10:50 a.m.
AMS Special Session on Commutative Algebra and Algebraic Geometry, III
Roger A. Wiegand, University of Nebraska and Purdue University rwiegand@math.purdue.edu
Susan Elaine Morey, Southwest Texas State University sm26@swt.edu
• Saturday January 16, 1999, 8:00 a.m.-10:50 a.m.
AMS Special Session on Computational Algebraic Geometry for Curves and Surfaces, III
Mika K. Seppala, Florida State University seppala@math.fsu.edu
Emil J. Volcheck, National Security Agency volcheck@acm.org
□ 8:00 a.m.
About the numerical uniformization of real hyperelliptic curves.
Peter Buser*, EPF, Lausanne
□ 8:30 a.m.
Equations for real hyperelliptic hyperbolic surfaces.
Robert Silhol*, Université de Montpellier II
□ 9:00 a.m.
□ 9:30 a.m.
Experiments with polynomials having prescribed branch values.
Kenneth Stephenson*, University of Tennessee, Knoxville
□ 10:00 a.m.
Numerical uniformization of piecewise flat surfaces.
Philip L Bowers*, FSU
Kenneth Stephenson, UTK
□ 10:30 a.m.
Using Circle Packings to Approximate Conformal Weldings.
George Brock Williams*, University of Tennessee
• Saturday January 16, 1999, 8:00 a.m.-10:45 a.m.
AMS Special Session on The Mathematics of the Navier-Stokes Equations, III (dedicated to the memory of Jean Leray)
Peter A. Perry, University of Kentucky perry@ms.uky.edu
Zhong-Wei Shen, University of Kentucky shenz@ms.uky.edu
• Saturday January 16, 1999, 8:00 a.m.-10:00 a.m.
MAA Minicourse \#11: Part B
Creating interactive texts in Mathematica.
John R. Wicks, North Park University
• Saturday January 16, 1999, 8:00 a.m.-10:00 a.m.
MAA Minicourse \#16: Part B
Using hand-held CAS throughout the mathematics curriculum.
Wade Ellis, West Valley College
L. Carl Leinbach, Gettysburg College
Bert K. Waits, Ohio State University
• Saturday January 16, 1999, 8:00 a.m.-10:55 a.m.
MAA Session on Discrete Mathematics Revisited, II
Richard K. Molnar, Macalester College molnar@macalester.edu
Suzanne M. Molnar, College of St. Catherine
□ 8:00 a.m.
Discrete Mathematics: One Person's View of the Present and the Future.
William A Marion*, Valparaiso University
□ 8:20 a.m.
Discrete Mathematics: Queen and Servant of Computer Science.
Wayne M. Dymacek*, Washington and Lee University
□ 8:40 a.m.
Discrete Mathematics for Mathematics and Computer Science Students.
Nancy L. Hagelgans*, Ursinus College
□ 9:00 a.m.
Introduction to Discrete Mathematics with ISETL.
William E Fenton*, Bellarmine College
□ 9:20 a.m.
Teaching and learning discrete mathematics through undergraduate research.
Anant P. Godbole*, Michigan Tech University
□ 9:40 a.m.
Establishing a Summer Program in Discrete Mathematics.
Glenn Acree*, Belmont University
Tosha Stanley, Belmont University
□ 10:00 a.m.
Constructivist hypertext support materials for post-secondary discrete mathematics courses aimed at retraining secondary teachers.
Nancy Casey*, Institute for Studies in Educational Mathematics and University of Idaho
□ 10:20 a.m.
Discrete Mathematics: A Basis and Gateway for Learning Mathematics.
Tabitha T. Mingus, Western Michigan University
Richard M. Grassl*, University of Northern Colorado
□ 10:40 a.m.
• Saturday January 16, 1999, 8:00 a.m.-9:20 a.m.
MAA Panel Discussion
Life after retirement.
Andrew Sterrett, Jr., Denison University
• Saturday January 16, 1999, 8:30 a.m.-10:50 a.m.
AMS-MAA Special Session on Research in Mathematics by Undergraduates, II
John E. Meier, Lafayette College meierj@lafayette.edu
Leonard A. VanWyk, James Madison University vanwyk@math.jmu.edu
• Saturday January 16, 1999, 8:30 a.m.-10:50 a.m.
AMS-MAA Special Session on The History of Mathematics, III
Karen H. Parshall, University of Virginia khp3k@unix.mail.virginia.edu
Victor J. Katz, University of the District of Columbia
• Saturday January 16, 1999, 8:30 a.m.-10:50 a.m.
AMS Special Session on Bergman Spaces and Related Topics, III
Peter L. Duren, University of Michigan, Ann Arbor duren@math.lsa.umich.edu
Michael Stessin, SUNY at Albany
• Saturday January 16, 1999, 8:30 a.m.-10:50 a.m.
AMS Special Session on Operator Algebras and Applications, III
Allan P. Donsig, University of Nebraska--Lincoln adonsig@math.unl.edu
Nik Weaver, Washington University nweaver@math.wustl.edu
• Saturday January 16, 1999, 8:30 a.m.-10:30 a.m.
MAA-National Alliance of State Science and Mathematics Coalitions-AMS Committee on Education Panel Discussion
The evaluation of state standards for school mathematics.
Kenneth A. Ross, University of Oregon
Joan F. Donahue, NASSMC
Rolf Blank, Council of Chief State School Officers
Heidi Glidden, American Federation of Teachers
Ralph A. Raimi, University of Rochester
Richard A. Askey, University of Wisconsin, Madison
Joseph Rosenstein, New Jersey Mathematics Coalition
Henry Alder, University of California-Davis
• Saturday January 16, 1999, 9:00 a.m.-9:50 a.m.
MAA Invited Address
Convex Polytopes and Partially Ordered Sets.
Rodica Simion*, The George Washington University
• Saturday January 16, 1999, 9:00 a.m.-9:50 a.m.
ASL Invited Address
Understanding automorphisms using templates.
Robert I. Soare*, University of Chicago
• Saturday January 16, 1999, 9:00 a.m.-10:50 a.m.
AMS Special Session on Dynamical, Spectral, and Arithmetic Zeta-Functions, III
Michel L. Lapidus, University of California, Riverside lapidus@math.ucr.edu
Machiel van Frankenhuysen, Institut des Hautes \'Etudes Scientifiques machiel@ihes.fr
• Saturday January 16, 1999, 9:00 a.m.-11:00 a.m.
MAA/ARUME Poster Session
Julie Clark,
Annie Selden, Tenessee Technological University
John Selden, MERC
• Saturday January 16, 1999, 9:00 a.m.-9:00 p.m.
MAA Special Presentation on the Legacy of R. L. Moore
"Challenge in the Classroom" screenings at 10:00 a.m. and 3:00 p.m.
Albert C. Lewis, Indiana University-Purdue University Indianapolis
Ben Fitzpatrick, Jr., Auburn University
Donald J. Albers, MAA
• Saturday January 16, 1999, 9:00 a.m.-10:00 a.m.
NAM Panel Discussion
Effective networking and research dialogue via teleconferences/telecommunication.
Leon C. Woodson, Morgan State University
James C. Turner, Arizona State University
• Saturday January 16, 1999, 9:00 a.m.-12:00 p.m.
Mathematical Sciences Employment Register
Interviews only.
• Saturday January 16, 1999, 9:30 a.m.-10:50 a.m.
MAA Presentation
Planning for retirement.
Carol Shaw, MAA
• Saturday January 16, 1999, 10:05 a.m.-10:55 a.m.
MAA Invited Address
Experimental Mathematics: Insight from Computation.
Jonathan M Borwein*, Simon Fraser University
• Saturday January 16, 1999, 10:20 a.m.-11:10 a.m.
ASL Invited Address
Aspects of nonlocal compactness.
Slawomir J. Solecki*, University of Indiana
• Saturday January 16, 1999, 10:30 a.m.-11:00 a.m.
AWM Graduate Student Poster Session
□ 10:30 a.m.
Posters will remain on display all morning:
□ 10:30 a.m.
The Gain of Regularity for the KP-II Equation.
Julie L. Benson*, Brown University
□ 10:30 a.m.
Isothermic Tori with Planar Lines of Curvature.
Holly E. Bernstein*, Washington University in St. Louis
□ 10:30 a.m.
Twisted Torsion on Compact Hyperbolic Spaces.
Maria G. Fung*, Cornell University
□ 10:30 a.m.
On Artin's Conjecture for Icosahedral Representations.
Theresa Girardi*, Rutgers University
□ 10:30 a.m.
Hecke C*-Algebras.
Rachel W. Hall*, Pennsylvania State University
□ 10:30 a.m.
Norms of powers and a central limit theorem for complex-valued probabilities.
Natalia A. Humphreys*, Ohio State University
□ 10:30 a.m.
Stochastic Models of Physical Systems.
Edna W. James*, Iowa State University
□ 10:30 a.m.
Adams operations and the Dennis trace map.
(Miriam) Ruth Kantoravitz*, University of Illinois at Urbana-Champaign
□ 10:30 a.m.
The Influence of Two Moving Heat Sources on Blow-up in a Reactive-Diffusive Medium.
Colleen Margarita Kirk*, Northwestern University
□ 10:30 a.m.
The Behavior of Relative Entropy in the Hydrodynamic Scaling Limit.
Elena Kosygina*, Courant Institute of Mathematical Sciences/NYU
□ 10:30 a.m.
Another Reason Why Exceptional Weyl Groups are Exceptional.
Amy E. Ksir*, University of Pennsylvania
□ 10:30 a.m.
The Relaxation Limit in a Biodegradation Model.
Regan E. Murray*, University of Arizona
□ 10:30 a.m.
Transport equations and velocity averages.
Guergana Petrova*, University of South Carolina
□ 10:30 a.m.
Positivity preserving numerical schemes for lubrication type equations.
Liya Zhornitskaya*, Duke University
• Saturday January 16, 1999, 12:30 p.m.-1:20 p.m.
ASL Invited Address
Simple theories.
Anand Pillay*, University of Illinois at Urbana-Champaign
• Saturday January 16, 1999, 12:30 p.m.-2:00 p.m.
AWM Workshop Panel Discussion
Launching a career in mathematics.
Catherine A. Roberts, Northern Arizona State University
Susan C. Geller, Texas A\&M University
Deborah Frank Lockhart, National Science Foundation
Elizabeth W. McMahon, Lafayette College
• Saturday January 16, 1999, 1:00 p.m.-2:00 p.m.
NAM Claytor-Woodard Lecture
Maximum Cliques and Minimum Colorings in Graphs.
Earl R. Barnes*, Georgia Institute of Technology
• Saturday January 16, 1999, 1:00 p.m.-3:40 p.m.
AMS-MAA Special Session on Research in Mathematics by Undergraduates, III
John E. Meier, Lafayette College meierj@lafayette.edu
Leonard A. VanWyk, James Madison University vanwyk@math.jmu.edu
□ 1:00 p.m.
Weight Distributions for Certain Codes.
Sukaina Alarakhia*, Mount Holyoke College
Paul Lambert, Furman University
Tom Wexler, Amherst College
Liangyi Zhao, Rutgers University
□ 2:00 p.m.
Minimal Volume Maximal Cusps.
Colin C Adams, Williams College
David P Biddle*, SUNY Binghamton
Carol A Gwosdz, University of Dallas
Katherine A Paur, MIT
Scott B Reynolds, Williams College
□ 3:00 p.m.
The topological fundamental group and generalized covering spaces.
Daniel Biss*, Harvard University
• Saturday January 16, 1999, 1:00 p.m.-5:50 p.m.
GAMM-SIAM Special Session on Geometry in Dynamics, IV
Krystyna Kuperberg, Auburn University kuperkm@mail.auburn.edu
• Saturday January 16, 1999, 1:00 p.m.-3:20 p.m.
AMS Special Session on Bergman Spaces and Related Topics, IV
Peter L. Duren, University of Michigan, Ann Arbor duren@math.lsa.umich.edu
Michael Stessin, SUNY at Albany
• Saturday January 16, 1999, 1:00 p.m.-4:50 p.m.
AMS Special Session on Commutative Algebra and Algebraic Geometry, IV
Roger A. Wiegand, University of Nebraska and Purdue University rwiegand@math.purdue.edu
Susan Elaine Morey, Southwest Texas State University sm26@swt.edu
• Saturday January 16, 1999, 1:00 p.m.-4:20 p.m.
AMS Special Session on Operator Algebras and Applications, IV
Allan P. Donsig, University of Nebraska--Lincoln adonsig@math.unl.edu
Nik Weaver, Washington University nweaver@math.wustl.edu
• Saturday January 16, 1999, 1:00 p.m.-3:00 p.m.
AMS Special Session on Computational Algebraic Geometry for Curves and Surfaces, IV
Mika K. Seppala, Florida State University seppala@math.fsu.edu
Emil J. Volcheck, National Security Agency volcheck@acm.org
□ 1:00 p.m.
Discussion and software demonstrations.
• Saturday January 16, 1999, 1:00 p.m.-3:20 p.m.
AMS Special Session on The Mathematics of the Navier-Stokes Equations, IV (dedicated to the memory of Jean Leray)
Peter A. Perry, University of Kentucky perry@ms.uky.edu
Zhong-Wei Shen, University of Kentucky shenz@ms.uky.edu
□ 1:00 p.m.
Regularity of solutions of the Navier-Stokes system in a rotating frame.
Anatoli V Babin*, University of California at Irvine
□ 1:30 p.m.
Large time asymptotic expansion of solutions of the Navier-Stokes equations on $R^d$ .
Daniel B. Dix*, University of South Carolina
□ 2:00 p.m.
Heat Transport in Rotating Convection.
Peter Constantin, University of Chicago
Chris Hallstrom*, University of Chicago
Vachtang Putkaradze, University of Chicago
□ 2:30 p.m.
Homoclinic Orbits to Multi-pulsed Traveling-wave solutions of a Boussinesq System.
Min F Chen*, Penn State
□ 2:50 p.m.
• Saturday January 16, 1999, 1:00 p.m.-3:00 p.m.
MAA Minicourse #14: Part B
An introduction to wavelets.
Colm K. Mulcahy, Spelman College
• Saturday January 16, 1999, 1:00 p.m.-3:00 p.m.
MAA Minicourse #15: Part B
Music and mathematics.
Leon Harkelroad, Bard College
• Saturday January 16, 1999, 1:00 p.m.-3:00 p.m.
MAA Minicourse #6: Part B
Cooperative learning in undergraduate mathematics education.
Barbara E. Reynolds, Cardinal Stritch University
William E. Fenton, Bellarmine College
• Saturday January 16, 1999, 1:00 p.m.-3:40 p.m.
AMS Session on Ordinary Differential Equations and Dynamical Systems
• Saturday January 16, 1999, 1:00 p.m.-1:40 p.m.
AMS Session on Convex Geometry
□ 1:00 p.m.
On the Geometry of Locally Nonconical Convex Sets.
Glenn C. Shell*, Lincoln University of Missouri
□ 1:15 p.m.
Secondary Polytopes of Two-dimensional Point Sets with Few Interior Points.
Wendy A Weber*, University of Kentucky
□ 1:30 p.m.
The Moon, the Sun, and Convexity.
Noah S Brannen*, Samford University
• Saturday January 16, 1999, 1:00 p.m.-5:15 p.m.
MAA Session on Projects That Work in Applied Mathematics Courses, II
Alexandra Kurepa, North Carolina A & T State University kurepaa@ncat.edu
Henry A. Warchall, University of North Texas
• Saturday January 16, 1999, 1:00 p.m.-5:10 p.m.
MAA Session on Innovative Use of Distance Learning Techniques to Teach Post-Secondary Mathematics, II
Brian E. Smith, McGill University smithb@management.mcgill.ca
Marcelle Bessman, Jacksonville University
• Saturday January 16, 1999, 1:00 p.m.-5:25 p.m.
MAA Session on Integrating Mathematics and Other Disciplines, III
William G. McCallum, University of Arizona wmc@math.arizona.edu
Nicholas T. Losito, SUNY at Farmingdale
Yajun Yang, SUNY at Farmingdale
• Saturday January 16, 1999, 1:00 p.m.-5:05 p.m.
MAA Session on The Integral Role of the Two-Year College in the Preservice Preparation of Elementary School Teachers, II
Mercedes A. McGowen, William Rainey Harper College mmcgowen@harper.cc.il.us
Joanne V. Peeples, El Paso Community College
William E. Haver, Virginia Collaborative for Excellence in the Preparation of Teachers
□ 1:00 p.m.
Investing in Tomorrow's Teachers: The Integral Role of Two-Year Colleges.
Sadie C. Bragg*, American Mathematical Association of Two-Year Colleges
□ 1:30 p.m.
A successful two-year/four-year college collaborative for the preparation of new teachers: J. Sargeant Reynolds Community College and Virginia Commonwealth University.
Susan S Wood*, J. Sargeant Reynolds Community College
□ 1:50 p.m.
Results of Research: A Means to Effect Changes in Preservice Teachers' Attitudes and Understanding.
Mercedes A McGowen*, William Rainey Harper College
Nancy Vrooman, William Rainey Harper College
□ 2:10 p.m.
The Integral Role of Borough of Manhattan Community College in the Mathematics Preparation of Prospective Teachers.
June L Gaston*, Borough of Manhattan CC
□ 2:30 p.m.
Increasing the Number and Quality of Preservice Teachers in Mathematics and Science: A Community College/University Partnership.
Lucy H. Michal, El Paso Community College
Emil J Michal, El Paso Community College
Joanne Peeples, El Paso Community College
Connie K Della-Piana*, University of Texas at El Paso
V. Anne Tubb, University of Texas at El Paso
□ 2:50 p.m.
"ITSMET" (Introduction to Teaching Science, Math, Engineering, and Technology)--Shiprock Campus of Diné College.
Robin L. Sellen*, Diné College, Shiprock Campus, Shiprock, NM
Mark C. Bauer, Diné College, Shiprock Campus, Shiprock, NM
□ 3:10 p.m.
2-Year and 4-Year College Partnerships in Teacher Preparation.
Mark Greenhalgh*, Fullerton College
Judy Kasabian, El Camino College
Fran Manion, Santa Monica College
Michael A. McDonald, Occidental College
□ 3:30 p.m.
Training Teachers With Manipulatives and Technology.
Gladys G Whitehead, AMATYC
Deborah Zankofski*, AMATYC
Catherine B Cant, AMATYC
□ 3:50 p.m.
Strengthening the Mathematics Preparation of Prospective Elementary Teachers: A Texas State Systemic Initiative Action Supported by NSF and FIPSE Funding.
Jane F. Schielack*, Texas A&M University
Jamie Whitehead Ashby, Texarkana College
□ 4:10 p.m.
Revitalizing Teacher Education Through Advanced Technologies.
Reginald K.U. Luke*, Middlesex County College, Edison, NJ
□ 4:30 p.m.
Measuring, Analyzing and Communicating for Elementary Science Teachers.
Michael H. Farmer*, Greenville Technical College
Elizabeth T. Higgins, Greenville Technical College
□ 4:50 p.m.
Collaborative Learning Materials.
Raymond F Coughlin*, Temple University
• Saturday January 16, 1999, 1:00 p.m.-2:20 p.m.
MAA Committee on the Teaching of Undergraduate Mathematics Panel Discussion
Teaching collaborations between graduate departments in mathematics at four-year institutions, and community colleges.
Pamela E. Matthews, American University
Paul Latiolais, Portland State University
Janet P. Ray, Seattle Central Community College
Ginger Warfield, University of Washington
• Saturday January 16, 1999, 1:00 p.m.-2:20 p.m.
MAA Committee on the Mathematical Education of Teachers Panel Discussion
Improved teacher preparation: What mathematics departments can do.
James Loats, Metropolitan State College of Denver
• Saturday January 16, 1999, 1:00 p.m.-2:20 p.m.
SUMMA Special Presentation
Intervention programs for minority precollege students.
William A. Hawkins, Jr., SUMMA
Manuel Berriozábal, University of Texas-San Antonio
Claudette Bradley-Kawagley, University of Alaska-Fairbanks
Max Warshauer, Southwest Texas State University
• Saturday January 16, 1999, 1:00 p.m.-3:00 p.m.
MAA Workshop
Integrating active learning techniques into lectures.
Sandra L. Rhoades, Keene State College
• Saturday January 16, 1999, 1:30 p.m.-2:20 p.m.
ASL Invited Address
Computable aspects of ordered groups.
Reed Solomon*, University of Wisconsin
• Saturday January 16, 1999, 1:30 p.m.-4:20 p.m.
AMS Special Session on Several Complex Variables, IV
Emil J. Straube, Texas A & M University emil.straube@math.tamu.edu
Harold P. Boas, Texas A & M University
Marshall A. Whittlesey, Texas A & M University mwhittle@math.tamu.edu
• Saturday January 16, 1999, 1:30 p.m.-3:30 p.m.
Special Poster Session
Guided student exploration as a means of learning mathematics.
Harriet Pollatsek, Mount Holyoke College
• Saturday January 16, 1999, 2:00 p.m.-5:00 p.m.
AMS Special Session on Dynamical, Spectral, and Arithmetic Zeta-Functions, IV
Michel L. Lapidus, University of California, Riverside lapidus@math.ucr.edu
Machiel van Frankenhuysen, Institut des Hautes Études Scientifiques machiel@ihes.fr
□ 2:00 p.m.
Complex Dimensions of Self-Similar Fractal Strings .
Michel L. Lapidus, University of California, Riverside
Machiel van Frankenhuysen*, University of California, Riverside
□ 2:30 p.m.
Zeros of the Riemann Zeta-Function, Complex Dimensions of Fractal Strings, and Geometric and Spectral Oscillations .
Michel L. Lapidus*, University of California, Riverside
Machiel van Frankenhuysen, University of California, Riverside
□ 3:00 p.m.
Approximately self-similar measures and their zeta-functions.
Gabor Elek*, Mathematical Institute of the Hungarian Academy of Sciences
Michel L Lapidus, University of California at Riverside
□ 3:30 p.m.
Small Eigenvalues and Hausdorff Dimension of Sequences of Hyperbolic 3-Manifolds.
Carol E Fan*, Pepperdine University
Jay Jorgenson, Oklahoma State University
□ 4:00 p.m.
Discussion/open problems.
• Saturday January 16, 1999, 2:15 p.m.-3:05 p.m.
AMS Invited Address
Undercompressive shocks in thin film flow.
Andrea L. Bertozzi*, Duke University
• Saturday January 16, 1999, 2:30 p.m.-4:10 p.m.
MAA Special Presentation
Mobilization to support higher achievement in mathematics.
Linda P. Rosen, U.S. Department of Education
• Saturday January 16, 1999, 2:30 p.m.-4:00 p.m.
Mathematical Sciences Education Board Panel Discussion
Teacher preparation and professional development.
Bradford R. Findell, Mathematical Sciences Education Board
• Saturday January 16, 1999, 2:40 p.m.-3:30 p.m.
ASL Invited Address
Club sets and large cardinals.
William J. Mitchell*, University of Florida
MAA Online Inquiries: meet@ams.org
Math Forum Discussions
Topic: Handling branch cuts in trig functions
Replies: 9 Last Post: Mar 26, 2013 4:54 PM
Re: Handling branch cuts in trig functions
Posted: Mar 26, 2013 1:17 AM
this is still nonsense.
-3 is a square root of 9, whether the 9 was produced by squaring 3 or
squaring -3.
-x is a square root of x^2 whether the x^2 was produced by squaring x or -x.
It doesn't matter whether x is positive or negative.
On 3/25/2013 5:52 AM, G. A. Edgar wrote:
> In article <kimoma$hru$1@speranza.aioe.org>, Nasser M. Abbasi
> <nma@12000.org> wrote:
>> But I am using Maple 17?
>> -----------------------------------
>> ans:=simplify(sqrt(sec(x)^2)) assuming x::positive;
>> 1
>> --------
>> |cos(x)|
this is wrong; see below
>> simplify(abs(sec(x))- ans);
>> 0
well, this should be zero.
>> -------------------------------------
>> Unless x::positive implies x::real (since positive does
>> not apply to complex numbers). Is this what you meant?
> Yes, positive implies real. You will also get that result assuming x
> is negative, or assuming x is an integer, and so on. Not only on the
> reals, but also on any subset of the reals we have sqrt(x^2) = abs(x) .
If you visualize f(z)=sqrt(z^2) in the complex plane, you can specialize
it for real z and see if it corresponds to abs(z).
>> So Maxima was wrong then:
>> sqrt(sec(x)^2);
>> |sec(x)|
>> No assumptions!
Yes, this is wrong. The issue, at its core, is that computer algebra
systems are not programmed to deal with multiple-valued objects
in a satisfactory way.
> We cannot tell whether Maxima is wrong unless we know whether Maxima
> assumes x is real (when you do not tell it). Maple assumes x is
> complex, as was said. Perhaps the documentation for Maxima tells you
> about this?
> sec(1+i) is about .4983370306+.5910838417*i,
> and the square-root of the square of that is itself, not its absolute
> value. (Assuming principal branch.)
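Edgar's numerical point can be reproduced with Python's standard `cmath` module — a quick illustrative check, independent of any CAS:

```python
import cmath

z = 1 + 1j
sec_z = 1 / cmath.cos(z)            # sec(1+i) ≈ 0.4983370306 + 0.5910838417i

root = cmath.sqrt(sec_z ** 2)       # principal square root of sec(z)^2

# For this z the principal root recovers sec(z) itself, not |sec(z)|:
print(root)                         # ≈ (0.4983+0.5911j)
print(abs(sec_z))                   # ≈ 0.7731 -- a real number, clearly different
```

So sqrt(w^2) = |w| is a real-axis identity; off the real axis the principal branch can return w itself, as it does for this z.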
Date Subject Author
3/24/13 Handling branch cuts in trig functions Nasser Abbasi
3/24/13 Re: Handling branch cuts in trig functions G. A. Edgar
3/24/13 Re: Handling branch cuts in trig functions Nasser Abbasi
3/24/13 Re: Handling branch cuts in trig functions Richard Fateman
3/24/13 Re: Handling branch cuts in trig functions Axel Vogt
3/25/13 Re: Handling branch cuts in trig functions G. A. Edgar
3/26/13 Re: Handling branch cuts in trig functions Richard Fateman
3/26/13 Re: Handling branch cuts in trig functions Axel Vogt
3/24/13 Re: Handling branch cuts in trig functions clicliclic@freenet.de
3/24/13 Re: Handling branch cuts in trig functions Axel Vogt
Rate of change
June 3rd 2009, 06:49 AM
Rate of change
This is one of two questions I got wrong on my exam...I would like if you could show me the right way of doing these for future use. The teacher put little notes on the assignments at the bottom.
The manager of a company that produces graphing calculators determines that when x thousand calculators are produced, they will all be sold when the price is p(x) = 1,000/(0.3x^2 + 8) dollars per calculator.
A. At what rate is demand p(x) changing with respect to the level of production x when 3,000 (x = 3) calculators are produced?
B. The revenue derived from the sale of x thousand calculators is R(x)=xp(x) thousand dollars. At what rate is revenue changing when 3,000 calculators are produced? Is revenue increasing or
decreasing at this level of production?
a) dp/dx = (1000/0.3)(-2x^-3) = -2000/(0.3x^3)
When x = 3, this rate = -2000/(0.3 * 3^3) = -2000/(0.3 * 27) = -246.9
b) R(x) = x p(x)
dR/dx = x dp/dx + p
dR/dx = x dp/dx + 1,000/0.3x^2+8
when x = 3, from part a, dp/dx = -246.9
So, at x = 3,
dR/dx = 3 * (-246.9) + 1,000/(0.3*9) + 8 = -362.3
dR/dx = -362.3 dollars/calculator
Revenue is decreasing.
**should have used quotient rule to find derivative**
I tried to post the attachment but it said invalid file.
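As a numerical footnote to the teacher's comment — assuming the intended grouping is p(x) = 1000/(0.3x^2 + 8), which is what the quotient-rule remark implies — the correct derivative can be checked in a few lines of Python (function names here are mine, purely for illustration):

```python
def p(x):
    """Price in dollars per calculator when x thousand calculators are produced."""
    return 1000 / (0.3 * x**2 + 8)

def dp(x):
    """Quotient rule: d/dx [1000 / (0.3x^2 + 8)] = -1000 * 0.6x / (0.3x^2 + 8)^2."""
    return -600 * x / (0.3 * x**2 + 8) ** 2

x = 3
# Finite-difference check that the quotient-rule formula is right:
h = 1e-6
assert abs((p(x + h) - p(x - h)) / (2 * h) - dp(x)) < 1e-4

print(dp(x))             # ≈ -15.72 (not -246.9, which came from misreading the grouping)
print(p(x) + x * dp(x))  # R'(3) by the product rule, ≈ 46.29 > 0
```

With the denominator grouped correctly, R'(3) comes out positive, so revenue is actually increasing at that production level.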
Exercise sheet 7
Exercises and solutions in LaTeX
Resource details
Added By: Mathbank
Added On: 22 Apr 2009 13:16
Tags: Exercise, prime numbers , prime numbers Legendre symbol, Prime Numbers, factors, quadratics, prime numbers primitive roots, Gauss' lemma, Number Theory, mathbank, Quadratic and other
Forms, Legendre symbol
Permissions: World
Link: http://www.edshare.soton.ac.uk/2264/
9 files in this resource
The Mathematics of Huging my great Niece Jordan
I have already blogged about (trying to) teach my Nephew Jason math
and my Great Nephew Justin math
. Now it's my Great Niece Jordan's turn.
I was at a dinner with relatives including my 12 year old great niece Jordan. There were 10 people at the dinner.
BILL: Jordan, if everyone at this table hugged everyone else, how many hugs would there be?
JORDAN: If I get it right will you give me a hug?
BILL: I'll give you a hug in any case.
JORDAN: Okay. 10 times 10... so 100.
BILL: Can you hug yourself.
JORDAN: Sure (she then hugs herself).
BILL: For this problem lets assume you cannot hug yourself. Then how many.
JORDAN: Oh, that changes things. Its 10 times 9... so 90.
BILL: If I hug you and then you hug me, does that count as one hug or two?
JORDAN: Oh, that changes things. How do you do it?
BILL: Your answer of 90 counted BILL-HUGS-JORDAN and also JORDAN-HUGS-BILL. The same is true for every pair. So every pair was counted twice.
JORDAN: So... is the answer (9 times 10)/2 ... 45 ?
BILL: YES! Great. (They hug.)
JORDAN: You're not just a great uncle, you're an AWESOME Uncle!
BILL: And you're an AWESOME Niece!
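(An aside for readers: the answer generalizes — with n people the count is n(n−1)/2, i.e. "n choose 2". A two-line Python check:)

```python
from math import comb

def hugs(n):
    """Each unordered pair hugs exactly once: n*(n-1)//2 = C(n, 2)."""
    return n * (n - 1) // 2

print(hugs(10))                    # 45 -- Jordan's answer
assert hugs(10) == comb(10, 2) == 45
```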
8 comments:
1. Would you say that combinatorics, among other fields of mathematics, is the easiest way to get children in to learning math?
2. YES I would say that since combinatorics does not need prior knowledge and the problems can be expressed in ways children can understand them.
This scales up to High School Students
and Ramsey Theory as well.
3. AWW. That is such a sweet way of teaching math! I love it.
4. If you teach her basic math skills, but not basic spelling skills, then she too might one day look rather perverse for "Huging" her niece instead of "Hugging" her.
5. "huging" ? is that a spello garro ?
6. I 10-choose-2 you, Pikachu...
But seriously, I really like this. First you show kids that a specific computational problem is neat and then you can show how the techniques of mathematics enable one to scale up. Beats the hell
out of just producing a formula and saying "this is how you apply it."
7. Cool stuff. Now ask her if she can 4-color a 17 by 17 grid. The world is waiting.
8. That is a great post, Sir. You really outdone yourself.
Find closest number
Feb 23rd, 2012, 09:05 PM #1
Thread Starter
Lively Member
Join Date
Dec 2008
Find closest number
I am making a program that deals with basic algebraic functions. The following function is meant to calculate the value of x given a certain value of y. The problem is that the answer can't be
exact, so I need to figure out how to find the closest number. I thought rounding would work, but it doesn't.
Private Sub cmdX_Click()
Dim math As New clsMathParser
Dim x As Single
If lstEqu.Text = vbNullString Then
MsgBox "You need to select an equation from the list.", vbCritical
Exit Sub
End If
math.StoreExpression lstEqu.Text
For x = xmin To xmax Step 0.001
If Round(math.Eval1(x), 3) = Val(txtY) Then
txtX = x
Exit For
End If
Next x
End Sub
Re: Find closest number
Seems like an odd way to go about it? Normally if you know the value of Y then you would plug Y into your equation and the result would be the value of X. It seems that stepping up the value of x
in a loop and testing to see if it is equal to Y is just guessing.
Re: Find closest number
How do you know that the range of X values will give a result which will satisfy the equation for a given value of Y ?
eg take the equation Y = (X / 2) -10
If you wanted to find X for Y = 20 and your X range was 0 to 10 you'd never get a result.
You could, instead of rounding, check if the calculated value of Y was within a given tolerance of the required Y
Dim math As New clsMathParser
Dim x As Single
Dim y As Single
For x = xmin To xmax Step 0.001
y = math.Eval1(x)
If y >= CSng(txtY.Text) - 0.0005 And y <= CSng(txtY.Text) + 0.0005 Then
txtX.Text = CStr(x)
Exit For
End If
Next x
in the above you will get a result if the calculated value of y is inclusively within + or - 0.0005 of the value desired.
Last edited by Doogle; Feb 24th, 2012 at 12:29 AM. Reason: Changed the example equation
Re: Find closest number
Seems like an odd way to go about it? Normally if you know the value of Y then you would plug Y into your equation and the result would be the value of X.
Yeah that's how a person might do it, but to program it that way would be very hard because instead of having the x value which is easy to convert the right side of the equation to polish
notation and then evaluate, you have the y value, so it seems easier to just brute force it. (Unless maybe I'm missing something?)
How do you know that the range of X values will give a result which will satisfy the equation for a given value of Y ?
I don't, but it's a graphing utility so the user is meant to enter the xmin and xmax correctly lol.
Whenever I enter zero for y, I get -10 using your example with any equation and I can't figure out why.
Re: Find closest number
I got it to work. I forgot to add the line math.StoreExpression and I also had to reduce the accuracy to 2 decimal places because it was giving me one off answers fairly often with three and I
have no idea why.
math.StoreExpression lstEqu.Text
For X = xmin To xmax Step 0.01
Y = math.Eval1(X)
If Y >= CSng(txtY.Text) - 0.005 And Y <= CSng(txtY.Text) + 0.005 Then
txtX.Text = CStr(Round(X, 2))
Exit For
End If
Next X
Re: Find closest number
You may wish to try FormatNumber () and lock it into a certain number of decimals. For example:
Y = 355 / 113
MsgBox FormatNumber(Y, 3)
Note the incredible accuracy of using 355 / 113 to approximate Pi.
Doctor Ed
Re: Find closest number
It still doesn't work how I want. Gives me 0.001 off and if I just add 0.001 most of the time it works, but when checking higher numbers it gives 0.001 too much. I really need this to work. I am
going to attach the whole program so someone can test it. The Get X button is the problem. To test it, add a linear equation (e.g. 4x-6) and then press Get X and it will give you 1.499 instead of 1.5.
I might even consider paying someone a small amount through paypal if they can figure this out.
Re: Find closest number
Your fundamental problem is that you are rounding to the same number of significant figures that you are performing your computations with. However, since rounding takes place at one significant
figure higher than that, this means your displayed solution will be impacted by rounding error. To get around that, you merely need to either round to one digit less, or operate at one
significant digit more. I assume you can't reduce your displayed digit, so that makes the decision easy:
txtX = vbNullString
math.StoreExpression lstEqu.Text
For X = xmin To xmax Step 0.0001
Y = math.Eval1(X)
If Y >= CSng(txtY) - 0.0005 And Y <= CSng(txtY) + 0.0005 Then
txtX = CStr(Round(X, 3))
Exit For
End If
Next X
Unfortunately, this adds 10 times as many computations. There are a few ways to address this. Notably, you can clean up the code a little pit by minimizing the processing that take place inside
the For loop:
Dim minY As Single
Dim maxY As Single
minY = CSng(txtY) - 0.0005
maxY = minY + 0.001
txtX = vbNullString
math.StoreExpression lstEqu.Text
For X = xmin To xmax Step 0.0001
Y = math.Eval1(X)
If Y >= minY And Y <= maxY Then
txtX = CStr(Round(X, 3))
Exit For
End If
Next X
But if you really want to make it run faster you need to redesign your approach entirely to use things like binary searches or simulated annealing, and you may need to customize this stuff for
non-linear functions.
Re: Find closest number
It doesn't work if I remove the exit for (which I would need to for problems with multiple answers). It gives me the same answer twice.
Dim minY As Single
Dim maxY As Single
minY = CSng(txtY) - 0.0005
maxY = minY + 0.001
'txtX = vbNullString
txtInfo = vbNullString
math.StoreExpression lstEqu.Text
For x = xmin To xmax Step 0.0001
y = math.Eval1(x)
If y >= minY And y <= maxY Then
'txtX = CStr(Round(x, 3))
txtInfo = txtInfo & Round(y, 4) & vbTab & Round(x, 3) & vbCrLf
End If
Next x
Any ideas? Would I need to change to a binary search to make this work?
Re: Find closest number
Of course it gives you the same answer twice. That is a direct consequence of your algorithm. The most obvious way to get around this is to compare rounded results and if they are equal to one
another, discard the second result and keep searching. However, you're going to have to figure out how to handle a bevy of problems. For example, take the following quadrilateral equations in
which you want to solve for x when y = 0:
y = x^2 + 1
If y = 0, x = i (the imaginary unit). Thus, there are no real solutions (give that your algorithm is only searching for roots between -10 and +10, this likely isn't a big deal).
y = x^2 + 2x + 1 = (x + 1)^2
Both roots of this quadrilateral are -1. Thus, if you discard redundant results as suggested above, you're going to skip over a root (depending on what you are actually trying to do, this also
might not be a problem).
y = x^2 - 3x + 2.2499999999 = (x - 1.49999)(x - 1.50001)
In this case there are two separate roots, x = 1.49999 and x = 1.50001. However, the difference between the two roots (0.00002) has more significant figures than your algorithm works with, and
thus the differences between them will be rounded and your algorithm will simply return 1.5 for the first root and discard remaining results.
Frankly, I think it's a bit of a fool's errand to try to program for all possibilities, even on something as seemingly straightforward as this. I suggest you simply make a design decision as to
what functionality is important and accept that you will not be able to produce perfect results for all situations.
Re: Find closest number
I found someone who was working on a similar program and I emailed him and asked how he did it. He said that he just wrote an algorithm that keeps dividing by two and checking if greater or less
than to determine the answer. This seems a lot simpler and I think it will give me better results, though it may take more time to compute. Do you think I should use this method?
Don't know how I'd make it work with multiple answers though.
Last edited by veebee123; Mar 10th, 2012 at 03:16 PM.
Re: Find closest number
the process for solving your equation is called the newton-raphson method
you simply enter any x value into the derivative of your target equation and feed the output back into the equation until the input and output are the same!
there is a problem with NRM in that it can perform some perfect oscillations and so not resolve the equation.
for that reason it is worthwhile capturing the last pair of outputs to ensure you have not started oscillating.
There is a paper on this problem, I wrote, that is now taught at OU
here to help
Re: Find closest number
you simply enter any x value into the derivative of your target equation and feed the output back into the equation until the input and output are the same!
Thanks for the help, but if I use this method, then I also have to write code to find the derivative first.
Anyone know if I could make the divide by 2 method work with multiple answers?
Bellaire, TX Trigonometry Tutor
Find a Bellaire, TX Trigonometry Tutor
I have been a private math tutor for over ten (10) years and am a certified secondary math instructor in the state of Texas. I have taught middle and high-school math for over ten (10) years. I am
available to travel all over the greater Houston area, including as far south as Pearland, as far north as Spring, as far west as Katy and as far east as the Galena Park/Pasadena area.
9 Subjects: including trigonometry, calculus, geometry, algebra 1
Your progress is what is most important in all this. I am just here to help. I can help in the math related fields.
17 Subjects: including trigonometry, calculus, physics, geometry
...I occasionally assisted students with their English and Biology subjects, but my expertise in the physical sciences was more sought-after. I aim to explain the subject in a way that the
student can understand. Not every student thinks in the same way, so I explain the subject in a variety of ways.
41 Subjects: including trigonometry, chemistry, English, calculus
...While in college, I worked for my professors and tutored college students in Calculus I, College Math, and Geometry. My experience as a teacher taught me that every child does not learn math
the same way. I used a variety of hands-on learning experiences in my classroom to make the math comprehension easier for the diverse student population I taught.
24 Subjects: including trigonometry, calculus, statistics, geometry
...Thomas High School. About me: I started tutoring in college, and now that I am out of school, I continue to do what I love, which is to help struggling students overcome their academic
difficulties and to help all students develop the practical learning skills they need to be successful in Life...
22 Subjects: including trigonometry, chemistry, calculus, physics
Related Bellaire, TX Tutors
Bellaire, TX Accounting Tutors
Bellaire, TX ACT Tutors
Bellaire, TX Algebra Tutors
Bellaire, TX Algebra 2 Tutors
Bellaire, TX Calculus Tutors
Bellaire, TX Geometry Tutors
Bellaire, TX Math Tutors
Bellaire, TX Prealgebra Tutors
Bellaire, TX Precalculus Tutors
Bellaire, TX SAT Tutors
Bellaire, TX SAT Math Tutors
Bellaire, TX Science Tutors
Bellaire, TX Statistics Tutors
Bellaire, TX Trigonometry Tutors
Nearby Cities With trigonometry Tutor
Arcola, TX trigonometry Tutors
Brookside Village, TX trigonometry Tutors
Bunker Hill Village, TX trigonometry Tutors
Greenway Plaza, TX trigonometry Tutors
Hedwig Village, TX trigonometry Tutors
Highlands, TX trigonometry Tutors
Hilshire Village, TX trigonometry Tutors
Hunters Creek Village, TX trigonometry Tutors
Jacinto City, TX trigonometry Tutors
Meadows Place, TX trigonometry Tutors
Piney Point Village, TX trigonometry Tutors
Southside Place, TX trigonometry Tutors
Spring Valley, TX trigonometry Tutors
Thompsons trigonometry Tutors
West University Place, TX trigonometry Tutors
Data Intelligence and Analytics Resources
This page contains links to various resources available throughout our network, for analytics practitioners. It is an attempt to add structure to our content. Time permitting, we will also add tags
to all recent or great postings for easier navigation.
You might want to bookmark this page, as it will be regularly updated.
1. General Resources
2. Weekly Digests, and articles from top news outlets
3. Big Data
4. Visualization
5. Best and Worst of Data Science
6. New Analytics Start-up Ideas
7. Rants about Healthcare, Education, etc.
8. Career Stuff, Training, Salary Surveys
9. Miscellaneous
It may seem, after considering cubic systems, that any lattice plane (hkl) has a normal direction [hkl]. This is not always the case, as directions in a crystal are written in terms of the lattice vectors, which are not necessarily orthogonal, or of the same magnitude. A simple example is the (100) plane of a hexagonal system, where the direction [100] actually lies at 120° (or 60°) to the plane. The normal to the (100) plane in this case is [210].
Weiss Zone Law
The Weiss zone law states that:
If the direction [UVW] lies in the plane (hkl), then:
hU + kV + lW = 0
In a cubic system this is exactly analogous to taking the scalar product of the direction and the plane normal: if they are perpendicular, the angle between them, θ, is 90°, so cos θ = 0, and the direction lies in the plane. Indeed, in a cubic system, the scalar product can be used to determine the angle between a direction and a plane.
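That scalar-product calculation can be sketched as follows (cubic system only, since it treats the lattice basis as orthonormal; the indices below are arbitrary examples):

```python
import math

def angle_deg(u, v):
    """Angle in degrees between directions u and v in a cubic crystal,
    where the lattice basis is orthonormal so the plain dot product applies."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u) * sum(b * b for b in v))
    return math.degrees(math.acos(dot / norm))

# In a cubic system the normal to (111) is [111], so the angle between
# the direction [110] and the (111) plane normal is:
print(round(angle_deg((1, 1, 0), (1, 1, 1)), 2))   # 35.26

# A direction perpendicular to the normal, e.g. [1 -1 0], lies in the plane:
print(round(angle_deg((1, -1, 0), (1, 1, 1)), 2))  # 90.0
```

The angle between the direction and the plane itself is the complement of the angle to the normal.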
However, the Weiss zone law is more general, and can be shown to work for all crystal systems, to determine if a direction lies in a plane.
From the Weiss zone law the following rule can be derived:
The direction, [UVW], of the intersection of (h₁k₁l₁) and (h₂k₂l₂) is given by:
U = k₁l₂ − k₂l₁
V = l₁h₂ − l₂h₁
W = h₁k₂ − h₂k₁
As it is derived from the Weiss zone law, this relation applies to all crystal systems, including those that are not orthogonal.
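The zone law and the intersection rule can be checked with a short script (a sketch; the plane indices used are arbitrary examples, and since only the indices are involved, no metric information about the crystal system is needed):

```python
def in_plane(direction, plane):
    """Weiss zone law: [UVW] lies in (hkl) iff hU + kV + lW = 0."""
    U, V, W = direction
    h, k, l = plane
    return h * U + k * V + l * W == 0

def intersection(plane1, plane2):
    """Direction [UVW] of the line of intersection of (h1k1l1) and (h2k2l2)."""
    h1, k1, l1 = plane1
    h2, k2, l2 = plane2
    return (k1 * l2 - k2 * l1,
            l1 * h2 - l2 * h1,
            h1 * k2 - h2 * k1)

# The intersection of (111) and (100) must lie in both planes,
# which the zone law confirms:
d = intersection((1, 1, 1), (1, 0, 0))
print(d)                              # (0, 1, -1)
print(in_plane(d, (1, 1, 1)))         # True
print(in_plane(d, (1, 0, 0)))         # True
```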
infinite number of terms from a sequence in a sub-interval
I have come across the following argument, which seems wrong to me, in a larger proof (Theorem 4 on page 9 of the document available at
). I would appreciate it if someone can shed light on why this is true.
The argument is that given a sequence $a_k$ of points in [a,b], we can say that a sub-interval of [a,b] exists such that it is smaller than some value $g<b-a$ and contains an infinite number of terms
from $a_k$.
I disagree with the above statement because let's say that the sequence $a_k$ always returns a constant value, say b. Then the above statement doesn't hold.
Alvin, TX Geometry Tutor
Find an Alvin, TX Geometry Tutor
...My strongest areas are chemistry and math up through precalculus. I will also tutor freshman/sophomore college students in these areas. My tutoring method tends to be old school and emphasizes
learning by doing.
11 Subjects: including geometry, chemistry, English, physics
Hi, my name's Brian, I have a lot of experience tutoring and I'm fun and easy to work with. I can guarantee that I will help you get an A in your course or ace that big test you're preparing for.
I am a Trinity University graduate and I have over 4 years of tutoring experience.
38 Subjects: including geometry, English, calculus, reading
...I am a graduate student at the University of Houston and a mathematics tutor at San Jacinto College. I have 3 to 4 years of experience in mathematics. I have many interests in mathematics.
16 Subjects: including geometry, calculus, algebra 1, algebra 2
...That being said, there is a certain thrill that I have when I see someone comprehend a subject that was previously foreign to them. I have seen the look regarding subjects from an elementary
level to a collegiate level. Regardless of what subject I am working with someone on, I will strive to make sure the student understands.
38 Subjects: including geometry, reading, physics, chemistry
...I have also been the Geometry team leader for eleven years while at Willowridge and also taught Honors/GT Geometry. I have also taught Pre-Algebra, Algebra and TAKS math and have done a lot of
tutoring in all of the subjects that I have tutored. I have also taught middle school mathematics before teaching high school.
5 Subjects: including geometry, ASVAB, algebra 1, prealgebra
Related Alvin, TX Tutors
Alvin, TX Accounting Tutors
Alvin, TX ACT Tutors
Alvin, TX Algebra Tutors
Alvin, TX Algebra 2 Tutors
Alvin, TX Calculus Tutors
Alvin, TX Geometry Tutors
Alvin, TX Math Tutors
Alvin, TX Prealgebra Tutors
Alvin, TX Precalculus Tutors
Alvin, TX SAT Tutors
Alvin, TX SAT Math Tutors
Alvin, TX Science Tutors
Alvin, TX Statistics Tutors
Alvin, TX Trigonometry Tutors
Nearby Cities With geometry Tutor
Bacliff geometry Tutors
Dickinson, TX geometry Tutors
El Lago, TX geometry Tutors
Fresno, TX geometry Tutors
Hillcrest, TX geometry Tutors
Hitchcock, TX geometry Tutors
Hunters Creek Village, TX geometry Tutors
Iowa Colony, TX geometry Tutors
La Marque geometry Tutors
Liverpool, TX geometry Tutors
Manvel, TX geometry Tutors
Nassau Bay, TX geometry Tutors
Santa Fe, TX geometry Tutors
Seabrook, TX geometry Tutors
Webster, TX geometry Tutors | {"url":"http://www.purplemath.com/Alvin_TX_Geometry_tutors.php","timestamp":"2014-04-16T16:38:27Z","content_type":null,"content_length":"23845","record_id":"<urn:uuid:60342b1c-2dd9-4938-af96-31c9bf577d12>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00184-ip-10-147-4-33.ec2.internal.warc.gz"} |
JavaScript implements numbers as floating-point values; that is, they can take on decimal values as well as whole-number values.
Basic Use
To make a new number, a simple initialization suffices:
var foo = 0; // or whatever number you want
After you have made your number, you can then modify it as necessary. Numbers can be modified or assigned using the operators defined within JavaScript.
foo = 1; //foo = 1
foo += 2; //foo = 3 (the two gets added on)
foo -= 2; //foo = 1 (the two gets removed)
Number literals define the number value. In particular:
• They appear as a set of digits of varying length.
• Negative literal numbers have a minus sign before the set of digits.
• Floating point literal numbers contain one decimal point, and may optionally use the E notation with the character e.
• An integer literal may be prepended with "0", to indicate that a number is in base-8. (8 and 9 are not octal digits, and if found, cause the integer to be read in the normal base-10).
• An integer literal may also be found with prefixed "0x", to indicate a hexadecimal number.
The Math Object
Unlike strings, arrays, and dates, numbers aren't objects, so they don't contain any methods that can be accessed with the normal dot notation. Instead, the Math object provides the usual numeric functions and constants as methods and properties. The methods and properties of the Math object are referenced using the dot operator in the usual way, for example:
var varOne = Math.ceil(8.5);
var varPi = Math.PI;
var sqrt3 = Math.sqrt(3);
Methods
random()
Generates a pseudo-random number between 0 (inclusive) and 1 (exclusive).
var myInt = Math.random();
max(int1, int2)
Returns the highest number from the two numbers passed as arguments.
var myInt = Math.max(8, 9);
document.write(myInt); //9
min(int1, int2)
Returns the lowest number from the two numbers passed as arguments.
var myInt = Math.min(8, 9);
document.write(myInt); //8
floor()
Returns the greatest integer less than or equal to the number passed as an argument.
var myInt = Math.floor(90.8);
document.write(myInt); //90;
ceil()
Returns the least integer greater than or equal to the number passed as an argument.
var myInt = Math.ceil(90.8);
document.write(myInt); //91;
round()
Returns the closest integer to the number passed as an argument.
var myInt = Math.round(90.8);
document.write(myInt); //91;
Properties
The properties of the Math object are the most commonly used constants.
• PI: Returns the value of pi.
• E: Returns the constant e.
• SQRT2: Returns the square root of 2.
• LN10: Returns the natural logarithm of 10.
• LN2: Returns the natural logarithm of 2.
Further reading
Last modified on 7 March 2014, at 19:05 | {"url":"https://en.m.wikibooks.org/wiki/JavaScript/Numbers","timestamp":"2014-04-21T09:39:40Z","content_type":null,"content_length":"25304","record_id":"<urn:uuid:585c1412-7918-41cc-86c0-c06df9a3bc18>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00450-ip-10-147-4-33.ec2.internal.warc.gz"} |
Evaluating forecast uncertainty due to errors in estimated coefficients: empirical comparison of alternative methods
Bianchi, Carlo and Calzolari, Giorgio (1982): Evaluating forecast uncertainty due to errors in estimated coefficients: empirical comparison of alternative methods. Published in: Evaluating the
reliability of macro-economic models No. Ed. by G.C.Chow and P.Corsi, John Wiley & Sons, Ltd. (1982): pp. 251-277.
This paper is concerned with the contribution to forecast errors of errors in the estimated structural coefficients of a macro-econometric model (simultaneous equations). Its main purpose is to
perform, on several "real-world" models, an empirical comparison of alternative techniques available in the literature for this purpose.
Item Type: MPRA Paper
Original Title: Evaluating forecast uncertainty due to errors in estimated coefficients: empirical comparison of alternative methods
Language: English
Keywords: Forecast errors; coefficient estimation errors; Monte Carlo; simultaneous equation models
C - Mathematical and Quantitative Methods > C5 - Econometric Modeling > C53 - Forecasting and Prediction Methods; Simulation Methods
Subjects: C - Mathematical and Quantitative Methods > C5 - Econometric Modeling > C52 - Model Evaluation, Validation, and Selection
C - Mathematical and Quantitative Methods > C3 - Multiple or Simultaneous Equation Models; Multiple Variables > C30 - General
Item ID: 22559
Depositing User: Giorgio Calzolari
Date Deposited: 28. May 2010 09:39
Last Modified: 11. Feb 2013 16:29
Amemiya, T. (1977): "The Maximum Likelihood and the Nonlinear Three-Stage Least Squares in the General Nonlinear Simultaneous Equation Model", Econometrica 45, 955-968.
Bianchi, C., and G. Calzolari (1980): "The One-Period Forecast Errors in Nonlinear Econometric Models", International Economic Review 21, 201-208.
Bianchi, C., G. Calzolari, and P. Corsi (1979): ``A Monte Carlo Approach to Compute the Asymptotic Standard Errors of Dynamic Multipliers'', Economics Letters 2, 161-164.
Bianchi, C., G. Calzolari, and P. Corsi (1981): ``Standard Errors of Multipliers and Forecasts from Structural Coefficients with Block-Diagonal Covariance Matrix'', in Dynamic Modelling
and Control of National Economies (IFAC), ed. by J. M. L. Janssen, L. F. Pau, and A. J. Straszak. Oxford: Pergamon Press, 311-316.
Brundy, J. M., and D. W. Jorgenson (1971): "Efficient Estimation of Simultaneous Equations by Instrumental Variables", The Review of Economics and Statistics 53, 207-224.
Christ, C. F. (1966): Econometric Models and Methods. New York: John Wiley & Sons, Inc.
Cooper, J. P., and S. Fischer (1974): "Monetary and Fiscal Policy in the Fully Stochastic St. Louis Econometric Model", Journal of Money, Credit, and Banking 6, 1-22.
Denton, F. T., and E. H. Oksanen (1972): "A Multi - Country Analysis of the Effects of Data Revisions on an Econometric Model", Journal of the American Statistical Association 67,
Dhrymes, P. J. (1970): Econometrics: Statistical Foundations and Applications. New York: Harper & Row.
Fair, R. C. (1980): "Estimating the Expected Predictive Accuracy of Econometric Models", International Economic Review 21, 355-378.
Gallant, A. R. (1977): "Three-Stage Least-Squares Estimation for a System of Simultaneous, Nonlinear, Implicit Equations", Journal of Econometrics 5, 71-88.
Goldberger, A. S., A. L. Nagar and H. S. Odeh (1961): "The Covariance Matrices of Reduced-Form Coefficients and of Forecasts for a Structural Econometric Model", Econometrica 29,
Haitovsky, Y., and N. Wallace (1972): "A Study of Discretionary and Nondiscretionary Monetary and Fiscal Policies in the Context of Stochastic Macroeconometric Models", in The Business
Cycle Today, ed. by V. Zarnowitz. New York: NBER, 261-309.
Hatanaka,M. (1978): On the efficient estimation methods for the macro-economic models nonlinear in variables, Journal of Econometrics 8, 323-356.
Klein, L. R. (1950): Economic Fluctuations in the United States, 1921-1941. New York: John Wiley & Sons, Inc., Cowles Commission Monograph No. 11.
Klein, L. R. (1969): "Estimation of Interdependent Systems in Macroeconometrics", Econometrica 37, 171-192.
Mariano,R.S. (1980): "Analytical small sample distribution theory in econometrics: The simultaneous equations case", Universite Catholique de Louvain, Center for Operations Research &
Econometrics, discussion paper.
McCarthy, M. D. (1972,a): "Some Notes on the Generation of Pseudo-Structural Errors for Use in Stochastic Simulation Studies", in Econometric Models of Cyclical Behavior, ed. by
B.G.Hickman. New York: NBER, Studies in Income and Wealth No.36, 185-191.
McCarthy,M.D. (1972,b): A note on the forecasting properties of two-stage least squares restricted reduced forms: The finite sample case, International Economic Review 13, 757-761.
Nagar, A. L. (1969): "Stochastic Simulation of the Brookings Econometric Model", in The Brookings Model: Some Further Results, ed. by J. S. Duesenberry, G. Fromm, L. R. Klein, and E.
Kuh. Amsterdam: North Holland, 425-456.
Nissen, D. H. (1968): "A Note on the Variance of a Matrix", Econometrica 36, 603-604.
Oliver, J. (1980): "An Algorithm for Numerical Differentiation of a Function of One Real Variable", Journal of Computational and Applied Mathematics 6, 145-153.
Rao, C. R. (1973): Linear Statistical Inference and its Applications. New York: John Wiley.
Sartori, F. (1978): "Caratteristiche e Struttura del Modello", in Un Modello Econometrico dell'Economia Italiana; Caratteristiche e Impiego. Roma: Ispequaderni 1, 9-36.
Schink, G. R. (1971): "Small Sample Estimates of the Variance Covariance Matrix of Forecast Error for Large Econometric Models: The Stochastic Simulation Technique". University of Pennsylvania: Ph.D. Dissertation.
Schmidt, P. (1973): "The Asymptotic Distribution of Dynamic Multipliers", Econometrica 41, 161-164.
Schmidt, P. (1974): "The Asymptotic Distribution of Forecasts in the Dynamic Simulation of an Econometric Model", Econometrica 42, 303-309.
Schmidt, P. (1976): "Econometrics", New York: Dekker.
Theil, H. (1971): Principles of Econometrics. New York: John Wiley & Sons, Inc.
URI: http://mpra.ub.uni-muenchen.de/id/eprint/22559 | {"url":"http://mpra.ub.uni-muenchen.de/22559/","timestamp":"2014-04-19T11:09:12Z","content_type":null,"content_length":"28232","record_id":"<urn:uuid:ed102c2d-fa4e-40f0-a21c-04a2681dc50b>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00504-ip-10-147-4-33.ec2.internal.warc.gz"} |
Systems and Methods for Phase Compensated Harmonic Sensing in Fly Height Control
Patent application title: Systems and Methods for Phase Compensated Harmonic Sensing in Fly Height Control
Various embodiments of the present invention provide systems and methods for phase compensated harmonic sensing. For example, a circuit for harmonics calculation is disclosed that includes a harmonic calculation circuit, a phase difference estimation circuit, and a phase offset compensation circuit. The harmonic calculation circuit is operable to calculate a first harmonic based on a periodic data pattern and a second harmonic based on the periodic data pattern. The phase difference estimation circuit is operable to calculate a phase difference between the first harmonic and the second harmonic. The phase offset compensation circuit is operable to align the second harmonic with the first harmonic to yield an aligned harmonic.
A harmonics calculation circuit, the circuit comprising: a harmonic calculation circuit operable to calculate a first harmonic based on a periodic data pattern and a second harmonic based on the
periodic data pattern; a phase difference estimation circuit operable to calculate a phase difference between the first harmonic and the second harmonic; and a phase offset compensation circuit
operable to align the second harmonic with the first harmonic to yield an aligned harmonic.
The circuit of claim 1, the circuit further comprising: an averaging circuit, wherein the averaging circuit is operable to calculate a harmonics average including at least the first harmonic and the
aligned harmonic.
The circuit of claim 1, wherein the circuit further comprises: a read/write head assembly disposed in relation to a storage medium; and wherein the periodic data pattern is derived by the read/write
head assembly from a user data region of the storage medium.
The circuit of claim 1, wherein the circuit further comprises: a read/write head assembly disposed in relation to a storage medium; and wherein the periodic data pattern is derived by the read/write
head assembly from a servo data region of the storage medium.
The circuit of claim 4, wherein the periodic data pattern is a burst demodulation pattern.
The circuit of claim 1, wherein the first harmonic includes a first imaginary portion and a first real portion, and wherein the second harmonic includes a second imaginary portion and a second real portion.
The circuit of claim 6, wherein the phase difference estimation circuit is operable to calculate a real portion of the phase difference and an imaginary portion of the phase difference using the
first real portion, the first imaginary portion, the second real portion, and the second imaginary portion.
The circuit of claim 7, wherein the phase offset compensation circuit is operable to align the second harmonic with the first harmonic based at least in part on the real portion of the phase difference and the imaginary portion of the phase difference.
A method for harmonics calculation, the method comprising: providing a harmonics calculation circuit; receiving a first data set representing periodic data from a first region of a storage medium;
calculating a first harmonic based on the first data set; receiving a second data set representing the periodic data from a second region of the storage medium; calculating a second harmonic based on
the second data set; and phase aligning the second harmonic and the first harmonic to yield a phase aligned harmonic.
The method of claim 9, wherein phase aligning the second harmonic and the first harmonic includes: calculating a phase difference between the second harmonic and the first harmonic.
The method of claim 10, wherein the method further comprises: calculating an average harmonics value including at least the phase aligned harmonic.
The method of claim 10, wherein the first harmonic includes a first imaginary portion and a first real portion, and wherein the second harmonic includes a second imaginary portion and a second real portion.
The method of claim 12, wherein the phase difference includes an imaginary portion and a real portion computed based on the first imaginary portion, the second imaginary portion, the first real
portion and the second real portion.
The method of claim 13, wherein phase aligning the second harmonic and the first harmonic to yield a phase aligned harmonic is based at least in part on the real and imaginary portions of the phase
The method of claim 9, wherein the harmonics calculation circuit includes: a read/write head assembly disposed in relation to a storage medium; and wherein the first data set is derived by the read/
write head assembly from the first region of the storage medium, and wherein the second data set is derived by the read/write head assembly from the second region of the storage medium.
The method of claim 15, wherein the first region is a first user data region, and wherein the second region is a second user data region.
The method of claim 15, wherein the first region is a first servo data region, and wherein the second region is a second servo data region.
A storage device, the storage device comprising: a storage medium having a first region and a second region; a harmonic calculation circuit operable to receive a first data set corresponding to a
periodic data pattern derived from the first region and a second data set corresponding to the periodic data pattern derived from the second region, and wherein the harmonic calculation circuit is
operable to calculate a first harmonic based on a periodic data pattern and a second harmonic based on the periodic data pattern; a phase difference estimation circuit operable to calculate a phase
difference between the first harmonic and the second harmonic; and a phase offset compensation circuit operable to align the second harmonic with the first harmonic to yield an aligned harmonic.
The storage device of claim 18, the storage device further comprising: an averaging circuit, wherein the averaging circuit is operable to calculate a harmonics average including at least the first
harmonic and the aligned harmonic.
The storage device of claim 18, wherein the first harmonic includes a first imaginary portion and a first real portion; wherein the second harmonic includes a second imaginary portion and a second
real portion; wherein the phase difference estimation circuit is operable to calculate a real portion of the phase difference and an imaginary portion of the phase difference using the first real
portion, first imaginary portion, second real portion and a second imaginary portion; and wherein the phase offset compensation circuit is operable to align the second harmonic with the first
harmonic based at least in part on the real and imaginary portions of the phase difference.
BACKGROUND OF THE INVENTION [0001]
The present inventions are related to systems and methods for transferring information to and from a storage medium, and more particularly to systems and methods for positioning a sensor in relation
to a storage medium.
Various electronic storage media are accessed through use of a read/write head assembly that is positioned in relation to the storage medium. The read/write head assembly is supported by a head
actuator, and is operable to read information from the storage medium and to write information to the storage medium. The distance between the read/write head assembly and the storage medium is
typically referred to as the fly height. Control of the fly height is critical to proper operation of a storage system. In particular, increasing the distance between the read/write head assembly and
the storage medium typically results in an increase in inter symbol interference. Where inter symbol interference becomes unacceptably high, it may become impossible to credibly read the information
originally written to the storage medium. In contrast, a fly height that is too small can result in excess wear on the read/write head assembly and/or a premature crash of the storage device.
In a typical storage device, fly height is set to operate in a predetermined range. During operation, the fly height is periodically measured to assure that it continues to operate in the
predetermined region. A variety of approaches for measuring fly height have been developed including optical interference, spectrum analysis of a read signal wave form, and measuring a pulse width
value of the read signal. Such approaches in general provide a reasonable estimate of fly height, however, they are susceptible to various errors. Such errors require that the predetermined operating
range of the fly height be maintained sufficiently large to account for the various errors. This may result in setting the fly height such that inter-symbol interference is too high undermining the
data recovery performance of the storage system.
Hence, for at least the aforementioned reasons, there exists a need in the art for advanced systems and methods for positioning a sensor in relation to a storage medium.
BRIEF SUMMARY OF THE INVENTION [0005]
The present inventions are related to systems and methods for transferring information to and from a storage medium, and more particularly to systems and methods for positioning a sensor in relation
to a storage medium.
Various embodiments of the present invention provide harmonics calculation circuits that include a harmonic calculation circuit, a phase difference estimation circuit and a phase offset compensation
circuit. The harmonic calculation circuit is operable to calculate a first harmonic based on a periodic data pattern and a second harmonic based on the periodic data pattern. The phase difference
estimation circuit is operable to calculate a phase difference between the first harmonic and the second harmonic. The phase offset compensation circuit is operable to align the second harmonic with the
first harmonic to yield an aligned harmonic. In some instances, the circuit further includes an averaging circuit that is operable to calculate a harmonics average including at least the first
harmonic and the aligned harmonic. In various instances, the circuit further includes a read/write head assembly disposed in relation to a storage medium. In such instances, the periodic data pattern
is derived by the read/write head assembly from a user data region of the storage medium. In other instances, the periodic data pattern is derived by the read/write head assembly from a servo data
region of the storage medium. In one particular case, the periodic data pattern is a burst demodulation pattern.
In some instances of the aforementioned embodiments, the first harmonic includes a first imaginary portion and a first real portion, and the second harmonic includes a second imaginary portion and a
second real portion. In some cases, the phase difference estimation circuit is operable to calculate a real portion of the phase difference and an imaginary portion of the phase difference using the
first real portion, the first imaginary portion, the second real portion, and the second imaginary portion. In particular cases, the phase offset compensation circuit is operable to align the second harmonic with
the first harmonic based at least in part on the real portion of the phase difference and the imaginary portion of the phase difference.
Other embodiments of the present invention provide methods for harmonics calculation. The methods include providing a harmonics calculation circuit; receiving a first data set representing
periodic data from a first region of a storage medium; calculating a first harmonic based on the first data set; receiving a second data set representing the periodic data from a second region of the
storage medium; calculating a second harmonic based on the second data set; and phase aligning the second harmonic and the first harmonic to yield a phase aligned harmonic. In some instances of the
aforementioned embodiments, phase aligning the second harmonic and the first harmonic includes calculating a phase difference between the second harmonic and the first harmonic.
Yet other embodiments of the present invention provide storage devices that include a storage medium, a harmonic calculation circuit, a phase difference estimation circuit, and a phase offset
compensation circuit. The storage medium includes at least a first region and a second region. The harmonic calculation circuit is operable to receive a first data set corresponding to a periodic
data pattern derived from the first region and a second data set corresponding to the periodic data pattern derived from the second region, and to calculate a first harmonic based on a periodic data
pattern and a second harmonic based on the periodic data pattern. The phase difference estimation circuit is operable to calculate a phase difference between the first harmonic and the second
harmonic. The phase offset compensation circuit is operable to align the second harmonic with the first harmonic to yield an aligned harmonic.
This summary provides only a general outline of some embodiments of the invention. Many other objects, features, advantages and other embodiments of the invention will become more fully apparent from
the following detailed description, the appended claims and the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS [0011]
A further understanding of the various embodiments of the present invention may be realized by reference to the figures which are described in remaining portions of the specification. In the figures,
like reference numerals are used throughout several drawings to refer to similar components. In some instances, a sub-label consisting of a lower case letter is associated with a reference numeral to
denote one of multiple similar components. When reference is made to a reference numeral without specification to an existing sub-label, it is intended to refer to all such multiple similar
FIG. 1a depicts the sampling of corresponding periodic waveforms during two different sectors resulting in a phase offset between the samples of the respective periodic waveforms;
FIG. 1b depicts an existing storage device format that may be used in relation to different embodiments of the present invention;
[0014] FIG. 2 depicts a phase compensated harmonics calculation circuit in accordance with one or more embodiments of the present invention;
[0015] FIG. 3 is a flow diagram depicting a method in accordance with some embodiments of the present invention for phase compensated harmonics calculation;
[0016] FIG. 4 depicts a phase compensated harmonics calculation circuit in accordance with one or more embodiments of the present invention relying on periodic patterns derived from servo data;
[0017] FIG. 5 is a flow diagram depicting a method in accordance with some embodiments of the present invention for phase compensated harmonics calculation relying on periodic patterns derived from servo data;
[0018] FIG. 6a depicts a storage device including a read channel including phase compensated harmonic calculation in accordance with one or more embodiments of the present invention; and
[0019] FIG. 6b is a cross sectional view showing the relationship between the disk platter and the read/write head assembly of the storage device of FIG. 6a.
DETAILED DESCRIPTION OF THE INVENTION [0020]
The present inventions are related to systems and methods for transferring information to and from a storage medium, and more particularly to systems and methods for positioning a sensor in relation
to a storage medium.
Turning to FIG. 1a, the sampling of corresponding periodic waveforms 105, 115 during two different sectors is graphically depicted. As shown, the two distinct samplings result in a phase offset
between the samples of the respective periodic waveforms. In particular, an instance 105 of a periodic waveform is shown that has been sampled at five distinct points (S1, S2, S3, S4, S5).
Subsequently, another instance 115 of the periodic waveform is sampled at four points (S1, S2, S3, S4) that correspond to the respective point of the same number at which instance 105 was sampled. As
shown, there is a phase offset 125 between corresponding samples (i.e., S1) of instance 105 and instance 115. This phase offset may occur due to any number of reasons and results in incoherence of
the data samples across multiple instances of a given periodic waveform. Such incoherence compromises harmonic calculations performed across multiple instances of the periodic waveform.
FIG. 1b shows a storage medium 100 with two exemplary tracks 150, 155 indicated as dashed lines. The tracks are segregated by servo data written within wedges 160, 165. These wedges include data and
supporting bit patterns 110 that are used for control and synchronization of the read/write head assembly over a desired location on storage medium 100. In particular, these wedges generally include
a preamble pattern 152 followed by a servo address mark 154 (SAM). Servo address mark 154 is followed by a Gray code 156, and Gray code 156 is followed by burst information 158. It should be noted
that while two tracks and two wedges are shown, hundreds of each would typically be included on a given storage medium. Further, it should be noted that a servo data set may have two or more fields
of burst information. Yet further, it should be noted that different information may be included in the servo fields such as, for example, repeatable run-out information that may appear after burst
information 158. Between the bit patterns 110, a user data region 184 is provided.
In some cases, instance 105 of the periodic waveform occurs during one user data region (e.g., user data region 184) of a track on a storage medium, and instance 115 of the periodic waveform occurs
in a later user data region (e.g., user data region 184) of the same track on the storage medium. In other cases, instance 105 of the periodic waveform occurs during one servo data region (e.g.,
wedges 160, 165) of a track on a storage medium, and instance 115 of the periodic waveform occurs in a later servo data region (e.g., wedges 160, 165) of the same track on the storage medium.
Various embodiments of the present invention provide for estimating the phase offset between corresponding samples across instances of a given periodic waveform. The estimated phase offset is used to
increase the coherence of corresponding samples allowing the use of multiple instances of a periodic waveform in calculating harmonics. Among various advantages, allowing the use of multiple
instances of the periodic waveform may improve noise averaging where relatively short but recurring periodic waveforms are used for harmonics calculation. Where the accuracy of the resulting
harmonics improves, a corresponding accuracy in a fly height calculated from the harmonics is also improved. Based upon the disclosure provided herein, one of ordinary skill in the art will recognize
a variety of other advantages either in addition to or in place of the aforementioned advantage that may be achieved using different embodiments of the present invention.
Turning to
FIG. 2
, a phase compensated harmonics calculation circuit 200 is shown in accordance with one or more embodiments of the present invention. Phase compensated harmonics calculation circuit 200 includes an
analog front end circuit 210. Analog front end circuit 210 may be any analog front end circuit known in the art. In the depicted embodiment, analog front end circuit 210 includes a pre-amplifier
circuit 220, an analog filter circuit 225 and an analog to digital converter circuit 230. Pre-amplifier circuit 220 is electrically coupled to a read/write head assembly 215 that is disposed a fly
height distance 295 from a storage medium 290.
In a read operation, read/write head assembly 215 senses magnetically represented information on medium 290, and converts the information into an electrical signal. The electrical signal is amplified
by pre-amplifier circuit 220. The resulting amplified signal from pre-amplifier circuit 220 is provided to analog filter circuit 225 that provides a filtered output to analog to digital converter
circuit 230. In response, analog to digital converter circuit 230 provides a series of digital samples corresponding to the received analog input. The series of digital samples are provided to a
downstream data detection circuit 235, and to a read channel pattern detection and harmonic calculation circuit 240. In a write operation, a write data output 205 is provided to pre-amplifier circuit
220 that in turn provides an output corresponding to write data output 205 to read/write head assembly 215. Read/write head assembly 215 converts the received signal into magnetic information that is
stored to medium 290.
Read channel pattern detection and harmonic calculation circuit 240 is operable to detect a periodic pattern incorporated in the series of digital samples from analog to digital converter circuit
230. For example, where the periodic pattern is written within user data regions of medium 290, read channel pattern detection and harmonic calculation circuit 240 utilizes various location
information on medium 290 to determine when the periodic pattern has been found. This may include, for example, utilizing servo data to determine when a given user data region has been detected and using
one or more predefined fields in the user data region to indicate the periodic pattern. Based upon the disclosure provided herein, one of ordinary skill in the art will recognize a variety of
approaches for identifying a periodic pattern on medium 290. As another example, the periodic pattern may be one or both of the preamble and burst information within the servo data region. In such a
case, standard servo data processing may be used to detect the periodic pattern. As a particular example, where the periodic pattern is the burst pattern of the servo data, identification of the
preceding preamble and servo address mark may be used to indicate the beginning of the burst pattern. Based upon the disclosure provided herein, one of ordinary skill in the art will recognize a
variety of periodic patterns and identification thereof that may be used in relation to different embodiments of the present invention.
Once the periodic pattern is identified, read channel pattern detection and harmonic calculation circuit 240 performs a harmonic calculation. This calculation may be done consistent with any harmonic
calculation known in the art. For example, the harmonic may be sensed/calculated in accordance with the circuits and processes disclosed in U.S. patent application Ser. No. 12/851,455 entitled
"Systems and Methods for Servo Data Based Harmonics Calculation" and filed by Mathew et al. on Aug. 5, 2010. The entirety of the aforementioned reference is incorporated herein by reference for all
purposes. Based upon the disclosure provided herein, one of ordinary skill in the art will recognize a variety of systems, circuits and methods known in the art that may be used for calculating
harmonics in accordance with different embodiments of the present invention.
The aforementioned harmonic computation results in a real and an imaginary portion as set forth in the following equation for the l-th harmonic of a periodic pattern with period N:

H_l = \sum_{n=0}^{N-1} x[n]c_l[n] - j\sum_{n=0}^{N-1} x[n]s_l[n], where x[n] = \frac{1}{M}\sum_{m=0}^{M-1} y[n+mN].

Here, y[n] denotes M cycles of the samples of the periodic data from a sector before averaging and x[n] denotes one cycle of these samples after averaging. Further, \sum_{n=0}^{N-1} x[n]c_l[n] is the real portion, \sum_{n=0}^{N-1} x[n]s_l[n] is the imaginary portion, l denotes the harmonic index, and c_l[n] and s_l[n] are parameters chosen based upon the periodicity of the periodic pattern.
Where the calculated harmonic is selected as an initial harmonic, a discrete Fourier transform of the harmonic is stored as a reference harmonic in a storage circuit 245. In particular, the aforementioned real and imaginary portions of the calculated harmonic are stored. This stored initial harmonic is considered to have a zero phase offset. The reference harmonic may be the first harmonic calculated after startup, or may be updated periodically. Based on the disclosure provided herein, one of ordinary skill in the art will appreciate a variety of different ways that a
reference harmonic may be selected for use in relation to different embodiments of the present invention.
Where the calculated harmonic is not the initial harmonic, the calculated harmonic is provided to an estimation circuit 250 that estimates a phase difference between the previously stored reference harmonic and the newly calculated harmonic (i.e., the harmonic from the k-th read event or sector). As an example, assume the first and third harmonics are computed as follows:

X[k,l], for k=1, 2, . . . , M and l=1, 3,

where {X_r[k,l], X_i[k,l]} are the real and imaginary parts of the l-th harmonic of the k-th read event. In this case, the estimated phase difference of the l-th harmonic of the k-th read event is calculated in accordance with the following equation:

\Phi[k,l] = \frac{\Theta[k,l]}{|\Theta[k,l]|} = \Phi_r[k,l] + j\Phi_i[k,l], with \Theta[k,l] = \frac{X[k,l]}{X[1,l]}.

In this case,

\Phi_r[k,l] = \frac{X_r[k,l]X_r[1,l] + X_i[k,l]X_i[1,l]}{\beta[k,l]}, \Phi_i[k,l] = \frac{X_i[k,l]X_r[1,l] - X_r[k,l]X_i[1,l]}{\beta[k,l]}, and \beta[k,l] = |X[1,l]||X[k,l]| = \sqrt{X_r^2[1,l]+X_i^2[1,l]}\sqrt{X_r^2[k,l]+X_i^2[k,l]}.
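A compact way to see the estimate performed by estimation circuit 250 is with complex arithmetic. The sketch below is illustrative only; the function name and the complex-number representation are conveniences, not circuit elements.

```python
def phase_factor(X_k, X_1):
    """Unit-magnitude phase factor Phi[k,l] between the current
    harmonic X_k and the stored reference harmonic X_1.

    Phi = Theta/|Theta| with Theta = X_k/X_1, which expands to
    X_k * conj(X_1) / beta with beta = |X_1|*|X_k|, matching the
    component-wise Phi_r, Phi_i expressions above.
    """
    num = X_k * X_1.conjugate()   # real: XrXr+XiXi, imag: XiXr-XrXi
    beta = abs(X_1) * abs(X_k)    # beta[k,l]
    return num / beta
```

By construction |Phi| = 1, so multiplying by its conjugate rotates a harmonic without changing its magnitude.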
The aforementioned estimated phase difference is provided from estimation circuit 250 to a phase offset compensation circuit 260. Phase offset compensation circuit 260 uses the estimated phase
difference to align or rotate the recently calculated harmonics into phase with the initial reference harmonic from storage circuit 245 in accordance with the following equation:
\tilde{X}[k,l] = \Phi^*[k,l]X[k,l] = (\Phi_r[k,l]X_r[k,l] + \Phi_i[k,l]X_i[k,l]) + j(\Phi_r[k,l]X_i[k,l] - \Phi_i[k,l]X_r[k,l]).
Various embodiments of the present invention combine the processes of both estimation circuit 250 and phase offset compensation circuit 260 in a single combination circuit (not shown) that uses the same inputs as described in relation to estimation circuit 250 to yield an aligned output in accordance with the following equation:

\tilde{X}[k,l] = \frac{\Theta^*[k,l]}{|\Theta[k,l]|}X[k,l] = \frac{X[1,l]}{|X[1,l]|}|X[k,l]|.
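The combined estimate-and-compensate step can likewise be sketched in a few lines; again this is an illustration under the same complex-number convention, not the claimed circuitry.

```python
def align(X_k, X_1):
    """Rotate the current harmonic X_k into phase with the reference
    X_1.  Algebraically the result equals (X_1/|X_1|) * |X_k|: the
    magnitude of X_k carried on the phase of the reference, so phase
    offsets between sectors no longer corrupt the accumulation.
    """
    theta = X_k / X_1
    phi = theta / abs(theta)      # unit phase factor Theta/|Theta|
    return phi.conjugate() * X_k
```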
The aforementioned phase aligned harmonic value is provided to a running average circuit 265. Running average circuit 265 averages a number of phase compensated harmonics. Such averaging operates to
reduce noise characteristics of the calculated harmonics. In one particular embodiment of the present invention, harmonic calculations for forty or more regions are averaged together in the running
average. Accumulating the phase aligned harmonics over several read events (i.e., sectors), running average circuit 265 provides an output consistent with the following equation:
\sum_k \tilde{X}[k,l] = \frac{X[1,l]}{|X[1,l]|}\sum_k |X[k,l]| = \frac{X_r[1,l] + jX_i[1,l]}{\sqrt{X_r^2[1,l]+X_i^2[1,l]}}\sum_k \sqrt{X_r^2[k,l]+X_i^2[k,l]}.
The above mentioned output is then divided by the number of constituent
elements (k) to yield an average harmonics output. The average harmonics output is provided as an averaged output 275. Based upon the disclosure provided herein, one of ordinary skill in the art will
recognize other numbers of harmonics that may be included in averaged output 275. Averaged output 275 may be provided to a fly height calculation circuit as is known in the art. Based upon the
disclosure provided herein, one of ordinary skill in the art will recognize a variety of other uses for averaged output 275.
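Putting the pieces together, the running-average behavior described for circuit 265 can be sketched as follows. This is illustrative only: the first harmonic in the list is taken as the zero-phase reference, and the helper name is hypothetical.

```python
def averaged_harmonic(harmonics):
    """Average phase-aligned harmonics over several sectors.

    Each phase-aligned term equals (X_1/|X_1|)*|X_k|, so the sum
    reduces to the reference phase times the sum of magnitudes,
    divided by the number of contributing sectors.
    """
    X_1 = harmonics[0]                           # zero-phase reference
    acc = sum(abs(X_k) for X_k in harmonics)     # sum of |X[k,l]|
    return (X_1 / abs(X_1)) * acc / len(harmonics)
```

Averaging forty or more sectors in this way reduces the noise in the harmonic estimate, which in turn tightens any fly height estimate derived from it.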
As the aforementioned accumulation of phase aligned harmonics over several sectors and multiplication of the accumulated value by the reference harmonic from storage circuit 245 uses one square root function and two multiplications for each harmonic calculated in relation to each sector, some embodiments of the present invention utilize one or more simplifications operable to yield less complex circuitry. As a first level of simplification, normalization of the phase factor is eliminated while computing the phase difference of the l-th harmonic in the k-th sector with respect to the first sector in accordance with the following equation:

\tilde{\Phi}[k,l] = \Theta[k,l] = \frac{X[k,l]}{X[1,l]} = \frac{X[k,l]X^*[1,l]}{|X[1,l]|^2}.
As another level of simplification, the squaring and division operations may also be eliminated while computing the phase difference in accordance with the following equations:

\hat{\Phi}[k,l] = X[k,l]X^*[1,l] = \hat{\Phi}_r[k,l] + j\hat{\Phi}_i[k,l], where

\hat{\Phi}_r[k,l] = X_r[k,l]X_r[1,l] + X_i[k,l]X_i[1,l], and

\hat{\Phi}_i[k,l] = X_i[k,l]X_r[1,l] - X_r[k,l]X_i[1,l].
From this, the real and imaginary parts of the phase aligned l-th harmonic of the k-th sector may be written as follows:

\hat{X}[k,l] = \hat{\Phi}^*[k,l]X[k,l] = \hat{X}_r[k,l] + j\hat{X}_i[k,l], where

\hat{X}_r[k,l] = \hat{\Phi}_r[k,l]X_r[k,l] + \hat{\Phi}_i[k,l]X_i[k,l], and

\hat{X}_i[k,l] = \hat{\Phi}_r[k,l]X_i[k,l] - \hat{\Phi}_i[k,l]X_r[k,l].
Using the aforementioned simplifications, a combination of estimation circuit 250 and phase offset compensation circuit 260 provides a phase compensated version of the newly calculated harmonic (i.e., the harmonic from the k-th read event or sector) in accordance with the following equation:

\hat{X}[k,l] = X[1,l]|X[k,l]|^2,

where X[1,l] is the output of reference harmonic storage circuit 245. Therefore, the accumulation process performed by running average circuit 265 can be reduced in accordance with the following equation:

\sum_k \hat{X}[k,l] = X[1,l]\sum_k |X[k,l]|^2 = (X_r[1,l] + jX_i[1,l])\sum_k (X_r^2[k,l] + X_i^2[k,l]).
Such a simplification saves at least one square root function per
calculated harmonic per sector.
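The effect of the simplification can be seen by comparing the accumulator above with its exact counterpart. The sketch below is illustrative and shows that the simplified path needs only multiplies and adds per sector (X_r^2 + X_i^2 in place of a square root):

```python
def averaged_harmonic_simplified(harmonics):
    """Simplified accumulation using the unnormalized phase factor.

    With Phi_hat = X_k*conj(X_1), each aligned term is X_1*|X_k|**2,
    so the per-sector work is X_r**2 + X_i**2 -- no square root or
    division -- at the cost of a known scale factor handled later.
    """
    X_1 = harmonics[0]
    acc = sum(X_k.real ** 2 + X_k.imag ** 2 for X_k in harmonics)
    return X_1 * acc / len(harmonics)
```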
An additional scale factor may be used in relation to both the real and imaginary portions of the phase aligned harmonics. This scale factor may be accounted for when utilizing the resulting average
output 275 in relation to determining fly height or another purpose. This scale factor can be obtained by relating the phase aligned harmonics in the exact and approximate approaches as follows:

\tilde{X}[k,l] = \frac{X[1,l]}{|X[1,l]|}|X[k,l]| and \hat{X}[k,l] = X[1,l]|X[k,l]|^2. Therefore, \hat{X}[k,l] = \left\{\frac{X[1,l]}{|X[1,l]|}|X[k,l]|\right\}\{|X[k,l]||X[1,l]|\} = \tilde{X}[k,l]\{|X[k,l]||X[1,l]|\}.

From this it follows that |\hat{X}[k,l]| = |X[k,l]|^2|X[1,l]|. Accordingly, the harmonic value that may be used in relation to calculating fly height based upon averaged output 275 when harmonic computations are done using the simplified approach presented above is:

10\log_{10}|\hat{X}[k,l]|^{2/3}.
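If, as is reasonable across nearby sectors, |X[k,l]| stays close to |X[1,l]|, then the simplified harmonic magnitude behaves like the cube of the per-sector harmonic magnitude, and the 2/3 exponent restores the conventional 20 log10 |X| scale. A sketch of that conversion (illustrative only; the function name is hypothetical):

```python
import math

def harmonic_db(X_hat):
    """Convert the simplified phase-aligned harmonic
    X_hat = X_1*|X_k|**2 to the dB value used for fly height.
    |X_hat| ~ |X|**3, so 10*log10(|X_hat|**(2/3)) ~ 20*log10(|X|).
    """
    return 10.0 * math.log10(abs(X_hat) ** (2.0 / 3.0))
```

For example, a per-sector magnitude of 10 gives an X_hat magnitude of 1000, and the conversion returns 20, the same value 20*log10(10) would give.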
Turning to
FIG. 3
, a flow diagram 300 depicts a method in accordance with some embodiments of the present invention for phase compensated harmonics calculation. Following flow diagram 300, a series of digital samples
is continuously received (block 305). This series of digital samples may be received, for example, from an analog to digital converter circuit that is converting an analog signal corresponding to
information derived from a storage medium. It is determined whether an initial harmonics calculation has been completed (block 310). The initial harmonic calculations are considered to have a phase
offset of zero. Later harmonics are phase aligned to this initial harmonics calculation.
Where an initial harmonics calculation has not yet been calculated (block 310), it is determined whether a predefined periodic data pattern is found in the received digital samples (block 312). The
periodic pattern may be part of data received from a user data region of a storage medium or may be part of the data received from a servo data region. In one particular instance, the periodic
pattern is the burst demodulation information included as part of the servo data. Based upon the disclosure provided herein, one of ordinary skill in the art will recognize a variety of approaches
for identifying a periodic pattern within the received digital data samples, and various locations on a storage medium from which the periodic data pattern may be derived. Where a periodic data
pattern has not yet been identified (block 312), additional data samples are received (block 305) and the process of searching for the periodic data continues.
Alternatively, where the predefined periodic data pattern is identified (block 312), a harmonics calculation is performed on the periodic data for the initial sector (block 315). This harmonics
calculation may be done consistent with any harmonic calculation known in the art. For example, the harmonic may be sensed/calculated in accordance with the circuits and processes disclosed in U.S.
patent application Ser. No. 12/851,455 entitled "Systems and Methods for Servo Data Based Harmonics Calculation" and filed by Mathew et al. on Aug. 5, 2010. Based upon the disclosure provided herein,
one of ordinary skill in the art will recognize a variety of systems, circuits and methods known in the art that may be used for calculating harmonics in accordance with different embodiments of the
present invention.
The aforementioned computation results in a real and an imaginary portion as set forth in the following equation:

X[1,l] = \sum_{n=0}^{N-1} x_1[n]c_l[n] - j\sum_{n=0}^{N-1} x_1[n]s_l[n],

where x_1[n], for n=0, 1, . . . , N-1, denotes the periodic pattern averaged over the first sector, \sum_{n=0}^{N-1} x_1[n]c_l[n] is the real portion, \sum_{n=0}^{N-1} x_1[n]s_l[n] is the imaginary portion, and c_l[n] and s_l[n] are parameters chosen based upon the periodicity of the periodic pattern. The real and imaginary portions of the calculated harmonics are stored (block 320) and a flag indicating that the initial
harmonics calculation is complete and stored is set (block 322). Again, the initial harmonic calculations are considered to have a phase offset of zero. Later harmonics are phase aligned to this
initial harmonics calculation that is stored.
Alternatively, where an initial harmonics calculation has been calculated (block 310), it is determined whether a subsequent instance of the predefined periodic data pattern is found in the received
digital samples (block 325). Again, the periodic pattern may be part of data received from a user data region of a storage medium or may be part of the data received from a servo data region. In one
particular instance, the periodic pattern is the burst demodulation information included as part of the servo data. Based upon the disclosure provided herein, one of ordinary skill in the art will
recognize a variety of approaches for identifying a periodic pattern within the received digital data samples, and various locations on a storage medium from which the periodic data pattern may be
derived. Where a subsequent instance of the periodic data pattern has not yet been identified (block 325), additional data samples are received (block 305) and the process of searching for the
periodic data continues.
Alternatively, where a subsequent instance of the predefined periodic data pattern is identified (block 325), a harmonics calculation is performed on the periodic data for the subsequent sector from
which the periodic data was derived (block 330). As with the initial harmonics calculation, the calculation may be done consistent with any harmonic calculation known in the art. For example, the
harmonic may be sensed/calculated in accordance with the circuits and processes disclosed in U.S. patent application Ser. No. 12/851,455 entitled "Systems and Methods for Servo Data Based Harmonics
Calculation" and filed by Mathew et al. on Aug. 5, 2010. Based upon the disclosure provided herein, one of ordinary skill in the art will recognize a variety of systems, circuits and methods known in
the art that may be used for calculating harmonics in accordance with different embodiments of the present invention.
The aforementioned computation results in a real and an imaginary portion as set forth in the following equation:

X[k,l] = \sum_{n=0}^{N-1} x_k[n]c_l[n] - j\sum_{n=0}^{N-1} x_k[n]s_l[n],

where x_k[n], for n=0, 1, . . . , N-1, denotes the periodic pattern averaged over the k-th sector, \sum_{n=0}^{N-1} x_k[n]c_l[n] is the real portion, \sum_{n=0}^{N-1} x_k[n]s_l[n] is the imaginary portion, and c_l[n] and s_l[n] are parameters chosen based upon the periodicity of the periodic pattern.
Next, an estimate of the phase difference between the initial harmonics value and the recently calculated harmonics value (i.e., the k-th harmonic) is made (block 335). As an example, assume the first and third harmonics are computed as follows:

X[k,l], for l=1, 3,

where {X_r[k,l], X_i[k,l]} are the real and imaginary parts of the l-th harmonic of the k-th read event. In this case, the estimated phase difference of the l-th harmonic of the k-th read event is calculated in accordance with the following equation:

\Phi[k,l] = \frac{\Theta[k,l]}{|\Theta[k,l]|} = \Phi_r[k,l] + j\Phi_i[k,l], with \Theta[k,l] = \frac{X[k,l]}{X[1,l]}.

In this case,

\Phi_r[k,l] = \frac{X_r[k,l]X_r[1,l] + X_i[k,l]X_i[1,l]}{\beta[k,l]}, \Phi_i[k,l] = \frac{X_i[k,l]X_r[1,l] - X_r[k,l]X_i[1,l]}{\beta[k,l]}, and \beta[k,l] = |X[1,l]||X[k,l]| = \sqrt{X_r^2[1,l]+X_i^2[1,l]}\sqrt{X_r^2[k,l]+X_i^2[k,l]}.
The aforementioned estimated phase difference is utilized to compute phase aligned harmonics for the currently processing sector (block 340). Such phase aligned harmonics may be calculated in
accordance with the following equation:
\tilde{X}[k,l] = \Phi^*[k,l]X[k,l] = (\Phi_r[k,l]X_r[k,l] + \Phi_i[k,l]X_i[k,l]) + j(\Phi_r[k,l]X_i[k,l] - \Phi_i[k,l]X_r[k,l]).
The aforementioned phase aligned harmonic value is then incorporated in a running average value (block 345). Such averaging operates to reduce noise characteristics of the calculated harmonics. In
one particular embodiment of the present invention, harmonic calculations for forty or more regions are averaged together in the running average. The aforementioned phase aligned harmonic is
equivalent to:

\tilde{X}[k,l] = \frac{\Theta^*[k,l]}{|\Theta[k,l]|}X[k,l] = \frac{X[1,l]}{|X[1,l]|}|X[k,l]|.

Thus, accumulating the phase aligned harmonics over several read events (i.e., sectors), the resulting running average is consistent with the following equation:

\sum_k \tilde{X}[k,l] = \frac{X[1,l]}{|X[1,l]|}\sum_k |X[k,l]| = \frac{X_r[1,l] + jX_i[1,l]}{\sqrt{X_r^2[1,l]+X_i^2[1,l]}}\sum_k \sqrt{X_r^2[k,l]+X_i^2[k,l]}.
The above mentioned output is then divided by the number of constituent
elements (k) to yield an average harmonics output. Based upon the disclosure provided herein, one of ordinary skill in the art will recognize other numbers of harmonics that may be included in
averaged output. The resulting averaged output may be provided to a fly height calculation circuit as is known in the art. Based upon the disclosure provided herein, one of ordinary skill in the art
will recognize a variety of other uses for the averaged output.
[0050] FIG. 4
depicts a phase compensated harmonics calculation circuit 400 in accordance with one or more embodiments of the present invention relying on periodic patterns derived from servo data. Phase
compensated harmonics calculation circuit 400 includes an analog front end circuit 410. Analog front end circuit 410 may be any analog front end circuit known in the art. In the depicted embodiment,
analog front end circuit 410 includes a pre-amplifier circuit 420, an analog filter circuit 425 and an analog to digital converter circuit 430. Pre-amplifier circuit 420 is electrically coupled to a
read/write head assembly 415 that is disposed a fly height distance 495 from a storage medium 490.
In a read operation, read/write head assembly 415 senses magnetically represented information on medium 490, and converts the information into an electrical signal. The electrical signal is amplified
by pre-amplifier circuit 420. The resulting amplified signal from pre-amplifier circuit 420 is provided to analog filter circuit 425 that provides a filtered output to analog to digital converter
circuit 430. In response, analog to digital converter circuit 430 provides a series of digital samples corresponding to the received analog input. The series of digital samples are provided to a
downstream data detection circuit 435, and to a read channel pattern detection and harmonic sensor circuit 442 through a servo field detector circuit 440. In a write operation, a write data output
405 is provided to pre-amplifier circuit 420 that in turn provides an output corresponding to write data output 405 to read/write head assembly 415. Read/write head assembly 415 converts the received
signal into magnetic information that is stored to medium 490.
Servo field detection circuit 440 monitors the received samples from analog front end circuit 410 looking for servo data as is known in the art. Servo field detection circuit 440 may be any circuit
known in the art that is capable of detecting and processing servo data from a storage medium. In the case where burst demodulation data from the servo data field is used to calculate harmonics
values, servo field detection circuit 440 asserts a field found output high whenever the burst demodulation field is identified. This output is provided as an enable input to a read channel harmonic
calculation circuit 442 including real and imaginary computation circuitry.
Read channel harmonic calculation circuit 442 is operable to calculate harmonics using samples received when the field found output is asserted by servo field detection circuit 440. This calculation
may be done consistent with any harmonic calculation known in the art. For example, the harmonic may be sensed/calculated in accordance with the circuits and processes disclosed in U.S. patent
application Ser. No. 12/851,455 entitled "Systems and Methods for Servo Data Based Harmonics Calculation" and filed by Mathew et al. on Aug. 5, 2010. Based upon the disclosure provided herein, one of
ordinary skill in the art will recognize a variety of systems, circuits and methods known in the art that may be used for calculating harmonics in accordance with different embodiments of the present invention.
The aforementioned harmonic computation results in a real and an imaginary portion as set forth in the following equation for the l-th harmonic of a periodic pattern with period N:

H_l = \sum_{n=0}^{N-1} x[n]c_l[n] - j\sum_{n=0}^{N-1} x[n]s_l[n], where x[n] = \frac{1}{M}\sum_{m=0}^{M-1} y[n+mN].

Here, y[n] denotes M cycles of the samples of the periodic data from a sector before averaging and x[n] denotes one cycle of these samples after averaging. Further, \sum_{n=0}^{N-1} x[n]c_l[n] is the real portion, \sum_{n=0}^{N-1} x[n]s_l[n] is the imaginary portion, l denotes the harmonic index, and c_l[n] and s_l[n] are parameters chosen based upon the periodicity of the periodic pattern.
Where the calculated harmonic is selected as an initial harmonic, a discrete Fourier transform of the harmonic is stored as a reference harmonic in a storage circuit 445. In particular, the aforementioned real and imaginary portions of the calculated harmonic are stored. This stored initial harmonic is considered to have a zero phase offset. The reference harmonic may be the first harmonic calculated after startup, or may be updated periodically. Based on the disclosure provided herein, one of ordinary skill in the art will appreciate a variety of different ways that a
reference harmonic may be selected for use in relation to different embodiments of the present invention.
Where the calculated harmonic is not the initial harmonic, the calculated harmonic is provided to an estimation circuit 450 that estimates a phase difference between the previously stored reference harmonic and the newly calculated harmonic (i.e., the harmonic from the k-th read event or sector). As an example, assume the first and third harmonics are computed as follows:

X[k,l], for l=1, 3,

where {X_r[k,l], X_i[k,l]} are the real and imaginary parts of the l-th harmonic of the k-th read event. In this case, the estimated phase difference of the l-th harmonic of the k-th read event is calculated in accordance with the following equation:

\Phi[k,l] = \frac{\Theta[k,l]}{|\Theta[k,l]|} = \Phi_r[k,l] + j\Phi_i[k,l], with \Theta[k,l] = \frac{X[k,l]}{X[1,l]}.

In this case,

\Phi_r[k,l] = \frac{X_r[k,l]X_r[1,l] + X_i[k,l]X_i[1,l]}{\beta[k,l]}, \Phi_i[k,l] = \frac{X_i[k,l]X_r[1,l] - X_r[k,l]X_i[1,l]}{\beta[k,l]}, and \beta[k,l] = |X[1,l]||X[k,l]| = \sqrt{X_r^2[1,l]+X_i^2[1,l]}\sqrt{X_r^2[k,l]+X_i^2[k,l]}.
The aforementioned estimated phase difference is provided from estimation circuit 450 to a phase offset compensation circuit 460. Phase offset compensation circuit 460 uses the estimated phase
difference to align or rotate the recently calculated harmonics into phase with the initial reference harmonic from storage circuit 445 in accordance with the following equation:
\tilde{X}[k,l] = \Phi^*[k,l]X[k,l] = (\Phi_r[k,l]X_r[k,l] + \Phi_i[k,l]X_i[k,l]) + j(\Phi_r[k,l]X_i[k,l] - \Phi_i[k,l]X_r[k,l]).
The aforementioned phase aligned harmonic value is provided to a running average circuit 465. Running average circuit 465 averages a number of phase compensated harmonics. Such averaging operates to
reduce noise characteristics of the calculated harmonics. In one particular embodiment of the present invention, harmonic calculations for forty or more regions are averaged together in the running
average. The aforementioned phase aligned harmonic is equivalent to:

\tilde{X}[k,l] = \frac{\Theta^*[k,l]}{|\Theta[k,l]|}X[k,l] = \frac{X[1,l]}{|X[1,l]|}|X[k,l]|.

Thus, accumulating the phase aligned harmonics over several read events (i.e., sectors), running average circuit 465 provides an output consistent with the following equation:

\sum_k \tilde{X}[k,l] = \frac{X[1,l]}{|X[1,l]|}\sum_k |X[k,l]| = \frac{X_r[1,l] + jX_i[1,l]}{\sqrt{X_r^2[1,l]+X_i^2[1,l]}}\sum_k \sqrt{X_r^2[k,l]+X_i^2[k,l]}.
The above mentioned output is then divided by the number of constituent
elements (k) to yield an average harmonics output. The average harmonics output is provided as an averaged output 475. Based upon the disclosure provided herein, one of ordinary skill in the art will
recognize other numbers of harmonics that may be included in averaged output 475. Averaged output 475 may be provided to a fly height calculation circuit as is known in the art. Based upon the
disclosure provided herein, one of ordinary skill in the art will recognize a variety of other uses for averaged output 475.
As the aforementioned accumulation of phase aligned harmonics over several sectors and multiplication of the accumulated value by the reference harmonic from storage circuit 445 uses one square root function and two multiplications for each harmonic calculated in relation to each sector, some embodiments of the present invention utilize one or more simplifications operable to yield less complex circuitry. As a first level of simplification, normalization of the phase factor is eliminated while computing the phase difference of the l-th harmonic in the k-th sector with respect to the first sector in accordance with the following equation:

\tilde{\Phi}[k,l] = \Theta[k,l] = \frac{X[k,l]}{X[1,l]} = \frac{X[k,l]X^*[1,l]}{|X[1,l]|^2}.
As another level of simplification, the squaring and division operations may also be eliminated while computing the phase difference in accordance with the following equations:

\hat{\Phi}[k,l] = X[k,l]X^*[1,l] = \hat{\Phi}_r[k,l] + j\hat{\Phi}_i[k,l], where

\hat{\Phi}_r[k,l] = X_r[k,l]X_r[1,l] + X_i[k,l]X_i[1,l], and

\hat{\Phi}_i[k,l] = X_i[k,l]X_r[1,l] - X_r[k,l]X_i[1,l].
From this, the real and imaginary parts of the phase aligned l-th harmonic of the k-th sector may be written as follows:
\hat{X}[k,l] = \hat{\Phi}^*[k,l]\, X[k,l] = \hat{X}_r[k,l] + j\hat{X}_i[k,l], with
\hat{X}_r[k,l] = \hat{\Phi}_r[k,l] X_r[k,l] + \hat{\Phi}_i[k,l] X_i[k,l], and
\hat{X}_i[k,l] = \hat{\Phi}_r[k,l] X_i[k,l] - \hat{\Phi}_i[k,l] X_r[k,l].
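A brief sketch of this square-root-free alignment (hypothetical helper name; Python complex arithmetic stands in for the separate real/imaginary hardware datapath):

```python
def aligned_harmonic_simplified(x_ref, x_k):
    """Phase-align x_k to x_ref without normalization, squaring or division.

    phi_hat = X[k,l] * conj(X[1,l]); the aligned harmonic is
    conj(phi_hat) * X[k,l], which equals X[1,l] * |X[k,l]|**2.
    """
    phi_hat = x_k * x_ref.conjugate()
    return phi_hat.conjugate() * x_k
```

Note the result carries an extra scale factor of |X[k,l]| |X[1,l]| relative to the exact phase-aligned harmonic; the patent accounts for this scale factor later.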
Using the aforementioned simplifications, a combination of estimation circuit 450 and phase offset compensation circuit 460 provides a phase compensated version of the newly calculated harmonic
(i.e., the harmonic in k-th read-event or sector) in accordance with the following equation:
\hat{X}[k,l] = X[1,l]\, |X[k,l]|^2.
Thus, the accumulation process performed by running average circuit 465 can be reduced in accordance with the following equation:
\sum_k \hat{X}[k,l] = X[1,l] \sum_k |X[k,l]|^2 = (X_r[1,l] + jX_i[1,l]) \sum_k (X_r^2[k,l] + X_i^2[k,l]). ##EQU00033##
Such a simplification saves at least one square root function per
calculated harmonic per sector.
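The simplified accumulation can be sketched as below (illustrative only; the function name and complex-number representation are assumptions):

```python
def accumulate_simplified(harmonics):
    """Accumulate phase-aligned harmonics in the square-root-free form:
    sum_k Xhat[k,l] = X[1,l] * sum_k |X[k,l]|**2, where |.|**2 is computed
    as Xr**2 + Xi**2 and therefore needs no square root per sector."""
    ref = harmonics[0]
    power_sum = sum(x.real ** 2 + x.imag ** 2 for x in harmonics)
    return ref * power_sum
```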
An additional scale factor may be used in relation to both the real and imaginary portions of the phase aligned harmonics. This scale factor may be accounted for when utilizing the resulting average
output 475 in relation to determining fly height or another purpose. This scale factor can be obtained by relating the phase aligned harmonics in exact and approximate approaches as follows:
\tilde{X}[k,l] = \frac{X[1,l]}{|X[1,l]|}\, |X[k,l]| \quad \text{and} \quad \hat{X}[k,l] = X[1,l]\, |X[k,l]|^2. \text{ Therefore, } \hat{X}[k,l] = \left\{\frac{X[1,l]}{|X[1,l]|}\, |X[k,l]|\right\} \{|X[k,l]|\, |X[1,l]|\} = \tilde{X}[k,l]\, \{|X[k,l]|\, |X[1,l]|\}. ##EQU00034##
From this it follows that |\hat{X}[k,l]| = |\tilde{X}[k,l]|\, |X[k,l]|\, |X[1,l]|. Thus, the harmonic value that may be used in relation to calculating fly height based upon average output 475 when harmonic computations are done using the simplified approach presented above is:
10 \log_{10} |\hat{X}[k,l]|^{2/3}. ##EQU00035##
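Assuming the exponent in the equation above is 2/3 (which compensates for the roughly cubic growth of the simplified harmonic magnitude when reference and sector magnitudes are comparable), the dB conversion might look like this hypothetical sketch:

```python
import math

def harmonic_db(x_hat_avg):
    """Convert the simplified averaged harmonic back to a dB value
    comparable to 10*log10(|X|**2).  The 2/3 exponent is assumed here to
    undo the |X[1,l]| * |X[k,l]|**2 scaling of the simplified approach."""
    return 10.0 * math.log10(abs(x_hat_avg) ** (2.0 / 3.0))
```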
Turning to
FIG. 5
, a flow diagram 500 depicts a method in accordance with some embodiments of the present invention for phase compensated harmonics calculation relying on periodic patterns derived from servo data. It
should be noted that the method of
FIG. 5
is specific to servo data, but that the processes of the method may be equally applicable to periodic data from either or both of a servo data region or a user data region. Following flow diagram
500, a series of digital samples is continuously received (block 505). This series of digital samples may be received, for example, from an analog to digital converter circuit that is converting an
analog signal corresponding to information derived from a storage medium. It is determined whether servo data has been identified in the series of digital samples (block 560). This determination may
be done using any approach known in the art and may include, for example, identifying a preamble within the identified servo data. Where a servo data region has been found (block 560), it is
determined whether the burst demodulation data within the servo data region is being received (block 565).
Where the burst demodulation data is being received (block 565), it is determined whether an initial harmonics calculation has been completed (block 510). The initial harmonic calculations are
considered to have a phase offset of zero. Later harmonics are phase aligned to this initial harmonics calculation. Where an initial harmonics calculation has not yet been calculated (block 510), a
harmonics calculation is performed on the burst demodulation data for the initial sector (block 515). This harmonics calculation may be done consistent with any harmonic calculation known in the art.
For example, the harmonic may be sensed/calculated in accordance with the circuits and processes disclosed in U.S. patent application Ser. No. 12/851,455 entitled "Systems and Methods for Servo Data
Based Harmonics Calculation" and filed by Mathew et al. on Aug. 5, 2010. Based upon the disclosure provided herein, one of ordinary skill in the art will recognize a variety of systems, circuits and
methods known in the art that may be used for calculating harmonics in accordance with different embodiments of the present invention.
The aforementioned computation results in a real and an imaginary portion as set forth in the following equation:
X[1,l] = \sum_{n=0}^{N-1} x_1[n]\, c_l[n] - j \sum_{n=0}^{N-1} x_1[n]\, s_l[n], ##EQU00036##
where x_1[n] for n = 0, 1, . . . , N-1 denote the periodic pattern averaged over the first sector,
\sum_{n=0}^{N-1} x_1[n]\, c_l[n] ##EQU00037##
is the real portion,
\sum_{n=0}^{N-1} x_1[n]\, s_l[n] ##EQU00038##
is the imaginary portion, and c_l[n] and s_l[n] are parameters chosen based upon the periodicity of the periodic pattern. These real and imaginary portions of the calculated harmonics are stored (block 520) and a flag indicating that the
initial harmonics calculation is complete and stored is set (block 522). Again, the initial harmonic calculations are considered to have a phase offset of zero. Later harmonics are phase aligned to
this initial harmonics calculation that is stored.
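The per-sector harmonic computation can be sketched as below. The patent only says c_l[n] and s_l[n] are chosen based upon the periodicity of the pattern; sampled cosine/sine at the l-th harmonic of the pattern period is assumed here purely for illustration:

```python
import math

def harmonic(samples, l):
    """l-th harmonic of one sector's averaged periodic pattern x[n]:
    X = sum_n x[n]*c_l[n] - j * sum_n x[n]*s_l[n], with c_l and s_l
    assumed to be cos/sin sampled over the N-sample pattern period."""
    big_n = len(samples)
    real = sum(x * math.cos(2 * math.pi * l * n / big_n)
               for n, x in enumerate(samples))
    imag = sum(x * math.sin(2 * math.pi * l * n / big_n)
               for n, x in enumerate(samples))
    return complex(real, -imag)
```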
Alternatively, where an initial harmonics calculation has been previously calculated (block 510), a harmonics calculation is performed on the burst demodulation data for the subsequent sector from
which the periodic data was derived (block 530). As with the initial harmonics calculation, the calculation may be done consistent with any harmonic calculation known in the art. For example, the
harmonic may be sensed/calculated in accordance with the circuits and processes disclosed in U.S. patent application Ser. No. 12/851,455 entitled "Systems and Methods for Servo Data Based Harmonics
Calculation" and filed by Mathew et al. on Aug. 5, 2010. Based upon the disclosure provided herein, one of ordinary skill in the art will recognize a variety of systems, circuits and methods known in
the art that may be used for calculating harmonics in accordance with different embodiments of the present invention.
The aforementioned computation results in a real and an imaginary portion as set forth in the following equation:
X[k,l] = \sum_{n=0}^{N-1} x_k[n]\, c_l[n] - j \sum_{n=0}^{N-1} x_k[n]\, s_l[n], ##EQU00039##
where x_k[n] for n = 0, 1, . . . , N-1 denote the periodic pattern averaged over the k-th sector,
\sum_{n=0}^{N-1} x_k[n]\, c_l[n] ##EQU00040##
is the real portion,
\sum_{n=0}^{N-1} x_k[n]\, s_l[n] ##EQU00041##
is the imaginary portion, and c_l[n] and s_l[n] are parameters chosen based upon the periodicity of the periodic pattern.
Next, an estimate of the phase difference between the initial harmonics value and the recently calculated harmonics value (i.e., the harmonic of the k-th read-event or sector) is determined (block 535). As an example, assume the first and third harmonics are computed as follows:
X[k,l], for l = 1, 3,
where {X_r[k,l], X_i[k,l]} are the real and imaginary parts of the l-th harmonic of the k-th read event. In this case, the estimated phase difference of the l-th harmonic of the k-th read event is calculated in accordance with the following equation:
\Phi[k,l] = \frac{\Theta[k,l]}{|\Theta[k,l]|} = \Phi_r[k,l] + j\Phi_i[k,l], \quad \text{with} \quad \Theta[k,l] = \frac{X[k,l]}{X[1,l]}. ##EQU00042##
In this case,
\Phi_r[k,l] = \frac{X_r[k,l] X_r[1,l] + X_i[k,l] X_i[1,l]}{\beta[k,l]}, \quad \Phi_i[k,l] = \frac{X_i[k,l] X_r[1,l] - X_r[k,l] X_i[1,l]}{\beta[k,l]}, ##EQU00043##
with
\beta[k,l] = |X[1,l]|\, |X[k,l]| = \sqrt{X_r^2[1,l] + X_i^2[1,l]}\, \sqrt{X_r^2[k,l] + X_i^2[k,l]}.
The aforementioned estimated phase difference is utilized to compute phase aligned harmonics for the currently processing sector (block 540). Such phase aligned harmonics may be calculated in
accordance with the following equation:
\tilde{X}[k,l] = \Phi^*[k,l]\, X[k,l] = (\Phi_r[k,l] X_r[k,l] + \Phi_i[k,l] X_i[k,l]) + j(\Phi_r[k,l] X_i[k,l] - \Phi_i[k,l] X_r[k,l]). ##EQU00044##
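The exact path of blocks 535 and 540 (phase estimate followed by alignment) can be combined into one sketch (hypothetical helper; complex numbers model the real/imaginary pairs of the hardware implementation):

```python
def phase_aligned(x_ref, x_k):
    """Exact phase alignment: Phi = Theta/|Theta| with Theta = X[k,l]/X[1,l],
    then Xtilde = Phi* X[k,l], which equals (X[1,l]/|X[1,l]|) * |X[k,l]|."""
    beta = abs(x_ref) * abs(x_k)             # |X[1,l]| * |X[k,l]|
    phi = (x_k * x_ref.conjugate()) / beta   # unit-magnitude phase estimate
    return phi.conjugate() * x_k
```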
The aforementioned phase aligned harmonic value is then incorporated in a running average value (block 545). Such averaging operates to reduce noise characteristics of the calculated harmonics. In
one particular embodiment of the present invention, harmonic calculations for forty or more regions are averaged together in the running average. The aforementioned phase aligned harmonic is equivalent to:
\tilde{X}[k,l] = \frac{\Theta^*[k,l]}{|\Theta[k,l]|}\, X[k,l] = \frac{X[1,l]}{|X[1,l]|}\, |X[k,l]|. ##EQU00045##
Thus, accumulating the phase aligned harmonics over several read events (i.e., sectors), the resulting running average is consistent with the following equation:
\sum_k \tilde{X}[k,l] = \frac{X[1,l]}{|X[1,l]|} \sum_k |X[k,l]| = \frac{X_r[1,l] + jX_i[1,l]}{\sqrt{X_r^2[1,l] + X_i^2[1,l]}} \sum_k \sqrt{X_r^2[k,l] + X_i^2[k,l]}. ##EQU00046##
The above mentioned output is then divided by the number of constituent
elements (k) to yield an average harmonics output. Based upon the disclosure provided herein, one of ordinary skill in the art will recognize other numbers of harmonics that may be included in the averaged output. The resulting averaged output may be provided to a fly height calculation circuit as is known in the art. Based upon the disclosure provided herein, one of ordinary skill in the art
will recognize a variety of other uses for the averaged output.
Turning to
FIG. 6a
, a storage device 600 including a read channel circuit 610 with a phase compensated multi-sector harmonic sensing circuit is shown in accordance with one or more embodiments of the present invention. Storage device 600 may be, for example, a hard disk drive. Read channel circuit 610 includes phase compensated multi-sector harmonic sensing that may be implemented consistent with the circuits discussed above in relation to FIG. 2 and/or FIG. 4, and/or may operate consistent with the methods discussed above in relation to FIG. 3 and/or FIG. 5. Further, read channel circuit 610 may include a data detector, such as, for example, a Viterbi algorithm data detector, and/or a data decoder circuit, such as, for example, a low density
parity check decoder circuit. In addition to read channel circuit 610, storage device 600 includes a read/write head assembly 676 disposed in relation to a disk platter 678. Read/write head assembly
676 is operable to sense information stored on disk platter 678 and to provide a corresponding electrical signal to read channel circuit 610.
Storage device 600 also includes an interface controller 620, a hard disk controller 666, a motor controller and fly height controller 668, and a spindle motor 672. Interface controller 620 controls
addressing and timing of data to/from disk platter 678. The data on disk platter 678 consists of groups of magnetic signals that may be detected by read/write head assembly 676 when the assembly is
properly positioned over disk platter 678. In one embodiment, disk platter 678 includes magnetic signals recorded in accordance with a perpendicular recording scheme. In other embodiments of the
present invention, disk platter 678 includes magnetic signals recorded in accordance with a longitudinal recording scheme. Motor controller and fly height controller 668 controls the spin rate of
disk platter 678 and the location of read/write head assembly 676 in relation to disk platter 678.
As shown in a cross sectional diagram 691 of
FIG. 6b
, the distance between read/write head assembly 676 and disk platter 678 is a fly height 690. Fly height 690 is controlled by motor controller and fly height controller 668 based upon a harmonics
value 612 provided by read channel circuit 610. The accuracy of harmonics value 612 is improved by the phase offset based spectral aliasing compensation applied by read channel circuit 610.
In a typical read operation, read/write head assembly 676 is accurately positioned by motor controller and fly height controller 668 over a desired data track on disk platter 678. Motor controller
and fly height controller 668 both positions read/write head assembly 676 in relation to disk platter 678 (laterally and vertically) and drives spindle motor 672 by moving read/write head assembly
676 to the proper data track on disk platter 678 under the direction of hard disk controller 666. Spindle motor 672 spins disk platter 678 at a determined spin rate (RPMs). Once read/write head
assembly 676 is positioned adjacent the proper data track, magnetic signals representing data on disk platter 678 are sensed by read/write head assembly 676 as disk platter 678 is rotated by spindle
motor 672. The sensed magnetic signals are provided as a continuous, minute analog signal representative of the magnetic data on disk platter 678. This minute analog signal is provided by read/write
head assembly 676 to read channel circuit 610. In turn, read channel circuit 610 decodes and digitizes the received analog signal to recreate the information originally written to disk platter 678.
This data is provided as read data 603 to a receiving circuit. A write operation is substantially the opposite of the preceding read operation with write data 601 being provided to read channel
circuit 610. This data is then encoded and written to disk platter 678.
At times, a signal derived from disk platter 678 and/or internally generated by read channel circuit 610 may be processed to determine a harmonics value relevant to fly height. In some embodiments of
the present invention, determining the harmonics value may be done consistent with the methods discussed above in relation to FIG. 3 and/or FIG. 5. In various cases, a circuit consistent with that discussed above in relation to FIG. 2 and/or FIG. 4 may be used. In various cases, fly height is re-evaluated when a change in operational status of storage device 600 is detected. Such an operational change may include, but is not limited to, a
change in an operational voltage level, a change in an operational temperature, a change in altitude, or a change in bit error rate. Based upon the disclosure provided herein, one of ordinary skill
in the art will recognize a variety of operational statuses that may be monitored in storage device 600, and how changes in such statuses may be utilized to trigger a re-evaluation of fly height.
In conclusion, the invention provides novel systems, devices, methods and arrangements for phase compensated harmonics measurement. While detailed descriptions of one or more embodiments of the
invention have been given above, various alternatives, modifications, and equivalents will be apparent to those skilled in the art without varying from the spirit of the invention. Therefore, the
above description should not be taken as limiting the scope of the invention, which is defined by the appended claims.
Patent applications by George Mathew, San Jose, CA US
Patent applications by Hongwei Song, Longmont, CO US