1. Classify the triangle by its angles and its sides.
A. acute, scalene B. obtuse, scalene
C. acute, equilateral D. obtuse, isosceles
2. Classify the triangle by its angles and its sides.
A. right, scalene B. right, isosceles
C. acute, equilateral D. acute, isosceles
3. Find m∠B in △ABC if m∠A = 85° and m∠C = 11°.
A. 84° B. 6°
C. 104° D. 92°
4. Find the missing measure in the triangle below.
A. 146° B. 34°
C. 43° D. 124°
5. Classify the triangle by its angles and by its sides.
A. right, isosceles B. acute, equilateral
C. acute, isosceles D. acute, equilateral
Math Forum Discussions - Re: Plausibility Arguments
Date: Mar 13, 1997 6:47 PM
Author: Ted Alper
Subject: Re: Plausibility Arguments
(By coincidence, my previous post made reference to a proof that all
triangles were isosceles and I was thinking of the very one Andre
posted... Of course, I did recently read the book "Mathematical
Circles" [I enjoyed it greatly!] and saw the proof there, typos
and all.)
Andre adds:
> I want to continue that proofs of wrong statements are just
> one kind of useful ways to puzzle children. [...]
> But all this is just a preparation. What should
> come next is training in solving problems right - that is so
> that to avoid all these `miracles'. That is why it is so
> important for children to solve problems where it is clear how
> to check the answer and to find out whether it is right or wrong.
I find nothing to disagree with here. "Proofs of wrong statements"
are certainly not the main course! They are an enjoyable and useful
supplement, though, and particularly relevant to this thread of
"plausibility arguments".
Ted Alper
Homework Help
Posted by Mark on Tuesday, March 23, 2010 at 4:47pm.
Use the given function values and trigonometric identities (including the relationship between a trigonometric function and its cofunction of a complementary angle) to find the indicated
trigonometric functions.
sec Q = 5 tan = 2sqrt6
a) cos Q b) cotQ
c) cot(90 degrees - Q) d) sin Q
Please explain. I do not understand how to do this. Thank you!!
• Math - Reiny, Tuesday, March 23, 2010 at 5:22pm
sec Q = 5 tanQ
1/cosQ = 5sinQ/cosQ
so sinQ = 1/5
I then made a right-angled triangle with hypotenuse 5, opposite 1, and let x be the adjacent.
By Pythagoras
x^2 + 1^2 = 5^2
x^2 = 24
x = √24 = 2√6
but you said secQ = 2√6
if, as you started, secQ = 5tanQ, then my calculation stands as correct and
secQ would be 5/(2√6) and not 2√6
I will wait till you correct your opening statement of
sec Q = 5 tan = 2sqrt6 , which leads to a contradiction.
• Math - Aaron, Wednesday, July 20, 2011 at 1:54am
What this person means is :
secQ = 5 and tanQ = 2√6
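Reading the given values as sec Q = 5 and tan Q = 2√6 is internally consistent, since sec²Q − tan²Q = 25 − 24 = 1. Under that reading the four requested values follow from the reciprocal, quotient, and cofunction identities; the quick numerical check below is my own illustration, not part of the thread:

```python
import math

sec_Q = 5.0
tan_Q = 2 * math.sqrt(6)     # consistent with sec_Q: sec^2 Q - tan^2 Q = 1

cos_Q = 1 / sec_Q            # (a) cos Q = 1/5
sin_Q = tan_Q * cos_Q        # (d) sin Q = tan Q * cos Q = 2*sqrt(6)/5
cot_Q = 1 / tan_Q            # (b) cot Q = 1/(2*sqrt(6))
cot_complement = tan_Q       # (c) cofunction identity: cot(90 deg - Q) = tan Q

print(cos_Q, cot_Q, cot_complement, sin_Q)
```

The printed sine and cosine satisfy sin²Q + cos²Q = 1, and the side lengths 1, 2√6, 5 are exactly those of the right triangle Reiny drew.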
Friday, April 18, 2014
Here's another from xkcd.com, on our "good graphics" theme.
Monday, April 14, 2014
Time for something light. Check out xkcd.com, "A webcomic of romance, sarcasm, math, and language," written by a literate former NASA engineer. Really fine stuff. Thanks to my student M.D. for
introducing me to it. Here's one on Fisher vs. Bayes:
Monday, April 7, 2014
Here's a new one for your reading pleasure. Interesting history. Minchul and I went in trying to escape the expected loss minimization paradigm. We came out realizing that we hadn't escaped, but
simultaneously, that not all loss functions are created equal. In particular, there's a direct and natural connection between our stochastic error divergence (SED) and absolute-error loss, elevating
the status of absolute-error loss in our minds and perhaps now making it our default benchmark of choice. Put differently, "quadratic loss is for squares." (Thanks to Roger Koenker for the cute phrase.)
Diebold, F.X. and Shin, M. (2014), "Assessing Point Forecast Accuracy by Stochastic Divergence from Zero," PIER Working Paper 14-011, Department of Economics, University of Pennsylvania.
Abstract: We propose point forecast accuracy measures based directly on the divergence of the forecast-error c.d.f.
from the unit step function at 0, and we explore several variations on the basic theme. We also provide a precise characterization of the relationship between our approach of stochastic error
divergence (SED) minimization and the conventional approach of expected loss minimization. The results reveal a particularly strong connection between SED and absolute-error loss and generalizations
such as the "check function" loss that underlies quantile regression.
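The tight SED/absolute-error connection is easy to see numerically in the L1 case: the area between the empirical forecast-error c.d.f. and the unit step at zero equals the mean absolute error exactly. The sketch below is my own illustration of that identity, not code from the paper:

```python
import numpy as np

def sed_l1(errors, grid):
    """L1 divergence of the empirical error c.d.f. from the unit step at 0,
    approximated by a Riemann sum on an equally spaced grid."""
    F = np.searchsorted(np.sort(errors), grid, side="right") / len(errors)
    step = (grid >= 0).astype(float)           # unit step function at 0
    return np.sum(np.abs(F - step)) * (grid[1] - grid[0])

rng = np.random.default_rng(0)
e = rng.normal(0.2, 1.0, size=5000)            # forecast errors with a small bias
grid = np.linspace(e.min() - 1.0, e.max() + 1.0, 200_001)
print(sed_l1(e, grid), np.abs(e).mean())       # the two numbers agree
```

The agreement is no accident: splitting the integral at zero gives the expected negative part plus the expected positive part of the error, i.e., E|e|.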
Monday, March 31, 2014
Good writing is good thinking, so when you next hear some pretentious moron boast that "I don't like to write, I like to think," rest assured, he's surely a bad writer and a bad thinker. Again, good writing is good thinking. If you like "to do research" but don't like "to write it up," then you're not thinking clearly. Research and writing are inextricably intertwined.
How to get there? Read and absorb McCloskey's Rhetoric of Economics, and Strunk and White's Elements of Style. There's no real need to read or absorb much else (about writing). But do bolt the Chicago Manual of Style to your desk. Then get going. Think about what you want to say, why, and to whom. Think hard and critically about logical structure and flow, at all scales, small and large. Revise and edit, again and again. Make things easy for your readers. Listen to your words; push your prose toward poetry.
Good graphics is also good thinking, and precisely the same advice holds. Read and absorb Tufte's Visual Display of Quantitative Information. Notice, by the way, how well Tufte writes (even if he sometimes goes overboard with the poetry thing). It's no accident. As Tufte says: show the data, and appeal to the viewer. Recognize that your first cut using default software settings will never, ever, be satisfactory. (If that statement doesn't instantly resonate with you, then you're in desperate need of a Tufte infusion.) So revise and edit, again and again. And again.
Friday, March 28, 2014
I'm happy that Nate Silver and his FiveThirtyEight are back. Nate generally provides interesting and responsible data-based journalism for the educated layperson. (Of course he sometimes gets in over his head, but don't we all?)
Now Krugman suddenly starts to dislike Silver; see his "Tarnished Silver" post. Funny, he never complained much when Silver worked at the New York Times (the trough where Krugman feeds), but now that Silver has moved elsewhere, Krugman's vitriol erupts. Perhaps Krugman always felt that way but was mum so as not to offend his NYT. Or perhaps he now wants to punish Silver for defecting. Or perhaps it's a little of both. In any event it strikes me as an embarrassment. Let's call it the Krugman Embarrassment.
I'm not the only one who's noticed the Krugman Embarrassment. See the recent post from Big Data, Plainly Spoken, which I think gets things right in labeling FiveThirtyEight-bashing "premature and immature." Also see the chart at FiveThirtyEight's Data Lab, which speaks for itself.
Monday, March 24, 2014
If you're in the area: Sheldon Hackney Celebration, Thursday, March 27. Program 4-5, reception 5-6, Irvine Auditorium, 34th and Spruce, Philadelphia. See my earlier memorial post.
Sunday, March 23, 2014
Generalized Autoregressive Score (GAS) models, also known as Dynamic Conditional Score (DCS) models, are an important development. They extend significantly the scope of observation-driven models,
with their simple closed-form likelihoods, in contrast to parameter-driven models whose estimation and inference require heavy simulation.
Many talented people are pushing things forward. Most notable are the Amsterdam group (Siem Jan Koopman et al.; see the GAS site) and the Cambridge group (Andrew Harvey et al.; see Andrew's interesting new book). The GAS site is very informative, with background description, a catalog of GAS papers, code in Ox and R, conference information, etc. The key paper is Creal, Koopman and Lucas (2008). (It was eventually published in 2012 in the Journal of Applied Econometrics, proving once again that the better the paper, the longer it takes to publish.)
The GAS idea is simple. Just use a conditional observation density \(p(y_t |f_t)\) whose time-varying parameter \(f_t\) follows the recursion
$$f_{t+1} = \omega + \beta f_t + \alpha S(f_t) \left[ \frac{\partial \log p(y_t \mid f_t)}{\partial f_t} \right],~~~~~~~(1)$$ where \(S(f_t)\) is a scaling function. Note in particular that the scaled score drives \(f_t\). The resulting GAS models retain observation-driven simplicity yet are quite flexible. In the volatility context, for example, GAS can be significantly more flexible than GARCH, as Harvey emphasizes.
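To make recursion (1) concrete, here is a minimal sketch (my own, not the authors' code) of the textbook example: a Gaussian conditional variance \(f_t\) with inverse-Fisher-information scaling \(S(f_t) = 2 f_t^2\). The scaled score is then \(y_t^2 - f_t\), and (1) collapses to a GARCH(1,1)-type update. Parameter values are purely illustrative.

```python
import numpy as np

def gas_gaussian_variance(y, omega, alpha, beta, f0):
    """GAS(1,1) recursion for a Gaussian conditional variance f_t.

    The score of log N(y | 0, f) w.r.t. f is (y^2 - f) / (2 f^2); scaling by
    the inverse Fisher information S(f) = 2 f^2 gives the scaled score
    y^2 - f, so recursion (1) reduces to a GARCH(1,1)-type update.
    """
    f = np.empty(len(y) + 1)
    f[0] = f0
    for t, yt in enumerate(y):
        scaled_score = yt**2 - f[t]
        f[t + 1] = omega + beta * f[t] + alpha * scaled_score
    return f

rng = np.random.default_rng(1)
y = rng.normal(0.0, 1.0, size=500)
f = gas_gaussian_variance(y, omega=0.05, alpha=0.10, beta=0.85, f0=1.0)
```

With other conditional densities (e.g., Student's t), the same recursion delivers updates that GARCH cannot, which is the flexibility Harvey emphasizes.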
Well, the GAS idea is simple. At least it's simple to implement if taken at face value. But I'm not sure that I understand it fully. In particular, I'm hungry for a theorem that tells me in what sense (1) is the "right" thing to do. That is, I can imagine other ways of updating \(f_t\), so why should I necessarily adopt (1)? It would be great, for example, if (1) were the provably unique solution to an optimal approximation problem for non-linear non-Gaussian state space models. Is it? (It sure looks like a first-order approximation to something.) And if so, might we want to acknowledge that in doing the econometrics, instead of treating (1) as if it were the DGP? And could we somehow improve the approximation?
To the best of my knowledge, the GAS/DCS literature is silent on such fundamental issues. But based on my experience with the fine scholarship of Creal, Harvey, Koopman, Lucas, and their co-authors
and students, I predict that answers will arrive soon.
Sunday, March 16, 2014
Interdisciplinarity is clearly the flavor of the month (read: two decades) among the academic cognoscenti. Although it makes for entertaining popular press, what's the real intellectual benefit of a
top-down interdisciplinary "industrial policy"? Difficult question! That's not necessarily to suggest that there is no benefit; rather, it's simply to suggest that the issues are subtle and deserving of serious thought from all sides.
Hence it's refreshing to see a leading academic throw his hat in the ring with a serious evidence-based defense of the traditional disciplines, as does Penn sociologist Jerry Jacobs in his new book In Defense of Disciplines.
Perhaps the best thing I can do to describe the book and whet your appetite is to reprint some of the book's back-cover blurbs, which get things just right. So here goes:
“Jerry Jacobs’s new book provides the missing counterpoint to the fanfare for interdisciplinary collaboration that has swept over much of academe during the last three decades. Thanks to Jacobs’s
creative and painstaking research, we now know that disciplines are not the ‘silos’ they are so often made out to be; instead, they are surprisingly open to good ideas and new methods developed
elsewhere. Nor are universities rigidly bound to the disciplines—instead, they routinely foster interdisciplinary work through dozens of organized research centers. This book is more than a
necessary corrective. It is a well-crafted piece of social science, equally at home in the worlds of intellectual history, organizational studies, and quantitative methods. It deserves to be read by
all who care about the future of universities—defenders and critics of the disciplines alike.” (Steven G. Brint, University of California, Riverside)
“At a time of undue hoopla about interdisciplinarity, this is a sobering, highly readable, and data-driven defense of retaining disciplinary units as the primary mode of organizing research
universities. A must read for those concerned with the future of knowledge innovation.” (Myra H. Strober, Stanford University)
“This is a timely, subtle and much needed evaluation of interdisciplinarity as a far reaching goal sweeping around the globe. Jerry Jacobs sets new standards of discussion by documenting with great
new data the long term fate of interdisciplinary fields and the centrality of disciplines to higher education and the modern research university.” (Karin Knorr Cetina, University of Chicago)
Sunday, March 9, 2014
Alexi Onatski has an interesting recent paper, "Asymptotic Analysis of the Squared Estimation Error in Misspecified Factor Models."
There's also an
Four interesting cases have emerged in the literature, corresponding to two types of data-generating process (exact factor structure -- diagonal idiosyncratic covariance matrix vs. approximate factor
structure -- non-diagonal idiosyncratic covariance matrix) and two modes of asymptotic analysis (strong factor structure vs. weak -- see Alexi's paper for the technical definitions, but you can guess).
Much recent work focuses on approximate factor structure and strong factor asymptotics. The classic work of Bai and Ng (2002), for example, is in that tradition. Alexi instead focuses on weak factor asymptotics. Crucially and compellingly, moreover, he focuses on selecting the number of factors \(p\) for best estimation of the common component, since estimation of the common component is typically the goal in factor modeling.
Let's get a bit more precise. The DGP is the usual approximate factor model,
$$
X=\Lambda F^{\prime }+e,
$$ where \(X\) is an \(n\times T\) matrix of data, \(\Lambda\) is an \(n\times r\) matrix of factor loadings, \(F\) is a \(T\times r\) matrix of factors and \(e\) is an \(n\times T\) matrix of idiosyncratic terms.
We want to select \(p\), the number of factors, to get the best principal-component estimate, \(\hat{\Lambda}_{1:p}\hat{F}_{1:p}^{\prime }\), of the common component \(\Lambda F^{\prime }\) under
quadratic loss. That is, the objective is minimization (over time and space) of
$$
L_{p}= \mathrm{tr} \left[ (\hat{\Lambda}_{1:p}\hat{F}_{1:p}^{\prime }-\Lambda F^{\prime })(\hat{\Lambda}_{1:p}\hat{F}_{1:p}^{\prime }-\Lambda F^{\prime })^{\prime }\right] /\left( nT\right).
$$Among many other things, Alexi shows that under weak-factor asymptotics the optimal number of factors is not generally the "true" number!
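To fix ideas, here is a small simulation sketch (mine, not Onatski's code) that computes the loss \(L_p\) of the rank-\(p\) principal-component (truncated-SVD) estimate of the common component for several candidate \(p\). With idiosyncratic noise strong relative to the factors, the loss-minimizing \(p\) can differ from the true \(r\); all settings below are illustrative.

```python
import numpy as np

def common_component_loss(X, Lambda, F, p_max):
    """Quadratic loss tr[(C_hat - C)(C_hat - C)'] / (nT) of the rank-p
    truncated-SVD estimate of the common component, for p = 1..p_max."""
    n, T = X.shape
    C = Lambda @ F.T                            # true common component
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    losses = []
    for p in range(1, p_max + 1):
        C_hat = (U[:, :p] * s[:p]) @ Vt[:p]     # rank-p approximation of X
        losses.append(np.sum((C_hat - C) ** 2) / (n * T))
    return losses

rng = np.random.default_rng(0)
n, T, r = 60, 120, 2
Lambda = rng.normal(size=(n, r))
F = rng.normal(size=(T, r))
X = Lambda @ F.T + rng.normal(scale=2.0, size=(n, T))  # noisy idiosyncratics
print(common_component_loss(X, Lambda, F, p_max=5))
```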
All told, I find highly compelling the move to loss functions explicitly based on divergence between the true and estimated common component. I'm a little less sure how I feel about the move to
weak-factor asymptotics, as my gut tells me that the common component in many macroeconomic and financial environments is driven by a few strong factors, and not much else. We'll see. In any event
Alexi's contribution is refreshing, original, and valuable.
By the way, I first saw the paper at the SoFiE Lugano conference (with the Swiss Finance Institute (SFI) and Labex Louis Bachelier, "Large-Scale Factor Models in Finance," generously hosted by The Faculty of Economics of the Università della Svizzera Italiana, Lugano, Switzerland). The title of Alexi's talk was "Asymptotic Analysis of the Squared Estimation Error in Misspecified Factor Models," but the actual paper is the one cited above.
Here's the Lugano program FYI, as there were lots of other interesting papers as well.
Invited Session 1 (Chair: E. Renault)
R. Korajczyk (Northwestern University): Small-sample Properties of Factor Mimicking Portfolio Estimates (with Zhuo Chen and Gregory Connor)
Contributed Session 1: Factor Models and Asset Pricing (Chair: F. Trojani)
S. Ahn, A. Horenstein, N. Wang: Beta Matrix and Common Factors in Stock Returns
T. Chordia, A. Goyal, J. Shanken: Cross-Sectional Asset Pricing with Individual Stocks: Betas vs. Characteristics
P. Gagliardini, E. Ossola, O. Scaillet: Time-Varying Risk Premium in Large Cross-Sectional Equity Datasets
Poster Session
E. Andreou, E. Ghysels: What Drives the VIX and the Volatility Risk Premium?
T. Berrada, S. Coupy: It Does Pay to Diversify
S. Darolles, S. Dubecq, C. Gouriéroux: Contagion Analysis in the Banking Sector
D. Karstanje, M. van der Wel, D. van Dijk: Common Factors in Commodity Futures Curves
P. Maio, D. Philip: Macro Factors and the Cross-Section of Stock Returns
Contributed Session 2: Dynamic Factor Models (Chair: M. Deistler)
G. Fiorentini, E. Sentana: Dynamic Specification Tests for Dynamic Factor Models
M. Forni, M. Hallin, M. Lippi, P. Zaffaroni: One-Sided Representations of Generalized Dynamic Factor Models
Invited Session 2 (Chair: E. Ghysels)
C. Gourieroux (CREST and University of Toronto): Positional Portfolio Management (with P. Gagliardini and M. Rubin)
Contributed Session 3: Systemic Risk (Chair: S. Darolles)
J. Boivin, M. P. Giannoni, D. Stevanovic: Dynamic Effects of Credit Shocks in a Data-Rich Environment
S. Giglio, B. Kelly, S. Pruitt, X. Quiao: Systemic Risk and the Macroeconomy: An Empirical Evaluation
B. Schwaab, S. J. Koopman, A. Lucas: Modeling Global Financial Sector Stress and Credit Market Dislocation
Invited Session 3 (Chair: F. Diebold)
Alexei Onatski (University of Cambridge): Loss-Efficient Selection of the Number of Factors
Contributed Session 4: Model Specification (Chair: O. Scaillet)
M. Carrasco, B. Rossi: In-sample Inference and Forecasting in Misspecified Factor Models
F. Pegoraro, A. Siegel, L. Tiozzo Pezzoli: Specification Analysis of International Treasury Yield Curve Factors
F. Kleibergen, Z. Zhan: Unexplained Factors and their Effects on Second Pass R-Squared's and t-Tests
Wednesday, March 5, 2014
Check out the web page of my Penn Statistics buddy, Mike Steele, probabilist, statistician and mathematician extraordinaire. (And that's just his day job. At night he battles the hard stuff -- financial markets.) Among other things, you'll like his Favorite Quotes and Semi-Random Rants.
Premultiply transform functions before interpolation on unequal transform function primitives
Resolve if current spec behavior or the behavior suggested by Mozilla should be chosen.
If two transform lists should be interpolated but the transform function pairs don't match, the spec currently wants the UA to premultiply all functions on each list and interpolate the resulting matrices. For example,
translate() rotate()
translate() scale()
get to
matrix()
matrix()
and interpolated.
Problem Statement
Mozilla asks if we can make the decision on a per transform function pair basis. For the example above:
translate() rotate()
translate() scale()
get to
translate() matrix()
translate() matrix()
The first function pair gets interpolated numerically, the second needs matrix interpolation.
I would suggest keeping the specified behavior for performance reasons.
Imagine the following example:
translate() rotate() scale()
scale() translate() rotate()
would get to
matrix() matrix() matrix()
matrix() matrix() matrix()
Each function pair would need matrix decomposition and interpolation. I also don't see any benefit to this approach.
Note: For WebKit (for Safari), the HW acceleration needs premultiplied transform functions. Therefore WebKit can't switch to Mozilla's preferred approach anyway.
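To illustrate the performance argument, here is a rough numerical sketch of the premultiplication step using 3×3 homogeneous 2D matrices. It is my own illustration, not normative: naive componentwise interpolation stands in for the spec's decompose/interpolate/recompose procedure.

```python
import numpy as np

def translate(tx, ty=0.0):
    return np.array([[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]])

def scale(sx, sy=None):
    sy = sx if sy is None else sy
    return np.array([[sx, 0.0, 0.0], [0.0, sy, 0.0], [0.0, 0.0, 1.0]])

def rotate(deg):
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def premultiplied(fns):
    m = np.eye(3)
    for f in fns:
        m = m @ f                 # collapse the whole list into one matrix
    return m

# 'translate() rotate()' vs 'translate() scale()': the lists don't match
# pairwise, so each list is collapsed to a single matrix and interpolated once.
a = premultiplied([translate(0.0), rotate(0.0)])
b = premultiplied([translate(100.0), scale(2.0)])
half = 0.5 * a + 0.5 * b          # stand-in for matrix interpolation at t = 0.5
```

Under the per-pair proposal, the matched translate() pair would interpolate numerically and only the rotate()/scale() pair would need a matrix; the cost is one decomposition per mismatched pair instead of one per list.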
Links to More Info
• Transform functions should be interpolated per pair, if the number of transforms and the types match, independent of the actual transform function. That means special cases in the spec, like premultiplying transform functions if one value is perspective, need to get removed.
• Check interpolation behavior of rotate3d in current browsers. Consider numerical interpolation of rotate3d in certain cases.
Say the total sum s, the common difference d, and the last term l of an arithmetic progression satisfy the relation 20*s*d = (l + 5d)^2. Then can we find a relationship between the first term of the series and the common difference? Or does it depend on the number of terms in the series?
Geometry Tutors
Bellevue, WA 98004
Master's in Teaching with Math, English, ESL, and Computers
...Then, I worked overseas for two years preparing high school juniors and seniors to take Cambridge O-Level Exams with math and English. After returning to the US, I worked with middle and high
school students in algebra, geometry, English, and computers. I taught...
Offering 10+ subjects including geometry
Parabola and Line Intersection Points
March 14th 2009, 10:29 PM #1
Junior Member
Dec 2007
Parabola and Line Intersection Points
Here is the problem:
A graphic designer draws a logo involving a parabola ‘sitting’ in a V shape on a set of axes as shown at right.
Find the equation of the parabola, given it is of the form y = kx2 and the points of intersection of the V with the parabola.
You probably can't see the picture, so the equations of the lines are -2x-2 and 2x-2, and the only point I have for the parabola is (0,0) as the turning point. I'm having trouble with the first
part, finding the equation of the first point, because I can't find k. If I can get this bit, I can work the rest out
Equation of parabola
Hello Steven
Although you haven't said so, I'm guessing that the lines $y=\pm 2x - 2$ are tangents to the parabola. So, on that assumption, we can find the tangent at the point $(x_1, kx_1^2)$, and then
compare it to one of these equations. Like this:
$y = kx^2$
$\Rightarrow \frac{dy}{dx}= 2kx =2kx_1$ at $(x_1, kx_1^2)$
So the tangent at this point is
$y-kx_1^2 =2kx_1(x-x_1)$
i.e. $y =2kx_1x -kx_1^2$
So if this is the same as the line $y = 2x -2$, we can compare coefficients to get:
$2kx_1 =2$ and $-kx_1^2 = -2$
$\Rightarrow kx_1= 1$ and $kx_1^2 =2$
$\Rightarrow k =0.5$ and $x_1 = 2$
Can you complete it now?
Ok now I'm really lost
Last edited by Stevo_Evo_22; March 15th 2009 at 02:07 AM.
Hello Steven
Although you haven't said so, I'm guessing that the lines $y=\pm 2x - 2$ are tangents to the parabola. So, on that assumption, we can find the tangent at the point $(x_1, kx_1^2)$, and then
compare it to one of these equations. Like this:
$y = kx^2$
$\Rightarrow \frac{dy}{dx}= 2kx =2kx_1$ at $(x_1, kx_1^2)$
So the tangent at this point is
$y-kx_1^2 =2kx_1(x-x_1)$
i.e. $y =2kx_1x -kx_1^2$ This is the equation of the tangent to the parabola at any general point $\color{red}(x_1, kx_1^2)$ on the parabola. Do you understand how I got this?
So if this is the same as the line $y = 2x -2$(i.e. one of the two lines you were given in the question), we can compare coefficients to get:
Coefficients of $\color{red}x$ in the two equations are the same: $2kx_1 =2$ and the constant terms are the same: $-kx_1^2 = -2$
$\Rightarrow kx_1= 1$ and $kx_1^2 =2$
Divide the second equation by the first: $\color{red}\frac{kx_1^2}{kx_1} = \frac{2}{1}$
$\Rightarrow k =0.5$ and $x_1 = 2$
Can you complete it now?
I've put further explanations in red. What this shows is that the parabola $y = 0.5x^2$ has a tangent at the point $(2, 2)$ whose equation is $y = 2x -2$; i.e. one of the lines that you were
given. Because the parabola is symmetrical about the y-axis, the tangent at $(-2,2)$ will be the other line you were given: $y = -2x -2$.
Is that OK now?
Thankyou very much for your help, and sorry I couldn't get back to you sooner
Unfortunately I still do not understand how you did it, because I've never seen something like that before, however I was able to work out the question using the discriminant.
Thanks again,
Hello Steven
Sorry if my explanation used stuff you haven't covered in your course. I assumed that you had done some calculus, which it now appears you haven't.
Yes, you can solve the problem without calculus using the discriminant of a quadratic equation. I imagine you said something like this:
The curve $y = kx^2$ meets the line $y = 2x - 2$ where
$kx^2 = 2x - 2$
$\Rightarrow kx^2 - 2x +2 = 0$
The line is a tangent to the curve if the roots of this equation are equal. (This is the hard bit - do you understand it?)
i.e. if $(-2)^2 - 4\cdot k \cdot 2 = 0$
$\Rightarrow 8k = 4$
$\Rightarrow k = 0.5$, which is the same answer as I got using calculus methods.
$\Rightarrow x = ...$ etc.
Yes you're correct-I have yet to do calculus
So as you said, the line only touches the curve so the discriminant must equal 0 for there to only be one solution. All I had to do was simply solve for k as you did with calculus...
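The discriminant argument is easy to sanity-check numerically; this short sketch (illustrative only, not from the thread) confirms that k = 0.5 makes y = 2x - 2 touch y = kx^2 at the single point (2, 2):

```python
# Tangency of y = 2x - 2 to y = k x^2: the intersection equation is
# k x^2 - 2x + 2 = 0, so the discriminant b^2 - 4ac = 4 - 8k must vanish.
k = 0.5
a, b, c = k, -2.0, 2.0
disc = b**2 - 4 * a * c
x1 = -b / (2 * a)             # repeated root at tangency
y1 = k * x1**2
print(disc, (x1, y1))         # 0.0 and the tangent point (2.0, 2.0)
# By symmetry about the y-axis, y = -2x - 2 touches the parabola at (-2, 2).
```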
[FOM] Machine-checkable proofs: NP, or worse?
Mitch Harris maharri at gmail.com
Fri Jun 22 09:50:52 EDT 2007
On 6/20/07, someone wrote:
> On Monday 18 June 2007 00:21, Mitch Harris wrote:
> > Since 'winnability in nxn checkers' is in PSPACE', the
> > proof/calculation for 8x8 might be bad. And even if some one claims to
> > have one, it may take a long time, as far as we can guess now, even
> > exponential in the size of the proof to check it.
> since you're saying that checking may take time exponential in the size of the
> proof, maybe you could elaborate on your notions of "proof" and "checking".
> Don't most proof systems allow checking in polynomial (or even linear) time in
> the size of the proof?
Yes, I terribly misspoke in that last sentence of mine, using the
terms ambiguously, not only conflating proof verification with proof
finding, but also informal vs. formal proof.
In that paragraph above, I was writing about -checking- an -informal-
proof, where so-called 'trivial' steps in checking a human oriented
proof may request the reader to use a well known decision procedure
(solving a linear system of equations, deciding satisfiability of a
boolean formula, deciding equality of two regular expressions, etc up
the complexity hierarchy). Such a 'trivial' step, having arbitrarily
bad complexity, is essentially proof -finding- (even if a proof or
proof witness is not produced).
As to the original question of proof finding (for checkers), for a
-formal- proof, I don't see how checking it could be anything other
than poly time in the size of the proof (linear time with appropriate
data structures?) in the size of the proof (almost by definition of a
formal proof) since all that is being checked is correctness of each
step. But of course, the size of the proof could be exponential in the
size of the theorem statement (e.g. any problem complete for EXPSPACE)
Mitch Harris
More information about the FOM mailing list
December 29th 2009, 01:46 AM #1
Dec 2008
hello i'm new to things like these and I need your help guys:
two construction companies make bids of X and Y on a remodeling project. The joint p.d.f. of X and Y is uniform on the space 2<x<2.5, 2<y<2.3.
If X and Y are within 0.1 of each other,the companies will be asked to rebid; otherwise the lower bidder will be awarded the contract.
What is the probability that they will be asked to rebid?
I have no idea can you give me at least a guide or hint?
Thank you
Last edited by mr fantastic; January 18th 2010 at 01:54 PM. Reason: Restored deleted question.
First draw a diagram. The support is a rectangle defined by 2<x<2.5 and 2<y<2.3. Therefore the joint pdf is f(x, y) = 20/3 (why?).
Now note that if the two bids are within 0.1 of each other, then the region of interest lies between the lines y = x - 0.1 and y = x + 0.1.
The answer can be found using simple geometry - no need for calculus.
Sorry, I don't really get it.
I tried to integrate 20/3 for y=0.1-x to y=0.1+x and for x=2 to x=2.5 but I do not get the correct answer (which should be 11/30).
Plus, how did you get 20/3?
Thank you.
OK, I got the 20/3:
because they are independent, the joint pdf is that of x times that of y.
But then?
OK, now go back and read my first reply. Draw the two lines. Use geometry to calculate the area of the required region. Multiply that area by 20/3.
By the way, if your calculations lead to the wrong answer, you should post all your calculations for review - perhaps there is a simple error. Using calculus is the long way.
I will use an integral.
Is it int (20/3) for y=x-0.1 to y=x+0.1 and for x=?
I cannot find the values for x.
there is no reason to integrate, just draw and figure out the region of interest.
In my case I have to integrate.
Please, I have a test tomorrow and I want to know how to solve this.
Thank you very much for your help.
A test on New Year's Day?
Have you studied multi-variable calculus, specifically integration over a region? If you have not, then using a calculus approach is pointless. And if simple geometry can get the answer, why are you insisting on using calculus? For all the help it might be, here:
$\int_{x = 2}^{x = 2.2} \int_{y = 2}^{y = x + 0.1} f(x, y) \, dy \, dx + \int^{x = 2.4}_{x = 2.2} \int^{y = 2.3}_{y = x - 0.1} f(x, y) \, dy \, dx$.
Edit: big mistake here due to mistaking a 2.1 for a 2.2 on my diagram. See my later post.
The answer is still wrong though;
it should be 11/30.
Sorry, it worked:
the x you wrote should be from 2.2 to 2.5 in the second integral.
Thank you very much.
By the way, one question:
if we integrated first with respect to x, would we also have to split the integral into two parts?
thank you mr fantastic
YOU must draw the region if you want to integrate
AND once you draw it you might even realize that all you need is the area of that strip.
So, integration isn't even necessary.
I made a mistake in my reading of my diagram (I have to say my reluctance in using calculus here probably contributed to carelessness on my part). The mistake is NOT the 2.5 (integrating to 2.5
is wrong). Did you draw the diagram? It should be very clear that x = 2.5 is NOT on the boundary of the region.
As it happens, if you integrate first wrt y you have to integrate over three separate regions. So it's best to integrate first wrt x. Then:
$\int_{y = 2}^{y = 2.1} \int_{x = 2}^{x = y + 0.1} f(x, y) \, dx \, dy + \int_{y = 2.1}^{y = 2.3} \int_{x = y - 0.1}^{x = y + 0.1} f(x, y) \, dx \, dy$.
But, and this has been said several times, why insist on using calculus to do something that can be done so easily using simple geometry .... (But I suppose if you want to spend 15 minutes in an
exam on a question that can be done in 5 minutes, then that's your choice).
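The 11/30 answer is also easy to sanity-check with a short simulation (an added sketch, not part of the thread): draw X uniform on (2, 2.5), Y uniform on (2, 2.3), and estimate P(|X - Y| <= 0.1).

```python
import random

def estimate_rebid_probability(trials=200_000, seed=1):
    """Monte Carlo estimate of P(|X - Y| <= 0.1) for the two bids."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = rng.uniform(2.0, 2.5)  # first company's bid
        y = rng.uniform(2.0, 2.3)  # second company's bid
        if abs(x - y) <= 0.1:
            hits += 1
    return hits / trials

print(estimate_rebid_probability())  # about 0.3667, i.e. close to 11/30
```

With 200,000 trials the estimate agrees with the exact geometric answer 11/30 to about two decimal places.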
Nested Squares NetLogo Model
Produced for the book series "Exercises for Artificial Intelligence";
Author: W. J. Teahan; Publisher: Ventus Publishing Aps, Denmark.
view/download model file: Nested-Squares.nlogo
WHAT IS IT?
This model is provided as a solution for Exercise 3.6.8 for the Exercises for Artificial Intelligence book series.
The model draws nested squares. In one configuration, it will draw 6 nested squares using 6 different methods illustrating the versatility of NetLogo for drawing. In another configuration, it will
draw the nested squares using the same method.
The six different methods for drawing the squares are as follows:
Type 1: Draws the square using a single turtle agent with the shape "square 2".
Type 2: Draws the square using a single turtle agent with the setxy command.
Type 3: Draws the square using a single turtle agent and the stamp command.
Type 4: Draws the square using multiple turtle agents.
Type 5: Draws the square using four turtle agents with link agents to draw the lines between them.
Type 6: Draws the square using patch agents.
The purpose is to illustrate how versatile NetLogo is at drawing. A secondary purpose is to show how turtle, patch and link agents in NetLogo are programmed, and to illustrate the use of the turtle
drawing commands. A tertiary purpose is to illustrate how patterns can emerge through the execution of relatively straightforward commands. (If further interested, the reader should research the
topics of L-systems and fractals).
The model uses a single turtle agent for Types 1 to 3, and draws the squares by manipulating the shape and size of the turtle to animate the squares growing larger, or by moving the turtle around the
outer edge of the squares using either the turtle's pen or by stamping successive locations.
The model uses multiple turtle agents for Type 4, and once drawn, these turtles remain in the same place. i.e. There are many little square turtles that are used to draw the outside of the larger
squares, like square Lego bricks all placed alongside each other.
The model uses link agents between four hidden turtle agents at the four corners for Type 5.
The model uses patch agents for Type 6. Like for the Type 4 turtle agents, each patch agent is used to draw the outside of the larger squares, like square Lego bricks all placed alongside each other.
HOW TO USE IT
Press the setup button first. This will clear the environment. Then choose the type of drawing you want using the type-of-drawing chooser. If you wish to draw the nested squares using six different
methods, then set this chooser to "Draw different squares" or "Types 1 to 6 repeated". Otherwise the squares will be drawn all the same using a single type as specified by the chooser.
If you wish the squares to be drawn from the inside out, then set the draw-inside-out? switch to On. The innermost-size slider value defines the size of the innermost square, and therefore the size of the hole at the centre. The number-of-squares-the-same slider value defines how many squares to draw when the type-of-drawing chooser is set to drawing the squares all the same (Types 1 to 6).
To draw the squares, press either the draw all squares once button, which will draw them once only, or if you wish them to be drawn continuously, then press the draw all squares forever button.
The model's Interface buttons are defined as follows:
- setup: This will clear the environment.
- draw all squares once: This will draw all the nested squares once only.
- draw all squares forever: This will draw all the nested squares continuously.
- draw-innermost-square: This will draw the square in the centre, using the Type 1 method.
- draw-inner-square: This will draw the next square out from the centre, using the Type 2 method.
- draw-middle-inner-square: This will draw the next square out, using the Type 3 method.
- draw-middle-outer-square: This will draw the next square out, using the Type 4 method.
- draw-outer-square: This will draw the next square out, using the Type 5 method.
- draw-outermost-square: This will draw the outside square, using the Type 6 method.
The model's Interface sliders, chooser and switch are defined as follows:
- type-of-drawing: This defines the method used for drawing. The different types are defined as follows:
"Draw different squares": This will draw six different squares using different methods - defined as Type 1 to Type 6 (see above).
"Type 1 (draw the same squares)": This will draw the squares all the same using the Type 1 method.
"Type 2 (draw the same squares)": This will draw the squares all the same using the Type 2 method.
"Type 3 (draw the same squares)": This will draw the squares all the same using the Type 3 method.
"Type 4 (draw the same squares)": This will draw the squares all the same using the Type 4 method.
"Type 5 (draw the same squares)": This will draw the squares all the same using the Type 5 method.
"Type 6 (draw the same squares)": This will draw the squares all the same using the Type 6 method.
"Types 1 to 6 repeated": This will repeat drawing the nested squares, cycling through Types 1 to 6.
- draw-inside-out?: If this is set to On, the squares will be drawn from the inside out with the innermost square drawn first, otherwise they will be drawn from the outside in. (Try changing this as
the squares are being drawn).
- innermost-size: This sets the size of the innermost square. As a consequence, it also defines the size of the inner hole.
- number-of-squares-the-same: This sets the number of squares to be drawn when the type-of-drawing chooser is not set to "Draw different squares".
THINGS TO NOTICE

Notice how the inner square (Type 2) has rounded corners. This is because it is drawn using the turtle's pen.
Notice how the middle inner square has very faint lines where the turtle squares have not overlapped with each other. To get rid of this, change "set size 1" to "set size 1.1" in the code when the
first turtle is created.
Notice what happens when the innermost-size slider value is set to a large number, or the number-of-squares-the-same slider is set to a large number when the type-of-drawing chooser is not set to
"Draw different squares".
Notice which methods seem to take longer than the others at drawing the squares.
Notice that the Type 1 method (using a single turtle with shape "square 2") only allows one square to be visible at any one time. Why is this? What happens when you choose Type 1 and press the draw
all squares forever button?
Notice that the Type 2 (using a single agent's pen) and Type 5 methods (using four hidden turtle agents plus link agents between them) draw some of the sides of the squares in the wrong direction
when the size of the side gets long (towards the edge of the world rather than towards the other corners). Why is this?
Notice that for Type 6, if you slow down the Interface speed slider, it will show the individual patches being drawn at random locations around the edge of each square.
THINGS TO TRY

Try slowing down the speed using the Interface's speed slider. You will be able to see how the different squares are drawn.
Try running this model in 3D mode for the various configurations. What happens and why?
EXTENDING THE MODEL

This demonstrates how to draw using three different types of agents - turtle agents, link agents and patch agents.
Are there further ways for drawing the squares?
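The patch-based idea behind Type 6 can also be sketched outside NetLogo. The Python function below is illustrative only (the name and grid representation are not part of the model): it marks the grid cells that a per-cell agent would colour for one square outline.

```python
def square_outline_cells(halfedge):
    """Cells a per-cell drawing (like Type 6's patches) would colour
    for a square of the given half edge, centred on the origin."""
    cells = set()
    for x in range(-halfedge, halfedge + 1):
        for y in range(-halfedge, halfedge + 1):
            on_vertical = abs(x) == halfedge and abs(y) <= halfedge
            on_horizontal = abs(y) == halfedge and abs(x) <= halfedge
            if on_vertical or on_horizontal:
                cells.add((x, y))
    return cells

# A square of half edge h has 8h boundary cells:
# four sides of 2h+1 cells each, minus the 4 corners counted twice.
print(len(square_outline_cells(3)))  # 24
```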
RELATED MODELS

See the Nested Triangles model.
HOW TO CITE

To refer to this model in publications, please use:
Teahan, W. J. (2010). Nested Squares NetLogo model.
Exercises for Artificial Intelligence. Ventus Publishing Aps.
; Nested Squares model.
; Copyright 2010 William John Teahan. All Rights Reserved.
to setup
  clear-all ; clear the environment
end
to draw-all-squares
  ifelse (type-of-drawing = "Draw different squares")
  [ draw-all-squares-differently ]
  [ ifelse (type-of-drawing != "Types 1 to 6 repeated")
    [ draw-all-squares-the-same-single-method ]
    [ draw-all-squares-the-same-all-methods ]
  ]
end
to draw-all-squares-differently
  ; Draws the six nested squares six different ways.
  ifelse (draw-inside-out?)
  [
    draw-square-type-1 lime (innermost-size)
    draw-square-type-2 orange (innermost-size + 1)
    draw-square-type-3 yellow (innermost-size + 3)
    draw-square-type-4 magenta (innermost-size + 5)
    draw-square-type-5 violet (innermost-size + 8)
    draw-square-type-6 blue (innermost-size + 11)
  ]
  [
    draw-square-type-6 blue (innermost-size + 11)
    draw-square-type-5 violet (innermost-size + 8)
    draw-square-type-4 magenta (innermost-size + 5)
    draw-square-type-3 yellow (innermost-size + 3)
    draw-square-type-2 orange (innermost-size + 1)
    draw-square-type-1 lime (innermost-size)
  ]
end
to draw-nested-squares-all-the-same-type [drawing-type]
  ; Draws the nested squares using the same method.
  let size-total 0
  let size-inc 2
  if (not draw-inside-out?)
  [ ; draw squares decreasing in size
    set size-total (number-of-squares-the-same * size-inc)
    set size-inc -2
  ]
  let colour 15
  repeat number-of-squares-the-same
  [
    if (drawing-type = "Type 1 (draw the same squares)")
    [ draw-square-type-1 colour (innermost-size + size-total) ]
    if (drawing-type = "Type 2 (draw the same squares)")
    [ draw-square-type-2 colour (innermost-size + size-total) ]
    if (drawing-type = "Type 3 (draw the same squares)")
    [ draw-square-type-3 colour (innermost-size + size-total) ]
    if (drawing-type = "Type 4 (draw the same squares)")
    [ draw-square-type-4 colour (innermost-size + size-total) ]
    if (drawing-type = "Type 5 (draw the same squares)")
    [ draw-square-type-5 colour (innermost-size + size-total) ]
    if (drawing-type = "Type 6 (draw the same squares)")
    [ draw-square-type-6 colour (innermost-size + size-total) ]
    set colour colour + 10
    set size-total size-total + size-inc
  ]
end
to draw-all-squares-the-same-single-method
  ; Draws the nested squares using the same method.
  let size-inc 0
  let colour 15
  draw-nested-squares-all-the-same-type type-of-drawing
end
to draw-all-squares-the-same-all-methods
  ; Draws the six nested squares using the same type, repeated for all six methods.
  draw-nested-squares-all-the-same-type "Type 1 (draw the same squares)"
  draw-nested-squares-all-the-same-type "Type 2 (draw the same squares)"
  draw-nested-squares-all-the-same-type "Type 3 (draw the same squares)"
  draw-nested-squares-all-the-same-type "Type 4 (draw the same squares)"
  draw-nested-squares-all-the-same-type "Type 5 (draw the same squares)"
  draw-nested-squares-all-the-same-type "Type 6 (draw the same squares)"
end
to draw-square-type-1 [colour this-size]
  ;; Draws the square using a single turtle agent with the shape "square 2".
  create-turtles 1
  [
    set shape "square 2"
    set size this-size
    set color colour
  ]
end
to draw-square-type-2 [colour this-size]
  ;; Draws the square using a single turtle agent with the setxy command.
  create-turtles 1
  [
    hide-turtle ; we do not need to see the turtle
    set color colour
    let halfedge int (this-size / 2)
    setxy (- halfedge) (halfedge) ; upper left corner
    pen-down ; use the turtle's drawing pen to draw the square
    set pen-size 12
    setxy (halfedge) (halfedge) ; upper right corner
    setxy (halfedge) (- halfedge) ; lower right corner
    setxy (- halfedge) (- halfedge) ; lower left corner
    setxy (- halfedge) (halfedge) ; upper left corner
  ]
end
to draw-line-of-turtles [colour this-turtle number x-inc y-inc stamped?]
  ;; Draws a line of turtles. The variable number is the number of turtles to draw.
  ;; The variables x-inc, and y-inc, specify whether to draw the line horizontally
  ;; or vertically (they control how much the x and y co-ordinates are incremented each
  ;; iteration). If stamped? is true, it will use a single turtle and stamp it as it is
  ;; drawn successively. Otherwise it will hatch multiple turtles as it goes across.
  repeat number
  [
    ask this-turtle
    [
      ifelse stamped?
      [ stamp ; leave an imprint of the turtle's shape at this position
        setxy (xcor + x-inc) (ycor + y-inc) ] ; move to next position
      [ hatch 1
        [ set this-turtle self
          setxy (xcor + x-inc) (ycor + y-inc) ] ] ; change position for new turtle
    ]
  ]
end
to draw-square-type-3 [colour this-size]
  ;; Draws the square using a single turtle agent and the stamp command.
  let this-turtle nobody
  create-turtles 1
  [
    set shape "square 3"
    ; If you use the shape "square" here, then there will be gaps between each turtle's shape.
    ; The "square 3" shape is the "square" shape but edited using the Turtle Shapes Editor (in
    ; the Tools menu) so that the square fits the entire space. This will still leave faint
    ; gaps between adjacent turtles, however (which can be changed by changing the turtle's size).
    let halfedge (this-size / 2)
    setxy (- halfedge) (halfedge) ; start off at top left corner of the square
    set this-turtle self
    set size 1 ; this results in the turtle squares not overlapping
    ; change this to 1.1 to get rid of them
    set color colour
  ]
  ; Draw the horizontal line of turtles across the top.
  draw-line-of-turtles colour this-turtle this-size 1 0 true
  ; Draw the vertical line of turtles down to the bottom right.
  draw-line-of-turtles colour this-turtle this-size 0 -1 true
  ; Draw the horizontal line of turtles across to the bottom left.
  draw-line-of-turtles colour this-turtle this-size -1 0 true
  ; Draw the vertical line of turtles back up to the top left.
  draw-line-of-turtles colour this-turtle this-size 0 1 true
end
to draw-square-type-4 [colour this-size]
  ;; Draws the square using multiple turtle agents.
  let this-turtle nobody
  let halfedge 0
  create-turtles 1
  [
    set this-turtle self
    set size 1 ; this results in the turtle squares not overlapping
    ; change this to 1.1 to get rid of them
    set color colour
    set shape "square 3"
    ; If you use the shape "square" here, then there will be gaps between each turtle's shape.
    ; The "square 3" shape is the "square" shape but edited using the Turtle Shapes Editor (in
    ; the Tools menu) so that the square fits the entire space.
    set halfedge (this-size / 2)
    setxy (- halfedge) (halfedge) ; start off at top left corner of the square
  ]
  ; Draw the horizontal line of turtles across the top.
  ask this-turtle [ setxy (- halfedge) (halfedge) ] ; start off line at top left corner of the square
  draw-line-of-turtles colour this-turtle this-size 1 0 false
  ; Draw the vertical line of turtles down to the bottom right.
  ask this-turtle [ setxy (halfedge) (halfedge) ] ; start off line at top right corner of the square
  draw-line-of-turtles colour this-turtle this-size 0 -1 false
  ; Draw the horizontal line of turtles across to the bottom left.
  ask this-turtle [ setxy (halfedge) (- halfedge) ] ; start off line at bottom right corner of the square
  draw-line-of-turtles colour this-turtle this-size -1 0 false
  ; Draw the vertical line of turtles back up to the top left.
  ask this-turtle [ setxy (- halfedge) (- halfedge) ] ; start off line at bottom left corner of the square
  draw-line-of-turtles colour this-turtle this-size 0 1 false
end
to draw-square-type-5 [colour this-size]
  ;; Draws the square using four turtle agents with link agents to draw the lines between them.
  let halfedge int (this-size / 2)
  let t1 nobody
  let t2 nobody
  let t3 nobody
  let t4 nobody
  create-turtles 1
  [
    hide-turtle ; we do not need to see the turtle
    setxy (- halfedge) (halfedge) ; upper left corner
    set t1 self
  ]
  create-turtles 1
  [
    hide-turtle ; we do not need to see the turtle
    setxy (halfedge) (halfedge) ; upper right corner
    set t2 self
  ]
  create-turtles 1
  [
    hide-turtle ; we do not need to see the turtle
    setxy (halfedge) (- halfedge) ; lower right corner
    set t3 self
  ]
  create-turtles 1
  [
    hide-turtle ; we do not need to see the turtle
    setxy (- halfedge) (- halfedge) ; lower left corner
    set t4 self
  ]
  ask t1
  [ create-link-with t2 [ set thickness 1 set color colour ]]
  ask t2
  [ create-link-with t3 [ set thickness 1 set color colour ]]
  ask t3
  [ create-link-with t4 [ set thickness 1 set color colour ]]
  ask t4
  [ create-link-with t1 [ set thickness 1 set color colour ]]
end
to draw-square-type-6 [colour this-size]
  ;; Draws the square using patch agents.
  ;; This procedure is based on Uri Wilensky's code provided by the Box Drawing Example in NetLogo
  ;; Models Library.
  let halfedge int (this-size / 2)
  ask patches
  [
    if (pxcor = (- halfedge) and pycor >= (- halfedge) and pycor <= (0 + halfedge))
    [ set pcolor colour ] ;; ... draws left edge
    if (pxcor = (0 + halfedge) and pycor >= (- halfedge) and pycor <= (0 + halfedge))
    [ set pcolor colour ] ;; ... draws right edge
    if (pycor = (- halfedge) and pxcor >= (- halfedge) and pxcor <= (0 + halfedge))
    [ set pcolor colour ] ;; ... draws bottom edge
    if (pycor = (0 + halfedge) and pxcor >= (- halfedge) and pxcor <= (0 + halfedge))
    [ set pcolor colour ] ;; ... draws upper edge
  ]
end
; Copyright 2010 by William John Teahan. All rights reserved.
; Permission to use, modify or redistribute this model is hereby granted,
; provided that both of the following requirements are followed:
; a) this copyright notice is included.
; b) this model will not be redistributed for profit without permission
; from William John Teahan.
; Contact William John Teahan for appropriate licenses for redistribution for
; profit.
; To refer to this model in publications, please use:
; Teahan, W. J. (2010). Nested Squares NetLogo model.
; Exercises for Artificial Intelligence. Ventus Publishing Aps.
Find the value(s) of c guaranteed by Rolle's Theorem for f(x)= x^2 +3x on [0,2] - WyzAnt Answers
How do you solve this and which answer choice is right and why?
a) c= -3/2
b) c= 0,3
c) Rolle's Theorem does not apply as f is not continuous on [0,2]
d) Rolle's Theorem does not apply as f(0) does not equal f(2)
e) none of these
1 Answer
This is straightforward based upon the definition of Rolle's Theorem:
"If a real-valued function ƒ is continuous on a closed interval [a, b], differentiable on the open interval (a, b), and ƒ(a) = ƒ(b), then there exists a c in the open interval (a, b) such that f'(c) = 0."
The closed interval for f is [0,2]. f(0)= 0 and f(2)= 10. f(a)≠ f(b). f(x) does not meet the conditions of Rolle's Theorem. Answer=d.
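The endpoint check is quick to reproduce numerically (an added sketch, not part of the original answer):

```python
def f(x):
    """The function from the problem: f(x) = x^2 + 3x."""
    return x**2 + 3*x

a, b = 0, 2
print(f(a), f(b))  # 0 10 -- since f(a) != f(b), Rolle's Theorem does not apply
```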
Bruce S | {"url":"http://www.wyzant.com/resources/answers/3193/find_the_value_s_of_c_guaranteed_by_rolle_s_theorem_for_f_x_x_2_3x_on_0_2","timestamp":"2014-04-24T21:16:48Z","content_type":null,"content_length":"37632","record_id":"<urn:uuid:f259e35f-6010-4dfc-ac21-bbed657523b3>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00580-ip-10-147-4-33.ec2.internal.warc.gz"} |
Advanced Functions (dont run away) need help badly
October 3rd 2009, 10:06 AM #1
1. Find the number of digits in the expansion of (2^120)(5^125) without any calculator or computer.
2. Find the coordinates of the two points that trisect the line segment with endpoints A(2,3) and B(8,2).
| | = absolute value
3. Given the function f(x) = x^3 - 2x, sketch y = f(|x|).
Sketch g(x) = |x^2 - 1| - |x^2 - 4|.
Sketch the region in the plane to show all points (x,y)
such that |x| + |y| <= 2.
Hello, shane99!
1. Find the number of digits in the expansion of: $(2^{120})(5^{125})$ without any calculator or computer.
We have: . $2^{120}\cdot 5^{125} \;=\;2^{120}\cdot5^{120}\cdot5^5 \;=\;(2^{120}\cdot5^{120})\cdot5^5 \;=\;(2\cdot5)^{120}\cdot5^5 \;=\;10^{120}\cdot5^5$
This is: . $5^5 = 3125$ followed by 120 zeros.
Therefore, the product has 124 digits.
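A brute-force check (added here; it uses the computer the problem forbids, purely to confirm the count):

```python
# 2^120 * 5^125 = 3125 followed by 120 zeros, so 4 + 120 = 124 digits.
n = (2 ** 120) * (5 ** 125)
s = str(n)
print(len(s))  # 124
print(s[:4], s[4:] == '0' * 120)  # 3125 True
```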
2. Here's one way I can think of, but someone else could probably find a simpler way.
Find the equation of the straight line joining A and B.
You want to trisect the line AB, so find 1/3 of the distance between A and B, call it r.
Find the equations of the circles centred at A and B with radius r.
Find where the circles intersect the straight line joining A and B. Take the coordinates of the relevant points (the straight line will intersect the two circles at four points, but you're only
looking for two points).
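A more direct alternative to the circle construction (an added sketch, not qspeechc's method): the trisection points are A + (1/3)(B - A) and A + (2/3)(B - A).

```python
def trisection_points(a, b):
    """Points one third and two thirds of the way from a to b."""
    (ax, ay), (bx, by) = a, b
    p1 = (ax + (bx - ax) / 3, ay + (by - ay) / 3)
    p2 = (ax + 2 * (bx - ax) / 3, ay + 2 * (by - ay) / 3)
    return p1, p2

print(trisection_points((2, 3), (8, 2)))  # (4, 8/3) and (6, 7/3)
```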
3. f(|x|) = |x|^3 - 2|x| = |x^3| - 2|x|
Draw x^3 and then draw |x^3|, that is, where x^3 is negative you 'flip' it about the x-axis (y=0) so it becomes positive.
Draw 2|x|.
Subtract the two graphs (you have to have drawn the two graphs on the same set of axes).
Try sketching g(x) yourself but doing similar reasoning as above.
For $|x|+|y|\leq 2$ use the definition of the absolute value and look at the different cases, i.e.
x<0 and y<0
x<0 and y>0
x>0 and y<0
x>0 and y>0
For instance, in the region x<0 and y>0 we have |x|=-x and |y|=y, and so $|x|+|y|\leq 2$ becomes $-x+y\leq 2$ and sketch this only in the region x<0 and y>0.
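As an added sanity check: |x| + |y| <= 2 is a diamond with vertices (2,0), (-2,0), (0,2), (0,-2), whose area is 8, exactly half of the surrounding 4-by-4 square. A quick Monte Carlo estimate confirms this.

```python
import random

def diamond_area_estimate(trials=200_000, seed=1):
    """Monte Carlo estimate of the area of the region |x| + |y| <= 2."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if abs(rng.uniform(-2, 2)) + abs(rng.uniform(-2, 2)) <= 2)
    return 16 * hits / trials  # scale the hit fraction by the 4x4 bounding area

print(diamond_area_estimate())  # close to 8
```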
1. Find the number of digits in the expansion of (2^120)(5^125) without any calculator or computer.
If I may give my 2 cents. A way to find the number of digits in a huge number is use logs.
We have $\log(2^{120}\cdot 5^{125})=\log(2^{120})+\log(5^{125})=120\log(2)+125\log(5)\approx 123.494850022$
Round up and we see there are 124 digits.
To find the first few digits of this number. Take $10^{\text{the decimal part of the above}}$
We get $10^{.494850022}=3.12500000$
As Soroban said, it is 3125 and then 120 0's.
Though you may need a calculator, it is a cool way to know and I thought I would show it anyway.
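That computation is easy to reproduce (an added sketch, not part of emakarov's post):

```python
import math

# floor of the base-10 log, plus one, is the digit count;
# the fractional part of the log recovers the leading digits.
log_total = 120 * math.log10(2) + 125 * math.log10(5)
digits = math.floor(log_total) + 1
leading = 10 ** (log_total % 1)
print(digits, round(leading, 6))  # 124 3.125
```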
5*5=25, 25*5=125, 125*5=625, 625*5=3125.
Do it the old-fashioned way sans calculator.
f(|x|) = |x^3| - 2|x|
use the definition of the absolute value and look at the different regions:
|x^3| = -x^3 when x^3<0, i.e. when x<0
|x^3| = x^3 when x^3>0, i.e. when x>0
|x| = -x when x<0
|x| = x when x>0
So for x<0
f(|x|) = -x^3 +2x
and for x>0
f(|x|) = x^3-2x
g(x) is done similarly, but is trickier because the regions you have to consider will be more complicated.
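The two piecewise formulas above can be checked numerically (an added sketch):

```python
def f(x):
    """f(x) = x^3 - 2x, the function from problem 3."""
    return x**3 - 2*x

# For x < 0: f(|x|) = -x^3 + 2x;  for x > 0: f(|x|) = x^3 - 2x.
for x in [-1.5, -0.5, 0.5, 1.5]:
    expected = -x**3 + 2*x if x < 0 else x**3 - 2*x
    assert abs(f(abs(x)) - expected) < 1e-12
print("piecewise formulas agree with f(|x|)")
```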
Need more help with 3c)
Math Forum Discussions
Topic: Learning and Mathematics: Papert, mathetics (repost)
Replies: 29 Last Post: Nov 19, 1995 4:46 PM
Re: Learning and Mathematics: Papert, mathetics
Posted: Nov 10, 1995 3:05 AM
Long-term projects which require students to generate and process numerical
data are becoming important supplements to short-answer tests of various
styles. Portfolio assessments reveal the understanding and application of
from Steven S. Means means@belnet.bellevue.k12.wa.us
Math and Technology Teacher at Sammamish High School
Andrea Hall and Eric Sasoon wrote about the need to test for concepts
as well as practice problems of a form already familiar to students.
CS timeline voting: the results are in!
The top ten:
1. Euclid’s Elements: 116 votes
2. Turing’s “On Computable Numbers”: 110 votes
3. Gödel’s Incompleteness Theorem: 107 votes
4. Gödel’s P vs. NP Letter to von Neumann: 106 votes
5. George Boole’s Logic: 88 votes
6. Shor’s Algorithm: 88 votes
7. Wikipedia: 85 votes
8. Claude Shannon’s Digital Logic: 82 votes
9. PRIMES in P: 82 votes
10. Cook-Levin Theorem: 80 votes
The rest:
Al-Khwarizmi’s “On the Calculation with Hindu Numerals”: 79 votes
Bardeen, Brattain, and Shockley Invent Transistor: 79 votes
Babbage’s Analytical Engine: 77 votes
Tim Berners-Lee Invents WWW: 75 votes
Fast Fourier Transform: 73 votes
Brin and Page Create Google: 73 votes
von Neumann Architecture: 71 votes
RSA: 70 votes
Hilbert Calls for Mechanization of Mathematical Reasoning: 69 votes
Simplex Algorithm: 69 votes
Claude Shannon Formalizes Cryptography: 68 votes
Dijkstra’s Algorithm: 68 votes
Gaussian Elimination Described in Ancient China: 67 votes
Quicksort: 65 votes
UNIX and C: 65 votes
Newton’s Method: 64 votes
Leibniz Describes Binary Notation, Calculus Ratiocinator: 64 votes
First Program written by Ada Lovelace: 64 votes
Gauss’s Disquisitiones Arithmeticae: 62 votes
Monte Carlo Method: 62 votes
“Bit” Coined: 62 votes
TeX Typesetting: 62 votes
Ginsparg Creates arXiv: 61 votes
Kleene Invents Regular Expressions: 61 votes
McCarthy Invents LISP: 59 votes
“The Art of Computer Programming”: 59 votes
TCP/IP Protocol: 58 votes
Strassen’s Algorithm: 58 votes
PCP Theorem: 56 votes
Turing Test: 55 votes
Randomized Primality Testing: 55 votes
IP=PSPACE: 55 votes
Scott and Rabin’s Paper on Nondeterminism: 54 votes
Jacquard Loom: 54 votes
Colossus Begins Operation at Bletchley Park: 53 votes
Integrated Circuit: 53 votes
Chomsky Hierarchy: 52 votes
Pascal Builds Arithmetic Machine: 51 votes
First Genome Sequenced: 51 votes
Reed-Solomon Codes: 50 votes
Time Hierarchy Theorem: 50 votes
ARPAnet: 49 votes
Four Color Map Theorem Proved: 49 votes
Linux: 49 votes
Diophantine Equations Proved Undecidable: 46 votes
Feynman Suggests Quantum Computing: 46 votes
Deep Blue Defeats Kasparov: 46 votes
Solomonoff-Kolmogorov-Chaitin Complexity: 44 votes
Lempel-Ziv Data Compression: 43 votes
GPS: 42 votes
Marian Rejewski’s “Bombe” + Alan Turing’s Improvements: 41 votes
Diffie-Hellman Public Key Exchange Protocol: 41 votes
Zuse’s Z1: 40 votes
Viterbi Algorithm: 40 votes
First Email Message: 38 votes
Pseudorandom Generators: 37 votes
Oughtred Invents Slide Rule: 36 votes
FORTRAN: 36 votes
ENIAC: 35 votes
Semaphores: 35 votes
Gottlob Frege’s “Begriffsschrift”: 34 votes
Grace Murray Hopper Creates A-O Compiler: 34 votes
Conway’s Game of Life: 34 votes
Xerox Parc’s Alto With First GUI: 33 votes
Kuttaka Algorithm from Ancient India: 32 votes
Scientific Computing During Manhattan Project: 30 votes
Wilkes, Wheeler, and Gill Define Closed Subroutines: 29 votes
Stroustrup creates C++: 28 votes
Zimmermann creates PGP: 28 votes
Dartmouth Conference Popularizes Term “AI”: 27 votes
Moore’s Law: 27 votes
Boosting in Machine Learning: 27 votes
Codd Proposes Relational Databases: 26 votes
Ethernet Invented: 26 votes
Valiant Proposes PAC-Learning: 26 votes
Stallman Writes GNU Manifesto: 25 votes
Wiesner Proposes Quantum Money and Multiplexing: 24 votes
Antikythera Mechanism: 23 votes
BitTorrent: 23 votes
Low-Density Parity Check Codes: 23 votes
McCulloch and Pitts’ “A Logical Calculus Immanent in Nervous Activity”: 22 votes
Engelbart and English Invent Mouse: 22 votes
Dijkstra’s “Go To Statement Considered Harmful”: 22 votes
Back-Propagation: 22 votes
MIT SAGE Creates First Large-Scale Computer Network: 21 votes
Vannevar Bush Creates First Large-Scale Analog Calculator: 20 votes
IBM Introduces Hard Drive: 20 votes
Checkers Solved: 20 votes
First Packet-Switching Network: 20 votes
Atanasoff and Berry’s Vacuum-tube Computer: 19 votes
Vannevar Bush’s “As We May Think”: 19 votes
Hollerith’s Electromechanical Counting Machine: 18 votes
MIT Builds First Time-Sharing System: 18 votes
First Computer Virus: 18 votes
IEEE Floating-Point Standard: 18 votes
IBM PC: 18 votes
“Spacewar!”, First Computer Game: 17 votes
RISC Architecture: 17 votes
Intel’s 8086: 17 votes
al-Jazari’s Water Clocks and Musical Automata: 17 votes
Edward Lorenz (Re)discovers Chaos Theory: 16 votes
Apollo Guidance Computer: 16 votes
CAPTCHAs: 16 votes
VC Dimension: 16 votes
Macsyma Computer Algebra System: 15 votes
Amazon.com: 15 votes
UNIVAC I: 13 votes
DaVinci Surgical Robot: 13 votes
Mark II Incident Popularizes Word “Bug”: 12 votes
Weizenbaum Creates ELIZA: 12 votes
ASCII: 11 votes
TI Handheld Calculator: 11 votes
Simula 67: 11 votes
MIT Whirlwind I Displays Graphics: 10 votes
Sketchpad, First CAD Software: 10 votes
NCSA Mosaic: 10 votes
Robert Morris’ Computer Worm: 9 votes
Pixar Releases “Toy Story”: 9 votes
Stuxnet Worm: 9 votes
IBM System/360: 8 votes
Mac Hack Chess Program: 7 votes
Microsoft Windows: 7 votes
Sojourner on Mars: 7 votes
BASIC: 6 votes
Apple Macintosh: 6 votes
SETI@home: 6 votes
IBM’s Watson Wins At Jeopardy!: 5 votes
Atari’s Pong: 4 votes
Atlas Computer in Manchester: 4 votes
Norbert Wiener Founds Cybernetics: 3 votes
First ATM in Tokyo: 3 votes
Youtube Launched: 3 votes
VisiCalc: 2 votes
Jevon’s Logic Piano: 1 vote
Apple II: 1 vote
Adobe PostScript: 1 vote
SABRE Travel Reservation System: 0 votes
Fischer-Lynch-Paterson Theorem: 0 votes
Facebook, Twitter Use in Egypt Revolution: 0 votes
First Machine Translation Demonstration: -1 vote
Usenet: -1 vote
Akamai: -2 votes
TX-0: -3 votes
CDC 6600: -3 votes
Compact Disc Invented: -3 votes
Aiken’s Mark I: -4 votes
CM-1 Connection Machine: -4 votes
Whirlwind I Displays Graphics: -5 votes
Floppy Disk Invented: -6 votes
MITS Altair Microcomputer and Microsoft BASIC: -6 votes
Axelrod’s “The Evolution of Cooperation”: -7 votes
Microsoft Office: -7 votes
Pentium FDIV Bug: -7 votes
EDSAC: -8 votes
UNIMATE, First Industrial Robot: -9 votes
CLU Programming Language: -9 votes
1ESS Switching System: -11 votes
UNIVAC Predicts Presidential Election: -12 votes
Stanford Arm: -13 votes
“2001 A Space Odyssey” Introduces HAL: -15 votes
“Spam” Coined: -16 votes
First Denial-of-Service Attack: -17 votes
Y2K Bug: -18 votes
Facebook Launched: -18 votes
Nintendo’s Donkey Kong: -19 votes
“Robot” Coined: -21 votes
CSIRAC: -21 votes
Apple’s iPhone: -21 votes
Slashdot: -27 votes
Godwin’s Law: -29 votes
Asimov’s Three Laws of Robotics: -32 votes
Match.com: -34 votes
de Vaucanson’s Mechanical Duck: -39 votes
von Kempelen’s Mechanical Turk: -52 votes
A few comments:
1. It’s (just-barely) conceivable that the results could have been slightly skewed by the quantum- and complexity-loving readership of this blog.
2. Voters really didn’t like fiction/pop-culture references, mechanical contrivances, or anything that sounded like a publicity stunt. They were much keener on conceptual advances (even to the
extent of putting Gödel well ahead of the transistor).
I need to catch a plane to give the Buhl Lecture at Carnegie Mellon tomorrow, so I’ll leave you to draw any further conclusions.
Kemper Boyd Says:
Comment #1 April 28th, 2011 at 6:32 pm
Dear Scott: I wish you had trumpeted tomorrow’s lecture at CMU a bit earlier! I live in Des Moines and am going to have to drive most of the night to make it on time…I only hope I’m not too punch
drunk from lack of sleep to appreciate the show. Well, got to go gas up the VW and buy some munchies. See you in Pittsburgh!
Steve Huntsman Says:
Comment #2 April 28th, 2011 at 6:52 pm
Maybe I miscounted, but isn’t the 150 mark at either machine translation or Usenet? If so it is amusing that these are the first two entries with negative vote totals.
Scott Says:
Comment #3 April 28th, 2011 at 7:09 pm
Kemper: Sorry about that! I had no idea anyone would drive to such a talk from outside Pittsburgh; I would’ve just stayed home and watched it on the web… (I’m flattered, though!)
Paul Beame Says:
Comment #4 April 28th, 2011 at 7:44 pm
As far as we can tell, Gödel’s letter to von Neumann had zero influence on the direction of CS, and it gets 106 votes.
VisiCalc, which introduced the spreadsheet, gets 2 votes.
Circe Says:
Comment #5 April 28th, 2011 at 8:16 pm
Another anomaly, in my opinion, is the high position of Euclid’s Elements, as compared to the more algorithmic, problem-solving-oriented work of Diophantus, Aryabhata, and Al-Khwarizmi, for example.
Although Euclid founded the “theorem-proof” structure of mathematics, CS in my opinion owes more to the idea of a “mechanical algorithm” which should not require any “intelligence” to use. Looked
at this way, I think the development of decimal arithmetic, and of general algebraic procedures for the solution of equations, is probably much more important in the history of CS.
djm Says:
Comment #6 April 28th, 2011 at 9:37 pm
Amazing, an online poll without a single L. Ron Hubbard or Ayn Rand reference in the top 10!
Sniffnoy Says:
Comment #7 April 28th, 2011 at 9:51 pm
Yes, could you please clarify where the 150 mark is?
Sniffnoy Says:
Comment #8 April 28th, 2011 at 9:54 pm
Oh, I feel silly. That’s easy to determine. The 150 mark falls “between” machine translation and Usenet. Except it actually can’t, since they have the same number of votes (-1). So more properly, the
149 mark falls between Egyptian revolution and machine translation/Usenet, and the 151 mark falls between machine translation/Usenet and Akamai, and it’s up to Scott whether he prefers to include
machine translation or Usenet for #150…
Alpha Omega Says:
Comment #9 April 28th, 2011 at 10:30 pm
This is an absurdly academic list! Euclid’s Elements #1? And how is Konrad Suze, inventor of the world’s first functional program-controlled Turing-complete computer, not on this list?? Please try
again, this is an obvious fail!
Alpha Omega Says:
Comment #10 April 28th, 2011 at 10:31 pm
Excuse my dyslexia, of course that should be Zuse!
Alpha Omega Says:
Comment #11 April 28th, 2011 at 10:55 pm
Sorry, my post was an obvious fail! Dyslexia strikes again!
Scott Says:
Comment #12 April 28th, 2011 at 11:06 pm
Alpha Omega: Not only is Zuse on the list, he got 40 votes!
Also, you do realize this was a poll, right?
Alpha Omega Says:
Comment #13 April 28th, 2011 at 11:09 pm
Ha yes, that’s why my post was an obvious fail. In any case, I feel that building the first computer deserves to be *much* higher! So yes, maybe a new blog-readership is in order!
mercury Says:
Comment #14 April 28th, 2011 at 11:47 pm
Didn’t Kevin Spacey play Konrad Suze in the movie version?
Mehmet Ali Anil Says:
Comment #15 April 29th, 2011 at 12:27 am
I couldn’t find any streams of the talk; are you sure there’s one? Only for guests?
Ralph Says:
Comment #16 April 29th, 2011 at 5:02 am
How much of a bias was there toward Euclid’s Elements by merit of it being at the top of the chronological list? When I saw the page I started voting, but almost stopped (or at least started going much
more quickly) once I saw how long the list was.
Jan Says:
Comment #17 April 29th, 2011 at 5:54 am
Zuse’s Z1 was not a Turing-complete computer; it was program-controlled, but only executed straight-line programs without any control structures. The program also was not stored in memory, but fed
to the machine on hole-punched film rolls.
Xamuel Says:
Comment #18 April 29th, 2011 at 12:09 pm
Euclid probably enjoyed a bit of an unfair advantage since the poll was sorted by time. A better poll would have been randomly sorted differently for every user (though I admit that wouldn’t have
been as much fun). I myself gave Euclid an upvote when I first started, but then later went back and switched it to a downvote (this was before we could cancel a vote) after seeing what kind of stuff
it was up against.
Annony Mouse Says:
Comment #19 April 30th, 2011 at 10:02 pm
Any thoughts about the New Yorker article on quantum computing?
Is your talk online?
Peter Shor Says:
Comment #20 May 1st, 2011 at 5:59 am
If I were Scott, I would throw out the Fischer-Lynch-Paterson theorem (because of the obvious theory bias of the poll participants) and stick in both Machine Translation and Usenet. Mike,
Nancy, Mike, if you’re reading this, please accept my apologies — I still think it’s a wonderful result.
rrtucci Says:
Comment #21 May 1st, 2011 at 10:39 pm
So how was the Bull lecture?
Scott Says:
Comment #22 May 2nd, 2011 at 12:32 pm
rr: Not bad! It’s always nice to have an opportunity to spout some Buhl. You can see my slides here; not sure when the streaming video will become available.
A Says:
Comment #23 May 4th, 2011 at 4:01 pm
From a theory person, it is very depressing to see how narrow-minded people are.
Brian Wang Says:
Comment #24 May 13th, 2011 at 4:17 pm
Any comment about what appears to be solid peer-reviewed proof that D-Wave has been leveraging quantum effects (at least quantum annealing) in their adiabatic quantum computer?
T. Says:
Comment #25 May 15th, 2011 at 5:24 pm
I visited the MIT museum today with a friend, and yes it could definitely use the timeline. There’s room in the hall!
Can Mutuality Be Expressed in OWL?
Imagine I have a property :loves that is not :Symmetric and I'm working in the following closed "space"
:Naoko :loves :Sato .
:Sato :loves :Naoko .
:Ryooko :loves :Naoko .
I can the express this idea in a pidgin of first order logic
AND(:A :loves :B,:B :loves :A) -> :A :mutuallyLoves :B
which infers the new facts
:Naoko :mutuallyLoves :Sato .
:Sato :mutuallyLoves :Naoko .
Can I define this :mutuallyLoves property in OWL?
EDIT: I'd be interested in seeing ways to materialize or query :mutuallyLoves using other tools, like SPARQL, RIF, SPIN, whatever...
asked 06 Nov '11, 14:11
database_animal ♦
accept rate: 15%
I was thinking about property chains, but unfortunately loves o loves o loves is not what you need. If only we could define property intersections in OWL...
Is that the answer, or at least the beginning of an answer? That this is a property intersection and property intersections cause OWL to implode?
I can use SPARQL to materialize :mutuallyLoves, or I can rewrite a SPARQL query that has it as a property. I'd like to make this capability reusable for an arbitrary predicate ?p and available for
forward and backward chaining.
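For a single fixed property, the materialization mentioned above can be written as a plain SPARQL CONSTRUCT (a sketch; prefix declarations omitted):

```sparql
CONSTRUCT { ?a :mutuallyLoves ?b }
WHERE {
  ?a :loves ?b .
  ?b :loves ?a .
}
```

Run against the three sample triples, this produces both :Naoko :mutuallyLoves :Sato and :Sato :mutuallyLoves :Naoko, but nothing for :Ryooko, whose love is unrequited.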
Property intersection is not a language feature of OWL 2 and probably cannot be expressed in terms of existing language features, not even in OWL 2 Full:
Of course, it should be mentioned that your question is more specific than general intersection, as it essentially asks for the intersection of a property and its inverse. But I cannot tell, without
deeper checking, whether this special case can still be expressed.
I think to do this in SPIN, you need to create your own subclass of rdf:Property, for example my:DerivedProperty, and your own property to say what property it is derived from, for example
my:derivedFrom. Then you would say the following:
:mutuallyLoves a my:DerivedProperty .
:mutuallyLoves my:derivedFrom :loves .
The following SPIN rule should create the inferences:
CONSTRUCT {?s ?pd ?o}
WHERE {?pd my:derivedFrom ?p.
?s ?p ?o.
?o ?p ?s.}
You can also throw in
?pd a my:DerivedProperty.
This rule will work collaboratively with SPIN OWL RL rules for symmetry
answered 07 Nov '11, 17:48
accept rate: 21%
We can extend Signified's partial solution that uses a class :MutuallyLoved into a full solution for the property :mutuallyLoves, if we make two additional assumptions.
Firstly, and that's probably easy to accept, we state that if A mutually loves B, then A also (ordinarily) loves B:
(1) :mutuallyLoves rdfs:subPropertyOf :loves .
Secondly, we only consider /true/ love, and, as we all know, true love can only be given to a /single/ person (at most):
(2) :loves rdf:type owl:FunctionalProperty .
Next, we "translate" the class of people who are mutually loved into the class of people who are mutually loving /someone/:
(3) :MutuallyLoved owl:equivalentClass [
rdf:type owl:Restriction ;
owl:onProperty :mutuallyLoves ;
owl:minCardinality "1"^^xsd:nonNegativeInteger ] .
Now let's probe :Naoko's and :Sato's love! As :Naoko is a member of class :MutuallyLoved, there must be, via axiom (3), some Mr. X whom she :mutuallyLoves. Now, as we know from (1), to
:mutuallyLoves people means to :loves them, so we know that :Naoko :loves Mr. X. And as :loves has been determined to be a functional relationship by (2), it turns out from the premise that ":Naoko
:loves :Sato" that Mr. X is, in fact, :Sato himself! Hence, there is no more reason to disbelieve in :Naoko's and :Sato's true love for each other:
:Naoko :mutuallyLoves :Sato .
Lovely story, isn't it! But what about reasoning?
The result is clearly too hard for the OWL 2 RL/RDF rules, with all the property restrictions and existential semantics being involved. I have checked with Ivan Herman's online RL reasoner, with all
additional features turned on, and got a negative result.
The solution /is/ in the scope of the OWL 2 Direct Semantics when applied to the /unrestricted/ OWL 2 Structural Specification, but is unfortunately beyond OWL 2 DL, since already Signified's
example is in conflict with one of the global restrictions of OWL 2 DL ("You shall not apply self-restrictions on properties defined by property chain axioms!"), and my three additional axioms above
make the situation even worse. HermiT 1.3.2 does not confirm the entailment (wrongly claims non-entailment). Pellet 2.3.0 claims entailment, but only after several suspicious warnings that it will
ignore diverse axioms while doing reasoning and that the conclusion ontology is empty, which is clearly not the case, so it seemed to be confused and only accidentally correct. Therefore, it turns
out that using an OWL 2 DL reasoner is not a reliable method. The tests were made with this cleaned-up version of the whole example in RDF/XML:
However, the solution is, formally, in OWL 2 Full. I have checked it with our "naive" approach to OWL 2 Full reasoning presented in http://dx.doi.org/10.1007/978-3-642-22438-6_35 and it worked! I'm
afraid I still don't have a handy reasoner tool for people to easily play with themselves, but whoever is interested can copy and paste the TPTP code at
into any first-order theorem prover (see the link in the file there for FOL reasoners). That's a translation of the example here. The remaining stuff in the file is a sufficient selection of OWL 2
Full semantic conditions to make the example produce the result (the full set of semantic conditions would be a little too much and would be inefficient). Happy OWL 2 Full reasoning! :-)
answered 10 Nov '11, 20:13
Michael Schn... ♦
accept rate: 34%
I think {:mutuallyLoves rdfs:subClassOf :loves .} is a typo, intended to be {:mutuallyLoves rdfs:subPropertyOf :loves .} Overall, an interesting solution, but assumption (2) and lack of tooling
significantly limits its usefulness.
@irene_polikoff: Thanks, fixed it. It was only a typo in the text, the cited code was fine.
Assumption (2) is, of course, the relevant bit to make the story work. There may be other scenarios in which functionality of the basic property from which a mutual relation is to be derived is more
justified, and in such scenarios the pattern will be useful. We don't simply dismiss a new design pattern just because we don't like the example given for it, right?
Concerning lack of tooling: If there is no proper tool to deal with a given problem, the best thing to do is to develop such a tool. :-)
+1, I like the approach, and assumption (2) might hold for some cases.
This is conceptually the same problem as discussed here (aka. property intersection, role conjunction, etc.). The conclusion there is that it's not possible in OWL 2 DL, and probably not possible in
OWL 2 Full.
However, you can use :hasSelf to model the class MutuallyLoved.
:lovedBy owl:inverseOf :loves .
:lovesLovedBy owl:propertyChainAxiom ( :loves :lovedBy ) .
:MutuallyLoved owl:equivalentClass [ owl:hasSelf true ; owl:onProperty :lovesLovedBy ] .
From this, you can see that :Sato and :Naoko are members of :MutuallyLoved...
...but (as far as we can tell), you cannot resolve the cyclical relationship between them using OWL 2. In summary, you can detect cyclical property paths in OWL 2, but you can (probably) only infer
class memberships from such detections, not binary relations.
EDIT: just saw the comments on the answer. Doh! ...anyways, there's a little bit more detail here.
answered 10 Nov '11, 12:52
Signified ♦
accept rate: 38%
Catalan number question (possibly)
June 1st 2013, 10:14 PM #1
A group of 2n people are seated around a circular table. In how many ways can they shake hands simultaneously so that every participant shakes hands with another and no handshakes cross other
handshakes?
My strategy was to set up a correspondence between a set of handshakes and one way of triangulating a 2n-gon.
Then a handshake corresponds to either an edge or a diagonal, and the answer is C[2(n-1)].
But I'm a little unsure of my answer, and I'd like to know if I did it the right way.
I also have two questions.
#1. the meaning of "every participant shakes hands with another"
I took this to mean that no vertex (participant) is disconnected; is this the right interpretation?
At first I thought this meant each participant had to shake hands with all the other participants, but upon making a drawing, I found it impossible.
#2. connection between parentheses
Since the problem said 2n people instead of just n, I wondered if there is a way to set up a correspondence between handshakes and a string of (correctly arranged) parentheses.
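For what it's worth, the count can be checked by a small brute-force recurrence (my own sketch, not part of the original post): if person 1 shakes hands with person 2k, the two arcs cut off on either side must pair up among themselves, which gives the Catalan recurrence.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def handshakes(n):
    """Non-crossing ways for 2n people at a round table to pair up completely."""
    if n == 0:
        return 1
    # Person 1 pairs with person 2k; the arcs of 2(k-1) and 2(n-k) people
    # on either side of that handshake are independent subproblems.
    return sum(handshakes(k - 1) * handshakes(n - k) for k in range(1, n + 1))

def catalan(n):
    return comb(2 * n, n) // (n + 1)

print([handshakes(n) for n in range(1, 7)])               # [1, 2, 5, 14, 42, 132]
print(all(handshakes(n) == catalan(n) for n in range(11)))  # True
```

So the answer for 2n people comes out as the n-th Catalan number C_n, which is also the number of balanced strings of n pairs of parentheses, addressing question #2: walking around the table, write "(" when a handshake opens and ")" when it closes.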
Hasbrouck Heights Math Tutor
...Depending on the subject (and especially for computer subjects) before the first session, I will ask that we speak on the phone and will request the student send me via email information
regarding materials we'll be going over (syllabus, homework assignments, past exams). Feel free to reach ou...
9 Subjects: including precalculus, trigonometry, logic, algebra 1
...From my experience, many of the students' difficulty arises in part by having a gap between the prerequisite concepts that they know and the new concepts they are trying to grasp. As a tutor, I
fill in these gaps because not only will this help them learn the new material with ease, but it will ...
18 Subjects: including algebra 1, algebra 2, calculus, Microsoft Excel
...I also helped the children with their projects, for example the counting caterpillar. I also read out loud to the children and depending on the grade level helped them with their reading
skills. I am currently a Spanish major with a concentration in Linguistics.
13 Subjects: including prealgebra, English, algebra 1, algebra 2
...I have also been tutoring all levels of math from elementary through college for the past two years. I have a Bachelor's Degree in Math and a Master's Degree in Math Education. I am a certified
teacher with three years of experience teaching high school math.
9 Subjects: including algebra 1, algebra 2, calculus, geometry
...I also have a BS degree in math from Va Tech. I am an expert in all math concepts required for Algebra. I am licensed to teach this course in New York state, and I taught it for 5 years (ending
June 2013). I have also taught the courses that precede it (algebra 1 and 2 including trigonometry) a...
10 Subjects: including algebra 1, algebra 2, calculus, prealgebra
Regression for Categorical Data
Regression for Categorical Data is a fine mathematical statistics book.
The standard topics of univariate and multivariate logit and probit models, in all their full mathematical detail, are presented well, if a little dryly. Interesting additional topics including
regularization, tree based methods, and non-parametric methods are well presented, and their connections to the main topics well established.
R software is used in the exercises, with code and data available. However, R is not integrated into the text, i.e. there are no printouts from R with accompanying text explaining them. I would have
liked R to be more integrated, but that would have made the book swell significantly beyond its 550 or so pages.
Each chapter ends with suggestions for further reading, and about 10 exercises. About one third of the exercises make use of the R data sets, the rest are theoretical. They range from the
straightforward, testing the reader’s comprehension of the chapter’s material, to quite interesting, asking for interpretations and the “why” behind some of the ideas.
The book will serve as a great reference. It would also be an excellent text for students who have completed a course at the level of Casella and Berger’s Statistical Inference. Although there
is too much material for a one-semester course, the student will then have it as a useful reference.
The book looks attractive, as all the Cambridge Series in Statistical and Probabilistic Mathematics books do, but to my tired, old eyes some of the plots were difficult to read.
Peter Rabinovitch is a Systems Architect at Research in Motion. He recently defended his PhD thesis and is now trying to sell some lightly used Mallows permutations.
A standard deviation question
February 11th 2011, 10:44 AM #1
I am a conceptual person, so it helps me to understand if I know why we are dong things. In a standard deviation we square the mean and then add it all up. My question is, why do we square the
difference scores? Why not simply take the sum of the absolute values of the differences between each score and the mean?
Well we don't square the mean, we square the difference between each observation and the mean.
The standard deviation describes the average variation around the mean. Conceptually its a measure of how tightly values are centred around the mean.
Two reasons we square each difference is to make them all positive and then to make the bigger differences stand out. This helps bound the smaller differences.
So far as I know, the primary reason why squares were chosen instead of absolute values is that squares are easier to work with analytically (because they are differentiable). But you can make a
good argument for absolute values.
Last edited by awkward; February 11th 2011 at 01:54 PM. Reason: spelling
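A quick numerical illustration of the point above (my own example, not from the thread): with one large deviation in the data, the squared version (standard deviation) weights it much more heavily than the absolute-value version (mean absolute deviation).

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def std_dev(xs):
    """Population standard deviation: root of the mean squared deviation."""
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def mean_abs_dev(xs):
    """Mean of the absolute deviations from the mean."""
    m = mean(xs)
    return sum(abs(x - m) for x in xs) / len(xs)

data = [5, 5, 5, 5, 20]      # mean is 8; one observation deviates a lot
print(std_dev(data))         # 6.0 -- the big deviation dominates after squaring
print(mean_abs_dev(data))    # 4.8
```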
A paper for the "Department of Educational Studies", and the first sentence indicates the author can safely be ignored:
"This paper discusses the reliance of numerical analysis on the concept of the standard deviation, and its close relative the variance."
Then we have the statement:
"The main reason that the standard deviation (SD) was created like this was because the squaring eliminates all negative deviations, making the result easier to work with algebraically" - rather
than the variance is a smooth function so allowing the methods of calculs to be deployed. In fact there is not a mention of calculus in this piece but a lot about algebra, makes me think the
author is not that familiar with calculus and how smoothness helps.
I could go on ...
Last edited by CaptainBlack; February 16th 2011 at 05:11 AM.
Each of 435 bags contains at least one of the following
Each of 435 bags contains at least one of the following [#permalink] 20 Feb 2012, 12:12
Each of 435 bags contains at least one of the following three items: raisins, almonds, and peanuts. The number of bags that contain only raisins is 10 times the number of bags
that contain only peanuts. The number of bags that contain only almonds is 20 times the number of bags that contain only raisins and peanuts. The number of bags that contain only
peanuts is one-fifth the number of bags that contain only almonds. 210 bags contain almonds. How many bags contain only one kind of item?
A. 256
B. 260
C. 316
D. 320
E. 350
Spoiler: OA
Re: Each of 435 bags contains at least one of the following [#permalink] 20 Feb 2012, 13:38
Bunuel (Math Expert):
Fill the diagram step by step:
Attachment: Raisins, almonds, and peanuts.PNG
Also given: there are a total of 435 bags, and 210 bags contain almonds.
From the diagram, 20y=5x --> y=x/4. Now, Total=435={Almonds}+10x+y+x --> 435=210+10x+x/4+x --> x=20 --> the number of bags that contain only one kind of item is the sum of the yellow segments: 10x+x+5x=16x=320.
Answer: D.
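The arithmetic can also be sanity-checked in a couple of lines (an illustrative check; the variable naming follows the solution, with x = bags containing only peanuts):

```python
from fractions import Fraction

# x = bags with only peanuts; then only raisins = 10x, only almonds = 5x,
# and only raisins & peanuts = x/4. The 435 - 210 = 225 bags without almonds
# are exactly "only raisins" + "only peanuts" + "only raisins & peanuts".
x = Fraction(435 - 210) / (10 + 1 + Fraction(1, 4))
print(x)                   # 20
print(10 * x + x + 5 * x)  # 320 bags contain only one kind of item
```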
Re: Each of 435 bags contains at least one of the following [#permalink] 12 Aug 2013, 06:59
Asifpirlo (Senior Manager):
My shortcut solution:
Attachment: ratio application.png
Re: Each of 435 bags contains at least one of the following [#permalink] 14 Aug 2013, 09:57
Diipz:
Hi Bunuel,
I am unable to understand. Can you please elaborate?
Re: Each of 435 bags contains at least one of the following [#permalink] 14 Aug 2013, 10:36

Bunuel (Math Expert, joined 02 Sep 2009) replied:

Please tell me what exactly didn't you understand. Thank you.
Re: Each of 435 bags contains at least one of the following [#permalink] 14 Aug 2013, 10:42

Diipz wrote, quoting Bunuel's solution above:

The value of almonds from the Venn diagram: 20y=5x. Also the explanation leading to the target. Thanks.
Re: Each of 435 bags contains at least one of the following [#permalink] 14 Aug 2013, 11:00

Bunuel (Math Expert) replied:

The number of bags that contain only almonds is 20 times the number of bags that contain only raisins and peanuts --> {only raisins and peanuts}=y --> {only almonds}=20y;

The number of bags that contain only peanuts is one-fifth the number of bags that contain only almonds --> {only peanuts}=x --> {only peanuts}={only almonds}/5 --> x={only almonds}/5 --> {only almonds}=5x.

{only almonds}=5x=20y --> y=x/4.

Total=435={Almonds}+10x+y+x --> 435=210+10x+x/4+x

What else?
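For readers who want to sanity-check the algebra above, here is a minimal sketch in Python. The variable names follow Bunuel's labeling (x = bags with only peanuts, y = bags with only raisins and peanuts); the script itself is not part of the original thread.

```python
# Solve 435 = 210 + 10x + x/4 + x for x, then sum the "only one kind" regions.
x = (435 - 210) / (10 + 1 + 0.25)   # only raisins = 10x, only peanuts = x, only raisins+peanuts = x/4
y = x / 4                           # bags with only raisins and peanuts
only_one_kind = 10 * x + x + 5 * x  # only raisins + only peanuts + only almonds (= 5x)

print(x, y, only_one_kind)          # 20.0 5.0 320.0
```

The total of 435 pins down x = 20 uniquely, so only 16x = 320 (answer D) is consistent with all the constraints.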
Re: Each of 435 bags contains at least one of the following [#permalink] 23 Oct 2013, 16:08

(Manager, joined 23 May 2013) wrote, quoting Asifpirlo's shortcut solution above:

I really liked your method, but 256 is also divisible by 16. Why wouldn't that be an answer?

"Confidence comes not from always being right but from not fearing to be wrong."
Re: Each of 435 bags contains at least one of the following [#permalink] 17 Nov 2013, 19:49

alphabeta1234 (Manager, joined 12 Feb 2012) wrote:

Bunuel, I would really appreciate your help. When I read "The number of bags that contain only almonds is 20 times the number of bags that contain only raisins and peanuts," I thought it meant 20 times the sum of the bags that contain only raisins and the bags that contain only peanuts, so I said A = 20*(10x + x) = 20*(11x), rather than what you had, which is 20y, where y is the number of bags that contain a mixture of both raisins and peanuts.

This is more of an English question, but how is "only" used in word problems on the GMAT? When I get a question with "only A & B", is this equal to ("only" A) & ("only" B), or should I interpret it as "only" (A & B)?
Re: Each of 435 bags contains at least one of the following [#permalink] 17 Nov 2013, 20:20

VeritasPrepKarishma (Veritas Prep GMAT Instructor, joined 16 Oct 2010, Pune) replied:

Interpretation usually depends on the context. 'Bag containing only A and B' means the bag has A and B only, not, say, C. If the question wants to imply only A and only B, it will clarify as such: the bag containing only A and the bag containing only B.

Note that here you have assumed that the statement implies this: the number of bags that contain only almonds is 20 times the sum of the number of bags that contain only raisins and the number of bags that contain only peanuts. If the number of bags of almonds were 20 times the sum of two other numbers, the question would specify it as such.

In the question they only give that the number of bags of almonds is 20 times the number of bags of only raisins and peanuts (which means that the intersection of all three is not included). So one number of bags is equal to another number of bags, not to the sum of two other numbers of bags.
Finance: Time value of money, net present value, etc.
1. The primary advantage of the proprietor form of business over the corporate form is:
a. ease of raising capital (money)
b. avoidance of double taxation
c. unlimited life
d. limited liability
2. All of the following are characteristics of proprietorships except:
a. ease of formation
b. income treated as part of the sole proprietor's income for tax purposes
c. ease of raising additional money for expansion
d. limited life
3. The principal financial advantage of the corporate form of organization is :
a. ease of transferability of ownership
b. accumulation of earnings for retention in the business
c. limited liability
d. ease of raising money through selling stock
4. Assume that pre-tax profit of $50,000 has been earned by a business, and the owner/proprietor wants to
withdraw all of the after-tax profit for personal use. Assume the tax rate for a C corporation is 34%, while the
rate for a person is 28%. The after-tax earnings available under the corporate and proprietorship forms are:
a. For a corporation, $33,000; for a proprietorship, $36,000
b. For a corporation, $23,760; for a proprietorship, $36,000
c. For either a corporation or a proprietorship, $36,000
d. For either a corporation or a proprietorship, $23,760
5. In finance the primary goal of management is to:
a. utilize its economic resources in the most advantageous way
b. minimize all possible expenses
c. maximize shareholder wealth which is generally achieved by maximizing stock price
d. make the best use of its assets
6. Selected accounts are listed below. How much is the firm's operating income?
Accrued payroll $ 2,000
Sales 45,000
Cost of goods sold 26,000
Interest expense 1,000
Expenses (other than interest) 8,000
a. $8,000
b. $10,000
c. $9,000
d. $11,000
7. Wessel Corp. plans to sell 1,000 units in 2002 at an average sale price of $45 each. Cost of goods sold
will be 40% of the sale price. Depreciation expense will be $3,000, interest expense $2,500, and other
expenses will be $4,000. Wessel's tax rate is 20%. What will Wessel Corp's net income be for 2002?
a. $ 3,500
b. $ 6,800
c. $14,000
d. $16,400
e. $28,400
8. Holding all other variables constant, an increase in Cost of Goods Sold will lead to:
a. a decreased cost ratio
b. a higher gross margin
c. lower net income
d. paying more in taxes
9. Which of the following will increase equity?
a. An increase in dividends paid
b. Issuance of new stock
c. An increase in retained earnings from net income
d. Both b & c
e. All of the above
10. Which of the following would cause a decrease in cash:
a. An increase in the Average Collection Period from 15 days to 30 days
b. Selling off fixed assets for more than book value
c. An increase in accrued salaries expense
d. Paying suppliers in 60 days versus 45 days
11. A high average collection period may indicate:
a. Management's willingness to quickly write-off questionable receivables
b. Customers are paying for purchases quickly
c. A strict collection policy
d. None of the above
12. The Ragin Cajun had an operating income (EBIT) of $260,000 last year. The firm had $18,000 in depreciation expenses, $15,000 in interest expenses, and $60,000 in selling, general, and
administrative expenses. If the Cajun has a marginal tax rate of 40 percent, what was its cash flow from operating activities last year?
a. $165,000
b. $230,000
c. $132,000
d. $162,000
13. Find the debt ratio of a firm with total debt equal to $800,000 and net worth equal to $2,400,000.
a. .33
b. .50
c. .75
d. .25
e. .67
14. If the firm's total equity is $600,000, its long-term debt is $300,000, and its current liabilities are
$100,000, then its debt to equity ratio is:
a. 3:1
b. 2:1
c. 1:1
d. none of the above
15. Given the following information, calculate the inventory for J&C videos: Quick ratio = 1.2; Current
assets = $12,000; Current ratio = 2.5
a. $4,800
b. $6,240
c. $7,200
d. $5,660
16. The higher the rate of interest:
a. the larger the present value of a future sum of money
b. the smaller the future value of an amount invested today
c. the smaller the present value of a future sum of money
d. all of the above
17. The principle behind time value of money is based on the fact that:
a. a sum of money in hand today is worth more than the same sum in the future
b. a sum of money in hand today is worth less than the same sum in the future
c. a sum of money in the future is worth less than the same sum in hand today
d. a and c
18. Holding all other variables constant, an increase in the interest rate will cause ________ to decrease.
a. Future values
b. Present values
c. Annuity payments
d. Growth rates
19. You have just calculated the present value of the expected cash flows of a potential investment. Management
thinks your figures are too low. Which of the following actions would increase the present value of your cash flows?
a. assume a longer stream of cash flows of the same amount
b. increase the discount rate
c. decrease the discount rate
d. a and c
20. Your bank balance is exactly $10,000. Three years ago you deposited $7,938 and have not touched the account since. What annually compounded rate of interest has the bank been paying?
a. 8.65%
b. 26.00%
c. 8.00%
d. 6.87%
21. Find the present value of $100 to be received at the end of two years if the discount rate is 12%
compounded monthly.
a. $66.50
b. $78.76
c. $68.80
d. $91.80
e. $79.72
22. The Florida lottery agrees to pay the winner $250,000 at the end of each year for the next 20 years.
What is the future value of this prize if each payment is put in an account earning 9 percent?
a. $2.28 million
b. $12.79 million
c. $14.32 million
d. $ 5.00 million
23. Which of the following interest rates will come closest to doubling invested money in five years?
a. 13%
b. 14%
c. 15%
d. 16%
24. Suppose you put $100 into a savings account today, the account pays 8% compounded semiannually, and you withdraw $50 one year after your initial deposit. What would your ending balance be 20
years after the initial $100 deposit was made, assuming that you make no additional deposits?
a. $250.31
b. $257.45
c. $258.16
d. $430.10
e. $480.10
25. Calculate the NPV of a project requiring a $3,000 investment followed by an outflow of $500 in Year
1, and inflows of $1,000 in Year 2 and $4000 in Year 3. The cost of capital is 12%. (Round to nearest $)
a. $52
b. $198
c. $257
d. $486
26. What are the two primary drawbacks to the payback period method?
a. difficult to calculate; ignores time value of money
b. difficult to calculate; only works for long projects (e.g. 5 years or more)
c. ignores time value of money ; ignores cash flows after payback is reached
d. only works for long projects ; ignores cash flows after payback is reached
e. difficult to calculate ; ignores cash flows after payback is reached
27. If a project's NPV is negative
a. the project earns less than the cost of capital
b. the investment will not add value or contribute to shareholder wealth
c. the present value of expected cash outflows is greater than the present value of expected cash inflows
d. all of the above
28. An investor's goal can be described best as
a. maximizing returns while minimizing risk
b. maximizing returns
c. capturing the high average returns of equity investing while limiting the associated risk as much as possible
d. avoiding risk regardless of the risk premium offered
29. If you invest 30% of your funds in AT&T stock with an expected rate of return of 10% and the
remainder in GM stock with an expected rate of return of 15%, the expected return on your portfolio is
a. 12.5%.
b. 13.0%.
c. 13.5%.
d. 14.5%.
e. none of the above
30. A project has the following cash flows:
($500) $100 $200 $250
What is the project's NPV if the interest rate is 6%?
a. ($17.76)
b. $482.24
c. ($537.78)
d. $22.44
31. Use the following information to calculate your company's expected return.
State Probability Return
Boom 20% 40%
Normal 60% 15%
Recession 20% (20%)
a. 11%
b. 13%
c. 15%
d. 17%
32. Sentry Oil Inc. is considering two mutually exclusive projects as follows:
Year 0 1 2 3 4
Cash flow A ($185,000) $60,000 $75,000 $70,000 $70,000
Cash flow B ($125,000) ($60,000) $95,000 $90,000 $95,000
Sentry's cost of capital is 14%. It can spend no more than $350,000 on capital projects this year. Which of the following statements is applicable when evaluating the projects by the NPV method?
a. both projects add shareholder wealth and should be undertaken
b. project B appears to add more shareholder wealth than project A and should be done
c. project A appears to add more shareholder wealth than project B and should be done
d. project B should be undertaken because it requires a smaller investment
33. A firm has current assets of $10,000 and current liabilities of $6,000. Cash and marketable securities total $4,000, the balance in accounts receivable is $2,000, and the book value of inventory
is $4,000. The firm's net working capital is:
a. $2,000
b. $4,000
c. $6,000
d. $9,000
e. none of the above
34. You are considering buying a new car. The sticker price is $15,000 and you have $2,000 to put toward a down
payment. If you can negotiate a nominal annual interest rate of 10% and you wish to pay for the car over a 5-
year period, what are your monthly car payments?
a. $216.67
b. $252.34
c. $276.21
d. $285.78
e. $318.71
35. Muggles Manufacturing has asked you to calculate the company's current ratio. All you have is the partial balance sheet below, the year's sales revenue, and two ratios also shown below. Using
that information, calculate Muggles' current ratio.
Sales = $3,000
Cost Ratio (Cost of Goods sold/ Sales) = 45%
Inventory Turnover (Cost of goods sold/Inv) = 5.0
Assets                              Liabilities & Equity
Cash                  ?             Accounts Payable              $ 50
AR                 $ 40             Accruals                         ?
Inventory             ?             Long-term Debt                $380
Net Fixed Assets   $500             Equity                        $250
Total Assets       $830             Total Liabilities & Equity       ?
a. .35
b. .85
c. 1.65
d. 2.25
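Several of the time-value-of-money answers above can be verified with a short script. This is a sketch, not part of the original quiz; the helper function and variable names below are my own, and the rates and periods are taken from questions 20, 22, 25, and 34.

```python
def pv(cash_flow, rate, periods):
    """Present value of a single future cash flow."""
    return cash_flow / (1 + rate) ** periods

# Q20: implied annual rate on $7,938 growing to $10,000 in 3 years.
rate = (10000 / 7938) ** (1 / 3) - 1             # ~0.0800 -> answer c (8.00%)

# Q22: future value of a $250,000 ordinary annuity, 20 years at 9%.
fv_annuity = 250000 * ((1.09 ** 20 - 1) / 0.09)  # ~$12.79 million -> answer b

# Q25: NPV of -3000, -500, +1000, +4000 at 12%.
npv = sum(pv(cf, 0.12, t) for t, cf in enumerate([-3000, -500, 1000, 4000]))  # ~198 -> answer b

# Q34: monthly payment on a $13,000 loan (after the $2,000 down payment),
# 10% nominal annual rate compounded monthly, 60 months.
r, n = 0.10 / 12, 60
payment = 13000 * r / (1 - (1 + r) ** -n)        # ~276.21 -> answer c
```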
This question set includes problems on time-value-of-money concepts, net present value, and other time-related finance problems.
Vineland Science Tutor
Find a Vineland Science Tutor
...GMAT. I just took the GMAT in October 2013 and scored a 770 (99th percentile). I can teach both the quantitative and the verbal section. Computer Programming: As an electrical engineering
graduate, I have significant experience in programming.
15 Subjects: including electrical engineering, calculus, chemistry, physics
...My areas of subject specialty are Reading, Math, HS English, HS Social Studies, and all subjects K-8. My areas of expertise in students with Special Needs are Gifted and Talented, Speech and
Language, ADHD, Hearing Impairment, Aspergers Syndrome, and Autism. Please contact me if you have any questions or would like to arrange an appointment.
42 Subjects: including sociology, dyslexia, geometry, ecology
I am an active New Jersey certified substitute teacher for grades K-12 in the Greater Egg Harbor area. I am interested in tutoring for the summer months in the subjects of English skills
pertaining to vocabulary, reading, and writing, math, and science. I believe learning should be interesting, enlightening, fun, and comprehensible.
14 Subjects: including physical science, reading, anatomy, writing
...I was an Enon Tabernacle after school ministry tutor for elementary and high school students 2011-2012. These are just a few qualifications. Please contact me if more is needed.
13 Subjects: including biochemistry, geometry, psychology, biology
...I hold a PhD in Biochemistry and am able to make complex concepts easy for the student to understand. I have a wealth of experience in chemistry, biology, anatomy, and physiology and have
always been successful in making students understand the subject matter. Students that I have tutored in the past typically increase their grades 15-20 points, or 1.5 to 2 letter grades.
5 Subjects: including biology, chemistry, biochemistry, anatomy
Alamo, CA Algebra 1 Tutor
Find an Alamo, CA Algebra 1 Tutor
...I can help you or your child to excel in math and overcome any problem areas. I specialize in all areas of Math: Arithmetic, Algebra, Geometry, Trigonometry, Pre-Calculus, Calculus, etc. I also
can help you or your child in test preparation, like SAT Math and ACT Math etc.
17 Subjects: including algebra 1, calculus, geometry, statistics
...I have gotten awards in Contra Costa County's Sonata Contests in the past which includes two 1st place trophies, one 2nd place trophy, and a 3rd place trophy. I have also received two plaques
in Contra Costa County's Baroque Festival and have qualified for the California state convention in the ...
18 Subjects: including algebra 1, reading, elementary math, trigonometry
...Having tutored high school and college students in English, math, and other social science and humanities subjects, I certainly appreciate the value of showing students how to improve their
study skills. Whether this involves breaking a research paper effort into several steps, or setting up a s...
42 Subjects: including algebra 1, English, reading, writing
...While a grad student at UC Berkeley, I received the University teaching award for statistics, and my undergraduate students consistently gave me the top rating in the department. While there, I
tutored professors, grad students, and undergrads in math, advanced statistics, experimental design, and analysis. I have also tutored many high school students.
14 Subjects: including algebra 1, geometry, ASVAB, statistics
I am an experienced and licensed teacher. I have taught 6th grade earth science and 8th grade physical science. I have tutored algebra 2, geometry, and Spanish as well as various sciences.
24 Subjects: including algebra 1, reading, chemistry, physics
Re: Diagonal Movement
Tue, 27 Sep 94 12:35:15 EDT
Operating System: HP-UX A.09.01 A
Mailer: Elm [revision: 70.85]
> > Intuition suggests that, under most circumstance, they should pass
> > each other. Our real life experiences reinforce this. However, given
> > the current movement rules in Olympia, unit [a] could move West then
> > South, while unit [b] moves East then North -- thus exchanging places
> > yet not encountering each other.
> This has nothing to do with ortholinear movement. It's true in every
> movement system known to man. Even with diagonal movement, there will
> be many equally good movement paths between two locations. For example,
> from aa01 to ab03, one might move aa01-aa02-ab03, or aa01-ab02-ab03,
> paths which do not intersect except at the endpoints.
You miss my point. Perhaps I did not state it precisely enough.

First of all, I did not use the term 'ortholinear movement'. Given that, your initial statement has nothing to do with this discussion.

Secondly, I am not concerned with how many possible paths there are from point A to point B.

I am concerned with the case where the 'most direct' route (i.e., the shortest route) would be the diagonal, and where the two possible orthogonal routes are of equal distance.
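The scenario from the quoted message (two units exchanging places without ever meeting) can be sketched on a grid. This is an illustrative sketch, assuming one square per movement step; the coordinates are my own choice.

```python
# Unit a starts at (0, 0) and moves West then South; unit b starts at
# (-1, -1) and moves East then North. They swap squares diagonally.
a_path = [(0, 0), (-1, 0), (-1, -1)]   # unit a: West, then South
b_path = [(-1, -1), (0, -1), (0, 0)]   # unit b: East, then North

# The paths exchange endpoints yet share no intermediate square, so under
# per-step movement the units never occupy the same square at the same time.
shared = set(a_path[1:-1]) & set(b_path[1:-1])
print(shared)   # set()
```

Both orthogonal routes are two steps long, so neither unit has any incentive to take the other's route, and the non-encounter falls out of the movement rules rather than from any choice the players make.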
Advances in Materials Science and Engineering
Volume 2012 (2012), Article ID 452383, 11 pages
Research Article
Electrical Properties of a CeO[2]-Bi[2]O[3] Mix System Elaborated at 600°C
^1Institut Matériaux Microélectronique et Nanosciences de Provence, IM2NP, UMR CNRS 7334, Université du Sud Toulon-Var, BP 20132, 83957 La Garde Cedex, France
^2Laboratoire Matériaux et Environnement LME, Faculté des Sciences, Université Ibn Zohr, BP 8106, Cité Dakhla, Agadir, Morocco
Received 29 July 2011; Accepted 26 December 2011
Academic Editor: V. P. S. Awana
Copyright © 2012 Lamia Bourja et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
The electrical conduction of a series of polycrystalline [(1−x)CeO[2]·x/2Bi[2]O[3]] samples has been analyzed using electrical impedance spectroscopy, in the temperature range 25 to C. Samples have
been prepared via a coprecipitation route followed by a pyrolysis process at 600°C. For compositions , Ce[1−x]Bi[x]O[2−x/2] solid solutions, with fluorite cubic structure, are obtained. In the
composition range , the system is biphasic with coexistence of cubic and tetragonal structures. To interpret the Nyquist representations of electrical analyses, various impedance models including
constant phase elements and Warburg impedances have been used. In the biphasic range (), the conductivity variation might be related to the increasing fraction of two tetragonal β′ and β-Bi[2]O[3]
phases. The stabilization of the tetragonal phase coexisting with substituted ceria close to composition is associated with a high conduction of the mix system CeO[2]-Bi[2]O[3].
1. Introduction
Recently, in a structural analysis [1] of the polycrystalline system ( with , elaborated at 600°C, we have observed the partial stabilization of two tetragonal varieties of Bi[2]O[3] oxide coexisting
with a substituted ceria phase. Presently, we analyze the electrical properties of this CeO[2]-Bi[2]O[3] system as a function of composition x and at various temperatures.
Cerium dioxide (ceria) is well known for its applications as ceramic pigments, solid electrolyte in fuel cells, and catalyst in automotive gas converters [2–9]. Ceria presents mixed ionic and
electronic conductivity [10, 11]. In a previous work on the neodymium-substituted phase Ce[(1-x)]Nd[x]O[2-x/2] [12], we showed that the conductivity increased with composition x up to a value of in
the cubic lattice. Other studies have shown a similar effect with other elements: europium [13], iron [14], gadolinium [15], samarium [16], and terbium [17].
In 1937, Sillen [18] published the first study on the complex polymorphs of Bi[2]O[3] using X-ray diffraction (XRD) analysis. Four polymorph phases were proposed: (i) the α-Bi[2]O[3] monoclinic
phase, stable at low temperatures; (ii) the δ-Bi[2]O[3] face-centered cubic phase, stable at high temperatures (above 729°C); (iii) the two intermediate β- and γ-Bi[2]O[3] phases that can stabilize
with tetragonal and body-centered cubic (bcc) lattices, respectively, depending on the cooling mode. An additional polymorph was also observed by the authors [19]: a β′-Bi[2]O[3] tetragonal
modification that should be a superstructure of the β phase. During cooling of the high-temperature cubic δ-Bi[2]O[3] phase to T = 650°C, this δ phase can transform into the β-Bi[2]O[3 ]tetragonal
modification. However, below T = 640°C, this δ phase can transform into the γ (body centered cubic) modification. If this γ phase is formed, it transforms into the α-Bi[2]O[3] monoclinic phase close
to 500°C. However, if the β phase is formed, it transforms into the α phase close to 330°C [20–22].
Several studies on the electrical properties of Bi[2]O[3] [22–28] have shown that conduction is mainly ionic above 500°C. According to the authors, this ionic conduction should be favored by the
existence of vacancies or empty spaces in the high-temperature structure that allows fast mobility of the oxygen ions in the lattice. Chemical units noted as Bi[4]O[6]□[2] are organized into a
structure that is closely related to that of fluorite (basic A[4]O[8] units). Both cerium and bismuth oxides should exhibit similar crystal packing in which the oxygen vacancies are associated with
the Bi^3+ ions: the ceria chemical unit Ce[4]O[8] might be converted into the Bi[4]O[6]□[2] chemical unit in which the Bi^3+ ions substitute for the Ce^4+ ions and oxygen vacancies substitute for the
oxygen atoms. Some authors have proposed that the presence of lone pairs could play a role in the observed phase transition α (monoclinic) → δ (cubic) at 729°C and in the measured high conductivity.
The substituted phases Ce[1-x]Bi[x]O[2-δ] have already been studied by Dikmen et al. [29], who determined the solubility limit of bismuth in the ceria lattice. However, very few data are available on the bismuth-rich section of the phase diagram, above this limit.
It is interesting to note that Chen and Eysel [30] studied the composite system CeO[2]-Bi[2]O[3] over a limited composition range and observed that the metastable β phase was stabilized by the presence of the ceria phase. They suggested that this stabilization was provoked by a certain proximity effect and considered that it was not due to any insertion of cerium ions in the Bi[2]O[3] lattice.
Recently, using a hydrothermal route, Sardar et al. [31] synthesized phases having a ceria-like fluorite structure with local distortions.
In our previous work [1], we observed at least three domains for these samples obtained at 600°C. In agreement with literature results, a first solid-solution domain was observed at low bismuth contents, up to a limiting composition. Surprisingly, we also found evidence for a second, biphasic system at intermediate compositions, constituted of the limit solid-solution phase coexisting with a tetragonal phase quite similar to the tetragonal Bi[2]O[3] superstructure phase. A third biphasic system was observed at still higher compositions, with coexistence of the tetragonal β-Bi[2]O[3] phase and the monoclinic α-Bi[2]O[3] phase, which is stable at low temperatures. The tetragonal superstructure polymorph derives from the β phase. In the past, both tetragonal phases were considered to be metastable varieties of the pure Bi[2]O[3] oxide. In our specific case, the modification between the superstructure and the β phase as a function of composition should result from ordering-disordering of cerium defects in the lattice: ordering (superstructure stabilization) should require a sufficient fraction of cerium atoms in the lattice, while disordering (β-phase stabilization) should be due to an insufficient fraction of cerium atoms. Finally, at the highest bismuth contents, we obtained the stable monoclinic structure (monoclinic angle β = 112.95°). We also observed typical variations in the volumes of the chemical units A[4]O[8] (Ce[4]O[8] for the ceria structure and Bi[4]O[6]□[2] for the two tetragonal phases). At low compositions, the volume increase is directly due to the increased Bi^3+ fraction. At intermediate compositions, the volume increase can be interpreted in terms of an increasing fraction of the metastable tetragonal Bi[2]O[3] phase coupled with a decreasing fraction of defects in the Bi[2]O[3] lattice. At higher compositions, the volume reaches a stabilized value due to the very weak concentration of cerium defects. The large decrease in volume at the highest compositions should be due to the formation of the more compact monoclinic α-Bi[2]O[3] structure. These variations are reported in Figure 1 (see [1]).
At present, we are trying to elucidate the role of the bismuth composition in the ionic conduction of this mixed system. Such composite systems could be of great interest for electrolytic applications.
2. Experimental Section
2.1. Sample Elaboration
Samples of bismuth/cerium-based precursors with the composition Ce[1-x]Bi[x]O[2-δ] (0 ≤ x ≤ 1) were prepared via a precipitation route [32–36] using appropriate quantities of cerium(III) nitrate hexahydrate, Ce(NO[3])[3]·6H[2]O (purity 99.5%), and bismuth(III) nitrate pentahydrate, Bi(NO[3])[3]·5H[2]O (purity ≥ 98%). Each nitrate was separately dissolved in a suitable volume of distilled water. The two nitrate solutions were mixed and stirred for two hours at room temperature. Ammonium hydroxide (NH[4]OH) was added to the mixture to adjust the pH to 10. The resulting precipitate was filtered, washed with distilled water to remove residual ions, and dried at 80°C. Finally, the precursor powder was heated in air at 600°C for 6 hours. We obtained eleven samples with compositions x = 0.00, 0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, and 1.00. In Table 1, we report the experimental and calculated densities, the apparent porosities, and the grain sizes [1]. The calculated density is obtained from a theoretical evaluation taking into account a mean molar mass (A[4]O[8] formula depending on composition x) and a mean volume of the chemical unit A[4]O[8] (see Figure 1) depending on composition x, determined from the crystal structure of each phase. In the mixed system where the substituted ceria and tetragonal phases coexist, the phase ratio has been calculated using the classical lever rule.
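The lever-rule calculation used for the phase ratios can be sketched as follows; the compositions in the example are illustrative, not values from this work:

```python
# Lever rule for a two-phase region: a sample of overall composition x
# between the limiting compositions x_a (phase A) and x_b (phase B)
# contains the two phases in proportions given by the lever arms.

def lever_rule(x, x_a, x_b):
    """Return (fraction of phase A, fraction of phase B)."""
    lo, hi = min(x_a, x_b), max(x_a, x_b)
    if not lo <= x <= hi:
        raise ValueError("x must lie inside the two-phase region")
    f_b = (x - x_a) / (x_b - x_a)  # lever arm measured from phase A
    return 1.0 - f_b, f_b

# Illustrative: overall x = 0.5 between limit phases at x_a = 0.3, x_b = 1.0
f_a, f_b = lever_rule(0.5, 0.3, 1.0)
```

The two fractions always sum to one, and a sample at one of the limiting compositions is single-phase by construction.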
2.2. Electrical Analyses
The electrical study was performed using a SOLARTRON SI 1260 electrical impedance spectrometer coupled to an electrical cell operating under air and in the temperature range from 25 to 750°C. The samples were cylindrical pellets (thickness 2 ± 0.05 mm) initially compacted at 5 kbar under ambient conditions. Each apparent density was calculated and compared with the theoretical value to determine the fraction of cavities. The pellets were placed between two cylindrical platinum electrodes in a specific cell, and a constant pressure was applied to the electrodes via rings. The cell was placed in a furnace operating at up to 750°C.
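The fraction of cavities mentioned above follows from comparing the apparent and theoretical densities; a minimal sketch with illustrative (not measured) densities:

```python
# Apparent porosity (fraction of cavities) of a compacted pellet from its
# measured (apparent) density and the theoretical density of the phases.

def porosity(rho_apparent, rho_theoretical):
    return 1.0 - rho_apparent / rho_theoretical

# Illustrative densities in g/cm^3 (not the values of Table 1)
p = porosity(6.3, 7.0)
```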
The electrical analyses were carried out in the frequency range (ω = 2πν) from 10^-1 to 10^7 Hz, with an alternating current associated with a maximum voltage of 0.1 V. The samples were stabilized for 15 minutes at each fixed temperature, and the recording time over the frequency range was 15 minutes. To ensure the thermal stabilization of the pellets, each sample was subjected to three successive measuring cycles (with one temperature rise and drop per cycle). The final impedance data were taken during the heating mode of the third cycle, as being representative of stabilized samples (these data were identical to those of the second cycle).
The impedances Z = Z′ + jZ″ (Z′ and Z″ being, respectively, the real and imaginary components) were represented using Nyquist plots (−Z″ versus Z′). The software ZView [37] was used to fit the impedances of specific electrical circuits to the experimental Nyquist data. The equivalent circuits associated with each sample were generally based on parallel RC circuits; constant-phase elements (CPE), with impedances of the form Z(CPE) = 1/[Q(jω)^n], were systematically tested. At low temperatures, the impedance of such a parallel R-CPE circuit is generally expressed as a function of the frequency ω as Z(ω) = R/[1 + RQ(jω)^n]. In this expression, R is the resistance (associated with the intersection of the Nyquist circle with the real axis), the CPE term Q is expressed in Ω^-1·Hz^-n, the frequency ω is expressed in Hz, and n is the exponent describing the deviation from the ideal capacitor model, characteristic of the CPE model.
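A minimal sketch of how such a parallel R-CPE element produces a depressed Nyquist arc; the parameter values are illustrative, not fitted values from this study:

```python
# Impedance of a resistance R in parallel with a constant-phase element,
# Z_CPE = 1 / (Q * (j*omega)**n); n = 1 recovers an ideal capacitor, and
# the Nyquist arc is then a perfect semicircle of diameter R.

def z_parallel_r_cpe(omega, R, Q, n):
    z_cpe = 1.0 / (Q * (1j * omega) ** n)
    return (R * z_cpe) / (R + z_cpe)

# Illustrative parameters
R, Q, n = 1.0e5, 1.0e-9, 0.9
nyquist = [(z.real, -z.imag)
           for z in (z_parallel_r_cpe(10.0 ** k, R, Q, n) for k in range(9))]
```

The low-frequency limit of the real part recovers R, which is why the fitted resistance is read off the intersection of the arc with the real axis.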
For the high-temperature results, it has been necessary to use a specific modified Warburg model [38–42], involving a specific resistance (in Ω), a specific term depending on the diffusion mechanisms and related to the electrode or interface responses (see the appendix), and an exponent characteristic of the diffusion process coupled with the sample heterogeneity. In the case of a pure Warburg diffusion mechanism, this exponent should be equal to 1/2.
Three circuits placed in series were systematically tested for the grain-core, grain-interface, and electrode contributions. At high temperature, the high-frequency impedances of some compositions included an inductance term. The fitting calculations delivered the resistance, CPE, and exponent parameters of the samples, together with the Warburg characteristics at high temperatures. All observed impedances were normalized using the dimensions of each pellet (surface S and thickness t); as these dimensions were constant for all samples, no further correction was necessary.
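Normalizing a fitted resistance by the pellet geometry gives the conductivity; in this sketch the thickness matches the text, while the diameter (elided in the extraction) is an assumed 10 mm used purely for illustration:

```python
import math

# Bulk conductivity from a fitted resistance and the pellet geometry:
# sigma = t / (R * S), with thickness t and electrode surface S.

def conductivity(R_ohm, thickness_m, area_m2):
    return thickness_m / (R_ohm * area_m2)

t = 2.0e-3                          # 2 mm thickness, as in the text
S = math.pi * (10.0e-3 / 2.0) ** 2  # assumed 10 mm diameter (illustrative)
sigma = conductivity(1.0e4, t, S)   # conductivity in S/m for R = 10 kOhm
```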
3. Results
3.1. Phase Identification after Thermal Cycles
In this section, we report a phase identification of typical samples before and after the electrical analyses (each sample subjected to three thermal cycles). Let us recall that these identifications were carried out in room conditions, for samples initially elaborated at 600°C and then thermally treated up to 750°C during the EIS analyses. Scanning electron microscopy images were presented and commented in our previous work [1].
Figures 2(a), 2(b), and 2(c) report the six X-ray diffraction patterns recorded in room conditions, before and after the EIS cycles, for three representative samples stabilizing the tetragonal superstructure, β, and α phases, respectively. These analyses clearly show that, after the successive thermal treatments, three types of complex stabilized systems are formed and observable at 25°C. In the first sample, the major tetragonal superstructure phase coexists with traces of the monoclinic α-Bi[2]O[3] phase; in the second, the β phase is changed into the superstructure phase coexisting with the monoclinic phase; in the third, the α-Bi[2]O[3] monoclinic phase coexists with traces of the δ-Bi[2]O[3] cubic phase. The presence of this cubic phase in the last sample is probably due to structural quenching during the cooling process from 750°C. The presence of traces of the monoclinic form in the first two samples may be interpreted in terms of diffusion of defects from the tetragonal phases.
3.2. Electrical Properties
Figure 3 represents a series of Nyquist plots at 400°C for the 11 samples. Below a threshold composition, the plots constitute single Nyquist circles associated with grain-core conduction. Above it, two circles are observed (associated with grain-boundary and grain-core conduction), with a linear contribution at low frequency corresponding to conduction and diffusion at the electrodes. This contribution might be assimilated mainly to an ionic conduction corresponding to ionic diffusion along grain boundaries and at the electrode interfaces.
In Figures 4(a) and 4(b), we have represented a series of Nyquist plots at 600, 650, 700, and 750°C for a representative composition.
In Figures 5(a), 5(b), 5(c), and 5(d), we have reported a part of the results of the modeling calculations, with the corresponding equivalent circuits: Figure 5(a) is relative to the ceria sample (x = 0.00) with its equivalent circuit at 700°C, while Figures 5(b), 5(c), and 5(d) are relative to three other samples with their models at 400, 300, and 600°C, respectively.
For the low-temperature measurements, the impedance follows the parallel R-CPE expression given above. The results concerning the CPE terms for typical temperatures (300, 350, 400, and 450°C) are reported in Table 2(a). The resistance values are not reported here (see Figure 6, which represents the conductivity versus temperature and composition). The CPE values increase as the composition (or the temperature) increases, and the exponent values determined at low temperatures decrease from 0.9 to 0.8 as the temperature increases.
In the high-temperature range, the impedances can be described by several electrical components including an inductance. The overall impedance has been expressed using three components placed in series. The first term should represent the external electrical terminals (metallic junctions) mixed with a main sample contribution (grain core): as it depends on composition and temperature, it cannot be linked to the metallic contributions alone. The second term has a CPE form and can be linked to the sample interfaces. The third term should represent the electrode contribution; we observed that a typical modified Warburg form could better describe the Nyquist curves for this contribution, which is the reason why we did not use the usual CPE form. As the fitted interface and electrode values strongly decrease with temperature, we considered that they were mainly associated with sample and/or interface evolutions, including the electrode-sample interfaces. The sample resistance was determined as being the sole term representing the bulk response.
In Table 2(b), we report a series of results concerning a typical sample at fixed composition and for various temperatures. This table is decomposed into three sections corresponding to three temperature ranges and their associated impedance models.
The values associated with the grain cores increase as a function of temperature, while the corresponding exponent values decrease. The thermal variations of the interface and electrode characteristics are related to the thermal evolutions of the heterogeneous grain boundaries and of the heterogeneous electrode-material junctions.
Between 300 and 450°C, the exponent values decrease (from 0.7 to 0.5) and the CPE values increase as a function of temperature. The interface characteristics decrease as the temperature increases, with mean exponent values close to 0.65 and 0.3; these low exponent values validate the existence of a diffusional process linked to ionic conduction. The Warburg resistance values increase with temperature, while the diffusion term seems to vary irregularly.
In Table 2(c), we report partial results concerning the series of samples (variable composition) at a fixed temperature of 700°C. Below a threshold composition, the impedance model involves only the bulk and interface terms; above it, an additional Warburg term is required.
At this high temperature, the electrical evolutions are described through the following typical fitting parameters:
(i) the grain-core resistance values reach a minimum for compositions close to 0.6-0.7;
(ii) the electrode-sample interface resistance values vary in an irregular way;
(iii) the electrode-sample interface CPE values increase with composition;
(iv) the corresponding exponent values vary irregularly around a mean value of 0.5;
(v) the Warburg resistances decrease with composition;
(vi) the Warburg diffusion components increase with composition;
(vii) the Warburg exponent values vary irregularly around a mean value of 0.35.
It should be remarked that the interface exponent is close to 0.5, while the Warburg exponent is less than 0.5 (the expected theoretical Warburg value).
Figure 6 gives the values of the logarithm of the conductivity, log(σ), as a function of the composition and for temperatures ranging between 400 and 750°C. The σ values were determined from the fitted resistance values and the sample dimensions. Table 3 gives the values of the activation energies as a function of composition and for various temperature ranges. At low temperatures, low values of the activation energy, due to extrinsic defects, are observed. At intermediate temperatures and low bismuth contents, the activation energy first decreases to 1.09 eV and then increases up to 1.19 eV. For compositions above 0.5, we observe a modification in the activation energy above 500°C: below this temperature, the activation energy is about 1.35 eV, while between 500 and 750°C it decreases to a value of 1.0 eV. This modification might be related to a change in the charge carriers, with probably a significant contribution of ionic conduction.
Finally, the observed increase in conductivity with the bismuth content should have two complementary origins: an increasing number of charge carriers due to the increasing fractions of Bi^3+ ions and oxygen vacancies, and a softening of the lattice energy due to the Bi-O bonds (the activation energy being closely related to the lattice energy). This type of effect was previously suggested by Mandal et al. [43].
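Activation energies such as those of Table 3 are conventionally extracted from the slope of an Arrhenius plot; the sketch below fits synthetic data generated with a known Ea of 1.0 eV (illustrative, not the measured data of this work):

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def activation_energy(temps_K, sigmas):
    """Least-squares slope of ln(sigma) versus 1/T; for the simple
    Arrhenius form sigma = sigma0 * exp(-Ea / (K_B * T)) the slope
    equals -Ea / K_B, so Ea = -slope * K_B (in eV)."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(s) for s in sigmas]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope * K_B

# Synthetic conductivities generated with Ea = 1.0 eV (illustrative)
temps = [773.15, 873.15, 973.15, 1023.15]  # roughly 500-750 degC
sigmas = [1e-2 * math.exp(-1.0 / (K_B * T)) for T in temps]
Ea_fit = activation_energy(temps, sigmas)
```

A kink in the Arrhenius plot, such as the change reported above 500°C, shows up as two distinct slopes when the fit is restricted to each temperature range separately.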
4. Discussion
The values of the apparent conductivities, determined here in our specific conditions, depend on experimental parameters such as the compaction pressure, porosity, and grain sizes. These values are weaker than the values obtained by Hull et al. [28] for solid-solution samples sintered at various temperatures ranging between 900 and 1300°C. This can easily be explained, first, by the fact that our samples were initially sintered at 600°C and then heated only up to 750°C during the electrical measurements and, second, by the fact that they present a certain degree of porosity (see Table 1). In the case of mixed systems with high bismuth contents, it is not possible to exceed the fusion temperature of Bi[2]O[3], while this is fully acceptable in the case of solid solutions with low bismuth contents.
In the low-composition solid-solution range, the log(σ) values (Figure 6) increase with composition. For these samples, the nature of the conduction is probably both electronic and ionic. The ionic contribution can be directly attributed to the increasing number of structural defects (Bi^3+ ions associated with oxygen vacancies □) that can offer a great number of diffusion paths for the oxygen ions.
At intermediate compositions, we observed a new evolution with a maximum of conductivity. This evolution might be clearly ascribed to the existence of a biphasic system based on a solid solution having the ceria structure with disordered Bi^3+ and vacancy defects, coexisting with a tetragonal Bi[2]O[3] lattice that probably contains cerium defects. As the proportion of the highly conducting Bi[2]O[3] phase increases, the ionic conduction also increases. At still higher compositions, the conductivity decreases. Two effects are in competition in these conductivity values: the high ionic conduction of the bismuth phase and the microstructure of the samples. As shown in our previous work [1], the specific surface areas exhibit a strong variation with composition. In the solid-solution range, the BET analyses showed that the specific surface areas are very high, whereas they strongly decrease at higher bismuth contents in the biphasic system, due to the crystal growth of the bismuth phase. Thus, this evolving microstructure in the compacted pellets could play a prominent role in the conduction, and cavities may play a decisive role in the ionic conduction upon the formation of the tetragonal β phase.
Finally, in the case of the monoclinic Bi[2]O[3] phase obtained at the highest compositions, the observed large decrease in conductivity can be attributed to the more compact monoclinic structure, compared to the previous tetragonal ones, which limits the oxygen mobility. For this monoclinic Bi[2]O[3] phase, we have observed the expected transition [20, 23, 24] close to 729°C, with a large increase in conductivity due to the structural transformation involving a specific volume increase (the monoclinic phase transforming into the cubic phase).
5. Conclusions
In this CeO[2]-Bi[2]O[3] system produced at 600°C under air, the main result resides in the high conductivity of the polycrystalline, compacted mixed system with compositions close to 0.6-0.7. In this composition range, the stabilization of the tetragonal superstructure phase should be responsible for such a high conductivity. In our EIS experiments, this complex system is stabilized after two thermal cycles. In the intermediate composition range, we observe a large increase in conductivity mainly due to the ionic conduction of oxygen ions in this tetragonal Bi[2]O[3] lattice. This ionic conduction is clearly suggested by the Warburg components in the Nyquist representations at high temperatures. At higher compositions, a new tetragonal structure, β-Bi[2]O[3], is formed at 600°C; however, it evolves after thermal treatment and gives rise to a more complex system where the superstructure and α phases coexist. This modification of the β phase into the superstructure and α phases might be ascribed to the thermal cycling up to 750°C involving ionic diffusion: this should argue in favor of a higher stability of the superstructure phase when a sufficient proportion of cerium ions is present in the tetragonal lattice. At the highest compositions, the monoclinic α-Bi[2]O[3] phase systematically presents a conductivity lower than that of the mixed system with compositions close to 0.6-0.7.
Finally, the mixed system with composition close to 0.6-0.7 should be an interesting optimized electrolyte for applications limited to temperatures of 700°C: below this temperature, the stability of the system should be ensured.
Appendix
According to the authors of [39–43], the Warburg element can be expressed as a function of the pulsation ω, with an exponent equal to 1/2 in the ideal case. In this expression, one parameter is related to the chemical diffusion constant (in m^2·s^-1) and another is a length characteristic of the reaction process. The resistive term has the dimension of a resistance (in Ω). In the case of heterogeneous interfaces, the exponent can differ from 1/2. This model could account for ionic diffusion associated with gas formation at the grain boundaries and electrode interfaces: O^2- ↔ 2e^- + (1/2)O[2](g).
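The exact expression is elided in this extraction; a standard finite-length ("short") Warburg form consistent with the description, with illustrative parameter values, is:

```python
# Finite-length ("short") Warburg impedance, a common textbook form:
# Z(omega) = R_d * tanh(s) / s with s = (j*omega*tau)**p, where tau is a
# diffusion time constant (delta**2 / D for a diffusion length delta and
# diffusion constant D) and p = 1/2 for ideal diffusion. This is an
# assumed form, not necessarily the exact one used by the authors.
import cmath

def warburg_short(omega, R_d, tau, p=0.5):
    s = (1j * omega * tau) ** p
    return R_d * cmath.tanh(s) / s

R_d, tau = 50.0, 0.1  # illustrative values, not fitted ones
z_dc = warburg_short(1e-9, R_d, tau)  # low-frequency limit tends to R_d
```

At high frequency this element shows the characteristic -45° phase of semi-infinite diffusion, which is the straight low-frequency line seen in the Nyquist plots.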
Acknowledgments
The authors gratefully acknowledge the Provence-Alpes-Côte d'Azur Regional Council, the General Council of Var, and the agglomeration community of Toulon Provence Mediterranean for their financial support. This work was developed in the general framework of the ARCUS CERES project (2008–2010).
References
1. L. Bourja, B. Bakiz, A. Benlhachemi, et al., "Structural, microstructural and surface properties of a specific CeO[2]-Bi[2]O[3] multiphase system obtained at 600°C," Journal of Solid State Chemistry, vol. 184, no. 3, pp. 608–614, 2011.
2. J. Kašpar, P. Fornasiero, and M. Graziani, "Use of CeO[2]-based oxides in the three-way catalysis," Catalysis Today, vol. 50, no. 2, pp. 285–298, 1999.
3. A. Trovarelli, "Catalytic properties of ceria and CeO[2]-containing materials," Catalysis Reviews, vol. 38, no. 4, pp. 439–520, 1996.
4. A. Trovarelli, C. Leitenburg, M. Boaro, and G. Dolcetti, "The utilization of ceria in industrial catalysis," Catalysis Today, vol. 50, no. 2, pp. 353–367, 1999.
5. A. Tschope, W. Liu, M. F. Stephanopoulos, and J. Y. Ying, "Redox activity of nonstoichiometric cerium oxide-based nanocrystalline catalysts," Journal of Catalysis, vol. 157, no. 1, pp. 42–50, 1995.
6. T. Masui, K. Minami, K. Koyabu, and N. Imanaka, "Synthesis and characterization of new promoters based on CeO[2]-ZrO[2]-Bi[2]O[3] for automotive exhaust catalysts," Catalysis Today, vol. 117, no. 1–3, pp. 187–192, 2006.
7. X. Zheng, X. Zhang, Z. Fang, X. Wang, S. Wang, and S. Wu, "Characterization and catalysis studies of CuO/CeO[2] model catalysts," Catalysis Communications, vol. 7, no. 9, pp. 701–704, 2006.
8. O. Demoulin, M. Navez, J.-L. Mugabo, and P. Ruiz, "The oxidizing role of CO[2] at mild temperature on ceria-based catalysts," Applied Catalysis B, vol. 70, no. 1–4, pp. 284–293, 2007.
9. E. Aneggi, M. Boaro, C. D. Leitenburg, G. Dolcetti, and A. Trovarelli, "Insights into the redox properties of ceria-based oxides and their implications in catalysis," Journal of Alloys and Compounds, vol. 408–412, pp. 1096–1102, 2006.
10. M. Mogensen, N. M. Sammes, and G. A. Tompsett, "Physical, chemical and electrochemical properties of pure and doped ceria," Solid State Ionics, vol. 129, no. 1, pp. 63–94, 2000.
11. R. N. Blumenthal and R. K. Sharma, "Electronic conductivity in nonstoichiometric cerium dioxide," Journal of Solid State Chemistry, vol. 13, no. 4, pp. 360–364, 1975.
12. L. Aneflous, J. A. Musso, S. Villain, J. R. Gavarri, and H. Benyaich, "Effects of temperature and Nd composition on non-linear transport properties in substituted Ce[1-x]Nd[x]O[2-δ] cerium dioxides," Journal of Solid State Chemistry, vol. 177, no. 3, pp. 856–865, 2004.
13. P. Shuk, M. Greenblatt, and M. Croft, "Hydrothermal synthesis and properties of Ce[1-x]Eu[x]O[2-δ] solid solutions," Journal of Alloys and Compounds, vol. 303-304, pp. 465–471, 2000.
14. B. Matovic, Z. Dohcevic-Mitrovic, M. Radovic et al., "Synthesis and characterization of ceria based nanometric powders," Journal of Power Sources, vol. 193, no. 1, pp. 146–149, 2009.
15. R. O. Fuentes and R. T. Baker, "Synthesis and properties of gadolinium-doped ceria solid solutions for IT-SOFC electrolytes," International Journal of Hydrogen Energy, vol. 33, no. 13, pp. 3480–3484, 2008.
16. P. Jasinski, "Electrical properties of nanocrystalline Sm-doped ceria ceramics," Solid State Ionics, vol. 177, no. 26–32, pp. 2509–2512, 2006.
17. P. Shuk, M. Greenblatt, and M. Croft, "Hydrothermal synthesis and properties of mixed conducting Ce[1-x]Tb[x]O[2-δ] solid solutions," Chemistry of Materials, vol. 11, no. 2, pp. 473–479, 1999.
18. L. G. Sillen, Arkiv för Kemi, Mineralogi och Geologi, vol. 12A, 1937.
19. V. Fruth, A. Ianculescu, D. Berger et al., "Synthesis, structure and properties of doped Bi[2]O[3]," Journal of the European Ceramic Society, vol. 26, no. 14, pp. 3011–3016, 2006.
20. N. M. Sammes, G. A. Tompsett, H. Näfe, and F. Aldinger, "Bismuth based oxide electrolytes—structure and ionic conductivity," Journal of the European Ceramic Society, vol. 19, no. 10, pp. 1801–1826, 1999.
21. L. E. Depero and L. Sangaletti, "Structural disorder and ionic conduction: the case of Bi[2]O[3]," Journal of Solid State Chemistry, vol. 122, no. 2, pp. 439–443, 1996.
22. C. N. R. Rao, G. V. S. Rao, and S. Ramdas, "Phase transformations and electrical properties of bismuth sesquioxide," The Journal of Physical Chemistry, vol. 73, no. 3, pp. 672–675, 1969.
23. H. A. Harwig and A. G. Gerards, "Electrical properties of the α, β, γ, and δ phases of bismuth sesquioxide," Journal of Solid State Chemistry, vol. 26, no. 3, pp. 265–274, 1978.
24. P. Shuk, H.-D. Wiemhöfer, U. Guth, W. Göpel, and M. Greenblatt, "Oxide ion conducting solid electrolytes based on Bi[2]O[3]," Solid State Ionics, vol. 89, no. 3-4, pp. 179–196, 1996.
25. O. Monnereau, L. Tortet, P. L. Llewellyn, F. Rouquerol, and G. Vacquier, "Synthesis of Bi[2]O[3] by controlled transformation rate thermal analysis: a new route for this oxide?" Solid State Ionics, vol. 157, no. 1–4, pp. 163–169, 2003.
26. F. Schröder and N. Bagdassarov, "Phase transitions and electrical properties of Bi[2]O[3] up to 2.5 GPa," Solid State Communications, vol. 147, no. 9-10, pp. 374–376, 2008.
27. F. Schröder, N. Bagdassarov, F. Ritter, and L. Bayarjargal, "Temperature dependence of Bi[2]O[3] structural parameters close to the α–δ phase transition," Phase Transitions, vol. 83, no. 5, pp. 311–325, 2010.
28. S. Hull, S. T. Norberg, M. Tucker, S. Eriksson, C. Mohn, and S. Stølen, "Neutron total scattering study of the δ and β phases of Bi[2]O[3]," Dalton Transactions, no. 40, pp. 8737–8745, 2009.
29. S. Dikmen, P. Shuk, and M. Greenblatt, "Hydrothermal synthesis and properties of Ce[1-x]Bi[x]O[2-δ] solid solutions," Solid State Ionics, vol. 112, no. 3-4, pp. 299–307, 1998.
30. X. L. Chen and W. Eysel, "The stabilization of β-Bi[2]O[3] by CeO[2]," Journal of Solid State Chemistry, vol. 127, no. 1, pp. 128–130, 1996.
31. K. Sardar, H. Y. Playford, R. J. Darton et al., "Nanocrystalline cerium-bismuth oxides: synthesis, structural characterization, and redox properties," Chemistry of Materials, vol. 22, no. 22, pp. 6191–6201, 2010.
32. M. J. Godinho, R. F. Gonçalves, L. P. S. Santos, J. A. Varela, E. Longo, and E. R. Leite, "Room temperature co-precipitation of nanocrystalline CeO[2] and Ce[0.8]Gd[0.2]O[1.9-δ] powder," Materials Letters, vol. 61, no. 8-9, pp. 1904–1907, 2007.
33. J. G. Li, T. Ikegami, Y. Wang, and T. Mori, "Nanocrystalline Ce[1-x]Y[x]O[2-x/2] (0 ≤ x ≤ 0.35) oxides via carbonate precipitation: synthesis and characterization," Journal of Solid State Chemistry, vol. 168, no. 1, pp. 52–59, 2002.
34. F. Ye, T. Mori, D. R. Ou, J. Zou, and J. Drennan, "Microstructural characterization of terbium-doped ceria," Materials Research Bulletin, vol. 42, no. 5, pp. 943–949, 2007.
35. Y. Ikuma, K. Takao, M. Kamiya, and E. Shimada, "X-ray study of cerium oxide doped with gadolinium oxide fired at low temperatures," Materials Science and Engineering B, vol. 99, no. 1–3, pp. 48–51, 2003.
36. Y. P. Fu and S. H. Chen, "Preparation and characterization of neodymium-doped ceria electrolyte materials for solid oxide fuel cells," Ceramics International, vol. 36, no. 2, pp. 483–490, 2010.
37. D. Johnson, ZView impedance software, version 2.1a, Scribner Associates, Inc., 1990–1998.
38. J. R. Macdonald, "Double layer capacitance and relaxation in electrolytes and solids," Transactions of the Faraday Society, vol. 66, pp. 943–958, 1970.
39. J. R. Macdonald, "Electrical response of materials containing space charge with discharge at the electrodes," The Journal of Chemical Physics, vol. 54, no. 5, pp. 2026–2050, 1972; erratum: The Journal of Chemical Physics, vol. 56, p. 681, 1972.
40. J. R. Macdonald, "Impedance spectroscopy: old problems and new developments," Electrochimica Acta, vol. 35, no. 10, pp. 1483–1492, 1990.
41. J. R. Macdonald, "Characterization of the electrical response of high resistivity ionic and dielectric solid materials by immittance spectroscopy," in Impedance Spectroscopy—Theory, Experiment, and Applications, E. Barsoukov and J. R. Macdonald, Eds., pp. 264–282, John Wiley & Sons, NJ, USA, 2nd edition, 2005.
42. C. Ho, I. D. Raistrick, and R. A. Huggins, "Application of AC techniques to the study of lithium diffusion in tungsten trioxide thin film," Journal of the Electrochemical Society, vol. 127, no. 2, pp. 343–349, 1980.
43. B. P. Mandal, S. K. Deshpande, and A. K. Tyagi, "Ionic conductivity enhancement in Gd[2]Zr[2]O[7] pyrochlore by Nd doping," Journal of Materials Research, vol. 23, no. 4, pp. 911–916, 2008.
Summary: The success of pricing models is often measured by the extent to which closed-form solutions of the Black-Scholes type are available for the basic payouts. Analytic tractability is often crucial for the calibration to market data and is helpful for implementing numerical algorithms for exotics. Extensions of the Black-Scholes formula have been directed towards three main model classes: local volatility models, which postulate a deterministic relationship between the underlying state variable, time and volatility; stochastic volatility models, which assume that the volatility follows a distinct but correlated process; and jump models, which can be regarded as limits of stochastic volatility models whereby the volatility can occasionally be singularly large at some points in time, enough to cause the underlying sample path to have discontinuous jumps.
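As a concrete instance of the closed-form tractability referred to above, here is a minimal sketch of the textbook Black-Scholes call formula (standard material, not taken from this paper; the numerical inputs are illustrative):

```python
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S: float, K: float, r: float, sigma: float, T: float) -> float:
    """Black-Scholes price of a European call on a non-dividend-paying asset."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

price = bs_call(S=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0)  # ~10.45
```

It is exactly this kind of explicit formula that the model classes listed in the summary try to preserve under richer dynamics.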
As Dupire (1994) demonstrated, state-dependent volatility models are
able to reproduce arbitrage-free implied volatility surfaces. However, robust
estimations require either regularisations or settling on a parametric form for
the local volatility such as the constant elasticity of variance model in Cox
& Ross (1976), the quadratic volatility models in Rady (1997) and the more
comprehensive hypergeometric Brownian motions in Albanese et al (2001). | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/981/2107435.html","timestamp":"2014-04-19T01:50:08Z","content_type":null,"content_length":"8548","record_id":"<urn:uuid:3066965f-5f79-435d-b100-18a0d38f82c0>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00341-ip-10-147-4-33.ec2.internal.warc.gz"} |
A201252 - OEIS
A201252 Initial primes in prime septuplets (p, p+2, p+8, p+12, p+14, p+18, p+20) preceding the maximal gaps in A201251.
5639, 88799, 284729, 1146779, 8573429, 24001709, 43534019, 87988709, 157131419, 522911099, 706620359, 1590008669, 2346221399, 3357195209, 11768282159, 30717348029, 33788417009, 62923039169, 68673910169, 88850237459, 163288980299, 196782371699, 421204876439
OFFSET 1,1
COMMENTS Prime septuplets (p, p+2, p+8, p+12, p+14, p+18, p+20) are one of the two types of densest permissible constellations of 7 primes. Maximal gaps between septuplets of this type are listed
in A201251; see more comments there. A233038 lists the corresponding primes at the end of the maximal gaps.
LINKS Alexei Kourbatov, Table of n, a(n) for n = 1..52
Tony Forbes, Prime k-tuplets
Alexei Kourbatov, Maximal gaps between prime k-tuples
Alexei Kourbatov, Tables of record gaps between prime constellations, arXiv preprint arXiv:1309.4053, 2013.
Eric W. Weisstein, k-Tuple Conjecture
EXAMPLE The gap of 83160 between septuplets starting at p=5639 and p=88799 is the very first gap, so a(1)=5639. The gap of 195930 between septuplets starting at p=88799 and p=284729 is a maximal
gap - larger than any preceding gap; therefore a(2)=88799. The next gap starts at p=284729 and is again a maximal gap, so a(3)=284729. The next gap is smaller, so it does not contribute to
the sequence.
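The septuplet pattern and the first gap quoted in the EXAMPLE can be checked directly; a quick sketch (not part of the OEIS entry):

```python
def is_prime(n: int) -> bool:
    """Deterministic trial division; adequate for the small values checked here."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

OFFSETS = (0, 2, 8, 12, 14, 18, 20)

def is_septuplet(p: int) -> bool:
    """True if p starts a prime septuplet of the (0,2,8,12,14,18,20) type."""
    return all(is_prime(p + k) for k in OFFSETS)

gap = 88799 - 5639  # the first gap quoted in the EXAMPLE: 83160
```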
CROSSREFS Cf. A022010 (prime septuplets p, p+2, p+8, p+12, p+14, p+18, p+20), A201251, A233038.
Sequence in context: A229591 A161193 A022010 * A184080 A234401 A203726
Adjacent sequences: A201249 A201250 A201251 * A201253 A201254 A201255
KEYWORD nonn,hard
AUTHOR Alexei Kourbatov, Nov 28 2011
STATUS approved | {"url":"http://oeis.org/A201252","timestamp":"2014-04-16T19:07:38Z","content_type":null,"content_length":"16345","record_id":"<urn:uuid:628ff63b-b57a-4f5a-90e8-661a40360a35>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00149-ip-10-147-4-33.ec2.internal.warc.gz"} |
hi 295Ja
Here is an example where 'cross-multiplying' has been done incorrectly.
Student remembers about cross multiplying, but does it incorrectly:
It is because of this sort of mistake that teachers advise against using cross multiplication.
I prefer to teach 'multiply every term by the same amount', so in the above example:
Another thing you may find helpful is to convert the algebra to numbers.
In my example, suppose u = 3, v = 6 and f = 2
In the original:
This is correct.
In the error version:
This is NOT CORRECT so the error is revealed.
In my correct version:
This has come out correctly.
This is not a guarantee
that no mistake has been made as I might just have been lucky to get the 'right answer', but it will help to confirm that what you have done is right.
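The worked equations in this post were images and did not survive extraction. Assuming the relation being rearranged was of the form 1/u + 1/v = 1/f (an assumption on my part, but consistent with the check values u = 3, v = 6, f = 2 above), the numeric-substitution check can be done with exact rational arithmetic:

```python
from fractions import Fraction

u, v, f = 3, 6, 2                      # the check values used above
lhs = Fraction(1, u) + Fraction(1, v)  # 1/3 + 1/6 = 1/2
rhs = Fraction(1, f)                   # 1/2
ok = (lhs == rhs)                      # True: the relation holds for these values
```

Using Fraction instead of floats means the check cannot be fooled by rounding.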
It is 11.14 am in the UK. | {"url":"http://www.mathisfunforum.com/post.php?tid=18012&qid=227853","timestamp":"2014-04-21T02:08:29Z","content_type":null,"content_length":"28140","record_id":"<urn:uuid:f149de9b-8449-48fb-8c31-b70522e35861>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00631-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: The NABT controversy
Christopher Morbey (cmorbey@vanisle.net)
Tue, 17 Feb 1998 11:23:08 -0800
Moorad Alexanian wrote:
> Dear Loren,
> In quantum mechanics there is a dynamical theory that indicates the possible
> outcomes of given experimental measurements and the associated probabilities
> for such outcomes. Can someone tell me what is the dynamical theory that tells
> us what are the possible outcomes and the associated probabilities in
> evolutionary theory? Any theory that uses the notion of randomness must makes
> such issues clear; otherwise, it is not a scientific theory and are mere
> words--on the same status as Genesis vis a vis the question of origins.
Dear Moorad:
You have put your finger on an interesting point.
The greatest part of my whole scientific career has been making models of
astronomical or astrophysical processes, then comparing how observations fit
these models. Even though I don't do those sorts of things now I can't help but
think back to what I actually did. In every case I was interested in showing how
much the observations were different from random. In fact, most of the science
with which I am familiar tries to extract what is not random out of the
observations. We calculate significance levels based on certain hypotheses,
always hoping to convey a quantitative estimate of how much our observations
differ from that which is random or that which has no deterministic influence.
What is deemed to be random is deemed to convey no information.
There is some irony to be pointed out. Some scientists spend their livelihood
trying to diminish the randomness that confounds their observations. They want
to show that their data are worth something, that their conclusions have some
authority. Other scientists, or even the same ones will go to extreme lengths to
prove that science over eons can only proceed by means of purposeless and random
mechanism. It's quite odd, I think.
Then there are those who claim that philosophical and religious statements are
irrelevant to actual scientific theory. They forget that the very words they
speak or statements they write are based on basic assumptions of information
transfer and understanding.
I would LOVE to read about significance tests with respect to "evolutionary"
randomness. Imagine. Trying to extract something random out of the confusing
perfections conjured in a perfect mind! This is the stuff of comedy or
Christopher Morbey | {"url":"http://www2.asa3.org/archive/asa/199802/0264.html","timestamp":"2014-04-16T10:50:01Z","content_type":null,"content_length":"4221","record_id":"<urn:uuid:1c1697b7-27bd-478d-859c-f2ada0418240>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00075-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is 1/4 oz. the same as .25 ounces?
You asked:
Is 1/4 oz. the same as .25 ounces?
Say hello to Evi
Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we
will be adding all of Evi's power to this site.
Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire. | {"url":"http://www.evi.com/q/is_1/4_oz._the_same_as_.25_ounces","timestamp":"2014-04-18T16:14:56Z","content_type":null,"content_length":"58650","record_id":"<urn:uuid:78683d33-afa9-4521-a5cc-fbc010017812>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00177-ip-10-147-4-33.ec2.internal.warc.gz"} |
Arcadia, CA Algebra 1 Tutor
Find an Arcadia, CA Algebra 1 Tutor
...It is so rewarding to work with students to help them gain confidence and see improvement in their grades in a subject they are having difficulty with. I greatly enjoy helping students connect
with math to help them gain the confidence they need to successfully master this subject. I have been a junior math teacher and taught pre-algebra, geometry and Algebra 1 for several years.
22 Subjects: including algebra 1, English, reading, writing
...Over the years, I have worked with people who have learning disabilities, mental disorders, and physical limitations. I consider it a privilege to work with people who have special needs and I
admire their tenacity and ability to persist when faced with challenges. It takes a lot of courage to fight back and I will be one of the first ones to encourage you and to cheer you on!
69 Subjects: including algebra 1, reading, chemistry, Spanish
...I prefer the physical sciences. The way I teach will be by showing you step by step how to do a problem for a few times. Then I will let you take on the problem itself and make sure you
understand why each step is necessary.
9 Subjects: including algebra 1, calculus, differential equations, physical science
...At higher levels, you also need trig to understand chemistry and biology as well. The SAT is a limited measure, but it does represent a very important core set of skills that a student needs
to be successful in college. Sometimes too much emphasis is placed on this test when grades, recommendations, and personal statements provide a more nuanced perspective on a student.
14 Subjects: including algebra 1, calculus, trigonometry, geometry
...I have a deep love of all things mathematical, and I find teaching reading strategies, writing techniques and even grammar rules to be a delight. I recently moved here from NYC, where I spent
two years working for a premium SAT test prep company. I've helped more than 100 students reach their h...
26 Subjects: including algebra 1, Spanish, reading, writing
Arcadia, CA Trigonometry Tutors | {"url":"http://www.purplemath.com/Arcadia_CA_algebra_1_tutors.php","timestamp":"2014-04-18T00:52:56Z","content_type":null,"content_length":"24259","record_id":"<urn:uuid:b944cea1-87e2-48bb-938b-201be8461d6a>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00659-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions - Re: [ap-stat] when sigma is known
Date: Feb 18, 2011 9:32 AM
Author: dstarnes@lawrenceville.org
Subject: Re: [ap-stat] when sigma is known
There has been such an emphasis on this listserv about the mantra "z is
for proportions, t is for means" that we may have gone a little overboard!
The more careful mantra is probably something like "when doing inference,
z is for proportions and t is for means (unless we somehow know the
population sigma)". That parenthetical is important in questions about
determining sample size when planning a study in which students are
expected to use a given value that represents the population standard
deviation. And yes, there have been both free response and multiple
choice questions of that sort on the AP exam.
If a question is being posed about calculating a probability involving the
sampling distribution of x-bar, it's quite likely that a value will be
given for the population mean and standard deviation. In that setting
(assuming we have either a Normal population distribution or a large
sample size so the CLT applies), students should be using the standard
Normal (z) distribution to find the desired probability. Using the t
distribution might yield a similar answer, but there isn't a valid
theoretical reason to do so, which would probably mean that a student
wouldn't receive full credit for using t.
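A small numeric illustration of the point above (the numbers are invented, not those of the multiple-choice question): when sigma is known, the sampling-distribution probability for x-bar uses the standard Normal.

```python
from statistics import NormalDist

# Hypothetical setup: Normal population, mu = 100, sigma = 15 known, n = 25.
mu, sigma, n = 100, 15, 25
xbar = 105
se = sigma / n ** 0.5          # exact standard error, since sigma is known
z = (xbar - mu) / se           # = 5/3
p = 1 - NormalDist().cdf(z)    # P(x-bar > 105), roughly 0.048
# A t(n-1) distribution would give a slightly larger tail area here
# (heavier tails), but there is no theoretical justification for using it.
```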
Hope this helps.
Daren Starnes
Math Department Chair & Master Teacher
The Lawrenceville School
david indelicato <dind@optonline.net>
02/18/2011 09:20 AM
Please respond to
david indelicato <dind@optonline.net>
"AP Statistics" <ap-stat@lyris.collegeboard.com>
[ap-stat] when sigma is known
Hey stats teachers,
I came across a multiple choice question about a sampling distribution
for sample means in which a student is asked to compute an area. In the
problem, it is stated that the population standard deviation is known.
The question expects the student to use the z-distribution instead of
the t-distribution. The solution using z (.046) and the solution using
t (.050) are both given as choices. So if the student uses t, they will
get the wrong answer on this multiple choice question.
I was wondering, if this situation showed up in an open response
question and a student used t instead of z, would their solution be
close enough to get full credit? Is it wrong to use t when we know the
population standard deviation or just not as accurate as using z?
Thanks in advance for any thoughts.
Dave Indelicato
Frequently asked questions (FAQ: http://www.mrderksen.com/faq.htm
List Archives from 1994: http://mathforum.org/kb/forum.jspa?forumID=67
To search the list archives for previous posts go to
To unsubscribe click here:
To change your subscription address or other settings click here:
To change your subscription address or other settings click here: | {"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7387707","timestamp":"2014-04-16T04:54:51Z","content_type":null,"content_length":"5285","record_id":"<urn:uuid:e251d9c7-b72d-4e81-a37f-3468a11c7547>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00657-ip-10-147-4-33.ec2.internal.warc.gz"} |
physics question #5297
Peter Schwarz, a 67 year old male from Toronto asks on June 1, 2011,
I understand that even in a finite volume, there are an infinite number of points. Why does the Planck area not apply here?
the answer
William George Unruh
answered on June 1, 2011,
Yes, any volume has an infinite number of points. That is an aspect of the mathematics of the real numbers. If you represent the volume as a part of R^3 (the 3-dimensional space of reals) it is trivial to
show that it has an infinite number of points.
Now, the question can be asked as to whether or not R^3 is a good model of a volume, whether a manifold captures the essence of the physics of a spatial volume. That is where the Planck scale comes in. Does the best model of space have something like quantization of spatial volumes in terms of Planck-scale volumes, or is the manifold structure a better model? No one knows. There is a suspicion that the former is somehow true, but we have no adequate mathematical model which would tell us. The loop quantum gravity people would argue that their results suggest that there is a minimal volume and that any volume, if you measure it somehow, would give a finite number of something like Planck volumes. String theory suggests there is a complementarity between volumes (or any dimensions) and energy, such that if you try to measure to smaller and smaller scales, you will need more and more energy in the string, which will then correspond to more and more curved spacetime, which makes the concept of the volume become more and more problematic. But the "right answer" is liable to be even weirder.
If you found this answer useful, please consider
making a small donation to science.ca. | {"url":"http://www.science.ca/askascientist/viewquestion.php?qID=5297","timestamp":"2014-04-17T04:06:22Z","content_type":null,"content_length":"17194","record_id":"<urn:uuid:f2ddf74a-79ba-4a0b-b308-90b743e22873>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00464-ip-10-147-4-33.ec2.internal.warc.gz"} |
Geometric abelian class field theory
There is a very nice geometric proof by Deligne of the Artin Reciprocity in the geometric setting: namely, for a smooth, projective, geometrically irreducible curve $C$ over a finite field $\mathbb{F}_{q}$, with function field $K=k(C)$ and idele group $\mathbb{I}_{K}:=\prod^{'}_{p\in|C|}K^{*}_{p}$, there is a one-to-one correspondence between the finite quotients of the double quotient space
$k(C)^{*}\backslash\mathbb{I}_{K}/\prod_{p \in |C|}\widehat{\mathcal{O}_{p}^{*}}$ (which is isomorphic to $Pic_{C}(\mathbb{F}_{q})$) and the finite quotients of $\pi^{ab}_{1}(C)$.
Now on the other hand the Artin Reciprocity Law for function fields states (e.g. in Artin-Tate: Class field theory) that the group $\mathbb{I}^{0}_{K}/K^{*}$ of norm 1 idele classes is isomorphic via
the Reciprocity map to $Gal(\bar{K}^{ab}/K\bar{k})$.
My questions would be:
1. These two statements seem to me at first to be different statements, don't they?
2. If we put aside Deligne's geometric proof of the geometric statement (not seriously and not for long :-)), then how could one prove the geometric statement using the "number theoretic" Reciprocity Law for function fields?
ag.algebraic-geometry arithmetic-geometry nt.number-theory
1 Answer
They are different statements. What Deligne proves is the unramified case, i.e. the description of abelian extensions of $K$ unramified everywhere. If you could extend his argument to affine curves then you could possibly prove Artin reciprocity by his method. Going the other way should not be difficult. Have you looked at Serre's book "Groupes algebriques et corps de classes"?
Yes, I am trying now to prove the ramified case, namely for a finite subset S of C I consider 1-dimensional representations of Pic_C,S := isomorphism classes of line bundles on C with fixed isomorphisms of the stalks at the points in S. On the other hand there are the 1-dimensional representations of the tame fundamental group of the open U := C-S. But I do not understand how I could then prove the number-theoretic Artin Reciprocity with it. (I think this is what you mean by the other way... so how does it go?) – Peter Toth Feb 10 '11 at 6:16
I am not sure if just the statement you wrote about the Galois group of $K^{ab}$ is enough, but the standard description of the abelian extensions of $K$ of "modulus m", that is with bounded ramification, should do it. It's all in Serre's book. – Felipe Voloch Feb 10 '11 at 6:54
OK, so as I understand, for a modulus m we can define the generalized Jacobian J_m (which in my notation is basically Pic^0_C,m), and then every abelian, etale covering of U=C-m comes from an isogeny over J_m. So my question transforms to: how can one relate these isogenies to 1-dimensional representations? – Peter Toth Feb 10 '11 at 9:01
Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry arithmetic-geometry nt.number-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/54895/geometric-abelian-class-field-theory?sort=newest","timestamp":"2014-04-17T21:46:15Z","content_type":null,"content_length":"55598","record_id":"<urn:uuid:16d1af98-c903-42b7-bbdc-48e616817677>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00231-ip-10-147-4-33.ec2.internal.warc.gz"} |
Information Processing
The two figures below (click for larger versions) are taken from the Brookings report by Gordon, Kane and Staiger: Identifying Effective Teachers Using Performance on the Job. The report has received a lot of attention recently thanks to Malcolm Gladwell's New Yorker article. Both are worth a look if you are interested in education. The top figure shows that certification has no impact on teaching effectiveness. The second shows that effectiveness measured in the years
1 and 2 is predictive of effectiveness in the subsequent year. In this case effectiveness is defined by the average change in percentile ranking of students in the teacher's class. Good teachers help
their students to improve their mastery, hence percentile ranking, relative to the average student studying the same material.
It's obvious to me that there is gigantic variation in effectiveness among teachers. Gladwell emphasizes how difficult it is to evaluate teaching capability in initial hiring, and how the single most
important impact on overall school effectiveness is due to individual teachers (he also makes the analogy to scouting college QBs for pro football -- it's very hard to predict NFL performance based
on college performance). The Brookings paper has many policy suggestions, but the basic idea is that if we were disciplined and data-driven we could easily determine which teachers are good and which
ones are not.
New Yorker: ...One of the most important tools in contemporary educational research is “value added” analysis. It uses standardized test scores to look at how much the academic performance of
students in a given teacher’s classroom changes between the beginning and the end of the school year. Suppose that Mrs. Brown and Mr. Smith both teach a classroom of third graders who score at
the fiftieth percentile on math and reading tests on the first day of school, in September. When the students are retested, in June, Mrs. Brown’s class scores at the seventieth percentile, while
Mr. Smith’s students have fallen to the fortieth percentile. That change in the students’ rankings, value-added theory says, is a meaningful indicator of how much more effective Mrs. Brown is as
a teacher than Mr. Smith.
It’s only a crude measure, of course. A teacher is not solely responsible for how much is learned in a classroom, and not everything of value that a teacher imparts to his or her students can be
captured on a standardized test.
Nonetheless, if you follow Brown and Smith for three or four years, their effect on their students’ test scores starts to become predictable: with enough data, it is possible to identify who the
very good teachers are and who the very poor teachers are. What’s more—and this is the finding that has galvanized the educational world—the difference between good teachers and poor teachers
turns out to be vast.
Eric Hanushek, an economist at Stanford, estimates that the students of a very bad teacher will learn, on average, half a year’s worth of material in one school year. The students in the class of
a very good teacher will learn a year and a half’s worth of material. That difference amounts to a year’s worth of learning in a single year. Teacher effects dwarf school effects: your child is
actually better off in a “bad” school with an excellent teacher than in an excellent school with a bad teacher. Teacher effects are also much stronger than class-size effects. You’d have to cut
the average class almost in half to get the same boost that you’d get if you switched from an average teacher to a teacher in the eighty-fifth percentile. And remember that a good teacher costs
as much as an average one, whereas halving class size would require that you build twice as many classrooms and hire twice as many teachers.
Hanushek recently did a back-of-the-envelope calculation about what even a rudimentary focus on teacher quality could mean for the United States. If you rank the countries of the world in terms
of the academic performance of their schoolchildren, the U.S. is just below average, half a standard deviation below a clump of relatively high-performing countries like Canada and Belgium.
According to Hanushek, the U.S. could close that gap simply by replacing the bottom six per cent to ten per cent of public-school teachers with teachers of average quality. After years of
worrying about issues like school funding levels, class size, and curriculum design, many reformers have come to the conclusion that nothing matters more than finding people with the potential to
be great teachers. But there’s a hitch: no one knows what a person with the potential to be a great teacher looks like. The school system has a quarterback problem.
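The value-added calculation described in the quoted passage can be sketched in a few lines (toy data invented for illustration, mirroring the Mrs. Brown / Mr. Smith example):

```python
# Toy value-added sketch: mean change in percentile rank per teacher.
classes = {
    "Brown": {"sept": [50, 48, 52], "june": [70, 69, 71]},
    "Smith": {"sept": [50, 51, 49], "june": [40, 41, 39]},
}

def value_added(sept, june):
    """Mean September-to-June change in percentile rank for one class."""
    return sum(j - s for s, j in zip(sept, june)) / len(sept)

scores = {name: value_added(c["sept"], c["june"]) for name, c in classes.items()}
# scores["Brown"] -> 20.0, scores["Smith"] -> -10.0
```

With a few years of such per-teacher numbers, the ranking stabilizes — which is exactly the predictability claim in the passage.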
In my experience as a university professor I find that most colleagues think of themselves as above-average teachers, even when they are not. Essentially no "value-added" analysis is ever done, so
people can have a 30 year teaching career without ever realizing that they aren't effective in the classroom. I've done many dozens of business presentations, to venture capitalists, technology
partners, customers, analysts and even potential M&A acquirers, which has helped me improve my own teaching and communication skills. Despite the business setting such meetings are 90 percent
teaching -- trying to convey key points to the audience in a limited time. I'm usually there with a team and my team isn't shy about telling me afterwards what worked and what didn't work, so I've
had a lot of honest feedback that most professors never get.
The New Yorker cartoon and article capture some essential aspects of teaching and communication that are not widely understood. The teacher has to be simultaneously on top of the material itself
aware of what the class is doing / thinking / confused about. The big neglected factors in teaching are the ability to be a kind of air traffic controller (or symphony conductor) for the class, and
the ability to empathize with (read the mind of) an individual student, to see what, exactly, is confusing them.
Teaching effectiveness
blog comments powered by Disqus | {"url":"http://infoproc.blogspot.com/2008/12/teaching-effectiveness.html","timestamp":"2014-04-20T08:16:03Z","content_type":null,"content_length":"144583","record_id":"<urn:uuid:1c028e3b-5c61-4439-b315-7eca8b261620>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00663-ip-10-147-4-33.ec2.internal.warc.gz"} |
Panorama City Trigonometry Tutor
...I know how to approach a problem and explain to students so they can do it. I have worked in High School and middle school as a math tutor and I have seen first hand how students are struggling
with even the easy notion of solving an equation. When I questioned them I saw that they are struggling also with long division, multiplication tables, and had no idea what negative numbers are.
7 Subjects: including trigonometry, geometry, algebra 1, algebra 2
...I took Prealgebra in Middle School and passed with an A. I have many math books that I would pull information from when tutoring. I received an A for pre-calculus in high school and have used
this material for many courses throughout college.
13 Subjects: including trigonometry, physics, calculus, geometry
...Euler (in the 1700s) was the first man to write the equation down in the form that we know it, and while he is one of the truly monumental minds in the history of mathematics, his work built on
the work of the hundreds of mathematical discoveries that preceded him. In some real sense, the equati...
7 Subjects: including trigonometry, geometry, algebra 2, algebra 1
...In fact, a lot of the learning process includes things that you probably already know! And the amazing thing about this process is that it can be applied to a lot of subjects, both mathematical
and otherwise. Teachers often do not try to make the LEARNING process easier than it can be.
39 Subjects: including trigonometry, English, reading, physics
I've been tutoring since 1993 and I taught high school for one year. I like to have a friendly relationship with my students so it's not such a drag for them to show up to sessions and so they stay
inspired to learn. I've worked with students with different academic backgrounds and learning abilities and understand the potential problems students may run into while learning new material.
10 Subjects: including trigonometry, chemistry, calculus, geometry | {"url":"http://www.purplemath.com/Panorama_City_trigonometry_tutors.php","timestamp":"2014-04-19T17:23:01Z","content_type":null,"content_length":"24530","record_id":"<urn:uuid:630768de-430e-443a-afb5-e5228dca82db>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00397-ip-10-147-4-33.ec2.internal.warc.gz"} |
Standard errors of forecasts in dynamic simulation of nonlinear econometric models: some empirical results
Bianchi, Carlo and Calzolari, Giorgio (1983): Standard errors of forecasts in dynamic simulation of nonlinear econometric models: some empirical results. Published in: Time Series Analysis: Theory
and Practice, ed. by O.D.Anderson No. Amsterdam: North Holland (1983): pp. 177-198.
This is the latest version of this item.
In nonlinear econometric models, the evaluation of forecast errors is usually performed, completely or partially, by resorting to stochastic simulation. However, for evaluating the specific
contribution of errors in estimated structural coefficients, several alternative methods have been proposed in the literature. Three of these methods will be compared empirically in this paper
through experiments performed on a set of "real world" econometric models of small, medium and large size. This work extends to dynamic simulation of nonlinear econometric models, for which the
authors have recently analysed the one-period (static) forecast errors empirically.
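The stochastic-simulation idea the abstract refers to — redrawing estimated coefficients from their sampling distribution and propagating them through a nonlinear model to measure forecast uncertainty — can be sketched generically. The model and all numbers below are invented for illustration; they are not the paper's models or methods:

```python
import math
import random
import statistics

random.seed(42)

# Hypothetical estimated nonlinear model: y = exp(a + b * x), with
# independent coefficient estimates and their standard errors.
a_hat, b_hat = 0.5, 0.3
se_a, se_b = 0.10, 0.06
x = 2.0

forecasts = []
for _ in range(5000):
    a = random.gauss(a_hat, se_a)   # redraw coefficients from their
    b = random.gauss(b_hat, se_b)   # estimated sampling distribution
    forecasts.append(math.exp(a + b * x))

point = math.exp(a_hat + b_hat * x)   # deterministic forecast
mc_se = statistics.stdev(forecasts)   # forecast s.e. due to coefficients
```

Here mc_se is the Monte Carlo estimate of the forecast standard error attributable to errors in estimated coefficients; the paper compares such simulation-based evaluations with alternative methods on real econometric models.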
Item Type: MPRA Paper
Original Title: Standard errors of forecasts in dynamic simulation of nonlinear econometric models: some empirical results
Language: English
Keywords: Nonlinear econometric models; forecast; Monte Carlo; standard errors
Subjects: C - Mathematical and Quantitative Methods > C5 - Econometric Modeling > C53 - Forecasting and Prediction Methods; Simulation Methods
C - Mathematical and Quantitative Methods > C3 - Multiple or Simultaneous Equation Models; Multiple Variables > C30 - General
Item ID: 22657
Depositing User: Giorgio Calzolari
Date Deposited: 17. May 2010 13:23
Last Modified: 17. Feb 2013 21:34
Can't compile numpy 1.02rc3 on OSX 10.3.9
Markus Rosenstihl markusro at element.fkp.physik.tu-darmstadt.de
Fri Oct 20 06:13:49 CDT 2006
On 20.10.2006 at 02:53, Jay Parlar wrote:
>> Hi!
>> I try to compile numpy rc3 on Panther and get the following errors.
>> (I start the build with "python2.3 setup.py build" to be sure to use the
>> python shipped with OS X. I didn't manage to compile Python 2.5 either
>> yet, with similar errors.)
>> Does anybody have an idea?
>> gcc-3.3
>> XCode 1.5
>> November gcc updater is installed
> I couldn't get numpy building with Python 2.5 on 10.3.9 (although I
> had different compile errors). The solution that ended up working for
> me was Python 2.4. There's a bug in the released version of Python 2.5
> that's preventing it from working with numpy, should be fixed in the
> next release.
> You can find a .dmg for Python 2.4 here:
> http://pythonmac.org/packages/py24-fat/index.html
> Jay P.
I have that installed already but I get some bus errors with that.
Furthermore it is built with gcc4,
and I need to compile an extra module (pytables) and I fear that will
not work, hence I try to compile it myself. Python 2.5 doesn't compile
either (libSystemStubs is only on Tiger).
The linking works when I remove -lSystemStubs, and it compiled cleanly.
Numpy rc3 was also compiling now with python 2.5, but the tests failed:
Python 2.5 (r25:51908, Oct 20 2006, 11:40:08)
[GCC 3.3 20030304 (Apple Computer, Inc. build 1671)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.test(10)
Found 5 tests for numpy.distutils.misc_util
Found 4 tests for numpy.lib.getlimits
Found 31 tests for numpy.core.numerictypes
Found 32 tests for numpy.linalg
Found 13 tests for numpy.core.umath
Found 4 tests for numpy.core.scalarmath
Found 9 tests for numpy.lib.arraysetops
Found 42 tests for numpy.lib.type_check
Found 183 tests for numpy.core.multiarray
Found 3 tests for numpy.fft.helper
Found 36 tests for numpy.core.ma
Found 1 tests for numpy.lib.ufunclike
Found 12 tests for numpy.lib.twodim_base
Found 10 tests for numpy.core.defmatrix
Found 4 tests for numpy.ctypeslib
Found 41 tests for numpy.lib.function_base
Found 2 tests for numpy.lib.polynomial
Found 8 tests for numpy.core.records
Found 28 tests for numpy.core.numeric
Found 4 tests for numpy.lib.index_tricks
Found 47 tests for numpy.lib.shape_base
Found 0 tests for __main__
................................Warning: invalid value encountered in
..Warning: invalid value encountered in divide
..Warning: divide by zero encountered in divide
.Warning: divide by zero encountered in divide
..Warning: invalid value encountered in divide
.Warning: divide by zero encountered in divide
.Warning: divide by zero encountered in divide
.Warning: divide by zero encountered in divide
.Warning: divide by zero encountered in divide
..Warning: invalid value encountered in divide
..Warning: invalid value encountered in divide
..Warning: divide by zero encountered in divide
.Warning: divide by zero encountered in divide
.Warning: divide by zero encountered in divide
.Warning: divide by zero encountered in divide
........Warning: invalid value encountered in divide
.Warning: invalid value encountered in divide
..Warning: divide by zero encountered in divide
..............................................Warning: overflow
encountered in exp
..........Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
.....................Warning: invalid value encountered in sqrt
Warning: invalid value encountered in log
Warning: invalid value encountered in log10
..Warning: invalid value encountered in sqrt
Warning: invalid value encountered in sqrt
Warning: divide by zero encountered in log
Warning: divide by zero encountered in log
Warning: divide by zero encountered in log10
Warning: divide by zero encountered in log10
Warning: invalid value encountered in arcsin
Warning: invalid value encountered in arcsin
Warning: invalid value encountered in arccos
Warning: invalid value encountered in arccos
Warning: invalid value encountered in arccosh
Warning: invalid value encountered in arccosh
Warning: divide by zero encountered in arctanh
Warning: divide by zero encountered in arctanh
Warning: invalid value encountered in divide
Warning: invalid value encountered in true_divide
Warning: invalid value encountered in floor_divide
Warning: invalid value encountered in remainder
Warning: invalid value encountered in fmod
FAIL: Ticket #112
Traceback (most recent call last):
packages/numpy/core/tests/test_regression.py", line 219, in
assert(str(a)[1:9] == str(a[0])[:8])
Ran 519 tests in 3.760s
FAILED (failures=1)
<unittest.TextTestRunner object at 0x1545f70>
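As an aside: the long runs of "divide by zero" and "invalid value" warnings above come from the test suite deliberately exercising floating-point edge cases; only the single assertion failure is a real problem. For readers of this archive, here is an illustrative sketch (using the modern NumPy API, which may differ from the 2006-era release discussed in this thread) of how such warnings can be silenced locally:

```python
import numpy as np

# Suppress expected floating-point warnings inside this block only;
# np.errstate restores the previous error-handling settings on exit.
with np.errstate(divide='ignore', invalid='ignore'):
    result = np.array([1.0, -1.0]) / np.array([0.0, 0.0])

print(result)  # [ inf -inf]
```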
More information about the Numpy-discussion mailing list
Sausalito Precalculus Tutor
...I have taught economics, operations research and finance related courses. I have acted as a tutor for MBA students in every course they took in their graduate school curriculum. I have a strong
background in statistics and econometrics.
49 Subjects: including precalculus, calculus, physics, geometry
...Students who are currently “making the grade” will be more confident regarding their abilities and will be fully prepared for “honors” level instruction. High School students will be prepared
for pre-calculus and calculus instruction, and will be in position to confidently pass their AP tests. ALL students are fully capable of high academic achievement.
13 Subjects: including precalculus, calculus, physics, geometry
...I was originally going to double major in these two subjects. My original interest and intent in majoring in these two subjects was to teach freshman and sophomore level college courses, and I
was enrolled in education to teach public school only as a back-up plan and because I thought I might a...
48 Subjects: including precalculus, reading, Spanish, English
...So I turned to something I had done for my friends in High School, my troops in the field, and my neighborhood kids, TUTORING! I have been doing it professionally now for over ten years. I love
it when my student's understand a new concept.
10 Subjects: including precalculus, calculus, geometry, algebra 1
I just recently graduated from the Massachusetts Institute of Technology this June (2010) with a Bachelors of Science in Physics. While I was there, I also took various Calculus courses and
courses in other areas of math that built on what I learned in high school. I'm a definite believer in the value of knowing the ways the world works, and the value of a good education.
6 Subjects: including precalculus, physics, calculus, algebra 1
Is there any easy way to find the remainders of very large integer divisions?
May 23rd 2011, 10:13 PM #1
Hi all, I am Chaitanya. I am preparing for the GRE; unfortunately I am poor at mathematics, but with some help I can understand quickly.
I came across a few questions and solved a few on my own. But I came across one question:
What is the remainder of the division of 14414*14416*14418 by 14?
Is there any easy way to solve this?
Thank you all in advance and sorry if I ask a very silly question.
It's called modular arithmetic. Look it up. It's quite useful, and will open you up to some interesting mathematics.
Thank you dude, do you mean I have to google "modular arithmetic"?
Yes, and you should start by looking at the first hit.
modular arithmetic - Google Search
I am reading from this page.
I didn't understand this paragraph.
Modular arithmetic lets us state these results quite precisely, and it also provides a convenient language for similar but slightly more complex statements. In the above example, our modulus is
the number 2. The modulus can be thought of as the number of classes that we have broken the integers up into. It is also the difference between any two "consecutive" numbers in a given class.
Can anyone explain? Thanks in advance.
we have to find the solution of the eqn
x≡14414*14416*14418 (mod 14)
When 14414 is divided by 14 , it leaves remainder as 8,
14416 is divided by 14, it leaves remainder as 10,
14418 is divided by 14, it leaves remainder as 12.
so we get
14414*14416*14418≡ 8*10*12≡80*12≡10*12≡8(mod 14)
the remainder is 8 when 14414*14416*14418 is divided by 14.
I think this will help, but please check the calculations.
even if you don't know modular arithmetic you can do this:
$14414=14k+8, \, 14416=14l+10, \, 14418=14m+12 \Rightarrow 14414 \cdot 14416 \cdot 14418=(14k+8)(14l+10)(14m+12)$.
Expand this thing to get $14414 \cdot 14416 \cdot 14418=14p+8 \cdot 10 \cdot 12=14q+8$, where $p,q$ are integers.
Thanks dude. I thought the answer would be 3. In the page I was reading (this) the author was describing congruence for addition and multiplication. I thought for division:
Since even*even*even/even is even/even and 0/0≡1 mod 2, I thought the answer is 3. Since the options given were
Since 3 is the only odd number, I thought 3 would be the right answer. However, there is a little confusion in your post. How exactly did you get that 8 is the right answer? Couldn't it be either 10 or 12? Could you please explain.
Thank you in advance
Regarding "0/0≡1mod2": I don't know what you mean by this. 0/0????
@abhishek, I just thought divisions would be represented like that. My guess was wrong.
All I want to know is: how are you people getting 8 as the right answer?
Thank you in advance.
Is modular arithmetic just a way of finding remainders?
It's actually quite a bit more than that, but that is something that it can be used for.
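As a quick sanity check of the answers in this thread, the remainder can be computed directly. This is a small Python sketch added for illustration; it is not part of the original discussion:

```python
# Remainder of 14414 * 14416 * 14418 divided by 14, computed two ways:
# directly, and via the residues 8, 10, 12 used in the thread.
direct = (14414 * 14416 * 14418) % 14
via_residues = ((14414 % 14) * (14416 % 14) * (14418 % 14)) % 14

print(direct, via_residues)  # 8 8
```

Both routes agree because multiplication is compatible with congruence mod 14.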
find an equation of the tangent line to the curve at the given point: y = e^(2x) cos(πx), (0, 1)

\[y=e^{2x}\cos \pi x\] at the point (0, 1)

\[y'=2e^{2x}\cos(\pi x)-\pi e^{2x}\sin(\pi x)\] by the product and chain rule. Replace \(x\) by \(0\) and find your slope.

why do you differentiate again?

you pretty much instantly get \(m=2\)

\((fg)'=f'g+g'f\): you have a product, you need the product rule

\(f(x)=e^{2x},\ f'(x)=2e^{2x}\) by the chain rule; \[g(x)=\cos(\pi x),\ g'(x)=-\pi \sin(\pi x)\] again by the chain rule

no no, I understand that, but you're looking for the slope, so what connection do the equation and the derivative have? The slope is connected... nvm, I answered my own question lol. Is that what you do every time you want to find an equation of a tangent line like this, you take the derivative?

yeah, you need the slope and a point

the slope is the value of the derivative evaluated at the \(x\) value, in your case 0

ok so from the beginning again... you said you take the derivative of the function and then plug 0 in for x?

Basically the function is curved, and at the given point we want to know what it looks like if we zoom in on the graph. As we zoom in to that point the graph starts to look like a straight line with a slope. In order to find that slope we find the derivative of the function and plug in the x value of the point we zoomed in on. The output is our slope. Once we have that we can use the point-slope formula to find the equation.

the tangent line to a curve at a point represents the rate of change of that function at that point. The tangent line not only passes through that point but extends outward with a slope that is equal to the rate of change of the function at that point

ok so once I have the slope do I plug it into \[y-y _{1}=m(x-x _{1})\] with the point (0, 1)?

so the answer should be \[y=2x+1\], right? = )

yes, if the value of the derivative at (0,1) is 2

woo-hoo! I just needed a friendly reminder lol = )
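The whole procedure walked through above can be checked symbolically. This is an illustrative sketch using the SymPy library, which was not part of the original thread:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(2 * x) * sp.cos(sp.pi * x)

# The slope of the tangent line is the derivative evaluated at x = 0.
slope = sp.diff(f, x).subs(x, 0)    # 2
# Point-slope form through (0, 1): y - 1 = m(x - 0).
tangent = sp.expand(slope * x + 1)  # 2*x + 1

print(slope, tangent)
```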
Mathematical Theory of Networks & Systems -
July 28-August 1, 2008 -
The Inn at Virginia Tech and Skelton Conference Center -
Blacksburg, Virginia
Eighteenth International Symposium on Mathematical Theory of Networks and Systems (MTNS 2008)
In cooperation with the Society for
Industrial and Applied Mathematics (SIAM)
Virginia Tech, Blacksburg, Virginia, USA
July 28-August 1, 2008
MTNS is one of the main conferences in the general area of mathematical systems and control theory. The symposium is interdisciplinary and is aimed at mathematicians, engineers and researchers
interested in any aspect of systems theory. MTNS is usually organized every two years and traditionally covers areas involving a wide range of research directions in mathematical systems, networks
and control theory.
MTNS 2008 will be held on the campus of Virginia Tech, July 28-August 1, 2008. The symposium program will include plenary and semiplenary speakers, minicourses, special sessions, as well as
contributed papers.
Conference Themes
Adaptive Control, Algebraic Systems Theory, Applications of Algebraic and Differential Geometry in Systems Theory, Aerospace and Avionic Systems, Artificial Intelligence, Biological Systems, Cellular
Automata, Coding Theory, Communication Systems, Computational Control, Computer Networks, Control issues in Finance, Control of Distributed Parameter Systems, Delay Systems, Discrete Event Systems,
Feedback Control Systems, Hybrid Systems, Information Theory, Infinite Dimensional Systems Theory, Intelligent Control, Internet Control, Linear Systems, Mathematical Theory of Networks and Circuits,
Mechanical Systems, Multidimensional Systems, Multivariable and Large Scale Systems, Neural Networks, Nonlinear Filtering and Estimation, Nonlinear Systems and Control, Numerical and Symbolic
Computations in Systems Theory, Operator Theoretic Methods in Systems Theory, Optimal Control, Optimization : Theory and Algorithms, Process Control, Quantum Information Theory, Quantum Control,
Robotics, Robust and H-Infinity Control, Signal Processing, Stability, Stochastic Control and Estimation, Stochastic Modeling and Stochastic Systems Theory, Symbolic Dynamics, System Identification,
Systems on Graphs, Transportation Systems, VLSI Design, Wavelets.
More information:
Prof. Joseph A. Ball
Department of Mathematics
Virginia Tech
Blacksburg, Virginia 24061-0123
e-mail: mtns2008@math.vt.edu
Differentiation of Exponential Functions: 37,500e^(10/8) - My Math Forum
November 2nd, 2013, 07:50 AM #3
Re: Differentiation of Exponential Functions: 37,500e^(10/8)
Originally Posted by SnappleG:
Hello Everyone,
A tapestry purchased in 1998 for $300,000 is estimated to be worth v(t) = 300,000e^(t/8) dollars after t years. At what rate will the tapestry be appreciating in 2008?
I start with v'(t) = 300,000e^(t/8) * (1/8) = 37,500e^(t/8).
Can you please explain how to solve v'(10) = 37,500e^(10/8)? Thank you!
I have an example: v'(4) = 112,500e^(4/2)
I don't understand what this has to do with the previous problem.
The answer is ~ 831,269
I just don't understand how you come to ~ 831,269
You don't.
10/8 = 5/4 = 1.25. [tex]e^{1.25}\approx 3.4903429[/tex]
37500 times 3.4903429 is about 130,888, not 831,269.
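A quick numerical check of the value above (an illustrative Python sketch, not part of the original thread):

```python
import math

# v(t) = 300,000 e^(t/8), so v'(t) = 37,500 e^(t/8);
# 2008 corresponds to t = 10 years after the 1998 purchase.
rate = 37500 * math.exp(10 / 8)

print(round(rate))  # 130888
```

So the tapestry is appreciating at roughly $130,888 per year in 2008.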
Matrix Spectra with Troublesome Trigonometric Term
June 25th 2010, 01:07 AM
Matrix Spectra with Troublesome Trigonometric Term
I’m trying to find the spectra (eigenvalues) of matrix
$A = \[ \left( \begin{array}{ccc}<br /> cos(x) & -sin(x) & 0 \\<br /> sin(x) & cos(x) & 0 \\<br /> 0 & 0 & 1 \end{array} \right)\]$
I start by generating its characteristic polynomial which yields:
$(1-\lambda)[(cos(x)-\lambda)(cos(x)-\lambda)+sin(x)sin(x) ]$
Expanding and applying the identity $\sin^2(x)+\cos^2(x)=1$:
$(1-\lambda)[-2\lambda cos(x) + \lambda^2 + 1]$
$(1-\lambda)(\lambda^2 - 2\lambda.cos(x) + 1)$
At this point, I want to extract roots, the solutions for eigenvalues $\lambda$. One, from the LH term, is clearly $\lambda = 1$. But that $-2cos(x)$ in the middle is stopping me from factorizing
it via any way I can work out. I expect, since A was orthogonal, that the other two roots will be a complex conjugate pair but am not 100% on that – in any case, I can’t see how I can legally
break that trig term into a complex one.
Any ideas?
June 25th 2010, 04:51 AM
You did everything fine, as far as I can see...! Now, the discriminant of $t^2-2\cos x\,\,t+1$ is
$\Delta=-4\sin^2x=(2i\sin x)^2$ , so using the roots formula for a quadratic eq. we
get $t_{1,2}=\frac{2\cos x\pm 2i\sin x}{2}=\cos x\pm i\sin x=e^{\pm ix}$, and here you have your conjugate pair of roots. (Wink)
June 25th 2010, 05:18 AM
Hi Tonio
Thanks for the quick reply!
I get most of what you're saying and it looks good. However, one thing I don't follow: you identify the discriminant as $-4sin^2x$, but when I try to determine it:
$\triangle = b^2-4ac$
Now squaring $b$ gives $b^2 = 4cos^2(x)$, so $\triangle = 4cos^2(x)-4$. I'm not sure how that converted to a sine function?
My guess is I got something wrong with my trig manipulations!
June 25th 2010, 05:32 AM
Just a few random thoughts: the eigenvalue problem has to do with operators whose action on an eigenvector is equivalent to multiplying the eigenvector by a scalar. Now, the matrix you've
exhibited is a rotation about the z axis through an angle x. Therefore, I would expect any ol' vector along the z axis to be an eigenvector with eigenvalue 1. Any other vector could be an
eigenvector with eigenvalue 1 if you rotated through $x=2\pi k$ radians, with $k\in\mathbb{Z}$. In addition, any vector in the xy plane could be an eigenvector with eigenvalue -1, if you rotate
through an angle $(2k+1)\pi$ with $k\in\mathbb{Z}$. I don't think there are any other eigenvector/eigenvalue pairs. At least, none come to mind, because the fact is, that aside from a vector
along the z axis, this rotation is going to change the direction of any other vector, unless $x=2\pi k$ with $k\in\mathbb{Z}$.
I just say all this by way of providing a double-check mechanism on your calculations, which seem to be in good hands.
June 25th 2010, 05:40 AM
Ah, that trigonometry! $\cos^2x+\sin^2x=1\Longrightarrow \cos^2x-1=-\sin^2x$ ...
June 25th 2010, 05:44 AM
Ah - I should have seen that. Thanks a load!
Ackbeet - cheers for the background. I've not looked at rotations in much depth yet but will be moving on to it soon. I'll ponder your words!
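As a numerical double check of the thread's conclusion, here is an illustrative NumPy sketch (the angle 0.7 is an arbitrary choice, not from the original posts):

```python
import numpy as np

x = 0.7  # an arbitrary rotation angle
A = np.array([[np.cos(x), -np.sin(x), 0.0],
              [np.sin(x),  np.cos(x), 0.0],
              [0.0,        0.0,       1.0]])

eigenvalues = np.linalg.eigvals(A)

# The spectrum should be {1, e^{ix}, e^{-ix}}: one real eigenvalue
# plus a complex conjugate pair, all on the unit circle.
for expected in (1.0, np.exp(1j * x), np.exp(-1j * x)):
    assert np.isclose(eigenvalues, expected).any()
print(eigenvalues)
```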
(a) From The Multiple Angle Formulas For Cosines... | Chegg.com
(a) From the multiple angle formulas for cosines and sines, we see that cos((k+1)θ) = cos(kθ)cos(θ) - sin(kθ)sin(θ) and cos((k-1)θ) = cos(kθ)cos(θ) + sin(kθ)sin(θ). Adding these gives cos((k+1)θ) + cos((k-1)θ) = 2cos(θ)cos(kθ), so with t = cos(θ) and T[k](t) = cos(k arccos(t)) we obtain the three-term recurrence T[k+1](t) = 2t T[k](t) - T[k-1](t).
(b) Applying the three-term recurrence for Chebyshev polynomials, starting with T[0](t) = 1 and T[1](t) = t, we have T[2](t) = 2t^2 - 1, T[3](t) = 4t^3 - 3t, T[4](t) = 8t^4 - 8t^2 + 1, T[5](t) = 16t^5 - 20t^3 + 5t.
(c) T[k](t) = cos(k arccos(t)) vanishes when k arccos(t) is an odd multiple of π/2, so the roots are t_i = cos((2i - 1)π / (2k)), i = 1, ..., k. Substituting the expression for the roots into the definition T[k](t) = cos(k arccos(t)), we obtain T[k](t_i) = cos((2i - 1)π/2) = 0, as required.
The extrema of T[k](t) correspond to the zeros of its derivative, T[k]'(t) = k sin(k arccos(t)) / sqrt(1 - t^2), which vanishes when k arccos(t) = iπ, i.e., at t_i = cos(iπ / k), i = 0, ..., k.
Substituting the expression for the extrema into the derivative, we obtain sin(k arccos(t_i)) = sin(iπ) = 0, confirming these are critical points; moreover T[k](t_i) = cos(iπ) = (-1)^i, so the extreme values alternate between +1 and -1.
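The root and extremum formulas used above can be verified numerically. A small NumPy sketch, added for illustration and not part of the original solution:

```python
import numpy as np

k = 5
# Roots of T_k: t_i = cos((2i - 1)π / (2k)), i = 1..k.  T_k(t_i) should be 0.
roots = np.cos((2 * np.arange(1, k + 1) - 1) * np.pi / (2 * k))
print(np.allclose(np.cos(k * np.arccos(roots)), 0.0))  # True

# Extrema of T_k: t_i = cos(iπ / k), i = 0..k.  T_k alternates between ±1 there.
extrema = np.cos(np.arange(k + 1) * np.pi / k)
print(np.cos(k * np.arccos(extrema)).round(12))  # alternates between 1 and -1
```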
Susanna Epp Homepage
Susanna S. Epp
Vincent de Paul Professor of Mathematical Sciences
Contact Information
Mailing Address:
Department of Mathematical Sciences
DePaul University
2320 N. Kenmore
Chicago, IL 60614
Office: Schmidt Academic Center 520
Telephone: (773) 325-4880
Fax: (773) 325-7807
E-mail: sepp followed by ("at" symbol).depaul.edu
Autumn 2013 Course
MAT 660 Discrete Structures for Mathematics Teachers
A Sampling of Websites with Applets and Other Activities for Discrete Mathematics
Click here for a listing.
Resource Page for Discrete Mathematics with Applications, 3rd Edition
Click here.
Resource Page for Discrete Mathematics with Applications, 4th Edition
Click here.
Resource Page for Discrete Mathematics: An Introduction to Mathematical Reasoning
Click here.
Ph.D. (Mathematics) University of Chicago, 1968
(Dissertation Title: Submodules of Cayley Algebras, Advisor: Irving Kaplansky)
M.S. (Mathematics) University of Chicago, 1965
B.A. with highest distinction (Mathematics) Northwestern University, 1964
Selected Publications
Examining the Role of Logic in Teaching Proof. In Proof and Proving in Mathematics Education, G. Hanna & M. de Villiers, Eds. Springer Publishing, pp. 369-389, 2012. (co-authors: V. Durand-Guerrier,
P. Boero, N. Douek, D. Tanguay)
Argumentation and Proof in the Mathematics Classroom. In Proof and Proving in Mathematics Education, G. Hanna & M. de Villiers, Eds. Springer Publishing, pp. 349-367, 2012. (co-authors: V.
Durand-Guerrier, P. Boero, N. Douek, D. Tanguay)
Variables in Mathematics Education. In Tools for Teaching Logic. Blackburn, P.; van Ditmarsch, H.; Manzano, M.; Soler-Toscano, F., eds. Springer Publishing, 2011. (Preprint) Reprinted in The Best
Writing on Mathematics 2012, M. Pitici, Ed., Princeton Univ. Press, 2012.
Proof Issues with Existential Quantification. In Proof and Proving in Mathematics Education: ICMI Study 19 Conference Proceedings, F. L. Lin et al eds., National Taiwan Normal University, 2009.
The Use of Logic in Teaching Proof. In Resources for Teaching Discrete Mathematics. B. Hopkins, ed. Washington, DC: Mathematical Association of America, 2009, pp. 313-322. (Preprint)
Taking Place Value Seriously: Arithmetic, Estimation, and Algebra (with Roger Howe), Resources for PMET (Preparing Mathematicians to Educate Teachers), MAA Online, 2008.
Undergraduate Programs and Courses in the Mathematical Sciences: CUPM Curriculum Guide 2004, Washington, D.C.: The Mathematical Association of America, 2004. (Co-author with Harriet Pollatsek et al.)
Click here to link to the Internet version of the book. Click here for a brief article about the curriculum guide.
The Role of Logic in Teaching Proof, American Mathematical Monthly (110)10, Dec. 2003, 886-899. (©The American Mathematical Monthly, all rights reserved.)
Discrete Mathematics with Applications, 4th Edition, Boston, MA: Brooks/Cole Publishing Company, 2004. (1st Edition, Belmont: CA: Wadsworth Publishing Company, 1990, 2nd Edition. Pacific Grove, CA:
Brooks/Cole Publishing Company, 1995, 3rd Edition, Belmont, CA: Brooks/Cole Publishing Company, 2004.)
Report on the NCTM Principles and Standards for School Mathematics (with Kenneth Ross). In FOCUS, The Newsletter of the Mathematical Association of America, August-September 2000.
The Language of Quantification in Mathematics Instruction. In Developing Mathematical Reasoning in Grades K-12. Lee V. Stiff, Ed. Reston, VA: NCTM Publications, 1999, 188-197.
A Unified Framework for Proof and Disproof. The Mathematics Teacher, vol. 91, no. 8, November, 1998, 708-713.
Logic and Discrete Mathematics in the Schools. In Discrete Mathematics in the Schools, DIMACS Series in Discrete Mathematics and Theoretical Computer Science, vol. 36. Joseph G. Rosenstein, D. S.
Franzblau, F. S. Roberts, Eds. Providence, RI: AMS Publications, 1997, 75-83.
The Role of Proof in Problem Solving. In Mathematical Thinking and Problem Solving. Alan H. Schoenfeld, Ed. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc., Publishers, 1994, 257-269.
Precalculus and Discrete Mathematics. Glenview, IL: Scott Foresman, 1991. (Co-author with Anthony Peressini, Zalman Usiskin, et al.)
The Logic of Teaching Calculus. In Toward a Lean and Lively Calculus: Report of the Conference Workshop to Develop Curriculum and Teaching Methods for Calculus at the College Level. Ronald G.
Douglas, Ed. Washington, D.C.: Mathematical Association of America, 1987, 41-59. | {"url":"http://condor.depaul.edu/sepp/","timestamp":"2014-04-17T09:34:12Z","content_type":null,"content_length":"39331","record_id":"<urn:uuid:a86144e2-2c68-443c-a0a5-9d299f14f5da>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00295-ip-10-147-4-33.ec2.internal.warc.gz"} |
Department of Mathematical Sciences
Submissions from 1991
A presentation of the free group on finitely many generators in the variety generated by Dm, Michael H. Albert and David Patrick
Anisotropic motion of a phase interface, Sigurd B. Angenent and Morton E. Gurtin
Co-volume methods for degenerate parabolic problems, Lisa A. Baughman and Noel Walkington
Interpretation of the Lavrentiev phenomenon by relaxation, Giuseppe Buttazzo and Victor J. Mizel
On the expected performance of a parallel algorithm for finding maximal independent subsets of a random graph, Neil J. Calkin, Frieze, and L Kucera
On the fundamental eigenfunctions of a clamped punctured disk, C. V.(Charles Vernon) Coffman and Richard James Duffin
On the thermodynamics of periodic phases, Bernard D.(Bernard David) Coleman, Moshe Marcus, and Victor J. Mizel
A Neumann problem with critical Sobolev exponent, Myriam Comte and Gabriella Tarantello
Hamilton cycles in a class of random directed graphs, Colin Cooper and Frieze
A minimization problem involving variation of the domain, Dacorogna and Irene Fonseca
A regularized equation for anisotropic motion-by-curvature, Antonio Di Carlo, Morton E. Gurtin, and Paolo Podio-Guidugli
A regularized equation for anisotropic motion-by-curvature, Antonio Di Carlo, Morton E. Gurtin, and Paolo Podio-Guidugli
Computing the volume of convex bodies : a case where randomness provably helps, Martin Dyer and Frieze
Random walks, totally unimodular matrices and a randomised dual simplex algorithm, Martin Dyer and Frieze
The average performance of the greedy matching algorithm, Martin Dyer, Frieze, and Boris Pittel
Phase transitions and generalized motion by mean curvature, L. C. Evans, H. Mete. Soner, and Panagiotis E. Souganidis
Quasiconvex integrands and lower semicontinuity in L1, Irene Fonseca and Stefan Müller
Equilibrium configurations of defective crystals, Irene Fonseca and Gareth Parry
On a class of invariant functionals, Irene Fonseca and Gareth Parry
Remarks on variational problems for defective crystals, Irene Fonseca and Gareth Parry
Relaxation of multiple integrals in the space BV(Ω;Rp), Irene Fonseca and Piotr Rybka
Non-monotonic transformation kinetics and the morphological stability of phase boundaries in thermoelastic materials, Eliot Fried
Polychromatic Hamilton cycles, Frieze and Bruce A. Reed
Counting the number of Hamilton cycles in random digraphs, Frieze and Stephen Suen
Generalized interface evolution with the Neumann boundary condition, Yoshikazu Giga and Moto-Hiko Sato
Evolving phase boundaries in deformable continua, Morton E. Gurtin
On the formulation of mehcanical [i.e., mechanical] balance laws for structured continua, Morton E. Gurtin and Paolo Podio-Guidugli
Two-phase continuum mechanics with transport and stress, Morton E. Gurtin and Peter Voorhees
Frustration and microstructure : an example in magnetostriction, Richard D. James and David Kinderlehrer
Second variation of liquid crystal energy at x [absolute value of x], David Kinderlehrer and Biao Ou
Formation of singularities for viscosity solutions of Hamilton-Jacobi equations in higher dimensions, Georgios T. Kossioris
Formation of singularities for viscosity solutions of Hamilton-Jacobi equations in one space variable, Georgios T. Kossioris
Jacobians and Hardy spaces, P. L.(Pierre-Louis) Lions
Approximation of dissipative hereditary systems, Richard C. MacCamy
A variational problem arising from a model in thermodynamics, M Marcus
Variational problems for liquid crystals with variable degree of orientation, Victor J. Mizel
Lyapunov exponents and stochastic flows of linear and affine hereditary systems : a survey, Salah-Eldin A. Mohammed
A one dimensional nearest neighbor model of coarsening, W. W. Mullins
Dynamical modeling of phase transitions by means of viscoelasticity in many dimensions, Piotr Rybka
Uniqueness and singularities of cylindrically symmetric surfaces moving by mean curvature, H. Mete. Soner and Panagiotis E. Souganidis
A local translation of untyped [lambda] calculus into simply typed [lambda] calculus, Richard Statman
There is no hyperrecurrent S,K combinator, Richard Statman
Morse index concentration for elliptic problems with Sobolev exponent, Gabriella Tarantello
Some properties of finitely decidable varieties, Matthew Valeriote
Discriminating varieties, Matthew Valeriote and Ross Willard
Models of pattern formation, A Visintin
Decidable discriminator varieties with lattice stalks, Ross Willard
Quotient-topological completions and hulls of concrete categories, Oswald Wyler
Submissions from 1990
Regularity of stochastic delay equations under pth order degeneracy, Denis R. Bell and Salah-Eldin A. Mohammed
Dissipative boundary conditions for one-dimensional wave propagation, Jacobo Bielak and Richard C. MacCamy
Symmetric finite element and boundary integral coupling methods for fluid-solid interaction, Jacobo Bielak and Richard C. MacCamy
Spanning maximal planar subgraphs of random graphs, Béla Bollobás and Frieze
On the motion of a phase interface by surface diffusion, Fabrizio Davi and Morton E. Gurtin
Randomized greedy matching, Martin Dyer and Frieze
The Wulff theorem revisited, Irene Fonseca
An uniqueness proof for the Wulff theorem, Irene Fonseca and Stefan Müller
On perfect matchings in random bipartite graphs with minimum degree at least two, Frieze
On the length of the longest monotone subsequence in a random permutation, Frieze
A mechanical theory for crystallization of a rigid solid in a liquid melt : melting-freezing waves, Morton E. Gurtin
On the continuum mechanics of the motion of a phase interface, Morton E. Gurtin
On the thermodynamics of viscoelastic materials of single-integral type, Morton E. Gurtin and W Hrusa
Some remarks on the Stefan problem with surface structure, Morton E. Gurtin and H. Mete. Soner
Equilibrium models with singular asset prices, Ioannis Karatzas, John P. Lehoczky, and Steven E. Shreve
Equivalent martingale measures and optimal market completions, Ioannis Karatzas, John P. Lehoczky, and Steven E. Shreve
Characterizations of Young measures, David Kinderlehrer and Pedregal
Weak convergence of integrands and the Young measure representation, David Kinderlehrer and Pedregal
A variational problem for nematic liquid crystals with variable degree of orientation, Victor J. Mizel, Diego Roccato, and Epifanio G. Virga
Lyapunov exponents of linear stochastic functional differential equations driven by semimartingales., Salah-Eldin A. Mohammed and Michael K. R. Scheutzow
DET = DET : a remark on the distributional determinant, Stefan Müller
Weakly Lipschitzian mappings and restricted uniqueness of solutions of ordinary differential equations, David R. Owen and Keming Wang
A free boundary problem related to singular stochastic control, Steven E. Shreve and H. Mete. Soner
Optimal investment and consumption with transaction costs., Steven E. Shreve, H. Mete. Soner, and Gan-Lin Xu
Motion of a set by the curvature of its boundary, H. Mete. Soner
Decidable discriminator varieties from unary classes, Ross Willard
A duality method for optimal consumption and investment under short-selling prohibition, Gan-Lin Xu and Steven E. Shreve
Submissions from 1989
Asymptotic behavior of solutions to a class of time-dependent Volterra equations, Aizicovici, Stig-Olof Londen, and Simeon Reich
Characterizing stability and superstability by unions of chains and saturated models, Michael H. Albert and Rami Grossberg
Rich models, Michael H. Albert and Rami Grossberg
Multiphase thermomechanics with interfacial structure., Sigurd B. Angenent and Morton E. Gurtin
Necessary and sufficient conditions for a sum-free set of positive integers to be ultimately periodic, Neil J. Calkin
The strong perfect graph conjecture holds for diamonded odd cycle-free graphs, Olivia M. Carducci
The duals of harmonic Bergman spaces, C. V.(Charles Vernon) Coffman and Jonathan Cohen
A coercive bilinear form for Maxwell's equations, Costabel
A remark on the regularity of solutions of Maxwell's equations on Lipshitz domains, Costabel
Probabilistic analysis of the generalised assignment problem, Martin Dyer and Frieze
A spatial branch-and-bound algorithm for some unconstrained single-facility location problems, Richard H. Edahl and Carnegie Mellon University.Engineering Design Research Center.
Lower semicontinuity of surface energies, Irene Fonseca
Phase transitions of elastic solid materials, Irene Fonseca
The displacement problem for elastic crystals, Irene Fonseca and Luc Tartar
Probabilistic analysis of graph algorithms, Frieze
On small subgraphs of random graphs, Alan Frieze
Indiscernible sequences in stable models, Rami Grossberg
On chains of relatively saturated submodels of a stable model, Rami Grossberg
A hyperbolic theory for the evolution of plane curves, Morton E. Gurtin and Paolo Podio-Guidugli
Multiphase thermomechanics with interfacial structure., Morton E. Gurtin and A.(Allan A.) Struthers
A transport theorem for moving interfaces, Morton E. Gurtin, A.(Allan A.) Struthers, and William O. Williams
On formulation of singularities in one-dimensional nonlinear thermoelasticity, W Hrusa and Salim Messaoudi
Equilibrium in a simplified dynamic, stochastic economy with heterogeneous agents, Ioannis Karatzas
Martingale and duality methods for utuility [i.e., utility] maximization in an incomplete market, Ioannis Karatzas
An approximation result for strongly positive kernels, Stig-Olof Londen | {"url":"http://repository.cmu.edu/math/index.4.html","timestamp":"2014-04-18T15:47:17Z","content_type":null,"content_length":"67447","record_id":"<urn:uuid:c0572169-7b7a-4ad3-abfd-29a97c6ba2ec>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00513-ip-10-147-4-33.ec2.internal.warc.gz"} |
Spintronics is a new paradigm for electronics which utilizes the electron's spin in addition to its charge for device functionality. The primary areas for potential applications are information
storage, computing, and quantum information. In terms of materials, the study of spin in solids now includes metallic multilayers, inorganic semiconductors, transition metal oxides, organic
semiconductors, and carbon nanostructures. The diversity of materials studied for spintronics is a testament to the advances in synthesis, measurement, and interface control that lie at the heart of
Why spin?
We begin with the question, "What is special about electron spin?" From a scientific and technological point of view, there are four important points. First is the connection between spin and
magnetism, which is useful for information storage. Second is an intrinsic connection between spin and quantum mechanics, which may be useful for quantum information. Third is the short range of
spin-dependent exchange interactions, which implies that the role of spin will continue to grow as the size of nanostructures continues to shrink. Fourth are the issues of speed and power
dissipation, which are becoming increasingly important for electronics at the nanoscale.
First, spin is connected to ferromagnetic materials because the spontaneous magnetization breaks time-reversal symmetry, which allows the electronic states within the material to become
spin-dependent. This contrasts with non-magnetic materials where time-reversal symmetry forces the electronic states to come in pairs with the same energy but opposite spin, thus leading to a density
of states that must be independent of spin. In the ferromagnetic metal, the density of states (DOS) is different for the two spin states. It is conventional to refer to the majority spin as "spin up"
while the minority spin is "spin down." Because most transport properties depend on the density of states near the Fermi level, the spin asymmetry in the density of states allows ferromagnets to
generate, manipulate, and detect spin.
Ferromagnetic materials also possess the property of hysteresis, where the magnetization can have two (or more) different stable states in zero magnetic field. The bistability is due to a property
called magnetic anisotropy, where the energy of a system depends on the direction of the magnetization. There is a preferred axis ("easy axis") with stable states for magnetization direction along φ = 90° and φ = 270°. When a large magnetic field (H) is applied along an easy axis, the magnetization (M) will align with this field in order to lower the Zeeman energy, E[Zeeman] = -M·H. When the
magnetic field is turned off, the magnetization will ideally maintain all of its high-field magnetization. A magnetic field applied in the opposite direction will cause the magnetization to reverse
after the field crosses a value known as the "coercivity" or "coercive field," which depends on the height of the magnetic anisotropy energy barrier. This magnetic anisotropy generally depends on
both the material and its shape. In terms of information storage applications, the two stable magnetic states in zero magnetic field correspond to the logical "0" and "1" of a data bit. The data can
be written by applying a magnetic field larger than the coercivity to align the magnetization along the field. Due to the anisotropy energy barriers, this state is stable even when the magnetic field
is turned off. This property makes ferromagnets natural candidates for information storage. Thus, the connection between spin and ferromagnetism establishes a natural connection between spintronics
and information storage applications.
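The bistability described above can be made concrete with a toy calculation (my own sketch, not from the article): take a single-domain, Stoner-Wohlfarth-style energy with a uniaxial anisotropy term and a Zeeman term, E(φ) = K sin²(φ) - M H cos(φ), and count its local minima. With no field there are two stable directions, the "0" and "1" of the bit; once the field is well past the switching scale 2K/M, only one remains.

```java
public class AnisotropySketch {

    // Toy single-domain energy per unit volume: uniaxial anisotropy K*sin^2(phi)
    // plus the Zeeman term -M*H*cos(phi), field applied along the easy axis.
    public static double energy(double K, double M, double H, double phi) {
        double s = Math.sin(phi);
        return K * s * s - M * H * Math.cos(phi);
    }

    // Count strict local minima of E(phi) on a coarse grid over [0, 2*pi).
    public static int countMinima(double K, double M, double H) {
        int n = 3600, count = 0;
        for (int i = 0; i < n; i++) {
            double e  = energy(K, M, H, 2 * Math.PI * i / n);
            double ep = energy(K, M, H, 2 * Math.PI * ((i + 1) % n) / n);
            double em = energy(K, M, H, 2 * Math.PI * ((i - 1 + n) % n) / n);
            if (e < ep && e < em) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        double K = 1.0, M = 1.0;
        System.out.println("minima at H = 0:    " + countMinima(K, M, 0.0)); // 2 -> bistable bit
        System.out.println("minima at H = 3K/M: " + countMinima(K, M, 3.0)); // 1 -> switched
    }
}
```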
Second, spin is connected to quantum mechanics. In classical mechanics, the angular momentum (i.e. rotational motion) can be divided into two parts, an "orbital" angular momentum and a "spin" angular
momentum. If one considers the motion of the earth, the elliptical motion of the earth around the sun generates orbital angular momentum, while the rotation of the earth about its axis generates spin
angular momentum. For elementary particles such as the electron, the physics is governed by quantum mechanics. In an atom, the motion of the electron around the nucleus generates orbital angular
momentum. The electron also has spin angular momentum, but it is NOT due to a “spinning” motion of the electron. (OK, in a dinner conversation you can describe spin as the spinning motion of an
object, but technically this is wrong.)
Instead, spin comes out of quantum mechanics when you try to combine it with special relativity, as Dirac did in the 1920's. In solving the Dirac equation, one of the consequences is the requirement
of an internal property that is now known as spin. Because of its intrinsically quantum mechanical origin, it should be of little surprise that the electron spin has very unusual properties. For
example, its value along any particular axis (say, the z-axis) can only take on two values: S[z] = +ħ/2 and -ħ/2. In other notations, these are called "spin up" and "spin down", | ↑ > and | ↓ >, or |m[s] = ½ > and |m[s] = -½ >. It also obeys a Heisenberg uncertainty principle where the three components of spin (S[x], S[y], S[z]) cannot be measured simultaneously. Most importantly from the
point of view of computing applications, the spin can be in a quantum superposition state, such as A| ↑ >+B| ↓ > where the coefficients A and B are complex numbers. If you think about digital
electronics as being built on bits that can have two states "0" or "1", you can think about spin as a "quantum bit" which can be in states | ↑ > , | ↓ > , or in a superposition state A| ↑ >+B| ↓ >,
where |A|[2] is the probability of finding the spin in the | ↑ > state and |B|[2] is the probability of finding the spin in the | ↓ > state. The quantum bit, or "qubit," lies at the heart of a new
type of proposed computer known as a quantum computer which could in principle perform some tasks such as factorizing numbers or performing searches much more efficiently than normal digital
computers. There are many schemes proposed for quantum computing (with most of them being unrelated to electron spin) but there is a debate about whether a scalable quantum computer will ever be
realized. Nonetheless, it is a worthy pursuit that pushes the boundaries of our knowledge and technical capabilities, since electron spin presents the opportunity to exploit quantum mechanical
behavior in solids in a manner that is generally inaccessible in purely charge-based electronics. It is this special relation between spin and quantum mechanics which forms a natural connection
between spintronics and advanced computing, whether the goal is a full-blown quantum computer or a more modest form of quantum information processing that has yet to be devised.
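The amplitude bookkeeping for a qubit can be illustrated in a few lines (my own sketch; the state and numbers are invented for illustration): a normalized state A|↑⟩ + B|↓⟩ has measurement probabilities |A|² and |B|², which must sum to 1.

```java
public class QubitSketch {

    // A spin state A|up> + B|down> with a complex amplitude (re, im);
    // |amplitude|^2 = re^2 + im^2 is the measurement probability.
    public static double prob(double re, double im) {
        return re * re + im * im;
    }

    public static void main(String[] args) {
        // Example: equal superposition with a relative phase,
        // A = 1/sqrt(2), B = i/sqrt(2).
        double s = 1.0 / Math.sqrt(2.0);
        double pUp = prob(s, 0.0);
        double pDown = prob(0.0, s);
        System.out.printf("P(up) = %.2f, P(down) = %.2f, total = %.2f%n",
                pUp, pDown, pUp + pDown);
        // prints: P(up) = 0.50, P(down) = 0.50, total = 1.00
    }
}
```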
Third, the length scale of spin-dependent exchange interactions is on the order of a few atomic spacings. Because of this, the spin-dependent properties in solids are very sensitive to the atomic
scale structure. With the ability to engineer interfaces at the atomic level by growth techniques such as molecular beam epitaxy (MBE), and with the ongoing improvements in the fabrication of smaller
and smaller structures in the lateral dimensions, the role of spin is likely to become even more important as nanoscale science continues to advance. Understanding how spin and magnetism depend on
the atomic scale interface and material structure will be an important area of investigation for the development of spintronics. | {"url":"http://www.asdn.net/asdn/electronics/spintronics.shtml","timestamp":"2014-04-18T21:25:32Z","content_type":null,"content_length":"31182","record_id":"<urn:uuid:98bb9ebf-9184-4421-99e9-b4a2bca02032>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00003-ip-10-147-4-33.ec2.internal.warc.gz"} |
A limit to Shoenfield Absoluteness
Shoenfield's Absoluteness Theorem states that if $\phi$ is any $\Sigma^1_2$ sentence of second-order arithmetic, then $\phi$ is absolute between any two models of $ZF$ which share the same ordinals.
This means that such $\phi$ are unaffected by forcing, or by Axiom of Choice-related considerations (since for any $V\models ZF$, the corresponding subclass $L$ is a model of $ZFC$).
What I'm looking for is a simple example of either a $\Sigma^1_3$ or $\Pi^1_3$ sentence $\psi$ which is not absolute in this way - ideally, such a $\psi$ which is true in some $V\models ZFC$ and
false in a generic extension $V[G]$. This is a purely pedagogical question for me - I'm trying to internalize Shoenfield Absoluteness, and I feel that a nice example of why it can't be made much
stronger would help.
Thank you all in advance.
lo.logic set-theory forcing
3 Answers
Noah: The sentence "there is a real not in $L$" is $\Sigma^1_3$: To say that $x\notin L$ means that for every $y$, if $y$ codes a model of the form $L_\alpha$, then $x$ is not in this
model; but to say that $y$ codes an $L_\alpha$ (for a sufficiently "closed" $\alpha$) means that $y$ codes a structure $(M,E)$ (this is arithmetic) that satisfies, say, $KP+V=L$ (you can
do this in, say, a $\Delta^1_1$ way); and you need to express that this structure is well-founded, for which you just have to say that no real codes a decreasing sequence through its
Counting quantifiers, this comes out $\Sigma^1_3$. It is false in $L$ and true after adding a Cohen real.
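For bookkeeping, here is one way to display the count sketched above (my own rendering, with WF(y) abbreviating "no real codes a descending sequence through the relation coded by y" and Mod(y) abbreviating "y codes a structure satisfying KP + V = L"):

```latex
x \notin L \;\Longleftrightarrow\;
\forall y \, \Bigl[ \underbrace{\neg \mathrm{WF}(y)}_{\Sigma^1_1}
\;\vee\; \underbrace{\neg \mathrm{Mod}(y)}_{\Delta^1_1}
\;\vee\; \underbrace{x \notin M_y}_{\text{arithmetic}} \Bigr]
```

The matrix is $\Sigma^1_1$, so $x \notin L$ is $\Pi^1_2$, and prefixing $\exists x$ gives the $\Sigma^1_3$ statement "there is a real not in $L$."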
On the other hand, if $\phi$ is $\Pi^1_3$ and holds in an outer model of $V$, then it holds in $V$: The sentence says that every real (in the extension) has an absolute property, so in
particular every real in $V$ has that property, in $V$.
This, however, is not "the limit of absoluteness," as per your title, only of Shoenfield's absoluteness. Large cardinals in the universe grant you much stronger absoluteness properties
between $V$ and its forcing extensions. Even at the level of (boldface) $\Sigma^1_3$ generic absoluteness, this involves sharps. A nice proof is sketched in my paper with Ralf Schindler,
"Projective well-orderings of the reals" Archive for Mathematical Logic 45 (7) (2006), 783-793, available at my page.

Beyond this level but still looking at projective statements, strong cardinals are involved, by a nice result of Woodin. You can read the details in John Steel's "the derived model
theorem," which also gives you a nice introduction to some of the topics that come up when studying absoluteness, such as universally Baire representations of sets. The paper is
available at John's page.
Even beyond this level something can be said. Now you require Woodin cardinals, and absoluteness is tied up with determinacy. In fact, the statement that "no set forcing can change the
theory of $L({\mathbb R})$, even allowing reals as parameters," is equivalent to saying that determinacy holds in $L({\mathbb R})$ in any set forcing extension. This in turns is
equivalent, for example, to the statement that the mouse $M_\omega^\sharp$ exists and is fully iterable. $M_\omega^\sharp$ is a fine structural object (a generalization of a level of $L$
or of $L[\mu]$) with $\omega$ Woodin cardinals. Full iterability is a technical condition, but it ensures that in any forcing extension, $L({\mathbb R})$ is a derived model, a
requirement from which determinacy follows.
And we can even continue beyond $L({\mathbb R})$ for a bit. But I'll stop here.
Thank you very much! This is an excellent answer. I've changed the title accordingly, and I'm starting to look at the papers you mentioned. – Noah S Aug 3 '11 at 16:53
Glad to be of help. – Andres Caicedo Aug 3 '11 at 16:57
The classical example is the statement:
Every real is constructible.
To see that this is $\Pi^1_3$, first note that if $T$ is any theory in a language containing $\in$, then "$T$ has a countable wellfounded model" is a $\Sigma^1_2$ statement. For each $r \subseteq \omega$, take $T_r$ to be a sufficiently large fragment of ZFC+V=L plus the axioms $R(n)$ (resp. $\lnot R(n)$) for $n \in r$ (resp. $n \notin r$), where $R$ is a new unary predicate and the $n$ are new constant symbols together with axioms which make them correspond to elements of $\omega$. Then $T_r$ has a countable wellfounded model for every $r \subseteq \omega$ iff every real is constructible. Choosing the $T_r$ judiciously gives a $\Pi^1_3$ formulation of the above.
Jensen and Solovay dealt with the question. For example $0^\#$ is a $\Delta^1_3$ real number, which is obviously not absolute because it cannot go deeper than $L[0^\#]$, and clearly it
can't be absolute enough to be in $L$.
Jensen and Solovay point out that assuming a measurable cardinal, then there is such example. But what if one doesn't want to assume large cardinals?
Solovay constructed the following example. $M$ is a transitive countable model of $V=L$. Then there is $N=M[a]$ where $a$ is a real number and there is a $\Pi^1_2$ predicate $S(x)$ such
that $N\models(\exists!x\subseteq\omega)S(x)$, and $N\models S(a)$, and moreover $N=M[a]$.
Jensen extended this result and showed that we can have that every $y\in\mathcal P(\omega)^L$ is recursive in $a$.
Both these results appear in the paper:
Jensen, R. B.; Solovay, R. M. "Some applications of almost disjoint sets." Mathematical Logic and Foundations of Set Theory (Proc. Internat. Colloq., Jerusalem, 1968) (1970) pp.
Not the answer you're looking for? Browse other questions tagged lo.logic set-theory forcing or ask your own question. | {"url":"https://mathoverflow.net/questions/71965/a-limit-to-shoenfield-absoluteness","timestamp":"2014-04-18T10:48:05Z","content_type":null,"content_length":"63247","record_id":"<urn:uuid:8b068e5d-db64-4dc6-8850-04b628223324>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00113-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Reflection Principles and Proof Speed-up
Ilya Tsindlekht eilya497 at 013.net
Fri Jun 8 03:09:11 EDT 2007
On Thu, May 31, 2007 at 07:09:05PM -0400, Richard Heck wrote:
> hendrik at topoi.pooq.com wrote:
> > On Wed, May 30, 2007 at 04:41:25PM +0100, Rob Arthan wrote:
> >
> >> Several people in the mechanized theorem-proving world have proposed reflection
> >> principles as a way of increasing the power of a proof tool. Nuprl for example
> >> has such a rule allowing you to infer P from a theorem asserting that P is provable.
> >>
> > Doesn't this just assert that anything proven inside the system is true?
> > If this is done within the system, doesn't it imply that the system
> > can show its own consistency, and hence is inconsistent?
> >
> > How does it get around this?
> >
> Certainly by L"ob's theorem, if any axiomatic system S (of sufficient
> strength, etc) proves all instances of Bew_S('A') --> A, then S is
> inconsistent. But formulating such a system isn't entirely trivial. Take
> PA. If we add the instances of Bew_{PA}('A') --> A, then we don't have
> PA anymore. So what is needed is going to be some kind of
> diagonalization: We want a system PA* that adds to PA precisely the
> instances of: Bew_{PA*}('A') --> A. I'm sure this can be done, but, as I
> said, it doesn't seem trivial.
It seems one can use the same technique as in the construction of Goedel's
undecidable statement: take the sentence which says 'PA + (the sentence obtained
from the sentence with Goedel number N by substituting N for 'N') proves A ->
A' and then substitute its own Goedel number for N.
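One standard way to cash out this kind of self-reference (my own sketch, using the recursion theorem rather than the literal substitution trick described above): ask for a recursively enumerable axiom set $W_e$ satisfying

```latex
W_e \;=\; \mathrm{PA} \;\cup\;
\bigl\{\, \mathrm{Bew}_{W_e}(\ulcorner A \urcorner) \rightarrow A \;:\; A \ \text{a sentence} \,\bigr\}
```

Since the right-hand side is r.e. uniformly in $e$, Kleene's recursion theorem supplies such an $e$. The resulting theory proves every instance of its own reflection schema, so by Löb's theorem it is inconsistent, which is exactly the obstacle discussed at the top of the thread.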
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2007-June/011656.html","timestamp":"2014-04-16T22:12:55Z","content_type":null,"content_length":"4237","record_id":"<urn:uuid:bada74bd-3109-45ec-b1c3-115a8dc4652c>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00203-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help with canvas
Join Date
Oct 2008
Rep Power
Alright, I'm having trouble getting this program to work correctly. I'm sure it's one of my equations, but I'm not real good with trig, and it works up to a point.
The goal: make a random number of dots that circle around a center point (in this case 250, 250) and stay at a constant distance from said point. Each dot is to travel at a different speed.
Java Code:
import java.awt.Color;
import java.math.*;
import java.lang.*;

public class dots {

    private double xposition;
    private double yposition;
    private double angle;
    private double distance;
    private double x;
    private double y;
    private double yChange;
    private double xChange;
    private Color color;

    public dots() {
        reset();
    }

    public void reset() {
        xposition = (Math.random() * 400 + 1);
        yposition = (Math.random() * 400 + 1);
        color = new Color((int)(Math.random() * 256), (int)(Math.random() * 256), (int)(Math.random() * 256));
        xChange = (Math.random() * .0001);
        yChange = (Math.random() * .0001);
        x = 250 - xposition;
        y = 250 - yposition;
        distance = Math.sqrt((Math.pow(x, 2) + Math.pow(y, 2)));
    }

    public void advance() {
        x = 250 - xposition;
        double a = x / distance;
        angle = Math.acos(a);
        xposition = 250 - (distance * Math.cos((angle - xChange)));
        yposition = 250 - (distance * Math.sin((angle - yChange)));
        System.out.println("angle: " + angle);
        System.out.println("x pos: " + xposition);
        System.out.println("y pos: " + yposition);
    }

    public Color getColor() {
        return color;
    }

    public int getX() {
        return (int)xposition;
    }

    public int getY() {
        return (int)yposition;
    }
}
This is my dot class, which is activated by my GUI class. There is a problem in advance() somewhere; I've tried several different equations, but this is the only one that produces any results.
My problem is the dots all stop when they hit about the 9 o'clock position, like they are hitting an invisible wall.
Any help would be great! And I know my equations are probably very wrong; again, not good at trig.
Join Date
Oct 2008
Rep Power
This is my GUI code, if you need it.
Java Code:
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import java.math.*;
import java.lang.*;
public class SpinningDots implements ActionListener {
    private JFrame window;
    private LayoutManager layout;
    private Canvas canvas;
    private JButton run;

    public SpinningDots() {
        window = new JFrame("Spinning Dots");
        layout = new FlowLayout();
        window.setBounds(300, 250, 550, 600);
        canvas = new Canvas();
        run = new JButton("Run");
    }

    public void actionPerformed(ActionEvent e) {
        Graphics g = canvas.getGraphics();
        dots[] dots = new dots[15];
        for (int i = 0; i < dots.length; i++) {
            dots[i] = new dots();
        }
        double StartTime = System.currentTimeMillis();
        double time;
        do {
            time = System.currentTimeMillis();
            for (dots d : dots) {
                g.fillOval(d.getX(), d.getY(), 6, 6);
                d.advance(); // move the dot between the two draws
                g.fillOval(d.getX(), d.getY(), 6, 6);
            }
        } while (time <= (StartTime + 10000));
    }

    public static void main(String[] args) {
        new SpinningDots();
    }
}
the dots all stop when they hit about the 9 o clock position
How are the values changing before they stop?
And then what do they do after they stop?
Can you post the values with notes showing where the values change and where they stop?
You may want to read up on Graphics coding at the Sun Graphics tutorial. A few suggestions:
1) Don't mix AWT components (the Canvas) with Swing components (JFrame and everything else). Stick with Swing and draw on a JPanel or JComponent.
2) Don't get a Graphics object by extracting it from a component. Instead paint within the JPanel or JComponent's paintComponent(Graphics g) method and use the Graphics object, g, that is
passed to the program from the JVM.
HTH and good luck.
Java Code:
x pos: 169.97101431583582
y pos: 250.00195732369585
angle: 1.9864818412147502E-5
x pos: 169.97101430261705
y pos: 250.00100910198697
angle: 8.016340004459174E-6
That is what it repeats the entire time.
As for the mixing of the types: this is for my intro to Java class, and so far this is how he has taught us to go about using Canvas and such. I will indeed read the tutorial, though.
Edit: this is from a single dot, not all 15.
What is your algorithm for the moving of the dot?
Have you worked out on a piece of graph paper the correct values of x,y and angle for what you want to happen?
What are the values for x, y, and angle you need computed to have the dot move where you want it to be?
Work with a single dot to figure out how to change the x, y and angle to have it move where you want.
What is your algorithm for the moving of the dot?
Have you worked out on a piece of graph paper the correct values of x,y and angle for what you want to happen?
What are the values for x, y, and angle you need computed to have the dot move where you want it to be?
Work with a single dot to figure out how to change the x, y and angle to have it move where you want.
I haven't worked it out on paper yet; I will have to sit down and do that.
As for the angle, it is random, based on where the dot is formed relative to the center, which is (250, 250).
Java Code:
x = 250 - xposition;
y = 250 - yposition;
distance = Math.sqrt((Math.pow(x, 2) + Math.pow(y, 2)));
That finds the dot's x, y, and distance from the center; this is called when the dot is created.
Below is what is called when the dot is to move.
Java Code:
x = 250 - xposition; //again finds updated x location
//only x is needed because it is used to find the angle
double a = x / distance; //sets up to find the cos
angle = Math.acos(a); //finds the current angle from the center using acos, this should be the correct way to find the angle it is all i could find
xposition = 250 - (distance * Math.cos((angle - xChange))); //finds new x position by taking the center point and subtracting the distance to keep it the same distance away, times angle minus the angle of change which is determined when dot is made
yposition = 250 - (distance * Math.sin((angle - yChange))); //same as above
Thanks for the suggestions; I'll sit down and plot out the dot at a few points.
I've seen some programs that display the sweep hands for clocks. If you could find one of those, you could get the code for moving a position in a circle.
You don't say if the dot moves clockwise or counterclockwise.
A comment on using hardcoded values in code. Don't.
Make the 250 a final int variable and use that in your code.
Ah, sorry, yeah, they are supposed to be going counterclockwise, and I'll try to find some examples like you said. Also, thanks for pointing that out; I didn't even think of doing that.
Well, I got it to go all the way around (I wasn't using pi in my equation), but now it is going way too fast and I haven't found a good way to slow it down.
Java Code:
public void advance() {
    x = 250 - xposition;
    y = 250 - yposition;
    double a = x / distance;
    angle = Math.acos(a);
    /*if(getY() >= 248){
        xposition = 250 - (distance * Math.cos((angle - xChange)));
        yposition = 250 - (distance * Math.sin((angle - yChange)));
    }*/
    xposition = 250 + (distance * Math.cos((angle - xChange) * Math.PI));
    yposition = 250 + (distance * Math.sin((angle - yChange) * Math.PI));
    //System.out.println("angle: " + angle);
    //System.out.println("x pos: " + xposition);
    //System.out.println("y pos: " + yposition);
}
Use Thread.sleep() in the loop that computes the dots' new positions before calling repaint() to have the new positions drawn.
I tried doing that, and it's not that they are coming too fast (sorry, I wasn't clear). It's that now, with my updated code,
Java Code:
xposition = 250 + (distance * Math.cos(((angle - xChange) * Math.PI)));
yposition = 250 + (distance * Math.sin(((angle - yChange) * Math.PI)));
because it's times PI now, the values are jumping too much, so it is skipping wide spaces; therefore I believe I need to divide something by something, but I've tried many different things and nothing has helped.
You haven't stated what your algorithm is, so I can't tell what your code is supposed to do.
A design could be:
Start the dot at a given angle from the zero line. For example, it could be straight up (the noon position). At each time increment, change the angle by a small amount and compute the new x, y for that angle and radius. Draw the dot at its new position.
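That design can be sketched as follows (a minimal Python sketch rather than Java; the Dot class and the centre, radius, and step values are illustrative, not taken from the thread). The key point is to keep the angle as state and increment it each tick, instead of recovering it from x with acos, which only returns values in [0, pi] and so cannot tell the top half of the circle from the bottom:

```python
import math

class Dot:
    """Keep the angle as state; derive x and y from it each tick."""

    def __init__(self, cx, cy, radius, angle, speed):
        self.cx, self.cy = cx, cy
        self.radius = radius
        self.angle = angle   # current angle in radians
        self.speed = speed   # radians per tick; vary per dot for varied speeds

    def advance(self):
        # No acos needed: just move the angle along.
        self.angle += self.speed

    def x(self):
        return self.cx + self.radius * math.cos(self.angle)

    def y(self):
        # Screen y grows downward, so subtract to go counterclockwise.
        return self.cy - self.radius * math.sin(self.angle)

d = Dot(cx=250, cy=250, radius=100, angle=0.0, speed=0.05)
for _ in range(10):
    d.advance()
# The distance from the centre stays constant by construction.
print(round(math.hypot(d.x() - 250, d.y() - 250), 6))  # 100.0
```

Slowing the dots down is then just a matter of using a smaller speed (or sleeping between ticks), with no extra factor of pi needed.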
Tenafly Geometry Tutor
Find a Tenafly Geometry Tutor
...I have worked for several tutoring organizations and privately tutored high school students on a one to one basis. This is my 3rd year with WyzAnt and am currently working with 6 different
students in Pre Algebra, Algebra 1, Geometry and Trig. I feel that my strengths are in communicating key m...
20 Subjects: including geometry, algebra 1, GRE, finance
...Let me be your guide and companion in your next academic journey and you will find the trip far easier and more pleasant than you imagined!I have taught algebra techniques not only as a topic
on its own but also in conjunction with the physical sciences and biology since the 1980s. I am an econo...
50 Subjects: including geometry, chemistry, physics, GRE
...From a teaching and tutoring standpoint, I am always patient and encouraging. In addition, I try to foster a learning environment that motivates and builds success. I take pleasure in showing
students that with appropriate instruction and a little hard work they are capable of significantly more than they imagined.
12 Subjects: including geometry, calculus, statistics, finance
...Usually, this entails helping a student learn challenging new material or providing a student with reinforcement that will keep them performing at their peak. In a fun and easygoing
atmosphere, I teach the student strategies for mastering new material and removing obstacles from their learning p...
17 Subjects: including geometry, reading, writing, algebra 1
...I have helped students achieve dramatic grade improvements and raise their test scores by several hundred points in short periods of time! I accomplish this by focusing on developing a
student's problem-solving, critical reasoning and test-taking abilities. Students have achieved perfect SAT math and essay scores, 450 point improvements and earned full merit scholarships!
34 Subjects: including geometry, calculus, writing, GRE | {"url":"http://www.purplemath.com/Tenafly_geometry_tutors.php","timestamp":"2014-04-17T21:26:50Z","content_type":null,"content_length":"23871","record_id":"<urn:uuid:7b75f69d-a352-443e-823b-94700a8730d4>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00340-ip-10-147-4-33.ec2.internal.warc.gz"} |
Arcadia, CA Algebra 1 Tutor
Find an Arcadia, CA Algebra 1 Tutor
...It is so rewarding to work with students to help them gain confidence and see improvement in their grades in a subject they are having difficulty with. I greatly enjoy helping students connect
with math to help them gain the confidence they need to successfully master this subject. I have been a junior math teacher and taught pre-algebra, geometry and Algebra 1 for several years.
22 Subjects: including algebra 1, English, reading, writing
...Over the years, I have worked with people who have learning disabilities, mental disorders, and physical limitations. I consider it a privilege to work with people who have special needs and I
admire their tenacity and ability to persist when faced with challenges. It takes a lot of courage to fight back and I will be one of the first ones to encourage you and to cheer you on!
69 Subjects: including algebra 1, reading, chemistry, Spanish
...I prefer the physical sciences. The way I teach will be by showing you step by step how to do a problem for a few times. Then I will let you take on the problem itself and make sure you
understand why each step is necessary.
9 Subjects: including algebra 1, calculus, differential equations, physical science
...At higher levels, you also need trig to understand chemistry and biology as well. The SAT is a limited measure, but it does represent a very important core set of skills that a student needs
to be successful in college. Sometimes too much emphasis is placed on this test when grades, recommendations, and personal statements provide a more nuanced perspective on a student.
14 Subjects: including algebra 1, calculus, trigonometry, geometry
...I have a deep love of all things mathematical, and I find teaching reading strategies, writing techniques and even grammar rules to be a delight. I recently moved here from NYC, where I spent
two years working for a premium SAT test prep company. I've helped more than 100 students reach their h...
26 Subjects: including algebra 1, Spanish, reading, writing
Re: st: three level variance components using xtmixed
From "Huseyin Tastan" <hntastan@hotmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: three level variance components using xtmixed
Date Wed, 16 Aug 2006 06:40:23 -0400
No, I do not have multiple measurements per year. The reason for considering time (year) is that it captures macroeconomic events affecting all firms in all industries. I want to estimate how much of the variation in the dependent variable is due to firm, industry, and time effects. I can run the model with two levels for each year and compare the results, but I want to know whether the time effect is important or not.
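As a rough illustration of the intended decomposition (a plain-Python simulation, not Stata; the sample sizes and component standard deviations below are arbitrary choices): with independent effects, the overall variance of y should land near the sum of the four component variances.

```python
import random
import statistics

random.seed(1)
n_ind, n_firm, n_year = 20, 50, 10       # industries, firms per industry, years
sb, sc, sd, se = 1.0, 2.0, 0.5, 1.5      # std. deviations of b, c, d, e

b = [random.gauss(0, sb) for _ in range(n_ind)]                           # industry
c = [[random.gauss(0, sc) for _ in range(n_firm)] for _ in range(n_ind)]  # firm
d = [random.gauss(0, sd) for _ in range(n_year)]                          # year

y = [b[i] + c[i][j] + d[t] + random.gauss(0, se)
     for i in range(n_ind) for j in range(n_firm) for t in range(n_year)]

total = statistics.pvariance(y)
expected = sb**2 + sc**2 + sd**2 + se**2   # 1 + 4 + 0.25 + 2.25 = 7.5
print(round(total, 1))                     # typically lands near 7.5
```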
From: Joseph Coveney <jcoveney@bigplanet.com>
Reply-To: statalist@hsphsun2.harvard.edu
To: Statalist <statalist@hsphsun2.harvard.edu>
Subject: Re: st: three level variance components using xtmixed
Date: Wed, 16 Aug 2006 19:23:36 +0900
Huseyin Tastan wrote:
I want to analyze variance components of a measure of firm performance (such
as return on equity) using random effects at three levels: industry level,
firm level and time level.
I have data on
industries: i=1,2,...,20 (there are 20 industries)
firms: j=1,2,...,1000 (there are 1000 firms)
years: t=1,2,...,10 (there are 10 years)
The specific model is written as follows:
y_ijt = a + b_i + c_j + d_t + e_ijt
There are four variances to estimate in this model:
var(y) = var(b) + var(c) + var(d) + var(e)
I have tried xtmixed to estimate this model, but the convergence was extremely slow (I used the reml option), and for some dependent variables it didn't even converge. The command I used was something like this:
xtmixed y || industry: || firm: || year:, variance
From the description, it looks as if there are only two variance components
above the residuals.
y_ijt = mu + industry_i + firm_ij + e_ijt
Try -xtmixed y || industry: || firm:, variance-.
Do you have multiple measurements per year? Is there a reason to consider
time as random?
Joseph Coveney
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
4.2 - Uni-polar stepper motors
A uni-polar stepper motor looks like this:
The only difference is that each of the coils now has 3 wires, so the motor will normally have 6 leads. The new wire on each coil is called a 'centre tap' and is connected to the middle of the coil. If you have no documentation on your motor, then use your ohmmeter to check the connections. By checking for infinite resistance you should be able to identify which leads belong to one coil and which belong to the other. Given a group of 3 leads, you can tell which is the centre tap because, for coil 1, the resistance between '1a' and the centre tap will be the same as that between the centre tap and '1b'. The resistance between '1a' and '1b' will be double this value. NB: the resistances tend to be very low, a few ohms, so you will need to select the appropriate resistance scale on your meter.
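The lead-identification logic just described is simple enough to mechanise. In this sketch (Python; the lead names and resistance readings are made-up examples, not measurements), the pair of leads with the largest reading between them are the two coil ends, so the remaining lead must be the centre tap:

```python
# Measured resistances (ohms) between each pair of the three leads of one coil.
# Here lead "C" happens to be the centre tap: tap-to-end is half of end-to-end.
readings = {("A", "B"): 4.0, ("A", "C"): 2.0, ("B", "C"): 2.0}

def centre_tap(readings):
    # The largest reading is across the whole coil, i.e. between the two ends.
    ends, _ = max(readings.items(), key=lambda kv: kv[1])
    leads = {lead for pair in readings for lead in pair}
    return (leads - set(ends)).pop()

print(centre_tap(readings))  # C
```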
Uni-polar stepper motor driver
As mentioned earlier, you can drive a uni-polar motor 'as if' it were a bi-polar motor. To do this you just ignore the centre tap and then use the other two leads per coil as if it were a bi-polar.
Otherwise, you want to use the centre tap and, assuming you connect it to ground, you will need two switches to dictate which direction the current flows through the coil.
Note that this mode of operation means that you are only using half of the coil in each direction, so the 'half coil' has only half of the total resistance. Using Volts = Amps x Resistance (and assuming your supply voltage is the same), if the resistance is halved then the current drain is doubled.
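To make that arithmetic concrete (the voltage and resistance figures here are illustrative, not from any particular motor): with the supply voltage fixed, halving the resistance by driving only half the coil doubles the current draw, per I = V / R.

```python
V = 12.0               # supply voltage (example value)
R_full = 4.0           # whole-coil resistance in ohms (bi-polar drive)
R_half = R_full / 2    # half-coil resistance seen from the centre tap

I_full = V / R_full    # current with the whole coil
I_half = V / R_half    # current with half the coil

print(I_half / I_full)  # 2.0
```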
So why would you choose uni-polar over bi-polar if it requires twice as much current?
Compared to the bi-polar H-bridge driver, which requires '4 switches' per coil, the uni-polar circuit only needs '2 switches' per coil. So less electronics!
The price you pay is that you may be using twice as much current, and because you are using half of the coil at a time you may only get half the torque. Despite its name (uni versus bi), it sounds as if it is less capable, but remember that you can always use a uni-polar motor as if it were a bi-polar by ignoring the centre tap. So a uni-polar can be thought of as a bi-polar with extra choices.
stratification structure
Let $M$ be a smooth manifold on which $G$ acts smoothly, where $G$ is a compact Lie group. Suppose that $X=\bigcup_iS_i\subset M$ is a Whitney stratification with $G$-invariant strata, i.e. every
stratum $S_i$ is a $G$-invariant submanifold of $M$. Then we have $$X/G=\bigcup_i(S_i/G)\subset M/G.$$ It is well known that, for every smooth $G$-manifold $M$, the orbit space $M/G$ is a Whitney
stratifiction, so is $X/G$ a Whitney stratification as well ?
From Scholarpedia
A solitary wave is a localized "wave of translation" that arises from a balance between nonlinear and dispersive effects. In most types of solitary waves, the pulse width depends on the amplitude. A
soliton is a solitary wave that behaves like a "particle", in that it satisfies the following conditions (Scott, 2005):
1. It must maintain its shape when it moves at constant speed.
2. When a soliton interacts with another soliton, it emerges from the "collision" unchanged except possibly for a phase shift.
That is, for a conservative (non-dissipative) system, a soliton is a solitary wave whose amplitude, shape, and velocity are conserved after a collision with another soliton. In Section 5, we provide
code that is set up to show the collision between two soliton solutions of the Korteweg-de Vries (KdV) equation. We show the result of running this code in Figure 1 and Figure 2.
In the physics literature, the terms "soliton" and "solitary wave" are often used interchangeably. Solitary waves (and solitons) arise in both continuous systems such as the KdV equation and discrete
systems such as the Toda lattice (Toda, 1967) (Toda, 1989) and in both one and multiple spatial dimensions. Key issues in studying solitary waves also include linear versus nonlinear (of course),
integrable versus nonintegrable, persistent versus transient, asymptotics (i.e., consideration of time scales), localization in physical space versus Fourier space, and the effects of noise.
Although it seems that solitons are now sometimes taken for granted, they are in fact very special objects (Scott, 2005):
• Without the benefit of hindsight, it is absolutely amazing that solitons exist at all. One might have expected nonlinearity to destroy such structures, particularly in light of decades of
experience with low-dimensional nonlinear dynamical systems.
• Many physical systems can be modeled quite successfully using equations that admit soliton solutions. Indeed, solitons and solitary waves have been observed in numerous situations and often
dominate long-time behavior.
• Equations with soliton solutions have a profound mathematical structure.
Historical development
Let's start with a brief historical overview. For additional information, see (Heyerhoff 1997) (Scott, 2005).
The first recorded solitary wave was observed in 1834 when a young engineer named John Scott Russell was hired for a summer job to investigate how to improve the efficiency of designs for barges
that were designated to ply canals---particularly the Union Canal near Edinburgh, Scotland. One August day, the tow rope that was connecting the mules to the barge broke and the barge suddenly
stopped---but the mass of water in front of its blunt prow "... rolled forward with great velocity, assuming the form of a large solitary elevation, a rounded, smooth and well defined heap of water,
which continued its course along the channel without change of form or diminution of speed." (Russell, 1844)
Russell pursued this serendipitous observation and "... followed it [the launched 'Wave of Translation'] on horseback, and overtook it still rolling on at a rate of some eight or nine miles per hour,
preserving its original form some thirty feet long and a foot to a foot and a half in height." He then conducted controlled laboratory experiments using a wave tank and quantified the phenomenon in
an 1844 publication (Russell, 1844). He demonstrated four facts:
1. The solitary waves that he observed had a hyperbolic secant shape.
2. A sufficiently large initial mass of water can produce two or more independent near-solitary waves that separate in time.
3. Solitary waves can cross each other "without change of any kind".
4. In a shallow water channel of height \(h\ ,\) a solitary wave of amplitude \(A\) travels at a speed of \([g(A+h)]^{1/2}\ ,\) where \(g\) is the gravitational acceleration. That is,
larger-amplitude waves move faster than smaller ones---a nonlinear effect.
In 1895, the Dutch physicist Diederik Korteweg and his student Gustav de Vries (KdV) derived a nonlinear partial differential equation (PDE) (de Vries, 1895) (Scott, 2005), \[ \tag{1} \phi_t + \phi_{xxx} + 6\phi\phi_x = 0, \]
that now bears their name. Korteweg and de Vries argued that the KdV equation (1) could describe Russell's experiments. Equation (1) shows that the rate of change of the wave's height in time is
governed by the sum of two terms: a nonlinear one (the amplitude effect) and a dispersive one (the effect that causes waves of different wavelengths to travel with different velocities). Korteweg and
de Vries found a periodic solution in addition to a solitary-wave solution that resembled the wave that Russell had followed. These solutions arose as a result of a balance between nonlinearity and
dispersion. Their work and Russell's observations fell into obscurity and were ignored by mathematicians, physicists, and engineers studying water waves until 1965 when Norman Zabusky and Martin
Kruskal published their numerical solutions of the KdV equation (and invented the term "soliton") (Zabusky, 1965). Kruskal derived (1) as an asymptotic (continuum) description of oscillations of
unidirectional waves propagating on the "cubic" Fermi-Pasta-Ulam (FPU) nonlinear lattice (Fermi, 1955)(Porter, 2009b)(Weissert, 1997). Meanwhile, Morikazu Toda became the first to discover a soliton
in a discrete, integrable system (the system is now referred to as the Toda lattice) (Toda, 1967).
In 1965, Gary Deem, Zabusky, and Kruskal (Deem, 1965) produced films that showed interacting solitary waves in an FPU lattice, the KdV equation, and a modified KdV equation; see the discussion in the
review article (Zabusky, 1984). We depict the dynamics of solitons in the KdV equation in the space-time diagram of Figure 1. Robert Miura recognized the significance of this result and found an
exact transformation between this modified KdV equation and equation (1) (Miura, 1976). This awakened the mathematical study of solitons, as Clifford Gardner, John Greene, Martin Kruskal, and Robert
Miura in 1967 were able to solve the initial-value problem for the KdV equation by introducing the inverse scattering method (Miura, 1968)(Gardner, 1967)(Gardner, 1974), providing an appropriate
notion of integrability for continuum frameworks. Vladimir Zakharov and Alexei Borisovich Shabat generalized the inverse scattering method in 1972 when they solved the nonlinear Schrödinger (NLS)
equation, another model nonlinear PDE, demonstrating both its integrability and the existence of soliton solutions. In 1973, Mark Ablowitz, David Kaup, Alan Newell, and Harvey Segur demonstrated the
existence of soliton solutions (and proved the integrability) of several other nonlinear PDEs, including the sine-Gordon equation (which was already known to be integrable based on Albert Bäcklund's 19th-century investigations of surfaces with constant negative Gaussian curvature). Other researchers have subsequently derived other integrable PDEs (in both one and multiple spatial dimensions) and
constructed accompanying soliton solutions. As the Kadomtsev-Petviashvili (KP) equation illustrates (see Section 3), one needs to be more nuanced as to what constitutes a "soliton" in multiple
spatial dimensions. When studying solitary waves in nonintegrable equations, analytical techniques typically rely on perturbative methods, asymptotic analysis, and/or variational approximations
(Kivshar, 1989) (Scott, 2005). An important example of a nonintegrable system with exact solutions for isolated solitary waves is the coupled mode equations for fiber Bragg gratings in optics.
Research on solitary waves and solitons remains one of the most vibrant areas of mathematics and physics (Scott, 2005). It has had a broad and far-reaching impact in myriad fields ranging from the
purest mathematics to experimental science. This has led to crucial results in integrable systems, nonlinear dynamics, optics, biophysics, supersymmetry, and more. Later in this article, we will
discuss some of the types and applications of solitary waves. First, we construct soliton solutions to (1).
Explicit construction of the KdV soliton
It is illustrative to demonstrate the construction of the soliton solution of the KdV equation (1) explicitly. We start with the ansatz \[ \tag{2} \phi = \psi(y)\,, \qquad y = x - Ut, \]
which describes a wave translating with speed \(U\ .\) Inserting this into (1) yields \[ \tag{3} -U\psi' + \psi''' + 6\psi\psi' = 0, \]
where \(' \equiv \frac{d}{dy}\ .\) Integrating (3) and then multiplying the resulting equation by \(\psi '\) and integrating again yields \[ \tag{4} -\frac{U}{2}\psi^2 + \frac{1}{2}(\psi')^2 + \psi^3
+ G_1\psi + G_2 = 0, \] where \(G_1\) and \(G_2\) are constants of integration.
We want a solution in the form of a localized pulse, so we need \(\psi\ ,\) \(\psi '\ ,\) and all higher derivatives to vanish as \(y \rightarrow \pm \infty\ .\) This implies that \(G_1 = G_2 = 0\ .
\) [If one keeps nonzero constants, one can instead derive extended waves in the form of elliptic functions (Whitham, 1974).] This gives \[ \tag{5} -\frac{U}{2}\psi^2 + \frac{1}{2}(\psi')^2 + \psi^3
= 0. \]
Solving (5) by separation of variables yields \[ \tag{6} \phi(x,t) = \frac{U}{2}{\rm sech}^2\left\{\frac{\sqrt{U}}{2}(x-Ut-x_0)\right\}, \]
where \(x_0\) is a constant. We depict the solution (6) in Figure 4.
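The solution (6) can be checked against (1) numerically (a plain-Python sketch; the speed U = 2, offset x0 = 0, sample point, and finite-difference step are arbitrary choices): evaluating \(\phi_t + \phi_{xxx} + 6\phi\phi_x\) with central differences should give a residual near zero.

```python
import math

U, x0 = 2.0, 0.0

def phi(x, t):
    """The sech^2 soliton (6)."""
    s = 1.0 / math.cosh(math.sqrt(U) / 2 * (x - U * t - x0))
    return (U / 2) * s * s

def d1(f, z, h=1e-3):   # central first derivative
    return (f(z + h) - f(z - h)) / (2 * h)

def d3(f, z, h=1e-3):   # central third derivative
    return (f(z + 2*h) - 2*f(z + h) + 2*f(z - h) - f(z - 2*h)) / (2 * h**3)

x, t = 0.7, 0.3
residual = (d1(lambda s: phi(x, s), t)            # phi_t
            + d3(lambda s: phi(s, t), x)          # phi_xxx
            + 6 * phi(x, t) * d1(lambda s: phi(s, t), x))  # 6 phi phi_x
print(abs(residual) < 1e-4)  # True
```

The same functions also confirm the peak value: at x = Ut + x0 the wave height is exactly U/2, and taller solitons travel faster, as Russell observed.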
The solitary wave menagerie
Since the discovery of solitary waves and solitons, a menagerie of localized pulses has been investigated in both one dimension and multiple spatial dimensions (Scott, 2005), though one must be
nuanced when considering what constitutes a solitary wave (or even a localized solution) in multiple spatial dimensions. Many localized pulses have been given a moniker ending in "on" for
conciseness, although they do not in general have similar interaction properties as solitons. The most prominent examples include the following:
• Envelope Solitons (Scott, 2005): Solitary-wave descriptions of the envelopes of waves, such as those that arise from the propagation of modulated plane waves in a dispersive nonlinear medium with
an amplitude-dependent dispersion relation. One typically uses the descriptor bright to describe solitary waves whose peak intensity is larger than the background (reflecting applications in
optics) and the descriptor dark to describe solitary waves with lower intensity than the background.
• Solitary waves with discontinuous derivatives: Examples of such solitary waves include compactons (Rosenau, 2005), which have finite (compact) support, and peakons (Camassa, 1993), whose peaks
have a discontinuous first derivative. There have also been studies of cuspons (Scott, 2005), which have a singularity in the first derivative rather than simply a discontinuity.
• Gap solitons (Scott, 2005)(Carretero-González, 2008): Solitary waves that occur in finite gaps in the spectrum of continuous systems. For example, gap solitons have been studied rather thoroughly
in NLS equations with spatially periodic potentials and have been observed experimentally in the context of both nonlinear optics and Bose-Einstein condensation.
• Intrinsic Localized Modes (ILMs) (Campbell, 2004)(Flach, 2008): ILMs, or discrete breathers, are extremely spatially-localized, time-periodic, stable or very long-lived excitations in spatially
extended, discrete, periodic (or quasiperiodic) systems. (At present, it is not clear whether analogous time-quasiperiodic solutions can be constructed for general lattice equations.) ILMs, which
are localized in real space, arise in a large variety of nonlinear lattice models and are typically independent of the number of spatial dimensions of the lattice, the size of the lattice (which
is, however, assumed to be large), and (for the most part) the precise choice of nonlinear forces acting on the lattice. The mechanism that permits the existence of ILMs has been understood
theoretically for more than a decade, and such waves have now been observed in a wide variety of physical systems.
• \(q\)-breathers (Mishagin, 2008)(Flach, 2008)(Flach, 2005)(Checin, 2002)(Checin, 2005): Exact time-periodic solutions of spatially extended nonlinear systems that are continued from the normal
modes of a corresponding linear system. In contrast to ILMs, \(q\)-breathers are localized in normal-mode (Fourier) space, so that almost all of the energy is locked into a single Fourier mode
for all time. (The label \(q\) refers to the wave number of the normal mode.) They also provide the best-known explanation for FPU recurrences.
• Topological Solitons (Scott, 2005): Solitons, such as some solutions to the sine-Gordon equation, that emerge because of topological constraints. One example is a skyrmion, which is the
solitary-wave solution of a nuclear model whose topological charge is the baryon number. Other examples include domain walls, which refer to interfaces that separate distinct regions of order and
which form spontaneously when a discrete symmetry (such as time-reversal symmetry) is broken, screw dislocations in crystalline lattices, and the magnetic monopole. One-dimensional topological
solitons are necessarily kinks, which we discuss below.
• Kinks (Scott, 2005): The only one-dimensional topological solitary wave, it represents a twist in the value of a solution and causes a transition from one value to another. Kinks can sometimes be
represented using heteroclinic orbits, whereas pulse-like solitary waves can sometimes be represented using homoclinic orbits. Kinks are sometimes used to represent domain walls.
• Vortex Solitons (Kevrekidis, 2008) (Scott, 2005): A term often applied to phenomena such as vortex rings (a moving, rotating, toroidal object) and vortex lines (which are always tangent to the
local vorticity). Coherent vortex-like structures also arise in dissipative systems.
• Dissipative Solitons (Cross, 1993) (Scott, 2005)(Purwins, 2005): Stable localized structures that arise in spatially extended dissipative systems. They are often studied in the context of
nonlinear reaction-diffusion systems.
• Oscillons (Umbanhowar, 1996): A localized standing wave that arises in granular and other dissipative media as a result of, e.g., the vertical vibration of a plate topped by a layer of free grains.
• Higher-Dimensional Solitary Waves (Scott, 2005): Solitary waves and other localized (and partially localized) structures have also been studied in higher-dimensional settings. One example of a
genuine two-dimensional soliton is the "lump" solution of the KP equation of the first type (i.e., the KP1 equation). This type of soliton decays algebraically rather than exponentially and is
sometimes described as "weakly localized". The KP1 equation also has unstable line soliton solutions (a generalization of the soliton solutions of the KdV equation), which decay exponentially in
all but a finite number of directions. The KP equation of the second type (i.e., the KP2 equation) differs from the KP1 equation in that it has the opposite sign in front of its diffusion term.
The KP2 equation has stable line-soliton solutions, which (unlike line solitons in the KP1 equation) can merge with each other to form a single line soliton (which can, in turn, disintegrate into
two separate line solitons).
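To make the kink notion above concrete: the sine-Gordon equation \(u_{tt} - u_{xx} + \sin u = 0\) has the static kink \(u(x) = 4\arctan(e^{x})\), which connects the values \(0\) and \(2\pi\). The short Python check below (an illustrative sketch of ours, not taken from the cited references) verifies the static equation \(u_{xx} = \sin u\) by finite differences.

```python
# Illustrative sketch (not from the cited references): verify that
# u(x) = 4*arctan(exp(x)) is a static sine-Gordon kink, i.e. u_xx = sin(u),
# connecting u(-inf) = 0 to u(+inf) = 2*pi.
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
h = x[1] - x[0]
u = 4.0 * np.arctan(np.exp(x))

# Centered second difference at the interior grid points.
u_xx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
residual = np.max(np.abs(u_xx - np.sin(u[1:-1])))
print(residual)  # small: O(h^2) truncation error
```

The same check applies to the moving kink \(u = 4\arctan\exp[(x - vt)/\sqrt{1 - v^2}]\), which is a Lorentz boost of the static solution.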
Numerous generalizations of the above examples have also been investigated, as one can consider chains of solitons, discrete analogs of the above examples (such as discrete vortex solitons),
semi-discrete examples (such as spatiotemporal solitary waves in arrays of optical fibers), one type of soliton "embedded" in another type, solitary waves in nonlocal media, quantum solitary waves,
and more.
Solitary waves of all flavors arise ubiquitously in fluid mechanics, optics, atomic physics, biophysics, and more (Scott, 2005)(Dauxois, 2006)(Porter, 2009b)(Remoissenet, 1999). It is impossible to
discuss these manifestations exhaustively, so we show a few exciting figures and restrict ourselves to brief discussions of some of our favorite examples:
• Nonlinear Optics (Scott, 2005)(Agrawal, 1995)(Porter, 2009c)(Efremidis, 2003)(Stegeman, 1999)(Segev, 1992): Solitary waves are omnipresent in nonlinear optics. There have been extensive
experimental and theoretical investigations about both spatial solitary waves, in which nonlinearity balances diffraction, and temporal solitary waves, in which nonlinearity balances dispersion.
From a mathematical perspective, continuous nonlinear Schrödinger (NLS) equations are among the hallmark models in nonlinear optics, as they describe dispersive envelope waves (via solitary-wave
solutions of the NLS) of the electric field in optical fibers, and discrete NLS (DNLS) equations can be used to describe the dynamics of pulses in, e.g., optical waveguide arrays and
photorefractive crystals. Classes of solitary waves known as second-harmonic generation (SHG) solitary waves, which are so-named because they occur in \(\chi^{(2)}\) (second-order nonlinearity)
materials in optics, have been created experimentally in both spatial and temporal domains (Di Trapani, 1998). Such materials have also been used to provide perhaps the only experimental
generation of spatiotemporal solitary waves, in which there is a simultaneous balance of diffraction by self-focusing modulation and dispersion by phase modulation (Liu, 1999). There have also
been numerous studies of light bullets, which are three-dimensional localized pulses in self-focusing media with anomalous group dispersion. The properties of optical solitary waves can be
manipulated experimentally through both "dispersion management" and "nonlinearity management" (Malomed, 2005)(Centurion, 2006).
• Bose-Einstein Condensates (BECs) (Kevrekidis, 2008)(Carretero-González, 2008)(Porter, 2009c): At very low temperatures, particles in a dilute Bose gas can occupy the same quantum (ground) state,
forming a BEC, a coherent cloud of atoms which appears as a sharp peak in both position and momentum space. As the gas is cooled, a large fraction of the atoms in the gas condense via a quantum
phase transition, which occurs when the wavelengths of individual atoms overlap and behave identically. The macroscopic dynamics of BECs near zero temperature is modeled by an NLS equation known
as the Gross-Pitaevskii (GP) equation. BEC solitary waves of numerous types have also been modeled using other models, such as DNLS equations. Because of the similarity of the model equations,
many of the solitary-wave phenomena that were originally studied in the context of nonlinear optics arise here as well, and the extreme tunability of BECs has been a major boon for both
theoretical and experimental studies. For example, bright solitary waves were created in \(^{7}\)Li atoms (Khaykovich, 2002)(Strecker, 2002) and gap solitons have been created in \(^{87}\)Rb
(Eiermann, 2004). Additionally, there have been several theoretical studies on manipulating the properties of solitary waves in BECs via nonlinearity management (which can be achieved in
principle by exploiting the properties of Feshbach resonances) (Malomed, 2005). Many novel types of solitary-wave structures have now been created in BEC laboratories, and research on nonlinear
waves in BECs continues to develop at a rapid pace. One of the most important current experimental challenges for work on solitary waves in BECs (and also nonlinear optics) is the creation of
stable two-dimensional and three-dimensional solitary waves in the presence of cubic self-focusing nonlinearity, as such structures must be stabilized in order to prevent them from collapsing (in
accord with theoretical predictions).
• Water Waves (Scott, 2005)(Whitham, 1974)(Kharif, 2009)(El, 2002): Russell's "wave of translation" was a water-wave soliton, and (as discussed above) Korteweg and de Vries derived their nonlinear
wave equation to describe the shallow water waves that Russell had observed. The KdV equation arises in the long-wavelength limit, and shallow-water solitary waves have been the subject of
numerous laboratory experiments. Solitary waves also arise in deep water, as shown by the pioneering work of Vladimir Zakharov, who derived an envelope wave description whose limiting case
satisfies an NLS equation (Zakharov, 1968). Additionally, solitary-wave solutions have been constructed in more sophisticated models in fluid dynamics, and there has been a lot of work on myriad
types of solitary waves. For instance, various scientists have attempted to explain the large and seemingly spontaneous freak waves (or rogue waves) as solitary waves. Additionally, tidal bores
have been explained in terms of dispersive shock waves, which consist of a front followed by a train of solitary waves. Other interesting studies have focused on turbulent velocity fields that
can arise from the breaking of solitary waves (Ting, 2008), 3D vortex structures under breaking waves (Watanabe, 2005), and the spilling and plunging of waves (Jensen, 2005).
• Biophysics (Scott, 2005)(Davydov, 1982)(Scott, 1992)(Peyrard, 2004)(Campbell, 2004): There have been some attempts to use solitary-wave descriptions to describe various biophysical phenomena. One
example is the Davydov soliton, which satisfies an equation that was designed to model energy transfer in hydrogen-bonded spines that stabilize protein \(\alpha\)-helices. The Davydov soliton
represents a state composed of an excitation of amide-I and its associated hydrogen-bond distortion. It has been used to describe a local conformational change of the DNA \(\alpha\)-helix, and
there now exists experimental evidence of such states. Another type of DNA solitary wave was introduced by Peyrard and Bishop, who interpreted solitary-wave solutions of a model for DNA
denaturation as bubbles that appear in the DNA structure as temperature is increased. The Peyrard-Bishop model also admits ILM solutions, and ILMs have also been investigated both theoretically
and experimentally in the context of biopolymers. Using a model similar to Davydov's, local modes in molecular crystals have also been described using solitary waves. More controversially,
solitary waves have recently been used in neuroscience as an alternative to the accepted Hodgkin-Huxley model to describe the traveling of signals along a cell's membrane (Heimburg, 2005).
• Microelectromechanical (MEM) and Nanoelectromechanical (NEM) Devices (Campbell, 2004)(Sato, 2003)(Kenig, 2009): Among the primary classes of systems in which ILMs have been studied are MEMs or
NEMs consisting of arrays of nonlinear oscillators (such as cantilevers).
• Josephson Junctions (Scott, 2005)(Campbell, 2004): A Josephson junction is a nonlinear oscillator consisting of two weakly coupled superconductors that are connected by a non-conducting barrier.
Such junctions might prove to be important for producing quantum-mechanical circuits such as superconducting quantum interference devices (SQUIDs). Additionally, some of the most visually
striking ILMs have been observed in arrays of Josephson junctions. The first experimental realization of an array of such junctions revealed excitations that arose from spatially localized
voltage drops at particular junctions as a homogeneous DC bias current traversed an annular array. Solitary waves in "long Josephson junctions", which are much longer than the intrinsic length
scale known as the Josephson penetration depth (which is of the order 1--1000 \(\mu\)m), are known as fluxons because they contain one quantum of magnetic flux.
• Granular Crystals (Sen, 2008)(Nesterenko, 2001)(Boechler, 2010): Granular crystals consist of a tightly-packed array of solid particles that deform when they contact each other. They are modeled
by an FPU-like set of equations with an asymmetric potential (there is only a force when the particles are squeezing each other) arising from the Hertzian description for contact between elastic
particles. Granular crystals exhibit a highly nonlinear dynamic response: the equations of motion vanish when linearized about the uncompressed state, so the linear sound speed is zero (although
additional, linear forces, such as gravity and precompression, can also be included with appropriate experimental setups). Taking a long-wavelength asymptotic limit of the equations of motion gives a
partial differential equation whose only dispersion term is nonlinear. This equation admits traveling compacton solutions that closely match waves that have been observed experimentally. Other types of
solitary waves, including ILMs, have
been observed in the presence of precompression.
• Surface Waves (Scott, 2005)(Cross, 1993)(Umbanhowar, 1996): Numerous interesting nonlinear wave phenomena can occur on the surface of a "continuum" (e.g., fluids, solids, and appropriate granular
materials---which can often be modeled using continuum descriptions), and some of them admit solitary-wave descriptions. Although it can be applied more broadly, the term surface wave is often
used to refer to a relatively specific class of examples. These include the pattern-forming standing waves called Faraday waves that form, e.g., on the surface of continua housed in vertically
vibrated receptacles [similar phenomena have now also been seen in other settings, such as BECs (Engels, 2007) ]; soliton-like oscillons that switch between peaks and craters and have been
demonstrated in vertically-vibrated plates of granular materials, viscous fluids, and colloids; and acoustic surface waves, which travel along the surfaces of solid materials.
• Plasmas (Scott, 2005)(Kourakis, 2005): One of the convenient testbeds to study the dynamics of solitary waves has been plasmas, which consist of a large number of charged particles. For example,
the KdV equation has been used to describe the local ion density (reflecting the local departure of the charge from neutrality) in a perturbation of the charge density. Other equations that admit
soliton and solitary-wave solutions, including the Kadomtsev-Petviashvili (KP) equations and more complicated variants of both the KdV and KP equations, are also prominent in the study of
plasmas. Dusty plasmas, which contain small suspended particles, have been modeled using nonlinear oscillator chains that admit several types of solitary-wave solutions (such as ILMs).
• Field Theory (Deligne, 1999)(Rajaraman, 1987)(Manton, 2001)(Mason, 1997)(Coleman, 1985): Solitons and their relatives, such as instantons, are also important in both classical and quantum field
theory. Topological solitons such as monopoles, kinks, vortices, and skyrmions are key to the modern understanding of field theory. (Non-topological solitons such as Q-balls have generally played
a less central role than their topological counterparts.) In \((1 + 1)\)-dimensional quantum field theory, topological soliton solutions of the sine-Gordon equation can be mapped to elementary
excitations of the Thirring model (an exactly solvable quantum field theory). This provides a toy model for more physically relevant examples in which the role of solitons is played by magnetic
monopoles that can be mapped to electrically charged elementary particles via an equivalence that is given the name strong-weak duality or, more commonly, S-duality. S-duality is also an
essential feature of string theory. Instantons give non-perturbative corrections to path integrals, and they play a crucial role in quantum field theory (especially in tunneling phenomena).
Because of their algebraic structure, topological instantons can sometimes be constructed explicitly using methods from subjects such as twistor theory. Topological solitons also arise in various
parts of string theory and supergravity (such as in studies of \(D\)-branes and \(NS\)-branes), as well as in the study of defects such as domain walls and cosmic strings.
• Bulk Solitary Waves in Solid Waveguides (Samsonov, 2001)(Porubov, 2005)(Khusnutdinova, 2008): Elastic rods, plates, and layered structures are ubiquitous in modern constructions and devices.
Nonlinearity can arise both from finite values of strain ("geometric nonlinearity") and from material properties ("physical nonlinearity"). Longitudinal bulk ("density") solitary waves can
propagate in solid waveguides when nonlinearity is balanced by spatial dispersion due to the non-negligible transverse size of a waveguide. In the simplest cases, the solitary waves can be
modelled as solutions of a "doubly dispersive" Boussinesq-type equation that contains two kinds of dispersion terms of the same order of magnitude. In contrast to linear waves and shock waves,
bulk solitary waves in homogeneous solid waveguides do not exhibit significant decay or shape transformation, which suggests that these waves could be used for nondestructive testing of
inhomogeneous waveguides. Splitting ("delamination") in a layered waveguide results in generation of a train of solitary waves from a single incident solitary wave. This has been observed in
experiments using holographic interferometry (Dreiden, 2010).
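The NLS solitary waves mentioned in the optics and BEC entries above can be written down explicitly. For the focusing NLS equation \(i u_t + \frac{1}{2} u_{xx} + |u|^2 u = 0\), the bright soliton \(u(x,t) = \mathrm{sech}(x)\, e^{i t/2}\) reduces to the stationary equation \(\frac{1}{2}\phi'' + \phi^3 = \frac{1}{2}\phi\) for \(\phi(x) = \mathrm{sech}(x)\). The following Python snippet (a sketch of ours, not part of the article) verifies this numerically:

```python
# Sketch (ours): check that phi(x) = sech(x) satisfies the stationary
# focusing NLS equation (1/2) phi'' + phi^3 = (1/2) phi, so that
# u(x,t) = sech(x) * exp(i*t/2) is a bright soliton of
# i u_t + (1/2) u_xx + |u|^2 u = 0.
import numpy as np

x = np.linspace(-15.0, 15.0, 3001)
h = x[1] - x[0]
phi = 1.0 / np.cosh(x)

# Centered second difference at the interior grid points.
phi_xx = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / h**2
residual = np.max(np.abs(0.5 * phi_xx + phi[1:-1]**3 - 0.5 * phi[1:-1]))
print(residual)
```

In the defocusing case (relevant for BECs with repulsive interactions), the corresponding structure is the dark soliton \(u = \tanh(x)\, e^{-it}\) on a nonzero background, which satisfies \(i u_t + \frac{1}{2} u_{xx} - |u|^2 u = 0\) and can be checked the same way.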
% kdv.m - Solve KdV equation by Fourier spectral/ETDRK4 scheme
% A.-K. Kassam and L. N. Trefethen 4/03
% This code solves the Korteweg-de Vries eq. u_t+uu_x+u_xxx=0
% with periodic BCs on [-pi,pi] and initial condition given by
% a pair of solitons. The curve evolves up to t=0.005 and at
% the end u(x=0) is printed to 6-digit accuracy. Changing N
% to 384 and h to 2.5e-7 improves this to 10 digits but takes
% four times longer.
% Set up grid and two-soliton initial data:
N = 256; x = (2*pi/N)*(-N/2:N/2-1)';
A = 25; B = 16;
u = 3*A^2*sech(.5*(A*(x+2))).^2+3*B^2*sech(.5*(B*(x+1))).^2;
p = plot(x,u,'linewidth',3);
axis([-pi pi -200 2200]), grid on
% Precompute ETDRK4 scalar quantities (Kassam-Trefethen):
h = 1e-6; % time step
k = [0:N/2-1 0 -N/2+1:-1]'; % wave numbers
L = 1i*k.^3; % Fourier multipliers
E = exp(h*L); E2 = exp(h*L/2);
M = 64; % no. pts for complex means
r = exp(2i*pi*((1:M)-0.5)/M); % roots of unity
LR = h*L(:,ones(M,1))+r(ones(N,1),:);
Q = h*mean( (exp(LR/2)-1)./LR ,2);
f1 = h*mean((-4-LR+exp(LR).*(4-3*LR+LR.^2))./LR.^3,2);
f2 = h*mean( (4+2*LR+exp(LR).*(-4+2*LR))./LR.^3,2);
f3 = h*mean((-4-3*LR-LR.^2+exp(LR).*(4-LR))./LR.^3,2);
g = -.5i*k;
% Time-stepping by ETDRK4 formula (Cox-Matthews):
disp('press <return> to begin'), pause % wait for user input
t = 0; step = 0; v = fft(u);
while t+h/2 < .005
step = step+1;
t = t+h;
Nv = g.*fft(real(ifft(v)).^2);
a = E2.*v+Q.*Nv; Na = g.*fft(real(ifft(a)).^2);
b = E2.*v+Q.*Na; Nb = g.*fft(real(ifft(b)).^2);
c = E2.*a+Q.*(2*Nb-Nv); Nc = g.*fft(real(ifft(c)).^2);
v = E.*v+(Nv.*f1+(Na+Nb).*f2+Nc.*f3);
if mod(step,25)==0
u = real(ifft(v)); set(p,'ydata',u)
title(sprintf('t = %7.5f',t),'fontsize',18), drawnow
text(-2.4,900,sprintf('u(0) = %11.7f',u(N/2+1)),'fontsize',14)
end
end
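For reference (our addition, consistent with the normalization \(u_t + u u_x + u_{xxx} = 0\) solved by the code above): the one-soliton solution is \(u(x,t) = 3c\,\mathrm{sech}^2\!\left(\tfrac{\sqrt{c}}{2}(x - ct)\right)\), so the initial condition in the code launches two solitons with speeds \(A^2 = 625\) and \(B^2 = 256\). A brief Python check (a sketch of ours, in Python rather than the MATLAB used above) of the traveling-wave equation \(-c u' + u u' + u''' = 0\):

```python
# Sketch (ours): verify that u = 3c * sech^2(sqrt(c)*(x - c*t)/2) solves
# u_t + u*u_x + u_xxx = 0 by checking the traveling-wave residual
# -c*u' + u*u' + u''' at t = 0.
import numpy as np

c = 1.0
x = np.linspace(-20.0, 20.0, 4001)
h = x[1] - x[0]
u = 3.0 * c / np.cosh(0.5 * np.sqrt(c) * x)**2

# Centered differences: 2nd-order u' and 2nd-order 5-point u'''.
ux   = (u[3:-1] - u[1:-3]) / (2.0 * h)
uxxx = (u[4:] - 2.0 * u[3:-1] + 2.0 * u[1:-3] - u[:-4]) / (2.0 * h**3)
residual = np.max(np.abs(-c * ux + u[2:-2] * ux + uxxx))
print(residual)
```

Substituting \(u = a\,\mathrm{sech}^2(b\xi)\) into the once-integrated equation \(-cu + u^2/2 + u'' = 0\) fixes \(c = 4b^2\) and \(a = 12b^2 = 3c\), which is what the check confirms.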
We thank Margaret Beck, Paul Bressloff, Zhigang Chen, Chiara Daraio, Lars English, Roger Grimshaw, Atle Jensen, Panos Kevrekidis, Karima Khusnutdinova, Ron Lifshitz, Boris Malomed, Lionel Mason, John
Ockendon, Dmitry Pelinovsky, Alexander Samsonov, and Donald Spector for useful comments. We thank Eric Cornell, Galina Dreiden, Chris Eilbeck, Peter Engels, Lars English, Irina Semenova, Paul
Umbanhowar, and Xiaosheng Wang for permission to use figures and Nick Trefethen for permission to use code. We also thank Bernard Deconinck and Boris Malomed for the comments in their reviews of the
initial submitted version of this article.
Internal references
• Rob Schreiber (2007) MATLAB. Scholarpedia, 2(7):2929.
• Jeff Moehlis, Kresimir Josic, Eric T. Shea-Brown (2006) Periodic orbit. Scholarpedia, 1(7):1358.
• Philip Holmes and Eric T. Shea-Brown (2006) Stability. Scholarpedia, 1(10):1838.
• Ablowitz, M J; Kaup, D J; Newell, A C and Segur, H (1973). Method for Solving the Sine-Gordon equation. Physical Review Letters 30: 1262-1264. doi:10.1103/physrevlett.30.1262.
• Adlam, J H and Allen, J E (1958). The Structure of Strong Collision-Free Hydromagnetic Waves. Philosophical Magazine 3(229): 448-455. doi:10.1080/14786435808244566.
• Boechler, N et al. (2010). Discrete Breathers in One-Dimensional Diatomic Granular Crystals. Physical Review Letters 104(24): 244302.
• Camassa, R and Holm, D D (1993). An Integrable Shallow Water Equation with Peaked Solitons. Physical Review Letters 71: 1661-1664. doi:10.1103/physrevlett.71.1661.
• Campbell, D K; Flach, S and Kivshar, Yu S (2004). Localizing Energy Through Nonlinearity and Discreteness. Physics Today 57(1): 43-49. doi:10.1063/1.1650069.
• Campbell, D K; Rosenau, P and Zaslavsky, G (2005). Introduction: The Fermi-Pasta-Ulam Problem. The First Fifty Years. Chaos 15(1): 015101. doi:10.1063/1.1889345.
• Carretero-González, R; Frantzeskakis, D J and Kevrekidis, P G (2008). Nonlinear Waves in Bose-Einstein Condensates: Physical Relevance and Mathematical Techniques. Nonlinearity 21: R139-R202.
• Centurion, M; Porter, M A; Kevrekidis, P G and Psaltis, D (2006). Nonlinearity Management in Optics: Experiment, Theory, and Simulation. Physical Review Letters 97(3): 033903. doi:10.1103/
• Checin, G M; Novikova, N V and Abramenko, A A (2002). Bushes of Vibrational Modes for Fermi-Pasta-Ulam Chains. Physica D 166: 208-238. doi:10.1016/s0167-2789(02)00430-x.
• Checin, G M; Ryabov, D S and Zhukov, K G (2005). Stability of Low-Dimensional Bushes of Vibrational Modes in the Fermi-Pasta-Ulam Chains. Physica D 203: 121-166. doi:10.1016/j.physd.2005.03.009.
• Coleman, S (1985). Q-Balls. Nuclear Physics B 262: 263-283. doi:10.1016/0550-3213(85)90286-x.
• Cross, M C and Hohenberg, P C (1993). Pattern Formation Outside of Equilibrium. Reviews of Modern Physics 65(3): 851-1112. doi:10.1103/revmodphys.65.851.
• de Vries, G and Korteweg, D J (1895). On the Change of Form of Long Waves Advancing in a Rectangular Canal, and on a New Type of Long Stationary Waves. Philosophical Magazine 39: 422-443.
• Di Trapani, P et al. (1998). Observation of Temporal Solitons in Second-Harmonic Generation with Tilted Pulses. Physical Review Letters 81(3): 570-573.
• Dreiden, G V; Khusnutdinova, K R; Samsonov, A M and Semenova, I V (2010). Splitting Induced Generation of Soliton Trains in Layered Waveguides. Journal of Applied Physics 107(3): 034909.
• Eiermann, B et al. (2004). Bright Bose-Einstein Gap Solitons of Atoms with Repulsive Interaction. Physical Review Letters 92(23): 230401.
• Efremidis, N K; Christodoulides, D N; Fleischer, J W; Cohen, O and Segev, M (2003). Two-dimensional optical lattice solitons. Physical Review Letters 91: 213906. doi:10.1103/physrevlett.91.213906
• El, G A and Grimshaw, R H J (2002). Generation of Undular Bores in the Shelves of Slowly-Varying Solitary Waves. Chaos 12: 1015-1026. doi:10.1063/1.1507381.
• Engels, P; Atherton, C and Hoefer, M A (2007). Observation of Faraday Waves in a Bose-Einstein Condensate. Physical Review Letters 98: 095301. doi:10.1103/physrevlett.98.095301.
• English, L Q; Basu Thakur, R and Stearrett, R (2008). Patterns of Travelling Intrinsic Localized Modes in a Driven Electrical Lattice. Physical Review E 77: 066601. doi:10.1103/physreve.77.066601
• English, L Q; Sato, M and Sievers, A J (2003). Modulational Instability of Nonlinear Spin Waves in Easy-Axis Antiferromagnetic Chains. II. Influence of Sample Shape on Intrinsic Localized Modes and
Dynamic Spin Defects. Physical Review B 67: 024403. doi:10.1103/physrevb.67.024403.
• Fermi, E; Pasta, J R and Ulam, S (1955). Studies of Nonlinear Problems. I. Los Alamos Report LA-1940.
• Flach, S and Gorbach, A V (2008). Discrete Breathers. Advances in Theory and Applications. Physics Reports 467: 1-116. doi:10.1016/j.physrep.2008.05.002.
• Flach, S; Ivanchenko, M V and Kanakov, O I (2005). \(q\)-Breathers and the Fermi-Pasta-Ulam Problem. Physical Review Letters 95: 064102. doi:10.1103/physrevlett.95.064102.
• Gardner, C S; Green, J M; Kruskal, M D and Miura, R M (1967). Method for Solving the Korteweg-de Vries Equation. Physical Review Letters 19: 1095-1097. doi:10.1103/physrevlett.19.1095.
• Gardner, C S; Green, J M; Kruskal, M D and Miura, R M (1974). Korteweg-de Vries Equation and Generalizations. VI. Methods for Exact Solution. Communications on Pure and Applied Mathematics 27(1):
97-133. doi:10.1002/cpa.3160270108.
• Heimburg, T and Jackson, A D (2005). On Soliton Propagation in Biomembranes and Nerves. Proceedings of the National Academy of Sciences 102(28): 9790-9795. doi:10.1073/pnas.0503823102.
• Jensen, A; Mayer, S and Pedersen, G K (2005). Experiments and Computation of Onshore Breaking Solitary Waves. Measurement Science and Technology 16: 1913-1920.
• Kenig, E; Malomed, B A; Cross, M C and Lifshitz, R (2009). Intrinsic Localized Modes in Parametrically-Driven Arrays of Nonlinear Resonators. Physical Review E 80(4): 046202. doi:10.1103/
• Ketov, S V (2006). Solitons, Monopoles, and Duality: From Sine-Gordon to Seiberg-Witten. Fortschritte der Physik/Progress of Physics 45(3-4): 237-292. doi:10.1002/prop.2190450303.
• Khaykovich, L et al. (2002). Formation of a Matter-Wave Bright Soliton. Science 296(5571): 1290-1293.
• Khusnutdinova, K R and Samsonov, A M (2008). Fission of a longitudinal strain solitary wave in a delaminated bar. Physical Review E 77(6): 066603. doi:10.1103/physreve.77.066603.
• Kivshar, Yu S and Malomed, B A (1989). Dynamics of Solitons in Nearly Integrable Systems. Reviews of Modern Physics 61(4): 763-915. doi:10.1103/revmodphys.61.763.
• Kourakis, I and Shukla, P K (2005). Discrete Breather Modes Associated with Vertical Dust Grain Oscillations in Dusty Plasma Crystals. Physics of Plasmas 12(1): 014502. doi:10.1063/1.1824908.
• Kruskal, M D and Zabusky, N J (1966). Exact Invariants for a Class of Nonlinear Wave Equations. Journal of Mathematical Physics 7(7): 1256-1267. doi:10.1063/1.1705028.
• Lax, P D (1968). Integrals of Nonlinear Equations of Evolution and Solitary Waves. Communications on Pure and Applied Mathematics 21(5): 467-490. doi:10.1002/cpa.3160210503.
• Lin, G; Grinberg, L and Em Karniadakis, G (2006). Numerical Studies of the Stochastic Korteweg-de Vries Equation. Journal of Computational Physics 213(2): 676-703. doi:10.1016/j.jcp.2005.08.029.
• Liu, X; Qian, L J and Wise, F W (1999). Generation of Optical Spatiotemporal Solitons. Physical Review Letters 82(23): 4631-4634. doi:10.1103/physrevlett.82.4631.
• Mishagin, K G; Flach, S; Kanakov, O I and Ivanchenko, M V (2008). \(q\)-Breathers in Discrete Nonlinear Schrödinger Lattices. New Journal of Physics 10(7): 073034. doi:10.1088/1367-2630/10/7/
• Miura, R M (1968). Korteweg-de Vries Equation and Generalizations. I. A Remarkable Explicit Nonlinear Transformation. Journal of Mathematical Physics 9: 1202-1204. doi:10.1063/1.1664700.
• Miura, R M (1976). The Korteweg-de Vries Equation: A Survey of Results. SIAM Review 18: 412-459. doi:10.1137/1018076.
• Peyrard, M (2004). Nonlinear Dynamics and Statistical Physics of DNA. Nonlinearity 17: R1-R40. doi:10.1088/0951-7715/17/2/r01.
• Porter, M A; Carretero-González, R; Kevrekidis, P G and Malomed, BA (2005). Nonlinear Lattice Dynamics of Bose-Einstein Condensates. Chaos 15(1): 015115. doi:10.1063/1.1858114.
• Porter, M A; Daraio, C; Szelengowicz, I; Herbold, E B and Kevrekidis, P G (2009). Highly Nonlinear Solitary Waves in Heterogeneous Periodic Granular Media. Physica D 238: 666-676. doi:10.1016/
• Porter, M A; Zabusky, N J; Hu, B and Campbell, D K (2009b). Fermi, Pasta, Ulam and the Birth of Experimental Mathematics. American Scientist 97(6): 214-222. doi:10.1511/2009.78.214.
• Porubov, A V and Maugin, G A (2005). Longitudinal Strain Solitary Waves in Presence of Cubic Nonlinearity. International Journal of Nonlinear Mechanics 40(7): 1041-1048.
• Rosenau, P (2005). What is a Compacton? Notices of the American Mathematical Society 52(7): 738-739.
• Russell, J S (1844). Report on Waves. in Report of the 14th Meeting of the British Association for the Advancement of Science : 311-390.
• Sato, M et al. (2003). Observation of Locked Intrinsic Localized Vibrational Modes in a Micromechanical Oscillator Array. Physical Review Letters 90: 044102.
• Scott, A C (1992). Davydov's Soliton. Physics Reports 217(1): 1-67. doi:10.1016/0370-1573(92)90093-f.
• Segev, M; Crosignani, B; Yariv, A and Fischer, B (1992). Spatial Solitons in Photorefractive Media. Physical Review Letters 68: 923-926. doi:10.1103/physrevlett.68.923.
• Sen, S; Hong, J; Bang, J; Avalos, E and Doney, R (2008). Solitary Waves in the Granular Chain. Physics Reports 462(2): 21-66. doi:10.1016/j.physrep.2007.10.007.
• Strecker, K E; Partridge, G B; Truscott, A G and Hulet, R G (2002). Formation and Propagation of Matter-Wave Soliton Trains. Nature 417(6885): 150-153. doi:10.1038/nature747.
• Stegeman, G and Segev, M (1999). Optical Spatial Solitons and Their Interactions: Universality and Diversity. Invited Paper, Special Issue on Frontiers in Optics, Science 286: 1518-1523.
• Ting, F C K (2008). Large-Scale Turbulence Under a Solitary Wave: Part 2: Forms and Evolution of Coherent Structures. Coastal Engineering 55(6): 522-536. doi:10.1016/j.coastaleng.2008.02.018.
• Toda, M (1967). Vibration of a Chain with Nonlinear Interaction. Journal of the Physical Society of Japan 22(2): 431-436. doi:10.1143/jpsj.22.431.
• Umbanhowar, P B; Melo, F and Swinney, H L (1996). Localized Excitations in a Vertically Vibrated Granular Layer. Nature 382: 793-796. doi:10.1038/382793a0.
• Wang, X; Chen, Z and Kevrekidis, P G (2006). Observation of Discrete Solitons and Soliton Rotation in Optically Induced Periodic Ring Lattices. Physical Review Letters 96: 083904. doi:10.1103/
• Watanabe, Y; Saeki, H and Hosking, R J (2005). Three-Dimensional Vortex Structures Under Breaking Waves. Journal of Fluid Mechanics 545: 291-328. doi:10.1017/s0022112005006774.
• Zabusky, N J (1981). Computational Synergetics and Mathematical Innovation. J. Comput. Phys. 43: 195-249. doi:10.1016/0021-9991(81)90120-0.
• Zabusky, N J (1984). Computational Synergetics. Physics Today 37(7): 36-46. doi:10.1063/1.2916319.
• Zabusky, N J (2005). Fermi-Pasta-Ulam, Solitons and the Fabric of Nonlinear and Computational Science: History, Synergetics, and Visiometrics. Chaos 15(1): 015102. doi:10.1063/1.1861554. [This
paper is a historical review, emphasizing the period 1960--1972.]
• Zabusky, N J and Deem, G S (1967). Dynamics of Nonlinear Lattices I. Localized Optical Excitations, Acoustic Radiation, and Strong Nonlinear Behavior. Journal of Computational Physics 2: 126-153.
• Zabusky, N J and Kruskal, M D (1965). Interaction of `Solitons' in a Collisionless Plasma and the Recurrence of Initial States. Physical Review Letters 15: 240-243. doi:10.1103/physrevlett.15.240
• Zabusky, N J; Sun, Z and Peng, G (2006). Measures of Chaos and Equipartition in Integrable and Nonintegrable Lattices. Chaos 16(1): 013130. doi:10.1063/1.2165592.
• Zakharov, V E (1968). Stability of Periodic Waves of Finite Amplitude on the Surface of a Deep Fluid. Zhurnal Prikladnoi Mekhaniki i Tekhniki Fizika 9: 190-194. doi:10.1007/bf00913182. [Journal
of Applied Mechanics and Technical Physics 9: 190-194 (1968)].
• Zakharov, V E and Shabat, A B (1972). Exact Theory of Two-Dimensional Self-Focusing and One-Dimensional Self-Modulation of Waves in Nonlinear Media. Soviet Physics JETP 34(1): 62-69.
Books and Book Chapters
• Ablowitz, M J and Clarkson, P A (2001). Solitons, Nonlinear Evolution Equations and Inverse Scattering, Cambridge University Press, Cambridge, UK.
• Ablowitz, M J and Segur, H (1981). Solitons and the Inverse Scattering Transform, Society for Industrial and Applied Mathematics, Philadelphia, PA.
• Agrawal, G P (1995). Nonlinear Fiber Optics, Academic Press, San Diego, CA.
• Akhmediev, N N and Ankiewicz, A (2010). Dissipative Solitons: From Optics to Biology and Medicine, Springer-Verlag, Berlin, Germany.
• Dauxois, T and Peyrard, M (2006). Physics of Solitons, Cambridge University Press, Cambridge, UK.
• Davydov, A S (1982). Biology and Quantum Mechanics, Pergamon Press, Oxford, UK.
• Deligne, P et al. (1999). Quantum Fields and Strings: A Course for Mathematicians, Volumes 1 and 2, American Mathematical Society, Providence, RI.
• Dodd, R K et al. (1984). Solitons and Nonlinear Wave Equations, Academic Press, San Diego, CA.
• Drazin, P G and Johnson, R S (1996). Solitons: An Introduction, Cambridge University Press, Cambridge, UK.
• Faddeev, L D; Takhtajan, L A and Reyman, A G (2007). Hamiltonian Methods in the Theory of Solitons, Springer-Verlag, Berlin, Germany.
• Filippov, A T (2000). The Versatile Soliton Springer-Verlag, Berlin, Germany.
• Gallavotti, G (2008). The Fermi-Pasta-Ulam Problem: A Status Report, Lecture Notes in Physics, Vol. 728, Springer-Verlag, Berlin, Germany.
• Heyerhoff, M (1997). The History of the Early Period of Soliton Theory (in English and German).
• Kevrekidis, P G; Frantzeskakis, D J and Carretero-González, R (2008). Emergent Phenomena in Bose-Einstein Condensates: Theory and Experiment, Springer-Verlag, Berlin, Germany.
• Kharif, C; Pelinovsky, E and Slunyaev, A (2009). Rogue Waves in the Ocean, Springer-Verlag, Berlin, Germany.
• Lamb, G (1990). Elements of Soliton Theory, John Wiley & Sons, Inc., New York, NY.
• Lamb, G L (1976). Bäcklund Transforms at the Turn of the Century, in Bäcklund Transforms (R. M. Miura, Ed.), Springer Mathematics Series, No. 515, New York, NY.
• Malomed, B A (2005). Soliton Management in Periodic Systems, Springer-Verlag, Berlin, Germany.
• Manton, N and Sutcliffe, P (2001). Topological Solitons, Cambridge University Press, Cambridge, UK.
• Mason, L J and Woodhouse, N M J (1997). Integrability, Self-Duality, and Twistor Theory, London Mathematical Society Monographs, New Series, Oxford University Press, Oxford, UK.
• Nesterenko, V F (2001). Dynamics of Heterogeneous Materials, Springer-Verlag, Berlin, Germany.
• Newell, A C (1974). Nonlinear Wave Motion, AMS Lectures in Applied Math. Vol. 15, Providence, RI.
• Pethick, C J and Smith, H (2008). Bose-Einstein Condensation in Dilute Gases, 2nd edition, Cambridge University Press, Cambridge, UK.
• Porter, M A (2009c). Experimental Results Related to DNLS Equations, in Discrete Nonlinear Schrödinger Equation: Mathematical Analysis, Numerical Computations, and Physics Perspectives (P. G.
Kevrekidis, Ed.), Springer Tracts in Modern Physics, Heidelberg, Germany.
• Purwins, H G; Bödeker, H U and Liehr, A W (2005). Dissipative Solitons in Reaction-Diffusion Systems, in Dissipative Solitons (N. Akhmediev, A. Ankiewicz, Eds.), Lecture Notes in Physics,
Springer-Verlag, Berlin, Germany.
• Rajaraman, R (1987). Solitons and Instantons: An Introduction to Solitons and Instantons in Quantum Field Theory, North-Holland Personal Library, Amsterdam, The Netherlands.
• Remoissenet, M (1999). Waves Called Solitons: Concepts and Experiments, 3rd edition, Springer-Verlag, Berlin, Germany.
• Russell, J S (1885). The Wave of Translation in the Oceans of Water, Air and Ether, Trübner & Co., London, UK .
• Samsonov, A M (2001). Strain Solitons in Solids and How to Construct Them, Chapman & Hall/CRC, Boca Raton, FL.
• Scott, A C (2005). Encyclopedia of Nonlinear Science, Routledge, Taylor & Francis Group, New York, NY.
• Scott, A C (2007). The Nonlinear Universe, Springer-Verlag, Berlin, Germany.
• Toda, M (1989). Nonlinear Waves and Solitons, Springer-Verlag, Berlin, Germany.
• Weissert, T P (1997). The Genesis of Simulation in Dynamics: Pursuing the Fermi-Pasta-Ulam Problem, Springer-Verlag, Berlin, Germany.
• Whitham, G B (1974). Linear and Nonlinear Waves, Wiley-Interscience, New York NY.
• Grimshaw, R (2005). Nonlinear waves in Fluids: Recent Advances and Modern Applications, Springer-Verlag, Berlin, Germany.
• Deem, G S; Zabusky, N J and Kruskal, M D (1965). Formation, Propagation, and Interaction of Solitons: Numerical Solutions of Differential Equations Describing Wave Motion in Nonlinear Dispersive
Media Film Library of the Bell Telephone Laboratories, Inc., Whippany, NJ.
Recommended reading
Some useful books, review articles, and expository articles on solitary waves are the following:
• D. K. Campbell, S. Flach, Yu. S. Kivshar, Localizing Energy Through Nonlinearity and Discreteness, Physics Today 57(1):43-49 (2004).
• T. Dauxois, M. Peyrard, Physics of Solitons, Cambridge University Press, Cambridge, UK (2006).
• P. G. Drazin, R. S. Johnson, Solitons: An Introduction, Cambridge University Press, Cambridge, UK (1996).
• A. T. Filippov, The Versatile Soliton, Springer-Verlag, Berlin, Germany (2000).
• M. A. Porter, N. J. Zabusky, B. Hu, D. K. Campbell, Fermi, Pasta, Ulam and the Birth of Experimental Mathematics, American Scientist 97(6):214-222 (2009).
• M. Remoissenet, Waves Called Solitons: Concepts and Experiments, 3rd edition, Springer-Verlag, Berlin, Germany (1999).
• A. C. Scott (Ed.), Encyclopedia of Nonlinear Science, Routledge, Taylor & Francis Group, New York, NY (2005).
• A. C. Scott, The Nonlinear Universe, Springer-Verlag, Berlin, Germany (2007).
• G. B. Whitham, Linear and Nonlinear Waves, Wiley-Interscience, New York, NY (1974).
External links
See also
The following scholarpedia articles are germane to the present discussion: | {"url":"http://www.scholarpedia.org/article/Soliton","timestamp":"2014-04-16T19:20:20Z","content_type":null,"content_length":"113786","record_id":"<urn:uuid:159d1df7-3e23-4c1b-9134-b9aeda1e0abf>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00353-ip-10-147-4-33.ec2.internal.warc.gz"} |
Battlegrounds Games
Dice Pack
This free artpack for BRPG and BGE, which is intended for use in BRPG v1.6f or higher, contains 788 unique dice and coin graphics. These graphics are used by 128 dice of various types and colors, and the dice are organized into 16 different categories.
The primary purpose of this artpack is to allow you to place dice and coins directly in the map window, and interact with them there, instead of in the Dice Roller window. For some RPGs, boardgames,
wargames, and dice games, this style of dice rolling could provide significantly faster gameplay, particularly in games that use simple mechanics and don't require a lot of modifiers, where using dice macros would be overkill.
A secondary purpose of this artpack is to make certain RPGs, boardgames, wargames, and dice games playable in BRPG which were, until now, unplayable, due to the fact that prior to BRPG v1.6f, the
dice have always been confined to the Dice Roller window. For example, some RPG systems require "opposed rolls", where an attacker and defender must line up their dice after rolling them, such that
certain dice cancel out the opponent's dice. Another example would be games with Yahtzee-like dice rolling mechanics, where you want to keep some dice from your first roll, but re-roll the rest. This
wasn't possible before, but now it is.
I decided to put together this artpack and offer it for free to the BRPG and BGE community as a way of saying "thanks for your support", and to add value to BRPG and BGE in general.
Download: Dice Pack for BRPG and BGE
(7.4 MB)
(Installation and usage instructions are in the ReadMe which is included in the download.)
Last edited by heruca on Fri Feb 24, 2012 11:58 pm, edited 1 time in total.
I just thought of some other reasons that this artpack could be handy.
1) Any game which requires you to roll more than 2 die types or die colors in a single roll (since BRPG's dice macros only support two die types or die colors). For example, "Don't Rest Your Head"
requires rolls with 3 colors of d6s. White for Discipline, grey for Exhaustion, and red for Madness.
2) In face-to-face games, people sometimes place a die or dice next to their miniature to indicate altitude when they are flying. The same method could also be used to track other number-based data.
For example, If I cast Wall of Flame, a spell which lasts 3 turns, I would place an Area of Effect template on the map and I could put a d3 on it. Using BRPG's token-swapping feature, I would
decrease the number on the d3 every turn, until the spell ends and I remove the Area of Effect template. Remember, just because these are dice doesn't mean you have to roll them.
3) Sometimes the players in an RPG need to make a roll, but they shouldn't know what result they got (e.g., search for traps or secret doors). In BRPG, the GM could use these dice in conjunction with
a concealing object set to Screen, and the players could move the dice under the Screen (where the GM can see the dice but the players can't) and then make the roll.
Can you think of other uses for dice on the map window?
Yes. Just click on a die to select it (or drag a selection marquee over several dice) and press Control-R (Command-R, on a Mac) to make the roll.
I think this is a nice feature.
I think it would be improved, though, if all dice results were reported to the Main Window and recorded in the Dice Roll History, instead of just some special sets. It's inevitable that there is a player who has to make a bunch of rolls in a row, and if there's some lag (as there always is at some point), those rolls will tend to come through very quickly when the lag dissipates, and you'll want to look at the History.
heruca wrote:Yes. Just click on a die to select it (or drag a selection marquee over several dice) and press Control-R (Command-R, on a Mac) to make the roll.
Kazander, I will investigate the possibility of dynamically generating a dice macro when "map dice" are rolled, so that the roll is reported everywhere and stored in the Dice Roll History.
Kazander wrote:I think this is a nice feature.
I think it would be improved though if all dice results were reported to the Main Window, and recorded in the Dice Roll History though, instead of just some special sets.
I agree. This is a great idea.
Super job on this addition, heruca! Very nice feature to have. Thank you!
Welcome, Lockdown.
Glad you like this new feature.
I'm still doing a forehead slap for not having thought of it a year or two ago when I originally added multi-state tokens.
heruca wrote:Welcome, Lockdown.
Glad you like this new feature.
I'm still doing a forehead slap for not having thought of it a year or two ago when I originally added multi-state tokens.
lol heruca no one is perfect.
Works like a charm Heruca
I will probably use them most for stuff like counting the number of rounds a spell is still active, height in feet/yard, count number of still living opponents, etc.
The readme doesn't say how to use them, and CTRL-R brings up the normal dice-roller tabletop for me. What am I doing wrong? I have 2 dice unhidden and set to "All" for ownership.
Did you select the die (or dice) before pressing Control-R?
Are you using BRPG v1.6f? The ReadMe for v1.6f explains how to roll dice in the map window.
Nope, was still using the "release" version of 1.4i. Must have missed the reference to it requiring a beta. Fixing now!
No, it's still not working- is this related to the issue you just posted about in regards to needing to put out a "g" build? CTRL-R still brings up the old dice table for me.
If you installed the patch rather than the Full Install, double-check that you installed the patch correctly. You should have been prompted to overwrite a bunch of files. Were you?
No I used the full install and then copied over figures and maps and some other stuff specific to the build I had been using. I re-installed the full version again overtop my previous installation of
the full version, and it's working now. Musta inadvertently copied something inappropriate when I was moving figures and sounds and whatnot. It's working fine now...
Glad to hear it's working now.
Kepli wrote:I will probably use them most for stuff like counting the number of rounds a spell is still active, height in feet/yard, count number of still living opponents, etc.
Just wanted to point out that this sort of "non-rolling" use of the dice is possible in earlier versions of BRPG. v1.6f or later is only needed if you actually want to roll the dice.
I know, but you have already added all the "subtokens" for 1 - 20 under the d20 die ... I was too lazy to do that
Too lazy to create one d20?
Imagine how much fun I had creating 128 dice, each containing between 2 and 20 tokens. Okay, I actually cheated. I made one complete color set, and the rest of the color sets I made in a text editor
using "search and replace" and "Save As...". But still.
You are amazing, Heruca. Thank you for putting this free art pack together and for designing the unique "free board" way that it works.
This rules heruca, I noticed that players have to have it installed after reading the directions! | {"url":"http://battlegroundsgames.com/forums/viewtopic.php?p=19530","timestamp":"2014-04-17T00:48:34Z","content_type":null,"content_length":"66928","record_id":"<urn:uuid:e127ee36-00cc-44d5-9447-4a9162aa55a3>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00627-ip-10-147-4-33.ec2.internal.warc.gz"} |
FOM: conservative extensions
Stephen G Simpson simpson at math.psu.edu
Fri Oct 9 23:24:08 EDT 1998
Robert Tragesser writes:
> The notion "conservative extension" has acquired the reality of
> centrality and powerful significance in logical investigations in
> the foundations of mathematics.
I wouldn't phrase it in exactly this way, but yes, I agree. The
notion of conservative extension does seem to pop up a lot in f.o.m.,
and for very good reasons.
First, the definition: If T and T' are formal systems and T' is an
extension of T, we say that T' is conservative over T for a class of
sentences S if every S-sentence provable in T' is already provable in
T. In the special case where S is the class of *all* sentences in the
language of T, we say simply that T' is conservative over T.
This notion ought to be part of everyone's training in f.o.m. It can
be used to state a great many logical results which are of
f.o.m. significance. You can get an idea of the scope by searching
through the FOM archive for "conservative". Here are some favorite
conservative extension results:
RCA_0 is conservative over PRA for Pi^0_2 sentences.
WKL_0 is conservative over RCA_0 for Pi^1_1 sentences.
ACA_0 is conservative over PA.
Sigma^1_1-AC_0 is conservative over ACA_0 for Pi^1_2 sentences.
ATR_0 is conservative over IR for Pi^1_1 sentences.
Sigma^1_2-DC_0 is conservative over Pi^1_1-CA_0 for Pi^1_3 sentences.
ZFC + V=L is conservative over ZF for Pi^1_2 sentences.
ZFC + GCH (actually ZFC + V=L(r) for some real r) is conservative over
ZF for Pi^1_3 sentences.
ZFC + GCH is conservative over ZFC (actually over ZF + a well ordering
of the reals) for Pi^2_1 sentences.
VNBG set-class theory is conservative over ZF set theory.
VNBG plus global choice is conservative over ZFC.
Tragesser continues:
> Could someone explain/expose something of the history and more
> particularly something approaching the full sheaf of various
> "implications" of this "property" [as a foundational concept]?
Let me take a stab at it. I think the key word here is
"instrumentalism". (I heard this word recently from Alexander
If we show that T' is conservative over T for S-sentences, then that
result can be interpreted as saying that the primitives and/or axioms
that are present in T' but not in T can be "eliminated" or, viewing it
the other way round, these primitives and/or axioms can be "harmlessly
introduced" on top of T, i.e. they can be viewed as mere instruments
which we introduce artificially in order to make it easier or more
convenient for us to prove S-sentences, without increasing our
ontological commitments beyond what is already in T.
As an example, consider Hilbert's program. We can state Hilbert's
program in modern terms as follows: to show that ZFC is conservative
over PRA for Pi^0_1 sentences. In other words, to show that the
transfinite machinery of higher set theory is only a superstructure
that we erect for our own convenience; to show that all of the
concrete, universal, number-theoretic statements that are provable
with the aid of such machinery could have been proved finitistically,
without such machinery.
This turns out to be false, but that's far from the end of the story
with respect to finitistic reductionism. See my paper on partial
realizations of Hilbert's program, on-line at
-- Steve
Adding two 8-bit two's complement binary numbers
What is the result (in 8-bit binary) of adding the two 8-bit two's complement numbers 10111011 and 11010101 and how would you go about calculating that?
The nice thing about two's complement is that you don't need to know whether you have signed or unsigned numbers. Just take the binary representation, add, and discard the overflow bit.
If you've left the range of representable numbers, well, tough luck. But adding two negative numbers and getting a positive one should raise suspicion.
On a practical note: Do not try to guard against overflow in C by asking things like
a = b+c;
if (((b > 0) && (a < c)) || ((b < 0) && (a > c))) {
  ...
}
This would probably work in standard debug builds, but C (and C++) compilers are allowed to optimize this check away. (It is more often seen for unsigned arithmetic, if (a >= (a+b)) {
... }, and gcc -Wall will warn that it knows that to be false. Which is fine, since the C standard says that overflow is undefined behavior anyway.)
I don't know what the situation is like in other languages with limited-range integer types.
add comment
This is probably something any decent programming calculator can tell you but, assuming it wraps at eight bits, this is the result.
                [ hex, unsigned, signed]
    10111011    [0xBB,      187,    -69]
  + 11010101    [0xD5,      213,    -43]
  ----------
  = (1)10010000 [0x90,      144,   -112]
You can do this process manually as follows:
set carry to zero
for each position starting at right side, progressing left:
    set sum to carry
    add bit from position in first number to sum
    add bit from position in second number to sum
    if sum is greater than one:
        subtract two from sum
        set carry to one
    else
        set carry to zero
    end if
    store sum to position of result
end for
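The manual procedure above can be sketched in Python; this is an illustrative addition (not from the original answers), using masking with 0xFF to model the eight-bit wrap:

```python
def add8(a, b):
    # Add two 8-bit values; masking with 0xFF discards the carry out of bit 7
    return (a + b) & 0xFF

def to_signed(x):
    # Interpret an 8-bit pattern as a two's complement integer
    return x - 256 if x >= 128 else x

result = add8(0b10111011, 0b11010101)
print(format(result, "08b"), hex(result), to_signed(result))
# Expected output: 10010000 0x90 -112
```

This reproduces the table above: 0xBB + 0xD5 wraps to 0x90, which reads as 144 unsigned or -112 signed.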
Brian Reviews a $15 180man
Aug 14th, 2011
Small Stakes
Teaching Method:
Replayed Hands
Full Ring
2556 Views
Brian reviews a $15 180man.
Discussion for Brian Reviews a $15 180man.
• Re: Brian Reviews a $15 180man
Hi Brian, Could you show the math behind deciding what hands to raise-call shortstacks with? I have no clue what hands to do this with. Also, how big of a stack do you usually want before you
start minraise stealing? Thanks
• Re: Brian Reviews a $15 180man
"I'm not saying that everything i do is 100% right" "no one on earth can look through my hand history and find a mistake" "not a person on the planet plays better than me" LOL a bit contradictory
• Re: Brian Reviews a $15 180man
I don't have an exact formula to show when to raise/call short stacks. It's more of a "is this going to get through a lot, and how good of odds am I getting on the shortstack?" You should start posting hand histories of the spots you are unsure of so I can have a look. FTPlayer- I didn't mean to contradict myself, just wanted to be clear that I am the best at 180s, but not perfect. And that a lot of my ideas are subjective, but no one wants to object due to my best results.
• Re: Brian Reviews a $15 180man
Cool. Here's a couple of spots that I tend to struggle with. What should my range be in these and what should I be thinking about before making a decision?

Hand 1: Poker Stars $2.28+$0.22 No Limit Hold'em Tournament - t600/t1200 Blinds + t125 - 7 players
UTG+1: t9085 (M = 3.40), Hero (MP): t25918 (M = 9.69), CO: t5710 (M = 2.13), BTN: t24856 (M = 9.29), SB: t11631 (M = 4.35), BB: t21636 (M = 8.09), UTG: t32768 (M = 12.25)
Pre Flop: (t2675) Hero is MP with Xh Xh. 2 folds, Hero ?

Hand 2: Poker Stars $2.28+$0.22 No Limit Hold'em Tournament - t600/t1200 Blinds + t125 - 7 players
BB: t27659 (M = 10.34), UTG: t33885 (M = 12.67), UTG+1: t13619 (M = 5.09), MP: t27694 (M = 10.35), CO: t17372 (M = 6.49), Hero (BTN): t30747 (M = 11.49), SB: t4723 (M = 1.77)
Pre Flop: (t2675) Hero is BTN with Xh Xh. 4 folds, Hero ?
• Re: Brian Reviews a $15 180man
Yikes that's a bit of a mess. Here's links to the hands in question, try and ignore hole cards and actions if poss :) http://www.handconverter.com/hands/1422406 http://www.handconverter.com/hands
• Re: Brian Reviews a $15 180man
1st hand - yes, exactly a spot where to raise/call shorty and raise/fold to everyone else. 2nd hand - Again, perfect spot to execute this play. You have the right idea.
• Re: Brian Reviews a $15 180man
But what kind of range should I be doing this with? What should I be thinking about when deciding whether to r/c in those hands?
• Re: Brian Reviews a $15 180man
Thats the beauty of it, you can do it with almost ANYTHING because you are getting such great odds if the short stack gets in there.
• Re: Brian Reviews a $15 180man
What about the other hand?
• Re: Brian Reviews a $15 180man
I mentioned the 2nd hand as well ^^
• Re: Brian Reviews a $15 180man
Forget it
• back to back
What do you do when you get back-to-back pocket pairs? I'm 60bb deep, raised mid position with 99, a short stack re-raises me and I shove knowing his range is A5 or better. He calls with 88 and loses... the very next hand I pick up aces. What do I do?
• Re: Brian Reviews a $15 180man
^ answered in thread
• Re: Brian Reviews a $15 180man
You spend the first 10 minutes totally ripping it out of the guy for raising 2.5x with 3s OTB because "you make all your money when you flop a set," when the fact is it doesn't matter at all whether it's 2x or 2.5x in the first level. Then you talk about 3betting 8s to a min raise when tbh it seems like totally the worst option, as if he did that then the villain would have hit top pair on the flop. What do you do then? Call every street till you hit your set on the river? Or check-fold after 3betting to 900? The hand where the guy hit top set with Kings, maybe the flop lead out was weird, but after the villain called he seemed to play the hand perfectly and set the hand up to get stacks in on the river. You said bet an amount he will call; guess what, he thought he would call the shove because maybe he thought it looked weak and the villain would make a hero call. Does this review sound harsh? Well guess what. That's the way you sound every time you review someone's play. Oh wait, I almost forgot: "No one will ever find a mistake in my hand history" LOLOLOL. I switched the vid off at this point. You're obviously a good player but god damn lighten up a bit when reviewing someone's play.
• Re: Re: Brian Reviews a $15 180man
Jackson007 wrote:
You spend the first 10 minutes totally ripping it out of the guy for raising 2.5x with 3s OTB because "you make all your money when you flop a set," when the fact is it doesn't matter at all whether it's 2x or 2.5x in the first level. Then you talk about 3betting 8s to a min raise when tbh it seems like totally the worst option, as if he did that then the villain would have hit top pair on the flop. What do you do then? Call every street till you hit your set on the river? Or check-fold after 3betting to 900? The hand where the guy hit top set with Kings, maybe the flop lead out was weird, but after the villain called he seemed to play the hand perfectly and set the hand up to get stacks in on the river. You said bet an amount he will call; guess what, he thought he would call the shove because maybe he thought it looked weak and the villain would make a hero call. Does this review sound harsh? Well guess what. That's the way you sound every time you review someone's play. Oh wait, I almost forgot: "No one will ever find a mistake in my hand history" LOLOLOL. I switched the vid off at this point. You're obviously a good player but god damn lighten up a bit when reviewing someone's play.
my JJ instructor had a saying...."It's not sewing class ladies...". I think it applies here.
• Re: Brian Reviews a $15 180man
I don't at all try to be harsh. I just try to make my points very clear, and I have always learned best when my mistakes were pointed out in a clear manner, with an explanation that was memorable. If I sometimes sound a little "harsh" please do not take it personally. It has nothing to do with any human individual; we are all just virtual players in a virtual game when I am reviewing on video. It is just a game, a game where good decision making wins and bad decision making loses, and my job is to help you make better decisions and therefore make more money playing. And of course someone can find a mistake in a random HH of mine, I didn't mean it literally. I just mean I don't have any obvious leaks at the 180s that other players are exploiting. And remember, any time you are watching a video or reading a poker book, etc., it's just an individual sharing personal insight, knowledge and opinions. You don't always have to agree with them, or like what they have to say. That's the beauty of this world though. You just have to learn to see the positive in things and life becomes easier and less stressful. Good luck.
• Re: Brian Reviews a $15 180man
Hello, good videos btw.. can I ask about this one at 31:30: how can you raise 58o there and fold to a button shove, or even to the SB? It's kinda strange!?
• Re: Brian Reviews a $15 180man
Oh, forget it, I am sleeping too little.. the table is short and you don't have the odds to call because of the lack of antes.
Stanine (STAndard NINE) is a method of scaling test scores on a nine-point standard scale with a mean of five (5) and a standard deviation of two (2).
Some web sources attribute stanines to the U.S. Air Force during World War II. The earliest known use of Stanines was by the U.S. Air Force in 1943[1].
Test scores are scaled to stanine scores using the following algorithm:
1. Rank results from lowest to highest
2. Give the lowest 4% a stanine of 1, the next 7% a stanine of 2, etc., according to the following table:
Calculating Stanines
Result Ranking 4% 7% 12% 17% 20% 17% 12% 7% 4%
Stanine 1 2 3 4 5 6 7 8 9
The underlying basis for obtaining stanines is that a normal distribution is divided into nine intervals, each of which has a width of one half of a standard deviation excluding the first and last.
The mean lies approximately in the centre of the fifth interval.
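The rank-and-band algorithm above can be sketched in Python (an illustrative implementation, not part of the original article; ties in the input are broken arbitrarily by sort order):

```python
def stanines(scores):
    # Cumulative percentages for the 4/7/12/17/20/17/12/7/4 bands
    cutoffs = [4, 11, 23, 40, 60, 77, 89, 96, 100]
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i])  # rank lowest to highest
    result = [0] * n
    for rank, i in enumerate(order):
        percentile = 100.0 * (rank + 1) / n
        # First band whose cumulative cutoff covers this percentile
        result[i] = next(s + 1 for s, c in enumerate(cutoffs) if percentile <= c)
    return result

# With 100 evenly spread scores, band sizes match the table above
bands = stanines(list(range(100)))
print([bands.count(s) for s in range(1, 10)])
# Expected output: [4, 7, 12, 17, 20, 17, 12, 7, 4]
```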
Stanines can be used to convert any test score into a single digit number. This was valuable when paper punch cards were the standard method of storing this kind of information. However, because all
stanines are integers, two scores in a single stanine are sometimes further apart than two scores in adjacent stanines. This reduces their value.
Today stanines are mostly used in educational assessment[2]. The University of Alberta in Edmonton, Canada used the stanine system until 2003, when it switched to a 4-point scale [3]. | {"url":"http://psychology.wikia.com/wiki/Stanines?direction=prev&oldid=95343","timestamp":"2014-04-19T00:19:27Z","content_type":null,"content_length":"60273","record_id":"<urn:uuid:e8d5e30d-58da-47c2-8535-509eb120eed6>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00508-ip-10-147-4-33.ec2.internal.warc.gz"} |
RE: st: Restricting range of values in a graph
From "Nick Cox" <n.j.cox@durham.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject RE: st: Restricting range of values in a graph
Date Thu, 16 Jul 2009 14:11:04 +0100
I almost totally agree with Steve's advice. He uses the word Winsorize a
little more widely than is standard. (By the way, I can assure anyone
who reads that FAQ that the misbegotten word "gotten" did not appear in
my original draft.)
I'd favour making the omission of outliers a little more evident. In
this and some other respects -stripplot, box- is more flexible than
-graph box- or -graph hbox-. -stripplot- is downloadable from SSC.
Consider as an example -price- in the auto dataset.
sysuse auto
clonevar price2 = price
replace price2 = 14000 if price2 > 14000
stripplot price2, over(foreign) box center stack width(250) ///
xla(4000(2000)12000 14000 "outliers")
gen outliers = price > 14000
stripplot price2, over(foreign) box center stack width(250) ///
xla(4000(2000)12000 14000 "outliers") ///
separate(outliers) ms(oh S) legend(off)
Try the -nooutside- option or switch to another scale and show
everything. See: Nick Cox's FAQ at
http://www.stata.com/support/faqs/graphics/boxandlog.html . What he
demonstrates can apply to scales other than the log.
If you want to show some of the outside points, but not all, you will
have to Winsorize the points you want to hide. Replace them with a
value at the upper end of your desired graph range and give them an
invisible marker symbol. This will leave the rest of the boxplot
unchanged. You can add text at that value to show the number of
higher points excluded.
This problem comes up for other commands in which Stata computes the
plotting points; -stcurve- is an example. Stata has a -range- option
for axes, but it can only expand, not contract, the plotting range.
On Thu, Jul 16, 2009 at 3:09 AM, Dana Chandler<dchandler@gmail.com>
> I am preparing some graphs with simple boxplots over various groups.
> Thus on my x-axis, I have categorical variables for population groups.
> My y-axis has # of businesses of a certain type within each population
> group.
> Unfortunately, I would like to be able to only show the y-axis within
> a certain range (so as to not have outliers distort the picture). One
> idea I had was to simply do the graph and add "IF #businesses < 50".
> This will make the graph visible, but will distort the IQR of the
> boxplot. The "yscale(r(0 25))" command does not seem to work and seems
> only to "extend" a range of y-values rather than restrict it. Does
> anyone have a suggestion for how to construct a graph for the entire
> range of data but only display it over a specific range?
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2009-07/msg00667.html","timestamp":"2014-04-21T07:30:52Z","content_type":null,"content_length":"8815","record_id":"<urn:uuid:c113f835-1a5a-4e13-8b9a-be0890aed763>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00609-ip-10-147-4-33.ec2.internal.warc.gz"} |
Covariant Derivative Chain Rule
I'm looking for a resource regarding the chain rule for covariant derivatives.
The closest version I've seen is the one regarding differentials of maps: If F and G are differential maps, then $d(G \circ F)_p=dG_{F(p)} \circ dF_{p}$
(From deCarmo's Differential Geometry)
I'm not positive this extends to covariant derivatives, although I feel it should, as the differential applied to a vector $v$ and the covariant derivative in the direction $v$ differ only by a
projection onto the tangent plane. Could someone please point me towards the right resource? Thanks so much!
Edit: Regarding the comments below by Ryan and Deane:
I'm thinking about a covariant derivative (using the standard Levi-Civita connection) on a surface $S$ embedded in $\mathbb{R}^3$. Let $dN$ be the shape operator (differential of the Gauss map) and let $u \in T_p(S)$. Say $\alpha(t)$ is a curve on the surface with $\alpha'(0) = u$. Let $v(t)$ be a vector field defined on $\alpha(t)$. Then, the equation I am interested in would be: $\nabla_u (dN(v(t))) = (\nabla_u dN)(\nabla_u v(t))$. That is, the R.H.S. is the covariant derivative (in the tensor sense) of $dN$ applied to the tangent vector $\nabla_u v$. I would like to know whether this equation is true, or if there is a similar type of rule. Thanks again for your time!
Covariant derivatives usually apply to objects like sections of fibre bundles when you have some kind of connection. So what kind of compositional operation do you want to consider? – Ryan Budney Jun 17 '11 at 21:30
Agree with Ryan. Please cite a specific formula you think might be true. – Deane Yang Jun 17 '11 at 22:25
I'm not sure why this was closed as "not a real question". There actually is a very useful chain rule for the covariant derivative and it has to do with pullback bundles. If $\pi : E \to N$ is a vector bundle (e.g. $TN$), $\nabla^E$ is a covariant derivative on $E$, $\sigma$ is a section of $\pi$, and $\phi : M \to N$ is a map, then $\nabla^{\phi^* E} (\sigma \circ \phi) = (\nabla^E \sigma \circ \phi) \cdot T\phi$. This can be seen trivially using the "connection map" formulation $\nabla^E \sigma := \kappa \cdot T\sigma$. If someone re-opens this question, I'd gladly post a full answer. – Victor Dods Dec 16 '11 at 1:18
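For reference, the standard Leibniz (product) rule for the covariant derivative of a (1,1)-tensor field such as $dN$ — the rule the question is circling around — reads

$\nabla_u\big(dN(v)\big) = (\nabla_u\, dN)(v) + dN\big(\nabla_u v\big)$,

a sum of two terms, rather than $(\nabla_u dN)$ applied to $\nabla_u v$ as in the formula proposed in the question.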
closed as not a real question by Ryan Budney, Willie Wong, Deane Yang, Will Jagy, Mariano Suárez-Alvarez♦ Jun 18 '11 at 5:48
Atomic Structure
Atomic Structure: IIT JEE and AIEEE Physics FREE Video Lectures
Numerous times, when you have asked for advice about the important chapters of the IIT JEE Physics syllabus, people would have suggested Mechanics, Electricity and Magnetism, and so on without blinking once. That advice is not bad; however, what most people forget is that Modern Physics is another important unit, and one that is comparatively easier to master than the other units considered important. The pattern of IIT JEE has changed drastically in the past few years, and Modern Physics is being given more importance with each change.
IIT JEE is known for the tricks it plays with its questions; however, this is one unit where there isn't much scope for twists and turns, so it is wise to concentrate on it. Use your basic IIT study material to master the concepts (H.C. Verma, in addition to the NCERT books, is recommended for the concepts of Modern Physics). Follow this up with an IIT JEE test series and an AIEEE test series, because the questions from this unit are more or less standardized ones that you can find in any good IIT JEE study material.
The question of discussion is from the chapter Atomic Structure, and goes as follows:
A 12.5 eV electron beam is used to bombard gaseous hydrogen at room temperature. What series of wavelengths will be emitted?
The energy differences are calculated first, using the formula for the energy of the nth orbit in the hydrogen atom (E_n = -13.6/n^2 eV), as shown in the video. Comparing these differences with 12.5 eV shows that the electron can be excited only as far as the 3rd orbit (reaching n = 4 would require 12.75 eV). The bombarding electron emerges from the collision carrying whatever kinetic energy is left over.
To find the emitted wavelengths, the Rydberg formula is used, and as seen in the video, 2 of the resulting lines belong to the Lyman series and 1 to the Balmer series.
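The numbers in this problem can be double-checked with a short Python sketch (Bohr energies E_n = -13.6/n^2 eV; wavelengths via lambda = hc/dE with hc ≈ 1240 eV·nm — a generic check, independent of the video):

```python
# Bohr-model check: highest level reachable from n = 1 with a 12.5 eV
# beam, and the emitted wavelengths on de-excitation (Rydberg relation
# expressed through energy differences; hc ≈ 1240 eV·nm).
HC = 1240.0                      # eV·nm, approximate

def energy(n):
    return -13.6 / n**2          # eV, Bohr energy of the nth orbit

beam = 12.5
n_max = max(n for n in range(1, 10) if energy(n) - energy(1) <= beam)

# Possible emission lines among levels 1..n_max: (lower, upper) -> wavelength (nm)
lines = {(lo, hi): HC / (energy(hi) - energy(lo))
         for hi in range(2, n_max + 1) for lo in range(1, hi)}
# Lines ending on n = 1 are Lyman; those ending on n = 2 are Balmer.
```

Running this gives n_max = 3 and three lines — two Lyman (3→1, 2→1) and one Balmer (3→2, near the familiar 656 nm H-alpha line), matching the solution described above.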
Practice many similar problems from an IIT JEE test series after mastering the concepts from free IIT study material, because one thing about IIT JEE must always be remembered: it cannot be cracked without a strong conceptual foundation.
My answer is C. Choose the correct graph of the following system of inequalities: y < 3x + 5, y < 2x + 2
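Without the answer graphs there is nothing to match against, but checking whether a sample point satisfies the system is straightforward (the test points below are arbitrary):

```python
# A point lies in the solution region iff it satisfies both inequalities.
def in_region(x, y):
    return y < 3 * x + 5 and y < 2 * x + 2

# (0, 0): 0 < 5 and 0 < 2, so it is inside the region;
# (0, 3): violates the second inequality (3 is not < 2), so it is outside.
```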
[Numpy-discussion] using FortranFile to read data from a binary Fortran file
Brennan Williams brennan.williams@visualreservoir....
Tue Nov 3 20:18:13 CST 2009
Brennan Williams wrote:
> Hi
> I'm using FortranFile on 32 bit XP.
> The first record in the file has both 32 bit integer values and double
> precision float values.
> I've used readInts('i') to read the data into what I presume is a 32-bit
> integer numpy array.
> Items 0 and 1 of the array are the first double precision value.
> What's the best way to convert/cast these to get the double precision
> float value?
> I assume I need to do some sort of dpval=ival.astype('float64')
> so far....
> f= FortranFile(fname,endian='<')
> ihdr=f.readInts('i')
ok I took a closer look at FortranFile and I'm now doing the following.
Note that the first line in the file I'm reading
has two double precision reals/floats followed by 8 32 bit integers.
if f:
print 'hdr=',hdr
print 'len=',len(hdr)
print 't=',t
print 'i=',i
This gives me...
which is correct.
So is that the best way to do it, i.e. if I have a line of mixed data
types, use readString and then do my own unpacking?
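The record layout under discussion — two 64-bit floats followed by eight 32-bit integers — can be illustrated with Python's standard struct module (a sketch independent of FortranFile itself; reinterpreting the raw bytes, rather than casting integer values, is what recovers the doubles):

```python
import struct

# Little-endian record: two float64 values then eight int32 values,
# matching the "two double precision reals followed by 8 32-bit integers"
# layout described above ('<' also disables alignment padding).
record = struct.pack('<2d8i', 1.5, -2.25, 1, 2, 3, 4, 5, 6, 7, 8)

# Reading everything as int32 spreads each double across two slots...
as_ints = struct.unpack('<12i', record)

# ...so instead reinterpret the first 16 raw bytes as the two doubles
# (a bit-level reinterpretation, not a value cast like astype would do):
doubles = struct.unpack('<2d', record[:16])
ints = struct.unpack('<8i', record[16:])
```

This is the same "read the raw bytes, then unpack each field by its type" approach as reading the record with readString and unpacking it yourself.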
More information about the NumPy-Discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-November/046371.html","timestamp":"2014-04-20T08:34:02Z","content_type":null,"content_length":"4402","record_id":"<urn:uuid:4deb4c76-47d6-4684-b827-74c642cfe9a4>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00191-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - Beat signals
Thanks. I guess we are interested in the power, since that is what we measure. So I get
cos(ωt) . cos((ω+Ω)t) = ½[cos(Ωt) + cos((2ω+Ω)t)] and
cos(ωt) . cos((ω−Ω)t) = ½[cos(Ωt) + cos((2ω−Ω)t)].
Adding them and keeping terms at Ω gives cos(Ωt), so I get a nonzero DC-signal. Is the author wrong then?
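The product-to-sum arithmetic can be spot-checked numerically: the sum of the two products equals cos(Ωt)·(1 + cos(2ωt)), whose slowly varying part is cos(Ωt). A quick sketch (the values of ω and Ω are arbitrary):

```python
import math

w, W = 100.0, 3.0    # illustrative carrier (omega) and beat (Omega) frequencies

def product_sum_error(t):
    # cos(wt)cos((w+W)t) + cos(wt)cos((w-W)t) should equal cos(Wt)(1 + cos(2wt))
    lhs = math.cos(w*t) * math.cos((w+W)*t) + math.cos(w*t) * math.cos((w-W)*t)
    rhs = math.cos(W*t) * (1 + math.cos(2*w*t))
    return abs(lhs - rhs)

max_err = max(product_sum_error(t) for t in (0.0, 0.17, 1.234, 2.5))
```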
Best regards, | {"url":"http://www.physicsforums.com/showpost.php?p=3814371&postcount=6","timestamp":"2014-04-17T12:48:55Z","content_type":null,"content_length":"7040","record_id":"<urn:uuid:87962452-e2ea-407d-b481-d4d532dc4411>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00077-ip-10-147-4-33.ec2.internal.warc.gz"} |
Give all of the names that apply to each quadrilateral: 1.Kite 2.Rectangle 3.Rhombus
Improved Methods for Divisible Load Distribution on k-Dimensional Meshes Using Multi-Installment
November 2007 (vol. 18 no. 11)
pp. 1618-1629
In the divisible load distribution, the classic methods on linear arrays divide the computation and communication processes into multiple time intervals in a pipelined fashion. Li (2003) has proposed
a set of improved algorithms for linear arrays which can be generalized to k-dimensional meshes. In this paper, we first propose the algorithm M (multi-installment) that employs the multi-installment
technique to improve the best algorithm Q proposed by Li. Second, we propose the algorithm S (start-up cost) that includes the computation and communication start-up costs in the design. While the
asymptotic speedups of our algorithms M and S derived from the closed-form solutions are the same as algorithm Q, our algorithms approach the optimal speedups considerably faster than algorithm Q as
the number of processors increases. Finally, we combine algorithms M and S and propose the algorithm MS. While algorithm MS has the same the asymptotic performance as algorithms Q and S, it achieves
a better speedup when the load to be processed is very large and the number of processors is fixed or when the load to be processed is fixed and the number of processors is small.
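As background for readers new to divisible load theory, the classic single-installment split on a bus or linear chain — equal finish times, sequential communication, no start-up costs — can be sketched as follows. This is a toy model from the general literature, not the paper's algorithms M, S, or MS:

```python
# Classic single-installment divisible-load split on a bus: the master
# sends fraction a[i] to worker i (communication time a[i]*z, one worker
# at a time); worker i then computes for time a[i]*w.  Equal finish
# times require a[i+1]*(z + w) = a[i]*w, i.e. a[i+1] = a[i]*w/(w + z).
def load_fractions(n, w, z):
    a = [1.0]
    for _ in range(n - 1):
        a.append(a[-1] * w / (w + z))
    total = sum(a)
    return [x / total for x in a]   # normalize so the fractions sum to 1

def finish_times(a, w, z):
    times, sent = [], 0.0
    for frac in a:
        sent += frac * z            # this worker's data has now arrived
        times.append(sent + frac * w)
    return times

fracs = load_fractions(4, 1.0, 0.5)
times = finish_times(fracs, 1.0, 0.5)
```

Earlier workers receive larger fractions, and all workers finish simultaneously — the optimality condition that the algorithms in the paper refine for meshes, multiple installments, and start-up costs.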
[1] D. Altilar and Y. Paker, “Optimal Scheduling Algorithms for Communication Constrained Parallel Processing,” Proc. Eighth Int'l Euro-Par Conf. (Euro-Par '02), pp. 197-206, 2002.
[2] V. Bharadwaj and H.M. Wong, “Scheduling Divisible Loads on Heterogeneous Linear Daisy Chain Networks with Arbitrary Processor Release Times,” IEEE Trans. Parallel and Distributed Systems, vol.
15, no. 3, pp. 273-288, Mar. 2004.
[3] V. Bharadwaj, D. Ghose, and V. Mani, “Multi-Installment Load Distribution in Tree Networks with Delays,” IEEE Trans. Aerospace and Electronic Systems, vol. 31, no. 2, pp. 555-567, 1995.
[4] V. Bharadwaj, D. Ghose, V. Mani, and T.G. Robertazzi, Scheduling Divisible Loads in Parallel and Distributed Systems. IEEE CS Press, 1996.
[5] V. Bharadwaj, D. Ghose, and T. Robertazzi, “Divisible Load Theory: A New Paradigm for Load Scheduling in Distributed Systems,” Cluster Computing, vol. 6, no. 1, pp. 7-17, 2003.
[6] V. Bharadwaj and H.M. Wong, “Scheduling Divisible Loads on Heterogeneous Linear Daisy Chain Networks with Arbitrary Processor Release Times,” IEEE Trans. Parallel and Distributed Systems, vol.
15, no. 3, 273-288, Mar. 2004.
[7] J. Blazewicz, M. Drozdowski, and M. Markiewicz, “Divisible Task Scheduling—Concept and Verification,” Parallel Computing, vol. 25, pp. 87-98, 1999.
[8] J. Blazewicz, M. Drozdowski, F. Guinand, and D. Trystram, “Scheduling a Divisible Task in a 2-Dimensional Mesh,” Discrete Applied Math., vol. 94, nos. 1-3, pp. 35-50, 1999.
[9] S. Chan, V. Bharadwaj, and D. Ghose, “Large Matrix-Vector Products on Distributed Bus Networks with Communication Delays Using the Divisible Load Paradigm: Performance and Simulation,” Math. and
Computers in Simulation, vol. 58, pp. 71-92, 2001.
[10] S. Charcranoon, T.G. Robertazzi, and S. Luryi, “Parallel Processor Configuration Design with Processing/Transmission Costs,” IEEE Trans. Computers, vol. 49, no. 9, pp. 987-991, Sept. 2000.
[11] Y.C. Cheng and T.G. Robertazzi, “Distributed Computation with Communication Delays,” IEEE Trans. Aerospace and Electronic Systems, vol. 24, no. 6, pp. 700-712, Nov. 1988.
[12] M. Drozdowski, Selected Problems of Scheduling Tasks in Multiprocessor Computing Systems. Poznan Univ. of Technology Press, 1997.
[13] M. Drozdowski and W. Glazek, “Scheduling Divisible Loads in a Three-Dimensional Mesh of Processors,” Parallel Computing, vol. 25, no. 4, pp. 381-404, 1999.
[14] M. Drozdowski and P. Wolniewicz, “Out-of-Core Divisible Load Processing,” IEEE Trans. Parallel and Distributed Systems, vol. 14, no. 10, pp. 1048-1056, Oct. 2003.
[15] M. Drozdowski and P. Wolniewicz, “Performance Limits of Divisible Load Processing in Systems with Limited Communication Buffers,” J. Parallel and Distributed Computing, vol. 64, no. 8, pp.
960-973, 2004.
[16] D. Ghose and H.J. Kim, “Computing BLAS Level-2 Operations on Workstation Clusters Using the Divisible Load Paradigm,” Math. and Computer Modelling, vol. 41, no. 1, pp. 49-70, Jan. 2005.
[17] W. Glazek, “Distributed Computation in a Three-Dimensional Mesh with Communication Delays,” Proc. Sixth Euromicro Workshop Parallel and Distributed Processing, pp. 38-42, Jan. 1998.
[18] J. Guo, J. Yao, and L.N. Bhuyan, “An Efficient Packet Scheduling Algorithm in Network Processors,” Proc. INFOCOM, Mar. 2005.
[19] C. Lee and M. Hamdi, “Parallel Image Processing Applications on a Network of Workstations,” Parallel Computing, vol. 21, pp. 137-160, 1995.
[20] K. Li, “Managing Divisible Load on Partitionable Networks,” High Performance Computing Systems and Applications, J. Schaeffer, ed., pp. 217-228, Kluwer Academic Publishers, 1998.
[21] K. Li, “Improved Methods for Divisible Load Distribution on k-Dimensional Meshes Using Pipelined Communications,” IEEE Trans. Parallel and Distributed Systems, vol. 14, no. 12, pp. 1250-1261,
Dec. 2003.
[22] K. Li, “Accelerating Divisible Load Distribution on Tree and Pyramid Networks Using Pipelined Communications,” Proc. 18th Int'l Parallel and Distributed Processing Symp. (IPDPS '04), p. 228,
Apr. 2004.
[23] X. Li, B. Veeravalli, and C.C. Ko, “Divisible Load Scheduling on a Hypercube Cluster with Finite-Size Buffers and Granularity Constraints,” Proc. First IEEE/ACM Int'l Symp. Cluster Computing and
the Grid (CCGrid '01), pp. 660-667, May 2001.
[24] X. Li, V. Bharadwaj, and C.C. Ko, “Distributed Image Processing on a Network of Workstations,” Int'l J. Computers and Applications, vol. 25, no. 2, pp. 1-10, 2003.
[25] T.G. Robertazzi, “Ten Reasons to Use Divisible Load Theory,” Computer, pp. 63-68, May 2003.
[26] B. Veeravalli, X. Li, and C.C. Ko, “On the Influence of Start-Up Costs in Scheduling Divisible Loads on Bus Networks,” IEEE Trans. Parallel and Distributed Systems, vol. 11, no. 12, pp.
1288-1305, Dec. 2000.
[27] R. Wang, A. Krishnamurthy, R. Martin, T. Anderson, and D. Culler, “Modeling Communication Pipeline Latency,” Proc. Joint Int'l Conf. Measurement and Modeling of Computer Systems (SIGMETRICS '98/
PERFORMANCE '98), pp. 22-32, 1998.
[28] Y. Yang, K.V.D. Raadt, and H. Casanova, “Multiround Algorithms for Scheduling Divisible Loads,” IEEE Trans. Parallel and Distributed Systems, vol. 16, no. 11, pp. 1092-1102, Nov. 2005.
Index Terms:
Divisible load theory, linear array, k-dimensional mesh, multi-installment
Yeim-Kuan Chang, Jia-Hwa Wu, Chi-Yeh Chen, Chih-Ping Chu, "Improved Methods for Divisible Load Distribution on k-Dimensional Meshes Using Multi-Installment," IEEE Transactions on Parallel and
Distributed Systems, vol. 18, no. 11, pp. 1618-1629, Nov. 2007, doi:10.1109/TPDS.2007.1103
What is the equation of the line that passes through the point (3, 2) and has a slope of 4? A. y = 4x − 1 B. y = 4x − 5 C. y = 4x − 10 D. y = 4x + 1
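A quick check of the arithmetic: the line y = 4x + b passes through (3, 2) when b = 2 − 4·3 = −10, i.e. choice C. As a trivial sketch:

```python
# Line with slope m through (x0, y0): y = m*x + b with b = y0 - m*x0.
m, x0, y0 = 4, 3, 2
b = y0 - m * x0      # -10, so the line is y = 4x - 10 (choice C)
```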
Information Sheet
EE372: Quantization, Compression, and Classification
spring 2006 - 2007
Handouts will be made available at http://www.stanford.edu/class/ee372/handouts.html as they are produced. These include lecture notes, copies of the slides used in class, homework, a pdf version
of this information sheet, and a discussion of the class project.
Robert M. Gray
Office: Packard 261
Phone: (650) 723-4001 (ISL), 723-6685 (EE)
rmgray at stanford.edu
Office hours: M 1:00-2:30, W 10:-11:30 or by email appointment.
Teaching Assistant
Sangho Yoon
Office: Packard 109
Phone: TBA
Office hours: TBA
e-mail: holyoon at stanford.edu
Admin. Ass't.
Kelly Yilmaz
Packard 259
(650) 723-4539 yilmaz@stanford.edu
MW 3:15-4:30, 60-61H (time may be shifted slightly to remove conflict)
Problem Sessions
Weekly sessions to discuss homework and projects. TBA
Course Objective
Develop the fundamentals of quantization and compression and quantization-based classification.
Course Description
Theory and design of codes for quantization and signal compression systems (source coding systems), systems which convert analog or high bit rate digital signals into relatively low bit rate
digital signals. Applications to analog-to-digital conversion, source coding and data compression, statistical classification, clustering, fitting discrete models to continuous models, density
estimation, and machine learning. Necessary conditions for optimality of codes and implied code design algorithms. Constrained and structured vector quantization.
Intended Audience
Electrical engineers and computer scientists interested in lossy and lossless compression and statistical classification, especially of speech and images.
□ Basic linear systems theory, including discrete and continuous parameter Fourier Transforms, basics of 2D Fourier transforms. The material covered in EE261.
□ Familiarity with Unix and Matlab (or willingness to learn!). Knowledge of some programming language such as C or C++ or Java.
□ Probability and random processes. The material covered in in EE278.
□ Information theory (EE376A) will certainly help, but it is not assumed and needed material is taught (rapidly) as part of this class.
Tentative Course Topics
The topics are likely to be closely approximated, but not necessarily in the same order. The lectures will begin with an introductory survey of most of the basic ideas in order to frontload
enough of the material to begin looking for projects and studying the literature. Topics will be then treated individually in more depth. The introductory tour will be covered in handout lecture
□ Topic 1: Introduction and overview A description of quantization in general terms and a brief sketch of some of the ideas and issues considered in the course. The first lecture will be a
broad survey of the entire course and a discussion of the course logistics. The subsequentfour lectures will go through the handout lecture notes An introductory tour of quantization.
☆ Quantization:extracting discrete information from a continuous world
☆ Encoder and decoder
☆ Cost functions: measures of distortion and rate
☆ Fixed-rate and variable-rate codes
☆ Average performance and optimality
☆ Theories of quantization
○ Nonasymptotic theory: code optimality properties
○ Shannon rate-distortion theory
○ High rate theory
☆ Applications
○ compression for transmission and storage
○ putting points in space
○ statistical classification/detection
○ statistical clustering and learning
○ statistical estimation/regression including a digital link
○ density estimation
○ designing and choosing Gauss mixture models
□ Topic 2: Fundamentals of quantization Elaboration of the basic ideas of quantization including codes, cost measures, measures of performance, and definitions of optimality.
☆ Preliminaries: Random variables, vectors, processes, and fields review
☆ Quantization: codes, partitions, and codebooks
☆ Distortion, rate, and optimal performance
☆ Lagrangian formulation of quantization
☆ Necessary conditions for optimal codes
☆ The Lloyd clustering algorithm
□ Topic 3: Lossless and almost lossless coding Quantization can be viewed as the Shannon model for lossy source coding. This topic treats lossless source coding because it usually forms a
component of overall quantization systems and because it provides operational significance to Shannon entropy as well as a simple example of a Shannon theory asymptotic result.
☆ Uniquely decodable and prefix codes
☆ Kraft inequality
☆ Shannon's lossless coding theorem
☆ Shannon codes
☆ Huffman codes
☆ Coding vectors
☆ Entropy rate and the Shannon limit
☆ Brief survey of modern lossless codes, esp. Fano, arithmetic and Lempel-Ziv. See EE 376 for more details.
☆ Predictive coding
☆ Code mismatch
☆ Universal codes
□ Topic 5: Structured quantization Quantizers are usually implemented with structural constraints in order to have manageable complexity.
☆ Scalar quantization
☆ Uniform quantization
☆ Lattice quantization
☆ Product quantization
☆ Transform coding and bit allocation
☆ Subband and wavelet coding
☆ Gain/shape quantization
☆ Successive approximation
☆ Tree- structured quantization (TSVQ): Growing and pruning, CART and BFOS
☆ Multistage quantization
☆ Predictive quantization
☆ Recursive quantization
☆ Classified/switched/composite quantization
☆ Universal quantization
☆ Quantizing vectors into models: LPC, minimum discrimination information and related distortion measures, Gauss mixture vector quantization
□ Topic 6: Quantization, Classification, and Estimation
☆ Quantization and classification
☆ Bayes quantization
☆ Gauss mixture quantization
☆ Classification by compression
☆ Quantization and density estimation
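The Lloyd clustering algorithm listed under Topic 2 can be sketched for the scalar squared-error case — a minimal illustration of the two necessary conditions (nearest-codeword partition, centroid update), not the course's own implementation:

```python
# Lloyd's algorithm for scalar quantizer design under squared error:
# alternate (1) nearest-codeword partition and (2) centroid update.
def lloyd(samples, codebook, iters=50):
    codebook = list(codebook)
    for _ in range(iters):
        cells = [[] for _ in codebook]
        for x in samples:
            i = min(range(len(codebook)), key=lambda j: (x - codebook[j]) ** 2)
            cells[i].append(x)
        # Centroid update; empty cells keep their old codeword.
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(cells)]
    return codebook

def distortion(samples, codebook):
    return sum(min((x - c) ** 2 for c in codebook) for x in samples) / len(samples)

samples = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
final = lloyd(samples, [0.0, 1.0])   # converges to the two cluster means
```

Each iteration can only lower (never raise) the average distortion, which is why the design loop converges to a locally optimal codebook.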
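Topic 3's Huffman codes and the Kraft inequality can likewise be illustrated with a minimal binary Huffman construction (a sketch; ties between equal weights are broken arbitrarily):

```python
import heapq

def huffman_lengths(freqs):
    """Codeword lengths of a binary Huffman code for the given weights."""
    lengths = [0] * len(freqs)
    # Heap entries: (weight, unique tie-breaker, symbols in this subtree).
    heap = [(w, i, [i]) for i, w in enumerate(freqs)]
    heapq.heapify(heap)
    tie = len(freqs)
    while len(heap) > 1:
        w1, _, s1 = heapq.heappop(heap)
        w2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:          # every symbol in the merged tree gains a bit
            lengths[s] += 1
        heapq.heappush(heap, (w1 + w2, tie, s1 + s2))
        tie += 1
    return lengths

def kraft_sum(lengths):
    # Prefix codes satisfy sum(2^-l) <= 1; Huffman codes meet it with equality.
    return sum(2.0 ** -l for l in lengths)

lens = huffman_lengths([0.4, 0.3, 0.2, 0.1])
```

For the weights above, the lengths come out as {1, 2, 3, 3} and the Kraft sum is exactly 1, as expected for a complete prefix code.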
Primary Texts
□ Vector Quantization and Signal Compression, A. Gersho and R.M. Gray. Kluwer Academic Press, 1992.
□ R.M. Gray and D.L. Neuhoff, "Quantization," IEEE Transactions on Information Theory, vol. 44, pp. 2325-2384, Oct. 1998.
□ Handout lecture notes and copies of slides.
Other texts of possible interest
□ T.M. Cover and J.A. Thomas, Elements of Information Theory, Wiley, 1991. (Excellent text on information theory, including noiseless coding theory.)
□ T.C. Bell, J.G. Cleary, and I.H. Witten, Text Compression, Prentice-Hall, 1990. (Excellent reference on lossless compression algorithms.)
□ Ian H. Witten, Alistair Moffat, and Timothy C. Bell Managing Gigabytes: Compressing and Indexing Documents and Images, 1999.
□ H. Abut, ed., Vector Quantization, IEEE Press, 1990. (Collected papers on VQ through 1990.)
□ N.S. Jayant and P. Noll, Digital Coding of Waveforms, Prentice-Hall, 1984. (Old standby. Concentrates on various forms of scalar quantization. Has serious errors in treatment of dithering)
□ W.B. Pennebaker and J.L. Mitchell, JPEG: Still Image Data Comrpression Standard, Van Nostrand Reinhold, 1993.
□ T. Berger, Rate-Distortion Theory, Prentice-Hall, 1971. (The bible on rate-distortion theory.)
□ R. Gallager, Information Theory and Reliable Communication, Wiley, 1968. (The old standby on information theory, including noiseless coding and rate-distortion theory. Still a wonderful
□ S. Graf and H. Luschgy, Foundations of Quantization for Probability Distributions, Springer, Lecture Notes in Mathematics, 1730, Berlin, 2000.
□ R.M. Gray, Entropy and Information Theory, Springer-Verlag, 1990. (The hard core mathematics of information theory,including general source coding theorems.)
□ R. Veldhuis and M. Breeuwer, An Introduction to Source Coding, Prentice Hall, New York, 1993
□ M. Rabbani and P. W. Jones, Digital Image Compression Techniques, SPIE Optical Engineering Press, Bellingham, Washington, 1991.
□ Introduction to Data Compression, K. Sayood. Morgan Kauffman, 1996.
□ T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, New York, 2001.
□ Y. Fisher, Ed. Fractal Image Compression: Theory and Application, Springer-Verlag, NY, 1995.
□ Mark Nelson and Jean-Loup Gailly, The Data Compression Book, M&T Books, NY, 1996.
□ J.A. Storer, Data Compression - Methods and Theory, Computer Science Press, NY, 1988.
□ Data Compression: The Complete Reference, D. Salomon, Springer, 1998.
Occasional notes will be handed out and posted to the Web.
Some relevant Web links
Course Requirements and Grading
4 Homework sets 35%
Midterm 30%
Final Project 35%
The midterm is tentatively scheduled for Thursday, 17 May 2007 from 6:30 PM to 9:30 PM. It will cover material from the first 11 lectures. The room on campus will be announced. The course project
will consist of either a theoretical analysis of a quantization or compression system or the design, programming, and simulation of a signal compression/quantization/classification algorithm for
a particular application (or some combination of the two). The project should be described in readable English in a report. There will also be a 15 minute oral presentation during the final week
of the quarter before finals. The report should provide appropriate references to the literature and a comparative discussion with existing methods. Suggestions for projects may be found at
projects.html, but creativity in developing a topic will be considered in the grade. The projects can be developed for any available platform. The project grade will be based on the creativity
and technical content of the project and on the quality of the presentation and participation in the discussion of the other project presentations. A 1 to 3 page proposal for the project should
be submitted prior to Wednesday 2 May describing the basic problem to be attacked, the general approach, and a list of relevant references. In previous years some of the class projects have
turned into conference publications, including the IEEE Data Compression Conference and the IEEE International Conference on Image Processing. Students are encouraged to develop a project that
has some relation to their own research interests. A list of suggested projects will be handed out the second week of class.
Collaboration Policy
You are encouraged to discuss together the concepts presented in the class and the book. You may also discuss the homework problems, and help each other if someone gets stuck. Projects may be
done by pairs, but individual responsibilities must be clearly described and separate reports submitted. You are required to implement new code by yourself, work out written problems by yourself,
and turn in only your own work. Specifically, you may not copy anyone else's computer files or copy anyone else's written homework. If you choose to share subroutines, then these subroutines
should be available to anyone in the class, e.g., via a web page. The goal is that each student should be able to give and receive sufficient help so that each person can complete the homework,
but that in the end each student is turning in a homework that he or she understands fully and would be able to reproduce in its entirety unaided. Obviously it is difficult to write a completely
specific collaboration policy; you are asked to abide by the spirit, rather than the letter, of this one.
Image Systems Engineering Program Lab
Computing facilities will be made available in the Image Processing Lab
for use in the homeworks and projects. | {"url":"http://www.stanford.edu/class/ee372/","timestamp":"2014-04-23T17:02:39Z","content_type":null,"content_length":"16165","record_id":"<urn:uuid:1bcdcfcb-c5f2-428d-be16-1b4c30c92477>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00342-ip-10-147-4-33.ec2.internal.warc.gz"} |
Thoughts on a non-linear second order problem
[itex]h''(t)=-\frac{1}{h(t)^2}, h(0) = h_0, h'(0)=v_0[/itex]
The first step is to, I think, reduce this to a first-order problem:
[itex]h'(t)h''(t)=-h'(t)\frac{1}{h(t)^2}[/itex] --- Multiply both sides by h'(t)
[itex]h'(t)^2=\frac{2}{h(t)}+c_1[/itex] --- Integrate both sides (note the factor 2: [itex]\int h'h''\,dt = \tfrac{1}{2}h'(t)^2[/itex])
[itex]1/h'(t) = \sqrt{\frac{h(t)}{c_1 h(t)+2}}[/itex] --- Rearrange
I haven't seen this explained anywhere, but I'm fairly certain that for f = f(x), [itex]\int \frac{1}{f'}df = x + c[/itex] (could someone outline a proof/disproof for this? Please no ad verecundiam.)
Therefore, integrating both sides with respect to h(t) gives a messy explicit formula for t(h):
[itex]t(h) = \int \sqrt{\frac{h}{c_1 h+2}} dh[/itex]
I am looking for a non-recursive, explicit formula for h(t). Any ideas? Is there a better approach than this one?
Thanks in advance!
P.S., here's my thought process for [itex]\int \frac{1}{f'}df = x + C[/itex]:
[itex]\frac{dx}{dy} = \frac{1}{\frac{dy}{dx}}[/itex]
[itex]\int \frac{1}{\frac{dy}{dx}} dy = \int 1 dx = x + C[/itex]
Even if this isn't valid, is it true? | {"url":"http://www.physicsforums.com/showthread.php?p=4267904","timestamp":"2014-04-16T04:35:03Z","content_type":null,"content_length":"20486","record_id":"<urn:uuid:4dce481c-ad8e-4255-9d8f-b3a94f737568>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00334-ip-10-147-4-33.ec2.internal.warc.gz"} |
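As a quick numerical sanity check of the first integral (just a sketch, not a symbolic answer): integrating [itex]h''(t)=-1/h(t)^2[/itex] with a hand-rolled RK4 step, the quantity [itex]h'^2 - 2/h[/itex] should stay constant (the factor 2 comes from integrating [itex]h'h''[/itex] to [itex]\tfrac{1}{2}h'^2[/itex]). The step size and initial values below are arbitrary test choices.

```python
# Verify numerically that E = h'^2 - 2/h is conserved along solutions of
# h'' = -1/h^2, i.e. that the first integral above is correct.

def rk4_step(h, v, dt):
    """One classical RK4 step for the system h' = v, v' = -1/h^2."""
    def acc(y):
        return -1.0 / (y * y)
    k1h, k1v = v, acc(h)
    k2h, k2v = v + 0.5 * dt * k1v, acc(h + 0.5 * dt * k1h)
    k3h, k3v = v + 0.5 * dt * k2v, acc(h + 0.5 * dt * k2h)
    k4h, k4v = v + dt * k3v, acc(h + dt * k3h)
    h += dt * (k1h + 2 * k2h + 2 * k3h + k4h) / 6.0
    v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    return h, v

h, v = 2.0, 0.1              # arbitrary h_0, v_0
E0 = v * v - 2.0 / h         # this constant plays the role of c_1
for _ in range(1000):        # integrate up to t = 1 with dt = 1e-3
    h, v = rk4_step(h, v, 1e-3)
print(abs((v * v - 2.0 / h) - E0) < 1e-9)  # True: the invariant is preserved
```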
Solving the Minimum Label Spanning Tree Problem by Mathematical Programming Techniques
Advances in Operations Research
Volume 2011 (2011), Article ID 143732, 38 pages
Research Article
Institute of Computer Graphics and Algorithms, Vienna University of Technology, 1040 Vienna, Austria
Received 20 November 2010; Accepted 5 March 2011
Academic Editor: I. L. Averbakh
Copyright © 2011 Andreas M. Chwatal and Günther R. Raidl. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
We present exact mixed integer programming approaches including branch-and-cut and branch-and-cut-and-price for the minimum label spanning tree problem as well as a variant of it having multiple
labels assigned to each edge. We compare formulations based on network flows and directed connectivity cuts. Further, we show how to use odd-hole inequalities and additional inequalities to
strengthen the formulation. Label variables can be added dynamically to the model in the pricing step. Primal heuristics are incorporated into the framework to speed up the overall solution process.
After a polyhedral comparison of the involved formulations, comprehensive computational experiments are presented in order to compare and evaluate the underlying formulations and the particular
algorithmic building blocks of the overall branch-and-cut(-and-price) framework.
1. Introduction
The minimum label spanning tree (MLST) problem was first introduced in [1] and has, for instance, applications in telecommunication network design and data compression [2]. For the MLST problem we
are given an undirected graph with nodes (or vertices) and edges connecting pairs of nodes. In addition a labelling function is given, assigning to each edge an element, called “label”, from a finite
set . The objective is to find a minimum cardinality label subset inducing a spanning tree in the sense that for each edge in the spanning tree, its corresponding label is selected. We also consider
the situation where more than one label can be assigned to an edge.
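To make the problem statement concrete, the following sketch (toy instance and function names are ours, not from the paper) tests whether a candidate label subset induces a connected spanning subgraph, i.e., whether it is feasible for the MLST problem:

```python
def is_feasible(n, edges, labels):
    """Return True if the edges whose label is in `labels` connect all n nodes.

    `edges` is a list of (u, v, label) triples on nodes 0..n-1.
    """
    # Union-find over the subgraph induced by the chosen labels.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v, l in edges:
        if l in labels:
            parent[find(u)] = find(v)
    return len({find(x) for x in range(n)}) == 1

# Toy instance: 4 nodes, labels 'a', 'b', 'c'.
E = [(0, 1, 'a'), (1, 2, 'a'), (2, 3, 'b'), (0, 3, 'c')]
print(is_feasible(4, E, {'a', 'b'}))  # True: {'a','b'} spans all nodes
print(is_feasible(4, E, {'a'}))       # False: node 3 is not reached
```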
2. Related Work
The minimum label spanning tree (MLST) problem has been introduced by Chang and Leu [1] for the first time. In this work, the authors showed the MLST problem to be NP-hard, and proposed an exact and an
approximative algorithm called maximum vertex covering Algorithm (MVCA). Krumke and Wirth [3] proposed a modified construction algorithm and derived a performance guarantee for it. Moreover, it has
been shown that the problem cannot be approximated within a constant factor. An improved performance bound has been obtained by Wan et al. [4], and a tight bound has then been found by Xiong et al. [5
]. An experimental comparison of further MVCA variations is presented in [6].
Besides approximative methods many metaheuristic algorithms have been proposed and studied in the literature during the last decade. Various genetic algorithms have been developed in [7, 8]. Methods
based on local search have been treated from a theoretical point of view in [9], and from a more practical one in [10–14]. In particular, the latter publications also cover metaheuristics like greedy
randomized search procedures, local search, variable neighborhood search and the pilot method.
Less work exists regarding exact algorithms. An exact algorithm based on A*-search has been proposed in [1]; a similar approach, however not using the guidance function of the A*-algorithm, has been
proposed in [12]. So far, only two mathematical-programming approaches have been considered in the literature. The first mixed integer programming (MIP) formulation proposed by Chen et al. [15] is
based on Miller-Tucker-Zemlin inequalities (cf. Section 3.1) which ensure that the decision variables for the edges induce a connected subgraph covering all nodes of the initial graph. In a recent
work of Captivo et al. [16], the authors propose an MIP formulation based on single commodity flows, a frequently used modelling technique for spanning trees. A branch-and-cut algorithm based on
directed connection cuts and cycle-elimination cuts for an extension of the MLST problem has been described in [17]. For a general introduction to integer linear programming-(ILP-) based algorithms
like branch-and-cut and branch-and-price we refer the reader to [18].
In this work we propose a branch-and-cut (and-price) (BCP) framework for the solution of moderately sized problem instances. We present a polyhedral and computational comparison of an underlying
flow-formulation to a formulation based on directed connection cuts. The latter is proposed for this particular problem for the first time, and it is further shown how the cut-separation can be
performed more efficiently than for many other spanning tree problems. New inequalities based on the label variables are introduced to strengthen the formulations. Optionally, also cycle-elimination
cuts are separated. Furthermore we show how to use odd hole inequalities to strengthen the formulation by cutting off fractional values of the label variables. For these particular inequalities, a
MIP-based separation heuristic is proposed for the first time. We further consider branch-and-cut-and-price, where instead of starting the algorithm with a full model, we start with a restricted set
of labels and include further (label) variables only on demand. In order to quickly obtain valid integral solutions in each node of the branch-and-bound (B&B) tree, we apply primal heuristics based on
the well-known MVCA heuristic [3]. A detailed description of the formulations and algorithmic building blocks is given in Section 3; their properties are then theoretically investigated in Section
3.6. In Section 4, we finally present a comparison of the described formulations and algorithmic components based on computational experiments.
3. Mixed Integer Programming Framework
In this section we first give a rather abstract formulation of the MLST as mixed integer program (MIP). For the spanning-tree property, we present two concrete instantiations: (1) based on a
flow-formulation, and (2) a formulation based on directed connectivity cuts, respectively. Both formulations as well as additional inequalities to strengthen the formulations and methods for
cutting-plane separation and dynamic variable generation are described within one generic framework, as they can be used in different combinations.
We use the following variables: variables , for all indicate if label is part of the solution; edge variables , for all , denote if edge is used in the final spanning tree; variables , for all ,
denote directed arc variables used for the cut-based formulation, where we introduce for each edge two arcs and . For the flow formulation, we analogously introduce two directed flow variables . Let
further denote the set of labels associated to edge .
3.1. Mixed Integer Formulation
The basic formulation is given by the following abstract integer linear program:
The objective function (3.1) minimizes the number of required labels; Inequalities (3.2) ensure that for each selected edge (at least) one label is selected. For the abstract condition (3.3), we will
subsequently introduce alternative formulations.
The number of selected edges may be fixed according to a valid spanning tree: Note that (3.5) is, however, not required for a valid description of the MLST problem.
3.1.1. Single-Commodity Flow Formulation
A single-commodity flow formulation, also considered in [16], is given as follows:
Equation (3.6) ensures the correct quantity of flow leaving the (arbitrary) root node with index 0. For all other nodes flow consumption (3.7) must hold, that is, one unit of flow is consumed at each
node. Inequalities (3.8) finally ensure that only edges with a sufficient amount of flow may be selected. Flow formulations have the big advantage that they permit to formulate a spanning tree by a
polynomial number of variables and therefore provide a relatively compact model.
3.1.2. Multicommodity Flow Formulation
The single-commodity flow formulation's major shortcoming is, however, that it provides a relatively poor LP-relaxation [19]. This is particularly due to the weak coupling of to -variables in
Inequalities (3.8), the linking constraints. This drawback can be circumvented by the introduction of multiple commodities for each node . Again, all flows of commodity originate from node 0 and must
be delivered to node . The formulation is given by the following equalities: Linkage of flow to edge variables is then given by
This formulation, however, has the drawback of having more variables than the single-commodity flow formulation, that is, flow variables in contrast to only .
3.1.3. Directed Cut Formulation
An alternative formulation is given by directed-connection inequalities, stating that to each node a valid (directed) path must exist. In contrast to the flow model, this formulation consists of an
exponential number of inequalities and therefore cannot be directly passed to an ILP-solver for larger instances. However, this formulation provides a better LP-relaxation to many spanning tree
problems, as it exactly describes the convex hull of the minimum spanning tree polyhedron. The corresponding inequalities, linkage to the edge variables are given by the following:
Here denotes the set of ingoing arcs to some node set . Instead of Inequalities (3.12) we could also directly link the labels to the directed arcs. However, we proceed with Inequalities (3.12) for the
sake of a unified notation. The separation of these directed-connection inequalities is discussed in Section 3.2.
It is well known to be practically advantageous to initially add the inequalities to directed (cut-based) formulations, see [20, 21]. Inequalities (3.13) avoid short cycles corresponding to a single
edge, Inequalities (3.14) assure that each node has one incoming arc. By we denote the set of incoming arcs to node .
3.1.4. Cycle-Elimination Formulation
We can also ensure feasibility for integer solutions by cycle-elimination inequalities. These inequalities enforce the resulting graph not to contain any cycles, which is, together with the enforced
number of arcs, also a sufficient condition for spanning trees. They are given by the following inequalities:
3.1.5. Miller-Tucker-Zemlin Formulation
A further way of prohibiting cycles is to use models based on the well-known Miller-Tucker-Zemlin inequalities [22]. Such a model for the MLST problem has been proposed in [15], however with some
differences. Let for all denote variables assigning numeric values to each node. By inequalities cycles can be inhibited by just using a polynomial number of variables, however with the drawback
that a large multiplicative factor appears, usually leading to bad LP-relaxations. The main difference to the formulation proposed in [15] is the meaning of the variables. Whereas we use distinct
variables for labels and edges ( variables), and link them by Inequalities (3.8) which are in total constraints, they introduce variables with , corresponding to edges and index corresponding to the labels.
In [16], the authors pointed out an important property of the flow formulation. They showed that the edge variables are not required to be an integer in order to obtain the correct (optimal)
objective function value. Furthermore it is easy to derive a valid MLST solution based on the set of labels provided by the MIP solution. Based on this reasoning, we can establish the following
theorem, which extends this result to further MLST formulations, and also immediately provides an improved cut formulation with a fast separation method.
3.1.6. Epsilon-Connectivity Formulation
Theorem 3.1. For any MIP formulation given by (3.1), (3.2) and (3.5), , for all any set of labels corresponding to an optimal solution to this formulation, and additionally meeting the following
inequalities “epsilon-connectivity” implies a valid MLST. Here, denotes some arbitrary small real number.
Proof. The number of edges is fixed by (3.5), but a solution may still contain fractional edges. However, as the label variables are integer and required to be greater than the value of the
corresponding edge variables by Inequalities (3.2), they are always one if the corresponding edge variable has a value greater than . Consequently, fractional edge variables will only appear in the
final solution if they do not raise the objective function value (by requiring additional labels). Due to Inequalities (3.18), the labels obtained from the MIP solution facilitate paths between all
pairs of nodes.
Given a label set of an optimal MIP solution, a feasible spanning tree can easily be derived in polynomial time, by determining an arbitrary spanning tree on the edges induced by the label set, as
described in [16]. As a direct consequence of Theorem 3.1, the domain of the variables and need not be restricted to Boolean values; restricting them to nonnegative values by inequalities is already sufficient.
Theorem 3.1 also suggests a further formulation for the MLST problem. Although not explicitly containing any constraints describing a valid spanning tree, (3.1), (3.2), (3.5), and (3.18) already
provide a complete description to the MLST problem, and could be further strengthened by and Inequalities (3.22), which are defined later on in Section 3.3. Inequalities (3.18) will again be
separated on demand as cutting planes, which can, however, be performed more efficiently than the separation for the directed connection cuts, which will be discussed in detail in Section 3.2.
Note that epsilon-connectivity as defined by Theorem 3.1 is not guaranteed if cycle-elimination Inequalities (3.15) are used exclusively to describe a valid spanning tree. A fractional LP-solution
not containing a cycle may still contain a subtour, that is, a subgraph where the sum over corresponding edges is larger than the size of its nodes minus one. Such a situation is depicted in Figure 1
. As a consequence, the domain of the -variables must be restricted to Boolean values if only cycle-elimination inequalities are used to describe a valid spanning tree. The same is true for the
Miller-Tucker-Zemlin formulation given by Inequalities (3.16).
We now draw our attention to the special case of having only one single label assigned to each edge. If we have not fixed the number of edges, we can impose further equalities instead of Inequalities
(3.2), which provide a more direct link between labels and their corresponding edges. This approach emphasizes the search for a feasible label set of minimal cardinality rather than the search for a
feasible spanning tree.
3.2. Cutting-Plane Separation
The directed connection Inequalities (3.11) can be separated by computing the maximum flow from the root node to each node as target node. This provides a minimum -cut. We have found a violated
inequality if the value of the corresponding arcs according to the sum of the LP-values is less than 1. Our separation procedure utilizes Cherkassky and Goldberg's implementation of the push-relabel
method for the maximum flow problem [23] to perform the required minimum cut computations.
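The min-cut separation just described might look as follows; this is our own illustrative stand-in using Edmonds-Karp augmenting paths rather than the push-relabel code of [23], and the instance data is invented:

```python
from collections import deque

def violated_connectivity_cuts(n, xbar, root=0):
    """For each target t, compute the maximum root->t flow under capacities
    xbar (arc -> LP value).  If the flow is below 1, the set of nodes not
    reachable in the residual graph yields a violated directed-connectivity
    cut: sum of x over arcs into that set must be >= 1."""
    cuts = []
    for t in range(n):
        if t == root:
            continue
        # Residual capacities, including zero-capacity reverse arcs.
        cap = {}
        for (u, v), x in xbar.items():
            cap[(u, v)] = cap.get((u, v), 0.0) + x
            cap.setdefault((v, u), 0.0)
        flow = 0.0
        # One unit of flow already proves the cut is satisfied: stop early.
        while flow < 1.0 - 1e-9:
            pred = {root: None}
            q = deque([root])
            while q and t not in pred:     # BFS for a shortest augmenting path
                u = q.popleft()
                for (a, b), c in cap.items():
                    if a == u and c > 1e-12 and b not in pred:
                        pred[b] = a
                        q.append(b)
            if t not in pred:
                break
            path, node = [], t              # push the bottleneck along the path
            while pred[node] is not None:
                path.append((pred[node], node))
                node = pred[node]
            delta = min(cap[a] for a in path)
            for (a, b) in path:
                cap[(a, b)] -= delta
                cap[(b, a)] += delta
            flow += delta
        if flow < 1.0 - 1e-9:               # min cut < 1: collect the cut set
            seen = {root}
            q = deque([root])
            while q:
                u = q.popleft()
                for (a, b), c in cap.items():
                    if a == u and c > 1e-12 and b not in seen:
                        seen.add(b)
                        q.append(b)
            cuts.append(set(range(n)) - seen)
    return cuts

# Arc LP values on 3 nodes: node 2 is only half-connected.
x = {(0, 1): 1.0, (1, 2): 0.5}
print(violated_connectivity_cuts(3, x))  # [{2}]: the cut separating node 2
```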
The cycle-elimination cuts (3.15) can be easily separated by shortest path computations with Dijkstra's algorithm. Hereby we use as the arc weights with denoting the current value of the
LP-relaxation for arc in the current node of the B&B-tree. We obtain cycles by iteratively considering each arc and searching for the shortest path from to . If the value of a shortest path plus is
less than 1, we have found a cycle violating Inequalities (3.15). We add this inequality to the LP and resolve it. In each node of the B&B-tree, we perform these cutting plane separations until no
further cuts can be found.
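The Dijkstra-based cycle separation can be sketched as follows (our own illustrative code; the fractional arc values are invented):

```python
import heapq

def separate_cycle_cuts(xbar):
    """Separate violated cycle-elimination inequalities
    sum(x_a for a in C) <= |C| - 1 from fractional arc values `xbar`
    (a dict mapping arcs (u, v) to LP values), using shortest paths
    under arc weights 1 - xbar, as described in the text."""
    adj = {}
    for (u, v), x in xbar.items():
        adj.setdefault(u, []).append((v, 1.0 - x))
    cuts = []
    for (u, v), x in xbar.items():
        # Dijkstra from v towards u under weights 1 - xbar.
        dist, pred, pq = {v: 0.0}, {}, [(0.0, v)]
        while pq:
            d, a = heapq.heappop(pq)
            if d > dist.get(a, float("inf")):
                continue                      # stale queue entry
            for b, w in adj.get(a, ()):
                if d + w < dist.get(b, float("inf")):
                    dist[b], pred[b] = d + w, a
                    heapq.heappush(pq, (d + w, b))
        # Closing the v -> ... -> u path with arc (u, v): a total weight
        # below 1 means the cycle violates the inequality.
        if dist.get(u, float("inf")) + (1.0 - x) < 1.0 - 1e-9:
            cycle, node = [], u
            while node != v:
                cycle.append((pred[node], node))
                node = pred[node]
            cycle.append((u, v))
            cuts.append(cycle)
    return cuts

# A strongly selected 3-cycle 0 -> 1 -> 2 -> 0 in the LP solution:
x = {(0, 1): 0.9, (1, 2): 0.9, (2, 0): 0.9, (2, 3): 0.5}
cuts = separate_cycle_cuts(x)
print(len(cuts))  # 3: the cycle is found once per arc on it
```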
Theorem 3.1 suggested a formulation not requiring any auxiliary variables (like flow or arc variables), where validity of the labels is obtained by Inequalities (3.18) exclusively. Instead of using
the minimum cut-based separation routine (which would also be valid), we can perform a faster separation by a simple depth first search (DFS). Given an LP-solution, we first select an arbitrary start
node for which we call the DFS procedure. Within this procedure we only consider edges with . Within the DFS, we keep track of all visited nodes, if there are unvisited nodes at the end of the DFS,
we have found a valid cut. The DFS can be carried out in time, which is clearly superior to the time of the maximum flow algorithm running in .
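The DFS-based separation of Inequalities (3.18) might look like this (an illustrative sketch; the instance and the eps threshold are ours):

```python
def separate_eps_cut(n, edges, xbar, eps=1e-6):
    """Return the set of nodes unreachable from node 0 via edges with
    xbar[e] >= eps, or None if the LP solution is epsilon-connected."""
    adj = [[] for _ in range(n)]
    for i, (u, v) in enumerate(edges):
        if xbar[i] >= eps:                  # keep only sufficiently used edges
            adj[u].append(v)
            adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:                            # iterative DFS, O(|V| + |E|)
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    unreached = set(range(n)) - seen
    return unreached or None                # a violated cut separates `unreached`

edges = [(0, 1), (1, 2), (2, 3)]
print(separate_eps_cut(4, edges, [0.8, 0.7, 0.0]))  # {3}: cut found
print(separate_eps_cut(4, edges, [0.8, 0.7, 0.4]))  # None: connected
```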
3.3. Strengthening the Formulations
As each node must be connected to the spanning tree by one of its incident edges, we can impose additional inequalities to strengthen the formulation w.r.t. the label variables: Here, ,
denotes the set of labels being associated to the edges incident to node . We will subsequently refer to this set of inequalities as node-label-inequalities. Figure 2 gives a simple example of an LP
solution where the node is sufficiently connected according to the sum of the LP-values of the ingoing arcs and therefore its incident edges, but the corresponding sum over the labels associated to
these edges is clearly infeasible w.r.t. Inequalities (3.22). Therefore Inequalities (3.22) strengthen the presented formulations w.r.t. their LP-relaxation. In Section 3.6, we formally prove this
property with respect to the particular proposed MIP-formulations for the MLST. Note, that we will use MIP variables and their corresponding graph-entities equivalently in the the context of
subsequent figures and proofs for simplicity, for example, we will simply designate a label by (or ) instead of explicitly referring to the MIP variables .
This basic idea used in Inequalities (3.22) can be pursued by considering sets of two nodes, say and . Let denote the edge joining and . Let further denote the set of labels associated with this
edge. For set , we can observe that at least two labels are required to feasibly connect the nodes and , if . However, if , we still require two labels from . We therefore obtain the following valid
inequalities: which are not directly implied by Inequalities (3.22). Figure 3 shows an example where Inequalities (3.23) dominate Inequalities (3.22).
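The node-label Inequalities (3.22) can be separated by plain enumeration over the nodes; here is a hedged sketch (our own code and toy data, not from the paper):

```python
def separate_node_label_cuts(n, edges, edge_labels, zbar, eps=1e-9):
    """Enumerate violated node-label inequalities (3.22): for every node v,
    the sum of zbar over labels on edges incident to v must be >= 1.

    `edges` are (u, v) pairs, `edge_labels[i]` is the label set of edge i,
    `zbar` maps labels to their current LP values."""
    cuts = []
    for v in range(n):
        labels = set()
        for i, (a, b) in enumerate(edges):
            if v in (a, b):
                labels |= edge_labels[i]
        if sum(zbar[l] for l in labels) < 1.0 - eps:
            cuts.append((v, labels))   # add the cut: sum_{l in labels} z_l >= 1
    return cuts

edges = [(0, 1), (1, 2), (0, 2)]
edge_labels = [{'a'}, {'b'}, {'a', 'c'}]
zbar = {'a': 0.5, 'b': 0.4, 'c': 0.5}
# Only node 1 is violated: its incident labels {a, b} sum to 0.9 < 1.
print(separate_node_label_cuts(3, edges, edge_labels, zbar))
```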
As we can expect a lot of branching on the label variables, further cutting planes cutting off fractional label solutions may be helpful. In order to identify such valid inequalities, we consider
situations where fractional label variables lower the objective value of LP solutions. Such a situation is depicted in Figure 4. If labels in the LP solutions, the corresponding arcs can be set to 1/
2 as well without violating any directed connectivity inequality. However, w.r.t. this arc set, at least two labels must be selected in an integer solution. Consequently, adding the inequality will
cut off this fractional solution, but is only valid if no additional arcs/edges are incident to these nodes.
In the following we show how to apply odd hole inequalities to cut-off such and more general situations. These inequalities are well known from studies of the set-covering polytope, their application
becomes evident by the observation that the MLST problem can be seen as a set covering problem where each node needs to be covered by a label from the set and the corresponding edges fulfilling
further constraints (i.e., forming a valid spanning tree). In particular, we use a MIP-based heuristic to separate valid inequalities for the set-covering problem with coefficients , which have been
proposed in [24].
Let be a matrix with if node is labeled with , otherwise. A submatrix of of odd order is called an odd hole if it contains exactly two ones per row and column. For the subproblem the inequality is
valid. In [24] the authors showed that this inequality even remains valid if , where is an odd hole, and being a special matrix closely related to . Finding an odd hole in a given matrix is NP-hard,
but if we have found such an odd hole, it is possible to decide in polynomial time whether and therefore (3.24) is valid [24].
3.3.1. Separation-Heuristic for the Odd Hole Inequalities
In order to cut off fractional label solutions, we consider the subset of nodes whose labels are either fractional or zero in the current LP solution. Let denote the matrix where each entry
represents the current LP value of label associated to node , or −1 if the label is not associated to node . Let further denote the corresponding matrix representing which labels are assigned to
particular nodes, that is, its elements are one if label , and zero otherwise. Our goal is to heuristically search for odd holes in , based on the information provided by matrix , and then transform
the related inequality to a valid inequality for the initial problem by the according lifting steps. We are hence searching for an odd hole with with and being odd. By the procedure of [24] we can
now decide if is valid for . The term results from lifting all labels which are associated to a node but are not part of the odd hole induced by and . The lifting-coefficient is denoted by , the
calculation of its value will be discussed later on. By the following MIP (3.26)–(3.38) we aim to find subsets and forming an odd hole and for which inequality (3.25) is violated according to the
current LP solution. For this purpose we define a bipartite directed graph , , , . Each cycle with length corresponds to an odd cycle w.r.t. the number labels, and is, therefore, a potential odd
hole. Variables represent the arcs from node to label and are intended to finally describe a valid odd hole. Variables denote other arcs which connect nodes being part of the odd hole (described by
the variables) and other labels not being part of the odd hole. For each arc the coefficient is the LP value of label if and zero otherwise;
From (3.28), we can see that . As we prefer solutions where (3.25) is considerably violated, we maximize the difference between and . The term gives a lower bound for the sum over all labels we need
to lift w.r.t. some particular . The correct coefficient which is to be discussed later on, cannot be formulated by a linear expression. By (3.27), this particular expression is enforced to be larger
than zero, as the resulting inequality to be added to the MLST-MIP would not be violated otherwise. As a consequence, all feasible solutions to MIP (3.26)–(3.38) fulfill this property which is
desirable for the heuristic separation procedure discussed subsequently. For each node on the cycle, the numbers of ingoing and outgoing arcs are limited to one by equations (3.29) and (3.30) and
flow-conservation is imposed for each node (3.31). The integer variables assign numeric values to the nodes and prevent multiple cycles in the solution by Miller-Tucker-Zemlin-inequalities (3.32),
that is, by enforcing for each arc on the cycle (except the one going out from the node with (3.33)) to have at least one smaller source than target node. By Inequalities (3.34) all arcs connecting
nodes which are part of the odd cycle to be determined (by -variables) to nodes not being part of this cycle. Finally, , for all are enforced to be smaller than (3.35), and the node selection and arc
variables are required to be Boolean (3.36), (3.37). The -variables only need to be restricted to , for all , as they are implicitly integer by Inequalities (3.34). Figure 5 shows an example for a
solution to the MIP. The arcs selected by -variables are depicted in red color, the dashed ones do not contribute to the objective function. The blue arcs correspond to the “lifting-arcs”, selected
by -variables.
Given a solution to the MIP (3.26)–(3.38), we still need to check whether (3.25) is valid for this particular solution. The -variables are derived by taking all labels selected by in (3.26)–(3.38). For
this purpose, we use the criteria described in [24]—here we only provide a rough explanation. An arc connecting two nodes on the odd cycle determined by (3.26)–(3.38) which is not part of the cycle
itself is called a chord. In order to fulfill (3.24), and therefore (3.25) after the lifting, all chords of the odd cycle must be compatible. The chord set is called compatible, if (1) no chord
induces even cycles (w.r.t. nodes on the cycle), and (2) every pair of crossing chords is compatible. Compatibility for crossing chords is defined on the basis of the mutual distances of their
adjacent nodes on the cycle. Let , , and , , be two crossing chords. We now remove and its two incident arcs from the odd hole. The chords are compatible, if the unique path from to has an even
distance w.r.t. nodes in in this graph.
It remains to determine the lifting-coefficients . If a lifting-label only covers one node of the odd hole, the sum over all labels necessary to feasibly cover all nodes from the odd hole does not
change. The label can, however, be used alternatively for one of the odd hole labels and therefore gets coefficient one. Otherwise, if one lifting-label covers all odd hole nodes, the coefficient
must equal the right-hand side of (3.25), that is, in this case. Suppose some lifting-label covers odd hole nodes, then the size of the remaining odd hole nodes is . These remaining nodes are still
adjacent to two labels in the odd hole, pairwise having one label in common. We can, therefore, derive the following value for the lifting coefficient
During the branch-and-bound MLST solution process, the MIP (3.26)–(3.38) is solved with very tight runtime limits. As soon as an incumbent integer solution has been found, this solution is checked
for validity by the mentioned criteria. Obtained valid MLST-inequalities are added immediately. Then the incumbent integer solution is rejected, forcing the MIP solver to search for
further solutions. This process continues until the time limit is reached.
3.4. Heuristics
In order to improve the overall performance—in particular the ability to generate feasible integer solutions fast—we embed a primal heuristic into the framework. For this purpose we adopt the
well-known MVCA heuristic [1, 3, 6]. This heuristic can create feasible solutions from scratch, but also complete partial solutions given by label set . Creating complete solutions is important for
the acquisition of strong upper bounds to efficiently cut-off unprofitable branches of the B&B-tree from the beginning on, but also to obtain an initial solution for BCP (Section 3.5). On the other
hand the MVCA heuristic can be used to obtain feasible integer solutions and therefore upper bounds for each B&B-node based upon some variables already fixed to integer values. Many further fast
metaheuristic techniques do exist for this problem, which could also easily be integrated into this framework. This is however beyond the scope of this work, as we primarily focus on mathematical
programming methods for the MLST.
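A simplified sketch of the MVCA-style construction/completion heuristic (following the greedy idea of [1, 3]; the union-find details and the toy instance are ours, and ties are broken arbitrarily). It can build a solution from scratch or complete a partial label set, matching the two uses described above; it assumes the full label set induces a connected graph:

```python
def mvca(n, edges, all_labels, partial=frozenset()):
    """Greedy MVCA-style completion: starting from `partial`, repeatedly add
    the label whose edges merge the most connected components, until the
    induced subgraph spans all n nodes.  Returns the chosen label set."""
    def components(labels):
        # Count connected components of the subgraph induced by `labels`.
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v, l in edges:
            if l in labels:
                parent[find(u)] = find(v)
        return len({find(x) for x in range(n)})

    chosen = set(partial)
    while components(chosen) > 1:
        # Pick the label giving the fewest components after insertion.
        best = min(all_labels - chosen, key=lambda l: components(chosen | {l}))
        chosen.add(best)
    return chosen

E = [(0, 1, 'a'), (1, 2, 'a'), (2, 3, 'b'), (0, 3, 'c'), (1, 3, 'c')]
print(mvca(4, E, {'a', 'b', 'c'}))  # a feasible two-label set on this instance
```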
3.5. Pricing Problem
Problem formulations with a large (usually exponential) number of variables are frequently solved by column generation or branch-and-price algorithms. Such algorithms start with a restricted set of
variables and add potentially improving variables during the solution process on demand. If these algorithms also include cutting-plane generation, we call them branch-and-cut-and-price (BCP).
Although the presented MLST formulation only has a polynomial number of label variables, these particular variables typically lead to extensive branching on them, requiring a special treatment. Hence
we base our solution approach on BCP, operating on just a subset of variables. Such approaches follow the same idea as the sparse graph techniques proposed in [25].
We obtain the restricted master problem by replacing the complete set of labels by a subset in (3.1). The set is required to imply a feasible solution and is obtained by the MVCA heuristic. Then, new
variables and, therefore, columns potentially improving the current objective function value in the simplex tableau are created during the B&B process. These new variables are obtained from the
solution of the pricing problem which is based upon the dual variables. Let denote the dual variables corresponding to constraints (3.2), and the ones corresponding to (3.22). They reflect a measure
for the costs of some particular edge w.r.t. the currently selected labels (), and the costs of connecting some node w.r.t. the currently selected labels (). The pricing problem is to find a variable
with negative reduced costs within the set of all labels . Here denotes all arcs having label and denotes the set of nodes incident to arcs with label . Finding such a variable or even the one with
minimal reduced costs can be done by enumeration. Although only a polynomial number of labels is involved, we may benefit from the pricing scheme as we only need to solve smaller LPs within the B&B tree.
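Since the exact reduced-cost formula lost its symbols in this excerpt, the following enumeration sketch assumes the generic pattern rc(l) = 1 minus the duals of the constraints in which label l appears; all names and data are illustrative:

```python
def price_labels(inactive_labels, label_edges, label_nodes, pi, mu):
    """Enumerate labels not yet in the restricted master problem and return
    those with negative reduced cost, most negative first.

    Assumed reduced-cost pattern (the exact coefficients depend on the model):
        rc(l) = 1 - sum(pi[e] for edges e with l in L(e))
                  - sum(mu[v] for nodes v incident to edges with label l)
    where pi / mu are the duals of constraints (3.2) / (3.22)."""
    priced = []
    for l in inactive_labels:
        rc = 1.0
        rc -= sum(pi.get(e, 0.0) for e in label_edges.get(l, ()))
        rc -= sum(mu.get(v, 0.0) for v in label_nodes.get(l, ()))
        if rc < -1e-9:
            priced.append((rc, l))
    return [l for rc, l in sorted(priced)]

label_edges = {'a': [0, 1], 'b': [2]}          # edges carrying each label
label_nodes = {'a': [0, 1, 2], 'b': [2, 3]}    # nodes touched by each label
pi = {0: 0.5, 1: 0.4, 2: 0.1}                  # duals of (3.2)
mu = {0: 0.2, 3: 0.0}                          # duals of (3.22)
print(price_labels({'a', 'b'}, label_edges, label_nodes, pi, mu))  # ['a']
```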
3.6. Polyhedral Comparison
In this section we compare various formulations resulting from combining the equations and inequalities from Section 3 as listed in Table 1. The only formulation just requiring a polynomial number of
constraints is the flow-formulation with roughly variables and constraints. The directed cut-formulation requires variables and an exponential number of constraints. Also, the modified “epsilon”
cut-formulation requires exponentially many constraints, but only has variables.
In the following we use the graph depicted in Figure 6 to show the properties of the polyhedra defined by the formulations listed in Table 1.
Proposition 3.2.
Proof. As contains the same equations and inequalities as , but additionally Inequalities (3.22), we have . Figure 7 shows an LP solution of that is not contained in , which implies . Such an LP
solution may still contain fractional labels due to odd holes, as shown in Figure 5, by which we obtain .
If the values of the edge and label variables in Figure 7 are decreased as much as possible for , we obtain , and implying . As contains the additional Inequality (3.5), we can conclude that .
Proposition 3.3.
Proof. The proof of follows by the same reasoning as for the proof of Proposition 3.2. Figure 8 shows that . However, the requirement that each directed cut must have a value greater than one already
implies that , for all . This implies . An LP-solution to may contain more edges than an LP-solution to , which, however, due to the minimality, does not affect the objective value of the LP-relaxation, that is, .
Let denote the projection of some polyhedron to a subspace .
Proposition 3.4.
Proof. By applying the same reasoning as in the proofs of the last two propositions, we can prove Proposition 3.4. Figure 9 gives an example for .
In the following we will show the relations between the formulations , and .
Theorem 3.5.
Proof. Figures 8, 7, and 9 already showed that the polyhedra are not equal. To prove that , we show a procedure to transform all -variables of any valid LP-solution of to a valid -solution in . For
all , there exists at least one path from to with all edges having LP-values greater than zero. If we consider a network with source and target , only containing edges being part of one of these
paths and having capacities there exists a flow of at least one unit from to . We now arbitrarily select a root node (w.l.o.g. ) and show how to construct a valid flow permitting the same
-configuration for as in . For an edge to have LP value , a corresponding flow variable must be larger than . We start by setting all flow variables to zero. Then, for each node we construct all
paths from to , considering all edges with . Summing up for all edges on these paths may not exceed , as the number of edges is fixed by (3.5) when . However, this sum may usually be smaller than ,
say , but integer. Now, we backtrack all these paths and set their flow values to minimal values according to flow conservation (3.7) and LP-values for the edges. Note that after this first step. We
then continue this procedure for all further . According to (3.5) in step at most not yet considered edges need to be added, possibly increasing by exactly this amount. We finally end up with all
nodes being feasibly connected, fulfilling (3.6), and with flow conservation (3.7) holding at each node.
It is trivial to see that the -variables of a valid LP-solution of are also valid for .
Theorem 3.6.
Proof. In the proof of Theorem 3.5, we already showed how each projection of a solution of to the subspace defined by the -variables can be transformed into a solution of , and likewise to . The only
difference between the polyhedra considered in Theorem 3.6 is the set of constraints (3.22), which clearly does not affect this transformation. It remains to be shown that the polyhedra are not equal, which is
done by the example in Figure 10. The depicted solution is not valid for or , respectively, although the node-label constraints (3.22) are fulfilled. However, the value of edge can be increased to 1/
5 (implying the need to decrease the values of edges and accordingly), which makes the solution feasible to . Nevertheless, this solution remains infeasible to , by which we have shown the theorem.
4. Results
In this section we present a comprehensive computational comparison of the presented formulations and separation strategies, and compare our methods to other work. Three different data sets are used
for our computational tests. We start with a description of the test instances used for our experiments.
4.1. Test Instances
The first set is the publicly available benchmark set used in [6, 10, 12, 13]. We refer to this data set as Set-I. It consists of graphs with 100 to 500 nodes and various densities , defined by , and
different numbers of labels . The instances are organized in groups of ten for each configuration of and for each . So far, primarily metaheuristics have been applied to this instance set, but also
an exact algorithm based on -search, as reported in [12].
The second test set Set-II is created following the specification of the instances used in [16], in order to obtain comparable results to the MIPs presented therein. This set is organized in four
groups. In contrast to SET-I, the instances of the first two groups just contain very few labels, that is, . The number of nodes ranges from 20 to 1000, and network densities are set to . Moreover,
this set contains various grid-graphs (group 3) of sizes , , , , and . The fourth group contains instances with and and various network densities .
In addition to SET-I and SET-II we created a further test set Set-III containing also instances with multiple labels assigned to the edges. The construction is performed by first creating a spanning
tree and assigning labels from set to its edges. Usually if not stated otherwise, but is used to study the effect of having optimal solutions with significantly less labels than for completely random
label assignment for the particular graph properties. Next, further edges are added until a specified density or a specified number of edges is reached. Then, we randomly assign all labels not yet used.
In the final step we iterate over all edges and assign further labels by uniform random decision. Parameter specifies how many labels can be assigned to each edge, if not stated otherwise . Instead
of directly using as a parameter, we may also specify the size of the label set by parameter . In contrast to the other instances, the instances of SET-III have relatively high values of , that is,
and . Although such instances are less likely to occur within practical applications regarding telecommunication network design, they may be relevant for other scenarios, as for instance the
compression model based on the MLST problem presented in [17].
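To make the SET-III construction concrete, the following is a minimal, illustrative generator in this spirit. Parameter names (n nodes, m edges, l labels, a tree labels, per-edge probability of an extra label) and details such as the tie handling are assumptions, not the authors' code.

```python
import random

def make_instance(n, m, l, a, extra=0.0, seed=0):
    """Build an MLST test instance: edge -> set of labels.

    Follows the described scheme: a random spanning tree gets labels
    from a small set of size `a`, further edges are added up to `m`
    edges, every label is forced to occur at least once, and additional
    labels are assigned to edges by uniform random decision.
    """
    rng = random.Random(seed)
    # 1) Random spanning tree: attach each new node to a previous one.
    nodes = list(range(n))
    rng.shuffle(nodes)
    edges = {}
    for i in range(1, n):
        u, v = nodes[i], nodes[rng.randrange(i)]
        edges[(min(u, v), max(u, v))] = {rng.randrange(a)}
    # 2) Add further random edges until m edges are reached.
    while len(edges) < m:
        u, v = rng.sample(range(n), 2)
        e = (min(u, v), max(u, v))
        if e not in edges:
            edges[e] = {rng.randrange(l)}
    # 3) Make sure every label occurs at least once.
    keys = list(edges)
    for lab in range(l):
        if not any(lab in s for s in edges.values()):
            edges[rng.choice(keys)].add(lab)
    # 4) Optionally assign further labels by uniform random decision.
    for s in edges.values():
        if rng.random() < extra:
            s.add(rng.randrange(l))
    return edges
```

Because the spanning tree uses only the `a` labels, the optimum has at most `a` labels, which is significantly below the expectation for a fully random assignment when `a` is small relative to `l`.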
4.2. Test Environment
The generic framework presented in Section 3 has been implemented in C++ (g++ 4.3) within the SCIP framework [26]. The standard plugins have been used for all computational tests unless explicitly
stated otherwise. In addition some branch-and-cut algorithms (not involving any pricing procedures) have been implemented within the ILOG CONCERT framework [27] for comparison purposes. As LP solver
ILOG CPLEX (in version 12.0) [27] has been used for both frameworks.
All computational tests have been performed on an Intel Xeon E5540 processor operating at 2.53 GHz, with 24 GB of RAM shared by its 8 cores. The operating system is Ubuntu 9.10 with Linux kernel 2.6.31.
All runs have been performed in single-threaded mode, CPU times have been limited to 7200 seconds, unless stated otherwise.
4.3. Comparison of Described Methods
In this section, we present a comparison of the described formulations based on computational tests. Furthermore, we analyze the impact of particular “components” to each of the formulations. These
components consist of the node-label-inequalities (3.22), the extended node-label-inequalities (3.23), and the strong linkage of the labels to the edges (3.21), which can, however, only be used if only
one label is assigned to the edges and the number of edges is not fixed by (3.5). Table 1 provides an overview of these components and corresponding notation. After the comprehensive analysis and
comparison of the particular methods in this section, we compare the results of the newly proposed methods to previous work in Section 4.4.
4.3.1. MIP Formulations
In this section, we primarily focus on the comparison of formulations EC, DCut, and SCF. However, particularities like the node-label-constraints (3.22), a fixed number of edges (3.5), or the direct linkage of labels to edges (3.21) may significantly change the picture regarding the superiority of one method over another. For this reason, we present the results not only for the three formulations, but rather for four to five variants of each formulation. Recall that directly linking the labels to edges by (3.21) is only possible for instances with one label assigned to each edge (3.21), that is, and is generally not possible for flow-formulations. In order not to be biased towards some particular class of instances, we report these results for each of the three instance sets.
Tables 2 and 3 show the results for instances of SET-I with and . These instances include graphs with various densities , where , and different numbers of labels, that is, , and . In these tables, as
well as in the following ones, we report the following entities for each method and group of instances. Columns “cnt” contain the number of instances within each group, which is 10 in most of the
cases. Fewer than ten instances are reported when some instances could not be completed with particular formulations due to high memory requirements. Columns “opt” report the number of instances that have been solved and proved optimal within the time limit. In columns “obj” the average objective value over all instances in the group is reported. If not all instances have been solved to optimality, this value corresponds to the average value of the feasible solutions found within the time limit. Average running times in seconds are then reported in columns “”.
The average number of branch-and-bound nodes is listed in columns “bbn”, the average number of generated cuts in column “cuts”. Results of the fastest method(s) for each group are emphasized with
bold letters.
From Tables 2 and 3 we can already observe that the difficulty of solving these instances is strongly correlated to the objective function values of the instances. Higher values, in particular those
larger than ten, require significantly more B&B-nodes, and the separation of more cuts. This also implies longer average running times. This property holds for all of the considered formulations. The
results in Tables 2 and 3 show that formulation consistently gives the best results for these instances. The single-commodity flow formulations show a slightly better performance than the
directed-cut formulations for most of the instances.
The strength of the node-label-inequalities (3.22) is also demonstrated by the results in Tables 2 and 3. Their addition to the plain formulations not only yields a significant speedup, but also makes it possible to solve more instances of the set with . The difference between Inequalities (3.22) and their extended form, given by Inequalities (3.23), is examined in Section 4.3.2. Regarding
Equation (3.5) no clear conclusion can be drawn from these instances. If, however, combinations of these components are considered, the variants only using the node-label-constraints are superior in
most of the cases. For formulations EC and DCut it is also possible to directly link the edges to the labels by (3.21). In most of the cases, this yields the best results, when combined with the
node-label-inequalities for both formulations, and in particular in combination with EC the overall best results.
Table 4 reports the results for the same formulations on the instances of SET-II. The major difference of these instances is that they contain only graphs of extremely low density and just very few labels. Again, we can observe a clear superiority of formulations and , which are able to solve all these instances with average running times of less than half a second.
In Tables 5, 6, and 7, results for the instances from SET-III are reported. Table 5 shows the results for instances with and , that is, one single label assigned to the edges. As already mentioned in
Section 4.1, these instances differ from the previous ones in that they contain a higher number of labels, that is, and with . It can be observed that it is beneficial to limit the number of
edges to by (3.5) in this case. Thus, the stronger LP-relaxation implied by this restriction is beneficial in the case of higher values of . For instances with formulation EC still shows the best
performance, but DCut provides better results in the case of . Hence, the strong LP-relaxation becomes even more important if is in the same order of magnitude as .
With a single exception, the same effect can be observed for the instances with reported in Table 6. The effect of more than one label being assigned to the edges seems to make the problem easier to
solve, but the effect is relatively small. It is important to note that directly linking the labels to the edges, which was beneficial for the instances with , cannot be applied to instances with
larger .
Table 7 shows the result for grid-graphs with 100 and 400 nodes and . The average optimal objective value on these graphs is relatively high, which makes them difficult to solve. However, all
instances with could be solved to optimality by formulation , which showed the overall best performance on this class of instances.
Having now analyzed the main variations of the discussed formulations we draw our attention to further approaches and enhancements that have been proposed in Section 3.
4.3.2. Further Methods
In Section 4.3.1 the node-label-inequalities (3.22) have been shown to be of utmost importance for a strong formulation. In Section 3.3 we also presented an extension of this idea, where two
nodes are considered instead of just one. This led to the class of Inequalities (3.23). Table 8 shows a comparison of formulations and with on the one hand the node-label-inequalities (3.22) and on
the other hand additional Inequalities (3.23). In particular for formulation these further inequalities turn out to be useful in many cases. They not only speed up the solution process, but also frequently make it possible to solve more instances to provable optimality. However, the opposite is often the case as well. It is therefore not possible to decide, based on the available data, which approach is superior to the other. On grid-graphs, Inequalities (3.23) have not been beneficial at all.
Further formulations, considered in Section 3, are based on the property that a tree must not contain a cycle by definition. Formulation requires just a polynomial number of variables, but contains
constraints with the infamous “Big-M” constants, as the SCF formulation does. In contrast, CEF contains an exponential number of Inequalities (3.15), which need to be separated as cutting planes, as
for the DCut or EC formulation. Due to their fast separation by a simple shortest-path computation, other formulations may benefit from additionally using cycle-elimination cuts. Corresponding
results are reported in Table 9, column “cec” lists the average number of separated cycle-elimination cuts. Whereas and show a relatively weak performance on the instances with , they provide good
results in the case of . In particular, for the low-density graphs could solve all instances to optimality, which no other method was able to do. For the dense graphs best results are obtained by and
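The fast shortest-path separation of cycle-elimination cuts mentioned above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: for each edge (u, v), a cycle C through (u, v) violates the inequality sum over C of x_e <= |C| - 1 exactly when (1 - x_uv) plus the shortest u-v path length under edge weights (1 - x_e), avoiding edge (u, v), is below 1.

```python
import heapq

def separate_cycle_cuts(n, x, eps=1e-6):
    """Find edges lying on a violated cycle in a fractional LP solution.

    x: dict mapping undirected edge (u, v) -> LP value in [0, 1].
    Returns the list of edges (u, v) through which a violated cycle runs.
    """
    adj = {i: [] for i in range(n)}
    for (u, v), val in x.items():
        adj[u].append((v, 1.0 - val))
        adj[v].append((u, 1.0 - val))

    def shortest(src, dst, skip):
        # Dijkstra under weights (1 - x_e), skipping the edge `skip`.
        dist = {src: 0.0}
        pq = [(0.0, src)]
        while pq:
            d, node = heapq.heappop(pq)
            if node == dst:
                return d
            if d > dist.get(node, float("inf")):
                continue
            for nxt, w in adj[node]:
                if {node, nxt} == skip:
                    continue
                nd = d + w
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    heapq.heappush(pq, (nd, nxt))
        return float("inf")

    cuts = []
    for (u, v), val in x.items():
        if (1.0 - val) + shortest(u, v, {u, v}) < 1.0 - eps:
            cuts.append((u, v))
    return cuts
```

Each returned edge identifies one violated cycle inequality; in a branch-and-cut framework the corresponding cycle would be recovered from the Dijkstra predecessors and added as a cutting plane.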
Table 10 shows the results that have been obtained by including primal heuristics into the branch-and-bound algorithm. Formulations , , , and are considered for this purpose. As indicated by
preliminary experiments, it turned out to be advantageous to use the primal heuristics only in the root node, as they were generally not able to find improved solutions based on the information
provided by the LP-solution in other B&B-nodes. Embedding MVCA in B&B has a positive effect w.r.t. the variants “” of formulations EC and DCut, but a negative impact concerning variants “”.
4.3.3. Odd hole Inequalities
We now turn our attention to the odd hole inequalities. Within preliminary tests, a tight time limit of 10^−3 seconds for solving the MIP (3.26)–(3.38) turned out to show a generally good
performance. Two algorithmic variants are considered for the results reported in Table 11. The first version (denoted with index ) simply adds the found valid cutting-planes to the MIP.
Alternatively, the set of labels corresponding to the obtained odd hole can also be used to deduce a branching rule. This was motivated by the observation that many lifted odd hole cutting planes,
found by MIP (3.26)–(3.38), were not strong enough to define facets w.r.t. the involved label variables. As a consequence, these variables remained fractional after the cutting-plane was added to the
MIP. However, odd holes provide important information and references to situations where special configurations of label-variables artificially reduce the LP-relaxation. Hence it is likely that
immediately branching over these variables may be beneficial. This is done by inserting all labels of the odd hole into a global queue, and always branching over such a variable unless the queue is
empty. Index denotes this approach in Table 11. Odd hole cuts are separated with lowest priority amongst the user-defined cutting-planes, and are only separated in levels of the B&B-tree which are
multiples of ten.
The results in Table 11 show that the odd hole inequalities are beneficial in many cases, in particular when used to deduce branching rules from the corresponding label-variables. For instances from
SET-I and SET-II, almost no odd holes have been found with the described parameter settings. For dense graphs it is less likely to find odd holes that are violated by the current LP-solution, as each
node is incident to many edges. Hence is in the same order of magnitude as in the expected case. This implies many nonzero lifting coefficients in Inequalities (3.25), reducing the chance of finding
a valid inequality that is actually violated by the current LP-solution. Hence, the separation of odd hole inequalities is most beneficial for sparse graphs. Also the number of labels compared to the
number of edges has an impact on the efficiency of the odd hole separation. If the number of labels is relatively low, the expected label frequency will be high. This implies high values for the
lifting coefficients , which in turn reduces the chance of finding violated odd hole inequalities. If, on the other hand, the number of edges is too high, odd holes are generally less likely to
occur, as the sets , for all can be expected to be very small or even empty.
4.3.4. Branch-and-Cut-and-Price
Using the column generation approach within the B&C framework, that is, branch-and-cut-and-price (BCP), is only beneficial for a very special class of instances. For most of the
instances almost all variables are priced in during the solution process. The computational overhead for solving the pricing problem and resolving the MIP implies significantly higher running times
in this case. However, if the instances consist of a high number of labels, and have an optimal solution that is significantly lower than the average optimal solution value when assigning the labels
to the edges randomly in the instance construction process, BCP shows a superior performance. To study this effect, special instances have been created containing single optima having a relatively
low number of labels. The computational results for these instances are reported in Table 12. In particular for the larger instances, a clear superiority of the BCP approach over the corresponding B&C algorithm can be observed. For this special class of instances, the percentage of created label variables is always less than 30% of the total number of labels (reported in column “priced”).
Although the importance of such instances may be quite limited for many purposes, the instances used for the data compression approach presented in [17] exhibit comparable properties. For the
data-compression application presented therein, the BCP approach is thus a valuable and important means for exactly solving large instances.
4.3.5. Summary
In Table 13, we finally report the best method for each group of instances from the three instance sets. For this purpose, variations including primal heuristic and using cycle-elimination cut
separation are also considered. Where a variant including a primal heuristic yields the best performance, we additionally report the best method not using primal heuristics. Formulations and
are the best formulations for almost all instances of SET-I, with the primal heuristic often yielding small improvements. The same is true for the instances of SET-II, where almost all variations of
formulation EC are able to solve the considered instances in less than a second. For SET-III formulation DCut is superior for many instances with , whereas EC is better for instances with . In
contrast to SET-I, it is beneficial to restrict the number of edges to as indicated with index “”. Additionally separating cycle-elimination cuts frequently yields the overall best method, in
particular for instances with . Furthermore it can be observed that variants using separation of odd hole inequalities are frequently the overall best methods for this group.
4.4. Comparison to Other Work
In this section, we present direct comparisons to existing work, in particular [16]. Table 14 shows the results presented in [16]; running times have been rounded to integers. Formulation “MLSTb”
corresponds to formulation SCF of this work. Formulation “MLSTc” only uses a weaker coupling of labels to edges, given by the following inequalities: Table 14 furthermore reports results for the
implementation of the exact backtracking method from [1], labelled with “MLST-CL”. Table 15 shows the running times of selected MIP variants in comparison to our reimplementation of the flow
formulation “MLSTb” from [16] (SCF). Formulation is clearly superior to the others, all instances have been solved in less than one second. Higher running times of SCF as opposed to “MLSTb” can be
explained due to the fact that the SCIP framework [26] has been used for the implementation of SCF whereas “MLSTb” has been implemented with the ILOG CONCERT framework [27].
Table 16 shows the results of selected MIP variants in comparison to the exact backtracking-search procedure used in [12]. The -algorithm is very effective for instances with small optimal objective
value, but instances with larger objective values or large sets of labels cannot be solved. The time limit imposed by the authors of [12] was three hours. It is important to note that the running
times listed in Table 16 are not directly comparable, as the authors of [12] list the computation time at which the best solution was obtained, and also different hardware has been used. For some
groups, where could not solve all instances (indicated by “NF”), the MIP method was able to do so. Furthermore, it is reported if the MIP method could solve some but not all instances within some group. In any case, the average objective value for the ten instances of each group is reported in column “avg()”, also considering the best feasible solutions found within the time limit of two hours. If not all instances have been solved to optimality, this is indicated with “(*)” in the particular column.
In general, it can be observed that relatively small instances could be solved efficiently by the MIP approach, but, for larger instances with and , it generally fails to produce provable optimal
solutions within the allowed time limit.
4.5. Summary
For all formulations, the node-label-constraints (3.22) significantly improved running times and reduced the number of branch-and-bound nodes. Despite its relatively poor LP-relaxation, formulation
turned out to be superior to the other ones for a broad class of test instances, which is mainly due to the fast cut separation and the low number of involved variables. Amongst the other considered
formulations, is superior to for dense graphs with a huge number of labels.
The odd hole cuts (3.25) significantly improved running times and number of branch-and-bound nodes for some classes of instances, in particular when branching rules are deduced from the label sets
corresponding to the found odd holes. Using BCP for dynamically adding new labels during the solution process turned out to be beneficial only in the case where the input instances significantly
deviate from random label assignments, that is, where the optimal solution is much lower than the expected value for randomly assigned labels. However, such solutions can likely also be found easily by heuristic methods. Nevertheless, this could remain the only way to prove optimality for “easy” large-scale instances.
5. Conclusions
In this work we presented a branch-and-cut(-and-price) framework for solving MLST instances exactly. We compared an underlying flow formulation to the (better performing) directed cut-based formulations, which have been applied to the MLST problem for the first time. Furthermore, a new connectivity formulation permitting fast cutting-plane separation has been presented. We
further introduced new valid inequalities to strengthen the formulations and the application of odd hole inequalities to this problem. To separate cutting-planes based on these odd hole inequalities,
a new separation heuristic based on a mixed integer program using Miller-Tucker-Zemlin inequalities has been proposed.
Moreover, a detailed theoretical and computational comparison of the contribution of the presented algorithmic building blocks has been presented. Our results show that the presented framework is
able to solve small- to medium-sized instances to optimality within a relatively short amount of time. Existing benchmark instances could be solved within a significantly shorter computation time
than before, and new (larger) instances could be solved to proven optimality for the first time.
1. R.-S. Chang and S.-J. Leu, “The minimum labeling spanning trees,” Information Processing Letters, vol. 63, no. 5, pp. 277–282, 1997.
2. A. M. Chwatal, G. R. Raidl, and O. Dietzel, “Compressing fingerprint templates by solving an extended minimum label spanning tree problem,” in Proceedings of the 7th Metaheuristics International Conference (MIC '07), Montreal, Canada, 2007.
3. S. O. Krumke and H.-C. Wirth, “On the minimum label spanning tree problem,” Information Processing Letters, vol. 66, no. 2, pp. 81–85, 1998.
4. Y. Wan, G. Chen, and Y. Xu, “A note on the minimum label spanning tree,” Information Processing Letters, vol. 84, no. 2, pp. 99–101, 2002.
5. Y. Xiong, B. Golden, and E. Wasil, “A one-parameter genetic algorithm for the minimum labeling spanning tree problem,” IEEE Transactions on Evolutionary Computation, vol. 9, no. 1, pp. 55–60, 2005.
6. S. Consoli, J. A. Moreno, N. Mladenovic, and K. Darby-Dowman, “Constructive heuristics for the minimum labelling spanning tree problem: a preliminary comparison,” Tech. Rep., DEIOC, 2006.
7. Y. Xiong, B. Golden, and E. Wasil, “Improved heuristics for the minimum label spanning tree problem,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 6, pp. 700–703, 2006.
8. J. Nummela and B. A. Julstrom, “An effective genetic algorithm for the minimum-label spanning tree problem,” in Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation (GECCO '06), pp. 553–558, ACM Press, 2006.
9. T. Brüggemann, J. Monnot, and G. J. Woeginger, “Local search for the minimum label spanning tree problem with bounded color classes,” Operations Research Letters, vol. 31, no. 3, pp. 195–201, 2003.
10. R. Cerulli, A. Fink, M. Gentili, and S. Voß, “Metaheuristics comparison for the minimum labelling spanning tree problem,” in The Next Wave in Computing, Optimization, and Decision Technologies, vol. 29 of Operations Research/Computer Science Interfaces Series, pp. 93–106, Springer, New York, NY, USA, 2005.
11. S. Consoli, J. A. Moreno, N. Mladenovic, and K. Darby-Dowman, “Mejora de la exploracion y la explotacion de las heuristicas constructivas para el mlstp,” in Spanish Meeting on Metaheuristics.
12. S. Consoli, K. Darby-Dowman, N. Mladenovic, and J. Moreno-Pérez, “Heuristics based on greedy randomized adaptive search and variable neighbourhood search for the minimum labelling spanning tree problem,” European Journal of Operational Research, vol. 196, no. 2, pp. 440–449, 2009.
13. S. Consoli, K. Darby-Dowman, N. Mladenovic, and J. Moreno-Pérez, “Solving the minimum labelling spanning tree problem using hybrid local search,” Tech. Rep., Brunel University, 2007.
14. S. Consoli, K. Darby-Dowman, N. Mladenović, and J. A. Moreno-Pérez, “Variable neighbourhood search for the minimum labelling Steiner tree problem,” Annals of Operations Research, vol. 172, pp. 71–96, 2009.
15. Y. Chen, N. Cornick, A. O. Hall, et al., “Comparison of heuristics for solving the GMLST problem,” in Telecommunications Modeling, Policy, and Technology, vol. 44 of Operations Research/Computer Science Interfaces Series, pp. 191–217, Springer, New York, NY, USA, 2008.
16. M. E. Captivo, J. C. N. Clímaco, and M. M. B. Pascoal, “A mixed integer linear formulation for the minimum label spanning tree problem,” Computers & Operations Research, vol. 36, no. 11, pp. 3082–3085, 2009.
17. A. M. Chwatal, G. R. Raidl, and K. Oberlechner, “Solving a k-node minimum label spanning arborescence problem to compress fingerprint templates,” Journal of Mathematical Modelling and Algorithms, vol. 8, no. 3, pp. 293–334, 2009.
18. G. L. Nemhauser and L. A. Wolsey, Integer and Combinatorial Optimization, John Wiley & Sons, New York, NY, USA, 1999.
19. T. L. Magnanti and L. A. Wolsey, “Optimal trees,” in Network Models, M. O. Ball, et al., Ed., vol. 7 of Handbook in Operations Research and Management Science, pp. 503–615, North-Holland, Amsterdam, The Netherlands, 1995.
20. M. Chimani, M. Kandyba, I. Ljubić, and P. Mutzel, “Obtaining optimal k-cardinality trees fast,” Journal on Experimental Algorithmics, vol. 14, pp. 2.5–2.23, 2009.
21. I. Ljubić, Exact and memetic algorithms for two network design problems, Ph.D. thesis, Vienna University of Technology, Vienna, Austria, 2004.
22. C. E. Miller, A. W. Tucker, and R. A. Zemlin, “Integer programming formulation of traveling salesman problems,” Journal of the Association for Computing Machinery, vol. 7, pp. 326–329, 1960.
23. B. V. Cherkassky and A. V. Goldberg, “On implementing the push-relabel method for the maximum flow problem,” Algorithmica, vol. 19, no. 4, pp. 390–410, 1997.
24. G. Cornuéjols and A. Sassano, “On the 0,1 facets of the set covering polytope,” Mathematical Programming, vol. 43, no. 1, pp. 45–55, 1989.
25. M. Grötschel and O. Holland, “Solving matching problems with linear programming,” Mathematical Programming, vol. 33, no. 3, pp. 243–259, 1985.
26. SCIP—Solving Constraint Integer Programs, Konrad-Zuse-Zentrum für Informationstechnik Berlin, version 1.2, http://scip.zib.de/.
27. ILOG Concert Technology and CPLEX, ILOG, version 12.0, http://www.ilog.com/.
Paul C. Buff - Inverse Square Law
Studio Flash Explained:
Inverse Square Law
The common misconception is to think that moving a light twice as far from the subject will result in half the light and a one f-stop difference in illumination.
This is incorrect.
It must be remembered that light is projected in a 2D pattern – up/down and right/left. If you consider a reflector that illuminates, say, a 10-foot circle at a 10-foot distance, the area illuminated is
pi times radius squared = pi × (60 inches)² ≈ 11,310 square inches.
If the distance is doubled to 20 feet, the projected pattern will be a 20-foot circle, and the area of that circle will be about 45,239 square inches – four times as great as in the first example. Since the light must now illuminate four times the surface area, its intensity is one fourth.
Thus the inverse square law for the propagation of light:
The intensity of light varies according to the square of the change in distance from light to illuminated surface.
So, if you increase the distance by two, the light intensity becomes one fourth – 2 f-stops, not 1f. If you move the light in to one half the distance, the intensity increases by 2 f-stops.
To affect a 1f gain in light intensity, you would move the light in by a factor of .707 (from 10 feet to 7 feet). Moving the light out by a factor of 1.414 (to 14 feet) will result in a loss of 1f. | {"url":"http://paulcbuff.com/sfe-inversesquarelaw.php","timestamp":"2014-04-20T20:54:42Z","content_type":null,"content_length":"11259","record_id":"<urn:uuid:1b5e6acc-beb3-4cfc-91be-9de3391a1854>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00281-ip-10-147-4-33.ec2.internal.warc.gz"} |
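The distance-to-stops arithmetic above can be sketched in a few lines. This is an illustrative Python sketch, not part of the original article; the function name is my own:

```python
from math import log2

def fstop_change(d1, d2):
    # Intensity follows the inverse square law (1/d^2), and one f-stop is a
    # factor of two in intensity, so the change in stops when moving a light
    # from distance d1 to d2 is log2((d1/d2)^2) = 2 * log2(d1/d2).
    return 2 * log2(d1 / d2)

print(fstop_change(10, 20))                       # -2.0 (doubling distance: 2 stops darker)
print(fstop_change(10, 5))                        # 2.0 (halving distance: 2 stops brighter)
print(round(fstop_change(10, 10 / 2 ** 0.5), 6))  # 1.0 (moving in by ~0.707: 1 stop brighter)
```

The last line reproduces the 0.707 factor quoted above: moving the light from 10 feet to about 7 feet gains exactly one stop.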
Calc 1 chapter 1 - Easy for you
August 27th 2008, 08:26 PM
Calc 1 chapter 1 - Easy for you
Course & Textbook: Calculus; Essential Calculus: Early Transcendentals, newest edition; Chapter 1.1, Problem 18
Question Details:
A spherical balloon with radius r inches has volume V(r) = (4/3)π(r^3). Find a function that represents the amount of air required to inflate the balloon from a radius of r inches to a radius of
r+1 inches.
I gotta be dumb, but I'm not certain how to carry the cube through, because I don't have the solution.
August 28th 2008, 01:35 PM
All you need is a function that gives you the new air you need to put in. You need the volume of a sphere of radius r+1 that is not in a sphere of radius r, i.e. the difference between them. | {"url":"http://mathhelpforum.com/calculus/46956-calc-1-chapter-1-easy-you-print.html","timestamp":"2014-04-20T10:28:48Z","content_type":null,"content_length":"3890","record_id":"<urn:uuid:e409a7ca-3611-442d-b59a-ecb88c6c38c9>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00498-ip-10-147-4-33.ec2.internal.warc.gz"} |
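Following that hint, a quick sketch of the algebra and a numerical check (an illustration, not part of the thread): the expansion (r+1)³ − r³ = 3r² + 3r + 1 is the cube step the poster asked about.

```python
from math import pi

def V(r):
    # Volume of a sphere of radius r.
    return 4 / 3 * pi * r ** 3

def A(r):
    # Air needed to go from radius r to r + 1:
    # V(r+1) - V(r) = (4/3)*pi*((r+1)**3 - r**3) = (4/3)*pi*(3r^2 + 3r + 1).
    return 4 / 3 * pi * (3 * r ** 2 + 3 * r + 1)

# the expanded cube matches the direct difference of volumes
for r in (0, 1, 2.5, 10):
    assert abs(A(r) - (V(r + 1) - V(r))) < 1e-9

print(round(A(1), 2))  # 29.32
```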
Wolfram Demonstrations Project
Goodness of Fit for Random Subsets
This Demonstration performs a goodness-of-fit test on a set of random samples of a given sample size from a finite population. The count for each subset is indicated by a point; the expected value
is indicated by the horizontal line. Tooltips on each show the values. The "trial" slider controls a seed for the random number generator, and gives a new set of samples for each value. The p-value is the probability that another set of truly random samples would be as far or farther away from the expected value (as measured by the chi-square statistic).
This Demonstration shows that the random sample function in Mathematica seems to choose each possible sample with (at least approximately) equal likelihood. The p-value ranges over the interval [0,1]. This Demonstration does not show that the p-values have a uniform distribution, but a user could keep track of the p-values and plot his or her own distribution.
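The same goodness-of-fit idea can be sketched outside the Demonstration in a few lines of Python. This is an illustration with assumed parameters (population of 6, subsets of size 2, 3000 draws), not the Demonstration's own code; 23.68 is the standard 5% critical value of chi-square with 14 degrees of freedom.

```python
import random
from collections import Counter
from itertools import combinations

random.seed(0)
population = range(6)
k = 2
n_trials = 3000

# draw many "random subsets" of size k, as the Demonstration does
draws = [tuple(sorted(random.sample(population, k))) for _ in range(n_trials)]

# observed count for each of the C(6,2) = 15 possible subsets
counts = Counter(draws)
cells = list(combinations(population, k))
expected = n_trials / len(cells)  # 200 per cell under H0: all subsets equally likely

# Pearson chi-square statistic; compare with the 5% critical value
# for 14 degrees of freedom, about 23.68
chi2 = sum((counts[c] - expected) ** 2 / expected for c in cells)
print(len(cells), round(chi2, 2), chi2 < 23.68)
```

Repeating this over many seeds and plotting the resulting p-values would be the uniformity check the page suggests.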
Mplus Discussion >> Group-based GMM
Scott posted on Sunday, October 07, 2007 - 3:33 pm
I am running growth mixture modeling analyses, specifying specific groups (e.g., male vs. female; sexual offender vs. index offender vs. nonoffender). My DVs are indexes of self-report delinquency
across 7 assessment waves. How would I compare the trajectories across these different groups? Would I need to calculate t tests by hand for the intercepts, linear slopes, & quadratic slopes to see
if the trajectories are different, or is there some other method? I have been using MLR estimation.
Linda K. Muthen posted on Monday, October 08, 2007 - 5:48 am
Growth mixture modeling involves classes based on unobserved heterogeneity. Are all of your groups observed?
Scott posted on Monday, October 08, 2007 - 6:47 pm
Yes, they are observed. What would this be called then? Just growth model? In the User's Guide, there is an explanation for GMM for known classes (multiple group analysis). Is this somehow different?
If so, would I use the grouping option of the VARIABLE command and then use TYPE=GENERAL MISSING (there are MAR data) and ML option under the ANALYSIS command?
Linda K. Muthen posted on Tuesday, October 09, 2007 - 6:00 am
In the user's guide example, there are two categorical latent variables. One is used for KNOWNCLASS, the other is not. In your case, you are doing a multiple group growth model. All groups are
defined by observed variables. Yes, you would use the GROUPING option and TYPE=GENERAL MISSING H1;
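A hypothetical Mplus input sketch of the setup described in this reply (the data file name, variable names, group labels, and the seven-wave quadratic growth specification are all illustrative assumptions, not taken from the thread):

```
TITLE:    multiple group growth model (illustrative sketch);
DATA:     FILE = delinq.dat;               ! hypothetical data file
VARIABLE: NAMES = grp y1-y7;
          GROUPING = grp (0 = nonsex 1 = sex);
ANALYSIS: TYPE = GENERAL MISSING H1;
          ESTIMATOR = MLR;
MODEL:    i s q | y1@0 y2@1 y3@2 y4@3 y5@4 y6@5 y7@6;
```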
Scott posted on Tuesday, October 09, 2007 - 8:06 pm
For a 2 group model, what should I look at in terms of output to see if the groups differ from each other? Is this an omnibus test (in other words, how would I know what groups differ from other
groups if I had 3 or more groups)? Here is an excerpt of the stats:
Chi-Square Test of Model Fit
Value 1073.940
Degrees of Freedom 198
P-Value 0.0000
Chi-Square Contributions From Each Group
NONSEX_OFFENDER 881.663
SEX_OFFENDER 192.277
Chi-Square Test of Model Fit for the Baseline Model
Value 5302.518
Degrees of Freedom 176
P-Value 0.0000
CFI/TLI
CFI 0.829
TLI 0.848
Loglikelihood
H0 Value -10506.506
H1 Value -9969.536
Information Criteria
Number of Free Parameters 46
AIC 21105.012
BIC 21355.849
Sample-Size Adjusted BIC 21209.712
RMSEA (Root Mean Square Error Of Approximation)
Estimate 0.072
90 Percent C.I. 0.067 0.076
SRMR (Standardized Root Mean Square Residual)
Value 0.210
Linda K. Muthen posted on Wednesday, October 10, 2007 - 7:22 am
The first step with multiple group growth modeling is to fit the growth model in each group separately. If the same growth model does not fit in both groups, it does not make sense to analyze the
groups together and compare growth factors means, variances, and covariances.
Scott posted on Wednesday, October 10, 2007 - 7:54 pm
I have tried to fit the growth model in each group, but the fit statistics do not indicate good-fitting models (I have tried linear, quadratic, and cubic growth curves): RMSEA > .10, CFI/TLI < .90, and a significant chi-square. Do you have suggestions on what I could do to get adequate model fit?
Linda K. Muthen posted on Thursday, October 11, 2007 - 5:59 am
Have you looked at modification indices to see where the misfit is?
Michael Spaeth posted on Thursday, October 11, 2007 - 12:29 pm
I don't want to start a new thread, and I think my question is comparable to Scott's problems. I want to evaluate a primary prevention programme dealing with alcohol use in young adolescents. I have a pre-test and three follow-ups. I expect heterogeneity regarding the trajectories and differential programme effects for those.
I plan the following analysis:
1.) single-class LGCA separately for the control and treatment groups, to get a first impression
2.) multi-group LCGA analyses to test for overall effects of the programme, also checking for a treatment-by-baseline interaction
3.) LGMM separately for the treatment and control groups
4.) LGMM with treatment group as a dummy-coded covariate, with the slopes regressed on it
Regarding point 4... obviously it is not possible in this model to test for a treatment/initial-status interaction!? Would you recommend a model pretty much like Example 8.8 in the manual (GMM with multiple groups)? Or is this "too much"? Can I use the above-mentioned covariate model instead, if I have found no interaction in point 2?
A little bit off topic... because we measure alcohol consumption (a continuous variable) at a very early age at time 1 and time 2 (10 years old), we have many zero values. Is that a problem? How can I handle that?
Many thanks!
Linda K. Muthen posted on Thursday, October 11, 2007 - 3:26 pm
Your analysis plan sounds reasonable.
If your trajectories differ more in the intercept of the trajectories rather than in the slope and the treatment effect is allowed to vary across classes, you are testing for a treatment/initial
status interaction.
For models with a preponderance of zeros, you could consider two-part modeling. See Example 6.16 and the reference mentioned there.
Michael Spaeth posted on Friday, October 12, 2007 - 2:11 am
Thanks so much!
Regarding the zeros... is there any threshold for this problem below which it is ignorable? The problem with zeros becomes weaker and weaker over the last two measurement points. Is this a problem for model 6.16?
I've heard of adding a constant and doing a log transformation of the raw scores to overcome the problem of non-normal distributions. Is that an alternative?
If not, it seems a little complicated to me to combine 6.16 with my analysis steps 3 and 4. Is there any example or paper doing that?
Thanks for helping very interested beginners in LGCA! :-)
Linda K. Muthen posted on Friday, October 12, 2007 - 10:13 am
I don't think a log transformation would avoid the piling up of values. I would say you should have at least 20 percent zeros for two-part modeling. Following is a cross-sectional paper that goes
through the steps of two-part modeling that may help you:
Kim, Y.K. & Muthén, B. (2007). Two-part factor mixture modeling: Application to an aggressive behavior measurement instrument.
It can be downloaded from the website.
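For readers unfamiliar with the two-part idea discussed in this exchange, here is a minimal sketch on made-up data (an illustration, not Mplus code and not from the thread): the semicontinuous outcome is split into a binary any-use indicator and a continuous part defined only among users.

```python
import math
import random

random.seed(1)

# hypothetical semicontinuous outcome: 60 exact zeros plus a positive tail
y = [0.0] * 60 + [round(random.lognormvariate(1.0, 0.5), 2) for _ in range(40)]
random.shuffle(y)

# part 1: binary "any use" indicator
u = [1 if v > 0 else 0 for v in y]

# part 2: (log) amount, defined only where y > 0, treated as missing otherwise
amount = [math.log(v) if v > 0 else None for v in y]

prop_zero = u.count(0) / len(u)
print(prop_zero)  # 0.6 -- well above the ~20% rule of thumb mentioned above
```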
Michael Spaeth posted on Monday, October 15, 2007 - 10:23 am
Once again, thanks for these helpful recommendations.
1.) I went through the above-mentioned article and was wondering how to apply the two-part modeling approach within my longitudinal LGMM study. The only related article I found was the Brown et al. paper from your website. But that study only deals with two-part LGM (single-class analysis). Is there any example/paper integrating two-part modeling into the LGMM framework?
2.) Regarding your recommendation to have at least 20 percent zeros for two-part modeling: imagine we measure alcohol consumption in a longitudinal study from the age of ten to the age of 16. At the age of 14, 15, or 16 there might be fewer zeros in the sample, maybe 19 percent or less. Would piecewise modeling be an alternative (first piece with two-part modeling, second piece without)? Or are there other ways to handle that problem?
Thanks so far!
Linda K. Muthen posted on Tuesday, October 16, 2007 - 12:27 pm
1. I don't know of any paper but the one I suggested. You would have to generalize from there.
2. I would stick with two-part.
EFried posted on Monday, March 26, 2012 - 7:38 am
Running GMMs with categorical outcome variables, does one use (apart from BIC, tech11 & tech14) the Chi-Square test "for ordered categorical outcomes" or "for MCAR under unrestricted LC indicator
model" to determine model fit?
Thank you
Linda K. Muthen posted on Monday, March 26, 2012 - 8:09 am
The MCAR test is not a test of model fit.
The two chi-square tests can be used if the number of indicators is eight or less and if the two tests agree.
On N-ary Algebras, Polyvector Gauge Theories in Noncommutative Clifford Spaces and Deformation Quantization
Authors: Carlos Castro
Polyvector-valued gauge field theories in noncommutative Clifford spaces are presented. The noncommutative binary star products are associative and require the use of the Baker-Campbell-Hausdorff
formula. An important relationship among the n-ary commutators of noncommuting spacetime coordinates [X^1,X^2, ......,X^n] and the poly-vector valued coordinates X^123...n in noncommutative Clifford
spaces is explicitly derived and is given by [X^1,X^2, ......,X^n] = n! X^123...n. It is argued that the large N limit of n-ary commutators of n hyper-matrices X[i[1]][i[2]]....[i[n]] leads to Eguchi-Schild p-brane actions when p+1 = n. A noncommutative n-ary generalized star product of functions is provided which is associated with the deformation quantization of n-ary structures. Finally,
brief comments are made about the mapping of the Nambu-Heisenberg n-ary commutation relations of linear operators into the deformed Nambu-Poisson brackets of their corresponding symbols.
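For illustration (not part of the abstract), and assuming the n-ary commutator here is the totally antisymmetrized product, which is the standard convention in the n-ary algebra literature, the quoted relation can be written as:

```latex
% Assumed convention: the n-ary commutator is the totally
% antisymmetrized product of the operators.
\[
  [X^{\mu_1}, X^{\mu_2}, \dots, X^{\mu_n}]
  \;=\; \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\,
        X^{\mu_{\sigma(1)}} X^{\mu_{\sigma(2)}} \cdots X^{\mu_{\sigma(n)}} .
\]
% For n = 2 this reduces to the familiar bivector relation in a
% Clifford algebra: [X^1, X^2] = X^1 X^2 - X^2 X^1 = 2 X^{12},
% matching n! X^{12...n} with n = 2.
```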
Comments: 13 Pages. This article has been submitted to Physics Letters B
Download: PDF
Submission history
[v1] 11 Jan 2010
Unique-IP document downloads: 186 times
Classify linear or nonlinear
March 30th 2010, 12:10 AM #1
Mar 2010
Classify linear or nonlinear
I have some trouble finding out whether the equation
$t^2 R''' - 4tR'' + R' + 3R = e^t$
is linear or non-linear. My guess is that it is linear, but I cannot explain why.
Thanks in advance!
Honey $\pi$
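One way to see why (an illustrative sketch, not from the thread): an equation is linear precisely when its left-hand-side operator satisfies superposition, L(aR1 + bR2) = aL(R1) + bL(R2), which can be checked numerically with finite differences.

```python
import math

def d(f, t, n, h=1e-2):
    # n-th derivative of f at t by central differences (n = 1, 2, or 3)
    if n == 1:
        return (f(t + h) - f(t - h)) / (2 * h)
    if n == 2:
        return (f(t + h) - 2 * f(t) + f(t - h)) / h ** 2
    return (f(t + 2 * h) - 2 * f(t + h) + 2 * f(t - h) - f(t - 2 * h)) / (2 * h ** 3)

def L(f, t):
    # the operator on the left-hand side: t^2 R''' - 4 t R'' + R' + 3 R
    return t ** 2 * d(f, t, 3) - 4 * t * d(f, t, 2) + d(f, t, 1) + 3 * f(t)

# superposition: L(a*R1 + b*R2) should equal a*L(R1) + b*L(R2)
R1, R2 = math.sin, math.exp
a, b, t0 = 2.0, -3.0, 1.3
combo = lambda t: a * R1(t) + b * R2(t)
print(abs(L(combo, t0) - (a * L(R1, t0) + b * L(R2, t0))) < 1e-6)           # True

# contrast: a genuinely nonlinear operator such as R * R' fails the test
L_nl = lambda f, t: f(t) * d(f, t, 1)
print(abs(L_nl(combo, t0) - (a * L_nl(R1, t0) + b * L_nl(R2, t0))) < 1e-6)  # False
```

The check passes because R and each of its derivatives appear only to the first power, with coefficients depending on t alone.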
March 30th 2010, 02:53 AM #2 | {"url":"http://mathhelpforum.com/differential-equations/136450-classify-linear-nonlinear.html","timestamp":"2014-04-18T22:36:22Z","content_type":null,"content_length":"34340","record_id":"<urn:uuid:d8c767be-136a-4f20-9775-bdce540357fc>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00195-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mansfield, TX Trigonometry Tutor
Find a Mansfield, TX Trigonometry Tutor
...My chemistry background includes three semesters of general and inorganic chemistry, three semesters of organic chemistry, two semesters of physical chemistry, and three semesters of
biochemistry as an undergrad. In addition, I completed ten semesters of graduate level biochemistry courses leading to a Ph.D. My research for the past 20 years has used chemistry almost every
55 Subjects: including trigonometry, chemistry, reading, writing
...While in college, I spent 3 years tutoring high school students in math, from algebra to AP Calculus. I also tutored elementary students in reading and spent 6 months homeschooling first and
third grade. When I was in high school, I would help my classmates in every subject from English to government to calculus.
40 Subjects: including trigonometry, reading, chemistry, calculus
...I love to increase confidence in students that either fear or are frustrated with math by settling their nerves through real life application, then simplistic synthesis of the concept. I
provide background knowledge and importance of the concepts so that each student can see the connection to th...
7 Subjects: including trigonometry, algebra 1, algebra 2, SAT math
...Also, I tutored students in the Duke MBA program who needed help with courses in Advanced Algebra and Calculus. Later, my wife and I decided to educate our children at home. We started with
Kindergarten and continued through high school.
82 Subjects: including trigonometry, chemistry, reading, English
...Students need a fun or at least interesting way to master the basic fundamentals so that they never forget. I use simple, easy, and quick to remember methods, formulas and acronyms so that my
students retain information and know when to use it. I present in a systematic format, using repetition...
39 Subjects: including trigonometry, chemistry, reading, writing
Oberwolfach Problems?
Re: Oberwolfach Problems?
Aren't those graph theory problems?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof. | {"url":"http://www.mathisfunforum.com/viewtopic.php?id=19121","timestamp":"2014-04-17T04:08:59Z","content_type":null,"content_length":"10572","record_id":"<urn:uuid:57efa0df-3ae3-461d-afbc-55b940099d08>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00201-ip-10-147-4-33.ec2.internal.warc.gz"} |
Meshfree & EFG methods
I'm looking for any material concerning meshfree methods, in particular the element-free Galerkin (EFG) method for solving PDEs ... I haven't worked with the topic for the last couple of years, and I'm considering doing an implementation of a "changing solution domain" PDE problem for the mechanical equilibrium problem, with some special tweaks. Any recent theoretical, software, etc. info would be really useful, thanks!
$(0,1)$-Category theory
In a poset $P$, a top of $P$ is an element $\top$ of $P$ such that $a \leq \top$ for every element $a$. Such a top may not exist; if it does, then it is unique.
In a proset, a top may be defined similarly, but it need not be unique. (However, it is still unique up to the natural equivalence in the proset.)
A top of $P$ can also be understood as a meet of zero elements in $P$.
A poset that has both top and bottom is called bounded.
As a poset is a special kind of category, a top is simply a terminal object in that category.
The top of the poset of subsets or subobjects of a given set or object $A$ is always $A$ itself.
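For illustration (an untested sketch, not from the nLab page), the definition can be phrased in Lean 4 without any libraries:

```lean
-- Untested sketch (core Lean 4, no Mathlib): a preorder with a top element.
structure Preord (α : Type) where
  le : α → α → Prop
  refl : ∀ a, le a a
  trans : ∀ {a b c}, le a b → le b c → le a c

-- A top is an element that sits above every element.
def IsTop {α : Type} (P : Preord α) (t : α) : Prop :=
  ∀ a, P.le a t

-- Example: Bool ordered by implication (false ≤ true); `true` is a top.
def boolPreord : Preord Bool where
  le a b := a = true → b = true
  refl := fun _ h => h
  trans := fun h₁ h₂ h => h₂ (h₁ h)

example : IsTop boolPreord true := fun _ _ => rfl
```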
Revised on March 15, 2012 17:24:09 by
Urs Schreiber | {"url":"http://www.ncatlab.org/nlab/show/top","timestamp":"2014-04-18T05:29:32Z","content_type":null,"content_length":"23007","record_id":"<urn:uuid:3c2f2b31-dc63-46cd-a274-9a40455b19e5>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00439-ip-10-147-4-33.ec2.internal.warc.gz"} |
Thresholding with Random Field Theory
A few notes to begin. First: you can easily read this page without knowing any matlab programming, but you may gain extra benefit if you read it with the matlab code that will generate the figures.
This code is contained in the matlab script http://imaging.mrc-cbu.cam.ac.uk/scripts/randomtalk.m. Second: some of the figures do not display well. Please click on any figure to get a pdf file that
has much better detail. Last, this page is based primarily on the Worsley 1992 paper (see refs below). For me, this paper is the most comprehensible of the various treatments of this subject. Please
refer to this paper and the Worsley 1996 paper for more detail on the issues here discussed.
See also An introduction to multiple hypothesis testing and brain data by Federico Turkheimer for a general introduction to the multiple comparison problem.
This page became the Random fields introduction chapter in the Human Brain Function second edition - please see that chapter for a more up to date and better presented version of this page (but
without the script!).
Most statistics packages for functional imaging data create statistical parametric maps. These maps have a value for a certain statistic at each voxel in the brain, which is the result of the
statistical test done on the scan data for that voxel, across scans. For SPM96, this statistic is a Z statistic (see my SPM statistics tutorial).
The null hypothesis for a particular statistical comparison probably will be that there is no change anywhere in the brain. For example, in a comparison of activation against rest, the null
hypothesis would be that there are no differences between the scans in the activation condition, and the scans in the rest condition. This null hypothesis implies that the volume of Z scores for the
comparison will be similar to an equivalent set of numbers from a random normal distribution.
The multiple comparison problem
The question then becomes: how do we decide whether some of the Z statistics we see in our SPM96 map are larger (more positive) than we would expect in a similar volume of random numbers? So, in a typical SPM Z map, we have, say, 200000 Z scores. Because we have so many Z scores, even if the null hypothesis is true, we can be sure that some of these Z scores will appear to be significant at standard statistical thresholds for the individual Z scores, such as p<0.05 or p<0.01. These p values are equivalent to Z = 1.64 and 2.33 respectively - see the http://imaging.mrc-cbu.cam.ac.uk/
scripts/randomtalk.m file.
So, if we tell SPM to show us only Z scores above 2.33, we would expect a number of false positives, even if the null hypothesis is true. So, how high should we set our Z threshold, so that we can be
confident that the remaining peak Z scores are indeed too high to be expected by chance? This is the multiple comparison problem.
Why not a Bonferroni correction?
The problem of false positives with multiple statistical tests is an old one. One standard method for dealing with this problem is to use the Bonferroni correction. For the Bonferroni correction, you
set your p value threshold for accepting a test as being significant as alpha / (number of tests), where alpha is the false positive rate you are prepared to accept. Alpha is often 0.05, or one false
positive in 20 repeats of your experiment. Thus, for an SPM with 200000 voxels, the Bonferroni corrected p value would be 0.05 / 200000 = [equivalent Z] 5.03. We could then threshold our Z map to
show us only Z scores higher than 5.03, and be confident that all the remaining Z scores are unlikely to have occurred by chance. For some functional imaging data this is a perfectly reasonable
approach, but in most cases the Bonferroni threshold will be considerably too conservative. This is because, for most SPMs, the Z scores at each voxel are highly correlated with their neighbours.
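The page's own code is the MATLAB script linked above; the Z values quoted in this section can be reproduced with a small Python sketch (an illustration, not the original code), inverting the one-tailed normal tail probability by bisection.

```python
from math import erfc, sqrt

def z_tail(z):
    # one-tailed upper probability of a standard normal, P(Z > z)
    return 0.5 * erfc(z / sqrt(2))

def z_threshold(p, lo=0.0, hi=10.0):
    # invert the tail probability by bisection (z_tail is decreasing in z)
    for _ in range(200):
        mid = (lo + hi) / 2
        if z_tail(mid) > p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(z_threshold(0.05), 2))           # 1.64 (uncorrected p < 0.05)
print(round(z_threshold(0.01), 2))           # 2.33 (uncorrected p < 0.01)
print(round(z_threshold(0.05 / 200000), 2))  # 5.03 (Bonferroni over 200000 voxels)
```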
Spatial correlation
Functional imaging data usually have some spatial correlation. By this, we mean that data in one voxel are correlated with the data from the neighbouring voxels. This correlation is caused by several factors:
• With low resolution imaging (such as PET and lower resolution fMRI) data from an individual voxel will contain some signal from the tissue around that voxel;
• The reslicing of the images during preprocessing causes some smoothing across voxels;
• Most SPM analyses work on smoothed images, and this creates strong spatial correlation (see my smoothing tutorial for further explanation). Smoothing is often used to improve signal to noise.
The reason this spatial correlation is a problem for the Bonferroni correction is that the Bonferroni correction assumes that you have performed some number of independent tests. If the voxels are
spatially correlated, then the Z scores at each voxel are not independent. This will make the correction too conservative.
Spatial correlation and independent observations
An example can show why the Bonferroni correction is too conservative with non-independent tests. The code for the following figures is in the http://imaging.mrc-cbu.cam.ac.uk/scripts/randomtalk.m file. Let us first make an example image out of random numbers. We generate 16384 random numbers, and then put them into a 128 by 128 array. This results in a 2D image of spatially
independent random numbers. Here is an example: In this picture, whiter pixels are more positive, darker pixels more negative. The Bonferroni correction is appropriate for this image, because the
image is made up of 128*128 = 16384 random numbers from a normal distribution. Therefore, from the Bonferroni correction (alpha / N = 0.05 / 16384 = [Z equivalent] 4.52), we would expect only 5 out
of 100 such images to have one or more random numbers in the whole image larger than 4.52.
The situation changes if we add some spatial correlation to this image. We can take our image above, and perform the following procedure:
• Break up the image into 8 by 8 squares;
• For each square, calculate the mean of all 64 random numbers in the square;
• Replace the 64 random numbers in the square by the mean value.
(In fact, we have one more thing to do to our new image values. When we take the mean of 64 random numbers, this mean will tend to zero. We have therefore to multiply our mean numbers by 8 to restore
a variance of 1. This will make the numbers correspond to the normal distribution again.) This is the image that results from following the above procedure on our first set of random numbers:
We still have 16384 numbers in our image. However, it is clear that we now have only (128 / 8) * (128 / 8) = 256 independent numbers. The appropriate Bonferroni correction would then be (alpha / N =
0.05 / 256 = [Z equivalent] 3.55). We would expect that if we took 100 such mean-by-square-processed random number images, then only 5 of the 100 would have a square of values greater than 3.55 by
chance. However, if we took the original Bonferroni correction for the number of pixels rather than the number of independent pixels, then our Z threshold would be far too conservative.
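The mean-by-square procedure can be sketched as follows (a Python illustration; the original figures come from the MATLAB script):

```python
import random
import statistics

random.seed(42)
N, B = 128, 8
img = [[random.gauss(0, 1) for _ in range(N)] for _ in range(N)]

# replace each 8x8 square by its mean, then multiply by 8 (= sqrt(64))
# so that the values again come from a unit-variance normal distribution
out = [[0.0] * N for _ in range(N)]
for bi in range(0, N, B):
    for bj in range(0, N, B):
        block = [img[i][j] for i in range(bi, bi + B) for j in range(bj, bj + B)]
        value = statistics.fmean(block) * B
        for i in range(bi, bi + B):
            for j in range(bj, bj + B):
                out[i][j] = value

# one independent value per square: (128 / 8) ** 2 = 256 of them
indep = [out[i][j] for i in range(0, N, B) for j in range(0, N, B)]
print(len(indep))                         # 256
print(round(statistics.stdev(indep), 1))  # close to 1, as intended
```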
Smoothed images and independent observations
The mean-by-square process we have used above is a form of smoothing (see the smoothing tutorial for details). In the mean-by-square case, the averaging takes place only within the squares, but in
the case of smoothing with a kernel, the averaging takes place in a continuous way across the image. Here is our first random number image smoothed with a Gaussian kernel of FWHM 8 by 8 pixels: (As
for the mean-by-square example, the smoothing reduces the variance of the numbers in the image, because an average of random numbers tends to zero. In order to return the variance of the numbers in
the image to one, to match the normal distribution, the image must be multiplied by a scale factor. The derivation of this scaling factor is rather technical, and not relevant to our discussion here.
You will find the code in http://imaging.mrc-cbu.cam.ac.uk/scripts/randomtalk.m).
In our smoothed image, as for the mean-by-square image, we no longer have 16384 independent observations, but some smaller number, because of the averaging across pixels. If we knew how many
independent observations there were, we could use a Bonferroni correction as we did for the mean-by-square example. Unfortunately it is not easy to work out how many independent observations there
are in a smoothed image. So, we must take a different approach to determine our Z score threshold. The approach used by SPM and other packages is to use Random Field Theory (RFT).
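The variance-restoring rescale mentioned above can be illustrated in 1D (a Python sketch, not the original MATLAB). For a kernel with weights w that sum to one, smoothing white noise shrinks the variance to the sum of the squared weights, so multiplying by 1/sqrt(sum(w²)) restores unit variance.

```python
import math
import random
import statistics

random.seed(7)
fwhm = 8
sigma = fwhm / (2 * math.sqrt(2 * math.log(2)))  # FWHM = 2*sqrt(2 ln 2) * sigma

# discrete Gaussian kernel, normalized to sum to one
half = int(4 * sigma)
kernel = [math.exp(-(k * k) / (2 * sigma ** 2)) for k in range(-half, half + 1)]
total = sum(kernel)
kernel = [w / total for w in kernel]

# smooth a long 1D strip of white noise by direct convolution
noise = [random.gauss(0, 1) for _ in range(5000)]
smoothed = [sum(w * noise[i + j] for j, w in enumerate(kernel))
            for i in range(len(noise) - len(kernel) + 1)]

# smoothing shrinks the variance to sum(w^2); rescaling by
# 1/sqrt(sum(w^2)) restores unit variance -- the "scale factor" above
scale = 1 / math.sqrt(sum(w * w for w in kernel))
rescaled = [v * scale for v in smoothed]
print(round(statistics.stdev(rescaled), 1))  # close to 1
```

The 2D case works the same way, with the kernel applied along each axis.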
Using Random Field Theory
You can think of the application of RFT as proceeding in three steps. First, you determine how many resels there are in your image. Then you use the resel count and some sophisticated maths to work out the expected Euler characteristic (EC) of your image, when it is thresholded at various levels. These expected ECs can be used to give the correct threshold for the required control of false positives.
What is a resel?
A resel is a "resolution element". The number of resels in an image is similar to the number of independent observations in the image. However, they are not the same, as we will see below. A resel is
defined as a block of pixels of the same size as the FWHM of the smoothness of the image. In our smoothed image above, the smoothness of the image is 8 pixels by 8 pixels (the smoothing that we
applied). A resel is therefore an 8 by 8 pixel block, and the number of resels in our image is (128 / 8) * (128 / 8) = 256. Note that the number of resels depends only on the number of pixels and the FWHM of the smoothing.
What is the Euler characteristic?
The Euler characteristic of an image is a property of the image after it has been thresholded. For our purposes, the EC can be thought of as the number of blobs in an image after it has been
thresholded. This is best explained by example. Let us take our smoothed image, and threshold it at Z greater than 2.75. This means we set to zero all the pixels with Z scores less than or equal to
2.75, and set to one all the pixels with Z scores greater than 2.75. If we do this to our smoothed image, we get the image below. Zero in the image displays as black and one as white. In this
picture, there are two blobs, corresponding to two areas with Z scores higher than 2.75. The EC of this image is therefore 2. If we increase the threshold to 3.5, we find that the lower left hand
blob disappears (the highest Z in the peak was less than 3.5).
The upper central blob remains; the EC of the image above is therefore 1. It turns out that if we know the number of resels in our image, it is possible to estimate the most likely value of the EC
at any given threshold. The formula for this estimate, for two dimensions, is on page 906 of Worsley 1992, and is implemented in http://imaging.mrc-cbu.cam.ac.uk/scripts/randomtalk.m to create the
graph below. The graph shows the expected EC of our smoothed image, of 256 resels, when thresholded at different Z values.
Note that the graph does a reasonable job of predicting the EC in our image; at Z = 2.75 threshold it predicted an EC of 2.8, and at a Z of 3.5 it predicted an EC of 0.3.
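The 2D formula from Worsley 1992 reproduces the numbers quoted above; this is a Python sketch of the same calculation, not the original MATLAB code.

```python
from math import exp, log, pi

def expected_ec_2d(z, resels):
    # Worsley et al. (1992), 2D Gaussian field:
    # E[EC] = R * (4 ln 2) * (2 pi)^(-3/2) * z * exp(-z^2 / 2)
    return resels * 4 * log(2) * (2 * pi) ** -1.5 * z * exp(-z * z / 2)

R = 256  # resels in the 128 x 128 image smoothed at FWHM 8
print(round(expected_ec_2d(2.75, R), 1))  # 2.8
print(round(expected_ec_2d(3.5, R), 1))   # 0.3
print(round(expected_ec_2d(4.0, R), 2))   # 0.06
```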
How does the Euler characteristic give a Z threshold?
The useful feature of the expected EC is this: when the Z thresholds become high and the predicted EC drops towards zero, the expected EC is a good approximation of the probability of observing one
or more blobs at that threshold. So, in the graph above, when the Z threshold is set to 4, the expected EC is 0.06. This can be rephrased thus: the probability of getting one or more regions where Z
is greater than 4, in a 2D image with 256 resels, is 0.06. So, we can use this for thresholding. If x is the Z score threshold that gives an expected EC of 0.05, then, if we threshold our image at x,
we can expect that any blobs that remain have a probability of less than or equal to 0.05 of having occurred by chance. Note that this threshold, x, depends only on the number of resels in our image.
How does the Random Field maths compare to the Bonferroni correction?
I stated above that the resel count in an image is not exactly the same as the number of independent observations. If it was the same, then instead of using RFT for the expected EC, we could use a
Bonferroni correction for the number of resels. However, these two corrections give different answers. Thus, for an alpha of 0.05, the Z threshold according to RFT, for our 256 resel image, is Z=
4.06. However, the Bonferroni threshold, for 256 independent tests, is 0.05/256 = [Z equivalent] 3.55. So, although the RFT maths gives us a Bonferroni-like correction, it is not the same as a
Bonferroni correction. It is easy to show that the RFT correction is better than a Bonferroni correction, by simulation. Using the code in the http://imaging.mrc-cbu.cam.ac.uk/scripts/randomtalk.m script,
you can repeat the creation of smoothed random images many times, and show that the RFT threshold of 4.06 does indeed give you about 5 images in 100 with a significant Z score peak.
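The comparison of the two thresholds can be sketched in a few lines, assuming the same two-dimensional EC formula as above (so the exact numbers depend on that assumption); the Bonferroni Z uses the one-tailed inverse normal CDF:

```python
import math
from statistics import NormalDist

def expected_ec_2d(resels, z):
    # Expected EC of a thresholded 2D Gaussian random field (Worsley 1992).
    return (resels * 4 * math.log(2) * (2 * math.pi) ** -1.5
            * z * math.exp(-z ** 2 / 2))

def rft_threshold(resels, alpha=0.05, lo=2.0, hi=7.0):
    # Bisect for the Z at which the expected EC equals alpha
    # (E[EC] is decreasing in z over this range).
    for _ in range(60):
        mid = (lo + hi) / 2
        if expected_ec_2d(resels, mid) > alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

z_rft = rft_threshold(256)                     # roughly 4.05 (text: 4.06)
z_bonf = NormalDist().inv_cdf(1 - 0.05 / 256)  # roughly 3.54 (text: 3.55)
print(round(z_rft, 2), round(z_bonf, 2))
```

Note that the RFT threshold is the more stringent of the two here; the point of the simulation is that it is the one that actually delivers the nominal 5% familywise false positive rate for smooth data.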
To three dimensions
Exactly the same principles apply to a smoothed random number image in three dimensions. In this case, the EC is the number of 3D blobs - perhaps "globules" - of Z scores above a certain threshold.
Pixels might better be described as voxels (pixels with volume). The resels are now in 3D, and one resel is a cube of voxels that is of size (FWHM in x) by (FWHM in y) by (FWHM in z). The formula for
the expected EC is different in the 3D case, but still depends only on the resels in the image. Now, if we find the threshold giving an expected EC of 0.05, in 3D, we have a threshold above which we
can expect that any remaining Z scores are unlikely to have occurred by chance, with a p<0.05.
Random fields and SPM96
It is exactly this technique that is used to give corrected p values in SPM96. There is only one slight variation from the discussion above, and that is that SPM96 does not assume that the brain
volume is the same smoothness (FWHM) as the kernel you have used to smooth the images. Instead SPM looks at the data in the images (in fact the residuals from the statistical analysis) to calculate
the smoothness. From these calculations it derives estimates for the FWHM in x, y and z. Other than this, the corrected statistics are calculated just as described above. Below is a page from an SPM96
results printout (you can click on the picture to get the page in high detail pdf format):
You will see that the FWHM values are printed at the bottom of the page - here they are 7.1 voxels in x, 8.1 voxels in y, and 9.3 voxels in z. A resel is therefore a block of volume 7.1*8.1*9.3 =
537.3 voxels (if we use the exact FWHM values, before rounding). As there were 238187 intracerebral voxels in this analysis, this gives 238187 / 537.3 = 443.3 intracerebral resels (see the bottom of
the printout). The top line of the table gives the statistics for the most significant Z score in the analysis. The middle column, labelled 'voxel-level {Z}', shows a Z score (in brackets) of 4.37.
This is the Z score from the statistical analysis, before any statistical correction. The uncorrected p value, from which this Z score was derived, is shown in the column labelled (rather
confusingly) 'uncorrected k & Z'. It is the right hand of the two figures in this column, just before the x, y and z coordinates of the voxel, and is 0.000. In fact, from the Z score, we can infer
that the p value would have been 0.000006. The figure that we are interested in is the corrected p value for the height of the Z score, and this is the left hand value in the middle column
('voxel-level {Z}'). This figure is 0.068. 0.068 is the expected EC, in a 3D image of 443 resels, thresholded at Z = 4.37. This is equivalent to saying that the probability of getting one or more
blobs of Z score 4.37 or greater, is 0.068.
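The arithmetic on the printout can be checked with the three-dimensional formula (again, my reading of Worsley 1992; the small difference from the printed 0.068 is plausibly rounding and the exact smoothness estimates):

```python
import math

def expected_ec_3d(resels, z):
    # Expected EC of a 3D Gaussian random field thresholded at Z = z
    # (Worsley 1992, three-dimensional case).
    return (resels * (4 * math.log(2)) ** 1.5 * (2 * math.pi) ** -2
            * (z ** 2 - 1) * math.exp(-z ** 2 / 2))

# Intracerebral voxels divided by voxels per resel (rounded FWHMs):
resels = 238187 / (7.1 * 8.1 * 9.3)
print(round(resels))                          # -> 445 (printout: 443.3, unrounded FWHMs)
print(round(expected_ec_3d(443.3, 4.37), 3))  # -> 0.067 (printout: 0.068)
```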
There is another corrected p value that is also based on RFT, in the 'cluster level {k,Z}' column. This is the corrected p value for the number of voxels above the overall Z threshold (the 'Height
threshold' at the bottom of the page - here Z = 2.33). This RFT correction is rather more complex, and I don't propose to discuss it further (you may be glad to hear). See the Friston et al paper for
more details.
More sophisticated Random Fielding
Two of the statements above are deliberate oversimplifications for the sake of clarity. Both are discussed in Worsley's 1996 paper. The first oversimplification is that the expected EC depends only on
the number of resels in the image. In fact, this is an approximation, which works well when the volume that we are looking at has a reasonable number of resels. This is true for our two dimensional
example, where the FWHM was 8 and our image was 128 by 128. However, the precise EC depends not only on the number of resels, but the shape of the volume in which the resels are contained. It is
possible to derive a formula for the expected EC, based on the number of resels in the area we are thresholding, and the shape of the area (see Worsley 1996). This formula is more precise than the
formula taking account of the number of resels alone. When the area to be thresholded is large, compared to the FWHM, as is the case when we are thresholding the whole brain, the two formulae give
very similar results. However, when the volume for thresholding is small, the formulae give different results, and the shape of the area must be taken into account. This is the case when you require
a threshold for a small volume, such as a region of interest. Please see my small volume correction page for more details and links to software to implement these corrections.
The second oversimplification was to state that the SPM Z statistics should be similar to an equivalent volume of random numbers on the null hypothesis. In fact, because of the way that the Z scores
are derived, this is only true for quite high degrees of freedom (see the Worsley 1996 paper again). At low degrees of freedom, say less than 40, the SPM Z scores can generate an excess of false
positives, using the RFT maths. It is therefore safer to generate thresholds for the t statistics that were the raw material for the SPM Z scores (see my SPM statistics tutorial ). For this, you can
use RFT formulae for the expected EC of t fields (instead of Z fields) to give a more accurate threshold.
Random fields and SPM99
SPM 99 takes into account both of the caveats in the preceding paragraph. Thus, in generating the expected EC (and corrected p value for height), it uses the raw t statistics. It also uses the EC
formula that takes into account the shape of the thresholded volume. Here is an example of an SPM99b results printout. I have reproduced the SPM96 analysis above, by running the relevant contrast,
selecting the spm96 default uncorrected p value as threshold (0.01), and the same voxel extent threshold (here 127). Then I clicked on the Volume button to get an SPM96-like summary of the peak
voxels. The t statistic for each voxel is shown in the 'T' column, under the 'Voxel level' heading. The 'Z=' column shows the equivalent Z score, as used by SPM96. You will see these are identical
to the equivalent Z scores for SPM96. There are some minor changes in the calculation of the smoothness of the data in SPM99, and this is reflected in slightly larger resel size (see the bottom of
the printout). The expected ECs are shown in the 'p corrected' column, under the 'Voxel level' heading. For SPM99, the expected EC for the peak voxel is now 0.280, instead of 0.068, as it was for
SPM96. This difference is almost entirely explained by the low degrees of freedom in this analysis; the degrees of freedom due to error are only 10. In SPM96, the low degrees of freedom have led to
bias in the Z score generation, and the EC calculation is therefore too liberal.
Other ways of detecting significant signal
The random fields method allows you to set a threshold allowing a known false positive rate across the whole brain. However, there are some problems with this approach. Firstly, this approach is an
hypothesis testing approach rather than an estimation approach which may be more appropriate. Second, the thresholding approach does not give you a good estimate of the shape of the signal. These
problems are discussed in more detail in Federico Turkheimer's tutorials: Multiple hypothesis testing and brain data
Using wavelets to detect activation signal
Other sources of information
Random field theory can be rather obscure, and more difficult to follow than the creation of the statistical maps. The best introductory paper is an early paper by Keith Worsley. This paper outlines
the Montreal approach to generating statistical parametric maps, which differs somewhat from that of SPM. However, the discussion of Gaussian random fields, Euler characteristics and corrected p
values is very useful.
There is an overview of the field in the SPM course notes. It is very technical, and not easy reading. There are slides for the talk on this chapter, given on the 1998 SPM course. These have some
good pictures illustrating the issues involved.
A more recent paper by Keith Worsley covers the maths of corrected p values in a statistical map, when you are only looking within a defined area of the map - i.e. when you have an anatomical
hypothesis as to the site of your activation. See also my small volume correction page for a very brief introduction to this area, and links to software to implement the Worsley corrections.
Here ends the lesson. I hope that it has been of some use. I would be very glad to hear from anyone with suggestions for improvements, detected errors, or other feedback.
Matthew Brett 19/8/99 (FB)
Worsley, K.J., Marrett, S., Neelin, P., and Evans, A.C. (1992). A three-dimensional statistical analysis for CBF activation studies in human brain. Journal of Cerebral Blood Flow and Metabolism,
Worsley, K.J., Marrett, S., Neelin, P., Vandal, A.C., Friston, K.J., and Evans, A.C. (1996). A unified statistical approach for determining significant signals in images of cerebral activation. Human
Brain Mapping, 4:58-73.
Friston KJ, Worsley KJ, Frackowiak RSJ, Mazziotta JC, Evans AC (1994). Assessing the Significance of Focal Activations Using their Spatial Extent. Human Brain Mapping, 1:214-220. | {"url":"http://imaging.mrc-cbu.cam.ac.uk/imaging/PrinciplesRandomFields","timestamp":"2014-04-20T14:33:42Z","content_type":null,"content_length":"42855","record_id":"<urn:uuid:70afb62c-0ac4-4763-b8c6-0923dd5fa4ef>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00380-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lecture 12: Continuation: General Theory for Inhomogeneous ODEs
We are going to start today in a serious way on the inhomogenous equation, second-order linear differential, I'll simply write it out instead of writing out all the words which go with it. So, such
an equation looks like, the second-order equation is going to look like y double prime plus p of x, t, x plus q of x times y. Now, up to now the right-hand side has been zero. So, now we are going to
make it not be zero. So, this is going to be f(x). In the most frequent applications, x is time. x is usually time, often, but not always.
So, maybe just for today, I will use X in talking about the general theory. And, from now on, I'll probably make X equal time because that's what is most of the time in the applications. So, this is
the part we've been studying up until now. It has a lot of names. It's input, signal, commas between those, a driving term, or sometimes it's called the forcing term. You'll see all of these in the
literature, and it pretty much depends upon what course you're sitting at, what the professor habitually calls it. I will try to use all these terms now and then, probably most often I will lapse
into input as the most generic term, suggesting nothing in particular, and therefore, equally acceptable or unacceptable to everybody.
The response, the solution, then, the solution as you know is then called the response. The response, sometimes it's called the output. I think I'll stick pretty much with response. So, I'm using
pretty much the same terminology we use for studying first-order equations. Now, as you will see, the reason we had to study the homogeneous case first was because you cannot solve this without
knowing the homogeneous solutions. So, that's the inhomogeneous case. But the homogeneous one, the corresponding homogeneous thing, y'' + p(x) y' + q(x) y = 0 is an essential part of the solution to
this equation.
That's called, therefore, it has names. Now, unfortunately, it doesn't have a single name. I don't know what to call it, but I think I'll probably call it the associated homogeneous equation, or ODE,
the associated homogeneous equation, the one associated to the guy on the left. It's also called the reduced equation by some people. There is some other term for it, which escapes me totally, but
what the heck. Now, its solution has a name. So, its solution, of course, doesn't depend on anything in particular, the general solution, because the right-hand side is always zero.
So, its solution, we know, can be written in the form y = c1 y1 + c2 y2, where y1 and y2 are any two independent solutions of that, and the c1's and c2's are arbitrary constants. Now, when you
are looking at this equation, you're going to need this also. And therefore, it has a name. It has various names. Sometimes there is a subscript, c, there. Sometimes there's a subscript, h. Sometimes
there's no subscript at all, which is the most confusing of all. But, anyway, what's the name given to it? Well, there is no name. Many books call it the solution to the associated homogeneous
equation. That's maximally long. Your book calls it the complementary solution.
Many people call it that, and many will look at you with a blank, who know differential equations very well, and will not have the faintest idea what you're talking about. If you call it (y)h, then
you are thinking of it as the solution; the h is for homogeneous to indicate it's the solution. So, it's the solution to the, I'm not going to write that. You put it in her books if you like writing.
Write solution to the associated homogeneous equation, y(h). But, it's all the same thing.
Now, or the solution to the reduced equation, I see I have in my notes. Okay, good, the solution to the reduced equation, too. Okay, now, the examples, there are, of course, two classical examples,
of which you know one. But, use them as the model for what solutions of these things should look like and how they should behave. So, the model you know already is the one, I won't make the leading
coefficient one because it usually isn't, is the one, mx'' + bx' + kx = f(t). That's the spring-mass system, the spring-mass-dashpot system.
Mass, the damping constant and the spring constant, except up to now, it's always been zero here. What does this f(t) represent? Well, if you think of the way in which I derived the equation, the mx double prime,
that was the Newton's law side. That's the acceleration. So, it's the mass times the acceleration. By Newton's law, this is equal to the imposed force on the little mass truck.
Okay, you got that truck, there. I'm not going to draw the truck for the nth time. You'll have to imagine it. So, here's our truck. Okay, forces are acting on it. Remember, the forces were -kx. That
came from the spring. There was a force, -bx'. That came from the dashpot, the damping force. So, this other guy is f(t). What's this? This is the external force, which is acting out. In other words,
instead of the little truck going back and forth and doing its own thing all by itself, here's someone with an electromagnet, and the mass it's carrying is a big pile of iron ore. You're turning it
on and off, and pulling that thing from afar where nobody can see it. So, this is the external force. Now, think, that is the model you must have in your mind of how these equations are treated.
In other words, when f(t) = 0, the system is passive. There is no external force on it when this is zero. The system is sitting, and just doing what it wants to do, all by itself. You wanted up by
giving it an initial push, and putting its initial position somewhere. But after that, you lay your hands off. The system then just passively responds to its initial conditions and does what it
The other model is that you don't let it respond the way it wants to. You force it from the outside by pushing it with an external force. Now, those are clearly two entirely different problems: what
it does by itself, or what it does when it's acted on from outside. And, when I explained to you how the thing is to be solved, you have to keep in mind those two models. So, this is the forced
I'll just use the word, forced system, that's where f(t) is not zero, versus the passive system where there is no external applied force. The passive system, the forced system, now, you have to both,
even if you wanted to solve the forced system, the way the system would behave if nothing would be done to it from the outside is nonetheless going to be an important part of the solution.
And, I won't be able to give you that solution without knowing this also. Now, I'd like to give you the other model very rapidly because it's in your book. It's in the problems I have to give you.
You know, it's part of everybody's culture, whether they like it or not. So, that's example number one. Example number two, which follows the differential equation just as perfectly as the
spring-mass-dashpot system is the simple electric circuit.
The inductance, you don't know yet what an inductance is, officially, but you will, a resistance, sorry, that's okay, put the capacitance up there, resistance, and then maybe a thing. So, this is a
resistance. I think you know these symbols. By now, you certainly know the system for capacitance. What I mean when I say C is the capacitance, you may not know yet what L is. That's called the
inductance. So, this is something called a coil because it looks like one. L is what's called its inductance. And, the differential equation, there are two differential equations which can be used in
this. They are essentially the same. One is simply the derivative of the other. Both differential equations come from Kirchhoff's voltage law, that the sum of the voltage drops as you move around the
circuit --
-- has to be zero, because, well, that's because of somebody's law, Kirchhoff, with two h's. The sum of the voltage drops is zero, and now you know the voltage drop across this,
and you know the voltage drop across that because you learned in 8.02. You will, one day, learn the voltage drop across this. But, I already know it. It's Li'. So, i is the current. I'll write this
thing in its primitive form first. So, i is the current that's flowing in the circuit. q is the charge on the capacitance. So, the voltage drop across the coil is Li'. The voltage drop across the
resistance is Ri, well, you know that.
And, the voltage drop across the capacitance is q/C. And so, that's equal to, well, it's equal to zero, except if there's a battery here or something generating a voltage drop, so, let's call that E
is a generic word. E could be a battery. It could be a source of alternating current, something like that. But, there's a voltage drop across it, and I'm giving E the name of the voltage drop.
So, and then there's the question of the signs, which I know I'll never understand. But, let's assume you've chosen the sign convention so that this comes out nicely on the right-hand side. So, this
might be varying sinusoidally, in which case you'd have source of alternating current. Or, it might be constant, in which that would be a battery, a little dry cell giving you direct current of a
constant voltage, stuff like that.
So, you could make this minus if you want, but everything will have the wrong signs, so don't do it. Now, this doesn't look like what it's supposed to look like because it's got q and i. So, the
final thing you have to know is that q' = i. The rate at which that charge leaves the condenser and hurries around the circuit to find its little soul mate on the other side is the current that's
flowing in the circuit. That's why current flows, except nothing really happens.
Electrons just push on each other, and they stay where they are. I don't understand this at all. So, if I differentiate this, you can do two things. Either you could integrate i, and expressed the
thing entirely in terms of q, or you can differentiate it, and express everything in terms of i. Your book does nicely both, does not take sides. So, let's differentiate it, and then it will look
like Li'' + Ri' + i/C equals, and now, watch out, you have now not the electromotive force, but its derivative. So, if you were so unfortunate as to put a little dry cell there, now you've got
nothing, and you've got the homogeneous case. That's okay. Where are the erasers? One eraser? I don't believe this.
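The step just carried out can be written compactly, using the sign convention chosen so that the source appears on the right:

```latex
% Kirchhoff's voltage law around the loop, with charge q and current i = q':
L i' + R i + \frac{q}{C} = E(t)
% Differentiating once and using q' = i:
L i'' + R i' + \frac{i}{C} = E'(t)
```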
So, there's the equation. There are our two equations. Why don't we put them up in colored chalk. There's the spring equation. And, here's the equation that governs the current, for how the current
flows in that circuit. And now, you can see, again, what does it mean? If this is zero, for example, if I have a dry cell there, or if I have nothing at all in the circuit, then this represents the
passive circuit.
It's just sitting there. It wouldn't do anything at all, except that you've put a charge on the capacitor, and waited, and of course, when you put a charge on there, it's got a discharge, and
discharges through the circuit, and swings back and forth a little bit if it's under-damped until finally towards the end the current dies away to zero. But, what usually happens is that you drive
this passive circuit by putting an effective E in it, and then you want to know how the current behaves. So, those are the two problems, the passive circuit without an applied electromotive force, or
plugging it into the wall, and wanting it to do things. That's the normal state of affairs.
People don't want passive circuits, they want circuits which do things because, okay, that's why they want to solve inhomogeneous equations instead of homogeneous equations. But as I said, you have
to do the homogeneous case first. Okay, you are now officially responsible for this, and I don't care that you haven't had it in physics yet. You will before the next exam. So, I don't even feel guilty.
But, you're going to start using it on the problem set right away. So, it's never too soon to start learning it. Okay, now, the main theorem, I now want to go, so that was just examples to give you
some physical feeling for the sorts of differential equations we'll be talking about. I now want to tell you briefly about the key theorem about solving the inhomogeneous equation. So, here is the main theorem
about solving the inhomogeneous equation.
So, I'm going to write the inhomogeneous equation out. I'm going to make the left-hand side a linear operator, and am going to write the equation as Ly = f(x). That's the inhomogeneous equation. So,
L is the linear operator, second order because I'm only talking about second-order equations. L is a linear operator, and then this is the differential equation. So, here's our differential equation.
It's inhomogeneous because it's go the f(x) on the right hand side. And, what the theorem says is that the solution has the following form, yp + yc.
So, the hypothesis is we've got the linear equation, and the conclusion is that that's what its solution looks like. Now, you already know what y sub c looks like. In other words, if I write this out
in more detail, it would be i.e., department of fuller explanation, -- -- the general solution looks like y equals yp, and then this thing is going to look like an arbitrary constant times y1 plus an
arbitrary constant times y2, where these are solutions of the homogeneous equation. So, Yc looks like this part, and the yp, what's yp? p stands for particular, the most confusing word in this
But, you've got at least four weeks to learn what it means. Okay, yp is a particular solution to Ly = f(x). Now, I'm not going to explain what particular means. First, I'll chat as if you knew what
it meant, and then we'll see if you have picked it up. In other words, the procedure for solving this equation is composed of two steps. First, to find this part. In other words, to find the
complementary solution, in other words, to do what we've been doing for the last week, solve not the equation you are given, but the reduced equation. So, the first step is to find this. The second
step is to find yp.
Now, what's yp? yp is a particular solution to the whole equation. Yeah, but which one? Well, if it's any one, then it's not a particular solution, yeah. I say, unfortunately the word particular here
is not being used in exactly the same sense in which most people use it in ordinary English. It's a perfectly valid way to use it. It's just confusing, and no one has ever come up with a better word.
So, particular means any one solution.
Any one will do. Okay, even these have slightly different meanings. Any questions about this? I refuse to answer them. [LAUGHTER] Now, well, examples of course will make it all clear. But I'd like,
first, to prove the theorem, to show you how simple it is. It's extremely simple if you just use the fact that L is a linear operator. We've got two things to prove. What have we got to prove? Well,
I have to prove two statements, first of all, that all the yp + c1y1 + c2y2 are solutions.
How are we going to prove that? Well, how do you know if something is a solution? Well, you plug it into the equation, and you see if it satisfies the equation. Good, let's do it, proof. L, I'm going
to plug it into the equation. That means I calculate L(yp + c1y1 + c2y2). Now, what's the answer? Because this is a linear operator, and notice, the argument doesn't use the fact that the equation is
second order. It immediately generalizes to a linear equation of any order, whatever-- 47. Okay, this is L(yp) plus L(c1y1 + c2y2). Well, what's that? What's L of the complementary solution?
What does it mean to be the complementary solution? It means when you apply the operator L to it, you get zero because this satisfies the homogeneous equation. So, this is zero. What's L(yp)? Well,
it was a particular solution to the equation. Therefore, when I plugged it into the equation, I must have gotten out on the right-hand side, f(x). So, this is f(x), since yp is a solution to the whole equation.
So, what's the conclusion? That, if I take any one of these guys, no matter what c1 and c2 are, apply the linear operator, L to it, the answer comes out to be f(x). Therefore, this proves that this
shows that these are all solutions because that's what it means. Therefore, they satisfy L(y) = f(x). They satisfy the whole inhomogeneous differential equation, and that's it. Well, that's only half
the story. The other half of the story is to show that there are no other solutions. Okay, so we got our little u(x) coming up again, and he thinks he's a solution. Okay, so, to prove there are no
other solutions, it almost sounds biblical, thou shalt have no other solutions before me, okay.
There are no other solutions except these guys for different values of c1 and c2. Okay, so, u(x) is a solution. I have to show that u of x is one of these guys. How am I going to do that? Easy. If
it's a solution, then L(u), okay, I'm going to drop the x, just to make the notation simpler, like I dropped the x over there. If it's a solution to the whole inhomogeneous equation, then this must come out to
be f(x). Now, what's L(yp)? That's f of x too, by my secret little particular solution I've got in my pocket. Okay, I pull it out, ah-ha, L of yp, that's f of x, too. Now, I'm going to not add them. I'm
going to subtract them. What is L(u - yp)? Well, it's zero.
It's zero because this is a linear operator. This would be L(u) - L(yp). I guess the answer is zero on the right-hand side. And therefore, what is the conclusion? If that's zero, it must be a
solution to the homogeneous equation. Therefore, u - yp is equal to, there must be c1 and c2. I won't give them the generic names. I'll give them a name, a particular one.
I'll put a tilde to indicate it's a particular one. c1 tilde y1 plus c2 tilde y2, so, in other words, for some choice of these constants, and I'll call those particular choices c1 tilde and c2 tilde, it must
be that these are equal. Well, what does that say? It says that u is equal to yp plus c1 tilde, blah, blah, blah, blah, plus c2 tilde, blah, blah, blah, blah, and therefore shows that u wasn't a new solution.
It was one of these. So, u isn't new. So, I should write it down. Otherwise some of you will have missed the punch line. Okay, therefore, u is equal to yp plus c1 tilde y1 plus c2 tilde y2. And, it
shows. This guy who thought he was new was not new at all. It was just one of the other solutions. Okay, well, now, since the coefficient's a constant, apparently we've done half the work. We know
what the complementary solution is because you know how to do those in terms of exponentials and complex exponentials, sines and cosines, and so on.
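The two halves of the proof condense to two displayed computations, the same steps as on the board:

```latex
% Existence: every y_p + c_1 y_1 + c_2 y_2 is a solution, by linearity of L:
L(y_p + c_1 y_1 + c_2 y_2) = L(y_p) + c_1 L(y_1) + c_2 L(y_2) = f(x) + 0 + 0 = f(x)
% Completeness: if L(u) = f(x), then
L(u - y_p) = L(u) - L(y_p) = f(x) - f(x) = 0,
% so u - y_p solves the homogeneous equation, and therefore
u = y_p + \tilde{c}_1 y_1 + \tilde{c}_2 y_2
```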
So, what's left to do? All we have to do is find to solve equations, which are inhomogeneous. All we have to do is find a particular solution, find one solution. It doesn't matter which one, any one.
Just find one, okay? Now, we're going to spend the next two weeks trying to do this. I'll give you various methods. I'll give you a general method involving Fourier series because it's a good excuse
for learning what Fourier series are. But, the answer is that in general, for a few standard functions, it's known how to do this. You will learn those methods for finding those using operators. For
all the others, it's done by a series, or a method involving approximation.
Or, the worse comes to worst, you throw it on a computer and just take a graph and the numerical output of answers as the particular solution. Okay, now before, we are going to start that work, not
today. We'll start it next Monday, and it will last, as I say the next two weeks. And, we will be up to spring break. But, before we do that, I'd like to relate this to what we did for first order
equations because there is something to be learned from that.
Think back to the linear first-order equation, and I'm going to, since from now on for the rest of the period, I'm going to be considering the case for constant coefficients. In other words, this
case of springs or circuits or simple systems which behave like those and have constant coefficients. So, for the linear, first-order equation, there, too, I'm going to think of constant
coefficients. We talked quite a bit about this equation. What did I call the right-hand side? I think we usually called it q(t), right? This is in ancient history. The definition of ancient history
was before the first exam. Okay, now how does that fit into this theorem that I've given you? Remember what the solution looked like.
The solution looked like, remember, you took the integrating factor was e^(kt), and then after you integrated both sides, multiplied through, and then the final answer looked like this, y equaled, it
was e^(-kt) times either an indefinite integral, or a definite integral depending on your preference, q(t), so, x is metamorphosed into t. I gather you've got that, e to the kt plus, what was the
other term? A constant times e^(-kt).
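For reference, the solution quoted above can be written out in one line; the definite-integral form (with a dummy variable $\tau$, as suggested in the lecture) pins down one particular solution:

```latex
y' + ky = q(t) \quad\Longrightarrow\quad
y \;=\; e^{-kt}\!\int q(t)\,e^{kt}\,dt
  \;=\; \underbrace{e^{-kt}\!\int_{0}^{t} q(\tau)\,e^{k\tau}\,d\tau}_{y_p}
  \;+\; \underbrace{c\,e^{-kt}}_{y_c}.
```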
How does this fit into the paradigm I've given you over there for solving the second order equation? Which term is which? Well, this has the arbitrary constant in it. So, this must be the
complementary solution. Is it? Is this the solution to the associated homogeneous equation? What's the associated homogeneous equation? Put zero here. Okay, if you put zero there, what's the
solution? Now, this you ought to know. y' = -ky. What's the solution? e^(-kt). You are supposed to come into this course knowing that, except there's an arbitrary constant in front. So, right, this
is exactly the solution to the associated homogeneous equation, where there is zero here. Then, what's this thing?
This is a particular solution. This is my yp. But that's not a particular solution because this indefinite integral, you know, has an arbitrary constant in it. In fact, it's just that arbitrary
constant. So, it's totally confusing. But, this symbol, you know when you actually solve the equation this way, all you did was you found one function here. You didn't throw in the arbitrary constant
right away. All you needed to do was find one function.
And, even if you really are bothered by the fact that this is so indefinite, make it a particular solution by making it a definite integral: zero here, t there, and then change those t's to dummy t's, t1's or t tildes, or something like that. So, this fits into that paradigm. In other words, I could have done it at that time, but I didn't see the point, because this can be solved directly, whereas, of course, the general second-order homogeneous equation cannot be solved directly, and therefore you have to be willing to talk about what its solutions look like in general.
Now, remember I said there were two different cases; although both of them had the identical-looking solution, their meaning in the physical world was so different that they really should not be considered as solving the same equation. Of the two, perhaps the more important was the case when k is positive, and of course the other is when k is negative. When k is positive, that had the effect of separating the solution into this part, which was a transient, and the other part, which was a steady state. The steady-state solution, that was the yp part of it in that terminology. And the transient part, it was a transient because it went to zero.
If k is positive, the exponential dies regardless of what c is. So, the transient, that's the yc part. It goes to zero as t goes to infinity. The transient depends on, uses, the initial condition,
whatever it is, because that's what determines the value of c. On the other hand, this initial condition makes no difference as t goes towards infinity. All that's left is this steady state solution.
And, all solutions tend to the steady state solution. So, if k is positive, one gets this analysis of the solutions into the sum of one basic solution, and the others, which just die away, have no
influence on this, less and less influence as time goes to infinity.
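To see that transient/steady-state split numerically, here is a minimal sketch (a forward-Euler helper of my own, not from the lecture): for k > 0, solutions with very different initial conditions end up at the same steady state.

```python
def solve_first_order(k, q, y0, t_end=10.0, dt=1e-3):
    """Forward-Euler integration of y' + k*y = q(t)."""
    y, t = y0, 0.0
    while t < t_end:
        y += dt * (q(t) - k * y)
        t += dt
    return y

q = lambda t: 1.0                      # constant input
a = solve_first_order(2.0, q, y0=5.0)  # two very different
b = solve_first_order(2.0, q, y0=-3.0) # initial conditions...
# ...both end up near the steady state q/k = 0.5: the c*e^(-kt)
# transient has died, and the initial condition no longer matters.
```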
For k less than zero, this analysis does not work, because this term, if k is less than zero, goes to infinity or negative infinity, and typically tends to dominate that. So, it's this part that's the important one. It depends on the initial conditions, and the analysis is meaningless. So, the above is meaningless. And now, what I'd like to do is try to see what the analog of that is for
second order equations, and higher order equations. If you understand second-order, that's good enough. Higher order goes exactly the same way. So, the question is, for second-order, let's make it
with constant coefficients plus, I could call it b and k, oh, no, b k, or p.
The trouble is, that wouldn't take care of the electrical circuits. So, I just want to use neutral letters, which suggest nothing; you can make them turn into a circuit, or springs, or yet other examples undreamt of. But these are constants. And I'm going to think of the independent variable as time, so I'll switch back to t. So, y'' + A y' + B y = f(t). There is our equation; A and B are constants. And the question I'm asking, and you can think of either of these two models or others, is: under what circumstances can I make that same type of analysis into steady-state and transient?
Well, what does the solution look like? The solution looks like yp + c1y1 + c2y2. Therefore, to make that look like this, the c1 and c2 contain the initial conditions. This part does not. Therefore,
if I want to say that the solutions look like a steady state solution plus something that dies away, which becomes less and less important as time goes on, what I'm really asking is, under what
circumstances is this part guaranteed to go to zero?
So, the question is, when, in other words, under what conditions on the equation A and B, in effect, is what we are asking. When does c1y1 + c2y2 -> 0 as t --> infinity, regardless of what c1 and c2
are for all c1 c2. Now, here there was no difficulty. We had the thing very explicitly, and you could see k is positive: this goes to zero. And if k is negative, it doesn't go to zero. It goes to
infinity. Here, I want to make the same kind of analysis, except it's just going to take, it's a little more trouble. But the answer, when it finally comes out is very beautiful. So, when are all
these guys going to go to zero? First of all, you might as well just have the definition. So, all the good things that this is going to imply, if this is so, in other words, if they all go to zero,
everything in the complementary solution, then the ODE is called stable.
Some people call it asymptotically stable. I don't know what to call it. Once the analysis is made, I use the identical terminology: c1 y1 plus c2 y2 is called the transient because it goes to zero. This is called the particular solution, the one we will labor so hard over the next two weeks to get. It's the important part. It's the steady-state part. It's what lasts out to infinity after the other stuff has disappeared. So, this is the steady-state solution, okay?
And, the differential equation is called stable. Now, it's of the highest interest to know when a differential equation is stable, linear differential equation is stable in this sense because you
have a control. You know what its solutions look like. You have some feeling for how it's behaving in the long term. If this is not so, each equation is a law unto itself if you don't know. So, let's
do the work. For the rest of the period, what I'd like to do is to find out what the conditions are, which make this true. Those were the equations which we will have a right to call stable. So, when
does this happen, and where is it going to happen? I don't know. I guess, here.
Now, I think the first step is fairly easy, and it will give you a good review of what we've been doing up until now. So, I'm simply going to make a case-by-case analysis. Don't worry, it won't take
very long. What are the cases we've been studying? Well, what do the characteristic roots look like? The roots of the characteristic equation, in other words, remember, there are cases. The first
case is they are real and distinct, r1 not equal to r2, real and distinct. What are the other cases? Well, r1 = r2. And then, there's the case where they are complex. So, I will write it r equals a plus or minus b i.
What do the solutions look like? So, my ham-handed approach to this problem is going to be, in each case, I'll look at the solutions, and first get the condition on the roots. So, in other words, I'm
not going to worry right away about the a and the b. I'm going, instead, to worry about expressing this condition of stability in terms of the characteristic roots. In fact, that's the only way in
which many people know the conditions.
You're going to be smarter. Okay, what do the solutions look like? Well, the general solution looks like c1 e^(r1 t) + c2 e^(r2 t). Okay, so, what's the stability condition? In other words, if an equation happened to have its characteristic roots real and distinct, under what circumstances would it be stable? Would, in other words, all its solutions go to zero? So, I'm talking about the homogeneous equation, the reduced equation, the associated homogeneous equation.
Why? Because that's all that's involved in this. In other words, when I write that, I am no longer interested in the whole equation. All I'm interested in is the reduced equation, the equation where
you turn the f(t) on the right-hand side into zero. So, what's the stability condition? Well, let's write it out. Under what circumstances will all these guys go to zero? If r1 and r2 are negative. Can one of them be zero? No, because then that term would be a constant and would not go to zero. How about this one? Well, in this one, it's (c1 + c2 t) e^(r1 t).
Of course, both of these are the same. I'll just arbitrarily pick one of them. What happens to this as t goes to infinity? Well, this part is rising, at least if c2 is positive. This part is either helping or it's hindering. But, I hope you know what these functions look like, and you know which of them go to zero. They go to zero if r1 is negative: the solution might rise in the beginning, but after a while the decaying exponential wins. Of course, if r1 is equal to zero, what do these guys do? Linear, go to infinity. Well, we are doing okay. How about here? Well, here, it's a little more complicated. The solutions look like e^(at) (c1 cos(bt) + c2 sin(bt)).
Now, this part is a pure oscillation. You know that. It might have a big amplitude, but whatever it does, it does the same thing all the time. So, whether this goes to zero depends entirely upon what that exponential is doing. And, that exponential goes to zero if a is negative. So here, the condition is that a is negative. And now, the only thing left to do is to say it nicely. I've got three cases, and I want to say them all in one breath. So, the stability condition is: the ODE (with zero or f(t) on the right-hand side, it doesn't matter, but psychologically you can put zero there) is stable if...
In case one, this is true. In case two, that's true. In case three, that's true. But that's ugly. Make it beautiful. The beautiful way of saying it is if all the characteristic roots have negative
real parts. If the characteristic roots, the r's or the a plus or minus b i, have negative real part. That's the form in which the electrical engineers will nod their head, tell you, yeah, that's
right, negative real part, sorry. Isn't it right? Is that right here? Yeah. What's the real part of these guys?
They themselves, because they are real. What's the real part of this? Yeah. The only case in which I really had to use real part is when I talk about the complex case because a is just the real part
of a complex number. It's not the whole thing.
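The "negative real parts" condition can be checked mechanically. Here is a sketch (function names are mine) for y'' + Ay' + By = f(t), using the quadratic formula for the characteristic roots; for real A and B the condition works out to the classical shorthand A > 0 and B > 0.

```python
import cmath

def characteristic_roots(A, B):
    """Roots of r^2 + A*r + B = 0 (characteristic equation of
    y'' + A y' + B y = f(t))."""
    d = cmath.sqrt(A * A - 4 * B)
    return (-A + d) / 2, (-A - d) / 2

def is_stable(A, B):
    """Stable <=> every characteristic root has negative real part,
    i.e. every solution of the associated homogeneous equation -> 0."""
    return all(r.real < 0 for r in characteristic_roots(A, B))

# y'' + 2y' + 5y: roots -1 +/- 2i -> stable (damped oscillation).
# y'' + 4y:       roots   +/- 2i -> not stable (oscillation persists).
```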
NSHOF Sailing STEM Program - Navigation
Each of these lessons is a 90-minute lesson plan. They can be broken down fairly easily into two 45-minute plans.
MOI = Method of Instruction (Research, Lecture, Workshop, etc.)
Lesson 1: Reading a Topographic Map and Nautical Chart
Deciphering a Map: Using a local map and a non-local map, show different regions and explain how topographic maps work. The same can be done with charts.
MOI: Presentation requiring student interaction.
Deciphering a Chart: Using a chart, you may do the same thing as with the local map and non-local map.
MOI: Presentation requiring student interaction.
Learning the Key: The Key is a very important part of any map; however, on most topographic maps you will find it lacking. Explain standard symbols using page 1 of a chart book or map pack.
MOI: Student Analysis, Student Discovery.
Contour Lines, Interval, and Depth: Contour lines are a great way to explain 3D versus 2D and charting. Contour lines can't touch, so the rings represent a set elevation interval, representing a 3D surface in 2D. It is important to explain contour lines and their interval so students can more accurately read a map.
MOI: Presentation requiring student interaction, Map/Chart Work.
Local Map/Chart Reference: Using the local map, show how it highlights certain reference points, so that students who live in the area will be able to see them the next time they step outside.
MOI: Presentation, Student Analysis, Map/Chart Work.
The Three Norths: Explains how you make a sphere into a rectangle and why compasses don't point north.
MOI: Map/Chart Work, Presentation.
Declination: Declination is a great way to challenge the modern mind with basic arithmetic. Explain how it is important to work off of true north, and that adding or subtracting the declination of the area matters so as not to compound errors. Change maps often to allow students to get into a routine of checking declination.
MOI: Presentation requiring student interaction, Map/Chart Work.
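As a concrete arithmetic companion to this lesson, here is a hedged sketch (the function names and the east-positive sign convention are my assumptions; always confirm the convention printed on the map itself):

```python
def magnetic_to_true(magnetic_bearing, declination):
    """True bearing from a compass (magnetic) bearing.
    declination is in degrees, taken here as positive east of true north."""
    return (magnetic_bearing + declination) % 360

def true_to_magnetic(true_bearing, declination):
    """Inverse conversion, for setting a compass from a map bearing."""
    return (true_bearing - declination) % 360

# With 14 degrees west declination (-14), a compass reading of 010
# corresponds to a true bearing of 356 -- forgetting the correction
# compounds the error over every leg of a course.
```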
Lesson 2: Using a Compass
Compass Vocabulary: Vocabulary is very important so that students can follow exact instructions. It is also a great way to talk about other vocabulary involved in sailing, boating, and navigation in general.
MOI: Presentation, Quiz, Worksheet.
History of the Compass: The history of navigation, the compass, and other navigation tools, as well as the discovery of longitude, will show how we've come to this modern era of navigation, but also why it can be important not to forget the "old fashioned" way of doing things.
MOI: Presentation.
How a Compass Works: How a compass works will explain where a compass points, how the Earth's magnetic poles work, and why we can use a compass to orient ourselves on a map.
MOI: Presentation, Compass Work.
Bearing vs. Azimuth: Comparing and contrasting two differing ways of reading a compass and giving headings: 360 degrees (azimuth) vs. 45-degree intervals (bearing). This allows you to create an additional challenge for the students.
MOI: Compass Work, Map/Chart Work.
How to Take a Bearing: This lesson, on how to orient the compass and how to take a heading, is important; without this knowledge students cannot complete the course properly.
MOI: Compass Work.
Lesson 3: Triangulation
Points We Can Use to Triangulate: Students should identify what points on the map they could use to triangulate their position.
MOI: Map/Chart Work, Presentation.
Visual Triangulation: This exercise allows students to estimate. This skill is essential for much of navigation, and getting the students to understand how useful estimation can be is important for future lessons. Have students identify three points that they could easily and readily see visually. From there, have them estimate where they would probably be on the map based on what they can see. No bearings or headings should be taken for this exercise.
MOI: Student Analysis, Student Discovery.
Bearing Triangulation: This exercise improves on visual bearings and shows how precision matters in certain navigating situations. It is done with a compass and three points, so that students can check their estimated position and verify or correct it against their visual triangulation.
MOI: Compass Work, Map/Chart Work, Student Analysis, Student Discovery.
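For instructors who want a worked answer key, here is a sketch of the underlying math on a flat x=east / y=north practice grid. It intersects two bearing lines (a third bearing would normally serve as a check); the function name and the degrees-clockwise-from-north convention are my assumptions, not from the lesson plan.

```python
import math

def fix_from_bearings(p1, brg1, p2, brg2):
    """Triangulated position fix from compass bearings (degrees clockwise
    from north) taken to two known landmarks p1 and p2 on a flat grid.
    The observer lies on the line through each landmark along its bearing
    direction; this solves the 2x2 linear system for the intersection."""
    d1 = (math.sin(math.radians(brg1)), math.cos(math.radians(brg1)))
    d2 = (math.sin(math.radians(brg2)), math.cos(math.radians(brg2)))
    ex, ey = p2[0] - p1[0], p2[1] - p1[1]
    det = d2[0] * d1[1] - d1[0] * d2[1]   # zero => bearings are parallel
    if abs(det) < 1e-12:
        raise ValueError("bearings do not intersect")
    s = (d2[0] * ey - d2[1] * ex) / det
    return (p1[0] + s * d1[0], p1[1] + s * d1[1])
```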
Reverse Triangulation: Reverse triangulation is simply starting from a known point and taking headings on three points. This is a good in-classroom or rainy-day exercise: students who can't go outside and truly navigate can still get the idea of where they are and where the points are that they could use.
MOI: Student Analysis, Student Discovery, Map/Chart Work, Compass Work.
Lesson 4: Plotting a Course / Route
How to Set a Course / Route: Demonstrate or explain an orienteering competition. This sets expectations for the students. This exercise also explains how to combine points to create a route or course.
MOI: Presentation, Map/Chart Work, Compass Work.
Heading Off: When an object is in the way, students will need to learn to head off to keep a proper course. If they don't head off, they will add error to their courses.
MOI: Student Discovery, Student Analysis, Compass Work.
Thinking Ahead & Leaving a Trail: Students need to think ahead when planning a course. This lesson helps them start to translate what they have been learning in the class into mathematics. If they think geometrically, they can make their courses much more complicated while keeping the directions simple. They must also learn their audience. In addition, they must figure out a proper way to backtrack their course without getting turned around.
MOI: Student Discovery, Student Analysis, Compass Work, Map/Chart Work.
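The backtracking arithmetic reduces to reciprocal bearings; a one-function hedged sketch (name mine):

```python
def back_bearing(bearing):
    """Reciprocal (return-leg) bearing: add 180 degrees and wrap to
    [0, 360). Walk out on 045, walk back on 225."""
    return (bearing + 180) % 360
```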
Setting a Course: Finally, students will set a course of their choice. This can be checked by peers, by the teacher, or by a combination of both.
MOI: Student Discovery.
Lesson 5: Deciphering a Course / Route
How to Navigate a Course: Explains to students, who have now created the course, how to properly navigate to more than one point.
MOI: Presentation, Student Analysis, Map/Chart Work, Student Discovery.
Common Errors: A brief lesson on some of the common mistakes that take place during navigation and why they occur. This is also a good time to interject historical stories of mistakes made during navigation.
MOI: Student Discovery.
Dead Reckoning: Dead reckoning is a great way to interject some basic algebra. Explain how, based on time, distance, and speed, you can find where you are even without reference points.
MOI: Presentation, Compass Work, Map/Chart Work, Problem Sheet.
Set and Drift: If there is time, you can then work from dead reckoning into set and drift. On land you don't have these things; on the water you need to worry about current and crosswinds. This combines algebra and geometry.
MOI: Presentation, Compass Work, Map/Chart Work, Problem Sheet.
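A problem-sheet-style sketch of both ideas (distance = speed x time, then a current correction). The names, units, and flat-grid simplification are my assumptions; "set" is taken as the direction the current flows toward, "drift" as its speed.

```python
import math

def dead_reckon(x, y, bearing, speed, hours, set_deg=0.0, drift=0.0):
    """Advance a position by speed*hours along a true bearing (degrees
    clockwise from north, x=east / y=north), then add the current's
    displacement (set = direction flowed toward, drift = its speed)."""
    d = speed * hours
    b = math.radians(bearing)
    x += d * math.sin(b)
    y += d * math.cos(b)
    c = math.radians(set_deg)
    x += drift * hours * math.sin(c)
    y += drift * hours * math.cos(c)
    return x, y
```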
Lesson 6: Using a GPS
How a GPS Works: Most students are reliant on Global Positioning Systems (GPS) to figure out where they are. It is time to embrace the fact that this has made navigation significantly easier. It is also important to realize that it is not as reliable, and requires constant upkeep, compared to other methods of navigation, and for students to understand that they are now navigating in three dimensions rather than on the two-dimensional plane of a chart or map. Explaining how a GPS works is a great way to add a technology component to this course, as well as an engineering component.
MOI: Presentation requiring student interaction.
Finding You: Without a "you are here" sign, most students will find themselves lost on a true GPS reader. A chartplotter or car navigation system tells you where you are, but it is not always that easy. The instructor should help students figure out where they are and how to compare their latitude and longitude with their charts.
MOI: Presentation requiring student interaction, Student Discovery, Student Analysis.
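To bridge GPS lat/lon readouts and chart work, the standard haversine formula gives great-circle distance. A sketch (the Earth radius in nautical miles is an approximation); note that one minute of latitude comes out to roughly one nautical mile, which is why charts are gridded the way they are.

```python
import math

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles between two lat/lon
    points given in decimal degrees (standard haversine formula)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 3440.065 * math.asin(math.sqrt(a))  # mean Earth radius ~3440 nm
```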
Inputting a Waypoint: Explain how to input a point or waypoint into the GPS. Each GPS reader is different, and therefore this can vary greatly; becoming familiar with multiple systems will make it easier to teach this section. Allow students to attempt to find where they are in comparison to other points.
MOI: Student Discovery, Student Analysis, Presentation.
Setting a Route: Once students have plotted individual points of interest or waypoints, it will be important for them to figure out how to navigate from point to point to create a route.
MOI: Student Discovery, Presentation requiring student interaction.
Navigating a Route: Once students have set a route, they must figure out how to actually navigate that route using both GPS and compass.
MOI: Student Discovery.
solving y=tanh(y-x) in terms of x
March 8th 2010, 10:02 PM #1
Mar 2010
solving y=tanh(y-x) in terms of x
Is it possible to obtain analytic solution for y=tanh(y-x) in terms of x?
Tanh function can be used to model input-output voltage characteristics of op-amp. The above equation describes the case, in which x represents input voltage to negative input, & y is the output
& the positive input.
Thank you very much for your help!
Probably not. $\tanh(x)= \frac{e^x - e^{-x}}{e^{x} + e^{-x}}$, so
$\tanh(x-y)= \frac{e^{x-y} - e^{y-x}}{e^{x-y} + e^{y-x}} = \frac{e^x e^{-y} - e^{-x} e^y}{e^x e^{-y} + e^{-x} e^y}$
I don't see any good way to separate the "x" and "y" terms, and even if you could, you would still be left with y both in the exponential and linear parts, so you would probably need Lambert's W function to solve it.
Why not solve for x first: $x=y-\text{arctanh}(y)$, and then state that the inverse series for that function, within its radius of convergence, is the analytic expression for the inverse? I do have a slight problem when I attempt to extract the inverse series in Mathematica using the code:
myInverse = Normal[InverseSeries[Series[y - ArcTanh[y], {y, 0, 10}],x]]
which gives me:
$(-3)^{1/3} x^{1/3}+\frac{3 x}{5}-\frac{9}{175} (-3)^{2/3} x^{5/3}+\frac{2}{175} (-3)^{1/3} x^{7/3}$
but the exponents on -3 result in complex numbers. Something ain't right.
Last edited by shawsend; March 9th 2010 at 04:01 AM. Reason: put it in terms of x
Thanks guys for the help!
I asked Dr. Math, & the reply was it's impossible to analytically solve y in terms of x.
Meeting Details
For more information about this meeting, contact Mary Anne Raymond.
Title: Definability and Randomness
Seminar: Job Candidate Talk
Speaker: Jan Reimann, Job Candidate
In mathematical logic, one often tries to classify objects by their descriptive complexity, for example, how many quantifier changes are needed to define a subset of the natural numbers. In the
theory of dynamical systems, one uses measure theoretic or topological concepts like entropy to describe complex, i.e. random behavior. Both approaches can be combined to define randomness for
individual objects such as infinite binary sequences. I will discuss the resulting interplay between measure theory and definability. I will argue that the view from logic opens up new and perhaps
unexpected perspectives on the concept of randomness regarding, for example, the role of infinity, the possibility of randomness extraction, or the study of fractal structures.
Room Reservation Information
Room Number: MB114
Date: 03 / 18 / 2010
Time: 02:00pm - 03:00pm
Quadratic Formula Tutors
Laguna Beach, CA 92651
AP Calculus, AP English
...Independent schools look for students who can think critically, and who trust their own brains. They're looking for fluency in vocabulary, mathematics, and reading skills. To measure these skills,
the ISEE contains four sections: a 20-minute Verbal Reasoning...
Offering 10+ subjects including algebra 2
Division Algorithm?
hi rhymin
Sorry about the earlier muddle over the | symbol. This next bit is my excuse. Skip it if you like but it makes me feel better to have an excuse.
EXCUSE. It must be 45 years since I last saw that symbol in some number theory at Uni. What I should have done is looked it up rather than relying on a faulty memory. But I think the symbol is poorly chosen.
REASON. If we want to put "42 divided by 6" into symbols we can say 42 ÷ 6 or 42/6, or even write it as the fraction 42 over 6.
Now, division must have come long before the concept of "is it divisible by", so someone must have made up that definition. Now he/she could have defined " divides " or " is divisible by ". The result is mathematically equivalent; it's all in the way you express the property. So why, oh why, did this person choose " divides ", which puts the first number second and the second number first, and, just to be really confusing, invent the symbol | for it, when | is already heavily used in maths to mean other things, and looks a lot like the symbol for divided by \ ???
No wonder I got mixed up.
END OF EXCUSE.
So, to make amends, here is a method for creating the linear combination that doesn't require a computer.

We want integers s and t so that 33s + 260t = 1.

Divide the larger number by the smaller (260/33): 260 = 7 x 33 + 29
Divide the larger number by the smaller (33/29): 33 = 1 x 29 + 4
Divide the larger number by the smaller (29/4): 29 = 7 x 4 + 1

When one of the 'coefficients' is 1 you can stop this process and jump to simultaneous equations.

Solving gives t = -25 and s = 197 (check: 33 x 197 - 260 x 25 = 6501 - 6500 = 1).

This pair is different from bobbym's pair, but both sets of answers for s and t work. There are, in fact, an infinite number of solutions, so best wishes to any teacher who has to check them all.
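For anyone who does want to check a whole classroom's worth of (s, t) pairs, here is a hedged sketch of the same computation as an iterative extended Euclidean algorithm (my code, not from the thread):

```python
def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) = s*a + t*b -- the same
    back-substitution Bob describes, run forward instead."""
    s0, t0, s1, t1 = 1, 0, 0, 1
    while b:
        q, (a, b) = a // b, (b, a % b)
        s0, s1 = s1, s0 - q * s1
        t0, t1 = t1, t0 - q * t1
    return a, s0, t0

g, s, t = extended_gcd(33, 260)
# This yields one valid pair; Bob's (197, -25) differs from it by a
# multiple of (260, -33), since the solutions form an infinite family.
```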
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Prove that f(x) is an odd function?
Prove that $f(x) = \left\{ \begin{array}{cc}x^4\tan \frac{\pi x}{2}, &\mbox{if}\ |x|<1\\x|x|, &\mbox{if}\ |x|\geq 1\end{array}\right.$ is an odd function.
We want $f(x)=-f(-x) \; \forall x \in \mathbb{R}$. If $|x|<1$, then $f(-x) = (-x)^4 \tan(-\pi x/2) = x^4 (-\tan (\pi x/2)) = -f(x)$. If $|x|\geq 1$, then $f(-x) = (-x)|-x| = -x|x| = -f(x)$, since $|-x|=|x|$. So in all cases $-f(-x)=f(x)$.
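A quick numeric spot-check (my own snippet; it complements rather than replaces the algebraic proof above):

```python
import math

def f(x):
    """The piecewise function from the problem statement."""
    if abs(x) < 1:
        return x ** 4 * math.tan(math.pi * x / 2)
    return x * abs(x)

# f(-x) should equal -f(x) in both branches, including the boundary
# |x| = 1, which is handled by the second branch.
```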
Tyler Park, NJ Math Tutor
Find a Tyler Park, NJ Math Tutor
...I taught the courses that follow Algebra 1 for five more years. I taught Algebra 2 for 5 years, and I have also taught the courses that precede and follow it. I taught high school math for eight years (through June 2013), and I taught calculus for one of those years.
10 Subjects: including algebra 1, algebra 2, calculus, prealgebra
...It is important to review basics to see where a student's weakness (if any) arises, then to explain and drill on exercises to remove that weakness. A planned out tutoring program will remove
weaknesses and add clarity to current topics so the student can gain confidence and move to the next leve...
19 Subjects: including SAT math, discrete math, SPSS, probability
...Please send feedback of successful tutoring sessions! Cancelling a lesson less than twenty four hours before it is scheduled to take place will incur a one hour fee. Not showing up on time is
considered a cancellation.I have taught genetics to middle/high schoolers through the New York Academy of Sciences.
22 Subjects: including prealgebra, ACT Math, SAT math, precalculus
...I have taught classes in middle schools and high schools for grades 7-11 and have run college recitation sessions. I began teaching test prep for the Princeton Review and now teach
independently and also sub-contract through elite Manhattan tutoring agencies (they charge $200+ p.h. for me). I h...
22 Subjects: including ACT Math, precalculus, trigonometry, statistics
...Recently I worked in the after-school STEM program as a New York State Academy Fellow. In the STEM program, I taught science, technology, engineering and math via robotics that led my students
to improve in science, math and confidence by showcasing their own creativity. I taught Introductory G...
37 Subjects: including prealgebra, chemistry, biology, algebra 2
Related Tyler Park, NJ Tutors
Tyler Park, NJ Accounting Tutors
Tyler Park, NJ ACT Tutors
Tyler Park, NJ Algebra Tutors
Tyler Park, NJ Algebra 2 Tutors
Tyler Park, NJ Calculus Tutors
Tyler Park, NJ Geometry Tutors
Tyler Park, NJ Math Tutors
Tyler Park, NJ Prealgebra Tutors
Tyler Park, NJ Precalculus Tutors
Tyler Park, NJ SAT Tutors
Tyler Park, NJ SAT Math Tutors
Tyler Park, NJ Science Tutors
Tyler Park, NJ Statistics Tutors
Tyler Park, NJ Trigonometry Tutors
Nearby Cities With Math Tutor
Bowling Green, NY Math Tutors
Captree Island, NY Math Tutors
Crotona Park, NY Math Tutors
Ellis Island, NJ Math Tutors
Five Corners, NJ Math Tutors
Greeley Square, NY Math Tutors
Heer Park, NY Math Tutors
Lake Gardens, NY Math Tutors
Lake Swannanoa, NJ Math Tutors
North Bergen Math Tutors
Oak Island, NY Math Tutors
Pine Cliff Lake, NJ Math Tutors
Shady Lake, NJ Math Tutors
Silver Lake, NJ Math Tutors
West New York Math Tutors
Briarcliff, PA Trigonometry Tutor
Find a Briarcliff, PA Trigonometry Tutor
...With dedication, every student succeeds, so don’t despair! Learning new disciplines keeps me very aware of the struggles all students face. Beyond academics, I spend my time backpacking,
kayaking, weightlifting, jogging, bicycling, metalworking, woodworking, and building a wilderness home of my own design.
14 Subjects: including trigonometry, calculus, physics, geometry
...My expertise allows me to quickly identify students' problem areas and most effectively address these in the shortest amount of time possible. For the SAT, each student receives a 95-page
spiral-bound book of strategies, notes, and practice problems that I created from scratch after a rigorous a...
19 Subjects: including trigonometry, calculus, statistics, geometry
...Latoya has been heavily involved in advocating education and excellence in young people through various avenues. These avenues include FOCUS--Facilitating Opportunity and Climate for
Under-represented Students--a departmental program at the University of Pittsburgh, where she tutored college stu...
13 Subjects: including trigonometry, chemistry, geometry, biology
Hello, my name is David, and I tutor in math and physics. I have a bachelor's in mathematics (with a minor in physics) from Rutgers University (go RU!) and a master's in mathematics education from
the Graduate School of Education at Rutgers. It was at college that I first began tutoring and discov...
16 Subjects: including trigonometry, English, physics, calculus
...Liberal use of concrete examples. May require review of arithmetic concepts, including fractions and decimals. I can provide that.
32 Subjects: including trigonometry, chemistry, English, biology | {"url":"http://www.purplemath.com/Briarcliff_PA_Trigonometry_tutors.php","timestamp":"2014-04-18T00:48:02Z","content_type":null,"content_length":"24299","record_id":"<urn:uuid:04e50979-4abb-4c92-b764-689fd9292fe2>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00082-ip-10-147-4-33.ec2.internal.warc.gz"} |
In the present paper we consider the existence of non-trivial classical and weak Lq-solutions of the Cosserat spectrum $$ \Delta \underline{u} = a\, \nabla \operatorname{Div} \underline{u}, \qquad \underline{u}\,\Big|_{\partial G}=0 $$ where G is a bounded or an exterior domain with sufficiently smooth boundary. This problem was first investigated by Eugène and François Cosserat. It is a special case of the Lamé equation and describes the displacement of a homogeneous isotropic linear static elastic body without exterior forces. We can prove that a = 1 is an eigenvalue of infinite multiplicity and a = 2 is an accumulation point of eigenvalues of finite multiplicity. E. and F. Cosserat (1900) studied the classical Cosserat spectrum for certain types of domains like a ball, a spherical shell or an ellipsoid. General results are due to Mikhlin (1973), who investigated the Cosserat spectrum for n=3 and q=2, and Kozhevnikov (1993), who treated bounded domains in the case n=3 and q=2. Kozhevnikov's proof is based on the theory of pseudodifferential operators. Faierman, Fries, Mennicken and Möller (2000) gave a direct proof for bounded domains, n>=2 and q=2. Michel Crouzeix (1997) gave a simple proof for bounded domains, n=2,3 and q=2. In this paper we use the idea of Crouzeix to prove the results for bounded and exterior domains, n>=2 and 1<q. For the Lq-solutions of eigenvalues a in R\{1,2} we can prove the existence of higher (classical) derivatives. Furthermore they do not depend on q. a = 2 is an accumulation point of eigenvalues of the classical Cosserat spectrum, too, and a = 1 is also a classical eigenvalue. As an approach we searched for a relationship of Green's function of the Laplacian to the reproducing kernel in Bergman spaces. We couldn't prove that directly. But after
solving the Cosserat spectrum in another way we can prove the relationship indirectly. | {"url":"http://opus.ub.uni-bayreuth.de/opus4-ubbayreuth/solrsearch/index/search/searchtype/all/start/0/rows/10/doctypefq/book/subjectfq/Au%C3%9Fengebiet","timestamp":"2014-04-20T01:13:41Z","content_type":null,"content_length":"12875","record_id":"<urn:uuid:45d1f461-6dc2-4815-be67-20b523f6fe99>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00424-ip-10-147-4-33.ec2.internal.warc.gz"} |
Trouble Finding Derivative
November 16th 2011, 09:00 PM #1
Junior Member
Sep 2011
Trouble Finding Derivative
I've been working on optimization problems and have been having problems with this one. The only thing that I'm confused about is what's going on when I find the derivative of the function:
$\frac{d}{dx}[(x+d)^2(\frac{w^2}{x^2} +1)]$
Also, in case it matters, $d=24$ and $w=6$, but in the example, the derivative is found with respect to the variables rather than their respective values.
My attempt:
$[(2)(x+d) \cdot (1+1) \cdot (\frac{w^2}{x^2} +1)] + [(x+d)^2 \cdot (\frac{2wx^2-[2xw^2]}{x^4})]$
$[(2)(x+d) \cdot (2) \cdot (\frac{w^2}{x^2} +1)] + [(x+d)^2 \cdot (\frac{2wx-2w^2}{x^3})]$
$[(4)(x+d) \cdot (\frac{w^2}{x^2} +1)] + [(x+d)^2 \cdot (\frac{2wx-2w^2}{x^3})]$
From this point, I could do stuff like factor out a $2(x+d)$, but it looks like mine won't be very easy to solve for zero.
This is what the derivative is supposed to be, which is easier to solve for zero:
What am I doing incorrectly?
Re: Trouble Finding Derivative
1. You have erroneously introduced an extra factor of two in the first term.
2. The second term should be:
$(x+d)^2 \frac{d}{dx}\left( \frac{w^2}{x^2}+1\right) =(x+d)^2 \frac{(-2)w^2}{x^3}$
Re: Trouble Finding Derivative
$[(2)(x+d) \cdot (1+1) \cdot (\frac{w^2}{x^2} +1)] + [(x+d)^2 \cdot (\frac{2wx^2-[2xw^2]}{x^4})]$
Where did that "(1+1)" come from? Surely you didn't differentiate "(x+d)" and get "(1+1)"? "d" is a constant. The derivative of x+d with respect to x is 1. Also you appear to have tried to write $\frac{w^2}{x^2}+ 1= \frac{w^2+ x^2}{x^2}$ and use the quotient rule - but you have done that incorrectly. Better is to write $\frac{w^2}{x^2}+ 1= w^2x^{-2}+ 1$, so the derivative is $-2w^2x^{-3}= \frac{-2w^2}{x^3}$.
Re: Trouble Finding Derivative
Perhaps if I hadn't neglected the $\frac{d}{dx}$ in front, things would've come together a lot quicker! Lesson learned.
Thanks a lot for the help!
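As a side check (not part of the original thread), the corrected derivative can be verified numerically in plain Python; the function names below are ours. Setting $f'(x) = 2(x+d)\left(\frac{w^2}{x^2}+1\right) - \frac{2w^2(x+d)^2}{x^3} = 0$ and dividing out $2(x+d)$ gives $\frac{w^2}{x^2}+1 = \frac{w^2(x+d)}{x^3}$, which simplifies to $x^3 = d\,w^2 = 864$ for $d=24$, $w=6$.

```python
# Numeric check of the derivative discussed above, with d = 24, w = 6.
d, w = 24.0, 6.0

def f(x):
    return (x + d)**2 * (w**2 / x**2 + 1)

def fprime(x):
    # product rule: 2(x+d)(w^2/x^2 + 1) + (x+d)^2 * (-2 w^2 / x^3)
    return 2*(x + d)*(w**2/x**2 + 1) - 2*w**2*(x + d)**2 / x**3

# sanity-check the analytic derivative against a central finite difference
h = 1e-6
for x in (3.0, 7.0, 12.0):
    approx = (f(x + h) - f(x - h)) / (2*h)
    assert abs(fprime(x) - approx) < 1e-3

# locate the zero of f' by bisection (f' is negative at 1, positive at 20)
lo, hi = 1.0, 20.0
for _ in range(80):
    mid = 0.5*(lo + hi)
    if fprime(mid) < 0:
        lo = mid
    else:
        hi = mid
x_crit = 0.5*(lo + hi)
print(round(x_crit, 4))  # about 9.5244, the cube root of d*w^2 = 864
```

The bisection agrees with the algebraic reduction: the minimizer satisfies $x^3 = d\,w^2$, independent of which equivalent form of the derivative is used.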
November 16th 2011, 10:00 PM #2
Grand Panjandrum
Nov 2005
November 17th 2011, 05:13 AM #3
MHF Contributor
Apr 2005
November 17th 2011, 08:29 AM #4
Junior Member
Sep 2011 | {"url":"http://mathhelpforum.com/calculus/192088-trouble-finding-derivative.html","timestamp":"2014-04-19T07:57:38Z","content_type":null,"content_length":"49023","record_id":"<urn:uuid:abdf3165-3a09-4287-89d8-a3c4688621c4>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00073-ip-10-147-4-33.ec2.internal.warc.gz"} |
Francesco Paolo Cantelli
1875 - 1966
Francesco Cantelli was an Italian mathematician who made contributions to the theory of probability.
Full MacTutor biography [Version for printing]
List of References (9 books/articles)
A Poster of Francesco Cantelli
Mathematicians born in the same country
Show birthplace location
Previous (Chronologically) Next Main Index
Previous (Alphabetically) Next Biographies index
JOC/EFR © August 2006
Previous (Alphabetically) Next Biographies index | {"url":"http://www-gap.dcs.st-and.ac.uk/~history/Mathematicians/Cantelli.html","timestamp":"2014-04-17T06:46:58Z","content_type":null,"content_length":"4404","record_id":"<urn:uuid:f98d31f7-7f00-40f3-8f71-fc2f4259a6e8>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00582-ip-10-147-4-33.ec2.internal.warc.gz"} |
eFunda: Glossary: Units: Angle: Arc Minute (minute of Arc)
Arc Minute (minute of Arc) (') is a unit in the category of Angle. It is also known as minutes of angle. This unit is commonly used in the INT unit system. Arc Minute (minute of Arc) (') has a
dimension of a. It can be converted to the corresponding standard SI unit rad by multiplying its value by a factor of 0.000290888208.
Note that the seven base dimensions are M (Mass), L (Length), T (Time), Θ (Temperature), N (Amount of Substance), I (Electric Current), and J (Luminous Intensity).
Other units in the category of Angle include Arc Second (second of Arc) ("), Circumference, Degree (°), Gon (gr), Grade (gr), Hour of Arc, Milliradian (mrad), Percent (%), Quadrant (angle), Radian
(rad), Revolution (rev), and Sign.
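The conversion factor quoted above is exactly π/10800 radians per arc minute (60 arc minutes per degree, π radians per 180 degrees). A minimal sketch of the conversion — the function name is ours, not part of the eFunda glossary:

```python
import math

# 1 arc minute = (1/60) degree = pi/10800 rad
ARCMIN_TO_RAD = math.pi / 10800

def arcmin_to_rad(arcmin):
    """Convert arc minutes to radians."""
    return arcmin * ARCMIN_TO_RAD

# agrees with the glossary's rounded factor 0.000290888208
assert abs(ARCMIN_TO_RAD - 0.000290888208) < 1e-11

print(round(arcmin_to_rad(60), 6))  # 60 arc minutes = 1 degree, about 0.017453 rad
```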
Related Pages | {"url":"http://www.efunda.com/glossary/units/units--angle--arc_minute_minute_of_arc.cfm","timestamp":"2014-04-20T23:28:42Z","content_type":null,"content_length":"31674","record_id":"<urn:uuid:d69a5050-333d-4153-be1d-45624125c6f8>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00611-ip-10-147-4-33.ec2.internal.warc.gz"} |