a very elementary JavaScript question
04-13-2013, 02:27 AM #1
Registered User
Join Date
Apr 2013
a very elementary JavaScript question
<!DOCTYPE html>
<html>
<body>
<p>Given that y=5, calculate x=y+2, and display the result.</p>
<button onclick="myFunction()">Try it</button>
<p id="demo"></p>
<script>
function myFunction()
{
var y=5;
var x=y+2;
var demoP=document.getElementById("demo");
demoP.innerHTML="x=" + x;
}
</script>
</body>
</html>
Can anybody please explain to me why, in the line "demoP.innerHTML="x=" + x;", it has to be written as "="x=" + x;"? I got this example from w3schools, but nobody explains anything about it.
Please help!!
The "x=" is a string to be displayed in the demo.
The x variable is the result of the calculation.
Take out either temporarily and view the results.
In one case you will see "x=" only and in the other you will see "7" only.
The nice thing about the "Try it" page is that you can make quick changes and see the results immediately.
thx JMRKER
I have tried taking out either the "x=" or the "+x", and the outputs are "x=" and "7" respectively.
Is there any importance for writing "x=" in the function? Can I omit it?
There is another example:
<!DOCTYPE html>
<html>
<body>
<p>Click the button to create a variable, and display the result.</p>
<button onclick="myFunction()">Try it</button>
<p id="demo"></p>
<script>
function myFunction()
{
var carname="Volvo";
document.getElementById("demo").innerHTML=carname;
}
</script>
</body>
</html>
As I have seen, the function (sorry if I don't name the parts correctly) here doesn't have anything in quotation marks after innerHTML.
Therefore, is the "x=" necessary in the function? If so, what's the importance of it? Does it affect anything? From what I have tried, the only factors that affect the calculation are y and 2. If I change these 2, the output is affected. To my understanding, x itself is enough for the output. Why bother writing "x="?
Sorry for my clumsy reasoning, I haven't touched science for more than 10 years. I hope I haven't annoyed you.
The "x=" portion is just a string. Nothing more, nothing less.
It is in the example to show the user what the new value of x (the variable) is (sum of 5+2).
The "x=" has NO EFFECT on the actual calculation of x = 5+2;
Change the string to a different display and see, for example, that you
could substitute as "The old value of x = 5 has changed with the addition of +2 to a new value of x="+x;
BTW: You should enclose your scripts between [ code] and [ /code] tags (without the spaces)
to make it easier to see your program and preserve your formatting.
can i ask you another question?
<!DOCTYPE html>
<html>
<body>
<p>Given that x=10 and y=5, calculate x%=y, and display the result.</p>
<button onclick="myFunction()">Try it</button>
<p id="demo"></p>
<script>
function myFunction()
{
var x=10;
var y=5;
x%=y;
var demoP=document.getElementById("demo");
demoP.innerHTML="x=" + x;
}
</script>
</body>
</html>
Given that x=10 and y=5, calculate x%=y, and display the result.
Why x=0?
when we use the operator %=, the calculation is x=x%y; given x=10, y=5,
shouldn't the calculation be x=5/100*10?
why is it 0?
The % character used here is a modulo operation, NOT the percentage as you seem to be expecting.
Modulo operations are like division, but only the integer remainder is returned as a result.
10 % 5 is 0
11 % 5 is 1
12 % 5 is 2
13 % 5 is 3
14 % 5 is 4
15 % 5 is 0
I would suggest you read about the Math operations
like +, -, *, /, %, Math.random(), Math.pow(), etc.
|
{"url":"http://www.webdeveloper.com/forum/showthread.php?276253-a-very-elementary-JavaScript-question&p=1261373","timestamp":"2014-04-19T08:40:23Z","content_type":null,"content_length":"82222","record_id":"<urn:uuid:67217d21-7c2c-4075-9b3d-5bdbbc8c8d29>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Panorama City Calculus Tutor
Find a Panorama City Calculus Tutor
Hello students and parents! My name is Johanna and I am a recent college graduate looking for opportunities to help students who are currently struggling with math. I hold a Bachelor's degree in
Mathematics with an emphasis in Secondary Teaching.
7 Subjects: including calculus, geometry, algebra 1, algebra 2
...Because of my heavy course load, transfer credits, and perseverance, I was able to graduate from this 4-year institution in 2 years and earned a B.A. in Music (emphasis on History/Theory/Jazz).
In my last and most difficult semester, I earned a 4.0 GPA. I recently graduated from Franklin W. Ol...
18 Subjects: including calculus, chemistry, algebra 2, SAT math
...I help students to develop concepts in functions including linear, quadratic, polynomial, exponential, and logarithmic ones, and how to match them with graphs. As an Engineering major, I took
Calculus and Engineering Math during my undergraduate years. I took Advanced Engineering Math in graduate school.
35 Subjects: including calculus, chemistry, physics, geometry
...I finished calculus 4 at Cerritos College before I transferred to UCLA. I also love to do origami. I learn new things very quickly.
7 Subjects: including calculus, geometry, algebra 1, algebra 2
...I have taught several sections of Calculus I and II at the university level and understand that often the hardest part of calculus is mastering all the algebra! Oftentimes, students have
missed something in the classes leading up to calculus, and the calculus ends up being confusing. Once the algebra is done correctly, the calculus becomes much more rewarding.
12 Subjects: including calculus, geometry, algebra 1, algebra 2
|
{"url":"http://www.purplemath.com/Panorama_City_calculus_tutors.php","timestamp":"2014-04-17T21:42:52Z","content_type":null,"content_length":"24040","record_id":"<urn:uuid:b9f769cb-c6af-492f-8dab-efb347096e91>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[FOM] From theorems of infinity to axioms of infinity
Monroe Eskew meskew at math.uci.edu
Thu Mar 21 15:56:25 EDT 2013
On Mar 21, 2013, at 3:31 AM, Nik Weaver <nweaver at math.wustl.edu> wrote:
> Again, my point is that number theoretic systems provide
> a far more comfortable fit for core mathematics than does set theory, which postulates a vast realm of nonseparable pathology about which
> it is unable to answer even the simplest questions.
On the other hand, there is a large collection of number theoretic questions, namely consistency questions, which a low-strength number theoretic system is unable to answer. At this time it seems only the methods of set theory are capable of providing some answers.
> I will stand on my assertion that this phenomenon raises serious
> doubt about the value of set theory as a foundational system, as
> the ideal arena for situating mainstream mathematics.
Why is a less powerful system better? Certainly set theory can serve as a foundation, and it also brings with it some extra power to answer even number theoretic questions. What's so bad about pathologies anyway? They easily coexist with non-pathological things.
> It sounds like your position is that we should be interested in set
> theory because we can use it to prove independence results, and the
> independence results are important because they answer questions
> about set theory. My position is that core mathematics can be
> detached from this feedback loop with no essential loss to it.
I tried to argue that it's not entirely a feedback loop, that many questions addressed by set theory come from mainstream or classical mathematics. Of course some feedback is inevitable, as a successful theory or method will generate questions internal to it.
> Again, you subtly misquote me to strengthen your response. The phrase
> I used was "little or no relevance", not a flat "irrelevant". You don't
> win this argument by producing examples where set theory has some very
> small, marginal relevance to mainstream mathematics. You have to find
> examples where the relevance is central.
> Not to put too fine a point on it, but I'm fairly familiar with some
> of the examples you cite. These are not central questions in
> mainstream areas. They are very marginal.
You'll have to explain why your use of "small" and "marginal" is objective. Was George Elliott's classification program for C*-algebras marginal? Results, including anti-classification results, were found by descriptive set theorists. (I am stealing this example from Matt Foreman.) It seems like you will have to claim that in some cases, the hard work of esteemed mathematicians (some of whom are not set theorists) is marginal and unimportant because set theory has something to say about it. Perhaps it is your viewpoint that is marginal? We need some objective standards here.
More information about the FOM mailing list
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/2013-March/017141.html","timestamp":"2014-04-18T06:19:25Z","content_type":null,"content_length":"5819","record_id":"<urn:uuid:89005142-5ec7-4179-966c-a9e4c3ea71e6>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chemical Excelets: Interactive Excel Spreadsheets
for General Chemistry
These interactive spreadsheets (aka - simulations) are used in-class and as out-of-class projects. Through the use of numerical experimentation and "what if" scenarios, we have a powerful discovery
learning tool for students using readily available off-the-shelf software. For a discussion on using Excelets, see Chemical Spreadsheet Investigations: Empowering Student Learning of Concepts via
Camouflaged Mathematical Exploration.
How do I interact with the spreadsheet?
Interactivity on the spreadsheets comes through cells with a yellow background, where a number can be entered by typing in a value, and through sliders, where one can drag the center bar or click on the
terminal arrows. Likewise, the spinner works by clicking on the arrows. You may also see check boxes or option buttons which perform the indicated task. A response will occur on the
graph and/or data by adjusting any of these items. List boxes are also used to select information. Comment cells (red triangle in upper right corner) are used to deliver information as well.
The Collection
Here are a variety of Excelets (hold the cursor over the link for a brief description) and some pdf handouts for topics in General Chemistry, including the laboratory. Some of these are simple
calculation aids, while others explore concepts by bringing the mathematics alive. A number of the more recent Excelets, marked with an *, consider the influence of random and/or systematic error.
(Note - In Excel, you may need to resize these spreadsheets to fit your screen by going to View on the menu bar and selecting Zoom.)
For information on designing Excelets and links to mathematical modeling of data support materials, see Developer's Guide to Excelets. For more Excelets in materials science, especially for materials,
measurement and error, plus further exploration of the periodic table and solid state stuff, see the MatSci Excelets page.
Other good sites for chemists:
Interactive Spreadsheet Demonstrations for Introductory, Inorganic, Analytical, and Physical Chemistry (new URL)
Interactive Spreadsheets in JCE WebWare
Please e-mail any corrections, modifications, suggestions, or questions.
Rate and comment on this collection in Merlot.
Scott A. Sinex Prince George’s Community College 4/2014
|
{"url":"http://academic.pgcc.edu/~ssinex/excelets/chem_excelets.htm","timestamp":"2014-04-21T12:09:46Z","content_type":null,"content_length":"34252","record_id":"<urn:uuid:410dd4a5-cb6c-43f7-bf60-46147c1c2c3f>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - Finite hyperbolic universe and large scale structure patterns
hellfire Apr16-04 04:13 PM
Finite hyperbolic universe and large scale structure patterns
This paper :
Hyperbolic Universes with a Horned Topology and the CMB Anisotropy
...press release:
proposes a universe with the shape of a horn. This is a hyperbolic space with negative curvature.
The paper mentions an interesting issue: the relation between finite hyperbolic spaces and chaos. Finite hyperbolic spaces generate chaotic mixing of trajectories, leading to fractal structure
formation. See e.g.:
Chaos and order in a finite universe
A fractal nature of large scale structures was already suggested due to the self-similarity of the distribution of galaxies and clusters (similar correlation functions AFAIK).
My knowledge of chaotic systems is almost non-existent, thus I would like to know qualitatively why finite hyperbolic spaces have such properties in relation to chaos while infinite flat spaces do
not (although you can find an interesting remark in the previously cited paper about the cosmological constant in infinite flat spaces).
But there is another thing that bothers me. In the paper it is claimed that the CMB data would not reflect the negative curvature. But why? Usually it is assumed that the angular scale of the first
peak of the CMB anisotropies gives a measure of the curvature.
A universe infinitely long but with finite volume: it reminds me of a surface called Gabriel's Horn.
And this thing called Picard topology must be an invention of F. Steiner. I did a Google search on "Picard topology", and only 5 entries appeared, all 5 related to this horn-shaped-universe theory.
Is this like a universe that grew from a singularity infinitely in the past in an accelerated manner?
|
{"url":"http://www.physicsforums.com/printthread.php?t=21017","timestamp":"2014-04-17T09:58:30Z","content_type":null,"content_length":"6513","record_id":"<urn:uuid:4a2e1ccf-3a2f-4527-82c1-2530f63666be>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Simultaneous disclosure
What is the name of the protocol that lets two parties simultaneously get the output of a joint computation (within epsilon of certainty of each other)? That is, they don't get it all at once;
each party gets a little bit at a time, but no party ever has more than a fraction of a bit more than the other.
|
{"url":"http://www.physicsforums.com/showthread.php?p=4199688","timestamp":"2014-04-19T04:38:12Z","content_type":null,"content_length":"55507","record_id":"<urn:uuid:42cb5038-1606-48d1-b6a3-e9231b0e74b2>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
|
June 96 Game 2
Well, I am really stuck on this game. I just got my LG Ultimate Setups, but was surprised and disappointed that it doesn't go over each answer! And the setup for this game is less than acceptable, and
I don't really understand how they did it. Could you guys please help with this question? I didn't get up to it in the LG Bible; do they have similar questions like this one in there, so I can learn
how to do it? Thanks...
|
{"url":"http://www.lawschooldiscussion.org/index.php?topic=7339.msg58903","timestamp":"2014-04-21T07:29:29Z","content_type":null,"content_length":"47542","record_id":"<urn:uuid:d9f31a8c-2acd-4ce2-a94f-df2bd125155e>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Product topology
February 28th 2009, 09:24 PM
Product topology
Let X and Y be topological spaces, and assume that A⊂X and B⊂Y. Then the topology on AxB as a subspace of the product XxY is the same as the product topology on AxB, where A has the subspace topology inherited from X and B has the subspace topology inherited from Y.
March 1st 2009, 05:04 AM
Let X and Y be topological spaces, and assume that A⊂X and B⊂Y. Then the topology on AxB as a subspace of the product XxY is the same as the product topology on AxB, where A has the subspace topology inherited from X and B has the subspace topology inherited from Y.
Let $(X,T_1)$ and $(Y,T_2)$ be topological spaces. A basis element of the product topology on $A \times B$ has the form $V_1 \times V_2$, where $V_1, V_2$ are open sets in the subspace topologies $(A,S_1)$ and $(B,S_2)$, respectively.
Since $S_1=\{A \cap U \mid U \in T_1\}$ and $S_2=\{B \cap V \mid V \in T_2\}$, we have $V_1 = A \cap U_1$ and $V_2 = B \cap U_2$, where $U_1, U_2$ are open sets in $(X,T_1)$ and $(Y,T_2)$.
Since $V_1 \times V_2 = (A \cap U_1) \times (B \cap U_2) = (A \times B) \cap (U_1 \times U_2)$, where $U_1, U_2$ are open sets in $(X,T_1)$ and $(Y,T_2)$, this is a basis element of the topology on $A \times B$ as a subspace of the product $X \times Y$.
The converse is similar to the above.
|
{"url":"http://mathhelpforum.com/differential-geometry/76253-product-topology-print.html","timestamp":"2014-04-20T20:14:39Z","content_type":null,"content_length":"8779","record_id":"<urn:uuid:f72cf9af-df23-4fa4-85b8-0c5a7ff0b9c6>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Richard Rebarber
Welcome to my home page. It's pretty rudimentary, so you won't find any mp3 files or pictures here. If it's mp3 files and pictures you want, check out the Floating Opera web page. This page mostly
exists as a place to find recent papers of mine. It was last updated February 13, 2010. If you want more information, please contact me at rrebarber1@unl.edu.
Professor, Department of Mathematics, University of Nebraska
Contact Information
322 Avery Hall
Department of Mathematics
University of Nebraska-Lincoln
Lincoln, NE 68588-0130
Phone: (402) 472-7235 (office) and (402) 421-0901 (home)
FAX: (402) 472-8466
email: rrebarbe@math.unl.edu
Selected recent papers - PDF downloads:
(Please contact me at rrebarber1@unl.edu if you have any questions, or want access to papers which are not yet accepted)
Math Ecology:
Global asymptotic stability of density dependent integral population projection models, R. Rebarber, B. Tenhumberg and S. Townley, Theoretical Population Biology, 81 (2012) 81-87
Choice of density dependent seedling recruitment function affects predicted transient dynamics: A case study with Platte thistle, Eric Alan Eager, Richard Rebarber and Brigitte Tenhumberg, to appear
in Theoretical Ecology
Feedback control systems analysis of density dependent population dynamics, S. Townley, B. Tenhumberg and R. Rebarber, Systems & Control Letters, 61 (2012) 309-315.
The effects of belowground resources on aboveground allometric growth in Bornean tree species, Katherine D. Heineman, Ethan Jensen, Autumn Shapland, Brett Bogenrief, Sylvester Tan, Richard Rebarber
and Sabrina E. Russo, Forest Ecology and Management, 261 (2011) pp. 1820-1832.
Structured Population Dynamics and Calculus: An Introduction to Integral Modeling , by J. Briggs, K. Dabbs, M. Holm, J. Lubben, R. Rebarber, D. Riser-Espinoza and B. Tenhumberg, Mathematics Magazine
83: 4 (2010), pp. 243-257.
Parameterizing the growth-decline boundary for uncertain population projection models by J. Lubben, D. Boeckner, R. Rebarber, S. Townley and B. Tenhumberg, Theoretical Population Biology, 75(2009),
pp. 85-97.
Model complexity affects predicted transient population dynamics following a dispersal event: A case study with Acyrthosiphon pisum, by B. Tenhumberg, A. Tyre and Richard Rebarber, Ecology 90, 7
(2009), 1878-1890.
Management recommendations based on matrix projection models: the importance of considering biological limits , by J. Lubben, B. Tenhumberg, A. Tyre and R. Rebarber, Biological Conservation, 141, 2
(2008) pp. 517-523.
Robust population management under uncertainty for structured population models, by A. Diences, E. Peterson, D. Boeckner, J. Boyle, A. Keighley, J. Kogut, J. Lubben, R. Rebarber, R. Ryan, B.
Tenhumberg, S. Townley and A. Tyre, Ecological Applications, 17 (2007), pp. 2175-2183.
Distributed Parameter Control Theory:
Invariant Zeros of SISO Infinite-Dimensional Systems, by K. Morris and R. Rebarber, International Journal of Control, 83, no. 12 (2010), pp. 2573-2579.
A sampled-data servomechanism for stable well-posed systems , by Z. Ke, H. Logemann, and R. Rebarber, IEEE Trans. Automatic Control, 54, 5(2009), pp. 1123-1128
Approximate tracking and disturbance rejection for stable infinite-dimensional systems using sampled-data low-gain control, by Z. Ke, H. Logemann, and R. Rebarber, SIAM J. Control Optimization, 48, 2
(2009), pp. 641-671
Feedback Invariance of SISO infinite-dimensional systems, by K. Morris and R. Rebarber, Mathematics of Control, Signals and Systems, 19 (2007), pp. 313-335.
Robustness with Respect to Sampling for Stabilization of Riesz Spectral Systems , by R. Rebarber and S. Townley, IEEE Trans. Automatic Control, 51, 9 (2006), pp. 1519-1522
Generalized sampled-data stabilization of well-posed linear infinite-dimensional systems, by Hartmut Logemann, Richard Rebarber and Stuart Townley, SIAM J. Control and Optimization, 44, No. 4 (2005),
pp. 1345-1369.
Stability of Infinite Dimensional Sampled Data Systems , by Hartmut Logemann, Richard Rebarber and Stuart Townley, Transactions of the American Mathematical Society, 355, No. 8 (2003), pp. 3301-3328.
Internal Model Based Tracking and Disturbance Rejection for Stable Well-Posed Systems, by Richard Rebarber and George Weiss, Automatica, 39, (2003), pp. 1555-1569.
Boundary Controllability of a Coupled Wave/Kirchoff System , by George Avalos, Irena Lasiecka and Richard Rebarber, Systems & Control Letters, 50, (2003), pp. 331-341.
Non-Robustness of Closed-Loop Stability for Infinite Dimensional Systems Under Sample-and-Hold , by Richard Rebarber and Stuart Townley, IEEE Transactions on Automatic Control, 47 (8), (2002) pp.
Last changed: February 13, 2011
|
{"url":"http://www.math.unl.edu/~rrebarber1/","timestamp":"2014-04-16T18:56:08Z","content_type":null,"content_length":"6293","record_id":"<urn:uuid:9f26464e-122d-4f79-8574-1270766ee6d3>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Simplify: (3x - 5) - (4x + 6)
(3x - 5) - (4x+6): when there is a negative sign outside the bracket, the sign changes: 3x-5-4x-6. Can you do it now?
you can rearrange the terms of 3x-5-4x-6 to get 3x-4x-5-6. Now combine 3x and -4x to get what?
NO NO NO... HOLD ON
X+1? Is that right or no?
Noo, first one is right
oh.... lol
i have a few more.. can you help??
I'll try
Simplify: (3x^2 - 2) + (2x^2 - 6x + 3)
i love harry potter
just saying.
(3x - 5) - (4x+6) is really -x - 11
so your first one was close
ok thank you jim!
you're welcome
i thought i was wrong.. lol
you were close though, just need to keep in mind that both 5 and 6 were negative
yes and i didn't even look to see.. lol, i do that a lot, need to stop though. do you think you could help me on the other one that i posted?
its up there somewhere ^
cuz it looks as if the helper that was helping me left.
(3x^2 - 2) + (2x^2 - 6x + 3)
3x^2 - 2 + 2x^2 - 6x + 3 ... parentheses are unneeded here
(3x^2+2x^2)+(-6x)+(-2+3) ... group like terms
5x^2 - 6x + 1 ... combine like terms
So (3x^2 - 2) + (2x^2 - 6x + 3) simplifies to 5x^2 - 6x + 1
that makes sense
that's great :)
ugh... i have more, in for it or no?
sure, go for it
A sports company conducted a road test of a new model of a geared bike. The test rider cycled (3x - 2) miles on a flat road, (x2 - 5) miles uphill, and (2x + 7) miles downhill. Which simplified expression is equivalent to the total distance, in miles, for which the bike was tested?
hold on messed up..
sorry, didn't think to put it when i was typing
oh that's perfectly fine, i figured that when you typed x2, you meant x^2
one sec while I type it up
thank u so much
total distance = result of adding up all the distances
total distance = distance1 + distance2 + distance3
total distance = (3x-2)+(x^2-5)+(2x+7)
total distance = 3x-2+x^2-5+2x+7
total distance = (x^2)+(3x+2x)+(-2-5+7)
total distance = x^2+5x+0
total distance = x^2+5x
So the simplified expression that's equal to the total distance is x^2+5x
I tried to type it in the little box, but it drives me nuts cause it keeps changing on me...so I typed it elsewhere and pasted it.
okay.. lol... check this one for me.. i think i have it correct...
Simplify the following expression: (5x^2 + 3x + 4) - (2x^2 + 5x - 1). If the final answer is written in the form Ax^2 + Bx + C, what is the value of A? Answer: A=3, B=-2 and C=5
or is that wrong?
you got it 100% correct, very nice job
ok now how do i answer it, as the A?
you just give the answer of A since that's all they're asking for
well I meant A = 3
or maybe they want both the final expression AND the value of A
well duh... there is a box, do i put 3 or the whole thing?
just one box?
so my guess is that they just want the value of A, so yes, 3
I thought they also wanted the final simplified expression, but I guess not
ok ty, i have one more.. just like the one above
simplify the following expression: (x + 6y) - (3x - 10y). If the final answer is written in the form Ax + By, what is the value of A
x - 3x is ...?
what is 3x - x (I flipped it around, but it will guide us toward the answer)
good, so because I flipped things around, the sign was flipped....so x - 3x is -2x
Therefore, the value of A is -2
we only need to focus on the x terms because that's all they want to know (we can forget the y terms completely)
okay, that makes it a bit easier
yes it does
i have about 17 questions on quadratic equations, are u busy? lol
not too much, which ones are really giving you problems...we can do those first
its just questions in general from a textbook and i missed a lot of school.
if i go to a new post is that okay?
sure thing, just message me that you created it, thx
wait, what? over there <---
it's up now, over there. sorry, idk how to message on here
that's fine, thanks for letting me know
To send a message, just click on the person's name. A box will come up; at the bottom of the box that pops up is the button "send message".
|
{"url":"http://openstudy.com/updates/4fbd470fe4b0c25bf8fb938b","timestamp":"2014-04-18T16:37:35Z","content_type":null,"content_length":"204499","record_id":"<urn:uuid:0e5a0b0f-63d7-4a9a-b033-c795f3c0f4c7>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Statistics help!!
December 6th 2010, 07:45 PM #1
Junior Member
Feb 2009
Statistics help!!
I am trying to brush up on my stats but I haven't done it in a long time!
Having trouble finding some definitions such as true proportion and sample proportion.
I think sample proportion is
(the number of successes)/(total number of observations in the sample)
in this case 0.5
Can someone help?
Component 2 of the attachment
Please read rule #6: http://www.mathhelpforum.com/math-he...ng-151418.html. Thread closed.
December 7th 2010, 03:47 PM #2
|
{"url":"http://mathhelpforum.com/statistics/165538-statistics-help.html","timestamp":"2014-04-20T04:23:15Z","content_type":null,"content_length":"34420","record_id":"<urn:uuid:50017142-d101-4d3c-8daf-51f67bb7bebc>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts about faster on Darren Wilkinson's research blog
This post follows on from the previous post on Gibbs sampling in various languages. In that post a simple Gibbs sampler was implemented in various languages, and speeds were compared. It was seen
that R is very slow for iterative simulation algorithms characteristic of MCMC methods such as the Gibbs sampler. Statically typed languages such as C/C++ and Java were seen to be fastest for this
type of algorithm. Since many statisticians like to use R for most of their work, there is natural interest in the possibility of extending R by calling simulation algorithms written in other
languages. It turns out to be straightforward to call C, C++ and Java from within R, so this post will look at how this can be done, and exactly how fast the different options turn out to be. The
post draws heavily on my previous posts on calling C from R and calling Java from R, as well as Dirk Eddelbuettel’s post on calling C++ from R, and it may be helpful to consult these posts for
further details.
We will start with the simple pure R version of the Gibbs sampler, and use this as our point of reference for understanding the benefits of re-coding in other languages. The background to the problem
was given in the previous post and so won’t be repeated here. The code can be given as follows:
gibbs<-function(N,thin)
{
    mat=matrix(0,ncol=2,nrow=N)
    x=0
    y=0
    for (i in 1:N) {
        for (j in 1:thin) {
            # full conditionals: x|y ~ Ga(3, y^2+4), y|x ~ N(1/(x+1), 1/(2x+2))
            x=rgamma(1,3,y*y+4)
            y=rnorm(1,1/(x+1),1/sqrt(2*x+2))
        }
        mat[i,]=c(x,y)
    }
    colnames(mat)=c("x","y")
    mat
}
This code works perfectly, but is very slow. It takes 458.9 seconds on my very fast laptop (details given in previous post).
Let us now see how we can introduce a new function, gibbsC into R, which works in exactly the same way as gibbs, but actually calls on compiled C code to do all of the work. First we need the C code
in a file called gibbs.c:
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <R.h>
#include <Rmath.h>

void gibbs(int *Np,int *thinp,double *xvec,double *yvec)
{
  int i,j;
  int N=*Np,thin=*thinp;
  GetRNGstate();
  double x=0;
  double y=0;
  for (i=0;i<N;i++) {
    for (j=0;j<thin;j++) {
      /* Rmath's rgamma takes shape and scale, hence 1/(y*y+4) */
      x=rgamma(3.0,1.0/(y*y+4));
      y=rnorm(1.0/(x+1),1.0/sqrt(2*x+2));
    }
    xvec[i]=x; yvec[i]=y;
  }
  PutRNGstate();
}
This can be compiled with R CMD SHLIB gibbs.c. We can load it into R and wrap it up so that it is easy to use; a minimal wrapper using R's .C interface looks like the following:
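dyn.load("gibbs.so")  # gibbs.dll on Windows
gibbsC<-function(n=50000,thin=1000)
{
    tmp=.C("gibbs",as.integer(n),as.integer(thin),
           x=as.double(1:n),y=as.double(1:n))
    data.frame(x=tmp$x,y=tmp$y)
}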
The new function gibbsC works just like gibbs, but takes just 12.1 seconds to run. This is roughly 40 times faster than the pure R version, which is a big deal.
Note that using the R inline package, it is possible to directly inline the C code into the R source code. We can do this with the following R code:
code='
  int i,j;
  int N=*Np,thin=*thinp;
  GetRNGstate();
  double x=0;
  double y=0;
  for (i=0;i<N;i++) {
    for (j=0;j<thin;j++) {
      x=rgamma(3.0,1.0/(y*y+4));
      y=rnorm(1.0/(x+1),1.0/sqrt(2*x+2));
    }
    xvec[i]=x; yvec[i]=y;
  }
  PutRNGstate();
'
gibbsCin<-cfunction(sig=signature(Np="integer",thinp="integer",xvec="numeric",yvec="numeric"),body=code,includes="#include <Rmath.h>",language="C",convention=".C")
This runs at the same speed as the code compiled separately, and is arguably a bit cleaner in this case. Personally I’m not a big fan of inlining code unless it is something really very simple. If
there is one thing that we have learned from the murky world of web development, it is that little good comes from mixing up different languages in the same source code file!
We can also inline C++ code into R using the inline and Rcpp packages. The code below originates from Sanjog Misra, and was discussed in the post by Dirk Eddelbuettel mentioned at the start of this post:
gibbscode = '
int N = as<int>(n);
int thn = as<int>(thin);
int i,j;
RNGScope scope;
NumericVector xs(N),ys(N);
double x=0;
double y=0;
for (i=0;i<N;i++) {
  for (j=0;j<thn;j++) {
    x = ::Rf_rgamma(3.0,1.0/(y*y+4));
    y = ::Rf_rnorm(1.0/(x+1),1.0/sqrt(2*x+2));
  }
  xs(i) = x;
  ys(i) = y;
}
return Rcpp::DataFrame::create( Named("x")= xs, Named("y") = ys);
'
RcppGibbsFn <- cxxfunction( signature(n="int", thin = "int"),
gibbscode, plugin="Rcpp")
RcppGibbs <- function(N=50000,thin=1000)
{
    RcppGibbsFn(N,thin)
}
This version of the sampler runs in 12.4 seconds, just a little bit slower than the C version.
It is also quite straightforward to call Java code from within R using the rJava package. The following code
import java.util.*;
import cern.jet.random.tdouble.*;
import cern.jet.random.tdouble.engine.*;

class GibbsR
{
    public static double[][] gibbs(int N,int thin,int seed)
    {
        DoubleRandomEngine rngEngine=new DoubleMersenneTwister(seed);
        Normal rngN=new Normal(0.0,1.0,rngEngine);
        Gamma rngG=new Gamma(1.0,1.0,rngEngine);
        double x=0,y=0;
        double[][] mat=new double[2][N];
        for (int i=0;i<N;i++) {
            for (int j=0;j<thin;j++) {
                // COLT's Gamma.nextDouble takes shape and rate
                x=rngG.nextDouble(3.0,y*y+4);
                y=rngN.nextDouble(1.0/(x+1),1.0/Math.sqrt(2*x+2));
            }
            mat[0][i]=x; mat[1][i]=y;
        }
        return mat;
    }
}
can be compiled with javac GibbsR.java (assuming that Parallel COLT is in the classpath), and wrapped up from within an R session with a little rJava glue; a minimal wrapper, assuming GibbsR.class is also on the class path, is sketched below:
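library(rJava)
.jinit()  # start the JVM
jgibbs<-function(N=50000,thin=1000,seed=floor(runif(1,1,2^30)))
{
    # call the static method gibbs, which returns a double[][] ("[[D")
    result=.jcall("GibbsR","[[D","gibbs",as.integer(N),as.integer(thin),as.integer(seed))
    mat=sapply(result,.jevalArray)
    colnames(mat)=c("x","y")
    mat
}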
This code runs in 10.7 seconds. Yes, that's correct. Yes, the Java code is faster than both the C and C++ code! This really goes to show that Java is now an excellent option for numerically intensive
work such as this. However, before any C/C++ enthusiasts go apoplectic, I should explain why Java turns out to be faster here, as the comparison is not quite fair... In the C and C++ code, use was
made of the internal R random number generation routines, which are relatively slow compared to many modern numerical library implementations. In the Java code, I used Parallel COLT for random number
generation, as it isn't straightforward to call the R generators from Java code. It turns out that the COLT generators are faster than the R generators, and that is why Java turns out to be faster here.
Of course we do not have to use the R random number generators within our C code. For example, we could instead call on the GSL generators, using the following code:
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>
#include <R.h>

void gibbsGSL(int *Np,int *thinp,int *seedp,double *xvec,double *yvec)
{
  int i,j;
  int N=*Np,thin=*thinp,seed=*seedp;
  gsl_rng *r = gsl_rng_alloc(gsl_rng_mt19937);
  gsl_rng_set(r,seed);
  double x=0;
  double y=0;
  for (i=0;i<N;i++) {
    for (j=0;j<thin;j++) {
      /* gsl_ran_gamma takes shape and scale; gsl_ran_gaussian is zero-mean */
      x=gsl_ran_gamma(r,3.0,1.0/(y*y+4));
      y=1.0/(x+1)+gsl_ran_gaussian(r,1.0/sqrt(2*x+2));
    }
    xvec[i]=x; yvec[i]=y;
  }
  gsl_rng_free(r);
}
It can be compiled with R CMD SHLIB -lgsl -lgslcblas gibbsGSL.c, and then called as for the regular C version. This runs in 8.0 seconds, which is noticeably faster than the Java code, but probably
not “enough” faster to make it an important factor to consider in language choice.
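To recap the timings on my laptop: pure R took 458.9 seconds; the C version calling R's RNG, 12.1 seconds; the inlined C version, the same; the Rcpp version, 12.4 seconds; the Java version with Parallel COLT, 10.7 seconds; and the C version using the GSL generators, 8.0 seconds.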
In this post I’ve shown that it is relatively straightforward to call code written in C, C++ or Java from within R, and that this can give very significant performance gains relative to pure R code.
All of the options give fairly similar performance gains. I showed that in the case of this particular example, the “obvious” Java code is actually slightly faster than the “obvious” C or C++ code,
and explained why, and how to make the C version slightly faster by using the GSL. The post by Dirk shows how to call the GSL generators from the C++ version, which I haven’t replicated here.
|
{"url":"http://darrenjw.wordpress.com/tag/faster/","timestamp":"2014-04-20T18:22:38Z","content_type":null,"content_length":"32652","record_id":"<urn:uuid:566b9b60-e229-4272-9b26-ebcd601825cf>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
|
search results
Results 1 - 3 of 3
1. CJM 2004 (vol 56 pp. 71)
Euclidean Rings of Algebraic Integers
Let $K$ be a finite Galois extension of the field of rational numbers with unit rank greater than 3. We prove that the ring of integers of $K$ is a Euclidean domain if and only if it is a principal
ideal domain. This was previously known under the assumption of the generalized Riemann hypothesis for Dedekind zeta functions. We now prove this unconditionally.
Categories:11R04, 11R27, 11R32, 11R42, 11N36
2. CJM 2002 (vol 54 pp. 1202)
Octahedral Galois Representations Arising From $\mathbf{Q}$-Curves of Degree $2$
Generically, one can attach to a $\mathbf{Q}$-curve $C$ octahedral representations $\rho\colon\Gal(\bar{\mathbf{Q}}/\mathbf{Q})\rightarrow\GL_2(\bar\mathbf{F}_3)$ coming from the Galois action on
the $3$-torsion of those abelian varieties of $\GL_2$-type whose building block is $C$. When $C$ is defined over a quadratic field and has an isogeny of degree $2$ to its Galois conjugate, there
exist such representations $\rho$ having image into $\GL_2(\mathbf{F}_9)$. Going the other way, we can ask which $\mod 3$ octahedral representations $\rho$ of $\Gal(\bar\mathbf{Q}/\mathbf{Q})$
arise from $\mathbf{Q}$-curves in the above sense. We characterize those arising from quadratic $\mathbf{Q}$-curves of degree $2$. The approach makes use of Galois embedding techniques in $\GL_2(\mathbf{F}_9)$, and the characterization can be given in terms of a quartic polynomial defining the $\mathcal{S}_4$-extension of $\mathbf{Q}$ corresponding to the projective representation $\bar{\rho}$.
Categories:11G05, 11G10, 11R32
3. CJM 2001 (vol 53 pp. 449)
Descending Rational Points on Elliptic Curves to Smaller Fields
In this paper, we study the Mordell-Weil group of an elliptic curve as a Galois module. We consider an elliptic curve $E$ defined over a number field $K$ whose Mordell-Weil rank over a Galois
extension $F$ is $1$, $2$ or $3$. We show that $E$ acquires a point (points) of infinite order over a field whose Galois group is one of $C_n \times C_m$ ($n= 1, 2, 3, 4, 6, m= 1, 2$), $D_n \times C_m$ ($n= 2, 3, 4, 6, m= 1, 2$), $A_4 \times C_m$ ($m=1,2$), $S_4 \times C_m$ ($m=1,2$). Next, we consider the case where $E$ has complex multiplication by the ring of integers $\o$ of an imaginary
quadratic field $\k$ contained in $K$. Suppose that the $\o$-rank over a Galois extension $F$ is $1$ or $2$. If $\k\neq\Q(\sqrt{-1})$ and $\Q(\sqrt{-3})$ and $h_{\k}$ (class number of $\k$) is odd,
we show that $E$ acquires positive $\o$-rank over a cyclic extension of $K$ or over a field whose Galois group is one of $\SL_2(\Z/3\Z)$, an extension of $\SL_2(\Z/3\Z)$ by $\Z/2\Z$, or a central
extension by the dihedral group. Finally, we discuss the relation of the above results to the vanishing of $L$-functions.
Categories:11G05, 11G40, 11R32, 11R33
|
{"url":"http://cms.math.ca/cjm/msc/11R32","timestamp":"2014-04-19T14:37:13Z","content_type":null,"content_length":"30087","record_id":"<urn:uuid:75f82587-4ef4-4a63-a207-728bc491ade8>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Why Dr Lewin solved Newton's equation wrong?
Can anyone tell me why Dr Lewin solved Newton's equation wrong?
Is that a trick? For students to figure it out?
What video? What time? Please give reference.
Elliptical orbits: Kepler said it is an ellipse. From Kepler's equation T^2/R^3 = constant, the solution is not an ellipse but a visually rotating ellipse that gives Mercury's perihelion same as Einstein. It is well known that there is an ellipse, but Dr Lewin did not mention that there is a visual extension that solves Mercury's perihelion. There is a paper on the General Science Journal, "50 solutions of Mercury's perihelion", and it shows the ellipse of Kepler's orbital solution and the visual extension of a rotating ellipse. Google "50 solutions of Mercury's perihelion" / General Science Journal. This article shows without the least doubt that you can use any physics formula written by any physicist from any period of time and find the visual rotation and calculate the 43 arc seconds per century, same as Einstein. Also, the solution of Newton's equation is solved wrong: Newton's equation had been solved in real numbers, and if you solve Newton's equation in complex numbers you get quantum mechanics, and if you subtract the two solutions you get Einstein's numbers from Newton's equation. All that is found in the same article, and Dr Lewin does not mention any of it. It seems Dr Lewin is outdated or tricking.
There is another article from 1977 called "relativity theory death certificate" and it shows how elliptical orbits are visually rotating ellipses and gives the 43 arc seconds per century. I do not know why Dr Lewin keeps giving outdated material. The method described can be used to calculate the visual effect of measuring the motion of one object (planet) around another object (Sun) from a third place (Earth). You can use any physics definition like distance, velocity, acceleration, etc. and calculate the visual or measurement errors, and it matches Einstein's numbers to the last digit of the decimal point. Dr Lewin keeps talking about elliptical orbits and never mentions that Kepler's laws are rotating elliptical orbits. Is Doctor Lewin outdated?
First of all, Walter Lewin is a professor, not a doctor. Second of all, have you seen the course introduction? This course is about Newtonian mechanics; it doesn't deal with relativity or quantum mechanics (students learn those subjects in later courses at MIT), so this is why he used these "outdated" materials. I am sure he knows about what you are saying, but it is a freshman course and he doesn't want to confuse the students.
|
{"url":"http://openstudy.com/updates/50fe21e4e4b00c5a3be5cc8d","timestamp":"2014-04-20T13:48:11Z","content_type":null,"content_length":"41914","record_id":"<urn:uuid:774686b9-b32a-49fd-9f9d-046a4236d935>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Find the Laplace transform of each of the following:
August 19th 2009, 07:33 AM #1
Aug 2009
Find the Laplace transform of each of the following:
(a) e^4t
(b) 4 e^2t + 3 cosh 4t
(c) 2 sin3t + 4 sinh 3t
(d) t^3 + 2t^2 - 4t + 1
Just apply the formulas! Also note that the Laplace Transform is linear (I did two of them for you):
(a) $\mathcal{L}\left\{e^{4t}\right\}=\frac{1}{s-4}$
(c) $\mathcal{L}\left\{2\sin\left(3t\right)+4\sinh\left(3t\right)\right\}=2\mathcal{L}\left\{\sin\left(3t\right)\right\}+4\mathcal{L}\left\{\sinh\left(3t\right)\right\}=2\cdot\frac{3}{s^2+9}+4\cdot\frac{3}{s^2-9}=\frac{6}{s^2+9}+\frac{12}{s^2-9}$
If you can't use the short cuts, then apply the definition:
$\mathcal{L}\left\{f\left(t\right)\right\}=\int_0^{ \infty}e^{-st}f\left(t\right)\,dt$
I hope this helps...
August 19th 2009, 07:46 AM #2
|
{"url":"http://mathhelpforum.com/calculus/98577-find-laplace-transform-each-following.html","timestamp":"2014-04-19T08:09:43Z","content_type":null,"content_length":"34670","record_id":"<urn:uuid:5f5b63f2-62bb-4834-8b19-4a944f7fcff5>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00291-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Digital Volume Control and SNR
From SqueezeboxWiki
Found this post by Sean Adams about volume control and Signal to Noise Ratio in the forum and thought it would be useful to enough folks that it should be posted here in the Wiki:
" I think the easiest way to understand this is to forget about numbers, decibels, and bits per sample for a minute, and just think about what's coming out of the DAC.
To oversimplify only slightly: there are two things always coming from the DAC. 1) signal and 2) noise.
The level of the noise output stays the same no matter what signal level is being produced. That is really important to understand!
When the DAC is making a loud signal, there is a lot of signal and a little noise. That's a high SNR, which is good.
However, when the DAC is making a quiet signal, you have a little signal and a little noise. If we now consider the noise level in relation to the signal level, the noise is now louder. The noise
level hasn't gone up in absolute terms (eg volts), but relative to the signal it has, so you now have a bad SNR.
Now consider a simple resistor attenuator being fed by a loud (good SNR) signal from the DAC. When the voltage passes through the resistor divider, everything gets attenuated - the signal and noise
together. You have the same* SNR coming out of the divider as you had going in, i.e., the DAC's optimal SNR is preserved.
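For example, suppose (purely for illustration) that a DAC's noise floor sits 100 dB below its full-scale output. A full-scale signal then has 100 dB of SNR. Attenuate that signal digitally by 30 dB and the signal drops while the DAC's noise output stays put, so the SNR falls to 70 dB. Attenuate the same full-scale signal by 30 dB in a resistor divider after the DAC and the signal and noise drop together, so the ratio stays at 100 dB (ignoring the comparatively tiny noise contributed downstream).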
OK, now back to bits per sample. As you can see, the above effects really don't have much at all to do with bits per sample. We could send a million bits per sample, and it would still be the same.
So why does bit depth matter? What is the significance of 16 vs 24 bit?
What matters is that we send enough bits per sample that the DAC's full dynamic range is utilized. It is important to realize that the DAC's dynamic range is finite, and is less than its input word
size - more like 20 bits, since it is limited by its output noise level.
By "expanding" a 16 bit signal to 24 bit, all we are doing is saying "these 16 bits go in the most significant slots of the 24 bit word". We haven't improved the SNR of the signal, any more than you
can "enhance" a digital photo the way they do on CSI.
If we attenuate the 16 bit signal, yes, the zeroes and ones will migrate down into the least significant bits of the 24 bit word, and yes, if we still "have all the bits" we could then mathematically
go in reverse and get back to the same data. But that is not what the DAC does with the signal! The bits represent a smaller signal now than they did before. We still have exactly the same decreasing
SNR effect. Sending 24 bits into the DAC just means we aren't making it any worse than it already is. We haven't "bought more headroom"... it does NOT mean that those first 8 bits of attenuation are free.
To prove this, you could play a sine wave through the DAC and measure the SNR at each volume step. We would expect to see the SNR decrease as the volume is decreased. If there were anything special
about the point where we start "losing bits", or if we were really getting "extra headroom", then the plot would decrease slowly (or not at all) until it reaches that point, and then there would be
an inflection.
However, that is not what you'll see. The SNR will simply decrease with the signal level, all the way down.
I hope this helps... for extra credit maybe someone will try testing this?
• Actually, there are a number of secondary effects which reduce the SNR by the time it gets through the amplifier, but these are vanishingly small in comparison. "
|
{"url":"http://wiki.slimdevices.com/index.php/Digital_Volume_Control_and_SNR","timestamp":"2014-04-19T06:52:09Z","content_type":null,"content_length":"21042","record_id":"<urn:uuid:4cbeb9af-19b0-4332-b8b5-4063093f124d>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cancelling Units: Examples
Cancelling / Converting Units: Examples (page 2 of 2)
• Suppose an object is moving at 66 ft/sec. How fast would you have to drive a car to keep pace with this object?
A car's speedometer doesn't measure feet per second, so you'll have to convert to some other measurement. You choose miles per hour. You know the following conversions: 1 minute = 60 seconds, 60
minutes = 1 hour, and 5280 feet = 1 mile. If 1 minute equals 60 seconds (and it does), then
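(1 minute)/(60 seconds) = 1 = (60 seconds)/(1 minute)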
The fact that the conversion can be stated in terms of "1", and that the conversion ratio equals "1" no matter which value is on top, is crucial to the process of cancelling units.
We have a measurment in terms of feet per second; we need a measurement in terms of miles per hour. To convert, we start with the given value with its units (in this case, "feet over seconds")
and set up our conversion ratios so that all undesired units are cancelled out, leaving us in the end with only the units we want. Here's what it looks like:
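(66 ft / 1 sec) × (60 sec / 1 min) × (60 min / 1 hr) × (1 mile / 5280 ft)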
Why did we set it up like this? Because, just like we can cancel duplicated factors when we multiply fractions, we can also cancel duplicated units:
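(66 × 60 × 60)/(5280) = 45, with the "ft", "sec", and "min" units each appearing once on top and once underneath, so they cancel away, leaving only "miles" over "hours".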
I would have to drive at 45 miles per hour.
How did I know which way to put the ratios? How did I know which units went on top and which went underneath? I didn't. Instead, I started with the given measurement, wrote it down complete with its
units, and then put one conversion ratio after another in line, so that whichever units I didn't want were eventually canceled out. If the units cancel correctly, then the numbers will take care of themselves.
If, on the other hand, I had done something like, say, the following:
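(66 ft / 1 sec) × (1 min / 60 sec) × (60 min / 1 hr) × (5280 ft / 1 mile)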
...then nothing would have cancelled, and I would not have gotten the correct answer. By making sure that the units cancelled correctly, I made sure that the numbers were set up correctly too, and I
got the right answer. This "setting up so the units cancel" is a crucial aspect of this process.
• You are mixing some concrete for a home project, and you've calculated according to the directions that you need six gallons of water for your mix. But your bucket isn't calibrated, so you don't
know how much it holds. On the other hand, you just finished a two-liter bottle of soda. If you use the bottle to measure your water, how many times will you need to fill it?
For this, I take the conversion factor of 1 gallon = 3.785 liters. This gives me:
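(6 gallons) × (3.785 liters / 1 gallon) = 22.71 liters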
Since my bottle holds two liters, then:
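(22.71 liters) × (1 bottle / 2 liters) = 11.355 bottles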
I should fill my bottle completely eleven times, and then once more to about one-third capacity.
On the other hand, I might notice that the bottle also says "67.6 fl.oz.", right below where it says "2.0L". Since there are 128 fluid ounces in one (US) gallon, I might do the calculations like
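(6 gallons) × (128 fl.oz / 1 gallon) × (1 bottle / 67.6 fl.oz) = 768/67.6 bottles ≈ 11.36 bottles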
...which, considering the round-off errors in the conversion factors, compares favorably with the answer I got previously.
• You find out that the average household in Mesa, Arizona, uses about 0.86 acre-feet of water every year. You get your drinking water home-delivered in those big five-gallon bottles for the water
dispenser. How many of these water bottles would have to be stacked in your driveway to equal 0.86 acre-feet of water?
The conversion ratios are 1 acre = 43,560 ft^2, 1ft^3 = 7.481 gallons, and five gallons = 1 water bottle. First I have to figure out the volume in one acre-foot. An acre-foot is the amount that
it would take to cover one acre of land to a depth of one foot. How big is 0.86 acres, in terms of square feet?
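(0.86 acres) × (43,560 ft^2 / 1 acre) = 37,461.6 ft^2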
If I then cover this 37,461.6 ft^2 area to a depth of one foot, this would give me 0.86 acre-feet of water, or (37,461.6 ft^2)(1 ft deep) = 37,461.6 ft^3 volume of water. But how many bottles
does this equal?
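(37,461.6 ft^3) × (7.481 gallons / 1 ft^3) × (1 bottle / 5 gallons) ≈ 56,050 bottles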
...or about 56,000 bottles every year.
This works out to about 150 bottles a day. Can you imagine "living close to nature" and having to lug all that water in a bucket? Thank heaven for modern plumbing!
• You've been watching a highway construction project that you pass on the way home from work. They've been moving an incredible amount of dirt. You call up the information line, and find out that,
when all eighty trucks are running with full crews, the project moves about nine thousand cubic yards of dirt each day. You think back to the allegedly "good old days" when work was all done
manually, and wonder how many wheelbarrowsful of dirt would be equivalent to nine thousand cubic yards of dirt. You go to your garage, and see that your wheelbarrow is labeled on its side as
holding six cubic feet. Since people wouldn't want to overfill their barrows, spill their load, and then have to start over, you assume that this stated capacity is a good measurement. How many
wheelbarrow loads would it take to move the same amount of dirt as those eighty trucks?
The conversion ratios are 1 wheelbarrow = 6 ft^3 and 1 yd^3 = 27 ft^3. Then I get:
(9,000 yd^3)(27 ft^3/yd^3)(1 wheelbarrow/6 ft^3) = 40,500 wheelbarrows
Wow; 40,500 wheelbarrow loads!
Even ignoring the fact the trucks drive faster than people can walk, it would require an amazing number of people just to move the loads those trucks carry. No wonder there weren't many of these big
projects back in "the good old days"!
When you get to physics or chemistry and have to do conversion problems, set them up as shown above. If, on the other hand, they just give you lots of information and ask for a certain resulting
value, think of the units required by your resulting value, and, working backwards from that, line up the given information so that everything cancels off except what you need for your answer.
For a table of common (and not-so-common) English unit conversions, look here. For metrics, try here. Here is another table of conversion factors.
When I was looking for conversion-factor tables, I found mostly Javascript "cheetz" that do the conversion for you, which isn't much help in learning how to do the conversions yourself. But along
with finding the above tables of conversion factors, I also found a table of currencies, a table of months in different calendars, the dots and dashes of Morse Code, how to tell time using ships'
bells, and the Beaufort scale for wind speed.
Cite this article as: Stapel, Elizabeth. "Cancelling / Converting Units: Examples." Purplemath. Available from http://www.purplemath.com/modules/units2.htm.
Finding algorithms front page
Quick description
An algorithm is a procedure for solving some class of problems by breaking it up into simple steps. This article contains links to other articles on the general theme of finding algorithms to do
various kinds of tasks.
Algorithms associated with polynomials
Randomized algorithms front page. Brief summary: Sometimes a very simple and efficient way of carrying out an algorithmic task is to make random choices. The obvious disadvantage, that one cannot be certain that the algorithm will do what one wants, is in many situations not too important, since one can arrange for the probability of failure to be so small that in practice it is negligible.
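A classic concrete instance of the trade-off this summary describes (my illustration, not taken from the article) is Freivalds' randomized check of a matrix product: a single trial wrongly accepts with probability at most 1/2, so a handful of independent trials drive the failure probability below any practical threshold. A Python sketch:

import numpy as np

def freivalds(A, B, C, trials=30):
    """Probabilistically test whether A @ B == C in O(trials * n^2) time.

    A false 'True' survives one trial with probability <= 1/2, so 30
    trials leave a failure probability of at most 2**-30.
    """
    rng = np.random.default_rng()
    n = C.shape[1]
    for _ in range(trials):
        x = rng.integers(0, 2, size=n)            # random 0/1 vector
        if not np.array_equal(A @ (B @ x), C @ x):
            return False                          # certainly A @ B != C
    return True                                   # almost certainly equal

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(freivalds(A, B, A @ B))       # True
print(freivalds(A, B, A @ B + 1))   # False, with overwhelming probability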
The Parallel Axis Theorem
The slides cover parallel axis theorems, the radius of gyration, the polar moment of inertia, and composite areas built from simple shapes (PowerPoint PPT presentation).
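For reference, the standard results these slide titles point to (supplied from the usual statics formulation, not recovered from the slides themselves): for a plane area $A$ whose centroidal axis $x'$ lies a distance $d$ from a parallel axis $x$,

$I_x = \bar{I}_{x'} + A d^2$ (parallel axis theorem for second moments of area),
$J_O = \bar{J}_C + A d^2$ (the same statement for the polar moment of inertia),
$k = \sqrt{I/A}$ (radius of gyration).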
Approximate Strong Equilibrium in Job Scheduling Games
M. Feldman and T. Tamir
A Nash Equilibrium (NE) is a strategy profile resilient to unilateral deviations, and is predominantly used in the analysis of multiagent systems. A downside of NE is that it is not necessarily
stable against deviations by coalitions. Yet, as we show in this paper, in some cases, NE does exhibit stability against coalitional deviations, in that the benefits from a joint deviation are
bounded. In this sense, NE approximates strong equilibrium.
Coalition formation is a key issue in multiagent systems. We provide a framework for quantifying the stability and the performance of various assignment policies and solution concepts in the face of
coalitional deviations. Within this framework we evaluate a given configuration according to three measures: (i) IR_min: the maximal number alpha, such that there exists a coalition in which the
minimal improvement ratio among the coalition members is alpha, (ii) IR_max: the maximal number alpha, such that there exists a coalition in which the maximal improvement ratio among the coalition
members is alpha, and (iii) DR_max: the maximal possible damage ratio of an agent outside the coalition.
We analyze these measures in job scheduling games on identical machines. In particular, we provide upper and lower bounds for the above three measures for both NE and the well-known assignment rule
Longest Processing Time (LPT). Our results indicate that LPT performs better than a general NE. However, LPT is not the best possible approximation. In particular, we present a polynomial time
approximation scheme (PTAS) for the makespan minimization problem which provides a schedule with IR_min of 1+epsilon for any given epsilon. With respect to computational complexity, we show that
given an NE on m >= 3 identical machines or m >= 2 unrelated machines, it is NP-hard to determine whether a given coalition can deviate such that every member decreases its cost.
Smerat, Sebastian (2011): Ground state and dynamical properties of the finite Kondo lattice model and transport through carbon based nanodevices: a numerical study. Dissertation, LMU München: Faculty
of Physics
The first topic of this thesis is the study of many-body effects in a one-dimensional strongly correlated electronic system, the Kondo lattice model. This system is tackled numerically by means of
the density matrix renormalization group, since analytic methods, i.e., perturbation theory, fail due to competing coupling constants. The Kondo lattice model consists of a conduction band of
electrons which couple via a spin exchange coupling to a localized spin lattice. We study the spectral properties of the one-dimensional Kondo lattice model as a function of the exchange coupling,
the band filling, and the quasimomentum in the ferromagnetic and paramagnetic phases. We compute the dispersion relation of the quasiparticles, their lifetimes, and the Z factor. The exact ground
state and the quasiparticle-dispersion relation of the Kondo lattice model with one conduction electron are well known. The quasiparticle could be identified as the spin polaron. Our calculations of
the dispersion relation for partial band fillings give a result similar to the one-electron case, which suggests that the quasiparticle in both cases is the spin polaron. We find that the
quasiparticle lifetime differs by orders of magnitude between the ferromagnetic and paramagnetic phases and depends strongly on the quasimomentum. Furthermore, we study the effects of the Coulomb
interaction on the phase diagram, the static magnetic susceptibility and electron spin relaxation. We show that onsite Coulomb interaction supports ferromagnetic order and nearest neighbor Coulomb
interaction drives, depending on the electron filling, either a paramagnetic or ferromagnetic order. Furthermore, we calculate electron quasiparticle lifetimes, which can be related to electron
spin relaxation and decoherence times, and explain their dependence on the strength of interactions and the electron filling in order to find the sweet spot of parameters where the relaxation time is
maximized. We find that effective exchange processes between the electrons dominate the spin relaxation and decoherence rate. In the second topic of this thesis, we numerically calculate the electron
transport through carbon nanotube based quantum dot devices. We use a master equation’s approach in first order of the tunneling rate to the leads and an extended constant interaction model to model
the carbon nanotube system. This work has been done in collaboration with two experimental groups and we compare their respective experimentally obtained data to our numerical calculations. In both
collaborations striking similarity between the numerical data and the experimental data is found. In the first collaboration transport through a carbon nanotube peapod, i.e, a carbon nanotube filled
with fullerenes, has been measured. We identify a small hybridization between a fullerene molecule and the surrounding carbon nanotube to be of crucial importance for the understanding of the
transport data. In the second collaboration, electron transport through a carbon nanotube rope, i.e., a bundle of carbon nanotubes has been measured. Also here, hybridization between the different
nanotubes plays a crucial role. Furthermore, an external magnetic field is applied, which enables the identification of specific spin states of the compound quantum dot system. This might be
important for future applications of such devices in spin-dependent electronics.
Item Type: Thesis (Dissertation, LMU Munich)
Keywords: physics, Kondo lattice model, carbon nanotubes, electron transport
Subjects: 600 Natural sciences and mathematics
600 Natural sciences and mathematics > 530 Physics
Faculties: Faculty of Physics
Language: English
Date Accepted: 25. March 2011
1. Referee: Schollwöck, Ulrich
Persistent Identifier (URN): urn:nbn:de:bvb:19-129416
MD5 Checksum of the PDF-file: 9e67cf55ce4454e3544429f0c6284129
Signature of the printed copy: 0001/UMC 19367
ID Code: 12941
Deposited On: 13. Apr 2011 12:02
Last Modified: 16. Oct 2012 08:49
Weil group, Weil-Deligne group scheme and conjectural Langlands group
I was reading a series of articles from the Corvallis volume. A couple of questions came to my mind:
1. Why do we need to consider representations of the Weil-Deligne group? That is, what is an example of an irreducible admissible representation of $Gl(n,F)$ which does not correspond to a representation of $W_F$ of dimension $n$? An example for $n=2$ would be of great help.
2. In the setting of the global Langlands conjecture, why does an extension of $W_F$ by $G_a$, or a product of the $W'_{F_v}$, not work?
Thank you.
nt.number-theory langlands-conjectures
Delinge ==> Deligne. – Regenbogen Feb 28 '10 at 3:46
Corrected the spelling. Thanks – Dipramit Majumdar Feb 28 '10 at 3:55
3 Answers
Regarding (1), from the point of view of Galois representations, the point is that continuous Weil group representations on a complex vector space, by their nature, have finite image on inertia.
On the other hand, while a continuous $\ell$-adic Galois representation of $G_{\mathbb Q_p}$ (with $\ell \neq p$ of course) must have finite image on wild inertia, it can have infinite
image on tame inertia. The formalism of Weil--Deligne representations extracts out this possibly infinite image, and encodes it as a nilpotent operator (something that is algebraic, and
doesn't refer to the $\ell$-adic topology, and hence has a chance to be independent of $\ell$).
As for (2): Representations of the Weil group are essentially the same thing as representations of $G_{\mathbb Q}$ which, when restricted to some open subgroup, become abelian. Thus (as
one example) if $E$ is an elliptic curve over $\mathbb Q$ that is not CM, its $\ell$-adic Tate module cannot be explained by a representation of the Weil group (or any simple
modification thereof). Thus neither can the weight 2 modular form to which it corresponds.
In summary: the difference between the global and local situations is that an $\ell$-adic representation of $G_{\mathbb Q_p}$ (or of $G_E$ for any $p$-adic local field) becomes, after a
finite base-change to kill off the action of wild inertia, a tamely ramified representation, which can then be described by two matrices, the image of a lift of Frobenius and the image
of a generator of tame inertia, satisfying a simple commutation relation.
On the other hand, global Galois representations arising from $\ell$-adic cohomology of varieties over number fields are much more profoundly non-abelian.
Added: Let me also address the question about a product of $W'_{F_v}$. Again, it is simplest to think in terms of Galois representations (which roughly correspond to motives, which, one hopes, roughly correspond to automorphic forms).
So one can reinterpret the question as asking: is giving a representation of $G_F$ (for a number field $F$) the same as giving representations of each $G_{F_v}$ (as $v$ ranges over the
places of $F$). Certainly, by Cebotarev, the restriction of the global representation to the local Galois groups will determine it; but it will overdetermine it; so giving a collection
of local representations, it is unlikely that they will combine into a global one. ($G_F$ is very far from being the free product of the $G_{F_v}$, as Cebotarev shows.)
To say something on the automorphic side, imagine writing down a random degree 2 Euler product. You can match this with a formal $q$-expansion, which will be a Hecke eigenform, by
taking Mellin transforms, and with a representation of $GL_2(\mathbb A_F)$, by writing down a corresponding tensor product of unramified representations of the various $G_{F_v}$. But
what chance is there that this object is an automorphic representation? What chance is there that your random formal Hecke eigenform is actually a modular form? What chance is there
that your random Euler product is actually an automorphic $L$-function? Basically none.
You have left out some vital global glue, the same glue which describes the interrelations of all the $G_{F_v}$ inside $G_F$. Teasing out the nature of this glue is at the heart of
proving the conjectured relationship between automorphic forms and motives; its mysterious nature is what makes the theories of automorphic forms, and of Galois representations, so fascinating.
Thanks a lot Matt. – Dipramit Majumdar Feb 28 '10 at 6:54
The answer to your first question would be a Steinberg representation (i.e. under suitable normalizations, the infinite-dimensional subquotient of the induction of $(\chi|\cdot|^{-1/2},\, \chi|\cdot|^{1/2})$). Kudla's article in Motives II is a nice place to see this. I don't have an answer for number two.
I see, since it corresponds to a nontrivial nilpotent operator, it can not come from a representation of the Weil group. Or is there some other argument? Thanks – Dipramit Majumdar Feb 28 '10 at 5:07
(I'm putting an "answer" to clarify Rob's question, and answer Dipramit's question in the comments, because I don't yet have the reputation to comment).
Let's first recall that the L-function of the Steinberg representation $\sigma = \sigma(\chi|\cdot|^{-1/2},\, \chi|\cdot|^{1/2})$ (for $\chi$ an unramified character) is $(1 - \chi(\varpi)q^{-s-1/2})^{-1}$, where $\varpi$ is a uniformizer (Bump shows this in detail in his book). In particular, its reciprocal is a degree-one polynomial in $q^{-s}$.
By the ideas of Bernstein-Zelevinski (described in Kudla's article in "Motives"), $\sigma$ corresponds to the Weil-Deligne representation $\rho' = (\rho,\, V,\, N)$, where $\rho = \chi|\cdot|^{-1/2} \oplus \chi|\cdot|^{1/2}$, and the operator $N$ takes the first summand to the second, and the second summand to $0$.
If we only look at $\rho$, then we see that $V^I = V$ and therefore the $L$-function of $\rho$ would be the reciprocal of a degree-two polynomial. Thus the monodromy operator becomes necessary for the match of $L$-functions: we see that $V^I_N \cong \chi|\cdot|^{1/2}$, which has the desired $L$-function.
This is a good specific example if you like thinking about the match of $L$-functions. More generally, you need to consider Weil-Deligne representations because if $\pi_1$ and $\pi_2$ have the same cuspidal support and correspond to $\rho_1' = (\rho_1,\, N_1,\, V_1)$ and $\rho_2' = (\rho_2,\, N_2,\, V_2)$, then $\rho_1\cong \rho_2$ as Weil representations; this follows from the ideas of Bernstein-Zelevinski.
Actually the Steinberg representation $\sigma$ is ramified of conductor $1$, and its $L$-function equals $(1 - \chi(\varpi)q^{-s-1/2})^{-1}$. In general, let $\pi$ be a local irreducible generic representation of $\mathrm{GL}_n$. Then $L(s,\pi)^{-1}$ is a polynomial of $q^{-s}$ of degree at most $n$, and the degree equals $n$ if and only if $\pi$ is unramified. See the Proposition on page 203 of Jacquet-Piatetski-Shapiro-Shalika: Conducteur des répresentations du groupe linéaire (Math. Ann. 256 (1981), 199-214). – GH from MO Jul 31 '13 at 17:12
Fixed. Thank you for pointing out my error. Apparently my understanding of the Artin conductor is limited, looks like it's time to go back and read Tate's Corvallis article again. As a
side question, my construction depends on the ideas of Bernstein-Zelevinski. Does anyone know of a construction of a "counterexample" that does not rely on these ideas? – John Binder Jul
31 '13 at 18:31
[FOM] On the Strengths of Inaccessible and Mahlo Cardinals
Dmytro Taranovsky dmytro at MIT.EDU
Sat Mar 5 11:21:15 EST 2005
By using weak set theories, one can clarify the strengths of certain cardinals.
Often, a weak set theory + large cardinal is inter-interpretable (that is, there
is a correspondence of models) with a stronger set theory + slightly weaker
cardinals. That large cardinal corresponds to the class
of ordinals of the stronger theory. More precisely,
1. ZFC minus infinity is equiconsistent with rudimentary set theory plus the
axiom of infinity.
2. ZFC minus power set is equiconsistent with rudimentary set theory + there is
an uncountable ordinal.
3. ZFC is equiconsistent with rudimentary set theory + there is an inaccessible cardinal.
4. ZFC + {there is Sigma_n correct inaccessible cardinal}_n is equiconsistent
with rudimentary set theory + there is a Mahlo cardinal.
In general, ZFC + {there is Sigma_n correct large cardinal}_n is
inter-interpretable with rudimentary set theory + there is a regular limit of
stationary many of these cardinals.
Rudimentary set theory is meant to be the weakest set theory in which some basic
things can be done. Levels of Jensen hierarchy for L satisfy it. I am not
sure how strong rudimentary set theory should be--suggestions to that effect
are welcome--but the following version/axiomatization works for the above
theorem: extensionality, foundation, empty set, pairing, union, existence of
transitive closure, existence of the set of all sets with transitive closure
less numerous than a given set, and bounded quantifier separation.
In the theorem, "rudimentary set theory" can be strengthened by extending ZFC
with a stronger "logic", and modifying Sigma_n correct and the replacement
schema to the full expressive power of the logic. Second order logic
corresponds to ZFC minus power set as the weak theory.
More about expressive logics and other topics can be found in my paper:
Dmytro Taranovsky
finite plane
A finite plane (synonym linear space) is the finite (discrete) analogue of planes in more familiar geometries. It is an incidence structure where any two points are incident with exactly one line
(the line is said to “pass through” those points, the points “lie on” the line), and any two lines are incident with at most one point — just like in ordinary planes, lines can be parallel i.e. not
intersect in any point.
A finite plane without parallel lines is known as a projective plane. Another kind of finite plane is an affine plane, which can be obtained from a projective plane by removing one line (and all the
points on it).
An example of a projective plane, that of order $2$, known as the Fano plane (for projective planes, order $q$ means $q+1$ points on each line, $q+1$ lines through each point):
An edge here is represented by a straight line, and the inscribed circle is also an edge. In other words, for a vertex set $\{1,2,3,4,5,6,7\}$, the edges of the Fano plane are $\{1,2,4\}$, $\{2,3,5\}$, $\{3,4,6\}$, $\{4,5,7\}$, $\{5,6,1\}$, $\{6,7,2\}$ and $\{7,1,3\}$.
Notice that the Fano plane is generated by the triple $\{1,2,4\}$ by repeatedly adding $1$ to each entry, modulo $7$. The generating triple has the property that the differences of any two elements,
in either order, are all pairwise different modulo $7$. In general, if we can find a set of $q+1$ of the integers (mod $q^{2}+q+1$) with all pairwise differences distinct, then this gives a cyclic
representation of the finite plane of order $q$.
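A quick computational check of both the generating rule and the distinct-differences property (a Python sketch I added; the names are mine):

from itertools import combinations

# Generate the Fano plane from {1,2,4} by repeatedly adding 1 modulo 7
# (labels run 1..7, so reduce into 1..7 rather than 0..6).
base = [1, 2, 4]
lines = [sorted((x + k - 1) % 7 + 1 for x in base) for k in range(7)]
print(lines)   # 7 lines with 3 points each

# Defining property: every pair of points lies on exactly one line.
for p, q in combinations(range(1, 8), 2):
    assert sum(1 for ln in lines if p in ln and q in ln) == 1

# All pairwise differences of {1,2,4}, in either order, are distinct mod 7.
diffs = [(a - b) % 7 for a in base for b in base if a != b]
assert len(set(diffs)) == len(diffs)   # six distinct nonzero residues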
finite geometry combinatorics
Rosemont, NJ Math Tutor
Find a Rosemont, NJ Math Tutor
...I have had experience teaching students to make presentations for their classes and also in helping adults prepare for career advancements through presentation skills. I spent many years as
the managing director of a very large learning center. During that time I had the opportunity to not only tutor students, but also counsel them on career and college choices.
43 Subjects: including prealgebra, ACT Math, GED, elementary (k-6th)
...Every student can be successful --- we, as educators, need to find the right learning style/method(s) to allow the student to experience success. I live in the Easton/Bethlehem area and will
travel to your home --- preferably in the early evening and/or on the weekends. I raised my son, as a si...
24 Subjects: including algebra 1, prealgebra, ACT Math, geometry
...I believe that individual tutoring sessions should generally not exceed two hours, for the sake of attention and comprehension. I require three-hours' notice for cancellation, and if for some
reason I must cancel, I will do the same. I look forward to working with you, and please feel free to c...
34 Subjects: including algebra 1, algebra 2, TOEFL, ACT Math
I completed my master's in education in 2012 and having this degree has greatly impacted the way I teach. Before this degree, I earned my bachelor's in engineering but switched to teaching
because this is what I do with passion. I started teaching in August 2000 and my unique educational backgroun...
12 Subjects: including calculus, trigonometry, SAT math, ACT Math
...I have a thorough understanding of the expectations of secondary and post-secondary writing and have the resources to create a custom curriculum designed to help students write pieces that
meet their teachers' expectations. My BA and Ed.M degrees are in English Literature and English education, ...
15 Subjects: including SAT math, English, reading, writing
Change the Equation Blog
On Monday, we lamented the fact that a major retailer ran an ad featuring tech innovators who were all white men. That seemed like a lost opportunity to celebrate female and minority role models like
Marc Hannah or Dr. Mae Jemison. "Let's hope we'll soon see a new set of ads that celebrate a more diverse set of pioneers," we concluded.
It turns out that our wish had already been answered. The day before, that same retailer premiered an ad about "Future Innovators" whose inventions include invisible touch screens and soccer balls
that generate power. Most are young women, and one is black. That's certainly a better ratio than in the previous ad. It helps us envision a more diverse STEM workforce in the coming decades.
Keep 'em coming.
Simulating a loaded dice in a constant time
Jerome Humbert
Neat! But if random number generation is expensive, I suggest using only one throw, multiplying the result by the number of columns and then use the integer part to select the column and the
fractional part to select inside the column.
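That one-throw trick is exactly how column-table methods such as the alias method are usually driven; a hypothetical Python sketch (the tables are hard-coded for the die (1/2, 1/3, 1/6) rather than built by the article's setup code):

import random

def sample(prob, alias, u=None):
    """Draw one face of a loaded die in O(1) from a single uniform number.

    prob[i] and alias[i] are precomputed alias-method tables, one
    'column' per face: the integer part of u*n picks the column and
    the fractional part decides within it.
    """
    n = len(prob)
    if u is None:
        u = random.random()
    scaled = u * n
    i = int(scaled)                # which column
    frac = scaled - i              # position inside the column
    return i if frac < prob[i] else alias[i]

prob = [1.0, 1.0, 0.5]    # tables for face probabilities (1/2, 1/3, 1/6)
alias = [0, 1, 0]
counts = [0, 0, 0]
for _ in range(600000):
    counts[sample(prob, alias)] += 1
print([c / 600000 for c in counts])   # roughly [0.5, 0.333, 0.167]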
Re: st: nbreg with fixed effect vs xtnbreg,fe
Re: st: nbreg with fixed effect vs xtnbreg,fe
From Muhammad Anees <anees@aneconomist.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: nbreg with fixed effect vs xtnbreg,fe
Date Wed, 8 Feb 2012 10:50:07 +0500
Also, the online abstract of Guimarães, P. (2008) reads:
In this paper I show that the conditional fixed effects negative
binomial model for count panel data does not control for individual
fixed effects unless a very specific set of assumptions are met. I
also propose a score test to verify whether these assumptions are met.
The full reference for the paper is
Guimarães, P., (2008), The fixed effects negative binomial model
revisited, Economics Letters, 99, pp63–66
It thus indicates that one should take care when choosing the fixed effects
model while using negative binomial regressions.
On Wed, Feb 8, 2012 at 10:33 AM, Richard Williams
<richardwilliams.ndu@gmail.com> wrote:
> At 08:52 PM 2/7/2012, Shikha Sinha wrote:
>> Hi all,
>> I emailed my query to tech support at Stata corp and below is the
>> response;
>> Typically for a fixed effects negative binomial model, you would want to
>> use
>> the -xtnbreg, fe- command. -xtnbreg, fe- is fitting a conditional fixed
>> effects model. When you include panel dummies in -nbreg- command, you are
>> fitting an unconditional fixed effects model. For nonlinear models such
>> as
>> the negative binomial model, the unconditional fixed effects estimator
>> produces inconsistent estimates. This is caused by the incidental
>> parameters
>> problem. See the following references for theoretical aspects on the
>> incidental parameters problem:
>> Greene, William H. "Econometric Analysis". Prentice Hall.
>> Seventh Edition, page 413.
>> Baltagi, Badi "Econometric Analysis of Panel Data".
>> 4th. Edition. John Wiley and Sons LTD.
>> Section 11.1 (pages 237-8).
> Here is the abstract for the Allison & Waterman paper I mentioned before:
> "This paper demonstrates that the conditional negative binomial model for
> panel data, proposed by Hausman, Hall, and Griliches (1984), is not a true
> fixed-effects method. This method which has been implemented in both Stata
> and LIMDEP-does not in fact control for all stable covariates. Three
> alternative methods are explored. A negative multinomial model yields the
> same estimator as the conditional Poisson estimator and hence does not
> provide any additional leverage for dealing with overdispersion. On the
> other hand, a simulation study yields good results from applying an
> unconditional negative binomial regression estimator with dummy variables to
> represent the fixed effects.
> There is no evidence for any incidental parameters bias in the coefficients,
> and downward bias in the standard error estimates can be easily and
> effectively corrected using the deviance statistic. Finally, an approximate
> conditional method is found to perform at about the same level as the
> unconditional estimator."
> And, from the conclusion:
> "The negative binomial model of Hausman, Hall, and Griliches (1984) and its
> associated conditional likelihood estimator does not accomplish what is
> usually desired in a fixed-effects method, the control of all stable
> covariates. That is because the model is based on a regression decomposition
> of the overdispersion parameter rather than the usual regression
> decomposition of the mean. Symptomatic of the problem is that programs that
> implement the conditional estimator have no difficulty estimating an
> intercept or coefficients for time-invariant covariates."
> The empirical examples Allison provides (p. 64 of his book), for which he
> says "None of this makes sense for a true fixed effects estimator" seem
> pretty compelling to me, but I remain open to persuasion or correction.
> Allison, Paul D. and Richard Waterman (2002) "Fixed effects negative
> binomial regression models." In Ross M. Stolzenberg (ed.), Sociological
> Methodology 2002. Oxford: Basil Blackwell.
> Also, Paul Allison, "Fixed effects regression models", Sage, 2009.
> -------------------------------------------
> Richard Williams, Notre Dame Dept of Sociology
> OFFICE: (574)631-6668, (574)631-6463
> HOME: (574)289-5227
> EMAIL: Richard.A.Williams.5@ND.Edu
> WWW: http://www.nd.edu/~rwilliam
Muhammad Anees
Assistant Professor/Programme Coordinator
COMSATS Institute of Information Technology
Attock 43600, Pakistan
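A rough simulation sketch of the unconditional (dummy-variable) negative binomial estimator discussed above, using Python/statsmodels rather than Stata; here sm.NegativeBinomial with panel dummies plays the role of -nbreg- with i.id, the data-generating process is invented for illustration, and to my knowledge statsmodels has no counterpart of -xtnbreg, fe-'s conditional estimator:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_groups, T, beta, disp_alpha = 100, 5, 0.5, 0.5

ids = np.repeat(np.arange(n_groups), T)
x = rng.normal(size=n_groups * T)
alpha_i = np.repeat(rng.normal(size=n_groups), T)   # individual effects
mu = np.exp(alpha_i + beta * x)
lam = rng.gamma(shape=1 / disp_alpha, scale=disp_alpha * mu)  # NB2 as gamma-Poisson
y = rng.poisson(lam)

# Unconditional FE: one dummy per panel, no common intercept.
dummies = (ids[:, None] == np.arange(n_groups)[None, :]).astype(float)
X = np.column_stack([x, dummies])
fit = sm.NegativeBinomial(y, X).fit(method="bfgs", maxiter=500, disp=0)
print(fit.params[0])   # slope on x; compare with the true beta = 0.5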
MathGroup Archive: March 2007 [00483]
Re: Integrate (fix a mistake)
• To: mathgroup at smc.vnet.net
• Subject: [mg74231] Re: Integrate (fix a mistake)
• From: "dimitris" <dimmechan at yahoo.com>
• Date: Thu, 15 Mar 2007 04:57:29 -0500 (EST)
• References: <esls78$q0v$1@smc.vnet.net><et5o76$jor$1@smc.vnet.net>
Let me mention one mistake (and only one, I hope!) in my message:
integrand = f[x]*dx /. x -> ArcSin[Sqrt[u]] /. dx -> D[ArcSin[Sqrt[u]], u] (* change of variable u = Sin[x]^2; f[x] is the integrand from the earlier message *)
Integrate[integrand, {u, 0, Sin[z]^2}, Assumptions -> 0 < z < Pi]
Plot[%, {z, 0, Pi}]
(%% /. z -> Pi) - (%% /. z -> 0)
Simplify[D[%%%, z]] /. z -> x (* check: differentiating should recover the original integrand *)
The transformed integrand and the value of the integral come out as
Log[u]/(2*(1 - u))
(1/12)*(-Pi^2 + 6*PolyLog[2, Cos[z]^2])
Michael Weyrauch wrote:
> Hello,
> Dimitris, this is a marvelous solution to my problem. I really appreciate
> your help. I will now see if I can solve all my other (similar) integrals using the same trick.
> Timing is not really the big issue if I get results in a reasonable amount of time.
> Also the references you cited are quite interesting, because they give some insight
> into what might go on inside Mathematica concerning integration.
> I also agree with you that it is my task as a programmer to "help" Mathematica
> and guide it into the direction I want it to go. Sometimes this is an "art".
> Nevertheless, in the present situation I do not really understand why Mathematica wants
> me to do that rather trivial variable transformation, which is at the heart of your solution.
> The integrand is still a rather complicated rational function of the same order. The form
> of the integrand did not really change
> substantially as it is the case with some other ingenious substitutions one uses in order to
> do some complicated looking integrals "by hand".
> I think the fact that we are forced to such tricks shows that the Mathematica integrator
> is still a bit "immature" in special cases, as also the very interesting article by D. Lichtblau,
> which you cite, seems to indicate. So all this is probably perfectly known to the
> Mathematica developers. And I hope the next Mathematica version has all this "ironed out"??
> Many thanks again, Michael
How do you reconcile classical quantum mechanics with general relativity?
It may, in fact, not be possible to do this at all, however there are a growing number of theoretical physicists who believe they are on the verge of finding a 'Theory of Everything' that does just
The problem is that quantum mechanics is a theory that describes matter and energy in terms of discrete packets of energy called 'quanta', which move unhindered through space. General relativity,
meanwhile, has nothing to say about the structure of matter. It says nothing about how the gravitational field is generated by matter and energy. It is a 'global' theory of space-time, not a 'local'
theory of space-time. One can scarcely imagine two great theories that have less to say to one another than quantum mechanics and general relativity!
Physicists are convinced that the general relativistic treatment of gravity must be replaced by one in which the gravitational force is carried by quanta. The prototypes are the
quantum field theories for the other three forces in nature, which have proven to be very successful in the 'Standard Model'. It is a very vexing challenge, however, to find the right mathematics to
bridge the conceptual gap between classical gravity and space-time on the one hand, and a quantum description of gravity and space-time on the other hand.
Still, in the last 10 years there have been many 'miraculous' theories that have shown that under certain circumstances this gap can be bridged. It remains to see whether any experiments can be
devised to prove which of the many prototypical theories are correct.
Copyright 1997 Dr. Sten Odenwald
Learning to use the applet
Another example: countries
Here we would like to get a 2D-diagram to classify some countries based on the following 3 features:
1. population
2. average income
3. area
see the data file
Description of the applet
The applet starts by plotting the data along the first two measurements. On the right of the image, you have the following commands and displays:
rotate!: To show the principal-components-finding algorithm in action. See how it tries to spread the points by a succession of rotations.
x-axis, y-axis: To decide which axes to see. (PC_1, PC_2,... are the principal components).
variance shown: Displays the quality measure of the displayed orientation (it is maximal when PC_1 and PC_2 are the axes).
show labels, show axes: To clutter or unclutter the picture at will.
show error bars: To show how far each point is from the plane-fit. Equivalently, the algorithm tries to minimize the sum of the squares of the lengths of these bars.
Application to image analysis
Image processing
In satellite imaging, it's easy to get overwhelmed by measurements by imaging at various wavelengths and by repeating the measurements in different conditions. Principal components analysis can be
used to reduce the dimensionality of the data to 3 (to get a red-green-blue image), or to analyze subtle variations by displaying less important components. The setup is:
│objects: │pixels │
│measurements:│intensities at various wavelengths, in different seasons,... │
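In code, the projection the applet searches for is just the span of the top two singular directions of the centered data matrix; a minimal numpy sketch (the names are mine, and I assume the applet's "variance shown" is the usual explained-variance ratio):

import numpy as np

def pca_2d(X):
    """Project rows of X (objects x measurements) onto PC_1 and PC_2."""
    Xc = X - X.mean(axis=0)                      # center each measurement
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    coords = Xc @ Vt[:2].T                       # coordinates on PC_1, PC_2
    shown = (s[:2] ** 2).sum() / (s ** 2).sum()  # the 'variance shown'
    return coords, shown

X = np.random.default_rng(1).normal(size=(26, 25))   # e.g. 26 letters, 25 pixels
coords, shown = pca_2d(X)
print(coords.shape, round(shown, 2))

# Permuting the roles of objects and measurements, as discussed further
# below, amounts to calling pca_2d(X.T).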
Image classification
A different situation is when you have many images and want to classify them. The brute-force approach is to declare that every pixel is a measurement. Principal components analysis will be useful in
cases where similarity can be measured pixel-wise. Such a case is face recognition when the images are controlled (same size, same orientation, same lighting conditions). The setup is:
│objects: │faces │
│measurements:│pixels of the image of the face (the dimensionality is very high) │
Example: the letters of the alphabet
To demonstrate image classification, we will use something simpler. All 26 letters of the alphabet have an acceptable representation on a 5x5 grid as follows:
This gives us 25 measurements (0-1 valued) for each letter. Click on the "rotate!" button below to reduce this 25-dimensional space to 2 dimensions:
see the data file
The variance shown is low (44%), but it's actually very good for 2 dimensions out of 25. We can spot some pairs of similar letters like A and R. It's also fun to see pairs that are far apart: H and I
cannot be confused, neither can X and O (hence their use in the tic-tac-toe game).
In this application, the principal components can be viewed as filters that are applied to the bitmap to extract a feature.
Below are images of some of the filters. White is neutral, blue means a positive contribution and red means a negative contribution.
Permuting the roles of objects and measurements
What happens if we make a mistake and enter the objects as columns instead of as rows? This "mistake" is actually quite an interesting thing to do. Now we have one point per measurement, and we're
trying to map the measurements to find out which ones are positively correlated (are giving similar values when applied to our set of objects).
If making a measurement is expensive, then this trick can be used to spot and eliminate redundant measurements. This is a broad topic and you may want to see this page on Feature Selection.
Here's the same data on letters of the alphabet, but with objects and measurements permuted. Click on the "rotate!" button!
see the data file
The two closest points on the best-fit-plot are a12 and a14 (they overlap), and indeed, a12 == a14 for every letter.
The two furthest points on the best-fit-plot are a21 and a43, and indeed, a21 != a43 for every letter except J, X and Z. In fact these two points are so far apart that a21 and a43 become redundant:
one is practically the opposite of the other!
Mplus Discussion >> Random Time
Anonymous posted on Wednesday, June 02, 2004 - 11:09 am
I'm using v2.14 and I am estimating a growth model where time is based on the day of observation. That calculation works fine. However, I would like to add a second time variable (the value of time
squared) along with the first. In SAS I would do this by including the command "random Intercept time time2." Is there anyway to do this in Mplus.
Linda K. Muthen posted on Wednesday, June 02, 2004 - 12:13 pm
I think you are using the AT command. If so, to add a quadratic component, say
i s1 s2 | y1 y2 y3 y4 AT t1 t2 t3 t4;
This is described on page 40 of the Addendum to the Mplus User's Guide. i is the intercept growth factor, s1 is the linear growth factor, and s2 is the quadratic growth factor.
Anonymous posted on Friday, November 11, 2005 - 12:17 am
Dear Drs Muthen,
I have data on a physical activity measure (continuous) on about 100 people measured annually over 3 years, with the aim to assess changes over time and predictors of it. However, the ages at first
measurement vary enormously (age range 40-80 years). I was hoping to use the SEM-style wide format in Mplus to get all the fit indices etc, but there are many distinct ages. Is there a way to handle
this age variation in wide format?
I have fit a multilevel random slopes model, but the question of time centering and time metric arose, as using age (age-60) as the time metric assumes linearity over the whole age range (which I
don't really want to assume), and produces a correlation between intercept and slope of about -0.80 at age 60, and close to one for more complicated models. I have thought about centering time at the
first measurement for each person to alleviate this, but is there another option in Mplus to reduce this numerical problem? Also, do you know of any references for these sort of decisions with hugely
variable initial ages and only a few relatively closely-spaced times of measurement?
bmuthen posted on Sunday, November 13, 2005 - 10:38 am
If you use the wide, multivariate approach I think you would have to use the multiple-cohort, multiple-group approach where each age category (say 40-50 year olds being one category) corresponds to a
group. In this approach you could also test the key assumption that the same growth model holds across all these quite different ages. But you may not have enough individuals within each group for
this approach. Otherwise, you would have to use a 2-level, long data format, approach where you read in age as a variable in line with conventional multilevel growth modeling. I don't know of any
pertinent references here.
Anonymous posted on Wednesday, October 11, 2006 - 3:49 pm
I would like to use number of months (or years) that a caregiver has been providing care as a time metric. Participants respond to this question at baseline and then respond to additional surveys 3
and 12 months later. The range in months for this variable is 0-288. If I convert this to years and winsorize these data so that each person has a range of 0-10 years providing care, could I then use
categories that correspond to each group (e.g., 0-3 yrs. = 0) as a metric? I would like to use a 2-level growth model. Otherwise, could you suggest an alternative solution?
Bengt O. Muthen posted on Wednesday, October 11, 2006 - 5:46 pm
Your proposal sounds reasonable - using an alternative time metric like you suggest is sometimes more meaningful.
Anonymous posted on Thursday, October 12, 2006 - 9:30 am
Thank-you. As I prepare these data for use in MPlus, how should I assign time scores for each person at each of the 3 measurement occasions? Unless participants move into another group at 12-months,
each person's time score at baseline would remain stable (i.e. 0, 1, or 2) across the occasions.
Linda K. Muthen posted on Thursday, October 12, 2006 - 2:43 pm
If you set the model up in the multivariate framework of Mplus, the time scores are parameters in the model. See Example 9.12 where 0, 1, 2, and 3 are the time scores.
Hanno Petras posted on Thursday, March 15, 2007 - 10:53 am
Dear Linda and Bengt,
what is the correct procedure to compute estimated time specific means when using time as data a la HLM (instead of as a parameter a la Mplus). Thanks.
Bengt O. Muthen posted on Thursday, March 15, 2007 - 4:48 pm
The growth equation gives for example
E(y_ti |x_ti) = alpha_0 + alpha_1*x_ti,
where the alpha's are growth factor means.
Choosing RCOND for ZGELSY and/or a fast SVD solver.
I am trying to solve Ax = b system where A is a square, highly ill-conditioned, unsymmetric, partly dense, partly sparse, complex matrix. The linear system arises out of a physical problem where the
exact solution to the problem is known and thus the accuracy of the solution x can be verified. I have been able to solve this system using ZGELSS with a fixed RCOND = -1.0D0 so that LAPACK uses
machine precision as the SVD threshold. As I cannot predict what the condition number of A is going to be in advance, choosing RCOND = -1.0D0 seemed a good idea and the results with this setting in
ZGELSS are satisfactory. The only problem with ZGELSS is that it is very slow. I was aware that ZGELSY will be faster as it uses QR decomposition but do not know how to choose RCOND for ZGELSY. I did
try ZGELSY for my linear system and found that the results depend greatly on the input value of RCOND. So the questions are:
a) How do I choose RCOND if I do not know the condition number of A in advance for ZGELSY or
b) Alternatively, is there a faster solver than ZGELSS that relies on SVD which could be used instead? I want to stick to SVD as the condition number of A is likely to remain between 10^16 and 10^22
all the time. Some search on the internet indicates that Jacobi SVD (work of Drmac et al) will be faster and more accurate (?) but I could not find the solver based on Jacobi SVD in LAPACK.
Thanks in advance for help/suggestions.
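On (b): LAPACK's ZGELSD (divide-and-conquer SVD) is usually much faster than ZGELSS while remaining SVD-based, so it may be worth trying before anything more exotic. One cheap way to scan drivers and RCOND values is through SciPy, which wraps these routines; a Python sketch with a synthetic stand-in for A:

import numpy as np
from scipy.linalg import lstsq

rng = np.random.default_rng(0)
n = 200
# Toy ill-conditioned complex matrix standing in for the real A.
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
s = np.logspace(0, -18, n)          # singular values spanning ~18 decades
A = (U * s) @ V.conj().T
x_true = rng.normal(size=n) + 1j * rng.normal(size=n)
b = A @ x_true

for driver in ("gelss", "gelsd", "gelsy"):
    for cond in (None, 1e-15, 1e-12):
        x, _, rank, _ = lstsq(A, b, cond=cond, lapack_driver=driver)
        resid = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
        print(driver, cond, rank, f"{resid:.2e}")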
Notes on INS (14): CKN theory
Last updated: 27/Oct/2010
CKN stands for Caffarelli-Kohn-Nirenberg. The theory is about partial regularity of solutions of INS. One should probably add the name of Scheffer, who introduced the concepts that CKN improved upon.
The main result is that the (parabolic) 1-dimensional Hausdorff measure of the (hypothetical) singular set in space-time is zero (which means that, if singular points exist, they indeed form a “very
small” set). The result is obtained in 1982, nearly 30 years ago. No major improvement since then. Basically, the result is optimal, if one uses only “generic” features of INS (quadratic nonlinear
term, viscosity term, …), and not the things which are very specific to INS (but what are the things which are specific to INS and how to use them ?!)
References for this part:
- Original paper of Caffarelli-Kohn-Nirenberg (1982)
- Scheffer
- Paper by Fanghua Lin, Commun. Pure Appl. Math. LI (1998), 241–257. [this paper has lots of typos everywhere which make it a bit annoying -- I wonder why the author/editors were not more careful --
but it contains a very nice proof]
- Lecture notes by Seregin (2010): Section 3 (called epsilon-regularity)
- Lecture notes by James Robinson (Campinas 2010) [these lecture notes don't contain the proof of the main theorem, only some parts of it]
- Lecture notes by Gallavotti (2008 ?) [complicated, even in the notations; uses inequalities which don't look very natural ?!] Gallavotti has also written a book on fluid dynamics.
- Paper by Ladyzhenskaya & Seregin (1999): contains some results similar to the ones in Lin’s paper (?). Ref: Ladyzhenskaya, O. A., Seregin, G. A., On partial regularity of suitable weak solutions to
the three-dimensional Navier-Stokes equations, J. math. fluid mech., 1(1999), pp. 356-387.
- Paper by Alexis Vasseur ? [don't think it contains anything conceptually new ?]
- papers by Igor Kukavica: contains some improvements of CKN results (as mentioned in the lecture notes by Robinson)
Regular point. A point $(t,x)$ is regular if $u$ is essentially bounded in a neighborhood of $(t,x)$. Otherwise it’s called singular.
(Parabolic) Hausdorff measure. For a set $X$ in space-time and $k \geq 0$, define $k$-dim Hausdorff measure of $X$ to be:
$P^k(X) = \lim_{\delta \to 0+} P^k_\delta(X)$, where
$P^k_\delta (X) = \inf \{ \sum_i r_i^k: X \subset \cup_i Q^*_{r_i} (t_i, x_i), \ r_i < \delta\}$
and $Q^*_{r} (t, x) = ]t - r^2, t+r^2[ \times B_r(x)$, $Q_{r} (t, x) = ]t - r^2, t[ \times B_r(x)$
Suitable weak solution: is a weak solution $u$ such that the pressure $p \in L^{5/3}(]0,T[ \times \Omega)$ and satisfies a local form of the energy inequality: If we assume that all terms are smooth
and take the inner product of INS with $u\phi$, where $\phi$ is some smooth scalar cut-off function, we obtain:
$(1/2)(d/dt)\int |u|^2 \phi - (1/2)\int |u|^2 \phi_t + \int |\nabla u|^2 \phi - (1/2)\int |u|^2 \Delta \phi - (1/2)\int |u|^2 (u\cdot\nabla) \phi - \int p\,(u\cdot\nabla) \phi = 0$
If we integrate in time we obtain (the local form of the energy inequality):
$\int |u|^2 \phi\,\big|_t + 2\int\!\!\int |\nabla u|^2 \phi \leq \int\!\!\int \big[|u|^2 (\phi_t + \Delta \phi) + (|u|^2 + 2p)\,(u\cdot\nabla) \phi\big]$
Suitable solutions satisfy the above inequality for any cutoff function $\phi$ with compact suport in $]0,T[ \times \Omega$.
Another (equivalent, and probably simpler) to express the local energy inequality, also called generalized energy inequality, is as follows:
$2\int_{Q}|\nabla v|^2\phi \,dx\, dt \leq \int_{Q}|v|^2(\phi_t +\Delta \phi)\, dx \,dt + \int_{Q}(|v|^2 + 2P)\, v\cdot\nabla \phi \,dx\, dt$
for all non-negative cutoff function $\phi \in C^\infty(Q)$, where $Q$ is the direct product of an interval of time with a domain in space. (The term $\int |u|^2 \phi|_t dx$ in the first formulation
of the inequality vanishes here, because we take $t=T$ and $\phi(T,x) = 0$). However, the first formulation of generalized energy inequality is probably more useful.
Fact (CKN): Suitable weak solutions do exist. (Proved in the appendix of CKN paper).
Lin’s proof of CKN (1998)
We will play with the following 4 local quantities:
$A(r) = \sup_{-r^2 \leq t \leq 0} r^{-1} \int_{\{t\} \times B_r}|v|^2 dx$ — max energy quantity in ball $B_r$ for time $t \in [-r^2,0]$,
$B(r) = r^{-1} \int_{Q_r}|\nabla v|^2 dxdt$ – amount of local energy dissipation,
$C(r) = r^{-2} \int_{Q_r}|v|^3 dxdt$ – local $L^3$ norm of velocity,
$D(r) = r^{-2} \int_{Q_r}|P|^{3/2} dxdt$ – local $L^{3/2}$ norm of pressure
The powers of $r$ in the above quantities are to normalize things, to make them invariant with respect to scaling: $x = rx_1, v = v_1/r, t = r^2t_1, P = P_1/r^2$, then the INS equation remains
invariant, and $A(r)$ becomes $A(1)$ and so on.
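For instance, checking the normalization of $A$ under this scaling (so that $v(t,x) = r^{-1} v_1(t/r^2, x/r)$ and $dx = r^3 dx_1$):

$A(r) = \sup_{-r^2 \leq t \leq 0} r^{-1} \int_{B_r} |v|^2 dx = \sup_{-1 \leq t_1 \leq 0} r^{-1} \cdot r^{-2} \cdot r^3 \int_{B_1} |v_1|^2 dx_1 = \sup_{-1 \leq t_1 \leq 0} \int_{B_1} |v_1|^2 dx_1$,

which is $A(1)$ computed for $(v_1, P_1)$; the quantities $B$, $C$, $D$ are checked the same way.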
Lin’s Theorem 1. (Suitable solution). Let $(v_n, P_n)$ be a sequence of weak solutions in $Q_1$ such that, for some positive constants $E, E_0, E_1 < \infty$ one has:
a) $\int_{\{t\} \times B_1}|v_n|^2 dx \leq E_0$ for a.e. $-1 < t <0$,
b) $\int_{Q_1} |abla v_n|^2 dx dt \leq E_1$,
c) $\int_{Q_1} |P_n|^{3/2} dx dt \leq E$, and
d) $(v_n,P_n)$ satisfies the generalized energy inequality for all $n$.
Suppose that $(v,P)$ is the weak limit of $(v_n,P_n)$. Then $(v,P)$ is a weak suitable solution of INS.
Lin’s Theorem 2 (Holder continuity). Assume that $(v,P)$ is a suitable solution which satisfies
$\int_{Q_1} (|v|^3 + |P|^{3/2}) dx dt < \epsilon_0$ for some appropriate small positive constant $\epsilon_0$. Then $v$ is Holder-continuous (with some Holder exponent $\alpha > 0$) in some $Q_r$ ($0
< r < 1)$.
Lin’s Theorem 3 (local regularity criterion). There is a positive constant $\epsilon_0$ such that if a suitable weak solution satisfies
$\limsup_{r \to 0} B(r) \leq \epsilon_0$
then there are $\theta_0, r_0 \in ]0,1[$ such that either
$A^{3/2}(\theta_0 r) + D^2 (\theta_0 r) \leq (1/2) (A^{3/2} (r) + D^2(r))$
or $A^{3/2} (r) + D^2(r) \leq \epsilon_1 << 1$
where $0 < r < r_0$
Remark. Lin’s Theorem 3 implies that, starting at a point in space-time which satisfies the condition of the theorem, we can choose a small $Q_r$ centered at it which satisfy the conditions of Lin’s
Theorem 2, which implies that the point is regular. In other words, any point which satisfies the condition of Theorem 3 is regular. (The set of points which don’t satisfy the condition of Theorem 3
is of 1-dim Hausdorff measure 0, and so we get the CKN theorem). The proof of Lin's Theorem 3 is based on the following lemmas and the generalized energy inequality, which together allow one to
"control" the quantities $A,B,C,D$ starting from $\limsup_{r \to 0} B(r) \leq \epsilon_0$.
Lin’s Lemma 1 (Lin’s interpolation inequality). There is a constant $C$ such that for any $v$ and any $0 < r \leq \rho$ we have:
$C(r) \leq C[(r/\rho)^3 A^{3/2}(\rho) + (\rho/r)^3 A^{3/4}(\rho) B^{3/4} (\rho)]$
Remark. By this Lin’s Lemma 2, we can bound $C(r)$ by $A(\rho), B(\rho)$, or, assuming that $B(\rho)$ is very small, bound $C(r)$ by $(r/\rho)^3 A(\rho)$. This
Lemma is a modification of the following Sobolev interpolation inequality:
Sobolev inequality. For $v \in H^1(B_r)$ (in 3-dimensional space) one has:
$\int_{B_r}|v|^q dx \leq c\left(\int_{B_r}|\nabla v|^2 dx\right)^a \left(\int_{B_r}|v|^2 dx\right)^{q/2 -a} + \frac{c}{r^{2a}}\left(\int_{B_r}|v|^2 dx\right)^{q/2}$
for all $2 \leq q \leq 6, a = 3(q-2)/4$.
Lin’s Lemma 2. Let $(v,P)$ be a weak solution in $Q_1$ (with the usual assumptions on $v$). Then $P \in L^{5/3}(Q_1)$.
Lin’s Lemma 3. Suppose that $(v,P)$ is a suitable weak solution such that $\int_{Q_1} (|v|^3 + |P|^{3/2}) dx dt \leq \epsilon_0$ for some sufficiently small $\epsilon_0$. Then
$\theta^{-5} \int_{Q_\theta} \theta^{-\alpha_0}[|v-v_\theta|^3 + |P - P_\theta(t)|^{3/2}] dx dt \leq \frac{1}{2} \int_{Q_1} [|v|^3 + |P |^{3/2}] dx dt$
for some positive constants $\theta, \alpha_0 \in ]0, 1/2[$, where $v_\theta$ is the mean value of $v$ in $Q_\theta$ and $P_\theta(t)$ is the mean value of $P$ in $\{t\} \times B_\theta$.
Lin’s Lemma 4. Let $(v,P)$ be a suitable weak solution in $Q_1$. Then, for almost all $t \in ]-1/2,0[$ one has
$\theta^{-3} \int_{\{t\} \times B_\theta}|P|^{3/2}dx \leq C_{\theta_0} \int_{\{t\} \times B_1}|v - \bar{v}|^{3/2}dx + C_0\int_{\{t\} \times B_1}|P|^{3/2}dx$
for all $\theta \in ]\theta_0, 1/2[$, where $C_0$ is a constant which does not depend on the choice of $\theta_0$.
Remark & proof. In Lin’s paper, the term $\theta^{-3}$ is completely missing in his lemma (apparently a typo), which makes the lemma senseless. The proof of Lin’s Lemma 4 is relatively simple:
decompose $P$ as the sum of a harmonic function and a function with the same Laplacian as $P$ but which is zero on the boundary of some appropriate $B_\rho$, $1/2 < \rho < 1$. Bound the first function to the power 3/2 by its value on $B_\rho$ using subharmonicity, and the second function by the Calderon-Zygmund inequality.
Lin’s Lemma 5. For any $r \in (\theta_0\rho, \rho/2), \rho \leq 1$, one has:
$D(r) \leq C_{\theta_0} \frac{1}{\rho^2} \int_{Q_ \rho} |v - v_\rho|^3 dx dt + C_0 \frac{r}{\rho} D(\rho)$
Remark: Lin’s Lemma 5 follows directly from Lin’s Lemma 4, and it allows one to bound $D(r)$ by $C(\rho)$ and $\frac{r}{\rho} D(\rho)$ (note the coefficient $r/\rho$, which can be chosen very small
when $r$ is small).
Proof of Lin’s Lemma 3. Suppose that the conclusion of Lin’s Lemma 3 is false. Then there is a sequence $(v_k, P_k)$ of suitable weak solutions with $\|v_k\|_{L^3} + \|P_k\|_{L^{3/2}} = \epsilon_k \to 0$ as $k \to \infty$ which does not satisfy the above property. Let $(u_k,\tilde{P}_k) = (v_k/\epsilon_k, P_k/\epsilon_k)$; then $(u_k,\tilde{P}_k)$ is a suitable weak solution of the equation
$\partial u_k / \partial t + \epsilon_k u_k \cdot \nabla u_k + \nabla \tilde{P}_k = \Delta u_k$.
One also notices that
$\Delta \tilde{P}_k = -\epsilon_k \sum_{i,j} (\partial u_k^i/\partial x_j)(\partial u_k^j/\partial x_i)$
It follows from the generalized energy inequality that the $u_k$‘s lie in a bounded set of $L^\infty([-1,0], L^2_{loc}(B_1)) \cap L^2([-1,0], H^1_{loc}(B_1))$, and hence they lie in a bounded set of
$L^{10/3}([-1,0]; L^{10/3}_{loc}(B_1))$ (by an interpolation inequality).
By taking a subsequence, we may assume that $(u_k,\tilde{P_k})$ converge weakly to $(u,P)$ which satisfy:
$\partial u/\partial t + \nabla P = \Delta u$, $\operatorname{div} u = 0$, $\int_{Q_1}|u|^3 dx dt \leq 1$, $\int_{Q_1}|P|^{3/2} dx dt \leq 1$
A simple estimate for the Stokes equation (i.e. the linearized Navier-Stokes equation, without the quadratic term in velocity; see Seregin’s lecture notes Section 2) shows that $u,P$ are smooth in the spatial variable, and Holder-continuous in the time variable with, say, exponent $2 \alpha_0$. Thus, for a suitable $\theta \in ]0, 1/2[$ one has:
$\theta^{-5} \int_{Q_\theta} |u-u_\theta|^3 dx dt \leq \theta^{\alpha_0}/4$
(the number 4 in the above inequality is a bit arbitrary; in fact we can choose any number we like). We can also assume that $u_k \to u$ strongly in $L^3([-1,0],L^3_{loc}(B_1))$. Hence we have:
$\theta^{-5} \int_{Q_\theta} |u_k-u_{k,\theta}|^3 dx dt \leq \theta^{\alpha_0}/3$ for all $k$ sufficiently large.
Next consider $\tilde{P_k}$. Decompose it as: $\tilde P_k = h_k + g_k$ in $]0,1[ \times B_{2/3}$, where
$\Delta g_k = \Delta \tilde{P_k}$ in $B_{2/3}$ and $g_k = 0$ on $\partial B_{2/3}$ (hence $h_k$ is harmonic in $B_{2/3}$). Using the fact that $h_k$ is harmonic to bound it, and the Calderon-Zygmund inequality to bound $g_k$ via $u_k$, we get (see Lin’s paper for details):
$\theta^{-5} \int_{Q_\theta} |\tilde{P}_k- \tilde{P}_{k,\theta}|^{3/2} dx dt \leq \theta^{\alpha_0}/3$ for all $k$ sufficiently large.
Summing this inequality with the previous one for $|u_k-u_{k,\theta}|$, we get a contradiction with the beginning assumption. Thus the lemma is proven.
Proof of Lin’s Theorem 2. We will use Lin’s Lemma 3. Let $(v,P)$ be a weak solution such that $\int_{Q_1} (|v|^3 + |P|^{3/2}) dx dt \leq \epsilon_0$. Put
$v_1(t,x) = \frac{v - v_\theta}{\theta^{\alpha_0/3}}(\theta^2 t, \theta x)$,
$P_1(t,x) = \theta^{1-\alpha_0/3} (P(\theta^2 t, \theta x) - P_\theta(t))$.
Then $(v_1,P_1)$ is a suitable weak solution of
$\partial v_1 / \partial t + \theta (v_\theta + \theta^{\alpha_0/3} v_1)\cdot \nabla v_1 + \nabla P_1 = \Delta v_1$ in $Q_1$
Moreover, Lin’s Lemma 3 implies that
$\int_{Q_1} (|v_1|^3 + |P_1|^{3/2}) dx dt \leq \epsilon_0/2$
Now repeating the same arguments as in the proof of Lin’s Lemma 3 (with minor modifications), we can conclude that
$\theta^{-5} \int_{Q_\theta} \theta^{-\alpha_0}[|v_1-v_{1,\theta}|^3 + |P_1 - P_{1,\theta}(t)|^{3/2}] dx dt \leq \frac{1}{2} \int_{Q_1} [|v_1|^3 + |P_1|^{3/2}] dx dt \leq \epsilon_0/4$
By a simple iteration, we then conclude that
$r^{-5} \int_{Q_r} |v - v_r|^3 dx dt \leq C \epsilon_0 r^{\alpha_0}$
for all $r \in ]0,1/2[$. Thus $v$ is Holder-continuous in $(t,x)$ (by the Campanato lemma on Holder continuity; see Robinson’s lecture notes, or e.g. the book: F. Schulz, Regularity for quasilinear elliptic systems and Monge-Ampere equations in two dimensions, Lecture Notes in Maths, vol. 1445, Chapter 1).
Various Remarks
- Scheffer (Commun. Math. Phys., 1977) proved that the Hausdorff dimension of the singular set does not exceed 2.
|
{"url":"http://zung.zetamu.net/2010/10/notes-on-ins-14/","timestamp":"2014-04-21T07:04:25Z","content_type":null,"content_length":"153795","record_id":"<urn:uuid:eaaa64a3-13c8-4830-91e5-4f0b6f35c0fc>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
|
University Calculus, Early Transcendentals, Multivariable
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help
|
{"url":"http://www.knetbooks.com/university-calculus-early-transcendentals/bk/9780321694607","timestamp":"2014-04-16T23:14:50Z","content_type":null,"content_length":"30416","record_id":"<urn:uuid:52221345-32c6-43b7-b5f8-9c31ed86ab3d>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Arithmetic Tutorials
by Douglas W. Jones
THE UNIVERSITY OF IOWA Department of Computer Science
These tutorials are characterized by an interest in doing arithmetic on machines with just binary add, subtract, logical and shift operators, making no use of special hardware support for such
complex operations as BCD arithmetic or multiplication and division. While some of these techniques are old, they remain relevant today.
|
{"url":"http://homepage.cs.uiowa.edu/~jones/bcd/","timestamp":"2014-04-19T09:23:35Z","content_type":null,"content_length":"4016","record_id":"<urn:uuid:8a2b9bf3-cba8-47e3-9ed2-8742c5183641>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A New Approach to the Problem of Scattering of Water Waves by Vertical Barriers
Chakrabarti, A and Vijayabharathi, L (1992) A New Approach to the Problem of Scattering of Water Waves by Vertical Barriers. In: Journal of Applied Mathematics and Mechanics / Zeitschrift für
Angewandte Mathematik und Mechanik (ZAMM), 72 (9). pp. 415-423.
Full text not available from this repository.
Using a modified Green's function technique the two well-known basic problems of scattering of surface water waves by vertical barriers are reduced to the problem of solving a pair of uncoupled
integral equations involving the “jump” and “sum” of the limiting values of the velocity potential on the two sides of the barriers in each case. These integral equations are then solved, in closed
form, by the aid of an integral transform technique involving a general trigonometric kernel as applicable to the problems associated with a radiation condition.
|
{"url":"http://eprints.iisc.ernet.in/35068/","timestamp":"2014-04-24T08:02:45Z","content_type":null,"content_length":"18600","record_id":"<urn:uuid:1d9e6eec-b2fb-4d82-8fd5-1d109b8b216f>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Golf: Selection from sets (Choose)
Since there was just a golf for factorials, I figured that doing one for the number of ways to select N objects from a set of M objects without repetition might be appropriate.
Basically, if I have a set of 4 cards, how many ways can I select a hand of 1 card from the set without repeating myself? The answer is obviously 4. Now if I have a hand size of 2 how many ways are
there? The answer is 6, but it is less obvious.
The general solution is defined by the function:
Choose(M, N) = M! / (N! * (M - N)!)
Where M is the size of the set and N is the number of cards to select. And M! is the factorial of M. See
Golf: Factorials
for more info.
The following are test cases that you can use:
│M │N │ Answer │ Notes │
│52│5 │2598960 │Number of 5 card hands in a deck of 52 cards │
│52│7 │133784560 │Number of 7 card hands in a deck of 52 cards │
│52│13│635013559600│Number of 13 card hands in a deck of 52 cards │
│52│52│1 │Number of ways to select a hand size of 52 from a 52 card deck │
The interface for the resulting code should be:
print c($m, $n);
If you want to define a factorial subroutine that should be included in the size of the code.
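For reference (not a golf entry), an ungolfed Python version for checking submissions against the test cases:

from math import factorial

def c(m, n):
    # Choose(M, N) = M! / (N! * (M - N)!)
    return factorial(m) // (factorial(n) * factorial(m - n))

print(c(52, 5))    # 2598960
print(c(52, 7))    # 133784560
print(c(52, 13))   # 635013559600
print(c(52, 52))   # 1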
|
{"url":"http://www.perlmonks.org/?node_id=83013","timestamp":"2014-04-20T21:34:27Z","content_type":null,"content_length":"24858","record_id":"<urn:uuid:8a22b4f2-03fc-4ffa-b4bd-1b51c844307a>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cosine Functions: Altering dimensions/scaling
October 20th 2008, 08:33 AM #1
Oct 2008
Cosine Functions: Altering dimensions/scaling
a[cos(bx + c)] + d
There are two cosine functions. One is higher than the other. There is a fixed domain such as -10 < x < 10 so that the area between the curves creates an image.
How do I, for example, double the dimensions of this image? And how can I play around with the variables of each curve to create any image size that I want?
any guess?
a controls the height of the function
b controls the period, or how long it takes to complete a cycle
c moves the graph to the left or right
d moves the graph up or down.
Yes, thank you. Now I need to figure out how to double the size of an image. Let me give you an example:
where the domain is 0 to 4.
This creates like a tilted 'N' shape between the two curves. How can I double this image?
So we cannot change the domain or can we?
You can change the domain if you are trying to double the size of the image. I assume the domain will be twice as big(?) Sorry for not being clear
Then double the image = double the height and double the width? Width in this case is the period. Double a and double b.
Woops, to double the period you need to halve b.
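A quick way to see the rule in action is to plot a made-up pair of curves and a doubled version. The parameters below are arbitrary; to double the image, double a (and any vertical shift d) for the height, halve b to double the period, and double the domain:

import numpy as np
import matplotlib.pyplot as plt

def curve(x, a, b, c=0.0, d=0.0):
    return a * np.cos(b * x + c) + d

x1 = np.linspace(0, 4, 400)   # original domain 0..4
x2 = np.linspace(0, 8, 400)   # doubled domain 0..8

fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True)
ax1.fill_between(x1, curve(x1, 1, np.pi/2, d=1), curve(x1, 2, np.pi/2, d=3))
ax2.fill_between(x2, curve(x2, 2, np.pi/4, d=2), curve(x2, 4, np.pi/4, d=6))
ax1.set_title("original")
ax2.set_title("doubled")
plt.show()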
|
{"url":"http://mathhelpforum.com/trigonometry/54711-cosine-functions-altering-dimensions-scaling.html","timestamp":"2014-04-21T13:32:55Z","content_type":null,"content_length":"45511","record_id":"<urn:uuid:91108d09-5c1a-4785-83e4-cb9f5e3912f6>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
|
4x4 Magic Square Solver
I vaguely remember hearing about Perplex City when it was first launched, but I was too caught up in just about everything else to take too much notice. I do remember thinking that a worldwide puzzle
/scavenger hunt game with an online component sounded right up my alley, but I was disappointed that the "crossover into real-life" events were centered in a country I had never set foot in.
A number of times since then, I've been reminded of the game's existence, most recently when I heard that Perplex City would be having its first official U.S. event right here in San Francisco...on a
day when I already had obligations. In spite of my inability to attend the event, there's been a resurgence in my interest, and last Friday while in Berkeley for a concert I picked up a few packs of
"PC" cards from Games of Berkeley.
Whoo boy, the good times are a-startin'. I love myself a good mental workout, and the Perplex City cards provide just that in diverse forms and at varying intensity. From pattern-matching to
pop-culture knowledge, logic puzzles, physics problems, political trivia all abound.
Probably my favorite aspect of Perplex City problem-solving so far is scripting solutions to some of the more complicated puzzles. When I was working on a solution for card #098 'Magic Square', I
came up with a script which can be used to solve any 4 by 4 magic square, where the rows, columns, and diagonals all add to the same number.
My first attempt was far less than ideal: it randomly arranged the 16 numbers and then tested to see if everything added up properly. When I let it run for ten minutes at a time, it would run through
about 12-13 million combinations and not come up with a single solution.
My second attempt used iteration to go through possibilites in a fixed order and guaranteed me a solution eventually...within the next 56 years (seriously, I calculated), if I let the script run
constantly, it would check every possible arrangement of numbers for 'magic squareness'.
My third attempt is the one I finally found success with. Essentially, it's a modified version of the second script where I run validity tests incrementally instead of all at once after a square is
constructed. The spaces within a square are filled in a spiral pattern and rows, columns, and diagonals are tested the moment they're testable, which results in maximum efficiency.
Where the first script would have required hundreds or possibly thousands of years, and the second script nearly a lifetime, my third script only took about 23 minutes and 35 seconds to find all 924
possible solutions for a 4x4 magic square.
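The incremental-testing idea is easy to reproduce. Below is a minimal Python sketch of it, filling row by row rather than in the spiral order described, and with no pre-filled starting values, so it is not the author's script:

def solve(square, used, pos, total=34):
    # Backtracking fill of a 4x4 square with 1..16; each row, column and
    # diagonal is tested the moment it is complete, as described above.
    if pos == 16:
        yield [square[i*4:(i+1)*4] for i in range(4)]
        return
    for n in range(1, 17):
        if used[n]:
            continue
        square[pos] = n
        used[n] = True
        if still_valid(square, pos // 4, pos % 4, total):
            for sol in solve(square, used, pos + 1, total):
                yield sol
        used[n] = False
        square[pos] = 0

def still_valid(sq, r, c, total):
    if c == 3 and sum(sq[r*4:r*4+4]) != total:          # row r just completed
        return False
    if r == 3 and sum(sq[c::4]) != total:               # column c just completed
        return False
    if r == 3 and c == 3 and sum(sq[0::5]) != total:    # main diagonal
        return False
    if r == 3 and c == 0 and sum(sq[3:13:3]) != total:  # anti-diagonal
        return False
    return True

print(next(solve([0]*16, [False]*17, 0)))  # first solution found; enumerating all takes longer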
As it turns out, there are 54 unique solutions to the Perplex City Magic Square card.
What I'm providing you is an easy way to solve any 4x4 magic square puzzle using the numbers 1-16 where the desired sum is 34 - your most typical 4x4 magic square. Just enter the numbers you're
starting with into the square to the left and click the button to generate a list of all possible solutions. Note that you'll need to enter at least two numbers, else the script is sure to exceed the
30-second maximum execution time provided by my host (and even then, I can't promise this machine will be fast enough to return all solutions).
If you find any discrepancies, duplications, or omissions, please let me know by leaving a comment on Transient Savant.
For performance reasons, you must enter at least 2 starting values.
|
{"url":"http://quasistoic.org/fun/magicsquare/","timestamp":"2014-04-18T05:33:09Z","content_type":null,"content_length":"6577","record_id":"<urn:uuid:08dd4eaa-755a-49c9-9c84-f7c7cdac80d0>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Maximum Distinguishing Number of a Group
Let $G$ be a group acting faithfully on a set $X$. The distinguishing number of the action of $G$ on $X$, denoted $D_G(X)$, is the smallest number of colors such that there exists a coloring of $X$
where no nontrivial group element induces a color-preserving permutation of $X$. In this paper, we show that if $G$ is nilpotent of class $c$ or supersolvable of length $c$ then $G$ always acts with
distinguishing number at most $c+1$. We obtain that all metacyclic groups act with distinguishing number at most 3; these include all groups of squarefree order. We also prove that the distinguishing
number of the action of the general linear group $GL_n(K)$ over a field $K$ on the vector space $K^n$ is 2 if $K$ has at least $n+1$ elements.
|
{"url":"http://www.emis.de/journals/EJC/ojs/index.php/eljc/article/view/v13i1r70","timestamp":"2014-04-19T00:09:58Z","content_type":null,"content_length":"13702","record_id":"<urn:uuid:049c9a88-db32-4627-8dfd-280e3a1ec0b1>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: Extending my question. Was: The relational model and relational algebra - why did SQL become the industry standard?
From: Mikito Harakiri <mikharakiri_at_ywho.com> Date: Mon, 17 Feb 2003 12:03:57 -0800 Message-ID: <ckb4a.14$7H6.123@news.oracle.com>
"Paul" <pbrazier_at_cosmos-uk.co.uk> wrote in message news:51d64140.0302170330.15d2a98f_at_posting.google.com...
> > > What's the interpretation given to a table with duplicate rows?
> So what is it based on? What is the interpretation?
"Partial" or "Weighted" predicates. Each predicate is weighted with a number, so that we could do some ariphmetics upon predicates.
For example, the fact:
in the multiset model tells the end user that there is *at least* one milk container. In the set model, it tells the end user that there is *exactly* one milk container.
> But isn't there a distinction between the abstract number itself and
> any given representation of it?
Consider a vector from linear algebra. An abstract vector. Then, according to your logic, you would insist on introducing a special domain for it, right? On the other hand, it's very convenient to dismember a vector into components and consider vector operations as aggregate queries.
Representation is just a view of an abstract entity! A user should be allowed to freely transform one representation into another. That transformation should be explicit, i.e. defined in a
declarative query language.
> OK suppose I have an employee "relation" which is a multiset.
> I have two employees called John Smith, in the same dept on the same
> salary.
> So my multi-relation contains two outwardly identical tuples:
> ("John Smith", 10, 20000)
> ("John Smith", 10, 20000)
> Now one of the John Smiths has a sex-change and becomes Jane Smith.
> How does the user update only one of the rows?
> Surely it's impossible because the two rows are only distinguished
> internally?
The fact
("John Smith", 10, 20000)
in multiset theory is not a complete predicate. Therefore, we can't just blindly modify it.
The <name, dept> is obviously a unique key. The question should be how do we impose constraints in the multiset theory.
> Can't multisets always be defined in terms of sets?
> So you could for example define a multiset [a,a,b,c,d,d,d]
> as the set {(a,1),(a,2),(b,3),(c,4),(d,5),(d,6),(d,7)}
> (where (x,y) denotes an ordered pair which you could define as the set
> {{x},{x,y}} if you wanted)
The biggest problem with (x,y)={{x},{x,y}} definition is that it's not associative:
<a,<b,c>> != <<a,b>,c>
Therefore, we can't consistently define an ordered tuple with 3 components. This is why the above definition is just a toy that is never used in practice. Once again, Set Reduction doesn't work!
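As a quick concrete check (Python frozensets stand in for sets here, purely as an illustration):

def kpair(x, y):
    # Kuratowski pair (x, y) = {{x}, {x, y}}
    return frozenset([frozenset([x]), frozenset([x, y])])

a, b, c = 1, 2, 3
# The two ways of grouping a "triple" out of nested pairs give
# different sets, i.e. the construction is not associative.
assert kpair(a, kpair(b, c)) != kpair(kpair(a, b), c)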
> or maybe {(a,2),(b,1),(c,1),(d,3)} would be better because it doesn't
> add extra information (sequential order). Which is really the same as
> adding the extra "count" column I guess.
In multiset theory we are studying why
are equivalent. In the set theory, however, there is nothing special about these 2 representations.
> So multisets really just are syntactic shorthand. Now I suppose
> ultimately everything is syntactic shorthand for set theory :) but I'm
> not convinced that at the logical level the advantages of multisets
> outweigh the disadvantages. Quite possibly it might be different for
> the physical level though.
My problem is that I'm so frequently blurring the distinction between logical and physical that the distinction of logical and physical levels is no longer a driving principle. It seems to be a price to pay for a more powerful computation model. Could this model be reasonably "declarative", that is the question.
> Well this is straying off-topic I guess since databases are
> necessarily finite so a suitably large finite set of integers would
> suffice.
> But what if you wanted a multiset of even greater cardinality than the
> reals? e.g. the set of all subsets of the reals?
No, I rather meant that nonnegative integer set is so obviously algebraically nonclosed that it might ultimately show up. Cardinalities and powersets in the set theory -- "set of integer sets" -- are
notoriously useless for anything more practical than measuring the length of Turing tape. Received on Mon Feb 17 2003 - 14:03:57 CST
|
{"url":"http://www.orafaq.com/usenet/comp.databases.theory/2003/02/17/0234.htm","timestamp":"2014-04-18T08:28:02Z","content_type":null,"content_length":"13116","record_id":"<urn:uuid:0c53e4c1-4bde-4242-8563-2831da91a2e2>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Here's the question you clicked on:
anyone help?
• one year ago
{"url":"http://openstudy.com/updates/51285aa4e4b0dbff5b3d8673","timestamp":"2014-04-17T16:18:20Z","content_type":null,"content_length":"39869","record_id":"<urn:uuid:e16ed904-b47e-40ec-a48a-a69c24eb364c>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Superposition Example
The circuit shown below provides an illustration of the principle of superposition. The values of the voltage source voltage and current source current and of both resistances can be adjusted using
the scrollbars. Also, both of the sources can be removed form the circuit and put back into the circuit by left-clicking the source with the mouse.
The voltage source voltage and current source current are the inputs to this circuit. The voltage and current measured by the meters are the outputs or responses of the circuit. The principle of superposition says that the response of a linear circuit to several inputs working together is equal to the sum of the responses of that circuit to the inputs working separately.
In this example, let
1. I and V denote the current and voltage measured by the meters when both the voltage source and the current source are in the circuit. That is, I and V are both responses to V[s] and I[s] working together.
2. I[1] and V[1] denote the current and voltage measured by the meters when the voltage source, but not the current source, is in the circuit. That is, I[1] and V[1] are both responses to V[s] working alone.
3. I[2] and V[2] denote the current and voltage measured by the meters when the current source, but not the voltage source, is in the circuit. That is, I[2] and V[2] are both responses to I[s] working alone.
The principle of superposition tells us that I = I[1] + I[2] and V = V[1] + V[2].
For example, use the scrollbars to set I[s]= 4 A, V[s] = 20 V, R[1]=10 Ohms and R[2]=30 ohms.
1. The meters indicate that I=-2.5 A and V=45 V.
2. Left-click on the current source to remove it from the circuit. (Notice that the current source is replaced by an open circuit because an open is equivalent to a zero current source.) Now the meters measure I[1] and V[1] because the voltage source, but not the current source, is in the circuit. The meters indicate that I[1]=0.5 A and V[1]=15 V.
3. Left-click on the current source to restore it to the circuit. Once again the meters measure I and V because both the voltage source and the current source are in the circuit. As expected, the meters again indicate that I=-2.5 A and V=45 V.
4. Left-click on the voltage source to remove it from the circuit. (Notice that the voltage source is replaced by a short circuit because a short is equivalent to a zero voltage source.) Now the meters measure I[2] and V[2] because the current source, but not the voltage source, is in the circuit. The meters indicate that I[2]=-3 A and V[2]=30 V.
5. The principle of superposition predicts that I = I[1] + I[2] = 0.5 + (-3) = -2.5 A and V = V[1] + V[2] = 15 + 30 = 45 V, in agreement with the meter readings in step 1.
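As a cross-check, here is a small Python sketch. The node topology is inferred from the numbers in steps 1-4 and from Exercise 2 below (V[s] in series with R[1], feeding a node where R[2] and the current source sit), since the applet itself is not shown:

def respond(Vs, Is, R1, R2):
    # Node equation: (V - Vs)/R1 + V/R2 = Is  =>  V = (Vs/R1 + Is) / (1/R1 + 1/R2)
    V = (Vs / R1 + Is) / (1.0 / R1 + 1.0 / R2)
    I = (Vs - V) / R1          # current through R1, measured by the ammeter
    return I, V

R1, R2 = 10.0, 30.0
I,  V  = respond(20.0, 4.0, R1, R2)   # both sources:       I = -2.5, V = 45
I1, V1 = respond(20.0, 0.0, R1, R2)   # current source out: I1 = 0.5, V1 = 15
I2, V2 = respond(0.0,  4.0, R1, R2)   # voltage source out: I2 = -3,  V2 = 30
assert abs(I - (I1 + I2)) < 1e-12 and abs(V - (V1 + V2)) < 1e-12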
1. Use the scrollbars to set I[s]= 1 A, V[s] = 40 V, R[1]=10 Ohms and R[2]=40 Ohms.
a. Observe the values of I and V.
b. Left-click on the current source to remove it from the circuit. Notice that R[1] and R[2] are series resistors and that V[1] can be calculated using voltage division. (See section 3.4 of
Introduction to Electric Circuits, 5e by R.C. Dorf and J.A. Svoboda.) Observe the values of I[1] and V[1].
c. Left-click on the current source to restore it to the circuit then left-click on the voltage source to remove it from the circuit. Notice that R[1] and R[2] are parallel resistors and that I
[2] can be calculated using current division. (See section 3.5 of Introduction to Electric Circuits by RC Dorf and JA Svoboda.) Observe the values of I[2] and V[2].
d. Verify that I = I[1] + I[2] and V = V[1] + V[2].
2. In Example 5.4-1 of Introduction to Electric Circuits, 5e by R.C. Dorf and J.A. Svoboda, I[s]= 2 A, V[s] = 6 V, R[1]=3 Ohms and R[2]=6 Ohms. Example 5.4-1 shows that the current in R[2] is 4/3 A.
Equivalently, the voltage across R[2] is 6 * 4/3 = 8 V. Use the scrollbars to verify this answer.
3. Use the scrollbars to set I[s]= 3 A, R[1]=10 Ohms and R[2]=5 Ohms. Predict the value of V[s] required to make V = 25 V. Use the scrollbar to check your prediction.
4. Use the scrollbars to set I[s]= 0.6 A, V[s] = 28 V. Predict the value of R[1] = R[2] required to make V = 20 V. Use the scrollbars to check your prediction.
5. Use the scrollbars to set R[1] = R[2] = 40 Ohms. Predict the value of I[s] and V[s]and required to make V = 20 V and I = 0.1 A. Use the scrollbars to check your prediction.
(Hint: Use the principle of superposition to show that V and I depend linearly on V[s] and I[s], then solve the resulting equations for V[s] and I[s].)
|
{"url":"http://people.clarkson.edu/~jsvoboda/eta/ClickDevice/super.html","timestamp":"2014-04-17T12:29:27Z","content_type":null,"content_length":"6280","record_id":"<urn:uuid:76a1eb78-c02b-4269-ae5b-50be2a6b8204>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
|
search results
Results 1 - 25 of 178
1. CMB Online first
Free Locally Convex Spaces and the $k$-space Property
Let $L(X)$ be the free locally convex space over a Tychonoff space $X$. Then $L(X)$ is a $k$-space if and only if $X$ is a countable discrete space. We prove also that $L(D)$ has uncountable
tightness for every uncountable discrete space $D$.
Keywords:free locally convex space, $k$-space, countable tightness
Categories:46A03, 54D50, 54A25
2. CMB Online first
Characters on $C( X)$
The precise condition on a completely regular space $X$ for every character on $C(X) $ to be an evaluation at some point in $X$ is that $X$ be realcompact. Usually, this classical result is
obtained relying heavily on involved (and even nonconstructive) extension arguments. This note provides a direct proof that is accessible to a large audience.
Keywords:characters, realcompact, evaluation, real-valued continuous functions
Categories:54C30, 46E25
3. CMB Online first
On an Exponential Functional Inequality and its Distributional Version
Let $G$ be a group and $\mathbb K=\mathbb C$ or $\mathbb R$. In this article, as a generalization of the result of Albert and Baker, we investigate the behavior of bounded and unbounded functions $f\colon G\to \mathbb K$ satisfying the inequality $ \Bigl|f \Bigl(\sum_{k=1}^n x_k \Bigr)-\prod_{k=1}^n f(x_k) \Bigr|\le \phi(x_2, \dots, x_n),\quad \forall\, x_1, \dots, x_n\in G, $ where $\phi\colon G^{n-1}\to [0, \infty)$. Also, as a distributional version of the above inequality we consider the stability of the functional equation \begin{equation*} u\circ S - \overbrace{u\otimes \cdots \otimes u}^{n-\text{times}}=0, \end{equation*} where $u$ is a Schwartz distribution or Gelfand hyperfunction, $\circ$ and $\otimes$ are the pullback and tensor product of distributions, respectively, and $S(x_1, \dots, x_n)=x_1+ \dots +x_n$.
Keywords:distribution, exponential functional equation, Gelfand hyperfunction, stability
Categories:46F99, 39B82
4. CMB Online first
Measures of Noncompactness in Regular Spaces
Previous results by the author on the connection between three measures of non-compactness obtained for $L_p$ are extended to regular spaces of measurable functions. An example showing the advantage of one of them over another in certain cases is given. Geometric characteristics of regular spaces are determined. New theorems for $(k,\beta)$-boundedness of partially additive operators are proved.
Categories:47H08, 46E30, 47H99, 47G10
5. CMB Online first
Limited Sets and Bibasic Sequences
Bibasic sequences are used to study relative weak compactness and relative norm compactness of limited sets.
Keywords:limited sets, $L$-sets, bibasic sequences, the Dunford-Pettis property
Categories:46B20, 46B28, 28B05
6. CMB Online first
Property T and Amenable Transformation Group $C^*$-algebras
It is well known that a discrete group which is both amenable and has Kazhdan's Property T must be finite. In this note we generalize the above statement to the case of transformation groups. We
show that if $G$ is a discrete amenable group acting on a compact Hausdorff space $X$, then the transformation group $C^*$-algebra $C^*(X, G)$ has Property T if and only if both $X$ and $G$ are
finite. Our approach does not rely on the use of tracial states on $C^*(X, G)$.
Keywords:Property T, $C^*$-algebras, transformation group, amenable
Categories:46L55, 46L05
7. CMB Online first
Uniqueness of preduals in spaces of operators
We show that if $E$ is a separable reflexive space, and $L$ is a weak-star closed linear subspace of $L(E)$ such that $L\cap K(E)$ is weak-star dense in $L$, then $L$ has a unique isometric
predual. The proof relies on basic topological arguments.
Categories:46B20, 46B04
8. CMB Online first
Strong Asymptotic Freeness for Free Orthogonal Quantum Groups
It is known that the normalized standard generators of the free orthogonal quantum group $O_N^+$ converge in distribution to a free semicircular system as $N \to \infty$. In this note, we
substantially improve this convergence result by proving that, in addition to distributional convergence, the operator norm of any non-commutative polynomial in the normalized standard generators
of $O_N^+$ converges as $N \to \infty$ to the operator norm of the corresponding non-commutative polynomial in a standard free semicircular system. Analogous strong convergence results are obtained
for the generators of free unitary quantum groups. As applications of these results, we obtain a matrix-coefficient version of our strong convergence theorem, and we recover a well known $L^2$-$L^\infty$ norm equivalence for non-commutative polynomials in free semicircular systems.
Keywords:quantum groups, free probability, asymptotic free independence, strong convergence, property of rapid decay
Categories:46L54, 20G42, 46L65
9. CMB Online first
Equilateral sets and a Schütte Theorem for the $4$-norm
A well-known theorem of Schütte (1963) gives a sharp lower bound for the ratio of the maximum and minimum distances between $n+2$ points in $n$-dimensional Euclidean space. In this note we adapt
Bárány's elegant proof (1994) of this theorem to the space $\ell_4^n$. This gives a new proof that the largest cardinality of an equilateral set in $\ell_4^n$ is $n+1$, and gives a constructive
bound for an interval $(4-\varepsilon_n,4+\varepsilon_n)$ of values of $p$ close to $4$ for which it is known that the largest cardinality of an equilateral set in $\ell_p^n$ is $n+1$.
Categories:46B20, 52A21, 52C17
10. CMB Online first
Constructive Proof of Carpenter's Theorem
We give a constructive proof of Carpenter's Theorem due to Kadison. Unlike the original proof our approach also yields the real case of this theorem.
Keywords:diagonals of projections, the Schur-Horn theorem, the Pythagorean theorem, the Carpenter theorem, spectral theory
Categories:42C15, 47B15, 46C05
11. CMB Online first
Interpolation of Morrey Spaces on Metric Measure Spaces
In this article, via the classical complex interpolation method and some interpolation methods traced to Gagliardo, the authors obtain an interpolation theorem for Morrey spaces on quasi-metric
measure spaces, which generalizes some known results on ${\mathbb R}^n$.
Keywords:complex interpolation, Morrey space, Gagliardo interpolation, Calderón product, quasi-metric measure space
Categories:46B70, 46E30
12. CMB 2013 (vol 57 pp. 364)
How Lipschitz Functions Characterize the Underlying Metric Spaces
Let $X, Y$ be metric spaces and $E, F$ be Banach spaces. Suppose that both $X,Y$ are realcompact, or both $E,F$ are realcompact. The zero set of a vector-valued function $f$ is denoted by $z(f)$. A linear bijection $T$ between local or generalized Lipschitz vector-valued function spaces is said to preserve zero-set containments or nonvanishing functions if \[z(f)\subseteq z(g)\quad\Longleftrightarrow\quad z(Tf)\subseteq z(Tg),\] or \[z(f) = \emptyset\quad \Longleftrightarrow\quad z(Tf)=\emptyset,\] respectively. Every zero-set containment preserver, and every nonvanishing function preserver when $\dim E =\dim F\lt +\infty$, is a weighted composition operator $(Tf)(y)=J_y(f(\tau(y)))$. We show that the map $\tau\colon Y\to X$ is a locally (little) Lipschitz homeomorphism.
Keywords:(generalized, locally, little) Lipschitz functions, zero-set containment preservers, biseparating maps
Categories:46E40, 54D60, 46E15
13. CMB Online first
Compact Operators in Regular LCQ Groups
We show that a regular locally compact quantum group $\mathbb{G}$ is discrete if and only if $\mathcal{L}^{\infty}(\mathbb{G})$ contains non-zero compact operators on $\mathcal{L}^{2}(\mathbb{G})$.
As a corollary we classify all discrete quantum groups among regular locally compact quantum groups $\mathbb{G}$ where $\mathcal{L}^{1}(\mathbb{G})$ has the Radon--Nikodym property.
Keywords:locally compact quantum groups, regularity, compact operators
14. CMB 2012 (vol 57 pp. 424)
A Note on Amenability of Locally Compact Quantum Groups
In this short note we introduce a notion called ``quantum injectivity'' of locally compact quantum groups, and prove that it is equivalent to amenability of the dual. Particularly, this provides a
new characterization of amenability of locally compact groups.
Keywords:amenability, conditional expectation, injectivity, locally compact quantum group, quantum injectivity
Categories:20G42, 22D25, 46L89
15. CMB 2012 (vol 57 pp. 90)
Compact Subsets of the Glimm Space of a $C^*$-algebra
If $A$ is a $\sigma$-unital $C^*$-algebra and $a$ is a strictly positive element of $A$ then for every compact subset $K$ of the complete regularization $\mathrm{Glimm}(A)$ of $\mathrm{Prim}(A)$
there exists $\alpha \gt 0$ such that $K\subset \{G\in \mathrm{Glimm}(A) \mid \Vert a + G\Vert \geq \alpha\}$. This extends a result of J. Dauns to all $\sigma$-unital $C^*$-algebras. However,
there are a $C^*$-algebra $A$ and a compact subset of $\mathrm{Glimm}(A)$ that is not contained in any set of the form $\{G\in \mathrm{Glimm}(A) \mid \Vert a + G\Vert \geq \alpha\}$, $a\in A$ and $\alpha \gt 0$.
Keywords:primitive ideal space, complete regularization
16. CMB 2012 (vol 57 pp. 42)
Covering the Unit Sphere of Certain Banach Spaces by Sequences of Slices and Balls
We prove that, given any covering of any infinite-dimensional Hilbert space $H$ by countably many closed balls, some point exists in $H$ which belongs to infinitely many balls. We do that by characterizing isomorphically polyhedral separable Banach spaces as those whose unit sphere admits a point-finite covering by the union of countably many slices of the unit ball.
Keywords:point finite coverings, slices, polyhedral spaces, Hilbert spaces
Categories:46B20, 46C05, 52C17
17. CMB 2012 (vol 57 pp. 166)
On Minimal and Maximal $p$-operator Space Structures
We show that for $p$-operator spaces, there are natural notions of minimal and maximal structures. These are useful for dealing with tensor products.
Keywords:$p$-operator space, min space, max space
Categories:46L07, 47L25, 46G10
18. CMB 2012 (vol 57 pp. 3)
A Short Proof of Paouris' Inequality
We give a short proof of a result of G.~Paouris on the tail behaviour of the Euclidean norm $|X|$ of an isotropic log-concave random vector $X\in\mathbb{R}^n,$ stating that for every $t\geq 1$, \[\mathbb{P} \big( |X|\geq ct\sqrt n\big)\leq \exp(-t\sqrt n).\] More precisely we show that for any log-concave random vector $X$ and any $p\geq 1$, \[(\mathbb{E}|X|^p)^{1/p}\sim \mathbb{E} |X|+\sup_{z\in S^{n-1}}(\mathbb{E} |\langle z,X\rangle|^p)^{1/p}.\]
Keywords:log-concave random vectors, deviation inequalities
Categories:46B06, 46B09, 52A23
19. CMB 2012 (vol 57 pp. 37)
Character Amenability of Lipschitz Algebras
Let ${\mathcal X}$ be a locally compact metric space and let ${\mathcal A}$ be any of the Lipschitz algebras ${\operatorname{Lip}_{\alpha}{\mathcal X}}$, ${\operatorname{lip}_{\alpha}{\mathcal X}}$
or ${\operatorname{lip}_{\alpha}^0{\mathcal X}}$. In this paper, we show, as a consequence of rather more general results on Banach algebras, that ${\mathcal A}$ is $C$-character amenable if and
only if ${\mathcal X}$ is uniformly discrete.
Keywords:character amenable, character contractible, Lipschitz algebras, spectrum
Categories:43A07, 46H05, 46J10
20. CMB 2012 (vol 56 pp. 551)
Real Dimension Groups
Dimension groups (not countable) that are also real ordered vector spaces can be obtained as direct limits (over directed sets) of simplicial real vector spaces (finite dimensional vector spaces with the coordinatewise ordering), but the directed set is not as interesting as one would like, i.e., it is not true that a countable-dimensional real vector space that has interpolation can be represented as such a direct limit over a countable directed set. It turns out this is the case when the group is additionally simple, and it is shown that the latter have an ordered tensor product decomposition. In the Appendix, we provide a huge class of polynomial rings that, with a pointwise ordering, are shown to satisfy interpolation, extending a result outlined by Fuchs.
Keywords:dimension group, simplicial vector space, direct limit, Riesz interpolation
Categories:46A40, 06F20, 13J25, 19K14
21. CMB 2012 (vol 56 pp. 870)
Note on Kasparov Product of $C^*$-algebra Extensions
Using the Dadarlat isomorphism, we give a characterization for the Kasparov product of $C^*$-algebra extensions. A certain relation between $KK(A, \mathcal q(B))$ and $KK(A, \mathcal q(\mathcal k B))$ is also considered when $B$ is not stable and it is proved that $KK(A, \mathcal q(B))$ and $KK(A, \mathcal q(\mathcal k B))$ are not isomorphic in general.
Keywords:extension, Kasparov product, $KK$-group
22. CMB 2012 (vol 56 pp. 503)
Weak Sequential Completeness of $\mathcal K(X,Y)$
For Banach spaces $X$ and $Y$, we show that if $X^\ast$ and $Y$ are weakly sequentially complete and every weakly compact operator from $X$ to $Y$ is compact then the space of all compact operators
from $X$ to $Y$ is weakly sequentially complete. The converse is also true if, in addition, either $X^\ast$ or $Y$ has the bounded compact approximation property.
Keywords:weak sequential completeness, reflexivity, compact operator space
Categories:46B25, 46B28
23. CMB 2012 (vol 56 pp. 534)
A Cohomological Property of $\pi$-invariant Elements
Let $A$ be a Banach algebra and $\pi \colon A \longrightarrow \mathscr L(H)$ be a continuous representation of $A$ on a separable Hilbert space $H$ with $\dim H =\frak m$. Let $\pi_{ij}$ be the
coordinate functions of $\pi$ with respect to an orthonormal basis and suppose that for each $1\le j \le \frak m$, $C_j=\sum_{i=1}^{\frak m} \|\pi_{ij}\|_{A^*}\lt \infty$ and $\sup_j C_j\lt \infty$. Under these conditions, we call an element $\overline\Phi \in l^\infty (\frak m , A^{**})$ left $\pi$-invariant if $a\cdot \overline\Phi ={}^t\pi (a) \overline\Phi$ for all $a\in A$. In
this paper we prove a link between the existence of left $\pi$-invariant elements and the vanishing of certain Hochschild cohomology groups of $A$. Our results extend an earlier result by Lau on
$F$-algebras and recent results of Kaniuth-Lau-Pym and the second named author in the special case that $\pi \colon A \longrightarrow \mathbf C$ is a non-zero character on $A$.
Keywords:Banach algebras, $\pi$-invariance, derivations, representations
Categories:46H15, 46H25, 13N15
24. CMB 2012 (vol 56 pp. 630)
Inverse Semigroups and Sheu's Groupoid for the Odd Dimensional Quantum Spheres
In this paper, we give a different proof of the fact that the odd dimensional quantum spheres are groupoid $C^{*}$-algebras. We show that the $C^{*}$-algebra $C(S_{q}^{2\ell+1})$ is generated by an
inverse semigroup $T$ of partial isometries. We show that the groupoid $\mathcal{G}_{tight}$ associated with the inverse semigroup $T$ by Exel is exactly the same as the groupoid considered by Sheu.
Keywords:inverse semigroups, groupoids, odd dimensional quantum spheres
Categories:46L99, 20M18
25. CMB 2011 (vol 56 pp. 337)
Certain Properties of $K_0$-monoids Preserved by Tracial Approximation
We show that the following $K_0$-monoid properties of $C^*$-algebras in the class $\Omega$ are inherited by simple unital $C^*$-algebras in the class $TA\Omega$: (1) weak comparability, (2)
strictly unperforated, (3) strictly cancellative.
Keywords:$C^*$-algebra, tracial approximation, $K_0$-monoid
Categories:46L05, 46L80, 46L35
|
{"url":"https://cms.math.ca/cmb/msc/46","timestamp":"2014-04-20T15:54:44Z","content_type":null,"content_length":"64646","record_id":"<urn:uuid:f5769af1-52d6-4e61-9ee9-0698d3613dc9>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Torsion in K-theory versus torsion in cohomology
Inspired by this question, I wonder if anyone can provide an example of a finite CW complex X for which the order of the torsion subgroup of $H^{even} (X; \mathbb{Z}) = \bigoplus_{k=0}^\infty H^{2k}
(X; \mathbb{Z})$ differs from the order of the torsion subgroup of $K^0 (X)$, where $K^0$ is complex topological K-theory. This is the same as asking for a non-zero differential in the
Atiyah-Hirzebruch spectral sequence for some finite CW complex X, since this spectral sequence always collapses rationally.
Even better, is there an example in which X is a manifold? An orientable manifold?
Tom Goodwillie's answer to the question referenced above gave examples (real projective spaces) where the torsion subgroups are not isomorphic, but do have the same order.
It's interesting to note that the exponent of the images of these differentials is bounded by a universal constant, depending only on the starting page of the differential! This is a theorem of D. Arlettaz (K-theory, 6: 347-361, 1992). You can even change the underlying spectrum (complex K-theory) without affecting the constant.
kt.k-theory-homology spectral-sequences at.algebraic-topology
2 Answers
This paper by Volker Braun shows that the orientable 8-manifold $X=\mathbb{RP}^3\times \mathbb{RP}^5$ gives an example. One has $$K^0(X) \cong \mathbb{Z}^2 \oplus \mathbb{Z}/4\oplus (\mathbb{Z}/2)^2$$ and $$H^{ev}(X) \cong \mathbb{Z}^2 \oplus (\mathbb{Z}/2)^5.$$ Braun does the calculations using the Künneth formulae for the two theories, with the discrepancy in size
arising because the order of the tensor product of finite abelian groups is sensitive to their structure, not just their order.
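A standard fact makes that last point concrete: for cyclic groups, $\mathbb{Z}/m \otimes \mathbb{Z}/n \cong \mathbb{Z}/\gcd(m,n)$, so two groups of the same order can have tensor squares of different orders: $$(\mathbb{Z}/4)\otimes(\mathbb{Z}/4)\cong\mathbb{Z}/4 \quad (\text{order } 4), \qquad (\mathbb{Z}/2\oplus\mathbb{Z}/2)\otimes(\mathbb{Z}/2\oplus\mathbb{Z}/2)\cong(\mathbb{Z}/2)^4 \quad (\text{order } 16).$$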
One other remark is that Atiyah and Hirzebruch told us the $d_3$-differential in their spectral sequence. It's the operation $Sq^3 \colon H^i(X;\mathbb{Z})\to H^{i+3}(X;\mathbb{Z})$ given by $Sq^3 := \beta\circ Sq^2 \circ r$, where $r$ is reduction mod 2 and $\beta$ the Bockstein. As you say, Dan, if this is non-vanishing, K-theory has smaller torsion. This happens
iff there's a mod 2 cohomology class $u$ such that $u$ admits an integral lift but $Sq^2 (u) $ does not. Can someone think of a nice example where this occurs?
That's a surprisingly nice example! – Dan Ramras Jul 7 '10 at 20:38
There are many spaces $X$ whose $K$ theory is trivial (isomorphic to that of a point), but whose ordinary cohomology is not. Famously, there is a 4-cell complex obtained by taking the
homotopy cofiber of the "Adams map" $$f:S^{11}\cup_{3\iota} e^{12} \to S^7\cup_{3\iota} e^8.$$ Here, $3\iota$ represents a degree $3$ self-map of a sphere.
The Adams map induces an isomorphism in $K$-theory, so $K^*(X)\approx K^*(*)\approx Z$. But $H^*(X,Z)\approx Z\oplus Z/3\oplus Z/3$.
(Hopefully I have the dimensions correct now.)
That was quick! – Dan Ramras Jul 7 '10 at 0:38
The dimensions can't be right. $12-7$ should be $4=2p-2$ – Tom Goodwillie Jul 7 '10 at 3:40
I think Tom is right. Letting $C_n (p)$ denote the cofiber of the degree p self map of $S^{n-1}$, the Adams map is a mapping $C_{n+2p-2} (p) \to C_n (p)$. Charles, under what
conditions does the Adams map induce an isomorphism in K-theory? Is this in Adams' J(X) IV paper? – Dan Ramras Jul 7 '10 at 20:37
Indeed, Tom is correct. I keep confusing the dimensions in the map, and the dimensions in the resulting cell complex (the cofiber). Argh. – Charles Rezk Jul 8 '10 at 0:31
|
{"url":"http://mathoverflow.net/questions/30788/torsion-in-k-theory-versus-torsion-in-cohomology/30793","timestamp":"2014-04-16T14:03:45Z","content_type":null,"content_length":"61214","record_id":"<urn:uuid:377c9f1d-af92-446f-b3c0-0af0d618babb>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[spambayes-dev] The naive bayes classifier algorithm in spambayes doesn't take in frequency?
Kenny Pitt kennypitt at hotmail.com
Tue Aug 31 23:36:18 CEST 2004
Austine Jane wrote:
> I have a question on the naive bayes classifier algorithm used in
> spambayes.
> I suppose if word1 appeared in three ham mail, the probability of
> word1 being in ham mail would be greater than when it appeared in one
> ham mail:
That depends. The statistics are based on the fraction of ham mail that the
word appeared in, not just the absolute number of times it has occurred. A
word that appeared in 1 ham message out of 1 would have the same probability
as a word that appeared in 3 messages out of 3. A word that appeared in 1
message out of 3 would have a lower probability.
>>>> c=storage.DBDictClassifier('test.db')
>>>> def tok(s): return s.split()
>>>> c.learn(tok('word1'),is_spam=False)
>>>> c.spamprob(tok('word1'))
> 0.15517241379310343
>>>> c.learn(tok('word1'),False)
>>>> c.spamprob(tok('word1'))
> 0.091836734693877542
>>>> c.learn(tok('word1'),False)
>>>> c.spamprob(tok('word1'))
> 0.065217391304347783
> As you see the spam probability declines. So far so good.
>>>> c.learn(tok('word1'),True)
> And word1 also appeared in one spam mail, but it appeared in three
> ham mail before.
>>>> c.spamprob(tok('word1'))
> 0.5
> Hm... Sounds not very right.
>>>> c.learn(tok('word1'),False)
>>>> c.spamprob(tok('word1'))
> 0.5
> Stays still.
> This doesn't sound intuitive.
Assuming that you started from a clean database file and the training shown
in your example is the only training you've done, then this is exactly
right. If you go on to train the word as ham 1000 more times, you'll still
get 0.5. Here's why:
The base probability for a word is based on ratios:
p = spamratio / (spamratio + hamratio)
where spamratio is the number of spam messages that contained the word
divided by the total number of spam messages, and hamratio is the same but
using only the ham messages.
After training the word 3 times as ham, you had a hamratio of 3 / 3 = 1.0.
You had no spam messages, so your spam ratio was 0. This leads to:
p = 0 / (0 + 1) = 0
Because a word that has been seen only a few times is not a good predictor,
an adjustment is made to the base probability based on the total number of
messages that contained the word:
n = hamcount + spamcount = 3 + 0 = 3
adj_p = ((S * X) + (n * p)) / (S + n)
where S and X are constants. S is the "unknown word strength" with a
default value of 0.45, and X is the "unknown word probability" with a
default value of 0.5 (these are configurable in SpamBayes). When you apply
this adjustment you can see how p = 0 becomes the 0.0652 that you saw, and
also why the value was slightly higher when you had 1 and 2 messages instead
of 3. You can also see that as n approaches infinity, the constant factors
of (S * X) and S become irrelevant, the n terms on top and bottom cancel
out, and you are left with p.
Now as soon as you trained the first instance of the word as spam, your
spamratio became 1 / 1 = 1 also, so your base p becomes:
p = 1.0 / (1.0 + 1.0) = 0.5
From this point forward, as long as you train only this one word both your
hamratio and spamratio will always be 1 and p will always be 0.5. If you
train some different words and then calculate the spamprob of this word
again, then you will see it start to change from 0.5.
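The arithmetic is easy to check directly. Here is a small Python sketch of the formulas quoted above (not the actual SpamBayes code; zero-message counts are simply treated as a ratio of 0), which reproduces the interpreter-session numbers:

S, X = 0.45, 0.5   # unknown word strength / probability (SpamBayes defaults)

def spamprob(hamcount, nham, spamcount, nspam):
    hamratio = float(hamcount) / nham if nham else 0.0
    spamratio = float(spamcount) / nspam if nspam else 0.0
    p = spamratio / (spamratio + hamratio) if (spamratio + hamratio) else X
    n = hamcount + spamcount
    return (S * X + n * p) / (S + n)

print(spamprob(1, 1, 0, 0))   # 0.155172... (trained once as ham)
print(spamprob(2, 2, 0, 0))   # 0.091836... (twice as ham)
print(spamprob(3, 3, 0, 0))   # 0.065217... (three times as ham)
print(spamprob(3, 3, 1, 1))   # 0.5        (plus once as spam)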
> For example, word1 occurred in 1000
> spam email and occured in 1 ham mail. What is the probability of one
> mail that contains
> word1 being spam mail? Half and half?
Yes, the probability is 0.5 as long as it appeared in 1000 out of 1000 spam
mails and 1 out of 1 ham mail as in your example above. If, on the other
hand, the word appeared in 1000 out of 1000 spams and 1 out of 1000 hams
then the spam probability would be very different, approximately 0.999.
> Doesn't it take in the number
> of occurences(it does seem to take in the number of distinct tokens
> though)? It seems like the concept of the number of occurences and
> the number of distinct tokens are mixed in spambayes' classifier.
No, it only counts each word once in a single mail message. The original
Paul Graham scheme (http://www.paulgraham.com/spam.html) from which
SpamBayes evolved counted the total number of occurrences of the word, but
early testing of SpamBayes showed that accuracy was better if we considered
only the number of messages that contained the word and not the total number
of times that the word appeared.
Kenny Pitt
More information about the spambayes-dev mailing list
|
{"url":"https://mail.python.org/pipermail/spambayes-dev/2004-August/003058.html","timestamp":"2014-04-18T16:04:39Z","content_type":null,"content_length":"8071","record_id":"<urn:uuid:566dfede-3c2b-4a1e-9dcd-e52610ac12e8>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Kenmore Algebra Tutor
Find a Kenmore Algebra Tutor
...I have tutored Algebra I for over five years now as an independent contractor through a private tutoring company. I have tutored high school level Algebra I for both Public and Private School
courses. I also volunteer my time in the Seattle area assisting at-risk students on their mathematics homework.
27 Subjects: including algebra 1, algebra 2, chemistry, reading
...Also available for piano lessons, singing and bridge. I personally scored 800/800 on the SAT Math as well as 800/800 on the SAT Level II Subject Test. I have a lot of experience in helping
students prepare for any of the SAT Math tests to be able to find solutions to the problems quickly and accu...
43 Subjects: including algebra 1, algebra 2, chemistry, geometry
...My reviews were very positive and I was often thanked by students for making a confusing subject understandable. During my time tutoring at Green River Community College, I also often tutored
students in basic arithmetic, pre-algebra, algebra, and number theory. I worked hard to make sure that the students were picking up the information and gave them as much time as they needed.
12 Subjects: including algebra 1, algebra 2, reading, accounting
...This will include writing essays for critique. The amount of time required depends on student's present knowledge base in math, math concepts, vocabulary, reading comprehension, and writing
capability. Student will achieve to capability and improve old scores if they work hard as instructed.
22 Subjects: including algebra 2, algebra 1, physics, geometry
...I have been published multiple times and feel very confident in my abilities in writing. I have always sought avenues that allowed me to work with students and peers in teaching settings. As an
undergrad I was a peer mentor for an intro to engineering class, where I guided students across vario...
14 Subjects: including algebra 2, algebra 1, geometry, writing
|
{"url":"http://www.purplemath.com/kenmore_algebra_tutors.php","timestamp":"2014-04-17T00:59:11Z","content_type":null,"content_length":"23892","record_id":"<urn:uuid:70a5d0e2-c5b4-4ba8-a83c-ff10249ed575>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Coefficient of maximum static friction
Let's start by getting some terminology straightened out. μs is the coefficient of static friction. The maximum value of static friction is given by Fs = μsN. There's no coefficient of "maximum"
static friction.
the only equation given is Fs is less than or equal to UsN (U=mu you know that greek symbol).
That's the only equation you need.
Here's the idea. You keep piling weight on the hanger, which pulls the block via tension in the string, but the opposing static friction is enough to prevent any motion. But when you put just enough weight on the hanger, and the tension just barely exceeds the maximum value of static friction, the block just starts to move. That's when you can apply that equation. What's the normal force on the block? What must the friction force equal?
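A small numeric sketch of that threshold condition (the masses here are made-up lab values, not from the thread):

g = 9.81          # m/s^2
m_block = 0.50    # kg, block on the table (hypothetical)
m_hang  = 0.15    # kg, hanging mass at the threshold of motion (hypothetical)

N    = m_block * g    # normal force on the block
Fs   = m_hang * g     # tension = maximum static friction at the threshold
mu_s = Fs / N         # the g's cancel: mu_s = m_hang / m_block = 0.30
print(mu_s)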
|
{"url":"http://www.physicsforums.com/showthread.php?t=263547","timestamp":"2014-04-20T00:59:02Z","content_type":null,"content_length":"55050","record_id":"<urn:uuid:7102a886-f22d-4298-95c5-769f647f8c16>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Here's the question you clicked on:
Hi, I am trying to print numbers in the loop but it's saying invalid syntax for the last for loop. Please debug it.
packages = (6,9,20)
n in range (56,66)
T = []
i = 0
for i in range(1,100):
    for x in range(1,100):
        for y in range(1,100):
            for z in range(1,100):
                if n == a*x+b*y+c*z:
                    T.append(n)
for len(T) > 0:
    print T[i]
    i = i+1
• one year ago
n in range(56,66) what is n ? what is a b c? not defined
def diophantine1():
    a = raw_input("write a: ")
    b = raw_input("write b: ")
    c = raw_input("write c: ")
    package = [a, b, c]
    for n in range(200, 0, -1):
        cont = 0
        for i in range(0, 10):
            for j in range(0, 10):
                for k in range(0, 10):
                    p = ((int(package[0]))*i) + ((int(package[1]))*j) + ((int(package[2]))*k)
                    if p == n:
                        cont += 1
        if cont == 0:
            print("the largest number of McNuggets that cannot be bought in exact quantity is:" + str(n))
            break
Hi lopus, Thanks for ur time!
import this
packages = (6,9,20)
n in range (56,66)
T = []
i = 0
for i in range(1,100):
    for x in range(1,100):
        for y in range(1,100):
            for z in range(1,100):
                if n == packages[0]*x+packages[1]*y+packages[2]*z:
                    T.append(n)
#for len(T) > 0:
#    print T[i]
#    i = i+1
for num in T:
    print num
I want the combination of 3 numbers. Please explain the below line in your code:
for n in range(200,0,-1)
And also I want to know if there is a syntax error in the below line:
for num in T:
    print num
[200, 199, 198, 197, ...]: range(200, 0, -1) starts at 200, stops before 0, decreasing by 1.
man, n is badly defined
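For reference, here is a compact version of the brute-force idea the thread is circling (a hypothetical helper, not code from either poster):

def largest_unbuyable(packages=(6, 9, 20), limit=200):
    # Largest n up to `limit` not expressible as a*x + b*y + c*z
    # with non-negative integers x, y, z.
    a, b, c = packages
    def buyable(n):
        return any((n - a*x - b*y) % c == 0
                   for x in range(n // a + 1)
                   for y in range((n - a*x) // b + 1))
    return max(n for n in range(1, limit + 1) if not buyable(n))

print(largest_unbuyable())   # 43 for packages (6, 9, 20)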
|
{"url":"http://openstudy.com/updates/503ba183e4b02c1631bdacae","timestamp":"2014-04-19T04:34:46Z","content_type":null,"content_length":"38416","record_id":"<urn:uuid:62a4f8db-46d3-4db8-bf2e-63ac4de4cf73>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Smooth morphisms of module varieties
Grzegorz Zwara
Abstract: Let $k$ be an algebraically closed field. For any finite dimensional $k$-algebra $B$, one defines module varieties $\mathrm{mod}_B^d(k)$, $d\geq 1$. Each $\mathrm{mod}_B^d(k)$ is a sum of connected components $\mathrm{mod}_B^{\underline{d}}(k)$ consisting of orbits of the modules with the dimension vector $\underline{d}=(d_1,\dots,d_r)$, where $\sum d_i\cdot\dim_k S_i=d$ and $\{S_i\}$ is a complete set of pairwise nonisomorphic simple $B$-modules.
Any homomorphism $\phi\colon A\to B$ of finite dimensional $k$-algebras induces regular morphisms $\phi^{(d)}\colon \mathrm{mod}_B^d(k)\to\mathrm{mod}_A^d(k)$ of module varieties, for all $d\geq 1$. We denote by $\phi^{(\underline{d})}$ the morphism $\phi^{(d)}$ restricted to $\mathrm{mod}_B^{\underline{d}}(k)$. We are interested in homomorphisms $\phi$ such that the morphisms $\phi^{(d)}$ have nice geometric properties. The main result is as follows.
Theorem. Assume that the morphism $\phi^{(\underline{d})}\colon \mathrm{mod}_B^{\underline{d}}(k)\to\mathrm{mod}_A^d(k)$ preserves codimensions of $\mathrm{Gl}_d(k)$-orbits for any dimension vector $\underline{d}$, where $d=\sum d_i\cdot\dim_k S_i$. Then each morphism $\phi^{(\underline{d})}\colon \mathrm{mod}_B^{\underline{d}}(k)\to \overline{\mathrm{im}\,\phi^{(\underline{d})}}$ is smooth.
As an application we will show that the geometry of modules from a homogeneous tube is closely related to the geometry of nilpotent matrices.
created Mon Mar 20 10:54:56 PST 2000
|
{"url":"http://www.msri.org/realvideo/ln/msri/2000/interact/zwara/1/title.html","timestamp":"2014-04-17T18:24:33Z","content_type":null,"content_length":"2128","record_id":"<urn:uuid:00c76aac-96b2-4a92-9fcf-d697fa87e208>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MATLAB Curve fitting-Custom Equation with 2 boundaries
Technical discussion about Matlab and issues related to Digital Signal Processing.
MATLAB Curve fitting-Custom Equation with 2 boundaries - "Faisal, Nadimul H" - Jul 9 8:33:13 2009
Hi Dear
I am wondering if anybody would know how to fit a set of data (x, y) with a custom equation.
The equation is:
y = k*{2*(a-b) - (sin(2*a) - sin(2*b))}, where 'a' and 'b' are functions of 'x', but both 'a' and 'b' have separate boundary conditions. In effect, the angles 'a' and 'b' have a phase difference.
'k' is a constant.
I really appreciate your reply. I would like to know if I could use some other package.
Re: MATLAB Curve fitting-Custom Equation with 2 boundaries - Tilak Rajesh - Jul 13 8:22:04 2009
Hi Faisal,
I have done something similar using Mathematica; it's very simple, try this.
Not sure in Matlab; maybe try fsolve!
Let me know if this helps.
Good luck,
On 7/8/09, Faisal, Nadimul H <n...@gmail.com> wrote:
> Hi Dear
> I am wondering if anybody would know how to fit a set of data (x, y) with
> custom equation.
> The equation is:
> y=k*{2*(a-b)-(sin(2*a)-sin(2*b))}, where 'a' and 'b' is function of 'x'
> both 'a' and 'b' have separate boundary conditions. In effect, the angle
> and 'b' has a phase difference.
> 'k' is a constant.
> I really appreciate your reply. I would like to know if i could used some
> other package.
> regards
> Faisal
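Since the original poster asks about other packages, here is a rough sketch of how this could be set up in Python with SciPy's curve_fit, under the purely illustrative assumption that a(x) = x and b(x) = x - phase, so that k and the phase offset are the free parameters; the data below are synthetic.

import numpy as np
from scipy.optimize import curve_fit

def model(x, k, phase):
    a = x                 # assumed form of a(x)
    b = x - phase         # assumed form of b(x): same angle shifted by a phase
    return k * (2*(a - b) - (np.sin(2*a) - np.sin(2*b)))

x_data = np.linspace(0, 6, 60)
y_data = model(x_data, 1.5, 0.4) + 0.05*np.random.randn(60)   # fake noisy data

popt, pcov = curve_fit(model, x_data, y_data, p0=[1.0, 0.1])
print(popt)   # fitted [k, phase]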
RE: Log of the image data - hekmat - Jul 17 12:01:07 2009
I am trying to take the log of image data, and it comes up with an error like:
Function 'subsindex' is not defined for values of class 'struct'
I am not sure what this error actually means.
Could someone please advise me on the reason for this error?
data = dataread('C:\Users\afn_006.nii');
|
{"url":"http://www.dsprelated.com/groups/matlab/show/6997.php","timestamp":"2014-04-16T16:07:52Z","content_type":null,"content_length":"19109","record_id":"<urn:uuid:c8fe8f98-9358-4b03-bd84-a4e73fdf14a1>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Square Knots, Surfaces, and the Shape of the Universe
William Singer, Ph.D., studies a field of mathematics known as algebraic topology. Above, he holds a Klein Bottle, a figure of topological fascination that can only be fully pictured in the fourth dimension.
Photo by Bruce Gilbert
By Joanna Klimaski
As a professor of mathematics, William Singer, Ph.D., knows that the mere mention of his work in algebraic topology could make someone take a nervous step backward and sheepishly confess to having
never been good with numbers. That kind of math anxiety, though, comes from a fundamental misconception about mathematics, Singer says.
“What a modern mathematician means by algebra is very different,” he said. “It’s not the kind of algebra one learns in high school. When people take a college-level course in what’s called abstract
algebra, or modern algebra, they meet the kind of algebra that topologists use.”
And if you have ever tied shoelaces or thought about the difference between a basketball and a football, then you already have insight into this little-known area of mathematics.
Topology is a branch of mathematics that studies the properties of figures that don’t change when a figure is twisted, stretched, or compressed. In this variant of algebra, numbers are a rarity.
Singer instead studies what things look like—from two-dimensional surfaces, to three-dimensional objects, to even higher dimensions.
“If you showed a geometer the surface of a sphere and the surface of a football, he would say that the two are very different,” Singer said. “But a topologist would not distinguish between them.”
That’s because the two groups of mathematicians disagree over the kinds of transformation that can be made to an object before it is considered changed. For example, if the sides of a sphere were
stretched outward until they became points, it would look like a football, or an ellipsoid. Geometrically, the sphere and the ellipsoid are different because they have distinct surface areas,
volumes, lengths, and so on. But topologically speaking, they are one and the same, because the sphere has been deformed in a continuous fashion to become an ellipsoid.
Pressing on a sphere to give it six sides turns it into a cube. Geometrically, a sphere and a cube are different, but topologically they are the same.
The same can be said of more complicated figures as well. One topological joke describes a topologist as someone who cannot tell the difference between a doughnut and a coffee cup. That’s because,
in topology, the surface of a coffee cup can be deformed to look like a sphere with a handle, and this surface can be deformed, in turn, to the surface of a doughnut. Geometrically, the doughnut and
the coffee cup are very different; topologically, they are equivalent.
The topological rule of thumb: Stretching, deforming, or twisting an object keeps it the same as it was before; cutting, tearing, or pasting it changes it.
Topology, a word with Greek roots meaning “the study of place,” is not a new discipline, having played important roles in electromagnetic theory and theoretical physics throughout the last two
centuries. It gained significant momentum in 1916, when the emergence of Albert Einstein’s general theory of relativity began to raise questions about the shape of the universe. Einstein’s theory,
together with startling new discoveries in astronomy, gave topologists new incentives to imagine and classify three-dimensional universes.
Just as cartographers helped to depict that the earth was a sphere rather than a plane, topologists have been able to show what a curved universe might look like.
“The algebraic tools [used in topology] are a way to see things,” said Singer, who also has a master’s degree in physics. “We don’t know the global geometry of our universe yet, but topology gives
us the tools to imagine what might be.”
In addition to playing an important role in theoretical physics, topology also makes appearances in the fields of statistics and robotics, Singer said. In statistics, topologists can describe the
shapes and configurations of data clouds. And using their knowledge of how to manipulate objects without essentially changing them, topologists improve robots’ navigational capacities by calculating
how they can efficiently move between obstacles.
A recent undertaking for the field has been in molecular biology, which draws on the work topologists have done in knot theory. This branch of topology analyzes and attempts to classify the
structure and properties of knots, with special attention to what makes one knot different from another.
“Recently, there’s been an alliance between knot theorists and molecular biologists, because some forms of DNA are knotted,” Singer said. “There have been collaborations between the two to describe
these very large molecules and how they are embedded in space.”
So while mention of the quadratic formula is enough to send some running for the humanities, Singer is a reminder that there is much more to mathematics than the number-laden versions of it that most people encounter in their schooling.
“Mathematics is a living subject with many unanswered questions,” Singer said. “And it’s a global effort—there are people all over the world who work on these problems. We are a community.”
|
{"url":"http://www.fordham.edu/campus_resources/enewsroom/inside_fordham/march_25_2013/in_focus_faculty_and/square_knots_surface_90566.asp","timestamp":"2014-04-16T04:10:45Z","content_type":null,"content_length":"53859","record_id":"<urn:uuid:69cc2fb6-1f7f-4899-ab57-e1202a692058>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the first resource for mathematics
“A preface to the reader”, extract from “Treatise of algebra”, both “Historical and practical”, London 1685. (B.N. code V 1515 in fol.).
(English) Zbl 0669.01025
Cah. Sémin. Hist. Math. 10, 233-265 (1989).
In his book “The mathematical work of John Wallis, D. D., F. R. S., 1616-1703” [Taylor & Francis, London 1938; reprint Chelsea Publ. Comp., New York 1981; Zbl 0505.01008], J. F. Scott
characterized the “Treatise of algebra” as follows (p. 133): “This important work, which was written in English, appeared in 1685, and of all the author’s vast output it was probably during the next
hundred years or more the most widely read. It rapidly became a standard text-book on the subject, and, largely on account of the improved notation which Wallis adopted in its pages, it soon
displaced many of the treatises then current. It is, however, not merely as a treatise on algebra that the work claims attention: the book marks the beginning, in England at least, of the serious
study of the history of mathematics.” He then goes on to discuss the widely diverging opinions of later historians about the reliability of the historical sections in Wallis’s “Algebra”. The “Preface
to the reader” is, for the most part, a summary of the historical account. It is, therefore, marred by Wallis’s tendency to overrate the achievements of his nation. Apart from that, 300 years after
its composition it has become a historical document. As such it presupposes readers familiar with the history of mathematics as well as with the situation in which Wallis was writing this Preface.
The present (first?) translation into French may be welcome to readers not well acquainted with the English language (as it was written three centuries ago). The translator added about 20 footnotes -
helpful, but not sufficient for a modern reader.
01A75 Collected or selected works
01A45 Mathematics in the 17th century
01A05 General histories, source books
|
{"url":"http://zbmath.org/?q=an:0669.01025","timestamp":"2014-04-19T02:05:25Z","content_type":null,"content_length":"21909","record_id":"<urn:uuid:bd41e789-9df3-4872-bf06-4bc7e6fb3941>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Word problem confusion
November 21st 2010, 11:34 AM #1
Nov 2010
Word problem confusion
The question:
A cruise ship sailed north for 50 km. Then the captain turned the ship eastward through an angle of pi/6 and sailed an additional 50 km. Without using a calculator, determine an exact expression
for the direct distance from the start to the finish of the cruise.
Answer at the back of the book: sqrt(5000 + 2500*sqrt(3)) km
Seems pretty simple and maybe I'm thinking of this question the wrong way, but I can't seem to figure out where they got that answer.
All help greatly appreciated!
Draw the "triangle"...
Recall the special triangles you learned in class. Do you remember this one? You've probably seen this in the form of radian angles and with sides 1, 2, sqrt(3). (pi/6 = 30 degrees)
Use this ratio to determine the total displacement north and displacement east. then use Pythagoras to find x.
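For completeness, a sketch of that computation (after the turn, the heading is $\pi/6$ east of north):

$$\Delta_{\text{north}} = 50 + 50\cos\tfrac{\pi}{6} = 50 + 25\sqrt{3}, \qquad \Delta_{\text{east}} = 50\sin\tfrac{\pi}{6} = 25$$

$$d = \sqrt{\left(50 + 25\sqrt{3}\right)^2 + 25^2} = \sqrt{5000 + 2500\sqrt{3}}\ \text{km}$$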
November 21st 2010, 11:38 AM #2
November 21st 2010, 11:47 AM #3
Nov 2010
November 21st 2010, 11:52 AM #4
|
{"url":"http://mathhelpforum.com/trigonometry/163977-word-problem-confusion.html","timestamp":"2014-04-17T04:55:01Z","content_type":null,"content_length":"38144","record_id":"<urn:uuid:7ea061a1-5105-4c7a-90a6-caeb32f94c4b>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematics 160 - Office of the Registar
MATH 160
Introductory Seminar in Mathematics
Catalog Entry
MATH 160
Introductory Seminar in Mathematics
One Hour Seminar
Prerequisite: Mathematics Major
Designed for students new to the mathematics major, this is a seminar course that will discuss various professional skills needed to succeed in the major and in a mathematical career. Topics may
include: introduction to mathematics literature, discussions of career options, introduction to mathematics technology, and introductions to different topics in mathematics.
Detailed Description of Content of Course
Course topics may include the following:
1. Working with Mathematics Literature
• Overview of Library Skills
• Identifying Primary versus Secondary Literature
• Identifying Literature in Databases
• Finding Full Text Articles Online
• Writing and Reading in Mathematics Format
• Proper Citation of References
2. Exploring Career Possibilities
• Concentrations within the Mathematics Major
3. What to Do with a Mathematics Degree
• Undergraduate Research Opportunities
• Graduate School Opportunities
4. Introductions to Mathematics Technology
• Maple
• Mathematica
• MATLAB
• Latex
• Geometer’s Sketchpad
• Graphing Calculators
5. Introductions to Different Topics in Mathematics
• Cryptography (e.g. the Hill Cipher)
• Elementary Graph Theory (e.g. Konigsberg Bridge Problem)
• Basic Probability Topics (e.g. “Let’s Make a Deal”)
• 4-Color Map Problem
• Geometry (e.g. Fractals)
• Mathematics Debates (e.g. is ?)
Detailed Description of Conduct of Course
This course will be taught in classroom sessions. The sessions will generally include a combination of discussions and/or debates, presentations and group work as determined by the instructor, and
guest speakers.
Goals and Objectives of the Course
Students successfully completing the course will be able to:
• Distinguish between primary and secondary literature
• Use the library and internet to find mathematics literature
• Read mathematics literature
• Write in mathematics format
• Use computer software to solve math problems
• Identify potential careers
• Know how to approach faculty members about undergraduate research possibilities
• See different topics of mathematics which may or may not be seen in other courses
Assessment Measures
Assessment may include presentations, projects, and mathematics literature reflections. Other assessment may be based on the instructor’s observation of student participation in discussions and group work.
Assessment of whether the course is achieving its goals in the curriculum will be based on observations by instructors in upper level mathematics courses.
Other Course Information
Review and Approval
Revised 08/17/05
|
{"url":"http://www.radford.edu/content/registrar/home/course-descriptions/csat-descriptions/mathematics/math-160.html","timestamp":"2014-04-17T16:34:49Z","content_type":null,"content_length":"28252","record_id":"<urn:uuid:2c136c5b-cfee-47b8-a22d-c7035f2a99bc>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SciPy-user] using UnivariateSpline
Erik Tollerud erik.tollerud@gmail....
Fri May 22 17:38:30 CDT 2009
These classes are indeed rather poorly documented, but once you get
into them, they work very well.
Also, be aware that the three *UnivariateSpline classes are only
different in how they generate the knots:
*UnivariateSpline: determines the number of knots by adding more knots
until the smoothing condition (sum((w[i]*(y[i]-s(x[i])))**2,axis=0) <=
s) is satisfied - s is specified in the constructor or the
set_smoothing_factor method.
*LSQUnivariateSpline: the knots are specified in a sequence provided
to the constructor (t)
*InterpolatedUnivariateSpline: the spline is forced to pass through
all the points (equivalent to s=0)
But they are all evaluated by being called, as has already been explained.
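For what it's worth, a minimal usage sketch along those lines (the x/y data below are made up for illustration):

import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.linspace(0, 10, 50)
y = np.sin(x) + 0.1 * np.random.randn(50)   # noisy samples

spl = UnivariateSpline(x, y, s=0.5)   # s is the smoothing condition quoted above
xs = np.linspace(0, 10, 200)
ys = spl(xs)          # evaluate by calling the object, per __call__ above
dys = spl(xs, nu=1)   # nu=1 gives the first derivative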
On Fri, May 22, 2009 at 1:26 PM, David Warde-Farley <dwf@cs.toronto.edu> wrote:
> On 22-May-09, at 3:57 PM, Robert Kern wrote:
>> On Fri, May 22, 2009 at 14:57, David Warde-Farley
>> <dwf@cs.toronto.edu> wrote:
>>> I must be crazy, but how does one actually USE UnivariateSpline, etc.
>>> to do interpolation? How do I evaluate the spline at other data after
>>> it's fit?
>>> There seems to be no "evaluate" method or equivalent to splev.
>> def __call__(self, x, nu=None):
>> """ Evaluate spline (or its nu-th derivative) at positions x.
>> Note: x can be unordered but the evaluation is more efficient
>> if x is (partially) ordered.
> I somehow completely missed this. I guess I was skipping over the
> __init__ method because I already understood it. :S
> Thanks Robert.
> David
> _______________________________________________
> SciPy-user mailing list
> SciPy-user@scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
Erik Tollerud
Graduate Student
Center For Cosmology
Department of Physics and Astronomy
2142 Frederick Reines Hall
University of California, Irvine
Office Phone: (949)824-2587
Cell: (651)307-9409
|
{"url":"http://mail.scipy.org/pipermail/scipy-user/2009-May/021197.html","timestamp":"2014-04-16T07:16:44Z","content_type":null,"content_length":"5166","record_id":"<urn:uuid:9165d5b3-6ee5-4b92-9602-d90b2a37b650>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Monday, in our MAT8181 class, we’ve discussed seasonal unit roots from a practical perspective (the theory will be briefly mentioned in a few weeks, once we’ve seen multivariate models). Consider some time series, for instance traffic on French roads:

> autoroute=read.table(
+   "http://freakonometrics.blog.free.fr/public/data/autoroute.csv",
+   header=TRUE,sep=";")
> X=autoroute$a100
> T=1:length(X)
> plot(T,X,type="l",xlim=c(0,120))
> reg=lm(X~T)
> abline(reg,col="red")

As discussed in a...
Displaying time series, spatial, and space-time data with R is available for pre-order
Two years ago, motivated by a proposal from John Kimmel, Executive Editor at Chapman & Hall/CRC Press, I started working… Continue reading →
Fitting models to short time series
Following my post on fitting models to long time series, I thought I’d tackle the opposite problem, which is more common in business environments. I often get asked how few data points can be used to
fit a time series model. As with almost all sample size questions, there is no easy answer. It depends on the number of model parameters...
More time series data online
Earlier this week I had coffee with Ben Fulcher who told me about his online collection comprising about 30,000 time series, mostly medical series such as ECG measurements, meteorological series,
birdsong, etc. There are some finance series, but not ma...
Nonlinear Time Series just appeared
My friends Randal Douc and Éric Moulines just published this new time series book with David Stoffer. (David also wrote Time Series Analysis and its Applications with Robert Shumway a year ago.) The book reflects well on the research of Randal and Éric over the past decade, namely convergence results on Markov chains for validating
demodulating time series
This posting shows how one might perform demodulation in R. It is assumed that readers are generally familiar with the procedure. First, create some fake data: a carrier signal with period 10, modulated over a long timescale, and with phase drifting linearly over time.

period <- 10
fc <- 1/period
fs <- 1
n...
Automatic time series forecasting in Granada
In two weeks I am presenting a workshop at the University of Granada (Spain) on Automatic Time Series Forecasting. Unlike most of my talks, this is not intended to be primarily about my own research.
Rather it is to provide a state-of-the-art overview of the topic (at a level suitable for Masters students in Computer Science). I thought I’d provide...
Inference for ARMA(p,q) Time Series
As we mentioned in our previous post, as soon as we have a moving average part, inference becomes more complicated. Again, to illustrate, we do not need a too general model. Consider, here, some process, where […] is some white noise, and assume further that […].

> theta=.7
> phi=.5
> n=1000
> Z=rep(0,n)
> set.seed(1)
> e=rnorm(n)
> for(t...
Inference for MA(q) Time Series
Yesterday, we’ve seen how inference for time series was possible. I started with that one because it is actually the simple case. For instance, we can use ordinary least squares. There might be some
possible bias (see e.g. White (1961)), but asymptotically, estimators are fine (consistent, with asymptotic normality). But when the noise is (auto)correlated, then it is more...
Inference for AR(p) Time Series
$Y_t =\varphi_1 Y_{t-1}+\varphi_2 Y_{t-2}+\varepsilon_t$
Consider a (stationary) autoregressive process, say of order 2, for some white noise with variance $\sigma^2$. Here is a code to generate such a process:

> phi1=.25
> phi2=.7
> n=1000
> set.seed(1)
> e=rnorm(n)
> Z=rep(0,n)
> for(t in 3:n) Z[t]=phi1*Z[t-1]+phi2*Z[t-2]+e[t]
> Z=Z
> n=length(Z)
> plot(Z,type="l")

Here, we have to estimate two sets of parameters: the autoregressive...
|
{"url":"http://www.r-bloggers.com/search/Time%20Series","timestamp":"2014-04-21T04:39:53Z","content_type":null,"content_length":"37449","record_id":"<urn:uuid:07ee49fc-6fb5-4602-9d36-37454165bca1>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Infinitary methods in finite model theory
Seminar Room 1, Newton Institute
The accepted wisdom is that standard techniques from classical model theory fail to apply in the finite. We attempt to dispel this notion by presenting new proofs of the Gaifman and Hanf locality
theorems, as they appear in Libkin's textbook on finite model theory. In particular, using compactness over an expanded vocabulary, we obtain strikingly simple arguments that apply over both finite
and infinite structures -- all without the complexity of Ehrenfeucht–Fraïssé games normally used. Our techniques rely on internalizing most of the relevant mathematical features into the first-order
theory itself. It remains to be seen whether these methods can be extended to proving order-invariant locality.
|
{"url":"http://www.newton.ac.uk/programmes/SAS/seminars/2012033011001.html","timestamp":"2014-04-19T09:55:38Z","content_type":null,"content_length":"6449","record_id":"<urn:uuid:ca12f120-b5c7-4465-8b3a-a043d892ea58>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Inverse function
October 4th 2012, 05:47 PM
Inverse function
Hi MHF, knowing that the quadratic function does not admit an inverse, and that $|x| = \sqrt{x^2}$, does a modular (absolute value) function have an inverse? If yes, how can I find, for example, y = |x+2| - |
October 4th 2012, 06:13 PM
Prove It
Re: Inverse function
Functions can only have inverses if they are one-to-one on their domain. It's pretty obvious that the example you gave is not one-to-one.
October 4th 2012, 06:20 PM
Re: Inverse function
October 4th 2012, 08:52 PM
Prove It
Re: Inverse function
October 5th 2012, 02:45 AM
Re: Inverse function
October 5th 2012, 07:40 AM
Re: Inverse function
What is |2|? What is |-2|?
A function is either one-to-one or it is not. I don't know what you mean by "never be". You don't think it is sometimes one-to-one and sometimes not?
October 5th 2012, 07:43 AM
Re: Inverse function
|2| = |-2| = 2... The doubt is: is there any modular function that admits an inverse?
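For what it's worth, the standard resolution (an addition for clarity, not from the thread): an absolute-value function does admit an inverse once its domain is restricted to a piece on which it is one-to-one, e.g.

$$f(x) = |x+2|, \; x \ge -2 \;\Rightarrow\; f(x) = x + 2, \qquad f^{-1}(y) = y - 2 \;\; (y \ge 0)$$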
|
{"url":"http://mathhelpforum.com/pre-calculus/204670-inverse-function-print.html","timestamp":"2014-04-18T02:01:47Z","content_type":null,"content_length":"8222","record_id":"<urn:uuid:3f162dd6-be90-411f-8401-9ca04f230a56>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Using Key Words to Unlock Math Word Problems
Lesson Question:
How can identifying key words help students solve mathematical word problems?
Applicable Grades:
Lesson Overview:
In this lesson, students will take turns acting as "math coaches" who will assist other students in solving word problems by identifying key words that usually indicate specific mathematical operations.
Length of Lesson:
One hour to one hour and a half
Instructional Objectives:
Students will:
• brainstorm key words that usually indicate specific mathematical operations
• create flash cards to review the relationships between key words and operations
• coach one another in collectively solving mathematical word problems
• synthesize their knowledge of word problems by writing some of their own
• student notebooks
• white board
• computers with Internet access
• index cards (four per student)
• "Solving Word Problems through Translation" sheets (one per small group) [click here to download]
Solving an authentic word problem:
• Open the class by having the students solve an authentic word problem in pairs: "There are _________ (fill in the number) students in this classroom. I need to distribute four index cards per
student. How many index cards do I need?"
• Circulate around the room as students work, ensuring that students are multiplying the number of students in the classroom by four in order to determine how many index cards are needed.
• Have a student volunteer come to the front board to write the mathematical equation that he or she used to determine the answer to the problem. Then, above that equation, write the original word problem and ask students which specific word in the problem let them know that they needed to multiply the two numbers in order to determine the number of index cards needed for the class (i.e., "per").
Brainstorming key words that indicate mathematical equations:
• Explain to the class that whether they notice it or not, they are constantly interpreting key words in word problems in order to determine which mathematical operations to use in solving them.
• On the white board, display the Visual Thesaurus word map for "arithmetic operation" and then click on the meaning "a mathematical operation involving numbers" in order to reveal the four
mathematical operations: addition, subtraction, multiplication, and division.
• Draw a four-quadrant table on the board and write a different mathematical operation title in each quadrant: addition, subtraction, multiplication, and division. Write the word "per" under the
title "multiplication" and have students brainstorm additional key words that belong under each of the four mathematical operation categories.
• If students get stuck in this brainstorming process, you could suggest different key words (within the context of simple word problems) and have students direct you where to write the words in
the table. At the end of this brainstorming session, make sure you have at least the following words and phrases listed in your table:
Mathematical Operations and Key Words
│ Addition │ Subtraction │
│ │ │
│ add(ed) to │ decreased by │
│ all together │ difference │
│ both │ fewer than │
│ combined │how many more │
│ in all │ left │
│ increase by │ less │
│ more than │ less than │
│ perimeter │ minus │
│ plus │ remaining │
│ sum │ take away │
│ total │ │
│Multiplication │ Division │
│ │ │
│ a │ divided │
│ area │ half │
│ multiplied by │how many each │
│ of │ out of │
│ per │ percent │
│ product of │ quarter │
│ rate │ quotient of │
│ times │ percent │
│ triple │ │
│ twice │ │
Creating key word flash cards:
• Have a student count out the number of index cards that the class determined in the warm up problem and distribute four cards to each student.
• Direct students to create four flash cards — one for each of the four mathematical operations. On the blank side of each card, they should boldly write an operation and its symbol (i.e., +, −, ×, ÷), and on the reverse, lined side, they should list the key words associated with that operation. (Students should base these flash cards on the table you created on the front board.)
Playing the role of "math coach":
• Organize the class into small groups of no more than three to four students in each group, and explain that they will be using their new flash cards as visual aids in math coaching!
• Distribute a "Solving Word Problems through Key Words" sheet to a student in each group and explain that the student with the sheet will act as the reader and recorder during the first round. The
reader and recorder's job is to read a word problem aloud and to allow his fellow "math coaches" to advise him on which mathematical operation to follow in solving the problem.
• Advise the math coaches in the class to listen to the word problem closely, to advise the reader and recorder to underline any key words in the problem that they detect, and to follow the flash
card mathematical operation that they decide to "flash."
• Direct groups to complete the "Solving Word Problems through Key Words" sheet, alternating the role of reader and recorder so that each student has at least one or two turns in that role.
Sharing word problem answers and strategies:
• Invite students to the front of the classroom to explain their group's word problem strategies and how key words led to determining which mathematical operations to use in each problem.
• For homework, assign students the task of writing some of their own word problems containing some of the key words discussed in class but not previously used on the "Solving Word Problems through
Key Words" sheet.
Extending the Lesson:
• To further challenge students, you could give them additional word problems that challenge them to interpret the same key words in somewhat confusing contexts (e.g., "I have eight jelly beans,
which is three fewer than my brother has. How many jelly beans does my brother have?") Or, you could also introduce word problems involving multiple mathematical operations (e.g., "A 6000 seat
stadium is divided into 3 sections. There are 2000 seats in Section 1, and there are 1500 more seats in Section 2 than in Section 3. How many seats are in Section 2?")
• Check whether or not groups accurately solved each of the ten word problems and underlined appropriate key words in the "Solving Word Problems through Key Words" sheet.
• Assess students' original word problems to see if they appropriately incorporated key words to indicate specific mathematical operations.
Benchmarks for Mathematics
Standard 1. Uses a variety of strategies in the problem-solving process
Level II (Grades 3-5)
1. Uses a variety of strategies to understand problem situations (e.g., discussing with peers, stating problems in own words, modeling problem with diagrams or physical objects, identifying a pattern)
2. Represents problem situations in a variety of forms (e.g., translates from a diagram to a number or symbolic expression)
3. Understands that some ways of representing a problem are more helpful than others
4. Uses trial and error and the process of elimination to solve problems
5. Knows the difference between pertinent and irrelevant information when solving problems
6. Understands the basic language of logic in mathematical situations (e.g., "and," "or," "not")
7. Uses explanations of the methods and reasoning behind the problem solution to determine reasonableness of and to verify results with respect to the original problem
Level III (Grades 6-8)
1. Understands how to break a complex problem into simpler parts or use a similar problem type to solve a problem
2. Uses a variety of strategies to understand problem-solving situations and processes (e.g., considers different strategies and approaches to a problem, restates problem from various perspectives)
3. Understands that there is no one right way to solve mathematical problems but that different methods (e.g., working backward from a solution, using a similar problem type, identifying a
pattern) have different advantages and disadvantages
4. Formulates a problem, determines information required to solve the problem, chooses methods for obtaining this information, and sets limits for acceptable solutions
5. Represents problem situations in and translates among oral, written, concrete, pictorial, and graphical forms
6. Generalizes from a pattern of observations made in particular cases, makes conjectures, and provides supporting arguments for these conjectures (i.e., uses inductive reasoning)
7. Constructs informal logical arguments to justify reasoning processes and methods of solutions to problems (i.e., uses informal deductive methods)
8. Understands the role of written symbols in representing mathematical ideas and the use of precise language in conjunction with the special symbols of mathematics
|
{"url":"http://www.visualthesaurus.com/cm/lessons/using-key-words-to-unlock-math-word-problems/","timestamp":"2014-04-19T04:27:47Z","content_type":null,"content_length":"26524","record_id":"<urn:uuid:736172d5-5595-4a9c-8e2a-053fa0453e5e>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: September 2008 [00097]
[Date Index] [Thread Index] [Author Index]
Re: Rearrangement of expression
• To: mathgroup at smc.vnet.net
• Subject: [mg91744] Re: Rearrangement of expression
• From: "David Park" <djmpark at comcast.net>
• Date: Sun, 7 Sep 2008 05:34:34 -0400 (EDT)
• References: <g9t6np$j9n$1@smc.vnet.net>
Manipulating expressions is either fun or a pain in the neck. Let's hope
you'll get enough suggestions to make it fun.
expr = a/Sqrt[a^2 + b^2];
The major difficulty that you throw in is the desired form (b/a)^2.
Mathematica always automatically expands that to b^2/a^2 so we will need a
rule to put it into a HoldForm
powerformat =
Power[x_, n_] Power[y_, m_] /; m == -n \[And] n > 0 ->
Then the easiest solution is:
(expr /. a -> 1 /. b -> b/a) /. powerformat
Sometimes we might consider that substitutions are a bit risky because we
might make a mistake in keeping all the substitutions coordinated. Here is a
more formal calculation approach using a routine, MultiplyByOne, from the
Presentations package.
MultiplyByOne[factor, simplify entire expression, simplify numerator,
simplify denominator]
multiplies the numberator and denominator by the same factor and applies
simplification functions, first to the numerator and denominator, and then
to the entire expression.
expr // MultiplyByOne[1/a, Identity, Identity,
Sqrt[Apart[#^2] /. powerformat] &]
Of course, this manipulation is only correct if a > 0.
{a/Sqrt[a^2 + b^2], 1/Sqrt[1 + b^2/a^2]} /. {{a -> 2}, {a -> -2}}
David Park
djmpark at comcast.net
<Gehricht at googlemail.com> wrote in message news:g9t6np$j9n$1 at smc.vnet.net...
> Hi!
> Suppose I have the fraction a/Sqrt[a^2+b^2]. How can I rearrange it to
> 1/Sqrt[1+(b/a)^2]?
> With thanks
> Yours Wolfgang
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2008/Sep/msg00097.html","timestamp":"2014-04-17T13:07:04Z","content_type":null,"content_length":"26705","record_id":"<urn:uuid:148b23d0-3178-4a51-88dd-5d1016577541>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Nobel Laureates on the QM Interpretation Mess

Update: Perusing the web I noticed that John Preskill [not yet a Nobel laureate] weighed in on the same survey. Certainly another prominent voice to add to the mix.
In the LinkedIn discussion to my earlier blog post that was lamenting the disparate landscape of QM interpretation, I had Nobel laureate Gerard 't Hooft weighing in:
Don't worry, there's nothing rotten. The point is that we all agree about how to calculate something in qm. The interpretation question is something like: what language should one use to express
what it is that is going on? The answer to the question has no effect on the calculations you would do with qm, and thus no effect on our predictions what the outcome of an experiment should be.
The only thing is that the language might become important when we try to answer some of the questions that are still wide open: how do we quantize gravity? And: where does the Cosmological
Constant come from? And a few more. It is conceivable that the answer(s) here might be easy to phrase in one language but difficult in another. Since no-one has answered these difficult
questions, the issue about the proper language is still open.
His name certainly seemed familiar, yet due to some very long hours I am currently working, it was not until now that I realized that it was that 't Hooft. So I answered with this, in hindsight,
slightly cocky response:
Beg to differ: the interpretations are not mere language, but try to answer what constitutes the measurement process. Or, with apologies to Ken, what "collapses the wave function": the latter is obviously a physical process. There has been some yeoman's work to better understand decoherence, but ultimately what I want to highlight is that this state of affairs, of competing QM interpretations, should be considered unsatisfactory. IMHO there should be an emphasis on trying to find ways to decide experimentally between them.
My point is we need another John Bell. And I am happy to see papers like this that may allow us to rule out some many world interpretations that rely on classical probabilities.
So why does this matter? It is one thing to argue that there can be only one correct QM interpretation, and that it is important to identify that one in order to be able to develop a better
intuition for the quantum realities (if such a thing is possible at all).
But I think there are wider implications, and so I want to quote yet another Nobel laureate, Julian Schwinger, to give testament to how this haunted us when the effective theory of quantum electrodynamics was first developed (preface, Selected Papers on QED, 1956):
Thus also the starting point of the theory is the independent assignment of properties to the two fields, they can never be disengaged to give those properties immediate observational
significance. It seems that we have reached the limits of the quantum theory of measurement, which asserts the possibility of instantaneous observations, without reference to specific agencies.
The localization of charge with indefinite precision requires for its realization a coupling with the electromagnetic field that can attain arbitrarily large magnitudes. The resulting appearance
of divergences, and contradictions, serves to deny the basic measurement hypothesis.
6 Responses to Nobel Laureates on the QM Interpretation Mess
1. Fret not, and take a bit of heart from the case of the late great Dr. Donald Glaser (who later won the Nobel Prize in Physics)
( http://www.nytimes.com/2013/03/05/science/donald-glaser-nobel-winner-in-physics-dies-at-86.html?src=recg ). It was written in the Times that:
“Dr. Glaser, who was teaching at the University of Michigan at the time, was fortunate that he did not know that Fermi had calculated that a bubble chamber would never work. Only afterward, after
Fermi had invited Dr. Glaser to the University of Chicago to give a talk about the bubble chamber, did Dr. Glaser look up Fermi’s calculation in a thermodynamics textbook. There he found an
erroneous equation.
“It’s just a small error, but that error made it possible for him to prove that it couldn’t work,” Dr. Glaser said of the bubble chamber in an oral history conducted by the Bancroft Library at
Berkeley. “And luckily I didn’t know about his book because it would have turned me off. Instead, I did my own calculation, and it was hard, but that was the critical difference.”
After winning the Nobel, Dr. Glaser, frustrated that particle physics was adopting huge atom smashers requiring large teams of scientists, switched to molecular biology and studied bacteria and
viruses. ”
Now (in my wild dreams and opinion) the two branches of science Dr. Glaser worked in are coming together to settle some important questions about ‘square one’ and how matter and antimatter are made and move, by means of bacteria and the virus as the power plant of the breeder reactor. That bears on the question Gerard ‘t Hooft posed in reply to this thread: “where does the Cosmological Constant come from?”
My humble answer (per the Cosmological Principle): the atomic power contained in the nuclear core of Carbon, Nitrogen, Oxygen, making and breaking the Hydrogen bond, thus in the quantum dynamic of making and repairing three generations of quarks ‘fused’ by neutrinos, which therefore form the radioactive ‘foam of space’ as so much atmosphere around which the ‘rotary evaporator’ condenser/Higgs mechanism makes matter happen like fresh racks of balls and then vanish like so many billiard balls sunk as progress scored on God’s elementary table in the Pool Hall of the evolving Universe (speaking/writing utterly metaphorically*), while powered by the energy-mass equivalence of the Sun, moon, and Stars. That I seek to know a bit better: so as to know myself and ‘space in time’ by attempting to know the structure and essence of Earth, wind, and fire, by reading and heeding what the ‘old school’ has to say on the subject, so as to have a point of view as old and new as ‘the books’, being rewritten daily by inquiring minds in search of answers, the best way to find them being to take your thoughts and ‘bounce them off the wall’ to see who takes a look/whack at your material/data stack.
Thanks for broaching the topic.
For if knowledge is power, and E=MC2, the perhaps it stands to reason that, ‘knowledge shared, is knowledge squared.
*and as a Political Scientist who studies, thinks and writes about Theoretical Physics out of personal interest in the subject. I confess to doing so without formal training, therefore my life
experience, reading and research are correspondingly compensatory to have a semi-clue of what I speak – which I presume to, or I surely would not be writing what I believe to be true from my and
the ‘safety committee’s research pioneering research and life experience with ‘bubble chamber’ technology – which btw is still reported to be ‘the key’ and or ‘ticket’ there are many options and
variations on the form factor – if you must know, and you should.
2. Lovely post Carl. Finally, reasoned skepticism creeps aboard the train of Quax. Bravo! Was it not Schwinger who resigned from the American Physical Society because they refused to publish his views on the veracity of… cold fusion??
3. Penrose and ‘t Hooft have come around to Einstein’s POV concerning the interpretation of quantum theory, as we learn in Kumar’s recent ‘Quantum.’
“A theory that yields “maybe” as an answer should be recognized as an inaccurate theory.”
~’t Hooft
“Can it really be true that Einstein, in any significant sense, was as profoundly “wrong” as the followers of Bohr maintain? I do not believe so. I would, myself, side strongly with Einstein in his belief in a submicroscopic reality, and with his conviction that present-day quantum mechanics is fundamentally incomplete.”
~Kumar |
Dirac also came around, as we learn in a recently republished article in SciAm.
“If h-bar is a derived quantity instead of a fundamental one, our whole set of ideas about uncertainty will be altered: h-bar is the fundamental quantity that occurs in the Heisenberg uncertainty
relation connecting the amount of uncertainty in a position and in a momentum. This uncertainty relation cannot play a fundamental role in a theory in which h-bar itself is not a fundamental
quantity. I think one can make a safe guess that uncertainty relations in their present form will not survive in the physics of the future.”
~Dirac | http://bit.ly/sPJ5GC
He makes the point explicitly in the biography by Pais:
“It seems clear that the present quantum mechanics is not in its final form [...] I think it very likely, or at any rate quite possible, that in the long run Einstein will turn out to be correct.”
~Dirac | http://bit.ly/10dJc5t
Born gave us the statistical interpretation of the wave function. Less familiar is what he said at the time.
“Anyone dissatisfied with these ideas may feel free to assume that there are additional parameters not yet introduced into the theory which determine the individual event.”
~Born | http://bit.ly/10dKkGm
We often read that quantum theory covers all known phenomena. In their admirable text on the “Mathematics of Classical and Quantum Physics,” Byron and Fuller write:
“Axiom I. Any physical system is completely described by a normalized vector (the state vector or wave function) in Hilbert space. All possible information about the system can be derived from
this state vector by rules (…)”
Yet the author of the wave function would seem to have disagreed:
“If you ask a physicist what is his idea of yellow light, he will tell you that it is transversal electromagnetic waves of wavelength in the neighborhood of 590 millimicrons. If you ask him: But where does yellow come in? he will say: In my picture not at all, but these kinds of vibrations, when they hit the retina of a healthy eye, give the person whose eye it is the sensation of yellow.”
~Schrödinger | http://bit.ly/XJmoJi
The difficulty cited by Schrödinger has historically been dodged in one of two ways: (1) We have either swept color into the dustbin of the mind; or (2) identified it with the wavelength of the
associated light.
Mach addressed the first option in his book on ‘The Analysis of Sensations.’
“A color is a physical object as soon as we consider its dependence, for instance, upon its luminous source, upon temperatures, upon spaces, and so forth.”
~Mach | http://bit.ly/10dMZQc
The second option is merely a matter of widespread ignorance, propped up by shoddy scholarship. Grassmann, Maxwell, Schrödinger, Weyl and Feynman all tell us quite explicitly that color behaves
like a vector — a fact borne out by the technology behind our color TVs and computer monitors.
Whereas a wavelength (or frequency) is a scalar, being a simple magnitude.
Why is this important? For starters, colors manifest rather obvious symmetries — the kinds of symmetries which make vectors helpful in physics.
And then, the dual of a vector is a differential form, which has the dimensions of area. And what we observe are colored areas.
When light Dopplers, its wavelength and frequency change, and so does its energy, by E = hv.
Changing the energy of a photon rotates its vector in Hilbert space. The new vector has a new color associated with it, and so it would seem that we have rotated the color vector via the same
Since energy is conserved, the associated wavelength of constant energy will also remain invariant, leaving us with a vector pointing to the same place in Hilbert space, as well as color space.
Helmholtz observed that “Similar light produces, under like conditions, a like sensation of color.” Color is, of course, one of Locke’s “secondary qualities.” Generalizing with a view to
Heisenberg’s formulation of quantum mechanics, we can say that:
The same state vector, acted upon by the same operators A, B, C … produces the same spectrum of secondary qualities.
Where does this get us? Rather far, perhaps, for as the mathematician Steen reminds us, early on in the history of quantum theory, “The mathematical machinery of quantum mechanics became that of
spectral analysis… ”
[PDF] http://bit.ly/PEqaS6
Finally, Weyl tells us that “the colors with their various qualities and intensities fulfill the axioms of vector geometry if addition is interpreted as mixing; consequently, projective geometry
applies to the color qualities.”
Considering that the dualities which crop up in contemporary physics had their provenance in projective geometry, this point should be of general interest.
Mapping the projective plane to a color sphere by way of a double cover gives us a complex, projective vector space, where the antipodes are naturally interpreted as the same color x, rotated
thru 180 degrees to give us the same color, but wholly out of phase. Adding x to its opposite (-x) gives us “no light” or darkness, thus giving us a group structure, the other axioms being
satisfied in an obvious way.
Again, one doesn’t have to go far to bump into complex, projective vector spaces these days. The interested reader is encouraged to google that term, together with “gauge” and/or “M-theory.”
4. O, drat. Here’s that link to Kumar: http://bit.ly/v9y1yE
5. PS,
In regard to operators (matrices) and spectra, we would seem to have ready correspondences with M(atrix) theory and spectral triples.
This entry was posted in Popular Science, Quantum Mechanics and tagged Gerard 't Hooft, John Preskill, Julian Schwinger. Bookmark the permalink.
|
{"url":"http://wavewatching.net/2013/03/06/nobel-laureates-on-the-qm-interpretation-mess/","timestamp":"2014-04-20T06:41:46Z","content_type":null,"content_length":"57846","record_id":"<urn:uuid:58fe9dd0-2ce2-4cf3-9fd1-2f0102cf7cd6>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Search Results
PMCID: PMC3908914 PMID: 24489418
PMCID: PMC3771340 PMID: 23034025
Recently proposed double-robust estimators for a population mean from incomplete data and for a finite number of counterfactual means can have much higher efficiency than the usual double-robust
estimators under misspecification of the outcome model. In this paper, we derive a new class of double-robust estimators for the parameters of regression models with incomplete cross-sectional or
longitudinal data, and of marginal structural mean models for cross-sectional data with similar efficiency properties. Unlike the recent proposals, our estimators solve outcome regression estimating
equations. In a simulation study, the new estimator shows improvements in variance relative to the standard double-robust estimator that are in agreement with those suggested by asymptotic theory.
PMCID: PMC3635709 PMID: 23843666
Drop-out; Marginal structural model; Missing at random
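As a concrete baseline for the "usual" double-robust estimator that abstracts like the one above improve on, here is a minimal sketch of the standard AIPW estimator of a population mean under missingness at random; the simulated data, working models, and variable names are all illustrative assumptions, not the authors' construction.

import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 1))
Y = 1.0 + 2.0 * X[:, 0] + rng.normal(size=n)     # outcome, E[Y] = 1
pi = 1 / (1 + np.exp(-(0.5 + X[:, 0])))          # true selection probability
R = rng.binomial(1, pi)                          # R = 1 means Y is observed

# Two working models: selection (propensity) and outcome regression
pi_hat = LogisticRegression().fit(X, R).predict_proba(X)[:, 1]
m_hat = LinearRegression().fit(X[R == 1], Y[R == 1]).predict(X)

# AIPW estimator: consistent if either working model is correct
mu_dr = np.mean(R * Y / pi_hat - (R - pi_hat) / pi_hat * m_hat)
print(mu_dr)   # should be close to 1.0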
We derive estimators of the mean of a function of a quality-of-life adjusted failure time, in the presence of competing right censoring mechanisms. Our approach allows for the possibility that some
or all of the competing censoring mechanisms are associated with the endpoint, even after adjustment for recorded prognostic factors, with the degree of residual association possibly different for
distinct censoring processes. Our methods generalize from a single to many censoring processes and from ignorable to non-ignorable censoring processes.
PMCID: PMC3499834 PMID: 18575980
Cause-specific; Dependent censoring; Inverse weighted probability; Sensitivity analysis
We consider nonparametric regression of a scalar outcome on a covariate when the outcome is missing at random (MAR) given the covariate and other observed auxiliary variables. We propose a class of
augmented inverse probability weighted (AIPW) kernel estimating equations for nonparametric regression under MAR. We show that AIPW kernel estimators are consistent when the probability that the
outcome is observed, that is, the selection probability, is either known by design or estimated under a correctly specified model. In addition, we show that a specific AIPW kernel estimator in our
class that employs the fitted values from a model for the conditional mean of the outcome given covariates and auxiliaries is double-robust, that is, it remains consistent if this model is correctly
specified even if the selection probabilities are modeled or specified incorrectly. Furthermore, when both models happen to be right, this double-robust estimator attains the smallest possible
asymptotic variance of all AIPW kernel estimators and maximally extracts the information in the auxiliary variables. We also describe a simple correction to the AIPW kernel estimating equations that
while preserving double-robustness it ensures efficiency improvement over nonaugmented IPW estimation when the selection model is correctly specified regardless of the validity of the second model
used in the augmentation term. We perform simulations to evaluate the finite sample performance of the proposed estimators, and apply the methods to the analysis of the AIDS Costs and Services
Utilization Survey data. Technical proofs are available online.
PMCID: PMC3491912 PMID: 23144520
Asymptotics; Augmented kernel estimating equations; Double robustness; Efficiency; Inverse probability weighted kernel estimating equations; Kernel smoothing
We consider the estimation of the parameters indexing a parametric model for the conditional distribution of a diagnostic marker given covariates and disease status. Such models are useful for the
evaluation of whether and to what extent a marker’s ability to accurately detect or discard disease depends on patient characteristics. A frequent problem that complicates the estimation of the model
parameters is that estimation must be conducted from observational studies. Often, in such studies not all patients undergo the gold standard assessment of disease. Furthermore, the decision as to
whether a patient undergoes verification is not controlled by study design. In such scenarios, maximum likelihood estimators based on subjects with observed disease status are generally biased. In
this paper, we propose estimators for the model parameters that adjust for selection to verification that may depend on measured patient characteristics and additionally adjust for an assumed degree
of residual association. Such estimators may be used as part of a sensitivity analysis for plausible degrees of residual association. We describe a doubly robust estimator that has the attractive
feature of being consistent if either a model for the probability of selection to verification or a model for the probability of disease among the verified subjects (but not necessarily both) is
PMCID: PMC3475507 PMID: 23087495
Missing at Random; Nonignorable; Missing Covariate; Sensitivity Analysis; Semiparametric; Diagnosis
We present new statistical analyses of data arising from a clinical trial designed to compare two-stage dynamic treatment regimes (DTRs) for advanced prostate cancer. The trial protocol mandated that
patients were to be initially randomized among four chemotherapies, and that those who responded poorly were to be rerandomized to one of the remaining candidate therapies. The primary aim was to
compare the DTRs’ overall success rates, with success defined by the occurrence of successful responses in each of two consecutive courses of the patient’s therapy. Of the one hundred and fifty study participants, forty-seven did not complete their therapy per the algorithm. However, thirty-five of them did so for reasons that precluded further chemotherapy, i.e. toxicity and/or progressive
disease. Consequently, rather than comparing the overall success rates of the DTRs in the unrealistic event that these patients had remained on their assigned chemotherapies, we conducted an analysis
that compared viable switch rules defined by the per-protocol rules but with the additional provision that patients who developed toxicity or progressive disease switch to a non-prespecified
therapeutic or palliative strategy. This modification involved consideration of bivariate per-course outcomes encoding both efficacy and toxicity. We used numerical scores elicited from the trial’s
Principal Investigator to quantify the clinical desirability of each bivariate per-course outcome, and defined one endpoint as their average over all courses of treatment. Two other simpler sets of
scores as well as log survival time also were used as endpoints. Estimation of each DTR-specific mean score was conducted using inverse probability weighted methods that assumed that missingness in
the twelve remaining drop-outs was informative but explainable in that it only depended on past recorded data. We conducted additional worst-best case analyses to evaluate sensitivity of our findings
to extreme departures from the explainable drop-out assumption.
PMCID: PMC3433243 PMID: 22956855
Causal inference; Efficiency; Informative dropout; Inverse probability weighting; Marginal structural models; Optimal regime; Simultaneous confidence intervals
Modern epidemiologic studies often aim to evaluate the causal effect of a point exposure on the risk of a disease from cohort or case-control observational data. Because confounding bias is of
serious concern in such non-experimental studies, investigators routinely adjust for a large number of potential confounders in a logistic regression analysis of the effect of exposure on disease
outcome. Unfortunately, when confounders are not correctly modeled, standard logistic regression is likely biased in its estimate of the effect of exposure, potentially leading to erroneous
conclusions. We partially resolve this serious limitation of standard logistic regression analysis with a new iterative approach that we call ProRetroSpective estimation, which carefully combines
standard logistic regression with a logistic regression analysis in which exposure is the dependent variable and the outcome and confounders are the independent variables. As a result, we obtain a
correct estimate of the exposure-outcome odds ratio, if either the standard logistic regression of the outcome given exposure and confounding factors is correct, or the regression model of exposure
given the outcome and confounding factors is correct, but not necessarily both; that is, it is double-robust. In fact, it also has certain advantageous efficiency properties. The approach is general
in that it applies to both cohort and case-control studies whether the design of the study is matched or unmatched on a subset of covariates. Finally, an application illustrates the methods using
data from the National Cancer Institute's Black/White Cancer Survival Study.
PMCID: PMC3059519 PMID: 21225896
Standardized means, commonly used in observational studies in epidemiology to adjust for potential confounders, are equal to inverse probability weighted means with inverse weights equal to the
empirical propensity scores. More refined standardization corresponds with empirical propensity scores computed under more flexible models. Unnecessary standardization induces efficiency loss.
However, according to the theory of inverse probability weighted estimation, propensity scores estimated under more flexible models induce improvement in the precision of inverse probability weighted
means. This apparent contradiction is clarified by explicitly stating the assumptions under which the improvement in precision is attained.
PMCID: PMC3371719 PMID: 22822256
Causal inference; Propensity score; Standardized mean
In this companion article to “Dynamic Regime Marginal Structural Mean Models for Estimation of Optimal Dynamic Treatment Regimes, Part I: Main Content” [Orellana, Rotnitzky and Robins (2010), IJB,
Vol. 6, Iss. 2, Art. 7] we present (i) proofs of the claims in that paper, (ii) a proposal for the computation of a confidence set for the optimal index when this lies in a finite set, and (iii) an
example to aid the interpretation of the positivity assumption.
PMCID: PMC2854089 PMID: 20405047
dynamic treatment regime; double-robust; inverse probability weighted; marginal structural model; optimal treatment regime; causality
We consider the doubly robust estimation of the parameters in a semiparametric conditional odds ratio model. Our estimators are consistent and asymptotically normal in a union model that assumes
either of two variation independent baseline functions is correctly modelled but not necessarily both. Furthermore, when either outcome has finite support, our estimators are semiparametric efficient
in the union model at the intersection submodel where both nuisance functions models are correct. For general outcomes, we obtain doubly robust estimators that are nearly efficient at the
intersection submodel. Our methods are easy to implement as they do not require the use of the alternating conditional expectations algorithm of Chen (2007).
PMCID: PMC3412601 PMID: 23049119
Doubly robust; Generalized odds ratio; Locally efficient; Semiparametric logistic regression
We consider estimation, from a double-blind randomized trial, of treatment effect within levels of base-line covariates on an outcome that is measured after a post-treatment event E has occurred in the subpopulation 𝒫_{E,E} that would experience event E regardless of treatment. Specifically, we consider estimation of the parameters γ indexing models for the outcome mean conditional on treatment and base-line covariates in the subpopulation 𝒫_{E,E}. Such parameters are not identified from randomized trial data but become identified if, additionally, it is assumed that the subpopulation 𝒫_{Ē,E} of subjects that would experience event E under the second treatment but not under the first is empty, and that a parametric model holds for the conditional probability that a subject experiences event E if assigned to the first treatment, given that the subject would experience the event if assigned to the second treatment, his or her outcome under the second treatment and his or her pretreatment covariates. We develop a class of estimating equations whose solutions comprise, up to asymptotic equivalence, all consistent and asymptotically normal estimators of γ under these two assumptions. In
addition, we derive a locally semiparametric efficient estimator of γ. We apply our methods to estimate the effect on mean viral load of vaccine versus placebo after infection with human
immunodeficiency virus (the event E) in a placebo-controlled randomized acquired immune deficiency syndrome vaccine trial.
PMCID: PMC2837843 PMID: 20228899
Counterfactuals; Missing data; Potential outcomes; Principal stratification; Structural model; Vaccine trials
|
{"url":"http://pubmedcentralcanada.ca/pmcc/solr/reg?term=author%3A(%22Rotnitzky%2C+Andrea%22)&filterQuery=author_s%3ARotnitzky%2C%5C+Andrea&sortby=score+desc","timestamp":"2014-04-20T06:08:46Z","content_type":null,"content_length":"79485","record_id":"<urn:uuid:7c46ab84-fe56-46c3-8918-8b3ff41b4933>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
|
1. Introduction
The so-called Golden Section refers to the division of a line such that the whole is to the greater part as that part is to the smaller part - a proportion which is considered to be particularly
pleasing to the eye. Here is a depiction of the principle:
whence we have the related Golden Ratio, (a + b)/a = a/b, usually represented by the lower case Greek letter 'phi' (φ).
This equation has as its unique positive solution the irrational number
(1 + √5) / 2 = 1.6180339... = φ -------------- (i)
We read in the Wikipedia entry under this heading, "At least since the Renaissance, many artists and architects have proportioned their works to approximate the golden ratio - especially in the form
of the Golden Rectangle, in which the ratio of the longer side to the shorter is the golden ratio...Mathematicians have studied the golden ratio because of its interesting properties."
Observe that the rectangle generated by removing a square from the original figure is similar to it, i.e. its sides too are in the ratio 1:φ, and we may therefore write φ/1 = 1/(φ - 1). This leads to
the two interesting outcomes:
φ^2 = φ + 1 --------------------------- (ii)
1/φ = φ - 1 -------------------------- (iii)
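(Both follow directly from the defining ratio: φ = (a + b)/a = 1 + b/a = 1 + 1/φ; multiplying through by φ gives (ii), and rearranging φ = 1 + 1/φ gives (iii).)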
As has been demonstrated in earlier pages (details here), the three prominent universal constants π, e and α are implied by the Hebrew letters and words of Genesis 1:1 and its Greek cognate, John
1:1. Here, it is proposed to add φ to this list. However, before that becomes possible, it is necessary that we establish clear links between φ and certain trigonometric ratios - this, in turn,
requiring an appeal to simple principles of geometry.
2. A geometric derivation of the golden ratio
The following diagrams depict a regular pentagon - i.e. one whose vertices lie on a circle and whose sides and diagonals form isosceles triangles. Observe that each number represented is an angle
expressed in degree measure.
Since each exterior angle of the pentagon is 360/5, or 72, it follows that each interior angle is (180 - 72), or 108 [see inset at (a)]. At (b), O represents the centre of the circumscribed circle.
Angle DAC is half the angle subtended at the centre - its value is therefore 36. The remaining angles are readily calculated - significantly, all are multiples of 18.
In the following developments, each of the coloured triangles is observed to be isosceles, and the green and blue triangles, similar. Thus, in particular,
EB/BA = BA/QB, whence
BA^2 = EB.QB
= (EQ + QB).QB = (AE + QB).QB
= (BA + QB).QB
It therefore follows that
BA/QB = (BA + QB)/BA = 1 + QB/BA
and, setting the ratio BA/QB = x, we have
x = 1 + 1/x, or x^2 = x + 1
x = (1 + √5)/2 = 1.618034... = φ, the GOLDEN RATIO ---------------- (iv)
If the perpendicular, QR, now be dropped from Q onto AB (as in the diagram below), it follows that
BR/QB = φ/2 = cos 36 = sin 54 -------------- (v)
Because the trigonometric functions are periodic, these results extend to limitless sequences of angles - as the following diagrams make clear.
The outline of a typical sine curve is here represented by a series of points, θ being a multiple of 3. Observe that the graduations on the axes refer to θ a multiple of 18. Clearly, sin 54 = sin 126
= -sin 234...= -sin 594 = -sin 666 =...
The corresponding cosine curve above similarly reveals cos 36 = -cos 144 = -cos 216 = cos 324 =...
Combining these observations therefore enables us to write
sin 666 + cos 216 = -φ
φ = - [sin (666) + cos (6.6.6)] ------------- (vi)
And what of 18, i.e. (6+6+6)? The following analysis shows that this angle too has a place in these proceedings. Here is the geometrical basis of the matter:
In triangle ANQ, sin 18 = NQ/QA = NQ/QB [triangle QBA isosceles] ----------------- (vii)
In triangle NBA, NB/BA = cos 36 = φ/2 [from (v)], and
in triangle QBR, BR/QB = BA/(2.QB) = sin 54 = φ/2 [from (v)]
Hence, NB = BA.φ/2 and QB = BA/φ
Clearly, therefore, NQ = NB - QB = BA.(φ/2 - 1/φ) = BA.(φ^2 - 2)/(2.φ) and it follows from (vii) that
sin 18 = BA.(φ^2 - 2)/(2.φ) / (BA/φ)
= (φ^2 - 2)/2 = 1/(2.φ) [since, from (ii) and (iii), φ^2 = φ + 1, and φ - 1 = 1/φ]
φ = cosec(18)/2 = cosec (6+6+6)/2 ---------------- (viii)
[Observe that each of the foregoing results - linking the golden section (φ) with trigonometrical functions of particular angles - may be readily confirmed if one has access to a scientific electronic calculator.]
3. The triples of sixes spanning the Judeo-Christian Scriptures
Attention is here drawn to just two significant instances of the relationships involving 666, 6.6.6 and 6+6+6.
(1) In the Book of Revelation, 666 is offered as a key to the gift of wisdom (Rev.13:18). A detailed consideration of this visually-arresting number reveals it to be not only triangular, but
uniquely so; this because each of its numerical attributes is triangular - and, further, that the sum of these is 216, or 6.6.6. Here is a reminder of these details:
Observe that the radix-dependent attributes contribute 72, and the absolute attributes 144, to the combined total of 216 - and that 666, 72 and 144, are multiples of 18.
But note too the triangular representations of 6 and 10 that occur at each of the three vertices of the larger triangles - where the angle in degree measure is 60, or 6.10. Clearly, this is a
feature common to all numerical triangles.
(2) As we have seen in earlier pages on this site, a geometrical view of the numerics of Genesis 1:1 - the Bible's first verse - reveals a symmetrical structure in which 666-as-triangle features,
in triplicate, within a triangular outline of 216, or 6.6.6 - i.e. the 6th cube. These facts are recalled in the diagram below.
At (a), we have the composite structure of 2701 - the sum of the 7 Hebrew words of the Bible's first verse, fairly read as numbers. The first 5 of these total 1998, or 3.666 (represented by
the purple triangles); the last 2 (translated "and the earth.") total 703 (represented by the green central triangle). At (b), the outline triangle (rendered white) comprises 216, or 6.6.6, counters.
4. Conclusion
The foregoing analysis makes it abundantly clear that this universal constant, φ - whose history stretches far back into antiquity, and which, for a variety of reasons, has long fascinated man -
has substantial links with the Judeo-Christian Scriptures. Against the backcloth of the π, e and α connections drawn from the same sources, it is hard to believe that such matters are fortuitous.
Everything points to the fact that these texts are divinely authored, and contain far more information than has, formerly, been supposed.
Vernon Jenkins MSc
2009/11/25 - corrected and extended
|
{"url":"http://www.whatabeginning.com/Misc/Golden/P.htm","timestamp":"2014-04-21T12:08:27Z","content_type":null,"content_length":"24715","record_id":"<urn:uuid:69610bcf-9866-4dfa-8814-eee9ea11920d>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00276-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Infinitesimal Lifting Property
I’ve basically recuperated from my test and I’m trying to get back into the AG frame of mind. I have about 5 posts half written, so I’m going to actually try to finish this one and start up a nice
little series. I’m taking a class on deformation theory this quarter (which hasn’t actually started yet), so this series will review some of the very, very small amount of deformation theory
scattered throughout the exercises of Hartshorne.
Let’s start with some basics on the infinitesimal lifting property. First assume ${k}$ an algebraically closed field and ${A}$ a finitely generated ${k}$-algebra with Spec ${A}$ a nonsingular variety
(over ${k}$). Suppose ${0\rightarrow I\rightarrow B'\rightarrow B\rightarrow 0}$ is exact with ${B'}$ a ${k}$-algebra and ${I}$ an ideal with ${I^2=0}$. Then ${A}$ satisfies the infinitesimal lifting
property: whenever there is a ${k}$-algebra hom ${f:A\rightarrow B}$, there is a lift ${g: A\rightarrow B'}$ making the obvious diagram commute.
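(A toy illustration, not in the original post: take ${B'=k[t]/(t^2)}$ mapping onto ${B=k}$, with ${I=(t)}$, so ${I^2=0}$. A ${k}$-algebra hom ${f:A\rightarrow k}$ is a ${k}$-point of Spec ${A}$, and lifts ${g:A\rightarrow k[t]/(t^2)}$ always exist here, e.g. ${f}$ composed with the inclusion ${k\rightarrow k[t]/(t^2)}$. By the discussion below, once one lift is fixed the others differ by elements of ${Hom_A(\Omega_{A/k}, I)}$, i.e. by tangent vectors at the point.)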
First note that if ${g, g':A\rightarrow B'}$ are two such lifts, then ${\theta=g-g'}$ is a ${k}$-derivation of ${A}$ into ${I}$. A quick subtlety is that a “${k}$-derivation” here means a ${k}$-linear map into an ${A}$-module that satisfies the Leibniz rule and evaluates to zero on ${k}$. So we need to understand how ${I}$ is an ${A}$-module. But ${I^2=0}$, so ${I}$ is a ${B}$-module, which in turn is an ${A}$-module via ${f}$ (equivalently via ${g}$ or ${g'}$, which agree modulo ${I}$). The reason ${im\,\theta\subset I}$ is that ${\theta}$ is a lift of the zero map, since ${g}$ and ${g'}$ both lift ${f}$. The sequence is exact and ${\theta}$ lands in the kernel of ${B'\rightarrow B}$, so its image lies in ${I}$.
$\displaystyle \begin{array}{rcl} \theta(ab) & = & g(a)g(b)-g'(a)g'(b) \\ & = & g(a)g(b)-g(a)g'(b)+g(a)g'(b)-g'(a)g'(b) \\ & = & g(a)(g(b)-g'(b))+(g(a)-g'(a))g'(b) \\ & = & g(a)\theta(b) + \theta(a)g'(b) \\ & = & a\cdot\theta(b)+b\cdot\theta(a) \end{array}$
Evaluates to 0 on ${k}$: Since ${g(1)=g'(1)=1}$, ${\theta(1)=0}$. Thus ${\theta(k)=k\cdot 0 = 0}$.
Since ${\Omega_{A/k}}$ is a universal object, we can consider ${\theta\in Hom_A(\Omega_{A/k}, I)}$. Conversely, given any ${\theta\in Hom_A(\Omega_{A/k}, I)}$, we can compose with the universal map ${d}$ to get a ${k}$-derivation ${\theta'=\theta\circ d: A\rightarrow I}$. Compose this with the inclusion ${I\rightarrow B'}$, and call the result ${\psi: A\rightarrow B'}$. Since composing again with ${B'\rightarrow B}$ gives ${0}$, ${\psi}$ is a lift of ${0}$ and hence ${g'=\psi + g}$ is a lift of ${f}$ (note we’ve only guaranteed ${k}$-linearity so far, not that it is an algebra hom). Finally let’s check it preserves multiplication:
$\displaystyle \begin{array}{rcl} \psi(ab)+g(ab) & = & \theta'(ab)+g(ab) \\ & = & \theta'(a)g(b)+g(a)\theta'(b)+g(ab) \\ & = & \theta'(a)\theta'(b) + \theta'(a)g(b)+g(a)\theta'(b)+g(a)g(b) \\ & = & (\theta'(a)+g(a))(\theta'(b)+g(b)) \\ & = & g'(a)g'(b) \end{array}$
Now let ${P=k[x_1, \ldots , x_n]}$ for which ${A=P/J}$ for some ${J}$. So we get another exact sequence ${0\rightarrow J\rightarrow P\rightarrow A\rightarrow 0}$. We now check that there is a map ${h:P\rightarrow B'}$ such that the square ${\begin{matrix} P & \stackrel{h}{\longrightarrow} & B' \\ \downarrow & & \downarrow \\ A & \stackrel{f}{\longrightarrow} & B \end{matrix}}$ commutes and this induces an ${A}$-linear map ${\overline{h}: J/J^2\rightarrow I}$.
Note a map out of ${P}$ is completely determined by where the ${x_i}$ go. Since ${B'\rightarrow B}$ is surjective, choose any ${b_i\in B'}$ such that ${b_i\mapsto f(\overline{x_i})}$. Extend this to get ${h}$. By definition ${h}$ makes the square commute. Chasing around exactness, we get that if ${a\in J}$, then considering ${a\in P}$ gives ${h(a)\in I}$. Thus restricting gives ${\overline{h}: J\rightarrow I}$. Since ${I^2=0}$ we have ${h(a^2)=h(a)^2=0}$, so this descends to a map ${\overline{h}: J/J^2\rightarrow I}$. It is clearly ${A}$-linear.
Let ${X=}$ Spec ${P}$ and ${Y=}$ Spec ${A}$. The sheaf of ideals ${\mathcal{J}=\tilde{J}}$ defines ${Y}$ as a subscheme. Then by nonsingularity (Theorem 8.17 of Hartshorne) we have an exact sequence ${0\rightarrow \mathcal{J}/\mathcal{J}^2\rightarrow \Omega_{X/k}\otimes \mathcal{O}_Y\rightarrow \Omega_{Y/k}\rightarrow 0}$. Take global sections of this sequence to get the exact sequence ${0\rightarrow J/J^2\rightarrow \Omega_{P/k}\otimes A\rightarrow \Omega_{A/k}\rightarrow 0}$ (${H^1}$ vanishes by Serre).
Now apply the functor ${Hom_A(\cdot, I)}$ to get the exact sequence ${0\rightarrow Hom_A(\Omega_{A/k}, I)\rightarrow Hom_P(\Omega_{P/k}, I)\rightarrow Hom_A(J/J^2, I)\rightarrow 0}$. Exactness on the
right is due to ${\Omega_{A/k}}$ being locally free and hence projective, so ${Ext^1}$ vanishes.
That surjectivity is exactly what we needed to say a lift exists. Take ${\overline{h}\in Hom_A(J/J^2, I)}$ as constructed before. Then choose ${\theta\in Hom_P(\Omega_{P/k}, I)}$ that maps to it. Compose with the universal map and inclusion to get a derivation ${P\rightarrow B'}$ (we’ll just relabel this ${\theta}$). Set ${h'=h-\theta}$. Since ${h'(j)=h(j)-\theta(j)=\overline{h}(j)-\overline{h}(j)=0}$ for every ${j\in J}$, it descends to a map ${g:A\rightarrow B'}$, which is the desired lift.
Next time we’ll move on to infinitesimal extensions and rephrase what we just did in those terms.
|
{"url":"http://hilbertthm90.wordpress.com/2010/09/22/infinitesimal-lifting-property/","timestamp":"2014-04-20T18:44:56Z","content_type":null,"content_length":"99531","record_id":"<urn:uuid:0435ceeb-14c6-4b26-9adb-423453bb6cbc>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00066-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fascinating facts about baby pandas combined with a math lesson make this ideal for parents and teachers alike. Right-hand pages tell about the births and growth of Hua Mei and Mei Sheng, panda cubs
at the San Diego Zoo. Left-hand pages zero in on one of the facts presented and turn it into a subtraction problem. Graphs, number lines, calendars and place-value charts present the problems
visually. Nagda explains several different methods for solving subtraction problems: regrouping, subtracting each place value, thinking about doubles and adding up. While she explains these methods
well, children will need additional support. Throughout, readers are encouraged to get to a “friendly” number (10, 100, etc.) to make the problem easier to solve—adding the same amount to both
numbers keeps the difference the same. Even the least enthusiastic of math students will find something of interest, from how much the pandas eat, sleep, poop and weigh, to how long they will live.
Adorable photos of the cubs support the text. A great text for elementary teachers and parents to share with their young mathematicians. (Nonfiction. 6-10)
|
{"url":"https://www.kirkusreviews.com/book-reviews/ann-whitehead-nagda/panda-math/print/","timestamp":"2014-04-20T10:49:05Z","content_type":null,"content_length":"4399","record_id":"<urn:uuid:18a7cd63-3724-4553-8303-dc3f0ced3dda>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00509-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Trig identities.
December 7th 2007, 11:49 AM #1
Dec 2007
Trig identities.
Hello everyone, I have a couple of questions for you today.
First, how does one deal with "sinx + sin2x" (or anything for that matter, cosx - cos2x, etc)? Is it "sin3x" or rather sinx + sin(x+x) and then expand the sin(x+x) into sinxcosx+cosxsinx?
An Identity to fit the question,
(sinx-sin3x)/(cosx+cos3x) = -tanx
Also, this question on my review homework seems to be ridiculous, and no one in my class can seem to get it.
(sinx+siny)/(cosx-cosy) = -cot(x/2 - y/2)
May I also mention, we can not use the Sum-product identity, cosa + cosb = 2cos((a+b)/2)cos((a-b)/2),
Thank you
You have the right idea. Changing an odd "2x" function to a bunch of 'x' functions should be beneficial.
For that last one, there MUST be a clue in the expansion of tan(A+B).
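For reference, here is one route for the first identity (a sketch added here; it was not worked out in the thread) that uses the triple-angle formulas rather than the banned sum-to-product identities:
sin3x = 3sinx - 4sin^3x and cos3x = 4cos^3x - 3cosx, so
sinx - sin3x = 4sin^3x - 2sinx = -2sinx(1 - 2sin^2x) = -2sinx.cos2x, and
cosx + cos3x = 4cos^3x - 2cosx = 2cosx(2cos^2x - 1) = 2cosx.cos2x.
Dividing, (sinx - sin3x)/(cosx + cos3x) = -sinx/cosx = -tanx, wherever cos2x is nonzero.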
December 7th 2007, 12:09 PM #2
MHF Contributor
Aug 2007
|
{"url":"http://mathhelpforum.com/trigonometry/24396-trig-identities.html","timestamp":"2014-04-18T11:19:48Z","content_type":null,"content_length":"31793","record_id":"<urn:uuid:61f992cf-0b27-4e50-8fa3-b14f39d10acf>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finding LCDs, LCMs, and GCFs
Date: 06/09/98 at 20:50:15
From: Cheryl
Subject: LCD, LCM, GCF
I don't know how to do the greatest common factor, the least common
multiple, and the least common denominator.
Date: 06/09/98 at 21:21:57
From: Doctor Gary
Subject: Re: LCD, LCM, GCF
Let's focus on what you do know, and maybe we'll discover that you
know more than you think you do.
Do you know how to express a number as the product of its prime
factors? For example, do you know that 120 can be expressed (using x
to indicate multiplication) as 2 x 2 x 2 x 3 x 5?
There are some very useful shortcuts for factoring a number. You
should be familiar with how to tell if a number is divisible by 2 or 5
just by looking at the last digit. (If the last digit is 0, 2, 4, 6,
or 8, it's divisible by 2. If the last digit is 0 or 5, it's divisible
by 5.) You should also know that numbers are divisible by 3 only when
the sum of the digits of the number is divisible by 3.
When you do factor, it's always a very good idea to list the factors
in increasing order, because it makes it much easier to see "common"
The greatest common factor of two (or more) numbers is the product of
all the factors the numbers have in common. If you wanted to find the
greatest common factor of 32 and 76, you would express both as
products of their prime factors, and look for factors common to both:
32 = 2 x 2 x 2 x 2 x 2
76 = 2 x 2 x 19
There are two 2s common to both numbers, so 2 x 2 = 4 is the "greatest
common factor" of 32 and 76.
The least common multiple of two (or more) numbers is the product of
one number times the factors of the other number(s) that aren't
common to the first.
If you wanted to find the least common multiple of 32 and 76, you'd
multiply 32 by 19, because 19 is the only factor of 76 that isn't
common to the factors of 32.
Least common denominators is just a fancy way of saying least common
multiple for two (or more) different denominators, so if you know how
to find least common multiples, you can find least common
denominators. Once you've found a least common denominator, you
re-express each fraction by multiplying by a carefully chosen fraction
which is equal to 1 (remember that multiplying by 1 doesn't change the
value of a number).
For example, in order to add 1/6 and 1/8, we find the least common
multiple of the denominators, which is 24. Then we multiply 1/6 by 4/4
and multiply 1/8 by 3/3, to re-express each addend as some number of
24ths:
1/6 x 4/4 = 4/24
1/8 x 3/3 = 3/24
1/6 + 1/8 = 4/24 + 3/24 = (4+3)/24 = 7/24
Remember, whether you're trying to find the greatest common factor (a
fancy way of saying biggest number by which two or more numbers are
divisible), or the least common multiple (a fancy way of saying the
smallest number which is divisible by two or more numbers), the key is
factoring the numbers to their prime factors, and then noting which
factor(s) the numbers have in common.
The product of the common factors will be the greatest common factor.
The product of one number and the factors of the other that are NOT
common to the first will be the least common multiple.
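A handy cross-check (not from the original exchange, but easy to
verify): for any two numbers, the greatest common factor times the
least common multiple equals the product of the numbers themselves.
With 32 and 76, the GCF is 4 and the LCM is 32 x 19 = 608, and
indeed 4 x 608 = 2432 = 32 x 76.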
Enjoy. Once you understand it, it's actually sort of fun.
-Doctor Gary, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
|
{"url":"http://mathforum.org/library/drmath/view/58527.html","timestamp":"2014-04-16T13:55:54Z","content_type":null,"content_length":"8238","record_id":"<urn:uuid:1caef9de-645b-4424-b785-1879af5dbfc1>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Programming Praxies - Egyptian Fractions, C# solution.
Programming Praxies – Egyptian Fractions, C# solution.
This post presents C# solutions to an Egyptian fractions problem as described in http://programmingpraxis.com/2013/06/04/egyptian-fractions.
An Egyptian fraction was written as a sum of unit fractions, meaning the numerator is always 1; further, no two denominators can be the same. An easy way to create an Egyptian fraction is to
repeatedly take the largest unit fraction that will fit, subtract to find what remains, and repeat until the remainder is a unit fraction. For instance, 7 divided by 15 is less than 1/2 but more
than 1/3, so the first unit fraction is 1/3 and the first remainder is 2/15. Then 2/15 is less than 1/7 but more than 1/8, so the second unit fraction is 1/8 and the second remainder is 1/120.
That’s in unit form, so we are finished: 7 ÷ 15 = 1/3 + 1/8 + 1/120. There are other algorithms for finding Egyptian fractions, but there is no algorithm that guarantees a maximum number of terms
or a minimum largest denominator; for instance, the greedy algorithm leads to 5 ÷ 121 = 1/25 + 1/757 + 1/763309 + 1/873960180913 + 1/1527612795642093418846225, but a simpler rendering of the same
number is 1/33 + 1/121 + 1/363.
Your task is to write a program that calculates the ratio of two numbers as an Egyptian fraction…
As presented in the original post, you can use a greedy algorithm. That's not fun! Let's try to improve on it, i.e. to return a smaller number of unit fractions. You cannot devise an algorithm guaranteed to find the smallest number of terms, since the problem space has infinite possibilities (genetic algorithm? food for thought).
So what approach can we take to improve on the greedy solution? I began by writing out a few iterations of calculations for 5 ÷ 121: from 1/25 up to 1/33 (part of an optimal solution presented in the problem definition). I noticed that when choosing 1/33, the remaining fraction (that we are pulling apart) can be simplified, whereas all the fractions leading up to 1/33 leave a fraction that cannot be further simplified! If you think about it, simplifying keeps the size of the denominator down, and keeping the denominators small helps yield a smaller number of terms. This is because when we are dealing with very small numbers (large denominators), we approach the final target at a slower rate than we would with larger numbers. Simple, huh?
So how can you simplify a fraction? It can be done by calculating the gcd (greatest common divisor) between the numerator and the denominator, then dividing the numerator and the denominator by the
gcd. If the gcd is 1, then the fraction cannot be simplified. Hmmm… so maybe we can decide on the next unit fraction (for subtracting) only if the result can be simplified. Using this informal idea
as the basis of our algorithm, we get the following solution:
using System;
using System.Collections.Generic;

public class EgyptianFractions
{
    // Decomposes a proper fraction numerator/denominator into distinct
    // unit fractions, preferring denominators whose remainder simplifies.
    public static List<int[]> GetFractions(int numerator, int denominator)
    {
        if (numerator >= denominator)
            throw new ArgumentOutOfRangeException("denominator");
        if (numerator <= 0)
            throw new ArgumentOutOfRangeException("numerator");

        var fractions = new List<int[]>();
        int subDenominator = 2;
        do
        {
            // First find the next unit fraction small enough to subtract,
            // i.e. the least subDenominator with
            // 1/subDenominator <= numerator/denominator.
            int leftNumerator = numerator * subDenominator;
            while (leftNumerator < denominator)
            {
                subDenominator++;
                leftNumerator += numerator;
            }

            // Now we have a valid unit fraction to subtract with; keep
            // searching for the next unit fraction that yields a remainder
            // that can be simplified (to keep the denominators small).
            while (true)
            {
                int remainingNumerator = leftNumerator - denominator;
                if (remainingNumerator == 0)
                {
                    // The fractions are the same; we are done.
                    numerator = 0;
                    fractions.Add(new[] { 1, subDenominator });
                    break;
                }

                int remainingDenominator = denominator * subDenominator;
                int gcd = GCD(remainingNumerator, remainingDenominator);
                if (gcd > 1 || remainingNumerator == 1)
                {
                    // The resultant fraction can be simplified using this denominator.
                    numerator = remainingNumerator / gcd;
                    denominator = remainingDenominator / gcd;
                    fractions.Add(new[] { 1, subDenominator });

                    // Finished?
                    if (numerator == 1)
                        fractions.Add(new[] { 1, denominator });
                    break;
                }

                subDenominator++;
                leftNumerator += numerator; // i.e. additive version of subDenominator * numerator
            }
        } while (numerator > 1);

        return fractions;
    }

    private static int GCD(int n1, int n2)
    {
        if (n2 == 0)
            return n1;
        return GCD(n2, n1 % n2);
    }
}
If you pass in 5, 121, the result will be:
1/33, 1/91, 1/33033
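For completeness, here is a minimal driver (my addition, not part of the original post; it assumes the class and method defined above) that reproduces this output:

using System;
using System.Linq;

class Program
{
    static void Main()
    {
        // Decompose 5/121 and print the resulting unit fractions.
        var terms = EgyptianFractions.GetFractions(5, 121);
        Console.WriteLine(string.Join(", ", terms.Select(t => $"{t[0]}/{t[1]}")));
        // Prints: 1/33, 1/91, 1/33033
    }
}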
|
{"url":"http://brooknovak.wordpress.com/2013/06/11/programming-praxies-egyptian-fractions-c-solution/","timestamp":"2014-04-20T03:19:46Z","content_type":null,"content_length":"68082","record_id":"<urn:uuid:8fbe6621-35cf-42b3-8f31-7a94749a9e76>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00602-ip-10-147-4-33.ec2.internal.warc.gz"}
|
question on hypothesis
October 18th 2009, 04:56 AM #1
Oct 2008
question on hypothesis
Under H0, a random variable has the cumulative distribution function $F_0(x)=x^2, 0\leq x\leq 1$; and under H1, it has the cumulative distribution function $F_1(x)=x^3, 0\leq x\leq 1$
Question: If the two hypotheses have equal prior probability, for what values of x is the posterior probability of H0 greater than that of H1?
This is what I did:
Since I know that $\frac{P(H_0|x)}{P(H_1|x)}=\frac{P(H_0)}{P(H_1)}\frac{P(x|H_0)}{P(x|H_1)}$, given that the prior probabilities are equal, this yields $\frac{P(H_0|x)}{P(H_1|x)}=\frac{P(x|H_0)}{P(x|H_1)}$. However, $x^2 \geq x^3$ given that 0<x<1, so it seems that for all values the statement is true, but the answer is apparently not as above.
Hope someone can help me. Thanks
The answer as given is 2/3. But I not sure how to get it
$P(x|H_0)=\frac{d}{dx}x^2,\ \ 0<x<1$
$P(x|H_1)=\frac{d}{dx}x^3, \ \ 0<x<1$
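To spell out the step the hint leaves implicit: with equal priors, the posterior of H0 exceeds that of H1 exactly when the likelihood ratio is greater than 1, that is, when
$\frac{P(x|H_0)}{P(x|H_1)}=\frac{2x}{3x^2}=\frac{2}{3x}>1$,
which holds precisely for $x<2/3$, matching the stated answer of 2/3.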
Given a level $\alpha$ test, I managed to find the rejection region as: $X>\sqrt{1-\alpha}$
What about the power of the test?
The answer is $1-(1-\alpha)^3 /2$. Anyone can help with the explanation?
My understanding is as follows: to find the power of the test, I will use
Power $= P(X>\sqrt{1-\alpha}|H_1)$
$= 1-\int_{0}^{\sqrt{1-\alpha}}3x^2\,dx$
$= 1-(1-\alpha)^{3/2}$
Can it be that the answer is wrong? That seems highly unlikely, I think.
Last edited by noob mathematician; October 20th 2009 at 06:21 AM.
October 19th 2009, 04:02 AM #2
Oct 2008
October 19th 2009, 04:24 AM #3
Grand Panjandrum
Nov 2005
October 19th 2009, 08:51 AM #4
Oct 2008
October 20th 2009, 04:10 AM #5
Oct 2008
|
{"url":"http://mathhelpforum.com/advanced-statistics/108739-question-hypothesis.html","timestamp":"2014-04-17T19:27:06Z","content_type":null,"content_length":"45175","record_id":"<urn:uuid:e2133442-c42b-4cc9-9767-a1dad7471fe8>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Interpretations of Probability
Here’s Timothy Gowers, a Fields Medalist, from his book Mathematics: A Very Short Introduction:
However, there certainly are philosophers who take seriously the question of whether numbers exist, and this distinguishes them from mathematicians, who either find it obvious that numbers exist
or do not understand what is being asked.
Everyone knows there is friction between scientists and philosophers of science. Richard Feynman spoke for many scientists when he quipped that, “Philosophy of science is as useful to scientists as
ornithology is to birds.” From the other side, it is not uncommon for philosophers to lament the philosophical naivete of scientists (for example, in this recent book review.)
I am not aware of any similar tension between mathematicians and philosophers of mathematics, for the simple reason that I do not know any mathematicians who take any interest at all in the
philosophy of their discipline. Perhaps this reflects badly on us as a community, but it is what it is. In my own case, every once in a while I get motivated to dip my toe into the philosophical
literature, but it’s rare that I find myself enriched by the experience.
There have been exceptions, however. While writing the BMHB (that’s The Big Monty Hall Book) I found myself moved to read some of the literature about Interpretations of Probability. The reason was
that in writing the book’s early chapters I found myself very casually making use of three different approaches to probability. In discussing the most elementary methods for solving the problem I
used the classical interpretation, in which probabilities record the ratio of favorable outcomes to possible outcomes, assuming the possibilities are equiprobable. Later I discussed the use of Monte
Carlo simulations to determine the correctness of our abstract reasoning, and this suggested a frequentist approach to probability. In this view a probability is something you measure from the data
produced by multiple trials of some experiment. Later still I discussed matters from the perspective of a contestant actually playing the game. In this context it was convenient to take a Bayesian
view of probability, in which a probability statement just records a person’s subjective degree of belief in some proposition.
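To make the frequentist picture concrete, here is a minimal sketch of the kind of Monte Carlo simulation I mean (my own, in C#, not taken from the book; all names are my invention): play the Monty Hall game many times and tally how often sticking and switching win.

using System;

class MontyHallSimulation
{
    static void Main()
    {
        var rng = new Random();
        const int trials = 1_000_000;
        int stickWins = 0, switchWins = 0;

        for (int i = 0; i < trials; i++)
        {
            int prizeDoor = rng.Next(3); // door hiding the car
            int firstPick = rng.Next(3); // contestant's initial choice

            // Monty opens a goat door, so switching wins exactly when
            // the first pick was wrong; sticking wins when it was right.
            if (firstPick == prizeDoor) stickWins++;
            else switchWins++;
        }

        Console.WriteLine($"Stick:  {(double)stickWins / trials:F4}");  // approx. 0.3333
        Console.WriteLine($"Switch: {(double)switchWins / trials:F4}"); // approx. 0.6667
    }
}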
The literature I found about interpreting probability was fascinating, and I certainly found plenty of food for thought. But for all of that I’m still not really sure what people are doing when they
speak of interpreting probability. Probability theory is an abstract construction no different from anything else mathematicians study. No one talks about interpreting a perfect circle; instead we
ask whether the idea of a perfect circle is useful in a given context. Frankly, as a pure mathematician I say that if you run into philosophical difficulties when applying the theory to a real-world
situation, that just serves you right for trying to apply it to anything.
More seriously, the most important criterion for assessing any particular model of probability must surely be usefulness. That my Monty Hall experience led so naturally to three different
interpretations suggests that no one interpretation can capture everything we have in mind when we use probability language. For that reason I tend to favor an ecumenical approach to probability: If
your interpretation is helpful and leads to correct conclusions, then you just go right ahead and stick with it. The existence of other situations where your interpretation does not work so well is
neither here nor there. Why should we even expect one interpretation to cover every facet of probability?
In perusing some of the literature on interpretations of probability, I noticed a bit of a cultural difference between defenders of rival schools of thought. In particular, Bayesians, to a greater
degree than their rivals, really really care about this. They also tend to be a bit contemptuous of other approaches, especially the poor frequentists, who they regard with great pity. A case in
point is this post by Ian Pollock, over at Rationally Speaking. He writes:
Stop me if you’ve heard this before: suppose I flip a coin, right now. I am not giving you any other information. What odds (or probability, if you prefer) do you assign that it will come up
If you would happily say “Even” or “1 to 1” or “Fifty-fifty” or “probability 50%” — and you’re clear on WHY you would say this — then this post is not aimed at you, although it may pleasantly
confirm your preexisting opinions as a Bayesian on probability. Bayesians, broadly, consider probability to be a measure of their state of knowledge about some proposition, so that different
people with different knowledge may correctly quote different probabilities for the same proposition.
If you would say something along the lines of “The question is meaningless; probability only has meaning as the many-trials limit of frequency in a random experiment,” or perhaps “50%, but only
given that a fair coin and fair flipping procedure is being used,” this post is aimed at you. I intend to try to talk you out of your Frequentist view; the view that probability exists out there
and is an objective property of certain physical systems, which we humans, merely fallibly, measure.
My broader aim is therefore to argue that “chance” is always and everywhere subjective — a result of the limitations of minds — rather than objective in the sense of actually existing in the
outside world.
It’s hard to see how this could be true. It is simply a fact that a great many physical systems produce outcomes with broadly predictable relative frequencies. A fair coin flipped in a fair way
really does land heads about half the time and tails about half the time. The ball in an honest roulette wheel finds each number roughly one thirty-eighth of the time. Those are objective properties
of those systems, and it seems perfectly reasonable to use probability language to discuss those objective properties.
So let’s see what Pollock has in mind:
The canonical example from every textbook is a coin flip that uses a fair coin and has a fair flipping procedure. “Fair coin” means, in effect, that the coin is not weighted or tampered with in
such a way as to make it tend to land, say, tails. In this particular case, we can say a coin is fair if it is approximately cylindrical and has approximately uniform density.
How about a fair
flipping procedure? Well, suppose that I were to flip a coin such that it made only one rotation, then landed in my hand again. That would be an unfair flipping procedure. A fair flipping
procedure is not like that, in the sense that it’s … unpredictable? Sure, let’s go with that. (Feel free to try to formalize that idea in a non question-begging way, if you wish.)
I don’t know what level of description Pollock wants here. If he would care to come to my office, I will simply show him what I mean by a fair flipping procedure. But he knows what I would show him,
since it’s the same procedure everyone uses when they are not deliberately trying to cheat someone. The case of a roulette wheel is perhaps even clearer. By a fair procedure I mean, “The way it’s
done in your classier casinos, you know, with the ball going in one direction and the wheel going in the other.”
Let’s move on:
Given these conditions, frequentists are usually comfortable talking about the probability of heads as being synonymous with the long-run frequency of heads, or sometimes the limit, as the number
of trials approaches infinity, of the ratio of trials that come up heads to all trials. They are definitely not comfortable with talking about the probability of a single event — for example, the
probability that Eugene will be late for work today. Will Feller said: “There is no place in our system for speculations concerning the probability that the sun will rise tomorrow. Before
speaking of it we should have to agree on an (idealized) model which would presumably run along the lines ‘out of infinitely many worlds one is selected at random…’ Little imagination is required
to construct such a model, but it appears both uninteresting and meaningless.”
The first, rather practical problem with this is that it excludes altogether many interesting questions to which the word “probability” would seem prima facie to apply. For example, I might wish
to know the likelihood of a certain accident’s occurrence in an industrial process — an accident that has not occurred before. It seems that we are asking a real question when we ask how likely
this is, and it seems we can reason about this likelihood mathematically. Why refuse to countenance that as a question of probability?
As it happens, I am among those who are uncomfortable with applying probability language to one-off situations. It’s fine to speak informally about the likelihood (or odds, or probability) of a
one-off event, but if the idea is to assign actual numbers to events and then apply the formal theory of probability to them, then I no longer understand what you are doing. It’s unclear to me what
it means to say, “Given the information I have I believe the probability of this one-off event is one-third,” unless we can view the event as one among a long sequence of trials.
Let’s consider Pollock’s examples. Informally I might say that, given what I know about Eugene, it’s highly likely that he will be late to work today. But it’s hard to imagine what it would mean to
assign an actual number to the probability that Eugene will be late, unless we have long experience with Eugene’s habits on days that are comparable to this one. Likewise, I could make an informal
assessment of how likely it is that an industrial accident will occur, but I don’t know how to assign an actual number to the probability of it occurring. Of course, we might look at a specific
mechanical part used in the industrial process and say something like, “This part has been used in tens of thousands of industrial processes and empirically it fails roughly one time in five
thousand…” Now I know what we’re talking about! But if we’re truly talking about a one-off event that is completely divorced from any possible long sequence of trials, then I just don’t know what it
means to assign a probability to its occurrence.
Moving on:
The second, much deeper problem is as follows (going back to coin flipping as an example): the fairness (i.e., unpredictability) of the flipping procedure is subjective — it depends on the state
of knowledge of the person assigning probabilities. Some magicians, for example, are able to exert pretty good control over the outcome of a coin toss with a fairly large number of rotations, if
they so choose. Let us suppose, for the sake of argument, that the substance of their trick has something to do with whether the coin starts out heads or tails before the flip. If so, then
somebody who knows the magicians’ trick may be able to predict the outcome of a coin flip I am performing with decent accuracy — perhaps not 100%, but maybe 55 or 60%. Suppose that a person
versed in such tricks is watching me perform what I think is a fair flipping procedure. That person actually knows, with better than chance accuracy, the outcome of each flip. Is it still a “fair
flipping procedure?”
I’m afraid I don’t see the problem. I certainly agree that a skillful magician can fool me into thinking he is using a fair procedure when he really isn’t. The fact remains that there are flipping
procedures that produce stable relative frequencies of heads and tails. If I know you are using one of those, then I can make an objective statement about what will happen in a long-run of trials.
You might retort that I can never really know what procedure you’re using, and that is where the subjectivity comes in. But that same argument could be used against any claim to objective knowledge.
It’s hardly a weakness unique to probability. Any fact you assert is inevitably based on a pile of assumptions about how the world is, and a determined skeptic could challenge you on any of those
assumptions. But if we’re ever comfortable talking about objective knowledge, then I don’t see why, “A fair coin flipped in a fair way will land heads roughly half the time in a long sequence of
trials,” should not be considered objective.
So it breaks down like this: It is an objective fact that certain physical systems produce outcomes with broadly stable relative frequencies. Probability theory is very useful for understanding such
situations. Plainly, then, there is an objective aspect to probability. In practice I can be mistaken about certain facts that are relevant to making correct probability assignments. Thus, there is
also a subjective aspect to probability. That is why, depending on the situation, it might be useful to think of probability in terms of the objective properties of physical systems, or in terms of
our subjective knowledge of what is taking place.
This problem is made even clearer by indulging in a little bit of thought experimentation. In principle, no matter how complicated I make the flipping procedure, a godlike Laplacian Calculator
who sees every particle in the universe and can compute their past, present and future trajectories will always be able to predict the outcome of every coin flip with probability ~1. To such an
entity, a “fair flipping procedure” is ridiculous — just compute the trajectories and you know the outcome!
Generalizing away from the coin flipping example, we can see that so-called “random experiments” are always less random for some agents than for others (and at a bare minimum, they are not random
at all for the Laplacian Calculator), which undermines the supposedly objective basis of frequentism.
I disagree. That a godlike Laplacian Calculator can perfectly predict the outcome of any coin toss has no relevance at all to the objective basis of frequentism. The thing that’s objective is the
stable long-run frequency, not the outcome of any one toss. Our godlike Calculator will presumably predict that heads will occur half the time in a long sequence of trials.
Pollock goes on to discuss quantum mechanics and chaos theory, but I won’t discuss that part of his post.
The three interpretations of probability I have mentioned are clearly related to one another. The classical interpretation defines probability without any reference to long runs of trials, but the
ratio you compute is understood to represent a prediction about what will happen in the long run. And Bayesians don’t think that long run data is irrelevant to probabilistic reasoning. They just
treat that data as new information they use to update a prior probability distribution. And no one would deny that our judgements about how likely things are to happen in the future depends on the
information we have in the present.
Given that different interpretations are plainly useful in different contexts, I don’t understand the mania for trying to squeeze everything about probability into just one interpretation. You have
lost something important by declaring that any probability assignment is purely subjective. Let’s not forget that probability was invented in the context of games of chance, and in that context it
developed models that permit fairly detailed predictions about long-run frequencies. That I can never be absolutely certain, in a given situation, that my model applies does not imply that all
probability statements are purely subjective.
1. #1 lylebot October 11, 2011
It’s hard to see how this could be true. It is simply a fact that a great many physical systems produce outcomes with broadly predictable relative frequencies. A fair coin flipped in a fair
way really does land heads about half the time and tails about half the time
But in principle you could measure all forces acting on the coin and predict with near 100% certainty whether it will come up heads or tails, right? It’s not random in the same way that measuring
the spin of an electron is.
In practice we abstract those forces away and model the coin as a random variable with 50% probability of coming up heads. But that’s just a model, and in some sense it is subjective: we have
made the decision to abstract certain forces that we could have measured had we wanted to, and that decision is a subjective one that we could have made differently. No?
Anyway, Andrew Gelman is a political scientist/statistician/Bayesian/blogger that you might enjoy reading on this topic. He’s a Bayesian but he doesn’t commit to the notion of “subjectivity” that
so many Bayesians do, and he sees value in frequentist interpretations as well. Here’s a link to a recent essay:
2. #2 lylebot October 11, 2011
Just to add to my previous post… the great statistician C. R. Rao could supposedly flip a coin so that it always came up heads. If that’s true, it’s a pretty great demonstration of Box’s famous
quote “all models are wrong, but some are useful”.
3. #3 Sean Santos October 11, 2011
Hmm. The one thing I will say is that in order to meaningfully assign probability to some event, you have to stipulate what knowledge or information about the event you are or are not taking into
account. From a Bayesian perspective, this is part of the “subjective” viewpoint that you have to take. From a frequentist perspective, this is done by labeling the event as a member of a
specific class. From a “classical” perspective, this might be done by listing possible outcomes (and then stating that all outcomes are equally likely, for all you know).
This all potentially leads to the same result. A random person, viewing a coin toss as a “typical” or “fair” coin toss, from the class of fair coin tosses, will give the probability of getting
heads as 50%. Laplace’s demon, viewing a coin toss as a deterministic event from a class consisting only of tosses physically identical to that one flip, might give the probability of getting
heads as 100%. A skilled predictor, viewing a coin toss as a moderately biased toss from a class of tosses that are lopsided to about the same degree, might give the probability of getting heads
as 75%.
In each case the outcome might be “ultimately” or ontologically determined from the start. But there are legitimate yet differing estimates due to the amount of information available or being
taken into account. None of the three actors listed is necessarily “wrong” in their deductions, even though they have different conclusions, which is where the claims about subjectivity come in.
Each probability may be objectively correct given certain information or a certain reference class, but “the probability that this next coin flip will come up heads” is not an objective quantity
with a unique value. I would agree, however, that you can objectively assign a value of 50% to “the probability that this coin will come up heads given that it is flipped fairly and that we take
into account no other information about the way in which it is flipped”. You have to exclude any further relevant information from your consideration, even if only implicitly.
4. #4 Stephen Lucas October 11, 2011
When discussing philosophy of mathematics, I am reminded of the quote: “All a mathematician needs to work is a pen, paper, and a waste basket. A philosopher doesn’t need the waste basket.” I once had a reason to try and understand the philosophy of mathematics (the invented versus discovered issue), and after plowing through various texts over a period of years, I knew enough to find holes in the arguments for invented, and decided it wasn’t worth wasting my time on any further.
I’m not going to disagree with your broader thesis, which I thoroughly agree with. However, a minor point. Be careful about claims of applying pure mathematics to the real world being worthless.
Pretty much all of pure mathematics has turned out to be useful eventually, including your own areas of number theory and graph theory. Virtually all of number theory (the purest of the pure)
turns out to be enormously practical and important in cryptography. Graph theory started out pure, but now is useful in huge numbers of practical areas from operations research to network theory.
And I’d say your credentials as a pure mathematician are severely battered by your excellent work on the Monty Hall Problem. Understanding human frailties with respect to conditional
probabilities is why casinos and insurance companies do so well. That is positively applied!
5. #5 Jason Rosenhouse October 11, 2011
Oh, lighten up Steve! I certainly do not think that applied mathematics is worthless. I’m actually a big fan of applied math, just so long as it’s someone else who’s doing it.
And since you were kind enough to compliment my work on the Monty Hall problem, let me return the favor by informing everyone that some of the Monte Carlo simulations I referred to in the post
were actually programmed and run by you. Thanks!
6. #6 D. C. Sessions October 11, 2011
However, there certainly are philosophers who take seriously the question of whether numbers exist
Which statement leads me to wonder whether there are also philosophers who take seriously the question of whether words exist — and if not, why not?
7. #7 Michael Fisher October 11, 2011
Hi Jason ~ yet another entertaining & interesting post ~ thank you
A small detail that doesn’t effect your general argument, but perhaps of interest:
…The ball in an honest roulette wheel finds each number roughly one thirty-eighth of the time. Those are objective properties of those systems, and it seems perfectly reasonable to use
probability language to discuss those objective properties
The European roulette wheel lacks the "00" slot
The house edge on a single number bet on the American wheel with 38 different numbers is 2/38 – or 5.26% rake
The house edge on the European wheel is only 1/37 – or 2.70% rake
When the odds shift so much, you should always be looking for the single zero European roulette, or even better play a skilled game against less skilled players with only house fees as a rake
e.g. poker
8. #8 Dr. I. Needtob Athe October 11, 2011
You’re sitting at a card table with Andy, Bill, and Carl, all of whom are good at math. You take the four aces out of a deck of cards and explain that the Ace of Hearts and the Ace of Diamonds
are defined as red cards, while the Ace of Spades and Ace of Clubs are defined as black cards.
You set the rest of the deck aside and ask, “What is the probability that a card drawn at random from these four cards will be red?”
Everyone agrees that the probability is 1/2.
You shuffle the cards in such a thorough way that a card selected from them will truly be drawn at random, then you draw a card without looking and place it face down in the center of the table.
You hand out sheets of paper and ask each person to secretly write his name and the probability that the card in the center of the table is red, then fold his paper and give it to you.
Unsurprisingly, all three papers say 1/2.
From the three remaining aces, you give one card each to Andy, Bill, and Carl, keeping the cards face down and instructing them to each look at their own card but hide it from everyone else.
Again, you hand out sheets of paper and ask each person to secretly write his name and the probability that the card in the center of the table is red, then fold his paper and give it to you.
This time the results are different. Andy’s paper says 1/3, but Bill’s and Carl’s papers both say 2/3.
My question is, who is right and who is wrong? Also, if anyone is right, does that mean that his first answer of 1/2 was wrong?
My point is that when you speak of the probability that a certain card has a certain property, you’re speaking about a property of the card, and if two people give two different answers about
that property, they can’t both be correct.
This seems like a paradox to me. How can it be resolved?
9. #9 Deepak Shetty October 11, 2011
I don’t know what level of description Pollock wants here.
I think he means that if you flip the coin exactly the same way it will always land the same side. And if you change something, then it isn’t exactly “fair,” and if you knew enough you could
predict the outcome.
10. #10 Dr. I. Needtob Athe October 11, 2011
I have another question about mathematics:
Most of you are probably familiar with Bruce Schneier, the prominent cryptographer. I’ve been reading his Crypto-Gram and his blog for many years and I’ve noticed that he occasionally refers to
himself as a scientist.
I’m not trying to deny him that status, but does cryptography fit the definition of a science? Does it fit the definition of a natural science? Do cryptographers study some aspect of nature? If
so, what aspect of nature do they study?
11. #11 eric October 11, 2011
This time the results are different. Andy’s paper says 1/3, but Bill’s and Carl’s papers both say 2/3.
My question is, who is right and who is wrong?
All of them are wrong; the probability of it being red is 100%.
(Assuming none of them lied and all of them know how to calculate probability).
12. #12 Pierce R. Butler October 11, 2011
“Philosophy of science is as useful to scientists as ornithology is to birds.”
Apparently Dr. Feynman never asked Audubon Society members about their contributions to habitat preservation.
13. #13 jt512 October 12, 2011
Jason, When Ian first made this blog post, I tried posting a similar critique as a comment on his blog, but couldn’t, apparently due to a software glitch of some sort. I agree with you about
everything, except possibly that it is inappropriate to assign a probability to a “one-off” event if we employ the Bayesian interpretation that probability represents one’s degree of belief in a
hypothesis. For example, what’s the probability that ESP is real? IMO, on the order of 10^(-12).
14. #14 Patrick October 12, 2011
This may be a stupid comment, since my math education, while fairly significant, did end in undergraduate.
But surely there’s a difference between saying that a process yields a stable relative frequency, and saying that a process is random in some objective sense? A table of random integers yields a
stable relative frequency, I think, but if I’ve got the table in front of me I can know with 100% certainty that the next integer is a 4.
Doesn’t the Bayesian point become a lot clearer if you change analogies about coin flips to something that’s clearly and obviously predetermined, but also “random” in certain senses of the term?
That would at least eliminate intuitive assumptions people make about the nature of games of chance.
15. #15 Sascha Vongehr October 12, 2011
“a great many physical systems produce outcomes with broadly predictable relative frequencies.”
Bayesians may be contemptuous because once you do QM physics, the frequentist approach is as circular as the classical interpretation, or in other words, there is an infinite regress. I tried to
explain one aspect of this in the article “Empirical Probability versus Classical Fair Meta-Randomness”:
16. #16 Peter October 12, 2011
Just because a model doesn’t work on a quantum level doesn’t mean that the model is wrong elsewhere, or inappropriate, or broken, it just means the model doesn’t work on a quantum level.
17. #17 bobh October 12, 2011
So Jason, you say “every once in a while I get motivated to dip my toe into the philosophical literature, but it’s rare that I find myself enriched by the experience.” Then you launch into a post
on philosophical interpretations of probabilities demonstrating the truth of your statement.
18. #18 darkgently October 12, 2011
- “I’m actually a big fan of applied math, just so long as it’s someone else who’s doing it.”
As a pure mathematician sharing an office with applied mathematicians, I am totally on board with this!
19. #19 Collin October 12, 2011
@8 & @11: The probability of a card already placed can only be meaningful in the frequentist sense. The problem is that it isn’t established beforehand what details of the experiment are
necessary to place it among the set of trials. Andy, Bill, and Carl each assume the right to keep their card while the others must redraw. It is this inconsistency that causes the paradox. If it
were made plain that they all had to return their card after reporting the probability, they would’ve said 1/2.
20. #20 J. Quinton October 12, 2011
This post looks like it would fit in well with the community at Less Wrong.
21. #21 Dr. I. Needtob Athe October 12, 2011
Collin (Post #19): “Andy, Bill, and Carl each assume the right to keep their card while the others must redraw.”
Nobody will redraw. All the drawing is finished and we’re talking about the color of the card laying face-down in the center of the table.
I don’t think you understood what I wrote. Please read it again.
22. #22 phayes October 12, 2011
“That a godlike Laplacian Calculator can perfectly predict the outcome of any coin toss has no relevance at all to the objective basis of frequentism. The thing that’s objective is the stable
long-run frequency, not the outcome of any one toss. Our godlike Calculator will presumably predict that heads will occur half the time in a long sequence of trials.”
Yes, well… Pollock’s post looks very much like a sketchy attempt at an interpretation of bits of Jaynes, and if that is where it’s coming from it has made an easily dismissable mess of the
argument(s) pointing out what’s wrong with the idea of an ‘objective basis of frequentism’ there. Best read Jaynes’s book.
23. #23 Marshall October 12, 2011
You don’t understand the mania for squeezing everything into just one interpretation, and I agree totally. Understanding benefits from looking at things from various sides, from angles as
distinct as possible. Logical parallax. Reification. Substitution of variables. And so on. No “one size fits all”.
The ‘fairness’ of the single-trial flipping procedure lies in making conditions uncontrollable or unobservable. One point of a good experiment is to have initial conditions well controlled, so
one imagines that if you thoroughly controlled your flipping procedure, the outcome would indeed be repeatably determinate. Is it possible to define a selection procedure that is truly random (P
in the normal sense) rather than merely unknown (Bayesian)? I suppose the Modernist answer is “no”.
Not that that makes any practical difference. Flipping a coin once to see who pays for dinner is “fair”, in the sense that I would have no qualms about submitting to such a procedure. Other
things being equal. I think that can be mashed into a good philosophical argument. Was that your point?
(On the side, I think it’s interesting that it’s so much easier to describe a fair coin, than a fair procedure for using it.)
… I don’t think it’s surprising that scientists don’t necessarily get much out of philosophy of science. Doers of X would not ordinarily be the best observers of X, or the most interested. (I
once teased my son, then 4 or 5, by reading to him from our child-rearing handbook, explaining to him how his behavior reflected what it said, and what it predicted for his future behavior. He
was indeed quite annoyed.) Perhaps the best consumers of P of S would be those of us, not practicing scientists, who want to understand what scientists are doing and how best to incorporate that
into our lives. Where are the values, where are the blind spots.
24. #24 One Brow October 12, 2011
My question is, who is right and who is wrong? Also, if anyone is right, does that mean that his first answer of 1/2 was wrong?
They are all correct. They were also all correct when they said 1/2.
My point is that when you speak of the probability that a certain card has a certain property, you’re speaking about a property of the card, and if two people give two different answers about
that property, they can’t both be correct.
The probability is not a property of the card. I’m a frequentist (to the degree a choice must be made at all), and see the probability as a feature of a long-term trend of identical situations,
not of a single event.
This seems like a paradox to me. How can it be resolved?
By recognizing you are asking four different questions, which among them have three different correct answers.
Before the three cards are dealt:
1) Based on what you know of the card distribution, what is the probability that this card is red?
After the cards are dealt:
2a) Andy, based on what you know of the card distribution, what is the probability that this card is red?
2b) Bill, based on what you know of the card distribution, what is the probability that this card is red?
2c) Carl, based on what you know of the card distribution, what is the probability that this card is red?
The different initial knowledge means different, correct answers.
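One way to see this concretely is a quick simulation (a minimal sketch of my own, not from the thread; it conditions on the color of the card one player holds):

```python
# Deal the four aces: one card face down in the center, one to a player.
# Record how often the center card is red, given the player's card color.
import random

red_given_red = trials_given_red = 0
red_given_black = trials_given_black = 0

for _ in range(100_000):
    deck = ['R', 'R', 'B', 'B']
    random.shuffle(deck)
    center, player = deck[0], deck[1]
    if player == 'R':
        trials_given_red += 1
        red_given_red += (center == 'R')
    else:
        trials_given_black += 1
        red_given_black += (center == 'R')

print(red_given_red / trials_given_red)      # ~1/3, Andy's answer
print(red_given_black / trials_given_black)  # ~2/3, Bill's and Carl's answer
```

Both conditional frequencies come out correct at once, which is the point: they answer different questions.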
25. #25 G.D. October 12, 2011
“As it happens, I am among those who are uncomfortable with applying probability language to one-off situations. It’s fine to speak informally about the likelihood (or odds, or probability) of a
one-off event, but if the idea is to assign actual numbers to events and then apply the formal theory of probability to them, then I no longer understand what you are doing.”
This actually makes little or no sense, and you don’t think it is true when you think about it. I mean, you wrote a whole book on Monty Hall, didn’t you? Certainly the point about knowing how to
assess probabilities in a Monty Hall case is to be able to assess your chances of winning by changing doors or not? So suppose I am at that stage in the program where I choose whether to switch
doors or not. I assume you would want to say that I should (rationally) switch. But how could you say that? After all, me standing in front of the doors faced with this decision is a one-off
event if anything is. Still, your recommendation only makes sense if you think it makes sense to apply probabilities to the situation. And of course it makes sense. If I stick with my door, I
have a 1/3 chance of winning; if I switch I have a 2/3 chance. These are precise probabilities, applied to a specific, one-off event (and the problem with the accident case is simply that it is
much more complex, and there is a far greater number of unknowns; it is not a difference in kind).
But here’s the thing that raises the question of subjective vs. objective probabilities (unfortunate terms, I know). In the Monty Hall scenario the door where the prize is hidden is already
determined. Either it is behind door A or door C (say). That means that the chance that it is behind door A is either 1 or 0 (not 1/3) and the chance that it is behind C is either 1 or 0 (not
2/3). That’s what it is supposed to mean when one says that the probabilities are not objective. There are no objective probabilities around in this case at all. And citing frequency is irrelevant,
for the frequency of winning doesn’t change the fact that the chance that the prize is behind door A is either 1 or 0 (though I don’t know which). It doesn’t become 1/3.
Rather, what the probabilities apply to are not the objective chances (there are none; the chance of the prize being behind door A is either 1 or 0), but my lack of information. The point is that
given what I know, I should assign a 1/3 credence to “the prize is behind A” and a 2/3 credence to “the prize is behind C”. The credence doesn’t reflect any objective chance, since the objective
chance is 0 (or 1 – I don’t know which).
You are right that frequencies provide a standard. It’s the frequencies that allow me to determine which credence to assign to which hypothesis. That doesn’t alter the fact that the probability
assignment in the one-off situation at hand reflects my information, not any objective fact about the situation at hand (since this is a one-off event where the probability for each option is
either 0 or 1).
What is unfortunate is the use of “objective” and “subjective”. The point is really the question of whether probabilities are “epistemic” (reflecting our information; the world may, for all we
know, be completely deterministic) or “metaphysical” (the world is genuinely indeterminate) – but in the Monty Hall case the situation isn’t genuinely indeterminate; it is already determined
which door hides the prize, so it means that the epistemic approach is the only one that makes sense in this case. But of course (well, presumably) it is an objective fact (in the more ordinary
sense of “objective”) which credence a rational agent ought to assign – if you’re rational you wouldn’t just choose any random number – and frequencies can provide you with a very much objective
reason why you should assign a 1/3 credence to “the prize is behind A” and a 2/3 credence to “the prize is behind C”.
The more general point is that “the probability of X” cannot just mean “the frequency of X”, since a lot of events to which it makes sense to assign probabilities don’t really have any (relevant)
frequencies behind them. Frequencies certainly help determining which credence you should assign (which is why frequencies are important also for Bayesians), but since each particular event
making up a frequency may be completely deterministic, we have an argument that probability assignments really reflect our information states rather than any sort of “real indeterminacy” in the
26. #26 Jr October 12, 2011
I agree that we should not assume that there is one correct interpretation of probability, anymore then there is one correct interpretation of linear algebra.
I however find your discussion of philosophy somewhat lacking.
I think it is very difficult to define probability using relative frequency without assuming we already know what probability is.
Taking the limit of infinitely many trials is impossible in the real world, so we can hardly use that as a definition of probability. (Or at least, if we do, probability becomes completely
unmeasurable.) But if we perform a large but finite number of trials we cannot say what the relative frequency will be. All we can say is that with high probability it will be close to the true
probability. But of course now we have a circular definition of probability.
27. #27 eric October 12, 2011
Jr: Taking the limit of infinitely many trials is impossible in the real world so we can hardly use that as a definition of probability
Why not? Mathematicians take the limit of infinitely many elements all the time, even though counting them would be impossible in the real world. I don’t think the criterion “can do it physically
with real objects” is any more a legitimate criterion for probabilities than it is for, say, calculus. When was the last time you cut out the area under a (e.g., paper) curve into infinitely many,
infinitely thin sections, and measured their individual areas?
Jason’s point is very well taken and parallel to the natural science approach: its perfectly fine to keep multiple models around to attack a problem, since each of them will have different
limitations / assumptions / boundary conditions. There is no need to philosophically dedicate oneself to any one of them. You can, if you like, but it isn’t required to be a good scientist (or
mathematician). If one of them eventually turns out to be far more useful or “better” than the others, so be it, and the people who put their eggs in that basket early can crow about it (sorry
about the mixed metaphor). But there’s no need to try and force the working community to make that decision prematurely.
28. #28 G.D. October 12, 2011
“Why not? Mathematicians take the limit of infinitely many elements all the time, even though counting them would be impossible in the real world. I don’t think the criterion “can do it physically
with real objects” is any more a legitimate criterion for probabilities than it is for, say, calculus.”
I think you misunderstand the point. When we say that X has a 50% chance of happening we don’t mean that in earlier cases X has occurred 50% of the time and not occurred the other 50% of the
time. There generally isn’t any good correspondence between *actual* frequencies and the probabilities we assign (i.e. when we say that a coin has a 50% chance of landing heads, that doesn’t mean
“coins have landed heads 50% of the times they have been flipped in the past”). Many of the events to which we assign probabilities haven’t occurred often enough for us to do it this way. So, one
solution would be to say that “this coin has a 50% chance of landing heads if flipped” should be understood in terms of “if this coin were flipped an infinite number of times it would land heads
half the time”.
The problem Jr points out, as I read it, is that this gets the cart before the horse. It cannot be what we mean by “50% probability”, since in order to make sense of “if this coin were flipped an
infinite number of times it would land heads half the time” we *already* have to assume that the coin has a 50% chance of landing heads. Frequencies are evidence for probabilities, and
probabilities explain frequencies, not vice versa, as frequentists would claim (that’s what it means to be a frequentist). The Bayesian account is one (the best) way of making sense of what
probabilities are when they cannot be frequencies.
Indeed, I totally agree that the criterion “can do it physically with real objects” is a legitimate criterion for probabilities. So much the worse for the frequentist approach.
(Dunno what happened to my last, long post explaining what is really at stake here, why the Bayesian, epistemic approach is obviously correct, and why “subjective” in “subjective probabilities”
is a very unfortunate and misleading choice of words, since there doesn’t have to be anything about them that is not objectively true.)
29. #29 eric October 12, 2011
G.D.: The problem Jr points out, as I read it, is that this gets the cart before the horse. It cannot be what we mean by “50% probability”, since in order to make sense of “if this coin were
flipped an infinite number of times it would land heads half the time” we *already* have to assume that the coin has a 50% chance of landing heads.
I don’t see that as circular reasoning so much as stating the same premise in two different ways. “P = nkT” and “If we were to raise the temperature of a gas in a sealed container, the pressure
would go up” are not circular; they are two different methods of saying the same thing.
30. #30 Jason Rosenhouse October 12, 2011
G. D. –
Your comment got sent to moderation. Sorry about that. I have now posted it. I don’t think we’re on the same page, though. For example, you said:
This actually makes little or no sense, and you don’t think it is true when you think about it. I mean, you wrote a whole book on Monty Hall, didn’t you? Certainly the point about knowing how
to assess probabilities in a Monty Hall case is to be able to assess your chances of winning by changing doors or not? So suppose I am at that stage in the program where I choose whether to
switch doors or not. I assume you would want to say that I should (rationally) switch. But how could you say that? After all, me standing in front of the doors faced with this decision is a
one-off event if anything is. (Emphasis Added)
No, that’s not at all a one-off event, at least not in the relevant sense. It makes sense to say that you will win with probability 2/3 by switching precisely because you can imagine playing the
game multiple times, and the information you have justifies some conclusions about what will happen in the long run.
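In code, that repeated-play reading looks something like this (a minimal sketch of my own, assuming the fully specified game where the host always opens a goat door the player didn’t pick):

```python
# Simulate many independent rounds of Monty Hall under each strategy.
import random

def play(switch: bool) -> bool:
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that is neither the player's pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

trials = 100_000
print(sum(play(True) for _ in range(trials)) / trials)   # ~2/3
print(sum(play(False) for _ in range(trials)) / trials)  # ~1/3
```

The long-run frequencies are exactly what the 2/3 and 1/3 figures are claims about.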
I simply don’t understand what it means “to assign a 1/3 credence” to something. That sounds like gibberish to me. If your credence is simply based on what you know about long-run frequencies,
then why talk about credences at all? Why not just talk about frequencies and be done with it?
Jr –
I think it is very difficult to define probability using relative frequency without assuming we already know what probability is.
But I didn’t define probability using relative frequencies. I didn’t define probability at all. As far as I know, the only definition of probability that makes any sense is the one given by the
abstract, axiomatic treatment you would find in a high-level mathematics textbook.
The issue here is interpreting what probability means in different real-world contexts. And it just seems pretty obvious that different interpretations are helpful in different contexts. And
since it’s an objective fact that certain physical systems produce stable long-run frequencies, I don’t see how one can argue, as Pollock does, that chance is always and everywhere subjective.
For what it’s worth, here’s the conclusion of the article on this subject in the Stanford Encyclopedia of Philosophy (the article was written by Alan Hajek):
It should be clear from the foregoing that there is still much work to be done regarding the interpretation of probability. Each interpretation that we have canvassed seems to capture some
crucial insight into it, yet falls short of doing complete justice to it. Perhaps the full story about probability is something of a patchwork, with partially overlapping pieces. In that
sense, the above interpretations might be regarded as complementary, although to be sure each may need some further refinement. My bet, for what it is worth, is that we will retain at least
three distinct notions of probability: one quasi-logical, one objective, and one subjective.
That’s precisely my view.
31. #31 Thanny October 12, 2011
Since the topic is probability, something that’s always bothered me is the “gambler’s fallacy”. It seems to me that the fallacy is itself a fallacy.
The probability that a coin will end up heads 50 times in a row is much smaller than the probability that it will end up heads just once. That’s all the “gambler’s fallacy” ever claims – if you
are a gambler, and you’ve witnessed the equivalent of 50 heads turning up, is it really a fallacy to expect a higher probability of tails on the next flip? Isn’t it the same as saying that in all
the times the gambler has seen 50 heads turn up, more often than not tails turned up next? Isn’t that actually true from any probability perspective?
What’s wrong with my interpretation?
32. #32 Marshall October 12, 2011
Assuming independent trials and a fair flip, it isn’t true that when 50 head-flips are observed, usually a tail-flip will come next. There are only 2 relevant cases: 50 heads followed by a tails,
and 51 heads. The expectation of tails is 1 out of 2 = .5. The rarity of a run of 50 flips is irrelevant. Try a Monte Carlo if you like: a run of 50 heads to qualify is going to be tedious, but try
a run of 2 or 3.
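Here is one way that Monte Carlo might look (a rough sketch of my own, using a qualifying run of 3 heads so that qualifying runs are common):

```python
# Among all runs of 3 consecutive heads, how often is the next flip tails?
import random

run_length = 3
next_is_tails = total_runs = 0
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

for i in range(len(flips) - run_length):
    if all(flips[i:i + run_length]):       # saw run_length heads in a row
        total_runs += 1
        if not flips[i + run_length]:      # was the very next flip tails?
            next_is_tails += 1

print(next_is_tails / total_runs)  # hovers around 0.5, not above it
```

The conditional frequency stays at about one half no matter how long the qualifying run is made.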
33. #33 Collin October 13, 2011
@21. Interesting that you should ask me to read it again. Andy, Bill, and Carl are not real people. They are just characters in your narrative. By suggesting that the narrative should be re-read,
you’re confirming my assumption that it is a repeated experiment. So if Andy, Bill, and Carl are self-aware, they know this and can plan for the next time someone reads it.
But seriously… Although a person reading a story obviously can’t create characters as I suggested in that doggerel, it’s not that far-fetched that a particle detector might be creating some of
the states it’s assumed to be observing. Arguments like these, combined with a willingness to accept tachyonic ghosts, might be the start of showing that probability isn’t really that different
at the quantum level.
34. #34 Tom October 13, 2011
DC – I don’t think you’ll find any philosophers who don’t think numbers exist. The argument is whether they are, like words, something we invented, or a special kind of thing, independent of us,
that we discovered.
35. #35 Wow October 13, 2011
“DC – I don’t think you’ll find any philosophers who don’t think numbers exist.”
I do believe you’re wrong.
3 apples exist. And that is different from the existence of 2 apples. But 3 and 2 don’t exist as real things, just as counts.
“independent of us, that we discovered”
Chimpanzees can count to three, and IIRC humans have to count for five or more: we don’t know the difference between five things and six things without counting “one, two, … oh, that’d be five”.
So numbers are things nonhumans have too.
They just don’t exist as things in themselves.
36. #36 phayes October 13, 2011
“If your credence is simply based on what you know about long-run frequencies, then why talk about credences at all? Why not just talk about frequencies and be done with it?”
Because your state of knowledge never includes ‘knowledge of long-run frequencies’. How could it?
“And since it’s an objective fact that certain physical systems produce stable long-run frequencies”
It isn’t. This is from chapter 10 of Jaynes:
“This point needs to be stressed: those who assert the existence of physical probabilities do so in the belief that this establishes for their position an `objectivity’ that those who speak only
of a `state of knowledge’ lack. Yet to assert as fact something which cannot be either proved or disproved by observation of facts, is the opposite of objectivity; it is to assert something that
one could not possibly know to be true.”
That Stanford Encyclopedia of Philosophy article’s omission of the ‘Jaynesian interpretation’ is unfortunate.
37. #37 Jeff October 13, 2011
Via Jordan Ellenberg’s blog (http://quomodocumque.wordpress.com/), I read this paper by Adam Elga entitled “Subjective Probabilities Should be Sharp”, which can be found at
http://www.princeton.edu/~adame/papers/sharp/elga-subjective-probabilities-should-be-sharp.pdf. It’s a very short and easy read, and I found his arguments compelling, even though I am inclined to
agree with your hesitancy to assign probabilities to one-off events, Jason.
38. #38 Lenoxus October 13, 2011
When I was twelve or so, I asked my mother what it really meant when the newspaper said there was a 40% chance of rain. I could be mistaken, but I think she answered that out of all the days the
meteorologists had seen with conditions similar to this one, about 40% had rain. It might not have been my mother, it might even have been me thinking to myself, and whoever it was might have
said something more like “If today were repeated a thousand times, about 400 of those hypothetical days would have rain”. (Alternative-universes frequentism?)
Either way, I more or less became a frequentist from then until my encounters with the Internet and various discussions of probability therein, particularly the ones on Less Wrong. These days I
lean slightly towards the Bayesian “degree of knowledge” model, but I see all models as deficient to some degree. In the case of Bayesianism, one could arrive at some peculiar conclusions as a
result of the apparent subjectiveness.
Most people, even very smart people, who encounter even the fully-described version of Monty Hall (host-always-shows-goat-etc) will respond that the probability of their first door being right is
1/2. Could you not argue that their “degree of knowledge” or “credence” really is 1/2, because in a sense it takes some extra knowledge — by explaining the problem in terms of “Monty is giving
you two doors” or “Imagine if there were a million doors” — before a person’s answer (or “credence”) changes to 1/3? In other words, the Bayesian model needs some odd fleshing-out along the lines
of “the degree of knowledge of a maximally intelligent being who nonetheless doesn’t know where exactly the car is”. In any case, frequentism is the one model that necessarily convinces anyone,
with the exception of those who would argue your simulations are inaccurate (but they would have to give good reasons why).
Suddenly, I’m curious whether 2/3 of a car is more economically valuable than a whole goat…
Thanny @ 31:
Isn’t it the same as saying that in all the times the gambler has seen 50 heads turn up, more often than not tails turned up next? Isn’t that actually true from any probability perspective?
It’s false in a frequentist sense (and every other sense as well). In fact, out of all the times in history that 51 coins have been flipped, an equal number have the sequence “50 heads followed
by 1 tails” as have “51 heads”.
One thing to help grasp this is to recognize that yes, 50 heads in a row is rare, but it’s no rarer than any other sequence of 50 coin flips — just easier to describe. A monkey who types 6 random
letters has a very low chance of typing a word and a very high chance of typing gibberish. But that doesn’t mean “JPBFUU” is any more likely than “MONKEY”, even though the first is gibberish and
the second isn’t; they are both equally unlikely. (If the monkey is truly random, unlike real-life monkeys which tend to prefer hitting the same key repeatedly, like these guys.)
39. #39 Blaine October 13, 2011
MOST modern philosophers think that numbers DO NOT exist just as words do not exist. Numbers and words exist in the trivial sense that they are products of the human mind. Before humans evolved
sentience, where were numbers and words? Math itself is just a language. Unfortunately, when mathematicians/scientists like Roger Penrose say philosophical things, they sound silly as when he
says that mathematicians are discovering and exploring a really existing transcendent platonic world and not inventing mathematics. I find this rather shocking…on the order of saying he believed
that a dead jewish terrorist died for his sins and was resurrected..or that the two attributes of god that manifest themselves in the world are Thought and Extension ( Spinoza ).
You can’t rightly call yourself an atheist if you think numbers and words exist…you’re just another theist, only your god is different.
40. #40 G.D. October 13, 2011
“I simply don’t understand what it means “to assign a 1/3 credence” to something. That sounds like gibberish to me. If your credence is simply based on what you know about long-run frequencies,
then why talk about credences at all? Why not just talk about frequencies and be done with it?”
I agree (with the last point). My point was what the debate of the original post is about. “Objective probabilities” generally don’t make any sense (in the sense of “objective” that is the topic
of the original post). The Monty Hall case illustrates that, since the objective chance that you chose the right door is *never* 1/3 or 2/3. It is always 1 or 0. Either the door hides the prize
or it doesn’t, and this is already determined even before you went on the show. The only sense one can make of “probability of 1/3″ in this case is “given what you know, your state of
information, you should place a confidence level of 1/3 on the belief that the prize is behind this door”. The point is: probabilities are not “things out there in the world” (since the world
may, for all we know, be fully deterministic, and even if it isn’t at the quantum level, at least it is in the case of Monty Hall); rather, probabilities reflect how much confidence a rational
agent should have in a given outcome. And that’s what probabilities *are*.
In the more ordinary sense of “objective” such probability assignments are no less objective – it is not up to you what confidence you rationally should have in a hypothesis. Frequencies, for
instance, provide good evidence for what the rational level should be.
DC#6: Your comment is based on a common confusion. No one doubts that words exist, and no one doubts that *numerals* exist. Numbers are not the same as numerals. After all, “2” and the Roman
numeral “II” denote the same number, but wouldn’t if numbers just were numerals.
There’s a long philosophical history here, and despite what some mathematicians may think, the philosophical discussion has had profound influence on the discipline. After all Frege invented the
quantifiers to answer this question, and Russell’s paradox and his type theory were developed to provide an acceptable epistemic and metaphysical foundation for mathematics. And no one can deny
the profound effects of those efforts on (at least branches of) mathematics. Indeed, the claim that mathematicians don’t have use for philosophy would have sounded more compelling if there had
been a sharp distinction between mathematics and philosophy of mathematics, but in areas such as mathematical logic there isn’t, and many of the crucial discoveries have been made at least in
part as responses to “philosophical” concerns (the Löwenheim-Skolem theorem, Gödel’s completeness and incompleteness theorems, etc.)
41. #41 Blaine October 13, 2011
In a trillion, trillion, trillion years the expansion of the universe will have reached such a point that the universe will blink out in a catastrophic hail of subatomic particles and atoms
themselves will no longer exist. There will be nothing left to embody anything. Of course, our solar system and us with it, will be long gone by then. Just a burnt cinder floating aimlessly in
space. This is a scientific fact. In logical time, it has already happened.
If numbers and words exist, where will they be then? In god’s mind…come on folks, get with the program.
42. #42 Knightly October 13, 2011
So what are the chances they got this right?
Vegas is giving 300:1.
43. #43 Blaine October 13, 2011
They may be off by a couple hundred trillion years…
44. #44 Thony Christie October 13, 2011
For those that disparage or make snide comments about the philosophy of maths I would point out that you’re displaying an incredible ignorance of the history of maths. A large number of major
advances in maths have come about because mathematicians addressed and tried to solve philosophical questions. I’m not going to use a comments column to list all of the cases but every
mathematician should be aware that, for example, non-Euclidean geometry came about because people asked philosophical questions about Euclid’s fifth postulate.
45. #45 Lenoxus October 13, 2011
When Jason wrote that “it is not uncommon for philosophers to lament the philosophical naivete of scientists (for example, in this recent book review)”, I was worried the linked review would
involve some sort of Courtier’s Reply to a science book whose author callously unweaved a rainbow or two. In actuality, the article is a decent overview of contemporary neurology, and its
criticisms of the book in question were mostly good from a scientific (not just philosophical) perspective.
For instance, writer V.S. Ramachandran overreaches when he attempts to explain specific features of art in terms of our knowledge of neurology. I have no doubts that every facet of humanity is
ultimately explicable in terms of neurology, etc, but right now it’s too early in the game to draw more than trivial conclusions about which “rule” in art creation/appreciation derives from which
property of the mind (we don’t even know what most of the rules are!).
In another paraphrased part of the book, Ramachandran suggests that the “come-hither gesture” may derive from the shape the tongue makes in producing the word “hither”. I’m not unfamiliar with
phonetics, and it seems to me that we would have a few more “phonetic gestures” if that were the case. Furthermore, a much simpler explanation is that the gesture indicates the direction you want
the person to travel. Partially confirming this is that frequent variants involve the hand moving pointed sideways instead of up, something the tongue doesn’t really ever do in spoken language.
(It doesn’t go up much in the word “hither” either.) Also, many other gestures have a straightforward “visual symbolism” origin, such as the police officer’s “stop” gesture strongly resembling
the actual physical motion necessary to stop someone coming at you (albeit often simplified, if the cop is holding something, to one arm instead of two).
Apart from such speculations, the book actually looks pretty good, although it treads territory many SB readers will be pretty familiar with, such as blindsight, split-brain patients, etc. I
can’t say with confidence that I learned anything about the human brain from the review, but that’s just me.
46. #46 TylerD October 14, 2011
The specific idea of “randomness” becomes less problematic when you look at it from a Kolmogorov-Chaitin perspective: something (data) is random if it permits no description in a formal model
that is shorter than the literal representation. Statistics then amounts to finding the non-random or compressible part of data and separating it from the incompressible part, or the noise.
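Kolmogorov complexity itself is uncomputable, but a crude illustration of the idea is possible with an off-the-shelf compressor (a sketch of my own, using zlib as a stand-in for “shortest description”):

```python
# Structured data compresses well; "random" bytes barely compress at all.
import random
import zlib

structured = b'HT' * 5_000                                    # regular pattern
noisy = bytes(random.getrandbits(8) for _ in range(10_000))   # incompressible

for label, data in (('structured', structured), ('noisy', noisy)):
    ratio = len(zlib.compress(data)) / len(data)
    print(label, round(ratio, 3))   # tiny ratio vs. ratio near 1.0
```

The compressible part plays the role of the statistical model; whatever resists compression is, on this view, the noise.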
47. #47 Dan L. October 26, 2011
I think Bayesians just get excited because Bayes’ Theorem is so freakin’ cool (I mean, it includes Popperian falsificationism as a special case! c’mon!). You’re absolutely right that Bayes’
theorem just becomes a frequentist view when applied to a series of observations (assuming the experimenter doesn’t start with any special information).
And the claim about Bayes’ theorem handling “one-off” events is just weird. Sure, it CAN, but it’s not very good at it because of the limited information. Bayes’ theorem isn’t very useful until
you’ve actually done the experiment a few times; the initial probability can’t be anything other than a guess.
48. #48 Dan L. October 26, 2011
Thony Christie@44:
Great point. I would simply add that almost all of Cantor’s career and contribution to mathematics was inspired by philosophical problems about infinity.
49. #49 Daryl McCullough October 29, 2011
I think this is a fascinating topic, and my feeling is that, somehow, Bayesians are more right about probability than frequentists, but that they’re not right, either. I don’t have a definitive
argument against Bayesian probability, but it appears to me that quantum mechanics gives a notion of probability that is NOT in any way subjective. Subjective probability gives the impression
that if you only knew more about the details of the situation, you would come up with a different probability, but apparently that is not the case with quantum mechanics. A free neutron has a 50%
chance of decaying into a proton, an electron and an antineutrino in any given half-life of roughly ten minutes. There are no additional details about the state of the neutron that would allow me
to adjust that probability. There is something NON-subjective about quantum probabilities, it seems to me.
But I don’t think that the frequentist approach makes a whole lot of sense, either. For one thing, it doesn’t actually give an answer to the question: What is the probability of a coin toss
resulting in “heads”? If you define it as a limiting frequency (the limit as N goes to infinity of the number of heads in N tosses divided by N) you have to face the issue of why that number
should have a limit in the first place. Not every infinite sequence of numbers has a limiting frequency. Why should coin flips?
You could say that, empirically, when you flip a coin 100 or 1000 times, it seems to settle down to a limiting frequency of heads, but why should you expect 1000 flips to give a good estimate of
what would happen in infinitely many flips?
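The empirical settling-down is easy to exhibit, for whatever it proves (a small sketch of my own; nothing in it demonstrates that the infinite limit exists):

```python
# Running relative frequency of heads in simulated fair coin flips.
import random

heads = 0
for n in range(1, 100_001):
    heads += random.random() < 0.5
    if n in (100, 1_000, 10_000, 100_000):
        print(n, heads / n)   # drifts toward 0.5 as n grows
```

The drift toward 0.5 is observable at every finite n; the existence of the limit is an extra assumption.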
50. #50 Sean November 2, 2011
Fantastic post, I very much enjoyed it; keep up the good work.
Angle between Coordinate Vector and Normal Vector of Facet in a Convex Polytope, Asking for a Counterexample
Let $\mathcal{C}$ be a convex polytope in $\mathbb{R}^{D}$ with $K$ facets $F_{1},\ldots,F_{K}$. I denote the normal vector of the $k^\mathrm{th}$ facet as $\mathbf{w}_k=(w_{k1},\ldots,w_{kD})$.
In the sequel, I will use $k$ as the index of $K$ facets and $d$ as the index of $D$ dimensions. Namely, $d\in \{1,\ldots,D\}$ and $k\in \{1,\ldots,K\}$.
Let $\mathbf{p}=(p_{1},\ldots,p_{D})$ be a point in $\mathbb{R}^{D}$. Define
$L_{d}=\{\mathbf{p}+\theta\mathbf{u}_{d}|\theta\in \mathbb{R}\},$
where $\mathbf{u}_{d}$ is the vector of the form $(0,\ldots,0,1,0,\ldots,0)$ with a $1$ only at the $d^{\mathrm{th}}$ dimension.
For $k=1,\ldots, K$, define
$G_{k}=\{d|L_{d}\cap F_{k}\neq \emptyset\}.$
Define $f:\mathbb{R}^{D}\times\mathbb{R}^{D}\rightarrow [0,1]$ as $f(\mathbf{u},\mathbf{w})=\frac{|\langle \mathbf{u},\mathbf{w}\rangle|}{\|\mathbf{u}\|\|\mathbf{w}\|}$, the absolute cosine of the angle between its arguments.
My conjecture
For any $\mathbf{p}\in \mathrm{int}\,\mathcal{C}$, there exist $d$ and $k$ such that $d\in G_{k}$ and $f(\mathbf{u}_{d},\mathbf{w}_{k})=\max \{f(\mathbf{u}_{1},\mathbf{w}_{k}),\ldots,f(\mathbf{u}_{D},\mathbf{w}_{k})\}$.
Can anyone provide a counterexample?
An illustrative example in $\mathbb{R}^2$
In particular, if we restrict ourself in $\mathbb{R}^2$, the above conjecture can be restated as follows:
Let $p$ be a point in the interior of a convex polygon $\mathcal{C}$. Let $L_x$ and $L_y$ be two lines through $p$, parallel to the $x$-axis and $y$-axis respectively. Consider all the acute
angles at the intersections of $L_x$, $L_y$ and $\partial \mathcal{C}$; at least one of these angles is $\geq 45°$.
The figure below gives an example.
I haven't found any counterexample in $\mathbb{R}^2$, which is why I am considering generalising this conjecture to higher-dimensional space.
Finally, any problem reformulation is also welcome.
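(A rough numerical probe of the 2D restatement, a sketch of my own rather than part of the question: sample random convex polygons and check the claimed 45° bound.)

```python
# For random convex polygons and an interior point p, check that among the
# acute angles where the horizontal/vertical lines through p cross the
# boundary, at least one is >= 45 degrees.
import math
import random

def cross(a, b):
    return a[0] * b[1] - a[1] * b[0]

def trial():
    # Random convex polygon: points at sorted angles on a random ellipse.
    a, b = random.uniform(1, 3), random.uniform(1, 3)
    ts = sorted(random.uniform(0, 2 * math.pi) for _ in range(10))
    poly = [(a * math.cos(t), b * math.sin(t)) for t in ts]
    # The vertex centroid is strictly interior to the polygon.
    p = (sum(v[0] for v in poly) / len(poly), sum(v[1] for v in poly) / len(poly))
    best = 0.0
    for u in ((1.0, 0.0), (0.0, 1.0)):          # directions of L_x and L_y
        for A, B in zip(poly, poly[1:] + poly[:1]):
            e = (B[0] - A[0], B[1] - A[1])      # edge vector
            d = cross(u, e)
            if abs(d) < 1e-12:
                continue                        # line parallel to this edge
            ap = (A[0] - p[0], A[1] - p[1])
            t = cross(ap, u) / d                # edge parameter of the crossing
            if 0.0 <= t <= 1.0:                 # the line meets this edge
                c = abs(u[0] * e[0] + u[1] * e[1]) / math.hypot(*e)
                best = max(best, math.degrees(math.acos(min(1.0, c))))
    return best >= 45.0 - 1e-9

print(all(trial() for _ in range(10_000)))      # True if no counterexample found
```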
convex-polytopes geometry convex-geometry mg.metric-geometry
1 Your conjecture (at this writing) asserts what appears to me to be an obvious equality, and says nothing about the maximum value with respect to d. You might edit your conjecture to be more in
line with your example. Gerhard "Ask Me About System Design" Paseman, 2011.09.14 – Gerhard Paseman Sep 14 '11 at 15:46
In case Gerhard's point isn't clear, your conjecture has the form, $c = \max \lbrace a, b, c, d, e, \ldots \rbrace$: it only states that the max of a finite set of numbers is one of the numbers. –
Joseph O'Rourke Sep 14 '11 at 16:04
1 Answer
$\def\u{{\bf u}}\def\p{{\bf p}}\def\q{{\bf q}}$ Consider all the points of intersection of the lines $L_d$ with the hyperplanes $H_k$ defining the facets $F_k$. Let $\q$ be the one closest to
$\p$; suppose $\q=L_d\cap H_k$. Then $(d,k)$ is a desired pair.
Firstly, $\q$ should belong to $F_k$, otherwise the segment $[\p,\q]$ would intersect the boundary of the polytope at a point on another facet; thus $d\in G_k$. Next, let $\q_1,\dots,\q_D$ be the
intersection points of the hyperplane $H_k$ with the lines $L_1,\dots,L_D$ (some of these points may be ideal). Then $\|\p-\q\|=\min_i\|\p-\q_i\|$, which is equivalent to your relation.
EDIT: Surely, the convexity condition IS necessary.
Thanks Ilya Bogdanov for this nice and clear solution. – han Sep 15 '11 at 9:55
Finding diameter of pins from allowable shear stress
The reason the sum of forces "failed" is that, for static equilibrium of a three-force member, if the forces are not parallel then they (the 3 forces) must be concurrent; but as you can see from
the geometry in the FBD you used, the forces were neither concurrent nor parallel, so one of them had to be zero in order to satisfy the sum-of-moments condition of equilibrium.
Btw, I'm happy you feel this way. What I was saying earlier is that if we look closer at the "failed" FBD you will see there cannot be static equilibrium, for the reasons already stated. Now I want
you to remember again that in frames there can be more than 2 forces acting on a single member. What this means is that generally the forces won't be directed along the members in which they act;
their direction is unknown. This is not the case for trusses (I've the feeling you are familiar with them), so in the future, for solving problems with frames, consider the method described in your
book, or isolate each simple member and apply Newton's 3rd Law.
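To spell out the concurrency claim (a standard statics argument, stated here in general terms rather than for this particular frame): suppose forces F1, F2, F3 act on a member with both the force
sum and the moment sum equal to zero, and suppose F1 and F2 are not parallel, so their lines of action meet at some point O. Taking moments about O, the moments of F1 and F2 both vanish, so
equilibrium forces the moment of F3 about O to vanish too; that can happen only if F3 is zero or its line of action also passes through O. In the "failed" FBD the geometry rules out F3 passing
through O, which is exactly why one of the forces had to be zero.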
Thanks Cyclovenom. I think things are making a little more sense, though I am still not sure why the sum of moments does work; I do understand now why the sum of forces does work, which is a step
in the right direction!
If you're teaching yourself statics, I might suggest the book Simplified Engineering for Architects and Builders, Harry Parker and James Ambrose, John Wiley & Sons. You can probably pick up a used
copy off Amazon for less than $20 (US).
I actually have a text. I was doing a directed study in which I met with my instructor only about five times over a 5-6 week period. So most of it was on me (and PF!) since I still had to take exams
and what not. I still have one exam left and I'll tell you I can't wait until it's over! My regular semester started back up so I am at 5 classes right now, two of which are based on Statics. So I
could really use that extra time!
Hey, I posted another engineering thread in the physics section, since it's the physics part that's really screwing with me. I thought it might get more traffic there.
If one of you guys happens to get a moment and wants to give me a nudge in the right direction, that would be rad.
Here it is
Thanks for all of your guys!
An Elementary Course in Synthetic Projective Geometry Summary
PR + RQ = PQ,
which holds for all positions of the three points if account be taken of the sign of the segments, the last proportion may be written
(CB − BA) : AB = −(CA − DA) : AD,
(AB − AC) : AB = (AC − AD) : AD;
so that AB, AC, and AD are three quantities in harmonic progression, since the difference between the first and second is to the first as the difference between the second and third is to the third.
Also, from this last proportion comes the familiar relation
2/AC = 1/AB + 1/AD,
which is convenient for the computation of the distance AD when AB and AC are given numerically.
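(A concrete check, as a worked instance of that computation: solving the relation for AD gives AD = AB·AC / (2AB − AC). With AB = 6 and AC = 4 this yields AD = 24/8 = 3; and indeed 1/6, 1/4, 1/3
form an arithmetic progression, so AB, AC, AD are in harmonic progression.)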
46. Anharmonic ratio. The corresponding relations between the trigonometric functions of the angles determined by four harmonic lines are not difficult to obtain, but as we shall not need them in
building up the theory of projective geometry, we will not discuss them here. Students who have a slight acquaintance with trigonometry may read in a later chapter (§ 161) a development of the theory
of a more general relation, called the anharmonic ratio, or cross ratio, which connects any four points on a line.
1. Draw through a given point a line which shall pass through the inaccessible point of intersection of two given lines. The following construction may be made to depend upon Desargues’s theorem:
Through the given point P draw any two rays cutting the two lines in the points AB’ and A’B, A, B, lying on one of the given lines and A’, B’, on the other. Join AA’ and BB’, and find their point of
intersection S. Through S draw any other ray, cutting the given lines in CC’. Join BC’ and B’C, and obtain their point of intersection Q. PQ is the desired line. Justify this construction.
2. To draw through a given point P a line which shall meet two given lines in points A and B, equally distant from P. Justify the following construction: Join P to the point S of intersection of the
two given lines. Construct the fourth harmonic of PS with respect to the two given lines. Draw through P a line parallel to this line. This is the required line.
3. Given a parallelogram in the same plane with a given segment AC, to construct linearly the middle point of AC.
4. Given four harmonic lines, of which one pair are at right angles to each other, show that the other pair make equal angles with them. This is a theorem of which frequent use will be made.
5. Given the middle point of a line segment, to draw a line parallel to the segment and passing through a given point.
West Covina Math Tutor
Find a West Covina Math Tutor
...I have a pretty good background in Spelling. I was exposed to an educational institution where spelling was a "MUST" in the learning environment. I may not be "Spelling Bee" material, but I'm
pretty competitive.
8 Subjects: including algebra 1, reading, English, grammar
...I have more than 7 years experience using IBM's SPSS statistical program (from version 16 to current version 21) including extensions in SPSS AMOS and SPSS Modeler. Additionally, I also have an
extensive experience in spatial analytics using Geographical Information Systems (ArcGIS) version 10 &...
3 Subjects: including statistics, SPSS, ecology
...I'm more than capable of helping students with English (grammar, writing, vocab, etc.) and because of my strong science background, I'm comfortable tutoring various math subjects, as well as
physics, up to the college level. I'm also familiar with chemistry and can help up to high school level. My hours are very flexible, and please let me know if you have any questions.
15 Subjects: including algebra 1, algebra 2, calculus, vocabulary
...Going over the basics again, like fractions, decimals and percents can play a large part in success for algebra and beyond. If a student is seeing this material for the first time I try and
make sure that they feel comfortable and natural with topics like fractions. I have for the past few years fixed computers and taught the basics of using them.
32 Subjects: including algebra 1, algebra 2, calculus, grammar
I am a recent college graduate from McGill University with a BA in art history (minor in drama), who remained on the Dean's list throughout my university career. I would be eager to tutor in
multiple areas of art history. I excelled in diverse subjects in high school (including AP art history); I ...
24 Subjects: including calculus, prealgebra, algebra 2, precalculus
Geometry with Polygons..
August 15th 2009, 07:38 AM #1
Aug 2009
Geometry with Polygons..
I would be very thankful if somebody could work and explain these problems. They are part of my Math SAT prep and I don't understand them.
You will probably remember learning about linear equations, and being told that if you have n unknown variables, you need (at least) n independent equations for them. So you first want to
identify all the variables. In example 1, that's x, y, z, r, and s. So n = 5, and you need to find 5 equations that relate these variables. Then you can solve this system either by substitution
or linear combinations.
Geometry is chock full of relationships, so it should not be too hard to find all these equations. (One thing I was confused by - what are the symbols you have written on the problems - the
triangle on the lines in example 1, and the arrows in example 2?)
You (should) know supplementary angles sum to 180 deg, yes? So there's a bunch of equations for you right off the bat:
s + r = 180
(2x) + (2x + y) = 180
(2x + y) + z = 180
You also know that
(2x) + (s) = 180
(2x + y) + r = 180
So there's 5 equations right off the bat, without even breaking a sweat.
If you know that the two horizontal lines are parallel, then you can make other equations from the relationships between the groups of angles formed by the vertical line and the two parallel
lines, and you can also use vertical angle identities as well.
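If you want to check your algebra, here is one way to throw those five equations at a computer (a sketch of my own, assuming SymPy is available; it is not part of the original advice):

```python
# Set up the five supplementary-angle equations and try to solve them.
from sympy import symbols, solve

x, y, z, r, s = symbols('x y z r s')

equations = [
    s + r - 180,               # s and r are supplementary
    (2*x) + (2*x + y) - 180,   # 2x and 2x+y are supplementary
    (2*x + y) + z - 180,       # 2x+y and z are supplementary
    (2*x) + s - 180,           # 2x and s are supplementary
    (2*x + y) + r - 180,       # 2x+y and r are supplementary
]

print(solve(equations, [x, y, z, r, s], dict=True))
# The solver returns a one-parameter family, not a single point: these
# five equations are not all independent, which is why the extra facts
# (parallel lines, vertical angles) are needed to pin the system down.
```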
For the second problem, is any information given about the type of shape this is? Are any of the line pairs parallel? The only thing I can tell for sure without making any assumptions is that the
4 angles should sum to 360 deg.
If either of the line pairs are parallel, you'll be able to use properties of the diagonals to relate opposite angles.
August 15th 2009, 11:58 AM #2
Junior Member
Aug 2009
Volume 25
Volume 25, Issue 6, June 2013
The velocity of a two-dimensional aqueous foam has been measured as it flows through two parallel channels, at a constant overall volumetric flow rate. The flux distribution between the two
channels is studied as a function of the ratio of their widths. A peculiar dependence of the velocity ratio on the width ratio is observed when the foam structure in the narrower channel is
either single staircase or bamboo. In particular, discontinuities in the velocity ratios are observed at the transitions between double and single staircase and between single staircase and
bamboo. A theoretical model accounting for the viscous dissipation at the solid wall and the capillary pressure across a film pinned at the channel outlet predicts the observed non-monotonic
evolution of the velocity ratio as a function of the width ratio. It also predicts quantitatively the intermittent temporal evolution of the velocity in the narrower channel when it is so narrow
that film pinning at its outlet repeatedly brings the flow to a near stop.
• LETTERS
Suspended colloidal particles interacting chemically with a solute can self-propel by autophoretic motion when they are asymmetrically patterned (Janus colloids). Here we demonstrate
theoretically that such anisotropy is not necessary for locomotion and that the nonlinear interplay between surface osmotic flows and solute advection can produce spontaneous and
self-sustained motion of isotropic particles. Solving the classical autophoretic framework for isotropic particles, we show that, for given material properties, there exists a critical
particle size (or Péclet number) above which spontaneous symmetry-breaking and autophoretic motion occur. A hierarchy of instabilities is further identified for quantized critical Péclet numbers.
• ARTICLES
□ Biofluid Mechanics
We develop a coarse-grained theory to predict the concentration distribution of a suspension of vesicles or red blood cells in a wall-bound Couette flow. This model balances the
wall-induced hydrodynamic lift on deformable particles with the flux due to binary collisions, which we represent via a second-order kinetic master equation. Our theory predicts a
depletion of particles near the channel wall (i.e., the Fahraeus-Lindqvist effect), followed by a near-wall formation of particle layers. We quantify the effect of channel height,
viscosity ratio, and shear rate on the cell-free layer thickness. The results agree with in vitro experiments as well as boundary integral simulations of suspension flows. Lastly, we examine a
new type of collective particle motion for red blood cells induced by hydrodynamic interactions near the wall. These “swapping trajectories,” coined by Zurita-Gotor et al. [J. Fluid Mech. 592,
447–469 (2007); doi:10.1017/S0022112007008701], could explain the origin of particle layering near the wall. The theory we
describe represents a significant improvement in terms of time savings and predictive power over current large-scale numerical simulations of suspension flows.
We apply the boundary-element method to Stokes flows with helical symmetry, such as the flow driven by an immersed rotating helical flagellum. We show that the two-dimensional boundary
integral method can be reduced to one dimension using the helical symmetry. The computational cost is thus much reduced while spatial resolution is maintained. We verify the robustness of this method by comparing the simulation results with experimental measurements of the motility of model helical flagella of various ratios of pitch to radius, along with predictions
from resistive-force theory and slender-body theory. We also show that the modified boundary integral method provides reliable convergence if the singularities in the kernel of the
integral are treated appropriately.
□ Micro- and Nanofluid Mechanics
We present a numerical study of the effect that fluid and particle inertia have on the motion of suspended spherical particles through a geometric constriction to understand analogous
microfluidic settings, such as pinched flow fractionation devices. The particles are driven by a constant force in a quiescent fluid, and the constriction (the pinching gap) corresponds
to the space between a plane wall and a second, fixed sphere of the same size (the obstacle). The results show that, due to inertia and/or the presence of a geometric constriction, the
particles attain smaller separations to the obstacle. We then relate the minimum surface-to-surface separation to the effect that short-range, repulsive non-hydrodynamic interactions
(such as solid-solid contact due to surface roughness, electrostatic double layer repulsion, etc.) would have on the particle trajectories. In particular, using a simple hard-core
repulsive potential model for such interactions, we infer that the particles would experience larger lateral displacements moving through the pinching gap as inertia increases and/or the
aperture of the constriction decreases. Thus, separation of particles based on differences in density is in principle possible, owing to the differences in inertia associated with them.
We also discuss the case of significant inertia in which the presence of a small constriction may hinder separation by reducing inertia effects.
A Fokker–Planck based kinetic model is presented here, which also accounts for internal energy modes characteristic for diatomic gas molecules. The model is based on a Fokker–Planck
approximation of the Boltzmann equation for monatomic molecules, whereas phenomenological principles were employed for the derivation. It is shown that the model honors the equipartition
theorem in equilibrium and fulfills the Landau–Teller relaxation equations for internal degrees of freedom. The objective behind this approximate kinetic model is accuracy at reasonably
low computational cost. This can be achieved due to the fact that the resulting stochastic differential equations are continuous in time; therefore, no collisions between the simulated
particles have to be calculated. Besides, because of the devised energy conserving time integration scheme, it is not required to resolve the collisional scales, i.e., the mean collision
time and the mean free path of molecules. This, of course, gives rise to much more efficient simulations with respect to other particle methods, especially the conventional direct
simulation Monte Carlo (DSMC), for small and moderate Knudsen numbers. To examine the new approach, first the computational cost of the model was compared with that of DSMC, and significant speed-up could be obtained for small Knudsen numbers. Second, the structure of a high-Mach-number shock (in nitrogen) was studied, and the good performance of the model for such out-of-equilibrium conditions could be demonstrated. Finally, a hypersonic flow of nitrogen over a wedge was studied, where good agreement with respect to DSMC (with level to level transition
model) for vibrational and translational temperatures is shown.
□ Interfacial Flows
A model of a nonspherical gas bubble is developed in which the Rayleigh-Plesset equation is augmented with second order terms that back-couple the volume mode to a single shape mode.
These additional terms in the Rayleigh-Plesset equation permit oscillation energy to be transferred back and forth between the shape and volume modes. The parametric stability of the
shape mode is analyzed for a driven bubble, and it is seen that the bidirectional coupling yields an enhanced, albeit minor, stabilizing effect on the shape mode when compared with a
model where the shape-volume coupling is unidirectional. It is also demonstrated how a pure shape distortion can excite significant volume pulsations when the volume mode is in 2:1
internal resonance with the shape mode.
Shape oscillations of a spherical bubble or drop, for which part of its interface is fixed due to contact with a solid support, are studied analytically using variational methods. Linear
oscillations and irrotational flow are assumed. The present analysis is parallel to those of Strani and Sabetta [“Free vibrations of a drop in partial contact with a solid support,” J. Fluid Mech. 141, 233–247 (1984); doi:10.1017/S0022112084000811] and Bostwick and Steen [“Capillary oscillations of a constrained liquid drop,” Phys. Fluids 21, 032108 (2009); doi:10.1063/1.3103344], but is also able to determine the response of bubbles or drops to movements imposed on their supports or to variations of their volumes. The analysis leads to equations of
motion with a simple structure, from which the eigenmodes and frequency response to periodic forcing are easily determined.
The stationary motion of a liquid curtain falling under the effects of inertia, gravity, and surface tension is analyzed. An original equation governing the streamwise distribution of
thickness and velocity is derived by means of a Taylor expansion in the lateral distance from the mean line of the sheet. Approximate solutions are obtained by means of perturbation
approaches involving the two parameters governing the problem, namely, the slenderness ratio ɛ and the Weber number We. The numerical procedure employed in order to integrate the
non-linear equation is discussed and a parametric study is presented, together with a comparison with the approximate asymptotic solutions valid for small ɛ and We.
The behavior of a single acoustically driven bubble tethered to a wire ring is considered. The method of restraining the bubble against rising by attaching it to a wire is a common
procedure in conducting precision acoustic measurements. The dynamics of the tethered bubble differs from that of a free bubble due to the variation in inertial (or added) mass. The objective of this study is to obtain a closed-form, leading-order solution for the volume oscillations, assuming smallness of the bubble radius R_0 in comparison with the acoustic wavelength λ. It
was shown, by using the invariance of the Laplace equation to conformal transformations and the geometry of the problem, that the toroidal coordinates provide separation of variables and
are most suitable for analysis of the oscillations of the tethered bubble. Thus, the dynamics of a bubble restrained by a wire loop can be investigated in toroidal coordinates using an analytical approach and by analogy to the dynamics of a free spherical bubble.
We consider the inertially driven, time-dependent biaxial extensional motion of inviscid and viscous thinning liquid sheets. We present an analytic solution describing the base flow and
examine its linear stability to varicose (symmetric) perturbations within the framework of a long-wave model where transient growth and long-time asymptotic stability are considered. The
stability of the system is characterized in terms of the perturbation wavenumber, Weber number, and Reynolds number. We find that the isotropic nature of the base flow yields stability
results that are identical for axisymmetric and general two-dimensional perturbations. Transient growth of short-wave perturbations at early to moderate times can have significant and
lasting influence on the long-time sheet thickness. For finite Reynolds numbers, a radially expanding sheet is weakly unstable with bounded growth of all perturbations, whereas in the
inviscid and Stokes flow limits sheets are unstable to perturbations in the short-wave limit.
Recent experiments by Pailha et al. [Phys. Fluids 24, 021702 (2012); doi:10.1063/1.3682772] uncovered a rich array of propagation modes when air displaces oil from axially uniform tubes
that have local variations in flow resistance within their cross-sections. The behaviour is particularly surprising because only a single, symmetric mode has been observed in tubes of
regular cross-section, e.g., circular, elliptical, rectangular, and polygonal. In this paper, we present experimental results describing a new mode, an asymmetric localised air finger,
that persists in the limit of zero propagation speed. We show that the experimental observations are consistent with a model based on capillary static calculations within the tube's
cross-section, and the observed bistability is a consequence of the existence of multiple solutions to the Young–Laplace equations. The model also provides an upper bound for the
previously reported symmetry-breaking bifurcation [A. de Lózar, A. Heap, F. Box, A. L. Hazel, and A. Juel, Phys. Fluids 21, 101702 (2009); doi:10.1063/1.3247879].
The thermocapillary instability of irradiated transparent liquid films on absorbing solid substrates is investigated by means of linear stability analysis. Under such circumstances,
incident light passes through a film and is absorbed by the substrate, and the film is then heated by the heat influx across the interface with the substrate. The optical absorption in
the substrate is affected by optical reflection. The energy reflectance varies periodically with the film thickness due to optical interference between light waves reflected from the
gas-liquid and liquid-solid interfaces. The periodic variation of the reflectance strongly affects the film stability, which also varies periodically with the film thickness.
Characteristic scales of the instability are also affected by the substrate thickness and incident light intensity. While qualitative aspects of the stability can be easily obtained from
the analysis based on a simplified model that is derived under the thin-substrate assumption, the quantitative evaluation for the case of substrates of moderate to large thickness should
be based on a more generalized model that allows for substrates of arbitrary thickness.
A mathematical framework is developed to predict the longevity of a submerged superhydrophobic surface made up of parallel grooves. Time-dependent integro-differential equations
predicting the instantaneous behavior of the air–water interface are derived by applying the balance of forces across the air–water interface, while accounting for the dissolution of the
air in water over time. The calculations start by producing a differential equation for the initial steady-state shape and equilibrium position of the air–water interface at t = 0.
Analytical and/or numerical solutions are then developed to solve the time-dependent equations and to compute the volume of the trapped air in the grooves over time until a Wenzel state
is reached as the interface touches the groove's bottom. For demonstration, a superhydrophobic surface made of parallel grooves is considered, and the influence of the groove's dimensions
on the longevity of the surface under different hydrostatic pressures is studied. It was found that for grooves with higher width-to-depth ratios, the critical pressure (pressure at which
departure from the Cassie state starts) is higher due to stronger resistance to deflection of the air–water interface from the air trapped in such grooves. However, grooves with higher
width-to-depth ratios reach the Wenzel state faster because of their greater air–water interface areas.
We study the collapse of an axisymmetric liquid filament both analytically and by means of a numerical model. The liquid filament, also known as ligament, may either collapse stably into
a single droplet or break up into multiple droplets. The dynamics of the filament are governed by the viscosity and the aspect ratio, and the initial perturbations of its surface. We find
that the instability of long viscous filaments can be completely explained by the Rayleigh-Plateau instability, whereas a low viscous filament can also break up due to end pinching. We
analytically derive the transition between stable collapse and breakup in the Ohnesorge number versus aspect ratio phase space. Our result is confirmed by numerical simulations based on
the slender jet approximation and explains recent experimental findings by Castréjon-Pita et al. [Phys. Rev. Lett. 108, 074506 (2012); doi:10.1103/PhysRevLett.108.074506].
□ Viscous and Non-Newtonian Flows
The velocity of a two-dimensional aqueous foam has been measured as it flows through two parallel channels, at a constant overall volumetric flow rate. The flux distribution between the
two channels is studied as a function of the ratio of their widths. A peculiar dependence of the velocity ratio on the width ratio is observed when the foam structure in the narrower
channel is either single staircase or bamboo. In particular, discontinuities in the velocity ratios are observed at the transitions between double and single staircase and between single
staircase and bamboo. A theoretical model accounting for the viscous dissipation at the solid wall and the capillary pressure across a film pinned at the channel outlet predicts the
observed non-monotonic evolution of the velocity ratio as a function of the width ratio. It also predicts quantitatively the intermittent temporal evolution of the velocity in the
narrower channel when it is so narrow that film pinning at its outlet repeatedly brings the flow to a near stop.
The aim of this work was to extend a previous investigation of the flow between two parallel disks (one of which was stationary) that have been subjected to a constant energy impact
arising from a falling mass onto the upper disk assembly. Whereas the previous work considered the measurement of centreline pressures and distance between the plates only, for a single
case, the current work in addition entailed monitoring of pressures at 45% and 90% of disk radius, under 28 combinations of drop height (100 to 1000 mm), drop mass (10 to 55 kg), and
initial disk separation (3 to 10 mm), each with 5 repeat tests. Over the duration of the phenomenon (about 3.5 to 10 ms), four basic features were identified: (1) during initial impact
under the dominance of temporal inertia, a preliminary pressure spike with peak pressures occurring at a displacement change of less than 0.25 mm from the initial disk separation; (2) an
intermediate region with lower pressures; (3) pressure changes arising from a succession of elastic momentum exchanges (bounces) between the colliding masses; and (4) the final largest
pressure spike towards the end of the phenomenon, where viscous effects dominate. Regions (1) and (4) became merged for smaller values of initial disk separation, with region (2) being
obscured. A previously developed quasi-steady linear (QSL) model conformed satisfactorily with pressures measured at the centre of the lower disk; however, substantial deviations from
radially parabolic pressure distributions were encountered over a range of operating parameters during the preliminary pressure phenomenon, unexpected because they implicitly conflict
with the generally accepted concept of parallel flows and radially self-similar velocity profiles in such systems. Measurements of maximum pressures encountered during the preliminary and
final pressure events agreed satisfactorily, both with the QSL model and with a simple but effective scaling analysis.
□ Particulate, Multiphase, and Granular Flows
Axial mixing of wet particles in rotating drums was investigated by the discrete element method with the capillary force explicitly considered. Different flow regimes were observed by
varying the surface tension of liquid and keeping other conditions unchanged. The analysis of the concentration and mean square displacement of particles indicated that the axial motion
of wet particles was a diffusive process characterised by Fick's law. Particle diffusivity decreased with increasing inter-particle cohesion and drum filling level but increased with
increasing drum rotation speed. Two competing mechanisms were proposed to explain these effects. A theoretical model based on the relation between local diffusivity and shear rate was
developed to predict particle diffusivity as a function of drum operation conditions. It was also observed that despite the high inhomogeneity of particle flow in rotating drums, the mean
diffusivity of flow exhibited a strong correlation with granular temperature, defined as the mean square fluctuating velocity of particles.
We experimentally study the behavior of a particle slightly denser than the surrounding liquid in solid-body rotating flow. Earlier work revealed that a heavy particle has an unstable equilibrium point in unbounded rotating flows [G. O. Roberts, D. M. Kornfeld, and W. W. Fowlis, J. Fluid Mech. 229, 555–567 (1991); doi:10.1017/S0022112091003166]. When the rotating flow is confined by a cylindrical wall, a heavy sphere of density 1.05 g/cm³ describes an orbital motion in our experiments. This is due to the effect of the wall near the sphere, i.e., a repulsive force F_W. We model F_W on the sphere as a function of the distance from the wall, L, as F_W ∝ L^{-4}, as proposed by Takemura et al. [J. Fluid Mech. 495, 235–253 (2003); doi:10.1017/S0022112003006232]. Remarkably, the path evaluated from the model including F_W reproduces the experimentally measured trajectory. In addition, during an orbital motion the
particle does not spin around its axis, and we provide a possible explanation for this phenomenon.
We present results of detailed velocity profile measurements in a large series of granular flow experiments in a dam-break setup. The inclination angle, bead size, and roughness of the
running surface were varied. In all experiments, the downstream velocity profiles changed continuously from the head to the tail of the avalanches. On rough running surfaces, an
inflection point developed in the velocity profiles. These velocity profiles cannot be modeled by the large class of constitutive laws which relate the shear stress to a power law of the
strain rate. The velocity profile shape factor increased from the head to the tail of the avalanches. Its maximum value grew with increasing roughness of the running surface. We conclude
that flow features such as velocity profiles are strongly influenced by the boundary condition at the running surface, which depends on the ratio of bead size to the typical roughness
length of the surface. Furthermore, we show that varying velocity profile shape factors inside gravitationally driven finite-mass flows give rise to an additional term in the
depth-averaged momentum equation, which is normally solved in the simulation software of hazardous geophysical flows. We therefore encourage time dependent velocity profile measurements
inside hazardous geophysical flows, to learn about the importance of this “new” term in the mathematical modeling of these flows.
Granular shear flows of flat disks and elongated rods are simulated using the Discrete Element Method. The effects of particle shape, interparticle friction, coefficient of restitution,
and Young's modulus on the flow behavior and solid phase stresses have been investigated. Without friction, the stresses decrease as the particles become flatter or more elongated due to
the effect of particle shape on the motion and interaction of particles. In dense flows, the particles tend to have their largest dimension aligned in the flow direction and their
smallest dimension aligned in the velocity gradient direction, such that the contacts between the particles are reduced. The particle alignment is more significant for flatter disks and
more elongated rods. The interparticle friction has a crucial impact on the flow pattern, particle alignment, and stress. Unlike in the smooth layer flows with frictionless particles,
frictional particles are entangled into large masses which rotate like solid bodies under shear. In dense flows with friction, a sharp stress increase is observed with a small increase in
the solid volume fraction, and a space-spanning network of force chains is rapidly formed with the increase in stress. The stress surge can occur at a lower solid volume fraction for the
flatter and more elongated particles. The particle Young's modulus has a negligible effect on dilute and moderately dense flows. However, in dense flows where the space-spanning network
of force chains is formed, the stress depends strongly on the particle Young's modulus. In shear flows of non-spherical particles, the stress tensor is found to be symmetric, but
anisotropic with the normal component in the flow direction greater than the other two normal components. The granular temperature for the non-spherical particle systems consists of
translational and rotational temperatures. The translational temperature is not equally partitioned in the three directions with the component in the flow direction greater than the other
two. The rotational temperature is less than the translational temperature at low solid volume fractions, but may become greater than the translational temperature at high solid volume fractions.
How to prepare for the CLAST
Contents include: Some Proven Test-Taking Strategies; Critical Comprehension; Reading Practice, Explained Answers; and 19 other sections not shown.
Find the limit of the sequence
July 25th 2012, 10:47 PM #1
Jul 2012
Find the limit of the sequence
The problem says to find the limit of the sequence.
a[n] = (-1)^n * (3n + e^-n/5n)
I'm not sure how to approach this problem. Do I use the ratio test?
I apologize for the formatting. I'm not sure how to make the fraction look nice
Last edited by NeedsHelpPlease; July 26th 2012 at 08:01 AM.
Re: Find the limit of the sequence
Last edited by Plato; July 26th 2012 at 06:41 AM.
Re: Find the limit of the sequence
And the "ratio test" is a test for convergence of a series, not a sequence.
Re: Find the limit of the sequence
Actually there is a ratio test for sequences as well.
For a sequence \displaystyle \begin{align*} \left\{ a_n \right\} \end{align*}, the ratio test states
\displaystyle \begin{align*} \lim_{n \to \infty}\left| \frac{a_{n+1}}{a_n} \right| = L < 1 \implies \lim_{n \to \infty}a_n = 0 \end{align*}
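For this particular problem (reading the sequence as \displaystyle \begin{align*} a_n = (-1)^n\,\frac{3n + e^{-n}}{5n} \end{align*}, which appears to be what the original post intends), the ratio test gives \displaystyle \begin{align*} L = 1 \end{align*}, so it is inconclusive here. Looking at the terms directly settles it:

\displaystyle \begin{align*} \left|a_n\right| = \frac{3n + e^{-n}}{5n} = \frac{3}{5} + \frac{e^{-n}}{5n} \to \frac{3}{5} \end{align*}

The even-indexed terms approach 3/5 while the odd-indexed terms approach -3/5, so the limit of the sequence does not exist.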
July 26th 2012, 03:24 AM #2
July 26th 2012, 05:26 AM #3
MHF Contributor
Apr 2005
July 26th 2012, 05:39 AM #4
East Bridgewater SAT Math Tutor
Find an East Bridgewater SAT Math Tutor
...Visit me at http://www.wyzant.com/Tutors/engineeringhelp. I frequently tutor calculus, and I have studied it from many angles. In high school, I scored 5/5 on both AP Calculus AB and BC
examinations. At MIT, I aced a course in multivariable calculus.
8 Subjects: including SAT math, physics, calculus, differential equations
...Just like real life, it's often a matter of knowing what is required of you, and addressing those areas first and foremost. Too often, students "spin their wheels" on areas from which they will
not maximally benefit. They get frustrated going around and around in circles, not understanding something.
30 Subjects: including SAT math, reading, writing, English
...For more than a decade I have been using calculus to solve a wide variety of complex problems. As a math PhD student I have particularly excelled in the field of analysis, which is largely just
a more rigorous and abstract formulation of traditional calculus concepts. In addition to my love for the subject, I also have a natural impulse to explain it to others.
14 Subjects: including SAT math, calculus, geometry, GRE
I have just retired after 40 years of teaching math and coaching various sports. My experience has been in public school at Scituate High School and in private at Archbishop Williams. I can
honestly say that I loved going to work everyday.
5 Subjects: including SAT math, geometry, algebra 2, prealgebra
...I am familiar with a few Java IDEs as well, so I am able to tutor from a versatile standpoint. I received excellent scores in all areas on my first and only attempt at the SAT. I have excellent
reading and communication skills, and a background in Latin to help with vocabulary.
38 Subjects: including SAT math, reading, English, physics
Presidential Election Predictor
About the Presidential Election Predictor
Go to the state-by-state form
November 10, 2012
There is still a little bit to look at now that the election results are in. The first bit of business is to retract my comments about Maine's second congressional district that I made on November 5.
There were four combinations of states that I had overlooked that could give Obama 33 electoral votes without ME2. This means that ME2 could have tipped the election. So I moved it back to the
category of Possible Tipping Point States.
ME2 didn't figure particularly prominently for either candidate. Using the data that I had captured just after midnight on the morning of the election, Obama's chances of winning stood at 67.19% over all outcomes.
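Those four overlooked combinations are easy to reproduce with a few lines of code (a minimal sketch; the electoral vote counts are the ones listed in the form at the bottom of this page, with ME2 set aside):

from itertools import combinations

# Electoral votes of the seven tipping-point states, excluding ME2.
ev = {"CO": 9, "IA": 6, "NV": 6, "NH": 4, "OH": 18, "VA": 13, "WI": 10}

# Find every subset worth exactly 33 EVs -- the cases where ME2's
# single electoral vote could have tipped the election.
for r in range(1, len(ev) + 1):
    for combo in combinations(ev, r):
        if sum(ev[s] for s in combo) == 33:
            print("+".join(combo))
# Prints the four combinations: CO+IA+OH, CO+NV+OH, IA+NH+VA+WI, NV+NH+VA+WI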
November 6, 2012
How each candidate can win: There are seven key states. The Intrade polling last night didn't quite reflect that...Obama's Pennsylvania and Michigan futures were only a bit higher than his Nevada
futures last night when I captured the data, but I analyzed what the data presented. That gives Romney some out-of-the-box opportunities. In fact, the Intrade state futures suggest that Romney
has a 26.57% chance of winning by holding his own must-win states and breaking through to capture at least one of Obama's must-win states. Similarly, though, Obama can also win by holding his own
must-win states while capturing Florida or another one of Romney's must-win states, and the Intrade state futures suggest that he has a 39.51% chance of doing so.
Unlike Romney, however, Obama has some chances to win even if he loses a must-win state. He has a 2.61% chance of winning the election in cases where Romney holds all of his must-win states and
captures at least one of Obama's. He has an additional 1.54% chance of winning when both he and Romney capture something from the other's must-win column. In contrast, Romney has only a 0.05%
chance of winning in cases where Obama holds his must-win states and captures something from Romney's must-win column. Romney has some modest chances of winning if both candidates capture something
from each other's must-win column, but those cases only add up to 0.45%.
Those are the out-of-the-box cases, and they account for 70.73% of the outcomes. (I have to inject editorially here—I'm reporting on the implications of the Intrade futures data. I personally don't
think that Romney has any significant chance of breaking into Obama's must-win column.) Of those outcomes, Obama wins 43.66% and Romney wins 27.07%. Usually, only one candidate breaks through and
that candidate wins the election, but 4.65% of the outcomes are really out of the box, and Obama wins most of those (4.15%).
The other 29.27% of the outcomes occur when both candidates hold their must-win states. With the possible tipping-point states as I've defined them, either candidate needs 34 electoral votes to claim
the election. Obama could win with 33 EVs, but there are no combinations that equally split the 66 EVs from those states. If Romney gets 34 EVs, he only achieves at 169-169 electoral tie, but
Republicans are almost certain to control a majority of state delegations in the new House of Representatives, so I award all of those outcomes to Romney.
Of the 29.27% of cases that are in-the-box scenarios, the Intrade futures suggest that Obama wins 23.52% and Romney wins only 5.75%. So, for both Obama and Romney, the majority of their favorable
outcomes are from out-of-the-box scenarios, where someone gets into the other's must-win column, but for Romney it's overwhelming. Over 82% of his winning scenarios involve him capturing at least one
of Obama's must-win states. For Obama, it's only 61.10%.
There are eighteen different ways to get at least 34 of the 66 EVs (without unneeded states). Obama's easiest combination is NV+WI+OH, and 17.37% of the outcomes are cases where both candidates hold the must-win states and Obama wins those three. If Obama does not win with an out-of-the-box scenario or capture those three tipping-point states, then his next most useful combination is NV+IA+VA+CO, adding an additional 1.80% to his chances. The next most useful combination is OH+NH+VA, adding another 0.76%. Obama still has 3.60% of the favorable outcomes from other in-the-box
scenarios. Some of the notable ones are WI+OH+IA, NV+WI+IA+VA, or NV+OH+IA+NH.
Romney's best in-the-box scenario is VA+CO+OH representing 2.75% of his winning outcomes, which is almost half the 5.75% of his winning outcomes from all in-the-box scenarios. If he does not win with
an out-of-the-box result or with those three states, his next most useful combination is VA+NH+OH, adding another 0.80%. Romney has another 2.19% of all outcomes that result in an in-the-box win.
Some of the notable ones are VA+IA+OH, IA+OH+WI, or VA+CO+NH+WI.
November 5, 2012
I'm working on the pre-election summary and mulling over what to do with Maine's second congressional district. I've decided to move it in with the "Possible Tipping Point States", but for an odd
reason: It doesn't fit either of the two criteria for getting out of that category.
• It is far enough to one side of the tipping point that it can only swing in an election that candidate loses anyway
• It is firmly on one candidate's side of the tipping point, but worth so many electoral votes that it will tip the election if lost.
Maine's second congressional district doesn't fit either of those. But, if you set it aside as an Obama EV and look at the seven states around the tipping point, then Obama needs 33 EV or Romney
needs 34 EV to secure an electoral college victory. But there is not a single combination of those seven states that adds up to 33 electoral votes. So each candidate actually needs 34 EV. If you toss
ME2 back into play, then Obama needs 34 EV or Romney needs 34 EV. Since there was no combination of 33 EV without ME2, ME2 can never tip the election, unless one of the Must-Win states flipped.
So in the form below, ME2 has been moved into the category of Possible Tipping Point States, but it can't tip the election.
November 5, 2012
The election is tomorrow. I've made the final changes. It's now clear that Romney cannot win the election if he loses Florida, so I've moved Florida into Romney's must-win states. Strictly speaking, I'm not asserting that none of the must-win states could be the tipping point. I'm making the narrower assertion that if the opposing candidate wins any of those states, then the opposing candidate will win the electoral college.
So, I see 2^7 + 2 = 130 possible outcomes. (The +2 covers the cases where one candidate wins a state in the other's must-win set.) If I have enough time, I'll try to break those down and assign
probabilities to each outcome based on Intrade state futures.
October 31, 2012
A little bit of self-induced horror this Halloween. The state aggregation is particularly simple right now. (It's Ohio, stupid.) So, I noticed that the aggregated future that was calculated by my
site seemed to be off. I found a bug in the code that was using too large a value for the state-by-state part of the variance. (The national part of the variance was calculated properly, so there was
too much variance overall.) I fixed the bug and reran the previous calculations using historical data files. I had overwritten one data file from 10/3, but all other historical values have been
recalculated. If you run the script using the form below, you can see the corrections.
As I commented above, Ohio dominates the presidential futures. In fact, most of Obama's winning scenarios include Wisconsin, Nevada and Ohio (including combinations where Obama also wins other states
like Iowa.) Obama's most likely scenario without Ohio involves Virginia with Wisconsin and any two of Nevada, Iowa or New Hampshire (about 3.0% of all scenarios, or about 60% of Obama's wins without
Ohio). Romney's most likely scenario without Ohio is to win Wisconsin (about 4.7% of all scenarios, or about 57% of Romney's wins without Ohio).
October 26, 2012
There's a bit of a spread again between the aggregated prediction based on state futures (59.85 for Obama) and the directly traded futures (63.3). In this case, though, the arbitrage is not so
obvious. The aggregate isn't soft for Obama because one or two key state futures are off, but because of several states that would be part of Obama's backup. Essentially, with Virginia and Colorado trading below 50%, the aggregated prediction doesn't have much contribution from cases where Obama loses Ohio, although there are certainly ways for him to lose while winning Ohio. That's a harder balance to play. My gut tells me that buying Colorado and Virginia at depressed prices and selling Obama's aggregate is pretty safe, but it's not as well hedged as I'd like.
October 24, 2012
Still chaotic—especially at the state level. Since yesterday, Ohio has traded at least as low as 43% for Obama and at least as high as 63%. The overall result has traded over a narrower range, but
still varied by at least a few points. We'll probably need to wait for some solid post-debate polling before things settle down. The aggregated state results are currently 54.93% for Obama, up very
slightly since yesterday, while the Intrade individual results are at 57.5%, down about one point. But, that's just a snapshot at about 11:40 AM.
October 23, 2012
This site is back up after being down for 11 days. (The Internet Service for MathRec went down unexpectedly.) In the meantime, a lot has happened. Romney had been in trouble before the first debate,
but recovered significantly to get back to a 2:3 underdog position—which is much less daunting than the 1:4 or 1:5 position he was looking at in late September. After that first debate, Romney's
position settled into a new normal of about 37%.
That position was not much changed before last night's final debate, but Romney's direct futures are now trading at more than 41%—an overnight improvement of about three points. During some chaotic
trading this morning, his futures even exceeded 48%. In the meantime, the state futures have gotten slightly out of sync again. Arbitrage? Not right now… It's Ohio that's out of whack in the state
futures, and there's a largish gap between bid (55%) and ask (58.4%). Since the ask price is not really out of sync with the direct futures, there's not much opportunity there. Don't be surprised to
see some more chaotic trading, though. At this point, the tipping point is so dominated by Ohio, that you don't need this site to check for the arbitrage opportunity.
October 10, 2012
At 5:05 PM EDT, the bid price for Obama shares was 62.5%. Short selling ten lots would require an outlay of $37.50. I could have closed out the ten shares by meeting the ask price of 61.4% at 9:04 PM
EDT. That would have been a profit of $1.10 on ten lots, or 2.9% (recouping most of my loss from yesterday, but still down $0.30). There's a debate tomorrow, so it would not be a good idea to play
the time-of-day game.
October 10, 2012
Another seeming arbitrage opportunity today. My automated processing of the Intrade state data extracted a value for Obama in Ohio of 56.25%. As a result, the aggregated state data came out at 56.97%
for Obama. Meanwhile Obama's direct futures are trading at 63.2%. But the state futures are lightly traded, and the ask price for Ohio is 62.6% (or a bid price of 37.5% for Romney in Ohio), so
there's not really an opportunity for arbitrage (unless you want to buy one lot of Obama in Colorado at 49.0—that's a good price, but it's not as tightly coupled to the overall outcome as Ohio is;
the offsetting purchase would be one overall share of Romney at 37.6%). Update: At 12:30 PM EDT, buy 2 shares of Obama in Ohio at 55.0%; sell 2 shares of Obama overall at 62.1%. That's an exposure of
$18.58. Resolution: At 7:14 PM EDT, sell the shares of Obama in Ohio at the bid price of 59.7% and buy back the shares of Obama overall at the ask price of 62.7%. That's a profit of $0.94 on Ohio and
a loss of $0.12 on the overall, for a net of $0.82, or 4.4% in seven hours.
October 9, 2012
So, at 5:08 Eastern time, I could have sold Obama on Intrade at 61.2, matching the posted bid price. (The competing ask was 61.6 at the time. Alternatively, I could have bought Romney by matching the
ask price of 38.9). Selling ten lots short would have cost $38.80. Liquidating that position at 9:37 PM would have been at 62.6, matching the ask price, for a loss of $1.40 on ten lots. Romney's
futures were run up by about four points between 12:25 and 1:00 PM EDT, so the market was still settling back down from that event. We'll see if the afternoon/evening drift shows up tomorrow, but for
now, I'd be down $1.40.
October 8, 2012
As far as arbitrage goes, I think you could do it every weekday. Buy Romney at 4:00 EDT and sell your stake at 9:00 PM EDT. I'm not sure what traders are coming and going throughout the day, but this
pattern has seemed pretty stable. Today you could have bought at no more than 33.5 and sold at no less than 35.7. That's a 6.6% return. Taking the worst timing on prices would still be something like
buying at 33.7 and selling at 35.1, for a 4.2% return.
That one doesn't count. I'll track this over the next few days and see whether I can make it consistent. It's certainly more like gambling than arbitrage, though.
October 6, 2012
The arbitrage opportunity did not last long. You could sell that Ohio at 65.1 and Wisconsin at 68.0, and buy back Obama at 65.9 at profits of 5.2, 13.0 and 4.4 respectively. So a well-hedged
investment of $5.99 + $5.50 + 2×$2.97 = $17.43 would have yielded a profit of $2.70 in a day.
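For anyone checking the bookkeeping, here is the arithmetic spelled out (prices are per $10 contract; the entry prices come from the October 5 note below):

# Entered Oct 5: buy Ohio at 59.9 ($5.99) and Wisconsin at 55.0 ($5.50);
# sell two Obama contracts short at 70.3 (margin of 10.00 - 7.03 each).
cost = 5.99 + 5.50 + 2 * (10.00 - 7.03)
# Closed Oct 6: sell Ohio at 65.1 and Wisconsin at 68.0; buy Obama back at 65.9.
profit = (6.51 - 5.99) + (6.80 - 5.50) + 2 * (7.03 - 6.59)
print(round(cost, 2), round(profit, 2))  # -> 17.43 2.7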
If the markets stay chaotic, there are likely to be additional arbitrage opportunities. One of the purposes of this site is to identify them. When the well-traded direct presidential futures are
out-of-whack with the state futures, buying opposite sides of the November result in state futures and direct futures can present a low-risk opportunity for short-term gain.
October 5, 2012
Well, this is the first clear arbitrage opportunity. Obama's direct futures are trading at over 70%, but his futures for Wisconsin, Ohio, Virginia and Iowa are all no more than 55%. (The way that I
calculate the predictive probability does not mean that you can make a purchase at that price, but you could buy Ohio for 59.9 and Wisconsin for 55.0 and sell Obama direct futures at 70.3.) As a
result, the state aggregate has collapsed to 61.92%, while the Intrade direct futures have risen to 70.3%. I'll make a non-analytical prediction that that won't last long...
September 29, 2012
It has been one thing after another for Mitt. I record three different predictions each day, and a list of these historical values is printed at the end of the CGI output. (The CGI is launched when you click submit on the form at the bottom of the page.) With just a little noise in the data, Mr. Romney's futures as calculated by aggregating the trading data for state results have been dropping steadily. Similarly, the direct futures prices on the overall result of the presidential election have been doing the same thing. One thing I've been trying to tease out of the data is whether the state futures are a leading or lagging indicator. Because both have been moving steadily in the same direction, it's been difficult to tell. I would, however, note that one of the bigger jumps
occurred when some state polls came out that were quite favorable to Obama. In that particular case, the aggregate of the state futures led the futures for the overall outcome, and it was pretty easy
to see that the state futures that were driving the change were the ones that had the new polling data.
My impression right now is that the aggregated state futures lead when there's specific new data, but that the futures for the states without new data actually lag a bit. That is, the order of
movement is: 1) states with new data, 2) overall direct futures, 3) states without new data. This means that the state aggregate both leads and lags.
September 22, 2012
I made a couple of structural changes, moving Michigan and Pennsylvania out of the list of possible tipping point states. It's pretty clear that Obama basically has no winning combinations where he
loses either of those states. I also reduced the portion of the variance applied to the national shift from 75% to 70%, now that we're closer to the election. This has a relatively small effect on
the results, implying 71.96% for Obama, instead of 71.87%.
September 13, 2012
It looks like I launched just in time to watch the Intrade markets respond to Obama's convention bounce…possibly mixed with some foreign-policy bounce resulting from the Libya attacks. For my
purposes, I'm interested in whether the state aggregate from my predictor leads or lags the Intrade direct futures. On September 9, Romney's direct futures were at their peak, and the state aggregate
stood at 63.13% for Obama, which was 5.13 greater than his direct futures. So, when Obama's futures were in decline, the state aggregate was perhaps a bit high. As Obama's futures have been shooting
upward, the delta between the state aggregate and the direct futures has narrowed. These observations suggest that the direct futures lead the state aggregate.
The question of which indicator leads and which one lags is separate from the issue of whether there is a systemic bias between the state aggregate and the direct futures. As I observed in my
previous post, the state aggregate has always indicated a better position for Obama than the direct futures do. Including my experience from previous elections, I have good reason to believe that the
offset is systemic. It is less clear to me whether the systemic offset is in my aggregating model or in the underlying Intrade data.
September 10, 2012
The direct Intrade futures for president indicate only a 58.7% chance of an Obama victory in November, but I infer a 63.27% chance from the state-by-state Intrade data. This disconnect was common
during the last presidential election, and it's not surprising during the early phase of the election, when the state futures aren't highly traded. But, the direction of the inconsistency seems
fairly constant. Looking at the state-by-state futures paints a rosier picture for Obama than the direct presidential election futures.
An interesting check on my algorithm is to see what it does with the fivethirtyeight state data. When I enter the state-by-state values from fivethirtyeight, I get a higher overall estimate of
Obama's chances than fivethirtyeight does, unless I attribute most (92%) of the variance to national shift. Using the 75% that is my current value for the percentage of national variance gives 82.47%
for Obama instead of 80.7%. I suspect this is "real" in some sense. I think Nate's model holds out some chance that there could be a very big shift due to unexpected events. That is, I think his
model might have some non-Gaussian distribution for the national shift. This sort of hedging against an unknown scale-tipping event might also be reflected in the Intrade estimates. Looking at the
Intrade data for "safe" states, for example, suggests the possibility of a fairly large shift that could only come after a significant change to the status quo.
September 9, 2012
I was pretty quick yesterday when I described how I convert Intrade data to state-by-state probabilities to populate the form below. Today I'll give some examples.
Ohio is both pivotal and highly traded, so let's use that as a typical example. The first step is to determine the trading window. Futures for Obama to win Ohio were being offered for sale at 6.19
(out of 10), and there were bids to buy at 6.02. So that sets a current low end of 60.2% and a high end of 61.9%. Romney's futures were being offered at 4.20 and bid at 3.80. Flipping those to Obama
probabilities gives a low of 58.0% and a high of 62.0%. Combining those ranges gives a low of 60.2% and a high of 61.9%. The last trade in Obama futures was at 61.9%, which is inside the current
trading range. The last trade in Romney futures was at 37.0%, which is equivalent to 63% for Obama. That is outside of the combined range, so I use a value of 61.9%, which is the upper end of the
current range. The average of the Obama value 61.9% and the Romney value 61.9% is 61.9%.
Utah will surely cast its electoral votes for Romney. Let's see how the algorithm parses the Intrade data. Futures for Obama to win Utah were being offered for sale at 1.45, but no one was buying.
That is interpreted as a range between 0.0 and 14.5%. There were bids to buy Romney futures at 9.20, but no one was selling. That corresponds to a range between 92% and 100% for Romney, or 0.0% and
8.0% for Obama. There had never been a trade in Obama futures for Utah, so we treat that as a previous trade at 0%. Romney futures, however, had last traded at 91.5%. That's outside the current
trading window, so we use 0.0% and 8.0% as the two values and average them to get 4.0% for Obama in Utah.
September 8, 2012
I've now launched for the 2012 election! The Intrade state data now seems stable enough to use without a lot of tweaking. I'm populating the state percentages in the form below directly from Intrade
according to a relatively simple algorithm.
I first determine the trading range based on Bid and Ask prices. The low end of the range is the higher of the Dem Bid (in percent) or 100 minus the Rep Ask. The high end of the range is the lower of
the Dem Ask or 100 minus the Rep Bid. If the low is higher than the high, then the trading ranges for Dem and Rep are inconsistent, and I take the percentage for the form to be the average of low and
high. Normally the trading ranges are consistent, and I consider the last trades for Dem and Rep. If those are within the combined trading range [low,high] then I use those values. But if either
value is outside of the current trading range, then I use the closest edge instead. After translating the last Dem and Rep trades into the current trading range, I take the average and use that for
the percentage in the form.
There are some states for which there is no last trade for either Dem or Rep. Those are all for safe states—some for Romney, some for Obama. In that case, I take the last trade to be at the extreme:
either 0% or 100%.
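In code, the algorithm reads roughly as follows (a simplified sketch; the function name and interface are mine, and all prices are expressed as percentages for the Democrat):

def state_probability(dem_bid, dem_ask, rep_bid, rep_ask, dem_last, rep_last):
    # Trading range implied by the two markets, in Dem percent.
    low = max(dem_bid, 100.0 - rep_ask)
    high = min(dem_ask, 100.0 - rep_bid)
    if low > high:  # the Dem and Rep ranges are inconsistent
        return (low + high) / 2.0

    def clamp(x):  # pull a last trade into the current trading range
        return min(max(x, low), high)

    return (clamp(dem_last) + clamp(100.0 - rep_last)) / 2.0

# The Ohio example from the September 9 entry:
print(state_probability(60.2, 61.9, 38.0, 42.0, 61.9, 37.0))  # -> 61.9
# Utah (never any Obama trades, so the Dem last trade is taken at the extreme, 0):
print(state_probability(0.0, 14.5, 92.0, 100.0, 0.0, 91.5))   # -> 4.0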
I have an algorithm that I apply to average some state values to get percentages for ME2 and NE2 (and ME1, NE3 and NE1, too). For the moment, DC is just set to 98% for Obama.
There are a few states that seem to have spurious values. These aren't going to have a very big effect. Today, they seem to bias the results slightly toward Obama—the couple states that seem "off"
are safe Romney states that are a bit too high for Obama. By fiddling with the Intrade values, I'd estimate that fixing those few poor values might make the aggregate probability of an Obama victory
be 63.34% instead of the 63.41% that I get directly from the Intrade data.
July 15, 2012
I'm going to take a moment here to discuss the various predictors of the presidential election that are available on the web. Please keep in mind that a prediction is a statement about the
probability of an outcome, not a statement that an outcome will or will not occur. There are five sites that either give a prediction in terms of probability, or give enough information to infer a probability.
The first is the Princeton Election Consortium. This site does not have 2012 aggregated data yet, but they are under construction for the upcoming election. Four years ago they did not give a direct
predictor, but did give an electoral vote estimator with a 95% confidence interval—very nice and very meaningful. The confidence interval was based on the margin of error for aggregated poll data.
The polls were combined to determine a probability for each state for each candidate. Those probabilities were combined into a histogram of possible electoral outcomes. The median outcome with 95%
confidence interval was determined from that.
This is not, however, a prediction of the electoral outcome. The probabilities in that meta-analysis are only for recent polling. There is no attempt to determine or estimate the amount that the
overall national or individual state electorate may change their opinions or intentions between now and the time that the election is held.
The second site is FiveThirtyEight. This site does provide a direct predictor. The underlying analysis is very similar to the Princeton Election Consortium's, but FiveThirtyEight goes a step further and
uses demographic data, historical data about national and state-by-state shifts in polling and actual election returns. This is very important, and the model that they use is sophisticated and their
methodology is fairly transparent. I feel they are the best site for aggregating and interpreting poll data.
Their model of electoral shifts includes state-by-state shifts, national shifts, and shifts in demographically similar states. This is fairly similar to my analysis, but we use very different
underlying data…more about that later.
The third site is Intrade. This is a futures market with four different futures for the outcome of the presidential election. Each of these futures is a separate predictor of the overall electoral
outcome (if you believe that the possibility of a third-party candidate winning the election is negligible). In short, the futures are "Obama", "Romney", "Democrat", and "Republican". Because the
futures are actively traded, the four futures never indicate a single value, but it's straightforward to aggregate them. (I use the bid and ask prices, as well as the sales prices, to determine the
most consistent market value.)
The prices of the futures track the polls pretty directly, since that's the main source of information available to futures traders. But they do not reflect the polls in a well-defined mathematical
way. The trading market is making an implicit assessment of the factors that the Princeton Election Consortium does not address. The futures prices are completely aggregated. They may or may not be
biased, but it is arguable that they are better at removing biases than pollsters are, and they are certainly better at incorporating the intangible aspects of the electoral outcome, such as how
large will the black or youth turnout be?
The fourth site is Iowa Electronic Markets. This is also a futures market, but their futures are different. They have two futures based on the U.S. Presidential Election. The easiest to understand is
the "Winner-Take-All" market, based on the popular vote. This is a direct predictor, but it does not include an assessment of the probability that the popular vote winner fails to win in the Electoral
College. The second market is also based on the popular vote, and the value of the future at maturity is equal to the percentage of the two-party vote share. So a Democratic future is worth $5.45 if
Obama wins 54.5% of the Romney/Obama vote share.
The fifth site is this one. Although less slick than the others, it also provides a direct predictor of the electoral outcome. By visiting the site and clicking on the button on the bottom of the
page, you get a direct prediction of the electoral outcome, based on Intrade prices for each state. The Intrade prices in the web form are updated from time to time (eventually daily), and the cgi
includes a historical tracker of the daily predictor.
The model behind the form is similar to that used by FiveThirtyEight, in that it allows for both state-by-state shifts and also for national shifts. The default 70% split between national shift and
state-by-state shift is based on the typical ratio of polling uncertainty in a state to the state-by-state Intrade futures prices.
What is interesting is that the four sites that offer an electoral predictor give different results. The inferred predictor here at MathRec closely tracks the direct futures prices at Intrade, but in
the '08 election there had been a consistent offset. Both of these were significantly lower than the futures price of the IEM Winner-Take-All market for Obama, even though it was quite clear that
Obama had a significantly greater chance of winning the Electoral College than the popular vote. The predictor at FiveThirtyEight was significantly higher than the futures price at IEM, even after
adjusting for the possibility that Obama could win the Electoral College without winning the popular vote.
I'll discuss possible reasons for the discrepancies in my next post.
A quick disclaimer. Although I think that this site is useful and serves a small niche, there are better sites available. In particular, I've drawn heavily from Sam Wang's Electoral College Meta
Analysis. I've also paid close attention to FiveThirtyEight.
I believe the principal usefulness of my site is to accurately assess the effect on the overall election resulting from shifts in one or a few states. There may also be some usefulness in picking out
arbitrage opportunities between state-by-state futures and the futures for the overall electoral result. It also provides a different perspective for assessing whether other sites might have a bias
that should be noted.
The form below takes state-by-state inputs as percentages for Obama, such as that provided by Intrade. (Look for Politics, then US Election by State.) The state-by-state odds will incorporate the
possibility of a national swing in the popular vote, plus state-by-state variation. Of course there is uncertainty associated with both of these. You choose how to allocate that uncertainty, by
specifying the percentage that is national. My best guesstimate of the value is 75% for a couple months out, but the previous two elections I shifted that to 60% as the election drew closer. (It
cannot be 100%, but it can be 0%. The overall prediction doesn't work consistently with values outside of 5% to 95%.) Then just plunk in the state-by-state odds. The cgi will do the permutations for
various values of the national vote swing and predict the probability of an Obama win. I've prepared another page with more details about the methodology (and the math).
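For readers who want to tinker, here is a minimal Monte Carlo sketch of that two-component model — not the site's actual cgi. The three state probabilities are placeholders, the 236-vote base is Obama's safe plus must-win total from the table below, and the remaining tipping-point states are omitted for brevity:

# Sketch: per-state win probabilities -> overall win probability, with the
# uncertainty split between one shared national shift and independent
# state-by-state shifts (the model structure described above).
import numpy as np
from scipy.stats import norm

states = {"OH": (18, 0.65), "VA": (13, 0.60), "WI": (10, 0.70)}  # EV, P(win)
ev_base = 236           # safe + must-win electoral votes from the table
national_share = 0.70   # fraction of variance assigned to the national shift

rng = np.random.default_rng(1)
draws = 100_000
z_nat = rng.standard_normal(draws) * np.sqrt(national_share)

ev_won = np.full(draws, float(ev_base))
for ev, p in states.values():
    mu = norm.ppf(p)    # latent margin whose sign decides the state
    z_state = rng.standard_normal(draws) * np.sqrt(1 - national_share)
    ev_won += ev * (mu + z_nat + z_state > 0)

print((ev_won > 269).mean())   # estimated probability of reaching 270

Raising national_share makes the state outcomes move together, which is why the same per-state prices can imply noticeably different overall odds.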
The form below is populated with default values from Intrade prices as of Tuesday, 11/6 at 08:46 Pacific Time.
Percentage of Variance applied to national shift:
Votes State Obama Prob (%)
Safe Obama States:
CA, CT, DE, DC, HI, IL, ME, ME1, MD, MA, NY, RI, VT (152 electoral votes)
55 CA
7 CT
3 DE
3 DC
4 HI
20 IL
2 ME
1 ME1
10 MD
11 MA
29 NY
4 RI
3 VT
Obama's Must-Win States:
MI, MN, NJ, NM, OR, PA, WA (84 electoral votes)
16 MI
10 MN
14 NJ
5 NM
7 OR
20 PA
12 WA
Possible Tipping Point States:
CO, IA, ME2, NV, NH, OH, VA, WI (67 electoral votes)
Obama needs 34; Romney needs 34
9 CO
6 IA
1 ME2
6 NV
4 NH
18 OH
13 VA
10 WI
Romney's Must-Win States:
AZ, FL, IN, MO, MT, NE2, NC (80 electoral votes)
11 AZ
29 FL
11 IN
10 MO
3 MT
1 NE2
15 NC
Safe Romney States:
AL, AR, AK, GA, ID, KS, KY, LA, MS, NE, NE1, NE3, ND, OK, SC, SD, TN, TX, UT, WV, WY (155 electoral votes)
9 AL
6 AR
3 AK
16 GA
4 ID
6 KS
8 KY
8 LA
6 MS
2 NE
1 NE1
1 NE3
3 ND
7 OK
9 SC
3 SD
11 TN
38 TX
6 UT
5 WV
3 WY
Steve Schaefer
Last modified: 2012/07/15
|
{"url":"http://www.mathrec.org/election/pres.html","timestamp":"2014-04-20T20:55:10Z","content_type":null,"content_length":"40328","record_id":"<urn:uuid:ce639a8a-f890-4ebd-a05d-427ff0aba8e2>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00100-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Tales of Statisticians | Jacob Bernoulli
Tales of Statisticians
Jacob Bernoulli
6 Jan 1655 - 16 Aug 1705
Jacob (or James, or Jacques), the son of a prosperous Protestant merchant, was the first of the remarkable Bernoulli mathematical dynasty. Back in 1583, the Bernoulli family had fled from
Antwerp, first to Frankfort and then to Basel, to escape the persecutions of the Catholic Duke of Alba, acting for the absentee King of Spain. Jacob was first trained in theology, but after
receiving his degree and working as a tutor in Geneva for two years, he made his own escape into mathematics. He went to France, where he studied with followers of Descartes. He visited the
Netherlands, and then went to England, where he met Boyle and Hooke. On his return to Switzerland in 1683 he began teaching mechanics at the University of Basel. He published papers on algebra
and probability in 1685 and on geometry in 1687, and was appointed to the chair of mathematics at Basel in 1687. At about this time his younger brother Johannes, who had been studying mechanics,
asked Jacob to teach him mathematics, beginning with Leibniz's difficult 1684 paper on the calculus, the big new thing at that time. The two collaborated for a while, but a period of rivalry and
mutual attacks followed, becoming even more acrimonious after Johannes moved to the Netherlands, in 1697.
Jacob's contributions to statistics include a major 1689 work on infinite series, in which his statement of the Law of Large Numbers appeared. The essence of the law is that, given an inherent
probability that an event will come out a certain way, an increasing number of trials will give an increasingly accurate approximation to that frequency, random variations on either side of the
inherently probable value tending to cancel out in the long run. Jacob's work on the calculus (he was the first to use the term "integral") and on geometry gave him a deep insight into the
equations of many of the curves that are useful in describing growth and probability. The curve called the Lemniscate of Bernoulli still bears his name, as does the Bernoulli equation y' = p(x)y + q(x)y^n. He was so taken with another curve he investigated, the logarithmic (equiangular) spiral, that he ordered it to be engraved on his tomb with the caption Eadem mutata resurgo: "Though transformed, I shall arise unchanged." Thus did mathematics and theology join hands at the end. After Jacob's death, Johannes realized his long ambition by succeeding to his brother's chair at Basel.
Jacob's most important work was the Ars Conjectandi, which was published only after his death (in 1713). With de Moivre's work of 1718, it is the first major treatise in the field of probability
and statistics. Bernoulli's legacy, like his life, has an oppositional character. The Ars Conjectandi defines a "frequentist" or objective position as against the "expectation" or subjective
position which has developed from the work of Bayes. Opposition between these two viewpoints still to some extent divides statistics into two hostile camps. One subgroup of the International
Association for Statistics is called the Bernoulli Society.
Bernoulli developed the binomial approach of Pascal, in which the binomial coefficients of the Arithmetical Triangle have a central place. A coin toss or other trial which can have only one of
two outcomes is sometimes called a Bernoulli trial. The world owes Bernoulli no lesser tribute.
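The law is easy to see numerically. A small sketch (the success probability 0.3 is an arbitrary choice):

# Running frequency of successes in Bernoulli trials approaches the
# underlying probability p as the number of trials grows.
import random

random.seed(0)
p = 0.3
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < p for _ in range(n))
    print(n, heads / n)   # the printed frequencies settle toward 0.3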
|
{"url":"http://www.umass.edu/wsp/history/outline/bernoulli.html","timestamp":"2014-04-20T06:53:42Z","content_type":null,"content_length":"5552","record_id":"<urn:uuid:7a0f818f-fb44-4e12-b27b-ed6d857c8b33>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Vitali convergence theorem
Let $1\le p<\infty$ and let $\{f_{n}\}$ be a sequence in $L^{p}$. Then $\{f_{n}\}$ converges in $L^{p}$ to a measurable function $f$ if and only if:
i. the sequence $\{f_{n}\}$ converges to $f$ in measure;
ii. the sequence $\{\lvert f_{n}\rvert^{p}\}$ is uniformly integrable;
iii. for every $\epsilon>0$, there exists a set $E$ of finite measure, such that $\int_{{E^{\mathrm{c}}}}\lvert f_{n}\rvert^{p}<\epsilon$ for all $n$.
This theorem can be used as a replacement for the more well-known dominated convergence theorem, when a dominating factor cannot be found for the functions $f_{n}$ to be integrated. (If this theorem
is known, the dominated convergence theorem can be derived as a special case.)
In a finite measure space, condition (iii) is trivial. In fact, condition (iii) is the tool used to reduce considerations in the general case to the case of a finite measure space.
In probability theory, the definition of “uniform integrability” is slightly different from its definition in general measure theory; either definition may be used in the statement of this theorem.
• 1 Gerald B. Folland. Real Analysis: Modern Techniques and Their Applications, second ed. Wiley-Interscience, 1999.
• 2 Jeffrey S. Rosenthal. A First Look at Rigorous Probability Theory. World Scientific, 2003.
ModesOfConvergenceOfSequencesOfMeasurableFunctions, UniformlyIntegrable, DominatedConvergenceTheorem
uniform-integrability convergence theorem
|
{"url":"http://planetmath.org/VitaliConvergenceTheorem","timestamp":"2014-04-17T21:32:37Z","content_type":null,"content_length":"46671","record_id":"<urn:uuid:92aec2b4-c14b-4921-bbdb-b9bae0b1d52a>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ramanujam, J. "Ram" - Department of Electrical and Computer Engineering, Louisiana State University
• Integer Lattice Based Methods for Local Address Generation for Block-Cyclic
• Proceedings of the 1998 International Conference on Parallel Processing (ICPP 98) Minimizing Data and Synchronization Costs in One-Way Communication
• To appear in Proceedings of the Seventh SIAM Conference on Parallel Processing for Scientific Computing, San Francisco, February 1995.
• The Science of Programming High-Performance Linear Algebra Libraries Paolo Bientinesi, John A. Gunnels, Fred G. Gustavson, Greg M. Henry
• A Component Architecture for High-Performance Computing David E. Bernholdt, Wael R. Elwasif, and James A. Kohl
• Improving Offset Assignment on Embedded Processors Using Transformations
• An Efficient Compile-Time Approach to Compute Address Sequences in Data Parallel Programs
• EXACT SCHEDULING TECHNIQUES FOR HIGH LEVEL Submitted to the Graduate Faculty of the
• Compile-Time Techniques for Data Distribution in Distributed Memory Machines
• Tiling Multidimensional Iteration Spaces for Multicomputers
• Multi-Phase Redistribution: A Communication-Efficient Approach to Array Redistribution
• Beyond Unimodular Transformations J. Ramanujam
• Final version of paper to appear in IEEE Trans. Parallel and Dist. Systems, Feb 1999 A Linear Algebra Framework for Automatic Determination of
• Proc. 7th International Workshop on Compilers for Parallel Computers, Sweden, June 1998 Optimizing Spatial Locality in Loop Nests using Linear Algebra
• Proc. Workshop on Architecture Compiler Interaction, 3rd HPCA, San Antonio, Feb. 1997 Optimizing Out-of-Core Computations in Uniprocessors
• Reducing Code Size Through Address Register Assignment
• Compiler Algorithms for Optimizing Locality and Parallelism on Shared and Distributed Memory Machines
• Automatic Data Mapping and Program Transformations J. Ramanujam and A. Narayan
• Data Locality Optimization for Synthesis of Efficient Out-of-Core Algorithms
• Aiding Library Writers in Optimizing the Use of High-Level Abstractions in Scientific Applications
• Appears in IEEE Trans. on Parallel and Distributed Systems, December 2000 Minimizing data and synchronization costs in one-way
• Enhancing Spatial Locality using Data Layout Optimizations M. Kandemir, A. Choudhary, J. Ramanujam, N. Shenoy, and P. Banerjee
• A Linear Algebraic View of Loop Transformations and Their Interaction
• Exploiting Domain-Specific High-Level Runtime Support for Parallel Code Generation
• Experiments with Data Layouts M. Kandemir, A. Choudhary, N. Shenoy, P. Banerjee, and J. Ramanujam
• Automatic Transformations for Communication-Minimized Parallelization and Locality
• Fast Address Sequence Generation for Data-Parallel Programs Using Integer Lattices
• A Matrix-Based Approach to the Global Locality Optimization Problem M. Kandemir, A. Choudhary, J. Ramanujam, P. Banerjee
• Compiler Support for Software Libraries Samuel Z. Guyer Calvin Lin
• Effective Automatic Parallelization of Stencil Computations Sriram Krishnamoorthy1
• Generating Parallel Programs for Fast Signal Transforms using SPIRAL
• A Framework for Integrated Communication and I/O Placement
• In Proc. 8th Workshop on Compilers for Parallel Computing (CPC 98), Sweden, June 1998 Advanced Compilation Techniques for HPF
• Statement-Level Independent Partitioning of Uniform Recurrences J. Ramanujam S. Vasanthakumar
• Automatic Optimization of Communication in Compiling Out-of-core Stencil Rajesh Bordawekar Alok Choudhary
• Nonunimodular Transformations of Nested Loops J. Ramanujam
• Efficient Computation of Address Sequences in Data Parallel Programs Using Closed Forms for Basis Vectors
• Improving locality in out-of-core computations using data layout transformations
• Optimization of Out-of-Core Computations Using Chain Vectors
• Cluster Partitioning Approaches to Mapping Parallel Programs Onto a Hypercube
• Automatic Performance Tuning and Analysis of Sparse Triangular Solve
• Tiling Multidimensional Iteration Spaces for Non-shared Memory Machines J. Ramanujam P. Sadayappan
• Accepted for publication in IEEE Trans. Computer Aided Design (TCAD), 2000 Compact and Efficient Code Generation through
• A Unified Tiling Approach for Out-of-Core Computations M. Kandemir
• Generalized Overlap Regions for Communication Optimization in Data-Parallel Programs
• Towards Effective Automatic Parallelization for Multicore Systems Uday Bondhugula1
• Iteration Space Tiling for Distributed Memory Machines J. Ramanujam and P. Sadayappan
• Efficient Address Sequence Generation for Two-Level Mappings in High Performance Fortran
• A Graph Based Framework to Detect Optimal Memory Layouts for Improving Data Locality
• A Compiler Algorithm for Optimizing Locality in Loop Nests M. Kandemir J. Ramanujam A. Choudhary
• Data Relation Vectors: A New Abstraction for Data Optimizations Mahmut Kandemir
• A Fast Approach to Computing Exact Solutions to the Resource-Constrained Scheduling Problem
• To appear in Proc. Europar 98, Southampton, UK, September 1998 Enhancing spatial locality via data layout optimizations
• Parallel Processing Letters, © World Scientific Publishing Company
• Parallel Objects: Virtualization and in-Process Components Laxmikant Kale Orion Lawlor Milind Bhandarkar
• IEEE TRANSACTIONS ON COMPUTERS, VOL. XX, NO. Y, MONTH 1999 1 Improving Cache Locality by a Combination of
• A Combined Communication and Synchronization Optimization Algorithm for One-Way Communication
• A Compiler Framework for Optimization of Affine Loop Nests for General Purpose Computations on GPUs
• A Neural Architecture for a Class of Abduction Problems Ashok K. Goel \Lambda J. Ramanujam y
• Code Generation for Complex Subscripts in DataParallel Programs
• A Practical Automatic Polyhedral Parallelizer and Locality Optimizer
• A Data Layout Optimization Technique Based on Hyperplanes M. Kandemir, A. Choudhary, N. Shenoy, P. Banerjee, and J. Ramanujam
• 1990 International Conference on Parallel Processing, Volume II, pages 179–186, Tiling of Iteration Spaces for Multicomputers
• Task Allocation onto a Hypercube by Recursive Mincut Bipartitioning
• In Proceedings of the 1998 International Parallel Processing Symposium A Generalized Framework for Global Communication Optimization
• Optimizing Communication Using Global Dataflow Analysis M. Kandemir, P. Banerjee, A. Choudhary, J. Ramanujam, and N. Shenoy
• A Global Communication Optimization Technique Based on DataFlow Analysis and Linear Algebra
• Tighter lower bounds for scheduling problems in highlevel synthesis
• SPIRAL: A Generator for Platform-Adapted Libraries of Signal Processing Algorithms
• Compilation Techniques for Out-of-Core Parallel Computations M. Kandemir A. Choudhary J. Ramanujam R. Bordawekar
• A Performance Optimization Framework for Compilation of Tensor Contraction Expressions into Parallel Programs
• Code Size Optimization for Embedded Processors using Commutative Transformations
• Address Register Assignment for Reducing Code M. Kandemir1
• Address Code and Arithmetic Optimizations for Embedded Systems J. Ramanujam Satish Krishnamurthy Jinpyo Hong Mahmut Kandemir
• Improving Offset Assignment on Embedded Processors using Transformations
• Improving the Computational Performance of ILPbased Problems M. Narasimhan J. Ramanujam
• Improving the Performance of Out-of-Core Computations M. Kandemir J. Ramanujam A. Choudhary
• Lessons learned from the Shared Memory Parallelization of a Functional Array Language
• Simultaneous Peak and Average Power Optimization in Synchronous Sequential Designs Using Retiming and Multiple Supply Voltages
• PLuTo: A Practical and Fully Automatic Polyhedral Program Optimization System
• Automatic Data Movement and Computation Mapping for Multi-level Parallel Architectures with Explicitly Managed
• CONCURRENCY AND COMPUTATION: PRACTICE AND EXPERIENCE, Concurrency Computat.: Pract. Exper. 2007; 19:2425–2443
• Automatic Mapping of Nested Loops to FPGAs Uday Bondhugula
• Memory Offset Assignment for DSPs Jinpyo Hong1
• An Effective Heuristic for Simple Offset Assignment with Variable Coalescing
• Improving the Energy Behavior of Block Buffering Using Compiler Optimizations
• Automatic Code Generation for Many-Body Electronic Structure Methods: The Tensor
• Dynamic Memory Usage Optimization using ILP and J. Ramanujam2
• ILP and Iterative LP Solutions for Peak and Average Power Optimization in HLS
• Modified Force-Directed Scheduling for Peak and Average Power Optimization using Multiple Supply-Voltages
• Efficient Search-Space Pruning for Integrated Fusion and Tiling Transformations
• Performance Modeling and Optimization of Parallel Out-of-Core Tensor Contractions
• IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, VOL. 23, NO. 2, FEBRUARY 2004 243 A Compiler-Based Approach for Dynamically
• Memory-Constrained Communication Minimization for a Class of Array Computations
• A Heuristic for Clock Selection in High-Level Synthesis J. Ramanujam Sandeep Deshpande Jinpyo Hong Mahmut Kandemir
• Global Communication Optimization for Tensor Contraction Expressions under Memory Constraints
• A High-Level Approach to Synthesis of High-Performance Codes for Quantum Chemistry
• Towards Automatic Synthesis of High-Performance Codes for Electronic Structure Calculations: Data
• Combining Distributed and Shared Memory Models: Approach and Evolution of the Global Arrays Toolkit
• A Parallel Communication Infrastructure for STAPL Steven Saunders
• Improving Offset Assignment for Embedded Processors Sunil Atri 1 J. Ramanujam 1 Mahmut Kandemir 2
• A Unified Compiler Algorithm for Optimizing Locality, Parallelism and Communication in Out-of-Core Computations
• Mapping Combinatorial Optimization Problems onto Neural Networks
• A Methodology for Parallelizing Programs for Multicomputers and Complex Memory Multiprocessors
• Proc. 11th ACM international Conference on Supercomputing, Melbourne, Australia, July 1998 A Hyperplane Based Approach for Optimizing Spatial Locality in Loop Nests
• On Lower Bounds for Scheduling Problems in High-Level Synthesis M. Narasimhan
• Analysis of Event Synchronization in Parallel Programs J. Ramanujam and A. Mathew
• Affine Transformations for Communication Minimal Parallelization and Locality Optimization of Arbitrarily Nested Loop Sequences
• A Compiler Framework for Optimization of Affine Loop Nests for GPGPUs
• Estimating and Reducing the Memory Requirements of Signal Processing Codes for Embedded Systems
• IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, VOL. 12, NO. 3, MARCH 2004 281 Compiler-Directed Scratch Pad Memory Optimization
• To appear in Proceedings of the Third Workshop on Languages, Compilers and Runtime Systems for Scalable Computers, Troy, NY, May 1995
• Appears in Journal of Parallel and Distributed Computing, September 1999 A Matrix-Based Approach to Global Locality Optimization
• Address Sequence Generation for Data-Parallel Programs Using Integer Lattices
• Access based data decomposition for distributed memory machines J. Ramanujam P. Sadayappan
• Compilation and Communication Strategies for Out-of-core programs on Distributed Memory Machines
• Compiler Support for Optimizing Tensor Contraction Expressions in Quantum Chemistry Computations
• Improving Offset Assignment for Embedded Processors Sunil Atri1
• EE, CompE and CS Programs: Merger or Peaceful Co-Existence?
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/starturl/11/565.html","timestamp":"2014-04-19T23:47:32Z","content_type":null,"content_length":"30638","record_id":"<urn:uuid:69db2373-0674-40e0-afc6-f308184d7e28>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Constructing an Ellipse
On this web page you can explore the geometric construction of an ellipse. In Lesson 9.2 of Discovering Advanced Algebra: An Investigative Approach, you learned the locus definition of an ellipse.
Definition of an Ellipse
An ellipse is a locus of points P in a plane, the sum of whose distances, d[1] and d[2], from two fixed points, F[1] and F[2], is always a constant, d. That is, d[1] + d[2] = d, or F[1]P + F[2]P = d.
The two fixed points, F[1] and F[2], are called foci.
The sketches on this page use this definition to construct an ellipse. The construction is the same as the one explained in the Exploration Constructing the Conic Sections in Chapter 8.
This sketch shows two circles centered at F[1] and F[2] and the points where they intersect. The radius of circle F[1] is congruent to segment AC, and the radius of circle F[2] is congruent to
segment CB. Drag point C to trace the intersections of the circles as the radii change. Drag points F[1] and F[2] to reposition the foci, and drag point B to change the sum of the lengths of the
radii. Click X to erase the traces.
1. Slowly drag point C back and forth along segment AB. What shape is created by the trace of the intersections of the circles?
2. What is true about the sum of the distance from an intersection point to F[1] and the distance from the same intersection point to F[2]? In what ways is this sum represented by the sketch?
3. Why is the trace of the intersections an ellipse?
4. Try moving F[1], F[2], and/or B, erase the traces, then slowly drag C back and forth. Do you still get an ellipse? In what ways has it changed?
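If you prefer coordinates to dragging, the same construction can be sketched numerically; the foci and the sum d below are arbitrary choices, not values from the lesson:

# Trace the ellipse by intersecting circles of radii r1 and r2 = d - r1
# centered at the foci, exactly as in the sketch described above.
import numpy as np

f1, f2 = np.array([-2.0, 0.0]), np.array([2.0, 0.0])  # foci
d = 6.0                                               # fixed sum d1 + d2
c = np.linalg.norm(f2 - f1)                           # distance between foci

points = []
for r1 in np.linspace(1.0, 5.0, 50):                  # plays the role of dragging C
    r2 = d - r1
    a = (r1**2 - r2**2 + c**2) / (2 * c)              # offset along the focal axis
    h2 = r1**2 - a**2                                 # squared height off the axis
    if h2 >= 0:                                       # the circles actually intersect
        base = f1 + a * (f2 - f1) / c
        perp = np.array([-(f2 - f1)[1], (f2 - f1)[0]]) / c
        points += [base + np.sqrt(h2) * perp, base - np.sqrt(h2) * perp]

pts = np.array(points)
d1 = np.hypot(*(pts - f1).T)
d2 = np.hypot(*(pts - f2).T)
print(np.allclose(d1 + d2, d))   # True: every traced point satisfies d1 + d2 = d

The final check is exactly the locus definition above.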
This sketch shows the locus of the intersections from the previous sketch. Drag points F[1] and F[2] to reposition the foci, and drag point B to change the sum of the lengths of the radii. (Note: For
technical reasons, there may be small gaps at the ends of the ellipse.)
1. Experiment by dragging the foci farther apart and closer together. How does the ellipse change?
2. How far apart can the foci be before you no longer have an ellipse?
3. What happens when both foci are at the same point?
4. Experiment by dragging point B. How does the ellipse change? Based on the locus definition of an ellipse, what is changing?
5. How can you make the ellipse taller than it is wide?
Go to the parabola page or the hyperbola page to explore the geometric constructions of other conic sections.
|
{"url":"http://math.kendallhunt.com/x24238.html","timestamp":"2014-04-18T05:30:46Z","content_type":null,"content_length":"9478","record_id":"<urn:uuid:d32b7cbc-e02d-4fca-9914-9af55a6763ba>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Subfields joining an algebraic element to another
Let $\alpha$ and $\beta$ be two algebraic numbers over $\mathbb Q$. Say that a subfield $\mathbb K$ of $\mathbb C$ joins $\alpha$ to $\beta$ iff $\beta \in {\mathbb K}[\alpha]$ but $\beta \not\in {\mathbb K}$. Now, if $\mathbb K$ joins $\alpha$ to $\beta$ and we add a completely unrelated algebraic number to $\mathbb K$, we still have a join from $\alpha$ to $\beta$. So it is natural to consider the minimal joins from $\alpha$ to $\beta$, i.e. the joins that are minimal with respect to field inclusion. Let ${\cal M}(\alpha,\beta)$ denote the set of all minimal joins from $\alpha$ to $\beta$. My guesses are that:
1) Any field in ${\cal M}(\alpha,\beta)$ is always contained in the normal (Galois) closure of ${\mathbb Q}(\alpha,\beta)$
2) ${\cal M}(\alpha,\beta)$ is always finite
3) The two facts above should be provable using Galois theory.
Note that 2) follows from 1).
Can anyone confirm this ?
A simple example : ${\cal M}(\sqrt{2},\sqrt{3})$ consists of ${\mathbb Q}(\sqrt{6})$. Indeed, suppose $\mathbb K$ joins $\sqrt{2}$ to $\sqrt{3}$ and $x$ and $y$ are numbers in $\mathbb K$ such that
$x+y\sqrt{2}=\sqrt{3}$. If $x \neq 0$ then $\sqrt{3}=\frac{3+x^2-2y^2}{2x} \in \mathbb K$ which is absurd. So $x=0$ and $y=\frac{\sqrt{6}}{2}$.
nt.number-theory number-fields
2 Answers
Finally my guesses seem to be correct: let $d$ be the degree of the extension ${\mathbb K}(\alpha):{\mathbb K}$. There are exactly $d$ field homomorphisms ${\mathbb K}(\alpha)\to \mathbb C$ that coincide with the identity on $\mathbb K$. Each one of these homomorphisms may be extended to a field homomorphism from ${\mathbb K}(\alpha,\beta)$ to $\mathbb C$ (in a generally nonunique way). Let us denote by $H$ the $d$-element set of all the homomorphisms thus obtained.
As darij said, there is a polynomial $P\in {\mathbb K}[X]$ such that $P(\alpha)=\beta$. We may take $P$ of degree smaller than $d$. If we denote the coefficients of $P$ by $p_0,p_1,\ldots,p_{d-1}$, then we have
$\sum_{k=0}^{d-1}p_k \alpha^k=\beta$.
Now, applying the elements of $H$ to these equalities yields
$\sum_{k=0}^{d-1}p_k h(\alpha)^k=h(\beta)$
for any $h\in H$. We see now that the $p_k$ may be retrieved as solutions of a $d\times d$ system all of whose coefficients are in the closure of ${\mathbb Q}(\alpha,\beta)$. The determinant of this system is a Vandermonde determinant on the distinct conjugates of $\alpha$, so it's nonzero.
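Written out explicitly (introducing the notation $H=\{h_1,\dots,h_d\}$, which is not in the original answer), the system is
$\begin{pmatrix} 1 & h_1(\alpha) & \cdots & h_1(\alpha)^{d-1}\\ \vdots & \vdots & & \vdots\\ 1 & h_d(\alpha) & \cdots & h_d(\alpha)^{d-1} \end{pmatrix}\begin{pmatrix} p_0\\ p_1\\ \vdots\\ p_{d-1} \end{pmatrix}=\begin{pmatrix} h_1(\beta)\\ h_2(\beta)\\ \vdots\\ h_d(\beta) \end{pmatrix},$
whose coefficient matrix is the Vandermonde matrix on the $d$ distinct conjugates $h_j(\alpha)$.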
This shows statement 1) (and 2)). Although darij's proof is wrong as shown in the comments, it may still be that his statement a) is correct (so that "Galois closure" may be suppressed in the statement of 1)). Perhaps one could show it the Galois way, by showing that any field homomorphism that coincides with the identity on ${\mathbb Q}(\alpha,\beta)$ in fact preserves the coefficients of $P$.
EDIT: Okay, wrong. Sorry for spamming.
This looks just too easy, so I guess I'm doing something stupid.
I'll prove that
(a) whenever a subfield $K$ of $\mathbb{C}$ satisfies $\beta\in K\left[\alpha\right]$, then $\beta\in\left(K\cap\mathbb{Q}\left(\alpha,\beta\right)\right)\left[\alpha\right]$.
Adding to this the trivial observation that
(b) whenever a subfield $K$ of $\mathbb{C}$ satisfies $\beta\not\in K$, then $\beta\not\in K\cap\mathbb{Q}\left(\alpha,\beta\right)$,
we see that whenever a subfield $K$ of $\mathbb{C}$ joins $\alpha$ to $\beta$, its subfield $K\cap\mathbb{Q}\left(\alpha,\beta\right)$ does the same. This obviously settles 1) (even without the normal closure) and therefore 2).
So let's prove (a) now: Since $\beta\in K\left[\alpha\right]$, there exists a polynomial $P\in K\left[X\right]$ such that $\beta=P\left(\alpha\right)$. Since $K\cap\mathbb{Q}\left(\alpha,\beta\right)$ is a vector subspace of $K$ (where "vector space" means $\mathbb{Q}$-vector space), there exists a linear map $\phi:K\to K\cap\mathbb{Q}\left(\alpha,\beta\right)$ such that $\phi\left(v\right)=v$ for every $v\in K\cap\mathbb{Q}\left(\alpha,\beta\right)$ (this may require the axiom of choice for infinite $K$, but if you want to consider infinite field extensions of $\mathbb{Q}$ I think you can't help but use the axiom of choice). Applying the linear map $\phi$ to every coefficient of the polynomial $P\in K\left[X\right]$, we get another polynomial $Q\in\left(K\cap\mathbb{Q}\left(\alpha,\beta\right)\right)\left[X\right]$ which also satisfies $\beta=Q\left(\alpha\right)$ (since all powers of $\alpha$ and $\beta$ lie in $K\cap\mathbb{Q}\left(\alpha,\beta\right)$ and thus are invariant under $\phi$). Thus, $\beta\in\left(K\cap\mathbb{Q}\left(\alpha,\beta\right)\right)\left[\alpha\right]$, qed.
Can anyone check this for nonsense?
I don't think I understand why $\beta = Q(\alpha)$: the map is linear and I can't find the reason yet why it preserves the value of a polynomial. Did I miss something? – Ilya Nikokoshev Dec 12 '09 at 22:06
Not all powers of $\alpha$ and $\beta$ lie in $K\cap\mathbb Q(\alpha,\beta)$. For example, neither $\alpha$ not $\beta$ are in $K$. – Mariano Suárez-Alvarez♦ Dec 12 '09 at 22:23
|
{"url":"http://mathoverflow.net/questions/8184/subfields-joining-an-algebraic-element-to-another","timestamp":"2014-04-19T07:14:55Z","content_type":null,"content_length":"59243","record_id":"<urn:uuid:0f3ea6d2-1aad-4057-bd65-e13c5f08a02f>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Recursive presentations
A recursive presentation of a group is one in which there is a finite number of generators and the set of relations is recursively enumerable. I found the following quote in Lyndon-Schupp, chapter
"This usage may seem a bit strange, but we shall see that if G has a presentation with the set of relations recursively enumerable, then it has another presentation with the set of relations
It is not clear to me however how one proves it. Does it go through Higman's theorem? I.e., one first proves that a group with a recursive presentation embeds in a finitely presented group, and then one
proves that every subgroup of a finitely presented group has a presentation with a recursive set of relations?
And in any case, can one see it somehow directly that having a presentation with a recursively enumerable set of relations (i.e. being recursively presented) implies having a presentation with a recursive set of relations?
gr.group-theory computability-theory
add comment
1 Answer
active oldest votes
The answer is a simple trick. Essentially no group theory is involved.
Suppose that we are given a group presentation with a set of generators, and relations R_0, R_1, etc. that have been given by a computably enumerable procedure. Let us view each relation
as a word in the generators that is to become trivial in the group.
Now, the trick. Introduce a new generator x. Also, add the relation x, which means that x will be the identity. Now, let S_i be the relation (R_i)x^(t_i), where t_i is the time it takes
for the word R_i to be enumerated in the enumeration algorithm. That is, we simply pad R_i with an enormous number of x's, depending on how long it takes for R_i to be enumerated into the
set of relations. Clearly, S_i and R_i are going to give the same group, once we have said that x is trivial, since the enormous number of copies of x in S_i will all cancel out. But the point is that the set of relations in this new presentation becomes computably decidable (rather than merely enumerable), because given a relation, we look at it to see if it has the form Sx^t for some t, then we run the enumeration algorithm for t steps, and see if S has been added. If not, then we reject; if so, then we accept.
One can get rid of the extra generator x simply by using the relation R_0 from the original presentation. This gives a computable set of relations in the same generating set that
generates the same group.
The essence of this trick is that every relation is equivalent to a ridiculously long relation, and you make the length long enough so that one can check that it really should be there.
This is a variant of "Craig's trick" that shows that a first order theory with a r.e. set of axioms has a recursive set of axioms. – SixWingedSeraph Dec 16 '09 at 20:24
Yes, that's right. It is exactly the same idea. – Joel David Hamkins Dec 16 '09 at 20:28
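A toy rendering of that decision procedure (the three relator words are stand-ins, and x is assumed not to occur in the original relators):

# Padding trick: given an enumeration R_0, R_1, ... of relators (an r.e.
# set), the padded relators R_i . x^(t_i) form a decidable set, because the
# padding length says how long to run the enumerator.
from itertools import islice

def enumerate_relations():
    # stand-in for a possibly very slow enumeration of an r.e. set
    yield from ["aba", "bb", "abab"]

def is_padded_relation(word: str) -> bool:
    """Decide whether word = R . x^t with R enumerated within t steps."""
    stripped = word.rstrip("x")
    t = len(word) - len(stripped)        # amount of x-padding
    if t == 0:
        return False                      # every padded relator carries padding
    return stripped in islice(enumerate_relations(), t)

print(is_padded_relation("abax"))   # True:  "aba" appears by step 1
print(is_padded_relation("bbx"))    # False: "bb" only appears at step 2
print(is_padded_relation("bbxx"))   # True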
|
{"url":"http://mathoverflow.net/questions/9122/recursive-presentations","timestamp":"2014-04-19T15:26:26Z","content_type":null,"content_length":"54933","record_id":"<urn:uuid:a7873ea0-a63a-4cc5-babb-f2aca74d08b8>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Another quad formula word prob
August 28th 2009, 05:30 AM #1
Junior Member
Oct 2008
Another quad formula word prob
Block 4 by 5 by 3 feet. Wants to reduce volume by shaving off the same amount from length width and height.
Could you give me a clue as to where I should start? I know I have to make a quad formula.
The answer is -x^3+12x^2-47x+60
Last edited by RaphaelB30; August 28th 2009 at 05:33 AM. Reason: add more info
$V=h \cdot l \cdot w$
If we're subtracting the same amount from each value:
$V=(h-x)(l-x)(w-x)$
Substitute in for the values of h, w, and l:
$V=(4-x)(5-x)(3-x)$
Multiply it out:
$V= (20-9x+x^2)(3-x)$
$V= 60-27x+3x^2-20x+9x^2-x^3 = -x^3+12x^2-47x+60$
Also, it isn't quadratic, but rather, a cubic...but I suppose you make a quadratic in one of the intermediate steps.
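A quick machine check of that expansion (using sympy; not part of the original thread):

import sympy as sp

x = sp.symbols('x')
print(sp.expand((4 - x) * (5 - x) * (3 - x)))   # -x**3 + 12*x**2 - 47*x + 60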
August 28th 2009, 05:44 AM #2
Junior Member
Jul 2009
|
{"url":"http://mathhelpforum.com/pre-calculus/99557-another-quad-formula-word-prob.html","timestamp":"2014-04-18T07:39:57Z","content_type":null,"content_length":"32679","record_id":"<urn:uuid:473405b2-7a1c-4cfa-a568-d78999dea5de>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Help on differentiation
February 10th 2008, 04:02 AM
Help on differentiation
Hello to everyone, I have just signed up and hope that I can grow welcome on these forums.
I have a calculus assignment due in this week, and needed some help on a few questions, any help would be much appreciated.
Only need help on 2)e)
Help on this entire question is needed
Thanks (Rofl),
February 10th 2008, 04:06 AM
2e is just a matter of the product rule.
Now, implement the rule.
February 10th 2008, 04:10 AM
Though, to differentiate $\ln f(x)$, is it:
February 10th 2008, 04:27 AM
Just use the chain rule.
February 10th 2008, 08:34 AM
Help on this entire question is needed
Thanks (Rofl),
for 3(a), use the chain rule, the result follows immediately, you may want to write out some steps though, so it looks like you're doing something. remember... $\frac d{dx} h(k(x)) = h'(k(x)) \cdot k'(x)$
that is, you differentiate the entire function leaving the inside function intact (that is, you treat it as if it were a single variable), then multiply by the derivative of the inside function
it's awkward using h and k as the functions for this rule, but f was taken, so...
for 3(b): the product rule says: $\frac d{dx}\left[h(x) \cdot k(x)\right] = h'(x)k(x) + h(x)k'(x)$
just apply the product rule here (use part (a) to find the derivative of $[g(x)]^{-1}$). simplify, and you will get the quotient rule
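Carrying that suggestion through, with $f$ for the numerator and $g$ for the denominator (names chosen here, not in the thread):

$\frac d{dx}\,\frac{f(x)}{g(x)} = \frac d{dx}\left(f(x)[g(x)]^{-1}\right) = f'(x)[g(x)]^{-1} - f(x)[g(x)]^{-2}g'(x) = \frac{f'(x)g(x) - f(x)g'(x)}{[g(x)]^2}$

which is the quotient rule.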
yes. as galactus said, we can see this by using the chain rule
|
{"url":"http://mathhelpforum.com/calculus/27881-help-differentiation-print.html","timestamp":"2014-04-20T00:07:11Z","content_type":null,"content_length":"8602","record_id":"<urn:uuid:9646a4af-d105-4702-96d7-b07f006ee0f8>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00471-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Modern multidimensional scaling. Theory and applications. 2nd ed.
(English) Zbl 1085.62079
Springer Series in Statistics. New York, NY: Springer (ISBN 0-387-25150-2/hbk; 978-1-4419-2046-1/pbk; 0-387-28981-X/ebook). xxi, 614 p. EUR 64.95/net; sFr 115.00; £ 50.00; $ 89.95/hbk (2005).
[For the review of the first edition from 1997 see Zbl 0862.62052.]
The authors provide a comprehensive treatment of multidimensional scaling (MDS), a family of statistical techniques for analyzing similarity or dissimilarity data on a set of objects. MDS is based on
modeling such data as distances among points in geometric spaces. The book is subdivided into five parts. In Part I, the basic notions of ordinary MDS are explained, with emphasis on how MDS can be
helpful in answering substantive questions. Later parts deal with various special models in a more mathematical way and with particular issues that are important in individual applications of MDS.
Finally, the appendix on major MDS computer programs helps the reader to choose a program for solving his problems.
This book may be used as an introduction to MDS for students in psychology, sociology and marketing, the prerequisite being an elementary background in statistics. It is also well suited for a
variety of advanced courses on MDS topics.
62H25 Factor analysis and principal components; correspondence analysis
62-01 Textbooks (statistics)
91C15 One- and multidimensional scaling (Social and behavioral sciences)
62-04 Machine computation, programs (statistics)
62H30 Classification and discrimination; cluster analysis (statistics)
62H99 Multivariate analysis
|
{"url":"http://zbmath.org/?q=an:1085.62079","timestamp":"2014-04-19T04:38:11Z","content_type":null,"content_length":"22047","record_id":"<urn:uuid:dc457ec4-229a-4bcf-ba35-b06c1f58f127>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Condense each expression to a single logarithm.:
8 log2 x + 2 log2 y
Use the properties:
`nlog_b a = log_b a^n`
`log_b a + log_b c = log_b(ac)`
Notice that terms should have the same bases (the small number beside log) to apply the properties.
Start with the use of the first property above:
`8log_2 x`
n = 8, b = 2 and a = x
`8log_2 x = log_2 x^8`
Do the same with the second term.
`2log_2 y`
n = 2, b = 2, and a = y
`2log_2 y = log_2 y^2`
So, you have `log_2x^8 + log_2y^2`
Then, use the second property.
b = 2, a = x^8 and c = y^2
`log_2x^8 + log_2y^2 = log_2(x^8 * y^2)`
Thus, the answer is `log_2 x^8y^2.`
|
{"url":"http://www.enotes.com/homework-help/condense-each-expression-single-logarithm-8-log2-x-428222","timestamp":"2014-04-17T04:38:30Z","content_type":null,"content_length":"26727","record_id":"<urn:uuid:7970f54f-7810-46c0-a950-56826afae3f9>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Henderson, CO Trigonometry Tutor
Find a Henderson, CO Trigonometry Tutor
...I look forward to working with you to reaching your GPA and test score goals! Education: B.S. Mathematics, University of Louisiana, Lafayette, LA B.S.
16 Subjects: including trigonometry, chemistry, calculus, physics
...I especially love working with students who have some fear of the subject or who have previously had an uncomfortable experience with it.I have taught Algebra 1 for many years to middle and
high school students. We have worked on applications and how this relates to things in real life. I realize that Algebra is a big step up from the math many have worked at previously with great
7 Subjects: including trigonometry, geometry, GRE, algebra 1
...I am excited to have the opportunity to help you succeed. Please feel free to reach out to me. Courses taken include: Calculus I, Calculus II, Calculus III, Differential Equations, Statistics,
Physics I, Physics II, Chemistry, Circuits I, Circuits IIAs a student in these courses I also worked on campus to provide tutoring to many students struggling in math and engineering related
20 Subjects: including trigonometry, chemistry, calculus, physics
...I was the only Freshman at my high school in Trig. I can help you or your student with some memory "tricks" to help make Trig less tedious! I believe in building strong foundations.
15 Subjects: including trigonometry, geometry, algebra 1, algebra 2
...I am currently taking a break from school, but do not want to get away from teaching and tutoring mathematics. Working one-on-one with my students has always been my favorite part about
teaching, and I am hoping to continue that with tutoring. Honestly, I enjoy tutoring so much that I have even been known to volunteer tutor.
13 Subjects: including trigonometry, calculus, statistics, geometry
|
{"url":"http://www.purplemath.com/Henderson_CO_Trigonometry_tutors.php","timestamp":"2014-04-16T19:41:56Z","content_type":null,"content_length":"24262","record_id":"<urn:uuid:a9fd5527-44aa-4529-a6ad-63fc485194f9>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: instrumental variable for quantile regression
Re: st: instrumental variable for quantile regression
From "alessia matano" <alexis.rtd@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: instrumental variable for quantile regression
Date Fri, 23 May 2008 12:14:10 +0200
Dear Austin and Brian.
Thanks first for your answers. Austin, I probably misunderstood the
answer of Brian in that previous message suggesting such a procedure;
I thought it was to solve this kind of problem. Anyway, I should be
more explicit about the data I use. I have individual panel data
(with wages and a set of individual characteristics) and I matched
them with provincial variables whose impact I would like to estimate
on different points of the workers' wage distribution.
Hence I have two problems I'd like to solve.
- The first is that, because of likely endogeneity, I would like to
instrument my provincial variable and perform a quantile regression.
This is the reason for my post. I thought that it could have been
possible to run a first stage and then perform the qreg estimation,
correcting the standard errors. But that is not correct. Thanks for this.
- The second: I would like to correct for individual unobserved
heterogeneity, and hence to see whether it is possible to estimate a
quantile fixed-effects model. I know that this question is not easy
to solve either. I read another Stata FAQ that suggests using
xtdata, fe and then applying qreg, clustering the standard errors with a
general sandwich formula (?!?). I did not understand that last point
well, and qreg does not allow clustering options. If you
have any advice on this topic too, many thanks. Below is the
link to the related Stata FAQ.
thank you again
2008/5/22 Austin Nichols <austinnichols@gmail.com>:
> alessia matano <alexis.rtd@gmail.com>:
> The ref cited elliptically in
> http://www.stata.com/statalist/archive/2003-09/msg00585.html
> http://www.stata.com/statalist/archive/2006-06/msg00472.html
> is
> Takeshi Amemiya
> "Two Stage Least Absolute Deviations Estimators"
> Econometrica, Vol. 50, No. 3 (May, 1982), pp. 689-711
> http://www.jstor.org/stable/1912608
> which proves consistency of that model (2SLAD) for the structural
> parameter \beta which determines the conditional mean of y = X\beta.
> That's the conditional mean, not median. I.e. not the usual
> interpretation of LAD in terms of quantile regression; see e.g.
> Koenker and Hallock 2001:
> http://www.econ.uiuc.edu/~roger/research/rq/QRJEP.pdf
> If you have survey data, read Stas K.'s refs in
> http://www.stata.com/statalist/archive/2007-09/msg00147.html
> So I guess I need more info from you as to what data you've got and
> what you want to estimate before I make any claims about
> consistency...
> On Thu, May 22, 2008 at 12:17 PM, alessia matano <alexis.rtd@gmail.com> wrote:
>> Dear Austin,
>> first thanks for your answer. I also found some of these articles to
>> read that could be useful, and I will do that. of course. I also found
>> out an answer in an old stata faq about the same problem where a guy
>> was suggesting the procedure below. What do you think about it?
>> sysuse auto, clear
>> program bootit
>> version 8.0
>> // Stage 1
>> regress price foreign weight length
>> predict double phat, xb
>> // Stage 2
>> qreg mpg foreign phat
>> end
>> bootstrap "bootit" _b, reps(1000) dots
>> thank you
>> alessia
|
{"url":"http://www.stata.com/statalist/archive/2008-05/msg00951.html","timestamp":"2014-04-20T03:13:05Z","content_type":null,"content_length":"10479","record_id":"<urn:uuid:8c9cf70a-6b61-408d-8c16-4b13f22c0366>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mplus Discussion >> Two-part LGM with multiple categorical indicators
Justin Delahunty posted on Thursday, August 01, 2013 - 11:00 pm
Hi Linda,
I'm trying to fit a two-part LGM to a data set with a preponderance of zeros. I am using the DATA TWOPART: syntax to model this, however the indicator for each timepoint is comprised of 4 categorical
variables which I had planned to create second order factors from.
Should I model these second order factors using the continuous variables from DATA TWOPART:, then estimate the two LGMs with one based on the binary variables from DATA TWOPART: and the other based
on the second order factors? Or is there a way to pass the second order factors to DATA TWOPART:, VARIABLES=?
Ideally I would simply use a composite variable for each time point, however as the indicators are categorical I have not been able to calculate appropriate weights for creating such variables. This
is further complicated by the scale used being changed between the first and subsequent timepoints.
The MODEL: section from my non-twopart model is as follows:
t1 BY t11*-t14;
t2 BY t21*-t24;
t3 BY t31*-t34;
i s | t1@0 t2@1 t3@2;
Thanks for your help,
Linda K. Muthen posted on Friday, August 02, 2013 - 8:37 am
Why do you think your categorical variables need two-part modeling? This is usually for count variables.
Justin Delahunty posted on Friday, August 02, 2013 - 9:06 am
Hi Linda,
The variables reference frequency, i.e.:
1 = Never
2 = Once or twice
3 = Three to five times
4 = Six to ten times
5 = Eleven to twenty times
6 = 21 to 30 times
7 = 31 to 40 times
8 = More than 40 times
with the majority of values being 1 (Never). What would be the correct approach to fitting an LGM to data such as this if not two-part?
Justin Delahunty posted on Friday, August 02, 2013 - 9:22 am
The variables referenced above only apply to t2-t6 of the data. t1 and t7 use a different scale (3 point).
3 point variables are available for all timepoints, i.e.:
1 = Never
2 = Once or twice
3 = Three or more times
to capture the first and last timepoints.
Linda K. Muthen posted on Friday, August 02, 2013 - 10:53 am
In your case, you can treat the variables as categorical by placing them on the CATEGORICAL list. Categorical data methodology can handle the piling up. It is when there is a preponderance of zeroes
with count or continuous variables that two-part modeling is helpful.
Justin Delahunty posted on Friday, August 02, 2013 - 11:27 am
Thanks Linda,
When I run the model with the variables specified as categorical, and the LGM fitted to the second order factors for each time point, I receive errors regarding convergence and non-positive definiteness.
I am using the syntax:
t1 BY t1q1* t1q2 t1q3 t1q4 ; !Time 1 Factor
t1@3 ; !To scale by original metric
t7 BY t7q1* t7q2 t7q3 t7q4 ;
t7@3 ;
i s | t1@0 t2@1 t3@2 t4@4 t5@5 t6@6 t7@8 ;
Is this correct?
Linda K. Muthen posted on Friday, August 02, 2013 - 11:36 am
Follow Example 6.15 for multiple indicator growth. See also the example in either the Topic 3 or 4 course handout on the website. You must establish and specify measurement invariance of the factors
across time before you do a growth model.
Justin Delahunty posted on Thursday, August 08, 2013 - 5:28 am
Hi Linda,
Thanks for those references, I read and listened to the Topic 3 & 4 sections, and reviewed example 6.15. Unfortunately, even with measurement invariance specified the model still won't converge.
Most researchers using this data set in the past have treated the variables in question as continuous, and rarely as count. I have had much better luck with using continuous composite variables with
this model when run as a two-part model, and was wondering if using DATA TWOPART: I could fit a multi-indicator model? The issue I am facing here is the requirement for both ALGORITHM=INTEGRATION and
scale factors.
Justin Delahunty posted on Thursday, August 08, 2013 - 5:31 am
Sorry, I forgot to mention that the point I'm stuck at is running the binary part of the model: is it necessary to use scale factors for the binary model?
Linda K. Muthen posted on Thursday, August 08, 2013 - 9:30 am
Two-part modeling can be used for any model.
Justin Delahunty posted on Thursday, August 08, 2013 - 9:34 am
Thanks Linda,
So it's not necessary to specify the scale factors as per 6.15? I assume the section specifying measurement invariance should stay.
Linda K. Muthen posted on Thursday, August 08, 2013 - 10:27 am
You can use only ML for two-part so scale factors are not an issue.
|
{"url":"http://www.statmodel.com/discussion/messages/23/13627.html?1379057057","timestamp":"2014-04-18T03:01:25Z","content_type":null,"content_length":"32533","record_id":"<urn:uuid:b0c4446d-90f2-4d00-b96b-e0b56f30cac5>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Help to formulate a non linear least squares problem
August 20th 2011, 01:51 AM #1
Mar 2011
Help to formulate a non linear least squares problem
Hi all,
a linear least squares problem can be formulated as follows:
finding a and b that best fit a*x(t) + b to y(t) is equivalent to minimizing sum[(a*x(t) + b - y(t))^2]
I would like to obtain a correct formulation of this problem:
I'm looking for a1-2-3-4, b1-2-3-4 and c1-2-3-4 that minimize the gap between
a1 * n (t) ^ 2 + b1 * n (t) + c1 and p1 (t)
a2 * n (t) ^ 2 + b 2 * n (t) + c2 and p2 (t)
a3 * n (t) ^ 2 + b3 * n (t) + c3 and q (t)
a4 * n (t) ^ 2 + b4 * n (t) + c4 and d (t)
n is known but p1 p2 q and d are not, however, they are linked by the equation:
y(t+2) = (-p2(t+2)+p1(t+2)*p2(t+1)/p1(t+1)-Te*p1(t+2))*(y(t+1)-p2(t+1)*u(t+1)+p1(t+1)*p2(t)*u (t)/p1(t)+p2(t+1)*Te*y(t)/q(t)+p2(t+1)*Te*d(t)/q(t)-p1(t+1)*y(t)/p1(t)-Te*u(t)*p1(t+1))/(-p2(t+1)+p1
How, from this information, can we properly formulate the least squares problem as a minimization problem?
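One standard formulation, sketched below for the simpler case where the four target series are observed (this sidesteps the coupling equation above — when p1, p2, q and d are unobserved, the residuals would instead have to be built from that constraint): stack the residuals of all four quadratic fits into one vector and minimize its squared norm over the twelve parameters.

# Sketch: joint nonlinear least squares for four quadratics in n(t).
# The driver n(t) and the four target series here are made-up stand-ins.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.arange(50)
n = np.sin(0.1 * t) + 2.0                   # known driver n(t)
targets = [0.2 * n**2 + 0.5 * n + 1.0 + 0.05 * rng.standard_normal(t.size)
           for _ in range(4)]               # stand-ins for p1, p2, q, d

def residuals(theta):
    # theta = (a1, b1, c1, ..., a4, b4, c4), flattened
    res = []
    for k in range(4):
        a, b, c = theta[3 * k: 3 * k + 3]
        res.append(a * n**2 + b * n + c - targets[k])
    return np.concatenate(res)              # one stacked residual vector

fit = least_squares(residuals, x0=np.zeros(12))
print(fit.x.reshape(4, 3))                  # rows are (a_k, b_k, c_k)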
|
{"url":"http://mathhelpforum.com/advanced-applied-math/186447-help-formulate-non-linear-least-squares-problem.html","timestamp":"2014-04-20T03:47:00Z","content_type":null,"content_length":"30771","record_id":"<urn:uuid:6ef56092-b474-49c2-9d9c-310c90a5f9d2>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] random.uniform documentation bug?
Robert Kern robert.kern@gmail....
Tue Feb 23 12:29:29 CST 2010
On Tue, Feb 23, 2010 at 08:21, Alan G Isaac <aisaac@american.edu> wrote:
> This behavior does not match the current documentation.
>>>> np.random.uniform(low=0.5,high=0.5)
> 0.5
>>>> np.random.uniform(low=0.5,high=0.4)
> 0.48796883601707464
> I assume this behavior is intentional and it is
> the documentation that is in error (for the case
> when high<=low)?
Well, the documentation just doesn't really address high<=low. In any
case, the claim that the results are in [low, high) is wrong thanks to
floating point arithmetic. It has exactly the same issues as the
standard library's random.uniform() and should be updated to reflect
that fact:
random.uniform(a, b)
Return a random floating point number N such that a <= N <= b for a
<= b and b <= N <= a for b < a.
The end-point value b may or may not be included in the range
depending on floating-point rounding in the equation a + (b-a) * random().
We should address the high < low case in the documentation because
we're not going to bother raising an exception when high < low.
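Both points are easy to check (assuming NumPy and IEEE-754 round-to-nearest doubles):

# 1) high < low is accepted silently; 2) the nominal end-point can be hit.
import numpy as np

print(np.random.uniform(low=0.5, high=0.4))   # no exception; value between 0.4 and 0.5

low, high = 1.0, 2.0
u = np.nextafter(1.0, 0.0)                    # largest double strictly below 1.0
print(low + (high - low) * u == high)         # True: rounding reaches high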
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2010-February/048880.html","timestamp":"2014-04-19T09:40:39Z","content_type":null,"content_length":"4082","record_id":"<urn:uuid:b99e0465-1edf-4b61-82fc-441bb09d729c>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00656-ip-10-147-4-33.ec2.internal.warc.gz"}
|
AASHTO LRFD Driven Pile Estimated Length
DRC1 (Civil/Environmental) 29 Oct 08 22:29
LRFD took a simple thing and made it, in my opinion, much more complicated than it had to be. But we are where we are.
Resistance factors can be correlated to the old factors of safety. If we could be 100% certain of the pile capacity, we would design for the factored capacity of the pile (60 kips). Since we can't, we reflect our uncertainty in our pile capacity with resistance factors. Static analysis is far more uncertain than CAPWAP, so in order to achieve at least the factored load, it has a much higher factor than the CAPWAP. The only thing that is equal between the two equations is the certainty that the pile has sufficient capacity.
AASHTO implies you should use these factors to develop estimated lengths. It is certainly conservative, especially if you are not planning on reducing the lengths based on the test pile program. For example, say you have your example and figure you need 40 feet to give you 156 kips ultimate load by static design and 92 kips by dynamic. You drive 3 test piles and tap them. Dynamic tests all come back in the 120 to 150 kip range. You can declare the piles good and drive all 40-footers. On the other hand you can analyze the tests and decide you could drop 5 feet off the length. You could give an order length of 35 feet, or you could just order 3 more test piles and drive test those. Say results are 100 to 120 kips. This is still acceptable.
Since the static method is more variable than the PDA or CAPWAP, I would say don't design the pile to the 92 kips. You may very well have test piles that are too short. However, if you are going to test the piles and have some experience with the local soil, you may design for a smaller ultimate capacity.
Any questions I wil do my best to answer.
|
{"url":"http://www.eng-tips.com/viewthread.cfm?qid=229716","timestamp":"2014-04-19T17:13:48Z","content_type":null,"content_length":"26453","record_id":"<urn:uuid:f71ec9bc-dbf5-4025-a690-9da0fa7bd0b0>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Penndel, PA Statistics Tutor
Find a Penndel, PA Statistics Tutor
Hello,I am a 26 year High School Physics Teacher at a charter school in Northeast Philadelphia. I have multiple years of experience teaching Physics, as well as a year of teaching Algebra II. I
am certified in Pennsylvania to teach secondary Physics and secondary Math.
9 Subjects: including statistics, physics, geometry, algebra 1
My name is Carl. Many have known me as Carl The Math Tutor. I have nearly completed a PhD in math (with a heavy emphasis towards the computer science side of math) from the University of
11 Subjects: including statistics, calculus, ACT Math, precalculus
Hi, my name is Jem. I hold a B.S. in Mathematics from Rensselear Polytechnic Institute (RPI), and I offer tutoring in all math levels as well as chemistry and physics. My credentials include over
10 years tutoring experience and over 4 years professional teaching experience.
58 Subjects: including statistics, reading, geometry, biology
...I missed one question on the reading and two on the math section. I desire to tutor people who have the desire to do well on the SAT and do not feel like dealing with the snobbery that may
occur with a professional tutor. I tutored many of my friends on the SAT and saw improvements in their score.
16 Subjects: including statistics, reading, chemistry, physics
...In addition to my coursework in general and advanced eukaryotic genetics as a graduate student, my work as a research professor involved a significant amount of genetics, especially population
genetics (one of the sub-disciplines of genetics that many students find particularly mysterious and con...
15 Subjects: including statistics, chemistry, calculus, biology
Here's the question you clicked on:
Could someone please help me find my R value for this equation? I'm not quite sure what it should be in order to solve the equation PV = R[(1 - (1+i)^-n)/i].
A lottery to raise funds for a hospital is advertising a $240 000 prize. The winner will receive $1000 every month for 20 years, starting a year from now. a) If the interest is 8.9% per annum, compounded annually, how much must be invested now to have the money to pay this prize?
Darn, I'm not this high in math...
Hm, the way I did it was like this: \[240000\left[\frac{1-(1.089)^{-20}}{0.089}\right]\] but I got the wrong answer. The answer should be $111 943.89, but... I haven't really gotten that yet. :S
I used this because shouldn't I find the present value?
\[12000\left(\frac{1-(1.089)^{-20}}{0.089}\right)\] 12000 because $1000 every month means $12000 every year.
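(Editor's note: as a quick numerical check of that formula, here is a short JavaScript sketch using the thread's numbers; it is an editorial illustration, not part of the original exchange.)
[code]
// PV of an ordinary annuity: PV = R * (1 - (1+i)^-n) / i
var R = 12000;  // annual payment (the $1000/month treated as $12000/year)
var i = 0.089;  // annual interest rate, compounded annually
var n = 20;     // number of years

var pv = R * (1 - Math.pow(1 + i, -n)) / i;
console.log(pv.toFixed(2)); // about 110327 -- not the book's 111943.89
[/code]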
Hmm, that's still not $111 943.89 though. :S Sorry D:
ok , getting there
yep. for sure.
what grade is that, btw?
grade 11 financial applications (X
Are you sure your answer is absolutely correct? Because I tried everything.
That's what the answer at the back of the book says. And yeah, I'm trying different possibilities as well, but nothing's been working for me either.
Really? :S How did you get that? And yeah, I know for a fact that I can't trust some of the answers in the back of the book.
Track the balance year by year:
1st year: PV
2nd year: PV(1+R) − 12000
3rd year: (PV(1+R)^2 − 12000(1+R)) − 12000
21st year: PV(1+R)^20 − 12000((1+R)^0 + (1+R)^1 + ... + (1+R)^19)
Rewrite the payment series using the geometric sum: \[\text{PV}(1+R)^{20}-12000\,\frac{(1+R)^{20}-1}{R}=0\] We know R, so solve for PV, and I get PV = 110328.
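(Editor's note: for anyone who wants to sanity-check that derivation, here is a small JavaScript simulation using the thread's numbers, added for illustration. It grows a candidate PV at the annual rate, withdraws 12000 each year, and confirms the balance lands at roughly zero after 20 payments.)
[code]
// Simulate the account balance year by year, as in the derivation above.
function finalBalance(pv, rate, payment, years) {
    var balance = pv;
    for (var k = 0; k < years; k++) {
        balance = balance * (1 + rate) - payment; // grow, then withdraw
    }
    return balance;
}

var rate = 0.089, payment = 12000, years = 20;
// Closed-form PV from the geometric-sum equation:
var pv = payment * (1 - Math.pow(1 + rate, -years)) / rate;
console.log(pv.toFixed(0));                          // ~110327, matching the post
console.log(finalBalance(pv, rate, payment, years)); // ~0 (floating-point noise)
[/code]
Incidentally, evaluating 1000 * (1 - (1 + 0.089/12)^-240) / (0.089/12) gives roughly 111 944, suspiciously close to the book's 111 943.89; one plausible explanation is that the book treated the prize as 240 monthly payments with monthly compounding, even though the problem says compounded annually.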
Hm, that seems to be the closest the answer could get to 111 943.89, so I guess I'll stick with it for now until I get back to school; I'll ask my teacher if there was a typo in the answer or something. So the equation is PV(1+R)20 − 12000(1−(1+R)20)(−R) = 0, right? R=280000
^20 **
Alright, I'll just keep it like this for now, and then when I get back to school I'll ask my teacher. Thanks for the help, imran!