content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Viewing Equations in the HTML Article
OSA uses MathJax software to render equations. The software is run server side, and readers do not need to download additional software to view the equations. Right click on any equation to invoke
the MathJax toolbar with options to change zoom behavior, to scale, to view source, etc.
Problems and Solutions
1. The equations take too long to load.
Load time can be undersirably slow with some math-heavy articles, especially in Internet Explorer 8. We recommend Firefox or Safari for best performance. However, we will continue optimizing MathJax
display for all browsers. Internet Explorer users should upgrade to version 9 if possible.
2. The equations run off the page.
MathJax breaks long equations programmatically to fit the viewing page. To ensure that automatic breaking is turned on, set your your MathJax to CSS-HTML mode (right click an equation; then use Math
Settings -> Math Renderer
Right click an equation to activate the options menu.
3. I don't see the equation structure — I just see a string of characters.
Ensure that MathJax is set to CSS-HTML mode as described above.
4. The equations have bad looking or missing fonts.
1. Install the STIX fonts (comprehensive science and engineering fonts) on your computer if possible. For information on installing fonts on various operating systems, see http://www.dafont.com/
2. If you are using Safari or another browser with explicit font settings, ensure that the settings are optimized. In Safari for Windows, in the Appearance preferences, set "Font smoothing" to
5. The equation seems to have a serious display or content error.
Please contact OSA Staff if you need help or suspect that there are problems with equation content or display. | {"url":"http://www.opticsinfobase.org/MathML-help.html","timestamp":"2014-04-17T05:11:51Z","content_type":null,"content_length":"4016","record_id":"<urn:uuid:ad810fe4-6d79-47a0-9c04-d13c73196aad>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00118-ip-10-147-4-33.ec2.internal.warc.gz"} |
Norwalk, CA Trigonometry Tutor
Find a Norwalk, CA Trigonometry Tutor
...Having taken Calculus 1, 2, 3, and Differential Equations, I have passed each class with "A". Add this to the fact that I have tutored math at CSULB's Learning Assistance Center for the past
two semesters has made me much more comfortable with the material and has left it fresh in my mind. Fini...
10 Subjects: including trigonometry, chemistry, physics, calculus
...I am open to traveling to a location that is productive and comfortable for the student for sessions. I look forward to helping you have success, both in school and on standardized tests!I
took advanced HS algebra 2 as an 8th grader, walking across the street from the middle to the high school e...
60 Subjects: including trigonometry, reading, Spanish, chemistry
...I understand the difficulties of learning complex material, and I do my best to help improve both a student's performance as well as their understanding. Performed very well in Algebra 2 in
high school. Have taken many math classes since then.
15 Subjects: including trigonometry, reading, physics, calculus
...My experience at tutoring centers was as a drop-in tutor -- students could arrive at the tutoring center without having made an appointment to receive a quick lesson or two. At the private
academy, I was an SAT instructor for small classrooms, with no more than eight students per session. I als...
10 Subjects: including trigonometry, calculus, physics, geometry
...I also have a CA teaching credential. Prior to becoming a teacher I was an Electrical Engineer and a graduate from Carnegie Mellon University in Pittsburgh, PA. I worked for a 8+ years as a
teacher in all subjects of Math in High Schools.
11 Subjects: including trigonometry, geometry, algebra 1, algebra 2
Related Norwalk, CA Tutors
Norwalk, CA Accounting Tutors
Norwalk, CA ACT Tutors
Norwalk, CA Algebra Tutors
Norwalk, CA Algebra 2 Tutors
Norwalk, CA Calculus Tutors
Norwalk, CA Geometry Tutors
Norwalk, CA Math Tutors
Norwalk, CA Prealgebra Tutors
Norwalk, CA Precalculus Tutors
Norwalk, CA SAT Tutors
Norwalk, CA SAT Math Tutors
Norwalk, CA Science Tutors
Norwalk, CA Statistics Tutors
Norwalk, CA Trigonometry Tutors | {"url":"http://www.purplemath.com/Norwalk_CA_trigonometry_tutors.php","timestamp":"2014-04-18T11:45:59Z","content_type":null,"content_length":"24178","record_id":"<urn:uuid:4462b94b-fe26-4791-a72b-4933ded1492a>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00107-ip-10-147-4-33.ec2.internal.warc.gz"} |
1103 Submissions
[7] viXra:1103.0110 [pdf] replaced on 18 May 2011
The Structuring Force of Galaxies
Authors: Jin He
Comments: 45 pages (the final 28 pages are c++ program source file)
The concept of rational structure was suggested in 2000. A flat material distribution is called the rational structure if there exists a special net of orthogonal curves on the plane, and the ratio
of mass density at one side of a curve (from the net) to the one at the other side is constant along the curve. Such curve is called a proportion curve. Such net of curves is called an orthogonal net
of proportion curves. Eleven years have passed and a rational sufficient condition for given material distribution is finally obtained. This completes the mathematical basis for the study of rational
structure and its galaxy application. People can fit the stellar distribution of a barred spiral galaxy with exponential disk and dual-hand structure by varying their parameter values. If the
conjecture is proved that barred galaxies satisfy a rational suffcient condition then the assumption of galaxy rational origin will be established.
Category: Astrophysics
[6] viXra:1103.0101 [pdf] submitted on 25 Mar 2011
Astrophysics at Home. Further Hunting for Possible Micrometeorites.
Authors: Giuliano Bettini
Comments: 11 pages. In English
This paper follows "Astrophysics at home. Micrometeorites" posted in Vixra on 15 Feb 2011. As Jon Larsen says: "Up until now splendid research on MMs has been executed at secure localities (the South
Pole well, prehistoric layers beneath the Indian Ocean, at the Greenland ice cap, etc), but consistent research in order to identify the similar objects found for instance in our populated areas, is
practically absent". I present here a lot of new specimens found in a city environment, photos, a lot of questions and few answers.
Category: Astrophysics
[5] viXra:1103.0090 [pdf] submitted on 23 Mar 2011
The Structuring Force of Natural World
Authors: Jin He
Comments: 13 pages. In chinese
The assumption that the mass distribution of spiral galaxies is rational was suggested 11 years ago. The rationality means that on any spiral galaxy disk plane there exists a special net of
orthogonal curves. The ratio of mass density at one side of a curve (from the net) to the one at the other side is constant along the curve. Such curve is called a proportion curve. Such net of
curves is called an orthogonal net of proportion curves. I also suggested that the arms and rings are the disturbance to the rational structure. To achieve the minimal disturbance, the disturbing
waves trace the orthogonal or non-orthogonal proportion curves. I proved 6 years ago that exponential disks and dual-handle structures are rational. Recently, I have also proved that rational
structure satisfies a cubic algebraic equation. Based on these results, this paper ultimately demonstrates visually what the orthogonal net of proportion curves looks like if the superposition of a
disk and dual-handle structures is still rational. That is, based on the natural solution of the equation, the rate of variance along the 'radial' direction of the logarithmic mass density is
obtained. Its image is called the 'basket graph'. The myth of galaxy structure will possibly be resolved based the further study of 'basket graphs'.
Category: Astrophysics
[4] viXra:1103.0072 [pdf] submitted on 16 Mar 2011
"Aspin Bubbles" and the Force of Gravity
Authors: Yoël Lana-Renault
Comments: 9 pages
Based on the "Aspin Bubbles" theory, we will demonstrate that the force of gravity between two neutral matters is always a residue of the electrical forces that act among their elementary particles.
Category: Astrophysics
[3] viXra:1103.0071 [pdf] submitted on 16 Mar 2011
"Aspin Bubbles" and Gravitational Deflection
Authors: Yoël Lana-Renault
Comments: 10 pages with 4 figures
Based on the "Aspin Bubbles" theory, we propose a velocity function v(r) for light that is exclusively dependent on the gravity g(r) that exists at each point P(r) of space, and with which, by
applying the laws of refraction, the gravitational deflections of light measured up to this point are obtained .
Category: Astrophysics
[2] viXra:1103.0027 [pdf] submitted on 10 Mar 2011
A Possible Alternative to Dark Matter
Authors: I. V. Grossu
Comments: 2 pages.
Inspired by existing theories that consider modifications of Newton's law for extragalactic systems, I propose, as an alternative to dark matter, the existence of a new force, constructed in an
analog way with the magnetic field. In this context, an encouraging qualitative agreement with the rotation curves of disk galaxies was obtained. It is important to emphasis on the very basic level
of the treatment presented in this paper. Further analyses along those lines are currently in progress.
Category: Astrophysics
[1] viXra:1103.0021 [pdf] submitted on 8 Mar 2011
Two Gravitational Singularities
Authors: Javier Bootello
Comments: 3 pages.
This article presents a virtual gravitational potential, which could explain some recent astronomical singularities: the secular increase of the eccentricity of the orbit of the Moon and the increase
of the Astronomical Unit. Anyway, it is a theoretical potential without any proof of its physical reality.
Category: Astrophysics | {"url":"http://vixra.org/astro/1103","timestamp":"2014-04-18T18:13:02Z","content_type":null,"content_length":"10394","record_id":"<urn:uuid:99c63074-e0c6-4b78-89b9-5fc1353ab8fa>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00449-ip-10-147-4-33.ec2.internal.warc.gz"} |
Teaching in the big wired world
WOW- Maths Starter a Day
This week’s WOW is Maths Starter a Day. This site features a calendar that has links to a different maths problem for each day. The problems are based on a range of maths topics and encourage problem
solving skills as they provide JUST enough information for the students to solve the problem. (Can be quite frustrating for some students at first until they start to develop their problem solving
I have used this in class as an introduction to the Maths lesson and it is also an excellent activity for early finishers. The problems range in difficulty and I would suggest that they are probably
best suited to high grade 3 students and upwards.
All problems have the answers and usually some demonstration / explanation of how to solve the problems at the bottom of the page.
Here’s the poster for Maths Starter a Day
One thought on “WOW- Maths Starter a Day”
1. Loving this one Riss – thanks. First day back this term…it’s there!! Cheers | {"url":"http://mrsleung.edublogs.org/2009/02/26/wow-maths-starter-a-day/","timestamp":"2014-04-18T18:29:20Z","content_type":null,"content_length":"27096","record_id":"<urn:uuid:1353bc65-a619-472b-85f6-30b758a38c46>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00603-ip-10-147-4-33.ec2.internal.warc.gz"} |
Google Answers: Bank reconciliations using MS Excel
IMPORTANT -- PLEASE READ
This answer is not finished until you're satisfied with it. If you
choose to rate this answer, please only do so AFTER allowing me the
opportunity to make it satisfactory to you. Thank you for your
Greetings -
This is actually much more easily handled in Microsoft Access, but if
I were to handle this in Excel, the following is how I would do it:
The way I read your setup, you've got 4 columns: your cash book
entries, your bank statement entries, a cash book description column,
and a bank statement description column. You need to make a fifth
column. Label it RECONCILED. When we're done making the formula,
each cell in this column will denote whether or not the contents of
your cash book entries and bank statement entries match. So, here's
an example:
Column A Column B Column C Column D Column E
1 CASHBOOK BANKSTATEMENT CASHDESCRIPT BANKDESCRIPT
2 $400 $400 Bill Payment Bill Payment
3 $800 $293 Bill Payment Bonus Payout
4 $200 $200 Supplies Lunch
As we can see, the cells in row 2 match. The third and fourth rows
don't. We need to create a formula [also called "function"] in the
first row of the RECONCILED column [which will go in each cell of that
column] that will compare the cells of columns A and B and columns C
and D [on each row] and determine if they match. If they match, a
TRUE will be entered in the cell. If they don't match, FALSE will be
entered. Please note, the description entries for your cash book and
your bank entries MUST have the same naming scheme in order for this
to work. For instance, if your cash book description says "Bill
Payment" for when you've paid a bill, but your bank statement
descriptions says "Payment of Bill", then even though conceptually
these two things are the same, the formula will not be able to match
In the first cell of your RECONCILED column, you want to enter the
following formula:
This "AND" formula/function works by evaluating up to 30 expressions,
separated by commas [in the above example, we have 2 expressions being
evaluated]. Each expression must be true in order for it to return
[or, "render a calculation of"] TRUE. If one of the expressions is
false, this function will return FALSE.
Once you have that formula entered into that first cell, all you have
to do is "auto-fill" this formula into the rest of the cells in that
column by clicking and holding the bottom right corner of that cell,
and dragging it down into the rest of your cells. This will
automatically fill the formula in for every row and do the associated
calculations for each row.
Now the first 4 rows of your spreadsheet should look something like
Column A Column B Column C Column D Column E
1 CASHBOOK BANKSTATEMENT CASHDESCRIPT BANKDESCRIPT
2 $400 $400 Bill Payment Bill Payment TRUE
3 $800 $293 Bill Payment Bonus Payout FALSE
4 $200 $200 Supplies Lunch FALSE
All the way down your spreadsheet, the RECONCILED column should be
filled with TRUE or FALSE as appropriate [manually double-check a few
to make sure it's calculating correctly.
Now, all you have to do is highlight your entire spreadsheet and sort
it by your RECONCILED column [Column E in this example]. All of your
reconciled bank entries will be listed consecutively, and all of your
unreconciled entries will also be listed consecutively. You can
simply copy and paste any entries you wish into another worksheet.
I hope this was clear. If you have any questions, please don't
hesitate to ask.
Additional Link:
A few Useful Microsoft Excel Resources
Excel Tutorials by John F. Lacher CPA
Search strategy:
See excel help file for more information on this and other functions
GA Researcher
Clarification of Answer by jbf777-ga on 10 Jan 2003 11:48 PST
Oops... the system misformatted my answer. The "RECONCILED" should be
under the Column E heading. Let my try it again:
IMPORTANT -- PLEASE READ
This answer is not finished until you're satisfied with it. If you
choose to rate this answer, please only do so AFTER allowing me the
opportunity to make it satisfactory to you. Thank you for your
Greetings -
This is actually much more easily handled in Microsoft Access, but if
I were to handle this in Excel, the following is how I would do it:
The way I read your setup, you've got 4 columns: your cash book
entries, your bank statement entries, a cash book description column,
and a bank statement description column. You need to make a fifth
column. Label it RECONCILED. When we're done making the formula,
each cell in this column will denote whether or not the contents of
your cash book entries and bank statement entries match. So, here's
an example:
Column A Column B Column C Column D Column E
1 CASHBOOK BANKSTATEMENT CASHDESCRIPT BANKDESCRIPT
2 $400 $400 Bill Payment Bill Payment
3 $800 $293 Bill Payment Bonus Payout
4 $200 $200 Supplies Lunch
As we can see, the cells in row 2 match. The third and fourth rows
don't. We need to create a formula [also called "function"] in the
first row of the RECONCILED column [which will go in each cell of that
column] that will compare the cells of columns A and B and columns C
and D [on each row] and determine if they match. If they match, a
TRUE will be entered in the cell. If they don't match, FALSE will be
entered. Please note, the description entries for your cash book and
your bank entries MUST have the same naming scheme in order for this
to work. For instance, if your cash book description says "Bill
Payment" for when you've paid a bill, but your bank statement
descriptions says "Payment of Bill", then even though conceptually
these two things are the same, the formula will not be able to match
In the first cell of your RECONCILED column, you want to enter the
following formula:
This "AND" formula/function works by evaluating up to 30 expressions,
separated by commas [in the above example, we have 2 expressions being
evaluated]. Each expression must be true in order for it to return
[or, "render a calculation of"] TRUE. If one of the expressions is
false, this function will return FALSE.
Once you have that formula entered into that first cell, all you have
to do is "auto-fill" this formula into the rest of the cells in that
column by clicking and holding the bottom right corner of that cell,
and dragging it down into the rest of your cells. This will
automatically fill the formula in for every row and do the associated
calculations for each row.
Now the first 4 rows of your spreadsheet should look something like
Column A Column B Column C Column D Column E
1 CASHBOOK BANKSTATEMENT CASHDESCRIPT BANKDESCRIPT RECONCILED
2 $400 $400 Bill Payment Bill Payment TRUE
3 $800 $293 Bill Payment Bonus Payout FALSE
4 $200 $200 Supplies Lunch FALSE
All the way down your spreadsheet, the RECONCILED column should be
filled with TRUE or FALSE as appropriate [manually double-check a few
to make sure it's calculating correctly.
Now, all you have to do is highlight your entire spreadsheet and sort
it by your RECONCILED column [Column E in this example]. All of your
reconciled bank entries will be listed consecutively, and all of your
unreconciled entries will also be listed consecutively. You can
simply copy and paste any entries you wish into another worksheet.
I hope this was clear. If you have any questions, please don't
hesitate to ask.
Additional Link:
A few Useful Microsoft Excel Resources
Excel Tutorials by John F. Lacher CPA
Search strategy:
See excel help file for more information on this and other functions
GA Researcher
Clarification of Answer by jbf777-ga on 11 Jan 2003 12:50 PST
Hello -
If I understand you right, you're saying your columns are not next to
each other? For example, you have one column as column B and one
column as column F? Columns don't have to be side by side in order
for any formula to work. For instance in the following formula:
=AND(A5=V4, B2=R12, M7=X7)
Cell A5 is compared to V4 [column A vs. column V], cell B2 is compared
to R12 [column B vs. column R], and M7 is compared to X7 [column M vs.
column X]. If A5=V4 AND B2=R12 AND M7=X7 the function will be
evaluated as True. You can simply adjust the formula as need-be to
reflect how your worksheet is set up.
Please let me know if you need any additional clarification. I'm here
to help.
GA Researcher | {"url":"http://answers.google.com/answers/threadview?id=139777","timestamp":"2014-04-17T03:50:11Z","content_type":null,"content_length":"22299","record_id":"<urn:uuid:dc02a355-ad2d-4339-bb79-827f8438e9ea>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00373-ip-10-147-4-33.ec2.internal.warc.gz"} |
differential equationmathematical statement containing one or more derivatives, that derivatives—that is, terms representing the relationships between the rates of change of continuously varying
quantities. Differential equations are very common in science and engineering, as well as in many other fields of quantitative study, because what can be directly observed and measured for systems
undergoing changes are their rates of change. The solution of a differential equation is, in general, an algebraic equation expressing the functional dependence of one variable upon one or more
others; it ordinarily contains constant terms that are not present in the original differential equation. Another way of saying this is that the solution of a differential equation produces a
function that can be used to predict the behaviour of the original system, at least within certain constraints.
Differential equations are classified into several broad categories, and these are in turn further divided into many subcategories. The most important categories are those of the so-called ordinary
differential equations and the so-called partial differential equations. When the function involved in the equation depends upon on only a single variable, its derivatives are ordinary derivatives
and the differential equation is classed as an ordinary differential equation. If, on On the other hand, if the function depends upon on several independent variables, so that its derivatives are
partial derivatives, then the differential equation is classed as a partial differential equation. The following are examples of ordinary differential equations:
In these, y stands for the function, and either t or x is the independent variable. The symbols k and m are used here to stand for specific constants.
Whichever the type may be, a differential equation is said to be of the nth order if it involves a derivative of the nth order but no derivative of an order higher than this. The equationis an
example of a partial differential equation of the second order. The theories of ordinary and partial differential equations are markedly different, and for this reason the two categories are treated
Instead of a single differential equation, the object of study may be a simultaneous system of such equations. The formulation of the laws of dynamics frequently leads to such systems. In many cases,
a single differential equation of the nth order is advantageously replaceable by a system of n simultaneous equations, each of which is of the first order, so that techniques from linear algebra can
be applied.
An ordinary differential equation in which, for example, the function and the independent variable are denoted by y and x is in effect an implicit summary of the essential characteristics of y as a
function of x. These characteristics would presumably be more accessible to analysis if an explicit formula for y could be produced. Such a formula, or at least an equation in x and y (involving no
derivatives) that is deducible from the differential equation, is called a solution of the differential equation. The process of deducing a solution from the equation by the applications of algebra
and the calculus is called that of solving or integrating the equation. It should be noted, however, that the differential equations that can be integrated explicitly solved form but a small
minority. The chances are large, in the instance of a differential equation selected at random, that the equation is itself the simplest mode of summarizing the characteristics of the function and
that even theoretically no solving formula in the usual sense exists. In such instances, the function Thus, most functions must be studied by indirect methods. Even its existence must be proved when
there is no possibility of producing it for inspection. In practice, methods from numerical analysis, involving computers, are employed to obtain useful approximate solutions. | {"url":"http://media-1.web.britannica.com/eb-diffs/910/162910-8909-30412.html","timestamp":"2014-04-18T20:44:24Z","content_type":null,"content_length":"8065","record_id":"<urn:uuid:be16e227-19c3-41a2-9196-293fc6c26680>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00536-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
help me @Daish96 @dumbsearch2 @damien @Danny_Boy
• one year ago
• one year ago
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
omgggggggggg idk
Best Response
You've already chosen the best response.
not in that problem
Best Response
You've already chosen the best response.
@damien @Daish96 @DLS @dietrich_harmon @Koikkara @kugler97 @Kpinky
Best Response
You've already chosen the best response.
@InYourHead .....
Best Response
You've already chosen the best response.
@ThatHippieKid @haleylou56 @heena @hewsmike @hannahbabiexoxo
Best Response
You've already chosen the best response.
i really need help
Best Response
You've already chosen the best response.
@nincompoop @NickR @nolesgirl328 @nolesgirl328
Best Response
You've already chosen the best response.
64b^10/b^9c^5= 64b/c^5
Best Response
You've already chosen the best response.
omggggggg confused thats the anwser?????
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
ummm thanks can u help me with these other 3 questions?
Best Response
You've already chosen the best response.
but on one condition u hav to also try with me and have to try to undrstand how towork on such question is it fine?
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
)Robert loves football and wants to start practice throwing daily. The first day he practices for 3 minutes. The next day he triples his time. Robert continues to triple his practice time every
day. How many minutes did Robert practice on the fourth day?
Best Response
You've already chosen the best response.
ok on firt day he did 3^1-3min den on second day he triples his time mean 3^3=3x3x3=27min
Best Response
You've already chosen the best response.
now on third day u tell me
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
i think triples mean increasing power means 27^3
Best Response
You've already chosen the best response.
danya what happen?
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
@heena ??????????????????
Best Response
You've already chosen the best response.
what dear ? now work for fourth dat imilarly as u did for third day
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
didnt u get the trick how to work on it?
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
is that right???
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
@jonnymiller @Jenn777 @hewsmike @heena @whpalmer4 @Agent_Sniffles @ash2326 @Koikkara @kelliegirl33 is this right
Best Response
You've already chosen the best response.
@ash2326 i think i made a mistake help her plz
Best Response
You've already chosen the best response.
orry danya but i think i m not correct here
Best Response
You've already chosen the best response.
Im so lost lol
Best Response
You've already chosen the best response.
the questiopn is at the top
Best Response
You've already chosen the best response.
its okay @heena
Best Response
You've already chosen the best response.
@danya1 Always ask new question as a new post Robert loves football and wants to start practice throwing daily. The first day he practices for 3 minutes. The next day he triples his time. Robert
continues to triple his practice time every day. How many minutes did Robert practice on the fourth day?
Best Response
You've already chosen the best response.
He triples his time means times 3 First day 3 minutes=3 mins Second 3*3 minutes=9 mins Can you find for the third day and 4th day? It's just multiplication :)
Best Response
You've already chosen the best response.
yes thats the question
Best Response
You've already chosen the best response.
do i do 9x3??
Best Response
You've already chosen the best response.
Is this how to do it.....are you sure it doesn't have anything to do with exonents ? first day = 3 min 2nd day = 3 x 3 = 9 3rd day = 9 x 3 = 27 4th day = 27 x 3 = 81
Best Response
You've already chosen the best response.
oops...typo...exponents ?
Best Response
You've already chosen the best response.
yes @danya1
Best Response
You've already chosen the best response.
or is it 9x9x9
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
see...that is what is confusing me....exponents
Best Response
You've already chosen the best response.
I suppose ash is right.....we are making this too hard when it isn't.
Best Response
You've already chosen the best response.
thrice is 3 times
Best Response
You've already chosen the best response.
Thanks for explaining ash....oh, and I am sorry I stepped on your turf...I will do better
Best Response
You've already chosen the best response.
no problem @kelliegirl33
Best Response
You've already chosen the best response.
you deserve the medal ash :)
Best Response
You've already chosen the best response.
Do you get @danya1
Best Response
You've already chosen the best response.
so the second day was 9 and the third is 9x3=27 and the fourth day is 27x3=81
Best Response
You've already chosen the best response.
Yes @danya1 :D
Best Response
You've already chosen the best response.
ash is good
Best Response
You've already chosen the best response.
Thanks @kelliegirl33 :) Likewise :D
Best Response
You've already chosen the best response.
can u help me withe the first problem i wrote at the top thanks
Best Response
You've already chosen the best response.
Okay, you want to simplify it \[(\frac{2b^2}{b^4c})^5\] First cancel the common terms \[(\frac{2\cancel{b^2}}{b^{\cancel 4 2}c})^5\] so we get \[(\frac{2}{b^2c})^5\] Do you understand till here?
Best Response
You've already chosen the best response.
ummmmmmmmmmmmmm yea
Best Response
You've already chosen the best response.
Now just raise the terms to the power \[\frac{2^5}{b^{2^5}c^5}\] \[\frac{32}{b^{10}c^5}\]
Best Response
You've already chosen the best response.
ok thats it
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
thankswhat about this one
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
@danya1 Please close this question and ask this question in a new post :) That way you'll get help from many users :)
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/51269ac1e4b0111cc68f0b9c","timestamp":"2014-04-21T12:48:47Z","content_type":null,"content_length":"179193","record_id":"<urn:uuid:7355b389-2f57-4635-979c-d1014b771250>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00149-ip-10-147-4-33.ec2.internal.warc.gz"} |
cbind {base}
Combine R Objects by Rows or Columns
Take a sequence of vector, matrix or data frames arguments and combine by columns or rows, respectively. These are generic functions with methods for other R classes.
cbind(..., deparse.level = 1)
rbind(..., deparse.level = 1)
vectors or matrices. These can be given as named arguments. Other R objects will be coerced as appropriate: see sections ‘Details’ and ‘Value’. (For the "data.frame" method of cbind these can be
further arguments to data.frame such as stringsAsFactors.)
integer controlling the construction of labels in the case of non-matrix-like arguments (for the default method):
deparse.level = 0 constructs no labels; the default,
deparse.level = 1 or 2 constructs labels from the argument names, see the ‘Value’ section below.
The functions cbind and rbind are S3 generic, with methods for data frames. The data frame method will be used if at least one argument is a data frame and the rest are vectors or matrices. There can
be other methods; in particular, there is one for time series objects. See the section on ‘Dispatch’ for how the method to be used is selected.
In the default method, all the vectors/matrices must be atomic (see vector) or lists. Expressions are not allowed. Language objects (such as formulae and calls) and pairlists will be coerced to
lists: other objects (such as names and external pointers) will be included as elements in a list result. Any classes the inputs might have are discarded (in particular, factors are replaced by their
internal codes).
If there are several matrix arguments, they must all have the same number of columns (or rows) and this will be the number of columns (or rows) of the result. If all the arguments are vectors, the
number of columns (rows) in the result is equal to the length of the longest vector. Values in shorter arguments are recycled to achieve this length (with a warning if they are recycled only
When the arguments consist of a mix of matrices and vectors the number of columns (rows) of the result is determined by the number of columns (rows) of the matrix arguments. Any vectors have their
values recycled or subsetted to achieve this length.
For cbind (rbind), vectors of zero length (including NULL) are ignored unless the result would have zero rows (columns), for S compatibility. (Zero-extent matrices do not occur in S3 and are not
ignored in R.)
For the default method, a matrix combining the ... arguments column-wise or row-wise. (Exception: if there are no inputs or all the inputs are NULL, the value is NULL.)
The type of a matrix result determined from the highest type of any of the inputs in the hierarchy raw < logical < integer < double < complex < character < list .
For cbind (rbind) the column (row) names are taken from the colnames (rownames) of the arguments if these are matrix-like. Otherwise from the names of the arguments or where those are not supplied
and deparse.level > 0, by deparsing the expressions given, for deparse.level = 1 only if that gives a sensible name (a ‘symbol’, see is.symbol).
For cbind row names are taken from the first argument with appropriate names: rownames for a matrix, or names for a vector of length the number of rows of the result.
For rbind column names are taken from the first argument with appropriate names: colnames for a matrix, or names for a vector of length the number of columns of the result.
Data frame methods
The cbind data frame method is just a wrapper for data.frame(..., check.names = FALSE). This means that it will split matrix columns in data frame arguments, and convert character columns to factors
unless stringsAsFactors = FALSE is specified.
The rbind data frame method first drops all zero-column and zero-row arguments. (If that leaves none, it returns the first argument with columns otherwise a zero-column zero-row data frame.) It then
takes the classes of the columns from the first data frame, and matches columns by name (rather than by position). Factors have their levels expanded as necessary (in the order of the levels of the
levelsets of the factors encountered) and the result is an ordered factor if and only if all the components were ordered factors. (The last point differs from S-PLUS.) Old-style categories (integer
vectors with levels) are promoted to factors.
The method dispatching is not done via UseMethod(), but by C-internal dispatching. Therefore there is no need for, e.g., rbind.default.
The dispatch algorithm is described in the source file (‘.../src/main/bind.c’) as
1. For each argument we get the list of possible class memberships from the class attribute.
2. We inspect each class in turn to see if there is an applicable method.
3. If we find an applicable method we make sure that it is identical to any method determined for prior arguments. If it is identical, we proceed, otherwise we immediately drop through to the
default code.
If you want to combine other objects with data frames, it may be necessary to coerce them to data frames first. (Note that this algorithm can result in calling the data frame method if all the
arguments are either data frames or vectors, and this will result in the coercion of character vectors to factors.)
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole.
See Also
c to combine vectors (and lists) as vectors, data.frame to combine vectors and matrices as a data frame.
m <- cbind(1, 1:7) # the '1' (= shorter vector) is recycled
m <- cbind(m, 8:14)[, c(1, 3, 2)] # insert a column
cbind(1:7, diag(3)) # vector is subset -> warning
cbind(0, rbind(1, 1:3))
cbind(I = 0, X = rbind(a = 1, b = 1:3)) # use some names
xx <- data.frame(I = rep(0,2))
cbind(xx, X = rbind(a = 1, b = 1:3)) # named differently
cbind(0, matrix(1, nrow = 0, ncol = 4)) #> Warning (making sense)
dim(cbind(0, matrix(1, nrow = 2, ncol = 0))) #-> 2 x 1
## deparse.level
dd <- 10
rbind(1:4, c = 2, "a++" = 10, dd, deparse.level = 0) # middle 2 rownames
rbind(1:4, c = 2, "a++" = 10, dd, deparse.level = 1) # 3 rownames (default)
rbind(1:4, c = 2, "a++" = 10, dd, deparse.level = 2) # 4 rownames
Documentation reproduced from R 3.0.2. License: GPL-2. | {"url":"http://www.inside-r.org/r-doc/base/rbind","timestamp":"2014-04-18T11:30:15Z","content_type":null,"content_length":"34147","record_id":"<urn:uuid:3adcc1f2-9eea-4903-a7ae-b329708a47a9>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00019-ip-10-147-4-33.ec2.internal.warc.gz"} |
from the Automation List department...
pressure and temp compensation formula for the mass flow calculation of superheated steam
we have added the pressure and temperature compensation for calculating the mass flow of the superheated steam. The design pressure and temperature of the orifice plate are 43kg/cm2 and 420 C and the
differential pressure created by the orifice plate is 10000mmWC.
kindly suggest the correct formula for the above
Temperature & pressure compensation is not at all required for mass flow rate only is accountable in volumetric flow rate. As you are using differential pressure type flow element, it ensures that
the transmitter is having linear and in DCS/PLC square root. And also check the specifications whether mass flow or volumetric flow is considered during the process of manfacturing.
Anyway, I have the formula, it is:
SQUARE ROOT (P+1.033 * Td+273/ Pd+ 1.033 * T+ 293)
P--- Kg/Cm2
Make sure that you are using gauge pressure transmitter.
Effect of temperature & pressure is negligible in
mass flow as it is movement of mass/time, but affects volume as density changes with pressure & temperature.
If you are unable to solve this, can you give me a clear picture or contact me at aswasidutt2 @ hotmail. com
I've been interested in this topic for a while as well.
Why is it you dont take into account the size of the orifice?
Also, is it possible to use this (or maybe some other that people know about) to calculate the mass flow through a pipe as it enters a main header (i.e. you have 3 x 8" pipes connected to a common
14" header and you want to know the mass flow through each of the 8" pipes)?
Glenn, you use the orifice plate size when you calculate the flow constant. The flow (without compensation for temperature or pressure variations) is then normally calculated as
Q = C * sqrt(dp)
Where C is the flow constant that you normally find bu using an orifice plate sizing program. We use the program FlowCalc
There are also other more accurate formulas you can use. This is described in the FlowCalc manual.
This formula may be used:
Q=Qin X Sq. root (Tr+273.14)/(Tin+273.14)X Sq root (Pin+Pabs)/(Pr+Pabs)
Q - Compensated flow
Qin - measured flow
Tr & Pr are temp & press considered during flow element design
Pabs - absolute press
Tin & Pin - measured temp & press using TT & PT in the line.
Hope that helps.
Dear sir
as you know we will measure flow as normal cubic meter per hour. is it possible to compensate this value by using mentioned formula?
thank you
What, exactly, is a normal cubic meter of saturated steam? Superheated steam does not exist at normal conditions!! (Normal conditions are 0 deg C temperature and 1 atm pressure.)
dear sir
its true, in normal condition steam is not really exist but we consider this value only for comparison in PFD in related to other gases. any how for other gases?
is there any difference between square root in transmitter or in DCS?
Expressing flow of a gas in terms of normal cubic meters per time unit is effectively a mass flow rather than a volumetric flow. It only has meaning if the substance is a gas at normal conditions. If
1 cubic meter of the gas weighed 0.5 kg at normal conditions, then 1000 Nm3/minute is the same as 500 kg/minute for that gas.
On your 2nd question, mathematically there is no difference in having the square root operation in the transmitter instead of the DCS. However, unless the pressure and temperature compensation is
also done in the transmitter you are not saving anything in terms of DCS loading - the temperature and pressure compensation should be done before the square root is extracted.
I have a query on pressure temp. compensation if you can help. My problem is that the vendor gave the orifice flow calculated in terms of kg/hr and not volumeric flow rate, the orifice has a DP
I would like to know is there a way out to calculate this mass flow after pressure temperature compensation?
Or the only way is to get it calculated into volmetric flow and then calculate the compensation. It will be a big task and will take time
I think if you look at this closer you will find that the d/p across the orifice is related to the mass of the material flowing through it and not the volume.
If you know the density (specific gravity) or specific volume of the fluid, you can convert between mass flow and volumetric flow.
Is it really required to convert it back to volumetric flow for pressure temp. conversions I think as this Mass is calculated at a fixed Temp and Press, the same can be used for pressure temp. as
well without any conversion. Pl confirm.
If the mass flow has already been compensated for pressure and temperature (which is really density compensation), and you know the density or specific volume, you divide the compensated mass flow by
density (or multiply by specific volume) to get actual volumetric flow.
Divide the mass flow by the density of the fluid (or multiply by the specific volume) to convert it to volumetric flow.
In visual basic format:
Function StmFlow(Density, WC, PipeID, OrifID, ExpF, DisC)
StmFlow = (358.92684) * (DisC * (OrifID ^ 2) / (Sqr(1 - ((OrifID / PipeID) ^ 4)))) * (Sqr(Density)) * (Sqr(WC)) * ExpF
End Function
Formula for Temperature and Pressure
1. The problem statement, all variables and given/known data
A quantity of gas occupies a volume of 0.5m. The pressure of the gas is 300kPa, when its temperature is 30°C. Calculate the pressure of the gas if it is compressed to half of its volume and heated to
a temperature of 140°C.
2. Relevant equations
(P1 x V1)/T1 = (P2 x V2)/T2
3. The attempt at a solution
P1 = 300 kPa
V1 = 0.5m (i'm not sure what unit of volume i'm meant to be using or converting to so i'm using as it is)
T1 = 30°C (303 Kelvin)
P2 = THIS IS WHAT I NEED TO FIND OUT
V2 = 0.25m
T2 = 140°C (413 Kelvin)
I moved stuff around to make V2 after the "=" Is this right?
= (300x0.5)/303 = (P2 x 0.25m)/413
= (300x0.5)/303/0.25x413 = 817.82kPa?
You need to be careful with calcs like "(300x0.5)/303/0.25" your calculator might not do them in the order you think.
It might be safer to do something like:
P1V1/T1 = P2V2/T2
so P2 = P1(V1/V2) * (T2/T1)
this also makes it obvious that units of the answer are correct.
Your use of this site is subject to the terms and conditions set forth under
Legal Notices
and the
Privacy Policy
. Please read those terms and conditions carefully. Subject to the rights expressly reserved to others under Legal Notices, the content of this site and the compilation thereof is © 1999-2014 Nerds
in Control, LLC. All rights reserved.
Users of this site are benefiting from open source technologies, including PHP, MySQL and Apache. Be happy.
Have an adequate day. | {"url":"http://www.control.com/thread/1026237316","timestamp":"2014-04-18T19:13:55Z","content_type":null,"content_length":"26732","record_id":"<urn:uuid:76269ae6-7f4b-47be-9d16-3cba2a4e1a33>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00315-ip-10-147-4-33.ec2.internal.warc.gz"} |
Optimal Metric Planning with State Sets in Automata Representation
Bjoern Borowsky, Stefan Edelkamp
This paper proposes an optimal approach to infinite-state action planning exploiting automata theory. State sets and actions are characterized by Presburger formulas and represented using minimized
finite state machines. The exploration that contributes to the planning via model checking paradigm applies symbolic images in order to compute the deterministic finite automaton for the sets of
successors. A large fraction of metric planning problems can be translated into Presburger arithmetic, while derived predicates are simply compiled away. We further propose three algorithms for
computing optimal plans; one for uniform action costs, one for the additive cost model, and one for linear plan metrics. Furthermore, an extension for infinite state sets is discussed.
Subjects: 1.11 Planning; 15.7 Search
Submitted: Apr 14, 2008
This page is copyrighted by AAAI. All rights reserved. Your use of this site constitutes acceptance of all of AAAI's terms and conditions and privacy policy. | {"url":"http://www.aaai.org/Library/AAAI/2008/aaai08-139.php","timestamp":"2014-04-18T10:46:47Z","content_type":null,"content_length":"2776","record_id":"<urn:uuid:9f00de77-d7a4-406f-b23b-155ffe3a9153>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00603-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] chebyshev polynomials
Pauli Virtanen pav@iki...
Thu Sep 24 14:18:06 CDT 2009
to, 2009-09-24 kello 11:51 -0600, Charles R Harris kirjoitti:
> Would it be appropriate to add a class similar to poly but instead
> using chebyshev polynomials? That is, where we currently have
Yes, I think. scipy.special.orthogonal would be the best place for this,
I think. Numpy would probably be a wrong place for stuff like this.
Ideally, all the orthogonal polynomial classes in Scipy should be
rewritten to use more a stable representation of the polynomials.
Currently, they break down at high orders, which is a bit ugly.
I started working on something related in the spring. The branch is
but as you can see, it hasn't got far (eg. orthopoly1d.__call__ is
effectively a placeholder). Anyway, the idea was to divide the
orthopoly1d class to subclasses, each having more stable
polynomial-specific evaluation routines. Stability-preserving arithmetic
would be supported at least within the polynomial class.
As a side note, should the cheby* versions of `polyval`, `polymul` etc.
just be dropped to reduce namespace clutter? You can do the same things
already within just class methods and arithmetic.
More information about the NumPy-Discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-September/045532.html","timestamp":"2014-04-19T05:02:24Z","content_type":null,"content_length":"3721","record_id":"<urn:uuid:3a3e4736-d047-4c9c-90ac-f64308a59fc0>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00090-ip-10-147-4-33.ec2.internal.warc.gz"} |
Statistical Significance Testing
Northern Prairie Wildlife Research Center
The Insignificance of Statistical Significance Testing
Literature Cited
Abelson, R. P. 1997. A retrospective on the significance test ban on 1999 (If
there were no significance tests, they would be invented). Pages
117-141 in L. L. Harlow, S. A. Mulaik, and J. H. Steiger, editors. What
if there were no significance tests? Lawrence Erlbaum Associates, Mahwah,
New Jersey, USA.
Anscombe, F. J. 1956. Discussion on Dr. David's and Dr. Johnson's Paper.
Journal of the Royal Statistical Society 18:24-27.
Bakan, D. 1966. The test of significance in psychological research.
Psychological Bulletin 66:423-437.
Barnard, G. 1998. Pooling probabilities. New Scientist 157:47.
Bauernfeind, R. H. 1968. The need for replication in educational research.
Phi Delta Kappan 50:126-128.
Bayes, T. 1763. An essay toward solving a problem in the doctrine of chances.
Philosophical Transactions of the Royal Society, London 53:370-418.
Berger, J. O. 1985. Statistical decision theory and Bayesian analysis.
Springer-Verlag, Berlin, Germany.
Berger, J. O., and D. A. Berry. 1988. Statistical analysis and illusion of
objectivity. American Scientist 76:159-165.
Berger, J. O., and M. Delampady, 1987. Testing precise hypotheses. Statistical
Science 2:317-352.
Berger, J. O., and T. Sellke. 1987. Testing a point null hypothesis: the
irreconcilability of P values and evidence. Journal of the American
Statistical Association 82:112-122.
Berkson, J. 1938. Some difficulties of interpretation encountered in the
application of the chi-square test. Journal of the American Statistical
Association 33:526-542.
Box, G. E. P. 1980. Sampling and Bayes' inference in scientific modelling and
robustness. Journal of the Royal Statistical Society 143:383-430.
Box, G. E. P., and G. C. Tiao. 1973. Bayesian inference in statistical analysis.
Addison-Wesley, Reading, Massachusetts, USA.
Buckland, S. T., K. P. Burnham, and N. H. Augustin. 1997. Model selection: an
integrated part of inference. Biometrics 53:603-618.
Burnham, K. P., and D. R. Anderson. 1998. Model selection and inference:a
practical information-theoretic approach. Springer-Verlag, New York, New
York, USA.
Campbell, M. 1992. Confidence intervals. Royal Statistical Society News and
Notes 18(9):4-5.
Carlin, B. P., and T. A. Louis. 1996. Bayes and empirical Bayes methods for
data analysis. Chapman & Hall, London, United Kingdom.
Carver, R. P. 1978. The case against statistical significance testing. Harvard
Educational Review 48:378-399.
Clark, C. A. 1963. Hypothesis testing in relation to statistical methodology.
Review of Educational Research 33:455-473.
Cohen, J. 1988. Statistical power analysis for the behavioral sciences, second
edition. Lawrence Erlbaum Associates, Hillsdale, New Jersey, USA.
Cohen, J. 1994. The earth is round (p < .05). American Psychologist 49:997-1003.
Dayton, P. K. 1998. Reversal of the burden of proof in fisheries management.
Science 279:821-822.
Degroot, M. H. 1970. Optimal statistical decisions. McGraw-Hill, New York,
New York, USA.
Deming, W. E. 1975. On probability as a basis for action. American Statistician
Ellison, A. M. 1996. An introduction to Bayesian inference for ecological
research and environmental decision-making. Ecological Applications
Gerard, P. D, D. R. Smith, and G. Weerakkody. 1998. Limits of retrospective
power analysis. Journal of Wildlife Management 62:801-807.
Good, I. J. 1982. Standardized tail-area probabilities. Journal of Statistical
Computation and Simulation 16:65-66.
Guttman, L. 1985. The illogic of statistical inference for cumulative science.
Applied Stochastic Models and Data Analysis 1:3-10.
Hedges, L. V., and I. Olkin. 1985. Statistical methods for meta-analysis.
Academic Press, New York, New York, USA.
Holling, C. S., editor. 1978. Adaptive environmental assessment and management.
John Wiley & Sons, Chichester, United Kingdom.
Howson, C., and P. Urbach. 1991. Bayesian reasoning in science. Nature
Huberty, C. J. 1993. Historical origins of statistical testing practices: the
treatment of Fisher versus Neyman-Pearson views in textbooks. Journal of
Experimental Education 61:317-333.
Johnson, D. H. 1995. Statistical sirens: the allure of nonparametrics. Ecology
Loftus, G. R. 1991. On the tyranny of hypothesis testing in the social sciences.
Contemporary Psychology 36:102-105.
Matthews, R. 1997. Faith, hope and statistics. New Scientist 156:36-39.
McLean, J. E., and J. M. Ernest. 1998. The role of statistical significance
testing in educational research. Research in the Schools 5:15-22.
Meehl, P. E. 1997. The problem is epistemology, not statistics: replace
significance tests by confidence intervals and quantify accuracy of risky
numerical predictions. Pages 393-425 in L. L. Harlow, S. A. Mulaik, and
J. H. Steiger, editors. What if there were no significance tests?
Lawrence Erlbaum Associates, Mahwah, New Jersey, USA.
Mulaik, S. A., N. S. Raju, and R. A. Harshman. 1997. There is a time and a
place for significance testing. Pages 65-115 in L. L. Harlow, S. A.
Mulaik, and J. H. Steiger, editors. What if there were no significance
tests? Lawrence Erlbaum Associates, Mahwah, New Jersey, USA.
Nester, M. R. 1996. An applied statistician's creed. Applied Statistics 45:
Nunnally, J. C. 1960. The place of statistics in psychology. Educational and
Psychological Measurement 20:641-650.
Peterman, R. M. 1990. Statistical power analysis can improve fisheries research
and management. Canadian Journal of Fisheries and Aquatic Sciences 47:2-15
Platt, J. R. 1964. Strong inference. Science 146:347-353.
Popper, K. R. 1959. The logic of scientific discovery. Basic Books, New York,
New York, USA.
Pratt, J. W., H. Raiffa, and R. Schlaifer. 1995. Introduction to statistical
decision theory. MIT Press, Cambridge, Massachusetts, USA.
Preece, D. A. 1990. R. A. Fisher and experimental design: a review. Biometrics
Quinn, J. F., and A. E. Dunham. 1983. On hypothesis testing in ecology and
evolution. American Naturalist 122:602-617.
Reichardt, C. S, and H. F. Gollob. 1997. When confidence intervals should be
used instead of statistical tests, and vice versa. Pages 259-284 in
L. L. Harlow, S. A. Mulaik, and J. H. Steiger, editors. What if there
were no significance tests? Lawrence Erlbaum Associates, Mahwah, New
Jersey, USA.
Rindskopf, D. M. 1997. Testing "small," not null, hypotheses: classical and
Bayesian approaches. Pages 319-332 in L. L. Harlow, S. A. Mulaik, and
J. H. Steiger, editors. What if there were no significance tests?
Lawrence Erlbaum Associates, Mahwah, New Jersey, USA.
Savage, I. R. 1957. Nonparametric statistics. Journal of the American
Statistical Association 52:331-344.
Schmidt, F. L., and J. E. Hunter. 1997. Eight common but false objections to
the discontinuation of significance testing in the analysis of research
data. Pages 37-64 in L. L. Harlow, S. A. Mulaik, and J. H. Steiger,
editors. What if there were no significance tests? Lawrence Erlbaum
Associates, Mahwah, New Jersey, USA.
Schmitt, S. A. 1969. Measuring uncertainty: an elementary introduction to
Bayesian statistics. Addison-Wesley, Reading, Massachusetts, USA.
Shaver, J. P. 1993. What statistical significance testing is, and what it is
not. Journal of Experimental Education 61:293-316.
Simberloff, D. 1990. Hypotheses, errors, and statistical assumptions.
Herpetologica 46:351-357.
Steidl, R. J., J. P. Hayes, and E. Schauber. 1997. Statistical power analysis
in wildlife research. Journal of Wildlife Management 61:270-279.
Steiger, J. H., and R. T. Fouladi. 1997. Noncentrality interval estimation and
evaluation of statistical models. Pages 221-257 in L. L. Harlow, S. A.
Mulaik, and J. H. Steiger, editors. What if there were no significance
tests? Lawrence Erlbaum Associates, Mahwah, New Jersey, USA.
The Wildlife Society. 1995. Journal News. Journal of Wildlife Management
Thomas, L., and C. J. Krebs. 1997. Technological tools. Bulletin of the
Ecological Society of America 78:126-139.
Toft, C. A., and P. J. Shea. 1983. Detecting community-wide patterns:
estimating power strengthens statistical inference. American
Naturalist 122:618-625.
Tukey, J. W. 1969. Analyzing data: sanctification or detective work? American
Psychologist 24:83-91.
Underwood, A. J. 1997. Experiments in ecology: their logical design and
interpretation using analysis of variance. Cambridge University Press,
Cambridge, United Kingdom.
Walters, C. 1986. Adaptive management of renewable resources. MacMillan
Publishing Co., New York, New York, USA.
Walters, C. J. and R. Green. 1997. Valuation of experimental management
options for ecological systems. Journal of Wildlife Management
Wolfson, L. J, J. B. Kadane, and M. J. Small. 1996. Bayesian environmental
policy decisions: two case studies. Ecological Applications 6:1056-1066.
Yates, F. 1964. Sir Ronald Fisher and the design of experiments. Biometrics
Zellner, A. 1987. Comment. Statistical Science 2:339-341.
Previous Section -- Acknowledgments
Return to Contents | {"url":"http://www.npwrc.usgs.gov/resource/methods/statsig/litcite.htm","timestamp":"2014-04-21T02:15:12Z","content_type":null,"content_length":"13722","record_id":"<urn:uuid:0b31b735-3a0f-421c-a301-d1a7eae578c8>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00422-ip-10-147-4-33.ec2.internal.warc.gz"} |
Factor group order
November 7th 2009, 11:58 AM #1
Factor group order
If $K\lhd G$ and $|g|=n$, $g\in G$, show that the order of $Kg$ in $G/K$ divides $n$.
So I know that $G=\{1,g,\cdots g^n\}$ and will $K=\{1,g, \cdots g^{n-1}\}$ ? From Lagrange's theorem K|G?
Thanks guys
Two easy lemmas for you to remember/prove:
1) In a finite group the order of any element divides the order of the group;
2) If $f: G\rightarrow H$ is a group homomorphism, then for any $g\in G\,,\,\,ord(f(g))\mid ord(g)$ , and if $ord(g)=\infty \,\,then\,\, ord(f(g))=\infty \,\,or\,\,else\,\,f(g)=1$
I'm confused about your claim in the second part of your second lemma.
Let G be the group of integers under addition and H the integers modulo n under addition. That is surely a homomorphism that is sending torsion free elements to elements with finite order that
are not the identity. Am I misunderstanding your notation?
I'm confused about your claim in the second part of your second lemma.
Let G be the group of integers under addition and H the integers modulo n under addition. That is surely a homomorphism that is sending torsion free elements to elements with finite order that
are not the identity. Am I misunderstanding your notation?
Nop, you're right and I got confused. It should be that if $ord(g)=\infty$ then all the options are open.
Anyway, and for your problem, only the finite case matters.
What I have-
For any element g in G $Kg=\{kg|k\in K\}$ defines the right coset of K. Each k in K will have a different product when multiplied by any g. Thus, each element of K will create a corresponding
unique element of Kg. So, Kg will have the same number of elements as K (same order as K). Now, the order of K will be $k^m=1, |k|=m$ and the order of G is $g^m=1$. They each produce the identity
element and thus divide each other.....?
November 7th 2009, 12:10 PM #2
Oct 2009
November 7th 2009, 02:45 PM #3
November 7th 2009, 03:35 PM #4
Oct 2009
November 8th 2009, 02:15 PM #5 | {"url":"http://mathhelpforum.com/advanced-algebra/112975-factor-group-order.html","timestamp":"2014-04-19T08:17:56Z","content_type":null,"content_length":"46608","record_id":"<urn:uuid:eb2fc21c-cf1c-48c3-aa16-bf0a89551713>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00380-ip-10-147-4-33.ec2.internal.warc.gz"} |
ziriac, Claude-Gaspar (1581
Bachet de Méziriac, Claude-Gaspar (1581–1638)
Poet and early mathematician of the French Academy, best known for his translation in 1621 of Diophantus's Arithmetica, the book that Pierre de Fermat was reading when he inscribed the margin with
his famous Last Theorem.
Bachet is also remembered as a collector of mathematical puzzles, many of which, including river-crossing problems, measuring and weighing puzzles, number tricks, and magic squares, he published in
Problèmes plaisans et délectables qui font par les nombres (1612). One of the puzzles is to find the least number of weights that can be used on a scale pan to weigh any integral number of pounds
from 1 to 40 inclusive, if the weights can be placed in either of the scale pans. The answer is four: 1, 3, 9, and 27 pounds.
On a slightly more serious note, Bachet observed that apparently every positive number can be expressed as a sum of at most four squares; for example, 5 = 2^2 + 1^2 , 6 = 2^2 + 1^2 + 1^2, 7 = 2^2 + 1
^2 + 1^2 + 1^2, 8 = 2^2 + 2^2, and 9 = 3^2. The case of 7 shows that sometimes three squares wouldn't be enough. Bachet said he had checked this for more than 300 numbers but didn't know how to prove
it. It wasn't until the late 18th century that Joseph Lagrange supplied a complete proof.
1. Underwood, Dudley. "The First Recreational Mathematics Book." Journal of Recreational Mathematics, 3, 164-169, 1970.
Related category | {"url":"http://www.daviddarling.info/encyclopedia/B/Bachet.html","timestamp":"2014-04-20T03:22:58Z","content_type":null,"content_length":"8077","record_id":"<urn:uuid:5fc15066-3996-4a2f-a729-0f0500b21c74>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00604-ip-10-147-4-33.ec2.internal.warc.gz"} |
Max Value within a List of Lists of Tuple
up vote 4 down vote favorite
I have a problem to get the highest Value in a dynamic List of Lists of Tuples.
The List can looks like this:
adymlist = [[('name1',1)],[('name2',2),('name3',1), ...('name10', 20)], ...,[('name m',int),..]]
Now I loop through the List to get the highest Value (integer):
total = {}
while y < len(adymlist):
if len(adymlist) == 1:
#has the List only 1 Element -> save it in total
total[adymlist[y][0][0]] = adymlist[y][0][1]
y += 1
# here is the problem
# iterate through each lists to get the highest Value
# and you dont know how long this list can be
# safe the highest Value in total f.e. total = {'name1':1,'name10':20, ..}
I tried a lot to get the maximum Value but I found no conclusion to my problem. I know i must loop through each Tuple in the List and compare it with the next one but it dont know how to code it
Also I can use the function max() but it doesnt work with strings and integers. f.e. a = [ ('a',5),('z',1)] -> result is max(a) ---> ('z',1) obv 5 > 1 but z > a so I tried to expand the max function
with max(a, key=int) but I get an Type Error.
Hope you can understand what I want ;-)
Thanks so far.
If I use itertools.chain(*adymlist) and max(flatlist, key=lambda x: x[1])
I will get an exception like : max_word = max(flatlist, key=lambda x: x[1]) TypeError: 'int' object is unsubscriptable
BUT If I use itertools.chain(adymlist) it works fine. But I dont know how to summate all integers from each Tuple of the List. I need your help to figure it out.
Otherwise I wrote a workaround for itertools.chain(*adymlist) to get the sum of all integers and the highest integer in that list.
chain = itertools.chain(*adymlist)
flatlist = list(chain)
# flatlist = string, integer, string, integer, ...
max_count = max(flatlist[1:len(flatlist):2])
total_count = sum(flatlist[1:len(flatlist):2])
# index of highest integer
idx = flatlist.index(next((n for n in flatlist if n == max_count)))
max_keyword = flatlist[idx-1]
It still does what I want, but isn't it to dirty?
python list max
add comment
2 Answers
active oldest votes
To clarify, looks like you've got a list of lists of tuples. It doesn't look like we care about what list they are in, so we can simplify this to two steps
• Flatten the list of lists to a list of tuples
• Find the max value
The first part can be accomplished via itertools.chain (see e.g., Flattening a shallow list in Python)
The second can be solved through max, you have the right idea, but you should be passing in a function rather than the type you want. This function needs to return the value
you've keyed on, in this case ,the second part of the tuple
max(flatlist, key=lambda x: x[1])
up vote 13 down vote
accepted I re-read your question - are you looking for the max value in each sub-list? If this is the case, then only the second part is applicable. Simply iterate over your list for
each list
A bit more pythonic than what you currently have would like
output = []
for lst in lists:
output.append( max(flatlist, key=lambda x: x[1]) )
map(lambda x: max(x, key=lambda y: y[1]) , lists)
add comment
As spintheblack says, you have a list of lists of tuples. I presume you are looking for the highest integer value of all tuples.
You can iterate over the outer list, then over the list of tuples tuples like this:
max_so_far = 0
for list in adymlist:
for t in list:
up vote 4 down vote if t[1] > max_so_far:
max_so_far = t[1]
print max_so_far
This is a little bit more verbose but might be easier to understand.
add comment
Not the answer you're looking for? Browse other questions tagged python list max or ask your own question. | {"url":"http://stackoverflow.com/questions/4963957/max-value-within-a-list-of-lists-of-tuple","timestamp":"2014-04-18T03:52:31Z","content_type":null,"content_length":"70795","record_id":"<urn:uuid:35cf319d-7c91-47f5-9a63-9d3489500d3e>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00039-ip-10-147-4-33.ec2.internal.warc.gz"} |
Answers to Sample Statistics Questions
Correct answers are marked in Red Bold, with reasons in blue. Incorrect choices are black, with reasons they are wrong in green.
1. In a truly normal frequency distribution
a. the mean always is the same as the standard deviation
Not at all; could have a negative mean, but standard deviation is always positive
b. the mean is never the same as the mode
Actually, it is always the same
c. the mode is never the same as the median
Actually, it is always the same
d. the mean always is the same as the median
Mean, median, and mode are all the same; skew = 0
2. How might the standard deviation (S) of a normal distribution be greater than the mean?
a. S is given by a square root, and the square root is larger than the fraction.
What does that have to do with anything?
b. In a normal distribution, the variance must equal the mean.
Not true, although it is true of a Poisson distribution (which we didn't get to)
c. If some scores are negative, the mean could be very small despite a large S.
The mean could be anything, even negative. Standard deviation is always positive...
d. The median would have to be less than the skew.
Since the skew of a normal distribution is zero, this is only true if the mean is negative
3. In a class of 100, the mean on a certain exam was 50, the standard deviation, 0. This means
a. half the class had scores less than 50
For the mean to be 50, the others would need scores above 50. If the scores are not all the same, there will be differences from the mean; those differences squared will add to a non-zero sum so
standard deviation will be greater than zero.
b. there was a high correlation between ability and grade
You can't know about correlation from the distribution of a single variable -- what would it correlate with?
c. everyone had a score of exactly 50
A zero standard deviation means all scores are the same, and equal to the mean (what else could the mean be?)
d. half the class had 0's and half had 50's
That would make the mean = 25, and the standard deviation > 625 (25 squared for everyone)
4. The null hypothesis in an experiment would be
a. there is a high correlation between the independent and dependent variables
That could be an expectation, but the null hypothesis says there is no effect
b. changing the independent variable has no significant effect on the dependent variable
That is, every group (experimental or control or whatever) is a sample of the same population, even though the groups are differentiated by the independent variable.
c. changing the dependent variable causes a significant change in the independent variable
That's the opposite of the null hypothesis
d. the standard error of the dependent variable is greater than the mean of the independent variable
Isn't it amazing how random words in random order can sound like they mean something?
5. Suppose the mean on the final exam is 24 (of 40), with a standard deviation of 1.5. If you get a 21, how well do you do (relative to the rest of the class)?
a. very poorly--perhaps the lowest score
That's 2 standard deviations below the mean (z = -2.0). The fraction in the lower tail is 0.0228 only 2 1/4% did worse! (Assume it's roughly a normal distribution....)
b. not well, but somewhere in the C's
If only 2 1/4% did worse, there won't be many D's or E's, will there?
c. OK--about average
Average is a z near 0
d. nicely--better than the median
Assuming an approximately normal distribution, median is close to the mean...
6. You can claim that there is a significant difference between scores from two groups if
a. the difference between the means is large compared to the standard error
This is basically the definition of t (difference in means divided by standard error). A large t means you can reject the null hypothesis.
b. the means are large compared to the standard error
Size of means is irrelevant -- it's the difference that matters. You can't claim two groups differ if their means are the same, even if the means are 1,000,000 and standard errors are around 0.5.
c. the means are small compared to the standard error
As in (b) -- it's the difference between the means that matters
d. the difference between the standard deviations is large compared to the means
That's backwards -- it's difference between means, not standard deviations....
7. The correlation between a person's hair length and score on the midterm is very nearly zero. If your friend has a crewcut, your best guess as to what he got on the midterm is
a. the standard deviation of scores on the midterm
Why would standard deviation predict a score? The distribution of GRE scores has a mean of 500 and standard deviation of 100 -- 100 is not even a possible score!
b. the mean minus the standard deviation
Even less sensible that (a)
c. the mean plus the standard deviation
Same as (b)
d. the mean score
If correlation is zero, there is no added information from the other variable (hair length. Your best guess is the Expected Value, or the mean.
8. There is a low (but real) negative correlation between the amount of rain in a given summer and the amount the summer before. In the absence of any information except that this summer is wetter
than usual, you are asked to guess next summer's rain. Your best guess:
a. somewhat more than the average summer rainfall
Negative correlation means you expect less
b. the average summer rainfall
Correlation was real (different from 0) so regression will do better than the mean
c. somewhat less than the average summer rainfall
Since the correlation is negative, more rain this year means less next. The correlation is weak (small), so the regression line has a low slope; that is, it won't be very different from the mean.
d. the standard deviation of the rainfall
A real nonsense answer.
Click here to return to the full questions. | {"url":"http://www.uic.edu/classes/psych/psych343/sampans.html","timestamp":"2014-04-18T01:02:28Z","content_type":null,"content_length":"12105","record_id":"<urn:uuid:620a1cca-8fd3-4574-89da-21a7e2e14ebe>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00543-ip-10-147-4-33.ec2.internal.warc.gz"} |
Could the Hover Bike Fly With a Human? | Science Blogs | WIRED
• By Rhett Allain
• 06.15.13 |
• 8:52 am |
The flying bike is a mostly real thing. Mostly in that it actually flies – but not with a real person. Here is the developer’s site (Duratec) and a good review from Mashable where they add that the
whole thing weighs 209 pounds. The claim is that the bike can not yet support the full mass of a real human and the demonstration only ran for 5 minutes.
You probably know what comes next, right? Now I will make an estimate of the battery size for this thing to actually work. And by “actually work”, I mean that it should be able to carry a normal
adult for at least 30 minutes. I mean, who would want a flying bike that only runs for 5 minutes?
How Does a Hover Bike Fly?
Let’s think about this in terms of basic physics. The bike doesn’t fly because of fairy dust. No, it flies because it is “throwing” air down. The blades take stationary air above the bike and push it
down. Since the bike is pushing air down, the air pushes back up on the bike. If the force from the air on the bike has the same magnitude as gravitational force on the bike, it will hover (stay
stationary in the air). Simple right?
How about diagram? I already looked at the physics of hovering when I calculated the power needed for the human powered helicopter, so I will just start with that image.
Here you can see what matters when dealing with helicopter thrust. You get the greatest thrust force when you have the largest change in momentum of air. If you assume the density of air is constant,
then there are two important parameters: the speed of the air and the size of the rotors. I will skip over the derivation (but you can find it here), but there are really just two important
First, there is the power required to hover.
In this expression, ρ is the density of the air, A is the area of the rotor and v is the speed of the air coming out of the rotors. I can find this thrust air speed by looking at the weight of the
aircraft and the change in momentum of the air. I get:
Well, what if I don’t know the thrust speed of the air? No problem. I just solve for the thrust speed from the force equation and plug it into the power equation.
And there you have it. The power needed to fly depends on the mass of the object and the area of the rotors. This is why the Gamera II human powered helicopter has such a large rotor area. Actually,
this is wrong. It is just a little bit wrong since it assumes a perfectly efficient system. However, I can make a nice approximation of the actual efficiency by looking at some real helicopters.
This is a plot of calculated power (with efficiency) vs. listed power for some helicopters on Wikipedia – just like I did before with the S.H.I.E.L.D. Helicarrier. If I adjust the efficiency to 40%,
then I can get a nice slope value of 1.
There are two problems with this model. First, I am going to use this for much smaller masses – like the hover bike. Second, the listed power is the maximum engine power (I assume). I wouldn’t think
you would need maximum power to hover. If I had to guess, I would say somewhere around 50% power but I really don’t know. Of course neither of these things will stop me from moving on (nothing ever
Battery Energy and Mass
What kind of battery would you like to use for this hover bike? It has to have a high energy mass density. If you add some big old lead-acid batteries, you are going to have a weight problem.
Wikipedia’s page on energy density lists the lithium-ion battery with an energy density of about 0.8 MJ/kg. I will just assume 100% efficient batteries. That means that if I know the required power
for my device, I can calculate the mass of the batteries (which will of course change the required power).
In this expression, Δt is the flight time and d[E] is the energy density.
Estimating the Battery Mass
So, I have an expression for the mass of the battery based on the power. I also have an expression for the power that depends on the mass (total mass). Let me write the hover bike power based on the
mass of the battery and the power based on the rotor size as:
With a few estimates, I can plot the power vs. battery mass for the two functions. When they intersect, I have my mass. Simple really. Here are my estimates.
• Rotor size: There are two big rotors with a radius of about 0.5 meters and two smaller ones with a radius of maybe 0.3 meters. This would put the total rotor area at 2.14 m^2.
• Bike + person mass (called m[o] in the equation). Without the batteries and a full sized human, I am going to guess 140 kg.
• Time of flight – 30 minutes or 1,800 seconds.
• Efficiency. Even though I took the time to estimate the efficiency, I am going to leave it off. Why? Because this will be balanced by the fact that the motors won’t be at full throttle all the
• Density of air = 1.2 kg/m^3.
• Energy density = 0.8 MJ/kg.
And now for the plot of the two functions.
These two functions intersect at a battery mass of 151 kg (333 pounds) and a total motor power of 67.5 kilowatts. That mass is about half the total hover bike mass and the power is pretty high too.
There is one more thing to calculate – the thrust speed. For real helicopters, I estimated the speed of the air at about 25 m/s regardless of size. Using the same formula, this hover bike would have
a thrust air speed of 47 m/s. I’m not saying you can’t do that. I’m just saying that real helicopters have lower thrust speed. That’s all I’m saying.
There is one way to make this possibly work. What if you wanted a flight time of only 15 minutes? In that case you wouldn’t need as large of a battery so you wouldn’t need as much power. This means
that a battery half the mass of the 30 minute one would be too much. If you run the calculation for a time of 15 minutes, you only a battery mass of 35.6 kg (78 pounds). That seems more reasonable
for a battery mass – but maybe not so reasonable for a functioning flying bike.
If you only had a five minute flight time, the battery would be even smaller. I guess this is why the vehicle has bike wheels. You will likely have to ride around as a bike for most of your travels.
Of course there is another way to fix this vehicle – make rotors with a much larger area (which requires lower power). But if the rotors got too big, you might not call this a hover bike. In that
case you would probably call it an electric helicopter. | {"url":"http://www.wired.com/2013/06/could-the-hover-bike-fly-with-a-human/","timestamp":"2014-04-20T14:16:57Z","content_type":null,"content_length":"109417","record_id":"<urn:uuid:d5be3ac4-495d-4423-b460-b4663ac7ea99>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00239-ip-10-147-4-33.ec2.internal.warc.gz"} |
Braingle: 'Let's Go Out to the Movies' Brain Teaser
Let's Go Out to the Movies
Probability puzzles require you to weigh all the possibilities and pick the most likely outcome.
Puzzle ID: #31308
Category: Probability
Submitted By: tsimkin
Corrected By: Zag24
Gretchen and Henry were sent to their rooms for fighting in the house. They each separately voiced their protest to their father, insisting that the fight was nothing more than healthy sibling
competition, and they each wanted to go out that afternoon to see a movie. He was moved by their stories, but wouldn't simply set them free. Instead, he devised a system. He went to each child's
room with a penny, and told them that they would have to show up in the den in 10 minutes, and could choose to bring the penny with them or leave it in their respective rooms. Dad would then flip
the one or two pennies brought to the den, and if the pennies he flipped came up heads, the kids could go to the movies. If neither brought a penny, or if he flipped at least one tail, they would
stay in their rooms until supper time.
The problem facing Gretchen and Henry was that neither knew what the other would do. It would be easy if they could collude -- one would bring a penny, and the other would not, giving them a 50%
chance of going free -- but they did not have this luxury.
If they both acted optimally, what is the probability that they will be free in time to see the movie?
Show Hint
They should each bring the penny with probability 2/3, and they will go free 1/3 of the time.
There are a couple of ways of tackling this. The first is to say that what is optimal for Henry must also be optimal for Gretchen, so whatever probability one has of bringing the penny should equal
the probability that the other brings the penny. If we set that probability equal to 'p', then the probability that they go free ('f') is:
p^2*(1/4) + p*(1-p)*(1/2) + (1-p)*p*(1/2) + (1-p)*(1-p)*0
Since the last term goes to 0, this leads to an equation of:
p - (3/4)*(p^2) = f
From here, you could either plug in values of p from 0 to 1, finding that f is at its greatest (1/3) when p is equal to 2/3, or you could use calculus.
Warning: CALCULUS FOLLOWS!
Since this is a quadratic equation, the value will be at its maximum or minimum when its first derivative equals 0. The first derivative of f with respect to p is:
df/dp = (-3/2)p + 1
Setting this equal to 0, we get p = 2/3.
To determine if this is a maximum or a minimum, we need to take the second derivative:
d2f/dp2 = (-3/2)
Since the second derivative is negative, the value we found by setting the first derivative equal to 0 is a maximum, and we have our answer.
Another way we can solve this is to say that each needs to make a decision that will equalize their chance of survival REGARDLESS of the action taken by the other.
This point, where you get the same result regardless of the other person's actions, is known as the "Nash Equilibrium Point". The brilliant mathematician John Nash (who was made famous in the book
and movie "A Beautiful Mind") showed that this point of equilibrium will always give the maximum overall result, both in a cooperative game like this one and in a competitive game such as poker, as
long as the correct strategy is a mixed strategy like this.
So if Henry assumes that Gretchen will bring her penny with her none of the time, all of the time, or somewhere in between, he wants to pick a probable course of action (in game theory, a "mixed
strategy") that will maximize his chances of going free. Let's just look at the two extremes (i.e., Gretchen will either bring her penny 100% of the time, or 0% of the time).
If Gretchen brings the penny 100% of the time, and Henry brings his penny with probability p, they will go free with probability:
p*1*(1/4) + (1-p)*1*(1/2)
If we assume Gretchen never brings her penny, then they will go free with probability:
Since Henry wants to be indifferent to Gretchen's actions, we can set these two probabilities of going free equal to each other. We then get:
(p/4) + (1-p)/2 = p/2 [setting equations equal]
(p/2) + (1-p) = p [multiplying through by 2]
1-p = (p/2) [subtracting p/2 from each side]
1 = (3/2)p [adding p to each side]
p = 2/3 [solving for p]
Gretchen will determine the same probability of bringing a penny by using the same logic.
Thus, they will each bring a penny 2/3 of the time, maximizing their joint chances of going free at 1/3. Hide
What Next? | {"url":"http://www.braingle.com/brainteasers/teaser.php?op=2&id=31308&comm=0","timestamp":"2014-04-18T21:06:34Z","content_type":null,"content_length":"28094","record_id":"<urn:uuid:3fae6216-90cf-4397-95fb-e4ecdf5db13a>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00271-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Study of Search Directions in Primal-Dual Interior-Point Methods for Semidefinite Programming
Results 1 - 10 of 21
, 2000
"... The modern era of interior-point methods dates to 1984, when Karmarkar proposed his algorithm for linear programming. In the years since then, algorithms and software for linear programming have
become quite sophisticated, while extensions to more general classes of problems, such as convex quadrati ..."
Cited by 463 (16 self)
Add to MetaCart
The modern era of interior-point methods dates to 1984, when Karmarkar proposed his algorithm for linear programming. In the years since then, algorithms and software for linear programming have
become quite sophisticated, while extensions to more general classes of problems, such as convex quadratic programming, semidefinite programming, and nonconvex and nonlinear problems, have reached
varying levels of maturity. We review some of the key developments in the area, including comments on both the complexity theory and practical algorithms for linear programming, semidefinite
programming, monotone linear complementarity, and convex programming over sets that can be characterized by self-concordant barrier functions.
"... In Part I of this series of articles, we introduced a general framework of exploiting the aggregate sparsity pattern over all data matrices of large scale and sparse semidefinite programs (SDPs)
when solving them by primal-dual interior-point methods. This framework is based on some results about po ..."
Cited by 28 (14 self)
Add to MetaCart
In Part I of this series of articles, we introduced a general framework of exploiting the aggregate sparsity pattern over all data matrices of large scale and sparse semidefinite programs (SDPs) when
solving them by primal-dual interior-point methods. This framework is based on some results about positive semidefinite matrix completion, and it can be embodied in two di#erent ways. One is by a
conversion of a given sparse SDP having a large scale positive semidefinite matrix variable into an SDP having multiple but smaller positive semidefinite matrix variables. The other is by
incorporating a positive definite matrix completion itself in a primal-dual interior-point method. The current article presents the details of their implementations. We introduce new techniques to
deal with the sparsity through a clique tree in the former method and through new computational formulae in the latter one. Numerical results over di#erent classes of SDPs show that these methods can
be very e#cient for some problems. Keywords: Semidefinite programming; Primal-dual interior-point method; Matrix completion problem; Clique tree; Numerical results. # Department of Applied Physics,
The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8565 Japan (nakata@zzz.t.u-tokyo.ac.jp ). + Department of Architecture and Architectural Systems, Kyoto University, Kyoto 606-8501 Japan
(fujisawa@is-mj.archi.kyoto-u.ac.jp). # Department of Mathematical and Computing Sciences, Tokyo Institute of Technology, 2-12-1 OhOkayama, Meguro-ku, Tokyo 152-8552 Japan (mituhiro@is.titech.ac.jp).
The author was supported by The Ministry of Education, Culture, Sports, Science and Technology of Japan. Department of Mathematical and Computing Sciences, Tokyo Institute of Technology, 2-12-1
OhOkayama, Meguro-ku, Toky...
- Cornell University , 1999
"... We analyze perturbations of the right-hand side and the cost parameters in linear programming (LP) and semidefinite programming (SDP). We obtain tight bounds on the norm of the perturbations
that allow interior-point methods to recover feasible and near-optimal solutions in a single interior-point i ..."
Cited by 13 (2 self)
Add to MetaCart
We analyze perturbations of the right-hand side and the cost parameters in linear programming (LP) and semidefinite programming (SDP). We obtain tight bounds on the norm of the perturbations that
allow interior-point methods to recover feasible and near-optimal solutions in a single interior-point iteration. For the unique, nondegenerate solution case in LP, we show that the bounds obtained
using interior-point methods compare nicely with the bounds arising from the simplex method. We also present explicit bounds for SDP using the AHO, H..K..M, and NT directions.
- Discrete Appl. Math , 2002
"... Survey article for the proceedings of Discrete Optimization '99 where some of these results were presented as a plenary address. y ..."
, 1999
"... We propose a new class of primal-dual methods for linear optimization (LO). By using some new analysis tools, we prove that the large update method for LO based on the new search direction has a
polynomial complexity O i n 4 4+ae log n " j iterations where ae 2 [0; 2] is a parameter used in t ..."
Cited by 8 (5 self)
Add to MetaCart
We propose a new class of primal-dual methods for linear optimization (LO). By using some new analysis tools, we prove that the large update method for LO based on the new search direction has a
polynomial complexity O i n 4 4+ae log n " j iterations where ae 2 [0; 2] is a parameter used in the system defining the search direction. If ae = 0, our results reproduce the well known complexity
of the standard primal dual Newton method for LO. At each iteration, our algorithm needs only to solve a linear equation system. An extension of the algorithms to semidefinite optimization is also
presented. Keywords: Linear Optimization, Semidefinite Optimization, Interior Point Method, PrimalDual Newton Method, Polynomial Complexity. AMS Subject Classification: 90C05 1 Introduction Interior
point methods (IPMs) are among the most effective methods for solving wide classes of optimization problems. Since the seminal work of Karmarkar [7], many researchers have proposed and analyzed
various ...
- Mathematical Programming , 2000
"... In this paper, we first introduce the notion of self-regular functions. Various appealing properties of self-regular functions are explored and we also discuss the relation between selfregular
functions and the well-known self-concordant functions. Then we use such functions to define self-regular p ..."
Cited by 8 (5 self)
Add to MetaCart
In this paper, we first introduce the notion of self-regular functions. Various appealing properties of self-regular functions are explored and we also discuss the relation between selfregular
functions and the well-known self-concordant functions. Then we use such functions to define self-regular proximity measure for path-following interior point methods for solving linear optimization
(LO) problems. Any self-regular proximity measure naturally defines a primal-dual search direction. In this way a new class of primal-dual search directions for solving LO problems is obtained. Using
the appealing properties of self-regular functions, we prove that these new large-update path-following methods for LO enjoy a polynomial, O n q+1 2q log n iteration bound, where q ≥ 1 is the
so-called barrier degree of the self-regular ε proximity measure underlying the algorithm. When q increases, this � bound approaches the √n n best known complexity bound for interior point methods,
namely O log. Our unified �√n ε n analysis provides also the O log best known iteration bound of small-update IPMs. ε At each iteration, we need only to solve one linear system. As a byproduct of our
results, we remove some limitations of the algorithms presented in [24] and improve their complexity as well. An extension of these results to semidefinite optimization (SDO) is also discussed.
- Optim. Methods Softw , 2003
"... The contribution of this paper is to describe a general technique to solve some classes of large but sparse semidefinite problems via a robust primal-dual interior-point technique which uses an
inexact Gauss-Newton approach with a matrix free preconditioned conjugate gradient method. This approach a ..."
Cited by 8 (3 self)
Add to MetaCart
The contribution of this paper is to describe a general technique to solve some classes of large but sparse semidefinite problems via a robust primal-dual interior-point technique which uses an
inexact Gauss-Newton approach with a matrix free preconditioned conjugate gradient method. This approach avoids the ill-conditioning pitfalls that result from symmetrization and from forming the
so-called normal equations, while maintaining the primal-dual framework.
, 1999
"... The modern era of interior-point methods dates to 1984, when Karmarkar proposed his algorithm for linear programming. In the years since then, algorithms and software for linear programming have
become quite sophisticated, while extensions to more general classes of problems, such as convex quadrati ..."
Cited by 3 (1 self)
Add to MetaCart
The modern era of interior-point methods dates to 1984, when Karmarkar proposed his algorithm for linear programming. In the years since then, algorithms and software for linear programming have
become quite sophisticated, while extensions to more general classes of problems, such as convex quadratic programming, semidefinite programming, and nonconvex and nonlinear problems, have reached
varying levels of maturity. Interior-point methodology has been used as part of the solution strategy in many other optimization contexts as well, including analytic center methods and
column-generation algorithms for large linear programs. We review some core developments in the area and discuss their impact on these other problem areas.
"... Semidefinite Programming is currently a very exciting and active area of research. Semidefinite relaxations generally provide very tight bounds for many classes of numerically hard problems. In
addition, these relaxations can be solved efficiently by interior-point methods. In this ..."
Cited by 3 (3 self)
Add to MetaCart
Semidefinite Programming is currently a very exciting and active area of research. Semidefinite relaxations generally provide very tight bounds for many classes of numerically hard problems. In
addition, these relaxations can be solved efficiently by interior-point methods. In this
- European Journal of Operational Research
"... 2009 Semidefinite Programming (SDP) may be seen as a generalization of Linear Programming (LP). In particular, one may extend interior point algorithms for LP to SDP, but it has proven much more
difficult to exploit structure in the SDP data during computation. We survey three types of special struc ..."
Cited by 3 (0 self)
Add to MetaCart
2009 Semidefinite Programming (SDP) may be seen as a generalization of Linear Programming (LP). In particular, one may extend interior point algorithms for LP to SDP, but it has proven much more
difficult to exploit structure in the SDP data during computation. We survey three types of special structures in SDP data: 1. a common ‘chordal ’ sparsity pattern of all the data matrices. This
structure arises in applications in graph theory, and may also be used to deal with more general sparsity patterns in a heuristic way. 2. low rank of all the data matrices. This structure is common
in SDP relaxations of combinatorial optimization problems, and SDP approximations of polynomial optimization problems. 3. the situation where the data matrices are invariant under the action of a
permutation group, or, more generally, where the data matrices belong to a low dimensional matrix algebra. Such problems arise in truss topology optimization, particle physics, coding theory,
computational geometry, and graph theory. We will give an overview of existing techniques to exploit these structures in the data. Most of the paper will be devoted to the third situation, since it
has received the least attention in the literature so far. | {"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.47.7845","timestamp":"2014-04-18T14:26:08Z","content_type":null,"content_length":"38457","record_id":"<urn:uuid:d120b073-5767-420b-8b94-01db2934045a>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00515-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pengaruh Ketebalan Terhadap Ragam Polariton Magnetik dalam Bahan Logam Antiferromagnet
Gunawan .S.K, Vincensius (2005) Pengaruh Ketebalan Terhadap Ragam Polariton Magnetik dalam Bahan Logam Antiferromagnet. BERKALA FISIKA, 8 (3). pp. 75-78. ISSN 1410 - 9662
PDF - Published Version
The intention of this research is to study the relation dispersion of magnetic polariton generated in metallic antiferromagnet material with finite thickness numerically. The relation dispersion
equation is solved in a various thickness, so that the affect of thickness in relation dispersion structure could be understood more clearly. The method used in solving relation dispersion is finding
root by bisection. The results show that the structure of relation dispersion ghraphic for 0,2 cm thickness is the same with the relation dispersion graphic for semi infinite geometri. The bulk
polariton became diskret below 0.2 cm thickness, and the quantization tend too decrease if the thickness of material is reduce. The bulk polariton disappear when the thickness of material is reduced
to 2 ım. The number of surface polariton will have two branches in each direction of propagation when the material thickness is diminished to 20 ım. The distance between branches tend to larger if
the thickness of materials became thinner.
Repository Staff Only: item control page | {"url":"http://eprints.undip.ac.id/2065/","timestamp":"2014-04-17T09:54:50Z","content_type":null,"content_length":"17639","record_id":"<urn:uuid:b5ecb75d-7429-4425-8031-cbf2d44aec4c>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00490-ip-10-147-4-33.ec2.internal.warc.gz"} |
University of Missouri psychologists have discovered a link between preschoolers’ ability to estimate and later math ability. “Lacking skill at estimating group size may impede a child’s ability to
learn the concept of how numerals symbolize quantities and how those quantities relate to each other,” said study co-author David Geary. Read more about how teaching…
Read More
A mathematical study from the Miguel Hernández University of Elche has released a ranking of tennis players based on their statistics from the Association of Tennis Professionals (ATP).
Interestingly, the study’s first place competitor and the ATP first place competitor are different! Click here to see who came out on top.
Read More | {"url":"http://weusemath.org/?m=201310","timestamp":"2014-04-18T18:20:51Z","content_type":null,"content_length":"31872","record_id":"<urn:uuid:97bd15a7-93d5-4ca3-9a4e-931b9a5fc372>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00533-ip-10-147-4-33.ec2.internal.warc.gz"} |
Basic ring theory in Haskell
Apparently this is the first post on this blog. In it I will speak a little about how to implement the basic algebraic structures of ring theory as Haskell type classes. This is the core of my
constructive-algebra library, which I wrote as part of my master thesis. In the thesis I mainly looked at three different structures: Bézout domains, Prüfer domains and polynomial rings. I plan to
write about these at some point in the future but now I will focus on the basics.
The natural place to start is of course with rings. A ring is a set together with two operators + (addition) and * (multiplication). There are also two special objects in the ring, one and zero.
Every object should also have an additive inverse. This can be represented in Haskell by a type class:
class Ring a where
(<+>) :: a -> a -> a
(<*>) :: a -> a -> a
neg :: a -> a
one :: a
zero :: a
But now we run into some peculiarities. The operators and constants are not the only requirements for a ring… A ring also has to satisfy certain axioms. But how should we encode these? In a language
with dependent types this would have been possible to do in a satisfactory way as part of the structure, but in Haskell we have to settle with something less fancy. I have chosen to use QuickCheck
properties for representing the axioms that rings has to satisfy. For example: in a ring must the multiplication distribute over addition (both from left and right), this can be expressed as:
propLeftDist :: (Ring a, Eq a) => a -> a -> a -> Bool
propLeftDist a b c = a <*> (b <+> c) == (a <*> b) <+> (a <*> c)
The next structure is commutative rings, that is rings in which the multiplication is commutative. This does not introduce any new elements to the structure, so this is just an empty type class:
class Ring a => CommutativeRing a
But now we have another axiom that should hold:
propMulComm :: (CommutativeRing a, Eq a) => a -> a -> Bool
propMulComm a b = a <*> b == b <*> a
The next structure we will look at is integral domains. An integral domain is a commutative ring with no zero-divisors. That is $\forall x y(x*y=0 \rightarrow x = 0 \lor y = 0)$. This can be
expressed in the same manner as for commutative rings. But here there is another peculiarity since it can be quite unlikely for QuickCheck to generate two random numbers $x eq 0$ and $y eq 0$ such
that $x*y = 0$ and thus will the implication almost always be true. So the property should be taken with a grain of salt. The classical example of an integral domain is $\mathbb{Z}$ which is
represented by the Integer type in Haskell.
type Z = Integer
instance Ring Z where
(<+>) = (+)
(<*>) = (*)
neg = negate
zero = 0
one = 1
instance CommutativeRing Z
instance IntegralDomain Z
We can now test that Z satisfies all axioms for integral domains.
*Algebra.Z> quickCheck (propIntegralDomain :: Z -> Z -> Z -> Property)
+++ OK, passed 100 tests.
The final structure for today is fields, that is integral domains in which all elements (except zero) has a multiplicative inverse. This need some more structure, since now there has to be a function
that given an element computes its multiplicative inverse.
class IntegralDomain a => Field a where
inv :: a -> a
propMulInv :: (Field a, Eq a) => a -> Property
propMulInv a = a /= zero ==> inv a <*> a == one
Of course $\mathbb{Z}$ is not a field (what is the inverse of 2?). The standard example of a field is instead $\mathbb{Q}$. With a suitable implementation of $\mathbb{Q}$ we can now do:
instance Field Q where
inv x = 1/x
That’s all for this time. Nothing fancy so far, but this proved to be a good foundation to build more interesting things on. Having runnable axioms for the structures can seem a bit silly for the
simple examples presented here but it was in fact quite useful when implementing more complex things, like for example the proof that $\mathbb{Z}[\sqrt{-5}]$ is a Prüfer domain.
5 Responses to Basic ring theory in Haskell
1. Why did you choose to use Haskell and not some dependent-typed language?
□ It was mostly due to time limitations. I figured that it would be easier to do it in Haskell since formalizing all the proofs would be very time consuming and difficult. But I plan to
formalize my Haskell implementation in some dependently typed language at some point in the future. :)
2. I found your blog! How cute!
3. Have you looked at the “checkers” package? http://hackage.haskell.org/package/checkers
Admittedly, the name and short description are unenlightening, but it has QuickCheck properties for some simple algebraic stuff–might save you a bit of time?
□ No I haven’t seen it as far as I can remember. It looks promising anyway, maybe I can get nicer property definitions using that… | {"url":"http://mortberg.wordpress.com/2010/08/12/basic-ring-theory-in-haskell/","timestamp":"2014-04-21T10:26:09Z","content_type":null,"content_length":"54657","record_id":"<urn:uuid:900bc85a-1210-4430-9cf9-5aee17cb05e4>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00076-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tsuneo Nakata, Kawasaki JP
Tsuneo Nakata, Kawasaki JP
application Description Published
Circuit design data conversion apparatus, circuit design data conversion method, and computer product - A single module includes a shared combinational circuit, a multiplexed
20080312881 sequential circuit, and a common I/F and is substituted for a multiplexed module formed of plural modules of an identical category and type and including plural CPUs. 12-18-2008
Specifically, the shared combinational circuit is substituted for n combinational circuits, the multiplexed sequential circuit is substituted for n sequential circuits, and the
common I/F is substituted for n input pins and n output pins.
20080312890 VERIFICATION METHOD - Conditions necessary to be satisfied for execution of each use case from a use case description indicative of a requirements specification of the design 12-18-2008
object are acquired. Then a state satisfying the conditions, from among a set of states represented in a finite state machine model indicative of a design specification of the
design object are detected. A presence or absence of an undetected state in the set of states in accordance with the detection is determined and output.
20090182538 MULTI-OBJECTIVE OPTIMUM DESIGN SUPPORT DEVICE USING MATHEMATICAL PROCESS TECHNIQUE, ITS METHOD AND PROGRAM 07-16-2009
MULTI-OBJECTIVE OPTIMAL DESIGN SUPPORT DEVICE, METHOD AND PROGRAM STORAGE MEDIUM - An objective function can be mathematically approximated using a prescribed number of sample
sets of design parameters and sets of a plurality of objective functions computed corresponding to them. A logical expression indicating a relation between or among arbitrary
20090182539 two or three objective functions of the plurality of mathematically approximated objective functions is computed as an inter-objective-function logical expression and a region 07-16-2009
that the arbitrary objective function values can take is displayed as a feasible region in an objective space corresponding to the arbitrary objective functions. Furthermore, a
point or area in a design space corresponding to arbitrary design parameters corresponding to a point or area specified by a user in the displayed feasible region is displayed.
VERIFICATION SUPPORTING APPARATUS, VERIFICATION SUPPORTING METHOD, AND COMPUTER PRODUCT - In a verification supporting apparatus, a recording unit records a DIRW matrix in
which a state transition possibly occurring in a register of a circuit to be verified and information concerning validity of a path corresponding to the state transition are
20090276740 set and an acquiring unit acquires a control data flow graph that includes a control flow graph having a data flow graph written therein. When a register is designated for 11-05-2009
verification, a data flow graph having described therein the designated register is extracted from the control data flow graph. From the data flow graph extracted, a path
indicating the flow of data concerning the register is extracted. The state transition of the path extracted is identified and if the state transition is determined to be is
set in the DIRW matrix, information concerning the validity set in the DIRW matrix and the path are correlated, and output.
VERIFICATION SUPPORTING SYSTEM - A verification target register to be verified is specified from a configuration of a verification target circuit, and patterns requiring
20090287965 verification are extracted as a coverage standard with regard to the specified verification target register. When the patterns are extracted, a DIRW matrix is prepared to 11-19-2009
indicate possibly occurring state transitions among four states Declare, Initialize, Read, and Write in the register included in the verification target circuit, and used to
decide two coverage standards, a matrix coverage standard and an implementation coverage standard.
DESIGN SUPPORT APPARATUS - A design support apparatus for determining a plurality of objective functions for modeling an object having a plurality of elements, each of the
elements providing variable geometrical parameters, the design support apparatus includes a memory for storing the variable geometrical parameters and a processor for executing
20100153074 a process including: determining boundary information associated with specified geometrical parameters of the elements which indicate a state of contact between the elements, 06-17-2010
dividing the variable geometrical parameters into a plurality of groups on the basis of the boundary information, and determining the plurality of objective functions for each
of the groups by using the variable geometrical parameters.
MULTI-PURPOSE OPTIMIZATION DESIGN SUPPORT APPARATUS AND METHOD, AND RECORDING MEDIUM STORING PROGRAM - A design support apparatus includes a parameter set generation unit
configured to obtain a plurality of types of parameters and sequentially generates parameter sets while sequentially changing each parameter, a design object shape data
generation unit configured to generate design object shape data based on the parameter set and initial shape data representing an initial shape of the design object shape, a
20100332195 geometric penalty function value calculation unit configured to calculate a geometric penalty function value indicating suitability of geometric characteristics of the design 12-30-2010
object shape based on the design object shape data, an objective function calculation control unit configured to determine whether or not the parameter set is used to calculate
an objective function based on the geometric penalty function value and an optimal value of the objective function, and an objective function calculation unit configured to
calculate the objective function based on the parameter set.
VERIFICATION SUPPORTING SYSTEM - A verification target register to be verified is specified from a configuration of a verification target circuit, and patterns requiring
20110239172 verification are extracted as a coverage standard with regard to the specified verification target register. When the patterns are extracted, a DIRW matrix is prepared to 09-29-2011
indicate possibly occurring state transitions among four states Declare, Initialize, Read, and Write in the register included in the verification target circuit, and used to
decide two coverage standards, a matrix coverage standard and an implementation coverage standard.
Patent applications by Tsuneo Nakata, Kawasaki JP | {"url":"http://www.faqs.org/patents/inventor/tsuneo-nakata-kawasaki-jp-1/","timestamp":"2014-04-17T06:58:41Z","content_type":null,"content_length":"12692","record_id":"<urn:uuid:0e580de7-f44f-4968-a6cd-5435739d482e>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00394-ip-10-147-4-33.ec2.internal.warc.gz"} |
Free Bosonic Vertex Operator Algebras on Genus Two Riemann Surfaces II
We continue our program to define and study $n$-point correlation functions for a vertex operator algebra $V$ on a higher genus compact Riemann surface obtained by sewing surfaces of lower genus.
Here we consider Riemann surfaces of genus 2 obtained by attaching a handle to a torus. We obtain closed formulas for the genus two partition function for free bosonic theories and lattice vertex
operator algebras $V_L$. We prove that the partition function is holomorphic in the sewing parameters on a given suitable domain and describe its modular properties. We also compute the genus two
Heisenberg vector $n$-point function and show that the Virasoro vector one point function satisfies a genus two Ward identity. We compare our results with those obtained in the companion paper, when
a pair of tori are sewn together, and show that the partition functions are not compatible in the neighborhood of a two-tori degeneration point. The \emph{normalized} partition functions of a lattice
theory $V_L$ \emph{are} compatible, each being identified with the genus two theta function of $L$. | {"url":"http://aran.library.nuigalway.ie/xmlui/handle/10379/2427","timestamp":"2014-04-19T02:17:09Z","content_type":null,"content_length":"20702","record_id":"<urn:uuid:e4d71344-fa54-4571-a466-51c0bbd9d205>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00121-ip-10-147-4-33.ec2.internal.warc.gz"} |
MODFLOW - USGS Modular Three-Dimensional Ground-Water Flow Model - Environmental Software
MODFLOW is the name that has been given the USGS Modular Three-Dimensional Ground-Water Flow Model. Because of its ability to simulate a wide variety of systems, its extensive publicly available
documentation, and its rigorous USGS peer review, MODFLOW has become the worldwide standard ground-water flow model. MODFLOW is used to simulate systems for water supply, containment remediation and
mine dewatering. When properly applied, MODFLOW is the recognized standard model used by courts, regulatory agencies, universities, consultants and industry.
The main objectives in designing MODFLOW were to produce a program that can be readily modified, is simple to use and maintain, can be executed on a variety of computers with minimal changes, and has
the ability to manage the large data sets required when running large problems. The MODFLOW report includes detailed explanations of physical and mathematical concepts on which the model is based and
an explanation of how those concepts were incorporated in the modular structure of the computer program. The modular structure of MODFLOW consists of a Main Program and a series of highly-independent
subroutines called modules. The modules are grouped in packages. Each package deals with a specific feature of the hydrologic system which is to be simulated such as flow from rivers or flow into
drains or with a specific method of solving linear equations which describe the flow system such as the Strongly Implicit Procedure or Preconditioned Conjugate Gradient. The division of MODFLOW into
modules permits the user to examine specific hydrologic features of the model independently. This also facilitates development of additional capabilities because new modules or packages can be added
to the program without modifying the existing ones. The input/output system of MODFLOW was designed for optimal flexibility.
Ground-water flow within the aquifer is simulated in MODFLOW using a block-centered finite-difference approach. Layers can be simulated as confined, unconfined, or a combination of both. Flows from
external stresses such as flow to wells, areal recharge, evapotranspiration, flow to drains, and flow through riverbeds can also be simulated.
The following packages are also included in most versions of MODFLOW.
TRANSIENT LEAKAGE - The TLK1 Package is a new method of simulating transient leakage in the MODFLOW model. It solves these equations that describe the flow components across the upper and lower
boundaries of confining units. The exact equations are approximated to allow efficient solution for the flow components. The flow components are incorporated into the finite-difference equations for
model cells that are adjacent to confining units. Confining-unit properties can differ from cell to cell and a confining unit need not be present at all locations; however, a confining unit must be
bounded above and below by model layers in which head is calculated or specified.
IBS1 (Compaction Package) - This recent addition to MODFLOW permits calculation of both elastic and inelastic release of water from fine-grained beds. This is especially useful in areas where land
surface is subject to subsidence.
CHD1 (Time-Variant Specified-Head Package) - This package for MODFLOW permits specification of fixed head for boundary cells that vary from time step to time step during a stress period.
STR1 (Streamflow Routing Package) - The Stream Package permits representation of intermittent streams in MODFLOW. It is especially useful in systems in the headwaters of small streams. The program
limits the amount of ground-water recharge to the available streamflow. It permits two or more streams to merge into one with flow in the merged stream equal to the sum of the tributary flows. The
program also permits diversions from streams.
PCG2 (Preconditioned Conjugate Gradient Solver) - PCG2 uses the preconditioned conjugate gradient method to solve the equations produced by MODFLOW for hydraulic head. Linear or nonlinear flow
conditions may be simulated. PCG2 includes two preconditioning options: modified incomplete Cholesky preconditioning which is efficient on scalar computers; and polynomial preconditioning which
requires less computer storage and, with modifications that depend on the computer used, is most efficient on vector computers. Convergence of the solver is determined using both head-change and
residual criteria. Nonlinear problems are solved using Picard iterations.
ZONEBUDGET - The MODFLOW Zonebudget Package calculates subregional water budgets using results from the USGS MODFLOW Model. It uses cell-by-cell flow data saved by the model in order to calculate the
budgets. Subregions of the modeled region are designated by zone numbers. The user assigns a zone number for each cell in the model. Composite zones can also be defined as combinations of the numeric
BCF3 - As originally published, MODFLOW could simulate the desaturation of variable-head model cells which resulted in their conversion to no-flow cells but could not simulate the resaturation of
cells. That is, a no-flow cell could not be converted to variable head. However, such conversion is desirable in many situations. For example, one might wish to simulate pumping that desaturates some
cells followed by the recovery of water levels after pumping is stopped. This program allows cells to convert from no-flow to variable-head. A cell is converted to variable head based on the head at
neighboring cells.
GFD1 (Generalized Finite-Difference Package) - This package for the advanced user of MODFLOW permits specification of interblock conductance. It is essential for use with RAD-MOD.
RAD-MOD - A preprocessor for assembling files needed to use MODFLOW to simulate radial flow towards a well. Although MODFLOW permits simulation of flow toward a well, it does so with a rectilinear
grid. RAD-MOD permits simulation using a two-dimensional cross section.
HORIZONTAL FLOW BARRIER PACKAGE - This package for MODFLOW simulates thin, vertical low-permeability geologic features that impede the horizontal flow of ground water. These geologic features are
approximated as a series of horizontal-flow barriers conceptually situated on the boundaries between pairs of adjacent cells in the finite-difference grid. The key assumption underlying this package
is that the width of the barrier is negligibly small in comparison with the horizontal dimensions of the cells in the grid. Barrier width is not explicitly considered in the package but is included
implicitly in a hydraulic characteristic defined as either (1) barrier transmissivity divided by barrier width if the barrier is in a constant-transmissivity layer or (2) barrier hydraulic
conductivity divided by barrier width if the barrier is in a variable-transmissivity layer. Furthermore, the barrier is assumed to have zero storage capacity. Its sole function is to lower the
horizontal branch conductance between the two cells that is separates.
MODFLOW is most appropriate in those situations where a relatively precise understanding of the flow system is needed to make a decision. MODFLOW was developed using the finite-difference method. The
finite-difference method permits a physical explanation of the concepts used in construction of the model. Therefore, MODFLOW is easily learned and modified to represent more complex features of the
flow system.
A large amount of information and a complete description of the flow system is required to make the most efficient use of MODFLOW. In situations where only rough estimates of the flow system are
needed, the input requirements of MODFLOW may not justify its use. To use MODFLOW, the region to be simulated must be divided into cells with a rectilinear grid resulting in layers, rows and columns.
Files must then be prepared that contain hydraulic parameters (hydraulic conductivity, transmissivity, specific yield, etc.), boundary conditions (location of impermeable boundaries and constant
heads), and stresses (pumping wells, recharge from precipitation, rivers, drains, etc.). | {"url":"http://www.scisoftware.com/environmental_software/detailed_description.php?products_id=172","timestamp":"2014-04-20T16:03:23Z","content_type":null,"content_length":"24283","record_id":"<urn:uuid:3566047c-a4fb-4ad5-a5d0-bf25d56f49e3>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00144-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - Quantum confusion
The most useful way to think of QM is not as a description of reality, but as a set of rules that tells us how to calculate probabilities of possible results of experiments.
If "QM" refers to the theory defined by the standard Hilbert space axioms, then there's nothing in QM that tells us unambiguously what the system "is doing" at times between state preparation and
The "interpretations of QM" are attempts to turn QM into a description of reality. The most straightforward way to do that is to simply add new axioms on top of the ones that define QM, in order to
give us a picture of what "actually happens" without changing the theory's predictions. The fact that the predictions are unchanged means that these interpretations are unfalsifiable, so they are
strictly speaking not a part of science.
Another approach, which is also considered to be a part of "interpretations of QM", is to find another theory, that makes the same predictions but is defined by a different set of axioms, and see if
it suggests a different picture of what "actually happens". A good example is de Broglie-Bohm pilot wave theory. | {"url":"http://www.physicsforums.com/showpost.php?p=3053159&postcount=5","timestamp":"2014-04-19T07:36:39Z","content_type":null,"content_length":"8473","record_id":"<urn:uuid:d56f7fc7-7d34-4354-a8bf-197a9e238ff5>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00417-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Purplemath Forums
This is a problem being asked in a writing/problem solving question. My classmates and I are having some issues.. Can anyone help?! Thanks!
Your text indicates that the standard form of a linear equation in two variables is
Ax + By = C
Note: Even though we are using 5 letters, only the x and y represent variables.
Note: Your book also says that both A and B cannot be equal to zero.
1. Consider the special case where the numerical value for "A" is equal to zero. NOTE* we are NOT saying that the value of x is zero**
Give a complete explanation of the special linear equation this would yield and why.
2. Consider the special case where the numerical value for "B" is equal to zero.
Give a complete explanation of the special linear equation this would yield and why.
3. The text indicates that when the x value of an ordered pair is equal to 0, the y value will be an intercept.
Give an explanation of why this occurs graphically or algebraically.
4. Finish this statement..... when the x value of an ordered pair is equal to 0, the y value is the y intercept, therefore: when the ________value of an ordered pair____.
kristin.mauller wrote:1. Consider the special case where the numerical value for "A" is equal to zero.
Give a complete explanation of the special linear equation this would yield and why.
What did you get when you made A equal zero and graphed a few lines?
kristin.mauller wrote:2. Consider the special case where the numerical value for "B" is equal to zero.
Give a complete explanation of the special linear equation this would yield and why.
What did you get when you made B equal zero and graphed a few lines?
kristin.mauller wrote:3. The text indicates that when the x value of an ordered pair is equal to 0, the y value will be an intercept.
Give an explanation of why this occurs graphically or algebraically.
What did you notice when you graphed various lines and looked at where they crossed the x- and y-axes? What did you notice about the coordinates of the x-intercepts? What did you notice about the
y-intercepts? What did you notice about the value(s) of x for points on the y-axis? What did you notice about the value(s) of y for points on the x-axis?
kristin.mauller wrote:4. Finish this statement..... when the x value of an ordered pair is equal to 0, the y value is the y intercept, therefore: when the ________value of an ordered pair____.
This is not a complete statement -- there is no "is" for the second half -- so there is no way to know what they're wanting. Sorry.
Please consult with your instructor regarding the missing information. Thank you! | {"url":"http://www.purplemath.com/learning/viewtopic.php?f=8&t=2150&p=6339","timestamp":"2014-04-19T00:00:10Z","content_type":null,"content_length":"21169","record_id":"<urn:uuid:12f1fed5-f1a8-44f0-938f-43618660074a>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00594-ip-10-147-4-33.ec2.internal.warc.gz"} |
Amos Storkey
This is one of my favourite brain teasers. It was first introduced to me some years ago in a Cambridge pub garden during (if my memory serves me well) the Bayesian Methods workshop at the Neural
Networks and Machine Learning session at the Newton Institute. There are many interpretations, but needless to say, I prefer the Bayesian ones.
The scene
You are taking part in a game show. The host introduces you to two envelopes. He explains carefully that you will get to choose one of the envelopes, and keep the money that it contains. He makes
sure you understand that each envelope contains a cheque for a different sum of money, and that in fact, one contains twice as much as the other. The only problem is that you don't know which is
The host offers both envelopes to you, and you may choose which one you want. There is no way of knowing which has the larger sum in, and so you pick an envelope at random (equiprobably). The host
asks you to open the envelope. Nervously you reveal the contents to contain a cheque for 40,000 pounds.
The host then says you have a chance to change your mind. You may choose the other envelope if you would rather. You are an astute person, and so do a quick sum. There are two envelopes, and either
could contain the larger amount. As you chose the envelope entirely at random, there is a probability of 0.5 that the larger check is the one you opened. Hence there is a probability 0.5 that the
other is larger. Aha, you say. You need to calculate the expected gain due to swapping. Well the other envelope contains either 20,000 pounds or 80,000 pounds equiprobably. Hence the expected gain is
0.5x20000+0.5x80000-40000, ie the expected amount in the other envelope minus what you already have. The expected gain is therefore 10,000 pounds. So you swap.
Does that seem reasonable? Well maybe it does. If so consider this. It doesn't matter what the money is, the outcome is the same if you follow the same line of reasoning. Suppose you opened the
envelope and found N pounds in the envelope, then you would calculate your expected gain from swapping to be 0.5(N/2)+0.5(2N)-N = N/4, and as this is greater than zero, you would swap.
But if it doesn't matter what N actually is, then you don't actually need to open the envelope at all. Whatever is in the envelope you would choose to swap. But if you don't open the envelope then it
is no different from choosing the other envelope in the first place. Having swapped envelopes you can do the same calculation again and again, swapping envelopes back and forward ad-infinitum. And
that is absurd.
That is the paradox. A simple mathematical puzzle. The question is: What is wrong? Where does the fallacy lie, and what is the problem?
My answer
There have been comments made by many people on this problem, most of which provide good solutions to the problem, but some of which are just plain wrong! Those which are right generally amount to
much the same idea. My attempt at an answer can be found on the Two Envelope Paradox Solution page. | {"url":"http://homepages.inf.ed.ac.uk/amos/doubleswap.html","timestamp":"2014-04-21T09:51:13Z","content_type":null,"content_length":"6401","record_id":"<urn:uuid:c3db216e-6ac1-4791-b9a3-05639e8668a9>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00435-ip-10-147-4-33.ec2.internal.warc.gz"} |
April 2000
<<March April May>>
In the first year the government collects a net of qrst/24.
In subsequent years it collects a net of 0.
In general, if after n years the only nonzero amounts were
at the n+1 positions 0, b1, b2, ..., bn, then the government's
net take during the first year is b1*b2*...*bn/n!.
Consider the polynomial f(x) whose coefficient f[i] of x^i is the
original net worth of the resident at position i;
here and throughout, "x^i" denotes exponentiation.
Each year the effect of the taxation is to multiply f(x) by (1-x), since the new value of the coefficient f[i] is the old value
of f[i] - f[i-1]. After four years, the new polynomial is
g(x)=f(x)*(1-x)^4. We are told that g(x) has only five nonzero
coefficients, including g[0]=1.
Let the nonzero coefficients be g[q]=b, g[r]=c, g[s]=d, g[t]=e.
Because g(x) is divisible by (1-x)^4, we know that g and its first
three derivatives all vanish at x=1: g(1)=g'(1)=g''(1)=g'''(1)=0.
The fourth derivative of g at 1 is equal to 24 times f(1),
and in turn f(1) is the sum of the coefficients of f,
that is, the total initial wealth of the residents.
After one year, the total wealth of the residents is
(f(x)*(1-x)) evaluated at x=1, that is, 0. The government essentially collects all the wealth in the first year.
So we have the linear equations (in b,c,d,e):
1 + b + c + d + e = 0
qb + rc + sd + te = 0
q(q-1)b + r(r-1)c + s(s-1)d + t(t-1)e = 0
q(q-1)(q-2)b + r(r-1)(r-2)c + s(s-1)(s-2)d + t(t-1)(t-2)e = 0
q(q-1)(q-2)(q-3)b + ... + t(t-1)(t-2)(t-3)e = 24*f(1).
Or, after subtracting constant multiples of some equations from
others, we get the simpler set of equations:
1 + 1*b + 1*c + 1*d + 1*e = 0
0 + q*b + r*c + s*d + t*e = 0
0 + q^2*b + r^2*c + s^2*d + t^2*e = 0
0 + q^3*b + r^3*c + s^3*d + t^3*e = 0
0 + q^4*b + r^4*c + s^4*d + t^4*e = 24*f(1).
The coefficients on the left-hand form a matrix M.
Applying M^(-1) to the vector (0,0,0,0,24*f(1))
we will recover 1 as well as the unknowns b,c,d,e.
We are interested in the upper right-hand entry of M^(-1)
(that is, its (1,5) entry).
By Cramer's rule, this is given by the ratio of two determinants:
in the numerator, the determinant of the upper right-hand 4x4
submatrix of M (up to sign),
q r s t
q^2 r^2 s^2 t^2
q^3 r^3 s^3 t^3
and in the denominator, the determinant of M itself,
which is the same as the determinant of its lower right-hand 4x4
q r s t
q^2 r^2 s^2 t^2
q^3 r^3 s^3 t^3
q^4 r^4 s^4 t^4
The latter differs from the former in that the first column has been multiplied by q, the second by r, and so on.
Putting it all together, we find
1 = (1/(q*r*s*t))(24*f(1)), or
f(1) = q*r*s*t/24 = the government's first-year profit.
Adapted from the 1986 Putnam Examination, problem A-6.
If you have any problems you think we might enjoy, please send them in. All replies should be sent to: webmster@us.ibm.com | {"url":"http://domino.research.ibm.com/Comm/wwwr_ponder.nsf/Solutions/April2000.html","timestamp":"2014-04-19T07:12:04Z","content_type":null,"content_length":"17418","record_id":"<urn:uuid:4fa0cf8a-2c6b-4252-96c9-d684995a6cab>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00317-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] Current ufunc signatures for review
Travis E. Oliphant oliphant@enthought....
Tue May 27 14:53:59 CDT 2008
Charles R Harris wrote:
> Hi All,
> Here is the current behavior of the ufuncs and some comments. They
> don't yet cover mixed types for binary functions,
> but when they do we will see things like:
> In [7]: power(True,10)
> Out[7]:
> array([ 0.5822807 , 0.66568381, 0.11748811, 0.97047323, 0.60095205,
> 0.81218886, 0.0167618 , 0.80544138, 0.59540082, 0.82414302])
> Which looks suspect ;)
I don't understand this. Like Robert, I don't get this output, and I'm
not sure what the point being made is.
> 1) Help strings on ufuncs don't work. This seems to be a problem with
> the help function, as
> printing the relevant __doc__ works fine. The docstrings are
> currently defined in
> code_generators/generate_umath.py and add_newdoc doesn't seem to
> work for them.
This has been known for a long time. It is the reason that I wrote
numpy.info. I should push for the Python help to change, but I'm not
sure what problems that might create.
> 2) Complex divmod(), // and % are deprecated, should we make them
> raise errors?
Sometimes you have float data that is complex because of an intermediate
calculation. I don't think we should cause these operations not to
work on Numeric data just because Python deprecated them. I'm actually
not sure why Python deprecated these functions.
> 3) The current behavior of remainder for complex is bizarre. Nor does
> it raise a deprecation warning.
Please show what you mean:
>>> x = array([5.0, 3.0],'D')
>>> x
array([ 5.+0.j, 3.+0.j])
>>> x % 3
__main__:1: DeprecationWarning: complex divmod(), // and % are deprecated
array([(2+0j), 0j], dtype=object)
I don't get why it should be deprecated.
> 4) IMHO, absolute('?') should return 'b'
> 5) Negative applied to '?' is equivalent to not. This gives me mixed
> feelings; the same functionality
> is covered by invert and logical_not.
Yes, it is true. Do you have another suggestion as to what negative
should do?
> 6) The fmod ufunc applied to complex returns AttributeError. Shouldn't
> it be a TypeError?
Maybe, but the error comes from
complex-> promoted to object -> search for fmod method on Python object
of complex type -> raise Attribute Error.
Some special-case error re-mapping would have to be done to change it.
> 7) Should degrees and radians work on complex? Hey, they work on
> booleans and it's just scaling.
Sure -- for the same reason that floor_divide (//) and remainder (%)
should work on complex (I realize that right now, the default object
implementation is called for such cases).
I didn't see anything of alarm in the list of signatures that you
provided. If you have something of concern, please pick it out.
Thanks for the close-up examination of the behavior.
> ------------------------------------------------------------------------
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-May/034514.html","timestamp":"2014-04-18T13:45:59Z","content_type":null,"content_length":"6337","record_id":"<urn:uuid:9b1aa07f-e0bb-476d-a5c5-ebc2b2b33e58>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00153-ip-10-147-4-33.ec2.internal.warc.gz"} |
A certain store sells small, medium, and large toy trucks i
Author Message
A certain store sells small, medium, and large toy trucks i [#permalink] 08 Jan 2013, 04:36
45% (medium)
Question Stats:
fozzzy (02:24) correct
Director 48% (01:10)
Joined: 29 Nov 2012 wrong
Posts: 936 based on 60 sessions
Followers: 11 A certain store sells small, medium, and large toy trucks in each of the colors red, blue, green, and yellow. The store has an equal number of trucks of each possible
color-size combination. If Paul wants a medium, red truck and his mother will randomly select one the trucks in the store, what is the probability that the truck she selects
Kudos [?]: 157 [0], will have at least one of the two features Paul wants?
given: 543
A. 1/4
B. 1/3
C. 1/2
D 7/12
E. 2/3
Spoiler: OA
Click +1 Kudos if my post helped...
Amazing Free video explanation for all Quant questions from OG 13 and much more http://www.gmatquantum.com/og13th/
GMAT Prep software What if scenarios gmat-prep-software-analysis-and-what-if-scenarios-146146.html
Re: A certain store sells small, medium, and large toy trucks i [#permalink] 08 Jan 2013, 05:04
This post received
Expert's post
fozzzy wrote:
A certain store sells small, medium, and large toy trucks in each of the colors red, blue green, and yellow. The store has an equal number of trucks of each possible
color-size combination. If Paul wants a medium, red truck and his mother will randomly select one the trucks in the store, what is the probability that the truck she selects
will have at least one of the two features Paul wants?
A. 1/4
B. 1/3
C. 1/2
D 7/12
Marcab E. 2/3
Verbal Forum Moderator let there be x number of trucks of all possible color-size combination.
Status: Preparing for probability that the truck she selects will have at least one of the two features Paul wants can be found by subtracting the probability of selecting a truck that doesn't has
the another shot...! either of the property i.e. it is neither red in color nor of medim size from 1.
Joined: 03 Feb 2011 In the diagram attached, the I crossed off all the desired results i.e. cancelled all the possible combination of red-medium.
Posts: 1427 # of remaining outcomes= # of blue circles = # of trucks that don't have either of the desired property.
Location: India No. of colors=4
Concentration: Finance, no. of sizes=3
Total outcomes=12
GPA: 3.75
Remaining outcomes=6
Followers: 108
hence probability that truck is neither red nor of medium size is
Kudos [?]: 491 [9] ,
given: 62 6/12
hence probability that the truck she selects will have at least one of the two features Paul wants will be
1 - 6/12
probability.png [ 9.42 KiB | Viewed 2140 times ]
Prepositional Phrases Clarified|Elimination of BEING| Absolute Phrases Clarified
Rules For Posting
fozzzy Re: A certain store sells small, medium, and large toy trucks i [#permalink] 08 Jan 2013, 06:53
Director Great Explanation! Thanks!
Joined: 29 Nov 2012 _________________
Posts: 936 Click +1 Kudos if my post helped...
Followers: 11 Amazing Free video explanation for all Quant questions from OG 13 and much more http://www.gmatquantum.com/og13th/
Kudos [?]: 157 [0], GMAT Prep software What if scenarios gmat-prep-software-analysis-and-what-if-scenarios-146146.html
given: 543
Re: A certain store sells small, medium, and large toy trucks i [#permalink] 08 Jan 2013, 08:05
Joined: 18 Oct 2011
Assume there are 1 of each giving - 3 sizes x 4 colours = 12 trucks
Posts: 92
Since the q is asking for "at least one" (medium truck or red truck) let's calculate probability of selecting neither a red or medium truck and subtract from 1
Location: United States
Therefore - 1 - 6/12 = 1/2
Concentration: Answer: C
GMAT Date: 01-30-2013
GPA: 3.3
Followers: 2
Kudos [?]: 17 [0],
given: 0
Re: A certain store sells small, medium, and large toy trucks i [#permalink] 08 Jan 2013, 08:22
Expert's post
The key is to take in account whenever see you the word
AT Least
carcass is to subtract the other probability from 1
Moderator In this case you DO NO want the large and the small size so
Joined: 01 Sep 2010 2/3AND
Posts: 2176 (means
Followers: 172 *
Kudos [?]: 1518 [0], ) the other colors so
given: 610
From this
1 - 2/3 * 3/4 = 1 - 6/12 = 6/12 = 1/2
KUDOS is the good manner to help the entire community.
Re: A certain store sells small, medium, and large toy trucks i [#permalink] 27 Jan 2013, 04:24
Joined: 09 Jan 2013
fozzzy wrote:
Posts: 14
A certain store sells small, medium, and large toy trucks in each of the colors red, blue green, and yellow. The store has an equal number of trucks of each possible
Concentration: color-size combination. If Paul wants a medium, red truck and his mother will randomly select one the trucks in the store, what is the probability that the truck she selects
Entrepreneurship, will have at least one of the two features Paul wants?
A. 1/4
Schools: NUS '15 B. 1/3
C. 1/2
GMAT 1: 650 Q45 V34 D 7/12
E. 2/3
GRE 1: 1440 Q790 V650
Please add a comma between blue and green. I mistook it as 1 category!
GPA: 3.76
WE: Other
(Pharmaceuticals and
Followers: 1
Kudos [?]: 0 [0], given:
bumpbot Re: A certain store sells small, medium, and large toy trucks i [#permalink] 07 Mar 2014, 19:21
VP Hello from the GMAT Club BumpBot!
Joined: 09 Sep 2013 Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you
may find it valuable (esp those replies with Kudos).
Posts: 1095
Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email.
Followers: 122
Kudos [?]: 29 [0],
given: 0 GMAT Books | GMAT Club Tests | Best Prices on GMAT Courses | GMAT Mobile App | Math Resources | Verbal Resources
gmatclubot Re: A certain store sells small, medium, and large toy trucks i [#permalink] 07 Mar 2014, 19:21 | {"url":"http://gmatclub.com/forum/a-certain-store-sells-small-medium-and-large-toy-trucks-i-145423.html?sort_by_oldest=true","timestamp":"2014-04-19T15:19:39Z","content_type":null,"content_length":"172589","record_id":"<urn:uuid:68ebd41e-a18f-4fc1-b41a-a27405342368>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00567-ip-10-147-4-33.ec2.internal.warc.gz"} |
CSTA 2012 Reasoning About Weather
Here are links to download both the activities from the workshop, and the rest of the thermal energy unit.
Weather Models:
Air pressure
Water Cycle:
Thermal Energy:
• Thinking About Energy (introduction to energy conservation)
• Energy Cards (sorting categories of energy into energy of motion and energy of position)
• Hot or Cold (thermal equilibrium)
• Energy Transfers (thermal equilibrium)
• Hot Rocks (temperature v. thermal energy)
• Temperature and Thermal Energy
• Hot Rod (conduction)
• Moving Colors (convection)
• Convection - with teacher's guide
• Solar Heating (investigates differential heating of different materials, and angle of sunlight)
• Thermal Energy puzzle (the naked guy problem)
• Thermal Energy Revisited (review of heat transfer using our favorite idiots, Doogie and Kyle)
• Heat Transfer problems (applying conduction, convection and radiation in various situations) | {"url":"http://www.csus.edu/indiv/k/kusnickj/CSTA/CSTA.html","timestamp":"2014-04-18T23:25:02Z","content_type":null,"content_length":"3154","record_id":"<urn:uuid:a828f01c-ea22-4ea4-947b-5eb2e30b38aa>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00411-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: October 1997 [00105]
[Date Index] [Thread Index] [Author Index]
Re: Useful Dumb User Questions,,,
• To: mathgroup at smc.vnet.net
• Subject: [mg9080] Re: [mg8988] Useful Dumb User Questions,[mg9020],[mg9027],[mg9032]
• From: Olivier Gerard <jacquesg at pratique.fr>
• Date: Thu, 9 Oct 1997 01:43:02 -0400
• Sender: owner-wri-mathgroup at wolfram.com
Dear List members,
What follows is about the questions raised by Mark Evans [mg8988] and
[mg9020] and several of the reactions it has already caused.
* About Mathematica Wizards, Condescendant answers
Why do I subscribe to this newsgroup/mailing list ?
Clearly because reading questions from every member
is a good opportunity to check my own knowledge of
Mathematica and motivate me going deeper in the use
and mastering of this programming language (as well
as learning bits of mathematics and physics).
So clearly I benefit from "Dumb User Questions"
and I take the time to answer some of them.
Clearly, there are people a lot more systematic
and fast to draw than me on this job. To quote
a few David Withoff (of WRI), Paul Abbott, Allan Hayes.
And every time I read their answers, I found them
kind to their interlocutors, sometimes taking
great pain to build a workable example or decipher
the original question.
* Mathematical knowledge in Mathematica ?
Matthias Weber wrote:
> Ideally, one would like a formal proof of Mathematica's claims.
> This being too much (really ? I don't know.), it would be nice
> to be able to get some sort of information what the more complex
> functions of Mathematica were doing in a special situation. I figure
> that Simplify uses certain sets of rules, and it would be nice if
> one would be notified about which sets were used.
> I know that I am asking in fact for a more complicated system,
> one which would be even more difficult to understand and to program.
First: formal or peer-reviewed certification of Mathematica algorithms
would certainly be a very strong point for WRI both as a public mark of
excellence and as a token of openness towards users and the research community.
Just quoting (but without bibliography) bunches of algorithm names as
it is done in the Book is not enough. There is certainly a middle
point to find between protecting industrial secrets and giving a fair
chance to researcher and users to give a valuable input to WRI in return.
This would also certainly help researchers recognize WRI achievements
and the help Mathematica has provided to so many people.
Second: Among Mathematica 3.0 Demos, there is a step by step derivation code.
I find it really nice for teaching and self-teaching purposes.
For doing this, the author had to rewrite a derivate code,
inserting proper hooks and messages.
What was perfectly doable in this case would have not been practical
for commands like Simplify or Integrate which concentrate so much
knowledge and experience. Clearly, only people at WRI can do that
in these cases. It would not complicate unduly the use of Mathematica.
Just having a selectable level of technicality and a level of detail for
comments on processes being tried. As the Book points out, in many
cases algorithms suitable to computer programs would not be sensible
to try by hand and inversely you cannot count on human insight to
direct an internal process but this is not a reason to discard such
a feature altogether.
Third: It leads naturally to an interesting problem in Mathematica:
accessing the mathematical knowledge it contains. Large Black Boxes
like Integrate or Sum or DSolve are organized in a competitive spirit:
"Do everything you can to get a definitive answer but if you do not
succeed just leave it alone".
A computer algebra system or a human being cannot solve every mathematical
problem one can dream of. But a human learns a lot asking questions
and partial answers are informative. If we want to make Mathematica
a more pleasant system to use in many situations we must learn from
its (sometimes unsuccessful) tentatives to solve our question and
make it a way to share the scientific knowledge it contains which was
accumulated by generations of people for several thousands of years.
This is why I propose a new series of commands, something we could
call QSolve, QDSolve, QIntegrate, etc... (to mimic the current N
prefix of numerical versions) which would analyze the
input and give as many conclusions on its nature and qualitative
aspects as possible (Q is for Query or Question).
A trivial example:
QDSolve[ y''[x] + y'[x] + y[x] == f[x], y[x], x]
would give something like:
"This looks like a second-order linear differential equation.
The unknown function is y and the variable is x.
There is a 2-dimensional set of solutions. You can specify
a precise solution by giving 2 initial conditions.
I will not be able to integrate it completely for y[x] until I have
more information on f[x]. f[x] should be continuous."
And much more...
I am very interested by your opinion on this proposal.
* Making Mathematica easy to use and learn.
Compared to improving the inner workings of the Kernel and the
Front End, there is much that every Mathematica user can do -- and
especially members of this list -- to make Mathematica easier to learn
and use for a variety of publics. And first make Notebooks
about subjects you know, you teach or work on. Try to take advantage
of the difficulties you eventually learning Mathematica language
by writing your experience out. This list is
a place where you can advertize what you have done and have plenty
of benevolent review and beta-testing, initiate teamwork,
ask for resources.
* What is the public of Mathematica ?
Jens-Peer Kuska, after making a lot of other repositionning
comments on Mark Evans' post, wrote (was it a joke?):
> It might be a good idea to put some people from the street say
> - a police men
> - a house wife
> - a school boy (age 9 or 10 years)
> - a taxi driver
> an let them perform some tasks like
> - solving an integral equation
> - solving a partial differential equation
> - drawing the Riemann surface of a polynom equation of order > 5
> - find the eigenfunctions of a helium atom
> monitoring the mistakes will make Mathematica also more intuitive.
It might look ironic, but I see these people as possible Mathematica
users. A police man with a strong interest in math or keen to know
more to help his children learn mathematics or to understand some
aspects of this sophisticated discipline: forensics ; a bright school boy
bored by the progressiveness of the math program wanting to explore
by himself; a house wife (or a house husband) modeling whatever with
Mathematica; a taxi driver with a PhD but no academic position or salary
available (not so rare in East European Countries and more and more frequent
in other countries) wanting to practice. Do you need more examples ?
As Mark Evans pointed out in his second post, the real trouble is
with highly educated people without flexibility towards tools and
Charles loboz wrote:
> I do not like mma interface that much, feels awkward. Still, we are dealing
> here with a product appealing to a very limited market (in comparison with,
> say, Excel)
I must disagree with this sophism. Innovation is not RealPolitik.
I would have liked more details on your feelings and what you would dream
Certainly more comments in a future post.
Olivier Gerard | {"url":"http://forums.wolfram.com/mathgroup/archive/1997/Oct/msg00105.html","timestamp":"2014-04-20T13:22:30Z","content_type":null,"content_length":"41174","record_id":"<urn:uuid:4788bac5-1fe0-41ad-aee6-1116ae523cf0>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00188-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dividing polynomial
I'm having problems dividing the polynomial 2x^5 + 2x^3 + x^2 + 1 by x^2 + 1
Looks straight forward to me. But I don't know what to tell you without seeing what you did. Did you remember to include "0x" in the divisor and " $0x^4$" and " $0x$" in the dividend? How many times
does $x^2$ divide into $2x^5$?
It goes in 2x^3 times but I get x^2 as the next quotient part but the correct quotient should be 2x^3 - 1 according to the answer the book gives
$x^2+0x+1sqrt(2x^5+0x^4+2x^3+x^2+0x+1)$ This is the subtracting parts: The quotient i got is $2x^3+x^2+x+1$ Part 1 $2x^5+0x^4+2x^3$ Part 2 $x^4+x^2$ $x^4+x^3+x^2$ Part 3 $x^3+0x$ $x^3+x^2+x$ Part 4
$x^2+1$ $x^2+x+1$ Final $x$
Yes, $x^2$ divides into $2x^5$$2x^3$ times. And multiplying the entire divisor by $2x^3$ gives $2x^5+ 2x^3$ Now subtract that from the dividend: $2x^5+ 2x^3+ x^2+ 1- (2x^5+ 2x^3)= x^2+ 1$. And now,
of course, $x^2+ 1$ divides into that exactly once: the quotient is $2x^3+ 1$. | {"url":"http://mathhelpforum.com/algebra/175394-dividing-polynomial-print.html","timestamp":"2014-04-16T21:08:18Z","content_type":null,"content_length":"8757","record_id":"<urn:uuid:4b10dc84-d217-4edc-b80c-ebc598d8ad81>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00352-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homeomorphic sets.
Please show that [0,1)x[0,1) is homeomorphic to [0,1]x[0,1). Thank you...
Last edited by johnsomeone; October 8th 2012 at 05:44 AM.
There are a ton of ways to do this. The common approach is to rely on expansions and contractions of line segments, and note that they're not merely continuous, but are also continuous in their
parameters. i.e. For (a,b) -> (c,d), use h(t) = ( (d-c)/(b-a) ) (t-a) + c. If a, b, c, and d are continous functions in s, then H(t,s) is continuous (so long as b(s) = a(s) never happens). In the
diagram above, the f homeomorphism is from the square to the disk. It's just an expansion from the blue segment of the square to the green radius of the circle. It leaves fixed the 4 diagnoals from
the center to a vertex, which happen to be radii. Note that the actual homeomorphisms in the diagram are restrictions of f and f inverse. The g homeomorphism maps the circle to the circle by, in
polar coordinates, expanding the angle in over one range, and contracting it over another. It leaves the center fixed. The definition of g will be split into cases, but equal where those cases
overlap. Thus, although g is obviously continuous, you'd need to invoke a proposition about the continuity of such split cases to prove that g is continuous. Also, g's continuity at the origin might
seem problematic (generally, where the "twisting all comes together" is a bad spot), but is actually trivial to show, as every open ball centered at the origin is invariant under g. If you're asked
to explicitly write down a homeomorphism, you should be able to do so with these functions. If a full proof is required, you might want to establish that they're homeomorphisms by looking at the
"solid" maps, and then use that a continuous bijection from a compact space to a Hausdorff space is a homeomorphism. Then show that the restrictions to those specific spaces are still bijections. You
could also do it by explicitly writing out the inverses. Again, my solution is FAR from the only way to do this problem.
Last edited by johnsomeone; October 8th 2012 at 06:18 AM. | {"url":"http://mathhelpforum.com/differential-geometry/204847-homeomorphic-sets.html","timestamp":"2014-04-18T23:28:10Z","content_type":null,"content_length":"43112","record_id":"<urn:uuid:cbeffc7f-d9a9-4fe7-a95c-f61617149d3b>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00157-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quo Vadis, Graph Theory
- Computational Geometry. Theory and Applications , 1993
"... Unit disk graphs are the intersection graphs of unit diameter closed disks in the plane. This paper reduces SATISFIABILITY to the problem of recognizing unit disk graphs. Equivalently, it shows
that determining if a graph has sphericity 2 or less, even if the graph is planar or is known to have s ..."
Cited by 78 (1 self)
Add to MetaCart
Unit disk graphs are the intersection graphs of unit diameter closed disks in the plane. This paper reduces SATISFIABILITY to the problem of recognizing unit disk graphs. Equivalently, it shows that
determining if a graph has sphericity 2 or less, even if the graph is planar or is known to have sphericity at most 3, is NP-hard. We show how this reduction can be extended to 3 dimensions, thereby
showing that unit sphere graph recognition, or determining if a graph has sphericity 3 or less, is also NP-hard. We conjecture that K-sphericity is NP-hard for all fixed K greater than 1. 1
Introduction A unit disk graph is the intersection graph of a set of unit diameter closed disks in the plane. That is, each vertex corresponds to a disk in the plane, and two vertices are adjacent in
the graph if the corresponding disks intersect. The set of disks is said to realize the graph. Of course, the unit of distance is not critical, since the disks realize the same graph even if the | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2753943","timestamp":"2014-04-21T04:49:40Z","content_type":null,"content_length":"12304","record_id":"<urn:uuid:179da037-b543-4d34-94a5-5a4a33633fab>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00630-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
help please
• one year ago
• one year ago
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
this is meant for you to visualize the horizontal asymptote is the horizontal line that the graph approaches as it goes to the right \(\infty\) and to the left \(-\infty\)
Best Response
You've already chosen the best response.
they have marked it for you with a dotted line the vertical asymptote is the vertical dotted line. it is hard for me to read, but i think that is the line \(x=-1\)
Best Response
You've already chosen the best response.
there are no "oblique asymptotes" otherwise you would see a non vertical line dotted
Best Response
You've already chosen the best response.
the domain is all real numbers except where you see that you cannot plug in a value of \(x\) which not coincidentally is the vertical asymptote in other words it is all real numbers except for \
Best Response
You've already chosen the best response.
For domain, how far can x go before being intervened by an asymptote. Same for y. Intercepts, when an x value touches the x-axis or the y value touches the y axis. And you should know asymptotes
and what are they. (DOTTED LINES). Give equations and be done with it. Considering this is a test based on what you learned over lessons, I cannot help you further.
Best Response
You've already chosen the best response.
and similarly you can see from the graph that the function approaches, but never actually achieves the value of \(y=3\) which is the horizontal asymptote
Best Response
You've already chosen the best response.
ohh i see
Best Response
You've already chosen the best response.
and finally the "intercepts" are where the graph crosses the \(x\) axis, and where the function crosses the \(y\) axis. it looks like they are at the same place, since the graph crosses both axes
at \((0,0)\)
Best Response
You've already chosen the best response.
so domain is (-infinity to infinity?
Best Response
You've already chosen the best response.
if you have any question let me know. no
Best Response
You've already chosen the best response.
all real numbers except -1
Best Response
You've already chosen the best response.
domain is all real numbers except \(x=-1\)
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
and range all real numbers except 3?
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
x-intercept -1?
Best Response
You've already chosen the best response.
or 0?
Best Response
You've already chosen the best response.
i think x-intercept is -1 and y intercept is 3
Best Response
You've already chosen the best response.
no the intercept is where it crosses the \(x\) axis, for the \(x\) intercept it is not the same as the asymptotes
Best Response
You've already chosen the best response.
so 0
Best Response
You've already chosen the best response.
it crosses the \(x\) axis at \((0,0)\)
Best Response
You've already chosen the best response.
your answer should be an ordered pair \((0,0)\)
Best Response
You've already chosen the best response.
well it asks only for the x intercepts
Best Response
You've already chosen the best response.
no actually it says "the intercepts" but it is the same answer for both
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
oh i see well zero for both
Best Response
You've already chosen the best response.
ok and vertical is -1 and horizontal?
Best Response
You've already chosen the best response.
\(y=3\) is the equation of the dotted horizontal line
Best Response
You've already chosen the best response.
ooh wow
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
i have another problem
Best Response
You've already chosen the best response.
post, you will get lots of answers some may even be right
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50b25b64e4b09749ccac92ce","timestamp":"2014-04-20T08:39:59Z","content_type":null,"content_length":"106976","record_id":"<urn:uuid:5cf9c392-5b0b-456c-8c25-f1cab5a84b8c>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00358-ip-10-147-4-33.ec2.internal.warc.gz"} |
An Introduction to Two-Part Analysis Questions in Integrated Reasoning
TPA questions measure your ability to solve complex problems. This is designed to mimic complex, multi-part, real-world problems. MBA.com describes the Two-Part format as testing “the ability to
evaluate trade-offs, solve simultaneous equations, and discern relationships between two entities.” Because of this, your acquired skills in solving complex algebra and in discerning harder word
problems will definitely come in handy on TPA! Let’s practice one together!
Question #1 – Better Home Mortgage has 1200 realtors on staff, while Dream Lenders has 2400. Each year, the number of realtors is increasing by a fixed amount. A housing market specialist suggests
that if each organization keeps its current rate of increase, they will have the same number of realtors on staff after three years, and that subsequently Better Home Mortgage will have more
Identify the number of increased realtors per year for Better Home Mortgage and for Dream Lenders that are consistent with the specialist’s prediction. Make only one selection in each column.
Better Home Mortgage Dream Lenders
The correct answer is 410 and 10. Since Better Home Mortgage starts with half as many realtors, but overtakes Dream Lenders in just three years, then Better Home Mortgage must have the higher rate of
realtor increase. We can set up an equation since we know the number of realtors will be equal after three years.
1200 + 3x = 2400 + 3y
3x = 1200 + 3y
x = 400 + y
The only option where Better Home’s rate is 400 higher than Dream Lenders’ is 10 and 410.
Question #2 – Identify the number of lenders less than Better Home Mortgage Dream Lenders is adding to its organization annually if the specialist was mistaken and the number of realtors would be
equal after just two years. Also identify the number of lenders Better Home Mortgage will have after two years if they add 115% of that difference annually.
Dream Lender Better Home Mortgage
The correct answer is 600 and 690. The new equation becomes: 1200 + 2x = 2400 + 2y
600 + x = 1200 + y
x = 600 + y
That means each year Better Home Mortgage is adding 600 more realtors than Dream Lenders, so Dream Lenders is adding 600 fewer. Better Home Mortgage will add 5% of “that difference” annually, or
600. 115% of 600 = 1.15(600) = 690.
Remember, TPA questions may focus solely on Quantitative concepts, or may include aspects of Verbal such as inference/reading comprehension. Luckily, as you study for the other sections of the GMAT,
you’ll already be developing the skills you need to tackle IR!
Plan on taking the GMAT soon? We have GMAT prep courses starting all the time. And, be sure to find us on Facebook and Google+, and follow us on Twitter!
Vivian Kerr is a regular contributor to the Veritas Prep blog, providing advice to help students better prepare for the GMAT and the SAT. | {"url":"http://www.veritasprep.com/blog/2013/06/an-introduction-to-two-part-analysis-questions-in-integrated-reasoning/","timestamp":"2014-04-19T11:58:08Z","content_type":null,"content_length":"48237","record_id":"<urn:uuid:b2af956c-4949-491a-8ff3-8c4b601c1097>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00338-ip-10-147-4-33.ec2.internal.warc.gz"} |
Course Information
Course Code MATH 4P61
(also offered as COSC 4P61)
Course Title Theory of Computation
Description Regular languages and finite state machines: deterministic and non-deterministic machines, Kleene's theorem, the pumping lemma, Myhill-Nerode Theorem and decidable questions.
Context-free languages: generation by context-free grammars and acceptance by pushdown automata, pumping lemma, closure properties, decidability. Turing machines: recursively enumerable
languages, universal Turing machines, halting problem and other undecidable questions.
Course Lectures, 3 hours per week.
Restrictions open to COSC (single or combined) majors.
Prerequisite MATH 1P67.
Notes MATH students may take this course with permission of Department. | {"url":"http://www.brocku.ca/registrar/guides/course_details.php?code=MATH%204P61&ay=2013&as=FW&at=EX&al=ALL&sc=MATH&ep=December&ct=5","timestamp":"2014-04-18T18:30:25Z","content_type":null,"content_length":"2143","record_id":"<urn:uuid:0885479f-52b7-4c3c-983c-fde649a99935>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00242-ip-10-147-4-33.ec2.internal.warc.gz"} |
If a paper has been published, the files made available here are mostly preprints and usually slightly different from the official published version.
A Balan, A Kurz, J Velebil: Positive Fragments of Coalgebraic Logics . CALCO 2013. Most recent version on arxiv.
A Kurz, A Palmigiano: Epistemic Updates on Algebras. Logical Methods in Computer Science, 2013.
M. Bilkova, A. Kurz, D. Petrisan, J. Velebil: Relation lifting, with an application to the many-valued cover modality . Logical Methods in Computer Science, 2013.
A Kurz, D Petrisan, P Severi, F-J de Vries: Nominal Coalgebraic Data Types with Applications to Lambda Calculus . Logical Methods in Computer Science, 2013.
M Bonsangue, H Hansen, A Kurz, J Rot: Presenting Distributive Laws . CALCO 2013.
A. Kurz, D. Petrisan, P. Severi and F.-J. de Vries : An Alpha-Corecursion Principle for the Infinitary Lambda Calculus. CMCS 2012.
A. Kurz, T. Suzuki, E. Tuosto: A Characterisation of Languages on Infinite Alphabets with Nominal Regular Expressions. IFIP TCS 2012.
A. Kurz, J. Rosicky: Strongly complete logics for coalgebras. LMCS 8 (3:14) 2012. (The old draft from July 2006 for reference: .ps, .pdf, .dvi).
C. Kupke, A. Kurz, Y. Venema: Completeness for the coalgebraic cover modality. LMCS 8 (3:14) 2012. (The conference version from AiML 2008 for reference: .pdf )
A. Kurz, T. Suzuki, E. Tuosto: On Nominal Regular Languages with Binders. Fossacs 2012.
A. Kurz, J. Velebil: Enriched logical connections. Appl. Categ. Structures, online first, 23 September 2011. pdf
M. Bilkova, A. Kurz, D. Petrisan, J. Velebil: Relation Liftings on Preorders and Posets. CALCO'11 Updated version with proofs, September 2012.
A Balan, A Kurz: Finitary Functors: From Set to Preord and Poset. CALCO'11
J. Velebil, A. Kurz: Equational presentations of functors and monads . Mathematical Structures in Computer Science (2011) 21(2):363-381. pdf
A. Kurz, T. Suzuki, E. Tuosto: Towards Nominal Formal Languages . February 2011.
A. Kurz, T. Suzuki, E. Tuosto: Nominal Monoids. Technical Report CS-10-004, University of Leicester, October 2010.
A. Kurz, A. Palmigiano, Y.Venema: Coalgebra and Logic: A Brief Overview. Editorial of a Special Issue on Coalgebra and Logic: J Logic Computation (2010) 20(5): 985-990 doi:10.1093/logcom/exn094 .
A. Kurz, Y.Venema: Coalgebraic Lindström Theorems. Advances in Modal Logic, Moscow 2010. ( .pdf )
A. Kurz, D. Petrisan, J. Velebil: Algebraic Theories over Nominal Sets. (arXiv )
A. Balan, A. Kurz: On coalgebras over algebras. CMCS, Cyprus 2010. (arXiv ) (Revised and extended version to appear in TCS preprint)
V. Ciancia, A. Kurz and U. Montanari: Families of Symmetries as Efficient Models of Resource Binding . CMCS, Cyprus 2010.
A. Kurz, D. Petrisan: On Universal Algebra over Nominal Sets. . (.pdf ) Mathematical Structures in Computer Science 20:285-318 (2010)
A. Kurz, D. Petrisan: Presenting functors on many-sorted varieties and applications. (.pdf ) To appear in Information and Computation.
A. Kurz, R. Leal: Equational Coalgebraic Logic. MFPS XXV, Oxford 2009. (.pdf ) Revised as Modalities in the Stone age: A comparison of coalgebraic logics. To appear in the special issue of MFPS
XXV. (.pdf )
A. Kurz, M. Lenisa, A. Tarlecki: Algebra and Coalgebra in Computer Science. Third International Conference, CALCO 2009, Udine, Italy, September 7-10, 2009. Proceedings. Lecture Notes in Computer
Science, Volume 5728. (springerlink)
A. Kurz, R. Leal: Equational Coalgebraic Logic. MFPS XXV, Oxford 2009. (.pdf )
C. Cirstea, A. Kurz, D. Pattinson, L. Schröder, Y. Venema: Modal Logics are Coalgebraic. BCS Visions in Computer Science 2008. (.pdf )
G. Bezhanishvili, N. Bezhanishvili, D. Gabelaia, A. Kurz: Bitopological Duality for Distributive Lattices and Heyting Algebras. Mathematical Structures in Computer Science, Vol. 20, Issue 03, pp.
359-393, 2010 . (.pdf )
C. Kupke, A. Kurz, Y. Venema: Completeness of the finitary Moss logic. AiML 2008. (.pdf )
A. Kurz, D. Petrisan: Functorial Coalgebraic Logic: The case of many-sorted varieties. CMCS 2008. (.pdf )
M. Bonsangue, A. Kurz: Pi-Calculus in Logical Form. LICS 2007. (.pdf )
A. Kurz, J. Rosicky: The Goldblatt-Thomason Theorem for Coalgebras. CALCO 2007. (.pdf )
N. Ghani, A. Kurz: Higher Order Trees, Algebraically. CALCO 2007. (.pdf )
N. Bezhanishvili, A. Kurz: Free modal algebras: a coalgebraic perspective. CALCO 2007. (.pdf )
M. Hammoudeh, A. Kurz, E. Gaura: MuMHR: Multi-path, Multi-hop, Hierarchical Routing. SensorComm 2007. (.pdf )
A. Kurz: Coalgebras and Their Logics. Logic Column of the SIGACT News 37 (2), pp. 57-77, 2006.
M. Bonsangue, A. Kurz: Presenting Functors by Operations and Equations. January 2006 (supersedes drafts from Feb and Oct 2005). (.dvi .ps .pdf). Fossacs 2006.
M. Bonsangue, A. Kurz, I.M. Rewitzky: Coalgebraic representations of distributive lattices with operators. October 2005, to appear in Topology and its Applications. (.dvi .ps .pdf)
C. Kupke, A. Kurz, D. Pattinson: Ultrafilter extensions for coalgebras. CALCO 2005. (.dvi, .ps, .pdf)
M. Bonsangue, A. Kurz: Duality for Logics of Transition Systems. Fossacs 2005. (replaces a draft of October 2004 and extends an abstract presented at TANCL I, Tbilisi 7 - 11 July 2003) (.dvi,
.pdf, .ps)
A. Kurz, J. Rosicky: Weak Factorizations, Fractions and Homotopies. Applied Categorical Structures 13:141-160,2005. Preprint, October 2004. (.dvi, .pdf, .ps)
C. Kupke, A. Kurz, D. Pattinson: Algebraic Semantics for Coalgebraic Modal Logic. CMCS 2004. (.dvi, .pdf (recompiled with diagrams))
A. Kurz, A. Palmigiano: Coalgebras and Modal Expansions of Logics. CMCS 2004. (.dvi, .ps, (.pdf (recompiled with diagrams)))
A. Kurz, J. Rosicky: Operations and Equations for Coalgebras. Mathematical Structures in Computer Science 15:149-166, 2005. Revised and extended version of `Modal Predicates and Coequations',
CMCS'02. (Preprint February 2004 .dvi, .ps)
C. Kupke, A. Kurz, Y. Venema: Stone Coalgebras. In Proceedings of CMCS 2003. Volume 82.1 of ENTCS, Elsevier, 2003. (CMCS'03: .ps, .pdf, May 2003; CWI Technial Report: .pdf, July 2003; Revised:
.ps, October 2003). Revised and extended version appeared in Theoretical Computer Science 327:109-134, 2004. (Preprint May 2004: .dvi, .ps, .pdf)
A.Kurz: Notions of Behaviour and Reachable-Part and their Institutions. In Proceedings of WADT 2002. Volume 2755 of LNCS, Springer, 2003. (.ps)
M.Bidoit, R.Hennicker, A.Kurz: Observational Logic, Constructor-Based Logic and their Duality. CWI Technical Report SEN-R0223, 2002. (.ps.gz, .ps) Continues and contains "On the Duality between
Observability and Reachability", FoSSaCS 2001 by the same authors. Accepted for TCS.
A.Kurz, D.Pattinson: Coalgebraic Modal Logic of Finite Rank. CWI Technical Report SEN-R0222, 2002. (.ps.gz, .ps) Revised version of "Definability, Canonical Models, Compactness for Finitary
Coalgebraic Modal Logic" by the same authors. Accepted for Mathematical Structures in Computer Science.
A.Kurz (Ed.): Proceedings of the Workshop on Categorical Methods for Concurrency, Interaction, and Mobility (CMCIM). Volume 68.1 of Electronic Notes in Theoretical Computer Science, 2002.
A.Kurz, D.Pattinson: Definability, Canonical Models, Compactness for Finitary Coalgebraic Modal Logic. In Lawrence Moss, editor, Coalgebraic Methods in Computer Science (CMCS'02), volume 65.1 of
Electronic Notes in Theoretical Computer Science, 2002. (.ps.gz, .ps).
A.Kurz: Logics Admitting Final Semantics. Fossacs 2002. (.ps.gz, .ps, © Springer Verlag).
A.Kurz: Coalgebras and Modal Logic. Course Notes for ESSLLI 2001, Version of October 2001. Appeared on the CD-Rom ESSLLI'01, Department of Philosophy, University of Helsinki, Finland. (.ps.gz,
A.Kurz, D.Pattinson: Coalgebras and Modal Logic for Parameterised Endofunctors. CWI Technical Report, SEN-R0040, December 2000. (.ps.Z, .pdf, .ps.gz)
A.Kurz: Logics for Coalgebras and Applications to Computer Science. Doctoral Thesis. July 2000. (gzipped postscript), (postscript).
A.Kurz: Modal Logic is Dual to Equational Logic. Extended abstract. January 2000. The current version is chapter 2 of my thesis, see above. (gzipped postscript).
A.Kurz, D.Pattinson: Notes on Coalgebras, Co-Fibrations and Concurrency. To appear in Proceedings of Coalgebraic Methods in Computer Science, Berlin, March 2000 (CMCS'00), ENTCS Volume 33.
(Extended version of the Dresden workshop contribution, December 1999.) March 2000. (gzipped postscript).
A.Kurz: Limits in Categories of Coalgebras. A short note on a way to construct limits in categories of coalgebras. Draft, November 1999. (gzipped postscript).
A.Kurz, R.Hennicker: On Institutions for Modular Coalgebraic Specifications. Accepted for publication in TCS. (gzipped postscript).
A.Kurz, D.Pattinson: Notes on Coalgebras, Co-Fibrations and Concurrency. Draft, presented at the Workshop on Categorical Models of Concurrency, Dresden, October 1999. There is a new version now,
see above.
A.Kurz: Modal Rules are Co-Implications. Draft, revised 15.5.2000. (gzipped postscript).
R.Hennicker, A.Kurz: On the Algebraic Extension of Coalgebraic Specifications. In J. Rutten and B. Jacobs, editors, Proceedings of Coalgebraic Methods in Computer Science, Amsterdam, March 1999
(CMCS'99). Published in ENTCS Volume 19. (gzipped postscript).
A.Kurz: A Co-Variety-Theorem for Modal Logic. Preprint (revised 29.3.1999). To be published in Proceedings of Advances in Modal Logic, Uppsala, 1998. CSLI,Stanford. (gzipped postscript).
A.Kurz: Coalgebras and Modal Logic. Proceedings of Advances in Modal Logic, Uppsala, 1998. Title changed to "A Co-Variety-Theorem for Modal Logic", see above.
A.Kurz: Specifying Coalgebras with Modal Logic. In B. Jacobs, L. Moss, H. Reichel, and J. Rutten, editors, Proceedings of Coalgebraic Methods in Computer Science, Lisbon, March 1998 (CMCS'98).
Published in ENTCS Volume 11. A revised version will appear in Theoretical Computer Science, Vol.260/1-2. (gzipped postscript).
A.Kurz: A Note on the Frame Semantics of Modal Logic. Talk given on the workshop on Polymodal Logics, ESSLLI'97, Aix-en-Provence. (gzipped postscript).
A. Kurz: Sequence Frames. Proc. Verif. in New Orientation, Univ. Maribor (1995). (gzipped postscript). | {"url":"http://www.cs.le.ac.uk/people/akurz/works.html","timestamp":"2014-04-20T05:42:34Z","content_type":null,"content_length":"25157","record_id":"<urn:uuid:0f142bc6-582c-46f3-b7df-85edb2e7c7fd>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00335-ip-10-147-4-33.ec2.internal.warc.gz"} |
Direct Substitution
November 9th 2012, 08:39 PM #1
Jun 2012
Direct Substitution
I've come across some notation that I've never had to deal with, so I'm not to sure what I'm supposed to do.
I have an equation $s(t) = e^{-\frac {t}{2}}cos(2t)$
I'm supposed to show by direct substitution that the displacement satisfies the differential equation
$4\frac{d^2s}{dt^2} + f\frac{ds}{dt} + 17s = 0$
I'm really thrown off by the term 'direct substitution' as we haven't used it in this unit. What am I supposed to substitute into what??
Also, the notation of $\frac {d^2s}{dt^2}$ is confusing me too - what is this in relation to?
Also, please, no giving me answers to the equation, I'm moreso after an explanation of what I'm supposed to be doing here...
Re: Direct Substitution
Hey astuart.
Basically it's asking you to calculate the derivatives and then substitution those into the equation and show that it equals zero.
So s = s(t) is your original equation while ds/dt is the first derivative and d^2s/dt^2 is the second derivative.
Re: Direct Substitution
November 9th 2012, 11:51 PM #2
MHF Contributor
Sep 2012
November 11th 2012, 01:51 PM #3
Jun 2012 | {"url":"http://mathhelpforum.com/differential-equations/207139-direct-substitution.html","timestamp":"2014-04-18T15:01:13Z","content_type":null,"content_length":"37199","record_id":"<urn:uuid:9c6141cd-e45e-428f-b024-5ad433d7c01f>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00154-ip-10-147-4-33.ec2.internal.warc.gz"} |
Margin of Error Question
October 13th 2009, 12:46 PM #1
Oct 2009
Margin of Error Question
Hello, I am new to this site. I am taking a basic stats class online and I am working on a homework problem with a friend that we are a little stumped. The question is:
An opinion poll says that the result of their latest sample has a margin of error of plus or minus three percentage points. This means that...
a) We can be certain that the poll result is within three percentage points of the truth about the population.
b) We could be certain that the poll result is within three percentage points of the truth if there we no no-response.
c)The poll used a method that gives a result within three percentage points of the truth in 95% of all samples
I am thinking that it is "a" because from my understanding when you have a margin of error it takes into account everything. My friend is saying that it is "b" and we have ruled out "c" all
Thank you in advance for any help that you may have!
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/statistics/107817-margin-error-question.html","timestamp":"2014-04-20T08:40:43Z","content_type":null,"content_length":"29395","record_id":"<urn:uuid:5ece13b8-40be-4c60-9e0a-f3cdf03fa94c>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00567-ip-10-147-4-33.ec2.internal.warc.gz"} |
What have I just downloaded from ZTE's site - Page 6 - ZTE Skate - Skate.MoDaCo.com
You mean first time before the latest (hypothetically perfect) version, or with it for first try?
Anyway, imo you don't have to restore the stock state, not even if you are in CM, (which as far as I know doesn't contain the system update option from SD), just put the image folder on your SD,
turn off the phone, than turn it on with holding the minus volume button (that's the bootloader mode if I'm correct).
I mean that was my first try.
Put the image folder on your SD, turn off the phone, than turn it on with holding the
volume button and
button and | {"url":"http://www.modaco.com/topic/346977-what-have-i-just-downloaded-from-ztes-site/page-6","timestamp":"2014-04-20T13:37:14Z","content_type":null,"content_length":"139386","record_id":"<urn:uuid:4c4f7f7b-139a-41f8-9a5c-dda2e58b46f5>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00041-ip-10-147-4-33.ec2.internal.warc.gz"} |
Types of angles
There are many different types of angles. We will define them in this lesson.
Acute angle:
An angle whose measure is less than 90 degrees. The following is an acute angle.
Right angle:
An angle whose measure is 90 degrees. The following is a right angle.
Obtuse angle:
An angle whose measure is bigger than 90 degrees but less than 180 degrees. Thus, it is between 90 degrees and 180 degrees. The following is an obtuse angle.
Straight angle
An angle whose measure is 180 degrees.Thus, a straight angle look like a straight line. The following is a straight angle.
Reflex angle:
An angle whose measure is bigger than 180 degrees but less than 360 degrees.The following is a reflex angle.
Adjacent angles:
Angle with a common vertex and one common side. <1 and <2, are adjacent angles.
Complementary angles:
Two angles whose measures add to 90 degrees. Angle 1 and angle 2 are complementary angles because together they form a right angle.
Note that angle 1 and angle 2 do not have to be adjacent to be complementary as long as they add up to 90 degrees
Supplementary angles:
Two angles whose measures add to 180 degrees. The following are supplementary angles.
Vertical angles:
Angles that have a common vertex and whose sides are formed by the same lines. The following(angle 1 and angle 2) are vertical angles.
When two parallel lines are crossed by a third line(Transversal), 8 angles are formed. Take a look at the following figure
Angles 3,4,5,8 are interior angles
Angles 1,2,6,7 are exterior angles
Alternate interior angles:
Pairs of interior angles on opposite sides of the transversal.
For instance, angle 3 and angle 5 are alternate interior angles. Angle 4 and angle 8 are also alternate interior angles.
Alternate exterior angles:
Pairs of exterior angles on opposite sides of the transversal.
Angle 2 and angle 7 are alternate exterior angles.
Corresponding angles:
Pairs of angles that are in similar positions.
Angle 3 and angle 2 are corresponding angles.
Angle 5 and angle 7 are corresponding angles
Here we go! Study the types of angles carefully. This is where any serious study of geometry begin.
You may want to check also Central angles. | {"url":"http://www.basic-mathematics.com/types-of-angles.html","timestamp":"2014-04-21T12:21:22Z","content_type":null,"content_length":"36694","record_id":"<urn:uuid:3c56ec99-9b18-48d3-b01f-ba827c66c2bc>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00084-ip-10-147-4-33.ec2.internal.warc.gz"} |
Network Theory (Part 20)
August 3, 2012
Network Theory (Part 20)
We're in the middle of a battle: in addition to our typical man vs. equation scenario, it's a battle between two theories. For those good patrons following the network theory series, you know the two
opposing forces well. It's our old friends, at it again:
Stochastic Mechanics vs Quantum Mechanics!
Today we're reporting live from a crossroads, and we're facing a skirmish that gives rise to what some might consider a paradox. Let me sketch the main thesis before we get our hands dirty with the
gory details.
First I need to tell you that the battle takes place at the intersection of stochastic and quantum mechanics. We recall from Part 16 that there is a class of operators called 'Dirichlet operators'
that are valid Hamiltonians for both stochastic and quantum mechanics. In other words, you can use them to generate time evolution both for old-fashioned random processes and for quantum processes!
Staying inside this class allows the theories to fight it out on the same turf. We will be considering a special subclass of Dirichlet operators, which we call 'irreducible Dirichlet operators'.
These are the ones where starting in any state in our favorite basis of states, we have a nonzero chance of winding up in any other. When considering this subclass, we found something interesting:
Thesis. Let $H$ be an irreducible Dirichlet operator with $n$ eigenstates. In stochastic mechanics, there is only one valid state that is an eigenvector of $H$: the unique so-called 'Perron–Frobenius
state'. The other $n-1$ eigenvectors are forbidden states of a stochastic system: the stochastic system is either in the Perron–Frobenius state, or in a superposition of at least two eigensvectors.
In quantum mechanics, all $n$ eigenstates of $H$ are valid states.
This might sound like a riddle, but today as we'll prove, riddle or not, it's a fact. If it makes sense, well that's another issue. As John might have said, it's like a bone kicked down from the gods
up above: we can either choose to chew on it, or let it be. Today we are going to do a bit of chewing.
One of the many problems with this post is that John had a nut loose on his keyboard. It was not broken! I'm saying he wrote enough blog posts on this stuff to turn them into a book. I'm supposed to
be compiling the blog articles into a massive LaTeX file, but I wrote this instead.
Another problem is that this post somehow seems to use just about everything said before, so I'm going to have to do my best to make things self-contained. Please bear with me as I try to recap
what's been done. For those of you familiar with the series, a good portion of the background for what we'll cover today can be found in Part 12 and Part 16.
At the intersection of two theories
As John has mentioned in his recent talks, the typical view of how quantum mechanics and probability theory come into contact looks like this:
The idea is that quantum theory generalizes classical probability theory by considering observables that don't commute.
That's perfectly valid, but we've been exploring an alternative view in this series. Here quantum theory doesn't subsume probability theory, but they intersect:
What goes in the middle you might ask? As odd as it might sound at first, John showed in Part 16 that electrical circuits made of resistors constitute the intersection!
For example, a circuit like this:
gives rise to a Hamiltonian $H$ that's good both for stochastic mechanics and stochastic mechanics. Indeed, he found that the power dissipated by a circuit made of resistors is related to the
familiar quantum theory concept known as the expectation value of the Hamiltonian!
$$ \textrm{power} = -2 \langle \psi, H \psi \rangle $$
Oh—and you might think we made a mistake and wrote our Ω (ohm) symbols upside down. We didn't. It happens that ℧ is the symbol for a 'mho'—a unit of conductance that's the reciprocal of an ohm. Check
out Part 16 for the details.
Stochastic mechanics versus quantum mechanics
Let's recall how states, time evolution, symmetries and observables work in the two theories. Today we'll fix a basis for our vector space of states, and we'll assume it's finite-dimensional so that
all vectors have $n$ components over either the complex numbers $ \mathbb{C}$ or the reals $ \mathbb{R}$. In other words, we'll treat our space as either $ \mathbb{C}^n$ or $ \mathbb{R}^n$. In this
fashion, linear operators that map such spaces to themselves will be represented as square matrices.
Vectors will be written as $\psi_i$ where the index $i$ runs from 1 to $n$, and we think of each choice of the index as a state of our system—but since we'll be using that word in other ways too,
let's call it a configuration. It's just a basic way our system can be.
Besides the configurations $i = 1,\dots, n$, we have more general states that tell us the probability or amplitude of finding our system in one of these configurations:
• Stochastic states are $n$-tuples of nonnegative real numbers:
$$ \psi_i \in \mathbb{R}^+ $$
The probability of finding the system in the $i$th configuration is defined to be $\psi_i$. For these probabilities to sum to one, $\psi_i$ needs to be normalized like this:
$$ \sum_i \psi_i = 1 $$
or in the notation we're using in these articles:
$$ \langle \psi \rangle = 1 $$
where we define
$$ \langle \psi \rangle = \sum_i \psi_i $$
• Quantum states are $n$-tuples of complex numbers:
$$ \psi_i \in \mathbb{C} $$
The probability of finding a state in the $i$th configuration is defined to be $|\psi(x)|^2$. For these probabilities to sum to one, $\psi$ needs to be normalized like this:
$$ \sum_i |\psi_i|^2 = 1 $$
or in other words
$$ \langle \psi, \psi \rangle = 1 $$
where the inner product of two vectors $\psi$ and $\phi$ is defined by
$$ \langle \psi, \phi \rangle = \sum_i \overline{\psi}_i \phi_i $$
Now, the usual way to turn a quantum state $\psi$ into a stochastic state is to take the absolute value of each number $\psi_i$ and then square it. However, if the numbers $\psi_i$ happen to be
nonnegative, we can also turn $\psi$ into a stochastic state simply by multiplying it by a number to ensure $\langle \psi \rangle = 1$.
This is very unorthodox, but it lets us evolve the same vector $\psi$ either stochastically or quantum-mechanically, using the recipes I'll describe next. In physics jargon these correspond to
evolution in 'real time' and 'imaginary time'. But don't ask me which is which: from a quantum viewpoint stochastic mechanics uses imaginary time, but from a stochastic viewpoint it's the other way
Time evolution
Time evolution works similarly in stochastic and quantum mechanics, but with a few big differences:
• In stochastic mechanics the state changes in time according to the master equation:
$$ \frac{d}{d t} \psi(t) = H \psi(t) $$
which has the solution
$$ \psi(t) = \exp(t H) \psi(0) $$
• In quantum mechanics the state changes in time according to Schrödinger's equation:
$$ \frac{d}{d t} \psi(t) = -i H \psi(t) $$
which has the solution
$$ \psi(t) = \exp(-i t H) \psi(0) $$
The operator $H$ is called the Hamiltonian. The properties it must have depend on whether we're doing stochastic mechanics or quantum mechanics:
• We need $H$ to be infinitesimal stochastic for time evolution given by $\exp(tH)$ to send stochastic states to stochastic states. In other words, we need that (i) its columns sum to zero and (ii)
its off-diagonal entries are real and nonnegative:
$$ \sum_i H_{i j}=0 $$ $$ i\neq j\Rightarrow H_{i j}\geq 0 $$
• We need $ H$ to be self-adjoint for time evolution given by $\exp(-itH)$ to send quantum states to quantum states. So, we need
$$ H = H^\dagger $$
where we recall that the adjoint of a matrix is the conjugate of its transpose:
$$ (H^\dagger)_{i j} := \overline{H}_{j i} $$
We are concerned with the case where the operator $ H$ generates both a valid quantum evolution and also a valid stochastic one:
• $H$ is a Dirichlet operator if it's both self-adjoint and infinitesimal stochastic. We will soon go further and zoom in on this intersection! But first let's finish our review.
As John explained in Part 12, besides states and observables we need symmetries, which are transformations that map states to states. These include the evolution operators which we only briefly
discussed in the preceding subsection.
• A linear map $U$ that sends quantum states to quantum states is called an isometry, and isometries are characterized by this property:
$$ U^\dagger U = 1$$
• A linear map $U$ that sends stochastic states to stochastic states is called a stochastic operator, and stochastic operators are characterized by these properties:
$$ \sum_i U_{i j} = 1 $$
$$ U_{i j}\geq 0 $$
A notable difference here is that in our finite-dimensional situation, isometries are always invertible, but stochastic operators may not be! If $U$ is an $n \times n$ matrix that's an isometry, $U^\
dagger$ is its inverse. So, we also have
$$ U U^\dagger = 1$$
and we say $U$ is unitary. But if $U$ is stochastic, it may not have an inverse—and even if it does, its inverse is rarely stochastic. This explains why in stochastic mechanics time evolution is
often not reversible, while in quantum mechanics it always is.
Puzzle 1. Suppose $U$ is a stochastic $n \times n$ matrix whose inverse is stochastic. What are the possibilities for $U$?
It is quite hard for an operator to be a symmetry in both stochastic and quantum mechanics, especially in our finite-dimensional situation:
Puzzle 2. Suppose $U$ is an $n \times n$ matrix that is both stochastic and unitary. What are the possibilities for $U$?
'Observables' are real-valued quantities that can be measured, or predicted, given a specific theory.
• In quantum mechanics, an observable is given by a self-adjoint matrix $O$, and the expected value of the observable $O$ in the quantum state $\psi$ is
$$ \langle \psi , O \psi \rangle = \sum_{i,j} \overline{\psi}_i O_{i j} \psi_j $$
• In stochastic mechanics, an observable $O$ has a value $O_i$ in each configuration $i$, and the expected value of the observable $O$ in the stochastic state $\psi$ is
$$ \langle O \psi \rangle = \sum_i O_i \psi_i $$
We can turn an observable in stochastic mechanics into an observable in quantum mechanics by making a diagonal matrix whose diagonal entries are the numbers $O_i$.
From graphs to matrices
Back in Part 16, John explained how a graph with positive numbers on its edges gives rise to a Hamiltonian in both quantum and stochastic mechanics—in other words, a Dirichlet operator.
Here's how this works. We'll consider simple graphs: graphs without arrows on their edges, with at most one edge from one vertex to another, and with no edge from a vertex to itself. And we'll only
look at graphs with finitely many vertices and edges. We'll assume each edge is labelled by a positive number, like this:
If our graph has $n$ vertices, we can create an $n \times n$ matrix $A$ where $A_{i j}$ is the number labelling the edge from $i$ to $j$, if there is such an edge, and 0 if there's not. This matrix
is symmetric, with real entries, so it's self-adjoint. So $A$ is a valid Hamiltonian in quantum mechanics.
How about stochastic mechanics? Remember that a Hamiltonian in stochastic mechanics needs to be 'infinitesimal stochastic'. So, its off-diagonal entries must be nonnegative, which is indeed true for
our $A$, but also the sums of its columns must be zero, which is not true when our $A$ is nonzero.
But now comes the best news you've heard all day: we can improve $A$ to a stochastic operator in a way that is completely determined by $A$ itself! This is done by subtracting a diagonal matrix $L$
whose entries are the sums of the columns of $A$:
$$L_{i i} = \sum_i A_{i j} $$ $$ i \ne j \Rightarrow L_{i j} = 0 $$
It's easy to check that
$$ H = A - L $$ is still self-adjoint, but now also infinitesimal stochastic. So, it's a Dirichlet operator: a good Hamiltonian for both stochastic and quantum mechanics!
In Part 16, we saw a bit more: every Dirichlet operator arises this way. It's easy to see. You just take your Dirichlet operator and make a graph with one edge for each nonzero off-diagonal entry.
Then you label the edge with this entry. So, Dirichlet operators are essentially the same as finite simple graphs with edges labelled by positive numbers.
Now, a simple graph can consist of many separate 'pieces', called components. Then there's no way for a particle hopping along the edges to get from one component to another, either in stochastic or
quantum mechanics. So we might as well focus our attention on graphs with just one component. These graphs are called 'connected'. In other words:
Definition. A simple graph is connected if it is nonempty and there is a path of edges connecting any vertex to any other.
Our goal today is to understand more about Dirichlet operators coming from connected graphs. For this we need to learn the Perron–Frobenius theorem. But let's start with something easier.
Perron's theorem
In quantum mechanics it's good to think about observables that have positive expected values:
$$ \langle \psi, O \psi \rangle > 0 $$
for every quantum state $\psi \in \mathbb{C}^n$. These are called positive definite. But in stochastic mechanics it's good to think about matrices that are positive in a more naive sense:
Definition. An $n \times n$ real matrix $T$ is positive if all its entries are positive:
$$ T_{i j} > 0 $$
for all $1 \le i, j \le n$.
Definition. A vector $\psi \in \mathbb{R}^n$ is positive if all its components are positive:
$$ \psi_i > 0 $$
for all $1 \le i \le n$.
We'll also define nonnegative matrices and vectors in the same way, replacing $> 0$ by $\ge 0$. A good example of a nonnegative vector is a stochastic state.
In 1907, Perron proved the following fundamental result about positive matrices:
Perron's Theorem. Given a positive square matrix $T$, there is a positive real number $r$, called the Perron–Frobenius eigenvalue of $T$, such that $r$ is an eigenvalue of $T$ and any other
eigenvalue $\lambda$ of $T$ has $ |\lambda| < r$. Moreover, there is a positive vector $\psi \in \mathbb{R}^n$ with $T \psi = r \psi$. Any other vector with this property is a scalar multiple of $\
psi$. Furthermore, any nonnegative vector that is an eigenvector of $T$ must be a scalar multiple of $\psi$.
In other words, if $T$ is positive, it has a unique eigenvalue with the largest absolute value. This eigenvalue is positive. Up to a constant factor, it has an unique eigenvector. We can choose this
eigenvector to be positive. And then, up to a constant factor, it's the only nonnegative eigenvector of $T$.
From matrices to graphs
The conclusions of Perron's theorem don't hold for matrices that are merely nonnegative. For example, these matrices
$$ \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right) , \qquad \left( \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right) $$
are nonnegative, but they violate lots of the conclusions of Perron's theorem.
Nonetheless, in 1912 Frobenius published an impressive generalization of Perron's result. In its strongest form, it doesn't apply to all nonnegative matrices; only to those that are 'irreducible'.
So, let us define those.
We've seen how to build a matrix from a graph. Now we need to build a graph from a matrix! Suppose we have an $n \times n$ matrix $T$. Then we can build a graph $G_T$ with $n$ vertices where there is
an edge from the $i$th vertex to the $j$th vertex if and only if $T_{i j} \ne 0$.
But watch out: this is a different kind of graph! It's a directed graph, meaning the edges have directions, there's at most one edge going from any vertex to any vertex, and we do allow an edge going
from a vertex to itself. There's a stronger concept of 'connectivity' for these graphs:
Definition. A directed graph is strongly connected if there is a directed path of edges going from any vertex to any other vertex.
So, you have to be able to walk along edges from any vertex to any other vertex, but always following the direction of the edges! Using this idea we define irreducible matrices:
Definition. A nonnegative square matrix $T$ is irreducible if its graph $G_T$ is strongly connected.
The Perron–Frobenius theorem
Now we are ready to state:
The Perron-Frobenius Theorem. Given an irreducible nonnegative square matrix $T$, there is a positive real number $r$, called the Perron-.Frobenius eigenvalue of $T$, such that $r$ is an eigenvalue
of $T$ and any other eigenvalue $\lambda$ of $T$ has $|\lambda| \le r$. Moreover, there is a positive vector $\psi \in \mathbb{R}^n$ with $T\psi = r \psi$. Any other vector with this property is a
scalar multiple of $\psi$. Furthermore, any nonnegative vector that is an eigenvector of $T$ must be a scalar multiple of $\psi$.
The only conclusion of this theorem that's weaker than those of Perron's theorem is that there may be other eigenvalues with $|\lambda| = r$. For example, this matrix is irreducible and nonnegative:
$$ \left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right) $$
Its Perron–Frobenius eigenvalue is 1, but it also has -1 as an eigenvalue. In general, Perron–Frobenius theory says quite a lot about the other eigenvalues on the circle $|\lambda| = r,$ but we won't
need that fancy stuff here.
Perron–Frobenius theory is useful in many ways, from highbrow math to ranking football teams. We'll need it not just today but also later in this series. There are many books and other sources of
information for those that want to take a closer look at this subject. If you're interested, you can search online or take a look at these:
• Dimitrious Noutsos, Perron Frobenius theory and some extensions, 2008. (Includes proofs of the basic theorems.)
• V. S. Sunder, Perron Frobenius theory, 18 December 2009. (Includes applications to graph theory, Markov chains and von Neumann algebras.)
• Stephen Boyd, Lecture 17: Perron Frobenius theory, Winter 2008-2009. (Includes a max-min characterization of the Perron–Frobenius eigenvalue and applications to Markov chains, economics,
population growth and power control.)
I have not taken a look myself, but if anyone is interested and can read German, the original work appears here:
• Oskar Perron, Zur Theorie der Matrizen, Math. Ann. 64 (1907), 248–263.
• Georg Frobenius, Über Matrizen aus nicht negativen Elementen, S.-B. Preuss Acad. Wiss. Berlin (1912), 456–477.
And, of course, there's this:
It's quite good.
Irreducible Dirichlet operators
Now comes the payoff. We saw how to get a Dirichlet operator $H$ from any finite simple graph with edges labelled by positive numbers. Now let's apply Perron–Frobenius theory to prove our thesis.
Unfortunately, the matrix $H$ is rarely nonnegative. If you remember how we built it, you'll see its off-diagonal entries will always be nonnegative... but its diagonal entries can be negative.
Luckily, we can fix this just by adding a big enough multiple of the identity matrix to $H$! The result is a nonnegative matrix
$$ T = H + c I $$
where $c > 0$ is some large number. This matrix $T$ has the same eigenvectors as $H$. The off-diagonal matrix entries of $T$ are the same as those of $H$, so $T_{i j}$ is nonzero for $i \ne j$
exactly when the graph we started with has an edge from $i$ to $j$. So, for $i \ne j$, the graph $G_T$ will have an directed edge going from $i$ to $j$ precisely when our original graph had an edge
from $i$ to $j$. And that means that if our original graph was connected, $G_T$ will be strongly connected. Thus, by definition, the matrix $T$ is irreducible!
Since $T$ is nonnegative and irreducible, the Perron–Frobenius theorem swings into action and we conclude:
Lemma. Suppose $H$ is the Dirichlet operator coming from a connected finite simple graph with edges labelled by positive numbers. Then the eigenvalues of $H$ are real. Let $\lambda$ be the largest
eigenvalue. Then there is a positive vector $\psi \in \mathbb{R}^n$ with $H\psi = \lambda \psi$. Any other vector with this property is a scalar multiple of $\psi$. Furthermore, any nonnegative
vector that is an eigenvector of $H$ must be a scalar multiple of $\psi$.
Proof. The eigenvalues of $H$ are real since $H$ is self-adjoint. Notice that if $r$ is the Perron–Frobenius eigenvalue of $T = H + c I$ and
$$ T \psi = r \psi$$
$$ H \psi = (r - c)\psi $$
By the Perron–Frobenius theorem the number $r$ is positive, and it has the largest absolute value of any eigenvalue of $T$. Thanks to the subtraction, the eigenvalue $r - c$ may not have the largest
absolute value of any eigenvalue of $H$. It is, however, the largest eigenvalue of $H$, so we take this as our $\lambda$. The rest follows from the Perron–Frobenius theorem. █
But in fact we can improve this result, since the largest eigenvalue $\lambda$ is just zero. Let's also make up a definition, to make our result sound more slick:
Definition. A Dirichlet operator is irreducible if it comes from a connected finite simple graph with edges labelled by positive numbers.
This meshes nicely with our earlier definition of irreducibility for nonnegative matrices. Now:
Theorem. Suppose $H$ is an irreducible Dirichlet operator. Then $H$ has zero as its largest real eigenvalue. There is a positive vector $\psi \in \mathbb{R}^n$ with $H\psi = 0$. Any other vector with
this property is a scalar multiple of $\psi$. Furthermore, any nonnegative vector that is an eigenvector of $H$ must be a scalar multiple of $\psi$.
Proof. Choose $\lambda$ as in the Lemma, so that $H\psi = \lambda \psi$. Since $\psi$ is positive we can normalize it to be a stochastic state:
$$ \sum_i \psi_i = 1 $$
Since $H$ is a Dirichlet operator, $\exp(t H)$ sends stochastic states to stochastic states, so
$$ \sum_i (\exp(t H) \psi)_i = 1 $$
for all $t \ge 0$. On the other hand,
$$ \sum_i (\exp(t H)\psi)_i = \sum_i e^{t \lambda} \psi_i = e^{t \lambda} $$
so we must have $\lambda = 0$. █
What's the point of all this? One point is that there's a unique stochastic state $\psi$ that's an equilibrium state: since $H \psi = 0$, it doesn't change with time. It's also globally stable: since
all the other eigenvalues of $H$ are negative, all other stochastic states converge to this one as time goes forward.
An example
There are many examples of irreducible Dirichlet operators. For instance, in Part 15 we talked about graph Laplacians. The Laplacian of a connected simple graph is always irreducible. But let us try
a different sort of example, coming from the picture of the resistors we saw earlier:
Let's create a matrix $A$ whose entry $A_{i j}$ is the number labelling the edge from $i$ to $j$ if there is such an edge, and zero otherwise:
$$A = \left( \begin{array}{ccccc} 0 & 2 & 1 & 0 & 1 \\ 2 & 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 2 & 1 \\ 0 & 1 & 2 & 0 & 1 \\ 1 & 1 & 1 & 1 & 0 \end{array} \right) $$
Remember how the game works. The matrix $A$ is already a valid Hamiltonian for quantum mechanics, since it's self adjoint. However, to get a valid Hamiltonian for both stochastic and quantum
mechanics—in other words, a Dirichlet operator—we subtract the diagonal matrix $L$ whose entries are the sums of the columns of $A.$ In this example it just so happens that the column sums are all 4,
so $L = 4 I,$ and our Dirichlet operator is
$$ H = A - 4 I = \left( \begin{array}{ccccc} -4 & 2 & 1 & 0 & 1 \\ 2 & -4 & 0 & 1 & 1 \\ 1 & 0 & -4 & 2 & 1 \\ 0 & 1 & 2 & -4 & 1 \\ 1 & 1 & 1 & 1 & -4 \end{array} \right) $$
We've set up this example so it's easy to see that the vector $\psi = (1,1,1,1,1)$ has
$$ H \psi = 0 $$
So, this is the unique eigenvector for the eigenvalue 0. We can use Mathematica to calculate the remaining eigenvalues of $H$. The set of eigenvalues is
$$\{0, -7, -8, -8, -3 \} $$
As we expect from our theorem, the largest real eigenvalue is 0. By design, the eigenstate associated to this eigenvalue is
$$ | v_0 \rangle = (1, 1, 1, 1, 1) $$
(This funny notation for vectors is common in quantum mechanics, so don't worry about it.) All the other eigenvectors fail to be nonnegative, as predicted by the theorem. They are:
$$ | v_1 \rangle = (1, -1, -1, 1, 0), $$ $$ | v_2 \rangle = (-1, 0, -1, 0, 2), $$ $$ | v_3 \rangle = (-1, 1, -1, 1, 0), $$ $$ | v_4 \rangle = (-1, -1, 1, 1, 0). $$
To compare the quantum and stochastic states, consider first $ |v_0\rangle$. This is the only eigenvector that can be normalized to a stochastic state. Remember, a stochastic state must have
nonnegative components. This rules out $ |v_1\rangle$ through to $ |v_4\rangle$ as valid stochastic states, no matter how we normalize them! However, these are allowed as states in quantum mechanics,
once we normalize them correctly. For a stochastic system to be in a state other than the Perron–Frobenius state, it must be a linear combination of least two eigenstates. For instance,
$$ \psi_a = (1-a)|v_0\rangle + a |v_1\rangle $$
can be normalized to give stochastic state only if $ 0 \leq a \leq \frac{1}{2}$.
And, it's easy to see that it works this way for any irreducible Dirichlet operator, thanks to our theorem. So, our thesis has been proved true!
Puzzles on irreducibility
Let us conclude with a couple more puzzles. There are lots of ways to characterize irreducible nonnegative matrices; we don't need to mention graphs. Here's one:
Puzzle 3. Let $T$ be a nonnegative $n \times n$ matrix. Show that $T$ is irreducible if and only if for all $i,j \ge 0$, $(T^m)_{i j} > 0$ for some natural number $m$.
You may be confused because today we explained the usual concept of irreducibility for nonnegative matrices, but also defined a concept of irreducibility for Dirichlet operators. Luckily there's no
conflict: Dirichlet operators aren't nonnegative matrices, but if we add a big multiple of the identity to a Dirichlet operator it becomes a nonnegative matrix, and then:
Puzzle 4. Show that a Dirichlet operator $H$ is irreducible if and only if the nonnegative operator $H + c I$ (where $c$ is any sufficiently large constant) is irreducible.
Irreducibility is also related to the nonexistence of interesting conserved quantities. In Part 11 we saw a version of Noether's Theorem for stochastic mechanics. Remember that an observable $O$ in
stochastic mechanics assigns a number $O_i$ to each configuration $i = 1, \dots, n$. We can make a diagonal matrix with $O_i$ as its diagonal entries, and by abuse of language we call this $O$ as
well. Then we say $O$ is a conserved quantity for the Hamiltonian $H$ if the commutator $[O,H] = O H - H O$ vanishes.
Puzzle 5. Let $H$ be a Dirichlet operator. Show that $H$ is irreducible if and only if every conserved quantity $O$ for $H$ is a constant, meaning that for some $c \in \mathbb{R}$ we have $O_i = c$
for all $i$. (Hint: examine the proof of Noether's theorem.)
In fact this works more generally:
Puzzle 6. Let $H$ be an infinitesimal stochastic matrix. Show that $H + c I$ is an irreducible nonnegative matrix for all sufficiently large $c$ if and only if every conserved quantity $O$ for $H$ is
a constant.
You can also read comments on Azimuth, and make your own comments or ask questions there!
© 2012 John Baez | {"url":"http://math.ucr.edu/home/baez/networks/networks_20.html","timestamp":"2014-04-18T08:02:34Z","content_type":null,"content_length":"33301","record_id":"<urn:uuid:45c409e6-2bf5-47a4-9a8b-e87dec3b1b50>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00136-ip-10-147-4-33.ec2.internal.warc.gz"} |
Four-digit Targets
Copyright © University of Cambridge. All rights reserved.
Why do this problem?
This problem
would fit in well when members of the group are partitioning, rounding and ordering four-digit whole numbers. It requires considerable understanding of how the number system works. Playing the game,
and discussing it afterwards, can really help to develop a firm concept of place value.
Possible approach
You could start by writing a four-digit number on the board, for example $6149$, and asking how the digits could be rearranged to make the numbers that are the largest, smallest and nearest to
$5000$. Could they be rearranged to make a multiple of $5$? Of $3$? If not, why not?
Next you could introduce an item in the problem such as making the largest possible even number using only two of each digit. Alternatively, you could make up your own examples such as the smallest
even number or the nearest to $7000$. You will need to establish whether $0$ can be used at the beginning of a number. This, in itself, can form an interesting discussion point. (The final decision
itself does not matter - it is the reasons that are important, and the fact that the children feel as if it is their decision!)
Probably the best way of continuing on this problem is to use
this sheet
from BEAM and for learners to work competitively in pairs. It is ideal if each player can also have two sets of digit cards to use to make the numbers.
Once learners have played a few times, instigate a general discussion on the best strategies and the nearest that anyone got to the target. You could repeat what you did at the start and give
learners numbers such as $2681$ and ask them how they would rearrange this to make the highest/lowest number possible and why this is so. This will make a good assessment opportunity.
Key questions
Why have you put a $5$ here?
Where is the best place to put $9$ when you are aiming for the lowest/highest number?
Possible extension
Learners could make up their own criteria for a new game possibly using four five-digit numbers.
Possible support
Children could use
simpler version of the problem. | {"url":"http://nrich.maths.org/6342/note?nomenu=1","timestamp":"2014-04-17T07:15:53Z","content_type":null,"content_length":"6431","record_id":"<urn:uuid:24eb9673-d2ea-4a1b-8897-0316c8c03a7c>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00384-ip-10-147-4-33.ec2.internal.warc.gz"} |
Need Help with Logarithmic function problem???
March 26th 2010, 03:08 PM #1
Mar 2010
Need Help with Logarithmic function problem???
Logarithmic Function Problem
The public health service monitors the spread of an epidemic of a particularly long-lasting strain of the flu in a city of 500,000 people using the logistic function. At the begining of the first
week (time zero), 200 cases had been reported. During the first week 300 new cases were reported.
a. Determine the logistic function.
b. Estimate the number of individuals infected after 5 weeks.
c. When will the epidemic spread at the greatest rate?
d. At what rate will the epidemic spread when 40% of the population has been infected?
e. Graph the logistic function for the first 20 weeks of the epidemic's spread.
Logarithmic Function Problem
The public health service monitors the spread of an epidemic of a particularly long-lasting strain of the flu in a city of 500,000 people using the logistic function. At the begining of the first
week (time zero), 200 cases had been reported. During the first week 300 new cases were reported.
a. Determine the logistic function.
b. Estimate the number of individuals infected after 5 weeks.
c. When will the epidemic spread at the greatest rate?
d. At what rate will the epidemic spread when 40% of the population has been infected?
e. Graph the logistic function for the first 20 weeks of the epidemic's spread.
What is the logistic function?
Your first difficulty is thinking that it involves a logarithm when it doesn't! Your problem says the spread of the disease is modeled by a logistic function, not a logarithm. So I repeat Captain
Black's question, "What is a logistic function"?
Last edited by mr fantastic; March 27th 2010 at 03:47 PM. Reason: Fixed bold tag.
you were given information about points on the curve ...
in a city of 500,000 people
... what constant in the function represents this value?
At the begining of the first week (time zero), 200 cases had been reported.
... (0,200) correct?
During the first week 300 new cases were reported.
... for t measured in weeks, this would be (1,300) , correct?
use this info to determine each constant (a, b, and k) in the function.
Thank you for the enlightenment:
So I used to two point to find the slope to be (300-200)/(1-0)=100. using y - mx+b to find 200= 100(0) +B, therefore b = 200.
So the constant a = 500000
constant b = 200
and constant k = 100
Is this correct???
sorry ... not even close.
$b$ is not a y-intercept ... you're dealing with a logistic growth curve, not a linear function.
Can you give me some formulas and hints on how to find constant a, b and k cause I haven't got a clue. thanks..
$y = \frac{a}{1 + be^{-kt}}$
$a$ is the limiting value; the maximum possible (also called the carrying capacity) value for $y$.
$y = \frac{500000}{1 + be^{-kt}}$
at $t = 0$ , $y = 200$
$200 = \frac{500000}{1 + be^{0}} = \frac{500000}{1 + b}$
solve for $b$ ...
$b = 2499$
$y = \frac{500000}{1 + 2499e^{-kt}}$
now use the point $(1, 300)$ and solve for $k$
$y = \frac{a}{1 + be^{-kt}}$
$a$ is the limiting value; the maximum possible (also called the carrying capacity) value for $y$.
$y = \frac{500000}{1 + be^{-kt}}$
at $t = 0$ , $y = 200$
$200 = \frac{500000}{1 + be^{0}} = \frac{500000}{1 + b}$
solve for $b$ ...
$b = 2499$
$y = \frac{500000}{1 + 2499e^{-kt}}$
now use the point $(1, 300)$ and solve for $k$
Oddly my prof. gave me the exact same problem. The only thing is, is that I don't know how to solve the equation in terms of k.
I got:
dy/dt=Ky(1+y/L) for the growth model y=L/1+be^-kt
So when solving for the point 1, 500 (I believe it's 500 and not 300 because they said 300 new cases, not 300 total), I got that k=500.5. That doesn't seem to make sense though because everything
different number of weeks I put in I always get 500,000.
Any help is much appreciated!
Oddly my prof. gave me the exact same problem. The only thing is, is that I don't know how to solve the equation in terms of k.
I got:
dy/dt=Ky(1+y/L) for the growth model y=L/1+be^-kt
So when solving for the point 1, 500 (I believe it's 500 and not 300 because they said 300 new cases, not 300 total), I got that k=500.5. That doesn't seem to make sense though because everything
different number of weeks I put in I always get 500,000.
Any help is much appreciated!
I did more thinking and I messed up how I got 500.5 big time. I did it again and I found that it does come to k=.9168. I tested it and it came out right. So for part b I got 18,850 infections.
The only trouble I'm having is parts c and d. How do I find the greatest rate and how do I find the rate when 40% of the population is infected?
March 27th 2010, 12:47 AM #2
Grand Panjandrum
Nov 2005
March 27th 2010, 03:23 AM #3
MHF Contributor
Apr 2005
March 27th 2010, 05:58 AM #4
Mar 2010
March 27th 2010, 06:08 AM #5
March 27th 2010, 06:51 AM #6
Mar 2010
March 27th 2010, 07:03 AM #7
March 27th 2010, 07:26 AM #8
Mar 2010
March 27th 2010, 08:14 AM #9
April 4th 2010, 11:45 PM #10
Apr 2010
April 5th 2010, 12:39 AM #11
Apr 2010 | {"url":"http://mathhelpforum.com/calculus/135844-need-help-logarithmic-function-problem.html","timestamp":"2014-04-21T02:26:09Z","content_type":null,"content_length":"72613","record_id":"<urn:uuid:0becdf64-c223-4c82-b669-72e49b36df33>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00620-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hamiltonian cycles through specified edges in bipartite graphs, domination game, and the game of revolutionaries and spies
Abstract: This thesis deals with the following three independent problems. Po'sa proved that if $G$ is an $n$-vertex graph in which any two nonadjacent vertices have degree sum at least $n+k$, then
$G$ has a spanning cycle containing any specified family of disjoint paths with a total of $k$ edges. We consider the analogous problem for a bipartite graph $G$ with $n$ vertices and parts of equal
size. Let $F$ be a subgraph of $G$ whose components are nontrivial paths. Let $k$ be the number of edges in $F$, and let $t_1$ and $t_2$ be the numbers of components of $F$ having odd and even
length, respectively. We prove that $G$ has a spanning cycle containing $F$ if any two nonadjacent vertices in opposite partite sets have degree-sum at least $n/2+\tau(F)$, where $\tau(F)=\ceil{k/2}+
\epsilon$ (here $\epsilon=1$ if $t_1=0$ or if $(t_1,t_2)\in\{(1,0),(2,0)\}$, and $\epsilon=0$ otherwise). We show also that this threshold on the degree-sum is sharp when $n>3k$. Bostjan Bresar,
Sandi Klavzar and Douglas F. Rall proposed a game involving the notion of graph domination number. Two players, Dominator and Staller, occupy vertices of a graph $G$, playing alternatingly. Dominator
starts first. A vertex is valid is to be occupied if adding it to the occupied set enlarges the set of vertices dominated by the occupied set. The game ends when the occupied set becomes a dominating
set (A \emph{dominating set} is a set of vertices $U$ such that every vertex is in $U$ or has a neighbor in $U$; the minimum size of a dominating set is the \emph{domination number}, written $\gamma
(G)$). Dominator's goal is to finish the game as soon as possible, and Staller's goal is to prolong it as much as possible. The size of the dominating set obtained when both players play optimally is
the \emph{game domination number} of $G$, written as $\gd(G)$. The \emph{Staller-first game domination number}, written as $\gd'(G)$, is defined similarly; the only difference is that Staller starts
the game. Bre\v{s}ar \etal showed that $\gamma(G)\le\gd(G)\le 2\gamma(G)-1$ and that for any $k$ and $k'$ such that $k\le k'\le 2k-1$, there exists a graph $G$ with $\gamma(G)=k$ and $\gd(G)=k'$.
Their constructions use graphs with many vertices of degree 1. We present an $n$-vertex graph $G$ with domination number, minimum degree and connectivity of order $\theta(\sqrt{n})$ that satisfies $\
gd(G)=2\gamma(G)-1$. Building on the work of Bre\v{s}ar et al., Kinnersley proved that $|\gd(G)-\gd'(G)|\le 1$. Bre\v{s}ar \etal defined a pair $(k,k')$ to be \emph{realizable} if $\gd(G)=k$ and $\
gd'(G)=k'$ for some graph $G$. They showed that the pairs $(k,k)$, $(k,k+1)$ and $(2k+1,2k)$ are realizable for $k\ge 1$. Their constructions for $(k,k+1)$ and $(2k+1,2k)$ are not connected. We show
that for $k\ge 1$, the pairs $(k,k+1)$, $(2k+1,2k)$ and $(2k+2,2k+1)$ are realizable by connected graphs. Jo'zef Beck invented the following game, the game of \emph{\revs and spies}. It is a
two-player game $\rs(G,m,r,s)$ played on a graph $G$ by two players $\Rv$ and $\Sp$. Player $\Rv$ controls $r$ pieces called \emph{\revs} and player $\Sp$ controls $s$ pieces called \emph{spies}. At
the start, $\Rv$ places his pieces on vertices of $G$, and then $\Sp$ does so also. At each subsequent round, $\Rv$ moves some of his pieces from their current vertex to a neighboring vertex, and
then $\Sp$ does so also. If at the end of a round there is a meeting of at least $m$ \revs on some vertex without a spy, then $\Rv$ wins. Player $\Sp$ wins if he can prevent such a meeting forever.
We show that $s\ge \gamma(G)\floor{r/m}$ suffices for $\Sp$ to win $\rs(G,m,r,s)$. Given $r$ and $s$, let $H$ be a complete bipartite graph with at least $r+s$ vertices in each partite set. We will
show that $7r/10+O(1)$ is the minimum number of spies needed to win $\rs(H,2,r,s)$. We also show $r/2+O(1)$ is the minimum number of spies needed to win $\rs(H,3,r,s)$. For $m\ge 4$, we show that the
minimum number of required spies to win $\rs(H,m,r,s)$ is at least $\bigfloor{\floor{r/2} / \ceil{m/3}}-1$ and at most $(1+{1/ \sqrt{3}}){r/ m}+1$. | {"url":"https://www.ideals.illinois.edu/handle/2142/26187","timestamp":"2014-04-19T15:29:38Z","content_type":null,"content_length":"28262","record_id":"<urn:uuid:38f8b50a-d98f-4beb-b313-8d81130cf091>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00278-ip-10-147-4-33.ec2.internal.warc.gz"} |
What math do architects use?
What math do architects use?
The math skills needed to create or construct a building are not many indeed. If you have ever watched someone build anything, you may have seen them use a tape measurer or a ruler. they are not
supposed to. The ability to measure is a basic math skill. It is how an architect communicates the length or size of things we want to build. We must be able to add and subtract these numbers but
also must be able to use fractions of these numbers. Today, we deal with measurements that are both the English system and the Metric system. Architects use math to convert between these two systems
of measurement. You may also find architects using the many formulas and principles of geometry to create a building.
Think about designing a building. How strong do the walls and beams and ceilings need to be? You figure that out with math, knowing the weights and strengths of the materials. How much of the
materials do you need? It depends on the sizes of the walls, rooms, ceilings, all of that is math. Is concrete cheaper, or steel or cinder block? It depends on how much of each you need, what they
cost to buy and build. How long will each stage of the construction take, and how can they be planned to overlap to get the building done the fastest? How much does each sq. foot. of the building
rent for, and how long will it take to pay off the investment? How will it affect traffic in the neighborhood? How big does the garage need to be to hold the cars of what percentage of the people who
work in the building or customers/clients who visit the building?
What about windows? How many sq. feet per wall? 567. How much heat will build up in the rooms because of the windows? That will affect how much air conditioning you need. Also windows leak heat
during the Winter so more of them means more heat too. How many electrical outlets do you need? How many amps do you need from the power lines? How many circuit-breakers? How many lighting fixtures
do you need? In modern buildings, the lighting system is considered part of the heating system--which is why you see lights on in skyscrapers in the middle of the night when nobody is there.
For big, tall buildings, architects are even concerned with how the wind affects them, how it swirls between buildings. They use computer models for this. Also things like earthquake resistance.
There's almost every kind of math here! Geometry and trig, calculus, accounting, engineering, finite element analysis, computer modeling, etc.etc.
Did we answer your question?
Last edit by
Contributors Supervisors
Trust: 5677 | {"url":"http://wiki.answers.com/Q/What_math_do_architects_use","timestamp":"2014-04-18T10:35:29Z","content_type":null,"content_length":"127729","record_id":"<urn:uuid:6dba064f-36de-4e85-a34a-47247b6a807f>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00236-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 10 of 14
- IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems , 2005
"... Abstract — Process variations are of increasing concern in today’s technologies, and can significantly affect circuit performance. We present an efficient statistical timing analysis algorithm
that predicts the probability distribution of the circuit delay considering both inter-die and intra-die va ..."
Cited by 41 (4 self)
Add to MetaCart
Abstract — Process variations are of increasing concern in today’s technologies, and can significantly affect circuit performance. We present an efficient statistical timing analysis algorithm that
predicts the probability distribution of the circuit delay considering both inter-die and intra-die variations, while accounting for the effects of spatial correlations of intra-die parameter
variations. The procedure uses a first-order Taylor series expansion to approximate the gate and interconnect delays. Next, principal component analysis techniques are��and ��� are employed to
transform the set of correlated parameters into an uncorrelated set. The statistical timing computation is then easily performed with a PERT-like circuit graph traversal. The run-time of our
algorithm is linear in the number of gates and interconnects, as well as the number of varying parameters and grid partitions that are used to model spatial correlations. The accuracy of the method
is verified with Monte Carlo simulation. On average, for 100nm technology, the errors of mean and standard deviation values computed by the proposed method respectively, and the errors of predicting
the��and confidence point are ���and ���respectively. A testcase with about 17,800 gates was solved in about�seconds, with high accuracy as compared to a Monte Carlo simulation that required more
- Operations Research , 2005
"... informs ® doi 10.1287/opre.1050.0254 © 2005 INFORMS This paper concerns a method for digital circuit optimization based on formulating the problem as a geometric program (GP) or generalized
geometric program (GGP), which can be transformed to a convex optimization problem and then very efficiently s ..."
Cited by 27 (7 self)
Add to MetaCart
informs ® doi 10.1287/opre.1050.0254 © 2005 INFORMS This paper concerns a method for digital circuit optimization based on formulating the problem as a geometric program (GP) or generalized geometric
program (GGP), which can be transformed to a convex optimization problem and then very efficiently solved. We start with a basic gate scaling problem, with delay modeled as a simple
resistor-capacitor (RC) time constant, and then add various layers of complexity and modeling accuracy, such as accounting for differing signal fall and rise times, and the effects of signal
transition times. We then consider more complex formulations such as robust design over corners, multimode design, statistical design, and problems in which threshold and power supply voltage are
also variables to be chosen. Finally, we look at the detailed design of gates and interconnect wires, again using a formulation that is compatible with GP or GGP.
- IEEE Transactions on Circuits and Systems-I , 2004
"... A deterministic activity network (DAN) is a collection of activities, each with some duration, along with a set of precedence constraints, which specify that activities begin only when certain
others have finished. One critical performance measure for an activity network is its makespan, which is th ..."
Cited by 12 (4 self)
Add to MetaCart
A deterministic activity network (DAN) is a collection of activities, each with some duration, along with a set of precedence constraints, which specify that activities begin only when certain others
have finished. One critical performance measure for an activity network is its makespan, which is the minimum time required to complete all activities. In a stochastic activity network (SAN), the
durations of the activities and the makespan are random variables. The analysis of SANs is quite involved, but can be carried out numerically by Monte Carlo analysis. This paper concerns the
optimization of a SAN, i.e., the choice of some design variables that affect the probability distributions of the activity durations. We concentrate on the problem of minimizing a quantile (e.g.,
95%) of the makespan, subject to constraints on the variables. This problem has many applications, ranging from project management to digital integrated circuit (IC) sizing (the latter being our
motivation). While there are effective methods for optimizing DANs, the SAN optimization problem is much more difficult; the few existing methods cannot handle large-scale problems.
- In IEEE/ACM International Conference on Computer Aided Design , 2005
"... ..."
- in Proceedings of the 7th International Symposium on Quality Electronic Design , 2006
"... This paper quantifies the approximation error in Clark’s approach [1] to computing the maximum (max) of Gaussian random variables; a fundamental operation in statistical timing. We show that a
finite Look Up Table can be used to store these errors. Based on the error computations, approaches to diff ..."
Cited by 7 (4 self)
Add to MetaCart
This paper quantifies the approximation error in Clark’s approach [1] to computing the maximum (max) of Gaussian random variables; a fundamental operation in statistical timing. We show that a finite
Look Up Table can be used to store these errors. Based on the error computations, approaches to different orderings for pair-wise max operations on a set of Gaussians are proposed. Experiments show
accuracy improvements in the computation of the max of multiple Gaussians by up to 50 % in comparison to the traditional approach. To the best of our knowledge, this is the first work addressing the
mentioned issues. 1
- IEEE/ACM International Conference on Computer Aided Design, 2004. ICCAD-2004 , 2004
"... With aggressive scaling down of feature sizes in VLSI fabrication, process variations have become a critical issue in designs, especially for high-performance ICs. Usually having level-sensitive
latches for their speed, high-performance IC designs need to verify the clock schedules. With process var ..."
Cited by 5 (0 self)
Add to MetaCart
With aggressive scaling down of feature sizes in VLSI fabrication, process variations have become a critical issue in designs, especially for high-performance ICs. Usually having level-sensitive
latches for their speed, high-performance IC designs need to verify the clock schedules. With process variations, the verification needs to compute the probability of correct clocking. Because of
complex statistical correlations, traditional iterative approaches are difficult to get accurate results. Instead, a statistical checking of the structural conditions for correct clocking is
proposed, where the central problem is to compute the probability of having a positive cycle in a graph with random edge weights. The proposed method only traverses the graph once to avoid the
correlations among iterations, and it considers not only data delay variations but also clock skew variations. Experimental results showed that the proposed approach has an error of 0.14 % on average
in comparisons with the Monte Carlo simulations. 1
- IEEE Transactions on Computer-Aided Design
"... Abstract—A model for process-induced parameter variations is proposed, combining die-to-die, within-die systematic, and withindie random variations. This model is put to use toward finding
suitable timing margins and device file settings, to verify whether a circuit meets a desired timing yield. Whi ..."
Cited by 4 (1 self)
Add to MetaCart
Abstract—A model for process-induced parameter variations is proposed, combining die-to-die, within-die systematic, and withindie random variations. This model is put to use toward finding suitable
timing margins and device file settings, to verify whether a circuit meets a desired timing yield. While this parameter model is cognizant of within-die correlations, it does not require specific
variation models, layout information, or prior knowledge of intrachip covariance trends. The approach works with a “generic ” critical path, leading to what is referred to as a “processspecific”
statistical-timing-analysis technique that depends only on the process technology, transistor parameters, and circuit style. A key feature is that the variation model can be easily built from process
data. The derived results are “full-chip, ” applicable with ease to circuits with millions of components. As such, this provides a way to do a statistical timing analysis without the need for
detailed statistical analysis of every path in the design. Index Terms—Correlations, die-to-die variations, generic critical path, parametric yield, principal component analysis, statistical timing
analysis, timing margin, virtual corner, within-die variations. I.
- In Proceedings of IEEE International Symposium on Circuits and Systems , 2005
"... Abstract—As process variations become a significant problem in deep sub-micron technology, a shift from deterministic static timing analysis to statistical static timing analysis for
high-performance circuit designs could reduce the excessive conservatism that is built into current timing design met ..."
Cited by 3 (0 self)
Add to MetaCart
Abstract—As process variations become a significant problem in deep sub-micron technology, a shift from deterministic static timing analysis to statistical static timing analysis for high-performance
circuit designs could reduce the excessive conservatism that is built into current timing design method. In this paper, we address the timing yield problem for sequential circuits and propose a
statistical approach to handle it. In our approach, we consider the spatial and path reconvergence correlations between path delays, set-up time and hold time constraints, as well as clock skew due
to process variations. We propose a method to get the timing yield based on the delay distributions of register-to-register paths in the circuit. On average, the timing yield results obtained by our
approach have average errors of less than 1.0 % in comparison with Monte Carlo simulation. Experimental results show that shortest path variations and clock skew due to process variations have
considerable impact on circuit timing, which could bias the timing yield results. In addition, the correlation between longest and shortest path delays is not significant. 1.
- Probabilities in VLSI Circuits,” IEE Proc. of Computers , 2005
"... Abstract—We propose a novel fault/error model based on a graphi-cal probabilistic framework. We arrive at the Logic Induced Fault En-coded Directed Acyclic Graph (LIFE-DAG) that is proven to be
a Bayesian network, capturing all spatial dependencies induced by the circuit logic. Bayesian Networks are ..."
Cited by 1 (0 self)
Add to MetaCart
Abstract—We propose a novel fault/error model based on a graphi-cal probabilistic framework. We arrive at the Logic Induced Fault En-coded Directed Acyclic Graph (LIFE-DAG) that is proven to be a
Bayesian network, capturing all spatial dependencies induced by the circuit logic. Bayesian Networks are the minimal and exact representation of the joint probability distribution of the underlying
probabilistic dependencies that not only use conditional independencies in modeling but also exploits them for achieving minimality and smart probabilistic inference. The detection probabilities also
act as a measure of soft error susceptibility (an increased threat in nano-domain logic block) that depends on the structural correla-tions of the internal nodes and also on input patterns. Based on
this model, we show that we are able to estimate detection probabilities of faults/errors on ISCAS’85 benchmarks with high accuracy, linear space requirement complexity, and with an order of
magnitude ( 5 times) reduction in esti-mation time over corresponding BDD based approaches. I.
"... In this paper we give a brief overview of a heuristic method for approximately solving a statistical digital circuit sizing problem, by reducing it to a related deterministic sizing problem that
includes extra margins in each of the gate delays to account for the variation. Since the method is based ..."
Cited by 1 (1 self)
Add to MetaCart
In this paper we give a brief overview of a heuristic method for approximately solving a statistical digital circuit sizing problem, by reducing it to a related deterministic sizing problem that
includes extra margins in each of the gate delays to account for the variation. Since the method is based on solving a deterministic sizing problem, it readily handles large-scale problems. Numerical
experiments show that the resulting designs are often substantially better than one in which the variation in delay is ignored, and often quite close to the global optimum. Moreover, the designs seem
to be good despite the simplicity of the statistical model (which ignores gate distribution shape, correlations, and so on). We illustrate the method on a 32-bit Ladner-Fischer adder, with a simple
resistor-capacitor (RC) delay model, and a Pelgrom model of delay variation. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=641087","timestamp":"2014-04-16T23:50:11Z","content_type":null,"content_length":"40890","record_id":"<urn:uuid:03a93941-7655-4d70-9a80-49f9473e4bca>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00139-ip-10-147-4-33.ec2.internal.warc.gz"} |
4HCl(g) + O2 2H2O(g) + 2Cl2(g)
HCl and O2 ratio of mole is 4:1. Cl2 concentration is 0.5 mol dm^-3 in equilibrium... - Homework Help - eNotes.com
4HCl(g) + O2 <---> 2H2O(g) + 2Cl2(g)
HCl and O2 ratio of mole is 4:1. Cl2 concentration is 0.5 mol dm^-3 in equilibrium state, and the volume precentage of Cl2 is 20 %. DeltaH = -2231KJ
What are the equilibrium concentrations of HCl(g), O2(g) , H2O(g) , Cl2(g) ?
The balanced chemical equation for the reaction is:
4HCl(g) + O2(g) `stackrel(larr)(->)` 2H2O(g) + 2Cl2(g)
Let, initially there were ‘a’ moles of oxygen, out of which x moles reacted according to the given reaction. Again, moles of HCl taken initially was 4*a = 4a.
As per the balanced equation, 1 mole of oxygen reacts with 4 moles of HCl.
So, x moles of oxygen should react with 4x moles of HCl.
Proceeding with similar logic, the concentration of various species at equilibrium will be given by:
4HCl(g) + O2(g) <---> 2H2O(g) + 2Cl2(g)
Initially 4a a 0 0
At eqm. (4a-4x) (a-x) 2x 2x
By the first condition of the problem 2x =0.5 mole*dm^-3 = 0.5 M
`rArr` x = 0.5/2 = 0.25 M
The total volume of all the gaseous species, at eqm.
= 4a-4x+a-x+2x+2x = (5a-x)
Volume percentage of Cl2 is `(2x*100)/(5a-x)`
By the second condition of the problem,
`(2x*100)/(5a-x) =20`
`rArr (2x)/(5a-x)=1/5`
`rArr 10x = 5a-x`
`rArr a =(11x)/5`
Putting the value of x, `a = (11*0.25)/5` = 0.55
Therefore, at eqm., [HCl] = 4a-4x = 4*(0.55-0.25) =1.2 M
[O2] =a-x = 0.55-0.25 = 0.3 M
[H2O] =[Cl2] = 2x = 2*0.25 = 0.5 M
=> answer.
Join to answer this question
Join a community of thousands of dedicated teachers and students.
Join eNotes | {"url":"http://www.enotes.com/homework-help/4hcl-g-o2-lt-gt-2h2o-g-cl2-g-hcl-o2-ratio-mole-4-1-440265","timestamp":"2014-04-17T22:10:45Z","content_type":null,"content_length":"29113","record_id":"<urn:uuid:5104d5a8-31e4-4b6e-b4d2-c78c85237365>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00658-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
In 1995, the average cost of tuition and fees at private four year universities in the United States was $12,216 for full time students. By 2009, it had risen approximately 115.1%. To the nearest
dollar, what was the approximate cost in 2009?
• one year ago
• one year ago
Best Response
You've already chosen the best response.
convert this 115.1% into decimal form and multiply it by $12,216 Then whatever you get add it to $12,216 and that will be your answer
Best Response
You've already chosen the best response.
Thank you very much!
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/510634d8e4b03186c3f9ff42","timestamp":"2014-04-16T04:42:21Z","content_type":null,"content_length":"30164","record_id":"<urn:uuid:c4d41062-8c90-4fb7-9fa0-9b427402d35a>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00500-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematics, marriage and finding somewhere to eat
Issue 3
September 1997
There are a lot of everyday situations where people make decisions one after the other, and what is decided earlier affects the choices later on. One of the more serious of these is finding a
partner. Most people want to find the best possible partner.
Generally in Britain, you only have one partner at a time. Typical human behaviour is to have a succession of friendships, which come to an end when you decide that the current friend is not suitable
to be your partner, and eventually you find "Mr Right" or "Miss Right" and then you make the decision not to go on looking any more. Quite often, social etiquette means that you can't go back to one
of the people that you have rejected earlier, and you can't find out much about anyone who might come along later.
At the British Psychological Society's conference in April 1997, on Dr Peter Todd, of the Max Planck Institute in Munich, spoke about the best (optimal) strategy for finding a partner. He also drew a
parallel with the employer trying to find a suitable new employee from a range of applicants, and quoted the 37% rule. Once you have seen 37% of the application forms, "a coherent picture of the
ideal employee is built up and the next person to fulfil these criteria gets the job".
Psychologists in several universities have been looking at the way that human beings, animals and birds behave when searching for a mate. For some reason, the mating behaviour of pied flycatchers has
been extensively studied over the last decade. A rule which determines the best strategy to follow was developed in the middle of the 1960's, and some birds do follow this quite closely in practice.
It is possible to simulate the process of making decisions one after another. "Googol" is a game which was described by Martin Gardner in his column in the "Scientific American". Scattered over a
table, there are a lot of cards, face down. Each one has a sum of money written on the hidden side, and you are allowed to look at one card at a time, decide whether to take the money or throw the
card away. What should you do?
Other university researchers have looked at the way people go house-hunting. Looking for an ideal place to live is a problem which has many similar features to that of choosing a partner, although it
is a little easier to change a house than to change a partner!
How does a mathematician look at such problems? In each case, you have to make decisions in succession. Each time you make a decision, you either stop and have a reward, or you take the risk of going
on to another decision, choosing a partner, a sum of money, a house, an employee. This mathematical approach is called "optimal stopping".
Optimal Stopping
Like all mathematical models, we need some assumptions. The model simplifies the real world, and we need to agree on the way that we build the model.
Here, we will assume that if you looked at all the potential partners (or sums of money), you could put them into order without making any two equal. But if you can't look at all of the partners, you
must be able to work out the order for those that you have seen, and be able to tell if you meet someone who is better than any of them. We will assume that there are N potential partners, who could
be ranked 1 (the worst) to N (the best). We assume that you meet them in a random order. If there were 4, each of the 24 orders below would have the same chance of occurring.
What do you want to achieve? In the language of mathematics, what is your objective function? The simplest model for this problem is to try to make the chance of finding the best partner as large as
It turns out that the best way of selecting is to look at M-1 of the potential partners (where M ranges from 1 to N) and then choose the first one who comes along afterwards who is better than the
best that you have seen among those M-1. The reason that this is best is because when you have seen, say 2 potential partners, with one better than the other, you still don't know whether these are
the two best or the two worst or anywhere in between. So you want to use the information that you have before it is too late!
(The reason for doing the calculations with M-1 is that M is the Minimum number of potential partners that you will encounter; there are M-1 who give you the data you need before you start searching,
and the Mth who is the first who can be accepted.)
When N is 4, you can make a table of the times when you would win with each value of M.
The best value of M is 2, when you find the best partner on 11 occasions out of the possible 24. Choosing the first or the last potential partner means that you find the best one on 6 occasions,
which is the same as picking randomly, 1 in N.
But what happens for bigger N?
You can follow the same exhaustive rule when N is 5 (120 possibilities) and 6 (720 possible orders for meeting partners) but for bigger values, it is sensible to try and find a simpler (less
time-consuming) way of finding the answer. It turns out that there is a simple relationship between the best value for M given N potential partners when N is large: the 37% rule. This relationship is
derived in a separate article in this issue. See "Optimal stopping".
Mathematics of finding a partner and eating out
Building models like this gives interesting results, but you don't know, at the age of zero, how many potential partners you are likely to meet in your lifetime. Dr Peter Todd turned the problem
around. Instead of trying to estimate the number of potential partners one could consider, he put forward a rule which would work for most people. He suggested that a typical person should count up
to about a dozen potential partners, and then start hunting seriously. The first dozen would give enough information for a reasonable choice.
The same idea can be applied to the problem of trying to find somewhere to eat. Because of the limited time available before you get so hungry that you could eat anything, you estimate that you are
likely to drive past 7 or 8 pubs, cafes and restaurants. The optimal policy then is to look at 2 or 3 of them before selecting. Hopefully you will be satisfied!
The author
Dr David K. Smith, Mathematical Statistics and Operational Research Department, University of Exeter
You can contact him via email D.K.Smith@exeter.ac.uk
David Smith studied mathematics before he went on to specialise in operational research. When The Independent reported on the British Psychological Society's conference in April 1997, see "Choosing a
partner is like finding a job", he was prompted to write to the editor, see "Choice numbers", to point out that the Dr Todd's results were familiar ones, at least as far as the newspaper reported
them. He was asked to expand on his letter for PASS Maths. | {"url":"http://plus.maths.org/content/mathematics-marriage-and-finding-somewhere-eat","timestamp":"2014-04-18T23:29:47Z","content_type":null,"content_length":"31817","record_id":"<urn:uuid:055f2b6e-b8d0-464f-bdfa-43eb336838b0>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00455-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sacred Geometry Symbols and Their Meanings
The Flower of Life - The Flower of Life is the modern name given to a geometrical figure composed of multiple evenly-spaced, overlapping circles. They are arranged to form a flower-like pattern with
a sixfold symmetry, similar to a hexagon. The center of each circle is on the circumference of six surrounding circles of the same diameter.
It is considered by some to be a symbol of sacred geometry, said to contain ancient, religious value depicting the fundamental forms of space and time.
There are many spiritual beliefs associated with the Flower of Life; for example, depictions of the five Platonic solids are found within the symbol of Metatron's Cube, which may be derived from the
Flower of Life pattern. These Platonic solids are geometrical forms which are said to act as a template from which all life springs.
Fibonacci Spiral - In contrast to the golden mean (which has no beginning and no end) the Fibonacci spiralhas a definite beginning but not necessarily an end. Once begun, the Fibonacci spiral
cancontinue on into infinity.
The Fibonacci sequence possesses a unique property. Different from the Golden Mean, theFibonacci begins at 0 or 1 but quickly approximates the Golden Mean with ever increasingaccuracy. The Fibonacci
sequence seems to be strongly attracted to the Golden Mean Sequence(phi ratio) and attempts to approximate the phi ratio (1.6180339…).
or Golden Mean - In mathematics and the arts, two quantities are in the golden ratio if the ratio of the sum of the quantities to the larger quantity is equal to the ratio of the larger quantity to
the smaller one. At least since the 20th century, many artists and architects have proportioned their works to approximate the golden ratio - especially in the form of the golden rectangle, in which
the ratio of the longer side to the shorter is the golden ratio - believing this proportion to be aesthetically pleasing (see Applications and observations below). A golden rectangle can be cut into
a square and a smaller rectangle with the same aspect ratio. Mathematicians since Euclid have studied the golden ratio because of its unique and interesting properties. The golden ratio is also used
in the analysis of financial markets, in strategies such as Fibonacci retracement.
is a shape that is the intersection of two circles with the same radius, intersecting in such a way that the center of each circle lies on the circumference of the other. The name literally means the
"bladder of a fish" in Latin. The Vesica Piscis is also used as proportioning system in architecture, in particular Gothic architecture. In Christian art, some aureolas are in the shape of a
vertically oriented vesica piscis, and the seals of ecclesiastical organizations can be enclosed within a vertically oriented vesica piscis. The vesica piscis has been the subject of mystical
speculation at several periods of history, and is viewed as important in some forms of Kabbalah. More recently, numerous New Age authors have interpreted it as a yonic symbol and claimed that this, a
reference to the female genitals, is a traditional interpretation.
Spiral Nautilus Shell - Notice the spiral shape inside the nautilus shell.
Symbol of the Holy Trinity (also called Triquetra) The doctrine of the Trinity defines God as three divine persons, the Father, the Son (Jesus Christ), and the Holy Spirit. The three persons are
distinct yet coexist in unity, and are co-equal, co-eternal and consubstantial. Put another way, the three persons of the Trinity are of one being. The Trinity is considered to be a mystery of
Christian faith. | {"url":"http://www.symbolism.co/sacred_geometry_symbols.html","timestamp":"2014-04-20T18:23:53Z","content_type":null,"content_length":"26525","record_id":"<urn:uuid:8c09cdfb-5bbd-4857-a239-e1cb2d552304>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00576-ip-10-147-4-33.ec2.internal.warc.gz"} |
Ellicott, MD Geometry Tutor
Find an Ellicott, MD Geometry Tutor
...Grammar: I studied Latin in high school and college; this has allowed me to become very adept at understanding English grammar. Literature: While my favorite genre is Fantasy, I also tend to
read and reread many classics that are commonly assigned to English classes. I am very skilled at interpreting them and identifying the key points.
32 Subjects: including geometry, reading, algebra 2, calculus
With a combination of a Master's degree and 16 years of teaching experience, I am the tutor that would be most effective in building your child's educational capacity and ability. I have taught
elementary, middle and high school levels, as well as college level courses. With my experience and expe...
14 Subjects: including geometry, English, reading, biology
...I worked for over 12 years as a mechanical engineer in industry. I have also worked as a full time instructor of engineering design and drafting classes at the college level. I was a long time
user of MS Project in an industrial environment.
21 Subjects: including geometry, calculus, physics, algebra 2
...The key to understanding this class, is not just memorizing the formulas, but understanding what they mean and how they can be properly applied. I study mathematics. When I took this class it
was very proof intensive, meaning, most things done in class, I did a proof for it: from subspace, kern...
9 Subjects: including geometry, calculus, algebra 1, algebra 2
...As a tutor, my main job isn't to talk, but to listen: I think the real centerpiece is getting my students to explain the material in their own words. Everyone learns differently, and the
beauty of tutoring is that we can adapt our approach on-the-fly to address just what it is that one specific ...
18 Subjects: including geometry, calculus, writing, algebra 1
Related Ellicott, MD Tutors
Ellicott, MD Accounting Tutors
Ellicott, MD ACT Tutors
Ellicott, MD Algebra Tutors
Ellicott, MD Algebra 2 Tutors
Ellicott, MD Calculus Tutors
Ellicott, MD Geometry Tutors
Ellicott, MD Math Tutors
Ellicott, MD Prealgebra Tutors
Ellicott, MD Precalculus Tutors
Ellicott, MD SAT Tutors
Ellicott, MD SAT Math Tutors
Ellicott, MD Science Tutors
Ellicott, MD Statistics Tutors
Ellicott, MD Trigonometry Tutors | {"url":"http://www.purplemath.com/Ellicott_MD_Geometry_tutors.php","timestamp":"2014-04-19T17:13:23Z","content_type":null,"content_length":"24088","record_id":"<urn:uuid:8197f970-b291-4a7b-96ae-9a5ec0cd5be6>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00515-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mechanics of Materials 3rd Edition Chapter 11.4 Solutions | Chegg.com
Calculate the work done on a rod AB by external force using the relation below
Apply the work-energy principle as follows
From equations (2) and (3), obtain the relation to calculate the displacement as follows
Therefore, the displacement | {"url":"http://www.chegg.com/homework-help/mechanics-of-materials-3rd-edition-chapter-11.4-solutions-9781118136331","timestamp":"2014-04-18T22:27:46Z","content_type":null,"content_length":"42654","record_id":"<urn:uuid:1c1db753-3eff-460a-8dfa-e1770289be27>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00407-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: Electronic Journal of Differential Equations, Vol. 2007(2007), No. 19, pp. 112.
ISSN: 1072-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu
ftp ejde.math.txstate.edu (login: ftp)
Abstract. We are concerned with the existence and form of positive solutions
to a nonlinear third-order three-point nonlocal boundary-value problem on
general time scales. Using Green's functions, we prove the existence of at least
one positive solution using the Guo-Krasnoselskii fixed point theorem. Due
to the fact that the nonlinearity is allowed to change sign in our formulation,
and the novelty of the boundary conditions, these results are new for discrete,
continuous, quantum and arbitrary time scales.
1. Statement of the problem
We will develop an interval of values whereby a positive solution exists for the
following nonlinear, third-order, three-point, nonlocal boundary-value problem on
arbitrary time scales
) (t) = f(t, x(t)), t [t1, t3]T, (1.1)
x((t1)) - x | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/840/1153293.html","timestamp":"2014-04-19T08:48:29Z","content_type":null,"content_length":"8297","record_id":"<urn:uuid:359ed353-d45d-441e-bf47-20e8aeb8fc7d>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00365-ip-10-147-4-33.ec2.internal.warc.gz"} |
Smyrna, GA Prealgebra Tutor
Find a Smyrna, GA Prealgebra Tutor
...In person I only tutor in the Buckhead, Sandy Springs, Vinings, Brookhaven, Dunwoody and Midtown areas. If you live outside of these areas, you could either meet me at a coffee shop, or online
through WyzAnt's online platform. WyzAnt's platform consists of video, audio, chat and whiteboard and is a very effective and convenient learning and tutoring tool.
19 Subjects: including prealgebra, physics, calculus, geometry
...I am currently a Bible Study teacher at my local church, teaching Bible Study classes to students ages 4-17. Overall, I have at least 5 years experience in Bible Study teaching. I am a recent
Spelman graduate with a Bachelor of Arts degree in Art.
26 Subjects: including prealgebra, English, reading, Spanish
...While she has primarily tutored students at the college level since she is currently a Ph.D student in the School of Physics at Georgia Tech, she would love to tutor a younger child in need of
help as she has had a lot of fun on the outreach programs mentioned above. As she is currently still a ...
17 Subjects: including prealgebra, calculus, algebra 1, algebra 2
...I have chosen to leave the classroom to tutor from home so that I can be a stay at home mom. I can provide references upon request. I look forward to hearing from you.
10 Subjects: including prealgebra, geometry, algebra 1, algebra 2
...I sometimes catch myself doing algebra problems just for fun! I love any kind of algebra and can explain it well. I taught Algebra 2 in a small private school for 4 years.
9 Subjects: including prealgebra, geometry, algebra 1, algebra 2
Related Smyrna, GA Tutors
Smyrna, GA Accounting Tutors
Smyrna, GA ACT Tutors
Smyrna, GA Algebra Tutors
Smyrna, GA Algebra 2 Tutors
Smyrna, GA Calculus Tutors
Smyrna, GA Geometry Tutors
Smyrna, GA Math Tutors
Smyrna, GA Prealgebra Tutors
Smyrna, GA Precalculus Tutors
Smyrna, GA SAT Tutors
Smyrna, GA SAT Math Tutors
Smyrna, GA Science Tutors
Smyrna, GA Statistics Tutors
Smyrna, GA Trigonometry Tutors | {"url":"http://www.purplemath.com/smyrna_ga_prealgebra_tutors.php","timestamp":"2014-04-17T13:19:31Z","content_type":null,"content_length":"23889","record_id":"<urn:uuid:0a27f01b-dcad-4944-90df-7a0067af9a75>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00269-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Book of Numbers by Fernando Q. Gouvêa | First ThingsThe Book of Numbers by Fernando Q. Gouvêa | Articles | First Things
Is God a Mathematician?
by Mario Livio
Simon & Schuster, 320 pages, $26
When it comes to mathematics, two things seem evident. First, a great many people have nothing but negative feelings about it. There is probably no other subject about which people so proudly
proclaim their ignorance and even their basic incompetence ( I can’t even balance my checkbook! ). Second, those who like mathematics do so passionately. Some seem to love mathematics itself,
finding it aesthetically pleasing and intellectually compelling.
In 1960, physicist Eugene Wigner wrote a paper that expressed a more complex reaction. He called it The Unreasonable Effectiveness of Mathematics in the Physical Sciences, thereby giving us a
useful name for the underlying philosophical question. It is a mystery, he argued, that mathematics could actually be so useful in the physical sciences. Mathematicians, says Wigner, typically
develop their theories out of pure curiosity and aesthetic appreciation. It is sometimes suggested, wrote British mathematician G.H. Hardy, that pure mathematicians glory in the uselessness of
their work, and make it a boast that it has no practical applications.
So why is mathematics as useful as it is? We are not surprised that the calculus can be used to describe motion, or that statistics is useful for analyzing society; they were, at least to some
extent, constructed for those purposes. But it seems amazing that group theory, invented in the 1830s to explain why certain kinds of equations were harder to solve than others, has turned out to
have a fundamental role in describing the symmetries of quantum physics, or that number theory, once considered the quintessence of pure mathematics, has turned out to be crucial to cryptography. It
is, as Wigner put it, a wonderful gift which we neither understand nor deserve.
The question of the unreasonable effectiveness of mathematics is the starting point for a new book by Mario Livio called Is God a Mathematician? The title reflects one possible answer to Wigner’s
question: Mathematics is useful for understanding the universe because God, who designed the universe, did so mathematically. We, being made in God’s image, also think mathematically. Livio, who does
not seem to believe there is a God, does not like that answer, and the goal of his book is to propose a different solution to Wigner’s conundrum.
Livio complicates the issue by introducing a far deeper (and older) problem in the philosophy of mathematics: the question of what mathematics is about. Is it more akin to science, in which we use
our minds to explore an external reality, or more akin to poetry, in which we construct new realities? When mathematicians speak of regular polygons, five-dimensional spaces, symmetry groups, or, for
that matter, the number seven , what are they actually talking about? When we prove a theorem, are we discovering something or creating something? If the latter, why do we feel that mathematical
results carry so much certainty?
This is a big question. The mathematician Barry Mazur calls it The Question and argues that it is unavoidable: The bizarre aspect of the mathematical experience”and this is what gives such fierce
energy to The Question”is that one feels (I feel) that mathematical ideas can be hunted down, and in a way that is essentially different from, say, the way I am currently hunting the next word to
write to finish this sentence. One can be a hunter and gatherer of mathematical concepts, but one has no ready words for the location of the hunting grounds. Of course we humans are beset with
illusions, and the feeling just described could be yet another. There may be no location.
Philosophers (and philosophically inclined mathematicians) have proposed many answers to The Question. The way that mathematicians speak about their everyday work seems to suggest a kind of naive
platonism: They cannot be precise about where exactly these mathematical entities are, but they feel as if they are dealing with some sort of reality. Of course, other answers have been proposed.
Logicism posits that the vast edifice of mathematics is nothing but a working out of logic, of the rules of reasoning. This was Bertrand Russell’s view, famously worked out with Alfred North
Whitehead in Principia Mathematica . Most mathematicians and most philosophers of mathematics found the book unreadable and the argument unpersuasive.
Another alternative, formalism, claims that mathematics is a game: Choose your axioms and follow the rules to prove your theorems. (This view is often attributed to David Hilbert, though Hilbert’s
view was, in fact, somewhat different.) Mathematical formalism intends to put the metaphysical question out of bounds by saying that mathematics makes no claims at all about reality.
Recently, in the philosophy of mathematics as in other fields, several kinds of social constructivism have been gaining popularity. These often start from the obvious fact that mathematics is
something human beings do, and then they try to explain away the feeling of reality and solidity that seems to attach to mathematical objects and theorems. Some of these would argue that we should
not expect an alien civilization to share our mathematics.
Several things drive all this discussion. Mathematics does seem to evoke a feeling of timelessness and certainty. We may not formulate geometry exactly as Euclid did, but none of Euclid’s theorems is
now considered false. The proofs given by Apollonius and Archimedes still work as proofs for us, and the theorems they prove are, we say, true , not just agreed upon or universally accepted. Livio
quotes Ian Stewart: There is a word in mathematics for previous results that are later changed”they are simply called mistakes . Indeed, mathematicians seem never to have to deal with the kind of
debate about fundamentals with which other disciplines contend. Once a theorem has been proved and the proof has been examined and certified by those competent to judge, the result is just accepted
as true, and everyone goes on to the next step.
The question of unreasonable effectiveness is by far the easier one to address. Convenient as it is for mathematicians to quote Wigner’s dicta, which provide the perfect justification for pursuing
whatever interests us while assuring society that applications for it are bound to show up eventually, I think most mathematicians, in their more serious moments, don’t quite buy it. There are plenty
of things that can be made to seem miraculous given the right point of view, from the physiology of trees to the fact that children learn to speak. There is a philosophical question there,
discussed in detail, for example, in Mark Steiner’s 1998 book The Applicability of Mathematics as a Philosophical Problem , but in general philosophers of mathematics have not found it too difficult
to deal with. The question of the nature of mathematics seems deeper.
In order to use these issues as the driving force for a book for the general public, Livio faces the crucial problem that most of the interesting examples are far too difficult for
that public. In order to explain, for example, why it is surprising that the theory of Lie groups is useful in quantum mechanics, it is necessary to be at least a little familiar with both Lie groups
and quantum mechanics.
Livio’s solution is to proceed by way of history. He begins with Pythagoras and Plato, early proponents of the idea of mathematics as being about real things. (In fact, the Pythagoreans seem to have
felt that mathematical objects are more real than the objects of sense experience.) As examples of mathematics applied to the world, he discusses the work of Archimedes, Galileo, Descartes, and
Newton”a motley crew whose ideas about how mathematics relates to the world couldn’t be more varied.
Later in the book he looks at the development of statistics, the birth of non-Euclidean geometry, and the debate about the foundations of mathematics in the late nineteenth and early twentieth
century. Livio’s account of these episodes is, for the most part, uncritical: He recounts what one might describe as the folk history of mathematics, the stories mathematicians tell each other. His
bibliography cites few recent histories of mathematics, which results in many statements that will annoy careful historians.
All of these topics are standard fare in books about mathematics aimed at the general public, and few of them seem to be on point. Statistics, having been developed precisely for the purpose it
serves, is the weakest of his examples. In the early modern period, the development of mathematics was so thoroughly intertwined with questions of natural philosophy that it is not too surprising to
see mathematics turning out to be useful. The work of the logicians in the early twentieth century was important in the philosophical debate about The Question, but mostly in a negative way: It
convinced most mathematicians that both Logicism and Formalism were inadequate as descriptions of mathematics.
Only two of Livio’s examples are really illustrative of unreasonable effectiveness. The story of non-Euclidean geometry has the element of surprise that drove Wigner’s article: Developed for
rigorously internal reasons, it turned out to be the key element in the formulation of Einstein’s general theory of relativity. In addition, the discovery that one could get consistent geometries
from different sets of axioms raised the deeper question of what geometry was about. Up to that point, it had been easy to say that geometry was the study of the properties of physical space. Now
that there were alternative geometries, all of which seemed to share the peculiar solidity of mathematical reality, this was clearly no longer tenable.
The other good example is knot theory. Mathematicians started studying knots in the late nineteenth century because Lord Kelvin had proposed a theory of atoms based on knots. Kelvin’s theory was
quickly discovered to be wrong, but mathematicians had found a neat subject to think about and kept right on proving theorems. Then, late in the twentieth century, knot theory was suddenly
applicable again, both in biochemistry and, more spectacularly, in quantum physics. These links were so surprising and so deep that a publisher started a series of books on Knots and Everything.
After working for most of the book to convince his reader that both Wigner’s question and The Question are worth thinking about, Livio goes on, in the last two chapters, to propose answers. As to
the nature of mathematics, he ends up arguing for a kind of middle ground: Mathematical concepts are created, but the theorems we prove about them are discovered. He thinks of these concepts as
existing only in human minds. Influenced by cognitive science, he explains the apparent solidity of these mathematical objects as reflecting the structure of the human brain as shaped by natural
selection. The end result is a kind of neurologically accented social constructivism.
Livio is not prepared, however, to dissolve physics itself in this solvent. The appeal to natural selection is made in order to allow the argument that, while mathematics is shaped by the structure
of the human brain, that structure, having been shaped by the pressures of the real world, is adequate to explain reality. Of course, it is hard to see how having brains capable of quantum mechanics
would have survival value to early hominids.
But Livio tries to deal with this by arguing that nature has been kind to us by being governed by universal laws, that our brains have evolved in such a way as to be able to grasp such laws, and
that in fact we have selected out those parts of nature that are amenable to mathematical treatment. He seems to realize, in the end, that this is unsatisfactory. For one thing, isn’t the fact that
there are such things as natural laws”and that they are mathematical”almost exactly the mystery Wigner was talking about?
The book does not end with a bang: Have we then solved the mystery of the effectiveness of mathematics once and for all? I have certainly given it my best shot, but doubt very much that everybody
would be utterly convinced by the arguments that I have articulated in this book. We are left, after that, with a line from Bertrand Russell to the effect that philosophical questions are worth
thinking about even though we can never find final answers to them.
Fair enough. Unfortunately, Livio does little to advance that thinking.
Fernando Q. Gouvêa is Carter Professor of Mathematics at Colby College and author, with William P. Berlinghoff, of Math Through the Ages: A Gentle History for Teachers and Others. | {"url":"http://www.firstthings.com/article/2009/01/001-the-book-of-numbers?keepThis=true&TB_iframe=true&height=500&width=700","timestamp":"2014-04-20T18:32:32Z","content_type":null,"content_length":"31114","record_id":"<urn:uuid:3f3d4e2f-d20d-40d7-b332-7e0c7ffc9e49>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00526-ip-10-147-4-33.ec2.internal.warc.gz"} |
Flat Netting Beading Tutorial
Learn to Graph and Stitch Flat Net Beading
In this beading tutorial for Flat Netting, you will first learn the beading thread path and then learn how to use the tool to design beadwork with this stitch without it causing a lot of frustration.
The base row for this beading stitch is always made up of sets of four beads each. The example below shows three sets of four beads each set, with the last set forming both the end of the first row
and the beginning of the second. I will call this the "turn." So, to make a base row for flat netting, pick up as many sets of four as you wish to use plus one extra set for the turn. Count six beads
back and go through the seventh to form the turn.
Each row after the base row is created from 3 bead sets for each stitch except the turn stitches at each end which use sets of four. So pick up 3 beads, skip 3 beads in the base row and go through
the 4th. Continue adding sets of 3 beads in this manner until you get to the end and you need to turn again. Then pick up four beads and go through the middle bead of the last set of three you added
to make the turn.
That's it really, except at the very end when you are finished, add one more bead to square up the corner.
Now, before we get to the graph paper, let me explain how to use it. I didn't want to make a seperate graph for say, two sets, three sets, four sets, etc, because that would be too limiting in terms
of what you could design on it. Instead I'm going to give you a full page of graph paper and explain how you can use it to design pieces with any number of sets in the base row.
For the design to be workable, it needs to start with the bead one below and to the left of the bead at the top of a diamond shape. I know that's kinda confusing, but the picture should help clarify.
Look at your graph, see how only the beads at the tops and bottoms of the diamond shapes are perfectly vertical? Choose one of these beads and draw a vertical line through the middle of it extending
to the top and bottom edges of the paper. A ruler might help. ;-) Now, beginning from the next bead up and to the right of the vertical bead you chose, count as many sets of four as you want to use
in your base row not counting the turn set. Then draw another vertical line through the center of the next vertical bead over. Ignore the beads that the lines pass through and any beads outside of
the lines. All the whole beads inside the two lines are part of the design, partial beads and beads outside the lines are not. The picture shows two sets of four and illustrates how the turn set
looks on the graph. Turn beads are indicated by a T in each bead. Note that in your actual beadwork the two turn beads on the edge will be closer to horizontal than shown on the graph. I may be
over-explaining this, but in my experience, figuring out where each row ends without some kind of guide can be very confusing!
Click Here For The Graph
Now, don't let all this make you feel that you have to design "inside the lines." You can go ahead and graph out a figure and then add the lines later to see where the rows end. Also, don't forget
that you can turn the paper sideways and have your base row go from top to bottom instead of side to side, it works the same either way. In fact, that's what I did for both of the little sample
designs shown below.
Heh, this is supposed to be Kokopelli. You might want to use higher contrast colors if you try this one. (g) | {"url":"http://beadwork.about.com/library/weekly/aa070698.htm","timestamp":"2014-04-19T13:40:09Z","content_type":null,"content_length":"37849","record_id":"<urn:uuid:09e41738-a8e1-4f63-8bf5-f4a562113d77>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00332-ip-10-147-4-33.ec2.internal.warc.gz"} |
Using two flow sensors connected to one Arduino. - Arduino Forum
Liter per time: 24.8400
...and that is not what I had expected.
The engines I develop this flow meter to, uses between 3 and 50 liters per hour.
24 is between 3 and 50, so ?? Be aware that this is the consumption per hour!
The device gives 10.000 pulses per liter according to spec. So the formula seems OK
pulseCount = (pulseCountIN - pulseCountOUT) / 10000.0 * 3600;
better split the two (and give the vars good names) to debug them seperately.
float LPH_IN = pulseCountIN/ 10000.0 * 3600; // pulses to Liter/hour
float LPH_OUT = pulseCountOUT/ 10000.0 * 3600;
The fact that you get ~24 Liter means that the incounter get far more pulses than the outcounter. This makes sense as the device consumes fuel.
About Accuracy
The minimum consumption it can detect is one pulse. If you measure a delta of 1 pulse per second it equals a difference of 1/10000*3600 = 0.36L
Look at your measurements
Liter per time: 23.4000
Liter per time: 24.1200
Liter per time: 24.8400
Liter per time: 25.2000
Liter per time: 25.5600
Liter per time: 24.8400
and you see the delta's between the measurements are 0.36 or 0.72. That is one or two pulses difference.
The best way to have a better accuracy is to measure the fuel-consumption over the last minute * 60. A delta of 1 pulse will result in a delta of 0.006 L
The best way to do this is to make measurements per second and use them to fill a circular buffer of 60 elements. | {"url":"http://forum.arduino.cc/index.php?topic=124115.15","timestamp":"2014-04-17T01:20:04Z","content_type":null,"content_length":"52031","record_id":"<urn:uuid:affe4ce4-4d02-4161-b950-a309d5bcaa07>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00388-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patent application title: SPECIFICATION, OPTIMIZATION AND MATCHING OF OPTICAL SYSTEMS BY USE OF ORIENTATIONAL ZERNIKE POLYNOMIALS
Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP
The present disclosure relates to specification, optimization and matching of optical systems by use of orientation Zernike polynomials. In some embodiments, a method for assessing the suitability of
an optical system of a microlithographic projection exposure apparatus is provided. The method can include determining a Jones pupil of the optical system, at least approximately describing the Jones
pupil using an expansion into orientation Zernike polynomials, and assessing the suitability of the optical system on the basis of the expansion coefficient of at least one of the orientation Zernike
polynomials in the expansion.
A method, comprising:at least approximately describing a Jones pupil of a microlithographic projection exposure apparatus using an expansion into orientation Zernike polynomials; andassessing the
suitability of the optical system on the basis of an expansion coefficient of at least one of the orientation Zernike polynomials in the expansion.
The method according to claim 1, wherein the optical system is considered within tolerance if the expansion coefficient of the at least one of the orientation Zernike polynomials is less than a
threshold value, and the optical system is considered not within tolerance if the expansion coefficient of the at least one of the orientation Zernike polynomials is not less than the threshold
The method according to claim 1, wherein the optical system is rated as being within tolerance only if the expansion coefficient of the at least one of the orientation Zernike polynomials between a
center and an edge of an illumination field is less than a threshold value.
The method according to claim 1, further comprising:determining a sensitivity function, which describes, for at least for some of the orientation Zernike polynomials, an impact of the respective
orientation Zernike polynomial on a lithography parameter; andassessing the suitability of the optical system using the sensitivity function.
The method according to claim 4, wherein the lithography parameter comprises a parameter selected from the group consisting of CD deviation, image placement errors and deviation between horizontal
and vertical structures.
The method according to claim 4, wherein the sensitivity function is multiplied with the expansion.
The method according to claim 1, wherein the optical system comprises at least layer selected from the group consisting of anti-reflective layers and high-reflective layers, and a thickness and/or
material of the at least one layer is modified in dependence of the assessment.
The method according to claim 1, wherein, during assessing, the expansion coefficients of only a subgroup of orientation Zernike polynomials are considered, and the number of orientation Zernike
polynomials in the subgroup does not exceed
25. 9.
The method according to claim 1, wherein, during assessing, only a subgroup of orientation Zernike polynomials are considered, and the order of orientation Zernike polynomials in the subgroup does
not exceed
20. 10.
The method according of claim 1, wherein the orientation Zernike polynomials can be defined as{right arrow over (W)}
| Φ
εwherein R
are radial polynomials given by R n m ( r ) = s = 0 ( n - m ) / 2 ( - 1 ) s ( n - s ) ! ( 1 2 ( n + m ) - s ) ! ( 1 2 ( n - m ) - s ) ! r n - 2 s ##EQU00050## with m, n, s being integers, m=-n, . . .
, n and ε=0 or 1,and wherein Φ
ε is given by Φ → m 0 = ( cos m Φ 2 - sin m Φ 2 ) , Φ → m 1 = ( sin m Φ 2 cos m Φ 2 ) ##EQU00051## or any linear combination thereof.
The method according to claim 1, wherein the optical system is a projection lens of the microlithographic projection exposure apparatus.
The method according to claim 1, wherein the optical system is a single optical element of the microlithographic projection exposure apparatus, or a group of elements of a projection lens of the
microlithographic projection exposure apparatus.
A method, comprising:at least approximately describing, for each of the optical systems, respective Jones pupils of at least two optical systems of a microlithographic projection exposure apparatus
using an expansion into orientation Zernike polynomials; andmodifying at least one of the optical systems such that a difference between an expansion coefficient of at least one of the orientation
Zernike polynomials in the expansions for the optical systems is reduced.
A method, comprising:at least approximately describing a Jones pupil for a design of an optical system of a microlithographic projection exposure apparatus using an expansion into orientation Zernike
polynomials;establishing a quality function which incorporates the expansion coefficient of at least one of the orientation Zernike polynomials in the expansion; anddesigning the optical system with
a modified design being selected such that the quality function is improved for the modified design with respect to the preset design.
The method according to anyone of the claim 14, wherein the optical system comprises at least layer selected from the group consisting of an anti-reflective and a high-reflective layer, and designing
comprises varying the thickness and/or the material of at least one of the layers.
This application claims priority under 35 U.S.C. §119(e)(1) to U.S. Provisional Application Nos. 61/059,893, filed Jun. 9, 2008, and 61/107,748 filed on Oct. 23, 2008. The contents of both of these
applications are hereby incorporated by reference.
FIELD [0002]
The disclosure relates to specification, optimization and matching of optical systems by use of orientation Zernike polynomials.
BACKGROUND [0003]
Microlithography is used in the fabrication of microstructured components like integrated circuits, LCD's and other microstructured devices. The microlithographic process is performed in a so-called
microlithographic exposure system including an illumination system and a projection lens. The image of a mask (or reticle) being illuminated by the illumination system is projected, through the
projection lens, onto a resist-covered substrate, typically a silicon wafer bearing one or more light-sensitive layers and being provided in the image plane of the projection lens, in order to
transfer the circuit pattern onto the light-sensitive layers on the wafer.
The generalized description of the propagation of polarized light through the projection lens uses complex electromagnetic transfer functions like Jones pupils, Mueller matrices or Stokes parameters.
Nevertheless, Geh, B., et al., "The impact of projection lens polarization properties on lithographic process at hyper-NA" in Optical Microlithography XX. 2007. USA: SPIE-Int. Soc. Opt. Eng. Vol
6520, p. 6520-15, showed that in current lithography lenses these transfer functions can be simplified to pupil maps corresponding to the basic physical effects of apodization, retardation and
diattenuation. The so-called Zernike expansion of the scalar projection lens aberrations has been successfully introduced to provide the basis for a better understanding, control, and reduction of
aberration induced imaging errors. In certain optical systems, the scalar Zernike polynomials provide a convenient base set.
SUMMARY [0005]
In some embodiments, it is desirable to have a corresponding base set also for relatively high performance and polarized operation. It can be advantageous if the set has the following properties:
Symmetry: Terms with m-fold rotation symmetry, i.e. the pupil reproduces itself after rotation by i/m*360° (i=1 . . . m)
Physical meaning: Typical imaging errors correspond to a low number of coefficients (optionally just one)
Simple relationship to scalar imaging errors
Simple relationship to component errors.
Aspects of the present disclosure are related to the specification, optimization and matching of optical systems by use of orientation Zernike polynomials (="OZPs"). The extension of scalar Zernike
polynomials to orientation Zernike polynomials allows the expansion of diattenuation and retardance of the pupil. In combination with scalar Zernike polynomials they provide a set for the complete
specification of the imaging properties of high NA lenses.
In some embodiments, the disclosure provides a method for assessing the suitability of an optical system of a microlithographic projection exposure apparatus. The method includes: determining a Jones
pupil of the optical system; at least approximately describing the Jones pupil using an expansion into orientation Zernike polynomials; and assessing the suitability of the optical system on the
basis of the expansion coefficient of at least one of the orientation Zernike polynomials in the expansion.
In certain aspects, the disclosure involves the concept of decomposing the Jones pupil, according to the so-called SVD-algorithm (SVD=singular value decomposition), into pupil maps corresponding to
the basic physical effects of apodization (i.e. scalar pupil transmission), phase, retardation and diattenuation. In some embodiments, these basic pupil maps are decomposed into suited base
For apodization, i.e. the scalar pupil transmission, as well as for the phase it is possible to stick to the well known scalar Zernike approach. Retardation and diattenuation, however, are caused by
orientation dependent effects like birefringence which do not allow a scalar description like Zernikes. Therefore, the "Orientation Zernike Polynomials" are introduced, which are defined below and
which allow a complete and systematic description of polarized imaging using lithography lenses. The base function treatment disclosed herein can allow a complete and systematic description of
polarized imaging using lithography lenses. Similar to the Zernike polynomials of the scalar wavefront, the orientation Zernike polynomials allow to calculate imaging sensitivities with respect to
the Zernike and Orientation-Zernike coefficients, to do simulations based on these sensitivities with focus on OPC behavior, and to define process control limits in lens production.
The orientation Zernike polynomials support a thorough understanding of polarized imaging, its modelling, and represent the basis for a control of polarization effects to uncritical levels.
In some embodiments, assessing the suitability the optical system is rated as being within tolerance if the expansion coefficient(s) is/are below a respective predetermined threshold, and that the
optical system is rated as being not within tolerance if the expansion coefficient(s) is/are not below the predetermined threshold.
In certain embodiments, the mean value for the expansion coefficient(s) across an illumination field (e.g. the scanner slit) is/are used. Additionally or alternatively, in some embodiments, the field
variation of the expansion coefficient(s) between center and edge of an illumination field (e.g. the scanner slit) is considered.
In some embodiments, when assessing the suitability, the optical system is rated as being within tolerance only if for the expansion coefficient the field variation of the expansion coefficient(s)
between the center and the edge of an (typically rectangular) illumination field (e.g. the scanner slit) is less than a predetermined threshold value.
In certain embodiments, the disclosure provides a method that includes: determining a sensitivity function, which describes, for at least for some of the orientation Zernike polynomials, the impact
of the respective orientation Zernike polynomial on a lithography parameter; and assessing the suitability of the optical system using the sensitivity function.
In some embodiments, the lithography parameter belongs to the group CD deviation, image placement errors and deviation between horizontal and vertical structures.
In certain embodiments, the sensitivity function is multiplied with the expansion.
In some embodiments, the optical system includes at least one anti-reflective (AR) layer and/or at least one high-reflective (HR) layer, and the thickness and/or material of at least one of the
layers is modified in dependence of the assessment.
In certain embodiments, when assessing the suitability, the expansion coefficients of only a subgroup of orientation Zernike polynomials are considered, the number of orientation Zernike polynomials
in the subgroup not exceeding 25 (e.g., not exceeding 16, not exceeding 8, not exceeding 5). Here the disclosure makes use of the realization that comparable few of the expansion coefficients have
significant impact on imaging. Data taken from actual lens populations demonstrate the successful control of these parameters in lens production.
In some embodiments, when assessing the suitability, the expansion coefficients of only a subgroup of orientation Zernike polynomials are considered, the order of orientation Zernike polynomials in
the subgroup not exceeding 20 (e.g., not exceeding 15, not exceeding 10).
The orientation Zernike polynomials can be defined as
{right arrow over (W)}
.sup.|m| Φ
wherein R
are radial polynomials given by
R n m
( r ) = s = 0 ( n - m ) / 2 ( - 1 ) s ( n - s ) ! s ! ( 1 2 ( n + m ) - s ) ! ( 1 2 ( n - m ) - s ) ! r n - 2 s ##EQU00001##
with m, n, s being integers, m=-n, . . . , n and ε=0 or 1,
and wherein Φ
ε is given by
Φ → m 0 = ( cos m Φ 2 - sin m Φ 2 ) , Φ → m 1 = ( sin m Φ 2 cos m Φ 2 ) . ##EQU00002##
In some embodiments, the optical system is a projection lens of the microlithographic projection exposure apparatus.
In certain embodiments, the optical system is a single optical element, such as the last lens element, of the microlithographic projection exposure apparatus, or a group of elements of a projection
lens of the microlithographic projection exposure apparatus.
The disclosure also provides a method for matching the polarization properties of at least two optical systems of a microlithographic projection exposure apparatus to each other. In some embodiments,
the method includes: determining a Jones pupil of each of the optical systems; at least approximately describing, for each of the optical systems, the respective Jones pupil using an expansion into
orientation Zernike polynomials; and modifying at least one of the optical systems such that the difference between the expansion coefficient of at least one of the orientation Zernike polynomials in
the expansions for the optical systems is reduced.
In certain aspects, it may be acceptable if the respective expansion coefficient(s) being considered are not very good or optimized for each individual one of the optical systems, but a similarity or
even identity exists with regard to the respective expansion coefficient(s) if a comparison is made between the different optical systems, so that similar or at least almost identical results are
obtained in the microlithographic process, as far as polarization effects are concerned, if changing from one system to the other.
In some embodiments, the disclosure provides a method for designing an
optical system of a microlithographic projection exposure apparatus
. The method includes: determining, for a preset design of the optical system, a Jones pupil of the optical system; at least approximately describing the Jones pupil using an expansion into
orientation Zernike polynomials; establishing a quality function which incorporates the expansion coefficient of at least one of the orientation Zernike polynomials in the expansion; and designing
the optical system with a modified design being selected such that the quality function is improved for the modified design with respect to the preset design.
Optionally, the optical system may be the illumination system or the projection lens of the microlithographic projection exposure apparatus, or it may also be a single optical element (e.g. a last
lens element) or a group of optical elements of the microlithographic projection exposure apparatus.
In certain embodiments, the disclosure provides a method for designing a microlithographic projection exposure apparatus that includes an illumination system and a projection lens. The method
includes: determining, for a preset design of the microlithographic projection exposure apparatus, a first Jones pupil of the illumination system; at least approximately describing the first Jones
pupil using a first expansion into orientation Zernike polynomials; determining, for the given design of the microlithographic projection exposure apparatus a second Jones pupil of the projection
lens; at least approximately describing the second Jones pupil using a second expansion into orientation Zernike polynomials; establishing a quality function which incorporates the expansion
coefficient of at least one of the orientation Zernike polynomials in each of the first and second expansion; and designing the microlithographic projection exposure apparatus with a modified design
being selected such that the quality function is improved for the modified design with respect to the preset design.
In some embodiments, it may be acceptable if the respective expansion coefficient(s) being considered are not very good or optimized for both the illumination system and the projection lens, but the
illumination system and the projection lens synergize to yield a desired or acceptable overall performance of the microlithographic projection exposure apparatus.
In some embodiments, the disclosure provides a method for evaluating the polarization properties of an optical system of a microlithographic projection exposure apparatus. The method can include:
determining a Jones pupil of the optical system; at least approximately describing the Jones pupil using an expansion into orientation Zernike polynomials; and evaluating the polarization properties
of the optical system on the basis of the expansion.
In certain embodiments, the disclosure provides an optical system of a microlithographic projection exposure apparatus. The optical system includes a device for determining a Jones pupil of the
optical system; and a computer which at least approximately describes the Jones pupil using an expansion into orientation Zernike polynomials.
The determination of the Jones pupil can be realized by simulation and/or measurement.
In certain embodiments, the computer is configured to compare the expansion coefficient of at least one of the the Orientation Zernike Polynomials with a predetermined threshold.
In some embodiments, the optical system further includes at least one manipulator to manipulate the polarization properties of the optical system based upon the comparison. Such a manipulation can be
realized such that the resulting expansion coefficient(s) of the considered Orientation Zernike polynomial(s) (=OZPs) are minimized or at least reduced.
Embodiments can provide one or more of the following features:
providing an optical system of which the OZP coefficients are provided, too;
providing an optical system of which the OZPs are below a given threshold (which can be applied to both the mean value and the field variation);
providing an optical system of which the components are specified using OZPs;
providing a set of optical systems in which the difference of certain OZPs is below a given value;
optical system including illumination and projection lens, in which the OZPs of both parts are related, in order to yield a specified overall performance;
determining a sensitivity function, which describes, for at least for some of the orientation Zernike polynomials, the impact of the respective orientation Zernike polynomial on a lithography
parameter, and assessing the suitability of the optical system using the sensitivity function.
BRIEF DESCRIPTION OF THE DRAWINGS [0048]FIG. 1
depicts polarization, retardance and diattenuation as ellipses;
FIG. 2 shows orientation Zernike polynomials;
FIG. 3 shows correspondence between Retardation Orientation Zernike polynomials and Jones pupil;
FIG. 4 shows OZP grouped according to rotational symmetry (waviness);
[0052]FIG. 5
shows a meridional section of a microlithography projection lens
FIG. 6 illustrates a Jones pupil;
[0054]FIG. 7
shows pupils resulting from the polar decomposition using the singular value decomposition results;
[0055]FIG. 8
shows the complex pupils after Pauli decomposition;
[0056]FIG. 9
shows the scalar Zernike spectra;
[0057]FIG. 10
shows the orientation Zernike spectra;
[0058]FIG. 11
shows a two-mirror design;
[0059]FIG. 12
shows the PV values of the OZP's for diattenuation;
[0060]FIG. 13
shows the PV values of the OZP's for retardation;
[0061]FIG. 14
illustrates an exemplary evaluation of the lithographic parameters based on a sensitivity analysis; and
[0062]FIG. 15
illustrates a between the CD deviation obtained using the above sensitivity approach.
DETAILED DESCRIPTION [0063]
The scalar pupil of an optical system includes the phase and the intensity effect on traversing rays. Unfortunately it may not possible to describe the imaging properties of high numerical aperture
optical systems completely by use of a scalar pupil. The polarization dependence of the imaging contrast makes the incorporation of the transformation of polarization states mandatory. A rule of
thumb for the occurrence of high-NA effects is that such effects occur for an NA of 0.8 or larger.
The complete specification of the pupil--apart from depolarization effects--is accomplished via the Jones pupil. The Jones pupil assigns a Jones matrix to each pupil point. As is well known, the
Jones matrix describes the transformation of Jones vectors which in turn describe the--in general elliptical--polarization states of a given ray.
The effects on the polarization state of a traversing ray are conveniently separated into a set of elemental effects. These are
a) change of phase→scalar phase
b) change of intensity→scalar transmission (apodization)
c) induced partial polarization→diattenuation
d) change of polarization ellipticity→retardance
e) rotation of polarization state.→retardance
A reliable and stable method to separate a Jones pupil into the elemental effects can be provided by the so-called polar decomposition obtained from the singular value decomposition.
=VDW.sup.+ (1)
V and W are unitary matrices
. Their product yields the retardance matrix U=VW.sup.+. The diagonal matrix D has positive real elements--the singular values. The relation to the elemental effects is given in Table 1:
-US-00001 TABLE 1 Effect Formula Scalar phase Φ = arg { U } ##EQU00003## (2) Scalar transmission T = 1 2 ( D 11 2 + D 22 2 ) ##EQU00004## (3) Retardance magnitude ΔΦ = 2 + arccos { Re ( U 21 ) 2 + Re
( U 11 ) 2 U } ##EQU00005## (4) Retardance orientation ψ = arctan { - Re ( U 12 ) Re ( U 11 ) } - arctan { Im ( U 12 ) Im ( U 11 ) } ##EQU00006## (5) Diattenuation magnitude ΔT = D 11 2 - D 22 2 D 11
2 + D 22 2 ##EQU00007## (6) Diattenuation orientation τ = 0.5 arctan { 2 Re { W 11 W 21 * } W 11 2 - W 21 2 } ##EQU00008## (7) Diattenuation ellipticity = tan { 1 2 arcsin 2 Im { W 11 W 21 * } W 11 2
+ W 21 2 } ##EQU00009## (8) Rotation R = arctan { Re ( U 12 ) Re ( U 11 ) } ##EQU00010## (9)
The above table gives a rough overview of the formulas. Not all particular cases are included.
The imaging errors stemming from the scalar phase are well understood and conveniently described by an expansion of the wavefront (phase over pupil) into Zernike polynomials. Single Zernike
polynomials correspond to particular imaging errors.
In principle, the same procedure is possible with respect to the transmission and the rotation. Here again an expansion into scalar Zernike polynomials can be done. Diattenuation and retardance,
however, are no scalar quantities: They consist of a magnitude and a direction. For their expansion the Zernike polynomials have to be modified.
1. Scalar Zernike Polynomials
Zernike polynomials are defined on the unit circle. A real function W(r,φ) on that circle is represented as
( r , Φ ) = n = 0 N m = - n n C nm R n m m Φ = n = 0 N m = - n n A nm U nm = n = 0 N m = - n n A nm R n m { cos m Φ : m ≧ 0 sin m Φ : m < 0 ( 10 ) ##EQU00011##
where the radial polynomials are given by R n m
( r ) = s = 0 ( n - m ) / 2 ( - 1 ) s ( n - s ) ! s ! ( 1 2 ( n + m ) - s ) ! ( 1 2 ( n - m ) - s ) ! r n - 2 s ( 11 ) ##EQU00012##
denotes the coefficients of the expansion.
For convenience a linear single number scheme can be used. A possible and convenient numbering scheme relates the sub-indices n and m according to the following formulas to the single number j ("ceil
(x)"=lowest natural number larger than x)
= ceil j a = b 2 - j + 1 m = { - a 2 a even - a - 1 2 a odd n = 2 ( b - 1 ) - m ( 12 ) ##EQU00013##
The numbering scheme of the fringe Zernikes used here is depicted concisely according to Table 2.
-US-00002 TABLE 2 ##STR00001##
2. Jones-Zernike Polynomials
Non-published US-Provisional application U.S. 60/655,563 discloses the representation of the electric field in the pupil plane--i.e. the Jones vectors there--as a superposition of vector modes V
(p,q) with scalar excitation coefficients:
( p , q ) = i Z i V i ( p , q ) ( 13 ) ##EQU00014##
The expansion of a two-dimensional vector field of M pixels into N vector modes is performed as the expansion of a scalar wavefront into scalar Zernike polynomials. Here, this done by a least square
solution of a linear system of equations.
= 1 N A ij x j = b i mit i = 1 , , M ( 14 ) ##EQU00015##
with A ij
= ( V j p ( p i , q i ) V j q ( p i , q i ) ) x j = Z j b i = ( E p ( p i , q i ) E q ( p i , q i ) ) ( 15 ) ##EQU00016##
.e. the considered optical system is characterized by the coefficients Z
3. Pauli-Zernike Polynomials
The decomposition of a Jones pupil into Pauli Spin matrices
σ 0 = ( 1 0 0 1 ) , σ 1 = ( 1 0 0 - 1 ) , σ 2 = ( 0 1 1 0 ) , σ 3 = ( 0 - 0 ) , ( 16 ) ##EQU00017##
is described in McGuire
, J. P., Jr. and R. A. Chipman: "Polarization aberrations. 1. Rotationally symmetric optical systems", Applied Optics, 1994. 33(22): p. 5080-5100, and in McIntyre, G. R., et al.: "Polarization
aberrations in hyper-numerical-aperture projection printing: a comparison of various representations", Journal of Microlithography, Micro-fabrication and Microsystems, 2006. 5(3): p. 33001-31.
4. Orientators
4.1 Definition
Orientators are introduced here to describe magnitude and orientation of polarization states, retardance or diattenuation in free space. All these quantities can be visualized by ellipses with a well
defined meaning of the large and small axis A and B, respectively (see Table 3 in connection with
FIG. 1
-US-00003 TABLE 3 Quantity Large axis Small axis Polarization state Main linear polarization Minor linear polarization Retardance Fast axis Slow axis Diattenuation Bright axis Dark axis
The orientation angle ψ of the ellipse defines a direction in space. This not a vector because it is defined modulo 180°, i.e. a rotation of 180° returns the ellipse to its original orientation. The
inverse direction is obtained by rotation by 90°. These are properties of retardance and diattenuation but also of polarization, provided the ellipticity is also inverted.
Because of that property an orientation can be represented by a vector with doubled directional angle ψ. By attaching a magnitude a to it we get an orientator
_ ( a , ψ ) = a ( cos 2 ψ sin 2 ψ ) ( 17 ) ##EQU00018##
4.2 Properties
(i) Two orientators enclosing an angle of 45° are orthogonal to each other.
Proof: The inner product of two orientators under 45° is obviously zero, because the inner product of two vectors enclosing 90° is zero
(ii) The negative (inverse) element to an orientator encloses an angle of 90° with it
Proof: The negative element is
_ ( - a , ψ ) = - a ( cos 2 ψ sin 2 ψ ) = a ( cos ( 2 ψ + π ) sin ( 2 ψ + π ) ) = a ( cos ( 2 ( ψ + π 2 ) ) sin ( 2 ( ψ + π 2 ) ) ) ( 18 ) ##EQU00019##
(iii) An orientator represents an orthogonal transformation matrix with an eigenvector along it
Proof: A orthogonal transformation matrix with an eigenvector along ψ is given by
= ( cos ψ - sin ψ sin ψ cos ψ ) ( A 0 0 B ) ( cos ψ sin ψ - sin ψ cos ψ ) = ( A cos 2 ψ + B sin 2 ψ ( A - B ) cos ψ sin ψ ( A - B ) cos ψ sin ψ A sin 2 ψ + B cos 2 ψ ) = ( A cos 2 ψ + B sin 2 ψ ( A -
B ) cos ψ sin ψ ( A - B ) cos ψ sin ψ A sin 2 ψ + B cos 2 ψ ) = ( A + ( B - A ) sin 2 ψ ( A - B ) cos ψ sin ψ ( A - B ) cos ψ sin ψ A + ( B - A ) sin 2 ψ ) = ( A + B - A 2 ( 1 - cos 2 ψ ) A - B 2 sin
2 ψ A - B 2 sin 2 ψ A + B - A 2 ( 1 + cos 2 ψ ) ) = A + B 2 ( 1 0 0 1 ) + A - B 2 ( cos 2 ψ sin 2 ψ sin 2 ψ - cos 2 ψ ) = A + B 2 ( I + A - B A + B O ( ψ ) ) ( 19 ) ##EQU00020##
Herein, I denotes the unity matrix and O(ψ) a matrix formed by two orthogonal orientators.
( ψ ) = ( cos 2 ψ sin 2 ψ sin 2 ψ - cos 2 ψ ) = [ O _ ( ψ ) , O _ ( π - ψ ) ] ] ( 20 ) ##EQU00021##
(iv) The product of two transformations with eigenvalues deviating only weakly from one, is given by the sum of the orientators (times the product of the mean value of the eigenvalues)
1 T 2 = ( A 1 + B 1 ) 2 ( A 2 + B 2 ) 2 [ I + A 1 - B 1 A 1 + B 1 O ( ψ 1 ) ] [ I + A 2 - B 2 A 2 + B 2 O ( ψ 2 ) ] ≈ ( A 1 + B 1 ) 2 ( A 2 + B 2 ) 2 [ I + A 1 - B 1 A 1 + B 1 O ( ψ 1 ) + A 2 - B 2 A
2 + B 2 O ( ψ 2 ) ] ( 21 ) ##EQU00022##
(v) Stokes vector component 1 and 2 (0°-90° linear and 45°-135° linear) correspond to an orientator
The general form of a stokes vector of intensity one for a degree of polarization DoP, an ellipticity χ and a direction ψ of the main axis of the polarization ellipse is given by
→ = ( S 0 S 1 S 2 S 3 ) ( a DoP cos 2 χ cos 2 ψ DoP cos 2 χ sin 2 ψ DoP sin 2 χ ) ( 22 ) ##EQU00023##
.e. the vector formed by S
and S
is actually an orientator
( S 1 S 2 ) = DoP cos 2 χ O → ( ψ ) ( 23 ) ##EQU00024##
5. Orientation Zernike Polynomials
5.1 OZP Definition
The orientation Zernike polynomials (="OZP") are an expansion for orientator fields. As in the vector Zernike polynomials the radial and angular part of the vector Zernike polynomials are separated.
The radial part is the same as in the scalar ones, the angular part, however, is now an orientator
{right arrow over (W)}
.sup.|m| Φ
ε (24)
with m=-n, . . . , n and ε=0 or 1.
Accordingly, an expansion into OZP can be given as
= n = 0 N m = - n n = 0 , 1 C nm R n m Φ _ m . ##EQU00025##
The functional form is the same as for VZP. The interpretation, however, is different: The angular part of the OZP is interpreted as an orientator, i.e. it represents an orientation angle ψ that is
half of the angle of the vector.
Φ → m 0 = ( cos m Φ 2 - sin m Φ 2 ) , Φ → m 1 = ( sin m Φ 2 cos m Φ 2 ) ( 25 ) ##EQU00026##
This is not the only possible definition of {right arrow over (Φ)}
ε. Any two independent linear combinations of {right arrow over (Φ)}
and {right arrow over (Φ)}
also represent a valid basis set. Another possible set is outlined in the following:
Let j be the number of a certain scalar Fringe Zernike polynomial, and m≧0 its waviness. Furthermore choose j always to be the smaller index of the two possible values corresponding to its waviness
(lets call this Zernike x-wave). For instance for m=1, j could be 2, 7, 14, . . . and for m=2, j could be 5, 12, 21, . . . . Let the corresponding OZP be denoted by OZ=±j. Then the angular parts of
the OZP are given by
Φ → m 0 = ( cos m Φ 2 sgn ( OZ ) sin m Φ 2 ) for OZ = ± j ( x - wave ) ( 26 ) ##EQU00027##
Φ → m 1 = ( sin ( m Φ 2 + π 4 ) - sgn ( OZ ) cos ( m Φ 2 + π 4 ) ) ##EQU00028##
for OZ
=±(j+1) (y-wave) (27)
The term π/4 is inserted in order to make the correspondence between the numbering scheme and the Jones pupils more apparent. If the given OZP represents some retardation, then with this choice, the
phase of the J
element of the Jones matrix pupil of this OZP always corresponds to the scalar Zernike with the same Fringe number. A similar analogy holds for an OZP representing some diattenuation.
For the positive x-wave OZP the orientation angle ψ is
ψ = arc tan ( tan m Φ 2 ) = m Φ 2 , ##EQU00029##
whereas for positive y-wave OZP the orientation angle ψ is
ψ = arc tan ( - cot ( m Φ 2 + π 4 ) ) = π 2 + m Φ 2 + π 4 = m Φ 2 + 3 π 4 = m Φ 2 - π 4 , ##EQU00030##
since ψ is π-periodic. Hence the orientation between an x-wave and y-wave OZP differs by 45°.
5.2 Depiction of the First OZP
The first OZP looks the illustration in FIG. 2.
Correspondence of OPZ and Jones Pupils
As mentioned above, the numbering scheme of the OZP is chosen such that for a given OZ number, the Jones matrix element J
corresponds to the scalar Fringe Zernike Z=|OZ|. The sign of the OZP shows up in the offdiagonal elements.
The correspondence between Retardation Orientation Zernike polynomials and Jones pupil is shown in FIG. 3.
Rotational Symmetry of the OZP
The OZP show rotational symmetry of various degrees (OZ 2 for instance, is threefold and OZ 6 completely rotational symmetrical). For the rotational symmetry, the difference between the orientation ψ
and the azimuthal angle φ (that varies linearly from 0 to 2π)
Δ(φ)=φ-ψ(φ) (28)
is the determining quantity. Provided, 2π is an integer multiple of Δ, the number of symmetry-axes (the "foldness" is
= Δ ( Φ = 2 π ) π ( 29 ) ##EQU00031##
Substituting for ψ the orientation of the OZP yields
= 2 π + m 2 π 2 π = 2 + m ( 30 ) ##EQU00032##
Accordingly, the number of symmetry axes of an OZP for azimuthal index m is
=2+m (31)
For the first OZP we get
-US-00004 j 1 -1 2 -2 3 -3 4 -4 5 -5 6 -6 m 0 0 1 1 -1 -1 0 0 2 2 -2 -2 k 2 2 3 3 1 1 2 2 4 4 0 0
The symmetry properties of the OZP can be summarized as follows:
(i) The spherical terms (==1,4,9,16, . . . ) are two-fold symmetrical
(ii) The symmetry of OZP of the same group differs by 2m
(iii) OZP of indices of identical magnitude have the same number of symmetry axes.
The Spin-Number of an OZP
The rotational symmetry follows from the rotation of the orientation while following a closed curve around the azimuth.OZ1 for instance, does not rotate at all. OZ 2, however, performs a 180°
rotation following a path once around the pupil. For OZ-2 this rotation is also 180°, but in the opposite direct ion. So we can assign OZ1 a spin of 0, OZ2 a spin of 0.5 (the orientation rotated
counterclockwise) and OZ-2 a spin of -0.5 (clockwise rotation of orientation). It turns out that the spin number is given by the index m according to
= m 2 ( 32 ) ##EQU00033##
Summary of Indices of the OZP
In Table 4 the indices assigned to orientation Zernike polynomials are summarized.
-US-00005 TABLE 4 Quantity symbol formula examples Number j -- 1 -1 2 -2 3 -3 4 -4 5 -5 6 -6 Azimuth m 0 0 1 -1 1 -1 0 0 2 -2 2 -2 index Radial n 0 0 1 1 1 1 2 2 2 2 2 2 index Spin s s = m 2 ##
EQU00034## 0 0 1 2 ##EQU00035## - 1 2 ##EQU00036## 1 2 ##EQU00037## - 1 2 ##EQU00038## 0 0 1 -1 1 -1 Foldness f f = 2 - m 2 2 1 3 1 3 2 2 0 4 0 4
Rotation of a Single OZP
This section shows that the rotation of a single OZP is given by a superposition of OZP of the same index m. This behavior is in accordance with scalar Zernike polynomials. The amplitude of any OZP
is independent of the azimuth. Therefore it is sufficient to consider the vector containing the angular dependence alone.
Φ → m 0 = ( cos m ψ 2 - sin m ψ 2 ) , Φ → m 1 = ( sin m ψ 2 cos m ψ 2 ) ( 33 ) ##EQU00039##
Rotation of the orientator for ε=0 yields with
(α+β)=cos α cos β-sin α sin β and sin(α+β)=sin cos β+cos α sin β
Φ _ m 0 ( ψ + Δ ψ ) = ( cos m ( ψ + Δ ψ ) 2 - sin m ( ψ + Δ ψ ) 2 ) = cos m Δ ψ 2 ( cos m ψ 2 - sin m ψ 2 ) - sin m Δ ψ 2 ( sin m ψ 2 cos m ψ 2 ) ( 34 ) ##EQU00040##
Φ _ m 0 ( ψ + Δψ ) = cos m Δ ψ 2 Φ _ m 0 - sin m Δ ψ 2 Φ _ m 1 ( 35 ) ##EQU00041##
Rotation of the orientator for ε=1 yields a similar result
Φ _ m 0 ( ψ + Δψ ) = ( sin m ( ψ + Δ ψ ) 2 cos m ( ψ + Δ ψ ) 2 ) = cos m Δ ψ 2 ( sin m ψ 2 cos m ψ 2 ) + sin m Δ ψ 2 ( cos m ψ 2 - sin m ψ 2 ) ( 36 ) i . e . Φ _ m 1 ( ψ + Δψ ) = cos m Δ ψ 2 Φ _ m 1
+ sin m Δ ψ 2 Φ _ m 0 ( 37 ) ##EQU00042##
Putting the last two results together we obtain for a rotation of a linear combination of the angular parts with ε=0 and ε=1 (=positive and negative index in the linear scheme)
Φ _ m 0 ( ψ + Δψ ) + b Φ _ m 1 ( ψ + Δ ψ ) = ( a cos m Δ ψ 2 + b sin m Δ ψ 2 ) Φ _ m 0 + ( b cos m Δ ψ 2 - a sin m Δ ψ 2 ) Φ _ m 1 ( 38 ) ##EQU00043##
Because of this simple relationship it is sufficient for the following examples to consider one azimuthal position only.
Grouping of OZP
Using the symmetry properties together with the indices derived above, the disclosure suggests to group the OZP as explained in the following with respect to FIG. 4:
The waviness--denoted on top of the columns--becomes an integer number, i.e. it is positive and negative. The sign depends on the handiness of the rotation of the orientators with respect to the
azimuth (the spin number). This can be clockwise or counterclockwise. The grouping of the OZP desirably takes the handiness into account, because the superposition is completely different for
different handiness.
Relation to Pauli Pupils
There is a close relationship to the Zernike expansion of the Jones-pupil decomposed into Pauli matrices as will be shown in the following. An orthogonal transformation matrix T was written in (3) as
= A + B 2 ( 1 0 0 1 ) + A - B 2 ( cos 2 ψ sin 2 ψ sin 2 ψ - cos 2 ψ ) = A + B 2 ( I + A - B A + B O ( ψ ) ) ( 39 ) ##EQU00044##
With the definition of the Pauli-matrices
σ 0 = ( 1 0 0 1 ) , σ 1 = ( 1 0 0 - 1 ) , σ 2 = ( 0 1 1 0 ) σ 3 = ( 0 - 0 ) ( 40 ) ##EQU00045##
it becomes
= A + B 2 σ 0 + A - B 2 ( cos ( 2 ψ ) σ 1 + sin ( 2 ψ ) σ 2 ) ( 41 ) ##EQU00046##
From a comparison of equations (39) and (41) it is obvious, that the Zernike decomposition of the prefactors of the Pauli matrices is closely related to the Orientation Zernike decomposition. To
provide the equations we consider the Pauli decomposition of an OZP-group
W j
= W nm 0 = ( C n m S n m S n m - C n m ) = C n m σ 1 + S n m σ 2 , W j + 1 = W n - m 0 = ( C n m - S n m - S n m - C n m ) = C n m σ 1 - S n m σ 2 , W - j = W nm 1 = ( S n m C n m C n m - S n m ) = S
n m σ 1 + C n m σ 2 W - j - 1 = W n - m 1 = ( - S n m C n m C n m S n m ) = - S n m σ 1 + C n m σ 2 ( 42 ) ##EQU00047##
It is sufficient to consider a single group. We denote the spectrum of the OZP by o
(=-∞ . . . ∞), the spectrum of σ1 by s
((=0, . . . , ∞) and the spectrum of σ2 by t
((j=0, . . . , ∞). That yields
, (43)
The inversion is straight forward
s j
= o j + o j + 1 2 s j + 1 = o - j - o - j - 1 2 , t j = o j - o j + 1 2 , t j + 1 = o - j + o - j - 1 2 , ( 44 ) ##EQU00048##
For the terms with rotational symmetry n=0 (i.e. S
=0) we get the simple relationship
[j] o
+1 (45)
Optical System [0150]
As an illustrative example,
FIG. 5
shows a meridional section of a microlithography projection lens which is also disclosed in WO/2004/019128 A2 (see FIG. 19 and Table 9, 10 of that publication). This projection lens 500 is a
catadioptrical projection lens which includes, along an optical system axis OA and between an object (or reticle) plane OP and an image (or wafer) plane IP, a first refractive subsystem, a second
catadioptric subsystem and a third refractive subsystem, and is also referred to as a RCR-system ("refractive-catadoptrical-refractive").
The projection lens 500 has a numerical aperture (NA) of 1.3, an image field of 26 mm, 4-times demagnification and an operational wavelength of 193 nm.
The simulated effects include the antireflection coatings, the high-reflection coatings and the volume absorption of SiO
Jones-Pupil Polar Decomposition of Jones Pupil and Pauli Decomposition
The Jones pupil is illustrated in FIG. 6. The polar decomposition using the singular value decomposition results in the following pupils shown in
FIG. 7
. The complex pupils after Pauli decomposition are shown in
FIG. 8
The scalar Zernike spectra are shown in
FIG. 9
, while the orientation Zernike spectra are shown in
FIG. 10
As can be gathered from
FIG. 10
, the OZP being the most significant for the diattenuation is the OZP for which j=-5, followed up by the OZP's for j=5, j=4, j=-12 and j=1. Further OZP's which still give (relatively small)
contributions to diattenuation are the OZP's for j=3, j=-3, j=8, j=-8, j=9, j=-11, j=12, j=-15, j=16, j=17, j=-17, j=-20, j=21 and j=-21.
As can also be gathered from
FIG. 10
, the OZP being the most significant for the retardance is the OZP for which j=1, followed up by the OZP's for j=4, j=5, j=-5 and j=-12. Further OZP's which still give (relatively small)
contributions to retardance are the OZP's for j=3, j=-3, j=-8, j=9, j=-11, j=12, j=16, j=17, j=-17, j=-20, j=21 and j=-21.
Optical System [0158]FIG. 11
shows a further example of a so-called "2M design" ("two-mirror design"). The layout of the projection objective 600 is also disclosed in WO 2007/086220 A1 (see FIG. 4 and Table 1 of that
publication). The projection objective 600 is rotationally symmetric, has one straight optical axis common to all refractive and reflective optical components and has two concave mirrors. The concave
mirrors are both constructed and illuminated as off-axis sections of axially symmetric surfaces. The projection objective 600 is designed as an immersion objective for λ=193 nm having an image side
numerical aperture NA=1.3 when used in conjunction with a high index immersion fluid between the exit surface of the objective closest to the image plane and the image plane. Calculations of optical
properties have been performed for operation with a rectangular effective image field with size 26 mm*5.5 mm offset by 2.57 mm from the optical axis.
Each of the entry and exit surfaces of the transparent optical elements in the objective 600 of
FIG. 11
is provided with an antireflection (AR) structure effective to reduce reflection losses and thereby to increase transmittance of the coated surface. The concave mirror surfaces are coated with high
reflectance (HR) reflective coatings. As a conventional antireflection (AR) structure, the AR structure disclosed in U.S. Pat. No. 5,963,365 is used (see embodiment 1 of U.S. Pat. No. 5,963,365).
Then, the thicknesses of the AR structures in the above design have been optimized with respect to the field dependency of the OZP's as discussed in the following in more detail. Hereto, a suitable
variation of the thickness with respect to the lens height (with respect to the optical system axis, i.e. the lens radius) has been selected. The thickness of the AR structure can be described by the
following polynomial:
*h.- sup.4+a
) (46)
Accordingly, the thickness of the AR structures after optimization can be described by appropriate values of the coefficients in the above equation (46), wherein the respective values for the
optimized design are given below in Table 5. In equation (46), d
denotes the nominal thickness of the respective AR layer in line with the embodiment 1 of U.S. Pat. No. 5,963,365, so that this nominal thickness is multiplied by a factor according to equation (46).
-US-00006 TABLE 5 Surf. a1 a2 a3 a4 a5 a6 a7 S1 1.00E+00 7.28E-05 7.21E-07 1.75E-09 -9.94E-11 -2.94E-12 -5.93E-14 S2 1.00E+00 7.80E-05 7.00E-07 9.92E-10 -1.06E-10 -2.88E-12 -5.56E-14 S3 1.01E+00
7.29E-05 5.36E-07 -7.05E-10 -1.10E-10 -2.59E-12 -4.69E-14 S4 9.95E-01 3.92E-05 5.36E-07 3.73E-09 1.63E-12 -4.77E-13 -1.11E-14 S5 1.01E+00 1.10E-04 5.41E-07 -3.36E-09 -1.48E-10 -2.90E-12 -4.70E-14 S6
9.95E-01 5.48E-05 5.21E-07 2.38E-09 -1.35E-11 -5.60E-13 -1.05E-14 S7 1.01E+00 1.69E-04 6.92E-07 -8.01E-09 -2.70E-10 -5.16E-12 -8.45E-14 S8 9.69E-01 -4.61E-05 1.12E-06 2.83E-08 5.31E-10 9.09E-12
1.54E-13 S9 9.62E-01 -7.24E-05 1.64E-06 4.17E-08 8.04E-10 1.46E-11 2.63E-13 S10 9.80E-01 1.21E-04 2.23E-06 2.79E-08 3.47E-10 4.62E-12 6.70E-14 511 9.73E-01 5.48E-05 2.56E-06 4.91E-08 8.61E-10
1.52E-11 2.77E-13 S12 9.55E-01 -7.92E-05 4.17E-06 1.08E-07 2.44E-09 5.61E-11 1.34E-12 S13 9.58E-01 -3.52E-05 4.58E-06 1.12E-07 2.50E-09 5.75E-11 1.37E-12 S14 9.76E-01 3.95E-05 2.91E-06 7.48E-08
1.78E-09 3.66E-11 7.77E-13 S15 9.71E-01 -5.93E-05 1.45E-06 3.60E-08 5.67E-10 7.91E-12 1.02E-13 S16 1.02E+00 2.31E-04 7.74E-07 -1.61E-08 -4.49E-10 -7.91E-12 -1.21E-13 S17 1.01E+00 2.25E-04 7.91E-07
-1.51E-08 -4.26E-10 -7.49E-12 -1.14E-13 S18 9.87E-01 -2.20E-04 -1.03E-06 6.67E-09 2.62E-10 4.40E-12 6.08E-14 S19 9.92E-01 -1.91E-04 -1.09E-06 2.49E-09 1.84E-10 3.37E-12 4.78E-14 S20 9.97E-01 8.01E-05
6.77E-07 1.81E-09 -3.48E-11 -8.35E-13 -1.29E-14 S21 1.01E+00 8.40E-05 2.68E-07 -3.55E-09 -8.30E-11 -1.16E-12 -1.38E-14 S22 1.01E+00 7.83E-05 2.14E-07 -3.70E-09 -8.03E-11 -1.09E-12 -1.28E-14 S23
1.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 S24 1.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 S25 1.01E+00 1.03E-05 -2.09E-09 -8.46E-10 -1.56E-11 -2.01E-13
-2.22E-15 S26 9.99E-01 3.78E-06 1.01E-07 5.54E-10 -2.73E-13 -4.23E-14 -6.36E-16 S27 1.01E+00 3.84E-05 1.30E-07 -1.04E-09 -2.78E-11 -3.84E-13 -4.41E-15 S28 9.96E-01 3.11E-05 2.36E-07 8.20E-10
-3.61E-12 -1.10E-13 -1.49E-15 S29 9.97E-01 6.65E-05 3.28E-07 -6.04E-10 -3.51E-11 -5.76E-13 -7.79E-15 S30 1.01E+00 1.67E-04 6.47E-07 -8.61E-09 -2.54E-10 -4.48E-12 -6.89E-14 S31 9.96E-01 1.01E-04
4.75E-07 -1.66E-09 -6.72E-11 -1.09E-12 -1.50E-14 S32 1.02E+00 1.50E-05 -2.33E-06 -3.86E-08 -3.60E-10 -3.90E-13 8.15E-14 S33 1.01E+00 8.51E-05 -3.32E-08 -4.57E-09 -7.59E-11 -1.04E-12 -1.35E-14 S34
1.00E+00 5.29E-05 -3.39E-08 -2.36E-09 -3.53E-11 -4.53E-13 -5.48E-15 S35 1.02E+00 1.02E-04 -4.47E-08 -5.52E-09 -9.03E-11 -1.18E-12 -1.43E-14 S36 1.01E+00 4.44E-05 2.12E-08 -1.90E-09 -3.30E-11
-4.43E-13 -5.41E-15 S37 1.01E+00 3.69E-05 -2.94E-08 -2.02E-09 -3.14E-11 -4.06E-13 -4.84E-15 S38 1.01E+00 3.79E-05 6.05E-09 -1.46E-09 -2.40E-11 -3.05E-13 -3.51E-15 S39 1.01E+00 3.77E-05 -1.98E-08
-1.43E-09 -1.92E-11 -2.11E-13 -2.14E-15 S40 1.02E+00 5.34E-05 -1.14E-07 -2.53E-09 -2.61E-11 -2.32E-13 -1.94E-15 S41 1.00E+00 2.77E-05 -2.79E-08 -7.73E-10 -7.94E-12 -7.33E-14 -6.60E-16 S42 1.01E+00
5.38E-05 -3.82E-08 -1.89E-09 -2.15E-11 -2.05E-13 -1.85E-15 S43 1.00E+00 3.02E-05 -3.51E-08 -8.96E-10 -7.96E-12 -5.85E-14 -3.74E-16 S44 1.01E+00 7.07E-05 -4.93E-08 -3.04E-09 -3.58E-11 -3.48E-13
-3.18E-15 S45 9.98E-01 3.87E-05 2.67E-08 -6.45E-10 -6.88E-12 -5.09E-14 -3.00E-16 S46 1.03E+00 1.90E-04 1.33E-07 -1.04E-08 -1.93E-10 -2.76E-12 -3.63E-14 S47 9.95E-01 7.00E-05 3.39E-07 6.15E-10
-1.26E-11 -3.03E-13 -5.40E-15 S48 1.09E+00 7.94E-04 1.14E-06 -1.24E-07 -3.65E-09 -8.08E-11 -1.63E-12
As to the HR layer, the parameters of the optimized structure can be gathered from Table 6.
-US-00007 TABLE 6 Partial Layer Refractive No. thickness number Absorption Material 1 40.92 1.7 3.20E-03 Al2O3 2 37.2 1.41 1.00E-04 AlF3 3 26.97 1.7 3.20E-03 Al2O3 4 37.2 1.41 1.00E-04 AlF3 5 26.97
1.7 3.20E-03 Al2O3 6 37.2 1.41 1.00E-04 AlF3 7 26.04 1.7 3.20E-03 Al2O3 8 38.13 1.41 1.00E-04 AlF3 9 26.04 1.7 3.20E-03 Al2O3 10 38.13 1.41 1.00E-04 AlF3 11 26.04 1.7 3.20E-03 Al2O3 12 38.13 1.41
1.00E-04 AlF3 13 26.04 1.7 3.20E-03 Al2O3 14 39.06 1.41 1.00E-04 AlF3 15 25.11 1.7 3.20E-03 Al2O3 16 39.99 1.41 1.00E-04 AlF3 17 25.11 1.7 3.20E-03 Al2O3 18 37.2 1.41 1.00E-04 AlF3 19 6.51 1.68
2.10E-04 LaF3 20 20.46 1.7 3.20E-03 Al2O3 21 38.13 1.41 1.00E-04 AlF3 22 8.37 1.68 2.10E-04 LaF3 23 18.6 1.7 3.20E-03 Al2O3 24 38.13 1.41 1.00E-04 AlF3 25 9.3 1.68 2.10E-04 LaF3 26 16.74 1.7 3.20E-03
Al2O3 27 39.06 1.41 1.00E-04 AlF3 28 28.83 1.68 2.10E-04 LaF3 29 38.13 1.41 1.00E-04 AlF3 30 28.83 1.68 2.10E-04 LaF3 31 39.99 1.41 1.00E-04 AlF3 32 26.04 1.68 2.10E-04 LaF3 33 39.06 1.41 1.00E-04
[0163]FIG. 12
shows, in order to demonstrate the field dependency of the OZP's before and after optimization, the PV values (PV="Peak-to-Value") of the OZP's for diattenuation, while
FIG. 13
shows the same for retardation. It can be seen in both figures that after optimization the PV values have been significantly reduced for most OZP's.
As can be gathered from Tables 7-10 shown below, relevant lithographic parameters as the CD variation (=variation of critical dimension, i.e. deviation from the nominal structure width or dimension)
between the center of the field and the edge of the field, HV (=differences between horizontal and vertical lines), OVL (=overlay, i.e. lateral displacement or deviation from the desired position)
and telecentry have been significantly improved as a result of the optimization of the AR-structures in the above design with respect of the field dependency of the OZP's.
These lithographic parameters have been determined for an annular setting, wherein the ratio between inner and outer radius in pupil coordinates was 0.72/0.97 and wherein a so-called XY-polarization
(also referred to as "quasi-tangential polarized setting"), a numerical aperture NA=1.3, an operating wavelength λ=193 nm and a desired CD of 45 nm on the wafer have been used. In order to obtain the
results of Table 7-10, the aerial image in resist has been evaluated using a simple threshold model.
-US-00008 TABLE 7 CD [nm] CD [nm] pitch before optimization after optimization Difference Δ 90 0.13717381 0.0532614 0.08391241 110 -0.52389101 -0.34792701 0.17596401 140 -0.6279239 -0.38442915
0.24349475 180 -0.42093159 -0.25616406 0.16476753 250 -0.33736285 -0.18033748 0.15702538 500 0.17847836 0.18173183 -0.00325347
-US-00009 TABLE 8 HV [nm] HV [nm] pitch before optimization after optimization Difference Δ 90 -0.54879654 -0.36532542 0.18347112 110 0.19030984 0.04403086 0.14627897 140 0.55631032 0.2781398
0.27817052 180 0.3779319 0.19011843 0.18781347 250 0.36802477 0.16910316 0.19892161 500 -0.06739512 -0.08496163 -0.01756651
-US-00010 TABLE 9 OVL [nm] OVL [nm] pitch before optimization after optimization Difference Δ 90 -0.33609839 -0.16690079 0.1691976 110 -0.13377339 -0.01220941 0.12156398 140 -0.14866116 -0.05040817
0.09825298 180 -0.27403985 -0.1742175 0.09982234 250 -0.27833902 -0.192899 0.08544003 500 -0.30159824 -0.22785506 0.07374318
-US-00011 TABLE 10 Telecentricity Telecentricity [mrad] [mrad] pitch before optimization after optimization Difference Δ 90 -2.02573647 -0.69436835 1.33136812 110 -5.33776738 -2.5596211 2.77814629
140 -8.53805454 -4.13551583 4.40253872 180 -7.19794185 -2.82028254 4.37765931 250 -9.63064101 -3.91872157 5.71191944 500 -12.8822704 -5.2802258 7.60204457
In some embodiments, the evaluation of the lithographic parameters can be performed based on a sensitivity analysis, wherein e.g. the change in CD (in nm per %) can be plotted for each OZP, which is
exemplarily illustrated in
FIG. 14
, where the diattenuation sensitivities are plotted for a certain dipole setting. The legend given in the graph denotes the pitch (in nanometers) corresponding to each bar. In the example in FIG. 14,
it can be gathered e.g. that a 1% change in the OZP No. 4 for diattenuation may result in a CD variation of approximately ˜0.05 nm. If a complete spectrum of OZP's has been determined, each single
coefficient of this spectrum can be multiplied by the respective sensitivity value. If e.g. the spectrum shows a value of 4% for the OZP 4, and the sensitivity amounts to 0.05 (nm per %) for this OZP
4, a CD variation of 4%*0.05 (nm/%)=0.2 nm is obtained, etc.
Then the desired performance of the respective lithographic parameter can be calculated as a scalar product between a "sensitivity vector" including the sensitivities for all OZP's and a "coefficient
vector" describing the complete spectrum of OZP's. If, in addition to linear terms, also quadratic terms in the sensitivity coefficient relation are considered, this sensitivity vector becomes a
sensitivity matrix, and the calculation of this scalar product can be written as follows:
Δ CD = i S i apo , lin Z i + i S i dia , lin OZ i + i S i ret , lin OZ i + i , j S ij apo , quad Z i Z j + i , j S ij dia , quad OZ i OZ j + i , j S ij ret , quad OZ i OZ j ( 47 ) ##EQU00049##
As can be gathered from
FIG. 15
, a good correlation between the CD deviation obtained using the above sensitivity approach, on the one hand, and the result obtained for consideration of the whole Jones pupil in the simulation, on
the other hand, is achieved. As data input for the simulations, measured data sets (Jones pupils) of a plurality of immersion projection lenses having a numerical aperture of NA=1.35 have been
In most cases the consideration of the linear terms in equation (47) already gives a good correlation. In some cases, however, especially if the retardation is important, the inclusion of the
quadratic terms might be involved to improve the accuracy. Furthermore, the cross correlation between apodization and diattenuation or between diattenuation and retardation can also be taken into
An analogous calculation, as given above for the CD deviation, can be performed e.g. for overlay errors. Furthermore, other settings, such as an annular setting, or other mask patterns may be
Generally, using the above concept, any arbitrary Jones pupil can be described by an expansion into OZP's to obtain an OZP spectrum, followed by a multiplication of this OZP spectrum with a
sensitivity vector or matrix related to the impact on the lithographic parameters. In other words, the evaluation or assessment of the optical system is, in that approach, not made using threshold
values for certain OZP's, but based on the performance obtained in the relevant lithography parameters (CD deviation, overlay etc.) using the above sensitivity-based analysis.
It can be demonstrated that the sensitivity pattern of the Zernike expansion for the lens transmission (apodization), and that of the OZP expansion for retardation and diattenuation are quite similar
and obeys similar symmetries.
Furthermore, a tendency can be observed that the lower order OZP dominate the sensitivity spectra. This means that it can already be sufficient to control a very limited number of OZP (e.g. up to an
order of 20, up to an order of 15 or up to an order of 10) to control the vectorial imaging behaviour of a lithography lens.
Even if the disclosure has been described via certain embodiments numerous variations and alternative embodiments will be apparent to the man skilled in the art, for example by combination and/or
exchange of features of individual embodiments. Accordingly it will be appreciated by the man skilled in the art that such variations and alternative embodiments are also embraced by the present
disclosure and the scope of the disclosure is limited only in the sense of the accompanying claims and equivalents thereof.
Patent applications by Daniel Kraehmer, Essingen DE
Patent applications by Johannes Ruoff, Aalen DE
Patent applications by Michael Totzeck, Schwaebisch-Gmuend DE
Patent applications by Ralf Mueller, Aalen DE
Patent applications by Vladan Blahnik, Aalen DE
Patent applications by CARL ZEISS SMT AG
Patent applications in class Having judging means (e.g., accept/reject)
Patent applications in all subclasses Having judging means (e.g., accept/reject)
User Contributions:
Comment about this patent or add new information about this topic: | {"url":"http://www.faqs.org/patents/app/20090306921","timestamp":"2014-04-16T18:01:14Z","content_type":null,"content_length":"107240","record_id":"<urn:uuid:ebd5d2c0-4d79-431e-b890-5da76e90432a>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00498-ip-10-147-4-33.ec2.internal.warc.gz"} |
Ellicott, MD Geometry Tutor
Find an Ellicott, MD Geometry Tutor
...Grammar: I studied Latin in high school and college; this has allowed me to become very adept at understanding English grammar. Literature: While my favorite genre is Fantasy, I also tend to
read and reread many classics that are commonly assigned to English classes. I am very skilled at interpreting them and identifying the key points.
32 Subjects: including geometry, reading, algebra 2, calculus
With a combination of a Master's degree and 16 years of teaching experience, I am the tutor that would be most effective in building your child's educational capacity and ability. I have taught
elementary, middle and high school levels, as well as college level courses. With my experience and expe...
14 Subjects: including geometry, English, reading, biology
...I worked for over 12 years as a mechanical engineer in industry. I have also worked as a full time instructor of engineering design and drafting classes at the college level. I was a long time
user of MS Project in an industrial environment.
21 Subjects: including geometry, calculus, physics, algebra 2
...The key to understanding this class, is not just memorizing the formulas, but understanding what they mean and how they can be properly applied. I study mathematics. When I took this class it
was very proof intensive, meaning, most things done in class, I did a proof for it: from subspace, kern...
9 Subjects: including geometry, calculus, algebra 1, algebra 2
...As a tutor, my main job isn't to talk, but to listen: I think the real centerpiece is getting my students to explain the material in their own words. Everyone learns differently, and the
beauty of tutoring is that we can adapt our approach on-the-fly to address just what it is that one specific ...
18 Subjects: including geometry, calculus, writing, algebra 1
Related Ellicott, MD Tutors
Ellicott, MD Accounting Tutors
Ellicott, MD ACT Tutors
Ellicott, MD Algebra Tutors
Ellicott, MD Algebra 2 Tutors
Ellicott, MD Calculus Tutors
Ellicott, MD Geometry Tutors
Ellicott, MD Math Tutors
Ellicott, MD Prealgebra Tutors
Ellicott, MD Precalculus Tutors
Ellicott, MD SAT Tutors
Ellicott, MD SAT Math Tutors
Ellicott, MD Science Tutors
Ellicott, MD Statistics Tutors
Ellicott, MD Trigonometry Tutors | {"url":"http://www.purplemath.com/Ellicott_MD_Geometry_tutors.php","timestamp":"2014-04-19T17:13:23Z","content_type":null,"content_length":"24088","record_id":"<urn:uuid:8197f970-b291-4a7b-96ae-9a5ec0cd5be6>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00515-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Hochschild-Serre spectral sequence relative to an ideal containing the derived subalgebra
up vote 5 down vote favorite
Is the Hochschild-Serre spectral sequence $$H_\bullet(\mathfrak g/\mathfrak h,H_\bullet(\mathfrak h,k))\Rightarrow H_\bullet(\mathfrak g,k)$$ for an extension of Lie algebras $$0\to\mathfrak h\to\
mathfrak g\to\mathfrak g/\mathfrak h\to0$$ whose kernel contains the derived subalgebra $[\mathfrak g,\mathfrak g]$ (so that $\mathfrak g/\mathfrak h$ is abelian) special in any way? Does one have
extra information on its differentials?
I hope this has been treated in the literature —it seems like an easy case, somehow— but I don't seem to be able to find anything useful.
Usually it's the opposite extreme which is the easy case; namely, the case when the quotient is semisimple, since then the spectral sequence degenerates at the second page and you get a useful
factorisation. I have not seen this case treated, but then it never arose in anything I have done and did not look. – José Figueroa-O'Farrill Jun 10 '11 at 18:22
3 Well, abelian is like at the other extreme from semisimple, so one can hope that the complexity is in the middle! :) – Mariano Suárez-Alvarez♦ Jun 10 '11 at 18:28
add comment
Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook.
Browse other questions tagged lie-algebras spectral-sequences reference-request or ask your own question. | {"url":"http://mathoverflow.net/questions/67454/the-hochschild-serre-spectral-sequence-relative-to-an-ideal-containing-the-deriv","timestamp":"2014-04-17T01:45:25Z","content_type":null,"content_length":"49244","record_id":"<urn:uuid:62f15e88-82e3-426a-9193-46b6e9b3519d>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00093-ip-10-147-4-33.ec2.internal.warc.gz"} |
Basic Proof Theory, volume 43 of Cambridge Tracts in Theoretical Computer Science
Results 1 - 10 of 23
, 2000
"... In this paper we present a strongly normalising cut-elimination procedure for classical logic. This procedure adapts Gentzen's standard cut-reductions, but is less restrictive than previous
strongly normalising cut-elimination procedures. In comparison, for example, with works by Dragalin and Danos ..."
Cited by 35 (4 self)
Add to MetaCart
In this paper we present a strongly normalising cut-elimination procedure for classical logic. This procedure adapts Gentzen's standard cut-reductions, but is less restrictive than previous strongly
normalising cut-elimination procedures. In comparison, for example, with works by Dragalin and Danos et al., our procedure requires no special annotations on formulae and allows cut-rules to pass
over other cut-rules. In order to adapt the notion of symmetric reducibility candidates for proving the strong normalisation property, we introduce a novel term assignment for sequent proofs of
classical logic and formalise cut-reductions as term rewriting rules.
, 2000
"... We obtain two results about the proof complexity of deep inference: 1) deep-inference proof systems are as powerful as Frege ones, even when both are extended with the Tseitin extension rule or
with the substitution rule; 2) there are analytic deep-inference proof systems that exhibit an exponential ..."
Cited by 31 (13 self)
Add to MetaCart
We obtain two results about the proof complexity of deep inference: 1) deep-inference proof systems are as powerful as Frege ones, even when both are extended with the Tseitin extension rule or with
the substitution rule; 2) there are analytic deep-inference proof systems that exhibit an exponential speed-up over analytic Gentzen proof systems that they polynomially simulate.
- Journal of Logic and Computation , 2005
"... Abstract. Hybrid logics are a principled generalization of both modal logics and description logics. It is well-known that various hybrid logics without binders are decidable, but decision
procedures are usually not based on tableau systems, a kind of formal proof procedure that lends itself towards ..."
Cited by 21 (4 self)
Add to MetaCart
Abstract. Hybrid logics are a principled generalization of both modal logics and description logics. It is well-known that various hybrid logics without binders are decidable, but decision procedures
are usually not based on tableau systems, a kind of formal proof procedure that lends itself towards computer implementation. In this paper we give four different tableaubased decision procedures for
a very expressive hybrid logic including the universal modality; three of the procedures are based on different tableau systems, and one procedure is based on a Gentzen system. The decision
procedures make use of so-called loop-checks which is a technique standardly used in connection with tableau systems for other logics, namely prefixed tableau systems for transitive modal logics, as
well as prefixed tableau systems for certain description logics. The loop-checks used in our four decision procedures are similar, but the four proof systems on which the procedures are based
constitute a spectrum of different systems: prefixed and internalized systems, tableau and Gentzen systems.
, 1998
"... The Curry-Howard isomorphism states an amazing correspondence between systems of formal logic as encountered in proof theory and computational calculi as found in type theory. For instance,
minimal propositional logic corresponds to simply typed λ-calculus, first-order logic corresponds to dependent ..."
Cited by 7 (0 self)
Add to MetaCart
The Curry-Howard isomorphism states an amazing correspondence between systems of formal logic as encountered in proof theory and computational calculi as found in type theory. For instance, minimal
propositional logic corresponds to simply typed λ-calculus, first-order logic corresponds to dependent types, second-order logic corresponds to polymorphic types, etc. The isomorphism has many
aspects, even at the syntactic level: formulas correspond to types, proofs correspond to terms, provability corresponds to inhabitation, proof normalization corresponds to term reduction, etc. But
there is much more to the isomorphism than this. For instance, it is an old idea—due to Brouwer, Kolmogorov, and Heyting, and later formalized by Kleene’s realizability interpretation—that a
constructive proof of an implication is a procedure that transforms proofs of the antecedent into proofs of the succedent; the Curry-Howard isomorphism gives syntactic representations of such
procedures. These notes give an introduction to parts of proof theory and related
"... Various problems in artificial intelligence can be solved by translating them into a quantified boolean formula (QBF) and evaluating the resulting encoding. In this approach, a QBF solver is
used as a black box in a rapid implementation of a more general reasoning system. Most of the current solvers ..."
Cited by 6 (1 self)
Add to MetaCart
Various problems in artificial intelligence can be solved by translating them into a quantified boolean formula (QBF) and evaluating the resulting encoding. In this approach, a QBF solver is used as
a black box in a rapid implementation of a more general reasoning system. Most of the current solvers for QBFs require formulas in prenex conjunctive normal form as input, which makes a further
translation necessary, since the encodings are usually not in a specific normal form. This additional step increases the number of variables in the formula or disrupts the formula’s structure.
Moreover, the most important part of this transformation, prenexing, is not deterministic. In this paper, we focus on an alternative way to process QBFs without these drawbacks and describe a solver,
qpro, which is able to handle arbitrary formulas. To this end, we extend algorithms for QBFs to the non-normal form case and compare qpro with the leading normal form provers on several problems from
the area of artificial intelligence. We prove properties of the algorithms generalized to non-clausal form by using a novel approach based on a sequent-style formulation of the calculus. 1.
- University of Edinburgh , 2002
"... this paper is an attempt at providing a systematic presentation of Quantified Modal Logics (with constant domains and rigid designators). We present a set of modular, uniform, normalizing, sound
and complete labelled sequent calculi for all QMLs whose frame properties can be expressed as a finite se ..."
Cited by 5 (3 self)
Add to MetaCart
this paper is an attempt at providing a systematic presentation of Quantified Modal Logics (with constant domains and rigid designators). We present a set of modular, uniform, normalizing, sound and
complete labelled sequent calculi for all QMLs whose frame properties can be expressed as a finite set of first-order sentences with equality. We first present CQK, a calculus for the logic QK, and
then we extend it to any such logic QL. Each calculus, called CQL, is modular (obtained by adding rules to CQK), uniform (each added rule is clearly related to a property of the frame), normalizing
(frame reasoning only happens at the top of the proof tree) and Kripke-sound and complete for QL. We improve on the existing literature on the subject (mainly, [21]) by extending the class of logics
for which such a presentation is given, and by giving a new proof of soundness and completeness.
, 2003
"... nctive normal form, is obtained in SKSg as 2 i# -------------------------------- [([aa],R,T),(a,R),(a,T),U] s;s -------------------------------- [(a,R),(a,T),(a,R),(a,T),U] c#;c#
-------------------------------- . Proving the structure Q by h resolution steps means finding a proof t . . ..."
Cited by 5 (2 self)
Add to MetaCart
nctive normal form, is obtained in SKSg as 2 i# -------------------------------- [([aa],R,T),(a,R),(a,T),U] s;s -------------------------------- [(a,R),(a,T),(a,R),(a,T),U] c#;c#
-------------------------------- . Proving the structure Q by h resolution steps means finding a proof t . . . [ t,P] r# ----- h-1 . . . r# --- , Where P is in disjunctive normal form. A refutation
by resolution is simply obtained by top-down flipping the derivation above. This means that both styles of resolution are directly supported by the calculus of structures. By flipping the derivation
above one introduces cuts (in correspondence to i# rules). Please notice that these cuts are finitary, since the atoms introduced by them are present in the conclusion as well (thanks to Kai Brnnler
for this observation, see the paper [BG] about finitary cuts). This is not so important anyway, because in a refutation one builds the derivation top-down, so the presence of cuts, finitary or
, 2000
"... The work reported in this thesis arises from the old idea, going back to the origins of constructive logic, that a proof is fundamentally a kind of program. If proofs can be ..."
Cited by 5 (2 self)
Add to MetaCart
The work reported in this thesis arises from the old idea, going back to the origins of constructive logic, that a proof is fundamentally a kind of program. If proofs can be
- RTA 2008 , 2008
"... Powerful proof techniques, such as logical relation arguments, have been developed for establishing the strong normalisation property of term-rewriting systems. The first author used such a
logical relation argument to establish strong normalising for a cut-elimination procedure in classical logic. ..."
Cited by 5 (3 self)
Add to MetaCart
Powerful proof techniques, such as logical relation arguments, have been developed for establishing the strong normalisation property of term-rewriting systems. The first author used such a logical
relation argument to establish strong normalising for a cut-elimination procedure in classical logic. He presented a rather complicated, but informal, proof establishing this property. The
difficulties in this proof arise from a quite subtle substitution operation. We have formalised this proof in the theorem prover Isabelle/HOL using the Nominal Datatype Package, closely following the
first authors PhD. In the process, we identified and resolved a gap in one central lemma and a number of smaller problems in others. We also needed to make one informal definition rigorous. We thus
show that the original proof is indeed a proof and that present automated proving technology is adequate for formalising such difficult proofs.
"... this paper we give a functional completeness result for a natural deduction formulation of hybridized S5 . Hybridized S5 is obtained by adding to ordinary S5 further expressive power in the form
of so-called satisfaction operators and a second sort of propositional symbols called nominals ..."
Cited by 3 (0 self)
Add to MetaCart
this paper we give a functional completeness result for a natural deduction formulation of hybridized S5 . Hybridized S5 is obtained by adding to ordinary S5 further expressive power in the form of
so-called satisfaction operators and a second sort of propositional symbols called nominals | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=218531","timestamp":"2014-04-16T07:22:56Z","content_type":null,"content_length":"37043","record_id":"<urn:uuid:347523d4-1b28-4d8e-9658-bd69a5eb0e5d>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00057-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cyclic deformation of bidisperse two-dimensional foams
In-plane deformation of foams was studied experimentally by subjecting bidisperse foams to cycles of traction and compression at a prescribed rate. Each foam contained bubbles of two sizes with given
area ratio and one of three initial arrangements: sorted perpendicular to the axis of deformation (iso-strain), sorted parallel to the axis of deformation (iso-stress), or randomly mixed. Image
analysis was used to measure the characteristics of the foams, including the number of edges separating small from large bubbles Nsl, the perimeter (surface energy), the distribution of the number of
sides of the bubbles, and the topological disorder \mu_2(N). Foams that were initially mixed were found to remain mixed after the deformation. The response of sorted foams, however, depended on the
initial geometry, including the area fraction of small bubbles and the total number of bubbles. For a given experiment we found that (i) the perimeter of a sorted foam varied little; (ii) each foam
tended towards a mixed state, measured through the saturation of Nsl; and (iii) the topological disorder \mu_2(N) increased up to an ‘‘equilibrium’’ value. The results of different experiments showed
that (i) the change in disorder, Delta \mu_2 (N); decreased with the area fraction of small bubbles under iso-strain, but was independent of it under iso-stress; and (ii) Delta \mu_2(N) increased
with Delta Nsl under iso-strain, but was again independent of it under iso-stress. We offer explanations for these effects in terms of elementary topological processes induced by the deformations
that occur at the bubble scale.
M. Fátima Vaz, S.J. Cox & P.I.C. Teixeira (2011): Cyclic deformation of bidisperse two-dimensional foams, Philosophical Magazine, 91:34, 4345-4356 | {"url":"http://cadair.aber.ac.uk/dspace/handle/2160/7782","timestamp":"2014-04-16T10:23:25Z","content_type":null,"content_length":"22751","record_id":"<urn:uuid:abf4a386-0ab3-4b0e-9dac-235e5584c47f>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00346-ip-10-147-4-33.ec2.internal.warc.gz"} |
A survey of
Topology Atlas Document # ppae-19
A survey of J-spaces
E. Michael
Proceedings of the Ninth Prague Topological Symposium (2001) pp. 191-193
This note is a survey of J-spaces.
Mathematics Subject Classification. 54D20 (54D30 54D45 54E45 54F65).
Keywords. $J$-spaces, covering properties.
Document formats
AtlasImage (for online previewing)
LaTeX 6.7 Kb
DVI 12.6 Kb
PostScript 78.4 Kb
gzipped PostScript 33.9 Kb
PDF 107.7 Kb
Reference list in BibTeX
Comments. This contribution is excerpted from a published article. Reprinted from Topology and its Applications, Volume 102, number 3, E. Michael, J-Spaces, pp. 315-339, Copyright (2000), with
permission from Elsevier Science.
Copyright © 2002 Charles University and Topology Atlas. Published April 2002. | {"url":"http://www.emis.de/proceedings/TopoSym2001/19.htm","timestamp":"2014-04-19T04:32:49Z","content_type":null,"content_length":"1901","record_id":"<urn:uuid:a42f1da3-e851-4a84-aa07-5d58c77a849e>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00360-ip-10-147-4-33.ec2.internal.warc.gz"} |
Flip Five - Competition Problem
I just got back from a 3-hour competitive programming competition. There were 7 problems, my team mate solved 1, and I solved 5. That took 2 hours - we had an hour left to solve the first problem,
the problem we had skipped. We couldn't solve it - only first place beginners and the first three places of advanced division solved all seven problems. (We got 7th place)
I still have no idea how to even go about the problem at all - I know from past experience that the host of the competition likes problems that require dynamic programming - that is, a naive approach
will exceed the time limit whereas the intended simpler solution will work. The time limit for this problem was 3 seconds, so brute force was not an option.
You have a 3x3 grid of cells that are each initially either black or white, and you have to output the number of the least flips it takes to make all the cells white, where flipping one cells causes
the adjacent (but not diagonal) cells to flip with it.
I figured out that the order didn't matter, and it didn't matter if you went from given to all white or all white to given, you just had to output the least number of flips for the solution. There
was one requirement I could not understand, and it was that we could not rotate the grid - as far as we could tell it would make no difference, and they wouldn't be able to tell if we did that in our
code anyway.
There were two given sample input and outputs:
Output 1 (just flip the left center tile)
Output 3 (flip the top center, left bottom, and bottom center)
We did actually spend our whole remaining hour working on this one, trying to understand the mechanics of the puzzles - we had 9 ripped square papers with scribbles on one side and nothing on the
other that we were using to play with. We figured out some techniques but nothing got us close to solving the puzzle in programming.
As for the title "Flip Five", I am sure it is relevant but I don't know how - I think, but have not made sure, that the most number of moves a puzzle will ever take is five (at least we could not
find a puzzle that took 6 moves or more).
I'm still stumped on this one, and my coach and the other teams' coaches were stumped too. Several teams did solve it though, so it has a solution. (One team in the advanced division even solved all
7 problems during the first hour of the competition).
I want to know how to solve this puzzle the correct way, could I get some help?
Just an idea.
Begin with the solution (all squares the same color.) Flip the top left square. Record the pattern generated and the square flipped. If this pattern in encountered again, it can be solved with one
flip of the recorded square.
Revert back to the solution. Flip the top middle square. Record the pattern generated and the square flipped. If this pattern is encountered again, it can be solved with one flip of the recorded
Revert back to the solution. Flip the top right square. Record the pattern generated and the square flipped. If this pattern is encountered again, it can be solved with one flip of the recorded
At this point, one may check against the patterns generated by rotating the grid so that each side is interpreted as the top one, but the requirements are that you not do that, so you need to repeat
the steps for all of the squares in the grid.
Once that's done, begin with the first pattern that required one flip to solve. Do as above, but instead of reverting back to the solution, revert back to the first pattern that required one flip to
solve. Patterns generated that require one flip to solve (we already know what all those are right?) or result in a solution should be ignored. All patterns so recorded will require two flips to
Repeat for each pattern that required one flip to solve.
I think you can see how this goes. You eventually catalog every pattern that can occur and at that point you can stop further processing and just look up the pattern to see the smallest number of
flips a particular pattern requires for a solution.
You have to take advantage of the fact that the given input is solvable, because it appears that not all inputs are.
> The time limit for this problem was 3 seconds, so brute force was not an option.
3 seconds for checking 512 movements seems viable.
Interesting problem.
@L B would you mind sharing the other problems please? (I feel like doing some of them)
Based on the complexity of the 'harder' problems and what the competition host said, I am pretty sure there is a very simple solution. I do very much enjoy reading your solutions, though :)
The judges' solutions will be emailed to our coaches soon, so I'll post that when we get it.
The explanation provided for that particular problem is roughly what I coded, except I didn't store the click pattern required to achieve a particular pattern, only the minimum number of clicks to
get there. (And of course, it doesn't solve the problem, but that would be easy enough to do once the solutions are mapped out.)
I only just noticed your links now, thanks L B :)
Last edited on
Topic archived. No new replies allowed. | {"url":"http://www.cplusplus.com/forum/lounge/98939/","timestamp":"2014-04-17T01:32:38Z","content_type":null,"content_length":"32364","record_id":"<urn:uuid:c92bd238-73b3-4fa8-810c-c5dce9a2b57b>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00497-ip-10-147-4-33.ec2.internal.warc.gz"} |
Interpolation and approximation
Quick description
Interpolation is the task of finding a function from a certain class that matches given data , . That is,
Typically, is taken from the set of polynomials of degree .
Approximation is the task, given a function , where is a suitable space, to find a function taken from a certain class that is in some sense close to . Often we consider, for example, polynomial
interpolation where is a bounded subset of and .
Different measures of closeness can be used for determining how close functions are. The most common are:
In polynomial approximation is required to be a polynomial of degree for some fixed . Often approximations can be found by interpolation: the data used is simply , for suitably chosen points .
Example 1
Polynomial interpolation in an interval with data , can be done with polynomials of degree provided that and all 's are distinct. The interpolant is unique (amongst all polynomials of degree )
provided in addition, . There are a number of different ways of computing the polynomial interpolant for , including solving linear systems with Van der Monde matrices, Lagrange interpolation
polynomials, and Newton divided differences.
If the data for polynomial interpolation comes from a smooth function , then there is a commonly used formula for the error in the interpolant:
for some between and .
Login or register to post comments
Recent comments | {"url":"http://www.tricki.org/article/Interpolation_and_approximation","timestamp":"2014-04-20T18:24:04Z","content_type":null,"content_length":"25141","record_id":"<urn:uuid:1abfe1af-a3cc-4efd-be25-b7b4a6f835db>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00296-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Can you help me find the critical points of this equation? h(x)=x^2lnx^2
• one year ago
• one year ago
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
i think h(x) has only one asymptote on x=0 and has 1 critical point at x=e^-1/2
Best Response
You've already chosen the best response.
actually, i think there are two critical points, one at x=e^1/2 and another at x=-e^1/2
Best Response
You've already chosen the best response.
First solve for h'(x) \[h\prime(x)=x^{2}(\frac{ 2x }{ x^{2}})+2xln(x^{2})\] \[h \prime(x) =2x(1+\ln(x ^{2}))\] Equating h'(x)=0 will give you the first critical point, which is x=0. We can find
the other two by equating 1+ln(x^2)=0 \[-1=\ln (x ^{2})\] \[e ^{-1}=x^2\] \[\frac{ 1 }{e }=x^2\] \[x= \pm \sqrt{\frac{ 1 }{ e }}\]
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50c8c0dde4b0b766106e65e6","timestamp":"2014-04-18T00:23:31Z","content_type":null,"content_length":"35105","record_id":"<urn:uuid:a1bbc678-423c-4210-bf8e-072f15c17c4d>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00432-ip-10-147-4-33.ec2.internal.warc.gz"} |
1.5.4 Traveling Salesman Problem
INPUT OUTPUT
Input Description: A weighted graph G.
Problem: Find the cycle of minimum cost visiting all of the vertices of G exactly once.
Excerpt from The Algorithm Design Manual: The traveling salesman problem is the most notorious NP-complete problem. This is a function of its general usefulness, and because it is easy to explain to
the public at large. Imagine a traveling salesman who has to visit each of a given set of cities by car.
Although the problem arises in transportation applications, its most important applications arise in optimizing the tool paths for manufacturing equipment. For example, consider a robot arm assigned
to solder all the connections on a printed circuit board. The shortest tour that visits each solder point exactly once defines the most efficient path for the robot. A similar application arises in
minimizing the amount of time taken by a graphics plotter to draw a given figure.
Recommended Books
by G. Gutin and A. Punnen by E.L. Lawler (Editor) and A. H. Rinnooy-Kan
Related Links
Library for testing TSPs
Related Problems
This page last modified on 2008-07-10 . www.algorist.com | {"url":"http://www.cs.sunysb.edu/~algorith/files/traveling-salesman.shtml","timestamp":"2014-04-19T04:48:30Z","content_type":null,"content_length":"19461","record_id":"<urn:uuid:0513650c-5fe8-47d0-84c0-0a07d94e1f93>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00043-ip-10-147-4-33.ec2.internal.warc.gz"} |
MS Cycling
Proposed Cycle of classes for the MS Program
Sam Houston State University
Fall (odd year)
MATH 6333, Analysis I (first-year)
MATH 6335, Algebra I (first-year)
MATH 6379, Functions of a Complex Variable (first-year & second-year)
Topics Course #1 (second-year students) Fall 2011: MATH 6368
MATH 6398, Research course (for second-year students choosing the thesis option)
or STAT 5361, Theory & Application of Probability (second-year students, non-thesis)
Spring (even year)
MATH 6334, Analysis II (first-year)
MATH 6336, Algebra II (first-year)
MATH 6332, Topology (first-year & second-year)
Topics Course #2 (second-year) Spring 2012: Algebraic Geometry
MATH 6099, Thesis or MATH 6398, Research course (second-year)
First-year students are examined over the sequences in analysis and algebra and the course in topology.
Fall (even year)
MATH 6333, Analysis I (first-year)
MATH 6335, Algebra I (first-year)
MATH 6368, Numerical Linear Algebra (first-year & second-year)
Topics Course #3 (second-year) Fall 2012: MATH 6368
MATH 6398, Research course (for second-year students choosing the thesis option)
or STAT 5361, Theory & Application of Probability (second-year students, non-thesis)
Spring (odd year)
MATH 6334, Analysis II (first-year)
MATH 6336, Algebra II (first-year)
MATH 5397, Discrete Mathematics (first-year & second-year)
Topics Course #4 (second-year) Spring 2013: Foundations of Applied Mathematics
MATH 6099, Thesis or MATH 6398, Research course (second-year)
First-year students are examined over the sequences in analysis and algebra and the course in discrete mathematics.
Return to MS-Math program page | {"url":"http://www.shsu.edu/~kws006/Professional/MS_Cycling.html","timestamp":"2014-04-17T10:11:57Z","content_type":null,"content_length":"9750","record_id":"<urn:uuid:8614b824-ecfe-4757-9db3-60ec8ea50140>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00449-ip-10-147-4-33.ec2.internal.warc.gz"} |
CRpuzzles Logic Problem Solution - Pete Vs. the Puzzles
Logic Puzzle # 11 Logic Problems Help
Logic Problem Solution:
Pete Vs. the Puzzles
From the introduction, Pete spent a total of two hours working on the puzzles. By clue 2, Pete spent 60 mins. total at the 2nd-4th web sites and therefore 60 mins. between the 1st and 5th web sites;
also by clue 2, no two puzzles took the same amount of time to solve. By clue 6, the longest amount of time Pete spent on any single puzzle was 45 mins. In clue 3, three of the web sites are listed
in order: the one at which Pete spent 10 mins., the one where he did the Word Math, and www.top-flite-puzzles.com--so the 10 mins. site was the 1st, 2nd, or 3rd Pete visited. If it had been 1st, by
clue 2, the last site visited would have been for 50 mins.--more than the 45 mins. maximum (6). If Pete were at the 3rd web site for the 10 mins., the Word Math site and www.top-flite-puzzles.com
would then be the 4th and 5th visited (3). If the Word Math were at www.puzzazzles.com, by clue 5, it would have taken 25 mins. to complete. However, by clue 2, since the 2nd-4th sites took an hour
total, the 2nd site would also have required 25 mins. for Pete to solve the puzzle tried--a conflict with the clue 2 fact that no two sites took the same time. By clue 5, the www.puzzazzles.com
puzzle couldn't have taken the 10 mins. So, the Pete would have visited www.puzzazzles.com 2nd. Pete then couldn't have been at the 1st site for the maximum 45 mins. or he would have been at
www.puzzazzles.com for 60 (5). Pete couldn't have been at www.puzzazzles.com for the 45 mins.--if he had, he would have been at the 1st site 30 mins., sites 2-4 60 mins. (2), and the 5th site then
also 30 mins., contradicting clue 2. Pete couldn't have been at the Word Math site for 45 mins. or he would have been at www.puzzazzles.com for 5 mins. (2)--impossible (5). So, Pete would have been
at the 5th web site 45 mins. and at the 1st 15 mins. (the 1st and 5th sites totalled 60 mins. by clue 2). He would have then spent 30 mins. on the Crossword (1) and at www.puzzazzles.com (5), which
would have to be the same site--no (7). Therefore, there is no way for Pete to have spent the 10 mins. at the 3rd web site. He was there 2nd, solved the Word math 3rd, and visited web site
www.top-flite-puzzles.com 4th. www.puzzazzles.com can't be the 1st or 2nd web site visited (5). If it were 3rd, then it would have taken Pete 25 mins. to do the Word Math there (5). But, by clue 2,
it also would have taken Pete 25 mins. at the 4th site--by clue 2, a conflict. So, Pete did the www.puzzazzles.com puzzle 5th. That puzzle wasn't the Crossword (7). The Crossword also wasn't the 1st
puzzle done (1). If the Crossword had been the 2nd and had taken 10 mins., then the 1st puzzle would have taken 5 mins. (1) and the 5th one 55 mins., contradicting clue 6. So, the Crossword was
solved at the 4th web site, www.top-flite-puzzles.com. The site at which Pete spent the 45 mins. isn't the 1st (1). It can't be the Word Math site, or the Crossword would have taken 5 mins. (2),
impossible (1, 6). It also isn't the Crossword site, or the www.puzzazzles.com solution would have taken longer (5). The solution of the www.puzzazzles.com work took 45 mins. Then the Crossword took
30 mins. (5), the Word Math 20 (2), and the 1st one 15 (1). By clue 4, the 1st site Pete visited was www.pro-puzzlers.com, and the Cryptogram was at the 2nd. The Cryptogram was at
www.mrs-puzzlemaker.com and the Word Math at www.puzzle-maniac.com (4). Pete solved the Word Search 1st and the Acrostic took the 45 mins. (6). Pete thus took the following times to solve the five
puzzles in order
• www.pro-puzzlers.com, Word Search, 15 mins.
• www.mrs-puzzlemaker.com, Cryptogram, 10 mins.
• www.puzzle-maniac.com, Word Math, 20 mins.
• www.top-flite-puzzles.com, Crossword, 30 mins.
• www.puzzazzles.com, Acrostic, 45 mins.
CRpuzzles.com. Copyright © 2000-2007 by Calvin J. Hamilton & Randall L. Whipkey. All rights reserved. Privacy Statement. | {"url":"http://www.crpuzzles.com/logic/logic0011s.html","timestamp":"2014-04-18T00:13:55Z","content_type":null,"content_length":"10010","record_id":"<urn:uuid:176cd6bf-c263-421d-ae3a-891137cd9d12>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00541-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sign test
In statistics, the sign test can be used to test the hypothesis that there is "no difference in medians" between the continuous distributions of two random variables X and Y, in the situation when we
can draw paired samples from X and Y. It is a non-parametric test which makes very few assumptions about the nature of the distributions under test - this means that it has very general applicability
but may lack the statistical power of other tests such as the paired-samples t-test or the Wilcoxon signed-rank test.^[citation needed]
Let p = Pr(X > Y), and then test the null hypothesis H[0]: p = 0.50. In other words, the null hypothesis states that given a random pair of measurements (x[i], y[i]), then x[i] and y[i] are equally
likely to be larger than the other.
To test the null hypothesis, independent pairs of sample data are collected from the populations {(x[1], y[1]), (x[2], y[2]), . . ., (x[n], y[n])}. Pairs are omitted for which there is no difference
so that there is a possibility of a reduced sample of m pairs.^[1]
Then let W be the number of pairs for which y[i] − x[i] > 0. Assuming that H[0] is true, then W follows a binomial distribution W ~ b(m, 0.5). The "W" is for Frank Wilcoxon who developed the test,
then later, the more powerful Wilcoxon signed-rank test.^[2]
Let Z[i] = Y[i] – X[i] for i = 1, ... , n.
1. The differences Z[i] are assumed to be independent.
2. Each Z[i] comes from the same continuous population.
3. The values of X[i] and Y[i] represent are ordered (at least the ordinal scale), so the comparisons "greater than", "less than", and "equal to" are meaningful.
Significance testing
Since the test statistic is expected to follow a binomial distribution, the standard binomial test is used to calculate significance. The normal approximation to the binomial distribution can be used
for large sample sizes, m>25.^[1]
The left-tail value is computed by Pr(W ≤ w), which is the p-value for the alternative H[1]: p < 0.50. This alternative means that the X measurements tend to be higher.
The right-tail value is computed by Pr(W ≥ w), which is the p-value for the alternative H[1]: p > 0.50. This alternative means that the Y measurements tend to be higher.
For a two-sided alternative H[1] the p-value is twice the smaller tail-value.
See also
• Wilcoxon signed-rank test - A more powerful variant of the sign test, but one which also assumes a symmetric distribution.
• Median test - An unpaired alternative to the sign test.
1. ^ ^a ^b Mendenhall, W.; Wackerly, D. D. and Scheaffer, R. L. (1989), "15: Nonparametric statistics", Mathematical statistics with applications (Fourth ed.), PWS-Kent, pp. 674–679, ISBN
2. ^ Karas, J. & Savage, I.R. (1967) Publications of Frank Wilcoxon (1892–1965). Biometrics 23(1): 1–10
• Gibbons, J.D. and Chakraborti, S. (1992). Nonparametric Statistical Inference. Marcel Dekker Inc., New York.
• Kitchens, L.J.(2003). Basic Statistics and Data Analysis. Duxbury.
• Conover, W. J. (1980). Practical Nonparametric Statistics, 2nd ed. Wiley, New York.
• Lehmann, E. L. (1975). Nonparametrics: Statistical Methods Based on Ranks. Holden and Day, San Francisco. | {"url":"http://blekko.com/wiki/Sign_test?source=672620ff","timestamp":"2014-04-18T17:01:08Z","content_type":null,"content_length":"12795","record_id":"<urn:uuid:b5d39e27-9262-4eb8-a597-f0c097c814ba>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00370-ip-10-147-4-33.ec2.internal.warc.gz"} |
e Web
Lecture #7
Today we covered basic examples of volumes (Sec 6.2) and work (Sec. 6.4). Here is the scan: wk3day7_wp.
The discussion on general area bounded by 2 curves on Tuesday paved the way to find the volume of a general class of object: solid of revolutions. We exploited the idea of Riemann sum again and
sliced an arbitrary object into thin pieces, like a piece of bread from a loaf.
We then interpreted volume as a sum of these pieces, each approximated by the cross-sectional area times the width \(\triangle x\).
The same idea can be applied to physical problems as well. Like what we have done before for velocity and displacement, we investigated how work done of moving an object could be understood as a sum
of the “pieces” too. Each piece is approximated by the formula \( W = F d \) and the infinite sum gives a natural definition of work when F may change with d (possibly linear or nonlinear way) as a
A typed version of the word problems for work done (the explanation of the last question is expanded) is here: wordprob3_lecture7_withsol.
Posted in Lecture | {"url":"http://blogs.ubc.ca/math101sec210portal/2013/01/24/lecture-7/","timestamp":"2014-04-19T14:50:07Z","content_type":null,"content_length":"21923","record_id":"<urn:uuid:5f29a634-ef15-46f9-b943-18232f97a7e7>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00265-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: October 1997 [00262]
[Date Index] [Thread Index] [Author Index]
Re: Mathematica 3.0.0 bug in LerchPhi function
• To: mathgroup at smc.vnet.net
• Subject: [mg9190] Re: [mg9065] Mathematica 3.0.0 bug in LerchPhi function
• From: koehler at REMOVE-THIS.math.uni-bonn.de (Kai Koehler)
• Date: Tue, 21 Oct 1997 02:03:03 -0400
• Organization: RHRZ - University of Bonn (Germany)
• Sender: owner-wri-mathgroup at wolfram.com
In article <61uhb4$8ca at smc.vnet.net>, David Withoff
<withoff at wolfram.com> wrote:
> > The actual problem is a major bug in the numerical calculation of the
> > LerchPhi function: Try e.g.
> >
> > z = 0.4; v = -0.5; Plot[NSum[z^n/(n + v), {n, 0, 20}] -
> > LerchPhi[z, 1, v], {v, -1, 1}]
> Although there is indeed a bug within these examples, the above
> characterization of the bug is not correct, or at least it is not
> complete. I would like to provide a (hopefully) useful description
> of this bug.
> There are at least two common definitions for LerchPhi[z, s, a]:
> Sum[z^n/(a+n)^s, {n,0,Infinity}]
> Sum[z^n/((a+n)^2)^(s/2), {n,0,Infinity}]
> The sum Sum[Exp[-k]/(1+k^3), {k,0,Infinity}], and the documentation,
> use the first definition.
> Numerical evaluation of LerchPhi[z, s, a] uses the second definition.
> The bug is that Mathematica uses the first definition in some places,
> and the second definition in other places.
> If you use the first definition to compute numerical approximations
> for the result from Sum[Exp[-k]/(1+k^3), {k,0,Infinity}], you will
> get a correct result:
> In[1]:= A = Sum[Exp[-k]/(1+k^3), {k,0,Infinity}];
> In[2]:= A /. LerchPhi[z_, s_, a_] :>
> NSum[z^k/(a + k)^s, {k, 0, Infinity}] //Chop
> Out[2]= 1.20111
> If you use the second definition you will get agreement with
> numerical evaluation of LerchPhi[z, s, a]:
> In[3]:= Block[{z = 0.4, v = -0.5},
> {NSum[z^n/((n + v)^2)^(1/2), {n, 0, 20}], LerchPhi[z, 1, v]}]
> Out[3]= {2.94299, 2.94299}
Thanks once more to Dave Withoff for the fast clarification of the
problem. Still there is a point which should be dealt with:
There is no such thing like a second "common" definition of the Lerch
Phi function.
Lerch in his article in Acta Mathematica XI (p. 19-24) gives the first
definition (already in the title of his paper). This is also the
definition which one finds in the usual textbooks and formula
collections like Whittaker-Watson or Gradshteyn-Ryzhik.
The second definition does not verify the functional relations of the
Lerch Phi function. Furthermore, it has a branching point of order two
in the variable a in contrast to the correct definition, which is
meromorphic in a and has no branching point at all. It is a very
strange idea to complicate matters by artificially creating a
branching point in a meromorphic function in a program like Mathematica
which has traditionally lots of problems with branching points and
Riemann surfaces.
I have been told by Dave Withoff that the Hurwitz zeta function is
evaluated in Mathematica 3.0 in the same wrong way, namely as
Sum[1/((k + a)^2)^(s/2), {k, 0, Infinity}].
I want to emphasize once more that Wolfram should provide free bug fixes
for serious problems like these. Also, Wolfram should officially inform
its customers about these errors, e.g. on their web page.
Best regards
Kai Koehler | {"url":"http://forums.wolfram.com/mathgroup/archive/1997/Oct/msg00262.html","timestamp":"2014-04-18T00:33:51Z","content_type":null,"content_length":"37291","record_id":"<urn:uuid:2e4667fc-8c64-4528-b351-203ad53a7fdc>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00108-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quadratic Equation Help
December 29th 2010, 09:29 PM #1
Dec 2010
Quadratic Equation Help
Hi, i'm very new to this forum and i am stuck on this problem. Perhaps i'm just having a brain freeze, but i'd like to know how to solve this question.Thanks
The height, h(t) metres, of a batted baseball as a function of the time, t seconds, since the ball was hit can be modelled by the function:
$h(t)=-2.1(t-2.4)^2 +13$
Question: How many seconds after it was hit did the ball hit the groud, to the nearest tenth of a second?
The answer key tells me 4.9 seconds, however, i am unable to arrive at that answer. My idea was to substitute h(t) with 0 to get the x intercept, but the numbers greatly confused me.
Any help? Thanks in advance
h(t) = 0 the ball hits the ground.
$\displaystyle 0=-2.1(t-2.4)^2+13\Rightarrow \frac{13}{2.1}=(t-2.4)^2\Rightarrow\sqrt{\frac{13}{2.1}}+2.4=t$
$\approx 4.888066$
gah. solved lol. thanks xD. i missed the square root step xP.
Last edited by damocles; December 29th 2010 at 10:18 PM. Reason: hi
December 29th 2010, 10:16 PM #2
MHF Contributor
Mar 2010
December 29th 2010, 10:17 PM #3
Dec 2010 | {"url":"http://mathhelpforum.com/algebra/167108-quadratic-equation-help.html","timestamp":"2014-04-21T07:52:44Z","content_type":null,"content_length":"34838","record_id":"<urn:uuid:07cd81a9-e18d-4980-a994-66e9a757f457>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00142-ip-10-147-4-33.ec2.internal.warc.gz"} |
Jumper Question
1. The problem statement, all variables and given/known data
A person reaches a maximum height of 64 cm when jumping straight up from a crouched position. During the jump itself, the person's body from the knees up rises a distance of around 41 cm. To keep the
calculations simple and yet get a reasonable result, assume that the entire body rises this much during the jump.
Part A: With what initial speed does the person leave the ground to reach a height of 64 cm?
Part B: In terms of this jumper's weight W , what force does the ground exert on him or her during the jump?
I figured out the first part, but I can't figure out the second part. :(
Also the answer to part B has to be a number? This is online homework and the answer is in the format:
2. Relevant equations
3. The attempt at a solution
The velocity for part A was 3.5
I tried to figure out the time by using v=d/t, and got t=.117 seconds which seems wrong.
I tried to figure out the a so I used vf-vo/t=29.91???
and using 29.91 as a tried to figure it out. But I'm just stuck. I have no idea. | {"url":"http://www.physicsforums.com/showthread.php?s=e7abd6ffc0b817c19ca9ea609095e034&p=3971046","timestamp":"2014-04-17T04:08:14Z","content_type":null,"content_length":"25881","record_id":"<urn:uuid:f14a8930-2666-4a59-8cf7-1303bdb62813>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00399-ip-10-147-4-33.ec2.internal.warc.gz"} |
Section: OpenSSL (3)
Updated: 2004-03-23
Index Return to Main Contents
RSA_public_encrypt, RSA_private_decrypt - RSA public key cryptography
#include <openssl/rsa.h>
int RSA_public_encrypt(int flen, unsigned char *from,
unsigned char *to, RSA *rsa, int padding);
int RSA_private_decrypt(int flen, unsigned char *from,
unsigned char *to, RSA *rsa, int padding);
encrypts the
bytes at
(usually a session key) using the public key
and stores the ciphertext in
must point to RSA_size(
) bytes of memory.
padding denotes one of the following modes:
flen must be less than RSA_size(rsa) - 11 for the PKCS #1 v1.5 based padding modes, less than RSA_size(rsa) - 41 for RSA_PKCS1_OAEP_PADDING and exactly RSA_size(rsa) for RSA_NO_PADDING. The random
number generator must be seeded prior to calling RSA_public_encrypt().
RSA_private_decrypt() decrypts the flen bytes at from using the private key rsa and stores the plaintext in to. to must point to a memory section large enough to hold the decrypted data (which is
smaller than RSA_size(rsa)). padding is the padding mode that was used to encrypt the data.
returns the size of the encrypted data (i.e., RSA_size(
returns the size of the recovered plaintext.
On error, -1 is returned; the error codes can be obtained by ERR_get_error(3).
SSL, PKCS #1 v2.0
argument was added in SSLeay 0.8. RSA_NO_PADDING is available since SSLeay 0.9.0, OAEP was added in OpenSSL 0.9.2b. | {"url":"http://www.thelinuxblog.com/linux-man-pages/3/RSA_private_decrypt","timestamp":"2014-04-17T15:31:36Z","content_type":null,"content_length":"24475","record_id":"<urn:uuid:26c5d031-091a-4a56-8cfa-b8ed07b42a0c>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00488-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about Metrizable spaces on Dan Ma's Topology Blog
All spaces under consideration are Hausdorff. First countable spaces are those spaces where there is a countable local base at every point in the space. This is quite a strong property. For example,
every first countable space that is also compact has a cap on its cardinality and the cap is the cardinality of the real line (the continuum). See The cardinality of compact first countable spaces, I
in this blog. In fact, if the compact and first countable space is uncountable, it has cardinality continuum (see The cardinality of compact first countable spaces, III). Any metric space (or
metrizable space) is first countable. In this post, we discuss the product of first countable spaces. In this regard, first countable spaces and metrizable spaces behave similarly. We show that the
product of countably many first countable spaces is first countable while the product of uncountably many first countable is not first countable. For more information on the product topology, see
The Product Space
Consider a collection of sets $A_\alpha$ where $\alpha \in S$. Let $W=\bigcup \limits_{\alpha \in S} A_\alpha$. The product $\prod \limits_{\alpha \in S} A_\alpha$ is the set of all functions $f:S \
mapsto W$ such that for each $\alpha \in S$, $f(\alpha) \in A_\alpha$. If the index set $S=\left\{1,2,\cdots,n\right\}$ is finite, the functions $f$ can be regarded as sequences $(f_1,f_2,\cdots,f_n)
$ where each $f_i \in A_i$. If the index set $S=\mathbb{N}$, we can think of elements $f$ of the product as the sequence $(f_1,f_2,\cdots)$ where each $f_i \in A_i$. In general we can regard $f \in \
prod \limits_{\alpha \in S} A_\alpha$ as functions $f:S \mapsto W$ or as sequences $f=(f_\alpha)_{\alpha \in S}$.
Consider the topological spaces $X_\alpha$ where $\alpha \in S$. Let $X=\prod \limits_{\alpha \in S} X_\alpha$ be the product as defined above. The product space of the spaces $X_\alpha$ is $X$ with
the topology defined in the following paragraph.
Let $\tau_\alpha$ be the topology of each space $X_\alpha$, $\alpha \in S$. Consider $Y=\prod \limits_{\alpha \in S} O_\alpha$ where for each $\alpha \in S$, $O_\alpha \in \tau_\alpha$ (i.e. $O_\
alpha$ is open in $X_\alpha$) and $O_\alpha=X_\alpha$ for all but finitely many $\alpha \in S$. The set of all such sets $Y$ is a base for a topology on the product $X=\prod \limits_{\alpha \in S} X_
\alpha$. This topology is called the product topology of the spaces $X_\alpha$, $\alpha \in S$.
To more effectively work with product spaces, we consider a couple of equivalent bases that we can define for the product topology. Let $\mathcal{B}_\alpha$ be a base for the space $X_\alpha$.
Consider $B=\prod \limits_{\alpha \in S} B_\alpha$ such that there is a finite set $F \subset S$ where $B_\alpha \in \mathcal{B}_\alpha$ for each $\alpha \in F$ and $B_\alpha=X_\alpha$ for all $\
alpha \in S-F$. The set of all such sets $B$ is an equivalent base for the product topology.
Another equivalent base is defined using the projection maps. For each $\alpha \in S$, consider the map $\pi_\alpha:\prod \limits_{\beta \in S} X_\beta \mapsto X_\alpha$ such that $\pi_\alpha(f)=f_\
alpha$ for each $f$ in the product. In words, the function $\pi_\alpha$ maps each point in the product space to its $\alpha^{th}$ coordinate. The mapping $\pi_\alpha$ is called the $\alpha^{th}$
projection map. For each set $U \subset X_\alpha$, $\pi_\alpha^{-1}(U)$ is the following set:
$\pi_\alpha^{-1}(U)=\left\{f \in \prod \limits_{\beta \in S} X_\beta: \pi_\alpha(f)=f_\alpha \in U\right\}$
Consider sets of the form $\bigcap \limits_{\alpha \in F} \pi_\alpha^{-1}(U_\alpha)$ where $F \subset S$ is finite and $U_\alpha$ is open in $X_\alpha$ for each $\alpha \in F$. The set of all such
sets is another equivalent base for the product topology. If we only require that each $U_\alpha \in \mathcal{B}_\alpha$, a predetermined base for the coordinate space $X_\alpha$, we also obtain an
equivalent base for the product topology.
Countable Product of First Countable Spaces
For $i=1,2,3,\cdots$, let $X_i$ be a first countable space. We show that $X=\prod \limits_{i=1}^{\infty}X_i$ is a first countable space.
Let $\mathbb{N}$ be the set of positive integers. For each $n \in \mathbb{N}$, let $[n]=\left\{1,2,\cdots,n\right\}$ and let $\mathbb{N}^{[n]}$ be the set of all functions $t:[n] \mapsto \mathbb{N}$.
For each $i$ and each $x \in X_i$, let $\mathcal{B}_x(i)=\left\{B_x(i,j): j \in \mathbb{N}\right\}$ be a countable local base at $x$.
Let $f \in X=\prod \limits_{i=1}^{\infty}X_i$. We wish to define a countable local base at $f$. For each $n \in \mathbb{N}$, define $W_n$ to be:
$W_n=\left\{\prod \limits_{i=1}^n B_{f(i)}(i,t(i)):t \in \mathbb{N}^{[n]}\right\}$
Let $W$ be the set of all subsets of the product space $X$ of the following form:
$\prod \limits_{j=1}^{\infty}V_j$ where there is some $n \in \mathbb{N}$ such that $\prod \limits_{j=1}^{n}V_j \in W_n$ and for all $j>n$, $V_j=X_j$.
Each $W_n$ is countable and $W$ is essentially the union of all the $W_n$. Thus $W$ is countable. We claim that $W$ is a local base at $f$. Let $O \subset X$ be an open set containing $f$. We can
assume that $O=\prod \limits_{i=1}^\infty O_i$ where there is some $n \in \mathbb{N}$ such that for each $i \le n$, $O_i$ is open in $X_i$ and for $i>n$, $O_i=X_i$.
For each $i \le n$, $f(i) \in O_i$. Choose some $B_{f(i)}(i,t(i))$ such that $f(i) \in B_{f(i)}(i,t(i)) \subset O_i$. Let $V=\prod \limits_{i=1}^{\infty} V_i$ such that $\prod \limits_{i=1}^{n} V_i=\
prod \limits_{i=1}^n B_{f(i)}(i,t(i))$ and $V_i=X_i$ for all $i>n$. Then $V \in W$ and $f \in V \subset O$. This completes the proof that $X=\prod \limits_{i=1}^{\infty}X_i$ is a first countable
Uncountable Product
Let $S$ be an uncountable index set. For $\alpha \in S$, let $X_\alpha$. We want to avoid the situation that all but countably many $X_\alpha$ are one-point space. So we assume each coordinate space
$X_\alpha$ has at least two points, say, $p_\alpha$ and $q_\alpha$ with $p_\alpha e q_\alpha$. We show that $X=\prod \limits_{\alpha \in S}X_\alpha$ is not first countable.
Let $f \in \prod \limits_{\alpha \in S}X_\alpha$. Let $U_1,U_2, \cdots$ be open subsets of the product space such that for each $i$, $f \in U_i$. We show that there is some open set $O$ such that $f
\in O$ and each $U_i subseteq O$. For each $i$, there is a basic open set $B_i=\bigcap \limits_{\alpha \in F_i} \pi_\alpha^{-1}(U_{\alpha,i})$ such that $f \in B_i \subset U_i$.
Let $F=F_1 \cup F_2 \cup \cdots$. Since $S$ is uncountable and $F$ is countable, choose $\gamma \in S-F$. Since $X_\gamma$ has at least two points $p_\gamma$ and $q_\gamma$, choose one of them that
is different from $f_\gamma$, say, $p_\gamma$. Choose two disjoint open subsets $M_1$ and $M_2$ of $X_\gamma$ such that $f_\gamma \in M_1$ and $p_\gamma \in M_2$ of $X_\gamma$. Let $O=\prod \limits_
{\alpha \in S}O_\alpha$ such that $O_\gamma=M_1$ and $O_\alpha=X_\alpha$ for all $\alpha e \gamma$. We have $f \in O$. For each $i$, there is $g_i \in B_i \subset U_i$ such that $g_i(\gamma)=p_\
gamma$. Thus each $g_i otin O$. Thus there is no countable local base at $f$. Thus any product space with uncountably many factors, each of which has at least two points, is never first countable.
1. Engelking, R. General Topology, Revised and Completed edition, 1989, Heldermann Verlag, Berlin.
2. Willard, S., General Topology, 1970, Addison-Wesley Publishing Company.
A note on metrization theorems for compact spaces
In a previous post (Metrization Theorems for Compact Spaces), three classic metrization theorems for compact spaces are discussed. The three theorems are: any Hausdorff compact space $X$ is
metrizable if any of the following holds:
1. $X$ has a countable network,
2. $X$ has a $G_\delta$ diagonal,
3. $X$ has a point countable base.
The metrization results for conditions 2 and 3 hold for countably compact spaces as well. See the following posts:
Countably Compact Spaces with G-delta Diagonals
Metrization Theorems for Compact Spaces
In this post, we discuss another metrization theorem for compact spaces. We show that a compact Hausdorff space $X$ is metrizable if and only if the function space $C_p(X)$ is separable.
Let’s discuss the function space. Let $Y$ by any Tychonoff space and let $\mathbb{R}$ be the set of all real numbers. Let $C(Y,\mathbb{R})$ be the set of all real-valued continuous functions defined
on $Y$. For any $A \subset Y$ and for any $V \subset \mathbb{R}$, define $[A,V]=\lbrace{f \in C(Y,\mathbb{R}): f(A) \subset V}\rbrace$. If we restrict $A$ to $\lbrace{x}\rbrace$ and restrict $V$ to
open sets, then the set of all $[A,V]$ is a subbase for a topology on $C(Y,\mathbb{R})$. This topology is called the pointwise convergence topology. The function space $C(Y,\mathbb{R})$ with this
topology is denoted by $C_p(Y)$.
It is a theorem that $C_p(Y)$ is separable if and only if $Y$ has a weaker topology that forms a separable metric space. The result on compact spaces is a corollary of this theorem.
Theorem. Let $Y$ be a Tychonoff space with $\tau$ being the topology. The following conditions are equivalent:
1. $C_p(Y)$ is separable.
2. There is a topology $\tau_1 \subset \tau$ such that $(Y, \tau_1)$ is a separable metric space.
$1 \Rightarrow 2$. Let $D \subset C_p(Y)$ be a countable dense subspace. Let $\mathcal{V}$ be the class of all bounded open intervals of $\mathbb{R}$ with rational endpoints. Consider $\mathcal{B}=\
lbrace{f^{-1}(V): f \in D, V \in \mathcal{V}}\rbrace$. Note that $\mathcal{B}$ is a subbase for a topology $\tau_1$ on $Y$. Since $\mathcal{B}$ is countable, the topology $\tau_1$ has a countable
base and is thus separable and metrizable.
$2 \Rightarrow 1$. Let $\tau_1 \subset \tau$ be a topology. Suppose that $\tau_1$ is generated by a countable base $\mathcal{U}$. As in $1 \Rightarrow 2$, let $\mathcal{V}$ be the class of all
bounded open intervals of $\mathbb{R}$ with rational endpoints. Let $\mathcal{N}$ be the class of all finite intersections of the sets in the following collection of sets.
$\displaystyle \lbrace{[A,V]: A \in \mathcal{U},V \in \mathcal{V}}\rbrace$
Note that $\mathcal{N}$ is countable. For each $W \in \mathcal{N}$, choose $f_W \in W$. We claim that $\lbrace{f_W: W \in \mathcal{N}}\rbrace$ is a countable dense set of $C_p(Y)$. To see this, let
$T=\bigcap_{i \le n} [A_i,V_i]$ be a basic open set in $C_p(Y)$ where $A_i=\lbrace{x_i}\rbrace$. Fix $f \in T$. For each $i$, choose $O_i \in \mathcal{U}$ such that $x_i \in O_i$ and $f(O_i) \subset
V_i$. Then $W=\bigcap_{i \le n} [O_i,V_i] \in \mathcal{N}$. Now, we have $f_W \in T$.
Corollary. Let $X$ be a compact Hausdorff space. Then the following conditions are equivalent:
1. $X$ is metrizable.
2. $C_p(X)$ is separable.
$1 \Rightarrow 2$. This follows from $2 \Rightarrow 1$ in the above theorem.
$2 \Rightarrow 1$. This follows from $1 \Rightarrow 2$ in the above theorem. Note that any compact Hausdorff space cannot have a strictly weaker (or coarser) Hausdorff topology. Thus if a compact
Hausdorff space has a weaker metrizable topology, it must be metrizable.
The Evaluation Map
The evaluation map is a useful tool for embedding a space $X$ into a product space. In this post we demonstrate that any Tychonoff space $X$ can be embedded into a cube $I^{\mathcal{K}}$ where $I$ is
the unit interval $[0,1]$ and $\mathcal{K}$ is some cardinal. Any regular space with a countable base (second-countable space) can also be embedded into the Hilbert cube $I^{\omega}$ (Urysohn’s
metrization theorem). The evaluation map also plays an important role in the theory of Cech-Stone compactification.
The Evaluation Map
Let $X$ be a space. Let $\displaystyle Y=\Pi_{\alpha \in A}Y_\alpha$ be a product space. For each $y \in Y$, we use the notation $y=\langle y_\alpha \rangle_{\alpha \in A}$ to denote a point in the
product space $Y$. Suppose we have a family of continuous functions $\mathcal{F}=\lbrace{f_\alpha:\alpha \in A}\rbrace$ where $f_\alpha:X \rightarrow Y_\alpha$ for each $\alpha$. Define a mapping
that maps each $x \in X$ to the point $\langle f_\alpha(x) \rangle_{\alpha \in A} \in Y$. This mapping is called the evaluation map of the family of continuous functions $\mathcal{F}=\lbrace{f_\
alpha:\alpha \in A}\rbrace$ and is denoted by $E_{\mathcal{F}}$.
The family of continuous functions $\mathcal{F}$ is said to separate points if for any two distinct points $x,y \in X$, there is a function $f \in \mathcal{F}$ such that $f(x) eq f(y)$. The family of
continuous functions $\mathcal{F}$ is said to separate points from closed sets if for each point $x \in X$ and for each closed set $C \subset X$ with $x otin C$, there is a function $f \in \mathcal
{F}$ such that $f(x) otin \overline{f(C)}$.
Theorem 1. Given an evaluation map $E_{\mathcal{F}}$ as defined above, the following conditions hold.
1. The mapping $E_{\mathcal{F}}$ is continuous.
2. If the family of continuous functions $\mathcal{F}=\lbrace{f_\alpha:\alpha \in A}\rbrace$ separates points, then$E_{\mathcal{F}}$ is a one-to-one map.
3. If the family of continuous functions $\mathcal{F}=\lbrace{f_\alpha:\alpha \in A}\rbrace$ separates points from closed sets, then $E_{\mathcal{F}}$ is a homeomorphism from $X$ into the product
space $\displaystyle Y=\Pi_{\alpha \in A}Y_\alpha$.
In this post, basic open sets in the product space $\displaystyle Y=\Pi_{\alpha \in A}Y_\alpha$ are of the form $\bigcap_{\alpha \in W} [\alpha,V_\alpha]$ where $W \subset A$ is finite, for each $\
alpha \in W$, $V_\alpha$ is an open set in $Y_\alpha$ and $[\alpha,V_\alpha]=\lbrace{y \in Y:y_\alpha \in V_\alpha}\rbrace$.
Proof of 1. We show that $E_{\mathcal{F}}$ is continuous at each $x \in X$. Let $x \in X$. Let $h=\langle f_\alpha(x) \rangle_{\alpha \in A}$ and let $h \in V \cap E_{\mathcal{F}}(X)$ where $V=\
bigcap_{\alpha \in W} [\alpha,V_\alpha]$ is a basic open set. Consider $U=\bigcap_{\alpha \in W} f_\alpha^{-1}(V_\alpha)$. It is easy to verify that $x \in U$ and $E_{\mathcal{F}}(U) \subset V\cap E_
Proof of 2. Let $x,y \in X$ be distinct points. There is $\alpha \in A$ such that $f_\alpha(x) eq f_\alpha(y)$. Clearly, $E_{\mathcal{F}}(x)= \langle f_\beta(x) \rangle_{\beta \in A} eq E_{\mathcal
{F}}(y)=\langle f_\beta(y) \rangle_{\beta \in A}$.
Proof of 3. Note that by condition 2 in this theorem, the map $E_{\mathcal{F}}$ is one-to-one. It suffices to show that $E_{\mathcal{F}}$ is an open map. Let $U \subset X$ be open. We show that $E_{\
mathcal{F}}(U)$ is open in $E_{\mathcal{F}}(X)$. To this end, let $\langle f_\alpha(x) \rangle_{\alpha \in A} \in E_{\mathcal{F}}(U)$. Then $x \in U$. Since $\mathcal{F}$ separates points from closed
sets, there is some $\beta$ such that $f_\beta(x) otin \overline{f_\beta(X-U)}$. Let $V_\beta=Y_\beta-\overline{f_\beta(X-U)}$. Then $\langle f_\alpha(x) \rangle_{\alpha \in A} \in [\beta,V_\beta] \
cap E_{\mathcal{F}}(X)=W_\beta$. We show that $W_\beta \subset E_{\mathcal{F}}(U)$. For each $\langle f_\alpha(y) \rangle_{\alpha \in A} \in W_\beta$, we have $f_\beta(y) otin \overline{f_\beta(X-U)}
$. If $y otin U$, then $f_\beta(y) \in f_\beta(X-U)$, a contradiction. So we have $y \in U$ and this means that $\langle f_\alpha(y) \rangle_{\alpha \in A} \in E_{\mathcal{F}}(U)$. It follows that
$W_\beta \subset E_{\mathcal{F}}(U)$.
Some Applications
A space $X$ is a Tychonoff space (also known as completely regular space) if for each $x \in X$ and for each closed set $C \subset X$ where $x otin C$, there is a continuous function $f:X \rightarrow
I$ such that $f(x)=0$ and $f(y)=1$ for all $y \in C$. The following is a corollary to theorem 1.
Corollary 1. Any Tychonoff space can be embedded in a cube $I^{\mathcal{K}}$.
Proof. Let $\mathcal{F}$ be the family of all continuous functions from the Tychonoff space $X$ into the unit interval $I$. By the definition of Tychonoff space, $\mathcal{F}$ separates points from
closed sets. By theorem 1, the evaluation map $E_{\mathcal{F}}$ is a homeomorphism from $X$ into the cube $I^{\mathcal{K}}$ where $\mathcal{K}=\lvert \mathcal{F} \lvert$.
We now turn our attention to regular second countable space. Having a countable base has many strong properties, one of which is that it can be embedded into the Hilbert Cube $I^{\omega}=I^{\aleph_0}
$. Before we prove this, observe that any regular space with a countable base is a regular Lindelof space. Furthermore, the property of having a countable base is hereditary. Thus a regular space
with a countable base is hereditarily Lindelof (hence perfectly normal). The Vendenisoff Theorem states that in a perfectly normal space, every closed set is a zero-set (i.e. every open set is a
cozero-set). So we make use of this theorem to obtain continuous functions that separate points from closed sets. There is a proof of The Vendenisoff Theorem in this blog. A set $Z \subset X$ is a
zero-set in the space $X$ if there is a continuous function $f:X \rightarrow I$ such that $f^{-1}(0)=Z$. A set $W \subset X$ is a cozero-set if $X-W$ is a zero-set. We are now ready to prove one part
of the Urysohn’s metrization theorem.
Urysohn’s metrization theorem. The following conditions are equivalent.
1. The space $X$ is a regular space with a countable base.
2. The space $X$ can be embedded into the Hilbert cube $I^{\aleph_0}$.
3. The space $X$ is a separable metric space.
We prove the direction $1 \Rightarrow 2$. Let $\lbrace{B_0,B_1,B_2,...}\rbrace$ be a countable base for the regular space $X$. Based on the preceding discussion, $X$ is perfectly normal. By the
Vendenisoff Theorem, for each $n$, $X-B_n$ is a zero-set. Thus for each $n$, there is a continuous function $f_n:X \rightarrow I$ such that $f_n^{-1}(0)=X-B_n$ and $f_n^{-1}((0,1])=B_n$. Let $\
mathcal{F}=\lbrace{f_0,f_1,f_2,...}\rbrace$. It is easy to verify that $\mathcal{F}$ separates points from closed sets. Thus the evaluation map $E_{\mathcal{F}}$ is a homeomorphism from $X$ into $I^
On Spaces That Can Never Be Dowker
A Dowker space is a normal space $X$ for which the product with the closed unit interval $[0,1]$ is not normal. In 1951, Dowker characterized Dowker’s spaces as those spaces that are normal but not
countably paracompact ([1]). Soon after, spaces that are normal but not countably paracompact became known as Dowker spaces. In 1971, M. E. Rudin ([2]) constructed a ZFC example of a Dowker’s space.
But this Dowker’s space is large. It has cardinality $(\omega_\omega)^\omega$ and is pathological in many ways. Thus the search for “nice” Dowker’s spaces continued. The Dowker’s spaces being sought
were those with additional properties such as having various cardinal functions (e.g. density, character and weight) countable. Many “nice” Dowker’s spaces had been constructed using various
additional set-theoretic assumptions. In 1996, Balogh constructed a first “small” Dowker’s space (cardinaltiy continuum) without additional set-theoretic axioms beyond ZFC ([4]). Rudin’s survey
article is an excellent reference for Dowker’s spaces ([3]).
In this note, I make several additional observations on Dowker’s spaces. In this previous post, I presented a proof of the Dowker’s theorem characterizing the normal spaces for which the product with
the unit interval is normal (see the statement of the Dowker’s theorem below). In another post, I showed that perfectly normal spaces can never be Dowker’s spaces. Based on the Dowker’s theorem,
several other classes of spaces are easily seen as not Dowker.
Dowker’s Theorem. For a normal space $X$, the following conditions are equivalent.
1. The space $X$ is countably paracompact.
2. The product $X \times Y$ is normal for any infinite compact metric space $Y$.
3. The product $X \times [0,1]$ is normal.
4. For each sequence of closed subsets $\lbrace{A_0,A_1,A_2,...}\rbrace$ of $X$ such that $A_0 \supset A_1 \supset A_2 \supset ...$ and $\bigcap_{n<\omega} A_n=\phi$, there is open sets $U_n \supset
A_n$ for each $n$ such that $\bigcap_{n<\omega} U_n=\phi$.
Observations. If $X$ is perfectly compact, then it can be shown that it is countably paracompact by showing that it satisfies condition 4 in the Dowker’s theorem (there is a proof in this blog). Thus
there are no perfectly normal Dowker’s spaces. There are no countably compact Dowker’s spaces since any countably compact space is countably paracompact. This can also be seen using condition 4
above. In a countably compact space, any decreasing nested sequence of closed sets has non-empty intersection and thus condition 4 is satisfied vacuously. Furthermore, all metric spaces, compact
spaces, regular Lindelof spaces cannot be Dowker since these spaces are paracomapct.
Normal Moore spaces are perfectly normal. Thus there are no Dowker’s spaces that are Moore spaces. Note that a space is perfectly normal if it is normal and if every closed set is $G_\delta$. We show
that in a Moore space, every closed set is $G_\delta$. Let $\lbrace{\mathcal{O}_n:n \in \omega}\rbrace$ be a development for the regular space $X$. Let $A$ be a closed set in $X$. We show that $A$ is
a $G_\delta-$ set in $X$. For each $n$, let $U_n=\lbrace{O \in \mathcal{O}_n:O \bigcap A eq \phi}\rbrace$. Obviously, $A \subset \bigcap_n U_n$. Let $x \in \bigcap_n U_n$. If $x otin A$, there is
some $n$ such that for each $O \in \mathcal{O}_n$ with $x \in O$, we have $O \subset X-A$. Since $x \in \bigcap_n U_n$, $x \in O$ for some $O \in \mathcal{O}_n$ and $O \cap A eq \phi$, a
contradiction. Thus we have $A=\bigcap_n U_n$.
There are other classes of spaces that can never be Dowker. We point these out without proof. For example, there are no linearly ordered Dowker’s spaces and there are no monotonically normal Dowker’s
spaces (see Rudin’s survey article [3]).
1. Dowker, C. H., On Countably Paracompact Spaces, Canad. J. Math. 3, (1951) 219-224.
2. Rudin, M. E., A normal space $X$ for which $X \times I$ is not normal, Fund. Math., 73 (1971), 179-186.
3. Rudin, M. E., Dowker Spaces, Handbook of Set-Theoretic Topology (K. Kunen and J. E. Vaughan, eds), Elsevier Science Publishers B. V., Amsterdam, (1984) 761-780.
4. Balogh, Z., A small Dowker space in ZFC, Proc. Amer. Math. Soc., 124 (1996), 2555-2560.
A Note About Countably Compact Spaces
This is a discussion on several additional conditions that would turn a countably compact space into a compact space. For example, a countably compact space having a $G_\delta-$ diagonal is compact
(proved in this post). Each of the following properties, if possessed by a countably compact space, would lead to compactness: (1) having a $G_\delta-$ diagonal, (2) being metrizable, (3) being a
Moore space, (4) being paracompact, and (5) being metacompact. All spaces are at least Hausdorff. We have the following theorem. Some relevant definitions and links to posts in this blog are given
below. For any terms that are not defined here, see Engelking ([1]).
Theorem. Let $X$ be a countably compact space. If $X$ possesses any one of the following conditions, then $X$ is compact.
1. Having a $G_\delta-$ diagonal.
2. Being a metrizable space.
3. Being a Moore space.
4. Being a paracompact space.
5. Being a metacompact space.
The proof of 1 has already been presented in another post in this blog. Since metrizable spaces are Moore spaces, between 2 and 3 we only need to prove 3. Between 4 and 5, we only need to prove 5
(since paracompact compact spaces are metacompact).
Proof of 3. A Moore space is a regular space that has a development (see this post for the definition). In this post, I showed that a space $X$ has a $G_\delta-$diagonal if and only it has a $G_\
delta-$diagonal sequence. It is easy to verify that the development for a Moore space is a $G_\delta-$diagonal sequence. Thus any Moore space has a $G_\delta-$diagonal and any countably compact Moore
space is compact (and metrizable). Saying in another way, in the class of Moore spaces, countably compactness is equivalent to compactness.
Proof of 5. A space $X$ is metacompact if every open cover of $X$ has a point-finite open refinement. Let $X$ be metacompact. Let $\mathcal{U}$ be an open cover of $X$. By the metacompactness, $\
mathcal{U}$ has a point-finite open refinement $\mathcal{O}$. We are done if we can show $\mathcal{O}$ has a finite subcover. This finite subcover is obtained through the following claims.
Claim 1. There is a set $M \subset X$ such that $\lvert M \cap O \lvert \thinspace \leq 1$ for each $O \in \mathcal{O}$ and such that $M$ is maximal. That is, by adding an additional point $x otin M$
, $\lvert (M \cup \lbrace{x}\rbrace) \cap O \lvert \thinspace \ge 2$ for some $O \in \mathcal{O}$.
Such a set can be obtained by using the Zorn’s Lemma.
Claim 2. Let $\mathcal{W}=\lbrace{O \in \mathcal{O}:O \cap M eq \phi}\rbrace$. We claim that $\mathcal{W}$ is an open cover of $X$.
To see this, let $x \in X$. If $x \in M$, then $x \in O$ for some $O \in \mathcal{W}$. If $x otin M$, then by the maximality of $M$, $M \cup \lbrace{x}\rbrace$ intersects with some $O \in \mathcal{O}
$ with at least 2 points. This means that $x$ and at least one point of $M$ are in $O$. Then $O \in \mathcal{W}$.
Since each open set in $\mathcal{W}$ contains at most one point of $M$, $M$ is a closed and discrete set in $X$. By the countably compactness of $X$, $M$ must be finite. Since each point of $M$ is in
at most finitely many open sets in $\mathcal{O}$, $\mathcal{W}$ is finite. Thus $\mathcal{W}$ is a finite subcover of $\mathcal{O}$.
1. Engelking, R., General Topology, Revised and Completed Edition, 1989, Heldermann Verlag, Berlin.
Countably Compact Spaces with G-delta Diagonals
It is a classic result in general topology that any compact space with a $G_\delta-$diagonal is metrizable ([3]). This theorem also holds for countably compact spaces (due to Chaber in [2]). The goal
of this post is to present a proof of this theorem. We prove that if $X$ is countably compact and has a $G_\delta-$diagonal, then $X$ is compact and thus metrizable. All spaces are at least
Hausdorff. This post has a discussion on the theorem on compact spaces with $G_\delta-$diagonal. This post has a discussion on some metrizaton theorems for compact spaces.
If $\mathcal{T}$ is a collection of subsets of a space $X$, then for each $x \in X$, define $st(x,\mathcal{T})=\bigcup\lbrace{T \in \mathcal{T}:x \in T}\rbrace$. A sequence of open covers $\lbrace{\
mathcal{T}_n:n \in \omega}\rbrace$ of the space $X$ is a $G_\delta-$diagonal sequence for $X$ if for each $x \in X$, we have $\lbrace{x}\rbrace=\bigcap_{n<\omega} st(x,\mathcal{T}_n)$. We use the
following lemma (due to Ceder, [1]). This lemma was proved in this previous post.
Lemma. The space $X$ has a $G_\delta-$diagonal if and only if it has a $G_\delta-$diagonal sequence.
Theorem. Let $X$ be a countably compact space that has a $G_\delta-$diagonal. Then $X$ is compact.
Proof. Let $X$ be a countably compact space. Let $\lbrace{\mathcal{T}_n:n \in \omega}\rbrace$ be a $G_\delta-$diagonal sequence for $X$. If $X$ is Lindelof, then we are done. Suppose we have an open
cover $\mathcal{V}$ of $X$ that has no countable subcover. From this open cover $\mathcal{V}$, we derive a contradiction.
We inductively, for each $\alpha < \omega_1$, choose a point $x_\alpha \in X$ and an integer $m(\alpha) \in \omega$ with the following properties:
For each $\alpha < \omega_1$,
1. $x_\alpha \in X-\bigcup\lbrace{st(x_\beta,\mathcal{T}_{m(\beta)}): \beta < \alpha}\rbrace$, and
2. the open cover $\mathcal{V}$ does not have a countable subcollection that covers $X-\bigcup_{\beta \leq \alpha} st(x_\beta,\mathcal{T}_{m(\beta)})$.
To start off, choose $x_0 \in X$. There is an integer $m(0) \in \omega$ such that no countable subcollection of $\mathcal{V}$ covers $X-st(x_0,\mathcal{T}_{m(0)})$. Suppose this integer $m(0)$ does
not exist. Then for each $n \in \omega$, we have a countable $\mathcal{V}_n \subset \mathcal{V}$ such that $\mathcal{V}_n$ covers $X-st(x_0,\mathcal{T}_n)$. Then $\bigcup_{n<\omega} \mathcal{V}_n$
would be a countable subcollection of $\mathcal{V}$ that covers $X-\lbrace{x_0}\rbrace$. This would mean that $\mathcal{V}$ has a countable subcover of $X$.
Suppose that $\lbrace{x_\beta:\beta<\alpha}\rbrace$ and $\lbrace{m(\beta):\beta<\alpha}\rbrace$ have been chosen such that conditions (1) and (2) are satisfied for each $\beta<\alpha$. We have the
following claim. Proving this claim allows us to choose $x_\alpha$ and $m(\alpha)$.
Claim. No countable subcollection of $\mathcal{V}$ covers $X-\bigcup_{\beta<\alpha} st(x_\beta,\mathcal{T}_{m(\beta)})$.
Suppose we do have a countable $\mathcal{W} \subset \mathcal{V}$ such that $\mathcal{W}$ covers $X-\bigcup_{\beta<\alpha} st(x_\beta,\mathcal{T}_{m(\beta)})$. Then $\mathcal{S}=\lbrace{st(x_\beta,\
mathcal{T}_m(\beta)):\beta < \alpha}\rbrace \cup \mathcal{W}$ is a countable open cover of $X$ and thus has a finite subcover $\mathcal{F}$. Let $\delta$ be the largest ordinal $<\alpha$ such that
$st(x_\delta,\mathcal{T}_m(\delta))$ is in this finite subcover $\mathcal{F}$. Then $\mathcal{W}$ is a counntable subcollection of $\mathcal{V}$ that covers $X-\bigcup_{\beta \leq \delta} st(x_\beta,
\mathcal{T}_{m(\beta)})$. This violates condition (2) above for the ordinal $\delta$. This proves the claim.
Now, pick $x_\alpha \in X-\bigcup\lbrace{st(x_\beta,\mathcal{T}_{m(\beta)}): \beta < \alpha}\rbrace$. There must be some integer $m(\alpha) \in \omega$ such that conditon (2) above is satisfied for $
\alpha$. If not, for each $n \in \omega$, there is some countable $\mathcal{V}_n \subset \mathcal{V}$ such that $\mathcal{V}_n$ covers $X-\bigcup_{\beta \leq \alpha} st(x_\beta,\mathcal{T}_n)$. Then
$\bigcup_{n<\omega} \mathcal{V}_n$ would be a countable subcollection of $\mathcal{V}$ that covers $X-\biggl(\bigcup_{\beta<\alpha} st(x_\beta,\mathcal{T}_{m(\beta)}) \biggr) \bigcup \lbrace{x_\
alpha}\rbrace$. This would mean that $\mathcal{V}$ has a countable subcover of $X-\bigcup_{\beta<\alpha} st(x_\beta,\mathcal{T}_{m(\beta)})$. This violates the above claim. Now the induction process
is completed.
To conclude the proof of the theorem, note that there is some $n \in \omega$ and there is some uncountable $D \subset \omega_1$ such that for each $\alpha \in D$, $n=m(\alpha)$. Then $Y=\lbrace{x_\
alpha:\alpha \in D}\rbrace$ is an uncountable closed and discrete set in $X$. Note that each open set in $\mathcal{T}_n$ contains at most one point of $Y$. Thus $X$ must be Lindelof. With $X$ being
countably compact, $X$ is compact.
1. Ceder, J. G. Some generalizations of metric spaces, Pacific J. Math., 11 (1961), 105-125.
2. Chaber, Conditions which imply compactness in countably compact spaces, Bull. Acad. Pol. Sci. Ser. Math., 24 (1976), 993-998.
3. Sneider, V., Continuous images of Souslin and Borel sets: metrization theorems, Dokl. Acad. Nauk USSR, 50 (1945), 77-79.
Perfect Image of Separable Metric Spaces
In a previous post on countable network, it was shown that having a countable network is equivalent to being the continuous image of a separable metric space. Since there is an example of a
non-metrizable space with countable netowrk, the continuous image of a separable metric space needs not be a separable metric space. However, the perfect image of a separable metrizable space is
separable metrizable. First some definitions. A continuous mapping $f:X \rightarrow Y$ is a closed mapping if $f(H)$ is closed in $Y$ for any closed set $H \subset X$. A continuous surjection $f:X \
rightarrow Y$ is a perfect mapping if $f$ is closed and $f^{-1}(y)$ is compact for each $y \in Y$.
Let $f:X \rightarrow Y$ be a perfect mapping where $X$ has a countable base $\mathcal{B}$. Assume $\mathcal{B}$ is closed under finite unions. Because $f$ is a closed mapping, $f(X-B)$ is closed and
$f(B)$ is open in $Y$ for each $B \in \mathcal{B}$. We show that $\mathcal{B}_f=\lbrace{f(B):B \in \mathcal{B}}\rbrace$ is a base for $Y$. Let $y \in Y$ and $U \subset Y$ be open with $y \in U$. For
each $x \in f^{-1}(y)$, choose $B_x \in \mathcal{B}$ such that $f(B_x) \subset U$. Since $f^{-1}(y)$ is compact, we can choose $B_{x(0)},...,B_{x(n)}$ that cover $f^{-1}(y)$. Let $B=B_{x(0)} \cup ...
\cup B_{x(n)}$, which is in $\mathcal{B}$. We have $y \in f(B) \subset U$. Thus the topology on $Y$ can be generated by $\mathcal{B}_f$.
Update (11/24/2009):
The proof in the above paragraph is faulty. Thanks to Dave Milovich for pointing this out. Here’s the corrected proof.
Let me first prove a lemma.
Lemma. Let $f: X \rightarrow Y$ be a closed mapping and let $V \subset X$ be open. Then $f_*(V)=\lbrace{y \in Y:f^{-1}(y) \subset V}\rbrace$ is open in $Y$. Furthermore, $f_*(V) \subset f(V)$.
Proof of Lemma. Since $f$ is a closed mapping, $f(X-V)$ is closed. We claim that $f(X-V)=Y-f_*(V)$. It is clear that $f(X-V) \subset Y-f_*(V)$. To show that $Y-f_*(V) \subset f(X-V)$, let $z \in
Y-f_*(V)$. Then $f^{-1}(z)$ cannot be a subset of $V$. Choose $x \in f^{-1}(z)-V$. Then we have $z=f(x) \in f(X-V)$. Thus $f(X-V)=Y-f_*(V)$ and $f_*(V)$ is open. It is straitforward to verify that
$f_*(V) \subset f(V)$.
Now I prove that the perfect image of a separable metric space is a separable metric space. Let $f:X \rightarrow Y$ be a perfect mapping where $X$ has a countable base $\mathcal{B}$. Assume $\mathcal
{B}$ is closed under finite unions. We show that $\mathcal{B}_f=\lbrace{f_*(B):B \in \mathcal{B}}\rbrace$ is a base for $Y$.
Let $y \in Y$ and $U \subset Y$ be open with $y \in U$. For each $x \in f^{-1}(y)$, choose $B_x \in \mathcal{B}$ such that $x \in B_x$ and $f(B_x) \subset U$. Since $f^{-1}(y)$ is compact, we can
choose $B_{x(0)},...,B_{x(n)}$ that cover $f^{-1}(y)$. Let $B=B_{x(0)} \cup ... \cup B_{x(n)}$, which is in $\mathcal{B}$. Since $f^{-1}(y) \subset B$, we have $y \in f_*(B)$. We also have $f_*(B) \
subset f(B) \subset U$. Thus the topology on $Y$ can be generated by the countable base $\mathcal{B}_f$.
A Discussion About The Michael Line
The original post was a basic discussion of the Michael line. It was written back in Oct 2009 and is now replaced by the following newer posts.
“Finite and Countable Products of the Michael Line”
“Bernstein Sets and the Michael Line” | {"url":"http://dantopology.wordpress.com/tag/metrizable-spaces/page/2/","timestamp":"2014-04-16T13:39:30Z","content_type":null,"content_length":"206792","record_id":"<urn:uuid:c1609fed-34ce-4f28-beb6-1fecb8df322e>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00048-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Continuous-time Signal X(t) Is Obtained At The ... | Chegg.com
Image text transcribed for accessibility: A continuous-time signal x(t) is obtained at the output of an ideal lowpass filter with cutoff frequency omega = 1,000 pi. If impulse-train sampling is
performed on x(t). Which of the following sampling periods would guarantee that x(t) can be recovered from its sampled version using an appropriate lowpass filter? T = 2 times 10-3
Electrical Engineering | {"url":"http://www.chegg.com/homework-help/questions-and-answers/continuous-time-signal-x-t-obtained-output-ideal-lowpass-filter-cutoff-frequency-omega-1-0-q2391371","timestamp":"2014-04-17T08:49:54Z","content_type":null,"content_length":"20770","record_id":"<urn:uuid:53371f8a-5f15-4e12-8d0d-9b4dc6a97e85>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00203-ip-10-147-4-33.ec2.internal.warc.gz"} |
Partially well ordered sets and partial ordinals. Fund
, 2003
"... We study the set of monomial ideals in a polynomial ring as an ordered set, with the ordering given by reverse inclusion. We give a short proof of the fact that every antichain of monomial
ideals is finite. Then we investigate ordinal invariants for the complexity of this ordered set. In particular ..."
Cited by 4 (1 self)
Add to MetaCart
We study the set of monomial ideals in a polynomial ring as an ordered set, with the ordering given by reverse inclusion. We give a short proof of the fact that every antichain of monomial ideals is
finite. Then we investigate ordinal invariants for the complexity of this ordered set. In particular, we give an interpretation of the height function in terms of the Hilbert-Samuel polynomial, and
we compute upper and lower bounds on the maximal order type.
"... Abstract. We show that the maximal linear extension theorem for well partial orders is equivalent over RCA0 to ATR0. Analogously, the maximal chain theorem for well partial orders is equivalent
to ATR0 over RCA0. 1. ..."
Cited by 1 (1 self)
Add to MetaCart
Abstract. We show that the maximal linear extension theorem for well partial orders is equivalent over RCA0 to ATR0. Analogously, the maximal chain theorem for well partial orders is equivalent to
ATR0 over RCA0. 1. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=5481407","timestamp":"2014-04-23T19:54:14Z","content_type":null,"content_length":"13995","record_id":"<urn:uuid:fda30a9f-272c-4f68-a256-5a0e6f7ea4f0>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00284-ip-10-147-4-33.ec2.internal.warc.gz"} |
Avondale Estates Precalculus Tutor
Find an Avondale Estates Precalculus Tutor
...I am certified to teach in both PA and GA. I have teaching experience at both the Middle School and High School level in both private and public schools. I have chosen to leave the classroom to
tutor from home so that I can be a stay at home mom.
10 Subjects: including precalculus, geometry, algebra 1, algebra 2
...I know the content of the test like the back of my hand, and I know lots of little tricks and strategies to help you bring your scores up. I ask my students to determine their "baseline" score
by taking a practice test, and to decide what their target score is (usually based on what college they...
25 Subjects: including precalculus, English, reading, calculus
...In addition to increasing my students' scores, I help students take ownership of their learning, with the goal of making them an independent and successful student. I look forward to helping
you or your child to achieve your/their goals and to bringing fun back to education. Email or call me to...
19 Subjects: including precalculus, physics, calculus, geometry
...I am able to help students in Math, Chemistry and French. I had my own tutoring Business. I helped students understand their homework and prepared them for the standardized tests ( GED, SAT,
CAT, ...). Beside the tutoring, I have 20 years of Research, Formulation and Quality Control experience....
9 Subjects: including precalculus, chemistry, French, SAT math
...In addition I have tutored many high school and college students in foundational math and physics skills. I can help you build confidence in these difficult subjects! I have a deep
understanding of principles of math, science, and engineering as well as a passion to help others reach the same understanding.
15 Subjects: including precalculus, calculus, physics, algebra 2 | {"url":"http://www.purplemath.com/Avondale_Estates_precalculus_tutors.php","timestamp":"2014-04-16T16:01:33Z","content_type":null,"content_length":"24395","record_id":"<urn:uuid:418f3a64-dcbd-4e2f-9223-3b02107283a8>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00214-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts from January 2012 on The Math Less Traveled
Monthly Archives: January 2012
[This is the ninth, and, I think, final in a series of posts on the decadic numbers (previous posts: A curiosity, An invitation to a funny number system, What does "close to" mean?, The decadic
metric, Infinite decadic numbers, More … Continue reading
Patrick Vennebush of Math Jokes 4 Mathy Folks recently wrote about the following procedure that yields surprising results. Choose some positive integer . Now, starting with consecutive integers,
raise each integer to the th power. Then take pairwise differences by … Continue reading
[This is the eighth in a series of posts on the decadic numbers (previous posts: A curiosity, An invitation to a funny number system, What does "close to" mean?, The decadic metric, Infinite decadic
numbers, More fun with infinite decadic … Continue reading
Posted in computation, convergence, infinity, iteration, modular arithmetic, number theory, programming Tagged decadic, Haskell, idempotent, streaming, u 2 Comments
I was sad to learn that Herbert Wilf died yesterday. Long-time readers of this blog may remember him as one of the discoverers of the Calkin-Wilf tree, which I wrote about in a ten-part series of
posts (1, 2, 3, … Continue reading
[This is the seventh in a series of posts on the decadic numbers (previous posts: A curiosity, An invitation to a funny number system, What does "close to" mean?, The decadic metric, Infinite decadic
numbers, More fun with infinite decadic … Continue reading | {"url":"http://mathlesstraveled.com/2012/01/","timestamp":"2014-04-20T10:49:34Z","content_type":null,"content_length":"58524","record_id":"<urn:uuid:e0d53199-5b7c-4091-97cb-8f22eeb6dfc8>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00319-ip-10-147-4-33.ec2.internal.warc.gz"} |
The attempt to load metrics for this article has failed.
The attempt to plot a graph for these metrics has failed.
Magnetic characterization of magnetic tunnel junction devices using circle transfer curves
Schematic showing some details about how the different measurement techniques are conducted in two-dimensional field space. A Stoner-Wohlfarth asteroid curve is also shown, with its origin shifted
from the field-space origin due to internal offset fields ( and ).
(a) Schematic of the MTJ multilayer showing the relevant physical quantities that dictate the junction behavior. (b) Simplified model of the free layer, where the magnetization is determined by the
two offset field components ( and ), the sample’s uniaxial anisotropy, and the external applied field . (c) Model of the pinned layer magnetization, which is assumed to be the vector sum of two
forces: the applied field and the exchange biasing field .
Examples of the raw data from a circle curve (taken at 130 G; dot-dashed line) and a remnant resistance curve (solid line), along with a theoretical remnant resistance curve (dashed line). The two
sets of data were taken during the same set of field sweeps.
Experimentally measured angle-dependent asteroid curve of a sample MTJ. Solid circles represent the data, while the solid line represents the best fit to the modified S-W model [Eq. (7)].
Experimental data (solid lines) and theoretical fitting results (dashed lines) for a set of three circle curves (taken at 40, 70, and 130 G) taken on a representative MTJ element. All fits are made
using a single set of junction parameters.
Comparison of the measured anisotropy angles obtained from remnant resistance curves (-axis), asteroid measurements (open diamonds), and circle curve fits (open circles).
Comparison between measured transfer curves taken at different field sweep angles (solid lines), with simulated transfer curves based on the extracted circle curve parameters (dashed lines). This
plot shows that the circle curve results alone can accurately predict junction behavior in arbitrary applied fields.
Plot showing the distribution of junction anisotropy angle and pinned layer direction as a function of wafer position for the twice-annealed sample. The arrows indicate the pinned layer orientation,
while the length and orientation of the solid lines indicate the strength and direction of the sample anisotropy, respectively.
(a) Plot of the positional distribution of junction anisotropy angle and magnitude for the sample with standard annealing. The numbers below each line show the anisotropy angle in degrees, while the
length of each line is proportional to the anisotropy strength. (b) Distribution of the extracted pinned layer direction for the same sample, as a function of wafer position.
A summary of the strengths and weaknesses of the different magnetic characterization methods.
Scitation: Magnetic characterization of magnetic tunnel junction devices using circle transfer curves | {"url":"http://scitation.aip.org/content/aip/journal/jap/103/3/10.1063/1.2837115","timestamp":"2014-04-16T05:18:23Z","content_type":null,"content_length":"84458","record_id":"<urn:uuid:4b60db67-73f5-4bf4-85ca-be4db0437b93>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00106-ip-10-147-4-33.ec2.internal.warc.gz"} |
Predator-Prey Relationships
Predator-Prey Relationships
Sharks appear to be a major threat to fish ... Fish & Sharks ... Where the predators are sharks and the prey are fish. ... – PowerPoint PPT presentation
Number of Views:47
Avg rating:3.0/5.0
Slides: 72
Added by: Anonymous
more less
Transcript and Presenter's Notes | {"url":"http://www.powershow.com/view1/23f5bb-ZDc1Z/Predator-Prey_Relationships_powerpoint_ppt_presentation","timestamp":"2014-04-16T07:59:58Z","content_type":null,"content_length":"124010","record_id":"<urn:uuid:908009e7-b658-4102-a296-3f187ed662d7>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00381-ip-10-147-4-33.ec2.internal.warc.gz"} |
1. "Vortex phase separation in mesoscopic superconductors" (with O. Iaroshenko, V. Rybalko, V. M. Vinokur), Scientific Reports: Nature Publishing Group 3 (2013).
2. "Collective dynamics in semidilute bacterial suspensions" (with S.D. Ryan, A. Sokolov, I.S. Aranson), New Journal of Physics 15 105021 (2013)
3. "Effective viscosity of puller-like microswimmers: a renormalization approach" (with S. Gluzman and D.A. Karpeev), J. Royal Society Interface 10:89 (2013)
4. "Minimax Critical Points in Ginzburg-Landau Problems with Semi-stiff Boundary Conditions: Existence and Bubbling" (with P. Mironescu, V. Rybalko, E. Sandier), accepted to Comm. in PDEs (2013).
5. "A kinetic model for semi-dilute bacterial suspensions" (with S.D. Ryan, B.M. Haines, D.A. Karpeev), SIAM MMS 11:4, pp. 1176-1196 (2013).
6. "Polyharmonic homogenization, rough polyharmonic splines and sparse super-localization" (with H. Owhadi, L. Zhang), accepted to ESAIM: Mathematical Modelling and Numerical Analysis. Special issue
7. "Collision of microswimmers in viscous fluid" (with M. Potomkin, V. Gyrya, I. Aranson), Physical Review E 87 053005 (2013).
8. "On the limit p to infinity of global minimizers for a p-Ginzburg-Landau-type energy" (with Y. Almog, L. Berlyand, D. Golovaty, I. Shafrir), accepted to Annales de l'Institut Henri Poincare (c)
Analyse Non Linéaire (2012).
9. "Homogenized description of multiple Ginzburg-Landau vortices pinned by small holes" (with V. Rybalko), special issue NHM 8:1 pp.115-130 (2013).
10. "Renormalized Ginzburg-Landau energy and location of near boundary vortices" (with V. Rybalko, N.K. Yip), NHM, 7:1 (online publication) (2012).
11. "Effective viscosity of bacterial suspensions: A three-dimensional PDE model with stochastic torque" (with B.M. Haines, I.S. Aranson, D.A. Karpeev), Comm. Pure Appl. Anal., v. 11(1), pp. 19-46
12. "Viscosity of bacterial suspensions: Hydrodynamic interactions and self-induced noise" (with S.D. Ryan, B.M. Haines, F. Ziebert, I.S. Aranson), Rapid Communication to Phys. Rev. E, 83 050904(R)
13. "Minimizers of the magnetic Ginzburg-Landau functional in simply connected domain with prescribed degree on the boundary" (with O. Misiats, V. Rybalko), Communications in Contemporary Mathematics
, v. 13:1, pp. 53-66 (2011).
14. "Effective shear viscosity and dynamics of suspensions of micro-swimmers from small to moderate concentrations" (with V. Gyrya, K. Lipnikov, I. Aranson), J. Math. Biology , v. 62 (5), pp. 707-740
15. "Global Energy Matching Method for Atomistic to Continuum Modeling of Self-Assembling Biopolymer Aggregates" (with Lei Zhang, M.V. Fedorov, H. Owhadi), Multiscale Model. Simul. v. 8, n. 5, pp.
1958-1980 (2010).
16. "Flux norm approach to finite dimensional homogenization approximations with non-separated scales and high contrast" (with H. Owhadi), Arch. Rat. Mech. Anal. , v. 198, n. 2, pp. 677-721 (2010).
17. "Near boundary vortices in a magnetic Ginzburg-Landau model: their locations via tight energy bounds" (with O. Misiats, V. Rybalko), J. Func. Analysis, v. 258, pp. 1728-1762 (2010).
18. "A three-dimensional model for the effective viscosity of bacterial suspensions" (with B.M. Haines, A. Sokolov, I.S. Aranson, D.A. Karpeev), Physical Review E, v. 80, pp.041922 (2009).
19. "A model of hydrodynamic interaction between swimming bacteria" (with V.T. Gyrya, I.S. Aranson, D.A. Karpeev), Bulletin of Mathematical Biology v. 72, pp. 148-183 (2010).
20. "Solutions with Vortices of a Semi-Stiff Boundary Value Problem for the Ginzburg-Landau Equation" (with V. Rybalko), J. European Math. Society v. 12 n. 6, pp.1497-1531 (2009).
21. "Global minimizers for a p-Ginzburg-Landau-type energy in R2" (with Y. Almog, D. Golovaty, and I. Shafrir), J. Func. Analysis, v.256, n.7, pp. 2268-2290 (2009).
22. "Fictitious Fluid Approach and Anomalous Blow-up of the Dissipation Rate in a 2D Model of Concentrated Suspensions" (with Y. Gorb and A. Novikov), Arch. Rat. Mech. Anal., v. 193, n. 3, pp.
585-622, (2009), DOI:10.1007/s00205-008-0152-2.
23. "Two-parameter homogenization for a Ginzburg-Landau problem in a perforated domain" (with P. Mironescu), Networks and Heterogenous Media, v.3, n.3, pp. 461-487 (2008).
24. "Effective Viscosity of Dilute Bacterial Suspensions: A Two-Dimensional Model" (with B. Haines, I. Aronson, and D. Karpeev), Physical Biology, 5:4, 046003 (9pp) (2008).
25. "The Homogenized Model of Small Oscillations of Complex Fluids" (with M. Berezhnyy and E. Khruslov), Networks and Heterogenous Media, v.3, n.4, pp. 831-862 (2008).
26. "A Network Model of Geometrically Constrained Deformations of Granular Materials" (with K.A. Ariyawansa and A. Panchenko), Networks and Heterogenous Media, v.3, pp. 125-148 (2008).
27. "Rise of Correlations of Transformation Strains in Random Polycrystals" (with O. Bruno and A. Novikov), SIAM J. Math. Analysis, v.40, n.4, pp. 15501584 (2008).
28. "Nonlinear Dielectric Response of Periodic Composite Materials" (with A. Kolpakov, A. Tagantsev, and A. Kanareikin), J. of Electroceramics, v.18, pp. 129-137 (2007).
29. "Strong and weak blow up of the viscous dissipation rates for concentrated suspensions" (with A. Panchenko), Journal of Fluid Mechanics, v. 578, pp. 1-34 (2007).
30. "Asymptotic analysis of an array of absolutely conducting holes" (with G. Cardone, Y. Gorb, and G. Panasenko), Networks and Heterogenous Media, 1:3, pp. 353-377 (2006).
31. "Nonexistence of Ginzburg-Landau minimizers with prescribed degree on the boundary of a doubly connected domain" (with D. Golovaty and V. Rybalko), C. R. Acad. Sci, Paris, 343/1, pp. 63-68
32. "Ginzburg-Landau minimizers with prescribed degrees. Capacity of the domain and emergence of vortices" (with P. Mironescu), J. Functional Analysis, v. 239, n. 1, pp. 76-99 (2006).
33. "Continuum limit for three-dimensional mass-spring networks and discrete Korn's inequality" (with M. Berezhnyy), Journal of Mechanics and Physics of Solids, 54:3, pp. 635-669 (2006).
34. "Methodology, theory and practice of sociological analysis of modern society" (with O. Kutsenko and V. Sherstobitov), in collection of papers A theoretical model to explain support of terrorist
actions, Kharkov State University, pp 63-69 (2005).
35. "Ginzburg-Landau model of a liquid crystal with random inclusions" (with E. Khruslov), Journal of Mathematical Physics, 46, pp. 095107:1-15 (2005).
36. "Network approximation for effective viscosity of concentrated suspensions with complex geometry" (with L. Borcea and A. Panchenko), SIAM Journal on Mathematical Analysis, 36:5, pp 1580-1628
37. "Asymptotics of the Effective Conductivity of Composites with Closely Spaced Inclusions of Optimal Shape" (with Y. Gorb), Quarterly Journal of Mechanics and Applied Mathematics, 58 (1), pp.
83-106 (2005).
38. "Homogenization of a Ginzburg-Landau functional" (with D. Cioranescu and D. Golovaty), C. R. Acad. Sci., Paris, 340/1 pp 87-92 (2005).
39. "Discrete Network Approximation for Highly-Packed Composites with Irregular Geometry in Three Dimensions" (with Y. Gorb and A. Novikov), in Lecture Notes in Computational Science and Engineering,
editors B. Engquist, P. Lotstedt and O. Runborg, Springer Verlag (2004).
40. "Increase and Decrease of the Effective Conductivity of Two Phase Composites due to Polydispersity" (with V. Mityushev), Journal of Statistical Physics, 118, 3/4 pp 481-509 (2005).
41. "Transport properties of densely packed composites. Effects of shapes and spacings of inclusions" (with D. Golovaty, A. Movchan, and J. Phillips) The Quarterly Journal of Mechanics and Applied
Mathematics, 57, pp. 495-528 (2004).
42. "The Effective Conductivity of Densely Packed High Contrast Composites with Inclusions of Optimal Shape" (with Y. Gorb) Proceedings of Continuum Models and Discrete Systems (CMDS10) Conference,
Tel-Aviv, Israel (2004).
43. "Homogenization of a Ginzburg-Landau model for a nematic liquid crystal with inclusions" (with D. Cioranescu and D. Golovaty), Journal des Mathematiques Pures et Appliquees, 84:1, pp. 97-136
44. "Homogenized Non-Newtonian Viscoelastic Rheology of a Suspension of Interacting Particles in a Viscous Newtonian Fluid" (with E. Ya. Khruslov) SIAM Journal of Applied Mathematics, 64:3, pp.
1002-1034 (2004).
45. "Random network model for heat transfer in high contrast composite materials" (with D. Gerenrot and J. Phillips), IEEE Transactions on Advanced Packaging, 26:4, pp. 410-417 (2003).
46. "Ginzburg-Landau Minimizers with Prescribed Degrees: Dependence on Domain" (with P. Mironescu) C. R. Acad. Sci, Paris, 337 (6), pp. 375-380 (2003).
47. "Geometric patterns and effective conductivity of highly packed two-phase composites" (with A. Novikov), Homogenization 2001, (Naples), 113-129 GAKUTO Internat. Ser. Math. Sci. Appl., 18, Tokyo
48. "Error of the network approximation for densely packed composites with irregular geometry" (with A. Novikov), SIAM Journal on Mathematical Analysis, 34(2), pp. 385-408 (2002).
49. "Competition between the surface and the boundary layer energies in a Ginzburg-Landau model of a liquid crystal composite" (with E. Ya. Khruslov), Asymptotic Analysis, 29, pp. 185-219 (2002).
50. "On uniqueness of vector-valued minimizers of the Ginzburg-Landau functional in annular domains" (with D. Golovaty), Calculus of Variations, 14, 213-232 (2002).
51. "Network Approximation in the Limit of Small Interparticle Distance of the Effective Properties of a High-Contrast Random Dispersed Com- posite" (with A. Kolpakov), Archive for Rational Mechanics
and Analysis, 159, pp. 179-227 (2001).
52. "Generalized Clausius-Mosotti formula for random composite with circular fibers" (with V. Mityushev), Journal of Statistical Physics, 102, 1/2 pp. 115-145 (2001).
53. "Homogenization of harmonic maps with large number of vortices and applications in superconductivity and superfluidity" (with E. Ya. Khruslov), Advances in Differential Equations, v. 6/2, pp.
229-256 (2001).
54. "Symmetry Breaking in Annular Domains for a Ginzburg-Landau Superconductivity Model" (with K. Voss), Proceedings of IUTAM 99/4 Symposium, Sydney, Australia, January, Kluwer Academic Publishers,
189-200 (2001).
55. "Frequency-dependent acoustics of composites with interfaces" (with M. Avellaneda and J. Clouet), SIAM Journal of Applied Mathematics, 60:6, 2143-2181 (2000).
56. "Homogenization. In Memory of Sergei Kozlov." Edited by V. Berdichevsky, V. Jikov and G. Papanicolaou", Series on Advances in Mathematics for Applied Sciences, 50, pp. xiv+418 (1999).
57. "Homogenization of the Ginzburg-Landau functional with a surface energy term", Asymptotic Analysis, 37-59, 21 (1999).
58. "Asymmetric Strain-Stress Distribution Function for Crystal with Random Point Defects", In the book Homogenization, ed. V. Berdichevsky, V. Jikov and G. Papanicolaou, World Scientific, 179-192
59. "Homogenization of harmonic maps and superconducting composites" (with E. Ya. Khruslov), SIAM Journal of Applied Mathematics, 59, 5 pp. 1892-1916 (1999).
60. "Effective Properties of Superconducting and Superfluid Composites", International Journal of Modern Physics B, 12, 29, 3063-3073 (1999).
61. "First-Passage Percolation, Semi-Directed Bernoulli Percolation, and Failure in Brittle Materials" (with M. Rintoul and S. Torquato) Journal of Statistical Physics, 91, 3/4, 603-623 (1998).
62. "Non-Gaussian Limiting Behavior of the Percolation Threshold in a Large System" (with J.Wehr), Communications in Mathematical Physics, 185, 73-92 (1997).
63. "Renormalization Group Technique for Asymptotic Behavior of a Thermal Diffusive Model with Critical Nonlinearity" (with J. Xin), in Pitman Research Notes, No. 324 "Recent Development in Evolution
Equations", 76-85 (1995).
64. "Large Time Asymptotics of Solutions to a Model Combustion System with Critical Nonlinearity" (with J. Xin), Nonlinearity, 8:161-178 (1995).
65. "The Probability Distribution of the Percolation Threshold in a Large System" (with J. Wehr), Journal of Physics A: Mathematics and General, 28:24, 7127-7133 (1995).
66. "The accuracy of O'Doherty-Anstey approximation for wave propagation in highly disordered stratiffied media" (with R. Burridge), Wave Motion, 21:3, 357-373 (1995).
67. "Effective Elastic Moduli of a Soft Medium with Hard Polygonal Inclusions and Extremal Behavior of Effective Poisson's Ratio" (with K.S. Promislow), Journal of Elasticity, 40:1, 45-73 (1995).
68. "Exact Result for the Effective Conductivity of a Continuum Percolation Model" (with K. Golden), Physical Review B, 50 (4):2114-2117 (1994).
69. "Asymptotics of the Homogenized Moduli for the Elastic Chess-Board Composite" (with S. Kozlov), Archive for Rational Mechanics and Analysis, 118, 95-112 (1992).
70. "Exactly solvable random model and IR spectroscopy of strained defective lattice" (with N. Chukanov and V. Dubovitskii), Chemical Physics Letters, 181, No. 5:450-454 (1991).
71. "The localization problem for a random elastic continuum with dispersion" (with S. Molchanov), Mathematics Notes, USSR Academy, 49, No. 3:346-348 (1991).
72. "The averaging of the diffusion equation in porous medium with weak absorption" (with M. Goncharenko), Journal of Soviet Mathematics, 53, No. 5:3428-3435 (1990).
73. "Operational separation of variables in problems of short wave asymptotic behavior for differential equations with fast oscillating coefficients" (with S. Yu. Dobrokhotov), Sov. Phys. Dokl., 32
(9):714-716 (1987).
74. "Homogenization and short wave asymptotics for solutions of an initial value problem for the Schroedinger equation with a rapidly oscillating potential" (with S. Yu. Dobrokhotov), Uspekhy Mat.
Nauk., 41 (4):195-196 (in Russian, 1986).
75. "Asymptotic behavior of solutions of the Dirichlet boundary value problem in elasticity for domains with fine-grained boundaries" Uspekhy Mat. Nauk., 38, No. 6 (234), 107-108 (in Russian, 1983).
76. "Averaging of elasticity equations in domains with fine-grained boundaries, Part 1-2", Functional theory, functional analysis and applications, Kharkov, 39:16-25 - Part 1; 40:16-23 - Part 2 (in
Russian, 1983).
77. "An asymptotic description for a thin plate with a large number of small holes", Ukrainian Academy of Science Reports, Ser. A., No. 10:5-8 (in Russian, 1983).
78. "Averaging of boundary value problems for higher order differential operators in domains with holes" (with I. Yu. Chudinovich), Soviet Math. Dokl., 28, No. 2:427-430 (1983).
79. "An averaged description of an elastic medium with a large number of small absolutely solid inclusions" (with A. Okhotsimskii), Soviet Phys. Dokl., 28 (1):81-84 (1983).
80. "On the vibration of an elastic body with a large number of small holes" Ukrainian Academy of Science reports, Ser. A, 2:3-5 (in Russian, 1983).
81. "On convergence of spectral families of operators for Neumann boundary value problems", Function theory, functional analysis and applications, Kharkov, 33:3-8 (in Russian, 1980).
Recent Preprints and Submitted Papers (PostScript/PDF files)
1. "Sharp interface limit in a phase field model of cell motility" (with V. Rybalko, M. Potomkin), arXiv sumitted (2013)
2. "Complexity reduction in many particles systems with random initial data" (with P.E. Jabin, M. Potomkin), arXiv submitted (2013) | {"url":"http://www.personal.psu.edu/lvb2/publications/publications.html","timestamp":"2014-04-17T12:29:44Z","content_type":null,"content_length":"23058","record_id":"<urn:uuid:2132daa4-d144-4b3f-9218-04440e7fdd00>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00060-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: [Mata newbie] optimize() vs -ml-
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
Re: st: [Mata newbie] optimize() vs -ml-
From jpitblado@stata.com (Jeff Pitblado, StataCorp LP)
To statalist@hsphsun2.harvard.edu
Subject Re: st: [Mata newbie] optimize() vs -ml-
Date Tue, 16 Oct 2007 16:37:19 -0500
Antoine Terracol <terracol@univ-paris1.fr> asks about relative speed between
Stata's -ml- command and Mata's -optimize()- suite (new is Stata 10):
> I was wondering if there was any general rule to decide wether a given
> likelihood maximisation problem will be more efficiently handled using
> Mata's optimize() or Stata's -ml- command ?
> When I compare the example code given in the help file for optimize() to
> fit a ml estimation of a beta density (v0 method) whith the equivalent
> -ml- code (lf method), I find that optimize() runs approximately 3 to 5
> times faster than -ml-, depending on the size of the (simulated) dataset.
> However, when fitting a simple probit model (two explanatory variables),
> optimize() (v0 method) runs significantly slower than -ml- (lf method).
> Is it caused by the matrix multiplication (cross()) needed to calculate
> x*beta when explanatory variables are present, or is my code badly
> written?
> (omitting Antoine's code)
When comparing -ml- and -optimize()- on equal playing fields, -optimize()-
will tend to be faster than -ml-.
The transpose operator in Antoine's -optimize()- evaluator for the probit
model can be expensive (it causes Mata to make a copy) but is not the culprit
Antoine's observations to the contrary are due to the choice of evaluator
types in the comparison. In fact having predictors in the probit model
accentuates the timing difference between the -v0- and -lf- evaluator types.
1. Predictors:
For the 'beta density' case Antoine's models do not contain predictors, but
the 'probit' model contains 2 predictors.
If Antoine removes the predictors from the 'probit' models, -optimize()- will
finish the job faster than -ml- for the same constant-only model.
2. Evaluator types:
The reason for the flip-flop of results is due to how -ml- and -optimize()-
take numerical derivatives given the user specified evaluator type.
-optimize()- with type -v0- evaluators must take the numerical
derivative with respect to each element of the parameter vector passed
to the users evaluator function (3 parameter elements, for each
predictor and the intercept).
-ml- with type -lf- evaluators take numerical derivatives with respect
to the linear predictor in each equation (1 equation for the probit
Here is a table listing the evaluator types of -ml- and -optimize()-; with the
comparable evaluator types in the same row:
Stata's -ml- Mata -optimize()-
d0 d0
d1 (without scores) d1
d2 (without scores) d2
lf --
-- v0
d1 (with scores) v1
d2 (with scores) v2
The point here is that -ml-'s methods for handling type -lf- evaluators are
not comparable to -optimize()-'s methods for type -v0-.
If Antoine can derive formulas for producing the gradient for an -ml- type
-d1- evaluator and the parameter level scores for an -optimize()- type -v1-
evaluator, a speed comparison will show that -ml- with -d1- and -optimize()-
with -v1- will be faster than -lf- and -v0-, respectively, and -optimize()-
will tend to be faster than -ml-.
Following my signature, I've included a modified version of Antoine's 'probit'
model comparison. My -probit_v2()- function computes parameter level scores
(and a Hessian matrix which is not used by -v1-) for -optimize()-, while my
-myprobit_d2- program computes a gradient (and the negative Hessian which is
not used by -d1-) for -ml-. A quick timing on my machine showed the following
. timer list
1: 0.13 / 1 = 0.1280
2: 0.18 / 1 = 0.1830
Note that -optimize()- cannot have a type -lf- evaluator because there is no
way to impose the linear form assumptions. User arguments are free to be any
valid Mata object and -optimize()- does not have the concept of an equation.
Incidentally, changing the specified evaluator types from -v1- to -v2- and -d1-
to -d2- yielded the following results:
. timer list
1: 0.04 / 1 = 0.0440
2: 0.08 / 1 = 0.0830
And moving the evaluator types to -v0- and -d0- (all numerical derivatives)
. timer list
1: 0.18 / 1 = 0.1780
2: 0.27 / 1 = 0.2710
(Note that timings will vary from run-to-run due to the amount of activity on
your computer).
clear all
set mem 20m
set obs 2000
drawnorm x1 x2 eps
gen cons=1
gen y=(1+x1+x2+eps>0)
timer clear 1
timer clear 2
void lnprobit_v2(
real scalar todo,
real rowvector beta,
real colvector y,
real matrix X,
real colvector llj,
real matrix g,
real matrix H
real colvector pm
real colvector xb
real colvector lj
real colvector dllj
real colvector d2llj
real scalar dim
real scalar nobs
nobs = rows(y)
dim = cols(X)
if (nobs != rows(X) | dim != cols(beta)) {
pm = 2*(y :!= 0) :- 1
xb = X*beta'
lj = normal(pm:*xb)
llj = ln(lj)
if (todo == 0 | missing(llj)) return
dllj = pm :* normalden(xb) :/ lj
if (missing(dllj)) {
llj = .
g = dllj :* X
if (todo == 1) return
d2llj = dllj :* (dllj + xb)
if (missing(d2llj)) {
llj = .
H = - cross(X, d2llj, X)
st_view(y, ., "y")
st_view(x, ., ("x1","x2","cons"))
S = optimize_init()
optimize_init_evaluator(S, &lnprobit_v2())
optimize_init_evaluatortype(S, "v1")
optimize_init_params(S, (0.1,0.5,0))
optimize_init_argument(S, 1, y)
optimize_init_argument(S, 2, x)
betahat = optimize(S)
program myprobit_d2
version 10
args todo b lnf g negH g1
tempvar xb lj
mleval `xb' = `b'
quietly {
gen double `lj' = normal( `xb') if $ML_y1 == 1
replace `lj' = normal(-`xb') if $ML_y1 == 0
mlsum `lnf' = ln(`lj')
if (`todo'==0 | `lnf' >= .) exit
replace `g1' = normalden(`xb')/`lj' if $ML_y1 == 1
replace `g1' = -normalden(`xb')/`lj' if $ML_y1 == 0
mlvecsum `lnf' `g' = `g1', eq(1)
if (`todo'==1 | `lnf' >= .) exit
mlmatsum `lnf' `negH' = `g1'*(`g1'+`xb'), eq(1,1)
ml model d1 myprobit_d2 (y=x1 x2)
mat I= (0.1 , 0.5,0)
ml init I, copy
timer on 2
ml max, search(off)
timer off 2
timer list
***** END:
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2007-10/msg00564.html","timestamp":"2014-04-19T02:05:50Z","content_type":null,"content_length":"11566","record_id":"<urn:uuid:e0e6b5e7-00db-4ffd-99b6-7267a28f4d62>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00223-ip-10-147-4-33.ec2.internal.warc.gz"} |
Line in 3D space (crossing two circles)
February 5th 2008, 05:14 AM
Line in 3D space (crossing two circles)
Given two circles and an external point P in 3D space, I want to find the line in which passes through point P and also through the perimeters of both circles. The circles are parallell and
concentric but offset in height, as on a cone. (Whether there exists none, one, two or infinite solutions depend on the positions of P and the circles. We can assume that there exist two
This is how I have started out:
- The point P outside of the circles has the known coordinates
{xP; zP; yP}
- With the parametric equation for a circle, the point on circle one, C1, has the coordinates
{x1=r1*Cos[alpha1]; z1=r1*Sin[alpha1]; y1}
where r1 is the radius of circle one and alfa1 is the origo-angular position of point C1.
- Correspondingly for the point on the other circle, C2. The heights of the circles are independent of the angular parameter alpha, and are y1 and y2 respectively.
But then how do I find the conditions for when these three points line up?
I've actually used the law of cosinus to formulate the angle C1'P'C2 at point P as a function of alpha1 and alpha2 in order to minimize it numerically. However, this is cumbersome and I'm sure
there are far simpler and better approaches.
I'm grateful for any assistance!
February 5th 2008, 08:30 AM
A better idea would be to use the distance between points C1 and C2 as target equation to minimize. That simplifies the numerical method. I'll try that when I have access to Mathematica next. I'm
just thining out loud here.
So, my plan now is to:
1) Choose some starting point C1 on circle one.
2) Calculate the point C2 on circle two which is closest to the line through C1 and P.
3) Check if (close enough to) zero and leave the loop, or else continue.
4) Choose some other point on circle one, through a numerical method such as the secant method. Go to step 2 above.
In step 2 above, it should be possible to have a nice algebraic solution. The 3D equation of the line through P to C1 is known. And the y-coordinate of C2 is given beforehand.
So I formulate the distance between the coordinates of the line at Y=y2 (in the plane of circle 2), from the coordinates of the equation of circle 2. I derivate that expression with respect to
the parametric angle alfa2. I set the derivative to zero and solve for alpha2. That gives the point on C2 which is closest to the line (in its own plane Y=y2, not orthogonally). I then calculate
the distance C1 to C2 and use secant method to minimize it by choosing new points on circle 1. If the line crosses both circles, then that distance will be zero for some chosen C1.
I'm still convinced that there are better ways, so if anyone sees is it, I'd appreciate the advice! | {"url":"http://mathhelpforum.com/geometry/27515-line-3d-space-crossing-two-circles-print.html","timestamp":"2014-04-20T15:05:39Z","content_type":null,"content_length":"6180","record_id":"<urn:uuid:01368dd8-79d3-477f-80b1-dd49ddce69b9>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00447-ip-10-147-4-33.ec2.internal.warc.gz"} |
East Somerville, MA Algebra 2 Tutor
Find an East Somerville, MA Algebra 2 Tutor
...Through short stories, plays, and novels, we can appreciate important questions and lessons that cannot be captured in any other way. Your enjoyment of reading is greatly increased by how much
literature you read and how much analysis you make of its contents. I'm one of the world's most popular online reviewers of English literature.
55 Subjects: including algebra 2, English, reading, algebra 1
...I've written a workbooks to help high school teachers with their chemistry licensure by passing the MTEL. I received the Omega Chi Epsilon award for the outstanding chemical engineering senior
at Penn State, and I won an NSF fellowship to attend MIT as a chemical engineering graduate student. I...
23 Subjects: including algebra 2, chemistry, physics, calculus
...I have a thorough understanding of all the fundamental concepts, and I am confident that I can explain the subject in such a way that you too will soon find it an easy subject to master!
American history is one of my favorite subjects. When it comes to a subject like history, getting the motiva...
44 Subjects: including algebra 2, English, chemistry, reading
For over 20 years, I’ve effectively instructed and led people with Fortune 500 organizations Tiffany & Co., Verizon, and the Walt Disney Company. Now I’m looking to pursue a life’s ambition and
transition from the professional realm into public education. I’ve completed testing for state licensure and begun classroom teaching with a local school system.
22 Subjects: including algebra 2, calculus, geometry, GRE
...So is trigonometry really relevant in your day to day activities? You bet it is. Let's explore areas where this science finds use in our daily activities and how we can use this to resolve
problems we might encounter.
10 Subjects: including algebra 2, calculus, physics, geometry
Related East Somerville, MA Tutors
East Somerville, MA Accounting Tutors
East Somerville, MA ACT Tutors
East Somerville, MA Algebra Tutors
East Somerville, MA Algebra 2 Tutors
East Somerville, MA Calculus Tutors
East Somerville, MA Geometry Tutors
East Somerville, MA Math Tutors
East Somerville, MA Prealgebra Tutors
East Somerville, MA Precalculus Tutors
East Somerville, MA SAT Tutors
East Somerville, MA SAT Math Tutors
East Somerville, MA Science Tutors
East Somerville, MA Statistics Tutors
East Somerville, MA Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Beachmont, MA algebra 2 Tutors
Cambridgeport, MA algebra 2 Tutors
Charlestown, MA algebra 2 Tutors
East Milton, MA algebra 2 Tutors
East Watertown, MA algebra 2 Tutors
Grove Hall, MA algebra 2 Tutors
Kendall Square, MA algebra 2 Tutors
Kenmore, MA algebra 2 Tutors
Reservoir, MS algebra 2 Tutors
Somerville, MA algebra 2 Tutors
South Waltham, MA algebra 2 Tutors
Squantum, MA algebra 2 Tutors
West Lynn, MA algebra 2 Tutors
West Somerville, MA algebra 2 Tutors
Winter Hill, MA algebra 2 Tutors | {"url":"http://www.purplemath.com/East_Somerville_MA_Algebra_2_tutors.php","timestamp":"2014-04-17T04:26:43Z","content_type":null,"content_length":"24593","record_id":"<urn:uuid:f422f630-597a-4e81-ba6a-5eda7089e993>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00165-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] Need **working** code example of 2-D arrays
Pierre GM pgmdevlist@gmail....
Mon Oct 13 00:48:38 CDT 2008
If you're familiar with Matlab syntax, you may find this link interesting:
Here another couple of useful links
For your specific example, if you want to create a (256,128) array of unsigned
import numpy as np
a = np.zeros((256,128), dtype=np.uint32)
Note that if you intend to fill the array afterwards with other values, it
might be more efficient to create an 'empty' array instead of an array full
of zeros:
b=np.empty((256,128), dtype=uint32)
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-October/038050.html","timestamp":"2014-04-20T23:34:19Z","content_type":null,"content_length":"3559","record_id":"<urn:uuid:1fb60997-d6dc-44e2-becd-22c80793e1ac>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00469-ip-10-147-4-33.ec2.internal.warc.gz"} |
the first resource for mathematics
Stabilizability of linear systems over a commutative normed algebra with applications to spatially distributed and parameter-dependent systems.
(English) Zbl 0564.93054
The authors study the feedback stabilization of linear systems given by a pair of matrices (F,G) with entries in a commutative normed
- algebra
with identity. The problem is to find a suitable feedback matrix L over
such that the closed loop system is stable. The system is transformed into another system
over a commutative
-algebra, where
means the Gelfand transform. Necessary and sufficient conditions for the stabilizability of
are obtained in terms of the corresponding Riccati equation. If the image
(B: the completion of
under the Gelfand transform is *-closed, it is shown that the stabilizability of (F,G) is equivalent to that of
. Another condition for the stabilizability of (F,G) is stated in terms of local stabilizability of the system which is equivalent to the local rank condition for
. An example is shown as to the positioning of a seismic cable which is written by a discrete-time linear equation.
93D15 Stabilization of systems by feedback
93B25 Algebraic theory of control systems
93C05 Linear control systems
44A15 Special transforms (Legendre, Hilbert, etc.)
46H25 Normed modules and Banach modules, topological modules
93B17 System transformation
93C25 Control systems in abstract spaces | {"url":"http://zbmath.org/?q=an:0564.93054","timestamp":"2014-04-20T21:06:00Z","content_type":null,"content_length":"24064","record_id":"<urn:uuid:978548b8-cdec-48c0-b5c3-7f8e0512f812>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00588-ip-10-147-4-33.ec2.internal.warc.gz"} |
22 yards are how many feet
You asked:
22 yards are how many feet
Say hello to Evi
Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we
will be adding all of Evi's power to this site.
Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire. | {"url":"http://www.evi.com/q/22_yards_are_how_many_feet","timestamp":"2014-04-19T04:33:57Z","content_type":null,"content_length":"53114","record_id":"<urn:uuid:768d9c69-1329-49ce-8d7d-b4e49ea7cfe9>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00209-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
how do you type the infinity sign on a graphing calculator ?
• 5 months ago
• 5 months ago
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
i just write it out ?
Best Response
You've already chosen the best response.
it says error
Best Response
You've already chosen the best response.
thats calculator for: 1x10^(99)
Best Response
You've already chosen the best response.
which model
Best Response
You've already chosen the best response.
TI - 84 plus
Best Response
You've already chosen the best response.
your comma button has an EE atop it, shift to get it
Best Response
You've already chosen the best response.
oh i see it now , hold up
Best Response
You've already chosen the best response.
Do you have a TI-89? These bad boys have an infinity BUTTON!! :D
Best Response
You've already chosen the best response.
oh no i have a 84 /:
Best Response
You've already chosen the best response.
lol @austinL
Best Response
You've already chosen the best response.
ti84 is the most ill go ... after that they started having to have terabytes for the instruction manuals and i gave up :)
Best Response
You've already chosen the best response.
@amistre64 it says EE but you said it was IE99
Best Response
You've already chosen the best response.
Same thing :)
Best Response
You've already chosen the best response.
it not my fault they cant type lol
Best Response
You've already chosen the best response.
Button likely says EE, but on screen it should say E
Best Response
You've already chosen the best response.
ok lemme try
Best Response
You've already chosen the best response.
the output on the screen is still: one: 1 2nd, , : E 99: 99
Best Response
You've already chosen the best response.
is it a one or an I ?
Best Response
You've already chosen the best response.
they might have done that to distinguish it fromthe alpha E key :/
Best Response
You've already chosen the best response.
an i is not a real numerical value
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
i^k is one of 4 values ... why are you needing an infinity?
Best Response
You've already chosen the best response.
Find the sum of the infinite geometric series, if possible. Show steps, if possible. \sum_{{j}={1}}^{\infty}4 \cdot 0.5^{j-1}
Best Response
You've already chosen the best response.
hold up
Best Response
You've already chosen the best response.
\[\sum_{{j}={1}}^{\infty}4 \cdot 0.5^{j-1}\]
Best Response
You've already chosen the best response.
\[\sum_{{j}={1}}^{\infty}4 \cdot 0.5^{j-1}\] \[\sum_{{j}={1}}^{\infty}4 \cdot 0.5^{j}~.5^{-1}\] \[\frac{4}{.5}\sum_{{j}={1}}^{\infty}0.5^{j}\]
Best Response
You've already chosen the best response.
aha now what ? lol
Best Response
You've already chosen the best response.
is .5 less than 1?
Best Response
You've already chosen the best response.
theres a formula for this ....
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
whats the formula ? im sorry my teacher doesnt give me all this info /:
Best Response
You've already chosen the best response.
\[\frac{1-r^k}{1-r}\] if r < 1, then the limit as k to infinity is \[\frac{1}{1-r}\]
Best Response
You've already chosen the best response.
whats r though ? the common ratio ?
Best Response
You've already chosen the best response.
but would it be possible to find the sum ?
Best Response
You've already chosen the best response.
let me rechk my thought tho ... S = r + r^2 + r^3 + ... + r^n -rS = - r^2 - r^3 - ... - r^n - r^(n+1) ----------------------------------- (1-r)S = r + 0 ..........................- r^(n+1) S= r -
r^(n+1) ----------- 1 - r
Best Response
You've already chosen the best response.
if r < 1 then that exponented part goes to 0 leaving us: S = r/(1-r) for the sum
Best Response
You've already chosen the best response.
might be better stated the |r| < 1, but why nitpick
Best Response
You've already chosen the best response.
\[\frac{4}{.5}\sum_{{j}={1}}^{\infty}0.5^{j}\] \[\frac{4}{.5}~\frac{.5}{1-.5}\] \[\frac{4}{.5}=8\]
Best Response
You've already chosen the best response.
thanks soooo much !
Best Response
You've already chosen the best response.
youre welcome, as far as i know the ti84 does not have an infinite sum function
Best Response
You've already chosen the best response.
yes im aware haha
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/528cb43ae4b09a30e765a6c3","timestamp":"2014-04-21T02:19:24Z","content_type":null,"content_length":"128599","record_id":"<urn:uuid:89c772f4-b422-45bd-89f4-e6d84fd63b9e>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00035-ip-10-147-4-33.ec2.internal.warc.gz"} |
Objective Type
Re: Objective Type
Hi ganesh
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Objective Type
Hi bobbym and anonimnystefy,
The answers 19 and 20 are correct. Well done!
21. The Standard Deviation of a set of 10 numbers is 5. When each number is added by 7, the new Standard Deviation is ______________.
(a) 12
(b) 7
(c) 5
(d) None of these.
22. The probability of getting a black card from a pack of cards is ______________.
(a) 1/13
(b) 2/13
(c) 1/4
(d) 1/2
Character is who you are when no one is looking.
Re: Objective Type
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Objective Type
Hi ganesh
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Objective Type
Hi bobbym and anonimnystefy,
Both the answers are correct, bobbym! Well done!
anonimnystefy : The answers and
23. The common ratio 'r' in the Geometric Progression 9, 3, 1, ........ is _______________
(a) 3
(b) 1/3
(c) 1
(d) 9
24. The value of
(d) None of these
Character is who you are when no one is looking.
Re: Objective Type
Hi ganesh
Question #22 wasn't precise.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Objective Type
Hi ganesh;
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Objective Type
Hi anonimnystefy and bobbym,
Both the answers are correct. Well done!
25. Volume of hemisphere with radius 4 centimeters (in cubic centimeters) is ___________________
(d) None of these
26. When is divided by x - 2, the remainder is 20 then the value of m is __________________
(a) 4
(b) -4
(c) 3
(d) 2
Character is who you are when no one is looking.
Re: Objective Type
Hi ganesh
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Objective Type
Hi ganesh;
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Objective Type
Hi anonimnystefy and bobbym,
The answers 25 and 2 are correct. Well done!
27. The roots of the equation x^2 - x - 6 = 0 are __________________
(a) 3,2
(b) 3,-2
(c} -3,2
(d) -3,-2
28. Which of the following is a point on 3x + 4y ≤ 7?
(a) (1,1)
(b) (1,2)
(c) (2,1)
(d) (0,2)
Character is who you are when no one is looking.
Re: Objective Type
Hi ganesh;
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Objective Type
Hi ganesh
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Objective Type
Hi bobbym and anonimnystefy,
The answers 27 and 28 are correct. Well done!
29. The points of intersection of the lines x = 3 and y = 4 is _________________
(a) (3,0)
(b) (4,0)
(c) (0,0)
(d) (3,4)
30. The equation y = c represents a _________________
(a) x axis
(b) y axis
(c) a line parallel to x axis
(d) a line parallel to y axis
Character is who you are when no one is looking.
Re: Objective Type
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Objective Type
Hi ganesh
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Objective Type
Hi bobbym and anonimnystefy,
bobbym : Well done!
anonimnystefy :
31. P(E) + P(E') = ____________
(a) 0
(b) 1
(c) 2
(d) None of these
32. When A and B are mutually exclusive,
= ______________
(a) P (A)
(b) P (B)
(c) P (A) + P (B)
Character is who you are when no one is looking.
Re: Objective Type
Hi ganesh
Yes it was. I don't know how that happened.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Objective Type
Hi ganesh;
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Objective Type
Hi anonimnystefy and bobbym,
The answers 31 and 32 are correct. Well done!
33. If
= , then = ________________
(a) 1
(b) 0
Last edited by ganesh (2012-06-26 16:12:57)
Character is who you are when no one is looking.
Re: Objective Type
Hi ganesh;
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Objective Type
Hi ganesh
You gave me so much trouble with this one. I though you had written theta=cos(theta).
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Objective Type
Hi bobbym and anonimnystefy,
The answer 33 is correct. Well done!
(Sorry for the trouble, anonimnystefy)
Character is who you are when no one is looking.
Re: Objective Type
35. The Common Difference in the Arithmetic Progression 5, 2, -1, ..... is _______________.
(a) 3
(b) -3
(c) 2
(d) -2
(d) None of these
Character is who you are when no one is looking.
Re: Objective Type
Hi ganesh
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=223586","timestamp":"2014-04-19T02:17:04Z","content_type":null,"content_length":"49531","record_id":"<urn:uuid:954803b4-49d9-4c8a-a532-b8eecdd71229>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00165-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hoyt, CO Algebra 2 Tutor
Find a Hoyt, CO Algebra 2 Tutor
...I also attended UC Berkeley as an Engineering major.I took this class at American River College in Sacramento, CA. I received an A, one of the highest grades in the class. I've tutored many
students in this subject over the last 12 years, at both the junior college and university level.
11 Subjects: including algebra 2, calculus, geometry, statistics
...My tutoring experience was as an undergrad student, particularly during my work at the University of Colorado at Colorado Springs, where I was a chemistry major with a biochemistry option. I
graduated Summa Cum Laude from UCCS in 2005. I also was a tutor for homeschooled students and was homeschooled myself for middle school and high school.
26 Subjects: including algebra 2, English, reading, chemistry
...Also as an applied mathematician I use algebra everyday. As a doctoral student in applied mathematics, I use and teach Calculus every day. I have taught calculus to students from high school
to seniors in college.
6 Subjects: including algebra 2, calculus, geometry, algebra 1
...This is at least partly true. The problems you encounter in algebra 1 are more challenging than those you encounter in arithmetic. However, you often use the same techniques you used in
arithmetic to solve algebra 1 problems!
18 Subjects: including algebra 2, calculus, geometry, statistics
...My teaching experience has been in the mathematics area where I have taught all high school math courses through calculus as well as college algebra and trigonometry. I have prepared students
to take the ACT as well as AP Calculus exams. My teaching experience has included teaching the mathemat...
11 Subjects: including algebra 2, geometry, algebra 1, GED
Related Hoyt, CO Tutors
Hoyt, CO Accounting Tutors
Hoyt, CO ACT Tutors
Hoyt, CO Algebra Tutors
Hoyt, CO Algebra 2 Tutors
Hoyt, CO Calculus Tutors
Hoyt, CO Geometry Tutors
Hoyt, CO Math Tutors
Hoyt, CO Prealgebra Tutors
Hoyt, CO Precalculus Tutors
Hoyt, CO SAT Tutors
Hoyt, CO SAT Math Tutors
Hoyt, CO Science Tutors
Hoyt, CO Statistics Tutors
Hoyt, CO Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Adams City, CO algebra 2 Tutors
Arickaree, CO algebra 2 Tutors
Aspen Park, CO algebra 2 Tutors
Bovina, CO algebra 2 Tutors
Cadet Sta, CO algebra 2 Tutors
Deckers, CO algebra 2 Tutors
Dupont, CO algebra 2 Tutors
Foxton, CO algebra 2 Tutors
Irondale, CO algebra 2 Tutors
Montclair, CO algebra 2 Tutors
Raymer, CO algebra 2 Tutors
Riverbend, CO algebra 2 Tutors
Schriever Air Sta, CO algebra 2 Tutors
Virginia Dale, CO algebra 2 Tutors
Western Area, CO algebra 2 Tutors | {"url":"http://www.purplemath.com/Hoyt_CO_Algebra_2_tutors.php","timestamp":"2014-04-18T23:45:38Z","content_type":null,"content_length":"23835","record_id":"<urn:uuid:6895f636-20d2-4777-9146-3404eb5eda6f>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00353-ip-10-147-4-33.ec2.internal.warc.gz"} |
Shige's Research Blog
I am running GEE logistic regression model for my fetal loss paper. As usual, I compare results between Stata and R and make sure they are consistent. To my surprise, the models assuming independent
correlation structure give similar results but the models assuming exchangeable correlation structure give drastically different results.
It turns out that there is only one woman in my sample who reported a total number of eleven pregnancies (all others reported ten or less) and the presence of this single observation had huge
influence on the algorithm used in R but not the one used in Stata. After excluding this single observation, the two sets of results look identical.
5 comments:
I am not a statistician, but statistics is been a favorite subject for me recently. So, based on your article, do you want to say that R is more sensitive than Stata? Is it good or bad? do you
already publish your paper so i can get more explanation? thanks.
That's what the results seem to suggest. It will be worthwhile to dig deeper to figure out how these different packages handle such "abnormal" cases.
My paper is not about GEE; instead, it is a demographic research on involuntary fetal loss that makes use of GEE and statistical simulation.
how did you assess influence in R in the GEE model?
Hi Shige,
How did you assess influence in R for the GEE model? I get errors when I try influence.measures(model). Would be curious to find out how you did it?
how did you assess influence in R in the GEE model? | {"url":"http://sgsong.blogspot.com/2011/10/gee-using-stata-vs-r.html","timestamp":"2014-04-18T15:38:52Z","content_type":null,"content_length":"107334","record_id":"<urn:uuid:9e7fd3b1-36c3-4b75-9cf9-d50153a897ea>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00117-ip-10-147-4-33.ec2.internal.warc.gz"} |
How do you label density?
A physical quantity (or "physical magnitude") is a physical property of a phenomenon, body, or substance, that can be quantified by measurement.
Physical chemistry is the study of macroscopic, atomic, subatomic, and particulate phenomena in chemical systems in terms of laws and concepts of physics. It applies the principles, practices and
concepts of physics such as motion, energy, force, time, thermodynamics, quantum chemistry, statistical mechanics and dynamics, equilibrium.
Physical chemistry, in contrast to chemical physics, is predominantly (but not always) a macroscopic or supra-molecular science, as the majority of the principles on which physical chemistry was
founded, are concepts related to the bulk rather than on molecular/atomic structure alone. For example, chemical equilibrium, and colloids.
The partial specific volume $\bar{v_i},$ express the variation of the extensive volume of a mixture in respect to composition. It is the partial derivative of volume with respect to the mass of
the component of interest.
where $\bar{v_i}$ is the partial specific volume of a component $i$ defined as:
Hospitality is the relationship between the guest and the host, or the act or practice of being hospitable. This includes the reception and entertainment of guests, visitors, or strangers.
Related Websites: | {"url":"http://answerparty.com/question/answer/how-do-you-label-density","timestamp":"2014-04-18T19:05:17Z","content_type":null,"content_length":"25268","record_id":"<urn:uuid:77af720f-db42-423b-b9b9-cd08b9e1ef06>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00449-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: Using {fa} in nfdisc() computations
Bill Allombert on Mon, 08 Dec 2008 11:41:54 +0100
[Date Prev] [Date Next] [Thread Prev] [Thread Next] [Date Index] [Thread Index]
Re: Using {fa} in nfdisc() computations
On Sun, Dec 07, 2008 at 08:08:28PM -0700, Kurt Foster wrote:
> The optional {fa} parameter [factor() matrix] seems like just the
> thing for speeding up the computations in a situation I'm looking at.
> I've got a parametric family of polynomials. I want the exact
> nfdisc()s for a whole swarm of them. I could, of course, just feed
> the polynomials to nfdisc() and let it crunch away. This does work,
> of course, but it occurred to me it might be faster if I could take
> advantage of the fact that the poldiscs have a parametric algebraic
> factorization, and the algebraic factors are generally much smaller
> than the poldisc(), in the sense that
> log(abs(algebraic factor))/log(abs(poldisc)
> will usually be a lot less than 1. Thus, factoring the algebraic
> factors should be a LOT quicker than tackling the whole poldisc.
> But in order to produce the factor() matrix fa for the poldisc, I've
> got to "combine" the factorization matrices of the algebraic factors.
> There's probably a slick way to do this, but I'm a dunce at
> programming. Would factor()ing the algebraic factors and "combining"
> the results actually be likely to be faster than tackling the whole
> poldisc? If so, what's an expeditious way to combine the factor()
> matrices of the separate algebraic factors into the correct factor()
> matrix for the poldisc?
You can totally cheat and use default(factor_add_primes,1) to
record the prime factors automatically.
Then you only need to factor the algebraic factors and then
ask for the discriminant.
You can then use removeprimes(addprimes()) from time to time
to avoid the "addprimes" vector to be too large (but not too often
if you expect some discriminants to have common primes factors). | {"url":"http://pari.math.u-bordeaux.fr/archives/pari-users-0812/msg00016.html","timestamp":"2014-04-19T22:07:19Z","content_type":null,"content_length":"5985","record_id":"<urn:uuid:520d12c9-ecfa-46c9-abdb-199d56462d56>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00151-ip-10-147-4-33.ec2.internal.warc.gz"} |
Galois Multiplication
01-03-2012, 11:17 PM
Galois Multiplication
Righto! I've been writing my implementation of the Rijndal cipher, and I've got stuck on one particular part. the "mixColumns()" method to be exact. Basically,
at first i was using lookup tables from wikipedia. Then I was having problems with the values my implementation was churning out. Anyhow, after some headache I wrote a function that multiplies
over Rijndael's finite field, GF(2^8), and implemented it. Then to test it, I compared it to a step by step guide that I found here. Now I'm having a bit of an issue. The lookup tables, and my
function for multiplying in GF(2^8) are churning out the same numbers, and the guide is showing me something different. Basically, according to the guide:
d4 * 02 = 04 in Rijndael's finite field, however to the lookup table and my function, it's b3. I'm assuming the guide is right, it's from the cs website, so it's a credible source. I just want to
know what I'm doing wrong.
private int galoisMultiply(int a, int b) {
int p = 0;
for (int n=0; n<8; n++) {
p = ((b & 0x01) > 0) ? p^a : p;
boolean ho = ((a & 0x80) > 0);
a = ((a<<1) & 0xFE);
if (ho)
a = a ^ 0x1b;
b = ((b>>1) & 0x7F);
return p;
Above is my multiplication function. I reckon that works fine, and that is returning b3 for my values as well, same with my lookup table, which can be found here. In this case, it's the table for
multiplying by 2.
I need either someone to talk me through Galois Multiplication, or perhaps someone can help me by looking at the tables and vouching for their credibility. Seriously am stuck for ideas at the
moment and I would really love it if you guys could help me out here!
01-04-2012, 02:13 PM
Re: Galois Multiplication
Solved it. I was performing galois multiplication, just didn't take into account that it was matrix multiplication :) | {"url":"http://www.java-forums.org/advanced-java/53470-galois-multiplication-print.html","timestamp":"2014-04-16T20:28:18Z","content_type":null,"content_length":"7615","record_id":"<urn:uuid:a21afb21-8e84-4713-8713-40f4be20f9ad>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00657-ip-10-147-4-33.ec2.internal.warc.gz"} |