Symbolic Computations in MATLAB
Symbolic Math Toolbox lets you perform symbolic computations from the MATLAB command line by defining symbolic math expressions and operating on them. Functions are called using the familiar MATLAB
syntax and are available for integration, differentiation, simplification, equation solving, and other mathematical tasks.
Integration, Differentiation, and Other Calculus
You can perform differentiation and definite and indefinite integration, calculate limits, compute series summation and product, generate the Taylor series, and compute Laplace, Fourier, and
Z-transforms and their inverses. You can also perform vector calculus such as calculating the curl, divergence, gradient, Jacobian, Laplacian, and potential.
Formula Manipulation and Simplification
Symbolic Math Toolbox enables you to simplify long expressions into shorter forms, transform expressions to particular forms or rewrite them in specific terms, and replace parts of expressions with
specified symbolic or numeric values.
Equation Solving
You can analytically solve for well-posed systems of algebraic equations and ordinary differential equations to get exact answers that are free from numerical approximations.
Linear Algebra
You can perform matrix analysis on symbolic matrices such as computing norm, condition number, determinant, and characteristic polynomial. You can execute matrix operations and transformations with
functions for computing the inverse and exponential, and for working with rows and columns of the matrix. You can also get symbolic expressions for the eigenvalues and eigenvectors and perform a
symbolic singular value decomposition of a matrix.
Mathematical Functions
Symbolic Math Toolbox includes the symbolic versions of many mathematical functions, such as logarithm, Dirac, gamma, Bessel, Airy, LambertW, hypergeom, and error functions.
Executing MuPAD Statements
From MATLAB you can also execute statements written in the MuPAD language, which lets you fully access the functionality in the MuPAD engine.
|
{"url":"http://www.mathworks.se/products/symbolic/description3.html?nocookie=true","timestamp":"2014-04-23T14:52:15Z","content_type":null,"content_length":"27041","record_id":"<urn:uuid:56930a2a-e46d-4bf4-9888-352116906128>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Convert kilometers to steps - Conversion of Measurement Units
›› Convert kilometre to step
›› More information from the unit converter
How many kilometers in 1 steps? The answer is 0.000762.
We assume you are converting between kilometre and step.
You can view more details on each measurement unit:
kilometers or steps
The SI base unit for length is the metre.
1 metre is equal to 0.001 kilometers, or 1.31233595801 steps.
Note that rounding errors may occur, so always check the results.
Use this page to learn how to convert between kilometres and steps.
Type in your own numbers in the form to convert the units!
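In code, the conversion this page performs is just a ratio; here is a minimal sketch in Python, assuming the page's factor of 0.762 metres per step (equivalently, the 0.000762 km per step stated above). The function names are made up for illustration.

METRES_PER_KM = 1000.0
METRES_PER_STEP = 0.762

def km_to_steps(km):
    # one kilometre is 1000 metres, and each step covers 0.762 metres
    return km * METRES_PER_KM / METRES_PER_STEP

def steps_to_km(steps):
    return steps * METRES_PER_STEP / METRES_PER_KM

print(km_to_steps(1))   # about 1312.34 steps per kilometre
print(steps_to_km(1))   # 0.000762 kilometres per step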
›› Definition: Kilometer
A kilometre (American spelling: kilometer, symbol: km) is a unit of length equal to 1000 metres (from the Greek words khilia = thousand and metro = count/measure). It is approximately equal to 0.621
miles, 1094 yards or 3281 feet.
›› Metric conversions and more
ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data.
Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres
squared, grams, moles, feet per second, and many more!
|
{"url":"http://www.convertunits.com/from/kilometers/to/steps","timestamp":"2014-04-19T17:03:03Z","content_type":null,"content_length":"20010","record_id":"<urn:uuid:1e87d5d5-a581-43a8-bb53-dd38cb08346a>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
|
resolve in N...
December 21st 2008, 11:52 PM
the equation:
$x^2-3y^2+4z^2=0$
good luck
sorry, it's not in N but in Z
there isn't any difference ^^
December 22nd 2008, 02:28 AM
First, we obviously have the trivial solution $(x,y,z)=(0,0,0)$
Now suppose there's another solution $(x,y,z)$
If $x$ and $z$ are not multiples of 3 then $x^2+4z^2\equiv 5\equiv 2\pmod{3}$, a contradiction, since $x^2+4z^2=3y^2\equiv 0\pmod{3}$.
Again, if only one of them ($x$ or $z$) is a multiple of 3, we get a contradiction (try it).
So both $x$ and $z$ are multiples of 3. Thus $3y^2$ is a multiple of 9, so $y$ must be a multiple of 3. Now set $x=3x'$, $y=3y'$, $z=3z'$.
We get: $x'^2-3y'^2+4z'^2=0$ (dividing by 9)
And it follows by infinite descent that there can be no other solution.
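For anyone who wants a quick numerical sanity check of the descent argument (not a proof), a minimal Python sketch that searches a bounded box of integers for nontrivial solutions looks like this:

# search |x|, |y|, |z| <= N for nontrivial integer solutions of x^2 - 3y^2 + 4z^2 = 0
N = 50
solutions = [(x, y, z)
             for x in range(-N, N + 1)
             for y in range(-N, N + 1)
             for z in range(-N, N + 1)
             if x*x - 3*y*y + 4*z*z == 0 and (x, y, z) != (0, 0, 0)]
print(solutions)   # expected to be empty: only the trivial solution exists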
December 22nd 2008, 03:04 AM
yes, good, I'll post another equation ^^
|
{"url":"http://mathhelpforum.com/number-theory/65765-resolve-n-print.html","timestamp":"2014-04-17T21:01:04Z","content_type":null,"content_length":"6665","record_id":"<urn:uuid:587d26ab-c84c-47b9-bc0d-b5901d6098da>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
|
vba Using Eval but maintaining internal string
So I am using Instr with Evaluation and facing some difficulties
The code is as follows
Evaluate( "Instr(" & myString1 & "," & myString2 & ")" & myIneq & cstr(0)
I am getting an Error 2029. Based off this msdn link I am assuming it is trying to evaluate "Hello" as a variable name. What is the work around for this, I know there must be one.
excel vba
2 Why are you using Evaluate? – SLaks Oct 10 '11 at 23:01
this is a more generic example of what I am trying to do – jason m Oct 10 '11 at 23:40
This is a bad security hole. Whatever you're trying to do, there are better ways to do it. – SLaks Oct 11 '11 at 0:12
I would be glad to know a better way. It is a function that will allow a user to filter a data set. The user will say if a field may or may not contain a string. That's it. The easiest way I
thought to do this was with instr and modifying the inequality. – jason m Oct 11 '11 at 3:29
O.K., seeing your comment here, there probably is a better way to do what you really want. If this is all VBA code, why can't you just call Instr directly? Why generate code and evaluate it? But
I'm glad my answer helped with your immediate issue. – jtolle Oct 11 '11 at 14:26
3 Answers
I infer from the Error 2029 (#NAME?) and the link that you're using Excel. In this case the answer is simple. Application.Evaluate evaluates Excel expressions, not VBA code. That is,
any functions you call in your expression have to be things you could call from an Excel formula. (And you're correct that Excel is trying to evaluate the value of a symbol it doesn't
recognize, and is thus giving you back a #NAME? error.)
There is an Excel worksheet function, FIND, that does pretty much the same thing that the VBA function Instr does, so if your example is not too simplified, that might be all you need
to do.
I just typed this into the Immediate window:
?Evaluate("FIND(""" & y & """, """ & x & """)")
ineq = ">"
?Evaluate("FIND(""" & y & """, """ & x & """)" & ineq & "0")
and it seems to work.
Note that Evaluate is a function, so it expects to receive a string argument, and then return what that string evaluates to if treated as an Excel formula-syntax expression. In your
example, you don't seem to be doing anything with the return value, so I thought I'd mention it.
I will try this and accept if it works. Thanks. – jason m Oct 11 '11 at 3:30
"Evaluate" doesn't understand all excel functions.
For example, trying to evaluate "instr" will give an Error 2029. But there is a nice workaround:
• "evaluate" recognizes all added vba functions of your excel sheet
• so just wrap a single-line function around the reluctant function.
Code will be similar to this:
Sub test()
    MsgBox Evaluate(" Instring(""Hello"",""el"") ")
    MsgBox "\o/ ! ... I owe a beer to someone out there"
End Sub

Function Instring(a, b)
    'make InStr visible to 'Evaluate'
    Instring = InStr(a, b)
End Function
You're evaluating the string InStr(Hello,el).
Obviously, that's not what you want.
You need to use a quoted string literal.
this is failing as well: "Instr(" & """ & dictFile(Key1)(strField) & """ & _ "," & """ & strFilter & """ & ")" – jason m Oct 10 '11 at 23:30
@jason: How does it look if you Debug.Print it? – Tim Williams Oct 11 '11 at 0:06
You need InStr(""" – SLaks Oct 11 '11 at 0:12
|
{"url":"http://stackoverflow.com/questions/7719623/vba-using-eval-but-maintaing-internal-string/10862382","timestamp":"2014-04-16T16:34:24Z","content_type":null,"content_length":"80746","record_id":"<urn:uuid:943075b0-89ff-476c-bc7c-5ad3b12eb511>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Teaching algorithms for multiplication
In the primary school, children are taught multiplication using a formal written method that is based on:
the place value system
multiplication tables up to 10 by 10
the distributive property of multiplication over addition.
Understanding the formal written algorithm for multiplication depends on assembling together understanding of several separate steps. Therefore the ideas must be introduced through a number of
stages. Students need to be competent and comfortable with each stage prior to moving onto the next stage. Ample experience with place value materials prior to the introduction of symbolic notation
will assist children consolidate knowledge at each stage.
The sections below give the reasoning behind the steps of the formal written algorithm, the intermediate forms which teachers use before students learn the most efficient procedure and link to
explanations using place value material.
Stage 2: Multiplication by a single digit
23 is 2 tens and 3 ones.
3 ones multiplied by 4 gives 12 ones and
2 tens multiplied by 4 gives 8 tens (that is 80).
80 and 12 are added to give the final product 92.
Children should write multiplication in this form for some time, until the procedure is familiar and the concepts (especially the distributive property) are well understood. Ruling up (or using squared paper) and labeling columns for tens and ones is recommended in the early stages. Later it can be reduced to a more compact form:
The 3 ones are first multiplied by 4 giving the product 12, which is 1 ten and 2 ones. 2 is written in the ones column and the 1 is recorded in the tens column. Now the 2 tens are multiplied by 4 to
give 8 tens. The 1 ten recorded before is added on, so the product has 9 tens.
Click here to see how this aspect of multiplication is explained with place value material.
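A minimal sketch of the Stage 2 procedure in code (Python here purely for illustration; multiply_expanded is a made-up name, not part of the CD-ROM):

def multiply_expanded(two_digit, single_digit):
    # split into tens and ones, multiply each part, then add the partial products
    tens, ones = divmod(two_digit, 10)
    partial_ones = ones * single_digit        # e.g. 3 x 4 = 12
    partial_tens = tens * 10 * single_digit   # e.g. 20 x 4 = 80
    return partial_tens + partial_ones

print(multiply_expanded(23, 4))   # 80 + 12 = 92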
Stage 3: Multiplication by ten
Children must learn how to multiply by multiples of ten. It is very important that they know that to multiply a whole number by ten a zero can be "added to the number", but this is dangerous
terminology. Teachers should be aware that many children may fall victim to simply knowing the rule of "adding zeros". It is important that children understand that the effect of the zero is to move
digits into the next larger place value column.
10 x 2 = 10 x 2 ones = 2 tens = 20
10 x 152 = 10 x (1 hundred + 5 tens + 2 ones)
= 10 hundreds + 50 tens + 20 ones
= 1 thousand + 5 hundreds + 2 tens
= 1520
Click here to see how this is explained with place value material.
Stage 4. Multiplication by a multiple of ten and power of ten
After learning how to multiply by ten, children can see how to multiply by multiples of ten. This step relies on understanding the associative property of multiplication. This step is NOT assisted by place value or other concrete materials.
Multiplication by 30 is done by multiplying by 3 and then by 10 (or vice versa). To multiply by 30, first multiply by ten (by putting down the zero) and then by 3
To multiply by 300, first multiply by one hundred (by multiplying by ten and then by ten again i.e. putting down two zeros and so moving the digits by two place value columns) and then by 3
Stage 5: Multiplication by a number with two or more digits
These multiplications require understanding of all that has come before. They are less important now that calculators are common, so not all children need enough practice to be able to calculate accurately with very large numbers. Extensive practice is no longer a high priority.
To compute this product, 57 is first multiplied by 6 ones and then by 4 tens. The two results are then added to get the final result. It will be written down as follows.
Next, 57 is multiplied by 40 (this is done by multiplying by 10, putting down the zero, and then multiplying by 4)
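The same idea, written as a minimal sketch for the two-digit case (again Python for illustration only; long_multiply is a made-up name):

def long_multiply(n, m):
    # multiply by the ones digit, then by the tens digit times ten, then add
    tens, ones = divmod(m, 10)
    first = n * ones         # 57 x 6  = 342
    second = n * tens * 10   # 57 x 40 = 2280
    return first + second

print(long_multiply(57, 46))   # 342 + 2280 = 2622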
Other ways of setting out the algorithm
There are many slightly different ways of setting out the algorithm. The choice is unimportant, except that omitting zeros (as in the final example) is inadvisable. Children are more likely to keep
columns aligned if they put in the zeros. Using squared paper for mathematics is a great idea to help keep digits aligned - in many countries normal writing paper is already squared.
• Move carry digits somewhere else and move the multiplication sign to the other side.
• Move carry digits somewhere else and place the multiplication sign on top.
• Omitting the zeros: not advised.
|
{"url":"https://extranet.education.unimelb.edu.au/DSME/tmwc/wholenumbers/multiply/algorith.shtml","timestamp":"2014-04-17T21:28:01Z","content_type":null,"content_length":"21941","record_id":"<urn:uuid:98f9997a-85e9-402e-b2be-bd9075286ade>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Common Core Standards
NAEP Math Curriculum Study
Introduction to the Math Shifts
This module provides participants with an introduction to the key shifts required by the CCSS for Math.
Here is a suggested arrangement of the high school standards into courses, developed with funding from the Bill and Melinda Gates Foundation and the Pearson Foundation, by a group of people including
Patrick Callahan and Brad Findell. I haven’t looked at it closely, but it seems to be a solid effort by people familiar with the standards, so I put it up for comment and discussion. There are five
files: the first four are graphic displays of the arrangement of the standards into both traditional and integrated sequences, with the standards referred to by their codes. The fifth is a
description of the arrangement with the text of the standards and commentary.
Arranging the high school standards into courses | Tools for the Common Core
Kansas Common Core Standards > Resources > 2012 Summer Academies
Staff Training
“And I’m calling on our nation’s governors and state education chiefs to develop standards and assessments that don’t simply measure whether students can fill in a bubble on a test, but whether they
possess 21st Century skills like problem solving and critical thinking and entrepreneurship and creativity.” President Obama, 1 March 2009. New for 2013: Ten new 'Classroom Challenge' formative
assessment lessons for Middle School are now available, including the first five lessons for Grade 6.
|
{"url":"http://www.pearltrees.com/rwillis/common-core-standards/id5261486","timestamp":"2014-04-18T16:24:48Z","content_type":null,"content_length":"18428","record_id":"<urn:uuid:0d853609-902c-4524-8e33-641e89b774d9>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Identity Property of Addition
From WikiEducator
This glossary is far from complete. We are constantly adding math terms.
For instructions on adding new terms, please refer to Math Glossary Main Page
Identity property of Addition
When zero is added to any real number, the number remains unchanged, i.e. it does not lose its identity. The number zero is called the identity element, and this property is called the identity property of addition.
−56.87 + 0 = −56.87
786 + 0 = 786
|
{"url":"http://wikieducator.org/MathGloss/I/Identity_Property_of_Addition","timestamp":"2014-04-19T22:05:50Z","content_type":null,"content_length":"19589","record_id":"<urn:uuid:8efcfff1-9858-486d-93fe-fab7435d839f>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Happy Leap Day! Persian edition
Roland Young has brought to my attention that the Persian calendar uses a hybrid 7/29 and 8/33 system. I was going to post this as an addendum to today's Leap Day article, but it got too long.
If I understand the rules correctly, to determine if a Persian year is a leap year, one applies the following algorithm to the Persian year number y. (Note that the current Persian year is not 2008,
but 1386. Persian year 1387 will begin on the vernal equinox.) I will write a % b to denote the remainder when a is divided by b. Then:
1. Let a = (y + 2345) % 2820.
2. If a is 2819, y is a leap year. Otherwise,
3. Let b = a % 128.
4. If b < 29, let c = b. Otherwise, let c = (b - 29) % 33.
5. If c = 0, y is not a leap year. Otherwise,
6. If c is a multiple of 4, y is a leap year. Otherwise,
7. y is not a leap year.
(Perl source code is available.)
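For readers who prefer to see the rule in code, here is an illustrative transcription into Python (the linked source is Perl; this sketch merely restates the steps above, and the function name is mine):

def is_persian_leap(y):
    a = (y + 2345) % 2820
    if a == 2819:
        return True
    b = a % 128
    c = b if b < 29 else (b - 29) % 33
    if c == 0:
        return False
    return c % 4 == 0

# 683 leap years out of every 2820, as claimed below:
print(sum(is_persian_leap(y) for y in range(2820)))   # 683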
This produces 683 leap years out of every 2820, which means that the average calendar year is 365.24219858 days.
How does this compare with the Dominus calendar? It is indeed more accurate, but I consider 683/2820 to be an unnecessarily precise representation of the vernal equinox year, especially inasmuch as
the length of the year is changing. And the rule, as you see, is horrendous, requiring either a 2,820-entry lookup table or complicated logic.
Moreover, the Persian and Gregorian calendar are out of sync at present. Persian year 1387, which begins next month on the vernal equinox, is a leap year. But the intercalation will not take place
until the last day of the year, around 21 March 2009. The two calendars will not sync up until the year 2092/1470, and then will be confounded only eight years later by the Gregorian 100-year
exception. After that they will agree until 2124/1502. Clearly, even if it were advisable to switch to the Persian calendar, the time is not yet right.
I found this Frequently Asked Questions About Calendars page extremely helpful in preparing this article. The Wikipedia article was also useful. Thanks again to Roland Young for bringing this matter
to my attention.
[Other articles in category /calendar] permanent link
Happy Leap Day!
I have an instructive followup to yesterday's article all ready to go, analyzing a technique for finding rational roots of polynomials that I found in the First Edition of the Encyclopædia
Britannica. A typically Universe-of-Discourse kind of article. But I'm postponing it to next month so that I can bring you this timely update.
Everyone knows that our calendar periodically contains an extra day, known to calendar buffs as an "intercalary day", to help make it line up with the seasons, and that this intercalary day is
inserted at the end of February. But, depending on how you interpret it, this isn't so. The extra day is actually inserted between February 23 and February 24, and the rest of February has to move
down to make room.
I will explain. In Rome, 23 February was a holiday called Terminalia, sacred to Terminus, the god of boundary markers. Under the calendars of the Roman Republic, used up until 46 BCE, an intercalary
month, Mercedonius, was inserted into the calendar from time to time. In these years, February was cut down to 23 days (and good riddance; nobody likes February anyway) and Mercedonius was inserted
at the end.
When Julius Caesar reformed the calendar in 46, he specified that there would be a single intercalary day every four years much as we have today. As in the old calendar, the intercalary day was
inserted after Terminalia, although February was no longer truncated.
So the extra day is actually 24 February, not 29 February. Or not. Depends on how you look at it.
Scheduling intercalary days is an interesting matter. The essential problem is that the tropical year, which is the length of time from one vernal equinox to the next, is not an exact multiple of one
day. Rather, it is about 365¼ days. So the vernal equinox moves relative to the calendar date unless you do something to fix it. If the tropical year were exactly 365¼ days long, then four tropical
years would be exactly 1461 days long, and it would suffice to make four calendar years 1461 days long, to match. This can be accomplished by extending the 365-day calendar year with one intercalary
day every four years. This is the Julian system.
Unfortunately, the tropical year is not exactly 365¼ days long. It is closer to 365.24219 days long. So how many intercalary days are needed?
It suffices to make 100,000 calendar years total exactly 36,524,219 days, which can be accomplished by adding a day to 24,219 years out of every 100,000. But this requires a table with 100,000
entries, which is too complicated.
We would like to find a system that requires a simpler table, but which is still reasonably accurate. The Julian system requires a table with 4 entries, but gives a calendar year that averages 365.25
days long, which is 0.00781 too many. Since this is about 1/128 day, the Julian calendar "gains a day" every 128 years or so, which means that the vernal equinox slips a day earlier every 128 years,
and eventually the daffodils and crocuses are blooming in January.
Not everyone considers this a problem. The Islamic calendar is only 355 days long, and so "loses" 10 days per year, which means that after 18 years the Islamic new year has moved half a year relative
to the seasons. The annual Islamic holy month of Ramadan coincided with July-August in 1980 and with January-February in 1997. The Muslims do intercalate, but they do it to keep the months in line
with the phases of the moon.
Still, supposing that we do consider this a problem, we would like to find an intercalation scheme that is simple and accurate. This is exactly the problem of finding a simple rational approximation
to 0.24219. If p/q is close to 0.24219, then one can introduce p intercalary days every q years, and q is the size of the table required. The Julian calendar takes p/q = 1/4 = 0.25, for an error
around 1/128. The Gregorian calendar takes p/q = 97/400 = 0.2425, for an error of around 1/3226. Again, this means that the Gregorian calendar gains a day on the seasons every 3,226 years or so. Can
we do better?
Any time the question is "find a simple rational approximation to a number" the answer is likely to involve continued fractions. 365.24219 is equal to:
$$ 365 + {1\over \displaystyle 4 + {\strut 1\over\displaystyle 7 + {\strut 1\over\displaystyle 1 + {\strut 1\over\displaystyle 3 + {\strut 1\over\displaystyle 24 + {\strut 1\over\displaystyle 6 + \cdots }}}}}}$$
which for obvious reasons, mathematicians abbreviate to [365; 4, 7, 1, 3, 24, 6, 2, 2]. This value is exact. (I had to truncate the display above because of a bug in my TeX formula tool: the full
fraction goes off the edge of the A0-size page I use as a rendering area.)
As I have mentioned before, the reason this horrendous expression is interesting is that if you truncate it at various points, the values you get are the "continuants", which are exactly the best
possible rational approximations to the original number. For example, if we truncate it to [365], we get 365, which is the best possible integer approximation to 365.24219. If we truncate it to [365;
4], we get 365¼, which is the Julian calendar's approximation.
Truncating at the next place gives us [365; 4, 7], which is 365 + 1/(4 + 1/7) = 365 + 1/(29/7) = 365 + 7/29. In this calendar we would have 7 intercalary days out of 29, for a calendar year of 365.241379 days on average. This calendar loses one day every 1,234 years.
The next convergent is [365; 4, 7, 1] = 365 + 8/33, which requires 8 intercalary days every 33 years for an average calendar year of 365.242424 days. This schedule gains only one day in 4,269 years and so is actually more accurate than the Gregorian calendar currently in use, while requiring a table with only 33 entries instead of 400.
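If you want to recover these convergents yourself, a minimal sketch (Python, using the standard fractions module) will do it:

from fractions import Fraction

def convergents(terms):
    # evaluate the continued fraction truncated after k terms, for each k
    for k in range(1, len(terms) + 1):
        value = Fraction(terms[k - 1])
        for t in reversed(terms[:k - 1]):
            value = t + 1 / value
        yield value

for c in convergents([365, 4, 7, 1, 3, 24, 6, 2, 2]):
    print(c, float(c))
# the first few are 365, 365 + 1/4, 365 + 7/29, and 365 + 8/33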
The real question, however, is not whether the table can be made smaller but whether the rule can be made simpler. The rule for the Gregorian calendar requires second-order corrections:
1. If the year is a multiple of 400, it is a leap year; otherwise
2. If the year is a multiple of 100, it is not a leap year; otherwise
3. If the year is a multiple of 4, it is a leap year.
And one frequently sees computer programs that omit one or both of the exceptions in the rule.
The 8/33 calendar requires dividing by 33, which is its most serious problem. But it can be phrased like this:
1. Divide the year by 33. If the remainder is 0, it is not a leap year. Otherwise,
2. If the remainder is divisible by 4, it is a leap year.
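Here is a minimal sketch of the two rules side by side (Python, illustrative only; the function names are mine):

def is_gregorian_leap(y):
    if y % 400 == 0:
        return True
    if y % 100 == 0:
        return False
    return y % 4 == 0

def is_dominus_leap(y):
    r = y % 33
    if r == 0:
        return False
    return r % 4 == 0

print(sum(is_dominus_leap(y) for y in range(33)))      # 8 leap years per 33
print(is_gregorian_leap(2016), is_dominus_leap(2016))  # True False: the calendars diverge in 2016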
The rule is simpler, and the weird exceptions come every 33 years instead of every 100. This means that people are more likely to remember them. If you are a computer programmer implementing calendar
arithmetic, and you omit the 400-year exception, it may well happen that nobody else will catch the error, because most of the time there is nobody alive who remembers one. (Right now, many people
remember one, because it happened, for the second time ever, only 8 years ago. We live at an unusual moment of history.) But if you are a computer programmer who omits the exception in the 8/33
calendar, someone reviewing your code is likely to speak up: "Hey, isn't there some exception when the result is 0? I think I remember something like that happening in third grade."
Furthermore, the rule as I gave it above has another benefit: it matches the Gregorian calendar this year and will continue to do so for several years. This was more compelling when I first proposed
this calendar back in 1998, because it would have made the transition to the new calendar quite smooth. It doesn't matter which calendar you use until 2016, which is a leap year in the Gregorian
calendar but not in the 8/33 calendar as described above. I may as well mention that I have modestly named this calendar the Dominus calendar.
But time is running out for the smooth transition. If we want to get the benefits of the Dominus calendar we have to do it soon. Help spread the word!
[ Pre-publication addendum: Wikipedia informs me that it is not correct to use the tropical year, since this is not in fact the time between vernal equinoxes, owing to the effects of precession and
nutation. Rather, one should use the so-called vernal equinox year, which is around 365.2422 days long. The continued fraction for 365.2422 is slightly different from that of 365.24219, but its first
few convergents are the same, and all the rest of the analysis in the article holds the same for both years. ]
[ Addendum 20080229: The Persian calendar uses a hybrid 7/29 and 8/33 system. Read all about it. ]
[Other articles in category /calendar] permanent link
Algebra techniques that don't work, except when they do
In Problems I Can't Fix in the Lecture Hall, Rudbeckia Hirta describes the efforts of a student to solve the equation 3x^2 + 6x - 45 = 0. She describes "the usual incorrect strategy selected by
students who can't do algebra":
3x^2 + 6x - 45 = 0
3x^2 + 6x = 45
x(3x + 6) = 45
She says "I stopped him before he factored out the x.".
I was a bit surprised by this, because the work so far seemed reasonable to me. I think the only mistake was not dividing the whole thing by 3 in the first step. But it is not too late to do that,
and even without it, you can still make progress. x(3x + 6) = 45, so if there are any integer solutions, x must divide 45. So try x = ±1, ±3, ±5, ±9, ±15 in roughly that order. (The "look for the
wallet under the lamppost" principle.) x = 3 solves the equation, and then you can get the other root, x=-5, by further application of the same method, or by dividing the original polynomial by x-3,
or whatever.
If you get rid of the extra factor of 3 in the first place, the thing is even easier, because you have x(x + 2) = 15, so x = ±1, ±3, or ±5, and it is obviously solved by x=3 and x=-5.
Now obviously, this is not always going to work, but it works often enough that it would have been the first thing I would have tried. It is a lot quicker than calculating b^2 - 4ac when c is as big
as 45. If anyone hassles you about it, you can get them off your back by pointing out that it is an application of the so-called rational root theorem.
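A minimal sketch of the divisor-trying strategy (Python; integer_roots is a made-up helper, and it assumes a nonzero constant term):

def integer_roots(poly):
    # any integer root of a polynomial with integer coefficients divides the constant term
    c = abs(poly(0))
    divisors = [d for d in range(1, c + 1) if c % d == 0]
    return [x for d in divisors for x in (d, -d) if poly(x) == 0]

print(integer_roots(lambda x: x*x + 2*x - 15))   # [3, -5]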
But probably the student did not have enough ingenuity or number sense to correctly carry off this technique (he didn't notice the 3), so that M. Hirta's advice to just use the damn quadratic formula
already is probably good.
Still, I wonder if perhaps such students would benefit from exposure to this technique. I can guess M. Hirta's answer to this question: these students will not benefit from exposure to anything.
[ Addendum 20080228: Robert C. Helling points out that I could have factored the 45 in the first place, without any algebraic manipulations. Quite so; I completely botched my explanation of what I
was doing. I meant to point out that once you have x(x+2) = 15 and the list [1, 3, 5, 15], the (3,5) pair jumps out at you instantly, since 3+2=5. I spent so much time talking about the unreduced
polynomial x(3x+6) that I forgot to mention this effect, which is much less salient in the case of the unreduced polynomial. My apologies for any confusion caused by this omission. ]
[ Addendum 20080301: There is a followup to this article. ]
[Other articles in category /math] permanent link
Uniquely-decodable codes
Ricardo J.B. Signes asked me a few days ago if there was a way to decide whether a given set S of strings had the property that any two distinct sequences of strings from S have distinct concatenations.
For example, consider S[1] = { "ab", "abba", "b" }. This set does not have the specified property, because you can take the two sequences [ "ab", "b", "ab" ] and [ "abba", "b" ], and both concatenate
to "abbab". But S[2] = { "a", "ab", "abb" } does have this property.
Coding theory
In coding theory, the property has the awful name "unique decodability". The idea is that you have some input symbols, and each input symbol is represented with one output symbol, which is one of the
strings from S. Then suppose you receive some message like "abbab". Can you figure out what the original input was? For S[2], yes: it must have been ZY. But for S[1], no: it could have been either YZ
or XZX.
In coding theory, the strings are called "code words" and the set of strings is a "code". So the question is how to tell whether a code is uniquely-decodable. One obvious way to take a
non-uniquely-decodable code and turn it into a uniquely-decodable code is to append delimiters to the code words. Consider S[1] again. If we delimit the code words, it becomes { "(ab)", "(abba)", "
(b)" }, and the two problem sequences are now distinguishable, since "(ab)(b)(ab)" looks nothing like "(abba)(b)". It should be clear that one doesn't need to delimit both ends; the important part is
that the words are separated, so one could use { "ab-", "abba-", "b-" } instead, and the problem sequences translate to "ab-b-ab-" and "abba-b-". So every non-uniquely-decodable code corresponds to a
uniquely-decodable code in at least this trivial way, and often the uniquely-decodable property is not that important in practice because you can guarantee uniquely-decodableness so easily just by
sticking delimiters on the code words.
But if you don't want to transmit the extra delimiters, you can save bandwidth by making your code uniquely-decodable even without delimiters. The delimiters are a special case of a more general
principle, which is that a prefix code is always uniquely-decodable. A prefix code is one where no code word is a prefix of another. Or, formally, there are no code words x and y such that x = ys for
some nonempty s. Adding the delimiters to a code turns it into a prefix code. But not all prefix codes have delimiters. { "a", "ba", "bba", "bbba" } is an example, as are { "aa", "ab", "ba", "bb" }
and { "a", "baa", "bab", "bb" }.
The proof of this is pretty simple: you have some concatenation of code words, say T. You can decode it as follows: Find the unique code word c such that c is a prefix of T; that is, such that T = cU
. There must be such a c, because T is a concatenation of code words. And c must be unique, because if there were c' and U' with both cU = T and c'U' = T, then cU = c'U', and whichever of c or c' is
shorter must be a prefix of the one that is longer, and that can't happen because this is a prefix code. So c is the first code word in T, and we can pull it off and repeat the process for U, getting
a unique sequence of code words, unless U is empty, in which case we are done.
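A minimal sketch of this decoding procedure (Python, illustrative; it assumes T really is a concatenation of code words):

def decode_prefix_code(T, code):
    # repeatedly strip off the unique code word that is a prefix of what remains
    words = []
    while T:
        c = next(w for w in code if T.startswith(w))   # unique, since this is a prefix code
        words.append(c)
        T = T[len(c):]
    return words

print(decode_prefix_code("abaabb", ["a", "baa", "bab", "bb"]))   # ['a', 'baa', 'bb']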
There is a straightforward correspondence between prefix codes and trees; the code words can be arranged at the leaves of a tree, and then to decode some concatenation T you can scan its symbols one
at a time, walking the tree, until you get to a leaf, which tells you which code word you just saw. This is the basis of Huffman coding.
Prefix codes include, as a special case, codes where all the words are the same length. For those codes, the tree is balanced, and has all branches the same length.
But uniquely-decodable codes need not be prefix codes. Most obviously, a suffix code is uniquely-decodable and may not be a prefix code. For example, {"a", "aab", "bab", "bb" } is uniquely-decodable
but is not a prefix code, because "a" is a prefix of "aab". The proof of uniquely-decodableness is obvious: this is just the last prefix code example from before, with all the code words reversed. If
there were two sequences of words with the same concatenation, then the reversed sequences of reversed words would also have the same concatenation, and this would show that the code of the previous
paragraph was not uniquely-decodable. But that was a prefix code, and so must be uniquely-decodable.
But codes can be uniquely-decodable without being either prefix or suffix codes. For example, { "aabb", "abb", "bb", "bbba" } is uniquely-decodable but is neither a prefix nor a suffix code. Rik
wanted a method for deciding.
I told Rik about the prefix code stuff, which at least provides a sufficient condition for uniquely-decodableness, and then started poking around to see what else I could learn. Ahem, I mean,
researching. I suppose that a book on elementary coding theory would have a discussion of the problem, but I didn't have one at hand, and all I could find online concerned prefix codes, which are of
more practical interest because of the handy tree method for speedy decoding.
But after tinkering with it for a couple of days (and also making an utterly wrong intermediate guess that it was undecidable, based on a surface resemblance to the Post correspondence problem) I did
eventually figure out an algorithm, which I wrote up and released on CPAN, my first CPAN post in about a year and a half.
An example
The idea is pretty simple, and I think best illustrated by an example, as so many things are. We will consider { "ab", "abba", "b" } again. We want to find two sequences of code words whose
concatenations are the same. So say we want pX[1] = qY[1], where p and q are code words and X[1] and Y[1] are some longer strings. This can only happen if p and q are different lengths and if one is
a prefix of the other, since otherwise the two strings pX[1] and qY[1] don't begin with the same symbols. So we consider just the cases where p is a prefix of q, which means that in this example we
want to find "ab"X[1] = "abba"Y[1], or, equivalently, X[1] = "ba"Y[1].
Now X[1] must begin with "ba", so we need to either find a code word that begins with "ba", or we need to find a code word that is a prefix of "ba". The only choice is "b", so we have X[1] = "b"X[2],
and so X[1] = "b"X[2] = "ba"Y[1], or equivalently, X[2] = "a"Y[1].
Now X[2] must begin with "a", so we need to either find a code word that begins with "a", or we need to find a code word that is a prefix of "a". This occurs for "abba" and "ab". So we now have two
situations to investigate: "ab"X[3] = "a"Y[1], and "abba"X[4] = "a"Y[1]. Or, equivalently, "b"X[3] = Y[1], and "bba"X[4] = Y[1].
The first of these, "b"X[3] = Y[1] wins immediately, because "b" is a code word: we can take X[3] to be empty, and Y[1] to be "b", and we have what we want:
"ab" X[1] = "abba" Y[1]
"ab" "b" X[2] = "abba" Y[1]
"ab" "b" "ab" X[3] = "abba" Y[1]
"ab" "b" "ab" = "abba" "b"
where the last line of the table is exactly the solution we seek.
Following the other one, "bba"X[4] = Y[1], fails, and in a rather interesting way. Y[1] must begin with two "b" words, so put "bb"Y[2] = Y[1], so "bba"X[4] = "bb"Y[2], then "a"X[4] = Y[2].
But this last equation is essentially the same as the X[2] = "a"Y[1] situation we were investigating earlier; we are just trying to make two strings that are the same except that one has an extra "a"
on the front. So this investigation tells us that if we could find two strings with "a"X = Y, we could make longer strings "abba"Y = "b" "b" "a"X. This may be interesting, but it does not help us
find what we really want.
The algorithm
Having seen an example, here's the description of the algorithm. We will tabulate solutions to Xs = Y, where X and Y are sequences of code words, for various strings s. If s is empty, we win.
We start the tabulation by looking for pairs of keywords c[1] and c[2] with c[1] a prefix of c[2], because then we have c[1]s = c[2] for some s. We maintain a queue of s-values to investigate. At one
point in our example, we had X[1] = "ba"Y[1]; here s is "ba".
If s begins with a code word, then s = cs', so we can put s' on the queue. This is what happened when we went from X[1] = "ba"Y[1] to "b"X[2] = "ba"Y[1] to X[2] = "a"Y[1]. Here s was "ba" and s' was "a".
If s is a prefix of some code word, say ss' = c, then we can also put s' on the queue. This is what happened when we went from X[2] = "a"Y[1] to "abba"X[4] = "a"Y[1] to "bba"X[4] = Y[1]. Here s was
"a" and s' was "bba".
If we encounter some queue item that we have seen before, we can discard it; this will prevent us from going in circles. If the next queue item is the empty string, we have proved that the code is
not uniquely-decodable. (Alternatively, we can stop just before queueing the empty string.) If the queue is empty, we have investigated all possibilities and the code is uniquely-decodable.
Here's the summary:
1. Initialization: For each pair of code words c[1] and c[2] with c[1]s = c[2], put s in the queue.
2. Main loop: Repeat the following until termination
□ If the queue is empty, terminate. The code is uniquely-decodable.
□ Otherwise:
1. Take an item s from the queue.
2. For each code word c:
○ If c = s, terminate. The code is not uniquely-decodable.
○ If cs' = s, and s' has not been seen before, queue s'.
○ If c = ss', and s' has not been seen before, queue s'.
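Here is a minimal sketch of that procedure in Python (the implementation I released is in Perl; this is just an illustration of the summary, without the bookkeeping described next):

from collections import deque

def uniquely_decodable(code):
    code = set(code)
    queue, seen = deque(), set()
    # initialization: for each pair c1, c2 with c1 + s == c2, queue s
    for c1 in code:
        for c2 in code:
            if c1 != c2 and c2.startswith(c1):
                s = c2[len(c1):]
                if s not in seen:
                    seen.add(s)
                    queue.append(s)
    # main loop
    while queue:
        s = queue.popleft()
        for c in code:
            if c == s:
                return False   # two ambiguous sequences exist
            # first case: s = c + s' (s begins with a code word)
            # second case: c = s + s' (s is a prefix of a code word)
            for t in (s[len(c):] if s.startswith(c) else None,
                      c[len(s):] if c.startswith(s) else None):
                if t and t not in seen:
                    seen.add(t)
                    queue.append(t)
    return True

print(uniquely_decodable(["ab", "abba", "b"]))   # False
print(uniquely_decodable(["a", "ab", "abb"]))    # True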
To this we can add a little bookkeeping so that the algorithm emits the two ambiguous sequences when the code is not uniquely-decodable. The implementation I wrote uses a hash to track which strings
s have appeared in the queue already. Associated with each string s in the hash are two sequences of code words, P and Q, such that Ps = Q. When s begins with a code word, so that s = cs', the
program adds s' to the hash with the two sequences [P, c] and Q. When s is a prefix of a code word, so that ss' = c, the program adds s' to the hash with the two sequences Q and [P, c]; the order of
the sequences is reversed in order to maintain the Ps = Q property, which has become Qs' = Pss' = Pc in this case.
As I said, I suspect this is covered in every elementary coding theory text, but I couldn't find it online, so perhaps this writeup will help someone in the future.
After solving this problem I meditated a little on my role in the programming community. The kind of job I did for Rik here is a familiar one to me. When I was in college, I was the math guy who hung
out in the computer lab with the hackers. Periodically one of them would come to me with some math problem: "Crash, I am writing a ray tracer. If I have a ray and a triangle in three dimensions, how
can I figure out if the ray intersects the triangle?" And then I would go off and figure out how to do that and come back with the algorithm, perhaps write some code, or perhaps provide some
instruction in matrix computations or whatever was needed. In physics class, I partnered with Jim Kasprzak, a physics major, and we did all the homework together. We would read the problem, which
would be some physics thing I had no idea how to solve. But Jim understood physics, and could turn the problem from physics into some mathematics thing that he had no idea how to solve. Then I would
do the mathematics, and Jim would turn my solution back into physics. I wish I could make a living doing this.
Puzzle: Is { "ab", "baab", "babb", "bbb", "bbba" } uniquely-decodable? If not, find a pair of sequences that concatenate to the same string.
Research question: What's the worst-case running time of the algorithm? The queue items are all strings that are strictly shorter than the longest code word, so if this has length n, then the main
loop of the algorithm runs at most (a^n-1) / (a-1) times, where a is the number of symbols in the alphabet. But can this worst case really occur, or is the real worst case much faster? In practice
the algorithm always seems to complete very quickly.
Project to do: Reimplement in Haskell. Compare with Perl implementation. Meditate on how they can suck in such completely different ways.
[ There is a brief followup to this article. ]
[Other articles in category /CS] permanent link
Crappiest literary theory this month
Someone on Wikipedia has been pushing the theory that the four bad children in Charlie and the Chocolate Factory correspond to the seven deadly sins.
[Other articles in category /book] permanent link
Once I was visiting my grandparents while home from college. We were in the dining room, and they were talking about a book they were reading, in which the author had used a word they did not know:
cornaptious. I didn't know it either, and got up from the table to look it up in their Webster's Second International Dictionary. (My grandfather, who was for his whole life both cantankerous and a
professional editor, loathed the permissive and descriptivist Third International. The out-of-print Second International Edition was a prized Christmas present that in those days was hard to find.)
Webster's came up with nothing. Nothing but "corniculate", anyway, which didn't appear to be related. At that point we had exhausted our meager resources. That's what things were like in those days.
The episode stuck with me, though, and a few years later when I became the possessor of the First Edition of the Oxford English Dictionary, I tried there. No luck. Some time afterwards, I upgraded to
the Second Edition. Still no luck.
Years went by, and one day I was reading The Lyre of Orpheus, by Robertson Davies. The unnamed Dean of the music school describes the brilliant doctoral student Hulda Schnakenburg:
"Oh, she's a foul-mouthed, cornaptious slut, but underneath she is all untouched wonderment."
"Aha," I said. "So this is what they were reading that time."
More years went by, the oceans rose and receded, the continents shifted a bit, and the Internet crawled out of the sea. I returned to the problem of "cornaptious". I tried a Google book search. It
found one use only, from The Lyre of Orpheus. The trail was still cold.
But wait! It also had a suggestion: "Did you mean: carnaptious", asked Google.
Ho! Fifty-six hits for "carnaptious", all from books about Scots and Irish. And the OED does list "carnaptious". "Sc. and Irish dial." it says. It means bad-tempered or quarrelsome. Had Davies
spelled it correctly, we would have found it right away, because "carnaptious" does appear in Webster's Second.
So that's that then. A twenty-year-old spelling error cleared up by Google Books.
[ Addendum 20080228: The Dean's name is Wintersen. Geraint Powell, not the Dean, calls Hulda Schnakenburg a cornaptious slut. ]
[Other articles in category /lang] permanent link
Acta Quandalia
Several readers have emailed me to discuss my recent articles about mathematical screwups, and a few have let drop casual comments that suggest that they think that I invented Acta Quandalia as a
joke. I can assure you that no journal is better than Acta Quandalia. Since it is difficult to obtain outside of university libraries, however, I have scanned the cover of one of last year's issues
for you to see:
[Other articles in category /math] permanent link
The least interesting number
Berry's paradox goes like this: Some natural numbers, like 2, are interesting. Some natural numbers, like 255610679 (I think), are not interesting. Consider the set of uninteresting natural numbers.
If this set were nonempty, it would contain a smallest element s. But then s would have the interesting property of being the smallest uninteresting number. This is a contradiction. So the set of
uninteresting natural numbers must be empty.
This reads like a joke, and it is tempting to dismiss it as a trite bit of foolishness. But it has rather interesting and deep connections to other related matters, such as the Grelling-Nelson
paradox and Gödel's incompleteness theorem. I plan to write about that someday.
But today my purpose is only to argue that there are demonstrably uninteresting real numbers. I even have an example. Liouville's number L is uninteresting. It is defined as:
$$\sum_{i=1}^\infty {10}^{-i!} = 0.110001000000000000000001\ldots$$
Why is this number of any concern? In 1844 Joseph Liouville showed that there was an upper bound on how closely an irrational algebraic number could be approximated by rationals. L can be
approximated much more closely than that, and so must therefore be transcendental. This was the proof of the existence of transcendental numbers.
The only noteworthy mathematical property possessed by L is its transcendentality. But this is certainly not enough to qualify it as interesting, since nearly all real numbers are transcendental.
Liouville's theorem shows how to construct many transcendental numbers, but the construction generates many similar numbers. For example, you can replace the 10 with a 2, or the n! with floor(e^n) or
any other fast-growing function. It appears that any potentially interesting property possessed by Liouville's number is also possessed by uncountably many other numbers. Its uninterestingness is
identical to that of other transcendental numbers constructed by Liouville's method. L was neither the first nor the simplest number so constructed, so Liouville's number is not even of historical interest.
The argument in Berry's paradox fails for the real numbers: since the real numbers are not well-ordered, the set of uninteresting real numbers need have no smallest element, and in fact (by Berry's
argument) does not. Liouville's number is not the smallest number of its type, nor the largest, nor anything else of interest.
If someone were to come along and prove that Liouville's number was the most uninteresting real number, that would be rather interesting, but it has not happened, nor is it likely.
[Other articles in category /math] permanent link
Trivial theorems
Mathematical folklore contains a story about how Acta Quandalia published a paper proving that all partially uniform k-quandles had the Cosell property, and then a few months later published another
paper proving that no partially uniform k-quandles had the Cosell property. And in fact, goes the story, both theorems were quite true, which put a sudden end to the investigation of partially
uniform k-quandles.
Except of course it wasn't Acta Quandalia (which would never commit such a silly error) and it didn't concern k-quandles; it was some unspecified journal, and it concerned some property of some sort
of topological space, and that was the end of the investigation of those topological spaces.
This would not qualify as a major screwup under my definition in the original article, since the theorems are true, but it certainly would have been rather embarrassing. Journals are not supposed to
publish papers about the properties of the empty set.
Hmm, there's a thought. How about a Journal of the Properties of the Empty Set? The editors would never be at a loss for material. And the cover almost designs itself.
Handsome, isn't it? I See A Great Need!
Ahem. Anyway, if the folklore in question is true, I suppose the mathematicians involved might have felt proud rather than ashamed, since they could now boast of having completely solved the problem
of partially uniform k-quandles. But on the other hand, suppose you had been granted a doctorate on the strength of your thesis on the properties of objects from some class which was subsequently
shown to be empty. Wouldn't you feel at least a bit like a fraud?
Is this story true? Are there any examples? Please help me, gentle readers.
[Other articles in category /math] permanent link
Steganography in 1665: correction
(A correction to this.)
Phil Rodgers has pointed out that a "physique" is not an emetic, as I thought, but a laxative.
Are there any among you who doubt that Bruce Schneier can shoot sluggbullets out of his ass? Let the unbelievers beware!
[Other articles in category /IT] permanent link
Steganography in 1665
Today's entry in Samuel Pepys' diary says:
He told us a very handsome passage of the King's sending him his message ... in a sluggbullet, being writ in cypher, and wrapped up in lead and swallowed. So the messenger come to my Lord and
told him he had a message from the King, but it was yet in his belly; so they did give him some physique, and out it come.
Sure, Bruce Schneier can mount chosen-ciphertext attacks without even choosing a ciphertext. But dare he swallow a "sluggbullet" and bring it up again to be read?
Silly me. Bruce Schneier can probably cough up a sluggbullet without swallowing one beforehand.
[ Addendum 20080205: A correction. ]
[Other articles in category /IT] permanent link
Major screwups in mathematics: example 1
Last month I asked for examples of major screwups in mathematics. Specifically, I was looking for cases in which some statement S was considered to be proved, and later turned out to be false. I
could not think of any examples myself.
Readers suggested several examples, and I got lucky and turned up one on my own.
Some of the examples were rather obscure technical matters, where Professor Snorfus publishes in Acta Quandalia that all partially uniform k-quandles have the Cosell property, and this goes
unchallenged for several years before one of the other three experts in partially uniform quandle theory notices that actually this is only true for Nemontovian k-quandles. I'm not going to report on
matters that sounded like that to me, although I realize that I'm running the risk that all the examples that I do report will sound that way to most of the audience. But I'm going to give it a try.
General remarks
I would like to make some general remarks first, but I don't quite know yet what they are. Two readers independently suggested that I should read Proofs and Refutations by Imre Lakatos, and raised a
number of interesting points that I'm sure I'd like to expand on, except that I haven't read the book. Both copies are checked out of the Penn library, which is a good sign, and the interlibrary loan
copy I ordered won't be here for several days.
Still, I can relate a partial secondhand understanding of the ideas, which seem worth repeating.
Whether a result is "correct" may be largely a matter of definition. Consider Lakatos' principal example, Euler's theorem about polyhedra: Let F, E, and V be the number of faces, edges, and vertices
in a polyhedron. Then F - E + V = 2. For example, the cube has (F, E, V) = (6, 12, 8), and 6 - 12 + 8 = 2.
Sometime later, someone observed that Euler's theorem was false for polyhedra with holes in them. For example, consider the object shown at right. It has (F, E, V) = (9, 18, 9), giving F - E + V = 9
- 18 + 9 = 0.
Can we say that Euler was wrong? Not really. The question hinges on the definition of "polyhedron". Euler's theorem is proved for "polyhedra", but we can see from the example above that it only holds
for "simply-connected polyhedra". If Euler proved his theorem at a time when "polyhedra" was implicitly meant "simply-connected", and the generally-understood definition changed out from under him,
we can't hold that against Euler. In fact, the failure of Euler's theorem for the object above suggests that maybe we shouldn't consider it to be a polyhedron, that it is somehow rather different
from a polyhedron in at least one important way. So the theorem drives the definition, instead of the other way around.
Okay, enough introductory remarks. My first example is unquestionably a genuine error, and from a first-class mathematician.
Mathematical background
Some terminology first. A "formula" is just that, for example something like this:
$$\displaylines{ ((\forall a.\lnot R(a,a)) \wedge\cr (\forall b\forall c.R(b,c)\to\lnot R(c,b))\wedge\cr (\forall d\forall e\forall f.(R(d,e)\wedge R(e,f)\to R(d,f))) \to\cr (\forall x\exists y.R(y,x))) }$$
It may contain a bunch of quantified variables (a, b, c, etc.), relations (like R), and logical connectives like ∧. A formula might also include functions and constants (which I didn't) or equality
symbols (there are none here).
One can ask whether the formula is true (or, in the jargon, "valid"), which means that it must hold regardless of how one chooses the set S from which the values of the variables will be drawn, and
regardless of the meanings assigned to the relation symbols (and to the functions and constants, if there are any). The following formula, although not very interesting, is valid:
$$ \forall a\exists b.(P(a)\wedge P(b))\to P(a) $$
This is true regardless of the meaning we ascribe to P, and regardless of the set from which a and b are required to be drawn.
The longer formula above, which requires that R be a linear order, and then that the linear order R have no minimal element, is not universally valid, but it is valid for some interpretations of R
and some sets S from which a...f, x, and y may be drawn. Specifically, it is true if one takes S to be the set of integers and R(x, y) to mean x < y. Such formulas, which are true for some
interpretations but not for all, are called "satisfiable". Obviously, valid formulas are satisfiable, because satisfiable formulas are true under some interpretations, but valid formulas are true
under all interpretations.
Gödel famously showed that it is an undecidable problem to determine whether a given formula of arithmetic is satisfiable. That is, there is no method which, given any formula, is guaranteed to tell
you correctly whether or not there is some interpretation in which the formula is true. But one can limit the form of the allowable formulas to make the problem easier. To take an extreme example,
just to illustrate the point, consider the set of formulas of the form:
∃a∃b... ((a=0)∨(a=1))∧((b=0)∨(b=1))∧...∧R(a,b,...)
for some number of variables. Since the formula itself requires that a, b, etc. are each either 0 or 1, all one needs to do to decide whether the formula is satisfiable is to try every possible
assignment of 0 and 1 to the n variables and see whether R(a,b,...) is true in any of the 2^n resulting cases. If so, the formula is satisfiable, if not then not.
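A minimal sketch of that brute-force check (Python; here R is taken to be an ordinary predicate on 0/1 values):

from itertools import product

def satisfiable(R, n):
    # try all 2^n assignments of 0 and 1 to the n variables
    return any(R(*assignment) for assignment in product((0, 1), repeat=n))

print(satisfiable(lambda a, b: a != b, 2))   # True: satisfied by a=0, b=1
print(satisfiable(lambda a, b: a < 0, 2))    # False: no 0/1 assignment works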
Kurt Gödel, 1933
One would like to prove decidability for a larger and more general class of formulas than the rather silly one I just described. How big can the class of formulas be and yet be decidable?
It turns out that one need only consider formulas where all the quantifiers are at the front, because there is a simple method for moving quantifiers to the front of a formula from anywhere inside.
So historically, attention has been focused on formulas in this form.
One fascinating result concerns the class of formulas called [∃^*∀^2∃^*, all, (0)]. These are the formulas that begin with ∃a∃b...∃m∀n∀p∃q...∃z, with exactly two ∀ quantifiers, with no intervening
∃s. These formulas may contain arbitrary relations amongst the variables, but no functions or constants, and no equality symbol. [∃^*∀^2∃^*, all, (0)] is decidable: there is a method which takes any
formula in this form and decides whether it is satisfiable. But if you allow three ∀ quantifiers (or two with an ∃ in between) then the set of formulas is no longer decidable. Isn't that freaky?
The decidability of the class [∃^*∀^2∃^*, all, (0)] was shown by none other than Gödel, in 1933. However, in the last sentence of his paper, Gödel added that the same was true even if the formulas
were also permitted to include equality:
In conclusion, I would still like to remark that Theorem I can also be proved, by the same method, for formulas that contain the identity sign.
This was believed to be true for more than thirty years, and the result was used by other mathematicians to prove other results. But in the mid-1960s, Stål Aanderaa showed that Gödel's proof would
not actually work if the formulas contained equality, and in 1983, Warren D. Goldfarb proved that Gödel had been mistaken, and the satisfiability of formulas in the larger class was not decidable.
Gödel's original 1933 paper is Zum Entscheidungsproblem des logischen Funktionenkalküls (On the decision problem for the functional calculus of logic) which can be found on pages 306–327 of volume I
of his Collected Works. (Oxford University Press, 1986.) There is an introductory note by Goldfarb on pages 226–231, of which pages 229–231 address Gödel's error specifically.
I originally heard the story from Val Tannen, and then found it recounted on page 188 of The Classical Decision Problem, by Egon Boerger, Erich Grädel, and Yuri Gurevich. But then blog reader Jeffrey
Kegler found the Goldfarb note, of which the Boerger-Grädel-Gurevich account appears to be a summary.
Thanks very much to everyone who contributed, and especially to M. Kegler.
(I remind readers who have temporarily forgotten, that Acta Quandalia is the quarterly journal of the Royal Uzbek Academy of Semi-Integrable Quandle Theory. Professor Snorfus, you will no doubt
recall, won that august institution's prestigious Utkur Prize in 1974.)
[ Addendum 20080206: Another article in this series. ]
[Other articles in category /math] permanent link
Addenda to recent articles 200801
Here are some notes on posts from the last month that I couldn't find better places for.
• As a result of my research into the Harriet Tubman mural that was demolished in 2002, I learned that it had been repainted last year at 2950 Germantown Avenue.
• A number of readers, including some honest-to-God Italians, wrote in with explanations of Boccaccio's term milliantanove, which was variously translated as "squillions" and "a thousand hundreds".
The "milli-" part suggests a thousand, as I guessed. And "-anta" is the suffix for multiples of ten, found in "quaranta" = "forty", akin to the "-nty" that survives in the word "twenty". And
"nove" is "nine".
So if we wanted to essay a literal translation, we might try "thousanty-nine". Cormac Ó Cuilleanáin's choice of "squillions" looks quite apt.
• My article about clubbing someone to death with a loaded Uzi neglected an essential technical point. I repeatedly said that
for my $k (keys %h) {
    if ($k eq $j) {
        f($h{$k});
    }
}
could be replaced with:
f($h{$j})
But this is only true if $j actually appears in %h. An accurate translation is:
f($h{$j}) if exists $h{$j}
I was, of course, aware of this. I left out discussion of this because I thought it would obscure my point to put it in, but I was wrong; the opposite was true.
I think my original point stands regardless, and I think that even programmers who are unaware of the existence of exists should feel a sense of unease when presented with (or after having
written) the long version of the code.
An example of this error appeared on PerlMonks shortly after I wrote the article.
• Robin Houston provides another example of a nonstandard adjective in mathematics: a quantum group is not a group.
We then discussed the use of nonstandard adjectives in biology. I observed that there seemed to be a trend to eliminate them, as with "jellyfish" becoming "jelly" and "starfish" becoming "sea
star". He pointed out that botanists use a hyphen to distinguish the standard from the nonstandard: a "white fir" is a fir, but a "Douglas-fir" is not a fir; an "Atlas cedar" is a cedar, but a
"western redcedar" is not a cedar.
Several people wrote to discuss the use of "partial" versus "total", particularly when one or the other is implicit. Note that a total order is a special case of a partial order, which is itself
a special case of an "order", but this usage is contrary to the way "partial" and "total" are used for functions: just "function" means a total function, not a partial function. And there are
clear cases where "partial" is a standard adjective: partial fractions are fractions, partial derivatives are derivatives, and partial differential equations are differential equations.
• Steve Vinoski posted a very interesting solution to my question about how to set Emacs file modes: he suggested that I could define a replacement aput function.
• In my utterly useless review of Robert Graves' novel King Jesus I said "But how many of you have read I, Claudius and Suetonius? Hands? Anyone? Yeah, I didn't think so." But then I got email from
James Russell, who said he had indeed read both, and that he knew just what I meant, and, as a result, was going directly to the library to take out King Jesus. And he read the article on Planet
Haskell. Wow! I am speechless with delight. Mr. Russell, I love you. From now on, if anyone asks (as they sometimes do) who my target audience is, I will say "It is James Russell."
• A number of people wrote in with examples of "theorems" that were believed proved, and later turned out to be false. I am preparing a longer article about this for next month. Here are some:
□ Cauchy apparently "proved" that if a sum of continuous functions converges pointwise, then the sum is also a continuous function, and this error was widely believed for several years.
□ I just learned of a major screwup by none other than Kurt Gödel concerning the decidability of a certain class of sentences of first-order arithmetic which went undetected for thirty years.
□ Robert Tarjan proved in the 1970s that the time complexity of a certain algorithm for the union-find problem was slightly worse than linear. And several people proved that this could not be
improved upon. But Hantao Zhang has a paper submitted to STOC 2008 which, if it survives peer review, shows that the analysis is wrong, and the algorithm is actually O(n).
□ Finally, several people, including John Von Neumann, proved that the axioms of arithmetic are consistent. But it was shown later that no such proof is possible.
• A number of people wrote in with explanations of "more than twenty states"; I will try to follow up soon.
[Other articles in category /addenda] permanent link
|
{"url":"http://blog.plover.com/2008/02/","timestamp":"2014-04-21T14:41:23Z","content_type":null,"content_length":"75842","record_id":"<urn:uuid:24069146-d957-4fa0-b0eb-0a71fb0a8b9b>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Find the particular solution of the differential equation x^3 dy/dx = 2
given y = 3, x = 1
We must solve the differential equation:
`x^3 dy/dx = 2`
The first step is to notice that this is a separable equation, so we will separate:
`dy = 2/x^3 dx`
Next, we can integrate both sides:
`int dy = int 2/x^3 dx`
`\implies y = -1/x^2 + c`
We have now found the general solution, and can proceed to solve the initial value problem. We'll plug in y=3 and x=1:
`3 = -1/1 + c \implies c=4`
Therefore our final solution is `y = -1/x^2 + 4`
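As a quick independent check (not part of the original eNotes answer; it assumes SymPy is installed), the same initial value problem can be handed to a CAS:

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    # Solve x^3 * dy/dx = 2 with the initial condition y(1) = 3.
    sol = sp.dsolve(sp.Eq(x**3 * y(x).diff(x), 2), y(x), ics={y(1): 3})
    print(sol)   # expected: Eq(y(x), 4 - 1/x**2), matching the answer above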
|
{"url":"http://www.enotes.com/homework-help/find-particular-solution-differential-equation-x-3-333724","timestamp":"2014-04-19T19:58:59Z","content_type":null,"content_length":"24985","record_id":"<urn:uuid:a6695b39-6616-4f59-bc4c-0bc564c173d2>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ALEX Lesson Plans
Subject: Mathematics (9 - 12), or Technology Education (6 - 8)
Title: Adding Up Stats
Description: In groups, students will conduct surveys and collect the data from their surveys. Then they will analyze the data and create various methods of displaying the collected data.
Subject: Mathematics (9 - 12)
Title: A Piece of Pi
Description: This lesson uses graphing to help students understand that pi is a constant and is the slope of the line graphed on a circumference vs. diameter graph.This lesson plan was created as a
result of the Girls Engaged in Math and Science University, GEMS-U Project.
Subject: Mathematics (9 - 12), or Technology Education (9 - 12)
Title: Predict the Future?
Description: Students will use data collected and a "best-fit line" to make predictions for the future. The example the students will be working on for this lesson will demonstrate an exponential relationship.
Subject: Mathematics (9 - 12), or Science (9 - 12)
Title: The Composition of Seawater
Description: This lesson develops student understanding of ocean water as a true solution. It demonstrates the differences of salinity and "salt" water. This lesson prepares the student to be able to
apply the concepts of temperature, density, and layering of the oceans before conducting a lab dealing with these variables.This lesson plan was created as a result of the Girls Engaged in Math and
Science, GEMS Project funded by the Malone Family Foundation.
Subject: Mathematics (7 - 12)
Title: Statistically Thinking
Description: The object of this project is for students to learn how to find univariate and bivariate statistics for sets of data. Also, the students will be able to determine if two sets of data are
linearly correlated and to what degree. The students will use Microsoft PowerPoint and Excel to find, organize, and present their projects to the class.
Subject: Mathematics (9 - 12), or Technology Education (9 - 12)
Title: Math is Functional
Description: This lesson is a technology-based activity in which students extend graphing of linear functions to the use of spreadsheet software. After students have become proficient in constructing
a table of values, students are able to efficiently graph equations with more extensive computational requirements. Furthermore, inquiry and discovery about slope and y-intercept will help students
conceptualize material normally presented in Algebra I textbooks.
Subject: Mathematics (9 - 12), or Technology Education (9 - 12)
Title: You Mean ANYTHING To The Zero Power Is One?
Description: This lesson is a technology-based project to reinforce concepts related to the Exponential Function. It can be used in conjunction with any textbook practice set. Construction of
computer models of several Exponential Functions will promote meaningful learning rather than memorization.
Subject: Mathematics (9 - 12), or Technology Education (9 - 12)
Title: Creating a Payroll Spreadsheet
Description: Spreadsheet software allows you to calculate numbers arranged in rows and columns for specific financial tasks. This activity allows students to create an "Employee Work/Pay Schedule"
spreadsheet to reinforce spreadsheet skills. Students will practice spreadsheet skills by entering data, creating formulas, and using formatting commands.
Subject: Mathematics (6 - 12)
Title: Swimming Pool Math
Description: Students will use a swimming pool example to practice finding perimeter and area of different rectangles.
Thinkfinity Lesson Plans
Subject: Mathematics,Science
Title: Finding Our Top Speed
Description: This Illuminations lesson sets the stage for a discussion of travel in the solar system. By considering a real-world, hands-on activity, students develop their understanding of time and
distance. The mathematics necessary for the lesson relate to measuring time and distance as well as graphing to portray the data collected.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics,Professional Development
Title: Building Bridges
Description: In this lesson, from Illuminations, students attempt to make a transition from arithmetical to algebraic thinking by extending from problems that have single-solution responses. Values
organized into tables and graphs are used to move toward symbolic representations. Problem situations involving linear, quadratic, and exponential models are employed.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics
Title: Mathematics as Communication
Description: This lesson, from Illuminations, focuses on interpreting and creating graphs that are functions of time. Students complete four activity sheets that focus on graphs of time vs. speed and
how many times an event occurred in a specific amount of time.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics
Title: Automobile Mileage: Age vs. Mileage
Description: In this lesson, one of a multi-part unit from Illuminations, students plot data about automobile mileage and interpret the meaning of the slope and y-intercept of the least squares
regression line. By examining the graphical representation of the data, students analyze the meaning of the slope and y-intercept of the line and put those meanings in the context of the real-life
application. This lesson incorporates an interactive regression line applet.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: The Centroid and the Regression Line
Description: This lesson, one of a multi-part unit from Illuminations, provides students with the opportunity to investigate the relationship between a set of data points and a curve used to fit the
data points, using a computer-based interactive tool. Using the Regression Line Applet, students investigate the centroid of a data set and its significance for the line fitted to the data.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Determining Functions Using Regression
Description: In these two lessons, students collect data and graph the data. They will analyze the data and use regression on a calculator to find the function of best fit for each.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Least Squares Regression
Description: In this nine-lesson unit, from Illuminations, students interpret the slope and y-intercept of least squares regression lines in the context of real-life data. Students use an interactive
applet to plot the data and calculate the correlation coefficient and equation of the least squares regression line. These lessons develop skills in connecting, communicating, reasoning, and problem
solving as well as representing fundamental ideas about data.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Think of a Graph
Description: This reproducible transparency, from an Illuminations lesson, asks students to sketch a graph in which the side length of a square is graphed on the horizontal axis and the perimeter of
the square is graphed on the vertical axis.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Graph Chart
Description: This reproducible transparency, from an Illuminations lesson, contains the answers to the similarly named student activity in which students identify the independent and dependent
variables, the function, symbolic function rule and rationale for a set of graphs.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: On Top of the World
Description: If you were standing on the top of Mount Everest, how far would you be able to see to the horizon? In this lesson, students will consider two different strategies for finding an answer
to this question. The first strategy is algebraic-students use data about the distance to the horizon from various heights to generate a rule. The second strategy is geometric-students use the radius
of the Earth and right triangle relationships to construct a formula. Then, students compare the two different rules based on ease of use as well as accuracy.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Bathtub Water Levels
Description: In this lesson, one of a multi-part unit from Illuminations, students examine real-life data that illustrates a negative slope. Students interpret the meaning of the negative slope and
y-intercept of the graph of the real-life data. By examining the graphical representation of the data, students relate the slope and y-intercept of the least squares regression line to the real-life
data. They also interpret the correlation coefficient of the least squares regression line. This lesson incorporates an interactive regression line applet.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Road Rage
Description: In this Illuminations lesson, students use remote-controlled cars to create a system of equations. The solution of the system corresponds to the cars crashing. Multiple representations
are woven together throughout the lesson, using graphs, scatter plots, equations, tables, and technological tools. Students calculate the time and place of the crash mathematically, and then test the
results by crashing the cars into each other.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Movie Lines
Description: This Illuminations lesson allows students to apply their knowledge of linear equations and graphs in an authentic situation. Students plot data points corresponding to the cost of DVD
rentals and interpret the results.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Health,Mathematics,Science
Title: Reflecting on Your Work
Description: In this lesson, one of a multi-part unit from Illuminations, students explore rates of change and accumulation in context. They apply the methods they explored in the previous lesson to
two new situations: analyzing sediment flow in a river and blood flow in the brain.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: The Effects of Outliers
Description: This lesson, one of a multi-part unit from Illuminations, provides students with the opportunity to investigate the relationship between a set of data points and a curve used to fit the
data points, using a computer-based interactive tool. Using the Regression Line Applet, students investigate the effect of outliers on a regression line and easily see their significance.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Traveling Distances
Description: In this lesson, one of a multi-part unit from Illuminations, students interpret the meaning of the slope and y-intercept of a graph of real-life data. By examining the graphical
representation of the data, students relate the slope and y-intercept of the least squares regression line to the real-life data. They also interpret the correlation coefficient of the resulting
least squares regression line. This lesson incorporates an interactive regression line applet.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics,Science
Title: Do You Hear What I Hear?
Description: In this lesson, from Illuminations, students explore the dynamics of a sound wave. Students use an interactive Java applet to view the effects of changing the initial string displacement
and the initial tension.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Automobile Mileage: Comparing and Contrasting
Description: In this lesson, one of a multi-part unit from Illuminations, students compare and contrast their findings from previous lessons of the unit. This lesson allows students the time they
need to think about and discuss what they have done in the previous lessons. This lesson provides the teacher with another opportunity to listen to student discourse and assess student understanding.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics,Science
Title: Shedding the Light
Description: In this four-lesson unit, from Illuminations, students investigate a mathematical model for the decay of light passing through water. The goal of this investigation is a rich exploration
of exponential models in context. Students examine the way light changes as water depth increases, conduct experiments, explore related algebraic functions using an interactive Java applet and
analyze the data collected.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Automobile Mileage: Year vs. Mileage
Description: In this lesson, one of a multi-part unit from Illuminations, students plot data about automobile mileage and interpret the meaning of the slope and y-intercept in the resulting equation
for the least squares regression line. By examining the graphical representation of the data, students analyze the meaning of the slope and y-intercept of the line and interpret them in the context
of the real-life application. Students also make decisions about the age and mileage of automobiles based on the equation of the least squares regression line. This lesson incorporates an interactive
regression line applet.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Health,Mathematics
Title: Make a Conjecture
Description: In this lesson, one of a multi-part unit from Illuminations, students explore rates of change and accumulation in context. They are asked to think about the mathematics involved in
determining the amount of blood being pumped by a heart.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Exact Ratio
Description: This reproducible activity sheet, from an Illuminations lesson, features a series of questions pertaining to exact ratios and geometric sequences. In the lesson, students measure lengths
on stringed musical instruments and discuss how the placement of frets on a fretted instrument is determined by a geometric sequence.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics, Science
Title: How Old Are the Stars?
Description: In this Science NetLinks lesson, students determine the age of a star cluster by observing, measuring, and plotting astronomical data. They examine the Jewelbox cluster, located within
the southern constellation Crux, and determine its age using a relationship between temperature, color, and luminosity. Before beginning this lesson, students should understand the life cycle and
composition of stars. Students should also understand the relationship between temperature and color.
Thinkfinity Partner: Science NetLinks
Grade Span: 9,10,11,12
Thinkfinity Learning Activities
Subject: Mathematics
Title: Line of Best Fit
Description: This student interactive, from Illuminations, allows the user to enter a set of data, plot the data on a coordinate grid, and determine the equation for a line of best fit. Students can
choose to display a line of best fit based on their visual approximation as well as a computer-generated least-squares regression line.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8,9,10,11,12
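For teachers who want to show what the applet computes behind the scenes, the same least-squares line can be reproduced in a few lines of Python (a sketch of my own, not part of the lesson materials; the data points are made up):

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # made-up sample data
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

    slope, intercept = np.polyfit(x, y, 1)        # degree-1 least-squares fit
    r = np.corrcoef(x, y)[0, 1]                   # correlation coefficient
    print(f"y = {slope:.3f}x + {intercept:.3f}, r = {r:.4f}")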
|
{"url":"http://alex.state.al.us/all.php?std_id=54168","timestamp":"2014-04-20T15:56:04Z","content_type":null,"content_length":"223112","record_id":"<urn:uuid:207a5642-b8df-46db-b928-1d31a202f48f>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
|
North Reading SAT Math Tutor
Find a North Reading SAT Math Tutor
...Seasonally I work with students on SAT preparation, which I love and excel at. I have worked successfully with students of all abilities, from Honors to Summer School. I work in Acton and
Concord and surrounding towns, (Stow, Boxborough, Harvard, Sudbury, Maynard, Littleton) and along the Route 2 corridor, including Harvard, Lancaster, Ayer, Leominster, Fitchburg, Gardner.
15 Subjects: including SAT math, calculus, physics, statistics
...My teaching style is tailored to each individual, using a pace that is appropriate. I strive to help students understand the core concepts and building blocks necessary to succeed not only in
their current class but in the future as well. I am a second year graduate student at MIT, and bilingual in French and English.
16 Subjects: including SAT math, French, calculus, algebra 1
...I am currently licensed to teach math (8-12). Not only am I licensed in the state of Massachusetts to teach high school math, but I have taken classes through Calculus III. I use trigonometry
almost on a daily basis thanks to my graduate-level mathematics courses. In addition, I am licensed to teach high school math in the state of Massachusetts.
9 Subjects: including SAT math, geometry, algebra 1, algebra 2
...Have you tried expensive language schools? Why didn't they work? Because the teacher gave every student the same lesson!
19 Subjects: including SAT math, reading, English, writing
I am currently teaching both middle school and high school math and often teach adjunct classes at community college. I have great success with my students' scores on standardized tests and have
had many students increase their percentile scores year-to-year by 20% or more. I have over 20 years of teaching experience and have also worked as an engineer.
8 Subjects: including SAT math, geometry, algebra 1, algebra 2
|
{"url":"http://www.purplemath.com/North_Reading_SAT_math_tutors.php","timestamp":"2014-04-17T11:00:21Z","content_type":null,"content_length":"24122","record_id":"<urn:uuid:80921cae-e04f-4093-81c9-a5f971a09fb6>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Convert Time To Decimal In Excel
have you tried this formula?
Convert time to decimal
If that doesn't help here is a list of hints on similar issues that might help you.
In The Matters Of Style,
swim with the current;
in matters of principle,
Stand Like A Rock
"People demand freedom of speech to make up for the
freedom of thought which they avoid."
|
{"url":"http://www.computing.net/answers/windows-xp/convert-time-to-decimal-in-excel/168971.html","timestamp":"2014-04-20T05:57:15Z","content_type":null,"content_length":"37594","record_id":"<urn:uuid:8b0d1fc3-4bfd-4a06-9767-5a9c9acf3817>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Design Calculations for Slurry Agitators in Alumina Refinery
Hi Friends,
Today I am presenting the gist of my technical paper on "Motor rating calculations for slurry mixing agitators in Alumina refinery", which was published recently in the aluminium issue of the magazine "Minerals & Metals Review" (pages 30 and 31 in MMR, August 2011 issue).
In various technical forums, process experts as well as equipment manufacturers have opined that the design of agitators for mixing bauxite, residue and hydrate in an Alumina refinery is a complicated and tricky issue. In this paper, we will discuss the subject with a brief description of the involved terminology, associated design parameters and methodology, with sample motor rating calculations for the slurry mixing agitator of a Pre-desilication tank of an Alumina refinery.
Method to arrive at motor rating:
Impeller power for slurry mixing agitator is calculated using following mathematical relations-
Impeller Power, P = N[p] * ρ *N^3 * D[i]^5/(16*10^4) h.p.
Where D[i] = Diameter of impeller in meters,
N = Revolution per minute for impeller,
N[p] = Power number for impeller and
ρ = Specific gravity of slurry.
Sample calculations:
Simplified calculations to arrive at the motor rating for the agitator of Pre-desilication tank of around 3000 m^3 gross capacity with realistic assumptions have been presented below-
Fluid Height in Tank , H = 16 m and Diameter of tank, D[t] = 14 m
Slurry volume in tank = π *D[t]^2*H/4 = π * (14)^2*16 /4 = 2463 m^3
Solid consistency in Slurry = 50 % (w/w), Specific gravity of slurry, ρ = 1.602,
Viscosity of slurry, μ = 550 cp
Agitator Impeller Diameter, D[i]= 33 % of tank diameter = 14 * 33% m = 4.62 m
Tip speed of Impeller = 290 m/minute, Drive motor RPM = 1500 rpm.
Gear Box Reduction Ratio = 75
∴ Agitator RPM, N = Drive Motor RPM/Gear Box Reduction Ratio = 1500/75 = 20 rpm
Flow Number N[q] = 0.56 and Power Number, N[p] = 0.51 (assumed figures)
∴ Pumping Capacity = N[q] * N * D[i]^3 m^3/minute
= 0.56 * 20 * (4.62)^3 = 1104.44 m^3/min. = 18.41 m^3/sec.
Area of Tank = π * D[t]^2/4 = π *(14)^2 / 4 = 153.94 m^2
Bulk fluid Velocity = pumping capacity/area of tank
= 1104.44 / 153.94= 7.18 m/min.= 23.55 ft./min.
Degree of Agitation = bulk fluid velocity / 6
(For 6 ft/min., degree of agitation =1 and Degree of agitation varies from 0 to 10)
= 23.55 / 6 = 3.93 ~ 4
Annular Area = π * (D[t]^2- D[i]^2 ) /4
Where D[t] = Diameter of tank and D[i ]= Diameter of impeller in meters.
= 3.14 * (14^2 – (4.62^2) / 4 = 137.18 m^2
Rising velocity of particles = pumping capacity / annular area
= 1104.44 / 137.18 = 8.051 m/min. = 0.1342 m/sec.
Tank Turnover rate = Pumping capacity / tank capacity
= 1104.44 / 2463 = 0.45 times / min.
Power Number N[p] = 0.51
Shaft Power, P = N[p]* ρ *(D[i])^5 * N^3 /(16 *10^4)
Where N[p] = Impeller power no., D[i] = Diameter of impeller in meters,
Shaft RPM, N = revolutions per minute
∴ Shaft Power, P = 0.51 * 1.602 * (4.62)^5 * 20^3 / (16 *10^4) = 85.98 h.p.
Taking Gear Box Efficiency = 80% and Drive Motor Efficiency = 95%,
Design margin = 1.15
∴ Drive Motor Rating = 1.15 * 85.98/(0.80 * 0.95) =130 h.p. = 97.0 kW.
Thus the drive motor of about 100 kW shall be adequate for successful operation of agitator of 3000 m^3 Pre-desilication tank in Alumina refinery.
The developed methodology clearly reveals that motor rating calculations for any slurry mixing agitator can be carried out easily by simply replacing the associated input process conditions,
operating parameters, dimensions of tanks / vessel and appropriate power number for impeller in above simplified derivation.
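To illustrate that point, the sample calculation above can be folded into a short script; this is my own sketch (the function name, parameter names and the 0.7457 hp-to-kW factor are my additions, not from the paper):

    def agitator_motor_kw(impeller_dia_m, rpm, power_no, slurry_sg,
                          gearbox_eff=0.80, motor_eff=0.95, design_margin=1.15):
        # Shaft power in h.p. from P = Np * SG * N^3 * Di^5 / (16*10^4)
        shaft_hp = power_no * slurry_sg * rpm**3 * impeller_dia_m**5 / (16e4)
        motor_hp = design_margin * shaft_hp / (gearbox_eff * motor_eff)
        return motor_hp * 0.7457          # 1 h.p. is roughly 0.7457 kW

    # Pre-desilication tank example from the text: Di = 33% of 14 m, N = 20 rpm
    print(round(agitator_motor_kw(0.33 * 14.0, 20, 0.51, 1.602), 1))   # about 97.0 kW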
Please put your views / suggestion / remarks / comments, if any.
If you like this article, then please press your rating as +1 .
Thanks and regards.
Kunwar Rajendra
10 comments:
1. Hi, I would like to ask whether the method you proposed above can be used to design the precipitation tank as well?
2. Dear Mr. Roger Soon,
Yes, you are absolutely correct. This is the method to carry out the basic design depending on the characteristics of material to be handled and operating conditions suiting the process
requirement more particularly the RPM of agitator blades. Based on the basic design, Mechanical design of agitator is done.
Kunwar Rajendra
3. Dear Mr. Rajendra ,
I have an open-top slurry tank which holds 5 m3 of Ca(OH)2, in other words it is called milk of lime.
The tank height & internal diameter both are about 2.0 m and i intend to design and install a top mounted agitator on top of the tank.
Much appreciated if you could post a typical calculation methodology for sizing agitator based on the below data.
Slurry density =1080 kg/m3
Viscosity = 10 cP
% Solids (w/w)=12.5
Fluid temp = 50 deg max
4. Dear Mr. Kiran Kumar,
It appears from your e.mail that you do not want to learn new things. Maybe you are either very old or lethargic in nature. Do you want that somebody else should develop calculations for
you? It is just too much. Request you to take a piece of paper and a simple calculator to carry out calculations systematically as explained above. It will not take more than 10 minutes for you.
Take this small challenge and let me know the outcome of your design calculations.
Hope, you will follow my advice and keep me posted.
Kunwar Rajendra
5. Hi,
Can you suggest me how would we assume the impeller power number and flow number?
1. Hi Manas,
Thanks for the very interesting question. Flow number and Power number are purely based on the design of impeller carried out by manufacturers. Thus, based on the experience of Designer with
various types of impellers used in operating plants, these numbers are assumed and taken into consideration while designing such complicated system.
Hope, the above clarifies your doubts.
Kunwar Rajendra
2. Hi,
Kindly clarify me whether the slurry viscosity plays any role for selection of agitator or not?
6. Dear Manas,
Viscosity, Specific gravity and Particle size in slurry play vital role in selection of agitator including its drive components.
Thanks and regards.
Kunwar Rajendra
7. Dear Mr. Rajendra, thanks for this important contribution. I have two questions.
1. How can we integrate the fact if having 2 impellers on the same shaft.
2. What are the main differences with calculating a thickener
8. Dear Roger,
Thank you very much for appreciating the articles published on my blog. The number of stages of impellers are decided on the basis of degree of agitation required for particular service
conditions to prevent settling of solids. However, agitator (rake) for settling tanks are designed on the basis of torque developed for movement of settled mud.
Trust, the above clarifies your doubts.
Kunwar Rajendra
|
{"url":"http://bauxite2aluminium.blogspot.com/2011/09/design-calculations-for-slurry.html","timestamp":"2014-04-16T13:05:53Z","content_type":null,"content_length":"111666","record_id":"<urn:uuid:c185c76f-e3c0-4842-8090-3f963342b9db>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
|
\(J\) Contractive Matrix Functions, Reproducing Kernel Hilbert Spaces and Interpolation
A co-publication of the AMS and CBMS.
             
This book evolved from a set of lectures presented under the auspices of the Conference Board of Mathematical Sciences at the Case Institute of Technology in September 1984. The original objective of the lectures was to present an introduction to the theory and applications of \(J\) inner matrices. However, in revising the lecture notes for publication, the author began to realize that the spaces \({\mathcal H}(U)\) and \({\mathcal H}(S)\) are ideal tools for treating a large class of matrix interpolation problems including ultimately two-sided tangential problems of both the Nevanlinna-Pick type and the Carathéodory-Fejér type, as well as mixtures of these. Consequently, the lecture notes were revised to bring \({\mathcal H}(U)\) and \({\mathcal H}(S)\) to center stage. This monograph is the first systematic exposition of the use of these spaces for interpolation problems.
• \(J\) inner functions
• Reproducing kernel Hilbert spaces
• Linear fractional transformations, matrix balls and more on admissibility
• More on \(\mathcal H(U)\) spaces
• The Nevanlinna-Pick problem
• Carathéodory-Fejér interpolation
• Singular Pick matrices
• The lossless inverse scattering problem
• Nehari interpolation
• A matrix interpolation problem
• Maximum entropy
CBMS Regional Conference Series in Mathematics, Number: 71
1989; 160 pp
ISBN-10: 0-8218-0722-6
ISBN-13: 978-0-8218-0722-4
List Price: US$29
Member Price:
All Individuals:
Order Code: CBMS/
|
{"url":"http://ams.org/bookstore?fn=20&arg1=cbmsseries&ikey=CBMS-71","timestamp":"2014-04-21T16:17:45Z","content_type":null,"content_length":"15312","record_id":"<urn:uuid:510c9ad0-5167-45fe-af12-22366251032f>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
|
From proof normalization to compiler generation and type-directed change-of-representation
- Proceedings of the Twenty-Third Annual ACM Symposium on Principles of Programming Languages , 1996
"... Abstract. Type-directed partial evaluation stems from the residualization of arbitrary static values in dynamic contexts, given their type. Its algorithm coincides with the one for coercing
asubtype value into a supertype value, which itself coincides with the one of normalization in the-calculus. T ..."
Cited by 207 (39 self)
Add to MetaCart
Abstract. Type-directed partial evaluation stems from the residualization of arbitrary static values in dynamic contexts, given their type. Its algorithm coincides with the one for coercing a subtype value into a supertype value, which itself coincides with the one of normalization in the λ-calculus. Type-directed partial evaluation is thus used to specialize compiled, closed programs, given their type. Since Similix, let-insertion is a cornerstone of partial evaluators for call-by-value procedural programs with computational effects. It prevents the duplication of residual computations, and more generally maintains the order of dynamic side effects in residual programs. This article describes the extension of type-directed partial evaluation to insert residual let expressions. This extension requires the user to annotate arrow types with effect information. It is achieved by delimiting and abstracting control, comparably to continuation-based specialization in direct style. It enables type-directed partial evaluation of effectful programs (e.g., a definitional lambda-interpreter for an imperative language) that are in direct style. The residual programs are in A-normal form.
"... A Hindley-Milner type system such as ML's seems to prohibit type-indexed values, i.e., functions that map a family of types to a family of values. Such functions generally perform case analysis
on the input types and return values of possibly different types. The goal of our work is to demonstrate h ..."
Cited by 43 (0 self)
Add to MetaCart
A Hindley-Milner type system such as ML's seems to prohibit type-indexed values, i.e., functions that map a family of types to a family of values. Such functions generally perform case analysis on
the input types and return values of possibly different types. The goal of our work is to demonstrate how to program with type-indexed values within a Hindley-Milner type system. Our first approach
is to interpret an input type as its corresponding value, recursively. This solution is type-safe, in the sense that the ML type system statically prevents any mismatch between the input type and
function arguments that depend on this type. Such specific type interpretations, however, prevent us from combining different type-indexed values that share the same type. To meet this objection, we
focus on finding a value-independent type encoding that can be shared by different functions. We propose and compare two solutions. One requires first-class and higher-order polymorphism, and, thus,
is not implementable in the core language of ML, but it can be programmed using higher-order functors in Standard ML of New Jersey. Its usage, however, is clumsy. The other approach uses embedding/
projection functions. It appears to be more practical. We demonstrate the usefulness of type-indexed values through examples including type-directed partial evaluation, C printf-like formatting, and
subtype coercions. Finally, we discuss the tradeoffs between our approach and some other solutions based on more expressive typing disciplines.
- Computer Science Department, New York University , 2001
"... iii Acknowledgment My graduate study and life, of which this dissertation is one of the outcomes, benefitted greatly from the time, energy, and enthusiasm of many people. ..."
Cited by 2 (0 self)
Add to MetaCart
iii Acknowledgment My graduate study and life, of which this dissertation is one of the outcomes, benefitted greatly from the time, energy, and enthusiasm of many people.
"... We give an alternative, proof-theory inspired proof of the well-known result that the untyped -calculus presented with variable names and `a la de Bruijn are isomorphic. The two presentations of
the -calculus come about from two isomorphic logic formalisations by observing that, for the logic in ..."
Add to MetaCart
We give an alternative, proof-theory inspired proof of the well-known result that the untyped λ-calculus presented with variable names and à la de Bruijn are isomorphic. The two presentations of the λ-calculus come about from two isomorphic logic formalisations by observing that, for the logic in question, the Curry-Howard correspondence is formula-independent. We identify the exchange rule as the proof-theoretical difference between the two representations of the systems. 1 Introduction The Curry-Howard correspondence relates formal inference systems of symbolic logic to typed λ-like calculi. An inference system for formal, symbolic logic is said to be in Hilbert-style if no logical rule (i.e., excluding cut, weakening, etc.) changes the set of assumptions. Such systems are also referred to as combinatory logics, in that they typically consist of a set of tautologies (or combinators) which are combined by the, so-called, Modus Ponens rule: from A → B and A, conclude B.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1607782","timestamp":"2014-04-16T06:35:46Z","content_type":null,"content_length":"21757","record_id":"<urn:uuid:2b3212f2-b69f-46b5-aa99-e34a024907fe>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
|
velocity, acceleration vectors
February 20th 2010, 10:10 AM #1
Feb 2010
velocity, acceleration vectors
I have two questions.
"As a figure skater spins, she raises and lowers her fingertip. This moving point has the parametrization
r(t)=<x,y,z>=<2Cos(2t), 2Sin(2t), 3 + 2t - t^2>
(The units are in feet and seconds.)
If a ring slips off her finger at time t=0 and falls due to gravity, write a parametric equation for the motion of this falling projectile."
I found the velocity vector to be v(t)=<-4sin(2t), 4cos(2t), 2-2t>. The position vector was given...and then I kinda got stuck.
And for the second question:
"What launch speed C is needed in order to kick a football over a goalpost that is 10 feet high and 100 feet away? Assume the launch angle is alpha=45 degrees. Use g=32 ft/sec^2. Solve for C in
units of ft/sec.
Hint. The projectile motion is described by
x= Ct Cos(alpha)
y= Ct Sin(alpha) - 1/2g*t^2."
I drew a triangle and used the pythagorean theorum to find the hypotenuse to be 100.5, then found the value of sin(45) to be .707...and plugged in 100.5*.707 (aka, 71.1) into the x equation for
C...and I can't really follow what I did after that. I ended up getting 71.1 ft/sec, which was wrong.
Thanks for your help! :-)
$r(t)= \left<x,y,z\right>= \left<2\cos(2t), 2\sin(2t), 3 + 2t - t^2\right>$
$v(t)= \left<-4\sin(2t), 4\cos(2t), 2-2t\right>$
as the ring leaves, it experiences an acceleration g only in the vertical direction, assumed to be z.
$r(0) = \left<2,0,3\right>$
$v(0) = \left<0,4,2\right>$
for the ring, $t \ge 0$ ...
$v(t) = \left<0,4,2-gt\right>$
$r(t) = \left< 2, 4t , 3 + 2t - \frac{1}{2}gt^2 \right>$
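The thread never comes back to the second (football) question; as a rough check outside the original posts, the hint's equations can be solved for the launch speed C directly, assuming the ball must still be 10 ft high when it has travelled 100 ft horizontally at α = 45°:

    import math

    alpha = math.radians(45)
    g = 32.0                        # ft/s^2
    x_goal, y_goal = 100.0, 10.0    # ft

    # x = C t cos(alpha)  =>  t = x_goal / (C cos(alpha)); substituting into y(t) gives
    # y_goal = x_goal*tan(alpha) - g*x_goal**2 / (2*C**2*cos(alpha)**2), then solve for C.
    C = math.sqrt(g * x_goal**2 /
                  (2 * math.cos(alpha)**2 * (x_goal * math.tan(alpha) - y_goal)))
    print(C)   # roughly 59.6 ft/s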
|
{"url":"http://mathhelpforum.com/calculus/129760-velocity-acceleration-vectors.html","timestamp":"2014-04-24T09:48:31Z","content_type":null,"content_length":"37499","record_id":"<urn:uuid:60371ca2-a89c-4810-bec2-edcb366c496e>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: Digitisation, Representation and Formalisation
Digital Libraries of Mathematics
A. A. Adams
School of Systems Engineering, The University of Reading.
Abstract. One of the main tasks of the mathematical knowledge management com-
munity must surely be to enhance access to mathematics on digital systems. In this
paper we present a spectrum of approaches to solving the various problems inherent
in this task, arguing that a variety of approaches is both necessary and useful. The
main ideas presented are about the differences between digitised mathematics, digi-
tally represented mathematics and formalised mathematics. Each has its part to play
in managing mathematical information in a connected world. Digitised material is
that which is embodied in a computer file, accessible and displayable locally or glob-
ally. Represented material is digital material in which there is some structure (usually
syntactic in nature) which maps to the mathematics contained in the digitised infor-
mation. Formalised material is that in which both the syntax and semantics of the
represented material, is automatically accessible. Given the range of mathematical
information to which access is desired, and the limited resources available for man-
aging that information, we must ensure that these resources are applied to digitise,
form representations of or formalise, existing and new mathematical information in
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/755/1158119.html","timestamp":"2014-04-18T04:05:16Z","content_type":null,"content_length":"8549","record_id":"<urn:uuid:28729722-d219-4652-a833-af6961a96dd4>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Taylor/Maclaurin proof
July 30th 2009, 09:30 AM #1
Nov 2008
Taylor/Maclaurin proof
I would like to prove the following inequalities,
1) $f(x) = \frac{7-3\cos x-6\sin x}{9-3\cos x-8\sin x}$, show that $\frac{1}{2}\leq f(x)\leq 1$
2) Given that $x > 0$, prove that $x> \sin x > x - \frac{1}{6}x^3$
It's in part of a book about power series expansion of functions, so I'm pretty sure they're looking for a solution involving those. I've put in the expansions for sin and cos but I don't see the
logic in how to prove these statements for all x.
A nudge in the right direction would be much appreciated, thanks
$\sin x = x + O(x^{3})$
$\sin x = x -\frac{x^{3}}{3!} + O(x^{5})$
$\sin x = x -\frac{x^{3}}{3!} + \frac{x^{5}}{5!} + O(x^{7})$
So if $x>0$, then $x$ overestimates the value of $\sin x$ and $x- \frac{x^{3}}{3!}$ underestimates the value of $\sin x$
But that's not much of a proof.
Yeah I was thinking along the same lines, it was proving that the remaining terms do not affect it which was the bit where I was getting stuck, do I have to prove $\lim_{x \to +\infty}\frac{x^n}
{n!}=0$ anywhere?
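One standard way to sidestep the tail of the series entirely (sketched here as a suggestion, not taken from the replies) is to compare derivatives. For $x>0$ let $g(x)=x-\sin x$; then $g(0)=0$ and $g'(x)=1-\cos x\geq 0$ (with equality only at isolated points), so $g(x)>0$ for $x>0$, i.e. $x>\sin x$. Next let $h(x)=\sin x-x+\frac{x^{3}}{6}$; then $h(0)=0$, $h'(0)=0$ and $h''(x)=x-\sin x>0$ for $x>0$ by the first step, so $h'$ and then $h$ are strictly positive for $x>0$, i.e. $\sin x>x-\frac{1}{6}x^{3}$.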
|
{"url":"http://mathhelpforum.com/calculus/96533-taylor-maclaurin-proof.html","timestamp":"2014-04-20T03:30:41Z","content_type":null,"content_length":"36950","record_id":"<urn:uuid:6b260634-e280-4f56-ad35-9a19d7b274c2>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] arange and floating point arguments
Anton Sherwood bronto@pobox....
Sun Sep 16 02:50:20 CDT 2007
Christopher Barker wrote:
> Very good point. Binary arithmetic is NOT less accurate that decimal
> arithmetic, it just has different values that it can't represent
> exactly. . . .
Quibble: any number that can be represented exactly in binary can also
be represented in decimal, but not vice versa, so binary can indeed be
less accurate for some numbers.
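A quick interpreter session makes the quibble concrete (my illustration, not part of the original message): 0.5 is exact in binary, while the double closest to 0.1 has a long but exact decimal expansion:

    from decimal import Decimal

    print(Decimal(0.5))   # 0.5 -- exactly representable in binary and in decimal
    print(Decimal(0.1))   # 0.1000000000000000055511151231257827021181583404541015625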
Anton Sherwood, http://www.ogre.nu/
"How'd ya like to climb this high *without* no mountain?" --Porky Pine
More information about the Numpy-discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2007-September/029230.html","timestamp":"2014-04-18T01:18:36Z","content_type":null,"content_length":"3146","record_id":"<urn:uuid:a82b4539-9c50-4f65-bc76-279882a4390f>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chicago Ridge ACT Tutor
...I am willing to travel to meet wherever the student feels comfortable. I hope to hear from you to talk about how I can help with whatever needs you might have. Thank you for considering me!I
have excelled in math classes throughout my school career.
5 Subjects: including ACT Math, statistics, prealgebra, probability
...I also can teach them things they don't know with just enough detail to allow them to apply the material on the exam, and I can re-teach them material they may have forgotten. I've been
tutoring test prep for 15 years, and I have a lot of experience helping students get the score they need on th...
24 Subjects: including ACT Math, calculus, physics, geometry
Harvard/Johns Hopkins Grad- High Impact Math/Verbal Reasoning Tutoring I am a certified teacher who offers ACT/SAT prep for high school students, ISAT/MAP test prep for elementary school students,
and GRE/GMAT prep for adults. As a graduate of Northside College Prep, I am also well versed in the s...
38 Subjects: including ACT Math, Spanish, reading, statistics
...I also provide Excel tutoring to working professionals and Small Businesses that seek to learn Excel for normal Business use to the creation of advanced Excel workbooks that are aimed at
automation of repetitive tasks and thereby resulting in sharp cycle-time reduction. I have created an Excel w...
18 Subjects: including ACT Math, geometry, ASVAB, algebra 1
...I'm very capable in this subject - proof-based geometry requires students to think like mathematicians or lawyers. I already have a mathematicians' intuitions, and I know so many ways to push
students to greater understanding. I recently received teacher training at a commercial tutoring center.
21 Subjects: including ACT Math, chemistry, calculus, geometry
|
{"url":"http://www.purplemath.com/Chicago_Ridge_ACT_tutors.php","timestamp":"2014-04-19T23:31:16Z","content_type":null,"content_length":"23796","record_id":"<urn:uuid:32066ac0-2ae5-4d44-8fd0-71aa66580346>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Number of results: 3,811
We're learning about different kinds of functions and I don't really understand the difference between rational and algebraic functions. I know that rational functions are functions that are a ratio
of two polynomials, and algebraic functions are any functions that can be made...
Thursday, September 13, 2007 at 11:09am by Kelly
advanced functions/precalculus
1. The function f(x) = (2x + 3)^7 is the composition of two functions, g(x) and h(x). Find at least two different pairs of functions g(x) and h(x) such that f(x) = g(h(x)). 2. Give an example of two
functions that satisfy the following conditions: - one has 2 zeros - one has ...
Wednesday, January 15, 2014 at 2:33am by Diane
1) Graph these functions using a graphing calculator or program: f(x) = x^2 + 5 g(x) = 2x + 5 h(x)= x^3 + 5 What similarities and differences can you see among them? 2) Given the above functions, do
you think that all linear equations can be expressed as functions?
Tuesday, August 18, 2009 at 7:42pm by Anonymous
High School Pre Calculus
Do you know what the parent functions of y = sqrt(x) and y = x^3 look like? The functions f and g are simply those parent functions with a horizontal shift.
Saturday, January 16, 2010 at 8:02pm by Marth
A mini-computer system contains two components, A and B. The system will function so long as either A or B functions. The probability that A functions is 0.95, the probability that B functions is
0.90, and the probability that both function is 0.88. What is the probability ...
Saturday, March 5, 2011 at 3:54am by Paula
Functions in Math
Im looking at internet sites and Im not quite understanding functions. I have a question like; A function f(x) has the properties i) f(1) = 1 ii) f(2x) = 4f(x)+6 Could you help me out. I know nothing
about functions.
Monday, October 15, 2007 at 3:54pm by Lena
Eight types of functions are graphed and explained at this tutorial site: http://www.analyzemath.com/Graph-Basic-Functions/Graph-Basic-Functions.html There is a place to click on the page to get it
to work interactively. I don't know what you mean by Parents graphs.
Monday, May 12, 2008 at 10:22pm by drwls
algebra functions
construct two composite functions, Evaluate each composite function for x=2. i do not fully understand functions yet can someone explain and show me what to do step by step f(x)=x+1 g(x)=x-2
Wednesday, October 16, 2013 at 9:21pm by tani
Use composition of functions to show that the functions f(x) = 5x + 7 and g(x)= 1/5x-7/5 are inverse functions. That is, carefully show that (fog)(x)= x and (gof)(x)= x.
Wednesday, July 29, 2009 at 3:30am by Alicia
Composite Functions
Find the composite functions for the given functions. (6 marks) f(x) = 4x + 1 and g(x) = x2 f(x) = sin x and g(x) = x2 - x + 1 f(x) = 10x and g(x) = log x
Saturday, January 19, 2013 at 4:28pm by Anonymous
Graph and label the following two functions: f(x)=(x^2+7x+12)/(x+4) g(x)=(-x^2+3x+9)/(x-1) 1. Describe the domain and range for each of these functions. 2. Determine the equation(s) of any asymptotes
found in the graphs of these functions, showing all work. 3. Discuss the ...
Monday, May 2, 2011 at 11:22am by Debra
how can you possibly have an assignment working with something you have not studied? If you have f(x) and g(x) as functions, then there are two simple composite functions: f(g(x)) and g(f(x)) Given
your functions, f(g) = g+1 = (x^2+2x+1)+1 = x^2+2x+2; g(f) = f^2+2f+1 = (x+1)^2 + ...
Friday, October 18, 2013 at 5:17pm by Steve
Find the sum and difference functions f + g and f – g for the functions given. f(x) = 2x + 6 and g(x) = 2x - 6 f(x) = x^2 - x and g(x) = -3x + 1 f(x) = 3x^3 - 4 and g(x) = -x^2 + 3 State the domain
and range for the sum and difference functions in #3. Find the product and the ...
Monday, September 2, 2013 at 4:08pm by Bee
Computer programming
Functions and procedures are used all the time in Modular programming. Can functions replace procedures? what are the advantages of using functions over procedures? Are there any advantages to using
procedures over functions?
Tuesday, August 17, 2010 at 6:10pm by Brenda
Operations with Functions: write the function below as either a sum/difference/product/quotient of 2 or more functions a. h(x) = x^2+3x+9 How would I do this? My teacher just did this: f(x) = x^2,
g(x) = 3x+9 ^I don't really understand h(x)=(x+5)(x-3) how would I do this one? If h(x...
Wednesday, September 26, 2012 at 7:35pm by Shreya
Consider the functions
Consider the functions f(x)= 5x+4/x+3(This is a fraction) and g(x)= 3x-4/5-x(This is a fraction) a)Find f(g(x)) b)Find g(f(x)) c)Determine whether the functions f and g are inverses of each other.
Wednesday, April 10, 2013 at 12:03pm by Kayleigh
For each problem, construct two composite functions, . Evaluate each composite function for x=2 ok i have not dealt with composite functions can I please get some help with this as I have 20
questions to do can someone show me step by step on how to properly solve this ...
Friday, October 18, 2013 at 5:17pm by Johnathan
Math questions, please help me? :(? Find the composite functions f o g and g o f for the given functions. f(x) = 10^x and g(x) = log x State the domain and range for each: f(x) = 4x + 1 and g(x) = x^
2 f(x) = sin x and g(x) = x^2 - x + 1 f(x) = 10^x and g(x) = log x If f = {(-2...
Monday, September 2, 2013 at 4:08pm by Britt
Will someone please help me find a website to answer the following questions? What are the various functions of a police agency? Compare how the functions of a police agency differ at the federal,
state, and local levels. What would happen if the various functions and roles of...
Sunday, July 8, 2012 at 10:24am by Picaboo
Algebra 2 Functions
Use these functions: f(x)=3x-2 g(x)=2 h(x)=4x 1. What is f(g(x))? 2. What is g(18)? 3. Find h(5).
Wednesday, January 16, 2008 at 9:51pm by Amy
advance functions gr12 HELP ASAP
Using rational functions solve 7+ (1)/x = 1/(x-2) thank you
Tuesday, April 2, 2013 at 1:10pm by james
How are the functions of a human cell different to the functions of a computer?
Sunday, July 6, 2008 at 3:47pm by Wanda
Use the functions f(x)=x+4 and g(x)=2x-5 to find the specified functions. (g x f)^-1
Wednesday, March 25, 2009 at 5:23pm by Jake
Functions of f(x)
this question wants you to make up the functions of f(x) and g(x) to proform each expression
Sunday, January 1, 2012 at 7:04pm by Hunter
advance functions gr12
Using rational functions solve (7+ 1)/x = 1/(x-2)
Tuesday, April 2, 2013 at 11:24am by james
maths functions - eh?
There are infinitely many functions that contain those 5 points. Is there something else?
Monday, April 9, 2012 at 7:06am by Steve
advance functions gr12 HELP ASAP
using rational functions solve (x+5)/(x-3)= (2x+7)/(x) thank you
Tuesday, April 2, 2013 at 11:28am by ALEX
use the functions f(x)=x+4 and g(x)=2x-5 to find the specified functions. (g X f)^-1 Would the answer be x+1 over 2?
Friday, February 20, 2009 at 1:22am by Dan
Medical Seekers
I am confused!! what are Functions ? Math functions?
Wednesday, September 14, 2011 at 5:40pm by Liz
advanced functions
Solve for x, x ∈ R: -x^3 + 25x ≤ 0
Friday, July 5, 2013 at 2:08pm by FUNCTIONS
Domains of Functions
I dont understand how to do this problem, can you please help me? Given the functions f and g, determine the domain of f+g. 4. f(x)= 2x/(x-3) g(x)=3/(x+6)
Friday, September 14, 2007 at 9:43pm by Jules
Body structures & Functions
See "functions of the liver" at (Broken Link Removed) It's interesting reading.
Saturday, August 9, 2008 at 8:15pm by drwls
aadvanced functions HELP!
nope i don't, the school makes it manditory to take advanced functions before calculus
Wednesday, December 10, 2008 at 9:43pm by james
advance functions gr12 (SORRY LAST ONE HAD A TYPO)
Using rational functions solve 7+ (1)/x = 1/(x-2)
Tuesday, April 2, 2013 at 11:43am by james
Check my CALCULUS answers please!
Any short explanation for things I got wrong would be great, too, if possible! Thanks in advanced! :) 8. Which of the following functions grows the fastest? ***b(t)=t^4-3t+9 f(t)=2^t-t^3 h(t)=5^t+t^5
c(t)=sqrt(t^2-5t) d(t)=(1.1)^t 9. Which of the following functions grows the ...
Monday, October 7, 2013 at 12:08pm by Samantha
Advanced Functions
Beginning with the function f(x) = (0.8)^x, state what transformations were used on this to obtain the functions given below: g(x) = -(0.8)^x - 2, h(x) = ½ (0.8)^(x-2), k(x) = (0.8)^(-3x+9)
Wednesday, January 12, 2011 at 4:05pm by Hailey
pls. explain transformations on functions..
Wednesday, September 7, 2011 at 6:09pm by it'sme
advanced functions
Solve for x, x ∈ R: a) (x + 1)(x – 2)(x – 4)^2 > 0
Friday, July 5, 2013 at 2:07pm by FUNCTIONS
Will someone please help me find a website to find the answer to the following questions? What are the various functions of a police agency? Compare how the functions of a police agency differ at the
federal, state, and local levels. What would happen if the various functions ...
Saturday, July 7, 2012 at 9:09pm by Picaboo
Math - Functions
For the real valued functions f(x) = |x| - 4 and g(x) = sqrt(5 - x), find the composition f o g and its domain in interval notation. Can someone help me solve this?
Wednesday, May 2, 2007 at 2:23pm by UnstoppableMuffin
give 3 examples of functions from everyday life that are described verbally.what can you say about the domain and range of each of your functions?
Monday, August 27, 2007 at 8:56pm by dani
give 3 examples of functions from everyday life that are described verbally.what can you say about the domain and range of each of your functions?
Monday, August 27, 2007 at 8:59pm by dani
Medical Seekers
Health Care providers use computers to perform many functions. Identify at least 4 of those functions!
Wednesday, September 14, 2011 at 5:40pm by Liz
algebra functions
(f+g) just means to add f and g. Since they're both functions of x, that's why the unfamiliar notation. (f+g)(x) = f(x)+g(x) = (3x+1)+(x+2) = 4x+3 so, (f+g)(1) = 4(1)+3 = 7 or, you could evaluate f
(1)+g(1) = 4+3
Sunday, October 20, 2013 at 3:42pm by Steve
algebra functions
(f+g) just means to add f and g. Since they're both functions of x, that's why the unfamiliar notation. (f+g)(x) = f(x)+g(x) = (3x+1)+(x+2) = 4x+3 so, (f+g)(1) = 4(1)+3 = 7 or, you could evaluate f
(1)+g(1) = 4+3
Sunday, October 20, 2013 at 5:32pm by Steve
math test
Hi! I have a math test coming up tomorrow, and I need to prepare. but,i dont know how I should. . . any help would be great. she told us to: Be prepared to identify and explain functions, linear
functions and proportional functions. Be prepared to use the new function notation...
Thursday, October 11, 2007 at 9:39pm by Keon
Math - Functions
Is it possible to graph linear functions like h(x) = 3x + 7 without doing a table of values? If it is, how? And I know the slope is 7, which means the y-intercept is (0,7).
Tuesday, December 18, 2007 at 1:42pm by Anonymous
I have to find the domain and range for the composite of functions G(x)=sinx and h(x)=x^2-x+1. I'm confused about this and would really appreciate some help. Thanks.
Friday, September 24, 2010 at 3:58pm by Sarah
You have defined three independent functions of x. Unless you define how the functions are related to one another, there is no "solution". An example would be to require that f(x) = g(x)
Thursday, March 18, 2010 at 9:42pm by drwls
for the given functions f and g, find the specified value of the following functions and state their domains: f(x)=x-5; g(x)=2x^2 a.(f+g)(3)= b.(f/g)(3)= Can someone help me figure this out?
Wednesday, March 27, 2013 at 1:45am by asia
What does the "o" between g and f mean? You have not said what g and f are. It is as simplified as it can get. You posted a similar question earlier with sepecified functions. Why are you posting it
again without functions?
Tuesday, June 3, 2008 at 12:05am by drwls
See if any of the following tutorials on functions will help you: http://search.yahoo.com/search?fr=mcafee&p=functions+tutorials Sra
Friday, September 24, 2010 at 3:58pm by SraJMcGin
Graphing polynomial functions
the standard definition of (f+g)(x) = f(x) + g(x) so for your functions (f+g)(x) = (4x^2+9x+2) + (3x-2) simplify this and then follow the same technique for the other two questions, and let me know
what you got
Saturday, October 20, 2007 at 1:49pm by Reiny
when working with composite functions, does fog=gof? how do i create two functions,f(x) and g(x) to show that this statement is either true or false.i need to expain my reasoning
Wednesday, October 19, 2011 at 6:20pm by ayotal
for the given functions f and g, find the specified value of the following functions and state the domain of each one. f(x)=2+3/x; g(x)=3/x a.(f-g)(2)= b.(f/g)(3)= how do I do this? I need help! it
throws me off because of it being in fraction form.
Tuesday, March 26, 2013 at 1:35pm by marie
Math Grade 12 Advance Functions
I need help understanding equivalent trigonometric functions! Can someone PLEASE help me because the test is tommorrow and the internet or my textbook isn't helping at all!
Wednesday, December 19, 2012 at 12:31am by Buddy
Hmmm. Wolframalpha says this is a complicated mix of Bessel Functions. I typed in rsolve [{a[n]==n*a[n-1]+a[n-2],a[0]=24,a[1]=42},a[n],n] Sorry, my knowledge of recursive relations is a bit sparse.
Wednesday, July 3, 2013 at 2:24pm by Steve
domain? all real numbers. Now on the specifics of the INT function, different computatational functions treats floor and ceiling (and integer) functions differently.
Friday, July 19, 2013 at 3:07pm by bobpursley
Right now we are learning about functions, relations, etc. and I am kind of confused. Here is what I have to answer: Determine whether each of the following is a function. Identify any relations that
are not functions. Domain A set of avenues Correspondence An intersecting ...
Thursday, December 4, 2008 at 4:16pm by Celina
1) Find three functions with unlimited domains and ranges. 2) Find two functions with restricted domains. 3) Find two functions with restricted ranges. Im confused as to how I can find the answer for
these question, can you please explain along the steps.
Tuesday, July 19, 2011 at 11:34am by Kate
I do not understand the { symbol at the beginning and why you have two functions of x, separated by a comma. Are you asking where the two functions intersect?
Sunday, January 31, 2010 at 2:23pm by drwls
advance functions gr 12
Using rational functions solve 7 + 1/x = 1/(x-2)
Tuesday, April 2, 2013 at 7:38am by james
Medical Seekers
You're not naming FUNCTIONS. You're naming reasons for using computers. The question asks you to name four FUNCTIONS.
Wednesday, September 14, 2011 at 5:40pm by Writeacher
advance functions gr 12
Using rational functions solve (x+5)/(x-3) = (2x+7)/x
Monday, April 1, 2013 at 6:11pm by Fadak
advance functions gr 12
Using rational functions solve (x+5)/(x-3) = (2x+7)/x
Monday, April 1, 2013 at 6:13pm by Fadak
Math advance functions gr 12
Using rational functions solve 7 + 1/x = 1/(x-2)
Monday, April 1, 2013 at 6:15pm by alex
works for me. the co-functions are the functions of the complementary angles. So, by definition, tan(π/2-x) = cot(x). Your proof works as well, though.
Sunday, July 14, 2013 at 8:48pm by Steve
Can anyone tell me if these functions are odd, even, or neither. Also, what is the domain, range, x-intercept, and y-intercept of these functions? 1. f(x) = x^2 - 3x + 6 2. f(x) = cubicroot(x)
Tuesday, August 19, 2008 at 2:51am by Anthony
Algebra 2
Are the functions f and g inverse functions if f(x)=5/3x+1 and g(x)=3(x-1)/5? I need all of the steps on how to determine this. I need this ASAP because this assignment is due tomorrow. Thanks
Wednesday, May 25, 2011 at 5:46pm by Brandon
College Algebra! help!
Consider the following functions f(x)= (7x+8)/(x+3) and g(x)= (3x-8)/(7-x) (a) Find g(g(x)) (b) Find g(f(x)) (c) Determine whether the functions f and g are inverses of each other.
Tuesday, May 14, 2013 at 10:32pm by Rosie
Thank you for your help. It's just that I didn't take functions in grade 11 and now I'm taking Advanced Functions in grade 12 and I'm difficulty with. Thanks for your help though.
Sunday, November 24, 2013 at 4:10pm by Jessy
Calculus is a branch of mathematics that lets you solve equatiuns that involve the rate of change of functions, or the areas under curves. In the course of solving these equations, often many new
functions are introduced.
Sunday, October 24, 2010 at 10:49pm by drwls
consider the functions g(x)=(4-x^2)^0.5 and h(x)=(x^2)-5. find the composite functions hog(x) and goh(x), if they exist, and state the domain and range of the composite functions.
------------------------------ ok, i have figured out that hog(x)=-1-X^2, but in my answer ...
Saturday, May 17, 2008 at 9:25am by Ann
In engineering, many of the transfer functions used in the analysis involve polynomial functions, and one has to analyze for the transfer function poles and zeroes.
Monday, May 19, 2008 at 3:13pm by bobpursley
Discuss the universal functions of marketing and those functions that would apply to the products/services provided by a fictitious firm of your choosing.
Sunday, April 10, 2011 at 8:29pm by ak
Discuss the universal functions of marketing and those functions that would apply to the products/services provided by a fictitious firm of your choosing.
Thursday, June 30, 2011 at 1:13am by Destiny
consider the functions f(x)=x^3-2 and g(x)=3 sqrt x+2: a. find f(g(x)) b. find g(f(x)) c. determine whether the functions f and g are inverse of each other. I have no clue where to even begin!
Saturday, March 23, 2013 at 10:34am by marie
pre calculus
consider the functions f(x)=x^3-3 and g(x)=3 sqrt x+3: a. find f(g(x)) b. find g(f(x)) c. determine whether the functions f and g are inverses of each other. I really need help with these, I don't
get it at all.
Wednesday, March 27, 2013 at 2:35pm by Staci
advance functions gr 12
Using rational functions solve 3/(2x-4) ≤ 4/(x-2)
Monday, April 1, 2013 at 6:16pm by james
statistical research
You posted this question more than once. Look for my answer elsewhere. Your data refers to FATAL accidents, not ANY accident. You will also find the problem worked out at: http://www.algebra.com/
Monday, September 8, 2008 at 9:41pm by drwls
I transfered classes and they use graphing calculators. I've never used the graphing functions on mine. We were supposed to graph exponetial functions. Help?
Tuesday, November 13, 2007 at 8:18pm by Kellie
generally speaking, linear functions yield graphs that are straight lines, quadratic functions yield parabolas, .... If you give me a specific example ...
Tuesday, June 10, 2008 at 8:34pm by Reiny
math-advanced functions
on my exam review, i have this question for composition of functions Given f(x)=3x^2+x-1, g(x)=2cos(x), determine the values of x when f(g(x))=1 for 0≤x≤2π.
Thursday, January 23, 2014 at 9:32am by Liz
If Y1 is a continuous random variable with a uniform distribution of (0,1) And Y2 is a continuous random variable with a uniform distribution of (0,Y1) Find the joint distribution density function of
the two variables. Obviously, we know the marginal density functions of each ...
Sunday, November 15, 2009 at 10:28pm by Sean
What are the similarities and differences between functions and linear equations? How do you graph functions on a coordinate plane? Is there an instance when a linear equation is not a function?
Provide an example. Write a function for your classmates to graph. Consider ...
Monday, February 22, 2010 at 10:07am by AR
What are the similarities and differences between functions and linear equations? How do you graph functions on a coordinate plane? Is there an instance when a linear equation is not a function?
Provide an example. Write a function for your classmates to graph. Consider ...
Monday, February 22, 2010 at 10:08am by AR
Math - Inverse Functions
Find the inverses of the following functions. y = 3(x - 1)^2, x >= 1 Work: x = 3(y - 1)^2 x = 3(y - 1)(y - 1) x = 3(y^2 - y - y + 1) x = 3y^2 - 6y + 3 And now what do I do!? Please explain and show
me how to solve this inverse function! ... Thank you
Thursday, January 10, 2008 at 7:33pm by Anonymous
Let U(x; y) = 5x:8y:2 showing all derivation work, find: (a) the Marshallian demand functions for x and y (b) the Indirect Utility Function (c) the compensated demand functions xc and yc
Friday, February 21, 2014 at 8:00pm by Anonymous
What similarities and differences do you see between functions and linear equations? Are all linear equations functions? Is there an instance when a linear equation is not a function? Support your
answer. Create an equation of a nonlinear function and provide two inputs for ...
Tuesday, August 5, 2008 at 10:09pm by D
What similarities and differences do you see between functions and linear equations studied in Ch. 3? Are all linear equations functions? Is there an instance when a linear equation is not a
function? Support your answer. Create an equation of a linear function and provide two...
Tuesday, April 29, 2008 at 10:35am by Bobby
Try reviewing your trig functions and post an attempt at some of these. We'll be happy to check your work. They're all pretty straightforward applications of the definitions of trig functions and
Pythagorean Theorem.
Monday, March 5, 2012 at 1:06pm by Steve
programming c#
Looks like the first class performs all of the functions, and the second class calls of the functions to make sure they are performing properly. Do you have any specific questions about this?
Wednesday, October 2, 2013 at 10:23am by Leo
for the given functions f & g find the specified value of the following functions and state the domain of each one. f(x) = 2 + 6/x; g(x) = 6/x a. (f-g)(3)= what is the domain of f-g? b. (f/g)(6)= what is
the domain of f/g?
Saturday, October 13, 2012 at 8:29pm by ladybug
Algebra-Mathmate or Reiny or Dr. Bob, please check
correct, if you meant logb(x/y^2) F(x)=x^-1 is an inverse function, not an exponential function. Exponential functions are functions which can be written as f(x)=k e^ax for instance f(x)=3e^-5 or f
(x)=12(1-e^(-4x) ) Can it be applied to g(x)=3 a^bx ? many people include this ...
Tuesday, December 14, 2010 at 11:49am by bobpursley
Syllabi, class outlines, and discipline reports are practical applications of A) systems procedures. B) spreadsheet functions. C) database functions. D) word processing packages. I answer is C, it
sounds where all three of these would be, is this correct?
Thursday, February 14, 2013 at 6:25pm by Lori
Here square-root is taken of the square of the product of two functions, and not the numerical values. To me it is justified to retain the signs of the original functions, namely sin(x) and cos(x) in
the square-root. So if we evaluate the functions after taking square-root, ...
Tuesday, April 26, 2011 at 2:15pm by MathMate
what is the function on health insurance? What is the functions of disability insurance? whatis the functions of life insurance?
Sunday, September 19, 2010 at 6:38pm by cara
MAT 116 Algebera 1A
What similarities and differences do you see between functions and linear equations studied in Ch. 3? Are all linear equations functions? Is there an instance when a linear equation is not a
function? Support your answer. Create an equation of a nonlinear function and provide ...
Sunday, May 25, 2008 at 3:36pm by shelly, k
Use your knowledge of exponents to solve. a) 1/2^x=1/(x+2) b) 1/2^x>1/x^2 So I know that these functions are rational functions.. and I am trying to solve for x. I tried to solve them by I keep
getting stuck with the exponent 2 which is the exponential function.. Help please
Sunday, June 7, 2009 at 5:13pm by Halle
pre calc
Are you referring to the trigonometric functions? Namely sin, cos, tan, csc, sec, and cot that make up the first 6, and their inverses that make the other 6? It would be a good exercise to sketch
each of the three basic functions (sin, cos, and tan) to familiarize yourself ...
Thursday, December 17, 2009 at 10:07pm by MathMate
|
{"url":"http://www.jiskha.com/search/index.cgi?query=Functions","timestamp":"2014-04-16T09:01:40Z","content_type":null,"content_length":"38602","record_id":"<urn:uuid:6bb49c55-b341-4a88-9e4e-8c87794f1167>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
|
|
{"url":"http://openstudy.com/users/fiddlearound/medals","timestamp":"2014-04-17T04:09:31Z","content_type":null,"content_length":"83740","record_id":"<urn:uuid:74431a82-9371-42b1-8b05-d0dc60c089ab>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - Quantum numbers of atoms in a given state "(number^number)letter"
1. The problem statement, all variables and given/known data
An atom is in the state 4^2F.
Write down the values of n,l,s and j.
2. Relevant equations
3. The attempt at a solution
I'm having problems with s and j. I know j=l+s, does one have to take parity into account?
I'm not even sure that the 2 I attempted are correct.
If someone could just tell me the method of working out the different quantum numbers when given a state such as above it would help me a lot.
Any help would be appreciated thanks!
|
{"url":"http://www.physicsforums.com/showpost.php?p=3290267&postcount=1","timestamp":"2014-04-21T02:15:03Z","content_type":null,"content_length":"9132","record_id":"<urn:uuid:2b6dacf3-6804-4cf0-8bd3-c289d11b0560>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Amazon Interview Question SDE1s
• Print N numbers of the form 2^i * 5^j in increasing order, for all i >= 0, j >= 0 ?
Example : - 1,2,4,5,8,10,16,20.....
Team: WebStore
Country: India
Interview Type: Phone Interview
#include <stdio.h>

void print(int N)
{
    int arr[N];
    arr[0] = 1;
    int i = 0, j = 0, k = 1;
    int numJ, numI;
    int num;
    for(int count = 1; count < N; )
    {
        numI = arr[i] * 2;   /* candidate from the "times 2" pointer */
        numJ = arr[j] * 5;   /* candidate from the "times 5" pointer */
        if(numI < numJ) {
            num = numI; i++;
        } else {
            num = numJ; j++;
        }
        if(num > arr[k-1]) {             /* skip duplicates (e.g. 10 = 2*5 = 5*2) */
            arr[k] = num;
            k++; count++;
        }
    }
    for(int counter = 0; counter < N; counter++)
        printf("%d ", arr[counter]);
}
Can you explain how it works?
cool answer.. I'm still wondering how did this work?
This is the best way to do this. You can think of i and j as chasing pointers, where i's value is waiting to be doubled and j's value is waiting to be quintupled. After a while your array will look
something like this:
8 j
16 i
?? k
i is pointing to 16, which means it's queuing up 2*16 == 32
j is pointing to 8, which means it's queuing up 5*8 == 40.
On the next pass, 32 will be smaller, so that's your next result, and then i will advance to 20, thereby queuing up 40.
On the next pass, you'll have a tie.
@nitingupta180, nicely done.
This is a famous problem (ugly numbers problem or hamming numbers problem), and this solution is well known as Dijkstra's solution.
The logic should be as follows:
Maintain two queues. Q2 having one element 2. And Q5 having one element 5.
From both the queues, extract the min number that stood in front of queue. If the number comes out of Q2, append 2*number to Q2 and 5*number to Q5. If the min number is from Q5, append 5*number in
Q5. Repeat the process.
For example:
Step 1: Initialization
Q2 = 2
Q5 = 5
Step 2
min is 2; print 2
Q2 = 4
Q5 = 5, 10
step 3
min 4; print 4
Q2 = 8
Q5 = 5, 10, 20
step 4
min 5; print 5
Q2 = 8
Q5 = 10, 20,25
step 5
min 8; print 8
Q2 = 16
Q5 = 10, 20, 25, 40
on and on....till we print N numbers
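Here is a small Python sketch of the two-queue procedure described above (my own illustration, not the commenter's code; the function name and the seeding with 1 are my additions, so that the output matches the example 1, 2, 4, 5, 8, 10, 16, 20, whereas the walk-through above starts directly from 2 and 5):

from collections import deque

def first_n(n):
    # Two queues of pending candidates, exactly as in the walk-through above.
    q2, q5 = deque([2]), deque([5])
    out = [1]                      # seed with 1 = 2^0 * 5^0
    while len(out) < n:
        x = min(q2[0], q5[0])      # smallest candidate at the front of either queue
        if x == q2[0]:
            q2.popleft()
            q2.append(2 * x)       # a value taken from Q2 spawns 2*x and 5*x
            q5.append(5 * x)
        else:
            q5.popleft()
            q5.append(5 * x)       # a value taken from Q5 spawns only 5*x
        out.append(x)
    return out[:n]

print(first_n(8))   # [1, 2, 4, 5, 8, 10, 16, 20]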
This is a fine way of approaching the problem, but if you are storing the results as you emit them, then you can make Q2 and Q5 be virtual queues. In other words, don't store all upcoming Q5 values;
instead, simply keep track of the head of the queue. For Q5=10,20,25,40, think of it as 5*[2,4,5,8], where [2,4,5,8] is just a subset of the numbers that you've already produced. In this case, you
just store the index of "2" in the original result to know the head of the queue. When you peek at Q5, you multiply 2 by 5 to get 10. When you pop Q5, the next element will be at the index of 4 in
your result set. This type of reasoning basically gets you to the solution that @nitingupta180 presented.
Question says we need to print N numbers.
Every step compare 2^i with 5^j and print the minimum. In every step increment i and j based on what you have used in that step.
package algo;
public class PrintN {
public static void printN(int N){
int count =0;
int num1=1;
int num2=1;
while(count <N){
int k=num1*2;
int l=num2*5;
System.out.print(k+" ");
System.out.print(num2+" ");
public static void main(String args[]){
This doesn't work..
for N=10, it prints
[2 4 5 8 16 25 32 64 125 128].... well 20 is missing out
Actually 20 is not producible from 2^i or 5^j.
I think the question is for 2^i X 5^j... you should take a look at it, which makes
20 = 2^2 X 5^1 i.e. i=2,j=1... by the way the solution is provided by nitingupta180, it works as expected
Python Blow-Your-Mind Version.
Google "activestate python hamming numbers" for more explanation. This code is a minor adaptation:
from itertools import tee, chain, islice, groupby
from heapq import merge

def hamming_numbers():
    def deferred_output():
        for i in output:
            yield i
    result, p2, p5 = tee(deferred_output(), 3)
    m2 = (2*x for x in p2)
    m5 = (5*x for x in p5)
    merged = merge(m2, m5)
    combined = chain([1], merged)
    output = (k for k, v in groupby(combined))
    return result

if __name__ == '__main__':
    print(list(islice(hamming_numbers(), 10)))
Simple Python Version.
def test():
    assert [] == list(smooth_2_5(0))
    assert [1] == list(smooth_2_5(1))
    assert [1,2,4] == list(smooth_2_5(3))
    assert [1,2,4,5,8,10,16,20] == list(smooth_2_5(8))

def smooth_2_5(n):
    if n == 0:
        return
    queue = [1]
    while True:
        queue.sort()               # keep the smallest pending candidate in front
        result = queue.pop(0)
        yield result
        n -= 1
        if n == 0:
            return
        if result*2 not in queue: queue.append(result*2)
        if result*5 not in queue: queue.append(result*5)
count =0
tresult = (2**i)*(5**j)
print tresult
ithnum = (2**(i+1))*(5**j)
jthnum = (2**i)*(5**(j+1))
if ithnum<jthnum:
public static void main (String args[]){
int counter=0;
int i=0,j=0;
double sum1,sum2;
sum1=Math.pow(2, i+1);
public static void printNum(int i, int j) {
int[] a = new int[i+j];
for(int m=0;m<=i;m++) {
a[m] = (int)Math.pow(2*1.0, m*1.0);
for(int m=1;m<=j;m++){
a[m+i-1] = (int)Math.pow(5*1.0, m*1.0);
for(int n=0;n<a.length;n++)
System.out.print(a[n] + " ");
the above is wrong. discard it. It is 2^i*5^j not 2^i or 5^j. I misunderstood the question
public static void printNum(int i, int j) {
int len = (i+1)*(j+1);
int[] a = new int[len];
int tmp=0, k=0;
for(int m=0; m<=i; m++) {
tmp = (int)Math.pow(2*1.0, m*1.0);
for(int n=0; n<=j; n++) {
a[k] = tmp*((int)Math.pow(5*1.0, n*1.0));
for(int n=0;n<a.length;n++)
System.out.print(a[n] + " ");
#include <set>
int printIncreasingOrder(int i, int j, int N)
std::set<int> myset;
std::set<int>::iterator it
// note that std::set sorts the values in increasing order...
for(int x=0; x<i; x++) myset.insert(pow(2.0, x));
for(int y=0; y<j; y++) myset.insert(pow(5.0, y));
for(int count = 0, it=myset.begin(); it!=myset.end(); ++it, ++count)
if(count < N)
std::cout << ' ' << *it;
Use a min Heap. Start by putting in 1 (i = 0, j = 0). Pop 1, print it and increase your printed count. Then throw back in the heap 1 * 2, and 1 * 5. Rinse and repeat until your count reaches N.
public static void printFirstElements2i5j(int N)
{
    PriorityQueue<Integer> queue = new PriorityQueue<Integer>();
    queue.add(1);                       // seed with 1 = 2^0 * 5^0
    for (int i = 0; i < N; i++)
    {
        int x = queue.remove();         // smallest value generated so far
        System.out.print(x + " ");
        if (!queue.contains(x * 2))
            queue.add(x * 2);
        if (!queue.contains(x * 5))
            queue.add(x * 5);
    }
}
public static void main(String[] args) {
|
{"url":"http://www.careercup.com/question?id=16378662","timestamp":"2014-04-20T19:08:57Z","content_type":null,"content_length":"73453","record_id":"<urn:uuid:c09720ab-7bd8-4b95-9da6-36351947d0a6>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Curriculum Guidelines - Homeschooling
Here are the guidelines for skills that your first through fifth grade child should master in Math. Keep in mind that every child is individual, and many of these skills build on one another.
Grade 1-
1.Counting, using a variety of strategies, to identify the number of objects in a set.
2. Identify numbers that are one more and one less than a given number, and numbers in between.
3. Add and subtract numbers using counting strategies.
4. Recognize, describe, and extend repeating patterns.
5. Compare and order the number of objects in sets.
6. Mastery of addition and subtraction facts, plus method awareness
7. Write and solve number sentences for story problems involving addition and subtraction.
8. Add and subtract 1- and 2-digit numbers without regrouping.
9. Solve problems using nonstandard measurement (ie. estimate)
10. Measure and select appropriate tools to measure length, time, and weight.
11. Use a ruler and estimate and measure length in inches.
12. Master addition and subtraction facts with sums through 10; be able to estimate these answers as well.
13. Gather and organize data using tallies, bar graphs, and simple pictographs; interpret data also
14. Identify and describe attributes of 2-dimensional and 3-dimensional figures, as well as shapes that are congruent
15. Use symbols and pictures to represent #'s 12, 13 & 14
16. Determine the value of a set of coins through $1.00.
17. Solve problems involving adding and subtracting up to $1.00
Grade 2-
1. Understand and complete problems with numbers up to 1000
2. Identify numbers 10 more or less
3. Know the concepts of odd, even and equal to.
4. Know plane and solid figures
5. Add and subtract 2- and 3-digit numbers using a variety of strategies and solve problems involving addition and subtraction using models and number sentences.
6. Identify missing numbers in number sentences.
7. Estimate and measure length, weight, temperature, time, and capacity.
8. Demonstrate mastery of basic addition and subtraction facts for sums through 18.
9. Solve problems involving money through $10.00.
Grade 3
1. Represent 3- and 4-digit numbers in a variety of ways, and
subtract 2- and 3-digit numbers with regrouping.
2. Solve problems involving the area and perimeter of figures.
3. Demonstrate mastery of multiplication facts for 0, 1, 2, 5,
and 10; Solve multiplication and division problems using a variety of strategies. (Estimate these also)
4. Identify, describe, and classify 2- and 3-dimensional shapes;
Describe and represent slides, flips, and turns using pictures and objects.
5. Identify angles and describe how they compare with right angles.
6. Locate whole numbers and fractions with denominators of 2, 3, and 4 on a number line.
7. Time- tell time accurately and estimate and determine elapsed time using clocks and calendars.
8. Locate points on a grid or map.
9. Show mastery of temperature and measurement of it.
Grade 4-
1. Mean, Median, Mode and Range mastery
2. Classification of angle types.
3. Use place value skills through millions/millionths place.
4. Use problem solving skills.
5. Understands the concept of negative numbers.
6. Identify and describe points, lines, line segments, and rays.
7. Describe the relationship between fractions and decimals.
8. Multiply fractions and whole numbers using models and pictures.
9. Understands basic probability concepts.
10. Multiply any whole number by a 2- or 3-digit factor and divide any whole number by a 1-digit divisor.
11. Solve problems involving area, perimeter, volume, and elapsed time.
Grade 5-
1. Mastery of graphs, tables, etc. This includes creating, interpreting, and analyzing them.
2. Understanding of Greatest Common Factor and Least Common Multiple.
3. Know the radius, diameter, center, and circumference of a circle.
4. Compare and calculate fractions to percents to decimals.
5. Order decimals.
6. Solve volume, perimeter and area of a figure or space.
7. Ratios and Probability.
8. Solve tessellation puzzles.
9. Use simple Algebraic expressions.
Below you will find suggestions for excellent Math curriculum guidelines, resources and tools for homeschooling.
|
{"url":"http://www.bellaonline.com/ArticlesP/art27578.asp","timestamp":"2014-04-20T15:55:26Z","content_type":null,"content_length":"11131","record_id":"<urn:uuid:89550c71-4e90-41c5-b454-193edfc2c81b>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00445-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Clay Mathematics Institute
Yang–Mills and Mass Gap
The laws of quantum physics stand to the world of elementary particles in the way that Newton's laws of classical mechanics stand to the macroscopic world. Almost half a century ago, Yang and Mills
introduced a remarkable new framework to describe elementary particles using structures that also occur in geometry. Quantum Yang-Mills theory is now the foundation of most of elementary particle
theory, and its predictions have been tested at many experimental laboratories, but its mathematical foundation is still unclear. The successful use of Yang-Mills theory to describe the strong
interactions of elementary particles depends on a subtle quantum mechanical property called the "mass gap": the quantum particles have positive masses, even though the classical waves travel at the
speed of light. This property has been discovered by physicists from experiment and confirmed by computer simulations, but it still has not been understood from a theoretical point of view. Progress
in establishing the existence of the Yang-Mills theory and a mass gap will require the introduction of fundamental new ideas both in physics and in mathematics.
|
{"url":"http://www.claymath.org/millenium-problems/yang%E2%80%93mills-and-mass-gap","timestamp":"2014-04-17T21:46:05Z","content_type":null,"content_length":"23366","record_id":"<urn:uuid:850e340e-4b45-470c-b69f-c87d39675c12>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Information Theory. Wiley-Interscience, 1975
"... A new definition of program-size complexity is made. H(A;B=C;D) is defined to be the size in bits of the shortest self-delimiting program for calculating strings A and B if one is given a
minimal-size selfdelimiting program for calculating strings C and D. This differs from previous definitions: (1) ..."
Cited by 333 (16 self)
A new definition of program-size complexity is made. H(A;B=C;D) is defined to be the size in bits of the shortest self-delimiting program for calculating strings A and B if one is given a
minimal-size selfdelimiting program for calculating strings C and D. This differs from previous definitions: (1) programs are required to be self-delimiting, i.e. no program is a prefix of another,
and (2) instead of being given C and D directly, one is given a program for calculating them that is minimal in size. Unlike previous definitions, this one has precisely the formal properties of the entropy concept of information theory. For example, H(A;B) = H(A) + H(B=A) + O(1). Also, if a program of length k is assigned measure 2^(-k), then H(A) = -log2 (the probability that the standard universal computer will calculate A) + O(1). Key Words and Phrases: computational complexity, entropy, information theory, instantaneous code, Kraft inequality, minimal
program, probab...
- Control Method, Workshop on Privacy and Electronic Society, 10 th ACM CCS , 2000
"... In this paper, we first introduce minimal, maximal and weighted disclosure risk measures for microaggregation disclosure control method. Our disclosure risk measures are more applicable to
reallife situations, compute the overall disclosure risk, and are not linked to a target individual. After defi ..."
Cited by 4 (2 self)
In this paper, we first introduce minimal, maximal and weighted disclosure risk measures for microaggregation disclosure control method. Our disclosure risk measures are more applicable to reallife
situations, compute the overall disclosure risk, and are not linked to a target individual. After defining those disclosure risk measures, we then introduce an information loss measure for
microaggregation. The minimal disclosure risk measure represents the percentage of records, which can be correctly identified by an intruder based on prior knowledge of key attribute values. The
maximal disclosure risk measure considers the risk associated with probabilistic record linkage for records that are not unique in the masked microdata. The weighted disclosure risk measure allows
the data owner to compute the risk of disclosure based on weights associated with different clusters of records. Information loss measure, introduced in this paper, extends the existing measure
proposed by Domingo-Ferrer, and captures the loss of information at record level as well as from the statistical integrity point of view. Using simulated medical data in our experiments, we show that
the proposed disclosure risk and information loss measures perform as expected in real-life situations..
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3414614","timestamp":"2014-04-21T00:47:30Z","content_type":null,"content_length":"15800","record_id":"<urn:uuid:44a73f5b-b1c6-449d-9d91-4124e59434ba>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] Why does assert_array_almost_equal sometimes raise ValueError instead of AssertionError ?
David Cournapeau david@ar.media.kyoto-u.ac...
Mon Jul 27 02:00:34 CDT 2009
In some cases, some of the testing functions assert_array_* raise a
ValueError instead of AssertionError:
>>> np.testing.assert_array_almost_equal(np.array([1, 2, np.nan]),
np.array([1, 2, 3])) # raises ValueError
>>> np.testing.assert_array_almost_equal(np.array([1, 2, np.inf]),
np.array([1, 2, 3])) # raises AssertionError
This seems at least inconsistent - is there a rationale or should this
be considered as a bug ?
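(A possible caller-side workaround, sketched here rather than taken from the thread — the helper name assert_close is mine: wrap the call so that any failure surfaces as AssertionError regardless of which code path the comparison takes.)

import numpy as np

def assert_close(actual, desired, decimal=6):
    # Normalize the failure type: re-raise a ValueError from the comparison
    # (seen with nan inputs) as the AssertionError callers normally expect.
    try:
        np.testing.assert_array_almost_equal(actual, desired, decimal=decimal)
    except ValueError as err:
        raise AssertionError(str(err))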
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-July/044173.html","timestamp":"2014-04-17T01:32:34Z","content_type":null,"content_length":"3405","record_id":"<urn:uuid:ae99a006-f83f-4d19-ae3d-7bf4a067d3b3>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
|
5-Color Theorem
5-color theorem – Every planar graph is 5-colorable.
Proof by contradiction.
Let G be the smallest planar graph (in terms of number of vertices) that cannot be colored with five colors.
Let v be a vertex in G of minimum degree. We know that deg(v) < 6 (from the corollary to Euler's formula, every planar graph has a vertex of degree at most 5).
Case #1: deg(v) ≤ 4. G-v can be colored with five colors.
There are at most 4 colors that have been used on the neighbors of v. There is at least one color then available for v.
So G can be colored with five colors, a contradiction.
Case #2: deg(v) = 5. G-v can be colored with 5 colors.
If two of the neighbors of v are colored with the same color, then there is a color available for v.
So we may assume that all the vertices that are adjacent to v are colored with colors 1,2,3,4,5 in the clockwise order.
Consider all the vertices being colored with colors 1 and 3 (and all the edges among them).
If this subgraph of G is disconnected and v[1] and v[3] are in different components, then we can switch the colors 1 and 3 in the component containing v[1].
This will still be a 5-coloring of G-v. Furthermore, v[1] is colored with color 3 in this new 5-coloring and v[3] is still colored with color 3. Color 1 would be available for v, a contradiction.
Therefore v[1] and v[3] must be in the same component in that subgraph, i.e. there is a path from v[1] to v[3] such that every vertex on this path is colored with either color 1 or color 3.
Now, consider all the vertices colored with colors 2 and 4 (and all the edges among them). If v[2] and v[4] don't lie in the same connected component, then we can interchange the colors in the chain starting at v[2] and use the leftover color for v.
If they do lie in the same connected component, then there is a path from v[2] to v[4] such that every vertex on that path has either color 2 or color 4.
But the path from v[1] to v[3] found above, together with v, forms a closed curve separating v[2] from v[4], so this path would have to cross it; this means that there must be two edges that cross each other. This contradicts the planarity of the graph and hence concludes the proof. ∎
|
{"url":"http://cgm.cs.mcgill.ca/~athens/cs507/Projects/2003/MatthewWahab/5color.html","timestamp":"2014-04-21T14:58:58Z","content_type":null,"content_length":"5967","record_id":"<urn:uuid:cf2aa646-e1fd-478b-bbc6-ece38f81dee1>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Similar Searches: mathematics, linear algebra, algebra 2 teacher edition, algebra 1, finite mathematics 2nd edition, axler, algebra 2, intermediate algebra 2nd edition, general chemistry second
edition, engineer mathematics, prentice hall algebra 1, the college board, algebra two, algebra 1 2, pre algebra 2nd edition lial, dan kennedy, basic college mathematics fourth edition, mathematics
2, prentice hall algebra, and prentice hall algebra 1 california edition
We strive to deliver the best value to our customers and ensure complete satisfaction for all our textbook rentals.
As always, you have access to over 5 million titles. Plus, you can choose from 5 rental periods, so you only pay for what you’ll use. And if you ever run into trouble, our top-notch U.S. based
Customer Service team is ready to help by email, chat or phone.
For all your procrastinators, the Semester Guarantee program lasts through January 11, 2012, so get going!
*It can take up to 24 hours for the extension to appear in your account. **BookRenter reserves the right to terminate this promotion at any time.
With Standard Shipping for the continental U.S., you'll receive your order in 3-7 business days.
Need it faster? Our shipping page details our Express & Express Plus options.
Shipping for rental returns is free. Simply print your prepaid shipping label available from the returns page under My Account. For more information see the How to Return page.
Since launching the first textbook rental site in 2006, BookRenter has never wavered from our mission to make education more affordable for all students. Every day, we focus on delivering students
the best prices, the most flexible options, and the best service on earth. On March 13, 2012 BookRenter.com, Inc. formally changed its name to Rafter, Inc. We are still the same company and the same
people, only our corporate name has changed.
|
{"url":"http://www.bookrenter.com/mathematic/search--p7","timestamp":"2014-04-19T08:52:07Z","content_type":null,"content_length":"42427","record_id":"<urn:uuid:bd7fac86-70a1-4032-91dd-3952522fe727>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Limit of a natural logarithm
I've some trouble understanding how to work with the natural logarithm when trying to calculate a limit. I just can't seem to fit what I know together:
- I know that if $x \to 1$, then $\ln x \approx x - 1$, because $\ln(1) = 0$, as $e^0 = 1$. $\ln x, 0 < x < 1$ is a negative, so is $x - 1, x < 1$, so their quotient is a positive.
- I know the given standard limits: $\lim_{x \to 1}\frac{\ln x}{x -1} = 1$ (following from the above: $\frac{a}{a} = 1$) and $\lim_{x \to 0}\frac{\ln (1 + x)}{x } = 1$
But, I just can't seem to fit it together when I'm asked to solve an exercise like this: $\lim_{x \to 0}\frac{\ln (1 + x^2)}{x } = x$
How does this all fit together so I actually get to x? I need this explained in baby steps, it just won't ring a bell at the moment
Last edited by Lepzed; July 19th 2012 at 10:13 PM.
Re: Limit of a natural logarithm
Okay, here it is. You can do it in any of these ways :-
1:- $\lim_{x \to 0} \frac{\ln(1+x^2)}{x} \cdot \frac{x}{x}$
So we have multiplied and divided by x
now using this
$\lim_{f(x) \to 0} \frac{\ln(1+f(x))}{f(x)} = 1$
(You can try this yourself. Just apply L'hospital and you will get it)
we get the limit as 1*x as x->0
(Where f(x) = x^2)
so it becomes 0
2:- Just Apply L'hospital as both the numerator and denominator are tending to zero.
You will get $\frac{2x}{1+x^2}$, which tends to 0 as $x \to 0$.
Spare me for my poor Latex. I am trying to learn.
cheers !
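Filling in the steps described above in one line:
$\lim_{x \to 0}\frac{\ln(1+x^2)}{x} = \lim_{x \to 0}\, x\cdot\frac{\ln(1+x^2)}{x^2} = 0 \cdot 1 = 0$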
Re: Limit of a natural logarithm
Made a mistake, brb
I'm actually not quite sure what's happening here, why do you introduce $\frac{x}{x}$?
How does this apply to:
$lim_{x \to 0}\frac{ln(1 + 2x)}{x}$?
I'm confused to say the least
Last edited by Lepzed; July 20th 2012 at 12:22 AM.
Re: Limit of a natural logarithm
See the thing inside the log guy is
$\frac {log(1+x^2)}{x}$
and to use the result we need $x^2$ in the denominator as well
so we make it $x^2$, But we need to compensate for that extra x . So we multiply it again.
Its the same thing that you do in rationalizing ....multiply divide by conjugate. Its a jugglery, that's all.
$\lim_{x \to 0} \frac{1+2x}{x} = \infty$
You can see the numerator goes to 1 as denominator goes to 0 so it becomes infinitely huge as you approach 0.
Re: Limit of a natural logarithm
Sorry i meant ln(1 + 2x), thanks again tho!
Re: Limit of a natural logarithm
ya so multiply divide by 2 so that denominator becomes 2x and the limit becomes 2 .
Also you can just apply L'Hospital and get the limit as 2 .
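Written out the same way for the corrected exercise:
$\lim_{x \to 0}\frac{\ln(1+2x)}{x} = \lim_{x \to 0}\, 2\cdot\frac{\ln(1+2x)}{2x} = 2 \cdot 1 = 2$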
|
{"url":"http://mathhelpforum.com/pre-calculus/201167-limit-natural-logarithm.html","timestamp":"2014-04-16T14:48:27Z","content_type":null,"content_length":"45391","record_id":"<urn:uuid:1263166c-9528-48d3-b34e-6696faab15fd>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A non-uniform bound for translated Poisson approximation
Andrew D Barbour (Universitat Zurich) Kwok Pui Choi (National University of Singapore)
Let $X_1, \ldots , X_n$ be independent, integer valued random variables, with $p^{\text{th}}$ moments, $p >2$, and let $W$ denote their sum. We prove bounds analogous to the classical non-uniform
estimates of the error in the central limit theorem, but now, for approximation of ${\cal L}(W)$ by a translated Poisson distribution. The advantage is that the error bounds, which are often of order
no worse than in the classical case, measure the accuracy in terms of total variation distance. In order to have good approximation in this sense, it is necessary for ${\cal L}(W)$ to be sufficiently
smooth; this requirement is incorporated into the bounds by way of a parameter $\alpha$, which measures the average overlap between ${\cal L}(X_i)$ and ${\cal L}(X_i+1), 1 \le i \le n$.
Full Text: Download PDF | View PDF online (requires PDF plugin)
Pages: 18-36
Publication Date: February 4, 2004
DOI: 10.1214/EJP.v9-182
|
{"url":"http://www.emis.de/journals/EJP-ECP/article/view/182.html","timestamp":"2014-04-16T07:15:26Z","content_type":null,"content_length":"20299","record_id":"<urn:uuid:2fc98504-0272-41ec-9ffa-75a7c7f25e70>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Learn about Applications of the Discriminant
Word problem: the height above the ground of a model rocket on a particular launch can be modeled by the equation h = -4.9t² + 102t + 100, where t is the time in seconds. Will the rocket reach a height of 600 meters? Use the discriminant.
So in this problem we need to check whether the rocket will reach a height of 600 meters, which means we need to plug that height into the equation. Our equation says h = -4.9t² + 102t + 100, and we plug in h = 600, which gives 600 = -4.9t² + 102t + 100.
Now we need to write the equation in standard form, and remember that standard form is ax² + bx + c = 0. To get the equation to equal 0, subtract 600 from both sides, which gives 0 = -4.9t² + 102t + (-500), or equivalently -4.9t² + 102t - 500 = 0.
Now that it is written in standard form we can identify a, b and c: in this equation a is -4.9, b is 102 and c is -500. Next we evaluate the discriminant, and remember that when the equation is written in standard form the discriminant is b² - 4ac. Plugging in, b² - 4ac = 102² - 4(-4.9)(-500). Now 102² is 10,404 and 4 × 4.9 × 500 is 9,800, so subtracting gives 604. The discriminant is 604, which is positive. Remember: if the discriminant is greater than zero there are two real solutions, if it equals zero there is one real solution, and if it is less than zero there are no real solutions.
Since this discriminant is 604, which is greater than zero, the equation has two real solutions, which means there are two values of t at which the height h is 600. So the rocket will reach a height of 600 meters twice: once on the way up and once on the way down.
Things to keep in mind: the number of solutions of a quadratic equation can be determined by evaluating its discriminant, just as we did here. If the quadratic equation is written in standard form, its discriminant is b² - 4ac. If the discriminant is greater than zero the equation has two real solutions, if it is equal to zero there is one real solution, and if it is less than zero there are no real solutions.
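Pulled out of the transcript, the key computation in standard notation is:
$600 = -4.9t^2 + 102t + 100 \;\Rightarrow\; -4.9t^2 + 102t - 500 = 0$
$\Delta = b^2 - 4ac = 102^2 - 4(-4.9)(-500) = 10404 - 9800 = 604 > 0,$
so the quadratic has two real roots, i.e. two times t at which the height equals 600 m.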
|
{"url":"http://www.healthline.com/hlvideo-5min/learn-about-applications-of-the-discriminant-286300979","timestamp":"2014-04-17T15:29:02Z","content_type":null,"content_length":"39165","record_id":"<urn:uuid:99bb998c-ecc2-4183-9742-e9a6b32b58e8>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Course Notes
We will not be using a textbook this semester, but rather a task-sequence adopted for IBL. The task-sequence that we are using was written by me, but the first half of the notes are an adaptation of
notes written by Stan Yoshinobu (Cal Poly) and Matthew Jones (California State University, Dominguez Hills). Any errors in the notes are no one's fault but my own. In this vein, if you think you see
an error, please inform me, so that it can be remedied.
In addition to working the problems in the notes, I expect you to be reading them. I will not be covering every detail of the notes and the only way to achieve a sufficient understanding of the
material is to be digesting the reading in a meaningful way. You should be seeking clarification about the content of the notes whenever necessary by asking questions in class or posting questions to
the course forum on our Moodle page.
You can find the course notes below. I reserve the right to modify them as we go, but I will always inform you of any significant changes. The notes will be released incrementally.
• Chapter 1: Introduction to Mathematics
• Chapter 2: Set Theory and Topology
• Chapter 3: Induction
• Chapter 4: Relations and Functions
|
{"url":"http://danaernst.com/archive/spring2012/ma3110/notes.html","timestamp":"2014-04-18T23:48:48Z","content_type":null,"content_length":"5847","record_id":"<urn:uuid:451371bd-5f3c-4a40-bdf1-e2a8ddb2e641>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - Applied Mathematics vs. Science
I'm an applied mathematician. I (and almost everybody I work with) have very little interest in existence/uniqueness theorems. I have never actually done a proof in the context of my research. In
fact, of the literature that I read, I would say 98% (sort of random figure) have no theorem/proofs.
I can't actually name any theorem/proof paper I've read in the last 2-3 years. Okay, that's a lie. I can name one. Written by a Russian.
nice, i want to avoid rigorous theorems/proofs as much as possible
|
{"url":"http://www.physicsforums.com/showpost.php?p=1968148&postcount=9","timestamp":"2014-04-18T15:46:40Z","content_type":null,"content_length":"7933","record_id":"<urn:uuid:8fc23437-dc62-432d-8f99-96ef10ab5406>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Decision Modeling and Optimization in Game Design, Part 3: Allocation and Facility Location Problems
This article is the third in an ongoing weekly series on the use of decision modeling and optimization techniques for game design. The full list of articles includes:
SuperTank: Solved!
In the first article in this series, we introduced an example problem for a game called SuperTank. In the second article, we introduced the basic concepts of decision modeling and guided you through a simple example using Excel's Solver tool.
Now, it’s time to apply the techniques we learned in Part 2 to the SuperTank problem from Part 1, and prove that we can use them to quickly and easily solve SuperTank. Just to refresh your memory,
SuperTank is a game where you’re able to fight with a customizable tank. A SuperTank looks something like this:
Each SuperTank can have any number of weapons of five different types, as shown below:
Your SuperTank can hold 50 tons worth of weapons, and you have 100 credits to spend. Your SuperTank also has 3 “critical slots” used by special weapons like MegaRockets and UltraLasers.
You can download the spreadsheet for this example
The goal is to pick the weapons that maximize the SuperTank’s damage, while staying within the limits of 50 tons, 100 credits, and 3 critical slots. We also assume that this table properly
encapsulates everything we need to know, and that factors such as weapon range, refire rate, and accuracy are either irrelevant or are already properly factored into the “Damage” setting for that weapon.
In order to optimize this, we’ll first enter the table above into a spreadsheet. Then, just below it, we’ll enter a second table that has a set of 5 “quantity” cells to specify the quantity of each
of the 5 weapon types.
For now, we’ll enter a value of ‘1’ into these cells just to test that they work properly, but these are our decision cells – we will ask Excel Solver to find the correct values of these cells for us
(you can tell that they are decision cells due to the yellow coloring, as we are following the formatting guidelines laid out in the second article). To the right of “quantity” cells, we’ll add
calculation cells that multiply the quantity values in these decision cells by the Damage, Weight, Cost, and Critical Slots values from the table above. Thus, each row of this table will properly add
up how much damage, weight, cost, and critical slots all the weapons in each weapon category will use up.
Finally, we set up a section below that sums up all the quantity, weight, cost, and critical slots values from the table above, and compares those against the maximum weight, cost, and critical slots
settings specified in the problem statement (50, 100, and 3, respectively).
Following our formatting conventions from Part 2 of this series, the blue cells along the top are our criteria from the problem statement. The grey cells are calculation cells representing the total
weight, cost, and critical slot values based on the summations from the quantity table above (i.e. the totals of the “Weight x Quantity,” “Cost x Quantity,” and “Critical Slots x Quantity” columns).
Finally, the orange cell represents the total damage of our SuperTank, based on the total damage from the “Damage x Quantity” column of the table above.
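For readers who want to sanity-check the same model outside of Excel, here is a rough Python sketch of the identical search: enumerate candidate quantities, throw out any combination that exceeds the weight, cost, or critical-slot limits, and keep the highest-damage survivor. The weapon stats below are placeholders (the article's actual table is only shown as an image), so the loadout this prints will not match the numbers discussed here; the point is the shape of the model, not the values.

from itertools import product

# Hypothetical stand-in stats (damage, weight, cost, critical slots) per weapon.
# These are NOT the values from the article's table.
WEAPONS = {
    "MachineGun": (4, 2, 6, 0),
    "Rocket":     (6, 4, 8, 0),
    "MegaRocket": (10, 7, 12, 1),
    "Laser":      (8, 5, 10, 0),
    "UltraLaser": (16, 11, 20, 1),
}
MAX_WEIGHT, MAX_COST, MAX_SLOTS = 50, 100, 3

def best_loadout(max_per_weapon=10):
    names = list(WEAPONS)
    best_damage, best_counts = -1, None
    # Enumerate every quantity combination and keep the best feasible one.
    for counts in product(range(max_per_weapon + 1), repeat=len(names)):
        damage = weight = cost = slots = 0
        for name, qty in zip(names, counts):
            d, w, c, s = WEAPONS[name]
            damage += d * qty
            weight += w * qty
            cost += c * qty
            slots += s * qty
        if weight <= MAX_WEIGHT and cost <= MAX_COST and slots <= MAX_SLOTS:
            if damage > best_damage:
                best_damage, best_counts = damage, dict(zip(names, counts))
    return best_damage, best_counts

print(best_loadout())

Because the search is exhaustive (up to the per-weapon cap), it also settles the "is this really the optimum?" question for whatever stats you feed it, and re-running it with a different MAX_COST reproduces the kind of sensitivity experiment described later in this article.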
Before we run ahead and solve this, let’s take the time to make our spreadsheet a bit more user-friendly. We’ll take advantage of Excel’s ability to name each cell to give user-friendly names to
these 7 cells in the final calculation table. This isn’t strictly necessary, but in the long run, it will help make our spreadsheet significantly more understandable if we can give cells readable
names such as “MaxCriticalSlots” instead of “$F$21.” To do this, we simply select a cell and go to the name entry box above the spreadsheet and just to the left of the formula bar and type in the new name.
Finally, let’s go to Excel Solver and find the solution (go to the right side of the “Data” tab and select “Solver”; if you don’t see it, go to Excel Options, select the Add-Ins category, ensure that
the “Manage” drop box is set to “Excel Add-Ins,” hit “Go…” and ensure that “Solver Add-in” is checked.)
Under “Set Objective,” we’ll select the orange objective cell and select the “Max” radio button below it. Under “By Changing Variable Cells,” we’ll select the decision cells (the yellow cells in the
“Quantity” column of the second table). Below that, we’ll hit the “Add” button to add constraints as follows:
• The decision cells should be between 0 and some reasonable maximum (we pick 50, even though this is probably a much larger upper limit than we need). It’s also essential to set the “= integer”
constraint on each decision cell, since you cannot have a fractional amount of any given weapon, and since Excel Solver assumes any variable is a real number unless you tell it otherwise, it
would undoubtedly try to do this if we let it.
• We also set the total cost, total weight, and total critical slots values to less than the maximums specified in the problem statement. You can see from the dialog image below that these now have
the nice user-specified names we added in the table below, making this dialog much more readable.
Now we hit the “Solve” button, and after a brief wait, Solver fills in the “Quantity” values for us, giving us:
• 1 Machine Gun
• 3 Rockets
• 2 MegaRockets
• 1 Laser
• 1 UltraLaser
All of this gives us total damage of 83 and uses exactly 50 tons, 100 credits, and 3 critical slots. You can see that the best solution does not change no matter how much time you give Solver to run.
If you reset these values and re-optimize, or if you go to Options and change the random seed, it will still give you these values. We can't be 100% sure this is the optimal solution, but given that
Solver seems to be unable to improve on it after repeated optimization passes, it seems quite likely that this is the real optimum and not just a local maximum.
Problem solved!
Additional Uses
What’s exciting about this is that we've not only solved the problem much more quickly than we could have done by hand, we've also set it up to allow us to test which weapons are most useful with
different (weight, cost, critical slots) settings for a SuperTank. This means that we can relatively easily see and measure the effects of various changes to any of these parameters on the SuperTank,
and if we wanted to introduce a new alternate model of SuperTank that was lighter, heavier, or had a different number of critical slots, we could do so very easily.
We can also get a sense of the relative utility of all of our five weapon types as we modify all of these parameters, and quickly identify which weapons are too useful, not useful enough,
inappropriately priced for their weight and damage characteristics, and so on.
Again, the point is that this kind of tool allows us to search through the design space much more quickly than we could do by hand. For any incremental design decision we might consider, whether it
be changing any of the parameters of the weapons or the SuperTank itself, adding new weapons or SuperTank models, or adding new parameters (say, a Size requirement measured in cubic meters), this
gives us a very easy way to get visibility into some of the ramifications of that potential change.
To see what I mean, go to the blue “Max Cost” cell and change it from 100 to 99. Then re-run Solver, and you should now get a very different weapon loadout:
• 0 Machine Guns
• 2 Rockets
• 3 MegaRockets
• 3 Lasers
• 0 UltraLasers
This loadout gets a slightly lower damage score (82 instead of 83), but is radically different from the previous loadout.
If you set Max Cost to 101 or 102 and re-run, chances are that you’ll get a configuration similar or identical to the first one; in either case, damage will remain at 83 (actual results may vary
since there are several optimal loadouts in these cases). However, if you set Max Cost to 103, you should get:
• 1 Machine Gun
• 4 Rockets
• 2 MegaRockets
• 0 Lasers
• 1 UltraLaser
… which increases our total damage to 84.
This is interesting; this weapon loadout is very different from the first two.
As you can see, we get a surprising result: the optimal choice of weapons in our loadout is highly dependent on the SuperTank's parameters and can change dramatically with even a tiny change in those
parameters. This also gives us all kinds of other useful information: all five of the weapon types are useful in at least two of the three SuperTank settings, with Rockets and MegaRockets having
clear utility in all three. This seems to indicate that all five weapons are well-balanced in the sense that they are all useful relative to one another, while at the same time remaining unique.
And you can also see, this sort of decision modeling and optimization gives us an excellent way to quickly search the local neighborhood and re-optimize. For some problem types, it can allow us to
gain visibility into dominant player strategies and exploits that might be difficult or impossible to find any other way.
Wormhole Teleporters
After looking at the last two examples (the strategy game tax rate example and the SuperTank), you may think these techniques only apply to cases where users have to deal with numbers. Nothing could
be further from the truth! As we’ll see, there are countless instances where we can gain benefits from optimizing design elements that either do not appear as numbers to the user, or do not seem to
involve any numbers at all!
You might also be inclined to conclude that decision modeling only applies when modeling the decisions that players will make in our games. This is also incorrect: in some cases, you can also use it
to model and optimize your own decisions as a designer.
Assume that you are working on a massively multiplayer space-based role-playing game. One day your design lead comes to you with a look of worry on his face. “We're just wrapping up a redesign of the
Omega Sector,” he says, “and we’re running into a problem. We're planning to have some wormhole teleporters in that world segment, but we can’t agree on where to put them.”
“How many teleporters?” you ask.
“We’re not sure yet. It will probably be three, but it could be anywhere between 2 and 4. We just don’t know right now.” Then he lays down a map that looks like this:
“What’s that?” you ask.
“It’s a map of the Omega Sector. Or at least, the star systems the player can visit within that quadrant. We need you to figure out which cells should contain wormholes.”
“OK, what are the rules for wormhole placement? And can I have a wormhole in the same quadrant as a star system?”
“We want you to place the wormholes in a way that minimizes the distance between any star system and the nearest wormhole. And yes, you can put them in the same quadrant as a star system; they’re
just small teleporters hanging in space, so you can put them anywhere. And don’t forget, we haven’t agreed on how many wormholes this sector should have yet, so try to give me solutions for 2, 3, and
4 wormholes.”
How would you frame this problem, and how would you solve it?
Optimize Me Up, Scotty!
Let’s start by setting up the decision cells. We’ll call the four wormhole teleporters ‘A,’ ‘B,’ ‘C,’ and ‘D.’ We know that each teleporter is essentially nothing more than an (x,y) coordinate on the
Omega Sector star map above. We also know that we’ll need some way to specify how many of these four teleporters are actually active, so we’ll add a cell to let us specify the number of teleporters.
We will use teleporter ‘D’ only in the case where we are using 4 wormholes, and we’ll use ‘C’ only when we have 3 or more.
Below that, we’ll set up a table to figure out the distance from each star system to the nearest teleporter. That table looks like this:
On the left side, in blue, we have the coordinates of each star system on the map. Each star system is represented in one row. We simply typed this in from the Omega Sector map we were handed above.
To the right of that, we calculate the distance to each of the four teleporters A, B, C, and D. This is simply the Pythagorean theorem: the distance is the square root of the sum of the squared horizontal and vertical offsets between the star system and the teleporter:
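(The formula itself appears only as a spreadsheet screenshot in the original post and is not reproduced here; it is presumably of the following shape, where X and Y stand for the star system's coordinate cells in that row and Ax and Ay are the named cells holding teleporter A's coordinates.)

Dist to A = SQRT( (X - Ax)^2 + (Y - Ay)^2 )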
(Don’t worry – I promise that this is as complicated as the math will get in this series!)
We get the X and Y coordinates of each star system from the blue cells in the table above, and we get the X and Y coordinates of each teleporter (the cells named “Ax” and “Ay” for teleporter A in the
SQRT() function above) from the yellow decision cells at the top.
Finally, we take the minimum of these four values in the “Dist to Closest” column, which simply uses the MIN() function to determine the minimum of the four values immediately to its left. We then
sum that entire column at the bottom; this sum is our objective cell.
You may have also noticed that in the screenshot above, the “Dist to D” cells all have the value 99. The reason for this is that we use the “Number of Teleporters?” cell in the section at the top of
the decision model to let us tweak the number of teleporters we are considering. If the number of teleporters is 2, we use the value ’99’ in both the “Dist to C” and “Dist to D” columns, while if it
is 3, we use ‘99’ in the “Dist to D” column only. This ensures that each star system will ignore any extraneous teleporters when calculating the distance to the closest teleporter in the case of 2 or
3 teleporters.
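As a rough cross-check on the spreadsheet logic, here is a small Python sketch of the same objective function: given the star-system coordinates and a list of currently active teleporters, it sums the distance from each system to its nearest teleporter. The star coordinates below are placeholders, since the actual Omega Sector map is only given as an image, and leaving inactive teleporters out of the list plays the same role as the "99" sentinel described above.

from math import sqrt

# Placeholder star-system coordinates; the real Omega Sector map is an image.
STARS = [(1, 2), (3, 7), (5, 5), (8, 1), (10, 9), (12, 4)]

def total_min_distance(teleporters):
    # Sum, over all star systems, of the distance to the closest active teleporter.
    total = 0.0
    for (sx, sy) in STARS:
        total += min(sqrt((sx - tx) ** 2 + (sy - ty) ** 2)
                     for (tx, ty) in teleporters)
    return total

# Example: score one candidate placement of three teleporters.
print(total_min_distance([(3, 5), (9, 3), (11, 8)]))

Solver's job is then to choose the teleporter coordinates that minimize this total; on a small integer grid you could even brute-force it, although the search space grows quickly with the number of teleporters.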
Now, we run Solver, as before:
The objective cell is the sum at the bottom of our “Dist to Closest” column. Note that unlike other examples, we want to use “To: Min” in the radio button for this, because we want the minimum
distance from all the star systems to our teleporters, not the maximum.
Below that, we specify the decision cells (“By Changing Variable Cells”) as the eight yellow decision cells for the X and Y coordinates of wormholes A, B, C, and D. In the constraints section at the
bottom, we constrain each of our coordinates to be an integer between 0 and 12. Note that we are using an integer constraint on these decision cells because we are assuming our design lead simply
wants to know which cell each teleporter should be in, but we could just as easily skip this constraint if our design lead wanted to know a real-valued location.
If we set the “Number of Teleporters?” cell to 2, 3, and 4, and re-run Solver at each setting, we get the following configurations:
Armed with this information, you can go back to your design lead and show the optimal locations to place any number of teleporters between 2 and 4. Here is what these optimal wormhole locations look
like on the map (shown in green) for 2, 3, and 4 wormholes, respectively.
You can download the spreadsheet for this example.
Did I Mention There Are Ninjas?
“OK, that’s terrific,” your design lead replies, but you can see a slight look of anguish on his face. “But, uhh … well, I forgot to tell you some of these star systems are inhabited by Space Ninjas.
And we actually want the systems with Space Ninjas to be farther away from the wormholes, because we don’t want players to feel too threatened.”
“Oh. Well, that kind of throws a monkey wrench into things.”
“Yeah. Also, some star systems have 2 colonies in them instead of one, so it makes it twice as important for them to be closer to the wormhole teleporters. Or twice as important for them to be
farther, in the case of that one star system that has 2 Space Ninja colonies. Here’s what the map looks like now:”
He continues: “Every negative number is a Space Ninja colony. The system with a ‘2’ has two human colonies, while the ‘-2’ has two Space Ninja Colonies. So, can you tell us where to put the
teleporters in this case?”
“Tell me you've at least decided whether there will be 2, 3, or 4 teleporters by now,” you reply snarkily.
“No such luck, I’m afraid.”
Solving For Ninjas
In order to solve this, we need to add a new column to our table to represent the weightings in the table above. We will call this the “multiplier.” We will then multiply this value by the value in
the “Dist to Closest” column.
When we do this, though, “Dist to Closest” changes its meaning slightly. It's not really the distance to the closest teleporter, since for Space Ninja star systems, it's -1 times that. It's more of
a generalized “score,” so let’s call it that instead.
In this way, the score now represents an aggregate value. By minimizing it, we ensure that Solver attempts to be as close as possible to human-colonized star systems and as far as possible from Space
Ninja-occupied systems simultaneously.
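In code terms, the Space Ninja change is a one-line tweak to the earlier sketch: each star system carries a multiplier (negative for Space Ninja systems, 2 or -2 for double colonies), and the objective sums multiplier times nearest-teleporter distance instead of the raw distance. The weights below are illustrative, not taken from the article's map.

from math import sqrt

# (x, y, multiplier): negative multipliers mark Space Ninja systems,
# and 2 / -2 mark double colonies. Values here are illustrative only.
WEIGHTED_STARS = [(1, 2, 1), (3, 7, -1), (5, 5, 2), (8, 1, 1), (10, 9, -2), (12, 4, 1)]

def total_score(teleporters):
    total = 0.0
    for (sx, sy, mult) in WEIGHTED_STARS:
        nearest = min(sqrt((sx - tx) ** 2 + (sy - ty) ** 2)
                      for (tx, ty) in teleporters)
        total += mult * nearest
    return total

Minimizing this total now pulls the teleporters toward positive-weight systems and pushes them away from negative-weight ones at the same time.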
Now we get the following results:
As you can see, this gives us a significantly different wormhole configuration in each case from the simpler, pre-Space-Ninja version.
The spreadsheet for this extended version of the teleporter example can be downloaded.
As you can see, our decision model was able to very quickly solve this non-trivial problem, and we could adapt it to changing requirements remarkably quickly.
This problem is from a class of problems called "facility location problems," which are very well-studied in the operations management field. But as you can see, it has potential applications in game
design and level design as well, and is easy (to the point of being trivial) to set up in Excel.
Tune in next time, when we'll apply decision modeling and optimization to a challenging game balancing problem for multiple classes in player-vs-player (PvP) combat for a simplified role-playing game.
-Paul Tozour
Part 4 in this series is now available here.
This article was edited by Professor Christopher Thomas Ryan, Assistant Professor of Operations Management at the University of Chicago Booth School of Business.
5 comments:
1. Fascinating read.
How are these tools not given more attention? Or are they something all experienced designers know about without ever talking about it?
1. I think very few designers actually know that they exist or understand how to use them in depth. The surveys I've passed around seem to indicate that the subject matter is more or less
unknown to designers.
Most likely, the few who do, either haven't had the time to discuss them or haven't been willing to do so.
2. Nice article! I can't believe I didn't know about the solvers in excel before :(. I am going to have some fun with these.
The LP Simplex solver (select from the Solving Method dropdown) is designed to solve linear problems like this (simple combinations of quantities with multipliers). It has less knobs and dials
than the others, and if it converges it will be a guaranteed optimal solution. And it should always converge for well behaved problems like the above.
1. Thanks for the compliment, Huw!
Yes, the LP Simplex solver can work for some problems, but it's generally too simple to work on the kinds of complex, nonlinear problems we discuss in this series. They generally require the
Evolutionary or GRG solver.
2. Oops, should have read the whole article. Only the SuperTank problem is linear.
The Wormhole problem is not linear and LP will complain. I couldn't really get GRG to converge (it does seem to find an optimal solution but struggles to satisfy the integer constraint, I
think), so Evolutionary worked best for me here.
|
{"url":"http://intelligenceengine.blogspot.com/2013/07/decision-modeling-and-optimization-in_21.html","timestamp":"2014-04-19T17:04:27Z","content_type":null,"content_length":"103601","record_id":"<urn:uuid:b46acf7d-a613-4d20-8152-45e9f25a7b77>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: This is a preprint of a paper to appear in the Proceedings of the Seventeenth Annual IEEE Symposium on Logic in Computer Science, to be held July 22-25, 2002 in Copenhagen, Denmark. Copyright 2002 IEEE.
Separation Logic: A Logic for Shared Mutable Data Structures
John C. Reynolds
Computer Science Department
Carnegie Mellon University
In joint work with Peter O'Hearn and others, based on early ideas of Burstall, we have developed an extension of Hoare logic that permits reasoning about low-level imperative programs that use shared mutable data structure.
The simple imperative programming language is extended with commands (not expressions) for accessing and modifying shared structures, and for explicit allocation and deallocation of storage. Assertions are extended by introducing a ``separating conjunction'' that asserts that its subformulas hold for disjoint parts of the heap, and a closely related ``separating implication''. Coupled with the induc
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/568/3784580.html","timestamp":"2014-04-20T18:27:52Z","content_type":null,"content_length":"8585","record_id":"<urn:uuid:a5a432cb-bb4d-455b-a1e5-770fc320a832>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Final Cuts: the Players Union [Archive] - Steelers Fever Forums
09-06-2010, 12:54 PM
If anyone has any doubts about the destructiveness of Unions, then every September, when NFL teams make their final roster cuts, should be a good model to show how destructive Unions can be. There is nothing wrong with workers assembling and petitioning management about their demands and grievances. The issue is when the Unions start to demand minimum pay.
There is no reason NFL teams carry only 53 players other than the players union's convoluted pay scale. Teams could carry 60, 70, or 80 players on their rosters, like colleges do, but because of the salary cap and minimum pay policy, they can't afford to. So what happens? Some players, who are on the bubble, have their dreams of playing in the NFL burst.
Some of you may say, well if they didn't pay the stars tens of millions of dollars then they could redistribute the wealth amongst more players. This is also a fallacy. The union restricts how many
players can play football each year: 32 teams x 53 players = 1,696, plus 256 practice-squad players, for a total of 1,952 players. That may seem like a lot. But it isn't. The average NFL career is 3 years. So that means every
year there are 650 players leaving the game for one reason or another.
That means that 10 teams would disappear each year if there were no draft. Also, it means that after 3 years, the NFL would not exist. The NFL needs players consistently. The more the merrier would
be ideal. However the Union restricts the number players and hurts the game.
It is time to cut the Players union.
|
{"url":"http://forums.steelersfever.com/archive/index.php/t-56696.html","timestamp":"2014-04-17T04:19:52Z","content_type":null,"content_length":"51363","record_id":"<urn:uuid:68ce4a02-7c82-4d0f-8b59-0fb3322a995a>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - some reaction between a proton and antiproton
1. The problem statement, all variables and given/known data
Is the following reaction possible? If so, what is the type of interaction (EM, weak or strong)?
[tex]p+\bar{p}\rightarrow\pi^+ + \pi^- + \pi^0[/tex]
2. Relevant equations
Conservation laws and rules of thumb regarding types of interactions.
3. The attempt at a solution
I don't think any conservation laws are broken, so the process is possible. The question is how, exactly. I don't have any idea how to put [itex]\pi^0[/itex] into Feynman diagrams... since it's a
superposition of two separate quark configurations. I drew the following:
If [itex]\pi^0[/itex] was just [itex]u\bar{u}[/itex] then my question would be whether only changing quark configuration means that it is a strong interaction. And if so, how to draw a Feynman
diagram showing what really changed.
But [itex]\pi^0[/itex] is neither it nor [itex]d\bar{d}[/itex] - so what should I do?
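(Side note, not a full answer: the conservation-law part of the claim is easy to verify with the standard quantum numbers. Charge: [itex]+1 - 1 = 0[/itex] before and [itex]+1 - 1 + 0 = 0[/itex] after. Baryon number: [itex]+1 - 1 = 0[/itex] before, and 0 after, since pions are mesons. Lepton number is 0 throughout. Nothing is violated, and [itex]p\bar{p}[/itex] annihilation into pions is ordinarily classified as a strong-interaction process (quark-antiquark pairs annihilate and new ones are created via gluons), so neither the electromagnetic nor the weak interaction is required here.)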
|
{"url":"http://www.physicsforums.com/showpost.php?p=1360739&postcount=1","timestamp":"2014-04-19T19:42:28Z","content_type":null,"content_length":"9476","record_id":"<urn:uuid:dd3e00a2-893d-4956-b9e9-637290b4eef3>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
|
simple derivative problem!
1. This is a term and not a function, so ... 2. Where are you stuck (and why?)? 3. I assume that you want to differentiate $f(x) = \sqrt{2x^2}$ If so: Re-write the equation of the function: $f(x) = \
sqrt{2x^2}~\implies~f(x) = \left \lbrace \begin{array}{rcl}x \cdot \sqrt{2}& if & x \ge 0 \\ -x \cdot \sqrt{2}& if & x < 0\end{array} \right.$ 4. Continue!
This is where I am stuck. I have to find the equation of a line tangent to the curve $f(x) = \frac{\sqrt{2x^3}}{2}$ at the point (2,2), but when I use the quotient rule to find the derivative I get stuck when differentiating $\sqrt{2x^3}$. Why? Does the $x^\frac{1}{2}$ (from simplifying the root) get distributed to $2$ and $x^3$? If you know a simpler way of doing this, please show me.
Last edited by rabert1; April 17th 2012 at 11:33 PM.
Quotient rule not necessary here. f(x) = (root2)/2 times x^(3/2). When differentiating, (root2)/2 stays in front and x^(3/2) becomes (3/2)x^(1/2). You want this when x = 2 (I get 3/2).
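Spelled out, following the hint above (nothing new, just the algebra written in one place): $f(x) = \frac{\sqrt{2x^3}}{2} = \frac{\sqrt{2}}{2}\,x^{3/2}$, so $f'(x) = \frac{\sqrt{2}}{2}\cdot\frac{3}{2}\,x^{1/2} = \frac{3\sqrt{2}}{4}\sqrt{x}$. At $x = 2$ this gives $f'(2) = \frac{3\sqrt{2}}{4}\cdot\sqrt{2} = \frac{3}{2}$, matching the reply above, and the tangent line through $(2, 2)$ is $y - 2 = \frac{3}{2}(x - 2)$, i.e. $y = \frac{3}{2}x - 1$.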
|
{"url":"http://mathhelpforum.com/calculus/197481-simple-derivative-problem.html","timestamp":"2014-04-17T23:24:32Z","content_type":null,"content_length":"47755","record_id":"<urn:uuid:4024e460-3fae-4ea3-8c46-d48848e9fc3c>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dolton Math Tutor
Find a Dolton Math Tutor
...I attended Northern Illinois University and received a bachelors degree in Elementary Education and Concordia University of Chicago for a masters degree in Curriculum and Instruction with an
emphasis on English Language Learner Education. I focus on learning about your child and the methods tha...
10 Subjects: including prealgebra, reading, English, grammar
...I have taught kindergarten and 2nd grade within the Chicago Public School system. I have a solid command on interventions for struggling students, mandated standardized tests and I know the
skills your child needs to become a fluent reader, writer, thinker, and problem solver. I have a plethora...
14 Subjects: including prealgebra, reading, dyslexia, autism
...I love helping others to achieve their educational goals. For the last 5 years I have taught Mathematics at Brown Mackie College in Merrillville, IN. Through my experience there I have
developed skills and knowledge on the different method of teaching math to a diversity population.
7 Subjects: including precalculus, statistics, linear algebra, algebra 1
My name is Jonathon and I have been teaching math and science at Hobart high school for 21 years. I am licensed in both math and physics at the high school level. I have taught a wide variety of
courses in my career: prealgebra, math problem solving, algebra 1, algebra 2, precalculus, advanced placement calculus, integrated chemistry/physics, and physics.
12 Subjects: including algebra 1, algebra 2, calculus, geometry
...I am very much interested in tutoring, as it seems like an effective way to connect with students and address the issues unique to their learning styles. While I still enjoy the classroom, I
feel like any and every child I can help is a huge victory for a teacher, regardless of the setting. I have a particularly good rapport with teens and am passionate about teaching.
22 Subjects: including ACT Math, geometry, algebra 2, prealgebra
|
{"url":"http://www.purplemath.com/dolton_il_math_tutors.php","timestamp":"2014-04-17T13:31:40Z","content_type":null,"content_length":"23722","record_id":"<urn:uuid:defab7f1-6abd-470b-bfbc-040a4d184e87>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lists and natural numbers are two classes of data whose description requires self-referential data definitions. Both data definitions consist of two clauses; both have a single self-reference. Many
interesting classes of data, however, require more complex definitions than that. Indeed, there is no end to the variations. It is therefore necessary to learn how to formulate data definitions on
our own, starting with informal descriptions of information. Once we have those, we can just follow a slightly modified design recipe for self-referential data definitions.
Medical researchers rely on family trees to do research on hereditary diseases. They may, for example, search a family tree for a certain eye color. Computers can help with these tasks, so it is
natural to design representations of family trees and functions for processing them.
One way to maintain a family tree of a family is to add a node to the tree every time a child is born. From the node, we can draw connections to the node for the father and the one for the mother,
which tells us how the people in the tree are related. For those people in the tree whose parents are unknown, we do not draw any connections. The result is a so-called ancestor family tree because,
given any node in the tree, we can find the ancestors of that person if we follow the arrows but not the descendants.
As we record a family tree, we may also want to record certain pieces of information. The birth date, birth weight, the color of the eyes, and the color of the hair are the pieces of information that
we care about. Others record different information.
Figure 35: A sample ancestor family tree
See figure 35 for a drawing of an ancestor family tree. Adam is the child of Bettina and Carl; he has yellow eyes and was born in 1950. Similarly, Gustav is the child of Eva and Fred, has brown eyes,
and was born in 1988. To represent a child in a family tree is to combine several pieces of information: information about the father, the mother, the name, the birth date, and eye color. This
suggests that we define a new structure:
(define-struct child (father mother name date eyes))
The five fields of child structures record the required information, which suggests the following data definition:
A child is a structure:
(make-child f m na da ec)
where f and m are child structures; na and ec are symbols; and da is a number.
While this data definition is simple, it is unfortunately also useless. The definition refers to itself but, because it doesn't have any clauses, there is no way to create a child structure. If we
tried to create a child structure, we would have to write
(make-child (make-child ... ... ... ... ...)
            (make-child ... ... ... ... ...)
            ... ... ...)
without end. It is for this reason that we demand that all self-referential data definitions consist of several clauses (for now) and that at least one of them does not refer to the data definition.
Let's postpone the data definition for a moment and study instead how we can use child structures to represent family trees. Suppose we are about to add a child to an existing family tree, and
furthermore suppose that we already have representations for the parents. Then we can just construct a new child structure. For example, for Adam we could create the following child structure:
(make-child Carl Bettina 'Adam 1950 'yellow)
assuming Carl and Bettina stand for representations of Adam's parents.
The problem is that we don't always know a person's parents. In the family depicted in figure 35, we don't know Bettina's parents. Yet, even if we don't know a person's father or mother, we must
still use some Scheme value for the two fields in a child structure. We could use all kinds of values to signal a lack of information (5, false, or 'none); here, we use empty. For example, to
construct a child structure for Bettina, we do the following:
(make-child empty empty 'Bettina 1926 'green)
Of course, if only one of the two parents is missing, we fill just that field with empty.
Our analysis suggests that a child node has the following data definition:
A child node is (make-child f m na da ec) where
1. f and m are either
1. empty or
2. child nodes;
2. na and ec are symbols;
3. da is a number.
This definition is special in two regards. First, it is a self-referential data definition involving structures. Second, the data definition mentions two alternatives for the first and second
component. This violates our conventions concerning the shape of data definitions.
We can avoid this problem by defining the collection of nodes in a family tree instead:
A family-tree-node (short: ftn) is either
1. empty; or
2. (make-child f m na da ec)
where f and m are ftns, na
and ec are symbols, and da is a number.
This new definition satisfies our conventions. It consists of two clauses. One of the clauses is self-referential, the other is not.
In contrast to previous data definitions involving structures, the definition of ftn is not a plain explanation of what kind of data can show up in which field. Instead, it is multi-clausal and
self-referential. Considering that this is the first such data definition, let us carefully translate the example from figure 35 and thus reassure ourselves that the new class of data can represent
the information of interest.
The information for Carl is easy to translate into a ftn:
(make-child empty empty 'Carl 1926 'green)
Bettina and Fred are represented with similar nodes. Accordingly, the node for Adam is created with
(make-child (make-child empty empty 'Carl 1926 'green)
            (make-child empty empty 'Bettina 1926 'green)
            'Adam
            1950
            'yellow)
As the examples show, a simple-minded, node-by-node transliteration of figure 35 requires numerous repetitions of data. For example, if we constructed the child structure for Dave like the one for
Adam, we would get
(make-child (make-child empty empty 'Carl 1926 'green)
            (make-child empty empty 'Bettina 1926 'green)
            'Dave
            1955
            'black)
Hence it is a good idea to introduce a variable definition per node and to use the variable thereafter. To make things easy, we use Carl to stand for the child structure that describes Carl, and so
on. The complete transliteration of the family tree into Scheme can be found in figure 36.
;; Oldest Generation:
(define Carl (make-child empty empty 'Carl 1926 'green))
(define Bettina (make-child empty empty 'Bettina 1926 'green))
;; Middle Generation:
(define Adam (make-child Carl Bettina 'Adam 1950 'yellow))
(define Dave (make-child Carl Bettina 'Dave 1955 'black))
(define Eva (make-child Carl Bettina 'Eva 1965 'blue))
(define Fred (make-child empty empty 'Fred 1966 'pink))
;; Youngest Generation:
(define Gustav (make-child Fred Eva 'Gustav 1988 'brown))
Figure 36: A Scheme representation of the sample family tree
The structure definitions in figure 36 naturally correspond to an image of deeply nested boxes. Each box has five compartments. The first two contain boxes again, which in turn contain boxes in their
first two compartments, and so on. Thus, if we were to draw the structure definitions for the family tree using nested boxes, we would quickly be overwhelmed by the details of the picture.
Furthermore, the picture would copy certain portions of the tree just like our attempt to use make-child without variable definitions. For these reasons, it is better to imagine the structures as
boxes and arrows, as originally drawn in figure 35. In general, a programmer must flexibly switch back and forth between both of these graphical illustrations. For extracting values from structures,
the boxes-in-boxes image works best; for finding our way around large collections of interconnected structures, the boxes-and-arrows image works better.
Equipped with a firm understanding of the family tree representation, we can turn to the design of functions that consume family trees. Let us first look at a generic function of this kind:
;; fun-for-ftn : ftn -> ???
(define (fun-for-ftn a-ftree) ...)
After all, we should be able to construct the template without considering the purpose of a function.
Since the data definition for ftns contains two clauses, the template must consist of a cond-expression with two clauses. The first deals with empty, the second with child structures:
;; fun-for-ftn : ftn -> ???
(define (fun-for-ftn a-ftree)
  (cond
    [(empty? a-ftree) ...]
    [else ; (child? a-ftree)
     ... ]))
Furthermore, for the first clause, the input is atomic so there is nothing further to be done. For the second clause, though, the input contains five pieces of information: two other family tree
nodes, the person's name, birth date, and eye color:
;; fun-for-ftn : ftn -> ???
(define (fun-for-ftn a-ftree)
  (cond
    [(empty? a-ftree) ...]
    [else
     ... (fun-for-ftn (child-father a-ftree)) ...
     ... (fun-for-ftn (child-mother a-ftree)) ...
     ... (child-name a-ftree) ...
     ... (child-date a-ftree) ...
     ... (child-eyes a-ftree) ...]))
We also apply fun-for-ftn to the father and mother fields because of the self-references in the second clause of the data definition.
Let us now turn to a concrete example: blue-eyed-ancestor?, the function that determines whether anyone in some given family tree has blue eyes:
;; blue-eyed-ancestor? : ftn -> boolean
;; to determine whether a-ftree contains a child
;; structure with 'blue in the eyes field
(define (blue-eyed-ancestor? a-ftree) ...)
Following our recipe, we first develop some examples. Consider the family tree node for Carl. He does not have blue eyes, and because he doesn't have any (known) ancestors in our family tree, the
family tree represented by this node does not contain a person with blue eyes. In short,
(blue-eyed-ancestor? Carl)
evaluates to false. In contrast, the family tree represented by Gustav contains a node for Eva who does have blue eyes. Hence
(blue-eyed-ancestor? Gustav)
evaluates to true.
The function template is like that of fun-for-ftn, except that we use the name blue-eyed-ancestor?. As always, we use the template to guide the function design. First we assume that (empty? a-ftree)
holds. In that case, the family tree is empty, and nobody has blue eyes. Hence the answer must be false.
The second clause of the template contains several expressions, which we must interpret:
1. (blue-eyed-ancestor? (child-father a-ftree)), which determines whether someone in the father's ftn has blue eyes;
2. (blue-eyed-ancestor? (child-mother a-ftree)), which determines whether someone in the mother's ftn has blue eyes;
3. (child-name a-ftree), which extracts the child's name;
4. (child-date a-ftree), which extracts the child's date of birth; and
5. (child-eyes a-ftree), which extracts the child's eye color.
It is now up to us to use these values properly. Clearly, if the child structure contains 'blue in the eyes field, the function's answer is true. Otherwise, the function produces true if there is a
blue-eyed person in either the father's or the mother's family tree. The rest of the data is useless.
Our discussion suggests that we formulate a conditional expression and that the first condition is
(symbol=? (child-eyes a-ftree) 'blue)
The two recursions are the other two conditions. If either one produces true, the function produces true. The else-clause produces false.
In summary, the answer in the second clause is the expression:
(cond
  [(symbol=? (child-eyes a-ftree) 'blue) true]
  [(blue-eyed-ancestor? (child-father a-ftree)) true]
  [(blue-eyed-ancestor? (child-mother a-ftree)) true]
  [else false])
The first definition in figure 37 pulls everything together. The second definition shows how to formulate this cond-expression as an equivalent or-expression, testing one condition after the next,
until one of them is true or all of them have evaluated to false.
;; blue-eyed-ancestor? : ftn -> boolean
;; to determine whether a-ftree contains a child
;; structure with 'blue in the eyes field
;; version 1: using a nested cond-expression
(define (blue-eyed-ancestor? a-ftree)
  (cond
    [(empty? a-ftree) false]
    [else (cond
            [(symbol=? (child-eyes a-ftree) 'blue) true]
            [(blue-eyed-ancestor? (child-father a-ftree)) true]
            [(blue-eyed-ancestor? (child-mother a-ftree)) true]
            [else false])]))
;; blue-eyed-ancestor? : ftn -> boolean
;; to determine whether a-ftree contains a child
;; structure with 'blue in the eyes field
;; version 2: using an or-expression
(define (blue-eyed-ancestor? a-ftree)
  (cond
    [(empty? a-ftree) false]
    [else (or (symbol=? (child-eyes a-ftree) 'blue)
              (or (blue-eyed-ancestor? (child-father a-ftree))
                  (blue-eyed-ancestor? (child-mother a-ftree))))]))
Figure 37: Two functions for finding a blue-eyed ancestor
The function blue-eyed-ancestor? is unusual in that it uses the recursions as conditions in a cond-expressions. To understand how this works, let us evaluate an application of blue-eyed-ancestor? to
Carl by hand:
(blue-eyed-ancestor? Carl)
= (blue-eyed-ancestor? (make-child empty empty 'Carl 1926 'green))
= (cond
    [(empty? (make-child empty empty 'Carl 1926 'green)) false]
    [else (cond
            [(symbol=? (child-eyes (make-child empty empty 'Carl 1926 'green)) 'blue) true]
            [(blue-eyed-ancestor? (child-father (make-child empty empty 'Carl 1926 'green))) true]
            [(blue-eyed-ancestor? (child-mother (make-child empty empty 'Carl 1926 'green))) true]
            [else false])])
= (cond
    [(symbol=? 'green 'blue) true]
    [(blue-eyed-ancestor? empty) true]
    [(blue-eyed-ancestor? empty) true]
    [else false])
= (cond
    [false true]
    [false true]
    [false true]
    [else false])
= false
The evaluation confirms that blue-eyed-ancestor? works properly for Carl, and it also illustrates how the function works.
Exercise 14.1.1. The second definition of blue-eyed-ancestor? in figure 37 uses an or-expression instead of a nested conditional. Use a hand-evaluation to show that this definition produces the same
output for the inputs empty and Carl. Solution
Exercise 14.1.2. Establish that (blue-eyed-ancestor? empty) evaluates to false with a hand-evaluation.
Evaluate (blue-eyed-ancestor? Gustav) by hand and with DrScheme. For the hand-evaluation, skip those steps in the evaluation that concern extractions, comparisons, and conditions involving empty?.
Also reuse established equations where possible, especially the one above. Solution
Exercise 14.1.3. Develop count-persons. The function consumes a family tree node and produces the number of people in the corresponding family tree. Solution
Exercise 14.1.4. Develop the function average-age. It consumes a family tree node and the current year. It produces the average age of all people in the family tree. Solution
Exercise 14.1.5. Develop the function eye-colors, which consumes a family tree node and produces a list of all eye colors in the tree. An eye color may occur more than once in the list.
Hint: Use the Scheme operation append, which consumes two lists and produces the concatenation of the two lists. For example:
(append (list 'a 'b 'c) (list 'd 'e))
= (list 'a 'b 'c 'd 'e)
We discuss the development of functions like append in section 17. Solution
Exercise 14.1.6. Suppose we need the function proper-blue-eyed-ancestor?. It is like blue-eyed-ancestor? but responds with true only when some proper ancestor, not the given one, has blue eyes.
The contract for this new function is the same as for the old one:
;; proper-blue-eyed-ancestor? : ftn -> boolean
;; to determine whether a-ftree has a blue-eyed ancestor
(define (proper-blue-eyed-ancestor? a-ftree) ...)
The results differ slightly.
To appreciate the difference, we need to look at Eva, who is blue-eyed, but does not have a blue-eyed ancestor. Hence
(blue-eyed-ancestor? Eva)
is true but
(proper-blue-eyed-ancestor? Eva)
is false. After all Eva is not a proper ancestor of herself.
Suppose a friend sees the purpose statement and comes up with this solution:
(define (proper-blue-eyed-ancestor? a-ftree)
  (cond
    [(empty? a-ftree) false]
    [else (or (proper-blue-eyed-ancestor? (child-father a-ftree))
              (proper-blue-eyed-ancestor? (child-mother a-ftree)))]))
What would be the result of (proper-blue-eyed-ancestor? A) for any ftn A?
Fix the friend's solution. Solution
Programmers often work with trees, though rarely with family trees. A particularly well-known form of tree is the binary search tree. Many applications employ binary search trees to store and to
retrieve information.
To be concrete, we discuss binary trees that manage information about people. In this context, a binary tree is similar to a family tree but instead of child structures it contains nodes:
(define-struct node (ssn name left right))
Here we have decided to record the social security number, the name, and two other trees. The latter are like the parent fields of family trees, though the relationship between a node and its left
and right trees is not based on family relationships.
The corresponding data definition is just like the one for family trees: A binary-tree (short: BT) is either
1. false; or
2. (make-node soc pn lft rgt)
where soc is a number, pn is a symbol, and lft and rgt are BTs.
The choice of false to indicate lack of information is arbitrary. We could have chosen empty again, but false is an equally good and equally frequent choice that we should become familiar with.
Here are two binary trees:
(make-node 24 'i false false))
(make-node 87 'h false false)
Figure 38 shows how we should think about such trees. The trees are drawn upside down, that is, with the root at the top and the crown of the tree at the bottom. Each circle corresponds to a node,
labeled with the ssn field of a corresponding node structure. The trees omit false.
Exercise 14.2.1. Draw the two trees above in the manner of figure 38. Then develop contains-bt. The function consumes a number and a BT and determines whether the number occurs in the tree. Solution
Exercise 14.2.2. Develop search-bt. The function consumes a number n and a BT. If the tree contains a node structure whose soc field is n, the function produces the value of the pn field in that
node. Otherwise, the function produces false.
Hint: Use contains-bt. Or, use boolean? to find out whether search-bt was successfully used on a subtree. We will discuss this second technique, called backtracking, in the intermezzo at the end of
this part. Solution
Figure 38: A binary search tree and a binary tree
Both trees in figure 38 are binary trees but they differ in a significant way. If we read the numbers in the two trees from left to right we obtain two sequences:
The sequence for tree A is sorted in ascending order, the one for B is not.
A binary tree that has an ordered sequence of information is a BINARY SEARCH TREE. Every binary search tree is a binary tree, but not every binary tree is a binary search tree. We say that the class
of binary search trees is a PROPER SUBCLASS of that of binary trees, that is, a class that does not contain all binary trees. More concretely, we formulate a condition -- or data invariant -- that
distinguishes a binary search tree from a binary tree:
The BST Invariant
A binary-search-tree (short: BST) is a BT:
1. false is always a BST;
2. (make-node soc pn lft rgt) is a BST if
1. lft and rgt are BSTs,
2. all ssn numbers in lft are smaller than soc, and
3. all ssn numbers in rgt are larger than soc.
The second and third conditions are different from what we have seen in previous data definitions. They place an additional and unusual burden on the construction BSTs. We must inspect all numbers in
these trees and ensure that they are smaller (or larger) than soc.
Exercise 14.2.3. Develop the function inorder. It consumes a binary tree and produces a list of all the ssn numbers in the tree. The list contains the numbers in the left-to-right order we have used
Hint: Use the Scheme operation append, which concatenates lists:
(append (list 1 2 3) (list 4) (list 5 6 7))
evaluates to
(list 1 2 3 4 5 6 7)
What does inorder produce for a binary search tree? Solution
Looking for a specific node in a BST takes fewer steps than looking for the same node in a BT. To find out whether a BT contains a node with a specific ssn field, a function may have to look at every
node of the tree. In contrast, to inspect a binary search tree requires far fewer inspections than that. Suppose we are given a BST whose root node has 66 in its ssn field, a left subtree L, and a right subtree R.
If we are looking for 66, we have found it. Now suppose we are looking for 63. Given the above node, we can focus the search on L because all nodes with ssns smaller than 66 are in L. Similarly, if
we were to look for 99, we would ignore L and focus on R because all nodes with ssns larger than 66 are in R.
Exercise 14.2.4. Develop search-bst. The function consumes a number n and a BST. If the tree contains a node structure whose soc field is n, the function produces the value of the pn field in that
node. Otherwise, the function produces false. The function organization must exploit the BST Invariant so that the function performs as few comparisons as necessary. Compare searching in binary
search trees with searching in sorted lists (exercise 12.2.2). Solution
Building a binary tree is easy; building a binary search tree is a complicated, error-prone affair. To create a BT we combine two BTs, an ssn number and a name with make-node. The result is, by
definition, a BT. To create a BST, this procedure fails because the result would typically not be a BST. For example, if one tree contains 3 and 5, and the other one contains 2 and 6, there is no way
to join these two BSTs into a single binary search tree.
We can overcome this problem in (at least) two ways. First, given a list of numbers and symbols, we can determine by hand what the corresponding BST should look like and then use make-node to build
it. Second, we can write a function that builds a BST from the list, one node after another.
Exercise 14.2.5. Develop the function create-bst. It consumes a BST B, a number N, and a symbol S. It produces a BST that is just like B and that in place of one false subtree contains the node
(make-node N S false false)
Test the function with (create-bst false 66 'a); this should create a single node. Then show that the following holds:
(create-bst (create-bst false 66 'a) 53 'b)
= (make-node 66 'a
    (make-node 53 'b false false)
    false)
Finally, create tree A from figure 38 using create-bst. Solution
Exercise 14.2.6. Develop the function create-bst-from-list. It consumes a list of numbers and names; it produces a BST by repeatedly applying create-bst.
The data definition for a list of numbers and names is as follows:
A list-of-number/name is either
1. empty or
2. (cons (list ssn nom) lonn)
where ssn is a number, nom a symbol,
and lonn is a list-of-number/name.
Consider the following examples:
(define sample
  '((99 o)
    (77 l)
    (24 i)
    (10 h)
    (95 g)
    (15 d)
    (89 c)
    (29 b)
    (63 a)))

(define sample
  (list (list 99 'o)
        (list 77 'l)
        (list 24 'i)
        (list 10 'h)
        (list 95 'g)
        (list 15 'd)
        (list 89 'c)
        (list 29 'b)
        (list 63 'a)))
They are equivalent, although the first is defined with the quote abbreviation and the second using list. The left tree in figure 38 is the result of using create-bst-from-list on this list.
The World Wide Web, or just ``the Web,'' has become the most interesting part of the Internet, a global network of computers. Roughly speaking, the Web is a collection of Web pages. Each Web page is
a sequence of words, pictures, movies, audio messages, and many more things. Most important, Web pages also contain links to other Web pages.
A Web browser enables people to view Web pages. It presents a Web page as a sequence of words, images, and so on. Some of the words on a page may be underlined. Clicking on underlined words leads to
a new Web page. Most modern browsers also provide a Web page composer. These are tools that help people create collections of Web pages. A composer can, among other things, search for words or
replace one word with another. In short, Web pages are things that we should be able to represent on computers, and there are many functions that process Web pages.
To simplify our problem, we consider only Web pages of words and nested Web pages. One way of understanding such a page is as a sequence of words and Web pages. This informal description suggests a
natural representation of Web pages as lists of symbols, which represent words, and Web pages, which represent nested Web pages. After all, we have emphasized before that a list may contain different
kinds of things. Still, when we spell out this idea as a data definition, we get something rather unusual:
A Web-page (short: WP) is either
1. empty;
2. (cons s wp)
where s is a symbol and wp is a Web page; or
3. (cons ewp wp)
where both ewp and wp are Web pages.
This data definition differs from that of a list of symbols in that it has three clauses instead of two and that it has three self-references instead of one. Of these self-references, the one at the
beginning of a constructed list is the most unusual. We refer to such Web pages as immediately embedded Web pages.
Because the data definition is unusual, we construct some examples of Web pages before we continue. Here is a plain page:
'(The TeachScheme! Project aims to improve the
problem-solving and organization skills of high
school students. It provides software and lecture
notes as well as exercises and solutions for teachers.)
It contains nothing but words. Here is a complex page:
'(The TeachScheme Web Page
Here you can find:
(LectureNotes for Teachers)
(Guidance for (DrScheme: a Scheme programming environment))
(Exercise Sets)
(Solutions for Exercises)
For further information: write to scheme@cs)
The immediately embedded pages start with parentheses and the symbols 'LectureNotes, 'Guidance, 'Exercises, and 'Solutions. The second embedded Web page contains another embedded page, which starts
with the word 'DrScheme. We say this page is embedded with respect to the entire page.
Let's develop the function size, which consumes a Web page and produces the number of words that it and all of its embedded pages contain:
;; size : WP -> number
;; to count the number of symbols that occur in a-wp
(define (size a-wp) ...)
The two Web pages above suggest two good examples, but they are too complex. Here are three examples, one per subclass of data:
(= (size empty) 0)
(= (size (cons 'One empty)) 1)
(= (size (cons (cons 'One empty) empty)) 1)
The first two examples are obvious. The third one deserves a short explanation. It is a Web page that contains one immediately embedded Web page, and nothing else. The embedded Web page is the one of
the second example, and it contains the one and only symbol of the third example.
To develop the template for size, let's carefully step through the design recipe. The shape of the data definition suggests that we need three cond-clauses: one for the empty page, one for a page
that starts with a symbol, and one for a page that starts with an embedded Web page. While the first condition is the familiar test for empty, the second and third need closer inspection because both
clauses in the data definition use cons, and a simple cons? won't distinguish between the two forms of data.
If the page is not empty, it is certainly constructed, and the distinguishing feature is the first item on the list. In other words, the second condition must use a predicate that tests the first
item on a-wp:
;; size : WP -> number
;; to count the number of symbols that occur in a-wp
(define (size a-wp)
  (cond
    [(empty? a-wp) ...]
    [(symbol? (first a-wp)) ... (first a-wp) ... (size (rest a-wp)) ...]
    [else ... (size (first a-wp)) ... (size (rest a-wp)) ...]))
The rest of the template is as usual. The second and third cond clauses contain selector expressions for the first item and the rest of the list. Because (rest a-wp) is always a Web page and because
(first a-wp) is one in the third case, we also add a recursive call to size for these selector expressions.
Using the examples and the template, we are ready to design size: see figure 39. The differences between the definition and the template are minimal, which shows again how much of a function we can
design by merely thinking systematically about the data definition for its inputs.
;; size : WP -> number
;; to count the number of symbols that occur in a-wp
(define (size a-wp)
  (cond
    [(empty? a-wp) 0]
    [(symbol? (first a-wp)) (+ 1 (size (rest a-wp)))]
    [else (+ (size (first a-wp)) (size (rest a-wp)))]))
Figure 39: The definition of size for Web pages
Exercise 14.3.1. Briefly explain how to define size using its template and the examples. Test size using the examples from above.
Exercise 14.3.2. Develop the function occurs1. The function consumes a Web page and a symbol. It produces the number of times the symbol occurs in the Web page, ignoring the nested Web pages.
Develop the function occurs2. It is like occurs1, but counts all occurrences of the symbol, including in embedded Web pages. Solution
Exercise 14.3.3. Develop the function replace. The function consumes two symbols, new and old, and a Web page, a-wp. It produces a page that is structurally identical to a-wp but with all occurrences
of old replaced by new. Solution
Exercise 14.3.4. People do not like deep Web trees because they require too many page switches to reach useful information. For that reason a Web page designer may also want to measure the depth of a
page. A page containing only symbols has depth 0. A page with an immediately embedded page has the depth of the embedded page plus 1. If a page has several immediately embedded Web pages, its depth
is the maximum of the depths of embedded Web pages plus 1. Develop depth, which consumes a Web page and computes its depth. Solution
DrScheme is itself a program that consists of several parts. One function checks whether the definitions and expressions we wrote down are grammatical Scheme expressions. Another one evaluates Scheme
expressions. With what we have learned in this section, we can now develop simple versions of these functions.
Our first task is to agree on a data representation for Scheme programs. In other words, we must figure out how to represent a Scheme expression as a piece of Scheme data. This sounds unusual, but it
is not difficult. Suppose we just want to represent numbers, variables, additions, and multiplications for a start. Clearly, numbers can stand for numbers and symbols for variables. Additions and
multiplications, however, call for a class of compound data because they consist of an operator and two subexpressions.
A straightforward way to represent additions and multiplications is to use two structures: one for additions and another one for multiplications. Here are the structure definitions:
(define-struct add (left right))
(define-struct mul (left right))
Each structure has two components. One represents the left expression and the other one the right expression of the operation.
Let's look at some examples:
│ Scheme expression   │ representation of Scheme expression          │
│ 3                   │ 3                                            │
│ x                   │ 'x                                           │
│ (* 3 10)            │ (make-mul 3 10)                              │
│ (+ (* 3 3) (* 4 4)) │ (make-add (make-mul 3 3) (make-mul 4 4))     │
│ (+ (* x x) (* y y)) │ (make-add (make-mul 'x 'x) (make-mul 'y 'y)) │
│ (* 1/2 (* 3 3))     │ (make-mul 1/2 (make-mul 3 3))                │
These examples cover all cases: numbers, variables, simple expressions, and nested expressions.
Exercise 14.4.1. Provide a data definition for the representation of Scheme expressions. Then translate the following expressions into representations:
1. (+ 10 -10)
2. (+ (* 20 3) 33)
3. (* 3.14 (* r r))
4. (+ (* 9/5 c) 32)
5. (+ (* 3.14 (* o o)) (* 3.14 (* i i))) Solution
A Scheme evaluator is a function that consumes a representation of a Scheme expression and produces its value. For example, the expression 3 has the value 3, (+ 3 5) has the value 8, (+ (* 3 3) (* 4
4)) has the value 25, etc. Since we are ignoring definitions for now, an expression that contains a variable, for example, (+ 3 x), does not have a value; after all, we do not know what the variable
stands for. In other words, our Scheme evaluator should be applied only to representations of expressions that do not contain variables. We say such expressions are numeric.
Exercise 14.4.2. Develop the function numeric?, which consumes (the representation of) a Scheme expression and determines whether it is numeric. Solution
Exercise 14.4.3. Provide a data definition for numeric expressions. Develop the function evaluate-expression. The function consumes (the representation of) a numeric Scheme expression and computes
its value. When the function is tested, modify it so it consumes all kinds of Scheme expressions; the revised version raises an error when it encounters a variable. Solution
Exercise 14.4.4. When people evaluate an application (f a) they substitute a for f's parameter in f's body. More generally, when people evaluate expressions with variables, they substitute the
variables with values.
Develop the function subst. The function consumes (the representation of) a variable (V), a number (N), and (the representation of) a Scheme expression. It produces a structurally equivalent
expression in which all occurrences of V are substituted by N. Solution
|
{"url":"http://htdp.org/2003-09-26/Book/curriculum-Z-H-19.html","timestamp":"2014-04-16T16:03:11Z","content_type":null,"content_length":"84911","record_id":"<urn:uuid:b4810b8c-2858-438c-9356-fcfcc63e2f3c>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Solving an Equation with Decimals
Date: 1/31/96 at 19:57:19
From: Anonymous
Subject: Algebric Equations
I can't figure out what 15.6 + z = 20.4 is.
I need step by step help.
Date: 2/2/96 at 10:40:11
From: Doctor Elise
Subject: Re: Algebric Equations
The way you solve any algebra problem is by putting all the
letters on one side of the equals sign, and all the numbers on the
other. The rule is that you can do anything you want to the
equation as long as you do the same thing on both sides of the
equals sign.
15.6 + z = 20.4
What we need to do is get the equation to look like "z = (some number)".
So we need to subtract the 15.6 from the left side of the equals sign.
We have to do the same thing to both sides, so what we'll do is
15.6 + z - 15.6 = 20.4 - 15.6
From the very beginning of math, you learn that 2 + 3 = 3 + 2, our
old friend the "commutative property" of addition. What this
really means is that if you have to add and subtract numbers (and
letters) you can rearrange them in any order and it doesn't make
any difference. In fact, rearranging the order is about the ONLY
thing you don't have to do to both sides of the equals sign. So I
can write:
15.6 - 15.6 + z = 20.4 - 15.6
Now, 15.6 - 15.6 is 0, so now I have:
0 + z = 20.4 - 15.6
I know that 0 + z is equal to z, so I'll get rid of the 0 and get
z = 20.4 - 15.6
And then you can go ahead and do the 20.4 - 15.6 part. Be sure to
estimate first so that you know what your answer should be: after
all, this is "a little more than 20 minus a little more than 15",
so the answer should be pretty close to 5.
Good luck!
-Doctor Elise, The Math Forum
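(A quick machine check of the same arithmetic, added here as an illustrative sketch and not part of the original exchange; it assumes the SymPy library is available.)

import sympy as sp

z = sp.symbols('z')
# Solve 15.6 + z = 20.4 for z; the subtraction steps above give the same value.
print(sp.solve(sp.Eq(15.6 + z, 20.4), z))   # z = 4.8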
|
{"url":"http://mathforum.org/library/drmath/view/57630.html","timestamp":"2014-04-16T04:40:49Z","content_type":null,"content_length":"6447","record_id":"<urn:uuid:1e753569-1d57-4d2a-8e55-b1738e084415>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00430-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calculus - Cost & Production applied problems
November 26th 2012, 04:24 PM #1
Nov 2012
I have been tackling the following question for an hour but I still don't have a clue. All I got after an hour is Cost = 30r + 50n (I don't know if this is right either).
Gym Sock Company manufactures cotton athletic socks. Production is partially automated through the use of robots. Daily operating costs amount to $50 per labourer and $30 per robot.
The number of pairs of socks the company can manufacture in a day is given by x = 50(n^0.6)(r^0.4) (a Cobb-Douglas production formula), where x is the number of pairs of socks that can be manufactured by N labourers and R robots.
Assuming that the company wishes to produce 1000 pairs of socks per day at a minimum cost, how many labourers and how many robots should it use? You will need to formulate a cost function.
Express your cost function as a function of robots, r, and continue from there.
Your answers for robots and labourers will turn out to be fractional numbers. But you do not want a fractional number of people or robots, naturally, so you will need to round your answers to integer values for r and n. Choose the most economical integer values, yet still satisfying the requirement of producing at least 1000 pairs of socks.
Thanks a lot!
November 26th 2012, 04:46 PM #2
Re: Calculus - Cost & Production applied problems
$1000 = 50N^{0.6}R^{0.4}$
$20 = N^{0.6}R^{0.4}$
solve for N in terms of R, then substitute for N in your correct cost equation ...
$C = 30R + 50N$
... then minimize Cost w/r to R
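(A numerical sketch of the same steps, added here for illustration only and not part of the original thread; it assumes SciPy is available. Substitute N from the production constraint into the cost and minimize over R.)

from scipy.optimize import minimize_scalar

def cost(r):
    # From 50 * n**0.6 * r**0.4 = 1000, i.e. n**0.6 = 20 / r**0.4
    n = (20.0 / r**0.4) ** (1.0 / 0.6)
    return 30.0 * r + 50.0 * n

res = minimize_scalar(cost, bounds=(1.0, 200.0), method="bounded")
r_star = res.x
n_star = (20.0 / r_star**0.4) ** (1.0 / 0.6)
print(r_star, n_star, res.fun)   # roughly R ≈ 21.3, N ≈ 19.2 before rounding to integers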
|
{"url":"http://mathhelpforum.com/calculus/208488-calculus-cost-production-applied-problems.html","timestamp":"2014-04-16T13:28:15Z","content_type":null,"content_length":"35050","record_id":"<urn:uuid:adedcf1e-6d42-4573-ac95-8991946c9ed1>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00353-ip-10-147-4-33.ec2.internal.warc.gz"}
|
End A Of The 8-kg Uniform Rod AB Is Attached To ... | Chegg.com
End A of the 8-kg uniform rod AB is attached to a collar that can slide without friction on a vertical rod. End B of the rod is attached to a vertical cable BC. If the rod is released from rest in
the position shown, determine immediately after release (a) the angular acceleration of the rod and (b) the reaction at A.
α = 16.99 rad/s^2 CCW
A = 25.5 N
Mechanical Engineering
|
{"url":"http://www.chegg.com/homework-help/questions-and-answers/end-8-kg-uniform-rod-ab-attached-collar-slide-without-friction-vertical-rod-end-b-rod-atta-q1310523","timestamp":"2014-04-17T04:34:04Z","content_type":null,"content_length":"21538","record_id":"<urn:uuid:33887c50-f5f1-4319-bf03-41d7ada48ebe>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Life of Francis Galton by Karl Pearson Vol 2
: image 470
Statistical Investigations 405
in this case, to an erroneous conclusion. The study of popular judgments and their value is an important matter and Galton rightly chose this material to illustrate it. The result, he concludes, is
more creditable to the trustworthiness of a democratic judgment than might be expected, and this is more than confirmed, if the material be dealt with by the "average" method, not the "middlemost" judgment, the result then being only 1 lb. in 1198 out.
Among other matters which much interested Galton was the verification of theoretical laws of frequency by experiment. He considered that dice were peculiarly suitable for such investigations', as
easily shaken up and cast. As an instrument for selecting at random there was, he held, nothing superior to dice'. Each die presents 24 equal possibilities, for each face has four edges, and a
differential mark can be placed against each edge. If a number of dice, say four, are cast, these can without examination be put, by sense of touch alone, four in a row, and then the marks on the
edges facing the experimenter are the random selection. Galton uses another die, if desirable, to determine a plus or minus sign for each of the inscribed values. On the 24 edges of this die he
places the possible combinations of plus and minus signs four at a time (16), and of plus and minus signs three at a time (8). Then, when he has copied out in columns his data from the facing edges
of the first type of dice, he puts against their values the plus or minus sign according to the facing edge of the sign-die, which gives either three or four lines at a cast. The paper is somewhat
difficult reading, and there are a good many pitfalls in the way of those who wish experimentally to test theories of frequency, especially those of small sampling. The importance of distinguishing
between hypergeometrical and binomial distributions, between sampling from limited and from unlimited or very large populations, and the question of the returning or not of each individual before
drawing the next, are matters which much complicate experimental work with dice.
Galton, however, was not unconscious of the many pitfalls which beset the unwary student of the theory of chance. There is an interesting short paper by him on "A plausible Paradox in Chances,"
written in 1894'. The paradox is as follows : Three coins are tossed. What is the chance that the results are all alike, i.e. all heads or all tails?
"At least two of the coins must turn up alike, and as it is an even chance whether a third coin is heads or tails, therefore the chance of being all alike is 1 to 2 and not 1 to 4."
If the reader can distinctly specify off hand, without putting pen to paper, wherein the fallacy lies, he has had some practice in probability or has a clear head for visualising permutations. We leave the solution to him.
' Ordinary dice do not follow the rules usually laid down for them in treatises on probability, because the pips are cut out on the faces, and the fives and sixes are thus more frequent than aces or deuces. This point was demonstrated by W. F. R. Weldon in 25,000 throws of 12 ordinary dice. Galton had true cubes of hard ebony made as accurate dice, and these still exist in the Galtoniana.
2 "Dice for Statistical Experiments." Nature, Vol. XLII, pp. 13-14, 1890.
3 Nature, Vol. XLIX, pp. 365-6, Feb. 15, 1894.
|
{"url":"http://galton.org/cgi-bin/searchImages/galton/search/pearson/vol2/pages/vol2_0470.htm","timestamp":"2014-04-17T18:23:10Z","content_type":null,"content_length":"7170","record_id":"<urn:uuid:a8bf418f-6952-46f8-be3e-959504743673>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Help
An object is 2.55cm from a convex lens of 10.55cm focal length. Where is the image located?
Teck: $\frac{1}{d_i}+\frac{1}{d_o}=\frac{1}{f}$. Here $d_o = 2.55$ cm and $f = 10.55$ cm.
$\frac{1}{d_i}=\frac{1}{f}-\frac{1}{d_o}$
$\frac{1}{d_i}=\frac{1}{10.55}-\frac{1}{2.55} \approx -0.297370133$
$d_i \approx -3.36$ cm to 3 significant digits.
Note that $d_i$ is negative (so the image is on the same side as the object), as it should be since $d_o$ is inside the focal length. -Dan
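(A one-line numerical check, added as a sketch only and not part of the original post; plain Python, no extra libraries, using the same sign convention as above.)

f, d_o = 10.55, 2.55            # focal length and object distance in cm, from the problem
d_i = 1.0 / (1.0 / f - 1.0 / d_o)
print(round(d_i, 2))            # -3.36, i.e. a virtual image about 3.36 cm on the object's side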
|
{"url":"http://mathhelpforum.com/pre-calculus/2389-problem.html","timestamp":"2014-04-21T10:16:22Z","content_type":null,"content_length":"28498","record_id":"<urn:uuid:4fc7bc5a-2a58-4328-8046-e3659c39f8db>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] numpy.array and subok kwarg
Darren Dale dsdale24@gmail....
Thu Jan 22 08:15:37 CST 2009
I have a test script:
import numpy as np

class MyArray(np.ndarray):
    __array_priority__ = 20
    def __new__(cls):
        return np.asarray(1).view(cls).copy()
    def __repr__(self):
        return 'my_array'
    __str__ = __repr__
    def __mul__(self, other):
        return super(MyArray, self).__mul__(other)
    def __rmul__(self, other):
        return super(MyArray, self).__rmul__(other)

mine = MyArray()
print type(np.array(mine,dtype='f'))
The type returned by np.array is ndarray, unless I specifically set
subok=True, in which case I get a MyArray. The default value of subok is
True, so I don't understand why I have to specify subok unless I want it to
be False. Is my subclass missing something important?
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-January/039792.html","timestamp":"2014-04-19T23:46:47Z","content_type":null,"content_length":"3664","record_id":"<urn:uuid:68442b45-58c4-4c7d-a6fc-abb447ef55d0>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
|
optimization problem solution: [500x12000].X = [50
"keshav singh" wrote in message <jgg9j1$9m6$1@newscl01ah.mathworks.com>...
> Hello,
> I've run into the problem that I need to solve an optimization problem for very large matrices. The equality constraint matrix is around 500 (rows) by 12000 (columns). There are two other
constraints: sum-to-unity and non-negativity. The only way I can make such a large matrix is using sparse, but lsqlin/quadprog constraints (matlab fn) do not cooperate with sparse matrices. Is there
some other way I can formulate the problem so I can specify this problem and solve?
> I have tried with 'quadprog', as we can always rewrite a least squares problem as a quadratic optimization , and I think quadprog accepts sparse equality constraints. But there might be trouble if
the matrix H of quadprog equivalent to A'*A of the lsqlin formulation is singular.
> Any and all replies are really appreciated!
> ~Keshav
Hi Keshav,
The default algorithms for both lsqlin and quadprog accept sparse matrices. However, they only solve problems with equality constraints OR bounds on the variables, *but not both*. Therefore, since
your problem has both, they switch to a dense matrix algorithm.
A possibility is to try the interior-point convex algorithm in quadprog (released in R2011a). It accepts sparse matrices and all combinations of constraints for quadprog. Here's an example:
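(The MATLAB example itself is missing from this copy of the thread. Purely as an illustration of the same formulation — a sparse least-squares objective with sparse equality constraints, a sum-to-unity constraint, and non-negativity — here is a rough sketch in Python with CVXPY rather than MATLAB. The sizes and data below are small placeholders, not the poster's actual problem.)

import numpy as np
import scipy.sparse as sparse
import cvxpy as cp

m_eq, m_ls, n = 50, 200, 1200                               # placeholders; the real problem is ~500 x 12000
C   = sparse.random(m_ls, n, density=0.01, format="csr")    # least-squares data term ||C x - d||^2
d   = np.random.rand(m_ls)
Aeq = sparse.random(m_eq, n, density=0.01, format="csr")    # sparse equality constraints
x0  = np.random.rand(n); x0 /= x0.sum()                     # a feasible point, so the demo is solvable
beq = Aeq @ x0

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.sum_squares(C @ x - d)),
                  [Aeq @ x == beq, cp.sum(x) == 1, x >= 0])
prob.solve()
print(prob.status, prob.value)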
|
{"url":"http://www.mathworks.com/matlabcentral/newsreader/view_thread/316550","timestamp":"2014-04-21T00:46:45Z","content_type":null,"content_length":"35068","record_id":"<urn:uuid:3262e21e-4a5e-4845-98fa-88694e5ea34b>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homework Help
Posted by Kim on Sunday, May 31, 2009 at 12:00pm.
could you walk me through these problems
square root 18 - 3 times square root of 50 plus 5 times the square root of 80 - 2 times the square root of 125
I'm having trouble with the square root of 80; all of the other radicals can be simplified with a 2 in the radical except for 80, so can you walk me through that problem
also how do i do this
(square root 5 - square root 8)^2
once i know how to do that problem i can probably figure out how to do this one which I don't know how to do either
(square root 71 - square root 21)(square root 71 + square root 21)
also how do I simplify something that looks like this
1 + (1/square root 2) all over quantity (1 - (1/square root 2))
Thanks for all the help once I know how to do these problems I shouldn't need anymore help thanks for taking the time to help me please walk me through how to simplify these types of problems
• Algebra 2 - bobpursley, Sunday, May 31, 2009 at 12:07pm
On the first: simplify all to having a factor sqrt5
on the second, use the FOIL method:
(sqrt5-sqrt8)^2 = 5 - sqrt40 - sqrt40 + 8; combine terms.
on the next, use FOIL (if you don't recognize it as the difference of two squares).
On the last, multiply the numerator and the denominator by (1 + 1/sqrt2)
Then, it becomes two foil problems, just like the other two.
• Algebra 2 - drwls, Sunday, May 31, 2009 at 12:17pm
Your questions would be easier to answer if you'd write them in more conventional math notation. Here is how to do two of them.
(sqrt71 - sqrt21)(sqrt 71 + sqrt 21)
= 71 - (sqrt21)(sqrt 71) + (sqrt21)(sqrt 71) - 21 = 50
Just remember the general rule that
(a+b)(a-b) = a^2 -b^2, and you could have written that down right away.
For the next one, let's try to get rid of the fractions. First multiply numerator and denominator by sqrt2. That results in
[1 + (1/sqrt2)]/[1 - (1/sqrt2)]
= (sqrt2 +1)/(sqrt2 -1)
Now multiply numerator and denominator by sqrt2 + 1, and that becomes
(sqrt2 +1)^2/(2-1) = 2 + 2sqrt2 + 1
= 3 + 2 sqrt2
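(Added sketch, not part of the original answers: the same simplifications can be checked by machine, assuming the SymPy library is available.)

import sympy as sp

print(sp.sqrt(18) - 3*sp.sqrt(50) + 5*sp.sqrt(80) - 2*sp.sqrt(125))               # -12*sqrt(2) + 10*sqrt(5)
print(sp.expand((sp.sqrt(5) - sp.sqrt(8))**2))                                     # 13 - 4*sqrt(10)
print(sp.expand((sp.sqrt(71) - sp.sqrt(21))*(sp.sqrt(71) + sp.sqrt(21))))          # 50
print(sp.radsimp((1 + 1/sp.sqrt(2)) / (1 - 1/sp.sqrt(2))))                         # 2*sqrt(2) + 3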
|
{"url":"http://www.jiskha.com/display.cgi?id=1243785603","timestamp":"2014-04-17T16:57:02Z","content_type":null,"content_length":"10278","record_id":"<urn:uuid:5bb37dd4-5224-4f8e-a20b-6b928f0c7e78>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
|
History Of The Theory Of Numbers - I
CHAP. VII] BINOMIAL CONGRUENCES. 205
as those of the term ymn~n for y=l,..., mn, and hence equal (mri—n)!, so that Q(y) is not divisible by p for some values 1,..., mn of #.
Euler152 recurred to the subject. The main conclusion here and from his former paper is the criterion that, if p = mn + 1 is a prime, x^n ≡ a (mod p) has exactly n roots or no root, according as a^m ≡ 1 (mod p) or not. In particular, there are just m roots of a^m ≡ 1, and each root a is a residue of an nth power.
Euler152a stated that, if aq + b = p^2, all the values of x making ax + b a square are given by x = ay^2 ± 2py + q.
J. L. Lagrange153 gave the criterion of Euler, and noted that if p is a prime 4n+3, B^((p-1)/2) - 1 is divisible by p, so that x = B^(n+1) is a root of x^2 ≡ B (mod p). Given a root ξ of the latter, where now p is any odd prime not dividing B, we can find a root of x^2 ≡ B (mod p^2) by setting x = ξ + λp, ξ^2 - B = pω. Then x^2 - B = (λ^2 + μ)p^2 if 2ξλ + ω = μp. The latter can be satisfied by integers λ, μ since 2ξ and p are relatively prime. We can proceed similarly and solve x^2 ≡ B (mod p^n).
Next, consider x^2 ≡ B (mod 2^n), for n > 2 and B odd (since the case B even reduces to the former). Then x = 2z + 1, x^2 - B = Z + 1 - B, where Z = 4z(z+1) is a multiple of 8. Thus 1 - B must be a multiple of 8. Let n > 3 and 1 - B = 2^r·β, r ≥ 3. If r ≥ n, it suffices to take z = 2^(n-2)ζ, where ζ is arbitrary. If r < n, Z must be divisible by 2^r, whence z = 2^(r-2)ζ or 2^(r-2)ζ - 1. Hence w = ζ(2^(r-2)ζ ± 1) + β must be divisible by 2^(n-r). If n - r ≤ r - 2, it suffices to take ζ ± β divisible by 2^(n-r). The latter is a necessary condition if n - r > r - 2. Thus ζ = 2^(r-2)ρ ∓ β, w = 2^(r-2)(ζ^2 ± ρ). Hence ζ^2 ± ρ must be divisible by 2^(n-2r+2). We have two sub-cases according as the exponent of 2 is ≤ or > r - 1; etc.
Finally, the solution of x^2 ≡ B (mod m) reduces to the case of the powers of primes dividing m. For, if f and g are relatively prime and ξ^2 - B is divisible by f, and ψ^2 - B by g, then x^2 - B is divisible by fg if x = μf ± ξ = νg ± ψ. But the final equality can be satisfied by integers μ, ν since f is prime to g.
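(A small modern-notation check of Lagrange's lifting step described above, added purely as an illustrative sketch and not part of Dickson's text; plain Python 3.8+, with an arbitrarily chosen small example.)

# Given xi with xi^2 ≡ B (mod p), write xi^2 - B = p*omega and choose lambda with
# 2*xi*lambda + omega ≡ 0 (mod p); then x = xi + lambda*p satisfies x^2 ≡ B (mod p^2).
p, B, xi = 11, 5, 4                       # 4^2 = 16 ≡ 5 (mod 11)
omega = (xi * xi - B) // p
lam = (-omega * pow(2 * xi, -1, p)) % p   # possible because 2*xi and p are relatively prime
x = xi + lam * p
print(x, (x * x - B) % (p * p))           # 48 0, i.e. 48^2 ≡ 5 (mod 121)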
A. M. Legendre154 proved that if p is a prime and ω is the g. c. d. of n and p - 1 = ωp′, there is no integral root of
(1) x^n ≡ B (mod p)
unless B^(p′) ≡ 1 (mod p); if the last condition is satisfied, there are ω roots of (1) and they satisfy
(2) x^ω ≡ B^l (mod p), where l is the least positive integer for which
(3) ln - q(p - 1) = ω.
For, from (1) and x^(p-1) ≡ 1, we get x^(ln) ≡ B^l, x^(q(p-1)) ≡ 1, and hence (2), by use of (3). Set n = ωn′. Then, by (2) and (1),
B^(n′l) ≡ x^n ≡ B,   B^(p′l) ≡ x^(ωp′) = x^(p-1) ≡ 1 (mod p).
1MNovi Comm..Petrop., 8, 1760-1, 74; Opusc. Anal. 1, 1772, 121; Comm. Arith., 1, 274, 487. 152aOpera postuma, 1, 1862, 213-4 (about 1771).
»'M6m. Acad. R. Sc. Berlin, 23, ann6e 1767, 1769; Oeuvres, 2, 497-504. . Ac. R. Sc. Paris, 1785, 468, 476-481. (Cf. Legendre.155)
|
{"url":"http://www.archive.org/stream/HistoryOfTheTheoryOfNumbersI/TXT/00000214.txt","timestamp":"2014-04-18T21:40:50Z","content_type":null,"content_length":"13543","record_id":"<urn:uuid:f82ba911-3915-4f2d-8fc1-877a83bdc941>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts from August 21, 2010 on Quantum field theory
Fig. 1: Argand diagrams for the phase amplitude of a path and for the Feynman path integral, from Dr Robert D. Klauber’s paper Path Integrals in Quantum Theories: A Pedagogic 1st Step. The phase
amplitude for a given path, e^iS/h-bar (where S is the action i.e. the integral over time of the Lagrangian, L, as shown in the upper diagram; the Lagrangian of a path represents the difference
between a path’s kinetic and potential energy) by Euler’s formula has a real part and a complex (or imaginary) part so requires for graphical representation an Argand diagram (which has a real
horizontal axis and a complex or imaginary vertical axis). However, as the second graph above indicates, this doesn’t detract from Feynman’s simple end-to-end summing of arrows which represent
amplitudes and directions for individual path histories. Feynman’s approach keeps the arrows all the same length, but varies their direction. Those with opposite directions cancel out completely;
those with partially opposing directions partially cancel. Notice that the principle of least action does not arise by making the arrows all smaller as the action gets bigger; it simply allows them
to vary in all directions (random angles) with equal probability at large actions, so they cancel out effectively, while the geometry makes the arrow directions for paths with the least action add
together most coherently: this is why nature conforms to the principle of least action. In a nutshell: for large actions, paths have random phase angles and so they cancel each other out very
efficiently and contribute nothing to the resultant in the path integral; but for small actions, paths have similar phase angles and so contribute the most to the resultant path integral. Notice also
that all of the little arrows in the path integral above (or rather, the sum over histories) have equal lengths but merely varying directions. The resultant arrow (in purple) is represented by two pieces
of information: a direction and a length. The length of the resultant arrow represents the amplitude. Generally the length of the resultant arrow is all we need to find if we want to calculate the
probability of an interaction, but if the path integral is done to work out an effect which has both magnitude and direction (i.e. a vector quantity like a force), the direction of the resultant
arrow is also important.
Fig. 2: Feynman’s path integral for particle reflection off a plane in his 1985 book QED, from Dr Robert D. Klauber’s paper Path Integrals in Quantum Theories: A Pedagogic 1st Step. The arrows at the
bottom of the diagram are the Argand diagram phase vectors for each path; add them all up from nose to tail and you get the resultant, i.e. the sum-over-histories or “path integral” (strictly you are
summing a discrete number of paths in this diagram so it is not an integral, which would involve using calculus, however summation of a discrete number of paths was physically more sensible to
Feynman than integrating over an infinite number of paths for every tiny particle interaction). The square of the length of the resulting arrow from the summing of arrows in Fig. 1 is proportional to
the probability of the process occurring.
Witten has a talk out with the title above, mentioned by Woit, which as you might expect from someone who has hyped string theory, doesn’t physically or even mathematically approach any of the
interesting aspects of the path integral. It falls into the category of juggling which fails to make any new physical predictions, but of course these people don’t always want to make physical
predictions for fear that they might be found wanting, or maybe because they simply lack new ideas that are checkable. Let’s look at the interesting aspects of the path integral that you don’t hear
discussed by the fashionable Wittens and popular textbook authors. Feynman explains in his 1985 book QED that the path integral isn’t intrinsically mathematical because the real force or real photon
propagates along the uncancelled paths that lie near the path of least action; virtual field quanta are exchanged along other paths with contributions that cancel out.
Richard P. Feynman, QED, Penguin, 1990, pp. 55-6, and 84:
‘I would like to put the uncertainty principle in its historical place: when the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of
old-fashioned ideas … But at a certain point the old fashioned ideas would begin to fail, so a warning was developed that said, in effect, “Your old-fashioned ideas are no damn good when …”. If you
get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [arrows = path phase amplitudes in the path integral, i.e. e^iS(n)/h-bar] for
all the ways an event can happen – there is no need for an uncertainty principle! … on a small scale, such as inside an atom, the space is so small that there is no main path, no “orbit”; there are
all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [by field quanta] becomes very important …’
The only quantum field theory textbook author who actually seems to have read and understood this physics book by Feynman (which seems hard to grasp by mathematical physicists like Zee who claim to
be nodding along to Feynman but fail to grasp what he is saying and misrepresent the physics of path integrals by making physically incorrect claims) is Dr Robert D. Klauber who has an interesting
paper on his domain www.quantumfieldtheory.info called Path Integrals in Quantum Theories: A Pedagogic 1st Step
Mathematical methods for evaluating path integrals
Solve Schroedinger’s time-dependent equation and you find that the amplitude of the wavefunction changes with time in proportion to e^iHt/h-bar where H is the Hamiltonian energy of the system, so Ht
is at least dimensionally equal to the action, S (which in turn is defined as the integral of the Lagrangian energy over time, or alternatively, the integral of the Lagrangian energy density over 4-d
spacetime). The wavefunction at any later time t is simply equal to its present value multiplied by factor e^iS/h-bar. Now the first thing you ask is what is e^iS/h-bar? This factor is – despite
appearances – the completely non-mathematical phase vector rotating on a complex plane as Feynman explained (see the previous post), and similar complex exponential factors are used to simplify
(rather than obfuscate) the treatment of alternating potential differences in electrical engineering. Let’s examine the details.
Notice that you can’t get rid of the complex conjugate, i = (-1)^1/2 by squaring since that just doubles the power, (e^iA)^2 = e^2iA, so if you square it, you don’t eliminate i‘s. So how do you get
from e^iA a real numerical multiplication factor that transforms a wavefunction at time zero into the wavefunction at time t? Of course there is a simple answer, and this is just the kind of question
that always arises when complex numbers are used in engineering. Complex analysis needs a clear understanding of the distinction between vectors and scalars, and this is best done using the Argand
diagram, which I first met during A-level mathematics. This stuff is not even undergraduate mathematics. For example in alternating current electrical theory, the voltage or potential difference is
proportional to e^2Pifti (note that electrical engineers use j for (-1)^1/2 to avoid confusing themselves, because they use i for electric current).
Euler’s equation states that e^iA = (cos A) + i (sin A), so we can simply set A = S/h-bar. The first term is now a real solution and the second term is the imaginary or complex solution. This
equation has real solutions wherever sin A = 0, because this condition completely eliminates the complex term, leaving simply the real solution: e^iA = cos A. For values of A equal to 0 or n*Pi where
n is an integer, sin A = 0, so in this case, e^iA = cos A. So you might wonder why Feynman didn’t ignore the complex term and simplify the phase amplitude to e^iS/h-bar = cos S/h-bar, to make the
path integral yield purely real numbers which (superficially) look easier to add up. The answer is that ignoring the complex plane has the price of losing directional information: as Feynman
explains, the amplitude e^iS/h-bar does not merely represent a scalar number, but a vector which has direction as well as magnitude. Although each individual arrow in the path integral has similar
fixed magnitude of 1 unit, the path integral adds all of the arrows together to find the resultant which can have a different magnitude to any individual arrow, as well as a direction. You therefore
need two pieces of information to be added in evaluating the vector arrows to find the resultant arrow: length and direction are two separate pieces of information representing the resultant
arrow, and you will lose information if you ignore one of these parameters. The complex conjugate therefore gives the phase amplitude the additional information of the direction of the arrow whose
length represents magnitude.
However, if the length of the arrow is always the same size, which it is in Feynman’s formulation of quantum field theory, then there is only one piece of information involved: the direction of the
arrow. So, since we have only one variable in each path (the angle describing direction of the arrow on the Argand diagram), why not vary the length of the arrow instead, and keep the angle the same?
We can do that by dropping the complex term from Euler's equation, and writing the phase amplitude as simply the real term in Euler's equation, cos(S/h-bar), instead of e^iS/h-bar.
Using cos(S/h-bar) as the amplitude in the path integral in place of e^iS/h-bar doesn’t cost us any information because it still conveys one piece of data: it simply replaces the single variable of
the direction of an arrow of fixed length on a complex plane by the single variable of a magnitude for the path that can be added easily.
You might complain that, like e^iS/h-bar as expanded by Euler’s formula, cos(S/h-bar) is a periodic function which is equal to +1 for values of S/h-bar = 0, 2Pi, 4Pi, etc., is zero for values of S/
h-bar of Pi/2, 3Pi/2, etc., and is -1 for values of S/h-bar = Pi, 3Pi, 5Pi, etc. Surely, you could complain, if the path integral is to emphasize phase contributions from paths with minimal action S
(to conform with the physical principle of least action), it must make contributions small for all large values of action S, without periodic variations. You might therefore want to think about
dropping the complex number from the exponential amplitude formula e^iS/h-bar and adding a negative sign in its place to give the real amplitude e^-S/h-bar. However, this is incorrect physically!
Feynman’s whole point when you examine his 1985 book QED (see Fig. 2 above for example) is that there is a periodic variation in path amplitude as a function of the action S. Feynman explains that
particles have a spinning polarization phase which rotates around the clock as they move, analogous to the way that particles are spinning anticlockwise around the Argand diagram of Fig. 1 as they
are moving along (all fundamental particles have spin). The complex amplitude e^iS/h-bar is a periodic function; expanded by Euler’s formula it is e^iS/h-bar = cos (S/h-bar) + i sin(S/h-bar) which
has real solutions when S/h-bar = nPi where n is an integer, since sin (nPi) = 0 causing the second term (which is complex) to disappear. Thus, e^iS/h-bar is equal to -1 for S/h-bar = Pi, +1 for S/
h-bar = 2Pi, -1 for S/h-bar = 3Pi, and so on.
e^iS/h-bar is therefore a periodic function in variations of action S, instead of being merely a function which is always big for small actions and always small for big actions! The principle of
least action does not arise by the most mathematically intuitive way; it arises instead, as Feynman shows, from the geometry of the situation. This is precisely how we came to formulate the path
integral for quantum gravity by a simple graphical summation that made checkable predictions.
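(The following toy calculation is an added sketch, not from the original post: it just sums unit arrows exp(iS/h-bar) for many mirror-reflection paths, with the total path length standing in for the action and an arbitrary value of k playing the role of 1/h-bar, to show numerically that the paths near the least-action point supply most of the resultant arrow while the distant paths largely cancel.)

import numpy as np

A, B = np.array([-1.0, 1.0]), np.array([1.0, 1.0])         # source and detector above a mirror at y = 0
xs = np.linspace(-5.0, 5.0, 2001)                           # candidate reflection points along the mirror
k = 50.0                                                    # plays the role of 1/h-bar (arbitrary units)
S = np.hypot(xs - A[0], A[1]) + np.hypot(B[0] - xs, B[1])   # total path length used as the "action"
arrows = np.exp(1j * k * S)                                 # one unit arrow per path

total = np.abs(arrows.sum())
near_least_action = np.abs(arrows[np.abs(xs) < 0.5].sum())
print(total, near_least_action)   # the two magnitudes are comparable: distant paths largely cancel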
Updates (28 August 2010):
“[String theory professor] Erik Verlinde has made a splash (most recently in the New York Times) with his claim that the reason we don’t understand gravity is that it is an emergent phenomenon, an
“entropic force”. Now he and Peter Freund are taking this farther, with a claim that the Standard Model is also emergent. Freund has a new paper out on the arXiv entitled “Emergent Gauge Fields” with
an abstract: “Erik Verlinde’s proposal of the emergence of the gravitational force as an entropic force is extended to abelian and non-abelian gauge fields and to matter fields. This suggests a
picture with no fundamental forces or forms of matter whatsoever“.”
- Dr Woit’s blog post, Everything is Emergent.
““For me gravity doesn’t exist,” said Dr. Verlinde, who was recently in the United States to explain himself. Not that he can’t fall down, but Dr. Verlinde is among a number of physicists who say
that science has been looking at gravity the wrong way and that there is something more basic, from which gravity “emerges,” the way stock markets emerge from the collective behavior of individual
investors or that elasticity emerges from the mechanics of atoms.
“Looking at gravity from this angle, they say, could shed light on some of the vexing cosmic issues of the day, like the dark energy, a kind of anti-gravity that seems to be speeding up the expansion
of the universe, or the dark matter that is supposedly needed to hold galaxies together.” – Dennis Overbye in the New York Times.
Dr Woit quotes Freund’s paper where it compares the “everything is emergent” concept with the “boostrap” theory of Geoffrey Chew’s analytic S-matrix (scattering matrix) in the 1960s: “It is as if
assuming certain forces and forms of matter to be fundamental is tantamount (in the sense of an effective theory) to assuming that there are no fundamental forces or forms of matter whatsoever, and
everything is emergent. This latter picture in which nothing is fundamental is reminiscent of Chew’s bootstrap approach [9], the original breeding ground of string theory. Could it be that after all
its mathematically and physically exquisite developments, string theory has returned to its birthplace?”
Dr Woit’s 2006 book Not Even Wrong (Jonathan Cape edition, London, p. 148) gives a description of Chew’s bootstrap approach:
“By the end of the 1950s, [Geoffrey] Chew was calling this [analytic S-matrix] the bootstrap philosophy. Because of analyticity, each particle’s interactions with all others would somehow determine
its own basic properties and … the whole theory would somehow pull itself up by its own bootstraps.
“By the mid-1960s, Chew was also characterising the bootstrap idea as nuclear democracy: no particle was to be elementary, and all particles were to be thought of as composites of each other.”
The Verlinde-Freund papers are just vague, arm-waving, versions of precise theoretical predictions we’ve already done years ago and have discussed and refined repeatedly on this blog and elsewhere:
whereas Verlinde in an ad hoc way “derives” Newton’s classical equation for gravity, he does so without producing a quantitative estimate for the gravitational coupling G (something we do), and of
course he failed to predict in advance of the discovery in 1998 the cosmological acceleration of the universe and thus the amount of “dark energy” quantitatively (something we did correctly in 1996,
despite censorship by string theorist “peer-reviewers” for Classical and Quantum Gravity, who stated that any new idea not based on string theory is unworthy of being reviewed scientifically).
Fig. 3: the first two Feynman beta decay diagrams (left and centre) are correct: the third Feynman diagram (right) is wrong but is assumed dogmatically to be correct due to the dogma that quarks
don’t decay into leptons as the main product. It’s explicitly assumed in the Standard Model that quarks and leptons are not vacuum polarization-modified versions of the same basic preon or underlying
particle. However, it’s clear that this assumption is wrong for many reasons, as we have demonstrated. As one example, we can predict the masses of leptons and quarks from a vacuum polarization
theory, whereas these masses have to be supplied as ad hoc constants into the Standard Model, which doesn’t predict them. In mainstream quantum gravity research, nobody considers the path integral
for the exchange of gravitons between all masses in the universe, and everyone pretends that gravitons are only exchanged between say an apple and the Earth, thus concluding that the graviton must
have a spin of 2 so that like gravitational charges attract. In fact, as we have proved, quantum gravity is an emergent effect in the sense that it arises from the exchange of gravitons with immense masses isotropically located around us in distant stars; the convergence of these exchanged gravitons flowing towards any mass, when an anisotropy is produced by another mass, causes a net force
towards that other mass.
|
{"url":"http://nige.wordpress.com/2010/08/21/","timestamp":"2014-04-19T04:19:51Z","content_type":null,"content_length":"47234","record_id":"<urn:uuid:05e5b122-fc42-46fe-9b94-7e91271a551b>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Before analyzing the velocity and pressure field for the case of an airfoil, we need to investigate a little more deeply the role played by circulation.
The Kutta-Joukowski theorem shows that lift is proportional to circulation, but apparently the value of the circulation can be assigned arbitrarily.
The solution of flow around a cylinder tells us that we should expect to find two stagnation points along the airfoil, the position of which is determined by the circulation around the profile. There is a particular value of the circulation that moves the rear stagnation point (V=0) exactly onto the trailing edge.
This condition, which fixes a value of the circulation by simple geometrical considerations, is the Kutta condition.
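(A small numerical sketch, added for illustration and not part of the original page: for ideal flow past a circular cylinder of radius a in a stream of speed U with circulation Gamma, the surface speed is u_theta = -2 U sin(theta) - Gamma/(2 pi a), so the stagnation points sit where sin(theta) = -Gamma/(4 pi U a). Increasing the circulation moves the rear stagnation point around the surface, which is the same mechanism the Kutta condition exploits to pin it to the trailing edge of an airfoil. The values of U, a and Gamma below are arbitrary.)

import numpy as np

U, a = 1.0, 1.0
for Gamma in (0.0, 2.0, 4.0 * np.pi * U * a):
    s = -Gamma / (4.0 * np.pi * U * a)
    theta = np.degrees(np.arcsin(s))
    # at Gamma = 4*pi*U*a the two stagnation points merge at theta = -90 degrees
    print(f"Gamma = {Gamma:6.2f}:  stagnation points near theta = {theta:7.2f} and {180.0 - theta:7.2f} degrees")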
Using the Kutta condition, the circulation is no longer a free variable and it is possible to evaluate the lift of an airfoil using the same techniques that were described for the cylinder. Note that the flow fields obtained for a fixed value of the circulation are all valid solutions of the flow around an airfoil. The Kutta condition chooses one of these fields, the one which best represents the actual flow.
We can try to give a feasible physical justification of the Kutta condition; to do this we need to introduce a concept that is ignored by the theory for irrotational inviscid flow: the role played by
the viscosity of a real fluid.
Suppose we start from a static situation and give a small velocity to the fluid. If the fluid is initially at rest it is also irrotational and, neglecting the effect of viscosity, it must remain
irrotational due to Thomson's theorem.
The flow field around the wing will then have zero circulation, with two stagnation points located one on the lower face of the wing, close to the leading edge, and one on the upper face, close to
the trailing edge.
A very unlikely situation is created at the trailing edge: a fluid particle on the lower side of the airfoil should travel along the profile, make a sharp U-turn at the trailing edge, go upstream on
the upper face until it reaches the stagnation point and then, eventually, leave the profile. A real fluid cannot behave in this way. Viscosity acts to damp the sharp velocity gradient along the
profile causing a separation of the boundary layer and a wake is created with shedding of clockwise vorticity from the trailing edge.
Since the circulation along a curve that includes both the vortex and the airfoil must still be zero, this leads to a counterclockwise circulation around the profile. But if a nonzero circulation is
present around the profile, the stagnation points would move and in particular the rear stagnation point would move towards the trailing edge. The sequence vortex shedding -> increase of circulation
around the airfoil -> downstream migration of the rear stagnation point continues until the stagnation point reaches the trailing edge. When this happens the sharp velocity gradient disappears and
the vorticity shedding stops. This ``equilibrium'' situation freezes the value of the circulation around the airfoil, which would not change anymore.
Let us now proceed to examine the velocity and pressure fields around an airfoil with the aid of some animations showing how they vary when the (effective) angle of attack is changed. In each shot
the flow field is obtained imposing the Kutta condition to determine the circulation. A sequential browsing of the following pages is suggested, at least for first time visitors.
Streamlines │ Streaklines │ Velocity field │ Pressure field │ Forces on the body
|
{"url":"http://www.av8n.com/irro/profilo_e.html","timestamp":"2014-04-17T06:40:58Z","content_type":null,"content_length":"5451","record_id":"<urn:uuid:39033027-c462-49b5-9df8-2ab4b1211cde>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Probability selection with replacement (balls in an urn)
September 20th 2012, 08:06 PM
Probability selection with replacement (balls in an urn)
I have a question regarding this subject. If we have replacement, technically the odds of pulling one colored ball remains the same from draw to draw, thus they're independent events, right? Here
is a specific example.
There is an urn with 80 blue balls, 60 red balls, and 60 green balls.
If I draw from this urn twice, with replacement, what is the probability that at least one ball is blue given that at least one ball is either green or red?
I go back and forth on this... since it is replacement, part of me says that there will always be a 40% chance that any ball pulled is blue. However, if we have the scenario where there is two
balls, and we know that for sure one of the balls chosen will be red or green (60%), would I multiply 80/200 * 120/200?
What is the probability that at least one ball is blue given that the SECOND ball is either green or red?
Does it matter that we know that the second ball is either green or red or is this question to be approached the same way we would answer the reverse - what is the probability that at least one
ball is blue given that the FIRST ball is either green or red.... same question? My gut says that this is the same answer as above.
Any insight that could help me get this to click would be greatly appreciated. Thank You.
September 20th 2012, 09:25 PM
Re: Probability selection with replacement (balls in an urn)
Your restrictions give only two drawing possibilities: $\{g,b\}$ and $\{r,b\}$
Combined probabilities would be:
$P\{g,b\}+P\{r,b\}=P(g=1|60,200)*P(b=1|80,200)+P(r= 1|60,200)*P(b=1|80,200)=\frac{6}{25}$
September 23rd 2012, 12:39 PM
Re: Probability selection with replacement (balls in an urn)
Thanks MaxJasper, in short on this one we're 4/10*6/10. This was the approach that I took. If we had an urn with 20 total balls, 10 yellow and 10 orange, in it and selected 2 balls with
replacement, what is the probability that at least one of the balls is orange given that at least one is yellow? Would I take the same approach here? 1/2 * 1/2 = 1/4 ? I see 2/3 being a
possibility too as we're looking at OO, YY, OY, YO as possibilities and since we know at least one is yellow, that eliminates OO, leaving us 3 options, YY, OY, and YO... and if at least one of
the balls is orange, we have 2 of these 3 possibilities remaining... am I over thinking this? Thanks again!
September 23rd 2012, 06:43 PM
Re: Probability selection with replacement (balls in an urn)
Again your restrictions specified as "at least" results in only {Y,O} and remember that {Y,O}={O,Y} and so you end up with only {Y,O} no matter what order of drawing the balls. You don't have
combinations: {O,O}, {Y,Y} because "at least" is violated!
September 23rd 2012, 07:05 PM
Re: Probability selection with replacement (balls in an urn)
This being said, the solution for the second question would be 1/2*1/2=0.25 ?
|
{"url":"http://mathhelpforum.com/new-users/203802-probability-selection-replacement-balls-urn-print.html","timestamp":"2014-04-19T02:58:01Z","content_type":null,"content_length":"8405","record_id":"<urn:uuid:ca4a62a8-e215-4894-a0af-96ea0d529b01>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Perrineville Math Tutor
Find a Perrineville Math Tutor
...As an educator, I recognize that the student must come first. This attitude has always served me well in patience with exploring new topics and needing to explore alternate routes of
explanation. I do hold high expectations on both parties, and do understand that this is a process that evolves as a deeper relationship is formed.
9 Subjects: including algebra 1, algebra 2, calculus, physics
...I am willing to tutor a range of subjects - from history, to writing skills, to social studies, to languages (Arabic, German, ESL and Spanish). I am looking for engaged students from elementary
school to college. I guarantee significant improvement in my students' performance.I studied classical...
24 Subjects: including SAT math, English, reading, writing
...I have taught K to adult students. I am currently a teacher in a public school and an adjunct at a community college. I have taught abroad and I am fluent in Portuguese and Spanish.
21 Subjects: including prealgebra, algebra 1, reading, Spanish
...As of now, I am tutoring junior high students for SHSAT and a sophomore for PSAT. I am patient with my students and help them build strong basic skills which will help them solve complicated
problems.I have helped students prepare for integrated algebra and geometry regents. One of my students ...
15 Subjects: including calculus, trigonometry, algebra 1, algebra 2
...I love both sciences and mathematics. I have had previous experiences in tutoring on a high school level in mathematics, chemistry, biology, and physics. I am available on all weekdays and
sometimes weekends.
7 Subjects: including algebra 1, prealgebra, trigonometry, algebra 2
|
{"url":"http://www.purplemath.com/perrineville_nj_math_tutors.php","timestamp":"2014-04-19T02:24:50Z","content_type":null,"content_length":"23769","record_id":"<urn:uuid:3b99b60c-ec6f-4c43-a181-471d7f99ac32>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Definable collections of non measurable sets of reals
Is there a definable (in Zermelo Fraenkel set theory with choice) collection of non measurable sets of reals of size continuum? More verbosely: Is there a class A = {x: \phi(x)} such that ZFC proves
"A is a collection, of size continuum, consisting of non Lebesgue measurable subsets of reals"?
3 Doesn't the set of translates of the Vitali set have this property? – Qiaochu Yuan Dec 18 '09 at 22:16
1 As Qiaochu says, as long as you have 1 nonmeasurable set, which you do with ZFC, taking all of its translates should do the trick. Or you could fix a nonmeasurable set A and take the set of sets A
\S where S is a countable subset of A. Maybe there are additional properties you want? – Jonas Meyer Dec 18 '09 at 22:26
2 But why should there be a definable Vitali set? Of course, some models of set theory have definable Vitali sets, because sometimes there is even a definable well-ordering of the reals, but the
question asks for a parameter-free definition that provably works in every model of set theory. – Joel David Hamkins Dec 19 '09 at 18:55
Joel, I will speak only for myself of course; my comment should have been phrased as a question if left at all, because I did not understand what was meant by definable. Thank you for providing so
much food for thought! – Jonas Meyer Dec 20 '09 at 6:17
To Qiaochu: Like Joel said, it is consistent to have no definable non measurable set of reals, even if you allow countably many ordinal parameters. – Ashutosh Dec 20 '09 at 7:29
2 Answers
(Edit.) With a closer reading of your question, I see that you asked for a very specific notion of definability.
If you allow the family to have size larger than continuum, there is a trivial Yes answer. Namely, let phi(x) be the assertion "x is a non-measurable set of reals". In any model of ZFC,
this formula defines a family of non-measurable sets of reals, and it is not difficult to show in ZFC that there are at least continuum many such sets (for example, as in the comment of
Qiaochu Yuan). Thus, ZFC proves that {x | phi(x)} is a family of non-measurable sets of size at least continuum.
But if you insist that the family have size exactly the continuum, as your question clearly states, then this trivial answer doesn't work. Indeed, one can't even take the class of all
Vitali sets in this case, since there are 2^continuum many sets of reals that contain exactly one point from each equivalence class for rational translations.
Qiaochu Yuan's suggestion about translations of a single Vitali set does have size continuum, but there is little reason to expect the Vitali set to be definable in the way that you have
requested, and so it does not provide the desired definable family.
In my earlier posted answer, I considered the possibility that you might have meant some other notion of definability, or whether parameters are allowed in the definition, and so on. And
I find some of these other versions of the question to be quite interesting and subtle.
I pointed out that it is surely consistent with ZFC that there is the desired definable family of non-measurable sets, since in fact any set at all can be made definable in a forcing
extension that adds no reals and no sets of reals. So you can take any family of non-measurable sets that you like and go to a forcing extension where this family is definable.
Perhaps a stronger notion of definability would be to use the notion of projective definitions, where one wants to define the sets within the structure of the reals, using quantification
only over reals and natural numbers (rather than over the entire set-theoretic universe). Thus, we want a projective formula phi(x,z), such that A_z={x | phi(x,z)} is always
non-measurable for any z and all A_z are different. Such a formula would be a strong example of the phenomenon you seek.
The first answer to this way of asking the question is that it is consistent with ZFC that there is such a projective family. The reason is that I have mentioned in a number of questions
and answers on this site, under the Axiom of Constructibility V=L, there is a projectively definable well-ordering of the reals. Thus, under V=L, one can projectively define a Vitali
set, and then take the family of its translations. There is no need for a parameter in this definition, since a particular Vitali set can be projectively defined without parameters from
the projectively definable well-ordering of the reals.
The second answer to this version of the question, however, is that under certain set-theoretic assumptions such as Projective Determinacy, every projective set of reals is Lebesgue
measurable. In this case, there can be no such projectively defined family of non-measurable sets. The assumption of PD is consistent with ZFC from large cardinals, but perhaps one needs
a much weaker hypothesis merely to get every projective set measurable.
In summary, if one wants a projectively definable family of non-measurable sets, then it is independent of ZFC, if large cardinals are consistent. (Perhaps the need for large cardinals
can be reduced.)
An inaccessible is required and sufficient for your last paragraph, and Harvey's paper mentioned in Ashutosh's answer shows it also suffices to give (consistently) a negative answer to
the original question. – Andres Caicedo Sep 30 '10 at 20:06
The following appears in On definability of non measurable sets, Harvey Friedman, Canadian Journal of Mathematics, Vol. 32, No. 3, 1980.
Let $M$ be Solovay's model for $ZF + DC + V = L(R) +$ every set of reals is Lebesgue measurable etc. Let $\kappa$ be a regular cardinal of cofinality bigger than $\omega_1$ in $M$. Then forcing with countable partial functions from $\kappa$ to $2$ gives a model $N$ which satisfies choice and the statement: "Every definable, with ordinal and real parameters, set of sets of reals of size less than continuum has only Lebesgue measurable sets".
Ashutosh, if possible, please replace $L[R]$ with $L(R)$. It is unfortunate that both notations mean different things and they used to be confused (in print!) a few decades ago. – Andres
Caicedo Sep 30 '10 at 20:07
You're right. The distinction does matter here. – Ashutosh Sep 30 '10 at 23:19
|
{"url":"https://mathoverflow.net/questions/9322/definable-collections-of-non-measurable-sets-of-reals","timestamp":"2014-04-17T01:28:20Z","content_type":null,"content_length":"68873","record_id":"<urn:uuid:fd65761b-01eb-451c-94d7-9b0d9135b01e>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00463-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calculating distances (across matrices)
This Gist is mostly for my future self, as a reminder of how to find distances between each row in two different matrices. To create a distance matrix from a single matrix, the function dist(), from
the stats package is sufficient.
There are times, however, when I want to see how close each row of a matrix is to another set of observations, and thus I want to find distances between two matrices. For example, consider a set of
voter ideal points in several dimensions, from which I want to find the distance to a set of candidate ideal points in those same dimensions.
Creating a distance matrix can get very memory-intensive, so it is useful to focus only on finding the distances one needs, rather than calculating an entire n × n matrix and ignoring most of it. For
this purpose, I use the dist() function from the proxy package, as shown below.
I also include an example of the use of multidimensional scaling on a distance matrix, to show how useful this simple operation can be.
|
{"url":"http://is-r.tumblr.com/post/32930447064/calculating-distances-across-matrices","timestamp":"2014-04-21T07:03:38Z","content_type":null,"content_length":"39220","record_id":"<urn:uuid:b0a5a743-0968-4960-b5b6-8d1433de3733>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lecture 10: October 8, 2012
COMS W3261
Computer Science Theory
Lecture 10: October 8, 2012
Properties of Context-Free Languages
• Eliminating useless symbols
• Eliminating ε-productions
• Eliminating unit productions
• Chomsky normal form
• Pumping lemma for CFL's
• Cocke-Younger-Kasami algorithm
1. Eliminating Useless Symbols from a CFG
• A symbol X is useful for a CFG if there is a derivation of the form S ⇒^* αXβ ⇒^* w for some string of terminals w.
• If X is not useful, then we say X is useless.
• To be useful, a symbol X needs to be
1. generating; that is, X needs to be able to derive some string of terminals.
2. reachable; that is, there needs to be a derivation of the form S ⇒^* αXβ where α and β are strings of nonterminals and terminals.
• To eliminate useless symbols from a grammar, we
1. identify the nongenerating symbols and eliminate all productions containing one or more of these symbols, and then
2. eliminate all productions containing symbols that are not reachable from the start symbol.
• In the grammar
S → AB | a
A → b
S, A, a, and b are generating. B is not generating.
Eliminating the productions containing the nongenerating symbols we get
S → a
A → b
Now we see A is not reachable from S, so we can eliminate the second production to get
S → a
The generating symbols can be computed inductively bottom-up from the set of terminal symbols.
The reachable symbols can be computed inductively starting from S.
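As a small illustration of the two inductive computations just described, here is one possible Python sketch; the (head, body) encoding of productions is my own choice, not part of the lecture notes.

def generating_symbols(productions, terminals):
    # Symbols that can derive some terminal string, computed bottom-up.
    gen = set(terminals)
    changed = True
    while changed:
        changed = False
        for head, body in productions:
            if head not in gen and all(s in gen for s in body):
                gen.add(head)
                changed = True
    return gen

def reachable_symbols(productions, start):
    # Symbols reachable from the start symbol, computed top-down.
    reach = {start}
    changed = True
    while changed:
        changed = False
        for head, body in productions:
            if head in reach:
                for s in body:
                    if s not in reach:
                        reach.add(s)
                        changed = True
    return reach

# The example grammar: S -> AB | a, A -> b (terminals a, b)
prods = [("S", ("A", "B")), ("S", ("a",)), ("A", ("b",))]
gen = generating_symbols(prods, {"a", "b"})
print(gen)                                   # {'a', 'b', 'S', 'A'}: B is not generating
# Step 1: drop productions that mention a non-generating symbol.
prods = [(h, b) for h, b in prods if h in gen and all(s in gen for s in b)]
# Step 2: keep only symbols reachable from S.
print(reachable_symbols(prods, "S"))         # {'S', 'a'}: A -> b has become unreachable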
2. Eliminating ε-productions from a CFG
□ If a language L has a CFG, then L - { ε } has a CFG without any ε-productions.
□ A nonterminal A in a grammar is nullable if A ⇒^* ε.
□ The nullable nonterminals can be determined iteratively.
□ We can eliminate all ε-productions in a grammar as follows:
☆ Eliminate all productions with ε bodies.
☆ Suppose A → X[1]X[2] ... X[k] is a production and m of the k X[i]'s are nullable. Then add the 2^m versions of this production where the nullable X[i]'s are present or absent. (But if all
symbols are nullable, do not add an ε-production.)
□ Let us eliminate the ε-productions from the grammar G
S → AB
A → aAA | ε
B → bBB | ε
S, A and B are nullable.
For the production S → AB we add the productions S → A | B
For the production A → aAA we add the productions A → aA | a
For the production B → bBB we add the productions B → bB | b
The resulting grammar H with no ε-productions is
S → AB | A | B
A → aAA | aA | a
B → bBB | bB | b
We can prove that L(H) = L(G) - { ε }.
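The nullable set used in this construction can be found with the same kind of bottom-up iteration as the generating symbols; a short sketch (the grammar encoding is my own, not from the notes):

def nullable_nonterminals(productions):
    # Nonterminals A with A =>* epsilon, determined iteratively.
    nullable = set()
    changed = True
    while changed:
        changed = False
        for head, body in productions:
            if head not in nullable and all(s in nullable for s in body):
                nullable.add(head)
                changed = True
    return nullable

# Grammar G from the example: S -> AB, A -> aAA | eps, B -> bBB | eps
prods = [("S", ("A", "B")), ("A", ("a", "A", "A")), ("A", ()),
         ("B", ("b", "B", "B")), ("B", ())]
print(nullable_nonterminals(prods))   # {'A', 'B', 'S'}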
3. Eliminating Unit Productions from a CFG
□ A unit production is one of the form A → B where both A and B are nonterminals.
□ Let us assume we are given a grammar G with no ε-productions.
□ From G we can create an equivalent grammar H with no unit productions as follows.
☆ Define (A, B) to be a unit pair if A ⇒^* B in G.
☆ We can inductively construct all unit pairs for G.
☆ For each unit pair (A, B) in G, we add to H the productions A → α where B → α is a nonunit production of G.
□ Consider the standard grammar G for arithmetic expressions:
E → E + T | T
T → T * F | F
F → ( E ) | a
The unit pairs are (E,E), (E,T), (E,F), (T,T), (T,F), (F,F).
The equivalent grammar H with no unit productions is:
E → E + T | T * F | ( E ) | a
T → T * F | ( E ) | a
F → ( E ) | a
4. Putting a CFG into Chomsky Normal Form
□ A grammar G is in Chomsky Normal Form if each production in G is one of two forms:
1. A → BC where A, B, and C are nonterminals, or
2. A → a where a is a terminal.
□ Every context-free language without ε can be generated by a Chomsky Normal Form grammar.
□ Let us assume we have a CFG G with no useless symbols, ε-productions, or unit productions. We can transform G into an equivalent Chomsky Normal Form grammar as follows:
☆ Arrange that all bodies of length two or more consist only of nonterminals.
☆ Replace bodies of length three or more with a cascade of productions, each with a body of two nonterminals.
□ Applying these two transformations to the grammar H above, we get:
E → EA | TB | LC | a
A → PT
P → +
B → MF
M → *
L → (
C → ER
R → )
T → TB | LC | a
F → LC | a
5. Pumping Lemma for CFL's
□ For every nonfinite context-free language L, there exists a constant n that depends on L such that for all z in L with |z| ≥ n, we can write z as uvwxy where
1. vx ≠ ε,
2. |vwx| ≤ n, and
3. for all i ≥ 0, the string uv^iwx^iy is in L.
□ Proof: See HMU, pp. 281-282.
□ One important use of the pumping lemma is to prove certain languages are not context free.
□ Example: The language L = { a^nb^nc^n | n ≥ 0 } is not context free.
☆ The proof will be by contradiction. Assume L is context free. Then by the pumping lemma there is a constant n associated with L such that for all z in L with |z| ≥ n, z can be written as
uvwxy such that
1. vx ≠ ε,
2. |vwx| ≤ n, and
3. for all i ≥ 0, the string uv^iwx^iy is in L.
☆ Consider the string z = a^nb^nc^n.
☆ From condition (2), vwx cannot contain both a's and c's.
☆ Two cases arise:
1. vwx has no c's. But then uwy cannot be in L since at least one of v or x is nonempty.
2. vwx has no a's. Again, uwy cannot be in L.
☆ In both cases we have a contradiction, so we must conclude L cannot be context free. The details of the proof can be found in HMU, p. 284.
6. Cocke-Younger-Kasami Algorithm for Testing Membership in a CFL
□ Input: a Chomsky normal form CFG G = (V, T, P, S) and a string w = a[1]a[2] ... a[n] in T*.
□ Output: "yes" if w is in L(G), "no" otherwise.
□ Method: The CYK algorithm is a dynamic programming algorithm that fills in a triangular table X[ij] with nonterminals A such that A ⇒* a[i]a[i+1] ... a[j].
for i = 1 to n do
    if A → a[i] is in P then
        add A to X[ii]
fill in the table row-by-row, from row 2 to row n,
filling the cells in each row from left to right:
    if (A → BC is in P) and, for some i ≤ k < j,
       (B is in X[ik]) and (C is in X[k+1,j]) then
        add A to X[ij]
if S is in X[1n] then
    output "yes"
else
    output "no"
The algorithm adds nonterminal A to X[ij] iff there is a production A → BC in P where B ⇒* a[i]a[i+1] ... a[k] and C ⇒* a[k+1]a[k+2] ... a[j].
To compute entry X[ij], we examine at most n pairs of entries: (X[ii], X[i+1,j]), (X[i,i+1], X[i+2,j]), and so on until (X[i,j-1], X[j,j]).
The running time of the CYK algorithm is O(n^3).
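For concreteness, here is one possible rendering of the algorithm as a runnable program; the way the CNF grammar is encoded (one dictionary keyed by terminal, one keyed by pairs of nonterminals) is my own choice rather than anything specified in the notes.

from itertools import product

def cyk(word, terminal_rules, binary_rules, start="S"):
    # terminal_rules[a] = set of nonterminals A with a production A -> a
    # binary_rules[(B, C)] = set of nonterminals A with a production A -> B C
    n = len(word)
    if n == 0:
        return False
    # X[i][j] = set of nonterminals deriving word[i..j] (0-based, inclusive)
    X = [[set() for _ in range(n)] for _ in range(n)]
    for i, a in enumerate(word):
        X[i][i] = set(terminal_rules.get(a, ()))
    for length in range(2, n + 1):            # fill the table by substring length
        for i in range(n - length + 1):
            j = i + length - 1
            for k in range(i, j):              # split point
                for B, C in product(X[i][k], X[k + 1][j]):
                    X[i][j] |= binary_rules.get((B, C), set())
    return start in X[0][n - 1]

# A CNF grammar for { a^n b^n | n >= 1 }: S -> AB | AC, C -> SB, A -> a, B -> b
terminal_rules = {"a": {"A"}, "b": {"B"}}
binary_rules = {("A", "B"): {"S"}, ("A", "C"): {"S"}, ("S", "B"): {"C"}}
print(cyk("aabb", terminal_rules, binary_rules))   # True
print(cyk("aab", terminal_rules, binary_rules))    # False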
7. Practice Problems
1. Eliminate useless symbols from the following grammar:
S → AB | CA
A → a
B → BC | AB
C → aB | b
2. Put the following grammar into Chomsky Normal Form:
S → ASB | ε
A → aAS | a
B → BbS | A | bb
C → aB | b
3. Show that { a^nb^nc^n | n ≥ 0 } is not context free.
4. Show that { a^nb^nc^i | i ≤ n } is not context free.
5. Show that { ss^Rs | s is a string of a's and b's } is not context free.
6. (Hard) Show that the complement of { ss | s is a string of a's and b's } is context free.
8. Reading Assignment
|
{"url":"http://www1.cs.columbia.edu/~aho/cs3261/lectures/12-10-08.htm","timestamp":"2014-04-19T08:00:52Z","content_type":null,"content_length":"11195","record_id":"<urn:uuid:cd5a2f04-f4cf-40ea-8a55-c078c4bc81a2>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: May 1995 [00228]
More on "Extracting data points from Plot[]"
• To: mathgroup at christensen.cybernetics.net
• Subject: [mg959] More on "Extracting data points from Plot[]"
• From: cameron <cameron at dnaco.net>
• Date: Thu, 4 May 1995 03:14:11 -0400
Another post in regard to the subject of using Plot to sample functions.
In the development of "The Mathematica Graphics Guidebook", a great deal
of material was generated over and above what appeared in the finished
book. The project ran enormously over schedule, and we reached a point
at which we decided it was better to call a halt and publish a book than
to continue making incremental improvements and never get done at all.
So we drew a line between core material and nonessential embellishments,
polished up the core material, and published it.
However, there is much of value in the 150 pages or so of remaining
material, even though it was never brought up to publishable form.
Coincidentally, one of the examples that got eliminated is the use of
Plot and ParametricPlot to sample functions without making a graph --
the same problem that has recently been discussed in this forum.
I have attached below an "outtake" from "The Mathematica Graphics Guidebook",
in which we define the AdaptiveSample[] function. Share and Enjoy!
Best regards--
--Cameron Smith
co-author (with Nancy Blachman) of "The Mathematica Graphics Guidebook"
P.S. Some of the outtake material from the Guidebook may yet see the light
of day, in "The Mathematica Graphics Cookbook". This work -- if we
decide to go ahead with it -- would be a companion to the Guidebook.
The idea is that the Guidebook is a reference work explaining the
concepts underlying Mathematica's graphics model, and the Cookbook
would be a compendium of useful examples illustrating the practical
use of the model; the relationship would be similar to that between
the original PostScript Red and Green books.
The text below is copyright (c) 1995 by Cameron Smith.
All rights are reserved. Permission is granted to use the Mathematica
code for the AdaptiveSample[] function for any purpose, without fee,
at the user's own risk. No warranty, express or implied, is offered.
The adaptive sampling routine used by Plot and ParametricPlot
has other uses besides graphing functions -- for example, in certain
numerical work it would be nice to be able to say
AdaptiveSample[ f, { x, min, max } ]
to produce a list of (x,f(x)) pairs with f sampled more densely in
intervals over which it is less linear. Mathematica doesn't give us
this direct access to its sampling algorithm, but since we know the
structure of the graphics objects that Plot and ParametricPlot create,
it isn't too hard to write. We simply use one of the plotting
functions to prepare a graph of the function, then extract the list of
points from the graph and throw the rest away. Here is Mathematica
code that does this:
SetAttributes[ AdaptiveSample, HoldAll ]

AdaptiveSample[ f_, {var_, min_, max_}, opts___ ] :=
  Block[ { plot, cmp, pts, bend, div, sampler },
    sampler = Plot;
    If[ MatchQ[ Hold[f], Hold[{_,_}] ], sampler = ParametricPlot ];
    {cmp, pts, bend, div} =
      {Compiled, PlotPoints, MaxBend, PlotDivision} /. {opts} /. Options[sampler];
    plot = sampler[ f, {var, min, max},
      Compiled -> cmp, PlotPoints -> pts, MaxBend -> bend,
      PlotDivision -> div, Axes -> None, Frame -> False,
      DisplayFunction -> Identity, Prolog -> {}, Epilog -> {} ];
    First /@ Cases[ First[plot], _Line, Infinity ]
  ]
What's going on here? The SetAttributes command ensures that
AdaptiveSample, like Plot and ParametricPlot, defers evaluation of its
arguments. In the body of the function, the first thing we have to do
is to decide whether to use Plot or ParametricPlot, and we make that
choice by examining the first argument: if we were given a pair of
functions, we use ParametricPlot, otherwise we use Plot.
Next we process the optional arguments, obtaining settings for the four
options that control the sampling mechanism (Compiled, PlotPoints,
MaxBend, and PlotDivision), and using the default value for each option
if no explicit setting was provided. Then we construct a graphics
object, setting other options (such as Axes and DisplayFunction) to
reduce the amount of unnecessary work done (no point in constructing
axes if we aren't going to look at the plot anyway) and to ensure that
the graphic isn't displayed. Finally, we extract the Line objects from
the list of graphics primitives in the plot, and extract the lists of
points from the Line objects.
Note that the result of AdaptiveSample is a list of lists of
points, rather than simply a list of points. This is necessary,
because the result of Plot or ParametricPlot can be a collection
of disjoint Line objects. The plotting functions break up their
results this way when they detect one or more discontinuities in
their arguments, and we want to preserve this information.
Here's an illustration of this phenomenon:
badfunction[x_] := Which[ x<1, 1, x>2, 2, True, Null ]
AdaptiveSample[ badfunction[x], {x,0,3}, PlotPoints -> 10 ]
Plot::plnr: CompiledFunction[{x}, <<1>>, -CompiledCode-][x]
is not a machine-size real number at x = 1..
Plot::plnr: CompiledFunction[{x}, <<1>>, -CompiledCode-][x]
is not a machine-size real number at x = 1.33333.
Plot::plnr: CompiledFunction[{x}, <<1>>, -CompiledCode-][x]
is not a machine-size real number at x = 1.66667.
Further output of Plot::plnr
will be suppressed during this calculation.
{{{0., 1.}, {0.333333, 1.}, {0.666667, 1.}, {0.833333, 1.},
{0.916667, 1.}, {0.958333, 1.}, {0.979167, 1.}, {0.989583, 1.}},
{{2.01042, 2.}, {2.02083, 2.}, {2.04167, 2.}, {2.08333, 2.},
{2.16667, 2.}, {2.33333, 2.}, {2.66667, 2.}, {3., 2.}}}
Still, it is true that the result of AdaptiveSample will usually be
a list containing a single list of points -- this will always be the
case if the function being sampled is continuous over the sampling
interval, and will usually be true even if it is not, since the odds
are small that the sampler will try to sample a function exactly at
an isolated point of discontinuity.
|
{"url":"http://forums.wolfram.com/mathgroup/archive/1995/May/msg00228.html","timestamp":"2014-04-17T04:14:48Z","content_type":null,"content_length":"40380","record_id":"<urn:uuid:bc749950-5989-4d0c-9a1d-dbda291238ab>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00477-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum: Teacher2Teacher - Q&A #11798
From: Tom Meyer <DrTMeyer@yahoo.com>
To: Teacher2Teacher Service
Date: Jul 18, 2003 at 13:11:13
Subject: "One-room schoolhouse" in a classroom
How can I do a better job of teaching a "one-room schoolhouse" of mathematics
levels in my high school bilingual mathematics classes?
In my 3 1/2 years of teaching math to recent immigrants from 14 Spanish-
speaking countries, I have made some observations and gathered suggestions:
My Algebra I, Pre-Algebra, and Geometry students literally vary in
mathematical background from very limited elementary education (placed in high
school because of their age) through the levels to those with advanced
preparation, ready for Pre-Calculus (placed at a lower level because of the
limitations of our bilingual program). (Note that class size averages 20-25
for 43 minute periods.)
When I attempt to teach the prescribed level of the official course to every
student, a substantial chunk of the class is bored and frustrated while
another large section is bewildered and lost. In this case the detracking
model stretches way beyond the snapping point. Having the advanced students
tutor the basic level students sometimes works, but to the detriment of the
advanced students learning new material.
Potential strategies: splitting the class into 3 or 4 levels with weekly
cross-level hands-on activities, having work stations like some elementary
teachers, trying to schedule similar topics at different spiral levels for the
different groups with questions that allow access at different levels. Is
there research on, say, rural schools in other countries with suggestions on
how to do this?
Perhaps a key is focusing on how to help students generate the desire to
become self-reliant, life-long learners.
I really want to address all my students needs; there are too many students at
the various levels to just do individual tutoring or enrichment as if they are
Thank you.
|
{"url":"http://mathforum.org/t2t/message.taco?thread=11798&message=1","timestamp":"2014-04-17T01:29:59Z","content_type":null,"content_length":"5940","record_id":"<urn:uuid:62b8cb8b-e089-4ed7-87b6-d8b68dd80051>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The LIFO Stack - A Gentle Guide
Written by Harry Fairhead
Stacking Operators
When you write down an operator expression – for example, 2+3*4 – you don’t perform the operations in the order they are written.
That is, 2+3*4 is not 2+3 multiplied by 4 it is 3*4 plus 2 which is 14 not 20.
Given that this is another “alter the order of things” problem you should again be thinking “stack”!
In this case the method is slightly more complicated than the shunting algorithm but not much. The simplest way of doing it is to have two stacks – one for the values and one for
the operators, but it can be done using just one stack quite easily,
All you do is assign each operator a priority. For example, + is priority 1 and * is priority 2.
You then scan the expression from left to right stacking each item as you go on its appropriate stack.
Before you stack an operator, however, you compare its priority with the operator on the top of the stack. If the operator already on top of the stack has a higher priority than the current one, then you pop it off the stack
and make it operate on the top two items on the value stack – pushing the answer back on the value stack – before pushing the current operator.
When the scan is completed the final answer is evaluated by popping each operator off the operator stack and using it on the top two items on the value stack, pushing the result
back on until all the operators are used up.
The answer is then the single value on the stack.
In the case of the expression 2+3*4 this would result in the following steps – push 2 on the value stack; push + on the operator stack; push 3 on the value stack; * has a higher priority than the + on top of the stack, so * is simply pushed; push 4 on the value stack. The scan is now complete, so * is popped and applied to 4 and 3, pushing 12, and then + is popped and applied to 12 and 2, pushing the final answer, 14.
Try the same method on 2*3+4 and you will discover that when you reach the + operator the 2*3 is automatically calculated before the + is pushed onto the operator stack (because the
* has a higher priority and we have to evaluate the top two items on the stack using it and push the result back on the stack)..
Reverse Polish -RPN
This stack operator algorithm is well known and there are lots of variations on it. It also gives rise to the once very well-known Reverse Polish (RP) notation.
If you write operator expressions so that each operator acts on the two items to its left then the expression can be evaluated using an even simpler stack algorithm.
For example, in RP the expression 2+3*4 would be written 2, 3, 4, *, + and it can be evaluated simply by scanning from left to right and pushing values on the stack.
Each time you hit an operator you pop the top two items from the stack, apply the operator and push the result back on the stack.
When you get to the end of the RP expression the result is on the top of the stack.
Try it and you will discover it works. In fact you can use a stack to convert standard operator, known as infix, expressions to RP.
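As a concrete illustration (this sketch is mine, not part of the article), here is the RPN evaluation loop in a few lines of Python; the token format is an arbitrary choice.

def eval_rpn(tokens):
    # Evaluate a Reverse Polish expression given as a list of tokens.
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()          # top of the stack is the right operand
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()               # the result is the single value left on the stack

print(eval_rpn(["2", "3", "4", "*", "+"]))   # 14.0, the article's 2 + 3*4 example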
Back in the days when hardware was more expensive it was thought to be a good idea to get people to work directly in RP. You could and still can buy calculators that work in RP
To add 2 and 3 you enter 2, then 3 and press the plus button.
Burroughs even built a mainframe computer that didn’t have any other type of working memory than a stack. Everything it did was effectively in RP notation applied to the central
stack. The stack machine architecture was simple but not fast enough to compete with more general architectures.
However the stack approach to computing hasn't completely died a death. For example, the difference between the usual Java Virtual Machine - JVM and the one used in Android - Dalvik
is the use of a stack. The standard JVM is register based machine but this is too expensive and power hungry for a mobile device hence the Dalvik VM is a stack oriented VM.
Perhaps the longest standing overuse of the stack approach was, and is, the language Forth and its many derivatives.
It turns out that with a little effort you can build an entire programming language that can be written down as operators in RP notation.
Of course making it work is just a matter of using a stack to push operators and operands onto.This makes it easy to implement but arguably difficult to use.
However the fun that people had thinking in RP is such that they became really good at it!
The fascination for this sort of language is so great that you still find it in use in niche environments particularly where hardware is limited and it has its enthusiast following.
The LIFO stack is a wonderful invention but it isn’t the only thing computer science has to offer us as a practical tool!
If all you know is the LIFO stack then everything looks like a pop or push.
Related Articles
Introduction to data structures
Stack architecture demystified
Reverse Polish Notation - RPN
Brackets are Trees
Javascript data structures - Stacks
|
{"url":"http://i-programmer.info/babbages-bag/263-stacks.html?start=3","timestamp":"2014-04-16T04:40:32Z","content_type":null,"content_length":"40293","record_id":"<urn:uuid:56ddd60a-cdba-40c4-aaa1-2332cb660473>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Distributive Properties
Suppose we have 3 baskets, each holding 2 apples and 4 oranges.
This is the same number of apples and oranges as if we had a bag with 6 apples and a bag with 12 oranges. Except that we now mysteriously no longer have our three baskets, which were handmade in
Santa Fe, New Mexico and actually hold quite a bit of sentimental value for us. That's a shame.
Regardless of how we package them, the number of fruits remains the same. Not that that will take away the sting of having had our wicker receptacles stolen from right under our noses.
This is an example of the distributive property, which basically says that it doesn't matter how we "package" numbers when performing multiplication. To write the distributive property in symbols we
say that, for all real numbers a, b, and c,
a(b + c) = ab + ac.
When we go from the left side to the right side of this equation, we say we are "distributing a over the quantity (b + c)." We may not say that aloud often, but we certainly won't hesitate to type
it. In fact, we just did.
Sample Problems
1. Multiply: 3x(x + 2).
Now the thing we are distributing is 3x, rather than a plain old number all by its lonesome. That's okay, because the distributive property still works: 3x(x + 2) = 3x · x + 6x = 3x^2 + 6x.
2. - 2(a + b) = - 2a - 2b.
Be careful: When the value you are distributing has a negative sign, make sure you distribute the negative sign over everything in the parentheses. Your parents may have told you to stop spreading
your negativity, but ignore them for now.
Having a negative sign by itself outside the parentheses is the same as having - 1 outside the parentheses. The 1 is there; it is hiding. Did you check under the bed? That's totally its favorite
spot. To distribute the negative sign, you would simply multiply each term inside the parentheses by - 1.
Sample Problems
1. - (c + d) = - c - d.
2. - (2a - 5b - 6 + 11c) = - 2a + 5b + 6 - 11c.
Since multiplication is commutative, the distributive property also works if we write the multiplication the other way around. For all real numbers a, b, and c,
(b + c)a = ba + ca.
Sample Problems
1. Use the distributive property to multiply (4x - y)( - 3).
(4x - y)( - 3) = - 12x + 3y.
2. Use the distributive property to multiply (4 - x)( - 1).
(4 - x)( - 1) = - 4 + x.
Notice that, in the previous example, we needed to write out the negative one, since an expression like (4 - x) followed by just a bare minus sign doesn't make sense. No hiding under the bed for him today.
The distributive property works even if the expression in parentheses has more than two terms. It is totally a team player.
Sample Problems
1. 4(x + y + z) = 4x + 4y + 4z.
The distributive property also works to multiply expressions where both factors have multiple terms. So, if you are a tennis player, it is like playing straight doubles rather than Canadian doubles.
Or triples. Okay, the analogy sort of falls apart at this point. Ignore us and take a look at these examples.
2. (3 + x)(y - 4) = (3 + x)y - (3 + x)4, using the distributive property to distribute (3 + x) over (y - 4).
3. (3 + x)(y - 4) = 3(y - 4) + x(y - 4), using the distributive property to distribute (y - 4) over (3 + x).
Shortcut alert! If you are especially on the ball, you may have noticed that we can distribute twice in the cases above. Consider the first example of distributivity. First we can distribute (3 + x)
over (y - 4), but afterwards we can actually continue and distribute the y over (3 + x) and distribute the (- 4) over (3 + x). Basically, you want to keep on distributin' until the day is done. Or at
least until there is nothing left to distribute.
(3 + x)(y - 4) = (3 + x)y - (3 + x)4
= 3y + xy - (12 + 4x)
= 3y + xy - 12 - 4x
That was a lot of steps! What are we, making our way into the Philadelphia Museum of Art? (Rocky reference, sorry.) Fortunately, there is a shortcut. (No, not for you, Rocky. Keep runnin'.) You may
have already heard of it. It is called FOIL. FOIL stands for First, Outer, Inner, Last. The idea here is to be able to carry out both steps of the distribution simultaneously. Aside from helping to
keep your baked chicken crispy, FOIL helps you do this stuff in a systematic way so that you don't make mistakes.
1. The First means that you multiply the first term in the first set of parentheses by the first term in the second set of parentheses. Below we put brackets around the first terms. In addition to
helping you easily identify the terms in question, putting them in a box also helps guarantee freshness.
([ 3 ] + x)([ y ] + (- 4))
2. The Outer means that you multiply the first time in the first set of parentheses by the second term in the second set of parentheses.
([ 3 ] + x)(y + [ - 4 ])
3. The Inner refers to multiplying the innermost terms:
(3 + [ x ])([ y ] + (- 4))
4. The Last refers to the last terms in each set of parentheses being multiplied:
(3 + [ x ])(y + [ - 4 ])
To carry out the multiplication using FOIL, we multiply the boxed terms in the order presented above:
(3 + x)(y - 4) = 3y - 12 + xy - 4x
Notice that we arrived at the same answer as we did when we distributed twice, but we did all of the steps at once! Think how much easier it would be if you could shower, brush your teeth, eat
breakfast, and get dressed all at once. What a life-saver that would be! Especially on mornings that your alarm didn't go off...
|
{"url":"http://www.shmoop.com/algebraic-expressions/distributive-properties.html","timestamp":"2014-04-17T21:53:13Z","content_type":null,"content_length":"44164","record_id":"<urn:uuid:995d6d29-ad92-4780-9939-77d76eec99bf>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: June 2010 [00369]
Re: defining a function of functions
• To: mathgroup at smc.vnet.net
• Subject: [mg110474] Re: defining a function of functions
• From: Leonid Shifrin <lshifr at gmail.com>
• Date: Sun, 20 Jun 2010 03:45:05 -0400 (EDT)
you don't need a lot more effort to accomplish what you want. Just pass a
pure function as your first parameter:
I would not recommend embedding global parameters in a function in the
fashion you did it in your code though,since you make your function
implicitly dependent on values of global variables, and this is an
invitation to trouble. I'd either make all parameters explicit:
innerprod[f_, g_, mylist_List, l_Integer] := Sum[f[k] Conjugate[g[k]] Part[mylist, k], {k, 1, l}]
(notice also that := (SetDelayed) is usually more appropriate to define
functions than = (Set), and that the use of names starting with a capital
letter is better avoided).
Or, if the same length and list are used many times, I'd make a function
makeInner[innerName_Symbol, mylist_List, l_Integer] :=
  (innerName[f_, g_] := Sum[f[k] Conjugate[g[k]] Part[mylist, k], {k, 1, l}]);
then you define your inner product function by calling makeInner, like
Clear[a, b, myInnerProd];
makeInner[myInnerProd, Range[10], 5]
In[6]:= ?myInnerProd
myInnerProd[f$_, g$_] := Sum[f$[k] Conjugate[g$[k]] {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}[[k]], {k, 1, 5}]
In[7]:= myInnerProd[a, b]
Out[7]= a[1] Conjugate[b[1]] + 2 a[2] Conjugate[b[2]] + 3 a[3]
Conjugate[b[3]] + 4 a[4] Conjugate[b[4]] + 5 a[5] Conjugate[b[5]]
Both ways, you don't mess with global variables.
On Sat, Jun 19, 2010 at 4:49 AM, J Davis <texasautiger at gmail.com> wrote:
> I want to define an inner product function such as
> (here mylist is a specified list and L is a specified constant)
> innerprod[f_, g_] = Sum[f[k]Conjugate[g[k]]Part[mylist,k],{k,1,L}]
> This works fine but now I need to apply it in a situation where f is a
> function of 2 variables while g is a function of only one variable,
> i.e. I want to compute something like the inner product of f[2,n] and
> g[n].
> Of course, I want the ability to freely vary the first input of f.
> I have accomplish this before rather easily but I'm presently drawing
> a blank.
> Thanks for your help.
> Best,
> JD
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2010/Jun/msg00369.html","timestamp":"2014-04-18T13:15:47Z","content_type":null,"content_length":"27256","record_id":"<urn:uuid:c1f7ce3c-0870-490a-ab53-94a6877c60d3>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
|
FOM: HF on infinitesimals (was: ReplyToDavis)
David Ross ross at math.hawaii.edu
Tue Nov 11 09:26:26 EST 1997
Though I am a regular user of nonstandard methods, I tend to agree with
HF that their applicability to fom is likely to not be great. However,
the question:
> But can one get a "preferred" one [NS model]?
...reminds me of an interesting foundational point, namely that a
'false' result of Cauchy about convergence of continuous functions can be
rendered 'true' provided we interpret his notion of 'infinitesimal' as the
equivalence class of a sequence <s_n> in an ultraproduct of R w/r to an
ultrafilter which is a p-point.
- David Ross
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/1997-November/000214.html","timestamp":"2014-04-16T13:38:02Z","content_type":null,"content_length":"2935","record_id":"<urn:uuid:d80ee11c-1fa8-4786-a0ba-f5bd2a1fb926>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Point P is inside equilateral triangle ABC. Points Q, R, and S are the feet of the perpendiculars from P to AB, BC, and CA, respectively. Given that PQ = 1, PR = 2, and PS = 3, what is AB?
Having trouble visualizing this problem could someone draw it out please?
|
{"url":"http://openstudy.com/updates/50e112bee4b028291d742cf3","timestamp":"2014-04-19T04:39:21Z","content_type":null,"content_length":"42374","record_id":"<urn:uuid:5854b5b0-224d-411f-8b2e-23f26aedc3f4>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Projectile's length of path
March 16th 2013, 12:25 PM #1
Mar 2013
Hey brainys,
I was wondering how to derive a formula for a length of projectiles path in the air (2 dimensions). I figured out the relationship of x and y, y=x*tg(a) - gx^2/2v[0]^2cos^2(a)
Then i combined some logic with the mean value theorem and got that lenght of path is integral from zero to x=v[0]^2sin(2a)/g of sqrt(1 + (tg(a) - xg/v[0]^2cos^2(a))^2) dx
When we raise (tg(a) - xg/v[0]^2*cos^2(a))^2 and simplify a little bit we get integral from zero to x=v[0]^2sin(2a)/g of sqrt((v[0]^4cos^2(a) - 2v[0]^2sin^2(a)*xg + x^2g)/v[0]^4cos^4(a)) dx .
Any ideas how to solve it? Thank you!
Re: Projectile's length of path
Hey brainys,
I was wondering how to derive a formula for a length of projectiles path in the air (2 dimensions). I figured out the relationship of x and y, y=x*tg(a) - gx^2/2v[0]^2cos^2(a)
Then i combined some logic with the mean value theorem and got that lenght of path is integral from zero to x=v[0]^2sin(2a)/g of sqrt(1 + (tg(a) - xg/v[0]^2cos^2(a))^2) dx
When we raise (tg(a) - xg/v[0]^2*cos^2(a))^2 and simplify a little bit we get integral from zero to x=v[0]^2sin(2a)/g of sqrt((v[0]^4cos^2(a) - 2v[0]^2sin^2(a)*xg + x^2g)/v[0]^4cos^4(a)) dx .
Any ideas how to solve it? Thank you!
Hi BlueBeast!
Let me first reformat your equation to something I can understand
$\displaystyle \int_0^{v_0^2\sin(2a)/g} \sqrt{\frac {v_0^4\cos^2(a) - 2v_0^2\sin^2(a) \cdot xg + x^2g} {v_0^4\cos^4(a)}} dx$
I haven't tried to verify the validity of that length, but to solve something like this, first simplify it to for instance
$\int \sqrt{A+Bx+Cx^2}$
with appropriate choices for A, B, and C.
Now you can feed it to for instance Wolfram|Alpha to do the difficult integration part.
You can see the result here.
Afterward you can substitute your values for A, B, and C again.
Re: Projectile's length of path
WOW.... I didn't think it is sooo complicated.. I realised that it is but i thought that it is possible to solve it with somekind of substitution. When i saw result in wolfram alfa, i was
shocked :O . Thank you very much for this answer, i will try to derive that formula finally. When i'll do, i will post it here
Re: Projectile's length of path
It may still be that a lot of that stuff cancels, but Wolfram did not accept the original formula.
Btw, I can already see that the formula cannot be correct.
The numerator should have units that are $m^4/s^4$, but $x^2g$ has the unit $m^3/s^2$
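Whatever the closed form turns out to be, the path length is easy to check numerically, which can help when comparing candidate formulas. A quick sketch, not from the thread; the values of v0 and a below are arbitrary sample inputs.

import numpy as np
from scipy.integrate import quad

g, v0, a = 9.81, 20.0, np.radians(45)

def slope(x):
    # dy/dx for the trajectory: tan(a) - g*x / (v0^2 * cos(a)^2)
    return np.tan(a) - g * x / (v0**2 * np.cos(a)**2)

x_range = v0**2 * np.sin(2 * a) / g
length, _ = quad(lambda x: np.sqrt(1.0 + slope(x)**2), 0.0, x_range)
print(length)   # numerical arc length of the trajectory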
|
{"url":"http://mathhelpforum.com/calculus/214895-projectile-s-length-path.html","timestamp":"2014-04-19T04:39:12Z","content_type":null,"content_length":"41902","record_id":"<urn:uuid:aae9e623-2714-4d73-841c-d1446e7441f9>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Statistics Terms
Absolute risk: The difference between two risks, usually smaller than a relative risk
Average/mean: A measure of the center of a set of numbers, calculated by adding all of the values and dividing by the number of values in the set
Clinical significance: An assessment that a research finding will have practical effects on patient care
Cohort: A group of individuals who share a common experience, exposure, or trait and who are under observation in a research study
Confidence interval: A range of values around an estimate, reported with a level such as 95%, meaning that if the study were repeated many times, about that many out of 100 of the resulting ranges would contain the true value. It is a measurement used to indicate the reliability of an estimate.
Confounding variable: A factor in a scientific study that wasn’t addressed that could affect the outcome of the study, such as smoking history in a study of people with cancer
Control group: A group of individuals who do not receive the treatment being studied. Researchers compare this group to the group of individuals who do receive the treatment, which helps them
evaluate the safety and effectiveness of the treatment.
Endpoint: The results measured at the end of a study to see whether the research question was answered
Incidence: The number of new instances of a disease or condition in a particular population during a specific time period
Lifetime risk: The probability of developing a disease or dying from that disease across a person’s lifetime
Median: The middle value in a range of measurements ordered by value
Mortality rate: The number of deaths in a particular population during a specific time period
Odds ratio: A comparison of whether the likelihood of an event is similar between two groups; a ratio of 1 means it is equally likely between both groups.
Outcome: A measurable result or effect
Prevalence: The total number of instances of a disease or condition in a particular population at a specific time.
P-value: Describes the probability that an observed effect occurred by chance. If a p-value is greater than or equal to 0.05 (p ≥ 0.05), the effect could have occurred by chance and is, therefore,
“not statistically significant.” If a p-value is less than 0.05 (p < 0.05), the effect likely did not occur by chance and is, therefore, “statistically significant.”
Randomized: Refers to a clinical trial in which participants are assigned by chance to different groups receiving different treatments so that the comparison of treatments is fair
Relative risk: A ratio, or comparison, of two risks, usually larger than the absolute risk
Risk: The likelihood of an event
Sensitivity: Refers to the proportion of the time that a particular test will accurately give a positive result (indicate that a person has a specific disease)
Specificity: Refers to the proportion of the time that a particular test will accurately give a negative result (indicate that a person does not have a specific disease)
Survival rate: The proportion of patients alive in a particular population at some point after the diagnosis of a disease
Statistically significant: Refers to an observed effect that is not likely to have occurred by chance (See definition of p-value)
|
{"url":"http://www.cancer.net/print/24926","timestamp":"2014-04-19T02:35:54Z","content_type":null,"content_length":"15381","record_id":"<urn:uuid:0c243822-b203-4477-94d7-2bd8da1e6501>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chaos: logistic difference equation
Source: James Gleick's Chaos, Making a New Science
IMAGINE A pond with goldfish swimming around in it. Each year, you go out to the pond and count the number of fish. Some years the number is similar to the number you got last year; sometimes it's
completely different. Assuming you're the kind of fish-keeper with a mathematics fetish, how do you go around modelling your fish population?
One model is to assume that your fish stock increases each year by some growth rate:
y(next year) = rate * y(this year)
...where "y" is the number of fish and "rate" is the growth rate (say, 2.4).
Of course the problem is that the pond is only so big, and there is only so much food around. There has to be something in the model to limit the population growth. Otherwise you'll be up to your neck
in goldfish in no time.
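The page title points at the standard fix, the logistic difference equation, in which growth is damped as the population approaches the pond's capacity. A minimal sketch of that iteration (my own code, assuming this is where the article was heading; the population y is measured as a fraction of the maximum the pond can hold):

def logistic_orbit(rate, y0, steps):
    # y_next = rate * y * (1 - y): growth at small y, crowding at large y
    y = y0
    orbit = []
    for _ in range(steps):
        y = rate * y * (1.0 - y)
        orbit.append(y)
    return orbit

print(logistic_orbit(2.4, 0.1, 5))   # settles toward a steady value
print(logistic_orbit(3.9, 0.1, 5))   # a rate this high wanders chaotically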
Enough of the lecture, jump me straight to...
|
{"url":"http://www.dallaway.com/pondlife/","timestamp":"2014-04-19T04:45:34Z","content_type":null,"content_length":"1857","record_id":"<urn:uuid:fe2518da-5e14-4461-8a7c-3a3c98fba6d1>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Variables of multi-equations
The question is:
$a^3+6abd+3c(ac+b^2+d^2)=1$
$b^3+6abc+3d(a^2+bd+c^2)=0$
$c^3+6bcd+3a(ac+b^2+d^2)=0$
$d^3+6acd+3b(a^2+bd+c^2)=0$
How to find the value of a,b,c,d? Thanks
Hello, haedious!
$\begin{array}{ccc}a^3+6abd+3c(ac+b^2+d^2) &=& 1 \\ b^3+6abc+3d(a^2+bd+c^2) &=& 0 \\ c^3+6bcd+3a(ac+b^2+d^2) &=& 0 \\ d^3+6acd+3b(a^2+bd+c^2) &=& 0 \end{array}$
By inspection: $\begin{Bmatrix}a &=& 1 \\ b &=& 0 \\ c &=& 0 \\ d &=& 0 \end{Bmatrix}$
Thank you for the quick reply; it seems I messed something up... But is {1,0,0,0} the only answer to the question? What if I knew the value of a is equal to 0?
Dear Soroban, Is there other ways of solving this question instead of by inspection. I notice there is some form of symmetry in the given equations which is some kind of cyclic order. Thanks Kingman
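Not part of the thread, but one quick machine check is to substitute the solution found by inspection back into the system with sympy; a full symbolic solve of a nonlinear system like this can be slow, so only the verification step is shown here.

import sympy as sp

a, b, c, d = sp.symbols('a b c d')
eqs = [a**3 + 6*a*b*d + 3*c*(a*c + b**2 + d**2) - 1,
       b**3 + 6*a*b*c + 3*d*(a**2 + b*d + c**2),
       c**3 + 6*b*c*d + 3*a*(a*c + b**2 + d**2),
       d**3 + 6*a*c*d + 3*b*(a**2 + b*d + c**2)]

sol = {a: 1, b: 0, c: 0, d: 0}
print([sp.simplify(e.subs(sol)) for e in eqs])   # [0, 0, 0, 0], so (1, 0, 0, 0) works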
|
{"url":"http://mathhelpforum.com/algebra/143806-variables-multi-equations.html","timestamp":"2014-04-18T16:30:21Z","content_type":null,"content_length":"38636","record_id":"<urn:uuid:ef1d53c4-f2b9-4eb7-9827-4e488e734f73>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00143-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the first resource for mathematics
Continuous univariate distributions. Vol. 2. 2nd ed.
(English) Zbl 0821.62001
New York, NY: Wiley. xix, 719 p. £70.00 (1995).
This is the third volume of the new edition of the four volumes sized “Distributions in statistics.” Similar to the first two companion volumes, this revised second part of “Continuous univariate
distributions” (CUD), for the review of Vol. 1 see Zbl 0811.62001, has been expanded more than twice in size (from 306 to 719 pages). The distributions covered in this volume are: Extreme Value,
Logistic, Laplace, Beta, Uniform, $F$-, $t$-, noncentral ${\chi }^{2}$-, noncentral $F$-, noncentral $t$-distributions, distributions of correlation coefficients , and lifetime distributions. The
chapter on Quadratic forms has been postponed to the projected revised edition of “Continuous Multivariate Distributions.” The length of all chapters has been substantially increased (about doubled).
As in the first volume of CUD, the first chapters are more or less organised according to definition, genesis and historical remarks, moments, estimation of parameters, relations to other
distributions. The $F$-, $t$-, and ${\chi }^{2}$- distributions are of less interest in modelling, so estimation is almost no topic in these chapters.
Many new examples of applications of distributions are included. Also numerous results relating to approximations are included. The number of references increased almost threefold (to over 2100
items). They are given at the end of the chapters.
In summary, it can be said again: This second edition of “Distributions in statistics” will serve as the primary source for statistical distributions for a long time.
62-00 Reference works (statistics)
62E15 Exact distribution theory in statistics
60-00 Reference works (probability theory)
62E10 Characterization and structure theory of statistical distributions
|
{"url":"http://zbmath.org/?q=an:0821.62001","timestamp":"2014-04-19T07:07:11Z","content_type":null,"content_length":"23739","record_id":"<urn:uuid:a811b332-dd26-46df-9f88-b13b0d50ff39>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dealing with large scale graphs
To a hammer everything looks like a nail, but one great hammer to have in your toolbox is the graph. The ACL anthology alone lists more than 300 results for the query "graph based". Graph-based formalisms allow us to write down solutions in a succinct linear algebra representation. However, implementing such solutions for large problems, or even for small datasets with blown-up graph representations, can be challenging in limited-resource environments. While some go for interesting approximate solutions, an alternative is to pool several limited-resource nodes into a map-reduce cluster and design a parallel algorithm to conquer scale with concurrency. This is easier said than done, since designing some parallel algorithms requires a different perspective on the problem. It is well worth the effort, as the new insights gained will reveal connections between things you already knew.
For instance, in our TextGraphs 2009 paper we started out scaling up Label Propagation but eventually the connection to
became obvious. To me this was a bigger learning moment than getting Label Propagation to work for large graphs. [Preprint Copy]
|
{"url":"http://resnotebook.blogspot.com/2009/07/dealing-with-large-scale-graphs.html","timestamp":"2014-04-16T21:55:21Z","content_type":null,"content_length":"53668","record_id":"<urn:uuid:920f57fe-f0c5-47fe-b940-a28e342d466b>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Yahtzee Card Game
Yahtzee Rules
The objective of YAHTZEE is to get as many points as possible by rolling five dice and getting certain combinations of dice.
In each turn a player may throw the dice up to three times. A player doesn't have to roll all five dice on the second and third throw of a round, he may put as many dice as he wants to the side and
only throw the ones that don't have the numbers he's trying to get. For example, a player throws and gets 1,3,3,4,6. He decides he want to try for the small straight, which is 1,2,3,4,5. So, he puts
1,3,4 to the side and only throws 3 and 6 again, hoping to get 2 and 5.
In this game you click on the dice you want to keep. They will be moved down and will not be thrown the next time you press the 'Roll Dice' button. If you decide after the second throw in a turn that
you don't want to keep the same dice before the third throw then you can click them again and they will move back to the table and be thrown in the third throw.
Upper section combinations
• Ones: Get as many ones as possible.
• Twos: Get as many twos as possible.
• Threes: Get as many threes as possible.
• Fours: Get as many fours as possible.
• Fives: Get as many fives as possible.
• Sixes: Get as many sixes as possible.
For the six combinations above the score for each of them is the sum of dice of the right kind. E.g. if you get 1,3,3,3,5 and you choose Threes you will get 3*3 = 9 points. The sum of all the above
combinations is calculated and if it is 63 or more, the player will get a bonus of 35 points. On average a player needs three of each to reach 63, but it is not required to get three of each exactly,
it is perfectly OK to have five sixes, and zero ones for example, as long as the sum is 63 or more the bonus will be awarded.
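As a small illustration of the upper-section rule just described, here is how the subtotal and the 35-point bonus could be computed for a finished scorecard; this sketch is mine, not taken from the game's code.

def upper_section_total(counts):
    # counts[face] = how many dice of that face were scored in that category,
    # e.g. {1: 0, 2: 0, 3: 3, 4: 3, 5: 4, 6: 5}
    subtotal = sum(face * counts.get(face, 0) for face in range(1, 7))
    bonus = 35 if subtotal >= 63 else 0
    return subtotal + bonus

print(upper_section_total({1: 0, 2: 0, 3: 3, 4: 3, 5: 4, 6: 5}))   # 71 + 35 = 106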
Lower section combinations
• Three of a kind: Get three dice with the same number. Points are the sum of all dice (not just the three of a kind).
• Four of a kind: Get four dice with the same number. Points are the sum of all dice (not just the four of a kind).
• Full house: Get three of a kind and a pair, e.g. 1,1,3,3,3 or 3,3,3,6,6. Scores 25 points.
• Small straight: Get four sequential dice, 1,2,3,4 or 2,3,4,5 or 3,4,5,6. Scores 30 points.
• Large straight: Get five sequential dice, 1,2,3,4,5 or 2,3,4,5,6. Scores 40 points.
• YAHTZEE: Five of a kind. Scores 50 points. In this version of the game there are no YAHTZEE bonuses, so a player can only get YAHTZEE once.
• Chance: Any combination of dice. Points are the sum of all dice.
Strategy tips
Try to get the bonus. Focus on getting good throws with fives and sixes, then it won't matter if you put 0 in the ones or twos. You can always put in 0 for a combination if you don't have it, even if
you have some other combination. E.g. if you had 2,3,4,5,6 and the only things you had left were Ones and Sixes, then it would be better to put 0 in Ones than to put only 6 in Sixes.
Maximum score
The maximum possible score is 375, and you would get that by getting 5 ones (5), 5 twos (10), 5 threes (15), 5 fours (20), 5 fives (25), 5 sixes (30), get the bonus points (35), five sixes (30) for
three of a kind, five sixes (30) for four of a kind, get a full house (25), get a small straight (30), get a large straight (40), five sixes for chance (30), get a YAHTZEE (50). 5 + 10 + 15 + 20 + 25
+ 30 + 35 + 30 + 30 + 25 + 30 + 40 + 30 + 50 = 375!
YAHTZEE vs. YATZY
In Scandinavia this game is called YATZY and has different rules, scoring and combinations. If you're from Denmark, Sweden or Iceland and are playing this game and thinking there's something wrong
with it, then no, there's nothing wrong, they are just different rules. Check Wikipedia for a detailed list of the differences between the two games.
Anyway, I hope you enjoy the game. If you have any questions or comments, send them to simplegames@simplegames.io.
About Yahtzee
This game was made by me. My name is Einar Egilsson and over there on the left is my current Facebook profile picture! In the last couple of years I've made a number of simple online card games,
including Hearts and Spades. After making seven card games and three solitaires I figured it was time to try something else, so I decided to make YAHTZEE (or YATZY as it's known here in Denmark,
where the rules and scoring are also slightly different). It's been a fun game to make, and I'm looking forward to seeing how well Bill will play against human opponents :)
The game is made using Javascript, HTML and CSS, with jQuery and a couple of jQuery plugins used for animations. Since I have no artistic talent whatsoever I used graphics that I found at OpenClipArt
, a great site with free graphics.
Any questions, comments or requests about the game, please send them to simplegames@simplegames.io. Hope you enjoy the game!
This is version 215 of Yahtzee
|
{"url":"http://www.yahtzee-game.com/","timestamp":"2014-04-17T07:22:46Z","content_type":null,"content_length":"16434","record_id":"<urn:uuid:67270f3c-6a48-486b-b9d1-584163a815e1>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00299-ip-10-147-4-33.ec2.internal.warc.gz"}
|
NEED help please.
March 30th 2006, 08:04 AM #1
Mar 2006
NEED help please.
A question I have to answer is from the given equation
2x^2 - 3x - 14
----------------- =F(x)
x^2 - 2x -8
I need to find the vertical asymptote,
horizontal asymptote, slant asymptote, any holes and the x and y intercepts.
So far I have divided and come up with x^2 - x - 24
I am not sure if I am correct in doing that or even close; any help would be greatly appreciated
Last edited by mazdatuner; March 30th 2006 at 08:09 AM.
A question I have to answer is from the given equasion
2x^2 - 3x - 14
----------------- =F(x)
x^2 - 2x -8
I need to find the Vertical asymtope
Horozontal asymtope, slant asymtope , any holes and the x and y intercepts.
So far I have divided and come up with x^2 - x - 24
I am not sure If I am correct in doing that or even close any help would be greatly apreciated
Umm...no, that's not what you get when you divide them.
Horizontal asymptotes:
You want to take $\lim_{x \to \infty}\frac{2x^2-3x-14}{x^2-2x-8}$
Note that as x blows up, the most important terms are the x^2. So:
$\lim_{x \to \infty}\frac{2x^2-3x-14}{x^2-2x-8} \approx \frac{2x^2}{x^2} = 2$
(Or you can just plug a large number, say x=10000, and see what you get.)
So we have a horizontal asymptote at y=2 on the +x side.
$\lim_{x \to -\infty}\frac{2x^2-3x-14}{x^2-2x-8} \approx \frac{2x^2}{x^2} = 2$
(Squaring takes away the "-" sign, so the expressions are the same.)
So we have the same horizontal asymptote on both sides of the y-axis: y=2.
Vertical asymptotes:
It's a good idea to factor your expression here before we figure these. I'll show you why in a minute.
$\frac{2x^2-3x-14}{x^2-2x-8} = \frac{(2x-7)(x+2)}{(x-4)(x+2)}$
Generally you find vertical asymptotes where the denominator of the expression goes to zero. This means that x = 4 is a vertical asymptote. Normally x = -2 would be another, but the x+2 in the
denominator is cancelled by the x+2 in the numerator. That's why you ALWAYS want to factor these things.
Slant asymptotes:
You get a slant asymptote only when the degree of the numerator is one more than the degree of the denominator. In this case they are both quadratics, so we don't have any slant asymptotes.
Holes:
Not sure what you mean by this. Perhaps we are speaking of x = -2? This is a point in the domain that is not allowed.
x intercepts:
This is where F(x) is zero. This will happen when the numerator goes to zero (assuming that none of these x values are where we have a zero in the denominator!) The only non-cancelling term in
the numerator is the 2x-7, so if we set that to zero we find an x intercept at the point (7/2,0).
y intercepts:
These are easy. Just set x = 0 in your function:
$F(0) = \frac{-14}{-8} = \frac{7}{4}$
So we have a y intercept at (0, 7/4).
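Not part of the original thread, but all of these results are easy to double-check with a computer algebra system; a short sympy sketch:

import sympy as sp

x = sp.symbols('x')
F = (2*x**2 - 3*x - 14) / (x**2 - 2*x - 8)

print(sp.factor(F))                          # (2*x - 7)/(x - 4): the common factor (x + 2) cancels
print(sp.limit(F, x, sp.oo))                 # 2, the horizontal asymptote y = 2
print(sp.solve(sp.denom(sp.cancel(F)), x))   # [4], the vertical asymptote x = 4
print(sp.solve(sp.numer(sp.cancel(F)), x))   # [7/2], the x intercept
print(F.subs(x, 0))                          # 7/4, the y intercept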
|
{"url":"http://mathhelpforum.com/pre-calculus/2399-need-help-please.html","timestamp":"2014-04-16T20:45:17Z","content_type":null,"content_length":"32715","record_id":"<urn:uuid:15903d24-6582-44da-bde3-26e6c829afc4>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Find the center the length of the axes and eccentricity of ellipse.
Find the center, the length of the axes, and the eccentricity of the ellipse: $2x^2+3y^2-4x-12y+13=0$
Hi - Complete the square for x:
$2x^2 - 4x = 2(x^2 - 2x) = 2((x-1)^2-1) = 2(x-1)^2-2$
and for y:
$3y^2-12y = 3(y^2-4y) = 3((y-2)^2-4) = 3(y-2)^2-12$
So the equation can be re-written:
$2(x-1)^2-2+3(y-2)^2-12+13=0$
i.e. $2(x-1)^2 + 3(y-2)^2=1$
Substitute $u=x-1$ and $v=y-2$: $2u^2+3v^2=1$
Re-arrange: $\frac{u^2}{(\frac{1}{\sqrt{2}})^2} + \frac{v^2}{(\frac{1}{\sqrt{3}})^2}=1$
OK from here?
|
{"url":"http://mathhelpforum.com/pre-calculus/65594-find-center-length-axes-eccentricity-ellipse.html","timestamp":"2014-04-24T16:30:44Z","content_type":null,"content_length":"41852","record_id":"<urn:uuid:f95c8c32-d17b-4822-a5a5-db1fde52a007>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00625-ip-10-147-4-33.ec2.internal.warc.gz"}
|
st: retransformation of ln(Y) coefficient and CI in regression
st: retransformation of ln(Y) coefficient and CI in regression
From "Steve Rothenberg" <drlead@prodigy.net.mx>
To <statalist@hsphsun2.harvard.edu>
Subject st: retransformation of ln(Y) coefficient and CI in regression
Date Sun, 5 Jun 2011 10:26:37 -0500
I have a simple model with a natural log dependent variable and a three
level factor predictor. I've used
. regress lnY i.factor, vce(robust)
to obtain estimates in the natural log metric. I want to be able to display
the results in a graph as means and 95% CI for each level of the factor with
retransformed units in the original Y metric.
I've also calculated geometric means and 95% CI for each level of the factor
variable using
. ameans Y if factor==x
simply as a check, though the 95% CI is not adjusted for the vce(robust)
standard error as calculated by the -regress- model.
Using naïve transformation (i.e. ignoring retransformation bias) with
. display exp(coefficient)
from the output of -regress- for each level of the predictor, with the
classic formulation:
Level 0 = exp(constant)
Level 1 = exp(constant+coef(1))
Level 2 = exp(constant+coef(2))
the series of retransformations from the -regress- command is the same as
the geometric means from the series of -ameans- commands.
When I try to do the same with the lower and upper 95% CI (substituting the
limits of the 95% CI for the coefficients) from the -regress- command,
however, the retransformed CI is much larger than calculated from the
-ameans- command, much more so than the differences in standard errors from
regress with and without the vce(robust) option would indicate.
I've discovered -levpredict- for unbiased retransformation of log dependent
variables in regression-type estimations by Christopher Baum in SSC but it
only outputs the bias-corrected means from the preceding -regress-. To be
sure there is some small bias in the first or second decimal place of the
mean factor levels compared to naïve retransformation.
Am I doing something wrong by treating the 95% CI of each level of the
factor variable in the same way I treat the coefficients without correcting
for retransformation bias? Is there any way I can obtain either the
retransformed CI or the bias-corrected retransformed CI for the different
levels of the factor variable in the original metric of Y?
I'd like to retain the robust SE from the above estimation as there is
considerable difference in variance in each level of the factor variable.
Steve Rothenberg
National Institute of Public Health
Cuernavaca, Morelos, Mexico
Stata/MP 11.2 for Windows (32-bit)
Born 30 Mar 2011
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2011-06/msg00173.html","timestamp":"2014-04-16T07:16:22Z","content_type":null,"content_length":"9932","record_id":"<urn:uuid:a84a91c8-2e6d-40db-a849-97f98aa8bf7c>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Maspeth Precalculus Tutor
Find a Maspeth Precalculus Tutor
...Whether coaching a student to the Intel ISEF (2014) or to first rank in their high school class, I advocate a personalized educational style: first identifying where a specific student's
strengths and weaknesses lie, then calibrating my approach accordingly. Sound interesting? I coach students ...
32 Subjects: including precalculus, reading, calculus, physics
...At Harvard, I concentrated in Visual and Environmental Studies and Literature. I'm currently an MFA candidate at the Graduate Film Program at NYU Tisch, and my short films have screened at
Sundance, the Brooklyn Academy of Music (BAM), IFC and Rooftop Films, among many others. Much of my work has been in prepping students to take the SAT, ACT, ISEE and SSAT.
36 Subjects: including precalculus, English, chemistry, calculus
...I'm now a PhD student at Columbia in Astronomy (have completed two Masters by now) and will be done in a year. I have a lot of experience tutoring physics and math at all levels. I have been
tutoring since high school so I have more than 10 years of experience, having tutored students of all ages, starting from elementary school all the way to college-level.
11 Subjects: including precalculus, Spanish, calculus, physics
...I work one to one to find the source of your/your child's difficulties that go beyond finding the answers to the homework problems (of course, I can help with that as well.) I cater to each
student's needs and explore the best possible method to get ideas and concepts across. About myself: Thr...
6 Subjects: including precalculus, algebra 1, algebra 2, prealgebra
...I am experienced in teaching algebra, geometry, calculus, biology, chemistry, physics, writing, public speaking and presentation, as well as all elementary subjects. I have tutored pupils
ranging from elementary age through college. My number one priority is the learning experience of my pupil.
43 Subjects: including precalculus, reading, writing, English
|
{"url":"http://www.purplemath.com/maspeth_precalculus_tutors.php","timestamp":"2014-04-19T20:04:39Z","content_type":null,"content_length":"24190","record_id":"<urn:uuid:6d7e8149-fbad-43ec-a405-a99323f3341b>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
|
chat possible?
Re: chat possible?
Hi Mathisfun;
From Real Dummy to Real Member in only nine days, woof! Love being here. Thanks.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=110536","timestamp":"2014-04-20T06:15:29Z","content_type":null,"content_length":"12972","record_id":"<urn:uuid:346bd89c-301c-4e7c-8868-dd5cbe0a60ad>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A number problem.
Re: A number problem.
That did not make sense.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: A number problem.
Which part?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: A number problem.
It does not seem like a proper sentence.
Anyway, the five-thread conversation is really making me sweat. I am out of practice.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: A number problem.
Hmmm, you are past your bedtime, young fellow.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: A number problem.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: A number problem.
Early to bed, early to rise, makes a man healthy, wealthy and wise.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: A number problem.
I guess I am having problem with precision.
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: A number problem.
Hold on, I am eating. I will be checking up on this every few minutes.
What are you getting for
When you substitute a0 with 1 to 9?
If that is where you are having problems then do you want me to do one so you can see how to do it?
Last edited by bobbym (2013-03-04 14:54:28)
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: A number problem.
Actually I am trying to solve this equation:
(10^57-6)*a0 congruent to 0 (mod 59)
And to me it seems that every value of a0 is satisfying the equation
Last edited by Agnishom (2013-03-04 15:14:19)
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: A number problem.
That is true 1 to 9 will all work but they will not all solve the problem.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: A number problem.
Why should they all work? But not satisfy the problem????????????????????????
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: A number problem.
There are extra conditions on the problem that just multiplying by a0 does not handle.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: A number problem.
What are they?
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: A number problem.
Think of it like an extraneous root. Just because algebra gets an answer that does not mean it is a solution. Sometimes extraneous answers are produced. Checking by plugging in is always necessary.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: A number problem.
Are the two answers as stefy said to plug a0 as 6 and 9
Is there a 116 digit solution too?
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: A number problem.
Before we try to answer that, there are two more solutions. Can you find them?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: A number problem.
Any hint?
Gotta go to school
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: A number problem.
I will show you how to check them, you will not need a hint. Have a good day at school and study hard.
Here is the number we find for a0 = 6:
Take the a0 and put it on the back of the answer. You get,
Now does 6 times that equal the original number with the unit digit moved to the front?
That does equal the first number we found with the unit digit moved to the front. We have a solution! Check all the other possibles in the same way. You will find 4 solutions.
Last edited by bobbym (2013-03-04 17:23:15)
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: A number problem.
Agnishom wrote:
Why should they all work? But not satisfy the problem????????????????????????
I will tell you once you solve the problem.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: A number problem.
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: A number problem.
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: A number problem.
That makes it four. Anonymnistefy, please explain now
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: A number problem.
Hi Agnishom;
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: A number problem.
Are you kidding me?
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: A number problem.
That first one is very interesting but
For the second one please check again.
Last edited by bobbym (2013-03-04 22:22:59)
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=256196","timestamp":"2014-04-20T01:05:11Z","content_type":null,"content_length":"43962","record_id":"<urn:uuid:29860b6f-b5c0-49a7-853e-398e5d7dcf05>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Holding all other variables constant, an increase in the interest... (1 Answer) | Transtutors
Holding all other variables constant, an increase in the interest rate will cause ________ to decrease.
a. Future values
b. Annuity payments
c. Present values
d. Growth rates
Posted On: Apr 11 2011 10:15 AM
Tags: Accounting, Financial Accounting, Accounting Concepts and Principles, College
1 Approved Answer
There is an inverse relationship between interest rates and present values. For example a bond issued at 98 with a coupon rate of 10% will have an interest rate higher than 10%. The inverse applies
to debt issued at a premium.
a. future values - this will be the same whether or not the interest rate is changed
b. annuity payments - these will increase with an increase in interest
c. present values - inverse relationship with interest
d. growth rates - also not correct.
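To make the inverse relationship concrete, here is a small illustrative check in Python (the numbers and the function are mine, not part of the original answer): the present value of a single future cash flow falls as the discount rate rises.

def present_value(future_value, rate, periods):
    # PV = FV / (1 + r)^n
    return future_value / (1 + rate) ** periods

for r in (0.05, 0.08, 0.10):
    print(r, round(present_value(1000, r, 10), 2))
# 0.05 -> 613.91, 0.08 -> 463.19, 0.10 -> 385.54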
|
{"url":"http://www.transtutors.com/questions/tts-variables-194154.htm","timestamp":"2014-04-16T04:11:49Z","content_type":null,"content_length":"77002","record_id":"<urn:uuid:347fdd43-79d1-49d5-aba2-ce6a3fa89aec>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
|
artificial intelligence
1) Suppose you are searching for a girl’s name written using only the three letters L, O and A.
1a) How many strings of four or fewer letters are there? (1)
1b) In the above possibilities, are you searching in a depth first or breadth first way? (1)
1c) What are the next three possible names you would write down starting with LO? (2)
1d) How many possibilities will you write down before getting to the name LOLA, show it by
implementation? (5)
1e) Are you guaranteed to find all girls names with letters L, O and A in this manner? (1)
If you think you are a good enough programmer to solve this problem, then solve it and prove yourself.
Do your own homework.
Topic archived. No new replies allowed.
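As a rough illustration only (not a worked answer to the assignment), here is one way to enumerate the candidate names in Python. The lexicographic L-O-A ordering and the depth-first generator below are my own assumptions, since the exercise does not fix an ordering.

LETTERS = "LOA"

# (1a) number of strings of four or fewer letters: 3 + 9 + 27 + 81 = 120
total = sum(len(LETTERS) ** n for n in range(1, 5))
print(total)

def depth_first(prefix="", max_len=4):
    # one possible depth-first enumeration, taking the letters in the order L, O, A
    for ch in LETTERS:
        name = prefix + ch
        yield name
        if len(name) < max_len:
            yield from depth_first(name, max_len)

# (1d) count how many names are written down up to and including "LOLA"
count = 0
for name in depth_first():
    count += 1
    if name == "LOLA":
        break
print(count)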
|
{"url":"http://www.cplusplus.com/forum/general/46688/","timestamp":"2014-04-19T10:04:21Z","content_type":null,"content_length":"11630","record_id":"<urn:uuid:92ec32bc-2c82-4d7b-a5e2-c93bcffcab7f>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Specific Heat
The Specific Heat is the amount of heat required to change a unit mass of a substance by one degree in temperature. The heat supplied to a unit mass can be expressed as
dQ = m c dt (1)
dQ = heat supplied (kJ, Btu)
m = unit mass (kg, lb)
c = specific heat (kJ/kg^oC, kJ/kg^oK, Btu/lb^oF)
dt = temperature change (K, ^oC, ^oF)
Expressing Specific Heat using (1)
c = dQ / m dt (1b)
Converting between Common Units
• 1 Btu/lb[m]^oF = 4186.8 J/kg K = 1 kcal/kg^oC
Example - Heating Aluminum
2 kg of aluminum is heated from 20 ^oC to 100 ^oC. Specific heat of aluminum is 0.91 kJ/kg^oC and the heat required can be calculated as
dQ = (2 kg) (0.91 kJ/kg^oC) ((100 ^oC) - (20 ^oC))
= 145.6 (kJ)
Example - Heating Water
One litre of water is heated from 0 ^oC to boiling 100 ^oC. Specific heat of water is 4.19 kJ/kg^oC and the heat required can be calculated as
dQ = (1 litre) (1 kg/litre) (4.19 kJ/kg^oC) ((100 ^oC) - (0 ^oC))
= 419 (kJ)
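A minimal Python sketch of the same formula, dQ = m c dt, reproducing the two worked examples above (the function name and structure are illustrative only):

def heat_required(mass_kg, specific_heat_kj_per_kg_c, t_start_c, t_end_c):
    # dQ = m * c * dt, returned in kJ
    return mass_kg * specific_heat_kj_per_kg_c * (t_end_c - t_start_c)

print(round(heat_required(2.0, 0.91, 20.0, 100.0), 1))  # aluminum example: 145.6 kJ
print(round(heat_required(1.0, 4.19, 0.0, 100.0), 1))   # water example: 419.0 kJ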
Specific Heat Gases
There are two definitions of Specific Heat for vapors and gases:
c[p] = (δh / δT)[p] - Specific Heat at constant pressure (kJ/kg^oC)
c[v] = (δu / δT)[v] - Specific Heat at constant volume, u being the internal energy per unit mass (kJ/kg^oC)
Gas Constant
The gas constant can be expressed as
R = c[p] - c[v] (2)
R = Gas Constant
Ratio of Specific Heat
The Ratio of Specific Heat is expressed
k = c[p] / c[v] (3)
Related Topics
Related Documents
|
{"url":"http://www.engineeringtoolbox.com/specific-heat-capacity-d_339.html","timestamp":"2014-04-21T15:49:44Z","content_type":null,"content_length":"26112","record_id":"<urn:uuid:7b2dc4db-ebbc-476b-8566-b99c740a9f5f>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
|
28.3 Point Particles and Delta Functions
Coulomb's law tells us about the electric field of a point charge. This suggests the question: what is the charge density of a point charge located at point P?
This question can be answered in terms of integrals over volumes: if you integrate this density over a volume that does not contain P you get 0. If the volume contains P you get the amount of charge
located there.
This means essentially that the density is 0 except at the point P. But the contribution at that point must be large enough to make a significant contribution to the integral.
This is not possible for a bounded function or for any function with our definition of the integral.
However, we do not really know that point particles are such, and have no way to distinguish experimentally between a point particle and one that has extended shape with radius on the order of 10^-100 centimeters.
The integration we perform over density to get the total mass or charge in a volume V is a volume integral, which, when expressed in terms of ordinary one dimensional integrals requires three one
dimensional integrals.
A phenomenon similar to the density of a point particle occurs in one dimension, where it is called a "delta function". The density of a point particle can actually be described as the product of
delta functions in variables x, y and z. We therefore turn the discussion to the one dimensional situation.
Before discussing it further, we address the questions: why do we care about such matters? And why now?
And here is the answer: Coulomb's law, describes the electric field that accompanies a point particle. We can to use this fact to determine the electric field produced by any charge distribution
characterized by charge density
We will soon see that in doing this we are actually solving a linear differential equation with an inhomogeneous term
The method that we blunder into here for solving this equation can be characterized as follows: we first find the solution for a delta function inhomogeneous term for an arbitrary point P (here this
is Coulomb's law independent of P). Then we exploit this solution (by integrating it), to find the solution for a general inhomogeneous term.
The solution we find is one which obeys the differential equation with appropriate zero boundary conditions. It is in general called a Green's Function for the given differential equation and those
boundary conditions, since Green invented this approach. (Green was, by the way, a baker, who had a keen interest in mathematics. He was self taught in science and mathematics.) Finding it allows us
to solve the same equation with an arbitrary inhomogeneous term by integration.
Here is another way to look at the same idea: you want to find the response of a given physical system to an arbitrary external stimulus whose response to a sum of stimuli is the sum of its response
to each. To do this you find the response to single point stimuli at each possible point. You can then find the response to the arbitrary stimulus by (summing) integrating the product of that
stimulus with the response function.
This very powerful method for solving general inhomogeneous linear differential equations, by solving such equations with delta function inhomogeneity first, means that we want to use delta functions.
And we want to use them here to generalize Coulomb's law to determine the electric field from an arbitrary charge distribution.
The one dimensional delta function can be described as the derivative of a step function, so we begin by defining step functions.
There are two standard step functions in common use.
The first, denoted by
The second, often written as
These are related by
Obviously, as defined, neither of these functions has a derivative at x = 0.
Yet either one differs only slightly from simple functions that have derivatives everywhere. In fact there is no real way to tell the difference between the two.
In consequence, we may use delta functions while pretending to be using the derivative of one of these other functions.
Mathematicians were at first highly suspicious of the delta function. However they now accept it as what is called a "distribution" though not as a function.
A function that for all practical purposes is indistinguishable from
The second function is, apart from a constant, what is called the error function. Its derivative is
The one defined in terms of exponents and error functions gets much smaller away from zero than it gets big near zero, which is a bit nicer than the arctangent.
The nice properties of the delta function are that it is zero for argument other than 0, and its integral over an interval containing 0 is 1 and these properties are more or less shared by these
functions except at unobservable arguments.
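As a concrete illustration (this particular smoothing is a standard choice, not necessarily the exact one pictured in the original notes), one can take

$$H_\varepsilon(x) = \frac{1}{2} + \frac{1}{\pi}\arctan\frac{x}{\varepsilon}, \qquad \delta_\varepsilon(x) = H_\varepsilon'(x) = \frac{1}{\pi}\,\frac{\varepsilon}{x^2+\varepsilon^2}, \qquad \int_{-\infty}^{\infty}\delta_\varepsilon(x)\,dx = 1 .$$

For small $\varepsilon$, $\delta_\varepsilon$ is negligible away from $0$, its integral over any interval containing $0$ is essentially $1$, and $H_\varepsilon$ is indistinguishable in practice from a step function.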
As a function, the delta function does not make much sense at argument 0, unless it is integrated over; its integral is a step function, and that is perfectly well defined.
When you see a delta function outside an integral, which you mostly will not, you can think of it as one of the two functions mentioned above, and not lose sleep over it.
The density of a point particle can then be described as the product of delta functions in each of the three variables x, y and z. For a particle of charge q located at the point (x0, y0, z0) it can be written as q δ(x - x0) δ(y - y0) δ(z - z0).
|
{"url":"http://ocw.mit.edu/ans7870/18/18.013a/textbook/HTML/chapter28/section03.html","timestamp":"2014-04-20T01:19:16Z","content_type":null,"content_length":"9499","record_id":"<urn:uuid:f0c19d7b-9a86-4df0-bf37-6c77082bd8df>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Winston, GA Precalculus Tutor
Find a Winston, GA Precalculus Tutor
...But "understanding" is my goal -- I don't focus on quick tricks, but on lasting and deep learning. I hold myself to a high standard and ask for feedback from students and parents. I never bill
for a tutoring session if the student or parent is not completely satisfied.
8 Subjects: including precalculus, statistics, trigonometry, algebra 2
...Tutored fellow MBA students in Accounting. Awarded Mason Gold Standard Award for contributing to the academic achievement of my peers. Helped high school student with the math portion of SAT.
28 Subjects: including precalculus, calculus, physics, economics
...I always focus my students' attention on striving for academic excellence. I use my own academic achievements (ranked 4th in my high school graduating class, being the recipient of a full
4-year academic college scholarship, as well as serving as president of two honor societies while in college) to motivate my students. I truly believe that every student is capable of learning.
57 Subjects: including precalculus, reading, chemistry, writing
I am a certified math teacher in the Georgia and New York. I have 11 years experience teaching 6th-12th grade mathematics. I teach and tutor because I am passionate about teaching and learning.
14 Subjects: including precalculus, reading, geometry, biology
...I'm very friendly, personable, relaxed and I'm very good at observing and determining what you're struggling with. My style is to first identify the root of the problem(s) and then develop
exercises to help strengthen weaknesses and improve performance. I pride myself on my ability to adapt to any learning style.
12 Subjects: including precalculus, calculus, GRE, geometry
Related Winston, GA Tutors
Winston, GA Accounting Tutors
Winston, GA ACT Tutors
Winston, GA Algebra Tutors
Winston, GA Algebra 2 Tutors
Winston, GA Calculus Tutors
Winston, GA Geometry Tutors
Winston, GA Math Tutors
Winston, GA Prealgebra Tutors
Winston, GA Precalculus Tutors
Winston, GA SAT Tutors
Winston, GA SAT Math Tutors
Winston, GA Science Tutors
Winston, GA Statistics Tutors
Winston, GA Trigonometry Tutors
Nearby Cities With precalculus Tutor
Atlanta Ndc, GA precalculus Tutors
Bowdon Junction precalculus Tutors
Braswell, GA precalculus Tutors
Cedartown precalculus Tutors
Chattahoochee Hills, GA precalculus Tutors
Clarkdale, GA precalculus Tutors
Ephesus, GA precalculus Tutors
Fairburn, GA precalculus Tutors
Felton, GA precalculus Tutors
Mount Zion, GA precalculus Tutors
Palmetto, GA precalculus Tutors
Red Oak, GA precalculus Tutors
Roopville precalculus Tutors
Sargent, GA precalculus Tutors
Waco, GA precalculus Tutors
|
{"url":"http://www.purplemath.com/Winston_GA_Precalculus_tutors.php","timestamp":"2014-04-17T13:21:07Z","content_type":null,"content_length":"24058","record_id":"<urn:uuid:dad6d075-dbb3-474e-9a56-7323a2d775d6>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Teacher Materials:
• Overhead projector and marker
• 12 counting blocks
Student Materials:
Explain that today you are going to learn to add. To add means to put two or more numbers together to make a bigger number. If you have two students (ask two students to stand up) and you add two
more students (have two more students stand up) then you have four students. You added 2 and 2 and got a bigger number 4.
Put a + sign on the overhead projector. Explain that this is a plus sign. Whenever you see this sign it means you are going to add numbers.
Put 3 number cubes on the whiteboard. Have students help you count the number cubes. Add 4 more number cubes and count them. Then have the students help you count how many number cubes there are in all.
Repeat with different addition problems like 2 + 2, 4 + 1, 5 + 5, 6 + 3, etc. until you feel the concept is understood.
Have students use their own number cubes to mimic the number cubes on the overhead and solve each problem together.
Use the worksheet to evaluate whether students understand the concept.
More Basic Math Worksheets
More Math Lesson Plans, Worksheets, and Activities
For more teaching material, lesson plans, lessons, and worksheets please go back to the InstructorWeb home page.
|
{"url":"http://www.instructorweb.com/lesson/computeaddition12.asp","timestamp":"2014-04-18T08:03:31Z","content_type":null,"content_length":"21241","record_id":"<urn:uuid:d00d9fca-6d70-46c0-98bc-11eb26254e7b>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Burtonsville ACT Tutor
Find a Burtonsville ACT Tutor
...As a student of mechanical engineering I became even more familiar with Calculus, scoring A's in all calculus subjects, including Calculus 3 and Differential Equations. As a Mechanical
Engineer, I am extremely familiar with all types of Mechanical Physics problems. I am also familiar with the basics of EM Physics.
32 Subjects: including ACT Math, reading, algebra 2, calculus
...Often, even minor changes in their habits or approaches to schoolwork helps students get more out of their study time. I can help your student understand what their teachers are asking them to
do, and how to create and execute plans to work smarter, as well as harder. Many students find they ha...
37 Subjects: including ACT Math, chemistry, English, reading
I recently graduated from UMD with a Master's in Electrical Engineering. I scored a 790/740 Math/Verbal on my SAT's and went through my entire high-school and college schooling without getting a
single B, regardless of the subject. I did this through perfecting a system of self-learning and studyi...
15 Subjects: including ACT Math, calculus, physics, GRE
After graduating from Indiana University of Pennsylvania with a Bachelor's of Science in Business Administration with a concentration in Accounting, I have been working as an accountant with over
25 years of professional experience under my belt. My volunteer experience includes many years at my ch...
13 Subjects: including ACT Math, geometry, accounting, algebra 1
Struggling with test prep or high school courses? Confused about picking the right tutor? I have almost a decade of experience in helping students achieve their personal best on standardized
41 Subjects: including ACT Math, reading, English, writing
|
{"url":"http://www.purplemath.com/Burtonsville_ACT_tutors.php","timestamp":"2014-04-19T05:20:52Z","content_type":null,"content_length":"23767","record_id":"<urn:uuid:afa881f6-8824-4917-8c7b-6babb66d409e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Results 1 - 10 of 11
, 1991
"... Memory Management Units (MMUs) are traditionally used by operating systems to implement disk-paged virtual memory. Some operating systems allow user programs to specify the protection level
(inaccessible, read-only, read-write) of pages, and allow user programs to handle protection violations, but ..."
Cited by 173 (2 self)
Add to MetaCart
Memory Management Units (MMUs) are traditionally used by operating systems to implement disk-paged virtual memory. Some operating systems allow user programs to specify the protection level
(inaccessible, read-only, read-write) of pages, and allow user programs to handle protection violations, but these mechanisms are not always robust, efficient, or well-matched to the needs of
- SIGCSE Bulletin , 2007
"... This work analyzes the advantages and disadvantages of using the novice programming environment Alice in the CS0 classroom. We consider both general aspects as well as specifics drawn from the
authors ’ experiences using Alice in the classroom over the course of the last academic year. ..."
Cited by 8 (0 self)
Add to MetaCart
This work analyzes the advantages and disadvantages of using the novice programming environment Alice in the CS0 classroom. We consider both general aspects as well as specifics drawn from the
authors ’ experiences using Alice in the classroom over the course of the last academic year.
, 1982
"... OTA Background Papers are documents containing information that supplements formal OTA assessments or is an outcome of internal exploratory planning and evaluation. The material is usually not
of immediate policy interest such as is contained in an OTA Report or Technical Memorandum, nor does it pre ..."
Add to MetaCart
OTA Background Papers are documents containing information that supplements formal OTA assessments or is an outcome of internal exploratory planning and evaluation. The material is usually not of
immediate policy interest such as is contained in an OTA Report or Technical Memorandum, nor does it present options for Congress to consider.
, 1997
"... The Bethe-Salpeter formalism is used to study two-body bound states within a scalar theory: two scalar fields interacting via the exchange of a third massless scalar field. The Schwinger-Dyson
equation is derived using functional and diagrammatic techniques, and the Bethe-Salpeter equation is obtain ..."
Add to MetaCart
The Bethe-Salpeter formalism is used to study two-body bound states within a scalar theory: two scalar fields interacting via the exchange of a third massless scalar field. The Schwinger-Dyson
equation is derived using functional and diagrammatic techniques, and the Bethe-Salpeter equation is obtained in an analogous way, showing it to be a two-particle generalization of the
Schwinger-Dyson equation. We also present a numerical method for solving the Bethe-Salpeter equation without three-dimensional reduction. The ground and first excited state masses and wavefunctions
are computed within the ladder approximation and space-like form factors are calculated. The authors: Mike, Melissa, and Mike. pichowsk@theory.phy.anl.gov; Physics Division, Argonne National
Laboratory, Argonne, IL 60439-4843 y mlk@curie.unh.edu; Physics Department, University of New Hampshire, Durham, NH 03824 z strickla@phy.duke.edu; Duke University, Durham, NC 1 Introduction Many of
the bound systems that oc...
"... As always in q-theory, (X; Q)n will stand for the product (1 − X)(1 − QX)...(1 − Q n−1 X), and when the ”base ” Q is q, we will abbreviate (X; q)n to (X)n. For any Laurent polynomial f in
x1,..., xn, CT (f) denotes the coefficient of x 0 1..x 0 n. Throughout this paper t: = q a, s = q b, u = q c. ..."
Add to MetaCart
As always in q-theory, (X; Q)_n will stand for the product (1 − X)(1 − QX)...(1 − Q^{n−1} X), and when the "base" Q is q, we will abbreviate (X; q)_n to (X)_n. For any Laurent polynomial f in x_1,..., x_n, CT(f) denotes the coefficient of x_1^0...x_n^0. Throughout this paper t := q^a, s := q^b, u := q^c.
"... The changing function of the modern lab environment results in additional challenges requiring flexible task-specific solutions to minimize environmental impact, protect operator safety, and
optimize ..."
Add to MetaCart
The changing function of the modern lab environment results in additional challenges requiring flexible task-specific solutions to minimize environmental impact, protect operator safety, and optimize
"... energy policies for sustainable development ..."
, 2008
"... Measurements on a multilayer two-dimensional electron system (2DES) near Landau level filling ν=1 reveal the disappearance of the nuclear spin contribution to the heat capacity as the ratio ˜g
between the Zeeman and Coulomb energies exceeds a critical value ˜gc≈0.04. This disappearance suggests the ..."
Add to MetaCart
Measurements on a multilayer two-dimensional electron system (2DES) near Landau level filling ν=1 reveal the disappearance of the nuclear spin contribution to the heat capacity as the ratio ˜g
between the Zeeman and Coulomb energies exceeds a critical value ˜gc≈0.04. This disappearance suggests the vanishing of the Skyrmion-mediated coupling between the lattice and the nuclear spins as the
spin excitations of the 2DES make a transition from Skyrmions to single spin-flips above ˜gc. Our experimental ˜gc is smaller than the calculated ˜gc=0.054 for an ideal 2DES; we discuss possible
origins of this discrepancy. PACS numbers: 73.20.Dx, 73.40.Hm, 65.40.-f Typeset using REVTEX 1 The ground state and spin excitations of a two-dimensional electron system (2DES) near Landau level (LL)
filling ν=1 have attracted much recent interest [1–9]. At this filling, the Coulomb exchange energy plays a dominant role, leading to a substantially larger quantum Hall effect (QHE) excitation gap
than the expected single-particle Zeeman splitting [1].
, 1986
"... This note proposes an experimental test of a single accelerating gap of a pulsed linac. This accelerating gap consists of two parallel coaxial disks; an electromagnetic wave injected uniformly
at the outer periphery of the disks (initial amplitude Vi) grows while traveling towards the center to GVo, ..."
Add to MetaCart
This note proposes an experimental test of a single accelerating gap of a pulsed linac. This accelerating gap consists of two parallel coaxial disks; an electromagnetic wave injected uniformly at the
outer periphery of the disks (initial amplitude Vi) grows while traveling towards the center to GVo, G being the
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1193258","timestamp":"2014-04-20T22:03:08Z","content_type":null,"content_length":"30836","record_id":"<urn:uuid:e56fda97-97b8-452f-807c-b4f7321ff566>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How much is alpha?
Hello ! We have a square - find alpha ?
Last edited by yehoram; June 19th 2011 at 04:34 AM.
There is not enough information to answer. There must be more given. What else do you know?
Rotate triangle AEB about B, until A rests on C. Label the new position of "E" as "G". Draw a line from E to G. Angle EBG is 90 degrees. Angles BEG and BGE are 45 degrees. Use Pythagoras' Theorem to
calculate |EG|. Triangle ECG now has all 3 sides (if you let |AE|=1, |EB|=2, |EC|=3) and you rearrange the law of cosines to calculate "alpha-45" degrees and therefore "alpha".
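Carrying that out under the lengths assumed above (the values |AE| = 1, |EB| = 2, |EC| = 3 are the poster's assumption, not given in the thread, and alpha is taken to be angle AEB): the right isosceles triangle EBG with legs of length 2 gives |EG| = 2*sqrt(2). In triangle ECG the law of cosines then gives cos(EGC) = (|EG|^2 + |GC|^2 - |EC|^2) / (2 |EG| |GC|) = (8 + 1 - 9) / (4*sqrt(2)) = 0, so angle EGC = 90 degrees. Since angle EGC = alpha - 45, this yields alpha = 135 degrees.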
|
{"url":"http://mathhelpforum.com/geometry/183276-how-much-alpha.html","timestamp":"2014-04-16T08:41:00Z","content_type":null,"content_length":"41666","record_id":"<urn:uuid:9081ef4d-99c4-412e-b2ba-cc6e6c610cfd>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
|
finite group
December 26th 2009, 07:43 AM #1
Junior Member
Jul 2009
finite group
Let G be a finite group, and let S and T be 2 subsets of G such that G does not equal ST. Show that $<br /> \left| G \right| \geqslant \left| S \right| + \left| T \right|<br /> <br />$
Could you help me a little bit more? i still can't do this problem >.<
Since $G \neq ST$, we may choose $g\in G$ with $g\notin ST$. There are |S| elements in S. There are |T| elements in $T^{-1}$, and also in its coset $gT^{-1}$. Also, the sets S and $gT^{-1}$ are disjoint: for suppose that an element $s\in S$ is also in $gT^{-1}$. Then $s = gt^{-1}$ for some $t\in T$. But that implies that $st=g$, which contradicts the choice of g. Therefore there are |S|+|T| distinct elements in the set $S\cup gT^{-1}$, and so $|G| \geqslant |S|+|T|$.
ahh i see, thank you!
December 26th 2009, 12:55 PM #2
December 31st 2009, 08:33 AM #3
Junior Member
Jul 2009
December 31st 2009, 08:58 AM #4
December 31st 2009, 10:49 AM #5
Junior Member
Jul 2009
|
{"url":"http://mathhelpforum.com/advanced-algebra/121642-finite-group.html","timestamp":"2014-04-18T07:49:16Z","content_type":null,"content_length":"43821","record_id":"<urn:uuid:092770a7-e614-4641-a103-79889deb53e6>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A codes provide many powerful features. These include arithmetic, relational, logical, and concatenation operators, the ability to reference fields by name or FMC, the capability to use other data
definition records as functions that return a value, and the ability to modify report data by using format codes.
The A code also allows you to handle the data recursively, or "nest" one A code expression inside another.
The A code function uses an algebraic format. There are two forms of the A code:
• A uses only the integer parts of stored numbers unless a scaling factor is included.
• AE handles extended numbers. Uses both integer and fractional parts of stored numbers.
n is a number from 1 to 6 that specifies the required scaling factor.
expression comprises operands, operators, conditional statements, and special functions.
The A code replaces and enhances the functionality of the F code. You will find A codes much easier to work with than F codes.
Valid A code formats are:
A;expression evaluates the expression.
An converts to a scaled integer.
An;expression converts to a scaled integer.
AE;expression evaluates the expression.
Performs the functions specified in expression on values stored without an embedded decimal point.
The An format converts a value stored with an embedded decimal point to a scaled integer. The stored value's explicit or implied decimal point is moved n digits to the right with zeros added if
necessary. Only the integer portion is returned.
Field 2 of the data definition record must contain the FMC of the field that contains the data to be processed.
The An;expression format performs the functions specified in expression on values stored with an embedded decimal point. The resulting value is then converted to a scaled integer.
The AE format uses both the integer and fractional parts of stored numbers. Scaling of output must be done with format codes.
Field 1 Field 2 A;1 + 2 A3;1 + 2 AE;1 + 2
-77 -22 -99 -99000 -99
0.12 22.09 22 22210 22.21
-1.234 -12.34 -13 -13574 -13.574
-1.234 123.45 122 122216 122.216
Input conversion is not allowed.
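As an illustration only (plain Python, not jBASE syntax), the following sketch mimics how the A, An and AE forms in the table above treat stored decimal values; the helper names are made up for the example.

def a_code(x, y):
    # A: only the integer parts of the stored values are used
    return int(x) + int(y)

def a_n(x, y, n=3):
    # An: full values are used, then scaled to an integer by 10**n
    return round((x + y) * 10 ** n)

def ae_code(x, y):
    # AE: both integer and fractional parts are used
    return round(x + y, 3)   # rounded only to keep the float display tidy

print(a_code(-1.234, 123.45), a_n(-1.234, 123.45), ae_code(-1.234, 123.45))
# 122 122216 122.216, matching the last row of the table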
You can format the result of any A code operation by following the expression with a value mark, and then the required format code, like this:
Format codes can also be included within the expression. See Format Codes for more information.
Operands that you can use in A code expressions include: FMCs (field numbers), field names, literals, operands that return system parameters, and special functions.
You can format any operand by following it with one or more format codes enclosed in parentheses, and separated by value marks, (ctrl ]), like this:
See Format Codes for more information.
The field number operand returns the content of a specified field in the data record:
The first R specifies that any non-existent multivalues should use the previous non-null multivalue. When the second R is specified, this means that any non-existent subvalues should use the previous
non-null subvalue.
The field name operand returns the content of a specified field in the data record:
The literal operand supplies a literal text string or numeric value:
Several A code operands return the value of system parameters. They are:
D Returns the system date in internal format.
LPV Returns the previous value transformed by a format code.
NA Returns the number of fields in the record.
NB Returns the current break level counter. 1 is the lowest break level, 255 is the GRAND TOTAL line.
ND Returns the number of records (detail lines) since the last control break.
NI Returns the record counter.
NL Returns the record length in bytes
NS Returns the subvalue counter
NU Returns the date of last update
NV Returns the value counter
T Returns the system time in internal format.
V Returns the previous value transformed by a format code
Some operands allow you to use special functions. They are:
I(expression) Returns the integer part of expression.
R(exp1, exp2) Returns the remainder of exp1 divided by exp2.
S(expression) Returns the sum of all values generated by expression.
string[start-char-no, len] Returns the substring starting at character start-char-no for length len.
Operators used in A code expressions include arithmetic, relational and logical operators, the concatenation operator, and the IF statement.
Arithmetic operators are:
+ Sum of operands
- Difference of operands
* product of operands
/ Quotient (an integer value) of operands
Relational operators specify relational operations so that any two expressions can treated as operands and evaluated as returning true (1) or false (0). Relational operators are:
= or EQ Equal to
< or LT Less than
> or GT Greater than
<= or LE Less than or equal to
>= or GE greater than or equal to
# or NE Not equal
The logical operators test two expressions for true or false and return a value of true or false. Logical operators are:
AND Returns true if both expressions are true.
OR Returns true if either expression is true.
The concatenation operator is a colon (:).
The IF operator gives the A code its conditional capabilities. An IF statement looks like this:
IF expression THEN statement ELSE statement
Specifies a field which contains the value to be used.
field-number is the number of the field (FMC) which contains the required value.
R specifies that the value obtained from this field is to be applied repeatedly for each multivalue not present in a corresponding part of the calculation.
RR specifies that the value obtained from this field is to be applied repeatedly for each subvalue not present in a corresponding part of the calculation.
The following field numbers have special meanings:
0 Record key
9998 Sequential record count
9999 Record size in bytes
EXAMPLE 1
Returns the value stored in field 2 of the record.
EXAMPLE 2
Returns the size of the record in bytes.
EXAMPLE 3
A;2 + 3R
For each multivalue in field 2 the system also obtains the (first) value in field 3 and adds it. If field 2 contains 1]7 and field 3 contains 5 the result would be two values of 6 and 12
respectively. Where 3 does not have a corresponding multivalue, the last non-null multivalue in 3 will be used.
EXAMPLE 4
A;2 + 3RR
For each subvalue in field 2 the system also obtains the corresponding subvalue in field 3 and adds it. If field 2 contains 1\2\3]7 and field 3 contains 5\4 the result would be five values of 6, 6,
7, 12 and 4 respectively.
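A rough Python sketch (again, not jBASE itself) of the reuse rule illustrated in EXAMPLES 3 and 4: when one field runs out of values, the last non-null value obtained from it keeps being applied.

def add_with_reuse(values_a, values_b):
    # add corresponding values, reusing the last non-null value of values_b
    out, last_b = [], None
    for i, a in enumerate(values_a):
        if i < len(values_b) and values_b[i] is not None:
            last_b = values_b[i]
        out.append(a + last_b)
    return out

print(add_with_reuse([1, 7], [5]))   # [6, 12], as in EXAMPLE 3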
N (FIELD NAME) OPERAND
References another field defined by name in the same dictionary or found in one of the default dictionaries.
field-name is the name of another field defined in the same dictionary or found in the list of default dictionaries (see the JBCDEFDICTS environment variable).
R specifies that the value obtained from this field is to be applied repeatedly for each multivalue not present in a corresponding part of the calculation.
RR specifies that the value obtained from this field is to be applied repeatedly for each subvalue not present in a corresponding part of the calculation.
If the data definition record of the specified field contains field 8 pre-process conversion codes, these are applied before the value(s) are returned.
Any pre-process conversion codes in the specified field-name, including any further N(field-name) constructs are processed as part of the conversion code.
N(field-name) constructs can be nested to 30 levels. The number of levels is restricted to prevent infinite processing loops. For example:
008 A;N(TEST2)
008 A;N(TEST1)
EXAMPLE 1
Returns the value stored in the field defined by S.CODE.
EXAMPLE 2
A;N(A.VALUE) + N(B.VALUE)R
For each multivalue in the field defined by A.VALUE the system also obtains the corresponding value in B.VALUE and adds it. If A.VALUE returns 1]7 and B.VALUE returns 5, the result would be two
values of 6 and 12 respectively.
EXAMPLE 3
A;N(A.VALUE) + N(B.VALUE)RR
For each subvalue in the field defined by A.VALUE the system also obtains the corresponding value in B.VALUE and adds it. If A.VALUE returns 1\2\3]7 and B.VALUE returns 5 the result would be four
values of 6, 7, 8 and 12 respectively.
Specifies a literal string or numeric constant enclosed in double quotes.
literal is a text string or a numeric constant.
A number not enclosed in double quotes is assumed to be a field number (FMC).
EXAMPLE 1
A;N(S.CODE) + "100"
Adds 100 to each value (subvalue) in the field defined by S.CODE.
EXAMPLE 2
Concatenates the string "SUFFIX" to each value (subvalue) returned by S.CODE.
Reference system parameters like date, time, the current break level, or the number of the current record.
system-operand can be any of the following:
D Returns the system date in internal format.
LPV Returns the previous value transformed by a format code.
NA Returns the number of fields in the record.
NB Returns the current break level counter. 1 is the lowest break level, 255 is the GRAND TOTAL line.
ND Returns the number of records (detail lines) since the last control break.
NI Returns the record counter.
NL Returns the record length in bytes
NS Returns the subvalue counter
NU Returns the date of last update
NV Returns the value counter
T Returns the system time in internal format.
V Returns the previous value transformed by a format code
The Integer Function I(expression) returns the integer portion of expression.
AE;I(N(COST) * N(QTY))
Returns the integer portion of the result of the calculation.
The Remainder Function R(exp1, exp2) takes two expressions as operands and returns the remainder when the first expression is divided by the second.
A;R(N(HOURS) / "24")
Returns the remainder when HOURS is divided by 24.
The Summation Function S(expression) evaluates an expression and then adds together all the values.
A;S(N(HOURS) * N(RATE)R)
Each value in the HOURS field is multiplied by the value of RATE. The multivalued list of results is then totalled.
The substring function [start-char-no, len] extracts the specified number of characters from a string, starting at a specified character.
start-char-no is an expression that evaluates to the position of the first character of the substring.
len is an expression that evaluates to the number of characters required in the substring. Use - len (minus prefix) to specify the end point of the substring. For example, [1, -2] will return all but
the last character and [-3, 3] will return the last three characters.
EXAMPLE 1
A;N(S.CODE)["2", "3"]
Extracts a sub-string from the S.CODE field, starting at character position 2 and continuing for 3 characters.
EXAMPLE 2
A;N(S.CODE)[2, N(SUB.CODE.LEN)]
Extracts a sub-string from the S.CODE field, starting at the character position defined by field 2 and continuing for the number of characters defined by SUB.CODE.LEN.
Specifies a format code to be applied to the result of the A code or an operand.
a-code is a complete A Code expression.
a-operand is one of the A Code operands.
format-code is one of the codes described later - G(roup), D(ate) or M(ask).
] represents a value mark that must be used to separate each format-code.
You can format the result of the complete A code operation by following the expression with a value mark and then the required format
code(s). (This is actually a standard feature of the data definition records.)
Format codes can also be included within A code expressions. In this case, they must be enclosed in parentheses, and separated with a value mark if more than one format code is used.
All format codes will convert values from an internal format to an output format.
EXAMPLE 1
A;N(COST)(MD2]G0.1) * ...
Shows two format code applied within an expression. Obtains the COST value and applies an MD2 format code. Then applies a group extract to acquire the integer portion of the formatted value. The
integer portion can then be used in the rest of the calculation. Could also have been achieved like this:
A;I(N(COST)(MD2)) * ...
EXAMPLE 2
A;N(COST) * N(QTY)]MD2
Shows the MD2 format code applied outside the A code expression. COST is multiplied by QTY and the result formatted by the MD2 format code.
Operators used in A code expressions include arithmetic, relational and logical operators, the concatenation operator, and the IF statement.
Arithmetic operators are:
+ Sum of operands
- Difference of operands
* product of operands
/ Quotient (an integer value) of operands
Relational operators specify relational operations so that any two expressions can treated as operands and evaluated as returning true (1)
or false (0). Relational operators are:
= or EQ Equal to
< or LT Less than
> or GT Greater than
<= or LE Less than or equal to
>= or GE greater than or equal to
# or NE Not equal
The logical operators test two expressions for true (1) or false (0) and return a value of true or false. Logical operators are:
AND Returns true if both expressions are true.
OR Returns true if either expression is true.
The words AND and OR must be followed by at least one space. The AND operator takes precedence over the OR unless you specify a different order by means of parentheses. OR is the default operation.
A colon (:) is used to concatenate the results of two expressions.
For example, the following expression concatenates the character "Z" with the result of adding together fields 2 and 3:
A;"Z":2 + 3
The IF statement gives the A code conditional capabilities.
IF expression THEN statement ELSE statement
expression must evaluate to true or false. If true, executes the THEN statement. If false, executes the ELSE statement.
statement is a string or numeric value.
Each IF statement must have a THEN clause and a corresponding ELSE clause.
IF statements can be nested but the result of the statement must evaluate to a single value.
The words IF, THEN and ELSE must be followed by at least one space.
EXAMPLE 1
A;IF N(QTY) < 100 THEN N(QTY) ELSE ERROR!
Tests the QTY value to see if it is less than 100. If it is, output the QTY field. Otherwise, output the text "ERROR!".
EXAMPLE 2
A;IF N(QTY) < 100 AND N(COST) < 1000 THEN N(QTY) ELSE ERROR!
Same as example 1 except that QTY will only be output if it is less than 100 and the cost value is less than 1000.
EXAMPLE 3
A;IF 1 THEN IF 2 THEN 3 ELSE 4 ELSE 5
If field 1 is zero or null, follow else and use field 5. Otherwise test field 2. If field 2 is zero or null, follow else and use field 4. Otherwise use field 3. Field 3 is only used if both fields 1
and 2 contain a value.
|
{"url":"http://www.jbase.com/r5/knowledgebase/manuals/3.0/30manpages/man/jql2_CONVERSION.A.htm","timestamp":"2014-04-21T07:40:43Z","content_type":null,"content_length":"26791","record_id":"<urn:uuid:c1510926-59ed-421c-a8bb-ca968eb3b553>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hyperfiniteness of CCR algebra
up vote 5 down vote favorite
Hi, it is known that the double commutants of the CCR algebra in its GNS spaces with respect to quasi-free states are always type III factors. My question is: will some of them be hyperfinite?
Sorry for confusing Ccr with ccr;-) – plusepsilon.de Mar 21 '13 at 16:18
2 Just to pick up on Marc's comment: by CCR you mean canonical commutation relations, not completely continuous representations (aka liminal), right? – Yemon Choi Mar 21 '13 at 18:25
I answered that the later are alway type 1, but the op referred to the former. – plusepsilon.de Mar 22 '13 at 14:23
add comment
1 Answer
active oldest votes
Yes, Araki-Woods showed they're always ITPFI factors and ITPFI factors are hyperfinite. See the following:
Araki-Woods, A classification of factors
up vote 4 down vote
It's a bit of a monster paper, the stuff on CCR algebras is near the end.
Thanks Ollie for the right reference. – Panchugopal Mar 23 '13 at 9:53
add comment
Not the answer you're looking for? Browse other questions tagged oa.operator-algebras or ask your own question.
|
{"url":"http://mathoverflow.net/questions/125149/hyperfiniteness-of-ccr-algebra","timestamp":"2014-04-18T13:49:38Z","content_type":null,"content_length":"53085","record_id":"<urn:uuid:7046b73e-739f-49df-b9c3-62e1cab20230>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Idea of the Decade
on August 31, 2010
in Progress
. 34 Comments
What is the largest (non-human) animal that’s ever found its way onto an airplane?
Someday Google (or its successor) will be able to answer that question. It will understand what you’re asking, it will perform relevant searches for old newspaper items, it will sift through the
results, it will know (or know how to find out) whether a vole is larger than a ferret, and it will give you an answer. We’ll call it the semantic web.
When the Chronicle of Higher Education asked me for a few hundred words on the defining idea of the next decade, this was the first thing that came to mind. Another was the partial conquest of
cognitive bias through better understanding of the systematic ways our brains let us down, together with software designed to compensate for our own mental shortcomings.
Efficiency Experts
on August 30, 2010
in Economics and Musings
. 80 Comments
Is it better to tax consumption or to tax income? Is it better to tax carbon or to mandate fuel efficiency? Is it better to foster global competition or to protect local industries?
Today, I will attack none of these questions. Instead, I will attack the meta-question of how to attack such questions. For economists evaluating alternative policies, the industry standard is the
efficiency criterion, also known as the welfare criterion. (I’ll illustrate what that means as I go along.) But now comes Princeton Professor Uwe Reinhardt with a piece in the New York Times that
questions the orthodox approach found in virtually all modern textbooks (including one in particular).
Let’s first dispense with the straw man. I’ve never heard of an economist who believes that every efficient policy is good, and I’ve heard of very few who believe that every inefficient policy is
bad. It’s true that most economists do seem to believe that any good policy analysis should start by considering efficiency. That doesn’t mean it should end there.
I think economists are right to emphasize efficiency, and I think so for (at least) two reasons. First, emphasizing efficiency forces us to concentrate on the most important problems. Second,
emphasizing efficiency forces us to be honest about our goals.
Weekend Roundup
on August 28, 2010
. 4 Comments
If there’s one thing I wish everybody understood about economics, it’s that wise resource allocation requires truly vast amounts of information, and that prices do an excellent job of summarizing
that information. We led off the week by applying this principle to grocery shopping. A rather silly column in the New York Times had seemed to suggest that socially responsible shoppers should care
about the energy costs of producing vegetables to the exclusion of all the other costs. The column was focusing, in other words, on the seen as opposed to the unseen. But the unseen costs of growing
a tomato in one location rather than another are just as important as the obvious ones, and because they are unseen (and unseeable) the only feasible way to account for them is to look at prices. We
followed up with a 25 year old application of exactly the same principle, this time to the problem of resource extraction.
We moved on to the perils of interpreting data, in this case with regard to the ingredients of a happy marriage. Then a look back to what the world of 1985 thought would constitute a marvelous future;
we seem to have met expectations pretty well. And finally, we came in a sense full circle — from lamenting those who focus single-mindedly on energy costs to the exclusion of all else to lamenting those
who fault others for failing to focus single-mindedly on one political issue to the exclusion of all others.
I’ll be back next week with some thoughts on why we should care about economic efficiency, a little more on the foundations of arithmetic, and some surprises.
The New Parochialism
on August 27, 2010
in Politics and Rants
. 15 Comments
So a former chairman of the Republican National Committee comes out as gay, and endorses gay marriage, but continues to support politicians who oppose gay marriage. For this he is labeled (on blogs
too numerous to link) a first-class hypocrite.
I missed the memo about the new criteria for hypocrisy, so I’d like a little clarification here. Are Catholics now required to vote solely on the basis of Catholic issues, and union workers solely on
the basis of union issues, and billionaires solely on the basis of billionaire issues? Or is it only gays who are forbidden to prioritize, say, foreign affairs and tax policy? And what’s to become of
the multifaceted? If you’re a gay Jewish small business owner, to which brand of parochialism are you now in thrall? Please advise.
Living In the Future
on August 26, 2010
in Books and Progress
. 29 Comments
My treasured copy of the humor classic Science Made Stupid, copyright 1985, contains a Wonderful Future Invention Checklist. Who in 1985 would have thought that just 25 years later, I could check off
a third or so of the entries?
• Household Robot. Does my Roomba count?
• Magnetic Train. Check.
• Flat-Screen TV. Check.
• Flat-Screen 3-D TV. Check.
• Two-Way Wrist Radio. We are so far past this.
The Match Game
on August 25, 2010
in Bad Reasoning and Statistics
. 22 Comments
Robin reports that success in marriage is quite uncorrelated with the match between your personality traits and your partner's. Your traits matter (it pays to be happy, for example) and so do your partner's, but the combination makes no difference. In other words, being a happy person (or an extrovert, or a stickler for detail) affects the quality of your marriage in exactly the same way whether you marry Ruth Bader Ginsburg or Lady Gaga. (This applies specifically to personality traits, not to religion, politics, wealth, intelligence, etc.)
Edited to add: The original version of this post misstated the result; I’ve changed a few words in the preceding paragraph so it’s accurate now.
From this, Robin concludes:
If you want a happy relationship, be a happy person and pick a happy partner; no need to worry about how well you match personality-wise.
NO!!!! That’s not the right conclusion at all, and it’s worth understanding why not. Suppose we lived in a world where personality matches had a huge effect on the success of marriages. In that
world, why would two people with clashing personalities ever choose to marry? Presumably because there’s some special value in the match — like, say, an extraordinary mutual attraction — that
overrides the personality clash.
So a survey of married couples — which is exactly the sort of evidence Robin is reporting on — is not at all a random sample of couples. Instead, it consists, for the most part, of couples with
matched personalities on the one hand, and couples with mismatched personalities who are exceptionally well suited to each other for some other reason on the other hand. It’s not too surprising to
find similar success rates in those two classes of couples. The third class — the couples with mismatched personalities and no redeeming match characteristics — never gets married and therefore never
gets surveyed.
Conclusion: The results Robin quotes are perfectly consistent with a world where personality matching doesn’t matter — but also perfectly consistent with a world where it matters very much.
LocoVore Followup: A Blast From the Past
on August 24, 2010
in Bad Reasoning and Economics
. 22 Comments
By way of followup to yesterday’s post on locavores, I present this letter to the editor of Science, written in 1976 by Harvard economist Robert Dorfman. You can think of Earl Cook, to whom Dorfman
is responding, as the Steven Budiansky of his time.
The article by Earl Cook, “Limits to exploitation of nonrenewable resources”, is extremely informative. In fact, I should like to assign it to my class except that it is marred by an egregious
fallacy. Since this fallacy has been turning up repeatedly in writings about environmental and natural resource problems, I wish to call it to the attention of Science readers.
The mistake has to do with the nature of social cost. Cook, for example, writes “To society … the profit from mining (including oil and gas extraction) can be defined either as an energy surplus,
as from the exploitation of fossil and nuclear fuel deposits, or as a work saving, as in the lessened expenditure of human energy and time when steel is used in place of wood … “. A number of
other authors also equate social cost with the expenditure of energy.
on August 23, 2010
in Bad Reasoning and Economics
. 64 Comments
Steven Budiansky, the self-described Liberal Curmudgeon, thinks there’s something wrong with the locavore movement, and says so in the New York Times. But he misses the point just as badly as the
locavores themselves.
The locavores, in case you don’t follow this kind of thing, are an environmentalist sect who make a moral issue out of where your food is grown — preferring that which is local to that which comes
from afar. For example, as Budiansky puts it, “it is sinful in New York City to buy a tomato grown in California because of the energy spent to truck it across the country”.
Ah, says Budiansky, but let’s look deeper — the alternative to that California tomato might be one grown in a lavishly heated greenhouse in the Hudson Valley, and at a higher energy cost. This leads
him off on a merry chase through what he calls a series of math lessons, adding up the energy costs of growing and transporting food in different locations. The implicit recommendation seems to be
that when you’re choosing a tomato, you should care about all the energy costs.
Weekend Roundup
on August 21, 2010
. 5 Comments
It was a week of mathematics here at The Big Questions. I am still reeling from the momentous events that inspired Monday’s post; we now know that the Internet has changed mathematics forever. On
Friday, we celebrated the momentous achievements of the new Fields Medalists.
In between, we began what will be an occasional series on the foundations of arithmetic. In Part I, we distinguished truth from provability. In Part II, we distinguished theories (that is, systems of
axioms) from models (that is, the mathematical structures that the theories are intended to describe). A theory is a map; a model is the territory. In Part III we talked about consistency and
stressed that it applies only to theories, not to models. A purported map of Nebraska can be inconsistent; Nebraska itself can’t be.
It turns out (a little surprisingly) that any consistent map must describe multiple territories. (That is, any consistent set of axioms must describe many mathematical structures — or in other words,
any consistent theory must have many models.) (This assumes the map has enough detail to let us talk about addition and multiplication.) These territories—i.e. these mathematical structures, all look
very different, even though they all conform to the map. Conclusion: No map can fully describe the territory. No set of axioms can fully describe the natural numbers.
I’ll continue this series sporadically, and eventually we’ll get into some controversial philosophical questions. So far we haven’t.
Speaking of controversy, I’ve increased the default font size for this blog. Tell me if you like it.
Wikipedia Fail
on August 20, 2010
in Heroes, Math and Progress
. 18 Comments
Congratulations to the 2010 Fields Medalists, announced yesterday in Hyderabad. Elon Lindenstrauss, Ngo Bao Chau, Stanislav Smirnov, and Cedric Villani have been awarded math's highest honor. (Up to
four medalists are chosen every four years.)
My sense going in was that Ngo was widely considered a shoo-in, for his proof of the Fundamental Lemma of Langlands Theory. Do you want to know what the Fundamental Lemma says? Here is an 18-page
statement (not proof!) of the lemma. The others were all strong favorites. Nevertheless:
Basic Arithmetic, Part III: The Map is Not the Territory
on August 19, 2010
in Logic and Math
. 35 Comments
Today let’s talk about consistency.
Suppose I show you a map of Nebraska, with as-the-crow-flies distances marked between the major cities. Omaha to Lincoln, 100 miles. Lincoln to Grand Island, 100 miles. Omaha to Grand Island, 400 miles.
You are entitled to say “Hey, wait a minute! This map is inconsistent. The numbers don’t add up. If it’s 400 miles straight from Omaha to Grand Island, then there can’t be a 200 mile route that goes
through Lincoln!”
So a map can be inconsistent. (It can also be consistent but wrong.) Nebraska itself, however, can no more be inconsistent than the color red can be made of terrycloth. (Red things can be made of
terrycloth, but the color red certainly can’t.)
With that in mind, suppose I give you a theory of the natural numbers — that is, a list of axioms about them. You might examine my axioms and say “Hey! These axioms are inconsistent. I can use them
to prove that 0 equals 1 and I can also use them to prove that 0 does not equal 1!” And, depending on the theory I gave you, you might be right. So a theory can be inconsistent. But the intended
model of that theory — the natural numbers themselves — can no more be inconsistent than Nebraska can. Inconsistency in this context applies to theories, like the Peano axioms for arithmetic, not to
structures, like the natural numbers themselves.
Basic Arithmetic, Part II
on August 18, 2010
in Logic and Math
. 40 Comments
Today’s mini-lesson in the foundations of mathematics is about the key distinction between theories and models.
The first thing to keep in mind is that mathematics is not economics, and therefore the vocabulary is not the same. In economics, a “model” is some sort of an approximation to reality. In
mathematics, the word model refers to the reality itself, whereas a theory is a sort of approximation to that reality.
A theory is a list of axioms. (I am slightly oversimplifying, but not in any way that will be important here.) Let’s take an example. I have a theory with two axioms. The first axiom is “Socrates is
a man” and the second is “All men are mortal”. From these axioms I can deduce some theorems, like “Socrates is mortal”.
That’s the theory. My intended model for this theory is the real world, where “man” means man, “Socrates” means that ancient Greek guy named Socrates, and “mortal” means “bound to die”.
But this theory also has models I never intended. Another model is the universe of Disney cartoons, where we interpret “man” to mean “mouse”, we interpret “Socrates” to mean “Mickey” and we interpret
“mortal” to mean “large-eared”. Under that interpretation, my axioms are still true — all mice are large-eared, and Mickey is a mouse — so my theorem “Socrates is mortal” (which now means “Mickey is
large-eared”) is also true.
Basic Arithmetic
on August 17, 2010
in Math
. 27 Comments
With the P=NP problem in the news, this seems like a good time to revisit the distinction between truth and provability.
Start with this P=NP-inspired question:
Question 1: Is it or is it not possible to write a computer program that factors numbers substantially faster than by trial-and-error?
I don’t need you to answer that question. I just want you to answer an easier question:
Question 2: Does or does not Question 1 have an answer?
If you said yes (as would be the case, for example, if you happen to be sane), then you have recognized that statements about arithmetic can be either true or false independent of our ability to
prove them from some set of standard axioms. After all, nobody knows whether the standard axioms of arithmetic (or even the standard axioms for set theory, which are much stronger) suffice to settle
Question 1. Nevertheless, pretty much everyone recognizes that Question 1 must have an answer.
Let’s be clear that this is indeed a question about arithmetic, not about (say) electrical engineering. A computer program is a finite string of symbols, so it can easily be encoded as a string of
numbers. The power to factor quickly is a property of that string, and that property can be expressed in the language of arithmetic. So Question 1 is an arithmetic question in disguise. (You might
worry that phrases like “quickly” or “substantially faster” are suspiciously vague, but don’t worry about that — these terms have standard and perfectly precise definitions.)
O Brave New World!
on August 16, 2010
in Math, Progress and Truthseeking
. 12 Comments
Something momentous happened this week. Of this I feel certain.
A little over a week ago, HP Research Scientist Vinay Deolalikar claimed he could settle the central problem of theoretical computer science. That's not the momentous part. The momentous part is what
happened next.
Deolalikar claimed to prove that P does not equal NP. This means, very roughly, that in mathematics, easy solutions can be difficult to find. “Difficult to find” means, roughly, that there’s no
method substantially faster than brute force trial-and-error.
Plenty of problems — like "What are the factors of 17158904089?" — have easy solutions that seem difficult to find, but maybe that's an illusion. Maybe there are easy solution methods we just
haven’t thought of yet. If Deolalikar is right and P does not equal NP, then the illusion is reality: Some of those problems really are difficult. Math is hard, Barbie.
So. Deolalikar presented (where “presented” means “posted on the web and pointed several experts to it via email”) a 102 page paper that purports to solve the central problem of theoretical computer
science. Then came the firestorm. It all played out on the blogs.
Dozens of experts leapt into action, checking details, filling in logical gaps, teasing out the deep structure of the argument, devising examples to illuminate the ideas, and identifying fundamental
obstructions to the proof strategy. New insights and arguments were absorbed, picked apart, reconstructed and re-absorbed, often within minutes after they first appeared. The great minds at work
included some of the giants of complexity theory, but also some semi-outsiders like Terence Tao and Tim Gowers, who are not complexity theorists but who are both wicked smart (with Fields Medals to
prove it).
The epicenter of activity was Dick Lipton's blog where, at last count, there had been 6 posts with a total of roughly 1000 comments. How to keep track of all the interlocking comment threads?
Check the continuously updated wiki, which summarizes all the main ideas and provides dozens of relevant links!
I am not remotely an expert in complexity theory, but for the past week I have been largely glued to my screen reading these comments, understanding some of them, and learning a lot of mathematics as
I struggle to understand the others. It’s been exhilarating.
Weekend Roundup
on August 14, 2010
. 10 Comments
More posts than usual this week as I was motivated twice to add a mid-day post to my usual morning fare. As a result, I’m afraid Jeff Poggi’s remarkable sonnet to Darwin got less attention than it
should have; I hope you’ll go back, read it, and spot the hidden Darwin references.
The mid-day posts were motivated by a pair of (in my opinion, of course) outrages — first Paul Krugman’s suggestion that if we control for education and a few other demographic factors, we can make a
meaningful comparison of private and public sector wages, ignoring all the ways in which public and private sector jobs differ. (And ignoring, too, all the ways in which one college degree might
differ from another.) I suggested that a better metric is the quit rate in each sector; some commenters rightfully pointed out that that’s also an insufficient statistic. I bet it still comes a lot
closer than Krugman’s attempt, though.
The second outrage was the Administration’s willingness to act as the equivalent of a Mafia enforcer for firms who prefer not to compete with foreign labor. Some commenters asked how this differed
from any other case of the American government enforcing American laws while asking the beneficiaries to contribute to the costs. That’s easy. This law, unlike, say, the laws against murder, has as
its primary purpose the restraint of trade (as opposed to oh, say, the general welfare).
We talked about how to estimate the peak of the Laffer curve (answer—it’s at about the 70% marginal tax rate, though I indicated some reasons why it might be somewhat leftward of that), mused about
the value of a good CEO, and gave new meaning to the phrase phone sex when we reported on the fact that iPhone users have many more lifetime sex partners than Android users.
Incidentally, those readers who thought the flashy iPhone pays off in the mating market can’t be right (or at least can’t have hit on the key story), because the effect holds even for 40 year olds,
who surely did not acquire their iPhones until long after they’d acquired most of their sex partners.
And we noted in passing the announcement of a proof that P does not equal NP (where you can look here for a very rough idea of what this means). Over the course of the week, this developed into a
story of, I think, monumental significance, which I will surely revisit early next week. See you then.
The Protection Racket
on August 13, 2010
in Current Events and Outrage
. 25 Comments
Say you run a restaurant. And say a competitor announces plans to set up shop just across the street. What can you do to minimize the impact on your business?
Well, you could lower your prices. Or you could work on providing better service. Or you could send over a couple of guys who are really good at convincing people it’s not in their interest to
compete with you.
Or say you run a personnel company that brings foreign workers into the United States. And say you’re worried about competitors who cross the border without your help. One option is to try doing a
better job. Another is to send over about 1500 guys with unmanned aerial vehicles, new forward operating bases and $14 million in new communications equipment to tamp down the flow.
President Obama, with support from both sides of the political aisle, will be signing a bill today that allocates $600 million for “border security”. According to CNN, “The bill is funded in part by
higher fees on personnel companies that bring foreign workers into the United States”.
I imagine the personnel companies will consider it money well spent. Let’s not lose sight of how ugly this is.
On Darwin’s 200th
on August 13, 2010
in Evolution and Poetry
. 1 Comment
Our reader Jeff Poggi sent me a sonnet he wrote in honor of Darwin's 200th birthday, and kindly allowed me to reproduce it here. How many hidden Darwin references can you spot?
On Darwin’s 200th
Jeff Poggi
Charles much under winter gray knew life
Would be back, be full, be gullible, need
Life. If inches crept by like miles rife
With their own history, then just a seed
Or stone therein would tell the story of
All this earth–all. He can’t let it be, sees
The earth make new earth, sees new stars above
Reflected, fits royal needs while he flees
Into his life in these new waters, lands.
Home in his garden he takes walks and writes,
Suffers loss most dear and is forced to hand
To them who will not hear what sorely smites
Their hallowed place, their no less hallowed birth—
From such simple forms we populate the earth.
Causation versus Correlation
on August 12, 2010
in Miscellaneous and Musings
. 30 Comments
Data from 9,785 users of the dating site OKCupid reveal that iPhone users have 50% to 100% more sex partners than Android users, at every age.
This graph combines men and women, but the same pattern holds for each gender separately.
Explain this to me!
More info here (if you scroll down a couple of screens).
Laffering All The Way
on August 11, 2010
in Economics and Policy
. 23 Comments
The Washington Post’s Ezra Klein had a great idea this week: He asked a bunch of economists and pundits to tell him where the Laffer curve bends. In other words, what is the marginal tax rate above
which higher taxes lead to lower revenues? Meanwhile, coincidentally or not, Paul Krugman blogged on the very same question.
There’s a lot worth mentioning here, but let me start with one point that will be relevant below: Imposing a 20% income tax is not the same as cutting your wage by 20%. That’s because the income tax
grabs not just a chunk of your current wages, but also a chunk of the future interest and dividends those wages enable you to earn. So a 20% income tax will, in general, discourage work more
effectively than a 20% wage cut. This is important if you’re using data on wage cuts to predict the effects of income taxes.
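To make that concrete, here is a stylized example (the numbers are purely illustrative): suppose you earn $100 today, save all of it, and the interest rate is 10%. A 20% wage cut leaves you $80, which grows to $88 next year, and that's the end of it. A 20% income tax also leaves you $80 today, but it taxes next year's $8 of interest as well, leaving you $86.40. The income tax takes a bigger bite out of the total return to working, which is why evidence on wage cuts can understate how much an income tax discourages work.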
That having been said, let’s see what we can learn from the responses:
P, NP and All That
on August 10, 2010
in Math and Musings
. 16 Comments
The really big news from Hewlett Packard this week was not the dismissal of CEO Mark Hurd but the announcement by HP Labs researcher Vinay Deolalikar that he has settled the central question in
theoretical computer science.
That central question is called the “P versus NP” problem, and for those who already know what that means, his claim (of course) is that P does not equal NP. For those who don’t already know what
that means, “P versus NP” is a problem about the difficulty of solving problems. Here‘s a very rough and imprecise summary of the problem, glossing over every technicality.
Deolalikar’s paper is 102 pages long and less than about 48 hours old, so nobody has yet read it carefully. (This is a preliminary draft and Deolalikar promises a more polished version soon.) The
consensus among the experts who have at least skimmed the paper seems to be that it is a) not crazy (which already puts it in the top 1% of papers that have addressed this question), b) teeming with
creative ideas that are likely to have broad applications, and c) quite likely wrong.
As far as I’m aware, people are betting on point c) not because of anything they’ve seen in the paper, but because of the notorious difficulty of the problem.
And when I say betting, I really mean betting. Scott Aaronson, whose judgment on this kind of thing I’d trust as much as anyone’s, has publicly declared his intention to send Deolalikar a check for
$200,000 if this paper turns out to be correct. Says Aaronson: “I’m dead serious—and I can afford it about as well as you’d think I can.” His purpose in making this offer?
Krugman Phones One In
on August 9, 2010
in Paul Krugman
. 48 Comments
I rarely post in the middle of the day, but this seems to call for an immediate response:
Paul Krugman, feisty as ever, scoffs at the claim that public-sector employees are overcompensated. True, salaries are 13% higher in the public sector. But, says Krugman, you've got to correct for
the fact that public employees are (on average) better educated. After the correction, those public servants earn 4% less than the rest of us.
Well, Krugman is certainly right that you can’t take the raw data at face value. But, at least if you’re trying to be honest, you don’t get to pick and choose what you correct for either. Sure, let’s
correct for education levels. Let’s also correct for the fact that public sector employees work fewer hours per week. And for differences in pension plans, and job security, and working conditions.
How can we ever be sure we’ve counted everything important? We can’t, as long as we do it Krugman’s way. So let’s do something sensible instead. Let’s look at quit rates. Quit rates in the public
sector are about one third what they are elsewhere. In other words, government employees sure do seem to like holding on to their jobs. More than just about anyone else, in fact. Doesn’t that tell us
everything we need to know about who’s overcompensated?
HP Falter
on August 9, 2010
in Current Events and Musings
. 30 Comments
How important is it to hire the best person for the job?
Here’s a data point: On Friday, Hewlett Packard’s CEO Mark Hurd resigned unexpectedly — and pretty much instantly the value of HP stock dropped by about $10 billion. If we assume Hurd would otherwise
have been around for another 10 years or so, that means shareholders think his departure will cost the company about a billion dollars a year. Which, incidentally, makes his $30 million or so in
annual compensation look like a hell of a bargain.
Now maybe some part of that $10 billion reflects expected short-term losses due to the turmoil of an unplanned transition. But even if that turmoil were to cost HP a full month of revenue (which
seems like a pretty extreme assumption), that's still less than a billion — leaving over $9 billion to represent the difference between what the market expected from Hurd and what it expects from his successor.
65 Years Later
on August 6, 2010
in Books and History
. 17 Comments
65 years ago today, the world changed. In his magnificent World War II memoir Quartered Safe Out Here, George McDonald Fraser looks back on what might have been:
I led Nine Section for a time; leading or not, I was part of it. They were my mates, and to them I was bound by ties of duty, loyalty and honor… Could I say, yes, Grandarse or Nick or Forster
were expendable, and should have died rather than the victims of Hiroshima? No, never. And the same goes for every Indian, American, Australian, African, Chinese and other soldier whose life was
on the line in August, 1945. So [I'd have said]: drop the bomb.
And then I have another thought.
You see, I have a feeling that if—and I know it’s an impossible if—but if, on that sunny August morning, Nine Section had known all that we know now of Hiroshima and Nagasaki, and could have been
shown the effect of that bombing, and if some voice from on high had said: “There — that can end the war for you, if you want. But it doesn’t have to happen, the alternative is that the war, as
you’ve known it, goes on to a normal victorious conclusion, which may take some time, and if the past is anything to go by, some of you won’t reach the end of the road. Anyway, Malaya’s down that
way … it’s up to you”, I think I know what would have happened. They would have cried “Aw, fook that!”, with one voice, and then they would have sat about, snarling, and lapsed into silence, and
then someone would have said heavily, “Aye, weel” and got to his feet, and been asked “W’eer th’ ‘ell you gan, then?”, and given no reply, and at last, the rest would have got up, too, gathering
their gear with moaning and foul language and ill-tempered harking back to the long dirty bloody miles from the Imphal boxes to the Sittang Bend and the iniquity of having to do it again,
slinging their rifles and bickering about who was to go on point, and “Ah’s aboot ‘ed it, me!” and “You, ye bugger, ye’re knackered afower ye start, you!”, and “We’ll a’ git killed!”, and then
they would have been moving south. Because that is the kind of men they were.
Equal Protection
on August 5, 2010
in Current Events and Law
. 46 Comments
A U.S. District Court has overturned California’s Proposition 8 (the prohibition of same-sex marriage), which, says the court, violates both the Due Process and Equal Protection clauses of the
Fourteenth Amendment. I am very happy to hear that the courts are open to overturning legislation that violates the Fourteenth Amendment. Next up, Title VII of the 1964 Civil Rights Act!
The issues are pretty much identical. Here is the District Court’s reasoning in the California case (this is the Court’s summary of the plaintiffs’ position, which the Court endorses):
This is Just to Say…
on August 4, 2010
in Humor and Poetry
. 4 Comments
William Carlos Williams is a
really bad roommate and I’m
tired of sharing an
apartment with him.
A hat tip to my buddy Rosa, with hat tips once and twice removed to Tim Pierce and Doctor Memory.
Deflation Followup
on August 3, 2010
in Economics and Policy
. 36 Comments
Yesterday’s post on deflation prompted a flurry of comments and emails remarking on how much economists disagree. This, I think, misses the point. Indeed, what I was trying to emphasize was that we
all agree on the advantages of deflation as spelled out in Milton Friedman’s essay on The Optimum Quantity of Money. (I’ve put a quick summary of the key points here.) Therefore, the commentators who
are currently worried about deflation must fall into two categories: First, there are the ignoramuses, of whom there are plenty on all sides of all issues. Second, there are those who are clearly not
ignoramuses. Those in the latter camp have surely digested Friedman’s analysis, and understand the upside of deflation, but believe it is outweighed by some downside. It is frustrating to me that
many of those commentators have failed to explain exactly what downside they have in mind.
Deflating the Deflation Scare
on August 2, 2010
in Current Events, Economics and Policy
. 38 Comments
So apparently we’re all supposed to be worried these days about the specter of deflation. I am doubly baffled by this—I don’t see the problem in theory and I don’t see the problem in practice. Maybe
there’s something I’m missing.
Start with the theory: We learned long ago from Milton Friedman (who might have learned it from Irving Fisher) that a little bit of deflation is a good thing. That’s because deflation encourages
people to hold money, and people who hold money aren’t buying stuff, and when other people don’t buy stuff, there’s more stuff left over for you and me.
There are a couple of other ways to see this, though they all come down to the same thing. Here’s the first: falling prices are good for buyers and bad for sellers, but that all washes out. It washes
out in the aggregate because each gain to a buyer is offset by an equal and opposite loss to a seller. And it more or less washes out for each individual, because each of us sells roughly as much as
we buy (including the sale of our labor.) But over and above all that, deflation enriches the holders of money, because their money increases in value as it sits around. That part is pretty much (may
Milton’s ghost forgive me for putting it this way) a free lunch.
|
{"url":"http://www.thebigquestions.com/2010/08/","timestamp":"2014-04-18T11:24:14Z","content_type":null,"content_length":"128795","record_id":"<urn:uuid:fffd33b7-5996-43bc-bab5-cda7c7558365>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A characterisation of, and hypothesis test for, continuous local martingales
Owen D. Jones (Dept. of Mathematics and Statistics, University of Melbourne) David A. Rolls (Dept. of Psychological Sciences, University of Melbourne)
We give characterisations for Brownian motion and continuous local martingales, using the crossing tree, which is a sample-path decomposition based on first-passages at nested scales. These results
are based on ideas used in the construction of Brownian motion on the Sierpinski gasket (Barlow and Perkins 1988). Using our characterisation we propose a test for the continuous martingale
hypothesis, that is, that a given process is a continuous local martingale. The crossing tree gives a natural break-down of a sample path at different spatial scales, which we use to investigate the
scale at which a process looks like a continuous local martingale. Simulation experiments indicate that our test is more powerful than an alternative approach which uses the sample quadratic variation.
Full Text: Download PDF | View PDF online (requires PDF plugin)
Pages: 638-651
Publication Date: October 21, 2011
DOI: 10.1214/ECP.v16-1673
1. Andersen, T.G., Bollerslev, T., Diebold, F.X. and Labys, P., Modelling and Forecasting Realized Volatility. Econometrica 71, pp. 579-625, 2003. Math. Review 1958138
2. Andersen, T.G., Bollerslev, T. and Dobrev, D., No-arbitrage semi-martingale restrictions for continuous-time volatility models subject to leverage effects, jumps and i.i.d. noise: theory and
testable distributional implications. J. Econometrics 138, pp. 125-180, 2007. Math. Review 2380695
3. Athreya, K.B. and Ney, P.E., Branching Processes. Springer, 1972. Math. Review 0373040
4. Barlow, M., Random walks, electrical resistance and nested fractals. In: Elworthy, K.D. and Ikeda, N. (Eds.), Asymptotic problems in probability theory: stochastic models and diffusions on
fractals. Pitman, Montreal, pp. 131-157, 1993. Math. Review 1354153
5. Barlow, M.T. and Perkins, E.A., Brownian motion on the Sierpinski gasket. Probab. Theory Related Fields 79, pp. 543-623, 1988. Math. Review 0966175
6. Chainais, P., Riedi, R. and Abry, P., Scale invariant infinitely divisible cascades. In: Int. Symp. on Physics in Signal and Image Processing, Grenoble, France, 2003. Math. Review number not
7. Dambis, K.E., On decomposition of continuous submartingales. Teor. Verojatnost. i Primenen. 10, pp. 438-448, 1965. Math. Review 0202179
8. Decrouez, G. and Jones, O.D., A class of multifractal processes constructed using an embedded branching process. Preprint, 2011.
9. Dubins, L. and Schwarz, G., On continuous martingales. Proc. Nat. Acad. Sci. USA 53, pp. 913-916, 1965. Math. Review 0178499
10. Guasoni, P., Excursions in the martingale hypothesis. In: Akahori, J., Ogawa, S. and Watanabe, S. (Eds.), Stochastic Processes and Applications in Mathematical Finance. World Scientific, pp.
73-96, 2004. Math. Review 2202693
11. Heyde, C., A risky asset model with strong dependence through fractal activity time. J. Appl. Probab. 36, pp. 1234-1239, 1999. Math. Review 1746407
12. Hull, J. and White, A., The pricing of options on assets with stochastic volatilities. J. Finance 42, pp. 281-300, 1987. Math. Review number not available.
13. Jones, O.D. and Rolls, D.A., Looking for continuous local martingales with the crossing tree (Working Paper), 2009. arXiv:0911.5204v2 [math.ST]
14. Jones, O.D. and Shen, Y., Estimating the Hurst index of a self-similar process via the crossing tree. Signal Processing Letters 11, pp. 416-419, 2004. Math. Review number not available.
15. Knight, F.B., On the random walk and Brownian motion. Trans. Amer. Math. Soc. 103, pp. 218-228, 1962. Math. Review 0139211
16. Knight, F.B., Essentials of Brownian motion and diffusion. Mathematical Surveys 18, Amer. Math. Soc., 1981. Math. Review 0613983
17. Le Gall, J-F., Brownian excursions, trees and measure-valued branching processes. Ann. Probab. 19, pp. 1399-1439, 1991. Math. Review 1127710
18. Monroe, I., Processes that can be embedded in Brownian motion. Ann. Probab. 6, pp. 42-56, 1978. Math. Review 0455113
19. O'Brien, G.L., A limit theorem for sample maxima and heavy branches in Galton-Watson trees. J. Appl. Prob. 17, pp. 539-545, 1980. Math. Review 0568964
20. Pakes, A., Extreme order statistics on Galton-Watson trees. Metrika 47, pp. 95-117, 1998. Math. Review 1622136
21. Peters, R.T. and de Vilder, R.G., Testing the continuous semimartingale hypothesis for the S&P 500. J. Business and Economic Stat. 24, pp. 444-453, 2006. Math. Review number not available.
22. Revuz, D. and Yor, M., Continuous Martingales and Brownian Motion, 3rd Edition. Vol. 293 of Grundlehren der Mathematischen Wissenschaften (Fundamental Principles of Mathematical Sciences).
Springer-Verlag, Berlin, 1999. Math. Review 1725357
23. Rolls, D.A. and Jones, O.D., Testing for continuous local martingales using the crossing tree. Australian & New Zealand J. Stat. 53, pp. 79-107, 2011. Math. Review number not available.
24. Vasudev, R., Essays on time series: Time change and applications to testing, estimation and inference in continuous time models. Ph.D., Dept. Economics, Rice University, 2007.
25. Wald, A. and Wolfowitz, J., On a test whether two samples are from the same population. Ann. Math. Stat. 11, 147-162, 1940. Math. Review 0002083
This work is licensed under a
Creative Commons Attribution 3.0 License
|
{"url":"http://www.emis.de/journals/EJP-ECP/article/view/1673.html","timestamp":"2014-04-17T21:46:11Z","content_type":null,"content_length":"24144","record_id":"<urn:uuid:7ccd617a-d97f-41c8-b1dd-e81519ab63a6>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Curvature of Ellipse
Thanks, but I'm not sure I completely understand
This is what I got so far, please correct me if I'm wrong:
r'(t) x r''(t) = 12sin²(t) + 12cos²(t) = 12
||r'(t) x r''(t)|| = sqrt(144) = 12
||r'(t)||^3 = (9sin²(t) + 16cos²(t))^(3/2)
k(t) = 12/(9sin²(t) + 16cos²(t))^(3/2)
From what you've said, (3cos(t), 4sin(t)) = (3,0), so 3cos(t) = 3 => t = 0 or 2pi, and 4sin(t) = 0 => t = 0 or 2pi
Similarly (3cos(t), 4sin(t)) = (0,4), so 3cos(t) = 0 => t = pi/2 or 3pi/2, and 4sin(t) = 4 => t = pi/2
Sorry if this is a stupid question, but how do I apply that to k(t)?
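(For reference, a sketch of how those parameter values get used, assuming the parametrization r(t) = (3cos(t), 4sin(t)) implied above: once a point has been matched to a value of t, that value is substituted directly into k(t). At (3,0) we have t = 0, so k(0) = 12/(9sin²(0) + 16cos²(0))^(3/2) = 12/16^(3/2) = 12/64 = 3/16. At (0,4) we have t = pi/2, so k(pi/2) = 12/(9·1 + 16·0)^(3/2) = 12/27 = 4/9.)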
|
{"url":"http://www.physicsforums.com/showthread.php?t=278673","timestamp":"2014-04-19T12:45:15Z","content_type":null,"content_length":"31530","record_id":"<urn:uuid:6d3e8985-9f86-4902-b198-7c4bdb1171a3>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00171-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Resolving the cascade bottleneck in vortex-line turbulence
Seminar Room 1, Newton Institute
Both in many superfluid experimental situations and simulations of a 3D hard-core interaction model, it is found that the vortex line length in superfluid turbulence decays in a manner consistent
with classical turbulence. Two decay mechanisms have been proposed, Kelvin wave emission along lines and phonon radiation at small scales. It has been suggested that both would require a Kelvin wave
cascade, which theory says cannot reach the smallest scales due to a bottleneck. In this presentation we will discuss a new approach using a recent quaterionic formulation of the Euler equations,
coupled with the local induction approximation. Without the extra quaterionic terms It can be shown that if there are sharp reconnections, the above scenario occurs. But with the extra terms, the
direction of propagation of nonlinear waves is reversed, there is a cascade to the smallest scales that could create phonons, and the paradox can be resolved.
|
{"url":"http://www.newton.ac.uk/programmes/HRT/seminars/2008100109301.html","timestamp":"2014-04-17T09:51:17Z","content_type":null,"content_length":"4848","record_id":"<urn:uuid:b434fa2d-2ff1-41f8-8e81-0324e1ae8764>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
|
half_divide, native_divide
gentype half_divide ( gentype x,
gentype y)
gentype native_divide ( gentype x,
gentype y)
half_divide computes x / y. This function is implemented with a minimum of 10 bits of accuracy, i.e. an ULP value less than or equal to 8192 ulp.
native_divide computes x / y over an implementation-defined range. The maximum error is implementation-defined.
The vector versions of the math functions operate component-wise. The description is per-component.
The built-in math functions are not affected by the prevailing rounding mode in the calling environment, and always return the same value as they would if called with the round to nearest even
rounding mode.
For any specific use of a function, the actual type has to be the same for all arguments and the return type, unless otherwise specified.
The functions with the native_ prefix may map to one or more native device instructions and will typically have better performance compared to the corresponding functions (without the native_ prefix). The accuracy (and in some cases the input range(s)) of these functions is implementation-defined.
gentype indicates that the functions can take float, float2, float3, float4, float8 or float16 as the type for the arguments.
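As an illustration, here is a small hypothetical kernel (the kernel name and argument layout are examples only, not part of the specification) that divides two buffers element-wise. Since native_divide has implementation-defined accuracy, ordinary division or half_divide may be preferable when the accuracy bounds above are required.

__kernel void elementwise_divide(__global const float4 *num,
                                 __global const float4 *den,
                                 __global float4 *out)
{
    size_t i = get_global_id(0);
    /* Fast division with implementation-defined accuracy; suitable for
       error-tolerant workloads such as graphics. */
    out[i] = native_divide(num[i], den[i]);
}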
Copyright © 2007-2011 The Khronos Group Inc. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and/or associated documentation files (the "Materials"), to
deal in the Materials without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Materials, and to permit
persons to whom the Materials are furnished to do so, subject to the condition that this copyright notice and permission notice shall be included in all copies or substantial portions of the
|
{"url":"http://www.khronos.org/registry/cl/sdk/1.2/docs/man/xhtml/divide.html","timestamp":"2014-04-18T13:23:00Z","content_type":null,"content_length":"11721","record_id":"<urn:uuid:292255b2-83c8-4848-8c2e-57c6d92d970c>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00471-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Power function story problem!
July 24th 2010, 08:17 PM #1
I have several problems like this one in my book. It's asking me to find out an equation using a table of values.
"A precalculus class launches a model rocket on the football field. the rocket fires for two seconds. Each second thereafter they measure its altitude, finding these values.
t(s) h(ft)
At what time t do you predict the rocket will hit the ground? (Round to the nearest hundredth)"
I already know this is a power function, because the rocket will eventually hit the ground, something the exponential doesn't do. I just don't see how to use 234 feet, 220 feet, and 174 feet.
I'm confused out of my mind, and if I could get some guidance on this one problem I'll certainty be able to figure out the slew of other ones like this!
Thanks in advance ya'll
What did you plan to do with the 166 and the 216?
What tools do you get? Can you use the five points to find the maximum? It's unlikely to happen at a measurement point. I used a method that suggests 4.12 sec. What do you get?
Knowing 4.12 sec MAY suggest a landing time of 8.24 sec, but that might be ignoring the first two seconds.
Another method suggests a landing time of 8.103 seconds.
You must figure out how to use your data.
That's the issue: I'm at a total loss on how to use the data. Any guidance would be great, since the book is very shady on this and I have other problems that I could use this information with to figure them out.
Is this the only approach, as in I cannot "show work" for this sort of problem?
Here's what my calculator gave me:
Can I write this out and show work?
first off, remember that the independent variable is t, not x ... and the domain of t is from t = 2 until t = whenever the rocket hits the ground.
one method to do this by hand is to use at least three of the points to get three equations ...
$a(2^2) + b(2) + c = 166$
$a(3^2) + b(3) + c = 216$
$a(6^2) + b(6) + c = 174$
now, solve the system for the coefficients a, b, and c.
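For what it's worth, here is one way to carry that through (a sketch, assuming the altitude follows a single quadratic for $t \ge 2$): subtracting the first equation from the second gives $5a + b = 50$; subtracting the second from the third gives $27a + 3b = -42$, i.e. $9a + b = -14$. Subtracting those gives $4a = -64$, so $a = -16$, $b = 130$, $c = -30$, and

$h(t) = -16t^2 + 130t - 30$,

which happens to reproduce all five measured values. Setting $h(t) = 0$ (equivalently $16t^2 - 130t + 30 = 0$) and taking the larger root,

$t = \dfrac{130 + \sqrt{130^2 - 4(16)(30)}}{32} = \dfrac{130 + \sqrt{14980}}{32} \approx 7.89$ seconds.

Other fitting choices, like the ones mentioned earlier in the thread, give somewhat different landing times.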
Would using the first two work? The question (another similar one) asks to use just two:
The rate at which water flows out of a hose is function of the water pressure at the faucet. Suppose that these flow rates have been measured (psi is "pounds per square inch").
x(psi) y(gal/min)
4 5.0
9 7.5
16 10.0
25 12.5
36 15.0
Based on physical considerations, the flow rate is expected to be a power function of the pressure. Use the first and second data points to find the particular equation of the power function.
what is the form of a power function?
I know what the form of a power function is; I've been spending quite some time on this problem. Nowhere in the book does it show how a square in x values can relate to linear y values.
To answer your question, f(x)=ax^b
The rate at which water flows out of a hose is function of the water pressure at the faucet. Suppose that these flow rates have been measured (psi is "pounds per square inch").
x(psi) y(gal/min)
4 5.0
9 7.5
16 10.0
25 12.5
36 15.0
Based on physical considerations, the flow rate is expected to be a power function of the pressure. Use the first and second data points to find the particular equation of the power function.
$y = ax^b$
$5 = a \cdot 4^b$
$7.5 = a \cdot 9^b$
solve the system for a and b. this should get you started ...
$\displaystyle \frac{7.5}{5} = \frac{a \cdot 9^b}{a \cdot 4^b}$
The textbook says:
"The rate at which water flows out of a hose is function of the water pressure at the faucet. Suppose that these flow rates have been measured (psi is "pounds per square inch").
x(psi) y(gal/min)
4 5.0
9 7.5
16 10.0
25 12.5
36 15.0
Based on physical considerations, the flow rate is expected to be a power function of the pressure. Use the first and second data points to find the particular equation of the power function."
So I took their method, but what comes out is garbage:
one more time ...
$y = ax^b$
$5 = a \cdot 4^b$
$7.5 = a \cdot 9^b$
solve the system for a and b. this should get you started ...
$\displaystyle \frac{7.5}{5} = \frac{a \cdot 9^b}{a \cdot 4^b}$
Since there are two variables, won't it be impossible to find them both?
what happens to the value of "a" when you divide the equations as set up?
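For completeness, a sketch of where that leads: dividing cancels $a$, leaving

$\dfrac{7.5}{5} = \left(\dfrac{9}{4}\right)^b$, i.e. $1.5 = 2.25^b$.

Since $2.25 = 1.5^2$, this gives $b = \tfrac{1}{2}$. Substituting back, $a \cdot 4^{1/2} = 5$, so $a = 2.5$ and the power function is

$y = 2.5\,x^{1/2} = 2.5\sqrt{x}$,

which also matches the remaining data points (for example, $x = 36$ gives $y = 2.5 \cdot 6 = 15$).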
|
{"url":"http://mathhelpforum.com/pre-calculus/151901-power-function-story-problem.html","timestamp":"2014-04-18T05:34:56Z","content_type":null,"content_length":"82154","record_id":"<urn:uuid:4e4e73ac-dc95-4dd8-8953-0e66f6d2f031>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
|
st: RE: AW: How to form a vector for each line
st: RE: AW: How to form a vector for each line
From "Nick Cox" <n.j.cox@durham.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: AW: How to form a vector for each line
Date Thu, 15 Oct 2009 17:57:54 +0100
I agree with the advice to use Mata.
Some readers may find Martin's opening sentence a little puzzling,
however. In Stata, as compared with Mata, there is no difficulty in
setting up row or column vectors, which are just matrices with one row
or column. In this sense, Stata has had the idea of vector for several
versions now.
Martin Weiss
Vectors are not really a well-known concept in Stata, although naturally
can have them in -mata-. I have never missed them, though, as you can do
what you want to do without them most of the time. What is it you want
input byte( a b c)
list, noobs
capt which tomata
if _rc ssc inst tomata
tomata a b c
Steven Ho
Suppose I have a data like this:
a b c
What I want is something like this, of course the following command
gen arrayZ[_n]=( a[_n], b[_n], c[_n])
so that arrayZ[3] for example would be (2,5,6)
ie. when I invoke Z[t] it will give me a VECTOR of t-th row
I need this because I will do a whole bunch of operations to Z[t]Z[t]'
How to achieve this?
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2009-10/msg00704.html","timestamp":"2014-04-16T07:46:25Z","content_type":null,"content_length":"7311","record_id":"<urn:uuid:4d92f406-ac6b-4dd9-a6ec-404e5dfa9c83>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Kentucky Department of Education
Mathematics Deconstructed Standards
Published: 10/28/2013 10:21 AM
The COMPLETE set of deconstructed standards for Mathematics is now available! These were created collaboratively by teachers and leaders across the Commonwealth. Please note that these
deconstructions have been reviewed, edited, and revised based on feedback from internal and external reviewers.
|
{"url":"http://education.ky.gov/curriculum/math/Pages/Mathematics-Deconstructed-Standards.aspx","timestamp":"2014-04-18T13:07:41Z","content_type":null,"content_length":"44951","record_id":"<urn:uuid:f58fc3c4-9058-4544-9f51-6334114ff557>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00321-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematical Knowledge Management 2007
Programme Committee, Organization
Important Dates
1 March 2007:
Deadline for electronic submissions of title and abstract
4 March 2007:
Deadline for electronic submissions of full papers
2 April 2007:
Notification of acceptance/rejection
13 April 2007:
Camera ready copies due
13 May 2007:
Early Registration
27--30 June 2007:
Conference at RISC, Hagenberg, Austria
Relevant Links
Mathematical Knowledge Management 2007
27 - 30 June 2007 -- RISC, Hagenberg, Austria -- RISC Summer 2007
Mathematical Knowledge Management is an innovative field in the intersection of mathematics and computer science. Its development is driven on the one hand by the new technological possibilities
which computer science, the internet, and intelligent knowledge processing offer, and on the other hand by the need for new techniques for managing the rapidly growing volume of mathematical
The conference is concerned with all aspects of mathematical knowledge management. A (non-exclusive) list of important areas of current interest includes:
• Representation of mathematical knowledge
• Repositories of formalized mathematics
• Diagrammatic representations
• Mathematical search and retrieval
• Deduction systems
• Math assistants, tutoring and assessment systems
• Mathematical OCR
• Inference of semantics for semi-formalized mathematics
• Digital libraries
• Authoring languages and tools
• MathML, OpenMath, and other mathematical content standards
• Web presentation of mathematics
• Data mining, discovery, theory exploration
• Computer Algebra Systems
• Collaboration tools for mathematics
Invited Speakers:
Neil J. A. Sloane (AT&T Shannon Labs, Florham Park, NJ, USA): The On-Line Encyclopedia of Integer Sequences
Peter Murray Rust (University of Cambridge, Dep. of Chemistry, UK): Mathematics and scientific markup
The conference proceedings are available (jointly with Calculemus 2007) as Springer LNAI 4573, Towards Mechanized Mathematical Assistants.
You can find some pictures of Calculemus/MKM from the Calculemus web site at http://www.risc.uni-linz.ac.at/about/conferences/Calculemus2007/?content=pics.
|
{"url":"http://www.cs.bham.ac.uk/~mmk/events/MKM07/","timestamp":"2014-04-20T21:26:09Z","content_type":null,"content_length":"6913","record_id":"<urn:uuid:6a86a587-fac9-4ee9-858b-6cf6a0691995>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
|
User David Handelman
Professor, mathematics department, University of Ottawa
interested in ordered K$_0$ groups, Choquet theory, matrices, algebraic and functional analytic aspects of Markov chains, random walks, and their friends, classification of ergodic transformations
(all these are basically the same subject!), .... See ArXiv for some recent preprints and papers.
Also interested in postal history, especially Canadian, and also worldwide avis de réception. See http://www.rfrajola.com/mercury/mercury.htm for some of my articles and exhibits.
|
{"url":"http://mathoverflow.net/users/42278/david-handelman","timestamp":"2014-04-19T04:29:08Z","content_type":null,"content_length":"59039","record_id":"<urn:uuid:76f33df4-a595-408b-9a48-aa65cae8a5ec>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
|