| content (stringlengths 86–994k) | meta (stringlengths 288–619) |
|---|---|
A better way to compute the mapping spaces of the category of spans in an enriched tensored category?
Let X be a tensored and cotensored V-category, where V is a fixed complete, cocomplete, closed symmetric monoidal category.
Define $C:=Span(X)$ to be the category of spans in X (this is the functor category $X^{Sp}$ where $Sp$ is the walking span). We notice that $C$ is automatically "tensored" over $V$ (by computing the
tensor product pointwise). Then C has a natural V-enriched structure given as follows: $Map_C(a,b)$ is the object of $V$ representing the functor $M_{ab}(\gamma):= Hom_C(\gamma \otimes a, b)$ (such
an object exists by the adjoint functor theorem and since the tensor product is cocontinuous).
We can give another description of the mapping space as: $$Map_C(a,b)=Map_X(A,B)\underset{Map_X(A,B'\times B'')}{\times} (Map_X(A',B')\times Map_X(A'',B''))$$
Where $a=A'\leftarrow A \to A''$ and $b=B'\leftarrow B \to B''$.
To prove that these two descriptions are equivalent, I applied Yoneda's lemma to the second definition of $Map_C(a,b)$, which gives us $$Hom_V(Q,Map_C(a,b))=Hom_X(Q\otimes A, B)\underset{Hom_X(Q\otimes A,B'\times B'')}{\times}(Hom_X(Q\otimes A',B')\times Hom_X(Q\otimes A'',B''))$$
By the definition of the fiber product in the category of sets, this is precisely the set of triples of arrows $(Q\otimes A\to B,(Q\otimes A'\to B',Q\otimes A''\to B''))$ making the natural transformation diagram in $X$ commute. This construction is obviously functorial in $Q$ for fixed $a$ and $b$.
Surely there must be a better way to do this, presumably without relying so heavily on the definition of the fiber product in the category of sets. What does such a proof look like? I assume there
must be a simpler proof, because this fact was asserted as though it were trivial in a book I'm reading.
Question: What's a slicker way to prove that the two definitions are equivalent?
2 Answers
Because of the way you've chosen to write your second description, I don't think you're going to be able to avoid using something about the fiber product in Set. But there is a general
fact here: any V-functor V-category [A,X], where A and X are V-categories, inherits any V-enriched (weighted) limits that X has, constructed pointwise. Tensors are a particular kind of
V-weighted limit, and your category C is the V-functor V-category [V[Sp],X], where V[-] denotes the free V-category on an ordinary category, whose hom-objects are coproducts of copies of
the unit object of V. The property that V-valued homs represent the functor $M_{a b}$ is a reformulation of the definition of tensors as a V-enriched limit, so the question then simply
becomes, why is your second description an equivalent description of the canonical V-enrichment structure on C?
Now the V-valued homs of such a functor category are "always" given by writing down the Set-valued homs as a limit of homsets in Set and reinterpreting it as a limit of hom-objects in V. The universally applicable way to do this is with an end, as Finn says, but any other way that is equivalent in Set will be equivalent in V as well, for the same Yoneda reason as in your proof. So the question becomes simply, is the set of natural transformations between two spans (considered as functors out of Sp) described by the analogous pullback of homsets in Set? And there you have to know something about pullbacks in Set.
You switched C and X, but I'll forgive you because this is such a good answer. ;) – Harry Gindi Aug 5 '10 at 16:56
Thanks, I fixed it. – Mike Shulman Aug 5 '10 at 18:01
Not quite an answer, but I hope it helps:
Your second definition looks (somewhat) like the usual definition of the V-valued hom of V-functors $[Sp,X](a,b) = \int_{A \in Sp} X(aA, bA)$. If that's right then $$ V(\gamma, [Sp,X](a,b)) \cong \int_A V(\gamma, X(aA,bA)) \cong \int_A X(\gamma \otimes aA, bA) = [Sp,X](\gamma \otimes a, b) $$ and the first definition follows by applying $V(I,-)$. One way to prove the converse would be to show that $\operatorname{Nat}(?\otimes a, b) \cong \operatorname{Dinat}(?, X(a-,b-))$ in [V,Set], which is probably true, but my dinaturality-fu isn't what it should be, so that's as far as I can go.
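For what it's worth, unwinding that end over the walking span $Sp$ (three objects $0,1,2$ with the two arrows $0\to 1$ and $0\to 2$) recovers the pullback description in the question, assuming $X$ has the binary products used there and writing $X(-,-)$ for $Map_X$: the end is the equalizer $$\int_{i\in Sp} X(ai,bi)\;=\;\operatorname{eq}\Big(X(A,B)\times X(A',B')\times X(A'',B'')\rightrightarrows X(A,B')\times X(A,B'')\Big),$$ where one of the parallel maps post-composes $X(A,B)$ with $B\to B'$ and $B\to B''$ and the other pre-composes $X(A',B')$ and $X(A'',B'')$ with $A\to A'$ and $A\to A''$; that equalizer is exactly $Map_X(A,B)\underset{Map_X(A,B'\times B'')}{\times}\big(Map_X(A',B')\times Map_X(A'',B'')\big)$.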
|
{"url":"http://mathoverflow.net/questions/34525/a-better-way-to-compute-the-mapping-spaces-of-the-category-of-spans-in-an-enrich","timestamp":"2014-04-17T01:14:43Z","content_type":null,"content_length":"59369","record_id":"<urn:uuid:22ef6390-e0ce-4dd0-ad9b-6d46fa32cff0>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
|
North Hills, NY Calculus Tutor
Find a North Hills, NY Calculus Tutor
...Algebra 2 is vital for students’ success on the ACT, SAT 2 Math, and college mathematics entrance exams. Microsoft Excel is an extremely powerful spreadsheet tool and may initially seem
confusing, but with a little practice, can perform many diverse functions. I am confident that I will be able...
26 Subjects: including calculus, writing, statistics, geometry
...Whether coaching a student to the Intel ISEF (2014) or to first rank in their high school class, I advocate a personalized educational style: first identifying where a specific student's
strengths and weaknesses lie, then calibrating my approach accordingly. Sound interesting? I coach students ...
32 Subjects: including calculus, reading, physics, geometry
...My services also cover college/high school math and science courses. Since I have extensive experience working with local students attending Columbia, NYU, Hunter, and other CUNY schools, I am
quite familiar with the testing style of specific professors. Lastly, I am flexible with times and meeting locations.
24 Subjects: including calculus, chemistry, physics, biology
...When I am not busy with school or leisure, I will be helping you or your child grasp a difficult subject or improve a test score. I encourage my students to talk me through their thought
process when tackling a problem, so that I can pinpoint exactly what is preventing them from arriving at the ...
22 Subjects: including calculus, chemistry, physics, geometry
...I also passed actuarial exams that dealt heavily in probability theory. I am currently working as an Actuarial associate at an insurance company. I currently tutor Exams P, FM, MFE, and MLC.
15 Subjects: including calculus, algebra 1, algebra 2, finance
|
{"url":"http://www.purplemath.com/North_Hills_NY_Calculus_tutors.php","timestamp":"2014-04-18T00:52:10Z","content_type":null,"content_length":"24205","record_id":"<urn:uuid:71e8113c-998b-47e4-8dc9-a7ecb7b964a5>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts by anonymous
Total # Posts: 68,190
The old stone bridge across Rugen Bay is one of my favorite places to play. I toss pebbles for Mom, and a pebble for Dad, And a rock for horses and chickens we had. I watch as each of the stones
makes rings like songs that each of memories sings. For my wife-for my dear and pr...
Season of mists and mellow fruitfulness! Close bosom-friend of the maturing sun; Conspiring with him how to load and bless With fruit the vines that round the hatch-eaves run; to bend with apples the
moss'd cottage-trees, And fill all fruit with ripeness to the core; To sw...
advanced algebra
jiskha. com/display.cgi?id=1296190762
advanced algebra
mark has a certain number of dimes in his pocket. If the dimes were pennies, he would have $1.08 less than he has now. how many dimes does he have in his pocket?
Isaac made a mistake in his checkbook. He wrote a check for $8.98 to rent a video game but mistakenly recorded it in his checkbook as an $8.98 deposit. Represent each transaction with a rational
number, and explain the difference between the transactions.
Help me with this math ms sue.
A B B C A C C A A C C C
what is a special noun
HELP heres a math problem ; give me a formula for it please;;; a carpenter has several boards of equal length. he cuts 3/5 of each board.after cutting the boards , he notices that he has enough
pieces left over to make up the same length as 4 of the origional boards. how many ...
asked to research a famous or recent case of an offender (serial murder, murder, sex offender, etc.) and complete an assessment and develop a plan of action for treatment. So I assuming that I need
to develop a plan for his mental disorder
I need to conduct an assessment and develop a plan of action for treatment on a famous serial murder or murder. I was thinking of Charles Manson. I am not sure where to start
The bee is not afraid of me, I know the butterfly; the pretty people in the woods Receive me cordially. The brooks laugh louder when I come, The breezes madder play. Wherefore, mine eyes, thy silver
mists? Wherefore, O summer's day? 1. what makes this poem a lyric? a. It t...
We met by yonder cherry tree. I glanced at her she winked at me. I offered her a slice of pie. How could I know our love would die? After a bite, her watery eyes Gazed at me; she gave a cry And
gagged; she turned and ran away. I have not seen her since that day. You see I like...
Which sentence is a commonplace assertion?
Which sentence is a commonplace assertion? A. We all know that Texans are mad about football, and the players usually get the attention. B.Known as the Allen Eagles Escadrille (French for
"squadron"), Allen's band is considered the largest in the country-high sch...
find the simple interest on $1200 at 8% for six months?
An Ideal solution is formed by mixing 0.26 mol of pentane (vapor pressure + 511 torr) with 0.34 mol of hexane (vapor pressure = 150 torr). What is the total vapor pressure of the ideal solution?
find sale price of an item if it originally costs $16.95, is on sale for 22% off and sales tax is 6.5%?
What is the theme ? Come outdoors to view The truth of Flowers blooming Amid poverty A. Frogs dive into the sound of water, not water itself B. even in poverty, people can take comfort in the beauty
of nature. C. Flowers are truthful and honest D. The poet dreams that he becom...
What's 1/3 + 5/6?
What happens in this haiku? Sick on my journey, only my dreams will wander these desolate moors. A. The poet is ill and cannot hike across the moors B. the poet sees flowers and takes heart in their
beauty C. The poet hikes across the moors and then becomes ill D. The poet dre...
Cybersafety, Really Please Help Me! :(
Both are correct.
LA check answers?
Midnight by Sara Holbrook When it's Sunday and it's midnight, the weekend put back in its chest, 5 the toys of recreation, party times and needed rest. When I lie in wait for Monday 10 to grab me by
the ear, throw me at the shower, off to school and when I hear the ...
Math Help
I have to do this question in several ways like any possible way to do it so I've found one mathematical way, I can't do it like a physics question cause it's math. a bike travels at a constant speed
of 10km/h. a bus starts 175 km behind the bike and catches up to ...
Math 2 questions
Thank you very much madame. :-)
Math 2 questions
14. 540.
Math 2 questions
13. Trapezoid.
Math 2 questions
13. Name the quadrilateral with exactly one pair of parallel sides. kite parallelogram rhombus trapezoid 14. What is the sum of the interior angles for a pentagon? 900º 540º 720º 108º
Language arts: check my answers
Thank you very much madame. U 2 Writeacher. :)
Language arts: check my answers
so i believe the answer would be: B.
Language arts: check my answers
It is a firmly held belief or opinion.
Language arts: check my answers
@writeacher "Persistence" means: firm or obstinate continuance in a course of action in spite of difficulty or opposition. @Ms. Sue #5. is it C? cause i couldn't chose between the two. #6.idk about
this one.
Language arts: check my answers
1. Another word for buoyed is tired. bothered. lifted.*** 2. If someone is illiterate, he is unable to walk on his own. see without glasses. read and write.*** operate a car. 3. Something that is
nutritious is hard to find. good for you.*** bad for you. expensive. 4. When some...
LA plz,plz help
Thank you very much.
LA plz,plz help
A. I can't see it being anything other than that.
LA plz,plz help
oh. lol that mas a dumb mistake.
LA plz,plz help
would it be B? :)
Plz help :(
are they right???
LA plz,plz help
1. ? 2. apostrophe 3.D 4.C 5.B
LA plz,plz help
#2 would be answer A.
LA plz,plz help
Choose the type of punctuation that is missing from the following sentence. My best friends new dog won t stop barking. apostrophe semicolon colon hyphen Choose the type of punctuation that is
missing from the following sentence. At this store, youre invited to bring your...
Math help, anyone?
thank u.
Math help, anyone?
Scalene triangle: A triangle with no congruent sides Isosceles triangle: A triangle with at least two congruent sides Equilateral triangle: A triangle with three congruent sides
Math help, anyone?
Would it be #1 or #3?
Math help, anyone?
11. Name the triangles that are classified by sides. right, scalene, isosceles scalene, isosceles, equilateral acute, right, obtuse obtuse, isosceles, acute
8th-g Science: Need help to check answer!!!
I believe Q1 would be B.
8th-g Science: Need help to check answer!!!
I disagree with 3.
pre cal
What would be the difference quotient,f(x+h)-f(x)/h , for the given function? (f(x) = a, where x is the variable, h is the change in the variable, and a is a constant.)
so could a media trope be gender identity, sexuality, violence, language
What are media tropes in films
Mickey has 1/3 the number of books that Minnie does. If they had 1 less book between them, they would have 27 books. The question, of course, is how many books each has.
I mean after WWII. One of my tests said that Jews believed God had given them the right to claim Israel, which I had no idea about.
Did Jews believe that God had given them Israel, and thus felt that they had the right to claim the land?
final retail price is (40*3.20*1.80)/(40*.8) = $7.20 so, what percent of that is $1?
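Completing the arithmetic in the post above (the underlying question isn't shown in this listing, so only the last step is filled in): $1 out of $7.20 is 1/7.20 ≈ 0.139, i.e. roughly 13.9%.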
assuming simple interest, I = PRT Total amount = P + I So, what do you get?
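As a worked instance of the formula just quoted (the underlying problem isn't shown in this listing either, so the numbers are only illustrative; they use the "$1200 at 8% for six months" question that appears earlier on this page): I = P × R × T = 1200 × 0.08 × 0.5 = $48, and the total amount = P + I = $1200 + $48 = $1248.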
Math(check answer)
A bag contains 9 green marbles and 11 white marbles. You select a marble at random. What are the odds in favor of picking a green marble? A. 9:20 B. 2:9 C. 11:9 D. 9:11 (I think its a.)
How many ways can 3 students be chosen from a class of 16 to represent their class at a banquet? A. 3,360 B. 1,680 C. 1,120 D. 560
The probability of a certain hockey player making a goal after hitting a slap shot is 1/5. How many successful slap shots would you expect her to make after 120 attempts? A.5 B.20 C.24 D.60
A basket contains the following pieces of fruit: 3 apples, 2 oranges, 2 bananas, 2 pears, and 5 peaches. Jack picks a fruit at random and does not replace it. Then Bethany picks a fruit at random.
What is the probability that Jack gets a peach and Bethany gets an orange? A. 10...
A basket contains the following pieces of fruit: 3 apples, 2 oranges, 2 bananas, 2 pears, and 5 peaches. Jack picks a fruit at random and does not replace it. Then Bethany picks a fruit at random.
What is the probability that Jack gets a peach and Bethany gets an orange? A. 10...
How is this trinomial prime? 5x^2+x-2
You have six $1 bills, eight $5 bills, two $10 bills, and four $20 bills in your wallet. You select a bill at random. Without replacing the bill, you choose a second bill. What is P($1,then $10)? A.
77/190 B. 3/100 C.3/95 D.2/5
A bag contains 7 green marbles, 9 red marbles, 10 orange marbles, 5 brown marbles, and 10 blue marbles. You choose a marble, replace it, and choose again. What is P(red,then blue)? A. 77/164 B.19/41
C.90/1681 D. 45/41
If 2.5 grams of citric acid (H3C6H5O7) reacts, how many grams of carbon dioxide will be generated?
If 2.5 grams of citric acid (H3C6H5O7) reacts, how many moles of carbon dioxide will be generated?
If 2.5 grams of citric acid (H3C6H5O7) reacts, how many moles of sodium bicarbonate will be consumed?
Math(Check answer)
Jennifer writes the letters M-O-N-T-A-N-A on cards and then places the cards in a hat. What are the odds in favor of picking a vowel? 3:7 4:7 3:4 4:3 I think its 3:7 And my other question is..
Jennifer writes the letters M-O-N-T-A-N-A on cards and then places the cards in a ha...
city a is 300 miles directly north of city b assuming the earth to be a sphere of radius 4000 miles determine the difference in latitude of the two cities make answers accurate to the nearest second
A 0.80 10^3 kg Toyota collides into the rear end of a 2.6 1^03 kg Cadillac stopped at a red light. The bumpers lock, the brakes are locked, and the two cars skid forward 5.0 m before stopping. The
police officer, knowing that the coefficient of kinetic friction between tires a...
what are the verbs A Superblade was used at the 2 o'clock position, and the anterior chamber was entered
Bacteria are classified according to three basic shapes. Name them and give the scientific. name for the shape. 1. Round pairs - Diplococcus 2. Rod shaped - Bacilli 3. ?? I need another one. Please
help me.
A spaceship with a mass of 4.60 10^4 kg is traveling at 6.30 10^3 m/s relative to a space station. What mass will the ship have after it fires its engines in order to reach a speed of 7.84 10^3 m/s?
Assume an exhaust velocity of 4.86 10^3 m/s.
help i have no idea!!!!!!
Time (s): 0 1 2 3 4 5 7 8 9 10
Δ X (m/s) Position: 0 3 7 12 17 24 22 17 14 14
1. Graph the table above (include title, label the independent (x) and dependent variable (y), use the proper scale) 2. Identify on your scale the positive acceleration and the negative accelerati...
Bacteria are classified into 3 basic shapes, name them and give the scientific name. 1. Round pairs - Diplococcus 2. Rod shaped - Bacilli I got 2 can someone help me with one more?
In the figure below, a 3.7 kg box of running shoes slides on a horizontal frictionless table and collides with a 2.0 kg box of ballet slippers initially at rest on the edge of the table, at height h
= 6.9 m. The speed of the 3.7 kg box is 5.0 m/s just before the collision. If ...
Bacteria are classified into three basic shapes. Name them and give the scientific name for the shape. 1. Diplococcus - Round and paired. 2. Bacilli - Rod shaped. 3. I don't know which else shape
they would be classified into.
All of the following are characteristics of an effective study environment EXCEPT:
A baseball is thrown horizontally from the top of a cliff 50 meters up. The initial velocity of the ball is 10 m/s. How far from the base of the cliff will the ball land?
In order to convert a tough split in bowling, it is necessary to strike the pin a glancing blow as shown in Fig. 9-47. The bowling ball, initially traveling at 13.0 m/s, has four times the mass of a
pin and the pin flies off at 80° from the original direction of the ball. ...
Boy throws a penny into a wishing well at a velocity of 2 m/s and at an angle of 45 degrees below horizontal. If the penny takes 1.3 seconds to hit the water, how deep is the well?
Physic assignment
A 1.5okg ball whose velocity is 8mls southward collides with a 2kg ball traveling along the same path with a velocity of 3mls southward.if the velocity of the 2kg ball is 7.29mls southward after
impact.what is the velocity of the 1.50kgball?
Math help
I need help on these problems. There over solving equations by factoring. 1.4x^2=8
Does anyone know what the symbol for works cited is called? It looks like a colon with a small line in the middle.
Post a null hypothesis that would use a t test statistical analysis.. Use the same hypothetical situation taken in the t test hypothesis, and turn it into a null hypothesis using a one-way ANOVA
analysis and a two-way ANOVA.
A telephone company's records indicate that private customers pay on average $17.10 per month for long-distance telephone calls. A random sample of 10 customers' bills during a given month produced a
sample mean of $22.10 expended for long-distance calls and a sample v...
Math help
Would #4 be s=1 and s=2?
Math help
I need help on these problems. There over solving equations by factoring. 1. 4x^2=9 2. M^2-36=16M 3. R^2+9=10r 4. 6s^2=17s-12
Oops...forgot the parentheses. Thanks Damon and Steve!
x^4 + 10x + 18 - x^4 - 3x + 2 Is this 13x + 16?
Post a null hypothesis that would use a t test statistical analysis.. Use the same hypothetical situation taken in the t test hypothesis, and turn it into a null hypothesis using a one-way ANOVA
analysis and a two-way ANOVA.
sales for the first four months of the year were as follows: jan.5282 feb.4689 mar.3029 apr.6390 find he average monthly sales.(round off your answer to the nearest dollar.
English Question
If I post a definition from a dictionary and say which dictionary it is from, would that be plagiarizing?
Two satellites are in circular orbits around the earth. The orbit for satellite A is at a height of 418 km above the earth s surface, while that for satellite B is at a height of 731 km. Find the
orbital speed for (a) satellite A and (b) satellite B.
|
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=anonymous&page=16","timestamp":"2014-04-19T21:06:39Z","content_type":null,"content_length":"27744","record_id":"<urn:uuid:108e7a6c-1a4b-47a0-a953-920694704a6e>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00557-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wakefield, MA Statistics Tutor
Find a Wakefield, MA Statistics Tutor
...I would be happy to help you by proofreading your work! Although I am not a certified instructor, I have been studying yoga for over 10 years, and taught a free class weekly at my last
workplace. I have knowledge in the philosophy behind yoga, Ayurvedic traditions, and Hinduism.
18 Subjects: including statistics, English, writing, GRE
...Most high schools these days don't offer a course solely in trigonometry; rather, trig is typically integrated into a pre-calculus, algebra 2, or geometry course. I studied literature as an
undergraduate at MIT and Harvard, and I'm currently an Assistant Editor at the Boston Review -- a national...
47 Subjects: including statistics, English, reading, chemistry
I am a retired university math lecturer looking for students, who need experienced tutor. Relying on more than 30 years experience in teaching and tutoring, I strongly believe that my profile is
a very good fit for tutoring and teaching positions. I have significant experience of teaching and ment...
14 Subjects: including statistics, calculus, ACT Math, algebra 1
...I have a strong background in Math, Science, and Computer Science. I currently work as software developer at IBM. When it comes to tutoring, I prefer to help students with homework problems or
review sheets that they have been assigned.
17 Subjects: including statistics, geometry, algebra 1, economics
...If you fail to cancel at least 24 hours prior to the lesson, you will be charged a cancellation fee of 1/2 hour.I was nominated for a University Scholar award during my undergraduate studies
due to the highest quality of writing in my essays. During my law studies, I was the editor of the International Law Journal. Additionally, I have published articles available online.
67 Subjects: including statistics, reading, English, calculus
Related Wakefield, MA Tutors
Wakefield, MA Accounting Tutors
Wakefield, MA ACT Tutors
Wakefield, MA Algebra Tutors
Wakefield, MA Algebra 2 Tutors
Wakefield, MA Calculus Tutors
Wakefield, MA Geometry Tutors
Wakefield, MA Math Tutors
Wakefield, MA Prealgebra Tutors
Wakefield, MA Precalculus Tutors
Wakefield, MA SAT Tutors
Wakefield, MA SAT Math Tutors
Wakefield, MA Science Tutors
Wakefield, MA Statistics Tutors
Wakefield, MA Trigonometry Tutors
Nearby Cities With statistics Tutor
Arlington, MA statistics Tutors
Belmont, MA statistics Tutors
Burlington, MA statistics Tutors
Chelsea, MA statistics Tutors
Danvers, MA statistics Tutors
Lexington, MA statistics Tutors
Lynnfield statistics Tutors
Malden, MA statistics Tutors
Melrose, MA statistics Tutors
Reading, MA statistics Tutors
Saugus statistics Tutors
Stoneham, MA statistics Tutors
Wilmington, MA statistics Tutors
Winchester, MA statistics Tutors
Woburn statistics Tutors
|
{"url":"http://www.purplemath.com/Wakefield_MA_Statistics_tutors.php","timestamp":"2014-04-20T19:18:23Z","content_type":null,"content_length":"24194","record_id":"<urn:uuid:dba3ae03-ec31-41b7-accb-b2b8355e9903>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: Collapse with sum function and missing values
Re: st: Collapse with sum function and missing values
From Kit Baum <baum@bc.edu>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Collapse with sum function and missing values
Date Wed, 10 Feb 2010 10:06:30 -0500
This is somewhat semantic. The presence of 3 and 4 in the group id suggests that such groups exist; they merely have no members in the present sample. It should be easy enough to -mvdecode- x==0 to x==.
This reminds me of a grouse I had about the calculation of the s.d. of data that were all missing. The mean of these data was computed properly as missing, but the s.d. was reported as 0. Pedantically, as all values took on the same NAN value, there was indeed zero variance. I convinced StataCorp that this was not a good idea, and that the s.d. or variance of data that are all missing is indeed missing. That is now what -tabstat- does.
On Feb 10, 2010, at 2:33 AM, Michael wrote:
> Shouldn't the value of -x- for groups 3 and 4 be missing, not zero.
> To me, the sum of a series of missing values is a missing value. I am
> doing a collapse for about 100 variables (100 x values) and need the
> value to be defined as missing (not 0) in such cases. Any ideas?
Kit Baum | Boston College Economics & DIW Berlin | http://ideas.repec.org/e/pba1.html
An Introduction to Stata Programming | http://www.stata-press.com/books/isp.html
An Introduction to Modern Econometrics Using Stata | http://www.stata-press.com/books/imeus.html
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2010-02/msg00444.html","timestamp":"2014-04-18T21:00:37Z","content_type":null,"content_length":"7164","record_id":"<urn:uuid:cd164306-2c8b-4201-aa7f-df1dfc54d2c5>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
|
El Granada Trigonometry Tutor
Find an El Granada Trigonometry Tutor
...I'm an enthusiastic teacher, who loves helping students to succeed to the best of their ability, and loves all facets of mathematics and physics. I will teach all levels from middle school
mathematics up to calculus. I have extensive experience teaching international baccalaureate mathematics.
11 Subjects: including trigonometry, chemistry, physics, calculus
...I'm a 25-year veteran of Silicon Valley with multiple degrees in Engineering. I am currently an Adjunct Professor at the Golden Gate University School of Business. I teach graduate-level
courses in Business-to-Business Marketing, with a focus on technology marketing.
39 Subjects: including trigonometry, English, chemistry, reading
...I can help you to master Mandarin or Putonghua by guiding you how to read, write, speak, listen, pronounce (pinyin) and complete Chinese assignments. I am familiar with both traditional and
simplified Chinese. I have two degrees in Electrical Engineering: B.Sc.(Hons) and M.Sc.(Eng). I am also the Chartered Engineer of the Institution of Electrical Engineers.
10 Subjects: including trigonometry, geometry, Chinese, algebra 1
...I have been teaching math and computer science as a tutor and lecturer in multiple countries at high schools, boarding schools, and universities. I am proficient in teaching math at any high
school, college, or university level.Algebra 1 is the first math class that introduces equations with var...
41 Subjects: including trigonometry, calculus, geometry, statistics
...They involve negative exponents instead of positive ones, or the gravity on the moon instead of on earth. Finding a way to approach and understand these complications is all it takes to make a
stellar student. I am young, so the experiences and difficulties of high school are still fresh in my mind.
16 Subjects: including trigonometry, reading, physics, calculus
|
{"url":"http://www.purplemath.com/El_Granada_trigonometry_tutors.php","timestamp":"2014-04-16T19:25:53Z","content_type":null,"content_length":"24335","record_id":"<urn:uuid:dd8405a4-6b22-4e6d-8d10-f3c2cf53331d>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Please help me with this trace table!
Hi mantastrike;
There is no need to thank me. I am here to help, you have asked so I will. Because you are a polite and obviously thoughtful person I am making an effort to change your mind.
Yeah I know, but there are a lot of thing going on so I was not able to do this.
You are placing minutiae ahead of your studies. Ahead of your future. Ahead of your potential. Whether it is a girl, partying or even a job, it takes second to your education. Believe me I know. I am
going to do my best now to get you through but someday you will wish I did not.
A program is a series of instructions you give to a computer. Modern machines are basically sequential devices that means they do one thing at a time. Code runs line by line, one instruction at a
time, from top to bottom.
a: = 3, b: = 3
These 2 instructions store the value of 3 into a and then b.
for c: = 1 to 4
This instruction is called a for next loop. It starts by initializing some variable with a beginning value (1) and an end value (4). This loop starts with c = 1.
b: = b ∙ c
a: = a + c
These 2 instructions are the body of the loop.
The first one says take what is in b (3) and multiply it by what is in c (1) and store it back into b
The second one says take what is in a (3) and add to it by what is in c (1) and store it back into a
next c
This last statement says add one to c and check whether it is greater than 4. If it is not then it will go back to the for c = 1 to 4 statement (underlined). If it is greater than 4 then the program carries on past the loop.
What I have done above is called pseudocode. It is mixed English, programming and math.
If you run through the statements that I have given you above and write down the values for c, a, and b you will get the trace table.
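If it helps to check your hand trace, here is the same pseudocode transcribed into a small C++ program (nothing is added beyond a print statement; the expected trace rows are shown in the comments):

#include <iostream>

int main()
{
    int a = 3, b = 3;               // a := 3, b := 3
    for (int c = 1; c <= 4; ++c)    // for c := 1 to 4
    {
        b = b * c;                  // b := b * c
        a = a + c;                  // a := a + c
        std::cout << "c=" << c << " a=" << a << " b=" << b << '\n';
    }
    // Expected output (the rows of the trace table):
    // c=1 a=4  b=3
    // c=2 a=6  b=6
    // c=3 a=9  b=18
    // c=4 a=13 b=72
    return 0;
}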
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=172081","timestamp":"2014-04-20T21:28:43Z","content_type":null,"content_length":"15056","record_id":"<urn:uuid:af7c2f75-c795-465a-a41a-ed462c0e1a79>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
|
would the answer of (a − b)(a − b) be a^2 - ab^2 - b^2
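For reference, multiplying out term by term gives a different middle term from the one proposed:
(a - b)(a - b) = a*a - a*b - b*a + b*b = a^2 - 2ab + b^2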
|
{"url":"http://openstudy.com/updates/502a748ee4b0ac7df90ccaec","timestamp":"2014-04-21T02:08:05Z","content_type":null,"content_length":"90400","record_id":"<urn:uuid:fb7cdb8a-a949-4ec9-909b-db755f23bb98>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Regular Edge Labelings and Drawings of Planar Graphs
He, Xin and Kao, Ming-Yang (1995) Regular Edge Labelings and Drawings of Planar Graphs. In: Graph Drawing DIMACS International Workshop, GD 1994, October 10–12, 1994, Princeton, New Jersey, USA , pp.
96-103 (Official URL: http://dx.doi.org/10.1007/3-540-58950-3_360).
Full text not available from this repository.
The problems of nicely drawing planar graphs have received increasing attention due to their broad applications [5]. A technique, regular edge labeling, was successfully used in solving several
planar graph drawing problems, including visibility representation, straight-line embedding, and rectangular dual problems. A regular edge labeling of a plane graph G labels the edges of G so that
the edge labels around any vertex show certain regular pattern. The drawing of G is obtained by using the combinatorial structures resulting from the edge labeling. In this paper, we survey these
drawing algorithms and discuss some open problems.
|
{"url":"http://gdea.informatik.uni-koeln.de/102/","timestamp":"2014-04-16T15:59:33Z","content_type":null,"content_length":"39311","record_id":"<urn:uuid:e3955164-6603-437e-98ca-b6e3e5de45d7>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00550-ip-10-147-4-33.ec2.internal.warc.gz"}
|
More C++ Idioms/Type Selection
Type Selection
Select a type at compile-time based on a compile-time boolean value or a predicate.
Also Known As
Ability to take decisions based on the information known at compile-time is a powerful meta-programming tool. One of the possible decisions to be made at compile-time is deciding a type i.e., the
choice of the type may vary depending upon the result of a predicate.
For example, consider a Queue abstract data-type (ADT) implemented as a class template that holds a static array of Ts and the maximum capacity of the Queue is passed as a template parameter. The
Queue class also needs to store the number of elements present in it, starting from zero. A possible optimization for such a queue class could be to use different types to store the size. For
instance, when Queue's maximum capacity is less than 256, unsigned char can be used and if the capacity is less than 65,536, unsigned short can be used to store the size. For larger queues, unsigned
integer is used. Type selection idiom can be used to implement such compile-time decision making.
Solution and Sample Code
A simple way of implementing the type selection idiom is the IF template. The IF template takes three parameters. The first parameter is a compile-time boolean condition. If the boolean condition evaluates to true, the second type passed to the IF template is chosen; otherwise the third. The type selection idiom consists of a primary template and a partial specialization as shown below.
template <bool, class L, class R>
struct IF // primary template
{
  typedef R type;
};

template <class L, class R>
struct IF<true, L, R> // partial specialization
{
  typedef L type;
};

IF<false, int, long>::type i; // is equivalent to long i;
IF<true, int, long>::type i; // is equivalent to int i;
We now use the type selection idiom to implement the Queue size optimization mentioned above.
template <class T, unsigned int CAPACITY>
class Queue
{
  T array[CAPACITY];
  typename IF<(CAPACITY < 256),
              unsigned char,
              typename IF<(CAPACITY < 65536),
                          unsigned short,
                          unsigned int
                         >::type
             >::type size;
  // ...
};
The Queue class template declares an array of Ts. The type of the size data member depends on the result of two comparisons performed using the IF template. Note that these comparisons are not
performed at run-time but at compile-time.
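For comparison, since C++11 the standard library provides the same facility as std::conditional in <type_traits>, so the size member above could equally be declared as follows (Queue2 is just an arbitrary name chosen to avoid clashing with the Queue class above):

#include <type_traits> // std::conditional

template <class T, unsigned int CAPACITY>
class Queue2
{
  T array[CAPACITY];
  typename std::conditional<(CAPACITY < 256),
                            unsigned char,
                            typename std::conditional<(CAPACITY < 65536),
                                                      unsigned short,
                                                      unsigned int
                                                     >::type
                           >::type size;
  // ...
};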
Known Uses
Related Idioms
|
{"url":"https://en.m.wikibooks.org/wiki/More_C%2B%2B_Idioms/Type_Selection","timestamp":"2014-04-21T12:45:08Z","content_type":null,"content_length":"22159","record_id":"<urn:uuid:68db8751-cf1e-443f-bffd-e67afe1e82c1>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
|
25 Oct 17:22 2012
Type-directed functions with data kinds
Paul Visschers <mail <at> paulvisschers.net>
2012-10-25 15:22:24 GMT
Hello everyone,
I've been playing around with the data kinds extension to implement vectors that have a known length at compile time. Some simple code to illustrate:
> {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
> import Prelude hiding (repeat)
> data Nat = Zero | Succ Nat
> data Vector (n :: Nat) a where
> Nil :: Vector Zero a
> Cons :: a -> Vector n a -> Vector (Succ n) a
> class VectorRepeat (n :: Nat) where
> repeat :: a -> Vector n a
> instance VectorRepeat Zero where
> repeat _ = Nil
> instance VectorRepeat n => VectorRepeat (Succ n) where
> repeat x = Cons x (repeat x)
In this code I have defined a repeat function that works in a similar way to the one in the prelude, except that the length of the resulting vector is determined by the type of the result. I would
have hoped that its type would become 'repeat :: a -> Vector n a', yet it is 'repeat :: VectorRepeat n => a -> Vector n a'. As far as I can tell, this class constraint should no longer be necessary,
as all possible values for 'n' are an instance of this class. I actually really just want to define a closed type-directed function and would rather not (ab)use the type class system at all.
Is there a way to write the repeat function so that it has the type 'repeat :: a -> Vector n a' that I've missed? If not, is this just because it isn't implemented or are there conceptual caveats?
Paul Visschers
Haskell-Cafe mailing list
Haskell-Cafe <at> haskell.org
|
{"url":"http://comments.gmane.org/gmane.comp.lang.haskell.cafe/101131","timestamp":"2014-04-18T13:17:20Z","content_type":null,"content_length":"30235","record_id":"<urn:uuid:fd32a087-6f83-47bc-b49a-49fed9282dab>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Iselin Statistics Tutor
Find a Iselin Statistics Tutor
...Prealgebra is the foundation for all secondary level math. It is vital that the math fundamentals are present for any student in order to become successful in any capacity. Obviously, most
people will not go on to become rocket scientists, but a solid understanding of fractions, decimals, and percentages is essential to functioning in everyday life.
26 Subjects: including statistics, calculus, writing, GRE
...I received a master's in epidemiology from Columbia University in 1992. I've been doing this for a long time and I continue because I enjoy it. Explaining statistics in regular, lay language so
that people can understand it and apply it is fun for me.
4 Subjects: including statistics, SPSS, SAS, biostatistics
...It's no wonder that my English skills exceed those of most of today's English teachers. Unlike most coaches, who specialize in only one section of the SAT, I have long experience and expertise
in all three parts of the test. The reading section of the SAT, like the math section, is much more challenging than it used to be.
23 Subjects: including statistics, English, calculus, algebra 1
...My passion is teaching. I am a very patient and understandable person, and teach according to the needs of the students. I use different methods to solve problems, hence using problem solving
10 Subjects: including statistics, calculus, discrete math, algebra 1
...As a former scientist, I can break down a difficult subject area into the component parts, teach each component, and then help my students put it all together again to master the subject.
Though I majored in biology and economics, my specialty for years has been tutoring Math -- for high school ...
14 Subjects: including statistics, geometry, algebra 1, trigonometry
|
{"url":"http://www.purplemath.com/iselin_nj_statistics_tutors.php","timestamp":"2014-04-19T23:18:55Z","content_type":null,"content_length":"23808","record_id":"<urn:uuid:d9d812d2-a970-4216-a19c-91e8255b8cfb>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00297-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The thinning of the liquid layer over a probe in two-phase flow
Oliver, J. M. (1998) The thinning of the liquid layer over a probe in two-phase flow. Masters thesis, University of Oxford.
The draining of the thin water film that is formed between a two dimensional, infinite, initially flat oil-water interface and a smooth, symmetric probe, as the interface is advected by a steady and
uniform flow parallel to the probe axis, is modelled using classical fluid dynamics.
The governing equations are nondimensionalised using values appropriate to the oil extraction industry. The bulk flow is driven by inertia and, in some extremes, surface tension while the viscous
effects are initially confined to thin boundary layers on the probe and the interface. The flow in the thin water film is dominated by surface tension, and passes through a series of asymptotic
regimes in which inertial forces are gradually overtaken by viscous forces. For each of these regimes, and for those concerning the earlier stages of approach, possible solution strategies are
discussed and relevant literature reviewed.
Consideration is given to the drainage mechanism around a probe which protrudes a fixed specified distance into the oil. A lubrication analysis of the thin water film may be matched into a
capillary-static solution for the outer geometry using a slender transition region if, and only if, the pressure gradient in the film is negative as it meets the static meniscus. The remarkable
result is that, in practice, there is a race between rupture in the transition region and rupture at the tip. The analysis is applicable to the case of a very slow far field flow and offers
significant insight into the non-static case.
Finally, a similar approach is applied to study the motion of the thin water film in the fully inviscid approximation, with surface tension and a density contrast between the fluids.
|
{"url":"http://eprints.maths.ox.ac.uk/7/","timestamp":"2014-04-17T12:38:04Z","content_type":null,"content_length":"16518","record_id":"<urn:uuid:4bc51913-f869-4bf6-bc5c-3d5fdec1ffdb>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Using matlab to solve algebraic equations...
June 20th 2011, 08:43 PM
Using matlab to solve algebraic equations...
Consider a simple one like this:
the "solve" command gives:
But we know that all 3 roots of the equation are REAL NUMBERS. So this result is not so satisfying to me... My question is this: how can we know if one solution is intrinsically complex or real,
when it gives us results like these?
(btw, I remember there's a command that numerically finds zero of a polynomial, what's its name, please?)
June 20th 2011, 08:50 PM
Also sprach Zarathustra
Re: Using matlab to solve algebraic equations...
Consider a simple one like this:
the "solve" command gives:
But we know that all 3 roots of the equation are REAL NUMBERS. So this result is not so satisfying to me... My question is this: how can we know if one solution is intrinsically complex or real,
when it gives us results like these?
(btw, I remember there's a command that numerically finds zero of a polynomial, what's its name, please?)
How we know?
June 20th 2011, 09:01 PM
Re: Using matlab to solve algebraic equations...
Simplify down to a cubic with positive leading coefficient , apply Descartes's rule of signs which shows this has 0 or 2 positive roots, and exactly one negative root. And since the cubic is
positive at x=0 and negative at x=2 it follows that it has one negative and two positive real roots.
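A related way to answer the original "how can we know if a root is intrinsically real" question, independent of the radical form that solve returns, is the discriminant test for a cubic (a general algebra fact, not anything MATLAB-specific). For ax^3 + bx^2 + cx + d,
$$\Delta = 18abcd - 4b^3d + b^2c^2 - 4ac^3 - 27a^2d^2,$$
and Δ > 0 means three distinct real roots, Δ = 0 a repeated real root, and Δ < 0 one real root and a complex-conjugate pair. As for the command asked about in the first post, MATLAB's roots function takes a polynomial's coefficient vector and returns its roots numerically, e.g. roots([a b c d]).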
June 20th 2011, 10:18 PM
Re: Using matlab to solve algebraic equations...
I figured out one way to check: apply the "vpa" function (variable precision arithmetic)... If the imaginary part of the answer is small enough we may believe the answer is real number... This is
not all that satisfying either, but perhaps we can do no better...
CB: Can you tell me how do we solve for n roots of polynomial of order n, analytically? Thanks!
June 21st 2011, 02:32 PM
Re: Using matlab to solve algebraic equations...
I figured out one way to check: apply the "vpa" function (variable precision arithmetic)... If the imaginary part of the answer is small enough we may believe the answer is real number... This is
not all that satisfying either, but perhaps we can do no better...
CB: Can you tell me how do we solve for n roots of polynomial of order n, analytically? Thanks!
General solution: Google for cubic formula and/or quartic formula. There is no general formula in radicals for a polynomial equation in one variable of degree five or higher.
You will have trouble with the cubic (and probably the quartic formulas) since this is almost certainly what solve uses.
|
{"url":"http://mathhelpforum.com/math-software/183352-using-matlab-solve-algebraic-equations-print.html","timestamp":"2014-04-19T10:01:35Z","content_type":null,"content_length":"9480","record_id":"<urn:uuid:573932f5-f6f7-486b-a40b-d740985426a6>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Patent application title: FINGERPRINT VERIFICATION METHOD AND APPARATUS WITH HIGH SECURITY
A fingerprint verification apparatus that adds chaff fingerprint information to real fingerprint information of a user and then hides and stores the fingerprint information of the user with a polynomial generated from unique information of the individual, thereby safely protecting the important fingerprint information of the user stored in a storing unit from an external attacker and safely managing a private key using the fingerprint information when the private key is used as the unique information for generating the polynomial.
A fingerprint verification apparatus, comprising: a registration fingerprint minutiae extracting unit that extracts real minutiae for a registration fingerprint image subjected to a preprocessing process; a minutiae protecting unit that transforms the extracted real minutiae by a polynomial based on unique information, generates chaff minutiae using a value different from the polynomial result value of the real minutiae in order to hide the extracted real minutiae, generates registration minutiae information using the transformed real minutiae and the
chaff minutiae, and stores the generated registration minutiae information in the storage unit; a verification fingerprint minutiae extracting unit that extracts the minutiae from the input
verification fingerprint image; and a fingerprint comparing unit that determines whether the minutiae information extracted from the verification fingerprint minutiae extracting unit coincides with
the registration minutiae information stored in the storing unit by measuring the similarity therebetween.
The fingerprint verification apparatus according to claim 1, wherein the minutiae protecting unit includes: a unique information generating unit that generates the unique information for generating a
polynomial to be used to protect the extracted real minutiae; a polynomial generating unit that generates the generated unique information as the polynomial corresponding to a predetermined degree; a
polynomial projecting unit that adds results, obtained by substituting coordinate information from a first material structure of the real minutiae extracted from the registration fingerprint minutiae extracting unit into the polynomial, to the first material structure and stores it as a third material structure; a chaff minutiae generating unit that generates the chaff minutiae as a
second material structure; and a registration feature generating unit that mixes the third material structure with the second material structure and registers it in the storing unit.
The fingerprint verification apparatus according to claim 2, wherein the first material structure is represented by a type of (x, y, θ, type) (where x and y are coordinates of minutiae, θ is an angle of minutiae, and type is a type of minutiae), the third material structure is represented by a type of (x, y, θ, type, f(x) or f(y)) (where f(x) is a result obtained by substituting the information of the x coordinate among the minutiae into the polynomial and f(y) is a result obtained by substituting the information of the y coordinate into the polynomial), and the second material structure is represented by a type of (x', y', θ', type', β), β≠f(x') or β≠f(y'), and x' or y' not being a root of the polynomial f(x) or f(y).
The fingerprint verification apparatus according to claim 2, wherein the degree of the polynomial generated from the polynomial generating unit is determined as a smaller degree than the number of
minutiae extracted from the registration fingerprint minutiae extracting unit.
The fingerprint verification apparatus according to claim 2, wherein the chaff minutiae generating unit generates the chaff minutiae beyond the defined tolerance with respect to the x and y
coordinate values and the direction of the real minutiae.
The fingerprint verification apparatus according to claim 2, wherein the minutiae protecting unit further includes a virtual image generating unit that generates a virtual fingerprint image, wherein
the chaff minutiae generating unit uses the virtual fingerprint image to add a larger chaff minutiae than the defined number in the fingerprint image.
The fingerprint verification apparatus according to claim 2, wherein the fingerprint comparing unit separates the real minutiae from the registration minutiae in which the real minutiae and the chaff
minutiae are mixed, uses the separated real minutiae to recover the same polynomial as one generated from the polynomial generating unit, and performs the verification by recovering the unique
information from the recovered polynomial.
The fingerprint verification apparatus according to claim 1, wherein the fingerprint comparing unit includes: a fingerprint alignment unit that performs an alignment process translating and rotating
the verification fingerprint minutiae extracted from the verification fingerprint minutiae extracting unit and the registration minutiae stored in the storing unit as much as the changed amount; a
coinciding pair of minutiae selecting unit that selects the coinciding pair of minutiae from the aligned registration minutiae and the verification fingerprint minutiae; an error correcting unit that
performs an error correcting process excluding the chaff minutiae from the coinciding pair of minutiae; a polynomial recovering unit that receives the pair of minutiae subjected to the error
correcting process and recovers the polynomial; a unique information recovering unit that obtains the unique information using the coefficient or root of the recovered polynomial; and a user
verifying unit that verifies as the person in question when the obtained unique information is the same as the unique information generated from the unique information generating unit.
A fingerprint verification method, comprising: forming a first material structure that extracts real minutiae from a registration fingerprint image of input user; generating a chaff minutiae for
hiding the real minutiae; generating unique information; generating a polynomial based on the generated unique information; forming a third material structure by substituting the first material
structure into the polynomial; forming a second material structure by generating a value different from a polynomial result value of the real minutiae with respect to the chaff minutiae; and
generating registration minutiae information using the third material structure and the second material structure and registering the generated registration minutiae information.
The fingerprint verification method according to claim 9, further comprising: extracting a verification fingerprint minutiae from a verification fingerprint image of the input user; selecting a
coinciding pair of minutiae by comparing the registration minutiae information and the verification fingerprint minutiae information; recovering a polynomial from the selected pair of minutiae;
recovering unique information from the recovered polynomial; and verifying as the person in question when the recovered unique information is the same as the unique information generated from the
unique information generating step.
The fingerprint verification method according to claim 9, wherein the first material structure is represented by a type of (x, y, θ, type) (where x and y are coordinates of minutiae, θ is an angle of minutiae, and type is a type of minutiae), the third material structure is represented by a type of (x, y, θ, type, f(x) or f(y)) (where f(x) is a result obtained by substituting the information of the x coordinate among the minutiae into the polynomial and f(y) is a result obtained by substituting the information of the y coordinate into the polynomial), and the second material structure is represented by a type of (x', y', θ', type', β), β≠f(x') or β≠f(y'), and x' or y' not being a root of the polynomial f(x) or f(y).
The fingerprint verification method according to claim 9, wherein the degree of the generated polynomial is determined as a smaller degree than the number of the extracted real minutiae.
The fingerprint verification method according to claim 9, wherein the generating the chaff minutiae generates the chaff minutiae beyond the defined tolerance with respect to the x and y coordinate
values and the direction of the real minutiae.
The fingerprint verification method according to claim 9, further comprising generating a virtual fingerprint image, wherein the generating the chaff minutiae uses the virtual fingerprint image to
add a larger chaff minutiae than the defined number in the fingerprint image.
The fingerprint verification method according to claim 14, wherein the adding the chaff minutiae determines the number of chaff minutiae to be added, determines the length and breadth of the virtual
fingerprint image to which the chaff minutiae of the determined number is added, selects a translation coordinate for translating a real fingerprint image to an optional position on the virtual
fingerprint image, assumes the selected translation coordinate as an original point of the real fingerprint image, and translates the real minutiae to the virtual fingerprint image.
The fingerprint verification method according to claim 10, further comprising prior to the selecting the pair of minutiae, translating and rotating the registration minutiae information and the
verification fingerprint minutiae information as much as the changed amount and aligning the registration minutiae information and the verification fingerprint minutiae information.
The fingerprint verification method according to claim 10, further comprising after selecting the pair of minutiae, performing an error correcting process excluding the chaff minutiae from the
coinciding pair of minutiae.
CROSS REFERENCE TO RELATED APPLICATIONS [0001]
This application claims priority to Korean Patent Application No. 10-2009-0113861 filed on Nov. 24, 2009, the entire contents of which are herein incorporated by reference.
BACKGROUND OF THE INVENTION [0002]
1. Field of the Invention
The present invention relates to a fingerprint verification method and apparatus safely storing fingerprint information and then performing verification. More specifically, the present invention
relates to a fingerprint verification method and apparatus hiding minutiae and safely storing the fingerprint information in order to prevent the reuse of the fingerprint information when the
important fingerprint information of the individual stored in a storing unit is leaked to the unauthorized users.
2. Description of the Related Art
Since the biometric data of an individual are finite (one face, two irises, etc.), they cannot be freely changed, unlike a password or a personal identification number (PIN) used to access an information system. Fingerprint information is likewise limited to ten fingers, so if registered fingerprint information is leaked it can be replaced at most ten times, which is a serious problem. Therefore, even if biometric data, and in particular the fingerprint information stored in a storing unit, are leaked, an attacker must be prevented from reusing the leaked fingerprint information.
The simplest method for safely storing the fingerprint information generally uses a cryptographic scheme. However, a cryptographic scheme has the additional problem of safely and effectively managing a private key. In particular, since such a scheme must repeatedly perform decryption during the comparison process for user verification and encryption when storing the fingerprint information registered in the fingerprint verification system, it cannot be used in a fingerprint verification system that searches for users in a large-capacity database.
In order to solve the problem when using the encryption scheme for safely storing the fingerprint information, the fingerprint comparison should be performed in a transformed space in which the
fingerprint information is transformed and stored by non-invertible transform. However, non-linear transform is required to perform the non-invertible transform. When the non-linear transform is
performed, the geometrical information of the fingerprint information is lost, such that the general fingerprint comparing method according to the related art cannot perform the comparison in the
transformed space.
As an alternative to the non-invertible transform, there is a method that adds optionally generated chaff fingerprint features when the fingerprint features (real fingerprint features) of the user are registered, and registers them together. Since the registration fingerprint features are stored without explicitly indicating which are real and which are chaff, an attacker who obtains the registration fingerprint features cannot separate and reuse only the real minutiae. However, since the person in question also cannot identify which minutiae are real, he or she cannot identify the real minutiae even for legitimate verification.
In order to solve this problem, a polynomial is generally generated; the real minutiae are configured to include values obtained by substituting them into the generated polynomial, and the chaff minutiae are configured to include values that do not lie on the generated polynomial. When verification is requested, the coinciding minutiae can be obtained by comparing the registration fingerprint features, consisting of the real and chaff fingerprint features, with the verification fingerprint features. When the verification uses the registration fingerprint of the genuine user, most of the coinciding minutiae are likely to be real minutiae, and the person in question can be confirmed by recovering, through simultaneous equations, the same polynomial as the one used in the registration process.
As described above, there are two problems in a method of protecting the real minutiae by adding the chaff minutiae.
First, the strength with which the real minutiae are protected is proportional to the number of added chaff minutiae. In other words, the more chaff minutiae are added, the harder it is for the attacker to identify the real minutiae. However, the maximum number of addable chaff minutiae is determined by the size of the fingerprint image, so sufficient safety cannot be secured.
Second, in order to recover the same polynomial as the one used in the registration process, a larger number of minutiae than the degree of the polynomial must be input to the simultaneous equations. If the number of real minutiae of the registration fingerprint is smaller than the degree of the polynomial, the polynomial cannot be recovered even when all the real minutiae coincide during verification, which is a serious problem. If the degree of the polynomial is lowered to avoid this, the attacker can attempt direct polynomial recovery instead of separating the real and chaff fingerprint features from the registration fingerprint features, so the safety of the fingerprint verification system cannot be secured.
Korean Patent Registration No. 0714303 discloses automatic fingerprint alignment for a fingerprint fuzzy vault system. Because the alignment problem arises when computing the similarity of a template in which the minutiae of the user to be verified, the real minutiae, and the chaff minutiae are mixed, it solves the problem by aligning the minutiae in advance using a geometric hashing scheme during fingerprint registration and performing the matching against the pre-aligned minutiae during verification. However, Korean Patent Registration No. 0714303 limits the number of addable chaff minutiae depending on the size of the fingerprint image and uses one polynomial having a fixed degree, so it cannot secure the safety of the fingerprint verification system.
"Fuzzy Vault for Fingerprints", Audio- and Video-based Biometric Person Authentication, Vol. 5, pp. 310-319, 2005.7" published by U. Uludag et al. discloses a scheme that measures the fingerprint
verification performance by applying the fuzzy vault to the fingerprint and fixes the number of polynomials into one and then applies the fuzzy vault to the fingerprint. However, the scheme disclosed
in the article limits the number of addable chaff minutiae and uses one polynomial having a fixed degree, depending on the size of the fingerprint image, such that it cannot secure the safety of the
fingerprint verification system.
"Fingerprint-based Fuzzy Vault Implementation and Performance", IEEE Trans Information Forensics and Security, Vol 2, No 4, pp 744-757, 2007.12" published by K Nandakumar, et al. uses extracted data,
that is, Helper Data for fingerprint alignment separately from a fingerprint feature extracting process in order to solve the fingerprint alignment problem of the finger fuzzy vault and discloses a
scheme depending on total inspection using CRC decoding in order to recover the polynomial. However, the scheme disclosed in the article limits the number of addable chaff minutiae and uses one
polynomial having a fixed degree, depending on the size of the fingerprint image, such that it cannot secure the safety of the fingerprint verification system.
SUMMARY OF THE INVENTION [0016]
The present invention is proposed to solve the above problems. It is an object of the present invention to provide a fingerprint verification method and apparatus with high security that adds chaff fingerprint information to the real fingerprint information of a user and then hides and stores the fingerprint information of the user with a polynomial generated from unique information of the individual, thereby safely protecting the important fingerprint information of the user stored in a storing unit from an external attacker and, when a private key is used as the unique information for generating the polynomial, safely managing that private key using the fingerprint information.
It is another object of the present invention to provide a fingerprint verification method and apparatus with high security that generates a virtual fingerprint image larger than a size of a
fingerprint image in order to increase the maximum insertion number of chaff minutiae that is determined based on the size of the fingerprint image and generates more chaff minutiae through the
virtual fingerprint image and stores them together with a real image, thereby making it possible to safely protect the important fingerprint information of the user stored in a storing unit from an
external attacker.
Further, it is an object of the present invention to provide a fingerprint verification method and apparatus with high security that improves the accuracy of fingerprint verification by differentially determining the degree of the polynomial according to the number of real minutiae, since verification of the person in question fails when the polynomial cannot be recovered, even though the minutiae coincide completely, whenever the number of real minutiae extracted from the registration fingerprint of the user is smaller than the degree of the polynomial. In differentially determining the degree of the polynomial, the invention also secures the crypto-complexity of the entire system by using two or more polynomials, since an attacker can directly attempt polynomial recovery when too small a degree is used.
According to an exemplary embodiment of the present invention, there is provided a fingerprint verification apparatus, including: a registration fingerprint minutiae extracting unit that extracts real minutiae from a registration fingerprint image subjected to a preprocessing process; a minutiae protecting unit that transforms the extracted real minutiae by a polynomial based on unique information, generates predetermined chaff minutiae in order to hide the extracted real minutiae, the chaff minutiae using values different from the polynomial result values of the real minutiae, and generates registration minutiae information using the transformed real minutiae and the chaff minutiae and stores the generated registration minutiae information in the storing unit; a verification fingerprint minutiae extracting unit that extracts minutiae from an input verification fingerprint image; and a fingerprint comparing unit that determines, by measuring the similarity, whether the minutiae information extracted by the verification fingerprint minutiae extracting unit coincides with the registration minutiae information stored in the storing unit.
The minutiae protecting unit includes a unique information generating unit that generates the unique information for generating a polynomial to be used to protect the minutiae, a polynomial
generating unit that generates the generated unique information as the polynomial corresponding to a predetermined degree, a polynomial projecting unit that adds results obtained by substituting
coordinate information among a first material structure that is real minutiae extracted from the registration fingerprint minutiae extracting unit into the polynomial to the first material structure
and stores it as a third material structure, a chaff minutiae generating unit that generates an optional chaff minutiae as a second material structure, and a registration feature generating unit that
optionally mixes the third material structure with the second material structure and registers it in the storing unit.
The first material structure is represented by a type of (x, y, θ, type) (where x and y are coordinates of the minutiae, θ is an angle of the minutiae, and type is a type of the minutiae), the third material structure is represented by a type of (x, y, θ, type, f(x) or f(y)) (where f(x) is a result obtained by substituting the information of the x coordinate of the minutiae into the polynomial and f(y) is a result obtained by substituting the information of the y coordinate into the polynomial), and the second material structure is represented by a type of (x', y', θ', type', β), where β≠f(x') or β≠f(y') and x' or y' is not a root of the polynomial f(x) or f(y).
The degree of the polynomial generated from the polynomial generating unit is determined as a smaller degree than the number of minutiae extracted from the registration fingerprint minutiae
extracting unit.
The chaff minutiae generating unit generates the chaff minutiae beyond the defined tolerance with respect to the x and y coordinate values and the direction of the real minutiae.
The minutiae protecting unit further includes a virtual image generating unit that generates a virtual fingerprint image, wherein the chaff minutiae generating unit uses the virtual fingerprint image
to add a larger chaff minutiae than the defined number in the fingerprint image.
The fingerprint comparing unit separates the real minutiae from the registration minutiae in which the real minutiae and the chaff minutiae are mixed, uses the separated real minutiae to recover the
same polynomial as one generated from the polynomial generating unit, and performs the verification by recovering the unique information from the recovered polynomial.
The fingerprint comparing unit includes a fingerprint alignment unit that performs an alignment process translating and rotating the verification fingerprint minutiae extracted from the verification
fingerprint minutiae extracting unit and the registration minutiae stored in the storing unit as much as the changed amount; a coinciding pair of minutiae selecting unit that selects the coinciding
pair of minutiae from the aligned registration minutiae and the verification fingerprint minutiae; an error correcting unit that performs an error correcting process excluding the chaff minutiae from
the coinciding pair of minutiae; a polynomial recovering unit that receives the pair of minutiae subjected to the error correcting process and recovers the polynomial; a unique information recovering
unit that obtains the unique information using the coefficient or root of the recovered polynomial; and a user verifying unit that verifies as the person in question when the obtained unique
information is the same as the unique information generated from the unique information generating unit.
There is provided a fingerprint verification method, including: forming a first material structure by extracting real minutiae from a registration fingerprint image of an input user; generating predetermined chaff minutiae for hiding the real minutiae; generating unique information; generating a polynomial based on the generated unique information; forming a predetermined third material structure by substituting the first material structure into the polynomial; forming a predetermined second material structure by generating, for the chaff minutiae, values different from the polynomial result values of the real minutiae; and generating registration minutiae information using the third material structure and the second material structure and registering the generated registration minutiae information.
The fingerprint verification method further includes: extracting the verification fingerprint minutiae from the verification fingerprint image of the input user; selecting a coinciding pair of
minutiae by comparing the registered minutiae information and the verification fingerprint minutiae information; recovering a polynomial from the selected pair of minutiae; recovering unique
information from the recovered polynomial; and verifying as the person in question when the recovered unique information is the same as the unique information generated from the unique information
generating unit.
The first material structure is represented by a type of (x, y, θ, type) (where x and y are coordinates of the minutiae, θ is an angle of the minutiae, and type is a type of the minutiae), the third material structure is represented by a type of (x, y, θ, type, f(x) or f(y)) (where f(x) is a result obtained by substituting the information of the x coordinate of the minutiae into the polynomial and f(y) is a result obtained by substituting the information of the y coordinate into the polynomial), and the second material structure is represented by a type of (x', y', θ', type', β), where β≠f(x') or β≠f(y') and x' or y' is not a root of the polynomial f(x) or f(y).
The degree of the generated polynomial is determined as a smaller degree than the number of the extracted registration fingerprint minutiae.
The generating the chaff minutiae generates the chaff minutiae beyond the defined tolerance with respect to the x and y coordinate values and direction of the real minutiae.
The fingerprint verification method further includes generating a virtual fingerprint image, wherein the generating the chaff minutiae uses the virtual fingerprint image to add a larger chaff
minutiae than the defined number in the fingerprint image.
The adding the chaff minutiae determines the number of chaff minutiae to be added, determines the length and breadth of the virtual fingerprint image to which the chaff minutiae of the determined
number is added, selects a translation coordinate for translating the real fingerprint image to an optional position on the virtual fingerprint image, and assumes the selected translation coordinate
as an original point of the real fingerprint image and translates the real minutiae to the virtual fingerprint image.
The fingerprint verification method further includes: prior to the selecting the pair of minutiae, translating and rotating the registered minutiae information and the verification fingerprint
minutiae information as much as the changed amount and aligning the registered minutiae information and the verification fingerprint minutiae information.
The fingerprint verification method further includes after selecting the pair of minutiae, performing an error correcting process excluding the chaff minutiae from the coinciding pair of minutiae.
According to the present invention, the information of the real minutiae of the user can be hidden among the information of the chaff minutiae, so that the important fingerprint information of the user stored in the storing unit is safely protected from an external attacker, and the fingerprint minutiae information cannot be reused because the attacker cannot identify the real minutiae even when the stored fingerprint minutiae information is leaked to the outside.
In other words, 1) the present invention improves security over the existing method by generating the virtual fingerprint image and then adding more chaff minutiae. Further, 2) the present invention solves the security problem that arises when comparing fingerprint features, including the real and chaff minutiae, registered twice. In addition, since fingerprint minutiae are concentrated around the middle area of the fingerprint image, minutiae lying at the edge of the fingerprint image can ordinarily be assumed to be chaff minutiae once chaff minutiae are added. However, according to the present invention, 3) since the fingerprint image is translated to an arbitrary position of the virtual fingerprint image, the minutiae lying at the edge of the virtual fingerprint image cannot be assumed to be chaff minutiae, thereby making it possible to improve the security.
BRIEF DESCRIPTION OF THE DRAWINGS [0038]
FIG. 1 is a configuration block diagram of a fingerprint verification apparatus with high security according to the present invention;
FIG. 2 is a detailed block configuration diagram of a minutiae protecting unit of FIG. 1;
FIG. 3 is a detailed block configuration diagram of another example of the minutiae protecting unit of FIG. 1;
FIG. 4 is a flow chart showing a method of adding chaff minutiae exceeding a defined number in a fingerprint image using a virtual image according to the present invention;
FIG. 5 is a detailed configuration diagram of a fingerprint comparing unit shown in FIG. 1;
FIG. 6 is an exemplary diagram for explaining fingerprint feature information used in a general fingerprint verification system;
FIGS. 7 to 9 are diagrams for explaining a generation range of the chaff minute generated by the chaff minutiae generating unit of FIG. 2; and
FIGS. 10 and 11 are diagrams for explaining a method of adding the chaff minutiae in the chaff minutiae generating unit of FIG. 4.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS [0046]
The present invention will be described below with reference to the accompanying drawings. The detailed description of related known functions or configurations that may make the purpose of the present invention unnecessarily ambiguous will be omitted. Exemplary embodiments of the present invention are provided so that those skilled in the art may more completely understand the present invention. Accordingly, the shape, the size, etc., of elements in the figures may be exaggerated for clear comprehension.
Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Generally, the fingerprint feature information used in a fingerprint verification system uses an ending point 610, at which a ridge ends, and a bifurcation point 600, at which one ridge is divided into two, in a fingerprint image as shown in FIG. 6. Each extracted minutia has the coordinates (x, y) of the minutia, an angle (θ) of the minutia, and type information of the minutia, and is represented by (x, y, θ, type). Such a fingerprint feature is called a minutia in the fingerprint verification system.
FIG. 1 is a block configuration diagram of a fingerprint verification apparatus with high security according to the present invention.
As shown in FIG. 1, the fingerprint verification apparatus according to the present invention includes a registration fingerprint minutiae extracting unit 100, a minutiae protecting unit 200, a storing unit 300, a verification fingerprint minutiae extracting unit 400, and a fingerprint comparing unit 500.
The registration fingerprint minutiae extracting unit 100 extracts minutiae from the registration fingerprint of the user that is subjected to a preprocessing process. When the minutiae of the
extracted registration fingerprint image is stored in a database of a storing unit 300 as it is, there is a problem in that it is leaked by a malicious attacker and can be reused. In order to solve
the above problem, in the present invention, even though the minutiae of the registration fingerprint image is leaked, the minutiae protecting unit 200 performs a transform process protecting the
minutiae not to be able to be reused, which is then stored in the database of the storing unit 300.
In the fingerprint verification process, the verification fingerprint minutiae extracting unit 400 extracts minutiae from the verification fingerprint image that is subjected to the same
preprocessing as the registration process. The fingerprint comparing unit 500 measures the similarity between the minutiae extracted from the verification fingerprint minutiae extracting unit 400 and
the registration fingerprint minutiae pre-stored in the database of the storing unit 300 in the fingerprint registration process to determine whether they coincide with each other.
The minutiae protecting unit 200 uses non-invertible transform to transform the minutiae of the registration fingerprint and the feature information transformed in the minutiae protecting unit 200 is
used in the fingerprint comparing unit 500 instead of the minutiae extracted from the registration fingerprint minutiae extracting unit 100.
FIG. 2 is a detailed block configuration diagram of the minutiae protecting unit 200 of FIG. 1.
As shown in FIG. 2, the minutiae protecting unit 200 includes a unique information generating unit 220, a polynomial generating unit 230, a polynomial projecting unit 240, a registration feature generating unit 260, and a chaff minutiae generating unit 250.
The real minutiae extracting unit 210 is the same as the registration fingerprint minutiae extracting unit of FIG. 1 and extracts the real minutiae from the registration fingerprint image, which is
called a first material structure.
In order to protect the real minutiae of the extracted user, the present invention uses the chaff minutiae and the polynomial.
To this end, the unique information generating unit 220 generates unique information for generating the polynomial to be used in protecting the minutiae. The unique information may be optionally
generated by the registration system and a private key of the user to be used in the encryption system may be used as the unique information.
The polynomial generating unit 230 generates, from the generated unique information, a polynomial f(x) of a predetermined degree.
The polynomial projecting unit 240 stores (x, y, θ, type, f(x)) generated by adding results obtained by substituting information of a x coordinate among the minutiae of the user extracted from the
real minutiae extracting unit 210 into the polynomial to the first material structure (x, y, θ, type), wherein the (x, y, θ, type, f(x)) is called a third material structure. Alternatively, the
polynomial projecting unit 240 stores (x, y, θ, type, f(y)) by adding results obtained by substituting information of a y coordinate among the minutiae of the user extracted from the real minutiae
extracting unit 210 into the polynomial to the first material structure (x, y, θ, type).
Also, the chaff minutiae generating unit 250 optionally generates the chaff minutiae from the fingerprint verification system in order to protect the real minutiae information (x, y, θ, type) of the
user extracted from the real minutiae extracting unit 210 from the attacker. The chaff minutiae is configured in a form of (x', y', θ', type', β), which is called a second material structure. At this
time, unlike the real minutiae information, since β≠f(x'), x' is not a root of the polynomial f(x).
The registration feature generating unit 260 optionally mixes the third material structure with the second material structure and registers it in the storing unit 300.
All the polynomial operations are performed on a finite field GF(p^2), all the real minutiae are transformed into elements of GF(p^2), and the transformed values are substituted into f(x). In other words, if an element of GF(p^2) is represented by AX+B (A, B ∈ GF(p)), x and y may be operated on by being changed into A and B, respectively.
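As a rough sketch of this registration-side data flow (the function names, the prime P, and the convention of projecting on the x coordinate are assumptions made for illustration, not details taken from the patent), the third and second material structures could be built as follows in Python:

    import random

    P = 65537  # assumed prime field size; the patent only states that operations are over a finite field

    def eval_poly(x, coeffs):
        # Evaluate the secret polynomial f at x over GF(P) by Horner's rule;
        # the coefficients would come from the unique information.
        result = 0
        for c in reversed(coeffs):
            result = (result * x + c) % P
        return result

    def encode_real(minutia, coeffs):
        # Real minutia (x, y, theta, mtype) -> third material structure (x, y, theta, mtype, f(x)).
        x, y, theta, mtype = minutia
        return (x, y, theta, mtype, eval_poly(x % P, coeffs))

    def make_chaff(x, y, theta, mtype, coeffs):
        # Chaff minutia -> second material structure (x', y', theta', type', beta) with beta != f(x').
        while True:
            beta = random.randrange(P)
            if beta != eval_poly(x % P, coeffs):
                return (x, y, theta, mtype, beta)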
The degree of the polynomial generated by the polynomial generating unit 230 of FIG. 2 is not a fixed value but is adaptively determined according to the number of minutiae extracted by the real minutiae extracting unit 210. That is, when the number of extracted minutiae is 7, a polynomial of degree 6 or less should be selected.
At this time, as the number of chaff minutiae added by the chaff minutiae generating unit 250 increases, the safety increases, since it becomes difficult to select only the real minutiae (the first material structure) even when the fourth material structure stored in the database is leaked. However, since the maximum number of chaff minutiae that can be added is determined by the size of the fingerprint image, there is a limit to increasing the safety.
FIGS. 7 to 9 describe the problem related to the number of chaff minutiae.
As described above, a fingerprint minutia basically has the x and y coordinates of the minutia, the direction of the minutia, and the type of the minutia, and is represented by (x, y, θ, type). Real minutiae extracted from the same fingerprint are not extracted at exactly the same position, because of noise in the fingerprint input apparatus, errors generated during the image processing operation, and so on. Therefore, when selecting pairs of coinciding minutiae by comparing the minutiae of the registration fingerprint with the minutiae of the verification fingerprint, a tolerance is experimentally defined, and two minutiae are considered a coinciding pair when they fall within the tolerance. The similarity of two fingerprints is determined using the number of obtained pairs of minutiae. When generating the chaff minutiae in the chaff minutiae generating unit 250, there is a problem in that a chaff minutia can be wrongly determined to be a real minutia if the tolerance is not considered.
Therefore, in the present invention, when a chaff minutia is generated, it is generated outside the tolerance defined in the fingerprint verification system with respect to the x and y coordinate values and the direction, as shown in FIG. 9. In FIG. 9, a white minutia 1000 is a registered real minutia and a black minutia 1010 is a chaff minutia. In connection with the real minutia 1000, minutiae existing in the range meeting the tolerance 1020 for the x and y coordinates and the tolerance 1030 for the direction are considered a coinciding pair of minutiae. In other words, since a minutia lying in the quadrangle 1040 represented by a dotted line and meeting the tolerance of the angle is considered a coinciding pair, the chaff minutia 1010 is generated so that its x and y coordinates and direction have values outside the tolerance. Meanwhile, the type information of the chaff minutia is generated as a bifurcation point when the real minutia 1000 is an ending point, and as an ending point when it is a bifurcation point.
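A minimal sketch of the acceptance test implied by FIG. 9 (the tolerance values and the helper name are illustrative assumptions, not figures from the patent):

    def outside_tolerance(candidate, real_minutiae, tol_xy=8, tol_angle=15):
        # Returns True when the candidate chaff point (x, y, theta) would NOT be matched
        # to any real minutia; tol_xy and tol_angle stand in for the experimentally defined tolerances.
        cx, cy, ctheta = candidate
        for rx, ry, rtheta, _ in real_minutiae:
            close_xy = abs(cx - rx) <= tol_xy and abs(cy - ry) <= tol_xy
            close_angle = abs((ctheta - rtheta + 180) % 360 - 180) <= tol_angle
            if close_xy and close_angle:
                return False  # falls inside the dotted-line box of FIG. 9, so reject this candidate
        return True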
FIG. 7 shows the real minutiae (white circles) extracted from the fingerprint image. In FIG. 7, I_W and I_H represent the length and breadth of the fingerprint image, respectively. There are five real minutiae. The quadrangle represented by a dotted line around a first minutia m1 represents the tolerance for x and y shown in FIG. 9. FIG. 8 shows the result obtained by adding the chaff minutiae (black circles) to the fingerprint image of FIG. 7. As shown in FIG. 8, the number of addable chaff minutiae is limited by the size of the fingerprint image.
FIG. 3 is a detailed block configuration diagram of another example of the minutiae protecting unit 200 of FIG. 1. The minutiae protecting unit 200 according to the present example further includes a
virtual image generating unit 340 that generates the virtual image in order to add the chaff minutiae. Meanwhile, the minutiae protecting unit 200 has the same configuration as FIG. 2 except that a
chaff minutiae generating unit 370 further generates the chaff minutiae.
FIG. 4 is a flow chart showing a method of adding chaff minutiae exceeding the defined number in the fingerprint image using the virtual image in the present invention. Further, FIGS. 10 and 11 are diagrams for explaining a method of adding more chaff minutiae.
As shown in FIG. 4, the number of chaff minutiae to be added is first determined (410). Once the number is determined, a process of generating the virtual fingerprint image, which determines the length and breadth of the virtual fingerprint image to which the determined number of chaff minutiae can be added, is performed (420). The generated virtual fingerprint image is the quadrangle represented by the outer solid line, and V_W and V_H are the length and breadth of the virtual fingerprint image, respectively. After the process of generating the virtual fingerprint image (420), a translation coordinate for translating the real fingerprint image to an arbitrary position on the virtual fingerprint image is selected (430). The selected translation coordinate is taken as the origin of the real fingerprint image, and the real minutiae are translated to the virtual fingerprint image (440). FIG. 10 shows the result obtained by translating the real minutiae of FIG. 7 to an arbitrary position on the virtual fingerprint image.
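Steps 430 and 440 amount to a random shift of the real minutiae, roughly as in the sketch below (the function name and the uniform choice of offset are assumptions made only to illustrate the flow chart):

    import random

    def place_on_virtual_image(real_minutiae, i_w, i_h, v_w, v_h):
        # Choose a translation coordinate at random and shift the real minutiae
        # from the I_W x I_H fingerprint image onto the larger V_W x V_H virtual image.
        dx = random.randrange(0, v_w - i_w + 1)
        dy = random.randrange(0, v_h - i_h + 1)
        return [(x + dx, y + dy, theta, mtype) for (x, y, theta, mtype) in real_minutiae]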
FIG. 11 shows a result obtained by adding the chaff minutiae corresponding to the determined chaff minutiae to the virtual fingerprint image. As shown in FIG. 11, a larger number of chaff minutiae
than the added chaff minutiae shown in FIG. 8 can be added.
FIG. 5 is a detailed configuration diagram of the fingerprint comparing unit shown in FIG. 1. As shown in FIG. 5, the fingerprint comparing unit 500 includes a fingerprint alignment unit 510, a coinciding pair of minutiae selecting unit 520, an error correcting unit 530, a polynomial recovering unit 540, a unique information recovering unit 550, and a user verifying unit 560.
The fingerprint comparing unit 500 separates the real minutiae from the registration feature (fourth material structure) in which the real minutiae and the chaff minutiae are mixed in order to verify
the fact that the user is the person in question by using the fingerprint and then, recovers the same polynomial as one generated from the polynomial generating unit 230 by using the real minutiae
and obtains the unique information from the recovered polynomial.
In the verification process shown in FIG. 1, the fingerprint comparing unit 500 measures the similarity between the verification fingerprint minutiae, extracted by the verification fingerprint minutiae extracting unit 400 from the verification fingerprint provided by the user, and the registration minutiae registered in the database of the storing unit 300. At this time, the coordinate values of the minutiae are translated and the direction is rotated each time a fingerprint is placed on the fingerprint input apparatus, even for the same person.
Therefore, the fingerprint alignment unit 510 performs an alignment process of translating and rotating two fingerprints as much as the changed amount.
The coinciding pair of minutiae selecting unit 520 selects the coinciding pair of minutiae among the registration minutiae and the verification fingerprint minutiae aligned in the fingerprint
alignment unit 510.
Thereafter, the same polynomial as the one generated by the polynomial generating unit 230 should be obtained from the selected coinciding pairs of minutiae using simultaneous equations. However, some of the chaff minutiae may be selected as coinciding pairs of minutiae, due to errors when the fingerprint image is obtained through the fingerprint obtaining apparatus, errors when the features are extracted from the obtained fingerprint image, and so on, even for the fingerprint of the same person. When a chaff minutia is selected as a coinciding pair, the same polynomial as the one generated by the polynomial generating unit 230 cannot be recovered using the simultaneous equations. Therefore, a process of correcting the errors is needed, and the error correcting unit 530 performs the error correcting process of excluding the chaff minutiae from the coinciding pairs of minutiae.
The polynomial recovering unit 540 receives the coinciding pairs of minutiae, consisting only of the real minutiae output by the error correcting process of the error correcting unit 530, and uses them to recover the same polynomial as the one generated by the polynomial generating unit 230 of FIG. 2. For example, when the polynomial generated by the polynomial generating unit 230 is a polynomial of degree 5, if 6 or more pairs of real minutiae are extracted, the same polynomial can be recovered with simultaneous equations using the x and f(x) values in the real minutiae information (x, y, θ, type, f(x)) as the input.
However, when a chaff minutia is selected as a coinciding pair, the same polynomial cannot be recovered by the simultaneous equations, since β≠f(x') in the minutiae information (x', y', θ', type', β), as described in the operation of the chaff minutiae generating unit 250. If the error correcting unit 530 uses a method such as a Reed-Solomon decoding algorithm, the result of the error correcting unit 530 may already be the recovered polynomial.
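One way the "simultaneous equations" step can be realized is Lagrange interpolation over the same finite field; the sketch below assumes a prime field of size P for simplicity, and is only an illustration of the idea, since the patent does not fix the solving method beyond mentioning the Reed-Solomon alternative:

    P = 65537  # assumed prime field size, matching the registration-side sketch

    def poly_mul_linear(poly, root):
        # Multiply the coefficient list `poly` (lowest degree first) by (x - root) over GF(P).
        out = [0] * (len(poly) + 1)
        for k, c in enumerate(poly):
            out[k + 1] = (out[k + 1] + c) % P
            out[k] = (out[k] - c * root) % P
        return out

    def recover_polynomial(points):
        # Lagrange interpolation over GF(P): recover [a0, a1, ...] from distinct (x, f(x)) pairs.
        n = len(points)
        coeffs = [0] * n
        for i, (xi, yi) in enumerate(points):
            basis = [1]
            denom = 1
            for j, (xj, _) in enumerate(points):
                if j == i:
                    continue
                basis = poly_mul_linear(basis, xj)
                denom = denom * (xi - xj) % P
            scale = yi * pow(denom, P - 2, P) % P  # divide by denom via Fermat inverse
            for k, c in enumerate(basis):
                coeffs[k] = (coeffs[k] + c * scale) % P
        return coeffs

With a degree-5 polynomial, any 6 genuine (x, f(x)) pairs passed to recover_polynomial would reproduce its coefficients; a chaff pair would yield a different, and therefore wrong, polynomial.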
The unique information recovering unit 550 obtains the unique information using the coefficient or root of the recovered polynomial.
The user verifying unit 560 verifies the user as the person in question when the unique information obtained from the unique information recovering unit 550 is the same as the unique information generated by the unique information generating unit 220 of FIG. 2. In particular, the fingerprint alignment unit 510 and the coinciding pair of minutiae selecting unit 520 of FIG. 5 may use a general fingerprint verification algorithm.
According to the present invention, the information of the real minutiae of the user can be hidden among the information of the chaff minutiae, so that the important fingerprint information of the user stored in the storage unit is safely protected from an external attacker, and the fingerprint minutiae information cannot be reused because the attacker cannot identify the real minutiae even when the stored fingerprint minutiae information is leaked to the outside.
In other words, 1) the present invention improves security over the existing method by generating the virtual fingerprint image and then adding more chaff minutiae. Further, 2) the present invention solves the security problem that arises when comparing fingerprint features, including the real and chaff minutiae, registered twice. In addition, since fingerprint minutiae are concentrated around the middle area of the fingerprint image, minutiae lying at the edge of the fingerprint image can ordinarily be assumed to be chaff minutiae once chaff minutiae are added. However, according to the present invention, 3) since the fingerprint image is translated to an arbitrary position of the virtual fingerprint image, the minutiae lying at the edge of the virtual fingerprint image cannot be assumed to be chaff minutiae, thereby making it possible to improve the security.
While configurations of certain embodiments have been described above with reference to the accompanying drawings, it is by way of example only. Those skilled in the art can make various
modifications and changes within the technical spirit of the present invention. Accordingly, the actual technical protection scope of the present invention must be determined by the spirit of the
appended claims.
Summary: 18.014ESG Problem Set 8
Pramod N. Achar
Fall 1999
1. Suppose f : [a, b] → R is continuous; furthermore, suppose that it is
differentiable on (a, b). Show that if Df(x) > 0 for all x ∈ (a, b), then
f is strictly increasing. (Hint: If f were not strictly increasing, use the
Mean-Value Theorem to find a point where Df is zero or negative.)
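For reference, the contradiction the hint points at can be phrased as follows (a sketch added for clarity, not part of the original assignment): if f were not strictly increasing, there would be points a <= x_1 < x_2 <= b with f(x_1) >= f(x_2), and the Mean-Value Theorem would give some c in (x_1, x_2) with
\[ Df(c) = \frac{f(x_2) - f(x_1)}{x_2 - x_1} \le 0, \]
contradicting the hypothesis that Df(x) > 0 on (a, b).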
2. Exercises 19 and 21 from Section 5.5 of Apostol. Both of these exercises
involve computing derivatives of functions defined in terms of integrals.
But be careful--you cannot apply the First Fundamental Theorem directly
to either of them.
3. Prove that there exists at least one positive number a such that cos a = 0.
(Hint: Suppose that cos x ≠ 0 for all x > 0. Show that cos x would have to
be positive for all x > 0. Then show that sin x would be strictly increasing
for positive x. It follows that if x > 0, then
0 = sin 0 < sin x < sin 2x = 2 sin x cos x.
From this, derive the inequality (2 cos x - 1) sin x > 0 for x > 0. Then,
show that cos x > 1/2 for x > 0. It follows that
The IF function: what it is, and how to use it
We all deal with ifs in our daily life. For example, if it is cold you might put on a sweater; if it is hot, you’ll probably take the sweater off. In Excel you use the IF function to deal with
situations where you want to see a result based on whether a condition you specify is True, or False. Essentially, after the condition is determined to be true or false, there’s a fork in the road:
Excel will take one path if the condition True, or a different path if the condition is False.
IF Arguments
● Logical_test is any value or expression that can be evaluated to TRUE or FALSE. For example: A2<=100, or A2>B2.
● Value_if_true is the value that is returned if the logical_test is TRUE. If omitted, TRUE is returned.
● Value_if_false is the value that is returned if the logical_test is FALSE. If omitted, FALSE is returned.
Tips: If you don't recall the arguments, Excel can help you out.
● Click the Formulas tab on the ribbon, and then click Insert Function. Type IF in the Search for a function box, click IF in the Select a function list, and then click OK. The Function Arguments
dialog box opens, with an explanation for each argument.
● Or you can use Formula AutoComplete to get help. Just type the equal sign (=) and IF(. A screentip appears, with the name of each argument in its proper order. If you’re not sure about an
argument, click the name of the function in the screentip to get a Help topic. If Formula AutoComplete does not appear when you type a formula, the feature may have been turned off. To turn it
on, click the File tab in Excel, click Options, and then click the Formulas category. Under Working with formulas, select Formula AutoComplete.
Comparison operators
For the IF function to work you must use a comparison operator in the Logical_test.
● < less than
● > greater than
● = equals
● <= less than or equal to
● >= greater than or equal to
● <> not equal to
IF example
Suppose you are tracking expenses and want to see if figures are within budget or over budget. In this example, anything less than or equal to $100 is Within budget. Anything over $100 is Over budget.
Here's the formula: =IF(A2<=100,"Within budget","Over budget")
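If you want to check the same logic outside Excel, here is a rough Python equivalent (the function name and test values are just for illustration):

    def budget_status(amount):
        # Mirrors =IF(A2<=100, "Within budget", "Over budget")
        return "Within budget" if amount <= 100 else "Over budget"

    print(budget_status(85))    # Within budget
    print(budget_status(140))   # Over budget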
Nested IF example
The previous example has two possible outcomes: Within budget, or over budget. To get additional possible outcomes, you can use more than one IF function in a formula, which is called nesting.
In this example, suppose you need to figure out salary deductions. There are three possible outcomes: a salary deduction of 6% for salaries less than $25,000, 8% for salaries from $25,000 to $39,999, or 10% for salaries greater than or equal to $40,000.
Here’s the formula: =B7*IF(B7<25000,$B$2,IF(B7<40000,$B$3,$B$4)) Notice that the formula begins with multiplication (B7 times the result of the IF function). Also, the formula contains absolute
references to cells B2, B3, and B4.
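The nesting translates naturally into an if/elif chain. The sketch below mirrors the formula in Python, with low, mid, and high standing in for the absolute references $B$2, $B$3, and $B$4 (assumed here to hold 6%, 8%, and 10%):

    def salary_deduction(salary, low=0.06, mid=0.08, high=0.10):
        # Mirrors =B7*IF(B7<25000, $B$2, IF(B7<40000, $B$3, $B$4))
        if salary < 25000:
            rate = low
        elif salary < 40000:
            rate = mid
        else:
            rate = high
        return salary * rate

    print(salary_deduction(30000))   # 2400.0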
While you can nest a great number of functions within functions, don’t let your formulas get too complicated. As an alternative to long IF nested formulas, consider using the VLOOKUP function.
There’s information about it at the top of the page.
Sizing Circuit Protection and Conductors — Part 3
Why you size motor circuit conductors and protection differently than you do for circuits of other types of loads
Motors differ from other types of loads in one important way: The motor needs much more current to start than to run. This temporary, but significant, inrush current is what complicates motor circuit
protection. The overcurrent protection device (OCPD) must accommodate inrush current while still protecting the conductors. The conductors must be able to handle and dissipate the short-term increase
in heat from that starting current. Much of Art. 430 is concerned with getting both of these requirements right.
For other types of loads, a single device provides overcurrent protection and overload protection. Because of inrush, motor circuits handle those functions separately — that is, the job of protecting
the conductors and the load gets split up for motor circuits. Fault protection opens the circuit when there’s a high level of excess current, such as from a fault or short circuit. But an overload is
a relatively small amount of excess current. OCPDs protect against current in excess of the rated current of the equipment or ampacity of a conductor, whether it’s a motor circuit or not. Normally,
the OCPD also handles overloads. But with motor circuits, separate thermal overloads do that.
As with any circuit, analyze the loads before sizing motor OCPDs and conductors. Here are some highlights:
1. If multiple motors are on the same feeder and/or branch circuits, look at load diversity. Do any of these operate in a mutually exclusive fashion?
2. Which loads are continuous? Noncontinuous?
3. Does the system include a variable-frequency drive (VFD)? Is it power-factor corrected? Does it have harmonics mitigation?
4. Do you need to derate conductors for voltage drop?
5. What type of motor(s) are you installing? For example, part-winding motors have additional requirements. Review Art. 430 Part I when making this assessment.
6. Is this application covered by another article [Table 430.5]?
Sizing motor circuit conductors
Normally, we size the OCPD and then the conductors. You find this order of calculation in the examples of Appendix D. But with motors, your second step is to size the conductors [Table 430.1].
Not coincidentally, the requirements are in Art. 430 Part II. In Part III, you’ll find the requirements for providing thermal protection to the motor; that’s outside the scope of this discussion.
Part II applies to motors operating at under 600V, such as typical 480V industrial motors. If your motor operates at over 600V, use Part XI instead.
How you proceed depends on whether these conductors are for:
• Single motor. Apply 430.22. Then see if any of subsections (A) through (I) apply to your installation.
• Wound rotor secondary. Apply 430.23.
• More loads than just that motor. Apply 430.24.
• Combination load equipment. Apply 430.25.
• Motors with PF capacitors installed. Apply 430.27.
• Constant voltage DC motors. Apply 430.29.
If you have load diversity, you can apply a feeder demand factor [430.26]. If you’re using feeder taps, apply 430.28.
To make things simple, let’s assume you need to size the conductors for a single, continuous-duty 40HP motor. Those conductors must have an ampacity of at least 125% of the motor FLC. Use the FLC
from the motor nameplate. If, for some reason, you can’t get this from the nameplate or the motor data sheet, use the applicable NEC table (e.g., Table 430.250). It’s best, however, to obtain the
information from the motor manufacturer (and then affix a new nameplate to the motor).
Your motor’s nameplate says its FLC is 53A. Coincidentally, this is close to the 52A shown in Table 430.250. You now have a three-step process for sizing the conductors:
Step 1. Identify the table. You’re running three THHN conductors in intermediate metal conduit (IMC). Use Table 310.15(B)(16).
Step 2. Apply the temperature correction factor. Determine the maximum ambient temperature — not for where the motor is, but for where the conductors are running.
Suppose these will run overhead, and you know the maximum temperature will be 110°F (43°C).
From Table 310.15(B)(2)(b), you see you must multiply the allowable ampacity by 0.87. Alternatively, you can multiply 53A by 1.15, which gives you 61A.
But what if your ceiling temperature is, say, 160°F (71°C)? In the 60°C column, the ambient temperatures end at 55°C. In such a case, split the run; see Annex D3(a) for an example of how to do this.
Step 3. Size the conductor per the required ampacity. Using the 60°C column, you see this circuit requires a conductor at least 4 AWG.
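If you want to script the lookup in this worked example, a minimal sketch follows; the ampacity values are only a partial, illustrative excerpt of the 60°C copper column and must be verified against Table 310.15(B)(16):

    # Partial, illustrative excerpt of 60 deg C copper ampacities -- verify against Table 310.15(B)(16).
    AMPACITY_60C = {"6 AWG": 55, "4 AWG": 70, "3 AWG": 85, "2 AWG": 95}

    def smallest_conductor(load_amps, correction_factor):
        # Step 2 of the example: inflate the load instead of derating the column,
        # e.g. 53A / 0.87 is roughly 53A x 1.15, or about 61A.
        adjusted = load_amps / correction_factor
        for size, ampacity in AMPACITY_60C.items():   # ordered smallest to largest conductor
            if ampacity >= adjusted:
                return size
        return None   # the run needs a larger conductor than this partial table covers

    print(smallest_conductor(53, 0.87))   # '4 AWG', matching the example above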
OCPD sizing
Size the OCPD per 430 Part III, noting that you use the motor nameplate current rating (FLA), not the FLC [430.6(A)].
In our example, you sized the conductors for a single motor that has 53A FLC. Let’s assume the motor is on its own branch circuit. How do you size the OCPD for that circuit? The answer is in 430.52.
How would you know to go there? Earlier, you referred to Table 430.1 to see what your second step is. Go back to Table 430.1 as you continue to work out your motor requirements. For this step, Table
430.1 directs you to Part IV.
There, you’ll read that the OCPD must be capable of carrying the starting current of the motor [430.52(B)]. This doesn’t mean you size the OCPD for the starting current, however. The meaning of this
emerges in 430.52(C). You need to specify an OCPD per Table 430.52.
When you size the overload, you use the FLA. But to size the OCPD, you use the FLC. First, find your motor type and OCPD type in Table 430.52. Then, multiply your FLC by the percentage of FLC
required by the chart.
Using the 53A FLC of our example with an inverse time breaker, you multiply the FLC by 2.5 for a maximum rating of 132.5A. This isn’t a standard OCPD size [240.6(A)]. Since 430.52(C)(1) says the OCPD
can’t exceed this calculated value, do you use the next size down? If you read on just a bit, you’ll see that Exception No. 1 lets you use the next size up. So for this branch circuit, you need a
150A breaker.
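A quick sketch of that calculation (the list of standard ratings is a partial excerpt of 240.6(A), included only for illustration):

    STANDARD_RATINGS = [90, 100, 110, 125, 150, 175, 200]   # partial excerpt of 240.6(A)

    def inverse_time_breaker(flc, multiplier=2.5):
        # Table 430.52: 250% of FLC for an inverse time breaker, then the next standard
        # size up as permitted by Exception No. 1.
        calculated = flc * multiplier
        for rating in STANDARD_RATINGS:
            if rating >= calculated:
                return calculated, rating
        return calculated, None

    print(inverse_time_breaker(53))   # (132.5, 150), matching the 150A breaker above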
If it turns out that the breaker trips every time you try to start the motor, then what? Determine if the trip is due to a fault or from overload. If it’s overload, Exception No. 2 lets you use a
larger OCPD.
But there are limits to how big that OCPD can be. You may have voltage drop or power factor problems that lead to excess starting current. In addition to correcting these, consider a soft-start or
VFD so you eliminate across-the-line starting. A big advantage of using a soft-start or VFD is you eliminate a common cause of cable failure. The power anomalies resulting from across the line
starting can damage loads on other feeders, not just the motor system.
Multiple loads
Now let's change our example slightly. Suppose this motor is on a branch circuit with two other motors and four electric heaters. The loads are 40A, 27A, and 40A, respectively. How do you size the OCPD and conductors?
Unless you have a compelling reason to put these loads on the same branch circuit, it's generally best to put each motor on its own branch circuit.
First, analyze the loads. If that 40A motor runs an HVAC compressor, you can disregard the 40A heaters; these are mutually exclusive loads. To see why not to disregard the motor instead, review the
calculations we just did.
Next, turn to 430.53. The first requirement is that you must use fuses or inverse time circuit breakers. So when you use Table 430.52, ignore the two middle columns. Apply Table 430.52 to the sum of
your motor loads, then add to the sum of the other loads.
For feeder OCPD requirements, turn to Part V. The key here is the “don’t exceed the rating” requirement of 430.52(C)(1) applies only to the largest load.
For the conductor requirements on that feeder, turn to Part II. To determine the minimum conductor ampacity [430.24]:
1. Multiply the FLC of the largest motor by 125%.
2. Add up the FLCs of the other motors.
3. Multiply the continuous non-motor loads by 125%.
4. Add up all of the above to the total of the non-continuous loads.
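The four steps above reduce to a simple sum; here is a minimal sketch (load-diversity adjustments under 430.26 are not applied):

    def feeder_min_ampacity(motor_flcs, continuous_loads, noncontinuous_loads):
        # Steps 1-4 above, per 430.24.
        largest = max(motor_flcs)
        others = sum(motor_flcs) - largest
        return (1.25 * largest                   # step 1: 125% of the largest motor FLC
                + others                         # step 2: the remaining motor FLCs
                + 1.25 * sum(continuous_loads)   # step 3: continuous non-motor loads at 125%
                + sum(noncontinuous_loads))      # step 4: non-continuous loads

    # For the example loads, dropping the 40A heaters as mutually exclusive with the HVAC motor:
    print(feeder_min_ampacity([53, 40, 27], [], []))   # 133.25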
Avoiding confusion
To size conductors and OCPDs for branch circuits, follow the steps shown in Fig. 1. The steps for feeders modify the steps for branch circuits, as shown in Fig. 2. But if you have motor circuits,
remember that the inrush current of motors changes things:
• The normal functions of the OCPD are split. With motors, you have an additional device that does the overload protection job normally done by the OCPD.
• You use multipliers for sizing the conductors (125%) and the OCPDs [Table 430.52].
If you understand the branch circuit conductor and OCPD sizing steps, it’s just a matter of modifying them a bit for feeders and/or motors.
Lamendola is an electrical consultant based in Merriam, Kan. He can be reached at comments@mindconnection.com
Discuss this Article 3
on Apr 27, 2013
Series of articles on this vital design issue have been found highly supporting to engineering design consultant like me.
on Jul 8, 2013
I have found this article very useful however, I'm confused about the use of acronyms FLA & FLC in the OCPD sizing section of the article. I assume these stand for "Full Load Amps" & "Full Load
Current", what is the difference?
Mark Lamendola (not verified)
on Jul 9, 2013
Don't feel alone in finding this confusing. I have sometimes mixed these up, despite "knowing" the difference.
You don't see these defined in 430.2 or in Article 100. But when applying Article 430, you will use one or the other depending upon what step you're in relative to Figure 430.1. It's clear from the
requirements which one to use.
1. FLC is on the nameplate. One use is for sizing motor overloads.
2. FLA is a table value. One use is for sizing conductors and overcurrent protective devices [430.6(A)] for general motor applications.
I hope that helps.
Analyzing vulnerabilities of critical infrastructures using flows and critical vertices in and/or graphs
Desmedt, Y; Wang, YG; (2004) Analyzing vulnerabilities of critical infrastructures using flows and critical vertices in and/or graphs. In: INTERNATIONAL JOURNAL OF FOUNDATIONS OF COMPUTER SCIENCE.
(pp. 107 - 125). WORLD SCIENTIFIC PUBL CO PTE LTD
Full text not available from this repository.
AND/OR graphs and minimum-cost solution graphs have been studied extensively in artificial intelligence (see, e.g., Nilsson [14]). Generally, the AND/OR graphs are used to model problem solving
processes. The minimum-cost solution graph can be used to attack the problem with the least resource. However, in many cases we want to solve the problem within the shortest time period and we assume
that we have as many concurrent resources as we need to run all concurrent processes. In this paper, we will study this problem and present an algorithm for finding the minimum-time-cost solution
graph in an AND/OR graph. We will also study the following problems which often appear in industry when using AND/OR graphs to model manufacturing processes or to model problem solving processes:
finding maximum (additive and non-additive) flows and critical vertices in an AND/OR graph. A detailed study of these problems provide insight into the vulnerability of complex systems such as
cyber-infrastructures and energy infrastructures (these infrastructures could be modeled with AND/OR graphs). For an infrastructure modeled by an AND/OR graph, the protection of critical vertices
should have highest priority since terrorists could defeat the whole infrastructure with the least effort by destroying these critical points. Though there are well known polynomial time algorithms
for the corresponding problems in the traditional graph theory, we will show that generally it is NP-hard to find a non-additive maximum flow in an AND/OR graph, and it is both NP-hard and coNP-hard
to find a set of critical vertices in an AND/OR graph. We will also present a polynomial time algorithm for finding a maximum additive flow in an AND/OR graph, and discuss the relative complexity of
these problems.
Type: Proceedings paper
Title: Analyzing vulnerabilities of critical infrastructures using flows and critical vertices in and/or graphs
Event: 8th Annual International Conference on Computing and Combinatorics
Location: SINGAPORE
Dates: 2002-08-15 - 2002-08-17
Keywords: AND/OR graphs, vulnerability, critical certices, flows, HEURISTIC-SEARCH, ALGORITHM
UCL classification: UCL > School of BEAMS > Faculty of Engineering Science > Computer Science
|
{"url":"http://discovery.ucl.ac.uk/139157/","timestamp":"2014-04-19T18:39:25Z","content_type":null,"content_length":"24817","record_id":"<urn:uuid:8cdbd859-9c69-4153-95a3-ebe8be56fc27>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Diophantine Approximation in Higher Dimensions
up vote 1 down vote favorite
Let $\mathbf{x} \in \mathbb{R}^K$ be an irrational vector. Assume that $\|\mathbf{x}\|^2 \leq 1$. It is known that for all $N > 1$, there exist $p_1 \in \mathbb{N}$ and $\mathbf{q}_1 \in \mathbb{N}^{K}$ such that $|p_1| < N$ and
$$\|p_1\mathbf{x} - \mathbf{q}_1\| \leq \frac{C}{N^{\frac{1}{K}}}$$
where $C$ is some constant.
I want to find another K-1 pairs $(p_2, \mathbf{q}_2)$, .., $(p_K, \mathbf{q}_K)$ where $\mathbf{p}_1,..,\mathbf{p}_K \in \mathbf{N}^K$ are linearly independent and such that
$$\max_i \|p_i\mathbf{x} - \mathbf{q}_i\| \leq \frac{C}{N^{\frac{1}{K}}}$$
How large will $\max_i |p_i|$ be? Can I characterize it as a function of N (for large N)? Will it be on the order of $N$ still?
nt.number-theory diophantine-approximation
1 For the algebraic case the Schmidt subspace theorem should help here. en.wikipedia.org/w/…. Otherwise using the pigeonhole principle in a similar manner as in the standard proof of Dirichlet's
approximation theorem will give a bound. – George Lowther Jan 27 '11 at 21:11
I think that ${\bf q}_1$ has to be in ${\bf Z}^K$ (not ${\bf N}^K$). I think it's the ${\bf q}_j$ that want to be linearly independent (not the ${\bf p}_j$). – Gerry Myerson Jan 27 '11 at 21:56
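For reference, here is a sketch of the pigeonhole argument the first comment alludes to, for the single pair $(p_1,\mathbf{q}_1)$ only (the constant and the choice of norm are not optimized, and I write $\mathbf{q}_1 \in \mathbb{Z}^K$ as per the comment above). Let $Q = \lfloor N^{1/K} \rfloor$ and split $[0,1)^K$ into $Q^K \leq N$ boxes of side $1/Q$. Among the $Q^K + 1$ points given by the componentwise fractional parts $\{n\mathbf{x}\}$ for $n = 0, 1, \ldots, Q^K$, two must land in the same box, say for $n_1 < n_2$. Then $p_1 = n_2 - n_1 \leq Q^K \leq N$ and there is $\mathbf{q}_1 \in \mathbb{Z}^K$ with
$$\|p_1\mathbf{x} - \mathbf{q}_1\|_{\infty} \leq \frac{1}{Q} \leq \frac{C}{N^{\frac{1}{K}}}.$$
How to run this argument while forcing $K$ linearly independent $\mathbf{q}_i$, and how large $\max_i |p_i|$ must then be, is exactly what the question asks.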
add comment
|
{"url":"http://mathoverflow.net/questions/53535/diophantine-approximation-in-higher-dimensions","timestamp":"2014-04-18T00:56:13Z","content_type":null,"content_length":"48305","record_id":"<urn:uuid:d8e815fb-afa5-41cb-a59a-7094878c3000>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
|
North Hills, NY Calculus Tutor
Find a North Hills, NY Calculus Tutor
...Algebra 2 is vital for students’ success on the ACT, SAT 2 Math, and college mathematics entrance exams. Microsoft Excel is an extremely powerful spreadsheet tool and may initially seem
confusing, but with a little practice, can perform many diverse functions. I am confident that I will be able...
26 Subjects: including calculus, writing, statistics, geometry
...Whether coaching a student to the Intel ISEF (2014) or to first rank in their high school class, I advocate a personalized educational style: first identifying where a specific student's
strengths and weaknesses lie, then calibrating my approach accordingly. Sound interesting? I coach students ...
32 Subjects: including calculus, reading, physics, geometry
...My services also cover college/high school math and science courses. Since I have extensive experience working with local students attending Columbia, NYU, Hunter, and other CUNY schools, I am
quite familiar with the testing style of specific professors. Lastly, I am flexible with times and meeting locations.
24 Subjects: including calculus, chemistry, physics, biology
...When I am not busy with school or leisure, I will be helping you or your child grasp a difficult subject or improve a test score. I encourage my students to talk me through their thought
process when tackling a problem, so that I can pinpoint exactly what is preventing them from arriving at the ...
22 Subjects: including calculus, chemistry, physics, geometry
...I also passed actuarial exams that dealt heavily in probability theory. I am currently working as an Actuarial associate at an insurance company. I currently tutor Exams P, FM, MFE, and MLC.
15 Subjects: including calculus, algebra 1, algebra 2, finance
|
{"url":"http://www.purplemath.com/North_Hills_NY_Calculus_tutors.php","timestamp":"2014-04-18T00:52:10Z","content_type":null,"content_length":"24205","record_id":"<urn:uuid:71e8113c-998b-47e4-8dc9-a7ecb7b964a5>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: Mathematical Notes, Vol. 59, No. 4, 1996
The Class of Groups All of Whose Subgroups with Lesser Number
of Generators are Free is Generic
G. N. Arzhantseva and A. Yu. Ol'shanskii UDC 512.5
ABSTRACT. It is shown that, in a certain statistical sense, in almost every group with m generators and n
relations (with m and n chosen), any subgroup generated by less than m elements (which need not belong to
the system of generators of the whole group) is free. In particular, this solves Problem 11.75 from the Kourovka
Notebook. In the proof we introduce a new assumption on the defining relations stated in terms of finite marked
As is well known, in the fundamental group of an oriented closed compact surface of genus g, all
subgroups of infinite index are free. In particular, this means that in the group
$S_g = \langle a_1, \ldots, a_g, b_1, \ldots, b_g \mid [a_1, b_1] \cdots [a_g, b_g] = 1 \rangle$
with 2g generators, any subgroup with a lesser number of generators (not necessarily from the original set $\{a_1, \ldots, a_g, b_1, \ldots, b_g\}$) is free. In this connection the second author posed the problem of finding
conditions on the defining relations under which the subgroups of a finitely defined group with a "small" number of generators are free. Such conditions were found by V. S. Guba in the paper [1], where subgroups
with two generators were considered. Besides the condition of small cancellation, there was a condition
on "long 2-subwords" of defining words.
To study groups with free k-generated subgroups with k > 2, we had to find an appropriate gene-
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/373/2159697.html","timestamp":"2014-04-21T12:59:41Z","content_type":null,"content_length":"8678","record_id":"<urn:uuid:654f86dd-ddc1-4882-a2ef-1d1496c92466>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Derivative Quiz Bonus
January 2nd 2008, 05:12 PM #1
Junior Member
Oct 2007
Derivative Quiz Bonus
The product of two positive numbers is 588. The sum of the first and three times the second is minimized. Find the two numbers.
a) 42 and 14
b) 28 and 21
c) 49 and 12
d) 84 and 7
e) Both numbers are 14sqrt(3)
Our teacher gave us a take home quiz on derivatives that I blew through no problem, but I have no idea how to do this bonus. I've never seen the term "minimized" before but I'm guessing it has
something to do with a relative minimum.
indeed it does have to do with that. set up your equations:
let the two numbers be $x$ and $y$, x is the first number, y is the second
we have that $xy = 588 \implies x = \frac {588}y$
we are also given a formula to be minimized, let's call it $S$
so, $S = x + 3y$
replace x with the formula from the first equation, we have:
$S = \frac {588}y + 3y$
Now, minimize this function, that is, find its local minimum; the value for y that gives that is one of the numbers you seek. Use it to find the other number (Hint: using the quotient rule to
differentiate the first term is a waste of time)
Thanks for your help so far, very easy to follow
However, I'm stuck on the last step (it's my first day back from winter vacation heh).
To find a local minimum, normally I take the derivative, find the critical points, then create a sign chart to see a sign change from negative to positive.
Now, I'm not familiar with derivatives involving y. I think it has something to do with implicit differentiation which we have yet to actually implement. Ignoring this though, I would find the
derivative to be:
S' = -588y^-2 + 3
Instead of using the quotient rule, I rewrote 588/y as 588y^-1. If I'm correct up to here (which I'm guessing I'm not because I'm not sure how to find the derivative with respect to y) what do I
do next?
implicit differentiation not needed here. we are working in one variable, namely y, and we are differentiating with respect to y, so it's fine. yes, you did the differentiation correctly. now
continue. find the critical point(s) of S by setting S' = 0 and solving for y (usually sign charts are not necessary with these kinds of questions, they are called "Optimization problems" by the
Oh haha, I'm such a chump. I tried using my graphing calculator and had it setup all wrong, but when I did it out by hand it was much easier. I solved for y in that equation and got 14, which
means the two numbers are 42 and 14.
Thanks for the help, extra credit here I come!
no problem, you're welcome
good luck with your class
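Putting the whole calculation from this thread in one place (with the constant written as 588, as in the original problem):
$$S(y) = \frac{588}{y} + 3y, \qquad S'(y) = -\frac{588}{y^2} + 3 = 0 \;\Longrightarrow\; y^2 = 196 \;\Longrightarrow\; y = 14, \qquad x = \frac{588}{14} = 42,$$
and since $S''(y) = 1176/y^3 > 0$ for $y > 0$, this critical point really is a minimum, giving answer (a), 42 and 14.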
|
{"url":"http://mathhelpforum.com/calculus/25479-derivative-quiz-bonus.html","timestamp":"2014-04-17T21:38:21Z","content_type":null,"content_length":"55327","record_id":"<urn:uuid:382b1a4b-1fed-44c7-b99a-4df6e9eaecab>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Gyroscope vs Accelerometer?
Now that iOS 4 is no longer NDA, I would like to know what Gyroscope has to offer over the Accelerometer for developers. Is there a difference in APIs? Other things?
up vote 37 down vote favorite iphone ios4 accelerometer gyroscope
add comment
A MEMS gyroscope is a rate-of-change device. As the device rotates about any of its axes, you can see a change in rotation. An accelerometer only provides the force along the X, Y, and Z vectors, and cannot solve for "twist". By using both sensors, you can often implement what is referred to as a 6DOF (degrees of freedom) inertial system - or dead reckoning - allowing you to find the relative physical location of the device. (Note that all inertial systems drift, so it's not stable in the long term.)
up vote 28 down vote
In short: gyroscopes measure rotation, accelerometers measure translation.
There is a new API for reading the gyroscope.
add comment
Actually, the accelerometer measures linear acceleration; but since force is equal to mass times acceleration, people can consider it as measuring force as well, as long as it has a constant mass. Linear acceleration is the rate of change of linear velocity. A gyro, on the other hand, provides the angular rotational velocity measurement as opposed to the linear acceleration of movement. Both sensors measure the rate of change; they just measure the rate of change for different things.
up vote 24 down vote
Technically, it is possible for a linear accelerometer to measure rotational velocity. This is due to the centrifugal force the device generates when it is rotating. The centrifugal force is directly related to its rotational speed. As a matter of fact, many MEMS gyro sensors actually use linear accelerometers to determine the rotational speed by carefully placing them in certain orientations and measuring the centrifugal forces to compute the actual rotational gyro speed.
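To illustrate in code why the two sensors complement each other, here is a generic complementary-filter sketch: the gyroscope rate is integrated for short-term responsiveness, while the accelerometer's gravity direction corrects the long-term drift. The class name, axis conventions, and the weight alpha are my own illustrative choices; this is not the iOS API referred to above.

// Minimal complementary filter for one tilt angle (pitch), combining gyro and accelerometer.
public final class ComplementaryFilter {
    private final double alpha;   // weight on the integrated gyro estimate, e.g. 0.98
    private double pitchRad = 0;  // current pitch estimate in radians

    public ComplementaryFilter(double alpha) {
        this.alpha = alpha;
    }

    // gyroRate: rotation rate about the pitch axis in rad/s (gyroscope)
    // accelY, accelZ: accelerations in g (accelerometer); dt: time step in seconds
    public double update(double gyroRate, double accelY, double accelZ, double dt) {
        double gyroPitch = pitchRad + gyroRate * dt;     // integrate angular velocity (drifts over time)
        double accelPitch = Math.atan2(accelY, accelZ);  // tilt implied by gravity (noisy but drift-free)
        pitchRad = alpha * gyroPitch + (1 - alpha) * accelPitch;
        return pitchRad;
    }

    public static void main(String[] args) {
        ComplementaryFilter filter = new ComplementaryFilter(0.98);
        // one step: 0.1 rad/s about the pitch axis, device nearly flat, 10 ms sample interval
        System.out.println(filter.update(0.1, 0.02, 0.98, 0.01));
    }
}

With alpha close to 1 the filter trusts the gyro over short intervals but still lets the accelerometer pull the estimate back toward the gravity-derived tilt, which is the simplest version of the sensor-fusion idea behind the 6DOF systems mentioned above.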
add comment
|
{"url":"http://stackoverflow.com/questions/3089917/gyroscope-vs-accelerometer","timestamp":"2014-04-25T08:18:49Z","content_type":null,"content_length":"73464","record_id":"<urn:uuid:0e79f896-2387-4f67-b94a-610c6dace555>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chews Landing, NJ Math Tutor
Find a Chews Landing, NJ Math Tutor
...I also love teaching students of all ages. I have been tutoring students from elementary age to adult on a daily basis for more than 15 years. Individualized support for a student is the most
effective and efficient way to gain confidence and mastery in any subject.
23 Subjects: including geometry, elementary math, ASVAB, literature
...My degree is from RPI in Theoretical Mathematics, and thus included taking Discrete Mathematics as well as a variety of other proof-writing courses (Number Theory, Graph Theory, and Markov
Chains to name a few). I absolutely love writing proofs and thinking problems out abstractly. I favor the S...
58 Subjects: including calculus, differential equations, biology, algebra 2
...Stephen B. I am a middle school mathematics teacher in Gloucester County, New Jersey. I have been teaching Math for ten years now after many years in the business world.
10 Subjects: including algebra 1, linear algebra, geometry, prealgebra
...I have taken the SAT, Praxis I and II exams, and the GRE, all passing the first try. I just graduated with my master's degree from UCLA with a 4.0 GPA. I have accomplished this all on my own
and have helped others study and pass exams as well.
39 Subjects: including SAT math, prealgebra, precalculus, reading
...I have been working with youth for 15 years, at a variety of camps, youth leadership organizations and schools. I am experienced at helping students to engage in learning, managing negative
behavior and reinforcing positive behavior. My bachelors degree is in writing.
7 Subjects: including algebra 1, logic, probability, prealgebra
Related Chews Landing, NJ Tutors
Chews Landing, NJ Accounting Tutors
Chews Landing, NJ ACT Tutors
Chews Landing, NJ Algebra Tutors
Chews Landing, NJ Algebra 2 Tutors
Chews Landing, NJ Calculus Tutors
Chews Landing, NJ Geometry Tutors
Chews Landing, NJ Math Tutors
Chews Landing, NJ Prealgebra Tutors
Chews Landing, NJ Precalculus Tutors
Chews Landing, NJ SAT Tutors
Chews Landing, NJ SAT Math Tutors
Chews Landing, NJ Science Tutors
Chews Landing, NJ Statistics Tutors
Chews Landing, NJ Trigonometry Tutors
Nearby Cities With Math Tutor
Almonesson Math Tutors
Blackwood Terrace, NJ Math Tutors
Blenheim, NJ Math Tutors
East Haddonfield, NJ Math Tutors
Echelon, NJ Math Tutors
Ellisburg, NJ Math Tutors
Glendora, NJ Math Tutors
Hilltop, NJ Math Tutors
Oak Valley, NJ Math Tutors
Passyunk, PA Math Tutors
South Camden, NJ Math Tutors
Turnersville, NJ Math Tutors
Verga, NJ Math Tutors
West Collingswood Heights, NJ Math Tutors
Westville Grove, NJ Math Tutors
|
{"url":"http://www.purplemath.com/Chews_Landing_NJ_Math_tutors.php","timestamp":"2014-04-18T21:43:48Z","content_type":null,"content_length":"24050","record_id":"<urn:uuid:f5f68775-d9ac-465e-95e4-d621e289cb8a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Evan Soltas
Greg Mankiw has recommended Edward Conard's new book, Unintended Consequences. Appreciating Mankiw's recommendation, and remembering that I had watched Conard on the Daily Show a little while back, I read the introduction and first chapter.
Mankiw, in particular, pointed to a little chart on page 22 of Conard's book, which I've reproduced below within my understanding of rules of fair use.
I haven't read the entire book, but I want to briefly comment on this chart in particular. Conard uses it to argue that there has been no median-income stagnation in the United States, and that
what's really happened is that since 1970 we've brought into the labor force large numbers of low-income minorities. Conard alleges that the consequence is that even though median real incomes within
each demographic have risen, overall median real income has stagnated.
I admit, it is an intellectually appealing idea. I almost wrote this blog post about it as a follow-up to Wednesday's post, which explained how the emergence of a global market for labor in the early 1970s released the force of factor-price equalization, and as a result, real incomes haven't kept up with productivity.
But then, when I began to look at the data myself, I realized that Conard is wrong. First, I believe his data is inaccurate. Second, his conclusions are testable and do not hold up to my analysis.
Let's talk about the data inaccuracy first. No blogger, especially me, ever wants to have to accuse someone of fudging the numbers. Having to debate the facts distracts from considering the spectrum
of valid opinion facts ought to produce.
But here I am, looking up the Census data, and I can't see where Conard's numbers are coming from. Let me be very specific, in hopes of getting him to respond or having someone explain to me where I've gone wrong. Tables P-2 and P-4 of the
Historical Income Tables of the Census -- the "P" stands for people, as opposed to "H" for households or "F" for families -- do not reflect, even by rough approximation, any of the numbers in the
above chart. (These data include part-time workers, as Conard does.)
Although Conard uses 2005 constant dollars and the new data uses 2010 dollars, we can cross-check his numbers with the 2005 current dollar income on Table P-2. None of them match, and it's not even
close. Here's just one example: Conard's chart says that white men had a median income of $35,200 in 2005; the actual Census data says $31,275.
Nor are Conard's 1980 numbers any good, and what that means is that his percentage changes are also inaccurate. The Census data in P-4 says that median real personal income, inclusive of all
demographic groups, rose 35.7 percent between 1980 and 2005. Conard alleges it increased 3 percent.
Conard's conclusion is that median income stagnation, holding demography constant, is a myth. In fact, he's got it totally backwards. When you don't hold demography constant, median income has risen, although not nearly as quickly as mean income or real GDP per capita. Stagnation in median incomes appears only in particular demographic groups -- namely men, and in particular, white non-Hispanic men. For these groups, there has been no increase in real incomes since the early 70s, and that conclusion is unavoidable.
By extension, median income has risen in the last 40 years because female and some minority median incomes have increased substantially. These increases have moved this large fraction of the population away from being clustered at the low end of the income distribution and towards the all-population median. The statistical consequence is that these demographics stopped holding down the median value of real income, and thus median real personal income across all demographics rose.
3 comments:
1. "Here's just one example: Conard's chart says that white men had a median income of $35,200 in 2005; the actual Census data says $31,275".
The $31,275 figure you cite is for ALL males.
If you go to "White Alone, Not Hispanic" the number for 2005 is $35,345.
Switching that number to 2010 money gets you $39,475. Compare that to "White, Not Hispanic" circa 1980 and you get $34,466 (2010).
That's a 14.53% increase, which is in the ballpark of Conard's chart of 15%.
2. The chart is confusing, and I cannot get it to foot exactly (though I can somewhat recreate it). I'm not convinced you two are really in disagreement, though.
You say, "The Census data in P-4 says that median real personal income, inclusive of all demographic groups, rose 35.7 percent between 1980 and 2005. Conard alleges it increased 3 percent".
Conard writes, right before the chart, "Median wages have increased 30%, on average, across all demographics".
He does not allege that it is 3%. To get the 3% - near as I can tell - he takes 2005 income and weights it using 1980's demographics (obviously he's not saying this is actually what happened
between 1980 and 2005).
So, had there been no demographic change and everything magically remained the same in those terms, overall Median Income would be up 3% (holding demographic income equal).
Adding in the actual change in demographics gets the number to 30% (or above).
That's what I hear Conard saying.
3. I find the description of the tables to be highly confusing. I mean, wth does this mean:
Race and Hispanic Origin of People by Median Income and Sex
@Evan Soltas - where do you get older numbers for non-hispanic whites? From what I can see, the data for non-hispanic whites starts in 1993.
|
{"url":"http://esoltas.blogspot.com/2012/07/inaccurate-consequences.html","timestamp":"2014-04-19T19:59:54Z","content_type":null,"content_length":"76661","record_id":"<urn:uuid:1494a546-d0f1-4e40-9f8d-67ba7a94aea9>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
|
0.9999....(recurring) = 1?
Pages: 1 … 41 42 43 44
Topic closed
Re: 0.9999....(recurring) = 1?
hi Stangerzv
when 0.999999..recurring=3x0.3333333..recurring or 3 x 1/3=1
That works for me. When I calculate (on paper) 1 divided by 3, I stop and write 'recurring' because I don't have any paper big enough.
So arithmetic only works for me if these are the same.
I edited my last post. Did you get the bit about 13 not existing in some universes ?
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: 0.9999....(recurring) = 1?
I like being able to use the formula for geometric series.
This proof is originally from Euler although anyone else could do it.
We have a common ratio r = 1 / 10.
Without the use of this theorem practical mathematics would fall apart. .9999999... = 1 works for me.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
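Spelling out the geometric-series computation referred to in the post above (the common ratio $r = 1/10$ is the one stated there; the rest is the standard formula $\sum_{k=0}^{\infty} ar^k = \frac{a}{1-r}$ for $|r| < 1$):
$$0.999\ldots = \sum_{k=1}^{\infty} \frac{9}{10^k} = \frac{9/10}{1 - 1/10} = 1.$$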
Re: 0.9999....(recurring) = 1?
Interesting how this thread got so long.
0.999... = 1 is a mathematical fact. The reason it is so difficult for people to understand may be due to confusion over the concept of infinity. Here are some different ways to think about it:
1) the geometric series pointed out above;
2) the difference 1 - 0.999... is equivalent to 0.000...1, but since the number of 0's is infinite, you never "reach" the 1; it is equivalent to 0!
3) a popular proof.
Last edited by MrButterman (2012-07-19 09:44:44)
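The formulas in the post above were images; presumably the "popular proof" meant is the standard algebraic manipulation:
$$x = 0.999\ldots \;\Longrightarrow\; 10x = 9.999\ldots \;\Longrightarrow\; 10x - x = 9 \;\Longrightarrow\; 9x = 9 \;\Longrightarrow\; x = 1.$$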
Re: 0.9999....(recurring) = 1?
Hi MrButterman;
Interesting how this thread got so long.
These types of threads are on every forum. Mostly they are so long because the opponents of .9999... = 1 cannot be convinced by any of those proofs or any others.
Welcome to the forum.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: 0.9999....(recurring) = 1?
To be fair, the proofs they offer can be argued from a logical standpoint (so long as you understand everything that is going on), but there is nothing in mathematics that can prove how they are
different otherwise. I myself do not...personally believe this as a mathematical "fact," but also realize how futile it is to argue against it. So like those many, it is impossible to convince me as
well, after all, there is a reason this idea is so highly controversial.
Last edited by Calligar (2012-10-27 16:16:01)
Life isn’t a simple Math: there are always other variables. -[unknown]
But Nature flies from the infinite, for the infinite is unending or imperfect, and Nature ever seeks an end. -Aristotle
Full Member
Re: 0.9999....(recurring) = 1?
Hi y'all
You might say that those on both sides of the argument tend to take it to the limit!
1/2 a > good day!
Writing "pretty" math (two dimensional) is easier to read and grasp than LaTex (one dimensional).
LaTex is like painting on many strips of paper and then stacking them to see what picture they make.
Re: 0.9999....(recurring) = 1?
Calligar wrote:
I myself do not...personally believe this as a mathematical "fact," but also realize how futile it is to argue against it. So like those many, it is impossible to convince me as well, after all,
there is a reason this idea is so highly controversial.
The term mathematical fact might be a little vague. First, not every number is rational -- not every number can be represented as the quotient of integers. For example the width of a square whose
area is 2 is not a rational number. That is, we need the full blown set of real numbers. Figuring out what the real numbers (really) look like is a hard challenge, and providing a description of them
in set theory was a major challenge. There are two main approaches: Dedekind's cuts and Cauchy sequences. They produce the same set. Essentially, take a bounded sequence of rational numbers, and
identify a "number" L with this sequence. The real numbers are the rational numbers with all these Ls. Thus in the construction of real numbers, we see that every real number is the limit of a
sequence of rationals.
In other words, the real number "1" is by definition the limit of the sequence
Re: 0.9999....(recurring) = 1?
Well, I guess I can try to be a little bit more clear...
jdgall wrote:
Calligar wrote:
I myself do not...personally believe this as a mathematical "fact," but also realize how futile it is to argue against it. So like those many, it is impossible to convince me as well, after
all, there is a reason this idea is so highly controversial.
The term mathematical fact might be a little vague. First, not every number is rational -- not every number can be represented as the quotient of integers. For example the width of a square whose
area is 2 is not a rational number. That is, we need the full blown set of real numbers. Figuring out what the real numbers (really) look like is a hard challenge, and providing a description of
them in set theory was a major challenge. There are two main approaches: Dedekind's cuts and Cauchy sequences. They produce the same set. Essentially, take a bounded sequence of rational numbers,
and identify a "number" L with this sequence. The real numbers are the rational numbers with all these Ls. Thus in the construction of real numbers, we see that every real number is the limit of
a sequence of rationals.
In other words, the real number "1" is by definition the limit of the sequence
Firstly, when I say mathematical fact, I am really just referring to what is currently accepted and arguably "known" in mathematics. Secondly, the limit of sequence is just another way to represent
it being infinitely close, but still not exactly equal to the number (unless I'm mistaken). Just like for it representing 1/3 with <0.3,0.33,0.333,etc. (I might not have put everything in proper
terms, sorry if that causes any confusion, wasn't sure how to say it simply off the top of my head). Also would like to make a note, you messed up slightly when you posted; it should be 0.9, not 0,9
for the first one unless I'm mistaken (but doesn't really have any relevance to anything). In other words, it is just more rules that exist that otherwise, as I was saying, prove it's a
mathematically fact. Remember when I said this...
Calligar wrote:
To be fair, the proofs they offer can be argued from a logical standpoint (so long as you understand everything that is going on), but there is nothing in mathematics that can prove how they are
different otherwise.
In mathematics, there is no way to represent the difference between 0.¯9 and 1. All proofs (including false ones) either assume things (for specifically this), or simply define it as one only because
of the infinitely close distance (there might be a few other reasons, but those are the 2 major I see). Even though some people will argue things like it is 0.0...1 away, or 1/10¯0 away, which might
arguably seem right, one can argue about the infinite distance, therefore making it an impossible argument to win. So this argument doesn't carry on (with me) over confusion, I'll explain in more
Rules in math already say this is true (even though it is very controversial) for many reasons. However, it largely also comes down to one major thing: saying that one number equals a number that
isn't exactly the same. In other peoples eyes, that would be like saying 2 = 1; the only reason for it not being like this is because of the rules for it (plus all of math as we know it would
collapse). Also, the whole reason for this controversy in the first place, is because of using decimals in a way that don't accurately represent what it is. Decimals can't accurately represent
everything, unfortunately, no system can (at least that I know of). However, unfortunately in this case, things like hyper real numbers define this, which is where the controversy comes in (I am not
saying hyper real numbers are a bad thing though). There are other similar cases of this. 1/3 = 0.¯3, right? But wait, there is a difference, isn't there (asking rhetorically)? Why? Because 1/3 can
not be accurately represented in decimal form. Changing 1/3 into a decimal will give you 0.3...3...3...forever. However, there isn't that number at the "end" to...well end it. It is just like pi in
that sense; pi = 3.14159265..., however there is no end to it. There is no way to accurate represent the number in decimal form, or in any form for that matter (that I know of), besides for just
calling it pi. It is that very reason I've seen people argue pi ≈ 3.14159265...instead of =; because no matter how long the decimals go, it doesn't exactly equal pi because there is no end to it
(rather pi is mathematically = or ≈ to 3.14159265... in this case is irrelevant to what I'm trying to say, plus I never said which one it is).
Please just keep in mind I'm not exactly arguing for or against. I'm trying to say overall that it is futile to argue. I might be personally against the current idea in math, but I have learned to
accept that this is currently mathematical fact. I just wanted to make all that clear.
Last edited by Calligar (2012-11-21 23:57:16)
Life isn’t a simple Math: there are always other variables. -[unknown]
But Nature flies from the infinite, for the infinite is unending or imperfect, and Nature ever seeks an end. -Aristotle
Re: 0.9999....(recurring) = 1?
Unless I'm mistaken, it's more than simply for convenience. Also, 0.¯9 doesn't end, because the ¯ over any number (which I put before the repeating number because otherwise I'd have to show in a
picture), means it goes on forever. If it had an end, that means we'd be able to put something after it, therefore, there'd be no reason for this controversy in the first place.
Life isn’t a simple Math: there are always other variables. -[unknown]
But Nature flies from the infinite, for the infinite is unending or imperfect, and Nature ever seeks an end. -Aristotle
Re: 0.9999....(recurring) = 1?
0.999... doesn't exist. Recurring 9's aren't allowed.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: 0.9999....(recurring) = 1?
Last time I checked, hyper real numbers still existed. Just because you can't get to an actual answer of it, doesn't mean it doesn't exist. What do you mean by that?
Life isn’t a simple Math: there are always other variables. -[unknown]
But Nature flies from the infinite, for the infinite is unending or imperfect, and Nature ever seeks an end. -Aristotle
Re: 0.9999....(recurring) = 1?
But if 1/3 = 0.333... Then Why does 0.333... Not become = to 0.4 Because that would then be the Same Infinite Calculation as...
0.999... becoming = to 1
Re: 0.9999....(recurring) = 1?
Intuitively when you take
1 - .9 = .1
1 - .99 = .01
1 - .999 = .001
1 - .9999 = .0001
1 - .99999 = .00001
you can see it is getting smaller and smaller, we say it is approaching 0.
When you do
.4 - .3 = .1
.4 - .33 = .07
.4 - .333 = 0.067
.4 - .3333 = 0.0667
.4 - .33333 = 0.06667
notice that it is not going to 0; it seems to be approaching .06666666666..., which is equal to 1 / 15. So .33333333... can never be equal to .4.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
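As an exact-fraction check of the limit suggested above:
$$0.4 - 0.\overline{3} = \frac{2}{5} - \frac{1}{3} = \frac{6 - 5}{15} = \frac{1}{15} = 0.0\overline{6},$$
which is why the differences settle toward $1/15$ rather than toward $0$.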
Re: 0.9999....(recurring) = 1?
But approaching 0 and seems to be approaching Are both not Actually ever going to get there ?
Re: 0.9999....(recurring) = 1?
We are not talking about that. We are talking about your question. The 1 - .999999... keeps getting smaller the more nines we add. The .4 - .3333333... does not go to 0 the more threes we add. If .333333333... was equal to .4 then when we subtract them we would get 0, but we do not. There isn't even a hint that it is getting to 0.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: 0.9999....(recurring) = 1?
But the number 0.9999... itself doesn't exist...
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: 0.9999....(recurring) = 1?
I don't look at it that way. To me it is shorthand for a series that thank the Lord, sums to 1.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: 0.9999....(recurring) = 1?
.9 + .1 = 1
.99 + .01 = 1
.999 + .001 = 1
.9999 + .0001 = 1
.99999 + .00001 = 1
But the two differences remain constantly the same! Just another 9 and 0 further down the line does not make them nearer to being equal to 1
Re: 0.9999....(recurring) = 1?
Stefy wrote:
But the number 0.9999... itself doesn't exist...
How many times must I say this? No numbers exist. They are all just elements in an imaginary set that mathematicians have invented.
You cannot have a 3 => it doesn't exist.
But it's a jolly useful concept (especially for describing how many apples I have today) so let's go on using it.
Clearly this argument is going to continue for ever.
assign the number 0.9 to post #1
assign the number 0.99 to post #2
assign the number 0.999......9999999 (n 9s) to post #n
and so on ad infinitum.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: 0.9999....(recurring) = 1?
Hi Bob
I didn't mean it in that sense. Such a number isn't allowed in mathematics.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Full Member
Re: 0.9999....(recurring) = 1?
Hi! Let's reduce the .9999... down to its "roots" .1111... which is supposed to be equal to 1/9.
So the question becomes "Why is .1111... equal to 1/9?"
Simple solution: Choose base 9 instead of base 10. Then 1/9 = .1 Now we don't have to
deal with an infinite repeating decimal. Of course, now representing 1/10 in base 9 is an
infinite repeating decimal. So to eliminate all these problems just scrap the "decimal systems"
and go back to using only the fractions.
The basic problem (in base 10) is that any unit fraction 1/N where N has prime factors other
than 2 or 5 cannot be written as a finite sum of fractions whose denominators only have
powers of 10.
Suppose we have an N with prime factors not twos or fives. Then the algorithm for dividing
1 by N would never have a remainder of zero since products of 3, 7, 11, 13, 17, 19, ... never
end in zero. So at every stage of the division we have non-zero remainders. Hence the
"decimal representation" of 1/N would have to be "infinite."
Any time the base b and N are relatively prime (this doesn't cover all cases for N's that
cause an "infinite repeating b-esimal") the division algorithm always has a remainder at
each stage. If we stop at any stage, then we have to add in the remainder to get the
original number exactly.
In applications all we ever use in "real world" problems are approximations to "some number
of decimal places." Take pi for instance. We use so many decimal places to approximate
pi in applications.
BUT it sure is CONVENIENT to "use" infinite decimals in mathematical expositions.
Writing "pretty" math (two dimensional) is easier to read and grasp than LaTex (one dimensional).
LaTex is like painting on many strips of paper and then stacking them to see what picture they make.
Re: 0.9999....(recurring) = 1?
Stefy wrote:
I didn't mean it in that sense. Such a number isn't allowed in mathematics.
What number is that? 3 ?
I think you'll find it is used in quite a few areas of mathematics.
And so is 0.999999999 recurring.
To save me some time, I refer you back to my post # 999.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: 0.9999....(recurring) = 1?
No, 3 is allowed, 0.(9) isn't. There cannot be recurring 9's after the decimal point.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: 0.9999....(recurring) = 1?
Why not?
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: 0.9999....(recurring) = 1?
Hi SMboy;
But the two differences remain constantly the same!
Is .1 the same as .00001 or .0000000001 or ...
Those differences are not the same. They seem to be vanishing.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
|
{"url":"http://mathisfunforum.com/viewtopic.php?pid=240898","timestamp":"2014-04-18T08:20:27Z","content_type":null,"content_length":"53075","record_id":"<urn:uuid:449c3603-5f3d-475f-9b00-a1c028740495>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A way of expressing the dependence of one quantity on another quantity or quantities. Traditionally, functions were specified as explicit rules or formulas that converted some input value (or values)
into an output value. If f is the name of the function and x is a name for an input value, then f(x) denotes the output value corresponding to x under the rule f. An input value is also called an
argument of the function, and an output value is called a value of the function. The graph of the function f is the collection of all pairs (x, f(x)), where x is an argument of f.
For example, the circumference C of a circle depends on its diameter d according to the formula C = πd; therefore, one can say that the circumference is a function of the diameter, and the functional
relationship is given by C(d) = πd. Equally well, the diameter can be considered a function of the circumference, with the relationship given by d(C) = C/π.
Again, if y = ax^2 + bx + c, where a, b, and c are constants, and x and y are variables, then y is said to be a function of x, since, by assigning to x a series of different values, a corresponding
series of values of y is obtained, showing its dependence on the value given to x. For this reason, x is termed the independent variable and y the dependent variable.
There may be more than one independent variable – e.g. the area of a triangle depends on its altitude and its base, and is thus a function of two variables.
Types of function
Functions are primarily classified as algebraic or transcendental. The former include only those functions which may be expressed in a finite number of terms, involving only the elementary algebraic
operations of addition, subtraction, multiplication, division, and root extraction.
Functions are also distinguished as continuous or discontinuous. Any function is said to be continuous when an infinitely small change in the value of the independent variable produces only an
infinitely small change in the dependent variable; and to be discontinuous when an infinitely small change in the independent variable makes a change in the dependent variable either finite or
infinitely great. All purely algebraic expressions are continuous functions, as are also such transcendental functions as e^x, log x, and sin x.
Harmonic and periodic functions are those whose values fluctuate regularly between certain assigned limits, passing through all of their possible values, while the independent variable changes by a
certain amount known as the period. Such functions are of great importance in many branches of mathematical physics. Their essential feature is that, if f(x) be a periodic function whose period is a,
then f(x + ½a) = f(x - ½a), for all values of x.
The modern view of functions
In modern mathematics, the insistence on specifying an explicit effective rule has been abandoned; all that is required is that a function f associate with every element of some set X a unique
element of some set Y. This makes it possible to prove the existence of a function without necessarily being able to calculate its values explicitly. Also, it enables general properties of functions
to be proved independently of their form. The set X of all admissible arguments is called the domain of f; the set Y of all admissible values is called the codomain of f. We write f : X → Y.
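As a small illustration of the modern view (a hypothetical sketch, not part of the original entry), the circumference/diameter and triangle-area examples above can be written as explicit mappings; the class and variable names here are invented for the example.

import java.util.function.BiFunction;
import java.util.function.Function;

public class FunctionExamples {
    public static void main(String[] args) {
        // C(d) = pi * d : circumference as a function of the diameter
        Function<Double, Double> circumference = diam -> Math.PI * diam;
        // The inverse relationship, d(C) = C / pi
        Function<Double, Double> diameter = c -> c / Math.PI;

        double d = 2.0;
        System.out.println(circumference.apply(d));                  // 6.283...
        System.out.println(diameter.apply(circumference.apply(d)));  // recovers 2.0

        // A function of two independent variables: area of a triangle from base and altitude
        BiFunction<Double, Double, Double> area = (base, altitude) -> 0.5 * base * altitude;
        System.out.println(area.apply(3.0, 4.0));                    // 6.0
    }
}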
|
{"url":"http://www.daviddarling.info/encyclopedia/F/function.html","timestamp":"2014-04-21T02:02:50Z","content_type":null,"content_length":"10393","record_id":"<urn:uuid:e4cfcbec-5a7d-4450-91a9-c3b5b11ae560>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Algebra 1 Tutors
Williamsburg, VA 23185
...Fifth Grade Fifth Grade science in LBUSD is an integrated, hands-on/minds-on, standards based program. Students will study: The atomic structure of elements and compounds The organization of the
Periodic Table The Sun and that its effect on air results in changing...
Offering 10+ subjects including algebra 1
|
{"url":"http://www.wyzant.com/algebra_1_tutors.aspx?g=AlgebraHelpBG1","timestamp":"2014-04-20T13:32:33Z","content_type":null,"content_length":"59454","record_id":"<urn:uuid:e75eec5a-1e23-44e9-94e1-9a38cfbe1512>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Groton, MA ACT Tutor
Find a Groton, MA ACT Tutor
...Otherwise, you will end up having to spend even more time studying to catch up. I have a degree in theatre from Fitchburg State College. I have worked on stage as an actor and backstage as a
director, stage manager, and in most other capacities.
27 Subjects: including ACT Math, reading, writing, English
...While tutoring math, I spent a majority of my time tutoring SAT math preparation. I would also take practice tests regularly to ensure that I know what I am teaching, and can identify usable
techniques for better scores. I am currently teaching physical science courses to freshman at the Southern New Hampshire high school I work at.
8 Subjects: including ACT Math, chemistry, algebra 1, algebra 2
...I work with students on understanding that this section is basically reading comprehension using scientific experiments. The method I teach is similar to what I teach for the reading: quickly
reading passages and understanding the main idea, and learning how to answer questions based solely on the information provided. The ISEE is a test for admission to private schools.
26 Subjects: including ACT Math, English, linear algebra, algebra 1
...I strive to help students understand the core concepts and building blocks necessary to succeed not only in their current class but in the future as well. I am a second year graduate student
at MIT, and bilingual in French and English. I earned my high school diploma from a French high school, as well as a bachelor of science in Computer Science from West Point.
16 Subjects: including ACT Math, French, elementary math, algebra 1
...I tutor all levels up through AP courses. I would love to help you succeed!I have tutored elementary school students in Reading, Writing and Math for 15 years, both privately and for
Commonwealth Learning Centers. Privately, I have worked with students who needed basic support, as well as those doing very well looking for enrichment.
34 Subjects: including ACT Math, reading, calculus, English
Related Groton, MA Tutors
Groton, MA Accounting Tutors
Groton, MA ACT Tutors
Groton, MA Algebra Tutors
Groton, MA Algebra 2 Tutors
Groton, MA Calculus Tutors
Groton, MA Geometry Tutors
Groton, MA Math Tutors
Groton, MA Prealgebra Tutors
Groton, MA Precalculus Tutors
Groton, MA SAT Tutors
Groton, MA SAT Math Tutors
Groton, MA Science Tutors
Groton, MA Statistics Tutors
Groton, MA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/groton_ma_act_tutors.php","timestamp":"2014-04-16T10:35:59Z","content_type":null,"content_length":"23842","record_id":"<urn:uuid:7d8d21a4-0ac8-4103-ae9d-1b176bbe6d60>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
|
I am working on an assignment where I am supposed to make my own boolean methods to determine if a number is a prime, and then a super prime.
(a super prime is where d1d2d3, d1d2, and d1 are all primes; ex: 797 is a super prime because 797, 79, and 7 are all prime)
The program overall is to take a user inputted amount of times the program is supposed to run,
and then each run through take an integer input and output the closest superprime to that number. Part of the assignment is I HAVE to have the 2 boolean methods.
Sample run that was given for the assignment:
"Sample run:
Enter the number of experiments N = 3
1. Number = 4501
The nearest super prime to 4501 is 3797
2. Number = 785
The nearest super prime to 785 is 797
3. Number = 5200
The nearest super prime to 5200 is 5939 "
Anyway, after attempting this for hours, I could really use some help, I brought it down from ~25 errors to 5, and all 5 errors are in one line of code but I am not sure what to do from here. Any
help at all is greatly appreciated. I hope my sloppy variable usage isn't too confusing...
import java.util.*; //import the scanner
public class Superprime
public static void main(String[] args)
Scanner kb = new Scanner(System.in); //declare scanner variable
int x,number,p; // declare other variables for #experiments and the number to be tested
System.out.println("Enter the number of experiments you want to run:");
x = kb.nextInt();
int y,z; //variables for while loops
int q=p; //initialize a second variable =p to run in a loop
int r=p; // initialize a 3rd variable =p to run in a second loop
// this is to run the experiment as many times as the user wants to
for (number = 0; number <=x; number++)
//each run of the experiment will get an input value to be tested
System.out.println("Enter the number you want to test.");
g= kb.nextInt();
//boolean statements
while (!(issuperprime(g)))
while (!(issuperprime(g)))
if ((g-z)>(y-g))
System.out.println("Number = " + g);
System.out.println("The closest super prime to " + g
+ " is " + y);
else if ((g-z)<(y-g))
System.out.println("Number = " + g);
System.out.println("The closest super prime to " + g
+ " is " + z);
else if ((g-z)==(y-g))
System.out.println("Number = " + g);
System.out.println("The closest super prime to " + g + " is both "
+ y + " and " + z);
//prime method
public static boolean isprime(int p)
if (p%2==0)
return false;
for(int i=3;i*i<=p;i+=2)
return false;
return true;
//Super prime method
public static boolean issuperprime(int sp)
while (sp>=10)
if (!(isprime(p)))
return false;
else if (isprime(p))
return true;
The errors are all in line 55, which is my first boolean method to determine if a number is prime.
Superprime.java:59: illegal start of expression
public static boolean isprime(int p)
Superprime.java:59: illegal start of expression
public static boolean isprime(int p)
Superprime.java:59: ';' expected
public static boolean isprime(int p)
Superprime.java:59: '.class' expected
public static boolean isprime(int p)
Superprime.java:59: ';' expected
public static boolean isprime(int p)
Er, the title of the post was made when I had 6 errors elsewhere that were all class, interface, or enum expected errors. I forgot the change it, please disregard the title since it has nothing to do
with the post....sorry!
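For comparison, here is a minimal sketch of how the two required boolean methods and the nearest-super-prime search could look once they compile; the method names, the outward search strategy, and the I/O wording are my own choices rather than anything prescribed by the assignment.

import java.util.Scanner;

public class SuperprimeSketch
{
    // true if p is prime (trial division up to sqrt(p))
    public static boolean isPrime(int p)
    {
        if (p < 2) return false;
        if (p == 2) return true;
        if (p % 2 == 0) return false;
        for (int i = 3; i * i <= p; i += 2)
        {
            if (p % i == 0) return false;
        }
        return true;
    }

    // true if p and every prefix of p are prime (797 -> 79 -> 7)
    public static boolean isSuperPrime(int p)
    {
        while (p > 0)
        {
            if (!isPrime(p)) return false;
            p /= 10; // drop the last digit to get the next shorter prefix
        }
        return true;
    }

    // walk outward from n in both directions until a super prime is found
    public static int nearestSuperPrime(int n)
    {
        for (int offset = 0; ; offset++)
        {
            if (n - offset > 1 && isSuperPrime(n - offset)) return n - offset;
            if (isSuperPrime(n + offset)) return n + offset;
        }
    }

    public static void main(String[] args)
    {
        Scanner kb = new Scanner(System.in);
        System.out.print("Enter the number of experiments N = ");
        int experiments = kb.nextInt();
        for (int i = 1; i <= experiments; i++)
        {
            System.out.print(i + ". Number = ");
            int n = kb.nextInt();
            System.out.println("The nearest super prime to " + n + " is " + nearestSuperPrime(n));
        }
    }
}

Note that isSuperPrime checks the prefixes by repeatedly dropping the last digit, which is what makes 797 -> 79 -> 7 work without any string handling.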
|
{"url":"http://www.dreamincode.net/forums/topic/253874-class-interface-or-enum-expected-errors/","timestamp":"2014-04-16T12:45:54Z","content_type":null,"content_length":"128146","record_id":"<urn:uuid:126da915-3c20-4698-87a8-783020db1c95>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematical Classics Bibliography
Mathematical Classics Bibliography (1/22/2007)
Prepared by:
Joseph Malkevitch
Mathematics and Computing Department
York College (CUNY)
Jamaica, New York 11451-0001
Email: malkevitch@york.cuny.edu (for additions, suggestions, and corrections)
Not everyone will agree on what makes a mathematics book a "classic." Perhaps it is more accurate to say this is a list of books that are among my personal favorites and which I think have stood the test
of time in terms of either the quality/interest of their content or presentation.
Coxeter, H.M.S., Introduction to Geometry, Wiley, New York, 1969.
Luce, D. and H. Raiffa, Games and Decisions, Wiley, NY, 1957.
Stein, S., Mathematics: The Man-Made Universe, W.H. Freeman, San Francisco, 1963. (This book has been reprinted by Dover Publications.)
Straffin, P., Game Theory and Strategy, MAA, Washington, 1993.
Back to list of bibliographies
|
{"url":"http://york.cuny.edu/~malk/classics.html","timestamp":"2014-04-18T10:35:39Z","content_type":null,"content_length":"1895","record_id":"<urn:uuid:3d812534-c1da-4271-9789-81f27cd4544a>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00462-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Statistical Animations
This page collects a few examples of animated graphics used to explain statistical concepts. They mainly use the idea of plot interpolation to show a series of intermediate, interpolated views
between two states of some data.
They were prepared using R, with the car and heplots packages, and the animations were created with the animation package by Yihui Xie.
The headings and plots below link to individual pages with more explanation and plot controls for the animations.
This demonstration illustrates why multivariate outliers might not be apparent in univariate views, but become readily apparent on the smallest principal component. For bivariate data, principal
component scores are equivalent to a rotation of the data to a view whose coordinate axes are aligned with the major-minor axes of the data ellipse. It is shown here by linear interpolation between the original data, XY, and the PCA scores.
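One natural way to write the plot interpolation down (this is my formalization of the idea, not a formula taken from the pages themselves): if $\mathbf{X}_0$ holds the plotted coordinates of the points in the starting view and $\mathbf{X}_1$ their coordinates in the target view (here, the PCA scores), the animation displays the convex combinations
$$\mathbf{X}_t = (1 - t)\,\mathbf{X}_0 + t\,\mathbf{X}_1, \qquad t = 0, \tfrac{1}{m}, \tfrac{2}{m}, \ldots, 1,$$
one frame per value of $t$.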
These plots show the relationship between a marginal scatterplot and an added-variable (aka partial regression) plot by means of an animated interpolation between the positions of points in the two
plots. They show how the slope of the regression line, the data ellipse, and positions of points change as one goes from the marginal plot (ignoring the other variable) to the AV plot, controlling
for the other variable.
This animated series of plots illustrate the essential ideas behind the computation of hypothesis tests in a one-way MANOVA design and how these are represented by Hypothesis - Error (HE) plots.
|
{"url":"http://www.datavis.ca/gallery/animation/","timestamp":"2014-04-21T09:37:27Z","content_type":null,"content_length":"12723","record_id":"<urn:uuid:a1d70e67-729b-4cf1-b95c-695f3355b54d>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Journal Prices
We have a paradoxical situation: it never was so cheap, easy, and convenient to publish mathematics as it is today, thanks to the Internet, TeX, and computers, but mathematical publications never have been
more expensive than today.
Math departments suffer, because they cannot afford buying the journals they used to buy. E.g., right now the famous Goettingen math department decided to cancel all(!!) its mathematical journals, in
order to start over from scratch. But still, commercial publishers raise their journal prices by a rate which is unseen in most other economic areas.
Though it is possible to produce highly ranked journals for as little as 15 US Cents/page or less (Annals of Mathematics, Documenta Mathematica) and to broadcast them for free on the Internet, there
are journals charging 4 US Dollars and more per page!
If you want to find out about that, please look at the tables on http://www.mathematik.uni-bielefeld.de/~rehmann/BIB/AMS/Price_per_Page.html
Based on data collected by the AMS over many years, I have set up a collection of tables showing you the prices per volume and per page for 276 math journals. Here you can easily figure out which
journal has the highest inflation rate over the years. In fact there are quite a few journals who have had an average inflation rate of 14 percent and more >>each year<< over a period of eight years
or more! 40 journals charge more than one US Dollar per page!
Scientists are writing the content, scientists have developed and are maintaining the tools (like TeX) to format the content, and provide printer-ready work, do the peer review for free, give it for
free to the publishers, and then buy it back from them for an incredible amount of money.
Please don't do that anymore! Create change! Give science back to the scientists!
If you want to know how to do that: Publish in journals which are affordable - I have mentioned a few, and there are more. Tell your library to only maintain those.
If you are an editor, look at the SPARC publications: http://www.arl.org/sparc, for example at their brochure "Gaining Independence", a "Manual for Planning the Launch of a Nonprofit Electronic
Publishing Venture", which tells you and your fellow editors what to do in order to make your journal independent!
November 2003, Ulf Rehmann (Bielefeld, Germany).
Chern numbers of algebraic varieties
June 10th, 2009 in Other Sciences / Mathematics
A problem at the interface of two mathematical areas, topology and algebraic geometry, that was formulated by Friedrich Hirzebruch, had resisted all attempts at a solution for more than 50 years. The
problem concerns the relationship between different mathematical structures. Professor Dieter Kotschick, a mathematician at the Ludwig-Maximilians-Universität (LMU) in Munich, has now achieved a
breakthrough. As reported in the online edition of the journal Proceedings of the National Academy of Sciences of the United States of America (PNAS), Kotschick has solved Hirzebruch's problem.
Topology studies flexible properties of geometric objects that are unchanged by continuous deformations. In algebraic geometry some of these objects are endowed with additional structure derived from
an explicit description by polynomial equations. Hirzebruch's problem concerns the relation between flexible and rigid properties of geometric objects.
Viewed topologically, the surface of a ball is always a sphere, even when the ball is very deformed: precise geometric shapes are not important in topology. This is different in algebraic geometry,
where objects like the sphere are described by polynomial equations. Professor Dieter Kotschick has recently achieved a breakthrough at the interface of topology and algebraic geometry.
"I was able to solve a problem that was formulated more than 50 years ago by the influential German mathematician Friedrich Hirzebruch", says Kotschick. "Hirzebruch's problem concerns the relation
between different mathematical structures. These are so-called algebraic varieties, which are the zero-sets of polynomials, and certain geometric objects called manifolds." Manifolds are smooth
topological spaces that can be considered in arbitrary dimensions. The spherical surface of a ball is just a two-dimensional manifold.
In mathematical terminology Hirzebruch's problem was to determine which Chern numbers are topological invariants of complex-algebraic varieties. "I have proved that - except for the obvious ones - no
Chern numbers are topologically invariant", says Kotschick. "Thus, these numbers do indeed depend on the algebraic structure of a variety, and are not determined by coarser, so-called topological
properties. Put differently: The underlying manifold of an algebraic variety does not determine these invariants."
The solution to Hirzebruch's problem is announced in the current issue of PNAS Early Edition, the online version of PNAS.
Source: Ludwig-Maximilians-Universität München
"Chern numbers of algebraic varieties." June 10th, 2009. http://phys.org/news163858041.html
TE Equations and Functions
Chapter 1: TE Equations and Functions
Created by: CK-12
Equations and Functions consists of eight lessons that introduce students to the language of algebra.
Suggested Pacing:
Variable Expressions - $1\;\mathrm {hr}$
Order of Operations - $1 \;\mathrm{hr}$
Patterns and Equations - $1-2 \;\mathrm{hrs}$
Equations and Inequalities - $1-2 \;\mathrm{hrs}$
Functions as Rules and Tables - $0.5 \;\mathrm{hrs}$
Functions as Graphs - $1 \;\mathrm{hr}$
Problem-Solving Plan - $0.5 \;\mathrm{hr}$
Problem-Solving Strategies (Make a Table; Look for a Pattern) - $2 \;\mathrm{hrs}$
If you would like access to the Solution Key FlexBook for even-numbered exercises, the Assessment FlexBook and the Assessment Answers FlexBook please contact us at teacher-requests@ck12.org.
Problem-Solving Strand for Mathematics
The problem-solving strategies presented in this chapter, Make a Table and Look for Patterns, are foundational techniques. Making a Table can help structure a student’s ability to organize and
clarify the data presented and teach the student to communicate more clearly. When teaching Look for a Pattern, ask questions such as:
• Can you identify a pattern in the given examples that would let you extend the data?
• Do you observe any pattern that applies to all the given examples?
• Can you find a relationship or operation(s) that would allow you to predict another term?
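A minimal worked illustration of Look for a Pattern (the numbers are invented for this guide, not drawn from the student FlexBook): given the sequence $5, 9, 13, 17, \ldots$, students can note that each term is $4$ more than the previous one and conjecture that the $n$th term is $a_n = 5 + 4(n-1) = 4n + 1$, which they can then test against further terms.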
Alignment with the NCTM Process Standards
Two promising practices, focused on the communication standards, can be effectively used with these strategies. The first is setting aside a time on a regular basis (i.e. once a week) to have
students write about the problem solving they have been doing (COM.2). Present a daily warm-up problem which is well suited to the strategy being learned such as Look for a Pattern. Students work on
one problem a day, first individually and then as a group with teacher leadership and summation. Each day the problem is discussed and solved, and many different points of view are shared in the
process (COM.1, COM.3). At the end of the week, students are asked to write about any one of the problems they did earlier in the week. This practice pushes students to develop logical thinking
skills, to benefit from the classroom work that was shared earlier in the week, and to learn to communicate mathematical ideas clearly (COM.4). It also gives students a choice; they only have to
write about one of the problems, and it can be the problem that made the most sense to them. Oftentimes, unfortunately, we do not honor enough what makes sense to students; we expect them only to be
able to follow the logic presented to them. We must give them experiences of “sense-making” as well (RP.1, RP.2).
A second practice is posting student work, such as:
• gallery walks where work in progress can be viewed
• posters that highlight exceptionally well done solutions
• student essays posted on the classroom wall
What matters is that students’ work is valued and shared (RP.3). If these pieces are large enough to be seen from a distance, this practice helps students to recall various approaches to solving
problems (RP.4) and keeps their thinking “alive.”
• COM.1 - Organize and consolidate their mathematical thinking through communication.
• COM.2 - Communicate their mathematical thinking coherently and clearly to peers, teachers, and others.
• COM.3 - Analyze and evaluate the mathematical thinking and strategies of others.
• COM.4 - Use the language of mathematics to express mathematical ideas precisely.
• RP.1 - Recognize reasoning and proof as fundamental aspects of mathematics.
• RP.2 - Make and investigate mathematical conjectures.
• RP.3 - Develop and evaluate mathematical arguments and proofs.
• RP.4 - Select and use various types of reasoning and methods of proof.
Efficient Fair Search for Lazy FLP
At FLOPS 2008 Oleg Kiselyov showed me FBackTrack — a simple instantiation of the MonadPlus class in Haskell which is fair (like breadth-first search) and seems to consume considerably less memory
than BFS. I immediately started following an obvious plan: implement a traversal on Curry’s SearchTree datatype treating choices like mplus and values like return.
Although the resulting search outperformed breadth-first search in some examples, I didn’t get the benefit that I expected from the Haskell experiments. A closer look revealed that the main trick to
save memory was the implementation of the bind operator, that a search that doesn’t use bind is a worst case for the algorithm still taking exponential space, and that SearchTrees don’t have an
equivalent to bind, unfortunately!
The bind operator takes a nondeterministic computation and passes one of its results to a function supplied as second argument, i.e., bind demands the computation of its first argument. The question
arises, how we can model lazy functional-logic programming in Haskell, so instead of writing a traversal for SearchTrees, I tried to figure out how to translate Curry programs to Haskell such that
under the hood, FBackTrack is used as search engine. The implementation uses the datatype Stream that is an instance of MonadPlus:
data Stream a =
  Nil | One a | Choice a (Stream a) | Incomplete (Stream a)
The implementations of the mplus and bind operations use a tricky interleaving to compute a fair enumeration of all stream elements efficiently.
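The exact definitions live in Stream.hs (linked at the end of this post); the following is only a rough sketch, based on my reading of Kiselyov's FBackTrack idea, of how a fair mplus and a lazy bind for the Stream type above might look. The precise case analysis may well differ from the real implementation, so treat it as an assumption rather than a transcript:

import Control.Monad (MonadPlus (..))

-- Stream is the datatype shown above. (On GHC 7.10 and later, Functor,
-- Applicative and Alternative instances would also be required; they are
-- omitted here for brevity.)

instance Monad Stream where
  return = One
  Nil          >>= _ = Nil
  One a        >>= f = f a
  -- interleave the results of (f a) with the delayed rest of the stream,
  -- so no single branch can starve the others
  Choice a s   >>= f = f a `mplus` Incomplete (s >>= f)
  Incomplete s >>= f = Incomplete (s >>= f)

instance MonadPlus Stream where
  mzero = Nil
  mplus Nil            r = Incomplete r
  mplus (One a)        r = Choice a r
  -- after producing a, swap the arguments: this is what makes the search fair
  mplus (Choice a s)   r = Choice a (mplus r s)
  -- if the left stream is stalled, let the right one contribute an answer
  mplus (Incomplete s) r = case r of
    Nil           -> Incomplete s
    One b         -> Choice b s
    Choice b r'   -> Choice b (mplus s r')
    Incomplete r' -> Incomplete (mplus s r')

Note that, as the const/loop example below illustrates, instances along these lines are not by themselves enough to make the translated programs lazy; that is exactly the issue addressed by the match-based scheme described next.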
A naive approach to translate Curry programs to Haskell where only the result type of functions is made monadic is not lazy enough (Curry code is given as a comment):
-- const x _ = x
const :: a → b → Stream b
const x _ = return x
-- loop = loop
loop :: Stream a
loop = loop
-- main = const 42 loop
main = do
  x ← return 42
  y ← loop
  const x y
The given program will fail to terminate with the implementation given in FBackTrack.hs and I couldn’t come up with another implementation that handles laziness in a sensible way.
Instead, I concluded to apply a different transformation scheme that uses bind only when the program demands evaluation, e.g., during pattern matching. With this transformation scheme the Haskell and
Curry versions of the functions given above are identical (apart from type signatures). To see the difference consider the transformed version of the function not:
-- not b = case b of False → True; True → False
not b =
  match b $ λd →
    case d of
      Cons 0 [] → true
      Cons 1 [] → false
What is the (Haskell-)type of not? The function match used before pattern matching has the following type:
match :: (Nondet a, Nondet b) ⇒ a → (Data → b) → b
The datatype Data represents head-normal forms of nondeterministic data values:
data Data = Cons Int [Stream Data]
Data values are constructor rooted and have an arbitrary number of nondeterministic data values as arguments. The type class Nondet provides operations to convert between arbitrary data types and
nondeterministic data values:
class Nondet a where
  toData :: a → Stream Data
  fromData :: Stream Data → a
The implementation of match simply calls these functions and the bind operation of the Stream monad appropriately:
match :: (Nondet a, Nondet b) ⇒ a → (Data → b) → b
match x f = fromData (toData x >>= toData . f)
The type of not is
not :: FLP_Bool → FLP_Bool
where FLP_Bool is a newtype for Stream Data:
newtype FLP_Bool = Bool { fromBool :: Stream Data }
The definition for the instance Nondet FLP_Bool simply uses the defined constructor and destructor:
instance Nondet FLP_Bool where
  toData = fromBool
  fromData = Bool
The functions false and true define the representation of False and True as data values:
false, true :: FLP_Bool
false = Bool (One (Cons 0 []))
true = Bool (One (Cons 1 []))
At run time, values of type FLP_Bool will be indistinguishable from values of type Stream Data — the newtype constructor will be eliminated. Why do we introduce the type FLP_Bool anyway? The type
class Nondet has another method unknown which represents a logic variable and is usually implemented differently for different types:
class Nondet a where
  unknown :: a
For example, the instance for FLP_Bool implements unknown as follows:
instance Nondet FLP_Bool where
  unknown = choice [false,true]
The function choice is used to build the nondeterministic choice of arbitrary values of the same type:
choice :: Nondet a ⇒ [a] → a
choice xs = fromData (foldr1 mplus (mzero:map toData xs))
The implementation folds mplus over the list of choices. The additional mzero in front of the list ensures fairness even if left recursion is used.
If we evaluate the call not unknown we get the result
Bool {fromBool =
Incomplete (Choice (Cons 1 [])
(Incomplete (One (Cons 0 []))))}
which is a nondeterministic choice of true and false. Laziness is also no problem: const true loop yields Bool {fromBool = One (Cons 1 [])}. Unfortunately, we have a different problem now.
In order to examine nested terms, consider the datatype FLP_Pair of functional logic pairs:
newtype FLP_Pair a b = Pair { fromPair :: Stream Data }
The constructor for pairs is defined as follows:
pair :: (Nondet a, Nondet b) ⇒ a → b → FLP_Pair a b
pair x y = Pair (One (Cons 0 [toData x,toData y]))
The problem of the presented transformation scheme is that it does not respect shared nondeterministic choices. For example, a call to main in the following Curry program should give the results
(True,True) or (False,False) but neither (True,False) nor (False,True):
main = dup (unknown::Bool)
dup x = (x,x)
The restricted set of results corresponds to what would be evaluated with an eager strategy and we need to compute the same results lazily.
We can easily define the Haskell version of dup as
dup :: Nondet a ⇒ a → FLP_Pair a a
dup x = pair x x
but the call dup (unknown::FLP_Bool) yields
Pair {fromPair =
One (Cons 0 [Incomplete (Choice (Cons 0 [])
(One (Cons 1 []))),
Incomplete (Choice (Cons 0 [])
(One (Cons 1 [])))])}
which represents all four pairs of booleans. The information that the same choice should be taken in both components is lost.
So far, I have not found a way to improve on this. Even the side-effectish way of associating unique references to choices in order to detect choices that were duplicated by sharing cannot easily be
adopted because of the tricky interleavings in the implementation of the monadic operations that constantly restructure the choices.
The code in this post was taken from the following files:
• Stream.hs — the fair Stream monad
• FLP.hs — primitives for functional-logic programming
• examples.hs — demonstrate laziness and (lack of) sharing support
HELP! Sunflower Seed Mixture Math Problem: Agway Gardens has sunflower seeds that sell for $0.49 per pound?
Agway Gardens has sunflower seeds that sell for $0.49 per pound and gourmet bird seed mix that sells for $0.89 per pound. How much of each must be mixed to get a 20-pound mixture that sells for $0.73
per pound?
I came up with this equation:
let S = Sunflower seeds
B = Bird seed mix
I came up with this equation: (0.49S + 0.89B) / 20 = 0.73
But I couldn't find the solution.
Thank you for your help!
as you said
s= sunflower seed in pounds
b= bird seed mix in pounds
total weight = 20 lb
then s = 20 - b
0.49s + 0.89b = 20*0.73
so bird seed mix = 12 pounds
sunflower seeds = 20 - 12 = 8 pounds
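To fill in the algebra between those last steps: substituting s = 20 - b into 0.49s + 0.89b = 20*0.73 = 14.6 gives
0.49(20 - b) + 0.89b = 14.6
9.8 + 0.40b = 14.6
0.40b = 4.8
b = 12, and therefore s = 20 - 12 = 8.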
May 13 at 12:5
You need to be clearer in your definitions. "S" is not just sunflower seeds, it is the number of pounds of sunflower seeds to use
S + B = 20
Now you have two equations in two unknowns, and can solve.
May 13 at 15:51
since there are two unknowns you will need two equations.
since S and B represent the total amount in pounds and you know you need 20 pounds, the 2nd equation would be
S + B = 20
then you can solve both equations, noting that S = 20 - B, so
solve for B, then plug that into either of the first equations to solve for S
May 13 at 20:0
If S is the number of pounds of sunflower seeds, 20-S is the number of pounds of bird seed in the mix. Then your equation becomes [0.49S + 0.89(20-S)]/20 = 0.73
Now you can solve for S, and hence for B.
May 14 at 0:32
Boolean Algebra
A Boolean algebra is a set with two binary operations ∪ and ∩, a unary operation ' (complementation), and two distinguished elements 0 and 1, satisfying a standard list of axioms for all elements of the set.
One interpretation of Boolean algebra is the collection of subsets of a fixed set X. We take ∪, ∩, ', 0 and 1 to be set union, set intersection, complementation, the empty set and the set X respectively. Equality
here means the usual equality of sets.
Another interpretation is the calculus of propositions in symbolic logic. Here we take ∪, ∩, ', 0 and 1 to be disjunction, conjunction, negation, a fixed contradiction and a fixed tautology respectively. In this
setting equality means logical equivalence.
It is not surprising then that we find analogous properties and rules appearing in these two areas. For example, the axiom of the distributive properties says that for sets we have A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C) and A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C).
From the axioms above one can prove DeMorgan's Laws (in some axiom sets this is included as an axiom). A few of the rules that hold in a Boolean algebra can be written in both set and logic notation; DeMorgan's Laws are a typical example. Note that the two versions of each rule are identical in structure, differing only in the choice of symbols.
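Written side by side, with set notation on the left and the corresponding propositional form on the right (here ' denotes complementation), DeMorgan's Laws read:

(A ∪ B)' = A' ∩ B'    ¬(p ∨ q) ≡ ¬p ∧ ¬q
(A ∩ B)' = A' ∪ B'    ¬(p ∧ q) ≡ ¬p ∨ ¬q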
Dan Rinne
Tue Aug 6 18:32:03 PDT 1996
weird sequence
November 5th 2008, 06:14 PM
Let $a \in (0, 1)$ be an irrational number. Define a sequence $(x_{n})$ in $[0, 1)$ by $x_{1} = 0$ and
$x_{n+1}=\left\{\begin{array}{ll}x_{n}+a, &\mbox{if } x_{n}+a<1\\ x_{n}+a-1, &\mbox{otherwise}\end{array}\right.$
Show that $x_{n}$ does not converge.
Show that there is some subsequence of $x_{n}$ which converges to 0.
November 6th 2008, 03:20 AM
If $x_n\to \ell$ then $\ell$ must be equal to either $\ell+a$ or $\ell+a-1$. You can easily see that neither of these cases is possible.
To find the subsequence, notice that $x_{n+1}$ is the fractional part of $x_n+a$, and therefore (by induction) $x_{n+1}$ is the fractional part of $na$. It will be sufficient to show that we can
find multiples of $a$ with fractional parts arbitrarily close to $0$.
Divide the unit interval into $k$ subintervals of length $1/k$. The fractional parts of $na$ are all distinct (otherwise $a$ would be rational), and at least one of the subintervals must contain infinitely many of them. So we can find arbitrarily large $m$ and $n$ (with $n>m$) for which the fractional parts of $ma$ and $na$ differ by less than $1/k$. Then the fractional part of $(n-m)a$ is either less than $1/k$, or it is greater than $1-(1/k)$, in which case the fractional parts of $r(n-m)a$ (for $r=1,2,3,\ldots$) will move down through the unit interval in steps of less than $1/k$, until one of them is less than $1/k$.
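Making the subsequence explicit: the argument above shows that for every $k$ there is an index $n_k$ (and these indices can be chosen to increase with $k$) such that $0 < \{n_k a\} < \tfrac{1}{k}$, where $\{\cdot\}$ denotes the fractional part. Since $x_{n_k+1} = \{n_k a\}$, the subsequence $(x_{n_k+1})$ converges to $0$.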
Development of a WHO growth reference for school-aged children and adolescents
Mercedes de Onis^1; Adelheid W Onyango; Elaine Borghi; Amani Siyam; Chizuru Nishida; Jonathan Siekmann
Department of Nutrition, World Health Organization, 20 Avenue Appia, 1211 Geneva 27, Switzerland
OBJECTIVE: To construct growth curves for school-aged children and adolescents that accord with the WHO Child Growth Standards for preschool children and the body mass index (BMI) cut-offs for adults.
METHODS: Data from the 1977 National Center for Health Statistics (NCHS)/WHO growth reference (1–24 years) were merged with data from the under-fives growth standards' cross-sectional sample (18–71 months) to smooth the transition between the two samples. State-of-the-art statistical methods used to construct the WHO Child Growth Standards (0–5 years), i.e. the Box-Cox power exponential (BCPE)
method with appropriate diagnostic tools for the selection of best models, were applied to this combined sample.
FINDINGS: The merged data sets resulted in a smooth transition at 5 years for height-for-age, weight-for-age and BMI-for-age. For BMI-for-age across all centiles the magnitude of the difference
between the two curves at age 5 years is mostly 0.0 kg/m^2 to 0.1 kg/m^2. At 19 years, the new BMI values at +1 standard deviation (SD) are 25.4 kg/m^2 for boys and 25.0 kg/m^2 for girls. These
values are equivalent to the overweight cut-off for adults (> 25.0 kg/m^2). Similarly, the +2 SD value (29.7 kg/m^2 for both sexes) compares closely with the cut-off for obesity (> 30.0 kg/m^2).
CONCLUSION: The new curves are closely aligned with the WHO Child Growth Standards at 5 years, and the recommended adult cut-offs for overweight and obesity at 19 years. They fill the gap in growth
curves and provide an appropriate reference for the 5 to 19 years age group.
The need to develop an appropriate single growth reference for the screening, surveillance and monitoring of school-aged children and adolescents has been stirred by two contemporary events: the
increasing public health concern over childhood obesity^1 and the April 2006 release of the WHO Child Growth Standards for preschool children based on a prescriptive approach.^2 As countries proceed
with the implementation of growth standards for children under 5 years of age, the gap across all centiles between these standards and existing growth references for older children has become a
matter of great concern. It is now widely accepted that using descriptive samples of populations that reflect a secular trend towards overweight and obesity to construct growth references results
inadvertently in an undesirable upward skewness leading to an underestimation of overweight and obesity, and an overestimation of undernutrition.^3
The reference previously recommended by WHO for children above 5 years of age, i.e. the National Center for Health Statistics (NCHS)/WHO international growth reference,^4 has several drawbacks.^5 In
particular, the body mass index-for-age reference, developed in 1991,^6 only starts at 9 years of age, groups data annually and covers a limited percentile range. Many countries pointed to the need
to have body mass index (BMI) curves that start at 5 years and permit unrestricted calculation of percentile and z-score curves on a continuous age scale from 5 to 19 years.
The need to harmonize growth assessment tools conceptually and pragmatically prompted an expert group meeting in January 2006 to evaluate the feasibility of developing a single international growth
reference for school-aged children and adolescents.^7,8 The experts agreed that appropriate growth references for these age groups should be developed for clinical and public health applications.
They also agreed that a multicentre study, similar to the one that led to the development of the WHO Child Growth Standards for 0 to 5 years, would not be feasible for older children, as it would not
be possible to control the dynamics of their environment. Therefore, as an alternative, the experts suggested that a growth reference be constructed for this age group using existing historical data
and discussed the criteria for selecting the data sets.
WHO subsequently initiated a process to identify existing data sets from various countries. This process resulted in an initial identification of 115 candidate data sets from 45 countries, which were
narrowed down to 34 data sets from 22 countries that met the inclusion criteria developed by the expert group. However, after further review, even these most promising studies showed great
heterogeneity in methods and data quality, sample size, age categories, socioeconomic status of participating children and various other factors critical to growth curve construction. Therefore, it
was unlikely that a growth reference constructed from these heterogeneous data sets would agree with the WHO Child Growth Standards at 5 years of age for the different anthropometric indicators
needed (i.e. height-for-age, weight-for-age and BMI-for-age).
In consequence, WHO proceeded to reconstruct the 1977 NCHS/WHO growth reference from 5 to 19 years, using the original sample (a non-obese sample with expected heights), supplemented with data from
the WHO Child Growth Standards (to facilitate a smooth transition at 5 years), and applying the state-of-the-art statistical methods^9,10 used to develop standards for preschool children, that is,
the Box-Cox power exponential (BCPE) method with appropriate diagnostic tools for the selection of best models.
The purposes of this paper are to report the methods used to reconstruct the 1977 NCHS/WHO growth reference, to compare the resulting new curves (the 2007 WHO reference) with the 1977 NCHS/WHO
charts, and to describe the transition at 5 years of age from the WHO standards for under-fives to these new curves for school-aged children and adolescents.
Sample description
The core sample used for the reconstruction of the reference for school-aged children and adolescents (5–19 years) was the same as that used for the construction of the original NCHS charts, pooling three data sets.^11 The first and second data sets were from the Health Examination Survey (HES) Cycle II (6–11 years) and Cycle III (12–17 years). The third data set was from the Health and
Nutrition Examination Survey (HANES) Cycle I (birth to 74 years), from which only data from the 1 to 24 years age range were used. Given the similarity of the three data sets,^11 the data were merged
without adjustments.
The total sample size was 22 917 (11 410 boys, 11 507 girls). For the indicator height-for-age, 8 boys (0.07%), including an 18 month-old with length 51.6 cm, and 14 girls (0.12%) had outlier height
measurements that were set to missing in the data set. For the weight-based indicators (i.e. weight-for-age and BMI-for-age), the same cleaning approach used for the construction of the WHO Child
Growth Standards (cross-sectional component) was applied to avoid the influence of unhealthy weights-for-height.^10 As a result, 321 observations for boys (2.8%) and 356 observations for girls (3.0%)
were excluded.
A smooth transition from the WHO Child Growth Standards (0–5 years) to the reference curves beyond 5 years was provided by merging data from the growth standards' cross-sectional sample (18–71
months) with the NCHS final sample before fitting the new growth curves. The growth curves for ages 5 to 19 years were thus constructed using data from 18 months to 24 years. The final sample used
for fitting the growth curves included 30 907 observations (15 537 boys, 15 370 girls) for the height-for-age curves, 30 100 observations (15 136 boys, 14 964 girls) for the weight-for-age curves,
and 30 018 observations (15 103 boys, 14 915 girls) for the BMI-for-age curves.
Statistical methods
As the goal was to develop growth curves for school-aged children and adolescents that accord with the WHO Child Growth Standards for preschool children, we reapplied the state-of-the-art statistical
methods used to construct the growth standards for children under 5 years of age.^10 The development of the standards for under-fives followed a methodical process that involved: (a) detailed
examination of existing methods, including types of distributions and smoothing techniques; (b) selection of a software package flexible enough to allow comparative testing of alternative methods and
the actual generation of the curves; and (c) systematic application of the selected approach to the data to generate models that best fitted the data.^9
The BCPE method,^12 with curve smoothing by cubic splines, was used to construct the curves. This method accommodates various kinds of distributions, from normal to skewed or kurtotic. After the
model was fitted using the whole age range (18 months to 24 years), the curves were truncated to cover the required age range (i.e. 5–19 years for height-for-age and BMI-for-age, and 5–10 years for
weight-for-age), thus avoiding the right- and left-edge effects.^9
The specifications of the BCPE models that provided the best fit to generate the growth curves were:
For height-for-age:
BCPE(l = 1, df(m) = 12, df(s) = 4, n = 1, t = 2) for boys
BCPE(l = 0.85, df(m) = 10, df(s) = 4, n = 1, t = 2) for girls.
For weight-for-age:
BCPE(l = 1.4, df(m) = 10, df(s) = 8, df(n) = 5, t = 2) for boys
BCPE(l = 1.3, df(m) = 10, df(s) = 3, df(n) = 3, t = 2) for girls.
For BMI-for-age:
BCPE(l = 0.8, df(m) = 8, df(s) = 4, df(n) = 4, t = 2) for boys
BCPE(l = 1, df(m) = 8, df(s) = 3, df(n) = 4, t = 2) for girls.
Where l is the power of the transformation applied to age before fitting the model; df(m) is the degrees of freedom for the cubic splines fitting the median (m); df(s) the degrees of freedom for the
cubic splines fitting the coefficient of variation (s); df(n) the degrees of freedom for the cubic splines fitting the Box-Cox transformation power (n) (for height-for-age fixed n = 1); and t is the
parameter related to the kurtosis (in all three cases fixed t = 2).
The selected models for boys and girls ultimately simplify to the LMS method,^13 since it was not necessary to model the parameter related to kurtosis. For height-for-age, the data follow the
standard normal distribution, so it was not necessary to model either the parameter related to skewness or to kurtosis.
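For readers who are not familiar with the LMS notation, the conversion of a measurement to a z-score under that method can be written out explicitly (this is the standard formula from the LMS literature cited as reference 13, not a formula stated in the present text): given the fitted curves L (Box-Cox power), M (median) and S (coefficient of variation) at a particular age, a measurement y has z-score
$$z = \frac{(y/M)^{L} - 1}{L\,S} \quad (L \neq 0), \qquad z = \frac{\ln(y/M)}{S} \quad (L = 0).$$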
Results
Percentile and z-score curves and tables were generated ranging from the 1st to the 99th percentile and from the -3 to the +3 standard deviation (SD). The full set of clinical charts and tables
displayed by sex and age (years and months), percentile and z-score values and related information (e.g. LMS values) are provided on the WHO web site (http://www.who.int/growthref/).
Sex-specific comparisons of the 1977 NCHS/WHO reference and the newly reconstructed curves are presented in the figures for height-for-age, weight-for-age and BMI-for-age, respectively.
The difference in shape between the 1977 and 2007 curves is more evident for boys (Fig. 1) than it is for girls (Fig. 2), especially at the upper end of the age range (15–18 years; 18 years is the
maximum age limit of the 1977 curves). Differences in boys' attained height z-scores (1977 versus 2007 curves) at 5 years are negligible, ranging from 0.1 cm in the curves below the median to 0.3 cm
at +2 and +3 SD (Fig. 1). The two sets of curves follow more variable patterns in both shape and the spread of attained heights as they advance from age 10 years to the end of the age range. For
example, at 18 years, the distribution of heights from -3 to +3 SD is tighter by 5 cm in the 1977 curves compared with those for 2007. Between -3 SD and the median, the 1977 curves are higher by 3.3
cm, 2.4 cm, 1.5 cm and 0.7 cm, respectively. Conversely, the 1977 curves above the median are lower than corresponding 2007 curves by 0.2 cm (+1 SD), 1.1 cm (+2 SD) and 2.0 cm (+3 SD).
Although the disparity at 5 years between the two sets of girls' curves (Fig. 2) is greater than that observed for boys, ranging between 0.2 cm (-3 SD) and 1.7 cm (+3 SD), the curve shapes in later
years follow more comparable patterns and culminate in a more similar distribution of attained height z-scores between 15 and 18 years of age. As observed for boys, the negative SDs and median of the
1977 set at 18 years are higher than their equivalent 2007 curves by 2.6 cm (-3 SD), 2.0 cm (-2 SD), 1.2 cm (-1 SD) and 0.6 cm (median). The +1 SD curves overlap at 18 years, and in reverse pattern
to the negative SDs, the 1977 curves are lower by 0.7 cm (+2 SD) and 1.3 cm (+3 SD).
In the lower half of the weight-for-age distribution, the largest difference between the 1977 and 2007 boys' curves (Fig. 3) is at 10 years of age, where the 2007 curves are higher by 2.9 kg (-3 SD) and 1.1 kg (-2 SD). In the upper half of the distribution, the largest disparities between the +1 SD and +2 SD curves are also at age 10 years, but in this case the 1977 curves are higher by 1.7 kg
and 1.0 kg. The +3 SD curves present sizeable differences, with the 1977 curve being consistently lower throughout the age range (from 1.6 kg at 5 years to 3.1 kg at 10 years). Girls present similar
patterns to those observed for boys (Fig. 4). At the lower bound, disparities are larger for girls than they are for boys. For girls at 10 years, the 2007 curves are higher by 3.7 kg (-3 SD) and 1.4 kg (-2 SD). At the upper bound, the largest disparity for the +3 SD curves is at 5 years, where the 2007 curve is 3.1 kg above the 1977 curve, but the difference decreases to 1.7 kg at 10 years. The
+2 SD curves cross between 8 and 9 years. At 5 years, the 2007 curve is higher by 1.3 kg and, at 10 years, it is lower than the 1977 curve by 2.3 kg.
Fig. 5 (boys) and Fig. 6 (girls) show the reference data for BMI-for-age developed in 1991 that WHO has to date recommended for ages 9 to 24 years^6 and how they compare with corresponding centiles
of the newly constructed curves in the age period where the two sets overlap (9–19 years). The 5th, 15th and 50th percentiles for boys (Fig. 5) start at 9 years with small differences (0.1 kg/m^2 and
0.2 kg/m^2) between the 1991 reference values and the 2007 curves. The two sets then track closely and cross over at about 17 years, so that by 19 years the 2007 percentiles are 0.3 kg/m^2 or 0.4 kg/
m^2 higher than the 1991 reference values. The 85th percentile of the 1991 reference originates at 0.9 kg/m^2 above its 2007 equivalent and tracks above it to end at 0.8 kg/m^2 higher at 19 years.
For the 95th percentile, the 1991 reference starts at 2.0 kg/m^2 higher and veers upwards, terminating 2.6 units above the 2007 curve at 19 years. The patterns observed in the boys' curves are also
evident among girls (Fig. 6), except that the crossover of the 5th, 15th and 50th percentiles occurs at 13 years, and differences in the 50th and 95th percentiles are slightly larger than
corresponding differences in the boys' percentiles. A wiggly pattern is noticeable in the 1991 reference values, particularly in the 50th, 85th and 95th percentiles.
At 19 years of age, the 2007 BMI values at +1 SD are 25.4 kg/m^2 for boys and 25.0 kg/m^2 for girls, while the +2 SD values are 29.7 kg/m^2 for both sexes.
Transition to the 2007 WHO reference at 5 years
A main objective for reconstructing the 1977 NCHS/WHO reference was to provide a smooth transition from the WHO standard curves for under-fives to the reference curves for older children. Table 1
presents values at 5 years for the various indicators by sex of the 1977 and 2007 references for school-aged children and adolescents, and the WHO standards for under-fives.
Disparities between the 1977 reference and the WHO height-for-age and weight-for-age standards for girls at 5 years were larger than those observed in corresponding boys' curves. For example, the
differences in the boys' height-for-age curves were at most 0.2 cm, in contrast to the girls' curves that were disparate by as much as 1.7 cm and 2.1 cm at +2 and +3 SD, respectively. For
weight-for-age, differences between the 1977 reference and the WHO standards at +3 SD were 2.0 kg for boys and 3.5 kg for girls. Since no NCHS-based reference values for BMI were available for ages
below 9 years, the table presents comparative values only for the 2007 reconstructed reference and the WHO standards at 5 years of age.
The reconstruction resulted in curves that are closely aligned to corresponding WHO standards at the junction (5 years). For height-for-age boys, the three negative SDs are only 0.1 cm apart, the
median and +1 SD curves differ by 0.3 cm, and disparities at +2 SD and +3 SD are 0.4 cm and 0.5 cm, respectively. For girls, the differences between the two sets of curves are 0.3 cm or 0.4 cm
through the full range of z-scores. For weight-for-age, where differences between the 1977 reference and the WHO standards at 5 years were considerable, the reconstruction substantially reduced
differences in the final curves. The boys' medians are equal, while their negative z-scores differ by 0.1 kg or 0.2 kg, and the positive z-scores by 0.1 kg (+1 SD), 0.3 kg (+2 SD) and 0.4 kg (+3 SD).
Residual differences in the two sets of curves for girls are in a range similar to those in the boys' curves, which is between 0.0 kg and 0.4 kg.
The merger of the under-fives growth standards' data (18–71 months) with the NCHS core sample to fit the 2007 curves for school-aged children and adolescents resulted in a very smooth transition
between the WHO Child Growth Standards and the newly constructed references for BMI-for-age. For both boys and girls, differences between the two curve sets at 5 years are mostly 0.0 kg/m^2 or 0.1 kg
/m^2, and never more than 0.2 kg/m^2.
The need for a widely applicable growth reference for older children and adolescents was increasingly recognized by countries attempting to assess the magnitude of the growing public health problem
of childhood obesity. This need was reaffirmed by the release of the under-five growth standards. The reconstruction presented in this paper has resulted in growth curves that are closely aligned to
the WHO Child Growth Standards at 5 years and as such are a suitable complementary reference for use in school-aged child and adolescent health programmes. The various clinical charts and tables
provided on the Internet will allow for the practical application of the reference.
The approach used in constructing the 2007 WHO reference addressed the limitations of the 1977 NCHS curves recognized by the 1993 expert committee^4 that recommended their provisional use for older
children. The height-for-age median curves of the 1977 and 2007 references overlap almost completely with only a slight difference in shape, which is probably due to the different modelling
techniques used. For the 1977 NCHS/WHO curves, age-specific standard deviations from the median were derived from the observed dispersion of six percentile curves (5th, 10th, 25th, 75th, 90th and
95th) and then smoothed by a combination of polynomial regression and cubic splining techniques.^14 In the 2007 reconstruction, age was modelled as a continuous variable, and the curves were fitted
simultaneously and smoothed throughout the age range using cubic splines. Furthermore, edge effects were avoided by constructing the 2007 curves with data that extended beyond the lower and upper age
bounds of the final reference curves. The latter may explain why the 1977 NCHS/WHO curves have pronounced wiggly shapes towards the upper age limit of the reference compared with the 2007 curves.
When compared to the 1977 NCHS/WHO curves, the differences in the newly reconstructed weight-for-age curves are significant in all centiles apart from the median and the 1 SD curves, reflecting the
important difference in curve construction methodology. The fact that the median curves of the two references overlap almost completely is reassuring in that the two samples used for fitting the
models are the same within the healthy range (i.e. middle range of the distribution). The methodology available at the time of constructing the 1977 curves was limited in its ability to model skewed
data.^14 Fixing a higher standard deviation distance between the curves above the median and a lower one for the curves below, as was done, partially accounted for the skewness in the weight data but
failed to model the progressively increasing distances between the SD curves from the lower to the upper tails of the weight-for-age distribution. To fit the skewed data adequately, the LMS method
(used in the construction of the 2007 curves and other recently developed weight-based references) fits a Box-Cox normal distribution, which follows the empirical data closely.^15–17
The reference data for BMI-for-age recommended by WHO are limited in that they begin only at 9 years of age and cover a restricted distribution range (5th–95th percentiles). The empirical reference
values were estimated using data that were grouped by age in years, and then smoothed using locally weighted regression.^6 The 2007 reconstruction permits the extension of the BMI reference to 5
years, where the curves match WHO under-five curves almost perfectly. Furthermore, at 19 years of age, the 2007 BMI values for both sexes at +1 SD (25.4 kg/m^2 for boys and 25.0 kg/m^2 for girls) are
equivalent to the overweight cut-off used for adults (> 25.0 kg/m^2), while the +2 SD value (29.7 kg/m^2 for both sexes) compares closely with the cut-off for obesity (> 30.0 kg/m^2).^18
The 2007 height-for-age and BMI-for-age charts extend to 19 years, which is the upper age limit of adolescence as defined by WHO.^19 The weight-for-age charts extend to 10 years for the benefit of
countries that routinely measure only weight and would like to monitor growth throughout childhood. Weight-for-age is inadequate for monitoring growth beyond childhood due to its inability to
distinguish between relative height and body mass, hence the provision here of BMI-for-age to complement height-for-age in the assessment of thinness (low BMI-for-age), overweight and obesity (high
BMI-for-age) and stunting (low height-for-age) in school-aged children and adolescents.
Competing interests: None declared.
References
1. Lobstein T, Baur L, Uauy R. IASO International Obesity Task Force. Obesity in children and young people: a crisis in public health. Obes Rev 2004;5:4-104.
2. WHO Multicentre Growth Reference Study Group. WHO Child Growth Standards based on length/height, weight and age. Acta Paediatr Suppl 2006;450:76-85.
3. De Onis M. The use of anthropometry in the prevention of childhood overweight and obesity. Int J Obes Relat Metab Disord 2004;28:S81-5.
4. Physical status: the use and interpretation of anthropometry. Report of a WHO Expert Committee. World Health Organ Tech Rep Ser 1995;854:161-262.
5. Wang Y, Moreno LA, Caballero B, Cole TJ. Limitations of the current World Health Organization growth references for children and adolescents. Food Nutr Bull 2006;27:S175-88.
6. Must A, Dallal GE, Dietz WH. Reference data for obesity: 85th and 95th percentiles of body mass index (wt/ht^2) and triceps skinfold thickness. Am J Clin Nutr 1991;53:839-46.
7. Butte NF, Garza C, editors. Development of an international growth standard for preadolescent and adolescent children. Food Nutr Bull 2006;27:S169-326.
8. Butte NF, Garza C, de Onis M. Evaluation of the feasibility of international growth standards for school-aged children and adolescents. J Nutr 2007;137:153-57.
9. Borghi E, de Onis M, Garza C, Van den Broeck J, Frongillo EA, Grummer-Strawn L, et al., for the WHO Multicentre Growth Reference Study Group. Construction of the World Health Organization child growth standards: selection of methods for attained growth curves. Stat Med 2006;25:247-65.
10. WHO Multicentre Growth Reference Study Group. WHO Child Growth Standards: length/height-for-age, weight-for-age, weight-for-length, weight-for-height and body mass index-for-age: methods and development. Geneva: WHO; 2006.
11. Hamill PV, Drizd TA, Johnson CL, Reed RB, Roche AF. NCHS growth curves for children birth-18 years: United States. Vital Health Stat 11 1977;165:i-iv, 1-74.
12. Rigby RA, Stasinopoulos DM. Smooth centile curves for skew and kurtotic data modelled using the Box-Cox power exponential distribution. Stat Med 2004;23:3053-76.
13. Cole TJ, Green PJ. Smoothing reference centile curves: the LMS method and penalized likelihood. Stat Med 1992;11:1305-19.
14. Dibley MJ, Goldsby JB, Staehling NW, Trowbridge FL. Development of normalized curves for the international growth reference: historical and technical considerations. Am J Clin Nutr 1987;46:736-48.
15. Kuczmarski RJ, Ogden CL, Guo SS, Grummer-Strawn LM, Flegal KM, Mei Z, et al. 2000 CDC growth charts for the United States: methods and development. Vital Health Stat 11 2002;246:1-190.
16. Cole TJ, Freeman JV, Preece MA. British 1990 growth reference centiles for weight, height, body mass index and head circumference fitted by maximum penalized likelihood. Stat Med 1998;17:407-29.
17. Fredriks AM, van Buuren S, Burgmeijer RJ, Meulmeester JF, Beuker RJ, Brugman E, et al. Continuing positive secular growth change in the Netherlands 1955-1997. Pediatr Res 2000;47:316-23.
18. Obesity: preventing and managing the global epidemic. Report of a WHO consultation. World Health Organ Tech Rep Ser 2000;894:1-253.
19. Young people's health - a challenge for society. Report of a WHO Study Group on young people and Health for All by the Year 2000. World Health Organ Tech Rep Ser 1986;731:1-117.
(Submitted: 25 April 2007; final revised version received: 12 July 2007; accepted: 15 July 2007)
1 Correspondence to Mercedes de Onis (e-mail: deonism@who.int).
How to Change the Amplitude, Period, and Position of a Tangent or Cotangent Graph
You can transform the graph for tangent and cotangent vertically, change the period, shift the graph horizontally, or shift it vertically. However, you should take each transformation one step at a time.
For example, to graph y = (1/2)tan x – 1,
follow these steps:
1. Sketch the parent graph for tangent.
2. Shrink or stretch the parent graph.
The vertical shrink is 1/2 for every point on this function, so each point on the tangent parent graph is half as tall.
Seeing vertical changes for tangent and cotangent graphs is harder, but they're there. Concentrate on the fact that the parent graph has points (pi/4, 1) and (–pi/4, –1),
which in the transformed function become (pi/4, 1/2) and (–pi/4, –1/2).
As you can see in the figure, the graph really is half as tall!
The graph of y = (1/2)tanx.
3. Change the period.
The constant 1/2 doesn't affect the period. Why? Because it sits in front of the tangent function, which only affects vertical, not horizontal, movement.
4. Shift the graph horizontally and vertically.
This graph doesn't shift horizontally, because no constant is added inside the grouping symbols (parentheses) of the function. So you don't need to do anything horizontally. The – 1 at the end of
the function is a vertical shift that moves the graph down one position. The figure shows the transformed graph of y = (1/2)tan x – 1.
5. State the transformed function's domain and range, if asked.
Because the range of the tangent function is all real numbers, transforming its graph doesn't affect the range, only the domain. The domain of the tangent function isn't all real numbers because
of the asymptotes. The domain of the example function hasn't been affected by the transformations, however. Where n is an integer, the domain is all real numbers x such that x ≠ pi/2 + n·pi.
Now that you've graphed the basics, you can graph a function that has a period change, as in the function y(x) = cot(2pi·x + pi/2).
You see a lot of pi in that one. Relax! You know this graph has a period change because you see a number inside the parentheses that's multiplied by the variable. This constant changes the period of
the function, which in turn changes the distance between the asymptotes. In order for the graph to show this change correctly, you must factor this constant out of the parentheses. Take the
transformation one step at a time:
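Before working through the steps, it may help to see the factoring for this particular function spelled out:
2pi·x + pi/2 = 2pi(x + 1/4),
so the constant that moves inside the parentheses is (pi/2)/(2pi) = 1/4, and the period will turn out to be pi/(2pi) = 1/2.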
1. Sketch the parent graph for cotangent.
2. Shrink or stretch the parent graph.
No constant is multiplying the outside of the function; therefore, you can apply no shrink or stretch.
3. Find the period change.
You factor out the 2pi, which affects the period. The function now reads y(x) = cot 2pi(x + 1/4).
The period of the parent function cotangent is pi. Therefore, you must divide pi by the period coefficient, in this case 2pi. This step gives you the period for the transformed cotangent function, pi/(2pi),
so you get a period of 1/2 for the transformed function. The graph of this function starts to repeat at 1/2, which is different from pi/2, so be careful when you're labeling your graph.
This period isn't a fraction of pi; it's just a rational number. When you get a rational number, you must graph it as such. The figure shows this step.
Graphing of y(x) = cot 2pi x shows a period of 1/2.
4. Determine the horizontal and vertical shifts.
Because you've already factored the period constant, you can see that the horizontal shift is to the left 1/4. The next figure shows this transformation on the graph.
No constant is being added to or subtracted from this function on the outside, so the graph doesn't experience a vertical shift.
The transformed graph of y(x) = cot 2pi(x + 1/4).
5. State the transformed function's domain and range, if asked.
The horizontal shift affects the domain of this graph. To find the first asymptote, set 2pi(x + 1/4) = 0 (setting the period shift equal to the original first asymptote). You find that x = –1/4 is your new asymptote. The graph repeats every 1/2 unit because of its period. So the domain is all x except x = –1/4 + n/2, where n is an integer. The graph's range isn't affected: it is still all real numbers. (These values are rechecked in the short computation below.)
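The values in these steps are easy to double-check numerically. The short Python sketch below is an added illustration, not part of the original article; it simply recomputes the period and the asymptote locations of y = cot 2pi(x + 1/4) under the assumptions stated above.

import math

# Transformed function: y = cot(2*pi*(x + 1/4))
B = 2 * math.pi            # coefficient multiplying x inside the cotangent
period = math.pi / B       # parent cotangent has period pi
print("period:", period)   # 0.5

# The parent cotangent has asymptotes at multiples of pi, so the transformed ones
# solve 2*pi*(x + 1/4) = n*pi, that is, x = n/2 - 1/4.
asymptotes = [n / 2 - 1 / 4 for n in range(-2, 3)]
print("first asymptote:", -1 / 4)
print("nearby asymptotes:", asymptotes)

Running it prints a period of 0.5 and asymptotes at –1.25, –0.75, –0.25, 0.25, 0.75, matching the values found in Steps 3 through 5.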
|
{"url":"http://www.dummies.com/how-to/content/how-to-change-the-amplitude-period-and-position-of.html","timestamp":"2014-04-17T03:57:52Z","content_type":null,"content_length":"60263","record_id":"<urn:uuid:c1a2a314-bc12-4810-b230-e1cb7858f7cd>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Solving Abel’s Type Integral Equation with Mikusinski’s Operator of Fractional Order
Advances in Mathematical Physics
Volume 2013 (2013), Article ID 806984, 4 pages
Research Article
Solving Abel’s Type Integral Equation with Mikusinski’s Operator of Fractional Order
^1School of Information Science & Technology, East China Normal University, No. 500, Dong-Chuan Road, Shanghai 200241, China
^2Department of Computer and Information Science, University of Macau, Avenida Padre Tomas Pereira, Taipa, Macau
Received 21 April 2013; Accepted 10 May 2013
Academic Editor: Carlo Cattani
Copyright © 2013 Ming Li and Wei Zhao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
This paper gives a novel explanation of the integral equation of Abel’s type from the point of view of Mikusinski’s operational calculus. The concept of the inverse of Mikusinski’s operator of
fractional order is introduced for constructing a representation of the solution to the integral equation of Abel’s type. The proof of the existence of the inverse of the fractional Mikusinski
operator is presented, providing an alternative method of treating the integral equation of Abel’s type.
1. Introduction
Abel studied a physical problem regarding the relationship between kinetic and potential energies for falling bodies [1]. One of his integrals stated in [1] is expressed in the form where is known,
but is unknown. The previous expression is in the literature nowadays called Abel’s integral equation [2]. In addition to (1), Abel also worked on the integral equation in [1] in the following form:
which is usually termed the integral equation of Abel’s type [3] or the generalized Abel integral equation [4]. The function may be called Abel’s kernel. It is seen that (1) is a special case of (2)
for . This paper is concerned with (2). Without loss of generality, for the purpose of facilitating the discussion, we multiply the left side of (1) by the constant and let . That is, we rewrite (2) as
The integral equation of Abel’s type attracts the interests of mathematicians and physicists. In mathematics, for example, for solving the integral equation of Abel’s type, [5] discusses a
transformation technique, [6] gives a method of orthogonal polynomials, [7] adopts the method of integral operators, [8, 9] utilize the fractional calculus, [10] is with the Bessel functions, [11, 12
] study the wavelet methods, [13, 14] describe the methods based on semigroups, [15] uses the almost Bernstein operational matrix, and so forth [16, 17], just to mention a few. Reference [18]
represents a nice description of the importance of Abel’s integral equations in mathematics as well as engineering, citing [19–23] for the various applications of Abel’s integral equations.
The above indicates that the theory of Abel's integral equations is still developing. New methods for solving this type of equation are in demand in this field. This paper presents a new method
to describe the integral equation of Abel’s type from the point of view of the Mikusinski operator of fractional order. In addition, we will give a solution to the integral equation of Abel’s type by
using the inverse of the Mikusinski operator of fractional order.
The remainder of this article is organized as follows. In Section 2, we shall express the integral equation of the Abel’s type using the Mikusinski operator of fractional order and give the solution
to that type of equation in the constructive way based on the inverse of the fractional-order Mikusinski operator. Section 3 consists of two parts. One is the proof of the existence of the inverse of
the fractional-order Mikusinski operator. The other is the computation of the solution to Abel’s type integral equation. Finally, Section 4 concludes the paper.
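As background to the constructions below, the classical special case of Abel's equation, namely the kernel 1/sqrt(x - t) in (1), can be checked numerically against the standard inversion formula f(x) = (1/pi) d/dx integral_0^x g(t)/sqrt(x - t) dt. The Python sketch below is an added illustration; it does not reproduce the authors' displayed formulas (which are missing from this copy of the text) or the Mikusinski-operator construction, and the test function f(t) = t and the SciPy-based quadrature are choices made only for this check.

import numpy as np
from scipy.integrate import quad

# Classical Abel equation:  g(x) = integral_0^x f(t) / sqrt(x - t) dt,
# with the standard inversion  f(x) = (1/pi) d/dx integral_0^x g(t) / sqrt(x - t) dt.
# Test case: f(t) = t, for which g(x) = (4/3) x^(3/2) in closed form.
f = lambda t: t
g = lambda x: (4.0 / 3.0) * x ** 1.5

def abel_forward(x):
    # The 'alg' weight handles the integrable singularity (x - t)^(-1/2) at t = x.
    val, _ = quad(f, 0.0, x, weight='alg', wvar=(0.0, -0.5))
    return val

def abel_inverse(x, h=1e-4):
    # Central finite difference of the inner singular integral, divided by pi.
    inner = lambda y: quad(g, 0.0, y, weight='alg', wvar=(0.0, -0.5))[0]
    return (inner(x + h) - inner(x - h)) / (2.0 * h) / np.pi

for x in (0.5, 1.0, 2.0):
    print(x, abel_forward(x), g(x), abel_inverse(x), f(x))

For each x the forward integral matches (4/3) x^(3/2) and the inversion recovers f(x) = x up to quadrature and finite-difference error.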
2. Constructive Solution Based on Fractional-Order Mikusinski Operator
Denote the operation of Mikusinski’s convolution by . Let be the operation of its inverse. Then, for , one has The inverse of the previous expression is the deconvolution, which is denoted by (see [
In (4) and (5), the constraint may be relaxed. More precisely, we assume that and may be generalized functions. Therefore, the Dirac delta function in the following is the identity in this convolution
system. That is, Consequently,
Let be an operator that corresponds to the function such that Therefore, the operator implies For , consequently, we have where .
The Cauchy integral formula may be expressed by using , so that Generalizing to in (12) for yields the Mikusinski operator of fractional order given by Thus, taking into account (12), we may
represent the integral equation of Abel's type by Rewrite the above as Then, the solution to Abel's type integral equation (3) may be represented by where is the inverse of .
There are two questions in the constructive solution expressed by (15). One is whether exists. The other is how to represent its computation. We shall discuss the answers in the next section.
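In the standard reading of Mikusinski's operational calculus, the operator corresponding to the constant function 1 acts as integration from 0 to t, and its fractional power of order alpha acts as the Riemann-Liouville fractional integral with kernel t^(alpha - 1)/Gamma(alpha). The Python sketch below is an added illustration of that classical fact and is not taken from the paper: it discretises the convolution crudely and compares the result with the known closed form, namely that the order-alpha fractional integral of the constant function 1 equals t^alpha/Gamma(alpha + 1).

import numpy as np
from math import gamma

def frac_integral(f_vals, t, alpha):
    # Crude left-endpoint discretisation of
    #   (I^alpha f)(t_i) = (1/Gamma(alpha)) * integral_0^{t_i} (t_i - s)^(alpha - 1) f(s) ds
    # on a uniform grid; the deviation shrinks as the grid is refined.
    dt = t[1] - t[0]
    out = np.zeros_like(t)
    for i in range(1, len(t)):
        s = t[:i]
        out[i] = np.sum((t[i] - s) ** (alpha - 1) * f_vals[:i]) * dt / gamma(alpha)
    return out

alpha = 0.5
t = np.linspace(0.0, 1.0, 4001)
approx = frac_integral(np.ones_like(t), t, alpha)
exact = t ** alpha / gamma(alpha + 1)
print("max abs deviation:", np.max(np.abs(approx - exact)))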
3. Results
3.1. Existence of the Inverse of Mikusinski’s Operator of Order
Let and be two normed spaces for and , respectively. Then, the operator regarding Abel's type integral equation (13) may be expressed by
The operator is obviously linear. Note that (3) is convergent [1]. Thus, one may assume that where
Define the norm of by Then, we have The above implies that is bounded. Accordingly, it is continuous [27, 28].
Since exists. Moreover, the inverse of is continuous and bounded according to the inverse operator theorem of Banach [27, 28]. This completes the proof of (15).
3.2. Computation Formula
According to the previous analysis, exists. It actually corresponds to the differential of order . Thus, In (13), we write by Following [29, p. 13, p. 527], [30], therefore, Since we write (24) by In
the solution (26), if , one has which is a result described by Gelfand and Vilenkin in [9, Section 5.5].
Note that Mikusinski's operational calculus is a tool usually used for solving linear differential equations [24–26], but we use it in this research for the integral equation of Abel's type from
a view of fractional calculus. In addition, we suggest that the idea in this paper may be applied to studying other types of equations, for instance, those in [31–50], to make the possible
applications of Mikusinski’s operational calculus a step further.
4. Conclusions
We have presented the integral equation of Abel’s type using the method of the Mikusinski operational calculus. The constructive representation of the solution to Abel’s type integral equation has
been given with the Mikusinski operator of fractionally negative order, giving a novel interpretation of the solution to Abel’s type integral equation.
This work was supported in part by the 973 plan under the Project Grant no. 2011CB302800 and by the National Natural Science Foundation of China under the Project Grant nos. 61272402, 61070214, and
1. N. H. Abel, “Solution de quelques problèmes à l'aide d'intégrales définies,” Magaziu for Naturvidenskaberue, Alu-gang I, Bînd 2, Christiania, pp. 11–18, 1823.
2. R. Gorenflo and S. Vessella, Abel Integral Equations, Springer, 1991. View at Zentralblatt MATH · View at MathSciNet
3. P. P. B. Eggermont, “On Galerkin methods for Abel-type integral equations,” SIAM Journal on Numerical Analysis, vol. 25, no. 5, pp. 1093–1117, 1988. View at Publisher · View at Google Scholar ·
View at Zentralblatt MATH · View at MathSciNet
4. A. Chakrabarti and A. J. George, “Diagonalizable generalized Abel integral operators,” SIAM Journal on Applied Mathematics, vol. 57, no. 2, pp. 568–575, 1997. View at Publisher · View at Google
Scholar · View at Zentralblatt MATH · View at MathSciNet
5. J. R. Hatcher, “A nonlinear boundary problem,” Proceedings of the American Mathematical Society, vol. 95, no. 3, pp. 441–448, 1985. View at Publisher · View at Google Scholar · View at
Zentralblatt MATH · View at MathSciNet
6. G. N. Minerbo and M. E. Levy, “Inversion of Abel's integral equation by means of orthogonal polynomials,” SIAM Journal on Numerical Analysis, vol. 6, no. 4, pp. 598–616, 1969. View at Publisher ·
View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
7. J. D. Tamarkin, “On integrable solutions of Abel's integral equation,” The Annals of Mathematics, vol. 31, no. 2, pp. 219–229, 1930. View at Publisher · View at Google Scholar · View at
8. D. B. Sumner, “Abel's integral equation as a convolution transform,” Proceedings of the American Mathematical Society, vol. 7, no. 1, pp. 82–86, 1956. View at Zentralblatt MATH · View at
9. I. M. Gelfand and K. Vilenkin, Generalized Functions, vol. 1, Academic Press, New York. NY, USA, 1964.
10. C. E. Smith, “A theorem of Abel and its application to the development of a function in terms of Bessel's functions,” Transactions of the American Mathematical Society, vol. 8, no. 1, pp. 92–106,
1907. View at Publisher · View at Google Scholar · View at MathSciNet
11. S. Sohrabi, “Comparison Chebyshev wavelets method with BPFs method for solving Abel’s integral equation,” Ain Shams Engineering Journal, vol. 2, no. 3-4, pp. 249–254, 2011.
12. A. Antoniadis, J. Q. Fan, and I. Gijbels, “A wavelet method for unfolding sphere size distributions,” The Canadian Journal of Statistics, vol. 29, no. 2, pp. 251–268, 2001. View at Publisher ·
View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
13. R. J. Hughes, “Semigroups of unbounded linear operators in Banach space,” Transactions of the American Mathematical Society, vol. 230, pp. 113–145, 1977. View at Publisher · View at Google
Scholar · View at Zentralblatt MATH · View at MathSciNet
14. K. Ito and J. Turi, “Numerical methods for a class of singular integro-differential equations based on semigroup approximation,” SIAM Journal on Numerical Analysis, vol. 28, no. 6, pp. 1698–1722,
1991. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
15. O. P. Singh, V. K. Singh, and R. K. Pandey, “A stable numerical inversion of Abel's integral equation using almost Bernstein operational matrix,” Journal of Quantitative Spectroscopy and
Radiative Transfer, vol. 111, no. 1, pp. 245–252, 2010. View at Publisher · View at Google Scholar · View at Scopus
16. R. K. Pandey, O. P. Singh, and V. K. Singh, “Efficient algorithms to solve singular integral equations of Abel type,” Computers & Mathematics with Applications, vol. 57, no. 4, pp. 664–676, 2009.
View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
17. M. Khan and M. A. Gondal, “A reliable treatment of Abel's second kind singular integral equations,” Applied Mathematics Letters, vol. 25, no. 11, pp. 1666–1670, 2012. View at Publisher · View at
Google Scholar · View at Zentralblatt MATH · View at MathSciNet
18. R. Weiss, “Product integration for the generalized Abel equation,” Mathematics of Computation, vol. 26, pp. 177–190, 1972. View at Publisher · View at Google Scholar · View at Zentralblatt MATH ·
View at MathSciNet
19. W. C. Brenke, “An application of Abel's integral equation,” The American Mathematical Monthly, vol. 29, no. 2, pp. 58–60, 1922. View at Publisher · View at Google Scholar · View at MathSciNet
20. E. B. Hansen, “On drying of laundry,” SIAM Journal on Applied Mathematics, vol. 52, no. 5, pp. 1360–1369, 1992. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at
21. A. T. Lonseth, “Sources and applications of integral equations,” SIAM Review, vol. 19, no. 2, pp. 241–278, 1977. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at
22. Y. H. Jang, “Distribution of three-dimensional islands from two-dimensional line segment length distribution,” Wear, vol. 257, no. 1-2, pp. 131–137, 2004. View at Publisher · View at Google
Scholar · View at Scopus
23. L. Bougoffa, R. C. Rach, and A. Mennouni, “A convenient technique for solving linear and nonlinear Abel integral equations by the Adomian decomposition method,” Applied Mathematics and
Computation, vol. 218, no. 5, pp. 1785–1793, 2011. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
24. J. Mikusiński, Operational Calculus, Pergamon Press, 1959. View at MathSciNet
25. T. K. Boehme, “The convolution integral,” SIAM Review, vol. 10, no. 4, pp. 407–416, 1968.
26. G. Bengochea and L. Verde-Star, “Linear algebraic foundations of the operational calculi,” Advances in Applied Mathematics, vol. 47, no. 2, pp. 330–351, 2011. View at Publisher · View at Google
Scholar · View at Zentralblatt MATH · View at MathSciNet
27. V. I. Istrăţescu, Introduction to Linear Operator Theory, vol. 65, Marcel Dekker, New York, NY, USA, 1981. View at MathSciNet
28. M. Li and W. Zhao, Analysis of Min-Plus Algebra, Nova Science Publishers, 2011.
29. A. D. Polyanin and A. V. Manzhirov, Handbook of Integral Equations, CRC Press, Boca Raton, Fla, USA, 1998. View at Publisher · View at Google Scholar · View at MathSciNet
30. C. Cattani and A. Kudreyko, “Harmonic wavelet method towards solution of the Fredholm type integral equations of the second kind,” Applied Mathematics and Computation, vol. 215, no. 12, pp.
4164–4171, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
31. C. Cattani, M. Scalia, E. Laserra, I. Bochicchio, and K. K. Nandi, “Correct light deflection in Weyl conformal gravity,” Physical Review D, vol. 87, no. 4, Article ID 47503, 4 pages, 2013.
32. C. Cattani, “Fractional calculus and Shannon wavelet,” Mathematical Problems in Engineering, vol. 2012, Article ID 502812, 26 pages, 2012. View at Publisher · View at Google Scholar · View at
33. M. Carlini, T. Honorati, and S. Castellucci, “Photovoltaic greenhouses: comparison of optical and thermal behaviour for energy savings,” Mathematical Problems in Engineering, vol. 2012, Article
ID 743764, 10 pages, 2012. View at Publisher · View at Google Scholar
34. M. Carlini and S. Castellucci, “Modelling the vertical heat exchanger in thermal basin,” in Computational Science and Its Applications, vol. 6785 of Lecture Notes in Computer Science, pp.
277–286, Springer, 2011.
35. M. Carlini, C. Cattani, and A. Tucci, “Optical modelling of square solar concentrator,” in Computational Science and Its Applications, vol. 6785 of Lecture Notes in Computer Science, pp. 287–295,
Springer, 2011.
36. E. G. Bakhoum and C. Toma, “Mathematical transform of traveling-wave equations and phase aspects of Quantum interaction,” Mathematical Problems in Engineering, vol. 2010, Article ID 695208, 15
pages, 2010. View at Publisher · View at Google Scholar
37. E. G. Bakhoum and C. Toma, “Specific mathematical aspects of dynamics generated by coherence functions,” Mathematical Problems in Engineering, vol. 2011, Article ID 436198, 10 pages, 2011. View
at Publisher · View at Google Scholar · View at Scopus
38. C. Toma, “Advanced signal processing and command synthesis for memory-limited complex systems,” Mathematical Problems in Engineering, vol. 2012, Article ID 927821, 13 pages, 2012. View at
Publisher · View at Google Scholar · View at MathSciNet
39. G. Toma, “Specific differential equations for generating pulse sequences,” Mathematical Problems in Engineering, vol. 2010, Article ID 324818, 11 pages, 2010. View at Publisher · View at Google
Scholar · View at Zentralblatt MATH · View at MathSciNet
40. J. Yang, Y. Chen, and M. Scalia, “Construction of affine invariant functions in spatial domain,” Mathematical Problems in Engineering, vol. 2012, Article ID 690262, 11 pages, 2012. View at
41. J. W. Yang, Z. R. Chen, W.-S. Chen, and Y. J. Chen, “Robust affine invariant descriptors,” Mathematical Problems in Engineering, vol. 2011, Article ID 185303, 15 pages, 2011. View at Publisher ·
View at Google Scholar · View at Scopus
42. Z. Jiao, Y.-Q. Chen, and I. Podlubny, “Distributed-Order Dynamic Systems,” Springer, 2011.
43. H. Sheng, Y.-Q. Chen, and T.-S. Qiu, Fractional Processes and Fractional Order Signal Processing, Springer, 2012.
44. H. G. Sun, Y.-Q. Chen, and W. Chen, “Random-order fractional differential equation models,” Signal Processing, vol. 91, no. 3, pp. 525–530, 2011. View at Publisher · View at Google Scholar · View
at Scopus
45. S. V. Muniandy, W. X. Chew, and C. S. Wong, “Fractional dynamics in the light scattering intensity fluctuation in dusty plasma,” Physics of Plasmas, vol. 18, no. 1, Article ID 013701, 8 pages,
2011. View at Publisher · View at Google Scholar · View at Scopus
46. H. Asgari, S. V. Muniandy, and C. S. Wong, “Stochastic dynamics of charge fluctuations in dusty plasma: a non-Markovian approach,” Physics of Plasmas, vol. 18, no. 8, Article ID 083709, 4 pages,
47. C. H. Eab and S. C. Lim, “Accelerating and retarding anomalous diffusion,” Journal of Physics A, vol. 45, no. 14, Article ID 145001, 17 pages, 2012. View at Publisher · View at Google Scholar ·
View at Zentralblatt MATH · View at MathSciNet
48. C. H. Eab and S. C. Lim, “Fractional Langevin equations of distributed order,” Physical Review E, vol. 83, no. 3, Article ID 031136, 10 pages, 2011. View at Publisher · View at Google Scholar ·
View at MathSciNet
49. L.-T. Ko, J.-E. Chen, Y.-S. Shieh, H.-C. Hsin, and T.-Y. Sung, “Difference-equation-based digital frequency synthesizer,” Mathematical Problems in Engineering, vol. 2012, Article ID 784270, 12
pages, 2012. View at Publisher · View at Google Scholar
|
{"url":"http://www.hindawi.com/journals/amp/2013/806984/","timestamp":"2014-04-20T19:15:05Z","content_type":null,"content_length":"192054","record_id":"<urn:uuid:4a078295-3fbe-4421-8093-6f8e7c21c06c>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00487-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mixed structures on Hom spaces induced by mixed sheaves
up vote 4 down vote favorite
Let $D^b_m(X)$ (resp $D^b(X)$) denote the derived category of mixed Hodge modules (resp. constructible sheaves) on a complex variety $X$. Let
$rat\colon D^b_m(X)\to D^b(X)$
be the `forgetful' functor. This is t-exact for the perverse t-structure on the right. Write $MHM(X)$ for the abelian category of mixed Hodge modules on $X$. Then $MHM(pt)$ is the category of graded
polarizable mixed Hodge structures, and $rat\colon MHM(pt) \to VectorSpaces$ is the evident forgetful functor.
Now let $M,N\in D^b_m(X)$. Set
$\mathcal{H}om(M,N) = \Delta^!(\mathbb{D}M \boxtimes N)$,
where $\Delta\colon X\to X\times X$ is the diagonal map, and $\mathbb{D}$ is Verdier duality.
Let $a\colon X \to pt$ be the evident map. Then
$rat ( H^0(a_*\mathcal{H}om(M,N))) = H^0(rat(a_*\mathcal{H}om(M,N))) = Hom(rat(M), rat(N))$
and in this way we get a Hodge structure on $Hom(rat(M),rat(N))$. All functors are derived.
My question: If $M,N$ are pointwise pure (see Geordie Williamson's comment below), then is the induced structure on $Hom(rat(M), rat(N))$ pure?
My gut answer is no (even if $X$ is complete, the $\Delta^!$ should be messing weights up), but it would make me happier if the answer is yes!
If the answer is no, under what additional conditions (other than requiring $X$ to be smooth and complete plus $M,N$ being the `constant' sheaf) can the answer be converted to yes?
I guess one could also ask the same sort of question for mixed $\ell$-adic sheaves. But I am even less familiar with that setting.
perverse-sheaves hodge-theory sheaf-theory
It's not clear to me either right now. As for a sufficient condition, if $X$ is smooth and $M,N$ are locally constant, then it should be fine, because the internal Hom can be constructed more
naively for variations of pure Hodge structures. – Donu Arapura Oct 6 '12 at 12:44
@Donu Arapura: Do you know whether the corresponding statement for $\ell$-adic sheaves is true? – Reladenine Vakalwe Oct 6 '12 at 17:14
In the cases I know best (IC's on flag varieties) the statement is true by pointwise purity. (The local global spectral sequence degenerates for weight reasons, eg. in the BGG argument mentioned
in your other question "About an argument in Koszul duality..."). Hence to have a counterexample one needs to consider morphisms between non pointwise pure sheaves. However this is easy: take a
non pointwise pure IC and consider hom to or from a skyscraper sheaf. – Geordie Williamson Oct 8 '12 at 12:30
@Geordie Williamson: I am slightly worried that the local to global degeneration for weight reasons that you mention uses that Hom between the ICs is pure. No? Let $X=X_0 \supset \cdots \supset X_1$ be
the filtration by closed subspaces corresponding to the stratification. Let $v_k:X_k \to X$ be the inclusion. The degeneration is obtained by looking at $Hom(v^*_k M,-)$ applied to
$i_*i^!v^!_k N \to v^!_k N \to j_*j^!v^!_k N$ (I hope what my $i$ and $j$ are is clear). Now for degeneration we want the connecting map in the long exact sequence to be zero. Without weights (or parity vanishing) I don't see
how to get it. – Reladenine Vakalwe Oct 8 '12 at 15:39
Search for "What's an example of whose stalks are pure but not pointwise pure?" for an example of non pointwise purity. I learnt a nice example from Luca Migliorini: take a family of elliptic
curves with smooth total space and some singular fibres. Then the decomposition theorem says that the direct image of the constant sheaf on the total space breaks into its cohomology sheaves (on a
curve IC's are shifts of sheaves). Now it is not difficult to see that the "middle" summand (coming from the $H^1$ of the elliptic curves in the family) cannot be pointwise pure. – Geordie
Williamson Oct 8 '12 at 16:49
|
{"url":"http://mathoverflow.net/questions/108984/mixed-structures-on-hom-spaces-induced-by-mixed-sheaves","timestamp":"2014-04-19T23:00:38Z","content_type":null,"content_length":"55351","record_id":"<urn:uuid:a6746dbd-27f9-4a02-85d2-2cf8db394d76>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00462-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lecture 5. Giant component (2)
Consider the Erdös-Rényi graph model
Theorem. Let
Beginning of the proof. Define the set
The proof of the Theorem consists of two main parts:
Part 1 states that all “large” components of the graph must intersect, forming one giant component. Some intuition for why this is the case was given at the end of the previous lecture. Part 2
computes the size of this giant component. In this lecture, we will concentrate on proving Part 2, and we will find out where the mysterious constant
Before we proceed, let us complete the proof of the Theorem assuming Parts 1 and 2 have been proved. First, note that with probability tending to one, the set
The remainder of this lecture is devoted to proving Part 2 above. We will first prove that the claim holds on average, and then prove concentration around the average. More precisely, we will show:
Together, these two claims evidently prove Part 2.
Mean size of the giant component
We begin by writing out the mean size of the giant component:
where we note that
This is what we will now set out to accomplish.
In the previous lecture we defined the exploration process
and that for
To see the last property, note that the exploration process can only explore as many vertices as are present in the connected component
We now define the hitting times
Then we evidently have
(Note how we cleverly chose the random walk
The hitting time computation
Let us take a moment to gather some intuition. The random walks ever hits the origin. This computation can be done explicitly, and this is precisely where the mysterious constant
We now proceed to make this intuition precise. First, we show that the probability
Proof. We need to show that
Note that as
We can evidently write
This completes the proof.
By the above Lemma, and a trivial upper bound, we obtain
To complete computation of the mean size of the giant component, it therefore remains to show that
Lemma. Let
Proof. Recall the martingale
Suppose that
The first equality holds since if
Now suppose we can find
by dominated convergence. Thus, evidently, it suffices to find
We can find such
Evidently the requisite assumptions are satisfied when
Remark. Note that the supercritical case
By an immediate adaptation of the proof of the previous lemma, we obtain
Variance of the giant component size
To complete the proof of Part 2 of the giant component theorem, it remains to show that
To this end, let us consider
To estimate the terms in this sum, we condition on one of the components:
To proceed, note that the event
In particular, the event
As this quantity only depends on
Now note that, by its definition,
We have therefore shown that
which evidently implies
This is what we set out to prove.
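Because the displayed formulas of these notes are missing from this copy, a quick empirical check may be useful. The Python sketch below is an added illustration, not part of the original lecture. It assumes the standard statement that, for edge probability c/n with c > 1, the giant component occupies a fraction rho of the vertices, where rho is the positive root of rho = 1 - exp(-c*rho), and it compares that constant with a simulation (the networkx package is assumed to be available).

import numpy as np
import networkx as nx

def giant_fraction_theory(c, iters=200):
    # Fixed-point iteration for rho = 1 - exp(-c * rho).
    # For c <= 1 it converges to 0; for c > 1, to the unique positive root.
    rho = 0.5
    for _ in range(iters):
        rho = 1.0 - np.exp(-c * rho)
    return rho

n, c = 100_000, 2.0
G = nx.fast_gnp_random_graph(n, c / n, seed=0)
largest = max(nx.connected_components(G), key=len)

print("simulated fraction  :", len(largest) / n)
print("theoretical fraction:", giant_fraction_theory(c))  # about 0.797 for c = 2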
Remark. It should be noted that the proof of Part 2 did not depend on the value of
Many thanks to Weichen Wang for scribing this lecture!
|
{"url":"http://blogs.princeton.edu/sas/2013/04/17/lecture-5-giant-component-2/","timestamp":"2014-04-20T16:19:06Z","content_type":null,"content_length":"122099","record_id":"<urn:uuid:14670e98-35c0-4d3e-b5e3-6ad30245a12d>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Clifton, VA Math Tutor
Find a Clifton, VA Math Tutor
Dear Prospective Student, I hope you are ready to learn, gain confidence and increase your problem solving skills, all while raising your grades! I offer tutoring sessions for all high school math
subjects—from pre-algebra to AP calculus. I have been tutoring various levels of math for 6 years now.
22 Subjects: including linear algebra, logic, ACT Math, GRE
...I am well-versed in all four of the subject areas: * Physical Sciences: Besides being selected for The American Academy of Achievement (where luminaries such as Nobel Prize winners in Physics
and Chemistry gather), I have proven test-taking tactics for helping med school students apply what they...
82 Subjects: including geometry, precalculus, SAT math, MCAT
...All the best, DavidI have always loved learning biology and understanding the reasons why nature behaves the way it does. I graduated from Virginia Tech University with a 3.7 in biology. Over
the years, I developed some helpful mnemonics and fun ways to learn even the more complicated biology lessons.
27 Subjects: including algebra 2, Spanish, algebra 1, prealgebra
...I have taught in different settings and have tutored for many years. I have also worked with children and youth (and have a first grader myself, whom I have taught to read in three languages).
I have taught reading for many years using phonics/phonetics even long before it became the "in" method...
17 Subjects: including algebra 2, English, algebra 1, grammar
I love Science, Technology, Engineering, and Math! I am a private home tutor for my two children in the 8th and 5th grades, both of whom are in Advanced Academic Program in the Fairfax County
Public School system. I have undergraduate and graduate degrees from the University of Michigan in Aerospa...
18 Subjects: including calculus, statistics, probability, algebra 1
Related Clifton, VA Tutors
Clifton, VA Accounting Tutors
Clifton, VA ACT Tutors
Clifton, VA Algebra Tutors
Clifton, VA Algebra 2 Tutors
Clifton, VA Calculus Tutors
Clifton, VA Geometry Tutors
Clifton, VA Math Tutors
Clifton, VA Prealgebra Tutors
Clifton, VA Precalculus Tutors
Clifton, VA SAT Tutors
Clifton, VA SAT Math Tutors
Clifton, VA Science Tutors
Clifton, VA Statistics Tutors
Clifton, VA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/clifton_va_math_tutors.php","timestamp":"2014-04-18T16:07:07Z","content_type":null,"content_length":"23785","record_id":"<urn:uuid:60601b8e-4cec-4a79-89f0-dc1ca4607f06>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Theoretical Physics ::: A New Look at the Structure of Spacetime
Except for String Theory, most of the references above have achieved great success by describing space itself has having discrete or "quantum" structure to it. This means that space is not empty, but
made of a lattice of energy that literally makes up the "spacetime fabric." If we come to understand this fabric, and the fundamental structure of space (which makes up 99.99999% of the universe),
then we have the basic tools to adjust not only the dynamics of electromagnetism, but gravity as well.
After studying the equations for quantum gravitation by Lee Smolin and team, and considering the work of Buckminster Fuller, it became very clear that the spacetime fabric would contain a regular
platonic geometry. Although the postulations for quantum foam and the spin networks illustrated by Smolin are rough and chaotic, it makes far more sense to consider that the natural conservation of
energy is at work as much at the smallest scales as it is on the largest.
It also became clear that some signatures of the most basic arrangements of the spacetime fabric would be found through all cultures, in all forms of plant and animal life, and be revealed in the
heart of world religions. I postulated and tested these basic assumptions as a teenager, and have found confirmations continuously since that time. Assurance as to the fundamental geometry only grew
stronger as I discovered Nassim Haramein's work, which mapped the three-dimensional stacking process of the structure in equilibrium out to a 64 tetrahedral matrix.
For the purposes of my study, I found it more interesting to look at the basic permutations of geometry. My own study, and that of Buckminster Fuller was most informing in this process, and I found
that each simple geometric permutation describes one of the fundamental forces perfectly...
First, let's look at the fundamental structure of the fabric of space-time. If we consider the most basic possible information structures at the Planck scale, a Planck length line is the most
fundamental representation of distance (1D), and a triangle made from three of these lines would form the most fundamental structure of area (2D). Moving up to the next most fundamental dimensional
structure, we obtain a Planck scale tetrahedron (3D). From here, we obtain a component that can form a lattice structure that has both planar surfaces and extruded depth, both of which are measured
in both space and time.
Each unit of Planck space is also a unit of time, since the distance itself cannot exist in measurement without some measure of time. In this way, we could say that the dimension of time (4D) is
embedded, entangled, or folded into each of these three spacial dimensions.
It is also important to look at our most fundamental building block, the triangle, more closely. Through the laws of thermodynamics, we know that energy and matter are always trying to reach the most
balanced state. Heat will continue to flow into cool space until the temperature is equalized. Spheres thrown into a box will continue to compact with each other until they reach the maximum state of
spacial density, which at the same time is the lowest state of potential energy (or mobility).
When discussing a lattice of energy, each node (or intersection point) will try to reach the position in which it has the most equal balance with all surrounding points. This is simply another way of
describing why the equilateral triangle, and its 3D projection, the tetrahedron, is the most fundamental and equalized structural form for space-time.
If energy, and thus the fabric of spacetime, is always trying to maintain an equilibrium state, then any condition outside of that state of equilibrium will produce force (an influence that may cause
a body to accelerate, or produce Gravity).
From here, we may begin to review each of the permutations of this fundamental lattice that would produce different levels of force, and thus produce Gravity.
"Any polygon with more than three sides is unstable. Only the triangle is inherently stable. Any polyhedron bounded by polygonal faces with more than three sides is unstable. Only polyhedra bounded
by triangular faces are inherently stable." -Buckminster Fuller
Curvature 1:
In any equilateral triangular lattice surface, if you remove one of the triangles, the entire surface will bend in order to keep the equilibrium intact.
It is this principle that holds the secret to Buckminster Fuller's work, and allows the curvature of structures to be modulated by adjusting the distance between the areas where this triangle has
been removed. In these places, the Hexagonal "plate" has now "warped" into a Pentagonal plate, and this change in geometry in a single position allows curvature for the entire surface. At maximum
Pentagonal curvature, these Pentagonal plates are directly adjacent to each other, which will tighten into an Icosahedron.
Because of this simple modulation to the equilateral surface, curvature has been established, and a basic "unit of Gravity" in the structure of space-time has been identified.
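One standard way to make the "remove a triangle and the surface curves" statement quantitative is the angular deficit at a vertex; by Descartes' theorem the deficits of a closed convex polyhedron sum to 720 degrees. The short Python computation below is an added illustration using only textbook geometry; it is not part of the original essay and does not depend on its physical claims.

# Angular deficit at a vertex surrounded by k equilateral triangles (60 degrees each).
def deficit_deg(k):
    return 360 - 60 * k

for k in (6, 5, 4, 3):
    print(k, "triangles around a vertex -> deficit of", deficit_deg(k), "degrees")
# 6 triangles: deficit 0 (a flat hexagonal plate); 5: 60 (the pentagonal curvature
# described above); 4: 120 (octahedral vertex); 3: 180 (tetrahedral vertex).

# Descartes' theorem: the deficits of a closed convex polyhedron sum to 720 degrees.
print("icosahedron total deficit:", 12 * deficit_deg(5), "degrees")  # 12 vertices * 60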
Curvature 2:
With enough energy, after the Pentagonal curvature reaches its maximum state (Icosahedral arc), the structure of spacetime can be forced to curve even further, and another triangle in the equilibrium
"pressed" off of the surface lattice. This will tighten the four remaining triangles of the surface into a pyramid shape. This state of compression and change in the lattice has a rippling effect, as
each of the vector planes curve to adjust to the insertion of a "square" unit.
This effect is intimately involved with the series of energetic alterations that produce physical matter from energy within the fabric of space-time, but we will not go into depth here on that
process. However, we must recognize that at this level of curvature there is no longer a gradual spherical curve on spacetime, just an abrupt compression point of energy. This may be the key to the
vector equilibrium form, and may form the entry point into a black hole or singularity.
"There are only three possible cases of fundamental omnisymmetrical, omnitriangulated, least-effort structural systems in nature: the tetrahedron with three triangles at each vertex, the octahedron
with four triangles at each vertex, and the icosahedron with five triangles at each vertex. If there are six equilateral triangles around a vertex we cannot define a three-dimensional structural
system, only a 'plane.'" - Buckminster Fuller
Producing Spin:
Any time a triangle is "removed" from one plane of equilibrium, or from a curvature, it cannot simply be "added" to another Hexagonal lattice. If you attempt to insert an additional equilateral
triangle into a Hexagon, you will find that there is no amount of angular adjustment you can make to allow it to fit.
Therefore one of two things must happen:
(1) The equilateral triangles in the set "warp" and become acute triangles in order to accomodate the new trianglular energy field, which would take a vast amount of energy since these structures
want to remain in equilibrium and this would change the balance of the entire fabric.
(2) The equilateral triangle connects to one of the triangles in the set, and the new triangle now "rotates" into a position connecting with one of the other vector equilibrium planes. The initial
act of this rotation into position would cause the torsion forces in the fabric of space-time described by Nassim Haramein and others.
Being that the latter option would require far less energy, it is certainly the more likely scenario, and generates other notable effects. First of all, the rotation of a new triangle, or more
specifically a single "point" of force (since it is actually only one node or intersection on the structure that needs to be removed) from one surface into any other plane of the Hexagonal vector
equilibrium would produce a chain reaction, in which that Hexagonal plane would then transfer one of its own "points" to the next adjacent plane, and so on.
This would look like a ripple effect, in which a point of energy in the fabric of space-time would go spinning through the lattice until it found a stable position to rest, or "nest." Energy
transference in this form is limited to the speed at which each Planck unit of distance in the structure can exchange with the next, which would be the fundamental measure of the speed of energy
exchange through the fabric of space-time. This speed of course, is the "speed of light," which in turn provides the fundamental unit of "time" to the vector equilibrium of space, as it provides the
relative measure of separation between any point in the lattice with any other point.
Frequency and Change:
Now if we consider each one of the individual points in this lattice, we see that while in a vector equilibrium state, there are 18 radials extending to other points in the matrix along spacial
dimensions. The points are intersections of energy, bridging dimensions. Each point in the lattice may fluxuate as energy passes through it, as vibration travels through these dimensional radials.
The state of a point or intersection at any given moment is determined by the combination of vibrations moving through it. Each structural intersection is not only a position, but also resonates at a
specific frequency. In this way, even the Planck-scale structure of space-time is not a static and rigid system, but is flexible and acts in accordance with the properties of waveforms and fluid dynamics.
Energy will always attempt to reach an equilibrium state, so the dynamics at this scale are in a constant flux, working to return balance to the fundamental fabric. Each change that precipitates this
"returning" ripples through the fabric, and can be seen from microcosmic to macrocosmic scales.
|
{"url":"http://adamapollo.info/projects_and_physics/theory/","timestamp":"2014-04-20T03:10:52Z","content_type":null,"content_length":"52707","record_id":"<urn:uuid:53c25d7d-a633-43b9-8cdf-18c38f67eec8>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00599-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Active Contour Toolbox
05 Jul 2006 (Updated 05 Jul 2006)
This toolbox provides some functions to segment an image or a video using active contours
ac_deformation(acontour, deformation, framesize, resolution)
function deformed = ac_deformation(acontour, deformation, framesize, resolution)
%ac_deformation: deformation of an active contour with topology management
% e = ac_deformation(a, d, f, r) applies a deformation, d, to an active
% contour, a, and clips it according to some corner coordinates, f (see
% ac_clipping), while maintaining (or imposing) a resolution of r. d is a 2xN
% array where N is the number of segments of the active contour (or,
% equivalently, its number of samples). If the active contour is composed of m
% single active contours, then N is the sum of the N_i's for i ranging from 1
% to m, where N_i is the number of segments of the i^th single active contour.
% d is then equal to [d_1 d_2 ... d_i ...], where d_i is the deformation to
% be applied to the i^th single active contour.
%
% The i^th single active contour of a is sampled and its j^th sample is
% translated by d_i(:,j). d_i(1,j) is applied to the first coordinate of the
% sample. If the orientation of the deformed single active contour is opposite
% to its original orientation, the deformation has transformed the contour
% into a point and then further into some "anticontour". Therefore, the
% contour has disappeared. If all of the single active contours disappear, e
% is the empty array. Otherwise, each non-empty deformed single active contour
% is tested for self-intersection and split into several simple single active
% contours if needed. Finally, the set of deformed single active contours,
% either originally present or resulting from a splitting, is tested for
% cross-intersection and merging are performed if needed.
%
% f is either a 2x1 or a 1x2 matrix. If r is positive, the length of each
% active contour segment is approximately of r pixels. If negative, the active
% contour is composed of abs(r) segments (or, equivalently, samples).
%
% Known bug: in some cases of self-intersection, an active contour might
% disappear due to a wrong estimation of the contour orientation. The active
% contour deformation (or velocity) can be smoothed a little to try to avoid
% this phenomenon, either through the smoothing parameter of ac_segmentation
% or by adding a minimum length constraint to the segmentation energy.
%
%See also po_orientation, ac_clipping, ac_segmentation, acontour.
%
%Active Contour Toolbox by Eric Debreuve
%Last update: July 5, 2006

deformed = [];
deformation_idx = 1;
for subac_idx = 1:length(acontour)
   breaks = ppbrk(acontour(subac_idx), 'breaks');
   previous_samples = ppval(acontour(subac_idx), breaks(1:end - 1));
   orientation = sign(po_orientation([previous_samples previous_samples(:,1)]));

   current_deformation = deformation(:,deformation_idx:(deformation_idx+length(breaks)-2));
   samples = previous_samples + current_deformation;

   new_orientation = po_orientation([samples samples(:,1)]);
   %if the polygon is not simple, in some cases (in particular, a single
   %self-intersection), the orientation is not close to 2*pi in absolute value.
   %then, the signed area is used to find an orientation anyway (is it really
   %robust?)
   if (abs(new_orientation) < 1.8 * pi) || (abs(new_orientation) > 2.2 * pi)
      new_orientation = sign(ac_area(cscvn([samples samples(:,1)])));
   else
      new_orientation = sign(new_orientation);
   end

   if new_orientation == orientation
      hires_samples = fnplt(cscvn([samples samples(:,1)]));
      %fnplt may output repeated or almost identical samples, disturbing
      %po_simple (or po_orientation)
      hires_samples(:,sum(abs(diff(hires_samples, 1, 2))) < 0.1) = [];
      %what if a contour is so unsmooth that fnplt returns only tiny edges
      %(length<0.1) while being non-negligible?
      if (size(hires_samples,2) < 5) || po_simple(hires_samples)
         samples = [samples samples(:,1)];
      else%splitting management
         mask = po_mask(hires_samples, framesize);
         [labels, number_of_regions] = bwlabeln(1 - mask, 4);
         if number_of_regions > 1
            mask = logical(mask);
            for label = 1:number_of_regions
               [coord_1, coord_2] = find(labels == label);
               if inpolygon(coord_1(round(end/2)), coord_2(round(end/2)), hires_samples(1,:), hires_samples(2,:))
                  mask = imfill(mask, [coord_1(round(end/2)) coord_2(round(end/2))], 4);
               end
            end
            mask = double(mask);
         end
         samples = po_isocontour(mask, 0.5, orientation, {1 0}, fspecial('gaussian', 5, 2));
      end

      if iscell(samples)
         deformed = [deformed cellfun(@cscvn, samples)];
      else
         deformed = [deformed cscvn(samples)];
      end
   end

   deformation_idx = deformation_idx + length(breaks) - 1;
end

if ~isempty(deformed)
   if length(deformed) > 1%merging management
      deformed = ac_isocontour(ac_mask(deformed, framesize), 0.5, 1, resolution, {1 0}, [], true);
   else
      deformed = ac_resampling(deformed, resolution, framesize);
   end
end
|
{"url":"http://www.mathworks.com/matlabcentral/fileexchange/11643-active-contour-toolbox/content/acontour/transformation/ac_deformation.m","timestamp":"2014-04-18T08:06:05Z","content_type":null,"content_length":"33201","record_id":"<urn:uuid:1a68d598-a1a3-4405-8055-e50a3f373737>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
|
On two-dimensional flows of compressible fluids
Bergman, Stefan (Brown University, Providence, R.I)
This report is devoted to the study of two-dimensional steady motion of a compressible fluid. It is shown that the complete flow pattern around a closed obstacle cannot be obtained by the method of
Chaplygin. In order to overcome this difficulty, a formula for the stream-function of a two-dimensional subsonic flow is derived. The formula involves an arbitrary function of a complex variable and
yields all possible subsonic flow patterns of certain types. Conditions are given so that the flow pattern in the physical plane will represent a flow around a closed curve. The formula obtained can
be employed for the approximate determination of a subsonic flow around an obstacle. The method can be extended to partially supersonic flows.
An Adobe Acrobat (PDF) file of the entire report:
|
{"url":"http://naca.central.cranfield.ac.uk/report.php?NID=2227","timestamp":"2014-04-16T22:03:51Z","content_type":null,"content_length":"2017","record_id":"<urn:uuid:150584ae-2ff2-47fa-a551-374045e778dd>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Higley Algebra Tutor
Find a Higley Algebra Tutor
...I love working with (rather than against) a student's particular learning style. I passionately believe that each student can learn and become a high achiever. In the areas of math, I take a
step-by-step approach.In reading, I rebuild with phonics.
29 Subjects: including algebra 1, algebra 2, reading, English
...I am a highly qualified and state certified high school math teacher. I have a master's degree in mathematics and 14 years of classroom teaching experience. I have tutored both high school and
college students with success.
15 Subjects: including algebra 1, algebra 2, calculus, statistics
...I have worked with TOEFL students from a wide variety of backgrounds and countries such as Japan, Saudi Arabia, and Kuwait. I help students improve their performance on all sections of the
TOEFL. I tutor several sections of the ASVAB, including Word Knowledge, Paragraph Comprehension, Arithmetic Reasoning, and Mathematics Knowledge.
33 Subjects: including algebra 1, algebra 2, reading, Spanish
...Much of my tutoring has been for ESL students. I have had many years preparing students for the SAT, GRE, GED, ACT and various military service tests. While the ultimate results of tutoring lie
with the student, I have been successful in helping to raise test scores significantly.
34 Subjects: including algebra 1, reading, English, writing
...I have experience working with online classes, so I can assist your child in handling the online class or working with educational software such as ALEKS, A+, MathXL, PLATO, NovaNet, and MyLab
Mastering. My goal in tutoring is to equip my students with strong math skills, study strategies, and...
8 Subjects: including algebra 1, algebra 2, geometry, SAT math
|
{"url":"http://www.purplemath.com/Higley_Algebra_tutors.php","timestamp":"2014-04-20T03:56:07Z","content_type":null,"content_length":"23565","record_id":"<urn:uuid:51940f22-2537-4780-8523-65e68daa5dbd>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
|
News Search results: Computational mathematics
By a News Reporter-Staff News Editor at Journal of Mathematics -- Research findings on Computational Intelligence are discussed in a new report.
HispanicBusiness.com · Apr. 09, 2014
Related News
... sciences, chemistry, economics, geosciences, mathematics, physics ... science across the lifespan, infectious diseases, computational science
Daniel Forger, a professor of mathematics and computational medicine at Michigan, along with grad student Olivia Walch and Yale Ph.D...
Disease Genomics, Environmental Biotechnology and Computational Biology. ... aptitude in biology and basic knowledge on chemistry and mathematics.
India Today · Apr. 16, 2014
... geosciences, the physical sciences including mathematics, chemistry ... in education and broadening participation in computational neuroscience ...
... his research was published in the journal PLOS Computational Biology. ... these schedules are optimal according to the mathematics." Forger ...
The heavy lifting is done by the software that generates these designs, employing a branch of mathematics called computational geometry.
Impact Lab · Apr. 15, 2014
Simultaneously, he worked toward a computational and applied mathematics PhD from Princeton, which he received last year.
Venturebeat · Apr. 15, 2014
B. Forger and Kirill Serkh, both professors of mathematics at the ... takes to get over jet lag in the April 10 issue of PLOS Computational ...
|
{"url":"http://www.ask.com/news?o=41647999&l=dir&oo=41647999&qsrc=&q=Computational+mathematics","timestamp":"2014-04-20T06:37:49Z","content_type":null,"content_length":"86113","record_id":"<urn:uuid:a354f8b3-1d5d-4fd9-b220-b979f3b14f33>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: November 2005 [00839]
Re: Types in Mathematica
• To: mathgroup at smc.vnet.net
• Subject: [mg62644] Re: Types in Mathematica
• From: "Steven T. Hatton" <hattons at globalsymmetry.com>
• Date: Wed, 30 Nov 2005 00:06:40 -0500 (EST)
• References: <dme722$i89$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
Bill Rowe wrote:
> On 11/27/05 at 2:40 AM, hattons at globalsymmetry.com (Steven T.
> Hatton) wrote:
>>A related question follows from this example:
>>Plot3D[f, {x, xmin, xmax}, {y, ymin, ymax}] generates a
>>three-dimensional plot of f as a function of x and y. Plot3D[{f,
>>s}, {x, xmin, xmax}, {y, ymin, ymax}] generates a three-dimensional
>>plot in which the height of the surface is specified by f, and the
>>shading is specified by s.
>>Note there is no specification of /what/ "f", "x", "xmin", "xmax",
>>"y", "ymin", and "ymax" should be. What difficulties exist which
>>prevent the specification of the "types" of these parameters?
> It isn't clear to me as to what point you are trying to make here.
I don't know that I was really trying to make a point. I was really asking
a question. The question does hint at a more general idea, but addressing
the question might serve to address the more general notion of function
signatures and parameter types. There are "type" mechanisms in Mathematica
such as NumericQ, NumberQ, etc.
I am inclined to think of type systems in other programming languages as a
means of communicating to both the compiler, and the human reader of a
program the intent of the programmer. In a strongly typed language, it is
often possible to leverage function signatures, and return value types as a
means of creating skeleton documentation. Often such skeleton
documentation is sufficient for presenting the API to a programmer without
any additional "human" contribution. For examples, see Doxygen, and
> The description returned by Information isn't intended to be a full
> documentation of the function. Basically, all it provides is a minimal
> description showing arguments. But a clickable link is also provided
> which allows you to quickly access more complete documentation which
> includes usage examples and links to further documentation.
Information[] doesn't actually provide that hyperlink, but ? and ?? will.
Nonetheless, the additional documentation does not provide a more detailed
"type" specification than does Information[]. At least not in the case of
Plot3D. What I guess I'm really asking is whether there is a technical
reason that makes such a type system impractical in Mathematica. The only
"primitive" type that exist in terms of function return value in
Mathematica appears to be provided by the NumericFunction attribute. But
it seems to me,that attribute can "lie" about what the function actually
>>What about the notion of "return values"? For example Plot3D
>>returns a SurfaceGraphics object, but the documentation resulting
>>from Information[Plot3D] does not mention that. Why?
> Why should that be included?
Perhaps for completeness. If it were part of the formal definition of the
function, it might be valuable in validating programs and finding errors.
OTOH, there may be technical considerations which make such a type system
impractical. I am fully aware that the inertia of previous design choices
can be a valid argument against introducing such changes at this point.
> Again, the purpose of Information is to give
> a short description not full documentation. Clearly, what is omitted from
> any shortened description is somewhat arbitrary. I would think most of the
> time Plot3D is used to generate a graphic that is not further manipulated.
> Assuming this is true, it seems reasonable to provide a list of the
> arguments in a shortened description but not much detail as to output
> objects.
Is it reasonable to speak in terms of a well defined return type in
Mathematica? One place such a feature might be useful is in creating
support tools which aid a programmer. Such tools may not be as
advantageous in Mathematica as they are in programming languages intended
for application development.
The Mathematica Wiki: http://www.mathematica-users.org/
Math for Comp Sci http://www.ifi.unizh.ch/math/bmwcs/master.html
Math for the WWW: http://www.w3.org/Math/
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2005/Nov/msg00839.html","timestamp":"2014-04-21T09:40:42Z","content_type":null,"content_length":"38808","record_id":"<urn:uuid:56221adf-0f2a-4117-81ed-b21099cfc74d>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Logic Statement False Implies True
Date: 02/06/2008 at 23:49:17
From: Abe
Subject: False implies True?
I am well familiar with the linguistic arguments which clarifies this
somehow confusing concept. Is there a deeper philosophical argument
that touches on the underlying logic of this concept {logic axiom}?
The most disturbing thing about this logic axiom is that it
eliminates {defeats} the logical contingency of the conclusion on
the premise, which somehow goes against the very essence of logic!
By layman definition, logic is something that allows "naturally" the
consequence to flow from a premise. When the same conclusion happens
no matter what the premise is, the connectedness of the logic in
between the conclusion and its premise loses significance, just like
the definition of a function is defeated when one certain value from
the domain point {maps} into two or more different values from the range.
If the moon is made of cheese, then I will go to the movie next week
can rather best describe a sarcastic {insane} mode of thinking than a
flow of "natural" logic especially when it can be said equally that if
the moon is NOT made of cheese, I still go to the movie! The dilemma
of this concept, though I use it myself to prove some propositions
like the empty set is a subset of every set, is that it kills the
"natural" connectedness inherent in logic.
Unfortunately, the very definition of logic itself is so intuitive
and vague in the same way the set or sanity is defined, otherwise
undefined! Even though I trained myself to live with this concept
and I use it in my formal proofs, I try to avoid using it as much as
possible. It is like employing proof by contradiction. I would
rather prove directly.
Date: 02/07/2008 at 22:59:18
From: Doctor Peterson
Subject: Re: False implies True?
Hi, Abe.
There are several different brands of "logic", in philosophy and in
math, and I am only familiar with some of them, so you may need to
tell me the context of your question. Are you talking about a formal
system of logic, or just about the way it is used in ordinary
mathematical proofs?
In the symbolic logic I am familiar with, what is commonly read as
"implies" (A->B) is not really an implication. It should be read
simply as "if A then B", and is taken to be true when the truth values
of A and B are such that they do not contradict the claim that B is
true whenever A is true. It is important NOT to read into it any
claim of a cause-and-effect connection, or anything of that sort.
I think the basic underlying reasons for this are twofold:
First, we want A->B to be defined for all values of A and B; we don't
accept "not enough information" or something like that. So we have to
consider it either true or false, and the question becomes, which
makes sense in the contexts in which we will be using it?
Second, the main context is judging the validity of an argument. When
we put an argument into symbolic form, we want it to be always true if
it is a valid argument. It turns out that this works IF we define
A->B as we do. Essentially, this means that we say something is true
when there is no evidence that it is false; it is "innocent until
proven guilty".
Your example of the empty set is a slightly different issue, though
clearly related. Again, we choose to say that a set is a subset of
another when "every element of the former is an element of the
latter", and take that to mean "there is no element of the former that
is NOT an element of the latter", because that yields the results we
need for further theorems. This is the same "innocent until proven
guilty" idea.
If you have any further questions, feel free to write back.
- Doctor Peterson, The Math Forum
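To see the convention laid out in one place, here is a minimal sketch (Python used purely for illustration) that tabulates A -> B as the material conditional, i.e. as (not A) or B:

# Print the truth table of the material conditional A -> B.
# It is False only in the single row where A is True and B is False.
for A in (True, False):
    for B in (True, False):
        print(f"A={A!s:<5}  B={B!s:<5}  A->B={(not A) or B}")

Both rows with A false come out True, which is exactly the "innocent until proven guilty" reading described above.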
Date: 02/12/2008 at 15:29:16
From: Abe
Subject: Thank you (False implies True?)
Thanks plenty Dr. Math for your interesting answer about "P implies
Q." I found it rather interesting that there is indeed some
philosophical underlying mode of thinking built into the definition
of "P implies Q," that is, if there is no evidence that something is
false, then we must assume that it is true. This we may cast as the
"default rule of the truth."
I understand that we have a substantial liberty over our definitions,
although there is always something logical to them, and that is why we
do not have to prove definitions. So if we chose to define the truth
table of "P implies Q" as such, there is no harm in that as long as we
proceed consistently with any argument we base on that definition.
Obviously "if P is true and Q is true" then the implication is true
and that is very clear. Similarly, if P is false and Q is false, then
the implication is true as well, and that is clear, too. The third
case is when we claim that if P is true yet Q is False, then our logic
of implication is flawed because by doing that we just nullified the
case that if P is true then Q must be true, something we just upheld a
moment ago!
Now comes the interesting case which P is false and Q is true, yet we
must assume that the implication is True ONLY for lack of better
knowledge or evidence pointing to the other direction. This is to me
a philosophy or a mode of thinking which I accept as a sound one
though it has some serious implications beyond math.
Thanks a million for the explanation.
Date: 02/12/2008 at 23:35:52
From: Doctor Peterson
Subject: Re: Thank you (False implies True?)
Hi, Abe.
Well stated, except for a few details.
This is only a definition made for a specific purpose in math, so you
can't really take it beyond that context and make it a philosophical
principle. I think the same reasoning does apply in other areas, but
you'd have to think through whether it's appropriate on a case by case
basis. In particular, we don't always HAVE to "assume" something is
true when there's no evidence for it; we just can't assume that it is false.
Also, when P is false and Q is false, you really have no evidence to
justify the statement that P implies Q. I'd like to pursue that a
little more deeply.
Let's take an example. I have a piece of paper here that is coated
with a chemical that changes color. I claim that if the paper is wet,
it is red. That is,
WET -> RED
(Note that I didn't say "wet implies red"; the word "implies", as I
said before, is not really appropriate for this connective, as you'll
see in a moment.)
Now let's consider what you might see when I show you the paper,
taking the four cases in your order.
1. It's wet, and it's red. That agrees with my statement, so you say
my statement is true. (You do not have enough evidence to conclude
that my statement is ALWAYS true; you've just seen one case. Maybe
tomorrow it will be cooler, and you'll find that the paper is only red
if it's wet AND warm. That's why you can't say it's true that wet
implies red, only that it is true in this instance that "if it's wet,
then it's red." Do you see the difference?)
2. It's dry, and it's blue. You don't know that it would be red if it
were wet; there's no evidence one way or the other. So, simply by
convention, you say that my statement is true, meaning that the
evidence is consistent with that conclusion. But you can't say that
you've proved that wetness IMPLIES redness; all you can say is that it has not been disproved.
3. It's wet, and it's blue. That disproves my statement; we have a
case where it is wet but NOT red. My statement is definitely false.
(This case IS enough to disprove the stronger claim that wet implies
red; you have a counterexample.)
4. It's dry, and it's red. Hmmm ... maybe it's ALWAYS red, and my
statement was technically true but misleading; or maybe it's red for
some other reason than wetness. Or maybe it actually turns blue when
it gets wet, and I just lied. Again, you really don't know! The
evidence at hand deals only with the case where it's dry, and my
statement is about what would be true if it were wet. So you have to
say that it's true, because you haven't disproved it, just like in case 2.
So your cases 2 and 4 are both "true" for the same reason, not for
different reasons. The evidence in both cases is consistent with my
statement, so we call it true.
- Doctor Peterson, The Math Forum
Date: 02/13/2008 at 02:18:10
From: Abe
Subject: Thank you (False implies True?)
Great example Dr. Peterson. If I may modify my understanding by the following:
Assuming for the sake of argument that:
1. P ---> Q, wet implies red.
Without contradicting our first statement, we could also claim that
2. ~ P ---> Q, NOT wet implies red because the case that wet
correlates with red might not be exhaustive (reserved thinking!)
Further, without contradicting our first statement, we may also claim
3. ~P ---> ~Q, NOT wet implies NOT red, although this is a useless
(vacuous) statement since it deals with none of our original variables
(wet and red.)
4. P ---> ~ Q, wet implies NOT red, clearly contradicts our first
statement which we assumed to be true.
In sum, P implies Q is nothing more than a claim or a proposition. We
may uphold the rest of the logic table for P implies Q since the logic
equivalence (truth value) for the remaining three cases does NOT
contradict our claim about P implies Q, although not useful statements
in some cases. Thanks again for the great example.
Date: 02/13/2008 at 08:52:30
From: Doctor Peterson
Subject: Re: Thank you (False implies True?)
Hi, Abe.
Yes. Well said.
Well, I'll make one small change. Your 3 is not really useless or
vacuous (the latter word has a specific meaning in logic which does
not apply here); it's just irrelevant to the original statement,
which is probably what you meant. Your 2 and 3 are two alternative
possibilities for what happens when P is false, either of which
would be alike compatible with P->Q, because the latter doesn't say
anything about what happens when P is false.
Thanks for the opportunity to think through this, because I've been
looking for better ways to explain it to students, and I think I've
got it.
- Doctor Peterson, The Math Forum
|
{"url":"http://mathforum.org/library/drmath/view/72083.html","timestamp":"2014-04-20T09:11:15Z","content_type":null,"content_length":"15552","record_id":"<urn:uuid:d6a0406d-69a8-4c8d-83c0-a88f04517379>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Vector F
October 5th 2008, 06:57 AM
Vector F
A man pushes a wheelbarrow up an incline of 20 degrees with a force of 100 pounds. Express the force vector F in terms of i and j.
October 5th 2008, 11:19 AM
Let $|\vec F|$ denote the length of the vector (the magnitude of the force); then $\vec F = 100 \cos(20^\circ)\,\mathbf{i} + 100 \sin(20^\circ)\,\mathbf{j}$
See attachment
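Evaluating the components numerically: $100\cos(20^\circ) \approx 93.97$ and $100\sin(20^\circ) \approx 34.20$, so $\vec F \approx 94.0\,\mathbf{i} + 34.2\,\mathbf{j}$ (in pounds).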
October 5th 2008, 09:36 PM
|
{"url":"http://mathhelpforum.com/pre-calculus/52061-vector-f-print.html","timestamp":"2014-04-18T11:13:03Z","content_type":null,"content_length":"6125","record_id":"<urn:uuid:ebe6794f-4197-439c-9c85-3e3c2c179b21>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Each employee of a certain task force is either a manager or
Author Message
Each employee of a certain task force is either a manager or [#permalink] 25 Mar 2009, 10:07
Each employee of a certain task force is either a manager or a director. What percent of the employees on the task force are directors?
(1) the average (arithmetic mean) salary of the managers on the task force is 5000 less than the average salary of all the employees on the task force.
(2) the average (arithmetic mean) salary of the directors on the task force is 15000 greater than the average salary of all the employees on the task force.
OPEN DISCUSSION OF THIS QUESTION IS HERE: each-employee-of-a-certain-task-force-is-either-a-manager-or-94413.html
Re: DS Ratio question GMAT prep [#permalink] 25 Mar 2009, 19:30
I guess the answer is E.
m + d = t ( total no of employees )
% * whole = part
% = part/whole
= No of directors/Total no of employees = d/m+d
statement 1:
Senior Manager
Average salary of manager is 5000 < average for all employees
Joined: 01 Mar 2009
Am = Total salary of manager/m
Posts: 372
Average salary of all employees = ( Tm + Td )/m+d
Location: PDX
So Tm/m = (Tm + Td / m+d ) - 5000 : Not sufficient
Followers: 5
Statement 2:
Kudos [?]: 62 [0], given:
24 Ad = Average salary of all employees + 15000
Td/d = Tm + Td / m+d + 15000 - Not sufficient
Tm/m + 5000 = Td/d - 15000
we need d/m+d - So not sufficient.
In the land of the night, the chariot of the sun is drawn by the grateful dead
Current Student
Re: DS Ratio question GMAT prep [#permalink] 26 Mar 2009, 07:53
Joined: 28 Dec 2004
it is a weighted average type problem, we know that Director income is 15000 more than the avg and we know that manager income is 5000 less than the avg..
Posts: 3411
so if you look at it, the avg is closer to the manager salary than it is to the director salary clearly there are more managers than directors!
Location: New York City
Schools: Wharton'11
Followers: 13
Kudos [?]: 148 [0],
given: 2
pleonasm Re: DS Ratio question GMAT prep [#permalink] 26 Mar 2009, 08:43
Senior Manager FN wrote:
Joined: 01 Mar 2009 it is a weighted average type problem, we know that Director income is 15000 more than the avg and we know that manager income is 5000 less than the avg..
Posts: 372 so if you look at it, the avg is closer to the manager salary than it is to the director salary clearly there are more managers than directors!
Location: PDX Well that still doesn't give the percentage of directors in the group. I'm still not convinced. Sure there are more managers than directors - but to find the percentage of
directors, won't we need the number of directors ? I may be missing something very basic here. Any help will be appreciated.
Followers: 5
- pradeep
Kudos [?]: 62 [0], given:
24 _________________
In the land of the night, the chariot of the sun is drawn by the grateful dead
Re: DS Ratio question GMAT prep [#permalink] 27 Mar 2009, 05:43
This post received
z102478 I tried this way and agree that A & B are not the answers...
Intern Total avg of Manager and Director = x
Joined: 07 Oct 2008 For Managers ,
Salary Avg = m = (x-5000)
Posts: 12 Managers Count = M
Followers: 0 For Directors ,
Salary Avg = d = (x+15000)
Kudos [?]: 1 [1] , given: Directors Count = D
We have a clue in stmt that they are talking abt averages so lets substitute in the formula
Avg (x) = {M(x-5000) + D(x+15000) } / (M+D)
After solving the above eqn you will get a relationship between M and D and it is
M/D =1/3 and hence the answer is C.
Let me know what you think?
Senior Manager
Re: DS Ratio question GMAT prep [#permalink] 27 Mar 2009, 07:17
Joined: 16 Jan 2009
Could you please solve the equation to get M/D = 1/3?
Posts: 362
Nice explanation so far , thanks a lot.
Technology, Marketing _________________
GMAT 1: 700 Q50 V34 Lahoosaher
GPA: 3
WE: Sales
Followers: 2
Kudos [?]: 61 [0], given:
Re: DS Ratio question GMAT prep [#permalink] 27 Mar 2009, 12:07
This post received
Accountant wrote:
Each employee on a certain task force is either a manager or a director. What percentage of the employees are directors:
1) Average salary for manager is $5,000 less than average of all employees.
2) Average salary of directors is $15,000 greater than average salary of all employees
Please explain your answers.
Suppose: no. of managers = m
managers' avg. salary = s1
no. of directors = d
directors avg salary = s2
GMAT TIGER From st 1:
CEO s1 + 5000 = (s1 m + s2 d) / (m+d)
Joined: 29 Aug 2007 s1 (m+d) + 5000 (m+d) = (s1 m + s2 d)
Posts: 2504 s1 m + s1 d + 5000m + 5000d = s1 m + s2 d
Followers: 48 s1 d + 5000m + 5000d = s2 d
Kudos [?]: 452 [4] , s2 d - s1 d = 5000 (m + d)
given: 19
d (s2 - s1) = 5000 (m + d)
s2 - s1 = 5000 (m + d)/d
From st 2:
s2 - 15000 = (s1 m + s2 d) / (m+d)
s2 - s1 = 15000 (m+d)/m
From 1 and 2: 5000 (m+d)/d = 15000 (m+d)/m
m = 3d
d/(m+d) = d/(3d+d) = 1/4
So C is correct.
Verbal: new-to-the-verbal-forum-please-read-this-first-77546.html
Math: new-to-the-math-forum-please-read-this-first-77764.html
Gmat: everything-you-need-to-prepare-for-the-gmat-revised-77983.html
Re: DS Ratio question GMAT prep [#permalink] 27 Mar 2009, 15:22
Here is another way of solving it.
Thum rule: -
part / whole = % / 100
Let m = no of managers and M be the avg salaray of a manager
mrsmarthi d = no of directors and D be the avg salary of the director.
T be the avg salary of the total group.
Senior Manager
We can say that Mm + Dd = T(m + d) ----> Eq 1
Joined: 30 Nov 2008
Question asked is what is the Percentage of directors ie (d / (m+d)) = ?
Posts: 495
For simplicity lets consider 5 and 15 instead of 5000 or 15000.
Schools: Fuqua
From stmt1, avg salary of managers is 5 less than total avg
Followers: 10
==> Mm = (T-5)m. ---> Eq 2. We don't know any thing abt d or D. Cannot simplfy this further to find out what is d / (m+d). Hence Insufficient.
Kudos [?]: 113 [0],
given: 15 From stmt2, avg salary of directors is 15 more than total avg
==> Dd = Td + 15d. --> Eq 3. We don't know anything abt M or m. Cannot simplfy this further to find out what is d / (m+d). Hence Insufficient.
Combine stmt 1 and 2 for Mm and Dd in Eq 1, we get,
Tm - 5m + Td + 15d = Tm + Td.
==> 3d = m.
Now we can solve for d / (m+d) which is d / 4d = 1/4. Hence sufficient.
IMO C.
Re: DS Ratio question GMAT prep [#permalink] 05 Apr 2009, 06:40
Accountant wrote:
Each employee on a certain task force is either a manager or a director. What percentage of the employees are directors:
1) Average salary for manager is $5,000 less than average of all employees.
2) Average salary of directors is $15,000 greater than average salary of all employees
Please explain your answers.
This question is testing weighted average.
D - % of Directors
M- % of Managers
seofah D+M=1
Director D?
Joined: 29 Aug 2005 ____________
Posts: 882 Let "a" be total average
Followers: 7 Stmt1. a-5k=m (i.e. average salary for manager)
Kudos [?]: 125 [0], Stmt2. a+15k=d (i.e. average salary for director)
given: 7
None is suff by itself.
Stmt 1 &2.
Formula for weighted average: M*m+D*d=a
As M+D=1, we are left with D15k-M5k=0
Also, M=1-D => D15k-5k+D5k=0
D=1/4 or 25%
Hence C.
tkarthi4u Re: DS Ratio question GMAT prep [#permalink] 07 May 2009, 22:43
Senior Manager Yes this probelm is a bit time consuming. I liked GMAT tigers equation approach. It was more understandable to me.
Joined: 08 Jan 2009 Have seen this problem in some other forum some time back and i remember they put the below explaination. It will be good if someone can expalin. Just wanted to share with u
Posts: 332
Concept of weighted averages
Followers: 2
5000-------- Av ------------------15000
Kudos [?]: 57 [0], given:
salaries are in the ratio M/D = 5000 / 15000 = 1/3
the number of managers and directors will be in the inverse ratio M/D = 3/1.
Re: DS Ratio question GMAT prep [#permalink] 08 May 2009, 00:32
mdfrahim 2
Manager This post received
Joined: 28 Jan 2004
Solving this ques. by equations will take a lot of time.
Posts: 206
Alternative approach.
Location: India
Manager manager manager............x managers ................Average..............director director director.....y director
Followers: 2 Instead of 5000 and 15000, let's deal with 5 and 15.
Kudos [?]: 9 [2] , given: So managers are trying to bring down the average and directors are trying to bring up the average. however the average is fixed.
1 director can raise the average by 15, so to nullify it we need 3 managers, so total employees = 4. Hence the ratio of
directors to total = 1/4.
Hope that helps.
Re: DS Ratio question GMAT prep [#permalink] 16 Jul 2009, 10:56
tkarthi4u wrote:
Yes this probelm is a bit time consuming. I liked GMAT tigers equation approach. It was more understandable to me.
Have seen this problem in some other forum some time back and i remember they put the below explaination. It will be good if someone can expalin. Just wanted to share with u
Concept of weighted averages
5000-------- Av ------------------15000
rashminet84 salarys are the ratio M/D = 5000 / 15000 = 1/3
Senior Manager the number of mangers and directors will be in the inverse ratio M / D = 3/1 .
Joined: 04 Jun 2008 This is the best approach I've ever seen for averages. I might never have figured this out myself had i not seen this.
Posts: 306 Actually the above diagram can be better represented as
Followers: 5 Avg M-----(5000)-----Avg Tot-----------------(15000)--------------Avg D
Kudos [?]: 90 [0], given: ie, Avg of managers is at distance of 5000 from total avg, and avg of directors is at a distance for 15000 from tot avg.
Now concept of weighted avg is that The total avg will be most affected by the heaviest weight, ie, more the number of managers, closer will be tot avg to their avg salary,
more the number of directors, closer will be the tot avg to their avg.
Now Avg tot is most affected by avg M, that means managers are more.
How many more relative to directors?
Avg D is 3 times as far from tot avg as is Avg M.
that implies that weight of AvgM is 3 times greater than weight of AvgD.
weight here is nothing but number of managers.
I hope i could explain it
Re: DS Ratio question GMAT prep [#permalink] 18 Aug 2009, 18:23
This post received
Using pen and paper, it takes arround 2-3 minuets.
reply2spg wrote:
btw....How much time will it take to answer the question?????
GMAT TIGER wrote:
Accountant wrote:
Each employee on a certain task force is either a manager or a director. What percentage of the employees are directors:
1) Average salary for manager is $5,000 less than average of all employees.
2) Average salary of directors is $15,000 greater than average salary of all employees
Please explain your answers.
Suppose: no. of managers = m
managers' avg. salary = s1
GMAT TIGER no. of directors = d
CEO directors avg salary = s2
Joined: 29 Aug 2007 From st 1:
Posts: 2504 s1 + 5000 = (s1 m + s2 d) / (m+d)
Followers: 48 s1 (m+d) + 5000 (m+d) = (s1 m + s2 d)
Kudos [?]: 452 [1] , s1 m + s1 d + 5000m + 5000d = s1 m + s2 d
given: 19
s1 d + 5000m + 5000d = s2 d
s2 d - s1 d = 5000 (m + d)
d (s2 - s1) = 5000 (m + d)
s2 - s1 = 5000 (m + d)/d
From st 2:
s2 - 15000 = (s1 m + s2 d) / (m+d)
s2 - s1 = 15000 (m+d)/m
From 1 and 2: 5000 (m+d)/d = 15000 (m+d)/m
m = 3d
d/(m+d) = d/(3d+d) = 1/4
So C is correct.
Verbal: new-to-the-verbal-forum-please-read-this-first-77546.html
Math: new-to-the-math-forum-please-read-this-first-77764.html
Gmat: everything-you-need-to-prepare-for-the-gmat-revised-77983.html
Re: DS Ratio question GMAT prep [#permalink] 09 Jan 2010, 02:06
This post received
Intern KUDOS
Joined: 10 Dec 2009 since the distance from the average should sum to zero, then the distance from the total average of one group plus the distance from the average of the second group should
be equal to zero! So, a quick formula would be:
Posts: 1 (-5000)x(number of managers) + (15000)(number of directors) = 0
So: number of managers/number of directors = 15000/5000 = 3/1
Followers: 0
Out of the total number, the managers would be 3 times the number of directors.
Kudos [?]: 5 [5] , given:
0 so, D = 1/4 of total
and M = 3/4 of total
Hope that helps
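A quick numeric check of this weighted-average conclusion (a sketch with a made-up overall average; the 100,000 figure is arbitrary):

# Check the 3-managers-per-director conclusion with concrete numbers.
avg = 100_000
manager_salary = avg - 5_000      # statement (1)
director_salary = avg + 15_000    # statement (2)

managers, directors = 3, 1        # claimed ratio
total = managers + directors
combined_avg = (managers * manager_salary + directors * director_salary) / total
print(combined_avg == avg)        # True: the ratio is consistent with both statements
print(directors / total)          # 0.25 -> directors are 25% of the task force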
Re: DS Ratio question GMAT prep [#permalink] 29 May 2011, 19:26
as this is DS problem and understanding the problem's concept and what is solution for this is important though, it does not need actual solution. As above mentioned in
Joined: 04 Sep 2010 various post, weight average concept should yield percentage of director as relation 1) between manager Avg and total avg is available
2) between director avg and total avg is available
Posts: 85 Hence answer is C.
Followers: 1
Kudos [?]: 5 [0], given:
VP Re: DS Ratio question GMAT prep [#permalink] 06 Jun 2011, 02:11
Status: There is always using allegation
something new !!
15000/5000 = 3/1.
Affiliations: PMI,QAI
Global,eXampleCG C it is.
Joined: 08 May 2009 _________________
Posts: 1368 Visit -- http://www.sustainable-sphere.com/
Promote Green Business,Sustainable Living and Green Earth !!
Followers: 9
Kudos [?]: 120 [0],
given: 10
Re: DS Ratio question GMAT prep [#permalink] 05 Sep 2011, 10:25
Each employee on a certain task force is either a manager or a director. What percentage of the employees are directors:
1) Average salary for manager is $5,000 less than average of all employees.
2) Average salary of directors is $15,000 greater than average salary of all employees
This is my approach...
Statement 1
Let average of all employees be x.
Avg salary for manager: x - 5000
Statement 2
Joined: 20 Jul 2011
Avg salary of directors (D): x + 15000
Posts: 152
GMAT Date: 10-21-2011
Statement 1 + 2:
$\frac{M(x-5000)+D(x+15000)}{M+D} = x$
$\frac{Mx-5000M+Dx+15000D}{M+D} = x$
$\frac{x(M+D)+15000D-5000M}{M+D} = x$
$\frac{15000D-5000M}{M+D} = 0$
$5000(3D-M) = 0$
$3D = M$
Followers: 0
(from here you can actually derive the required percentage)
Kudos [?]: 27 [0], given:
15 Sufficient!
Answer: C
The below by karimsafi is a really quick method though! +1 Kudos!
Concept of weighted averages
M-------- Average ------------------D
since the distance from the average should sum to zero, then the distance from the total average of one group plus the distance from the average of the second group should
be equal to zero! So, a quick formula would be:
(-5000)x(number of managers) + (15000)(number of directors) = 0
So: number of managers/number of directors = 15000/5000 = 3/1
Out of the total number, the managers would be 3 times the number of directors.
so, D = 1/4 of total
and M = 3/4 of total
"The best day of your life is the one on which you decide your life is your own. No apologies or excuses. No one to lean on, rely on, or blame. The gift is yours - it is an
amazing journey - and you alone are responsible for the quality of it. This is the day your life really begins." - Bob Moawab
|
{"url":"http://gmatclub.com/forum/each-employee-of-a-certain-task-force-is-either-a-manager-or-76999.html","timestamp":"2014-04-17T15:56:42Z","content_type":null,"content_length":"234638","record_id":"<urn:uuid:019b5be9-e9bc-4d66-9ddf-55c7b88710d0>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00246-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Discrete particle swarm optimization, illustrated by the traveling salesman problem.
(English) Zbl 1139.90415
Onwubolu, Godfrey C. (ed.) et al., New optimization techniques in engineering. Berlin: Springer (ISBN 3-540-20167-X/hbk). Studies in Fuzziness and Soft Computing 141, 219-239 (2004).
Introduction: The classical particle swarm optimization (PSO) is a powerful method to find the minimum of a numerical function, on a continuous definition domain. As some binary versions have already
successfully been used, it seems quite natural to try to define a framework for a discrete PSO. In order to better understand both the power and the limits of this approach, we examine in detail how
it can be used to solve the well-known traveling salesman problem, which is in principle very "bad" for this kind of optimization heuristic. Results show Discrete PSO is certainly not as "powerful" as
some specific algorithms, but, on the other hand, it can easily be modified for any discrete/combinatorial problem for which we have no good specialized algorithm.
90C27 Combinatorial optimization
90C59 Approximation methods and heuristics
|
{"url":"http://zbmath.org/?q=an:1139.90415","timestamp":"2014-04-21T04:48:18Z","content_type":null,"content_length":"20627","record_id":"<urn:uuid:3154ee46-6b2e-4dc9-bb8e-08d957ab3331>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Coin Toss Probability
Date: 05/26/2007 at 09:16:18
From: David
Subject: approximate probability
Suppose that you toss a balanced coin 100 times. What is the
approximate probability that you observe less than or equal to 40 heads?
I'm not sure which formula to use. I know the answer is .0228. I
assume from Stand. normal prob. chart at -2.00.
Do you use the formula: P(A does not occur) = 1 - P(A)?
Date: 05/27/2007 at 19:56:32
From: Doctor Ricky
Subject: Re: approximate probability
Hi David,
Thanks for writing Dr. Math!
The probability of an event occurring when there are only two possible
outcomes (such as "heads" or "tails" of a coin flip) is known as a
binomial probability.
As I'm sure you can notice, we cannot just treat each coin flip as an
independent trial when calculating the probability, since the
probability of two independent events occurring is normally just the
product of the individual probabilities.
Therefore, if we flip two coins, that would say the probability of
getting one heads and one tails would be (1/2)*(1/2) = 1/4, which is
incorrect since it is actually 1/2 (which we can find by constructing
a tree diagram). [However, the multiplication method would work fine
if we considered all the different permutations of our choices, which
will be handled shortly.]
Therefore, we cannot just multiply the probabilities together.
Instead, calculating the likelihood of a binomial event occurring also
requires calculating the Bernoulli coefficient of the event. The
Bernoulli coefficient is defined as the combination of n trials with k
successes, or (n C k), which is:
(n C k) = n! / [k! (n-k)!]
We multiply this coefficient by the binomial probability, which is
(p^k)*(q^(n-k)), where p = probability of success, q = 1 - p,
n = number of trials, k = number of successes.
This gives us the general formula for binomial probability:
B(n,k) = (n C k) * p^k * q^(n-k)
While it may seem somewhat tedious, we would use this formula to find
your probability of getting less than or equal to 40 heads in 100
tosses. Since it would only give us the probability of one event
happening (such as 34 successes in 100 tosses, if we let k = 34 and
n = 100), we would use a sum to add up all the individual
probabilities to get the total probability. We would also make use of
the fact that p = q = 1/2 in the event of our coin-flipping example.
That means we can write our equation as:
C(100,k) = (100 C k) * (1/2)^100    (since we have the same base, we add the exponents)
Factoring out the (1/2)^100, since it will be the same for all of our
terms, we have:
C(100,0<=k<=40) = [(1/2)^100] * Sum (100)C(k)
where (100)C(k) is the combination of 100 things taken k at a time,
easily plugged into a graphing calculator using the commands:
sum(seq(100 nCr K,K,0,40))
This number is much too large to be of any practical value on its own
(it has 29 digits), but multiplied by our binomial probability of
(1/2)^100, it brings us to our total probability of 0.0284439668.
I hope this helped. If you have any questions, please let me know.
- Doctor Ricky, The Math Forum
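For anyone re-deriving this without a graphing calculator, a minimal Python sketch of the same sum (and of the normal approximation mentioned in the question) might look like:

from math import comb, erf, sqrt

# Exact binomial probability P(X <= 40) for 100 fair coin tosses.
exact = sum(comb(100, k) for k in range(41)) / 2**100
print(round(exact, 10))   # about 0.0284439668

# Normal approximation without a continuity correction:
# X is approximately N(mu=50, sigma=5), so P(X <= 40) ~ Phi((40-50)/5) = Phi(-2).
approx = 0.5 * (1 + erf((40 - 50) / (5 * sqrt(2))))
print(round(approx, 4))   # about 0.0228, the value quoted in the question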
|
{"url":"http://mathforum.org/library/drmath/view/71267.html","timestamp":"2014-04-20T01:18:44Z","content_type":null,"content_length":"8319","record_id":"<urn:uuid:c3ebe2c0-d270-45ee-9f7c-71618d34f5d1>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ingrid Daubechies
The daughter of a civil engineer and a criminologist, Ingrid was as a child interested in the mathematics of base 10 representation of numbers and the workings of mechanical devices. After earning a
degree in physics from the Vrije Universiteit Brussel, she stayed there teaching and researching until 1984. That year she won the Louis Empain Prize for Physics, a sort of Fields Medal for Belgian scientists.
In 1987, her study of compression prompted her to move to America, specifically to work at the AT & T Bell Labs in New Jersey where she fell in love with mathematician Robert Calderbank. The two
married that year and later both wound up teaching at Princeton. In 1994, the American Mathematical Society awarded her the Steele Prize for an expository paper on wavelets. In 1998, Daubechies
coauthored with her husband a paper on “Wavelet transforms that map integers to integers” in Appl. Comput. Harmon. Anal. 5, giving her Erdős number 3.
Daubechies has taught at Princeton since 1993.
|
{"url":"http://planetmath.org/ingriddaubechies","timestamp":"2014-04-17T03:58:43Z","content_type":null,"content_length":"28673","record_id":"<urn:uuid:7ab20255-b3ca-4c20-9d0e-f5bd4c916901>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Convert Eye Coords to Window Coords in GLSL? [Archive] - OpenGL Discussion and Help Forums
03-28-2011, 05:10 PM
Hey guys, I don't know a lot about the projection and coordinate systems in OpenGL, but I'm trying to implement a light scattering (god rays) shader via mod that allows GLSL into a game. The mod can
tell me a uniform float called sunVector which is the position of the sun (area from where the scattering will come from) in eye coordinates. What the shader I'm implementing needs are window
So my question is, how can I convert them? I know I need to modify it by the projection matrix or something, but how?
I'm a real noob at this right now; this isn't my type of programming . . . thanks!
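The usual chain of transforms is: eye space -> clip space (multiply by the projection matrix) -> normalized device coordinates (divide by w) -> window coordinates (apply the viewport scale and offset). A rough sketch of that arithmetic, written in Python/NumPy purely to show the steps (the projection matrix and viewport values would be whatever the game uses; they are assumptions here):

import numpy as np

def eye_to_window(eye_pos, projection, viewport):
    # eye_pos: (x, y, z) in eye space; projection: 4x4 projection matrix;
    # viewport: (x0, y0, width, height) as passed to glViewport.
    clip = projection @ np.array([*eye_pos, 1.0])   # eye -> clip space
    ndc = clip[:3] / clip[3]                        # perspective divide -> NDC in [-1, 1]
    x0, y0, w, h = viewport
    win_x = x0 + (ndc[0] * 0.5 + 0.5) * w           # NDC -> window pixels
    win_y = y0 + (ndc[1] * 0.5 + 0.5) * h
    return win_x, win_y

In a GLSL shader the same steps are typically a multiply by the projection matrix, a divide by the resulting w, then scaling xy from [-1, 1] to screen pixels; only gl_Position gets the divide and viewport transform applied automatically by the pipeline.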
|
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-174047.html","timestamp":"2014-04-17T21:38:27Z","content_type":null,"content_length":"5992","record_id":"<urn:uuid:42249e92-f226-403d-88ee-523cf04fe93e>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Radio Galaxies and Quasars - K. I. Kellermann and F. N. Owen
13.4.2. Jet Physics
Since we have no direct way of estimating the density and velocity of a radio-emitting jet, a maze of indirect arguments and physical assumptions is made to deduce the nature of the phenomena.
However, within the framework of a set of physical assumptions, many deductions can be made, and if we do not make a completely incorrect assumption, (for example, that the radio brightness is due to
the incoherent synchrotron process), we can sometimes limit the physical conditions to a fairly narrow range of possibilities. Basically, we have the radio brightness and its linear polarization at
one or more frequencies to work with. We also may have some knowledge of the external conditions from X-ray or optical observations.
(a) Straight Jets
Let us assume that a jet is a collimated flow consisting of thermal and relativistic gas initially moving with some velocity v and some radius r. The brightness is then affected by (1) radiation
losses, (2) adiabatic gains or losses, and (3) other energy gains or losses by the relativistic electrons. As a jet expands, the particles in the jet will gain or lose energy, consistent with their
equation of state. In particular, as the volume $V$ occupied by the gas changes, the relativistic particle energy density will vary as $V^{-4/3}$, and the total energy of a single particle will vary as $E \propto V^{-1/3}$.
Thus, in the cylindrical geometry of a jet, as the radius of the jet, $r_j$, increases, each relativistic electron should lose energy to the expansion as $E \propto r_j^{-2/3}$.
If magnetic flux is conserved, then $B_{\parallel} \propto r_j^{-2}$ and $B_{\perp} \propto r_j^{-1}$.
If the velocity of the jet, $v_j$, remains constant and no energy is added to the particles or magnetic field from other sources, the luminosity of all observed jets would decrease much faster than is
observed (e.g., Bridle and Perley 1984). Thus one of these assumptions must be incorrect. If the velocity decreases, then the density of particles and the perpendicular magnetic field strength will
increase, thus counteracting the effects of any expansion. Combining both effects for a power-law energy spectrum, the intensity $I_\nu$ varies as a power of $r_j$ and $v_j$.
Thus, the jet can actually brighten with certain combinations of parameters. However, if $v_j$ decreases sufficiently, then radiation losses can become important. Also, particles lose energy through
inverse Compton scattering to the 3 K background, so the net rate at which particles lose energy reaches a minimum at a magnetic field strength of a few microgauss. Thus, it is not possible to
explain the brightness of jets by simply letting v decrease indefinitely. If adiabatic effects alone cannot explain the brightness distribution, then some nonadiabatic effect must be contributing to
the energy in the particles and/or fields. The most obvious source is probably the energy in the bulk flow of any thermal plasma in the jet. This could be transferred to the particles through
interactions with shocks or through plasma waves in a turbulent plasma. These processes, however, seem to work best when adding energy to already relativistic particles. Theoretical calculations and
in situ space observations show they are very inefficient in accelerating thermal particles, especially electrons, to relativistic energies (Lee 1983). Since these and other processes are uncertain
in their details, usually it is simply assumed that a fraction, $\epsilon$, of the kinetic energy flux carried by the jet is converted into the observed radiation,
where $L_{rad}$ is the total emitted radiation and $\rho_j$ is the density of thermal particles in the jet.
Equation (13.31) is called the kinetic luminosity equation. Rough estimates for jet or total-source requirements are often made by simply using the total luminosity of (half) of the source as $L_{rad}$
and estimating
(b) Bent Jets
If, as in the wide-angle tails or narrow-angle tails, the jet is bent, an additional constraint exists, since the time-independent Euler's equation should apply or
If $R$ is the scale length over which the jet bends, then $(\mathbf{v} \cdot \nabla)\mathbf{v} \approx v_b^2 / R$. Then
A galaxy moving with velocity $v_g$ through an intracluster medium with density $\rho_{icm}$ experiences a ram pressure $\rho_{icm} v_g^2$. This pressure is exerted over a scale length $h$. If the jet is directly
exposed to the intracluster medium, then $h = r_j$. On the other hand, the jet may be inside the interstellar medium of a galaxy. Then $h$ is the pressure scale height in the galaxy. In any case, one can
Combining the kinetic luminosity equation (13.31) with Euler's equation in the form (13.32), we can eliminate one of the common variables. For example, eliminating $v_b$ we can get
For cases involving narrow-angle tails moving at 10^3 km s^-1 with respect to the external medium, one can find acceptable applications of this equation. However, for the wide-angle tails, one has a
higher luminosity to explain and strong evidence in some cases that the parent galaxy is moving very slowly or not at all with respect to the intracluster medium. Thus, a simple picture of motion
causing the bending of wide-angle tails appears to fail. More complete models including adiabatic effects or other energy sources for the particles such as turbulence in the gas entrained from the
intracluster medium appear to be necessary.
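For orientation, the two relations referred to above are conventionally written (following treatments such as Bridle and Perley 1984; the exact numerical factors of equations (13.31) and (13.32) may differ from these) as
$$L_{rad} \;\simeq\; \epsilon \,\tfrac{1}{2}\, \rho_j \,\pi r_j^{2}\, v_j^{3}, \qquad \frac{\rho_j v_b^{2}}{R} \;\simeq\; \frac{\rho_{icm}\, v_g^{2}}{h},$$
where $\epsilon$ is the fraction of the jet's kinetic energy flux that is radiated; the first is the kinetic luminosity relation and the second is the ram-pressure bending condition.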
|
{"url":"http://ned.ipac.caltech.edu/level5/Sept04/Kellermann2/Kellermann4_2.html","timestamp":"2014-04-21T00:02:20Z","content_type":null,"content_length":"9644","record_id":"<urn:uuid:b747845a-bc2f-4598-ae2d-d442b21b5cc7>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00066-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Numerical Problem for motion along a plane (HELP)
May 26th 2009, 02:50 AM #1
Jan 2009
Numerical Problem for motion along a plane (HELP)
Question :
A boy kicked a football with an initial velocity component of 15.0m/s and a horizontal velocity component of 22.0m/s.
a)what is the velocity of the football(magnitude and direction)
b)how much time is needed to reach the maximum height?
b) no idea
Motion of ball
Hello mj.alawami
Question :
A boy kicked a football with an initial vertical? velocity component of 15.0m/s and a horizontal velocity component of 22.0m/s.
a)what is the velocity of the football(magnitude and direction)
b)how much time is needed to reach the maximum height?
b) no idea
(a) Assuming that the 15 m/s velocity is the vertical component, the magnitude of the actual velocity is $\sqrt{(15^2 + 22^2)} = 26.63 \,ms^{-1}$ at an angle $\arctan\Big(\frac{15}{22}\Big) =
34.3^o$ above the horizontal.
(b) Again, assuming that the vertical component is initially 15 m/s, the time taken to reach the maximum height is the time taken for a body to come to rest, with an initial velocity 15 m/s and
an acceleration $-32 m/s^2$. This is $\frac{15}{32} = 0.47$ sec.
Hello, mj.alawami!
A boy kicked a football with an initial vertical velocity component of 15.0 m/s
and a horizontal velocity component of 22.0 m/s.
a) What is the velocity of the football (magnitude and direction)?
The velocity can be represented by this triangle:
(Picture a right triangle: horizontal leg 22, vertical leg 15, with the angle θ between the hypotenuse and the horizontal leg.)
The magnitude of the velocity is the length of the hypotenuse.
. . $|v| \:=\:\sqrt{22^2 + 15^2} \:=\:\sqrt{709} \:\approx\:26.6$ m/s.
The direction of the velocity is given by:
. . $\tan\theta \:=\:\frac{15}{22} \quad\Rightarrow\quad \theta \:=\:\arctan\left(\tfrac{15}{22}\right) \:\approx\:34.3^o$
b) How much time is needed to reach the maximum height?
The height $y$ of the football is given by: . $y \:=\:15t - 4.9t^2$
The graph is a down-opening parabola which reaches its maximum at its vertex.
The vertex is at: . $t \:=\:\frac{\text{-}b}{2a} \:=\:\frac{\text{-}15}{2(\text{-}4.9)} \:=\:1.530612245$
The ball reaches maximum height in about 1.5 seconds.
Value of g
Hello everyone -
Thanks, Soroban. Yes, of course $g = 9.8$, not $32$. We haven't used $g = 32$ in the UK for aeons, so why I used that value here I don't know.
(Mind you, it's easier to use my method, if not my value: $v = u + at \Rightarrow 0 = 15 - 9.8t \Rightarrow t \approx 1.5$.)
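The same numbers fall out of a few lines of code (a quick sketch, using $g = 9.8\ \mathrm{m/s^2}$ as in the corrected value above):

from math import atan2, degrees, hypot

vx, vy, g = 22.0, 15.0, 9.8

speed = hypot(vx, vy)              # magnitude of the initial velocity
angle = degrees(atan2(vy, vx))     # direction above the horizontal
t_apex = vy / g                    # time for the vertical component to reach zero

print(round(speed, 2), round(angle, 1), round(t_apex, 2))   # 26.63  34.3  1.53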
May 27th 2009, 04:21 AM #2
May 27th 2009, 09:48 AM #3
Super Member
May 2006
Lexington, MA (USA)
May 27th 2009, 11:55 AM #4
|
{"url":"http://mathhelpforum.com/algebra/90541-numerical-problem-motion-along-plane-help.html","timestamp":"2014-04-18T00:21:13Z","content_type":null,"content_length":"46387","record_id":"<urn:uuid:a1e12d87-db37-4ca6-9365-e7a533a4ed5f>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
|
August 29th 2006, 05:22 AM
Hi, I posted this question in the advanced forum for some time but nobody could do it. I think this forum is more appropriate for this question. Thank you for any help
Consider the model
y_i = α + β x_i + u_i
where u_i is a disturbance identically and independently normally distributed with mean 0 and variance σ^2, and x_i is a stochastic regressor, identically and independently normally distributed
with mean 0 and variance λ^2.
a) Suppose a researcher mistakenly regresses x on y, that is, runs the regression
x_i = α + δ y_i + u_i
and uses 1/δ as an estimator for β. Prove that 1/δ is not a consistent estimator for β.
b) What is the asymptotic distribution of the least squares estimator when the correct regression of y on x is run?
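A sketch of the standard argument for part (a), assuming β ≠ 0 and using the usual probability-limit rules: the OLS slope from regressing x on y converges in probability to
$$\hat{\delta} \;\to\; \frac{\operatorname{Cov}(x,y)}{\operatorname{Var}(y)} = \frac{\beta \lambda^2}{\beta^2 \lambda^2 + \sigma^2},$$
so that
$$\frac{1}{\hat{\delta}} \;\to\; \beta + \frac{\sigma^2}{\beta \lambda^2} \neq \beta \quad \text{whenever } \sigma^2 > 0,$$
hence 1/δ is not consistent for β.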
|
{"url":"http://mathhelpforum.com/advanced-statistics/5218-regression-print.html","timestamp":"2014-04-17T21:49:00Z","content_type":null,"content_length":"3803","record_id":"<urn:uuid:30dc63fb-7a3b-4e9c-98a8-af037893bae9>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
|
covering relation
covering relation
The covering relation
The covering relation on a structure (generally already equipped with other relations) is a binary relation such that $x$ is related to $y$ if and only if $y$ is (in an appropriate sense) an
immediate (and only immediate) successor of $x$.
In a poset
A pair $(x,y)$ in a poset satisfies the covering relation if $x \lt y$ but there is no $z$ such that $x \lt z$ and $z \lt y$. In other words, there are exactly two elements $z$ such that $x \leq z \leq y$: $z = x$ and $z = y$. In this case, you would say that "$y$ covers $x$".
In a directed graph
A pair $(x,y)$ of vertices in a directed graph or quiver satisfies the covering relation if there is an edge $x \to y$ but there is no other path from $x$ to $y$.
Common generalisation
Given any binary relation $\sim$ on a set $S$, a pair $(x,y)$ of elements of $S$ satisfies the covering relation if the only sequence $x = z_0, \ldots, z_n = y$ such that $x_i \sim x_{i+1}$ satisfies
$n = 1$ (so $x \sim y$). Then the covering relation on a poset is the covering relation of $\leq$, and the covering relation in a directed graph is the covering relation of the adjacency relation of
the graph.
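As a small illustration of the poset case, here is a sketch that computes the covering (Hasse) relation for a finite example — the divisors of 12 ordered by divisibility (the example is made up for illustration):

# x is covered by y when x < y and no z lies strictly between them.
elements = [1, 2, 3, 4, 6, 12]

def less(a, b):
    return a != b and b % a == 0   # strict divisibility order

covers = [(x, y) for x in elements for y in elements
          if less(x, y) and not any(less(x, z) and less(z, y) for z in elements)]
print(covers)  # [(1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)]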
|
{"url":"http://ncatlab.org/nlab/show/covering+relation","timestamp":"2014-04-17T21:59:54Z","content_type":null,"content_length":"19175","record_id":"<urn:uuid:b37b5238-c4c0-479b-85a9-4876bff9f450>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Layers of the Earth: Another Model
This model of the Layers of the Earth comes from M. Poarch of
. I've tried and tried to link to the original instructions, but am getting error messages, so I will recreate the instructions for you. If I later find the link to be working, I'll edit this to add
that information.
You'll need blue, brown, yellow and black construction paper, as well as scissors, rulers and glue. If you have compasses (the kind for drawing circles), and your students are suitably able to use
them, they would make everything a bit easier.
Cut out a blue 22 cm circle, labeled "6 - 40 miles" to represent the crust.
Cut out a brown 18 cm circle, labeled "1800 miles" to represent the mantle.
Cut out a yellow 15 cm circle, labeled "1375 miles" to represent the outer core.
Cut out a black 7 cm circle, labeled "1750 miles" to represent the inner core.
These numbers are what was included in the original instructions. As you work on it, you will notice a glaring problem with the numbers...
You can work to fix the numbers, or you can wait to see if your students notice the problem - it's a good launching point for a variety of discussions.
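One way to fix the numbers is to scale real layer thicknesses onto the paper circles; a rough sketch (the mile figures below are approximate reference values, not the ones from the original activity):

# Approximate layer thicknesses in miles.
layers = [("inner core", 760), ("outer core", 1400), ("mantle", 1800), ("crust", 25)]

earth_radius = sum(thickness for _, thickness in layers)   # about 3985 miles
paper_radius_cm = 11.0                                      # half of the 22 cm outer circle

boundary = 0
for name, thickness in layers:
    boundary += thickness
    scaled = boundary / earth_radius * paper_radius_cm
    print(f"{name:10s} boundary: {boundary:4d} mi  ->  circle radius {scaled:4.1f} cm")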
I like to have the students evaluate the pros and cons of this type of model, as well as compare it to the pros and cons of the
Layers of the Earth bookmarks
made previously.
|
{"url":"http://science-mattersblog.blogspot.com/2011/02/layers-of-earth-another-model.html","timestamp":"2014-04-21T09:37:40Z","content_type":null,"content_length":"89008","record_id":"<urn:uuid:b9eebedf-5862-4862-a357-8dde2fd87510>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How does a leap year occur? - Rediff Questions & Answers
How does a leap year occur?
Answers (5)
Duration of a solar year is slightly less than 365.25 days.There are 365 days in a common year. In each leap year, the month of February has 29 days instead of 28. Adding an extra day to the calendar
every four years compensates for the fact that a period of 365 days is shorter than a solar year by almost 6 hours
Answered by LIPSIKA, 31 Oct '12 11:54 am
A leap year (or intercalary or bissextile year) is a year containing one additional day (or, in the case of lunisolar calendars, a month) in order to keep the calendar year synchronized with the
astronomical or seasonal year. Because seasons and astronomical events do not repeat in a whole number of days, a calendar that had the same number of days in each year would, over time, drift with
respect to the event it was supposed to track. By occasionally inserting (or intercalating) an additional day or month into the year, the drift can be corrected. A year that is not a leap year is
called a common year.
For example, in the Gregorian calendar (a common solar calendar), February in a leap year has 29 days instead of the usual 28, so the year lasts 366 days instead of the usual 365. Similarly, in the
Hebrew calendar (a lunisolar calendar), Adar Aleph, a 13th lunar month is added seven times every 19 years to the twelve lunar months in its common years to keep its calendar year from drift
Answered by Ataur Rahman, 31 Oct '12 11:13 am
The term leap year gets its name from the fact that while a fixed date in the Gregorian calendar normally advances one day of the week from one year to the next, in a leap year it will advance two
days due to the year's extra day thus "leaping over" one of the days in the week. For example, Christmas Day fell on Saturday in 2004, Sunday in 2005, Monday in 2006 and Tuesday in 2007 but then
"leapt" over Wednesday to fall on a Thursday in 2008.
Answered by iqbal seth, 31 Oct '12 11:11 am
Leap years are needed to keep our calendar in alignment with the earth's revolutions around the sun because a year is not exactly 365 days to the second but 365.2422 days long so every 4 years one
day is added to the year to make up for the time lost.
Answered by aflatoon, 31 Oct '12 11:06 am
Leap day (February 29) only occurs during a Leap Year.
Leap Year occurs every four years, except for years ending in 00, in which case only if the year is divisible by 400.
Answered by ajay, 31 Oct '12 11:05 am
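The rule in the last answer translates directly into code; a minimal sketch:

def is_leap_year(year):
    # Gregorian rule: divisible by 4, except century years,
    # which are leap years only when also divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print([y for y in (1900, 2000, 2012, 2013, 2100) if is_leap_year(y)])  # [2000, 2012]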
|
{"url":"http://qna.rediff.com/questions-and-answers/how-does-a-leap-year-occurs/24380086/answers/24396737","timestamp":"2014-04-23T19:06:34Z","content_type":null,"content_length":"54876","record_id":"<urn:uuid:eca003d5-8c4a-4bd1-b753-9a664c171d97>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Partial fractions - I have trouble solving this one
April 8th 2012, 06:10 AM
Partial fractions - I have trouble solving this one
I have managed to successfully answer all of the other partial fractions questions in my textbook but can't see how the following one is done:
The denominator is factorised to:
And then the textbook has been giving the following method to allow solving for A and B as the numerators:
I am having trouble knowing what to make x in order to cancel out the A part and thus solve for B. To make the A part vanish, wouldn't x have to equal 0.333...?
This is the answer given in the back of the book:
Anyway, like I said, I have been OK with the other 10 or so examples, but this one has thrown me off and I have spent a long time thinking about it; hopefully I have just overlooked something.
Any help is much appreciated.
April 8th 2012, 06:18 AM
Re: Partial fractions - I have trouble solving this one
It is an identity so true for all values of x. Put x=1/3. This will find you B. Then put in any other x (say x=0) to get A.
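Since the images with the actual fraction did not survive, here is a made-up example with the same key feature, a factor $(3x-1)$ in the denominator, just to show why $x=\tfrac13$ is the convenient value:
$$\frac{7x+1}{(3x-1)(x+2)} = \frac{A}{x+2} + \frac{B}{3x-1} \quad\Longrightarrow\quad 7x+1 = A(3x-1) + B(x+2).$$
Putting $x=\tfrac13$ makes the $A$ term vanish and gives $\tfrac{10}{3} = \tfrac{7}{3}B$, so $B=\tfrac{10}{7}$; putting $x=0$ then gives $1 = -A + 2B$, so $A=\tfrac{13}{7}$.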
April 8th 2012, 06:30 AM
Re: Partial fractions - I have trouble solving this one
Thanks for your quick reply, it was really bugging me.
I have got it now, but I was genuinely stuck for long enough before that I signed up here, made those images in LaTeX, etc.
Industrial Mathematics
Industrial Mathematics Option
The objective of the Master's program in Industrial Mathematics is to enable students to acquire the fundamentals of applied mathematics in areas of classical and numerical analysis, differential
equations and dynamical systems, and probability and statistics. At the same time, the connection of these fields to modeling of physical, biological, engineering, and financial phenomena will be
stressed by requiring courses outside the Department. Students are to obtain practical experience in mathematical modeling and analysis during an internship or industrial project that will culminate
in a thesis. Emphasis is placed on developing mathematical skills that government and industrial employers value.
Complete information on admission policy and graduation requirements is available in the Graduate Bulletin (Mathematics).
Students must have an advisor from the start of the program who must approve all courses and the industrial mathematics thesis.
More information is available at the Center for Industrial Mathematics home page.
[SOLVED] Block Matrices in C
Note that what I wrote above was specifically about dense matrices. Sparse matrices are much more interesting.
There are three major approaches I can recommend.
First, using chains (singly-linked lists):
#include <stdint.h>
#include <stdlib.h>

struct matrix_element {
    double value;
    uint32_t row;
    uint32_t col;
    uint32_t nextrow; /* Element index to continue on the same column */
    uint32_t nextcol; /* Element index to continue on the same row */
};

struct matrix {
    uint32_t rows;
    uint32_t cols;
    uint32_t *col;    /* Indices to first element on each column */
    uint32_t *row;    /* Indices to first element on each row */
    size_t size;      /* Number of data elements allocated */
    size_t used;      /* Number of data elements used */
    struct matrix_element *data;
};
Here, the data is basically an array of elements. The first element (offset zero) is best used as a sentinel, to signify end of each chain. The maximum size of the matrix this structure can support
is 4294967295×4294967295 with up to 4294967295 nonzero elements within the matrix. Indices are used instead of pointers so that you can reallocate the data array whenever necessary. Keeping the data
in a linear array caches better than using individually allocated elements. Populating the data structure initially is surprisingly easy: for example, you can just skip the chaining at first, just
adding the nonzero elements to the array, and finally construct the chain links using two temporary arrays in a single linear pass.
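To make the "single linear pass" concrete, here is a minimal sketch (mine, not the original poster's) of building the row chains; the column chains are built the same way with a second temporary array. It assumes the nonzero elements already sit in m->data[1 .. m->used-1], that index 0 is the end-of-chain sentinel, and that "nextcol" links to the next element on the same row, as in the comments above:

static void link_row_chains(struct matrix *m)
{
    uint32_t *tail = calloc(m->rows, sizeof *tail); /* last element seen on each row (allocation check omitted) */
    size_t i;

    for (i = 0; i < m->rows; i++)
        m->row[i] = 0;                              /* 0 = empty chain (the sentinel) */

    for (i = 1; i < m->used; i++) {                 /* one linear pass over the elements */
        const uint32_t r = m->data[i].row;
        m->data[i].nextcol = 0;                     /* currently the last element on its row */
        if (tail[r])
            m->data[tail[r]].nextcol = (uint32_t)i; /* append after the previous element on row r */
        else
            m->row[r] = (uint32_t)i;                /* first element seen on row r */
        tail[r] = (uint32_t)i;
    }
    free(tail);
}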
Second, storing each row or column as a sparse vector.
struct matrix_rowmajor {   /* Row vectors */
    size_t rows;
    size_t cols;
    double *scale;    /* Common multiplier for each row */
    size_t *size;     /* Number of elements on each row */
    size_t **col;     /* Array for each row: column numbers */
    double **value;   /* Array for each row: values */
};

struct matrix_colmajor {   /* Column vectors */
    size_t rows;
    size_t cols;
    double *scale;    /* Common multiplier for each column */
    size_t *size;     /* Number of elements on each column */
    size_t **row;     /* Array for each column: row numbers */
    double **value;   /* Array for each column: values */
};
This structure is most useful with
Gaussian elimination
(and similar algorithms), and when multiplying a row major matrix with a column major matrix. It does not vectorize, but even a trivial multiplication implementation should produce very efficient
code. The downside is the relatively expensive transformation between the two matrix forms. (Given sufficiently large matrices, the transformation does speed up e.g. matrix multiplication.)
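To illustrate why the row-major/column-major pairing multiplies so easily, here is a hedged sketch (mine, not from the original post) of the dot product of row i of a row-major matrix with column j of a column-major one; it assumes the per-row and per-column index arrays are kept sorted:

static double sparse_dot(const struct matrix_rowmajor *a, size_t i,
                         const struct matrix_colmajor *b, size_t j)
{
    size_t p = 0, q = 0;
    double sum = 0.0;

    while (p < a->size[i] && q < b->size[j]) {      /* merge the two sorted index lists */
        if (a->col[i][p] < b->row[j][q])
            p++;
        else if (a->col[i][p] > b->row[j][q])
            q++;
        else
            sum += a->value[i][p++] * b->value[j][q++];
    }
    return a->scale[i] * b->scale[j] * sum;         /* apply the common row/column multipliers */
}

A full product C = A*B would simply call this for every (i, j) pair, skipping pairs known to be empty.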
The reason why the column/row number and value are in separate arrays is that usually a 32-bit column number is sufficient (4294967295 columns!) -- you might wish to use uint32_t instead of size_t
there. If you combine the column/row number and value in a structure, you lose the 64-bit alignment on the double values, which will hurt performance on most architectures.
If you want vectorization, instead of single doubles, store v2df or v4df vectors (aligned twin or quad doubles), with row/column number divided by two or four. It tends to lead to much more complex
code, but if the nonzero elements in the sparse matrix are clustered, it may well be worth the added code complexity. (I think I could provide some example code using explicit GCC SSE2/SSE3 built-in
vector operations for say matrix multiplication, if someone is sufficiently interested.)
Third, block matrices. There are three subtypes for this case: fixed size blocks, common size blocks, and variable size blocks. Fixed size blocks:
#define ROWS 4
#define COLS 4

struct matrix_block {
    double value[ROWS][COLS];
    size_t references; /* Optional, for memory management */
};

struct matrix {
    size_t rows;
    size_t cols;
    size_t stride;               /* = (cols + COLS - 1) / COLS */
    struct matrix_block **block; /* One pointer per block; NULL marks an all-zero block */
};

static double none = 0.0;

static inline double *element(struct matrix *const m, const size_t row, const size_t col)
{
    if (m && row < m->rows && col < m->cols) {
        const size_t block = (size_t)(col / COLS)
                           + m->stride * (size_t)(row / ROWS);
        if (m->block[block])
            return &m->block[block]->value[row % ROWS][col % COLS];
    }
    return &none; /* out of range or absent block: shared zero element */
}
Note that the casts to (size_t) above have a very specific role in C99: they tell the compiler that the result of the division must be treated as a size_t (integer), and thus enforce that the
division really is integer division regardless of optimization. Without the casts, the compiler might combine the division with a multiplication, b0rking the block index calculation. (Some
programmers use cargo cult programming for this, believing that a temporary variable is required to be sure.)
Since the block size is fixed, you can trivially calculate the block index (by dividing by the block size) and index within the block (by modulus block size). Using power of two sizes (4, 8, 16, 32)
means the division and modulus are just bit shifts and masks (GCC will take care of the details, no need to write them explicitly as such), making random accesses much faster.
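As a tiny illustration (mine) of that point, with COLS hardcoded to 8 the index arithmetic in element() collapses to a shift and a mask:

static void split_index(size_t col, size_t *block_col, size_t *within_col)
{
    *block_col  = col / 8;   /* the compiler emits col >> 3 */
    *within_col = col % 8;   /* the compiler emits col & 7  */
}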
The major benefit of this method is that since most matrix operations can be efficiently described as operations on block matrices, and since each block is of fixed size and consecutive in memory,
the compiler can optimize (especially vectorize) operations on entire blocks fully. For example, AMD CPUs can do two or four operations (multiplications, divisions, additions, subtractions, square
roots) in parallel via SSE2/SSE3. You do need to redefine the matrix block and random access accessor functions to get GCC to vectorize it; I omitted that because you really need to hardcode COLS to
a power of two then. I haven't used block matrices, so I don't know how fast an implementation you can do in practice, but I suspect it depends heavily on the block size.
Obviously, each block can be reused more than once. The references field is a bit problematic, because it screws the block alignment -- it would be best to have each block aligned on a cacheline
boundary for optimal cache behaviour -- but it makes memory management a lot easier. (If your matrices reuse blocks often, you might benefit from creating matrix pair lists (using e.g. an 8-bit hash
of the two pointers), so that you do the operation between two blocks only once, then copy it to all targets. I have not tried this, so I am not sure if it is beneficial in real life.)
Common size block is similar, except that COLS and ROWS are structure parameters instead of fixed constants. Variable block size is basically just a linked list of submatrices offset within the actual matrix.
Because you may operate on matrices with different block sizes, your element access will be rather complicated. It is probably best to just remember the last block, and see if the next access is
still within the same block; if not, then do a full search for the correct block. If you want to vectorize the per-block operations, you will have to do a generic function, and special case functions
(for when the blocks are of same size, square, and/or power of two in size).
Depending on the operations you do on the matrix blocks, you may further benefit from extending the block structure with a type/flags field, and optionally cached properties. Here is an example for
variable-sized blocks:
/* Examples of matrix type flags, not thought out! */
#define MTYPE_IDENTITY (1<<0)
#define MTYPE_DIAGONAL (1<<1)
#define MTYPE_SYMMETRIC (1<<2)
#define MTYPE_POSITIVE_DEFINITE (1<<3)
#define MTYPE_TRIDIAGONAL (1<<4)
#define MTYPE_UPPER_TRIANGULAR (1<<5)
#define MTYPE_LOWER_TRIANGULAR (1<<6)
struct block {
    size_t rows;
    size_t cols;
    size_t rowstep;
    size_t colstep;
    double *data;
    void *storage;       /* For data memory management */
    size_t eigenvalues;
    double *eigenvalue;
    int type;
};

struct matrix {
    size_t rows;
    size_t cols;
    size_t blocks;
    size_t *block_row;   /* Row offsets */
    size_t *block_col;   /* Column offsets */
    struct block *block; /* Array of block definitions */
};
For example, if the matrix is known to be identity, diagonal, tridiagonal, upper triangular, lower triangular, symmetric, positive definite, or so on, mark it such. You could also cache the
eigenvalues. Something like this is very likely to help you avoid doing unnecessary block operations altogether, in which case using common or variable sized blocks may well lead to more efficient
code -- assuming your algorithm makes maximum use of those features.
The storage void pointer in the block refers to the entity owning the data. You don't need it, but if you want easy memory management, that entity should have a reference count. (When it drops to
zero, that data can be released. Since the data points to within that entity, not at the start of the entity, you do need a pointer to the actual entity.)
When accessing the elements of such a matrix in sequential fashion, remember the last accessed block and its row and column offset. That way you only need to check whether the next access is still
within the block, and it usually will be. (If not, you do a full search in the actual matrix data structure.)
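Here is a hedged sketch (mine, and it assumes block_row[]/block_col[] hold the top-left offset of each block and that data is strided by rowstep/colstep) of that "remember the last block" access pattern for the variable-size-block matrix above:

static double *element_cached(struct matrix *m, size_t row, size_t col, size_t *last)
{
    size_t i = *last;                               /* block index used by the previous call; start it at 0 */

    if (i >= m->blocks ||
        row <  m->block_row[i] || row >= m->block_row[i] + m->block[i].rows ||
        col <  m->block_col[i] || col >= m->block_col[i] + m->block[i].cols) {
        for (i = 0; i < m->blocks; i++)             /* full search only when we leave the cached block */
            if (row >= m->block_row[i] && row < m->block_row[i] + m->block[i].rows &&
                col >= m->block_col[i] && col < m->block_col[i] + m->block[i].cols)
                break;
        if (i == m->blocks)
            return NULL;                            /* no block covers this element */
        *last = i;
    }
    return &m->block[i].data[(row - m->block_row[i]) * m->block[i].rowstep
                           + (col - m->block_col[i]) * m->block[i].colstep];
}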
Hope you find this useful,
EFE: Why is there a curvature tensor and curvature scalar?
In the Einstein tensor equation for general relativity, why are there two terms for curvature: specifically the curvature tensor and the curvature scalar multiplied by the metric tensor?
There's only one independent / fundamental curvature, namely the Riemann-Christoffel curvature tensor. The so-called Ricci curvature and the curvature scalar are simply contractions of the 4th rank tensor with respect to the metric, once and twice respectively. They are subsequently derived concepts.
One can write down the EFE in terms of the Riemann-Christoffel curvature tensor only (in the absence of matter) as:
$$ g^{\mu \alpha}R_{\mu \nu|\alpha \beta}-\frac{1}{2}g_{\nu \beta}g^{\mu \alpha}g^{\lambda \sigma}R_{\mu \lambda|\alpha \sigma} = 0 $$
but it won't look pretty, that's why the Ricci curvature tensor and the Ricci scalar are put into GR.
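Spelling the contractions out (an editorial addition, using the same index placement as the display above): the Ricci tensor and the curvature scalar are
$$R_{\nu\beta} = g^{\mu\alpha} R_{\mu\nu|\alpha\beta}, \qquad R = g^{\nu\beta} R_{\nu\beta},$$
so the equation above is nothing but the familiar vacuum field equation $R_{\nu\beta} - \tfrac{1}{2} g_{\nu\beta} R = 0$.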
The Shape of Congruence Lattices
Memoirs of the American Mathematical Society, Volume 222; 2013; 169 pp; softcover
ISBN-10: 0-8218-8323-2; ISBN-13: 978-0-8218-8323-5
List Price: US$83
Individual Members:
Members: US$66.40
Order Code: MEMO/222/1046
This monograph is concerned with the relationships between Maltsev conditions, commutator theories and the shapes of congruence lattices in varieties of algebras. The authors develop the theories of the strong commutator, the rectangular commutator, the strong rectangular commutator, as well as a solvability theory for the nonmodular TC commutator. They prove that a residually small variety that satisfies a congruence identity is congruence modular.
Contents:
• Introduction
• Preliminary notions
• Strong term conditions
• Meet continuous congruence identities
• Rectangulation
• A theory of solvability
• Ordinary congruence identities
• Congruence meet and join semidistributivity
• Residually small varieties
• Problems
• Appendix A. Varieties with special terms
• Bibliography
• Index
Normal Random Variables Question
For a normally distributed random variable,
$$P(a < X < b) = P(X < b) - P(X < a).$$
For any random variable $W$, if $a, b$ are real numbers and
$$Z = aW + b,$$
then
$$E(Z) = aE(W) + b, \qquad \operatorname{Var}(Z) = a^2 \operatorname{Var}(W)$$
(as long as the mean and variance of $W$ exist).
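The variance identity follows in one line from the definition (added here for completeness): since $E(Z) = aE(W) + b$ by linearity,
$$\operatorname{Var}(Z) = E\big[(aW + b - aE(W) - b)^2\big] = E\big[a^2 (W - E(W))^2\big] = a^2 \operatorname{Var}(W).$$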
Reviews of CMP: Connected Mathematics Project
Reviews of CMP: Connected Mathematics Project (Connected Math)
Basic Information and Introduction
The Connected Math curriculum for grades 6-8 was and continues to be developed by the Connected Mathematics Project (CMP) at Michigan State University, and is marketed by Prentice Hall. The authors
of the CMP program are James T. Fey, William M. Fitzgerald, Susan N. Friel, Glenda Lappan, and Elizabeth Difanis Phillips; Glenda Lappan is often listed as the principal author of the program. The
CMP web site is www.mth.msu.edu/cmp/.
The student texts for Connected Math are eight booklets for each of the Grades 6-8. For each CMP booklet there is also an extensive teacher guide. The Connected Mathematics teacher material adds up
to about one thousand pages for each of the three grades.
Content Reviews
An Evaluation of CMP, by R. James Milgram (1999). This is a detailed content review of Connected Mathematics. It also pays attention to the research base of some claims concerning success of the CMP
program. Excerpts: "Overall, the program seems to be very incomplete, and I would judge that it is aimed at underachieving students rather than normal or higher achieving students. [...] The
philosophy used throughout the program is that the students should entirely construct their own knowledge and that calculators are to always be available for calculation. This means that standard
algorithms are never introduced, not even for adding, subtracting, multiplying and dividing fractions; precise definitions are never given; repetitive practice for developing skills, such as basic
manipulative skills is never given; throughout the booklets, topics are introduced [basic topics] and then are dropped, never to be mentioned again; in the booklets on probability and data analysis a
huge amount of time is spent learning rather esoteric methods for representing data..." Professor Milgram develops these criticisms in detail.
Mathematically Correct Seventh Grade Mathematics Review of Connected Mathematics Program (1999?). One of a series of comparative reviews of Seventh Grade mathematics curricula. Searching for content
the reviewers find that there is little or no coverage of the properties of arithmetic; exponents, squares, and roots; fractions, decimals, and percents; and the book is devoid of algebraic
manipulation. The only mathematical content that the reviewers find relates to Proportions and to Graphing. The reviewers write: "There is very little mathematical content in this book. Students
leaving this course will have no background in or facility with analytic or pre-algebra skills. [...] This book is completely dedicated to a constructivist philosophy of learning, with heavy emphasis
on discovery exercises and rejection of whole class teacher directed instruction. [...] Students are busy, but they are not productively busy. Most of their time is directed away from true
understanding and useful skills."
Mathematically Correct Mathematics Program Reviews Comparative Summary for Seventh Grade. Here the Mathematically Correct team provides their summary evaluation of the eleven texts that were
considered. Two of the eleven are rated as not suitable in any context for reasons of both content and pedagogy, and Connected Mathematics is rated worst of the lot and receives the only unambiguous
"F". They judge that CMP and the next lowest ranked program, McDougal Littell Math Thematics, are unlikely to allow any significant number of students to meet the criteria for even pre-pre-Algebra.
Links Related to CMP, by Dr. Betty Tsang. In addition to the links already referenced above Betty Tsang provides her own brief commentary on each of the eight units for both the 6th and the 7th grade
CMP program, and of the Algebra coverage in CMP. There are also links concerning the research base of CMP and links related to parent activism against Connected Math in Dr. Tsang's Okemos, MI, school district.
A comprehensive assessment of CMP (Connected Math Program) deficiencies leading to supplementation that meets key traditional educational needs. An independent learning project presented by Donald
Wartonick towards the degree of Master of Education in the field of Mathematics Education at Cambridge College, Cambridge, MA; Fall, 2005.
Additional commentary and local activism
Presentation of a Mathematics Petition to Penfield (NY) Board of Education. "We, the undersigned, state that the Investigations, Connected Math, and Core Plus Math programs, recently implemented in
the Penfield School District, do not teach the fundamental math skills that children must know to succeed in furthering their education. We therefore ask that a traditional math program be offered as
a choice for all Penfield students." The petition was signed by 671 Penfield residents 18 years of age or older. See also Parents Concerned With Penfield's Math Programs and the NYC HOLD summary page
Controversy over Mathematics in Penfield, NY, Public Schools.
Higher Standards Means Eliminating Basic Skills, by Arthur Hu, January 24, 2004. A letter to the Lake Washington School Board and others about Connected Math in the local school.
Why Guilford Parents Should Oppose CMP Math, by Bill Quirk (2003). A Web paper that brings together extended quotes from critical reviews of the CMP program, including many of the reviews linked
elsewhere on this page. See also Handout for the May 12, 2003 Meeting of the Guilford Board of Education, by Bill Quirk.
Connected Math in PISD and elsewhere. A web page of the Plano (TX) Parental Rights Council against CMP, which has been implemented districtwide since 1999. The page also contains many national CMP
links. For a supplementary view see Connected Mathematics in Plano ISD. See also Disconnecting Schoolchildren from 'Connected' Math; a report from 1999 on a legal case against the district over CMP.
For more recent reports on that lawsuit see Information Regarding the "Connected Math" Lawsuit at the Plano PRC site.
Connected Math Disconnected from the Real World. A 1999 letter to parents from a concerned Plano teacher, courtesy of the Plano PRC.
New math program criticized, by Ben Hellman, the Andover Townsman, October 9, 2003. A report on Andover (MA) district's response to parents' concern over Connected Math Project in the 7th grade.
Parents complain that math classes are too slow and are covering 6th grade material - "mindlessly repeating 6th grade stuff", as one parent describes it. Administrators admit seventh-graders may not
learn all of the topics that Connected Math prescribes for seventh-graders; it may take a few years for teachers to become familiar enough with the program. Teachers advertise their concern over the
curriculum, but claim that it is forced upon them by the administration. The school committee chairwoman would have preferred no newspaper articles about the issue.
A fraction of the time, by Ben Hellman, the Andover Townsman, October 23, 2003. Further reporting on Connected Math in Andover, MA.
Teach Utah Kids. Web site of a parent group in Alpine school district, Utah, concerned about curriculum. Their main concern is TERC Investigations in grade school and Connected Mathematics Project in
middle school. Note especially their News Articles page.
Divided on Connected Math. For Some Parents and Experts, Curriculum Doesn't Add Up, by Brigid Schulte (WP, Oct 17, 1999; Page A01). Reporting on a conflict in Montgomery County over the introduction
of Connected Math in the middle schools.
MCPS officials have `fuzzy' answers for math curriculum, by Robert Rosenfeld (Montgomery Journal, June, 1999). Don't have to read between lines to see plan's flaws, by Robert Rosenfeld (Montgomery
Journal, July, 1999); County won't get `fuzzy' grant; Officials planned to use $6M to expand NSF math programs, by Jennifer Jacobson (Montgomery Journal, Dec 1999). These articles provide further
information about the conflict over the CMP program in Montgomery County. A complete file of editorials, letters to the editor, and local reports may be found in the folder "FUZZYMATH" in the Math
Files of the Gifted and Talented Association of Montgomery County, MD.
Testimony of Susan Sarhady to subcommittees of the U.S. House Committee on Education and the Workforce (Feb 2, 2000). Ms. Sarhady describes how the NSF-funded Texas Statewide Systemic Initiative
imposed the CMP mathematics program on her children, and asks that in the future the Federal Government refrain from doing this kind of harm.
MathLand, Connected Mathematics, and the Japanese Mathematics Program, by R. James Milgram. Some comments on the three named programs in connection with the use of the MathLand grade school and CMP
middle school programs in the Mountain View (CA) school district.
By the Numbers, by Adele J. Hlasnik (Pittsburg TR, 020602). Describes a parents' fight against the Everyday Mathematics grade school program and the Connected Math (CMP) middle school program that
are implemented districtwide in the Pittsburg public schools.
New Math Gets Scaled Back, by Michael Rocco (Reporter Online, 020719). CMP all but removed from North Penn middle schools after numerous parent complaints.
Mathland and Connected Math Articles. A collection of articles on the named elementary and middle school mathematics programs; arranged by Vicki Hobel Schultz.
Math Protest Meet Draws 50, Reactions, by Lana Sutton. Times and Free Press, May 5, 2000. Reports on conflict over Everyday Mathematics and Connected Math Program in Hamilton County (Chattanooga), Tennessee.
The Government Flunks Math, by David Tell for The Editors, The Weekly Standard, Dec 13, 1999.
Responses to Four Parents' Concerns About North Penn School District's Connected Mathematics Program, by Joe Merlino, Diane Briars, Steve Kramer, Lucy West and James Fey, August 20, 2002. This
conference contribution by some CMP authors and implementors provides advice to school administrators for responding to parents' concerns over implementation of Connected Math.
NSF Grant Support for CMP
#9986372 Connected Mathematics Phase II.
#9980760 Adapting and Implementing Conceptually-Based Mathematics Instructional Materials for Developmental-Level Students.
#9950679 Preparing Elementary Mathematics Teachers for Success: Implementing a Research-Based Mathematics Curricula.
#9911849 Teaching Reflectively: Extending and Sustaining Use of Reforms in the Mathematics Classroom.
#9714999 Show-Me Project: A National Center for Standards-based Middle School Mathematics Curriculum Dissemination and Implementation.
#9619033 The Austin Collaborative for Mathematics Education
#9150217 Connected Mathematics Project.
This page is part of a collection of links to reviews of and commentaries on K-12 mathematics curricula and standards that is maintained by Bas Braams, Elizabeth Carson, and NYC HOLD. This ring of
pages includes: TERC Investigations - Everyday Mathematics - Connected Mathematics Project (CMP) - Concepts and Skills - Structure and Method - College Preparatory Mathematics (CPM) - Interactive
Mathematics Program (IMP) - Mathematics, Modelling our World (MMOW) - CPMP Contemporary Mathematics in Context - Saxon Math - NCTM Standards
Email: braams@math.nyu.edu
Westport, WI Math Tutor
Find a Westport, WI Math Tutor
...I use the student's own text book and tailor my tutoring to the student's needs. Prior to beginning Pre-Calculus, I review the student's understanding of geometry, advanced algebra,
polynomials, coordinate geometry and graphical analysis and trigonometry. We then move to: 1. exponential and lo...
24 Subjects: including calculus, GRE, SAT math, algebra 1
...This kind of analysis is what I love to do, and will help you obtain academic success regardless of the field or career you are interested in. My goal is to help you learn and not just feed
you answers. ***Subjects*** MATH: I can help you with Story problems, Geometry, Algebra (Pre, I, or II)...
65 Subjects: including prealgebra, ACT Math, probability, differential equations
...My schedule can be extremely flexible. I look forward to working with you.I have an extensive background in chemistry. Course-wise I excelled from AP chemistry through Organic.
13 Subjects: including SAT math, algebra 1, algebra 2, ACT Math
...I believe it is important to assess student learning styles, identify strengths as well as improvement areas to provide tailored tutoring reflective of student needs. After tutoring children I
babysit as well as high school students and my college peers, I am looking forward to sharing my study ...
52 Subjects: including algebra 1, American history, biology, vocabulary
...I've worked as a standardized writing test scorer for several scoring seasons. I *KNOW* how scorers (and test designers) think! I have a master's degree in journalism from University of
Wisconsin-Madison and I have worked as a paid writer, copywriter, and editor since 10th grade.
40 Subjects: including algebra 1, prealgebra, English, reading
Related Westport, WI Tutors
Westport, WI Accounting Tutors
Westport, WI ACT Tutors
Westport, WI Algebra Tutors
Westport, WI Algebra 2 Tutors
Westport, WI Calculus Tutors
Westport, WI Geometry Tutors
Westport, WI Math Tutors
Westport, WI Prealgebra Tutors
Westport, WI Precalculus Tutors
Westport, WI SAT Tutors
Westport, WI SAT Math Tutors
Westport, WI Science Tutors
Westport, WI Statistics Tutors
Westport, WI Trigonometry Tutors
Nearby Cities With Math Tutor
Bristol, WI Math Tutors
De Forest Math Tutors
Lodi, WI Math Tutors
Madison, WI Math Tutors
Maple Bluff, WI Math Tutors
Mc Farland, WI Math Tutors
Middleton, WI Math Tutors
Monona, WI Math Tutors
Morris, WI Math Tutors
Morrisonville, WI Math Tutors
Shorewood Hills, WI Math Tutors
Springfield, WI Math Tutors
Sun Prairie Math Tutors
Waunakee Math Tutors
Windsor, WI Math Tutors
Wrong puzzle of the week [w10]?!
March 12, 2010
By xi'an
In the weekend supplement to Le Monde, the solution of the rectangle puzzle is given as 32 black squares. I am thus… puzzled, since my R program there provides a 34-square solution.
Am I missing a hidden rectangle in the above?! Given that the solution in Le Monde is not based on a precise mathematical argument (something to do with graph theory???), it may be that the authors
of this puzzle got their reasoning wrong… (The case of the parallelogram is clearer, the argument being that a horizontal distance between two black squares can only occur once.) An open problem (?) is then to figure out a formula for the number of black squares on an n×n grid without any rectangle. (I wonder how this is linked with the friendly queen puzzle set by Gauß…)
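For readers who want to experiment, here is a small checker for the forbidden configuration; it is a sketch in C rather than the R of the original post, and the grid size N and the function name are my own choices:

#include <stdbool.h>

#define N 10   /* grid size, an assumption; the puzzle's actual dimensions are not stated here */

/* Returns true if four black squares (value 1) form the corners of an
 * axis-parallel rectangle, i.e. two rows share at least two black columns. */
bool has_rectangle(int g[N][N])
{
    for (int r1 = 0; r1 < N; r1++)
        for (int r2 = r1 + 1; r2 < N; r2++) {
            int shared = 0;
            for (int c = 0; c < N; c++)
                if (g[r1][c] && g[r2][c])
                    shared++;
            if (shared >= 2)
                return true;
        }
    return false;
}

Maximising the number of 1s in an n×n grid while keeping has_rectangle false is exactly the open question above (the Zarankiewicz problem z(n; 2, 2)).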
Filed under: Le Monde, mathematical puzzle
To leave a comment for the author, please follow the link and comment on his blog: Xi'an's Og » R.
Wednesday Math, Vol. 34: Bernhard Riemann
What are the criteria for measuring greatness in an art form? The standards are not the same across the different disciplines.
The most rigorous standard would be to produce a vast body of work with several acknowledged masterpieces interspersed among these works. By that standard, the greatest artists would be people like
Shakespeare, Bach, Mozart, Dickens, Rembrandt and others. In literature, there are masters given the highest rank who were not particularly prolific, or only produced one or two masterworks. Leo
Tolstoy stands at the top of the pantheon of world literature based on two large and sprawling novels; the rest of his work is much less well known. Herman Melville is given a lofty perch as well,
though perhaps I think more of him than he deserves because we are both Americans. He died in complete obscurity, and after Moby Dick, his second best known work today, Billy Budd, was published after his death because he could not find a publisher when he was alive. That novel's success is greatly enhanced by Benjamin Britten turning it into an opera and Peter Ustinov turning it into a play and film.
In music, the top of the pantheon is largely reserved for those who meet the most difficult criteria, the massive body of work with many masterpieces. Georges Bizet, for example, is not often
mentioned among the greatest of the opera composers. After Carmen, clearly an important and popular opera, his second best known work is... The Pearl Fishers? La Jolie Fille de Perth? Sadly, Bizet died of a heart attack at the age of 36 only a few months after composing Carmen. For all that work's success, the artist himself does not get placed in the pantheon beside Verdi, Puccini and Wagner.
In math, the criteria are much the same. Some mathematicians produce one truly great result, but that cannot put them at the top of the heap. Both Niels Abel and Evariste Galois are given credit for
independently proving the impossibility of solving the quintic equation algebraically, which was the most important open problem of the early 19th Century. Sadly, both died young, Abel of
tuberculosis and Galois in a duel, and no one would put their work at the same level of someone like Archimedes, Newton or Gauss.
If I were to make a list of the Top Ten mathematicians of all time, and I limited the list to the deceased, my list would obviously include the Big Three mentioned above, and I would be joined by the
vast majority of mathematicians when I include David Hilbert, Leonhard Euler, and last week's star John Von Neumann. Other names like Poincaré, Lagrange and Kolmogorov might also be included, all of whom meet the standard of prolific work and many important results.
But one man, and as far as I can think one man alone, would make many mathematicians' lists of the greatest of all time without meeting the criterion of prolific work. Bernhard Riemann did some of
the most important mathematical work of the 19th Century, working with many of the top mathematicians of his day, but he only produced a handful of papers. Each one of those papers is a treasure
trove of mathematical ideas, and the work that was done to extend those ideas is some of the most important work in math and physics ever.
There are many things in mathematics named for Riemann, though like a good mathematician he did not name them after himself. One of his best works he called the Dirichlet Principle, naming it after
his favorite professor and advisor. Today, there is the Riemann integral, its improvement the Riemann-Stieltjes integral, the Riemann Hypothesis, the Riemann zeta function and most importantly, the
Riemannian manifold.
The Riemannian manifold is a very important extension of integral calculus, the idea of being able to take an integral over a complex and possibly curved surface, instead of just taking a measure
over a simple flat surface like a line or a plane. The pictures here are a model of the real projective plane, a two dimensional surface that cannot be truly built in three dimensions because it must
pass through itself without having a hole in the surface. The idea of the manifold is that this odd looking shape, and many other odd looking shapes, can be defined by covering the surface with a
patchwork quilt of flat or nearly flat patches, and if two patches overlap, there is an "easy" method of matching points on one patch to the corresponding points on the other. Of all of Riemann's
ideas, the manifold may be the most useful, because without it, Einstein would not have had a mathematical model for the idea of curved space.
Like way too many great artists of the 19th Century, Bernhard Riemann died young, and again like too many, he died of tuberculosis. He was 39 when he died, but his ideas and his name live on, and if he
is not one of the top ten mathematicians of all time, he is most certainly in the top twenty.
1 comment:
Mathman6293 said...
I did once play the overture to the pearl fishers in high school. My dad, the musician made sure we played lotsa different music in band.
It is amazing how much many math (composers, too) guys did in their short life times
PowerPoint Presentations
SLOPE - Ludlow Independent Schools PPT
Presentation Summary : One more thing about slope… if the angle between the x-axis and the line is higher than 45 degrees, the slope will be greater than 1. The line that forms a 45 ...
Source : http://www.ludlow.k12.ky.us/userfiles/20/Classes/278/SLOPE.pptx
Presentation Summary : Slope of a Line Slope basically describes the steepness of a line If a line goes up from left to right, then the slope has to be positive Conversely, if a line goes ...
Source : http://www.worldofteaching.com/powerpoints/maths/slope.ppt
Finding Slope - Teachers.Henrico Webserver PPT
Presentation Summary : Objective The student will be able to: find the slope of a line given 2 points and a graph. SOL: A.6a Designed by Skip Tyler, Varina HS and Vicki Hiner, Godwin HS
Source : http://teachers.henrico.k12.va.us/math/HCPSAlgebra1/Documents/6-1/FindingSlope.ppt
Slope - Intercept Form - Henrico County Public Schools PPT
Presentation Summary : Objective The student will be able to: write equations using slope-intercept form. identify slope and y-intercept from an equation. write equations in standard form.
Source : http://teachers.henrico.k12.va.us/math/hcpsalgebra1/Documents/6-4/SlopeIntForm.ppt
Finding The Slope of a Line PPT
Presentation Summary : Finding The Slope of a Line Jennifer Prince 6/18/03 Slopes are commonly associated with mountains. The slope we are studying is associated with the graph of a line.
Source : http://alex.state.al.us/uploads/7093/Finding%20the%20Slope%20of%20a%20Line.ppt
The Slope of a Line - University of West Georgia PPT
Presentation Summary : Chapter 1. Graphs, Functions, & Models The Slope of a Line Mathematicians have developed a useful measure of the steepness of a line, called the slope of the line.
Source : http://www.westga.edu/~srivera/ca-fall05/1.2.ppt
Slope-Intercept Form - Sherwood Middle School PPT
Presentation Summary : Objective The student will be able to: write equations using slope-intercept form. identify slope and y-intercept from an equation. write equations in standard form.
Source : http://sherwood.ebrschools.org/eduWEB2/1000129/darleneford/docs/slope_intercept_form.ppt
Slope + Y-intercept - Powerpoint Presentations for teachers PPT
Presentation Summary : Note Note to browser’s This slope lesson is interactive and I have students use graph paper as they follow along This is formulated for the California High School ...
Source : http://www.worldofteaching.com/powerpoints/maths/Slope%20&%20Y-intercept.ppt
Graphing Linear Equations - Mr. Leckie's Web Page PPT
Presentation Summary : Graphing Linear Equations Slope – Intercept Form Slope y - intercepts Slope – Intercept Form Graphing y = mx + b Graphing ax + by = c Graphing Linear Equations ...
Source : http://www.chaoticgolf.com/pptlessons/graphlines.ppt
Presentation Summary : Slope 5-4-1 Stairs Up 1 over 1 Up 1 over 2 Up 2 over 1 Up 0 over 3 Up 3 over 0 Slope Slope tells the steepness of things like roads or stairs.
Source : http://www.nwlincs.org/NWLINCSWEB/EITCdata/PreAlg/PowerPt/5-4-1%20Slope.ppt
Graphing lines using slope intercept form PPT
Presentation Summary : 4.7 Graphing Lines Using Slope Intercept Form Goal: Graph lines in slope intercept form. Slope-Intercept Form of the Linear Equation Find the slope and y-intercept of ...
Source : http://web.luhsd.k12.ca.us/staff/bjacobs/Algebra1_Power/chp_4/Graphing_lines_using_slope_intercept_form_4.7.PPT
Presentation Summary : Title: The slope of a line Author: bjacobs Last modified by: bjacobs Created Date: 4/1/2002 5:27:17 PM Document presentation format: On-screen Show
Source : http://web.luhsd.k12.ca.us/staff/bjacobs/Algebra1_Power/chp_4/The_slope_of_a_line_4.5.PPT
Slopes (or steepness) of lines are used everywhere. PPT
Presentation Summary : Slope Slope of a Linear Relationship POSITIVE SLOPES Goes up to the right Negative Slope Goes down to the right STEEPNESS OF SLOPE The greater the _ Constant of ...
Source : http://dearingmath.wikispaces.com/file/view/5.3+slope.ppt
Slope Word Problems .pptx - Larose PPT
Presentation Summary : Slope Word Problems. In 2005, Joe planted a tree that was 3 feet tall. In 2010, the tree was 13 feet tall. Assuming the growth of the tree is linear, what was the ...
Source : http://larose.cmswiki.wikispaces.net/file/view/Slope+Word+Problems.pptx
Slopes of Parallel and Perpendicular Lines - A Teachers' Page PPT
Presentation Summary : Slope of line 1 = = = = –1 y2 – y1 x2 – x1 –10 –(–7) 3 – 0 –3 3 Slope of line 2 = = = = –1 Each line has slope –1. The y-intercepts are 3 and ...
Source : http://www.dgelman.com/powerpoints/geometry/phall/3-7_Slopes_of_Parallel_and_Perpendicular_Lines.ppt
Slope Intercept Form - NW LINCS PPT
Presentation Summary : Slope Intercept Form 5-4-2 Y intercept The y-intercept is the point where a line crosses the y axis. At this point the value of x is 0. To find the y-intercept, plug ...
Source : http://www.nwlincs.org/NWLINCSWEB/EITCdata/PreAlg/PowerPt/5-4-2%20Slope%20Intercept%20Form.ppt
Point-Slope Form of a Linear Equation - james rahn PPT
Presentation Summary : Point-Slope Form of a Linear Equation Learn the point-slope form of an equation of a line Write equations in point-slope form that model real-world data
Source : http://www.jamesrahn.com/Algebra/powerpoint/Point-Slope%20Form%20of%20a%20Linear%20Equation%20section%204-3.ppt
Slope Intercept Form - Welcome to Robertson County Schools: Home PPT
Presentation Summary : Slope Intercept Form Y intercept The y-intercept is the point where a line crosses the y axis. At this point the value of x is 0. Y intercept To find the y-intercept ...
Source : http://www.rcstn.net/technology/files/ch_5-3_slope-intercept.ppt
5.1 Rate of Change and Slope PPT
Presentation Summary : 5.1 Rate of Change and Slope Objective: SWBAT determine the slope of a line given a graph or two ordered pairs. Mini Quiz 39 Find the domain and range of the function ...
Source : http://lyt.weebly.com/uploads/2/0/9/5/2095912/5.1_rate_of_change_and_slope.ppt
Finding The Slope of a Line - nemsgoldeneagles - home PPT
Presentation Summary : Finding The Slope of a Line Jennifer Prince 6/18/03 * May need to remind students that a ratio is a comparison of two numbers * Discuss why the slope is negative and ...
Source : http://nemsgoldeneagles.wikispaces.com/file/view/Finding+the+Slope+of+a+Line.ppt
Slope – Parallel and Perpendicular Lines PPT
Presentation Summary : Title: Slope – Parallel and Perpendicular Lines Author: Jerel Welker Last modified by: Jerel Welker Created Date: 2/3/2006 12:31:01 PM Document presentation format
Source : http://isite.lps.org/jwelker/lessons/ppt/geod_3_3_slope.ppt
Distance, Midpoint, and Slope PPT
Presentation Summary : How to find the Distance, Midpoint, and Slope between two points. Please view this tutorial and answer the follow up questions on paper and turn in to your teacher.
Source : http://ridleycoreplustutorials.wikispaces.com/file/view/DistanceMidpointSlope.pps
Practice converting linear equations into Slope-Intercept Form PPT
Presentation Summary : Practice converting linear equations into Slope-Intercept Form It’s easier than you think… When an equation is in slope-intercept form: Practice converting linear ...
Source : http://blue.utb.edu/qttm/at/ppt/Linear%20Fts/Slope_intercept.ppt
The slope-intercept form of a line: y = mx + b PPT
Presentation Summary : Title: The slope-intercept form of a line: y = mx + b Author: Dolores Salvo Last modified by: Dolores Salvo Created Date: 1/12/2004 11:28:34 PM Document presentation ...
Source : http://www.cs.osceola.k12.fl.us/Classes/G8/Mrs%20Salvo/Downloads/SlopeInt.ppt
bounds on regulator of elliptic curve
up vote 2 down vote favorite
Let E be an elliptic curve over Q with positive rank and trivial torsion structure. Is there any sort of upper bound (conjectural or unconditional) on the regulator of E in terms of the conductor of
E? (For lower bound we have Lang's conjecture.)
nt.number-theory elliptic-curves
1 In any specific case, you can get a conjectural upper bound from the leading coefficient of the $L$-function at $s=1$. Is that what you are after or are you looking for a theoretical uniform
bound? If the latter, then dependent on what? It is very unlikely that there is a constant uniform bound that works for all curves. – Alex B. Nov 11 '10 at 5:12
I was looking for an upper bound in terms of the conductor or the discriminant of E. I've been trying to figure out if I can give some sort of a bound for $\lim_{s\rightarrow 1}L^{(r)}(s)$ using
the conductor or the discriminant of the elliptic curve, but I don't know if that's reasonable or not. – Soroosh Nov 12 '10 at 19:05
There is Lang's conjecture on the regulator times size of Sha. See his book "Survey of Diophantine Geometry", pp. 99, Chapter 3 section 6, conjecture 6.3. If you don't have access to the book (and
can't search google), I can post it here. – Dror Speiser Nov 12 '10 at 20:10
$L^r(1)$ can be bounded by complex analysis, in particular, if you don't care about optimal results, use the Phragmen Lindelof theorem. I think it should give $L(1)$ is less than $N^{1/4}$ times
some power of $\log N$. Each derivative introduces another log via a Cauchy derivative formula, among other methods. More explicitly, $L(s)$ is bounded by $(\log N)^{2?}$ on the $s=3/2$ line via
a limiting Euler product (or do it on $s=3/2+1/\log N$), and then by the functional equation, you get a bound $\sqrt N$ times this on the $s=1/2$ line, and apply convexity to get $N^{1/4}$ in the
middle. – Junkie Feb 28 '11 at 12:30
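Writing that comment's argument out (an editorial restatement, not part of the original thread), with $N$ the conductor and $A$ some absolute constant:
$$|L(s)| \ll (\log N)^{A} \ \text{ on } \Re(s)=\tfrac{3}{2}, \qquad |L(s)| \ll \sqrt{N}\,(\log N)^{A} \ \text{ on } \Re(s)=\tfrac{1}{2},$$
and the Phragmén-Lindelöf convexity principle interpolates between the two lines to give $|L(1)| \ll N^{1/4}(\log N)^{A}$ at the centre; each derivative $L^{(r)}(1)$ costs at most a further power of $\log N$.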
The Black Vault Message Forums
The reason that gravity waves, gravitons, etc. have not been discovered is because they do not exist. It is because any mass in space contracts the area in which it resides that it gives the illusion
of pulling objects together. Imagine a 2 dimensional piece of paper representing an area of space. First draw a vertical line in the middle and then a horizontal line to divide the sheet into 4. Call
the intersection at the centre of the page point (A). From point (A) measure 4 inches up the vertical and mark this point calling it point (B).
Now place an imaginary planet Earth at point (A). Immediately point (B) is relocated one inch closer to point (A) as the planet contracts the space between point (A) & point (B). Note it is important
to realize that point (B) has not been pulled to point (A) but that space has been compressed.
Now place an imaginary Moon at point (B). It's not important to calculate or visualize further contractions of space here as the argument is not affected. The argument under consideration now is that
the moon does not orbit the earth but in fact always travels in a straight line.
Starting at point (B) the moon is travelling towards the right and its path is parallel to the horizontal line dissecting the page. From point (B) draw a parallel line to the existing horizontal line to show the straight anticipated path of our moon as it travels through space. At a place 6 inches from point (B) along the moon's straight line route you can mark a point and call it (C).
From my earlier argument you will see that point (C) will be drawn closer to (A) as the Earth has compressed the space around it. Now we see that whilst the moon has in fact travelled in a straight
line, it appears to have begun to orbit the Earth. The truth is that point (C) has moved.
We can complete the straight line orbit by adding points (D), (E) and so on, but it's the same deal as point (C). This discussion says that the moon starts at a point on the orbit where it finishes. To expand this, my reasoning says that if you place a mass in empty space the immediate effect is for that mass to draw in spheres of space around it, like linear rods of space formed into hoops with their ends joined, or bubbles if you're progressing to 3D imagery.
We can go an interesting step further and regress to the flat Earth philosophy (only with an Einstein like twist). Firstly take another piece of paper and draw a circle on it. Now this circle is a 2D
representation of the Earth. Mark a point at the centre of the circle and call it (A). Next mark a point at the very top of the world/circle and call it (B). Imagine a person walking clockwise around
the Earth to a point 45 degrees around the circumference to point (C). Now consider, as with the lunar orbit reasoning above, that our walker has been walking on a dead straight line and not in a circle as at first it might appear. It is the centre of mass of the Earth that has relocated point (C) to appear at a point on the circumference of our circle. If our man keeps walking he will indeed come back to where he started, as the linear track of space occupied by the surface of the planet has been formed into a hoop by the presence of Earth's mass. Simple, don't you think?
The reason that gravity waves, gravitons, etc. have not been discovered is because they do not exist.
Sorry. Wait one sec. Are you trying to say that Isaac Newton, an English man, was wrong? We invented gravity and time so we are the ones that tell you that it's there, not the other way round.
We are the ones that tell the Spanish when to have their afternoon nap, hence why they lost, and we're the ones that tell your coffee to stay in its cup.
It was interesting though.
Read between the lies
Nice going greywolfe, that is close to my theory. Objects which have mass interact with the fabric of space pulling points together, compressing that region of space. When you look at the ultra small
scale everything is just waves, and mass IMO is a configuration of waves which interfere with the fabric of space. The amount of coupling determines the mass. The more matter which comes together in
close proximity the more each matter object's waves combine, increasing the overall coupling and increasing the compression of space.
Gravity waves would require a different definition for mass than mine. And not understanding this is why we can't bring relativity in line with quantum mechanics. I believe relativity is ultimately
incorrect, a clever mathematical construct which just happens to yield the correct values.
When we figure out how to modify the waves in question, or produce enough interference, we will be able to negate gravity. And then the real fun begins.
I'd like to throw in a little wrench here...
This straight-line theory (insert actual theory name here please) is new to me but I think I understand the basis. I am all for disproving current theories to make way for advancements instead of
clinging on to hollow truths. That said, don't you guys think it irresponsible to not pay homage to gravity theories on a quantum level first? There is so much we don't know about quantum physics /
mechanics that we certainly can't start ruling things out. How does this straight-line theory work with string theory? String theory is slowly but surely showing promise in explaining relationships
with particles and collective matter. I just wrote a post regarding dark matter and its yet untapped and unimagined effects on our material world. I have a really strong feeling that gravity operates
on the quantum level and is governed by the rules of the strings used to materialize whatever matter is in question. I am admittedly not a professional scientist so I have a difficult time putting
some of this to words. Let's try this...
A particle of dust travels through your home. If the air is still, it will travel in a straight line until it gets close to a larger object. What attracts the dust particle to the solid object?
Nothing, it was just "floating" aimlessly until it got close enough that it stopped? It stopped and "stuck" to the wall or the fan? No, electromagnetic force controlled the whole thing. As the
particle traveled through the room, it undoubtedly came into constant contact with other larger and smaller particles that pushed it away. Well, realistically, it was tugged, pulled and pushed
through the open space of the room by other particles. We know that everything bounces off each other at that scale almost endlessly because of the relatively low gravitational effect (weight) given
to each object according to its existence and rules therein provided by string theory. Once the dust particle finally made its way across the room, it "stuck" because of static (electro-magnetism)
which is stronger than gravity. Examples of particle behavior seem to really drive home gravity operating at the quantum level. A larger scale example of planets and stars moving in deliberate orbits
is a bit more demanding to explain but it would seemingly have to be the same governing element as dust particles. A graviton may be the missing link or provide a window between relativity and
quantum mechanics. It’s still worth searching for.
Maybe CERN will have something to say about all of this before we know it! They probably already have found amazing things that we won’t even hear about…
Try not to tear my post up too much! I am open to any criticism of course. Let’s discuss
Actually my theory is compatible with superstring theory (SST) to the degree I understand it. In fact some of the stuff I read on SST gave me ideas that resulted in my theory.
The "straight line theory"? I guess you mean relativity right? That's what we were comparing. Relativity is the analog theory of gravity. QM is a set of digital theories and there is a quantum theory
of gravity.
One of the things Einstein is most famous for (I think it's what he got the Nobel Prize for) was his prediction that light bends around massive objects like stars. During a later eclipse this was proven to be true. But relativity does not actually predict this. Einstein's prediction came from a derivation based on energy which resulted in this equation: p = hbar * omega / c, where p is momentum, hbar is Planck's constant over 2 PI, omega is the frequency, and c is the speed of light.
That equation is mathematically correct and has been rigorously tested. But Einstein never really explained how massless particles are affected by gravity, a force which acts on mass. My theory
attempts to explain why. When you accept that the force a particle feels with respect to gravity comes from the degree of coupling with the fabric of space and that coupling is based on wave
properties, then you see how it all works. It's pretty cool actually and it leads to other stuff.
Your dust particle analogy would need some tweaking. First you have to get rid of the dust particle because it's too large for QM and too small for gravity effects. You'd have to get rid of the room because you need a vacuum. Otherwise too many other things would have larger effects than the gravity. EM does play a role because what makes solid objects solid is the EM force and it's way larger
than gravity. In fact the weakness of gravity is what makes it so difficult to test at the subatomic level. All other forces are many magnitudes stronger. And you can't find the right answer for
gravity looking at the large scale. So science has been stuck. They accepted the clever construct (wrong answer) a long time ago so they've got nowhere to go.
Einstein himself thought there was a problem with his theory as indicated by the need for the "cosmological constant" which he could not explain. Later physicists who believed in relativity came up
with another clever construct to explain it away. In the end I fear that my theory will end up on the trash heap of physical theories since I will not get the chance to prove it. Meanwhile, the
particle smashers keep on smashing. Humans aren't ready for space travel anyway and they sure aren't ready to meet ET.
I disagree with this theory that everything travels in a straight line. Even if space is compressing between objects to pull themselves closer (which to me sounds illogical, but ya never know), we live in 3 directional dimensions, and maybe if every object spun in the same direction the straight line theory might work, but as it stands we have all sorts of celestial objects moving in completely different directions from others, so they can't all be going straight as far as I can tell.
To use your paper example, it seems to me that every celestial object (and who determines which fit the grade of celestial?) would need its own "piece of paper" in order to orbit but yet be travelling in a straight line.
When you abruptly change direction on a road how is that explained?
"George Bush says he speaks to god every day, and christians love him for it. If George Bush said he spoke to god through his hair dryer, they would think he was mad. I fail to see how the addition
of a hair dryer makes it any more absurd."
Well what they mean by "straight line" is a mathematical/geometrical point of view. They aren't talking straight line in 3 dimensions. I'd have to draw pictures to show what they mean. They are
modifying space (the distance between points) based on adding gravity as a dimension (similar to space-time). When you look at motion in space-gravity it is a straight line. Take away the gravity
dimension and you get a curve. They are talking space-gravity or space-time-gravity. You can do this with other concepts too, like space-energy or space-therms or space-momentum. In fact all of those
are space-energy.
Hmmm, space-time-energy (VtE)? I like it. So we should be able to describe gravity in terms of energy (E). Energy (E) is work (W) over time (t) and W can be described in terms of gravity as mass (m)
over distance (d). So space-time-energy (VtE) = (d * d * d * t * m) / (d * t) = d * d * m. Surprise, you get an area and a mass and areas are planes (straight lines). How bout that? In fact, as I
mentioned earlier, Einstein's explanation of massless photons being affected by large masses like a star was based on just such an energy-based derivation. Neat how that works out.
But in the case of photons it is NOT the force of gravity which affects the photons, and there is the rub. Einstein did not know what DOES affect the photon. So therein lies the question of the day. What really causes the photon to bend around the star?
Relativity's inability to answer this question gives rise to my theory's revelation that we need a broader definition of gravity. It isn't just the property of MASS that gravity acts on. There is
another property, an as yet undiscovered property of matter.
Well, I'm not good at physics, I do chemistry...
So I'll leave you guys to it.
"George Bush says he speaks to god every day, and christians love him for it. If George Bush said he spoke to god through his hair dryer, they would think he was mad. I fail to see how the addition
of a hair dryer makes it any more absurd."
Here is a great video explaining the 10 dimensions implied by superstring theory (caveat: not the official explanation). Very nice.
here r more vids if your brain is thirsty: http://www.pbs.org/wgbh/nova/elegant/program.html
|
{"url":"http://www.theblackvault.com/phpBB3/post20327.html","timestamp":"2014-04-20T13:34:50Z","content_type":null,"content_length":"74177","record_id":"<urn:uuid:9dafc992-1f80-446e-b7ce-34ecda34e869>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00251-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Perimeter Congratulates Discoverers of the Higgs
Perimeter offers its heartiest congratulations to the winners of the 2013 Nobel Prize in Physics, Francois Englert and Peter Higgs. In the 1960s, these theorists discovered how the vacuum could be
permeated by something called a Higgs field, which distinguishes elementary particles from each other and endows them with mass. When this mechanism was finally confirmed by the discovery of the
Higgs boson – a tiny ripple in the Higgs field – at CERN last year, it was a triumph that joined the forward-looking power of theory to the powerful discovery engines of experiment.
Perimeter Director Neil Turok commented, “The discovery of the Higgs boson represents one of humankind’s greatest triumphs – to anticipate on the basis of mathematical theory the workings of nature
on such tiny, inaccessible scales and then, nearly a half-century later, to confirm the predictions in the most ambitious experiment ever conducted, the Large Hadron Collider. The details of the
discovery are equally exciting, hinting at new principles governing the universe.”
He added, “As we celebrate Englert and Higgs, we also remember Englert’s collaborator in this landmark work, Robert Brout. He was a valued member of our own Perimeter community.”
The Nobel Prize is only awarded to living persons, and Brout died in 2011; he would have likely shared in the prize if he were alive today. This sentiment was echoed by Professor Englert at a press
conference at the Free University of Brussels: “Of course I am happy to have won the prize – that goes without saying – but there is regret too that my colleague and friend, Robert Brout, is not
there to share it.”
In the early 1960s, Brout and Englert worked together to apply quantum field theory to elementary particle physics. In a ground-breaking 1964 paper, they used this new marriage of ideas to show how
some gauge bosons acquire mass. The paper was followed a few weeks later by an independent paper from Peter Higgs on the same subject. The hunt for the Higgs boson can be traced back to those two papers.
Professor Brout was for many years an honoured presence at Perimeter, visiting frequently from 2005 on. He gave many talks at Perimeter – they are available online – and continued doing
ground-breaking research, both independently and in collaboration with Perimeter scientists, until he fell ill in 2009.
Brout was a wide-ranging and prolific scientist, who did work in field theory, elementary particle physics, lattice gauge theory, general relativity, black hole physics, and cosmology. Aside from his
work on the origin of mass, he will probably be remembered for developing the idea of inflation (again, with Englert) and relating it to the emergence of the universe itself from a quantum fluctuation.
Perimeter salutes all three of these pioneers, who have so greatly advanced our understanding of the universe and our place in it.
|
{"url":"http://www.perimeterinstitute.ca/node/92395","timestamp":"2014-04-20T08:48:15Z","content_type":null,"content_length":"32687","record_id":"<urn:uuid:cc86c875-2f37-4749-928a-7011f9c7251c>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chapter 10 - Properties of Circles
Section 10.1 - Properties of Tangents
Section 10.2 - Arc Measures
Section 10.3 - Properties of Chords
We did the notes in class.
Section 10.4 - Inscribed Angles and Polygons
Section 10.5 - Other Angle Relationships in Circles
Section 10.6 - Segment Lengths in Circles
Open this sketch from GeoGebra. Follow the directions in notice 1. Click on check box #2 to see more detail, click on check box #3, brace yourself to have your mind blown away, then get more.
Make conjectures about the relationships between the lengths of the segments when they intersect inside and outside of the circle.
Section 10.7 - Equation of a Circle
|
{"url":"http://johnsonsmath.weebly.com/chapter-10---properties-of-circles.html","timestamp":"2014-04-20T10:57:49Z","content_type":null,"content_length":"23735","record_id":"<urn:uuid:6107fdb6-de13-4e6f-81df-7cbbb7e5799b>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Whole Permutation Fractions
For my part, if a lie may do thee grace,
I’ll gild it with the happiest terms I have.
Using each of the decimal numerals 1 through 9 exactly once, we can construct a fraction equal to 3 in two different ways, namely, 17496/5832 and 17469/5823. We say that 3 can be expressed as a whole
permutation fraction in the base 10. More generally, for any positive integer b, we say that the integer n is a whole permutation fraction in the base b of degree d if n can be expressed in d
distinct ways as the ratio of two integers whose combined digits are precisely the non-zero numerals in the base b. Clearly the number of permutation fractions for any given base is finite, since
there are only a finite number of permutations of the b-1 non-zero numerals, and b-2 positions in which to place the division symbol. Of these (b-2)[(b-1)!] fractions, only a subset are whole
numbers, and of course there are whole numbers with more than one representation.
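Since there are only (b-2)[(b-1)!] fractions to check, the degrees are easy to tabulate by brute force. The following sketch (not from the original article; the function names and output format are my own) enumerates every whole permutation fraction in a given base and collects the representations of each integer:
from itertools import permutations
from collections import defaultdict

def to_int(digits, b):
    # interpret a tuple of base-b digits, most significant first
    n = 0
    for d in digits:
        n = n * b + d
    return n

def whole_permutation_fractions(b):
    # map each representable integer to its list of (numerator, denominator) pairs
    reps = defaultdict(list)
    for perm in permutations(range(1, b)):          # the non-zero numerals 1..b-1
        for split in range(1, b - 1):               # the b-2 positions of the division bar
            num = to_int(perm[:split], b)
            den = to_int(perm[split:], b)
            if num % den == 0:
                reps[num // den].append((num, den))
    return reps

print(sorted(whole_permutation_fractions(4)))       # base 4: [2, 3, 11, 14]
print(len(whole_permutation_fractions(10)[8]))      # degree of 8 in base 10 (46, per the text below)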
For example, the non-zero numerals in the base 4 are {1,2,3}, and the whole permutation fractions in the base 4 are
2 = (12)[4]/(3)[4],   3 = (21)[4]/(3)[4],   11 = (23)[4]/(1)[4],   14 = (32)[4]/(1)[4]
where the subscript “4” signifies that the expression is to be interpreted in the base 4. (Unsubscripted numbers are understood to be in the base 10.) Each of these numbers is of degree 1, because
they each have just one representation. All 14 of the whole permutation fractions in the base 5 are also of degree 1. The first example of a whole number with more than one representation is in the
base 6, where we have
In larger bases we find some numbers have many representations, whereas most have few or none. In the base 7, the number 2 has five distinct representations
whereas the number 3 has none at all. The table below lists the number of permuted fractional representations for the first several integers in the bases from 3 to 13.
We might not have expected to find such erratic variations in the degrees of numbers for a given base. Why, for example, are there as many as 46 representations of the number 8 in the base 10? Here
are the numerators and denominators for these 46 fractions, each of which is equal to 8:
Thus the number 8 is of degree 46 in the base 10. To give an idea of how unusual this is, the following is a list of how many of the numbers from 1 to 2000 are of degree 0, 1, 2, and so on.
Other numbers that stand out, due to their size and large numbers of representations, are 1403 and 1517, which might be called the Shrewsbury Number and the Wittenberg Number respectively. The
numerators and denominators of the permutation fractions for these numbers in the base 10 are listed below, along with the factorizations of their denominators.
Of course, it’s also clear that no multiple of the base b itself can be representable in the base b, because this would imply that the least significant digit of the numerator was zero, which we have
excluded from the permuted numerals. (It also appears that no number exceeding a multiple of b by 1 can be represented.) If we allow the numeral zero, the distribution of degrees is actually fairly
similar to the distribution without the zero numeral, except that there are a few more examples of high-degree numbers, as summarized in the table below.
The 46 representations of 80 are identical to the representations of 8, except that each numerator is multiplied by 10. Likewise the 27 representations of 170 and the 12 representations of 20 and 50
are based on the “no zero” representations of 17, 2, and 5. It’s also worth noting that the number 8 has 16 essentially new representations when we allow the zero numeral. These are summarized below.
Each of the 46 representations of 8 without the zero numeral can be converted to a representation with a zero numeral in one of two ways, by appending the 0 numeral as the most significant digit of
either the numerator or the denominator. Hence, the total number of distinct representations of 8, including all ten numerals in the base 10, is 2(46)+16 = 108.
The last of the 16 fractions listed above is interesting for having the digits appearing consecutively, i.e., the denominator is 12345 and the numerator is 98760. A similar pattern can be seen in
other even bases. For example, in the base 8, the ratio of (7650)[8] divided by (1234)[8] is 6, and the number 6 is of unusually high degree in the base 8. Perhaps the most striking difference
between the cases with and without zero is that the number 2 has 48 representations with zero, compared to only 12 representations without zero.
One reason for excluding zero is that it renders some of the solutions indeterminate. Every representation without zero can also be expressed as a representation with zero by appending zero as the
most significant digit of either the numerator or the denominator – but not both. The presence or absence of a leading zero has no significance, and yet it makes the difference between a valid and
invalid representation. So there is some justification for excluding the zero numeral.
Returning to the no-zero representations, the previous table for bases 2 to 13 suggests that if b is of the form 4k-1 then no odd number is representable in that base. This is easily proven, because
in this case the set of numerals, 1 to b-1, contains an odd number of 1's modulo 2, which must be placed in either the numerator or denominator. Of course, the base itself equals 1 modulo 2, so the
parities of the numerator and denominator simply equal the parities of the sum of their digits. Consequently either the numerator or the denominator is even and the other is odd. If the numerator is
even and the denominator odd, then the ratio is even, whereas in the reverse case the ratio is not an integer. Hence no odd number is representable for bases of the form 4k-1.
On the other hand, if the base b is of the form mk + 1 where k is an odd integer and m = 2^r with r greater than 1, then the residues modulo m of the numerals consist of k multiples of the set {0, 1,
2, 3, …, m-1}, so the numerator and denominator modulo m can be written in the form
or the reciprocal of this. Noting that 2^r – 1 = -1 modulo m, and that 2^(r-1) = -2^(r-1) modulo 2^r, the numerator can be any of the residues 0, 1, 2, …, (m-1), leading to the m possibilities (or 2m,
counting the reciprocals)
The first of these represents possible whole fractions congruent to even residues modulo 2^r. The rest represent the other possible whole fractions. Reducing these (where possible) so that both the
numerator and denominator are odd, and hence co-prime to m, the divisions are unique modulo m, allowing us to determine the possible odd residue classes. To illustrate, consider the base b = 13,
which can be expressed as b = (2^2)(3) + 1, so m = 4, k = 3, and the possible odd residue classes are
The central expression doesn’t reduce to an odd whole residue, but the others reduce to 1 (mod 4). Hence, the possible residue classes for numbers expressible as whole permutation fractions in the
base 13 are 0, 1, and 2 (mod 4), so no number congruent to 3 (mod 4) is representable, which agrees with the previous tabular results.
For another example, consider the base b = 17, which can be expressed as b = 2^4+1. In this case we have m = 16 and k = 1, so in addition to the even numbers we can determine the possible odd residue
classes by evaluating the ratios j/(8-j) modulo 16 for j = 1 to 15. However, since j/(8-j) = (16-j)/(8-(16-j)), we need only consider j = 1 to 7, i.e., the ratios
1/7, 2/6, 3/5, 4/4, 5/3, 6/2, 7/1
Evaluating these fractions modulo 16, we get 7, 11, 7, 1, 7, 3, and 7 respectively. Hence the possible odd residue classes modulo 16 for numbers expressible as whole permutation fractions in the base
17 are 1, 3, 7, and 11, which implies that numbers congruent to 5, 9, 13, and 15 cannot be so expressed.
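A short script (again only an illustrative check, not part of the article) reproduces this computation by reducing each ratio j/(8-j) to an odd-over-odd fraction and inverting the denominator modulo 16:
m = 16
residues = []
for j in range(1, 8):
    num, den = j, 8 - j
    while num % 2 == 0 and den % 2 == 0:         # cancel common factors of 2
        num //= 2
        den //= 2
    residues.append(num * pow(den, -1, m) % m)   # den is now odd, hence invertible mod 16
print(residues)                  # [7, 11, 7, 1, 7, 3, 7]
print(sorted(set(residues)))     # the possible odd residue classes: [1, 3, 7, 11]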
This approach seems to allow us to determine not only the possibility, but also the frequency with which numbers of various residue classes can be represented as whole permutation fractions of a
given base. For example, with the base b = 11 the sum of degrees of the numbers less than 500 in each residue class modulo 10 are 56, 0, 95, 0, 267, 0, 55, 0, 56, and 0. Thus there is a pronounced
preference for numbers congruent to 4 modulo 10. We might conjecture that these numbers for the odd residues are in proportion to 3, 5, 14, 3, 3. For another example, in the base b = 17, assuming
each possible value of j is equally likely, we would expect the residue class 7 to be three times more populated than the classes 1, 3, and 11.
Return to MathPages Main Menu
|
{"url":"http://mathpages.com/home/kmath244/kmath244.htm","timestamp":"2014-04-20T00:49:35Z","content_type":null,"content_length":"21955","record_id":"<urn:uuid:1babed0b-3744-4934-86d3-1da525541102>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SOLVED] Cardinality (easy)
April 16th 2010, 11:13 PM #1
[SOLVED] Cardinality (easy)
It's trivial that we can have a subset $S \subset \mathbb{R}$ such that any bounded neighbourhood of $0$ contains $\aleph_0$ elements of $S$, and such that the complement of any bounded
neighbourhood of $0$ contains finitely many elements of $S$. Now is it possible to replace " $\aleph_0$" by " $\aleph_1$" and "finitely many" by " $\aleph_0$"?
It's trivial that we can have a subset $S \subset \mathbb{R}$ such that any bounded neighbourhood of $0$ contains $\aleph_0$ elements of $S$, and such that the complement of any bounded
neighbourhood of $0$ contains finitely many elements of $S$. Now is it possible to replace " $\aleph_0$" by " $\aleph_1$" and "finitely many" by " $\aleph_0$"?
I don't think so.
Let $E$ be the set in question. We may assume WLOG that $0 \notin E$ (since it won't affect the cardinality). Let $U_n=E\cap\left(\mathbb{R}-B_{\frac{1}{n}}(0)\right)$ (where $B_{\frac{1}{n}}(0)$ is the open ball of radius $\frac{1}{n}$ around $0$). Then, I claim that $E=\bigcup_{n=1}^{\infty} U_n$. To see this, let $x\in E$; then since $x \neq 0$ we see $d(x,0)>0$, and by the Archimedean principle there exists some $m\in\mathbb{N}$ such that $0<\frac{1}{m}<d(x,0)$. Thus, $x \notin B_{\frac{1}{m}}(0) \implies x\in U_m\subseteq\bigcup_{n=1}^{\infty}U_n$, and since it's trivial that $\bigcup_{n=1}^{\infty}U_n\subseteq E$ the conclusion follows. But each $U_n$ is the intersection with $E$ of the complement of a bounded neighborhood of $0$ and thus countable. So $E=\bigcup_{n=1}^{\infty}U_n$ is the countable union of countable sets and thus countable. But this contradicts the assumption that $E\cap [-1,1]$ is uncountable.
My own problem, hope you liked it.
Just an update, it does.
Theorem: Let $X$ be a $T_1$ topological space and $x\in X$ be such that $\{x\}$ has a countable neighborhood base. Then, if $E$ is a set such that $E\cap N'$ is countable for each neighborhood
$N$ of $x$ then $E$ is countable.
Proof: We may assume WLOG again that $x \notin E$. Then, if $\mathfrak{N}$ is the countable neighborhood base at $x$, then
$E=E\cap X=E\cap\left(\varnothing\right)'=E\cap\left(E\cap\bigcap_{N\in\mathfrak{N}}N\right)'=E\cap\left(E'\cup \bigcup_{N\in\mathfrak{N}}N'\right)=E\cap\bigcup_{N\in\mathfrak{N}}N'=\bigcup_{N\in\mathfrak{N}}\left(E\cap N'\right)$
And since each $E\cap N'$ is countable and $\mathfrak{N}$ countable it follows that $E$ is the countable union of countable sets and thus countable. The conclusion follows.
|
{"url":"http://mathhelpforum.com/math-challenge-problems/139593-solved-cardinality-easy.html","timestamp":"2014-04-18T21:55:11Z","content_type":null,"content_length":"57997","record_id":"<urn:uuid:d1f79cf2-8d35-4d84-b7be-16f938cd0135>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Course title: Quantum Mechanics for Engineers (EE 521)
“Think quantum”
M. P. Anantram (Anant)
Phone: 206-221-5162
The focus of this course is to introduce students to quantum mechanics using 1D, 2D and 3D nanomaterials. The students will develop a working knowledge of quantization in quantum dots/wells/wires,
band structure, density of states and Fermi’s golden rule (optical absorption, electron- impurity/phonon scattering). Applications will focus on nanodevices and nanomaterials.
1) Schrodinger’s eqn
• Definition
• Interpretation
• Continuity equation for probability density
• Continuity of wave function and its first derivative
• Expectation value
• Uncertainty principle
2) Closed and Open systems (examples of importance to nano devices and materials)
• Particle in a box
• Single Barrier Tunneling (discussion in context of transistors)
• Double Barriers (resonant tunneling diodes)
• Separation of variables
• Nanowire
• Quantum Well
• Quantum Dot
• Coupled quantum wells
• Hydrogen Atom
• Kronig-Penney model
• Time evolution of wave packets
3) Crystalline solid
• Unit cell and Basis vectors
• Real space and Reciprocal space
• Examples: Nanowire, Graphene (2D), 3D solid
4) Energy levels and wave function in a crystalline solid
• Bloch’s theorem
• Basic bandstructure calculation
• Examples of relevance to devices: Carbon nanotube / Silicon nanowire, Graphene, Diamond / Silicon
5) Density of states of open and closed systems
• Atoms, particle in a box, quantum dot
• Free particles in 1D, 2D and 3D
• Nanowire and quantum wells within an effective mass framework
• Graphene in a tight binding framework
• Nanotubes in a tight binding framework
6) Spins
• Stern-Gerlach experiment
• Hamiltonian of a nanostructure in a magnetic field
• Example of spintronic device
7) Perturbation theory
• First order and second order perturbation theory
• Fermi’s Golden rule and applications of relevance to devices: Electron-impurity interaction, Electron-phonon interaction, Relationship to mobility, Optical absorption / dipole matrix elements
• Detailed Course slides
• The following books will be useful:
□ Quantum Mechanics for Engineering: Materials Science and Applied Physics, Herbert Kroemer, Prentice Hall
□ Quantum Transport: Atom to Transistor, Supriyo Datta, Cambridge University Press
Learning Objectives
• Learning to think quantum so as to aid reading literature involving nanotechnology
• Developing the ability to perform simple quantum calculations that are important to both experimentalists and theorists
• Meaning and solutions of Schrodinger’s wave equation
• Calculate basic expressions for tunneling through barriers and resonant tunneling phenomena
• Numerical solution of Schrodinger’s equation as relevant to experimental students (see the sketch after this list)
• Learning to calculate the role of quantization in technological relevant examples: quantum dots, nanowires, quantum wells
• De Broglie’s Uncertainty Principle and Energy-Time Uncertainty Principle
• Method of separation of variables
• Basics of the tight binding method
• Learning to apply Bloch’s theorem in bulk and nanomaterials to calculate the bandstructure
• Density of states
• Basics of spins and representation of logic states using spins
• Derivation and application of Fermi’s golden rule for transition rates. Examples will include electron-photon interaction / optical absorption and electron-impurity/phonon scattering
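As a taste of the numerical-solution objective above, here is a minimal sketch (illustrative only, not course material; the electron-in-a-1-nm-box setup and grid size are my own choices). It discretizes the 1D time-independent Schrodinger equation for an infinite well with central differences and compares the lowest eigenvalues to the analytic levels E_n = n^2 pi^2 hbar^2 / (2 m L^2).
import numpy as np

hbar = 1.0545718e-34      # J*s
m = 9.109e-31             # electron mass, kg
L = 1e-9                  # 1 nm well
N = 500                   # interior grid points
dx = L / (N + 1)

# Kinetic-energy matrix from the central-difference second derivative (Dirichlet walls).
H = (hbar**2 / (2 * m * dx**2)) * (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))

E = np.linalg.eigvalsh(H)                                   # ascending eigenvalues
exact = (np.arange(1, 4)**2 * np.pi**2 * hbar**2) / (2 * m * L**2)
print(E[:3] / 1.602e-19)   # numerical levels in eV
print(exact / 1.602e-19)   # analytic levels in eV (~0.376, 1.50, 3.38)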
|
{"url":"http://www.ee.washington.edu/faculty/anant/Quantum-Mechanics-for-Engineers.htm","timestamp":"2014-04-17T21:27:17Z","content_type":null,"content_length":"8922","record_id":"<urn:uuid:f9ff63c1-ddb7-476a-9e01-baaa78552bba>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Simple Representation - Guile Reference Manual
A.1.1 A Simple Representation
The simplest way to meet the above requirements in C would be to represent each value as a pointer to a structure containing a type indicator, followed by a union carrying the real value. Assuming
that SCM is the name of our universal type, we can write:
enum type { integer, pair, string, vector, ... };
typedef struct value *SCM;
struct value {
  enum type type;
  union {
    int integer;
    struct { SCM car, cdr; } pair;
    struct { int length; char *elts; } string;
    struct { int length; SCM *elts; } vector;
  } value;
};
with the ellipses replaced with code for the remaining Scheme types.
This representation is sufficient to implement all of Scheme's semantics. If x is an SCM value:
• To test if x is an integer, we can write x->type == integer.
• To find its value, we can write x->value.integer.
• To test if x is a vector, we can write x->type == vector.
• If we know x is a vector, we can write x->value.vector.elts[0] to refer to its first element.
• If we know x is a pair, we can write x->value.pair.car to extract its car.
|
{"url":"http://www.gnu.org/software/guile/docs/docs-1.8/guile-ref/A-Simple-Representation.html","timestamp":"2014-04-19T11:40:25Z","content_type":null,"content_length":"4039","record_id":"<urn:uuid:cb1cb9f2-e169-4509-bd82-733c52a88783>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Basic For Loop??
Author A Basic For Loop??
Please help me understand how this could be.
I understand that the first for loop results to true and then moves to the second for loop and that results to true. The if statement results to true and the continue statement skips the println. It loops back to the second for loop and j++ increments the value to 1. The if statement results false so the println prints:
i = 0 j = 1
It then loops back to the second for loop and j++ now increments to 2; 2 is less than 3 so once again the if statement is false and the println prints:
i = 0 j = 2
It then loops back to the second for loop and j++ now increments to 3; 3 is not less than 3 so the statement is false.
It then loops back to the first loop and i++ is now incremented to 1; 1 < 2; so
why does the value of j return to 0; and how does the value of j increment to 2; for the last println.
Thanks for any response
public class forTest {
    public static void main(String[] args) {
        for(int i = 0; i < 2; i++) {
            for(int j = 0; j < 3; j++) {
                if(i == j)
                    continue;
                System.out.println("i = " + i + " j = " + j);
            }
        }
    }
}
/* Answer
i = 0 j = 1
i = 0 j = 2
i = 1 j = 0
i = 1 j = 2
*/
Hey Jacob!
I will walk you through this "Nested for loop."
The first for loop is the one that will keep incrementing regardless. So you declare that the first for loop counts from 0 to 1 (<2). It then goes to the nested for loop, which counts until it reaches its end before returning to the original for loop (the i loop). In this case your nested loop (the j loop) goes from 0 to 2. Therefore on the first pass i=j right off the bat because 0=0; it then continues the j loop for the next number, which is 1. That is why your output for the first println is i=0 j=1, because the j loop has not completed its counting yet. As you can see, 0 does not equal 1 so it runs the println statement. Your j loop continues on counting to 2, where it is not equal to i again. Thus your second line of output is i=0 j=2. Now that your j loop is done counting, it returns out to the original i loop, which is incremented by 1. Therefore i does not equal j (1 and 0), so that is your 3rd output: i=1 j=0. Next is your equal: when the j loop keeps counting it hits a 1 and then i=j, so it goes to the next number in the j loop, which is 2. Once again, i does not equal j and the output is i=1 j=2. This completes the i loop so there is no more counting.
That's it.
Hope this helps
That does help. I am trying to figure out how j = 2 and then it equals 0. What makes the value of j go back to zero.
Hi Jacob,
I am trying to figure out how j = 2 and then it equals 0. What makes the value of j go back to zero.
Therefore i does not equal j (1 and 0) so that is your 3rd output: i=1 j=0. Next is your equal. When the j loop keeps counting it hits a 1 and then i=j so it goes to the next number in
the j loop which is 2. Once again, i does not equal j and output is i=1 j=2.
Like I said before, it is a nested for loop. The j loop depends on the value of the i loop. If the i loop is still counting then it goes into the j loop. The j loop will count all the way
through without stopping. When the j loop is finished counting from 0 to 2 it will then jump back out to the i loop (which is incremented now to 1) and then it goes back into the j loop and
starts over. Once again, the j loop depends on the i loop. You can think of the i loop as the smart loop and the j loop as the dumb loop. When it is nested like that the i loop has priority
over the j loop so the j loop will always start at its initial value regardless, because the i loop controls it.
[ March 27, 2003: Message edited by: Steve Wysocki ]
I think a Light Bulb went on in my head!
Thank you for a great explanation!
[ March 27, 2003: Message edited by: Jacob Michaels ]
No problem, please come back if you have any further questions.
Peace out
Steve
|
{"url":"http://www.coderanch.com/t/393506/java/java/Basic-Loop","timestamp":"2014-04-16T19:24:20Z","content_type":null,"content_length":"30496","record_id":"<urn:uuid:dac58190-7427-406d-8975-5b8cc864f86b>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00636-ip-10-147-4-33.ec2.internal.warc.gz"}
|
On Free Parameters
In the last post I talked about a measurement we did in our lab to characterize some properties of two parallel laser beams. The theory, if you want to dignify that equation with such a title, gave the power of the two beams as a function of how much of the beams we cut off with a moving opaque block at position x.
The equation had 6 constants – the power of each beam, the width of each beam, the separation of the beams, and background power with no beams. Compare this to Newton's law of gravitation:
F = G m1 m2 / r^2
For two masses m1 and m2 separated by a distance r, this equation gives you the force. If you have two known masses and you know their separation and the force of their attraction, you can solve for
the single constant G. That G is the only free parameter in the theory. Once G is known for one set of masses, it will be the same regardless of what other m’s and r’s you may plug into the equation.
Every single instance of masses producing a gravitational force has to satisfy that equation or the theory is wrong. The fact that is does is impressive.
Our toy theory of blocked laser beams has to fit similar requirements: once those 6 free parameters are set by fitting methods, the equation has to give the right answer for however many data points
we choose to measure. And it does, to reasonable accuracy.
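For concreteness, here is one way such a six-parameter fit might look in code. This is only a sketch under my own assumptions: the knife-edge model (two erfc steps plus a constant background, with the first beam centred at x = 0 and the second at the separation d), the parameter names, and the fake data are illustrative, not the lab's actual analysis.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def power(x, P1, P2, w1, w2, d, bg):
    # transmitted power with the block edge at x: each beam contributes a smoothed
    # step of height P_i and width w_i; the second beam is centred at x = d
    return 0.5 * P1 * erfc(x / w1) + 0.5 * P2 * erfc((x - d) / w2) + bg

x_data = np.linspace(-3.0, 8.0, 60)                      # block positions (arbitrary units)
p_data = power(x_data, 1.0, 0.5, 1.0, 1.2, 4.0, 0.01)    # stand-in for measured powers
popt, pcov = curve_fit(power, x_data, p_data, p0=[1, 1, 1, 1, 3, 0])
print(popt)   # best-fit P1, P2, w1, w2, d, bg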
The number of free parameters in a theory is something that physicists prefer to keep low. You could, for instance, write down a 365-term polynomial equation that perfectly described the last year
worth of closing prices on the NASDAQ, but it would be worthless as a predictive theory because you have so many free parameters you could perfectly fit any 365 data points. A theory is more likely
to be right (and less likely to be wishful thinking) if it fits large amounts of data with few free parameters.
The standard model in particle physics, for instance, has something like 19 free parameters. That’s very small compared to the torrents of experimental data that it manages to fit quite well, but
it’s a much larger number than most successful physical theories. Many physicists spend a lot of time trying to think of deeper theories which would relate those parameters to each other such that in
fact maybe there would be only a few (or one! or zero!) free parameters which still manage to fit all the data. So far, no dice.
Bonus question: Maxwell’s equations in SI units appear to have two free parameters, μ0 and ε0. In CGS units they appear to have just one – the vacuum speed of light c. In CGS with "natural" units
(i.e., c = 1), they appear to have zero. How many free parameters do Maxwell’s equations actually have?
1. #1 asad March 11, 2011
Bonus question answer: 1. Even in SI units, the permeability and permittivity are not independent (they’re related by the speed of light), so they don’t count as two separate free parameters.
“Natural” units still have c in them, just with an ad hoc definition.
2. #2 Roland March 11, 2011
Maxwell’s equations have two free parameters: μ0 and ε0. The speed of light is c = 1/√ε0μ0 so the one free parameter actually depends on two things.
3. #3 Eric Lund March 11, 2011
The one free parameter in SI units is actually the definition of the meter. c is defined to be 299792458 m/s, and the second also has an official definition, so the meter is whatever it needs to
be to satisfy those two definitions (the reason for this standard is because the second and the value of c can be measured to greater precision than the meter as a length standard per se). Recall
that μ_0 also has a defined value of 4π*10^-7 henrys per meter. Since ε_0μ_0c^2 = 1, that gives a de facto definition of ε_0.
4. #4 Matt Springer March 11, 2011
It’s a subtle point that I didn’t appreciate until just a couple weeks ago, but this is all closely tied to the fact that units of measurement are much more arbitrary than intuition might lead
you to think. This’ll be a post pretty soon, but it boils to a generalization of Eric Lund’s point – the number of “fundamental” units (meters, kilograms, seconds, etc) in physics is very
arbitrary, and this arbitrariness affects the way constants are written.
5. #5 David March 11, 2011
“(19…) it’s a much larger number than most successful physical theories.”
It’s a much smaller number of free parameters than any other theory that explains the same huge set of data.
Great post! I don’t comment often, but I enjoy your blog. thanks.
6. #6 Joshua Zucker March 23, 2011
I think it’s sensible to say that Maxwell’s equations have two free parameters. If gravity has one, there must be two for electromagnetism, namely the strength of electric forces and the strength
of magnetic forces. If you’re making these things appear to go away, you’re just stuffing the free parameter into your definition of one of the units or another.
|
{"url":"http://scienceblogs.com/builtonfacts/2011/03/11/on-free-parameters/","timestamp":"2014-04-20T08:27:37Z","content_type":null,"content_length":"50274","record_id":"<urn:uuid:8d08fa28-451b-4fae-b85d-153754e6403e>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ridgefield, NJ Science Tutor
Find a Ridgefield, NJ Science Tutor
...I'm committed to helping my students improve, and I'm always easy to reach by phone or by email with any questions that pop up in between lessons. Imagine the doors you can unlock with a
higher test score. My students have seen their scores rise, on average, just over 400 points, and they've been admitted to prestigious schools--including Harvard, Columbia, Georgetown, and NYU.
10 Subjects: including ACT Science, SAT math, SAT reading, SAT writing
...I am an economic historian and am currently working on a number of papers for presentation at conferences. I am equally familiar with American and global history. I have taught chemistry
privately and in college for over thirty years and have BS and MS degrees in the subject, as well as R&D experience.
50 Subjects: including organic chemistry, physics, chemistry, ACT Science
...I have been a chemistry teacher for 17 years at very well-regarded Ridgewood High School. I have tutored all levels of chemistry and physics, including AP level. I worked for many years
teaching SAT math prep classes for both Huntington Learning Center and Marmont Academics, and have worked pri...
7 Subjects: including physics, ACT Math, algebra 1, physical science
...I also have tutored for 22 years to students in grades 1-6 in Science, Social Studies, Reading, Math, and English. I have obtained "A's" at Baruch College in Educational Psychology and
Developmental Psychology. I will be able to tutor K-6 students in the Common Core New York State Curriculum.
41 Subjects: including biology, chemistry, psychology, physics
...I have a bachelor of arts in psychology from Kean University in Union NJ and have worked for many years as a psychiatric nurse in various healthcare institutions. If you need a tutor for your
studies in psychology I can help you succeed. I have practiced as a Licensed Practical Nurse for the state of New Jersey since my graduation in 1985.
21 Subjects: including biology, Microsoft Word, Microsoft PowerPoint, reading
|
{"url":"http://www.purplemath.com/ridgefield_nj_science_tutors.php","timestamp":"2014-04-16T19:27:46Z","content_type":null,"content_length":"24300","record_id":"<urn:uuid:f3c8e64d-a894-4c42-8e62-c67c81e9a55b>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
|
May 2000
By Bruce Wampler
Accountants frequently encounter questions regarding installment loans, such as how changing the interest rate, the amount borrowed, or the term of the loan will affect the payment amount. Any
handheld financial calculator can easily perform this type of "what-if" analysis. Sometimes, however, accountants prefer to view a complete amortization table for the loan. The amortization table
enables the user to easily ascertain, for example, the projected balance of the loan on a particular future date, or the amount of interest that will be paid in a given year. Other professionals,
such as attorneys, bankers, or realtors, often need to generate amortization tables as well.
A flexible spreadsheet template can be used to analyze installment loans with equally spaced payments of equal amount beginning at the end of period one, such as an ordinary annuity. The key feature
of the spreadsheet is its use of "IF" formulas that automatically adjust the visible output of the program to the proper length based on the total number of payments and eliminate the need to copy
formulas to additional cells or delete unnecessary rows. The result is a clean output, regardless of the term of the loan.
The Exhibit is a partial amortization table that shows how the spreadsheet output would appear. Row and column headings are included for discussion but would probably be omitted from the actual
output. With the exception of the five variables provided by the user (cells F3-F7), all of the other cells in the worksheet contain formulas (the shaded cells) or labels. The values in cells F3-F7 are for illustration purposes.
User Inputs
The first user input is the date of the loan (cell F3). The first loan payment is assumed to occur one period following the loan date, and subsequent payments are made on the same day of the month
(although a payment does not have to occur every month). This cell should be formatted using the desired date format.
The user would also enter the loan amount (cell F4) and the annual interest rate (cell F5). If the percentage format is used for cell F5, Excel 97 will interpret the interest rate properly whether or
not it is in decimal form. The term of the loan in years should be entered in cell F6.
The number of payments per year, which determines the compounding frequency, is entered in cell F7. This value must equal one (annual payments), two (semiannual payments), three (payments every four
months), four (quarterly payments), six (bimonthly payments), or 12 (monthly payments). The worksheet will not work properly for other types of payments, such as biweekly.
Cell Formulas
The formulas in this worksheet are designed for Microsoft Excel 97. (Understanding the formulas will allow the user to modify the program to work with other versions of Excel and other spreadsheet
programs.) Cell references are integral to the formulas; adding or deleting a row or column means that cell references have to be modified.
Cell F9. =F6*F7. This calculates the total number of payments that will be made, and is referenced in other cell formulas.
Cell F10. =EDATE(F3,F6*12). The EDATE function calculates the final payment date by adding the term of the loan (in months) to the date the loan was made. This cell must have a date format to display
appropriately. If the EDATE function does not work properly, it may be necessary to run the Excel 97 setup program and install the Analysis ToolPak. After installation, enable the Analysis ToolPak
using the Add-Ins command on the Tools menu. Alternatively, row 10 may be omitted without any negative impact on the remainder of the worksheet.
Cell F11. =C15*F9-F4. This calculates the total interest for the life of the loan and is for informational purposes only (the output of this formula will be correct once the formula in cell C15 is entered).
Cell B14. =F3. This copies the loan date input by the user.
Cell F14. =F4. This copies the amount of the loan input by the user.
The most complicated formulas are in row 15, but once these are entered, the spreadsheet is substantially complete. All that remains is to copy these formulas to other cells. The best way to do this
is to copy downward through row 374, which will accommodate loans of up to 360 payments. Using the IF function instructs the spreadsheet to display the results of the calculation only if it is
appropriate for a particular loan; otherwise, a blank cell will appear. For example, the inputs in the Exhibit would result in an amortization table with 36 payments; changing the number of years to
four would automatically increase the visible output of the worksheet to 48 payments. Specific formulas are as follows:
Cell A15. =IF(A14<F$9,A14+1,""). If the amortization table is not yet complete, this formula will add another row; otherwise, the cell will appear blank.
Cell B15. =IF(A15="","",EDATE(B14,12/F$7)). This formula will result in the current payment date unless no payment number is in column A, in which case the visible output of this formula (and those
in columns CF as well) will be a blank cell. Again, enable the EDATE function and give the cells a date format for the output to appear as desired. If problems with the EDATE function occur,
omitting all of the formulas in column B will leave the remainder of the worksheet unaffected.
Cell C15. =IF(A15="","",ROUND(PMT(F$5/F$7,F$9,-F$4),2)). This calculates the amount of the periodic payment. Because actual loan payments are not made in fractional cents, Excel automatically rounds
the amount to two decimal places. Due to rounding, the loan balance following the last payment will probably not be exactly zero.
Cell D15. =IF(A15="","",F14*(F$5/F$7)). This calculates the interest associated with each payment based on the previous loan balance and the periodic interest rate.
Cell E15. =IF(A15="","",C15-D15). This calculates the portion of each payment to be applied to principal by subtracting the interest in column D from the payment amount.
Cell F15. =IF(A15="","",F14-E15). This calculates the new loan balance by subtracting the principal reduction from the previous loan balance.
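As a rough cross-check of the worksheet logic (this is not part of the article, and the loan figures below are placeholders, since the Exhibit's actual inputs are not reproduced in the text), the same schedule can be generated in a few lines of Python:
# Placeholder inputs standing in for cells F4-F7: a $20,000 loan at 9% for 3 years, paid monthly.
loan, annual_rate, years, per_year = 20000.00, 0.09, 3, 12
rate = annual_rate / per_year                                 # periodic rate, F5/F7
n = years * per_year                                          # total payments, cell F9
payment = round(loan * rate / (1 - (1 + rate) ** -n), 2)      # PMT(), rounded as in cell C15
balance = loan
for k in range(1, n + 1):
    interest = balance * rate                                 # column D
    principal = payment - interest                            # column E
    balance -= principal                                      # column F
    print(k, payment, round(interest, 2), round(principal, 2), round(balance, 2))
# As the article notes, rounding the payment means the final balance is close to, but not exactly, zero.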
Other Tips
The spreadsheet discussed in this article can be downloaded from www.cpaj.com/down.htm in Excel 97 format. To use the spreadsheet, open the file, enter the desired inputs, and view or print the
resulting table. To avoid inadvertently altering any formulas in the permanent worksheet, close the file without saving after each use or save it under a new name.
Because the amortization table may require several pages to print, column headings should be repeated at the top of each page by selecting Page Setup from the File menu, clicking on the Sheet tab,
and entering the following in the Print Titles section: "Rows to repeat at top: $13:$13"; "Columns to repeat at left: $A:$F."
Finally, set the print area by placing the cursor over cell A1, clicking and holding the left mouse button, and dragging the mouse until all visible output is highlighted. Then, choose Print Area
from the File menu and click on Set Print Area before printing in the usual manner. If the print area is not defined, Excel will print out all cells containing labels or formulas, even if the cells
have no visible output. For relatively short amortization tables, the result will be several pages of unnecessary output containing only column headings. *
Bruce Wampler, CPA, is an assistant professor of accounting at the University of Louisiana at Monroe. He can be reached at acwampler@ulm.edu.
|
{"url":"http://www.nysscpa.org/cpajournal/2000/0500/departments/d58200a.htm","timestamp":"2014-04-16T18:06:01Z","content_type":null,"content_length":"12827","record_id":"<urn:uuid:dd921dda-6134-4d66-8664-7447ef11c77d>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is the extension of the Abel-Jacobi map to the smooth locus of the minimal regular model of a curve an immersion?
Let $S$ be the spectrum of a discrete valuation ring with generic point $\eta$. Let $C/\eta$ be a smooth connected curve with an $\eta$-valued point, and let $\mathcal{C}/S$ be the smooth locus of
the minimal proper regular model of $C$ over $S$. Let $N/S$ denote the Neron model of the Jacobian of $C_\eta$, and let $\alpha:C \rightarrow N_\eta$ denote the Abel-Jacobi map.
Now by the Neron mapping property, we obtain a (unique) extension of $\alpha$ to $\overline{\alpha}:\mathcal{C} \rightarrow N$.
Question: Is the map $\overline{\alpha}$ necessarily an immersion?
If the genus is 1, we are OK! In general, I cannot think of a reason why this should be true, but I also cannot think of a counterexample.
I started thinking about this question from the point of view of Serre's book `Algebraic groups and class fields', ie as a question about line bundles on singular curves. I wanted to use such a
result in my thesis, but in the end found an alternative approach. However, I am still curious...
Thank you for your time.
ag.algebraic-geometry nt.number-theory
1 Answer
See my article:
Best regards, Bas Edixhoven.
P.S. Thanks to Liu Qing who drew my attention to this.
Thanks very much, I am really enjoying the paper. Regarding open immersions, in the final corollary you show that the morphism discussed above is proper iff all double points of the
geometric special fibre are non-disconnecting. I guess an easy example of (one direction of) this is to take a genus 2 curve whose special fibre is a pair of elliptic curves meeting
transversely at a single point; then the Neron model of the Jacobian is proper. Is there a reason this map cannot be an immersion (ie an open immersion followed by a closed immersion)? I
am sorry if this is clear from your paper, I... – David Holmes Nov 27 '11 at 10:07
...haven't had time to read all of it yet. Thanks also to Liu Qing! – David Holmes Nov 27 '11 at 10:08
|
{"url":"http://mathoverflow.net/questions/80971/is-the-extension-of-the-abel-jacobi-map-to-the-smooth-locus-of-the-minimal-regul","timestamp":"2014-04-16T22:27:58Z","content_type":null,"content_length":"52206","record_id":"<urn:uuid:31f1c757-3c81-4c5a-b360-6d6e55f4670b>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Boolean operations of 2-manifolds through vertex neighborhood classification
Results 1 - 10 of 15
"... CGA shape, a novel shape grammar for the procedural modeling of CG architecture, produces building shells with high visual quality and geometric detail. It produces extensive architectural
models for computer games and movies, at low cost. Context sensitive shape rules allow the user to specify inte ..."
Cited by 130 (11 self)
CGA shape, a novel shape grammar for the procedural modeling of CG architecture, produces building shells with high visual quality and geometric detail. It produces extensive architectural models for
computer games and movies, at low cost. Context sensitive shape rules allow the user to specify interactions between the entities of the hierarchical shape descriptions. Selected examples demonstrate
solutions to previously unsolved modeling problems, especially to consistent mass modeling with volumetric shapes of arbitrary orientation. CGA shape is shown to efficiently generate massive urban
models with unprecedented level of detail, with the virtual rebuilding of the archaeological site of Pompeii as a case in point.
- Proceedings of the ACM Symposium on Solid Modeling , 1999
"... Many solid modeling construction techniques produce non-manifold r-sets (solids). With each non-manifold model N we can associate a family of manifold solid models that are infinitely close to N
in the geometric sense. For polyhedral solids, each non-manifold edge of N with 2k incident faces will be ..."
Cited by 36 (17 self)
Many solid modeling construction techniques produce non-manifold r-sets (solids). With each non-manifold model N we can associate a family of manifold solid models that are infinitely close to N in
the geometric sense. For polyhedral solids, each non-manifold edge of N with 2k incident faces will be replicated k times in any manifold model M of that family. Furthermore, some non-manifold
vertices of N must also be replicated in M, possibly several times. M can be obtained by defining, in N, a single adjacent face TA(E,F) for each pair (E,F) that combines an edge E and an incident
face F. The adjacency relation satisfies TA(E,TA(E,F))=F. The choice of the map A defines which vertices of N must be replicated in M and how many times. The resulting manifold representation of a
non-manifold solid may be encoded using simpler and more compact data-structures, especially for triangulated model, and leads to simpler and more efficient algorithms, when it is used instead of a
non-manifold repre...
- In Proceedings of Pacific Graphics , 2000
"... In this paper, we present a new paradigm that allows dynamically changing the topology of 2-manifold polygonal meshes. Our new paradigm always guarantees topological consistency of polygonal
meshes. Based on our paradigm, by simply adding and deleting edges, handles can be created and deleted, holes ..."
Cited by 11 (4 self)
In this paper, we present a new paradigm that allows dynamically changing the topology of 2-manifold polygonal meshes. Our new paradigm always guarantees topological consistency of polygonal meshes.
Based on our paradigm, by simply adding and deleting edges, handles can be created and deleted, holes can be opened or closed, polygonal meshes can be connected or disconnected. These edge insertion
and edge deletion operations are highly consistent with subdivision algorithms. In particular, these operations can be easily included into a subdivision modeling system such that the topological
changes and subdivision operations can be performed alternatively during model construction. We demonstrate practical examples of topology changes based on this new paradigm and show that the new
paradigm is convenient, effective, efficient, and friendly to subdivision surfaces. 1
- In Fifth Symposium on Solid Modeling and Applications , 1999
"... We describe the design and implementation of a coherent sweep plane slicer, built on top of a topological data structure, which "slices" a tessellated 3-D CAD model into horizontal, 2.5-D layers
of uniform thickness for input to layered manufacturing processes. Previous algorithms for slicing a 3-D ..."
Cited by 9 (3 self)
We describe the design and implementation of a coherent sweep plane slicer, built on top of a topological data structure, which "slices" a tessellated 3-D CAD model into horizontal, 2.5-D layers of
uniform thickness for input to layered manufacturing processes. Previous algorithms for slicing a 3-D b-rep into the layers that form the process plan for these machines have treated each slice
operation as an individual intersection with a plane, which is needlessly inefficient given the significant coherence between the finely spaced slices. An additional shortcoming of many existing
slicers that we address is a lack of robustness when dealing with non-manifold geometry. Our algorithm exploits both geometric and topological inter-slice coherence to output clean slices with
explicit nesting of contours. Keywords: rapid prototyping, computational geometry, topology, slicing, .STL format, CAD/CAM 1 Introduction Designers who want to make prototypes of solid
three-dimensional parts directly ...
, 1998
"... Set membership classification algorithms visit nodes of a CSG tree through a recursive divide-and-conquer process, which stores intermediate results in a stack, whose depth equals the height, H,
of the tree. During this process, the candidate sets is usually subdivided into uniform cells, whose inte ..."
Cited by 8 (2 self)
Set membership classification algorithms visit nodes of a CSG tree through a recursive divide-and-conquer process, which stores intermediate results in a stack, whose depth equals the height, H, of
the tree. During this process, the candidate sets is usually subdivided into uniform cells, whose interior is disjoint from primitives' boundaries. Cells inside the CSG object are identified by
combining the binary results of classifying them against the primitives. In parallel systems, which allocate a different process to each leaf of the tree, and in algorithms that classify large
collections of regularly spaced candidate sets (points, pixels, voxels, rays, or cross-sections) against the primitives using forward differences, a separate stack is associated with each candidate
or cell. Our new representation for CSG trees, called Blist, distributes the merging operation to the primitives and reduces the storage requirement for each cell to log(H+1) bits. Blist can
represent any Boolean expr...
- COMPUTER-AIDED DESIGN , 2001
"... This paper presents a new approach for reconstructing solids with planar, quadric and toroidal surfaces from three-view engineering drawings. By applying geometric theory to 3-D reconstruction,
our method is able to remove restrictions placed on the axes of curved surfaces by existing methods. The m ..."
Cited by 7 (0 self)
This paper presents a new approach for reconstructing solids with planar, quadric and toroidal surfaces from three-view engineering drawings. By applying geometric theory to 3-D reconstruction, our
method is able to remove restrictions placed on the axes of curved surfaces by existing methods. The main feature of our algorithm is that it combines the geometric properties of conics with affine
properties to recover a wider range of 3-D edges. First, the algorithm determines the type of each 3-D candidate conic edge based on its projections in three orthographic views, and then generates
that candidate edge using the conjugate diameter method. This step produces a wire-frame model that contains all candidate vertices and candidate edges. Next, a maximum turning angle method is
developed to ®nd all the candidate faces in the wire-frame model. Finally, a general and efficient searching technique is proposed for ®nding valid solids from the candidate faces; the technique
greatly reduces the searching space and the backtracking incidents. Several examples are given to demonstrate the efficiency and
, 2000
"... Solid freeform fabrication (SFF) refers to a class of technologies used for making rapid prototypes of 3-D parts. With these processes, a triangulated boundary representation of the CAD model of
the part is "sliced" into horizontal 2.5-D layers of uniform thickness that are successively deposited, ..."
Cited by 7 (0 self)
Add to MetaCart
Solid freeform fabrication (SFF) refers to a class of technologies used for making rapid prototypes of 3-D parts. With these processes, a triangulated boundary representation of the CAD model of the
part is "sliced" into horizontal 2.5-D layers of uniform thickness that are successively deposited, hardened, fused, or cut, depending on the particular process, and attached to the layer beneath.
The stacked layers form the final part. The current de facto standard interface to these machines, STL, has many shortcomings. We have developed a new "Solid Interchange Format" (SIF) for use as a
digital interface to SFF machines. SIF includes constructs for specifying surface and volume properties, precision information, and transmitting unevaluated Boolean trees. We have also develope...
, 2000
"... In this paper we describe a system, BOOLE, that generates the boundary representations (B-reps) of solids given as a CSG expression in the form of trimmed B'ezier patches. The system makes use
of techniques from computational geometry, numerical linear algebra and symbolic computation to generate ..."
Cited by 5 (2 self)
Add to MetaCart
In this paper we describe a system, BOOLE, that generates the boundary representations (B-reps) of solids given as a CSG expression in the form of trimmed B'ezier patches. The system makes use of
techniques from computational geometry, numerical linear algebra and symbolic computation to generate the B-reps. Given two solids, the system first computes the intersection curve between the two
solids using our surface intersection algorithm. Using the topological information of each solid, it computes various components within each solid generated by the intersection curve and their
connectivity. The component classification step is performed by ray-shooting. Depending on the Boolean operation performed, appropriate components are put together to obtain the final solid. We also
present techniques to parallelize this system on shared memory multiprocessor machines. The system has been successfully used to generate B-reps for a number of large industrial models including
parts of ...
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=846870","timestamp":"2014-04-18T20:16:43Z","content_type":null,"content_length":"37138","record_id":"<urn:uuid:a838eae8-8c8e-4745-9804-a5e2ed829958>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Problem K. 205.
K. 205. Jack and Jill are going to the gingerbread house 20 km from their home. The two of them have one bicycle together. The bicycle can only bear one rider at a time. They decide that Jack will
walk first and Jill will go by bike to some point along the way where she puts the bike down and continues on foot. When Jack reaches that point, he will get on the bike and cycle to the gingerbread
house. Jack's speed is 5 km/h on foot and 12 km/h by bike. Jill walks at 4 km/h and bikes at 10 km/h. If they start out together, how many kilometers should Jill cover by bike so that they get to the
gingerbread house at the same time?
(6 points)
This problem is for grade 9 students only.
Deadline expired.
Google Translation (Sorry, the solution is published in Hungarian only.)
Megoldás. Ha Juliska x km-t biciklizik, és 20–x km-t gyalogol, akkor x km-t gyalogol, és 20–x km-t biciklizik, akkor ő x=12,5 km.
|
{"url":"http://www.komal.hu/verseny/feladat.cgi?a=feladat&f=K205&l=en","timestamp":"2014-04-18T18:10:40Z","content_type":null,"content_length":"16758","record_id":"<urn:uuid:a7bf282a-b233-411c-9739-b233acfcd6d4>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Composition Basics - GNU Emacs Calc Manual
7.8.10.1 Composition Basics
Compositions are generally formed by stacking formulas together horizontally or vertically in various ways. Those formulas are themselves compositions. TeX users will find this analogous to TeX's
“boxes.” Each multi-line composition has a baseline; horizontal compositions use the baselines to decide how formulas should be positioned relative to one another. For example, in the Big mode
a + b
17 + ------
the second term of the sum is four lines tall and has line three as its baseline. Thus when the term is combined with 17, line three is placed on the same level as the baseline of 17.
Another important composition concept is precedence. This is an integer that represents the binding strength of various operators. For example, ‘*’ has higher precedence (195) than ‘+’ (180), which
means that ‘(a * b) + c’ will be formatted without the parentheses, but ‘a * (b + c)’ will keep the parentheses.
The operator table used by normal and Big language modes has the following precedences:
_ 1200 (subscripts)
% 1100 (as in n%)
! 1000 (as in !n)
mod 400
+/- 300
!! 210 (as in n!!)
! 210 (as in n!)
^ 200
- 197 (as in -n)
* 195 (or implicit multiplication)
/ % \ 190
+ - 180 (as in a+b)
| 170
< = 160 (and other relations)
&& 110
|| 100
? : 90
!!! 85
&&& 80
||| 75
:= 50
:: 45
=> 40
The general rule is that if an operator with precedence ‘n’ occurs as an argument to an operator with precedence ‘m’, then the argument is enclosed in parentheses if ‘n < m’. Top-level expressions
and expressions which are function arguments, vector components, etc., are formatted with precedence zero (so that they normally never get additional parentheses).
For binary left-associative operators like ‘+’, the righthand argument is actually formatted with one-higher precedence than shown in the table. This makes sure ‘(a + b) + c’ omits the parentheses,
but the unnatural form ‘a + (b + c)’ keeps its parentheses. Right-associative operators like ‘^’ format the lefthand argument with one-higher precedence.
The cprec function formats an expression with an arbitrary precedence. For example, ‘cprec(abc, 185)’ will combine into sums and products as follows: ‘7 + abc’, ‘7 (abc)’ (because this cprec form has
higher precedence than addition, but lower precedence than multiplication).
A final composition issue is line breaking. Calc uses two different strategies for “flat” and “non-flat” compositions. A non-flat composition is anything that appears on multiple lines (not counting
line breaking). Examples would be matrices and Big mode powers and quotients. Non-flat compositions are displayed exactly as specified. If they come out wider than the current window, you must use
horizontal scrolling (< and >) to view them.
Flat compositions, on the other hand, will be broken across several lines if they are too wide to fit the window. Certain points in a composition are noted internally as break points. Calc's general
strategy is to fill each line as much as possible, then to move down to the next line starting at the first break point that didn't fit. However, the line breaker understands the hierarchical
structure of formulas. It will not break an “inner” formula if it can use an earlier break point from an “outer” formula instead. For example, a vector of sums might be formatted as:
[ a + b + c, d + e + f,
g + h + i, j + k + l, m ]
If the ‘m’ can fit, then so, it seems, could the ‘g’. But Calc prefers to break at the comma since the comma is part of a “more outer” formula. Calc would break at a plus sign only if it had to, say,
if the very first sum in the vector had itself been too large to fit.
Of the composition functions described below, only choriz generates break points. The bstring function (see Strings) also generates breakable items: A break point is added after every space (or group
of spaces) except for spaces at the very beginning or end of the string.
Composition functions themselves count as levels in the formula hierarchy, so a choriz that is a component of a larger choriz will be less likely to be broken. As a special case, if a bstring occurs
as a component of a choriz or choriz-like object (such as a vector or a list of arguments in a function call), then the break points in that bstring will be on the same level as the break points of
the surrounding object.
|
{"url":"http://www.gnu.org/software/emacs/manual/html_node/calc/Composition-Basics.html","timestamp":"2014-04-18T06:49:44Z","content_type":null,"content_length":"8551","record_id":"<urn:uuid:50173489-6ed7-4926-8154-aa45c05bfd7c>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
|
1978 Cryptosystem Resists Quantum Attack
15290254 story
Posted by
from the built-to-last dept.
KentuckyFC writes
"In 1978, the CalTech mathematician Robert McEliece developed a cryptosystem based on the (then) new idea of using asymmetric mathematical functions to create different keys for encrypting and
decrypting information. The security of these systems relies on mathematical steps that are easy to make in one direction but hard to do in the other. Today, popular encryption systems such as the
RSA algorithm use exactly this idea. But in 1994, the mathematician Peter Shor dreamt up a quantum algorithm that could factorise much faster than any classical counterpart and so can break these
codes. As soon as the first decent-sized quantum computer is switched on, these codes will become breakable. Since then, cryptographers have been hunting for encryption systems that will be safe in
the post quantum world. Now a group of mathematicians have shown that the McEliece encryption system is safe against attack by Shor's algorithm and all other known quantum algorithms. That's because
it does not depend on factorisation but gets its security from another asymmetric conundrum known as the hidden subgroup problem which they show is immune to all known quantum attacks."
This discussion has been archived. No new comments can be posted.
1978 Cryptosystem Resists Quantum Attack
Comments Filter:
• Re:Timeless saying applies here... (Score:3, Insightful)
by Jack9 (11421) on Wednesday August 18, 2010 @04:49PM (#33293824)
> If it can be engineered, it can be reverse-engineered.
How does that apply to this article, in any way?
Parent Share
• by kalirion (728907) on Wednesday August 18, 2010 @04:51PM (#33293856)
If it can be engineered, it can be reverse-engineered.
That only works for "security through obscurity" type of problems. A good encryption should not be "solvable" - it must be brute forced. The question is how expensive the brute force method is in
processing power and time.
Parent Share
• by da cog (531643) on Wednesday August 18, 2010 @04:54PM (#33293882)
It doesn't apply to this article. The way that one typically breaks a cryptosystem is not by reverse engineering (which is not even meaningful here, given that the algorithm is already completely
open), but by finding a clever new way to solve the mathematics underlying the system using less information than the designers of the system had thought was needed.
Parent Share
• Re:conspiracy theory (Score:5, Insightful)
by woolpert (1442969) on Wednesday August 18, 2010 @05:04PM (#33294048)
I wonder if "THEY" already have one of these quantum computers and are keeping a lid on it so they can snoop on the PGP of our enemies. Would it be possible to develop one of these in
If THEY bought out 50% of the researchers in the field, without arousing suspicion amongst those who turned down the offer, THEY would only have a 50% chance of having one first.
More realistically,
If THEY bought out a significant percentage of the researchers in the field, without arousing suspicion amongst those who turned down the offer, THEY would likely only be a few months / years (at
best) ahead.
And since the outlook on the QC front is rather bleak (in terms of a functional QC with any real power) the odds are strongly in favor of THEY not having squat.
Especially in today's world it isn't like top researchers are fragmented and isolated. In the past it was possible for a governmental organization to use its greater vision to collect isolated
researchers and be the first to introduce them to each other, magnifying their individual efforts. Today everybody who is anybody in these fields is at least aware of the others, if not following
Parent Share
• by Frequency Domain (601421) on Wednesday August 18, 2010 @05:12PM (#33294132)
Actually, with really hard-core crypto systems there are three traditional ways to break them: 1) rubber hose; 2) dumpster diving; or 3) box of chocolates/bouquet of roses.
Parent Share
• Re:conspiracy theory (Score:3, Insightful)
by Anubis IV (1279820) on Wednesday August 18, 2010 @05:39PM (#33294434)
Of course, your point doesn't consider the fact that the information sharing only goes one way. If THEY come up with something new, it's not always put back out into the field where it can be
worked on by others and built upon. If THEY then find something new, THEY can be the first and only ones building upon it, and THEY do not have to sacrifice the ability to build on everything
else that is coming out in the field as well. If that something new is a breakthrough concept, then THEY may be able to build a lead of years or decades. Of course, as you pointed out,
researchers tend to be much more aware of what is going on these days than in the past, due to the speed and ease of communication, which reduces both the likelihood of THEM getting a
breakthrough first and also reduces the time that THEY will likely be the only ones exclusively holding that knowledge. Despite that, I seem to recall hearing stories of various encryption ideas
the NSA developed in the '70s and '80s which weren't developed in the open until the late '90s and early 2000s (sorry, no citation).
Parent Share
• Re:New assymetric algorithms needed? (Score:3, Insightful)
by FrangoAssado (561740) on Wednesday August 18, 2010 @07:18PM (#33295370)
What you're describing is a NP-complete problem -- assuming P != BQP != NP. But I'm guessing that you already know that :)
Still, it's still very hard to build a cryptosystem that exploits the hardness of solving NP-complete problems. The main problem is, NP-completeness only guarantees that some instance of the
problem is hard, it says nothing about a specific instance. So, for instance, if you have a specific 3-SAT formula, there's no guarantee someone can't come up with a solution for it in polynomial
That being said, there are some candidates for a cryptosystem based on NP-completeness. Check for example the McEliece cryptosystem [wikipedia.org].
Parent Share
• Re:Timeless saying applies here... (Score:1, Insightful)
by Anonymous Coward on Wednesday August 18, 2010 @07:52PM (#33295650)
This exchange is illustrated here:
Parent Share
Related Links Top of the: day, week, month.
|
{"url":"http://science.slashdot.org/story/10/08/18/1958226/1978-Cryptosystem-Resists-Quantum-Attack/insightful-comments","timestamp":"2014-04-16T10:37:07Z","content_type":null,"content_length":"94717","record_id":"<urn:uuid:ad37b2fb-b99b-47b3-9e9f-d50a8db61a59>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts about Dynamic Programming on All About Algorithms
Posts Tagged ‘Dynamic Programming’
Problem Statement: You have been given N players, with a corresponding strength value S for each player (Minimum strength being 0 and maximum being K). You are supposed to divide them into two teams
such that the difference in the total strengths of both teams is minimized.
This is an everyday problem that we tackle while forming equally matched teams. You might try a greedy approach here but in many cases it will fail. It is actually known as a Balanced Partition
problem and utilizes the method we used while solving the integer knapsack (1/0) problem. Great! So you know half of it! But let us give it a quick look.
We defined a term P(i,j) which specified that I would find a subset of elements in the first ‘i’ elements which sum up to ‘j’. Now we can define the term recursively, as,
P(i,j) = maximum {P(i-1,j), P(i-1, j-Si)}
While you can refer to the link given to the earlier post, what this formula specifies is that we either have a subset that sums up to ‘j’ in the first ‘i-1′ elements, or that we have a subset in
the first ‘i-1′ elements whose sum when added to the ith value Si gives us ‘j’.
Now that we are done refreshing our memory, we’ll move on to the new problem.
We have now generated a 2D array giving us the possibility of a sum (j) being formed using some (<i) elements at each step. Now we need to divide this into 2 sets Team 1 and Team 2 such that the
different of their totals is minimized.
For this, we’ll first find the mean value of the list of N elements. If a subset’s sum hits this particular value, we have found a solution with difference 0. Otherwise, we keep trying to find a
solution where the sum of strengths of teams is closest to the mean. For this, we can once again check for all ‘i’<=’Mean’,
Minimum {Mean-i : P(n,i)=1}
where 1 indicates we have such a subset. What we are doing is that we are checking for every value less than the Mean it can be formed from any subset of the N elements. Checking this one by one,
we land up with the required value closest to the Mean.
There it is. You have your solution! Divide you teams and game on! :)
(Retrieving the values for the subsets is as simple as maintaining back-pointers while creating the 2D array and filling it up.)
Doubts? Ask away!
PS: Need to be spoonfed (euphemism for “Me.Want.Code.”) ? Write it yourself! :P If you just can’t, drop me an email and I’ll mail it to you!
I hope you remember the US Pizza problem we talked about a while back. That wasn’t a 1/0 problem but the one we will discuss now certainly is.
Problem Statement: Santa Claus is coming to your city and needs to figure out the gifts he has to take. Since North Pole is a long way off, he can only carry gifts weighing a total of ‘C’ in his
bag. He has a total of ‘N’ gifts to select from. You are supposed to help him select the gifts in a way that would use the capacity to the greatest use, i.e., the total weight of the gifts
selected should be closest to C and the most number of gifts should be selected.
This is a dynamic programming problem and you should recognize it as soon as you see that the problem has optimal substructures in the fact that its solution can be built starting from 1 to i gifts.
Also, this is a 1/0 knapsack problem since you can either select a gift (1) or leave it behind (0).
Let us define a term M(i,j) as the optimal way of selecting gifts in a way such that selecting 1..i gifts brings the total weight up to exactly j. So your value of i varies from 1 to N and the
value of j varies from 0 to C. Hence, the time complexity of the problem is equal to O(NC). (Basically, M(i,j) in a 2D array stores the number of gifts)
At every point, there are two ways in which the value of M(i,j) can be determined.
1. Check if there was a subset of gifts from number 1 to i-1 which formed a subset with total weight equal to j. You have the value M(i-1,j) as one of your candidate values.
2. Check the M(i-1, j-Wi) value such that adding the ith gifts weight gives us j. (i.e, weight of ith gift is Wi). This value plus 1 (because we pick up this ith gift as well) is another
candidate value.
Now you simply have to take the maximum of these two values and place the value at M(i,j). In this way, you end up finding M(n,C) in the generated 2D array which is the required answer that our Santa
Claus requires.
Here, you should notice how we built the solution for the problem in a manner similar to what we might do for a balanced partition problem. I will discuss this problem in the next post.
Stay tuned for more Dynamic Programming solutions. :)
Be patient and read every line. You’ll end up understanding a great problem of Dynamic Programming.
Problem Statement: Parenthesize a Matrix Chain multiplication matrix in such a way that the number of multiplications required are minimized.
First things first, how many multiplications are involved in multiplying 2 matrices?
If your 1st matrix is of dimensions mxn and your second matrix is of dimensions nxq, you will need to perform mxnxq multiplications in this matrix multiplication.
So if your matrix multiplication chain is as follows,
A x B x C x D , we have the following scenarios,
A = 50 x 20
B = 20 x 1
C = 1 x 10
D = 10 x 100
So you see how we minimized the cost of multiplications there. Now we want to develop an algorithm for the same. You can check all possibilities but as you will realize that you will have very high
complexity with such a Brute Force method.
As we discussed a parenthesis problem in my previous post, your mind should jump to find an optimal substructure here. Now you can see all these parenthesizations as a binary tree with these matrices
as the leaves and the root element being the final product of all matrices. For example, you can view ((AxB)xC)xD) as,
Hopefully you understood everything till here. Now for a tree to be optimal, its subtrees should also be optimal. So here we come across our optimal substructure. Hence, subproblems are of the form,
(That is the representation of any one node in some binary tree except for the leaf nodes)
Thus we define this sub-problem as,
C ( i, j ) = minimum cost of multiplying
Size of this subproblem = j-i
Hence, minimum possible size is when i=j (single matrix), so C ( i, i) = 0
For j>i, we consider optimal subtree for C ( i, j ). Now, we can divide this from Ai to Ak and Ak to Aj for some k>i and k<j. The cost of the subtree is then the cost of these two partial products
plus the cost of combining these. So we land with the recurrence,
C (i, j) = min { C(i, k) + C(k+1, j) + m(i-1) x mk x mj }
where i<=k<=j.
Coding this is pretty simple now and I need not (and should not) spoonfeed it to you. All you need to do is generate this 2D matrix and another one which holds details about parenthesizations and
you’ll end up with your answer. :)
PS: If someone requires the code, sorry won’t give you that. If you need the algorithm, drop a comment or a mail. That I can write down.
Problem Statement: Given the number of matrices involved in a chain matrix multiplication, find out the number of ways in which you can parenthesize these matrices in the multiplication, i.e., you
have to find the number of ways you can introduce brackets in this chain matrix multiplication. (forming groups of 2).
For example,
A x B x C can be grouped in 2 ways,
(A x B) x C
A x (B x C)
You should remember that while matrix multiplication is not commutative, it is associative. Hence both the multiplications mean the same.
How do we find the number of such parenthesizations? It becomes simple when you realize that you can derive solutions for this problem using solutions to smaller subproblems which you calculated
earlier. Dynamic Programming :)
So for,
N=1, P(N) = 1
N=2, P(N) = 1 (easy to figure out)
N=3, P(N) = 2 (again easy)
So what we are doing here is finding out the previous solution and figuring out the different ways you can introduce one more matrix in the picture to find the next. So your function looks like,
I guess you understand how this is Dynamic Programming. And also the fact that this is a very simple permutations question. :) Not really a programming problem.
Why did I write this post? There is a genre of DP problems known as Optimal Matrix Chain multiplication. You are supposed to find the way (out of these P(n) ways) in which the number of
multiplications required will be the least. We will solve this problem in the next post.
Till then, adios! :)
Problem Statement : Given a DAG, find the shortest path to a particular point from the given starting point.
This one is a famous problem that involves dealing with graphs. I assume everyone knows what a graph is. If you don’t, this is not the place for you to be. Please visit this wiki link for knowing
more about Graphs. Let us get down to talking about DAGs (Directed Acyclic Graphs). Directed means that each edge has an arrow denoting that the edge can be traversed in only that particular
direction. Acyclic means that the graph has no cycles, i.e., starting at one node, you can never end up at the same node.
Now that we are clear with what a DAG is, let us think about the problem. If we have the following DAG, what kind of algorithm would you use to solve this problem.
One thing you can do is apply an O(N*N) algorithm and find the shortest distance by calculating all the distances from each node to every other node. As you can guess, a better method exists. You
should notice here that there is subproblem quality that exists here. If a particular node can be reached using two nodes, you can simply find the minimum of the values at the two nodes plus the cost
for reaching to the node in question. So you are extending your previous solutions. Yes, Dynamic Programming. :)
The reason you are able to find such an optimal substructure here is because a DAG can be linearized (Topologically sorted). For example, the above DAG is linearized as follows,
Now if we want to find the minimum distance from start S to A. All we need to do is see the minimum distances to reach the nodes from which A can be reached. In this case, these nodes are S and C.
Now the minimum of these minimum distances for previous nodes (till S=0, till C=2) added with the cost of reaching from these nodes (from S = 1, from C=4), is our answer.
So let me define the recursive relation for this case,
MinDist (S to A) = minimum ( MinDist(S to S) + Dist(S to A), MinDist(S to C) + Dist(C to A) )
Generalize this recurrence and you have a simple Dynamic Programming algorithm ready to solve the problem for you in linear time! Cheers! :)
In the previous post, I discussed a recursive method for finding the Edit Distance (once again, refer previous post for details) between two strings. Some of you might have noticed that the recursive
function (in this case) could just as easily be utilized in a Dynamic Programming method.
However, we would be increasing the space complexity by using a 2D array (as is the case with many Dynamic Programming problems). The time complexity would reduce to O( length of String1 * length of
String2 ). Since the whole logic has already been explained in the previous post, here is the algorithm.
int EditDistance( String1, String2 )
m = String1.length
n = String2.length
For i=0 , i<=m , i++
V[i][0] = i
For i=0 , i<=n , i++
V[0][i] = i
//cells in first row and column of temporary 2D array V set to corresponding number
For i=1 , i<=m , i++
For j=1 , j<=n , j++
If ( String1[i-1] == String2[j-1] )
V[i][j] = 1 + min( min(V[i][j-1], V[i-1][j]), V[i-1][j-1])
RETURN V[m][n] //last element of 2D array (last row, last column)
So the logic remains the same. The only difference is that we avoid the use of system stack and instead store previous results in a 2D array. Neat! :)
You must have heard of arrangement, but today we’ll talk about Derangement. The exact opposite. Derangement is the concept of arranging a set of given items in such a manner so that no object is in
its original place. For example, a question might say,
If you are given 4 cards with A, B, C, D written on them and they are placed in this specific order. Now you are supposed to count the number of ways these cards can be arranged so that no card is in
its original place.
The answer is 9. And the different arrangements are as follows:
BADC, BCDA, BDAC,CADB, CDAB, CDBA,DABC, DCAB, DCBA
So how do you go about solving such problems? Well, of course you use Derangement. So let us discuss how we count derangements. I’ll just pick up a short explanation from Wikipedia since I am too
lazy to explain it myself :P
Suppose that there are n persons numbered 1, 2, …, n. Let there be n hats also numbered 1, 2, …, n. We have to find the number of ways in which no one gets the hat having same number as his/her
number. Let us assume that first person takes the hat i. There are n − 1 ways for the first person to choose the number i. Now there are 2 options:
1. Person i does not take the hat 1. This case is equivalent to solving the problem with n − 1 persons n − 1 hats: each of the remaining n − 1 people has precisely 1 forbidden choice from among the
remaining n − 1 hats (i’s forbidden choice is hat 1).
2. Person i takes the hat of 1. Now the problem reduces to n − 2 persons and n − 2 hats
Now that you understand the concept, let us talk about counting the number of derangements. The number of derangements of an n-element set is called the subfactorial of n ( denoted as !n ). Now there
is a simple recurrence relation we can see from the explanation above,
!n = n * !(n-1) + (-1)^n
and, !n = (n-1) * (!(n-1)+!(n-2))
The formula for it is, (image sourced from AoPSWiki, once again laziness comes into play)
Now I hope you all understand Derangement and its huge set of applications! A very important concept for programmers and mathematicians alike. :)
Problem Statement: You have been given a triangle of numbers (as shown). You are supposed to start at the apex (top) and go to the base of the triangle by taking a path which gives the maximum sum.
You have to print this maximum sum at the end. A path is formed by moving down 1 row at each step (to either of the immediately diagonal elements)
The first method that comes to everyone’s mind is brute-force. But in a second or two, you would discard the idea knowing the huge time complexity involved in traversing each possible path and
calculating the sum. What we need is a smarter approach.
I hope you remember about the optimal substructure I talked about earlier relating to Dynamic Programming. Can you see one such optimal substructure here? You can build a solution as you drop down
each row. You can calculate the best path till the particular node in the triangle as you move down and finally you’ll have your maximum sum in the last row somewhere. Here is the algorithm :
Take another similar array (or your preferred data structure), let us say MAX_SUM[][]
For each ELEMENT in particular ROW and COLUMN
If ( ELEMENT is FIRST ELEMENT of ROW)
MAX_SUM[ROW][COLUMN] = ELEMENT + FIRST element of (ROW-1)
Else If (ELEMENT is LAST ELEMENT of ROW)
MAX_SUM[ROW][COLUMN] = ELEMENT + LAST element of (ROW-1)
MAX_SUM[ROW][COLUMN] = ELEMENT + maximum( element at [ROW-1][COLUMN-1], element at [ROW-1][COLUMN])
//recursive formula calculating max_sum at each point from all possible paths till that point
Once you have this array, you just need to find the maximum value in the last row of this new array and that will be your maximum sum! :)
For the example I took, the array MAX_SUM would look like, (and my answer would be 23)
What I am doing is for each element, checking the maximum of the possible paths to this node and adding the current elements value to it. If you still don’t understand, leave a comment and I’ll
explain each and every step in excruciating detail. :P
TASK FOR YOU: Find the elements of this path.
Problem Statement: You are given an array with integers (negative, positive, zero). You are supposed to find the length of the longest increasing subsequence in the array. The elements of the
subsequence are not necessarily contiguous.
Once again, as in the last problem, you cannot afford to try a brute force method and be called an even bigger fool. What you can do is look at the problem carefully and find the optimal
substructure. You can find the length of the longest increasing subsequence till each position and use this for the later positions.
The way it would work is, at each position ‘i’ in input array A, go through the positions j<i in array B and find the largest value for which A[i]>=A[j]. Now you put the value of B[i] = the value
you found + 1.
Did you understand what we did? Since we are supposed to find the length of the longest increasing subsequence, we found the element smaller than this whose corresponding value in array B (length
of longest increasing subsequence till that point) is the highest. This value + 1 would then be the length of the longest increasing subsequence till ‘i’.
For example, for the input array A as follows,
-1, 4, 5 ,-3, -2, 7, -11, 8, -2
We would get the array B as follows,
1, 2, 3, 1, 2, 4, 1, 5, 2
Once again, traversing this particular array B and finding the largest value would give us the length of the longest increasing subsequence. :)
If the solution isn’t apparent to anyone, please refer the video link, here,
Task for you : Find the elements that form this longest increasing subsequence. Pretty easy. :)
PS: Many variations of this problem exist (strictly increasing, decreasing, strictly decreasing). Also, it is posed in different ways, some of which you can find at (http://www.uvatoolkit.com) by
simply typing the phrase LIS as the search parameter. Enjoy! :)
This is one of the first examples of Dynamic Programming that I’ll take up on this blog. It is a very famous problem with multiple variations. The reader should note that in problems involving DP,
the concept is much more important than the actual problem. You should have a crystal clear understanding of the logic used so that you can use it in other problems which might be similar.
As we talked earlier in my (overview of Dynamic Programming), we need to find an optimal substructure (which is basically recursive in nature) in the problem so as to apply an iterative solution to
the problem. For this particular problem, we’ll keep building the solution as we traverse the array (which complies with our thinking of solving simpler sub-problems to solve the bigger complex
Problem Statement: You are given an array with integers (positive, negative, zero) and you are supposed to find the maximum contiguous sum in this array. Hence, you have to find a sub-array which
results in the largest sum.
Example, If the given array is,
5, -6, 7, 12, -3, 0, -11, -6
The answer would be, 19 (from the sub-array {7,12} )
If you go on to form all the possible sets and then finding the one with the biggest sum, your algorithm would have a mammoth-like time complexity. More importantly, you would be called a fool. :P So
we use dynamic programming and finish this in time O(n) and people will think we are smart. ;)
Assume your input array is called A. What you need to do is form another array, say B, whose each value ‘i’ is calculated by using the recursive formula,
Maximum(A[i], B[i-1]+A[i])
What you are doing effectively is that you are choosing whether to extend the earlier window or start a new one. You can do this since you are only supposed to select continuous elements as part
of your subsets.
The way B looks for the example I took earlier, is,
5, -1, 7, 19, 16, 16, 5, -1
So all you need to do now is traverse the array B and find the largest sum (since B has the optimized sums over various windows/subsets).
That should be easy to understand. If not, refer to video tutorial at:
Task for you : Find the elements that form this maximum contiguous sum. Pretty easy. :)
|
{"url":"http://allaboutalgorithms.wordpress.com/tag/dynamic-programming/","timestamp":"2014-04-20T14:28:06Z","content_type":null,"content_length":"107233","record_id":"<urn:uuid:cf103847-fbd7-4fa9-a6ce-3105db14cb67>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fairfax, VA Geometry Tutor
Find a Fairfax, VA Geometry Tutor
...Because of my major, I had to take upper level calculus classes. Math comes very easily to me now. I have been tutoring chemistry and algebra for the past few months, and my students' grades
have all raised from almost failing grades to at least a B+. Because of my students' progress, I am now considering becoming a teacher.
11 Subjects: including geometry, chemistry, calculus, algebra 1
...I love teaching and meeting new people. I am very patient and enjoy helping others improve their language skills. I have teaching experience in Chinese for ten years; and I can also help you
overcome the difficulties in learning Japanese because I used to be a Japanese learner too.
6 Subjects: including geometry, Japanese, Chinese, algebra 1
...All my students were so successful that the president of the college sought me out to learn what I was doing. Some of my favorite moments in teaching include: working with a fifth grader who
could not read and an eighth grade algebra student who had not mastered basic math. Both students were up to grade level by the end of the academic year.
32 Subjects: including geometry, reading, English, chemistry
...I tutored for 4 years during high school ranging from basic algebra through calculus. I tutored for 3 years during college ranging from remedial algebra through third semester calculus. I have
experience in Java (and other web programming) and Microsoft Excel.
13 Subjects: including geometry, calculus, GRE, SAT math
I would be happy to tutor your children in AP Calculus AB/BC, Multivariable Calculus, Precalculus, Trigonometry, Geometry, Algebra 1 and 2, Prealgebra and standard math tests, such as AP Calculus
AB/BC, SAT, PSAT, PRAXIS, GRE, TJ entrance. I provide three different services with different hourly rates: 1. One To One Instruction: ($29/hour) Class size is 2-3 students.
12 Subjects: including geometry, calculus, algebra 1, algebra 2
Related Fairfax, VA Tutors
Fairfax, VA Accounting Tutors
Fairfax, VA ACT Tutors
Fairfax, VA Algebra Tutors
Fairfax, VA Algebra 2 Tutors
Fairfax, VA Calculus Tutors
Fairfax, VA Geometry Tutors
Fairfax, VA Math Tutors
Fairfax, VA Prealgebra Tutors
Fairfax, VA Precalculus Tutors
Fairfax, VA SAT Tutors
Fairfax, VA SAT Math Tutors
Fairfax, VA Science Tutors
Fairfax, VA Statistics Tutors
Fairfax, VA Trigonometry Tutors
Nearby Cities With geometry Tutor
Annandale, VA geometry Tutors
Arlington, VA geometry Tutors
Bethesda, MD geometry Tutors
Burke, VA geometry Tutors
Centreville, VA geometry Tutors
Chantilly geometry Tutors
Fairfax Station geometry Tutors
Falls Church geometry Tutors
Herndon, VA geometry Tutors
Manassas, VA geometry Tutors
Mc Lean, VA geometry Tutors
Oakton geometry Tutors
Springfield, VA geometry Tutors
Vienna, VA geometry Tutors
Woodbridge, VA geometry Tutors
|
{"url":"http://www.purplemath.com/Fairfax_VA_geometry_tutors.php","timestamp":"2014-04-18T23:51:26Z","content_type":null,"content_length":"24126","record_id":"<urn:uuid:8b6ca007-ed6f-4603-95c6-91f6d6507a33>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math visualization: (x + 1)
You may remember this from high-school algebra (or perhaps earlier for some). Expand (x + 1)
using the
(x + 1)^2 = (x + 1)(x + 1)
= x^2 + x + x + 1
= x^2 + 2x + 1
A neat way to visualize this equality, and hopefully help remember the factorization of the resulting polynomial, is to look at how the pieces fit together.
When they're arranged in this way, it's easy to see that the pieces squeeze together to form a larger square that has a length of (x + 1) on a side, proving the equality.
Related posts
Six Visual ProofsMultiplying Two Binomials
16 comments:
Visualizing math like this is something I've always done. It always used to surprise me when I would find out that other students were memorizing stuff like this instead of really understanding
it. I think most of math should be taught in a visual way like this. You get a visceral understanding of FOIL, the difference of two squares, why multiplication as it's taught in 3rd grade works,
I was a memorizer when I first took Algebra in high school. That changed the following year when I took Geometry and a light-switch went off for me. Things suddenly started to make sense. It was
like that moment that many of us have when we take Physics, learn Bohr's model of the atom, and realize that this is something we should have been taught before we were ever allowed to take
I'll be tutoring my son in Algebra next semester, so I'll be sure to come across a lot more examples like this one. I'll post them when I do.
Great picture! It reminds me of the intuitive approach that betterexplained.com often takes.
Thanks for the encouragement! I'll definitely be trying to add more graphics to my posts, since I find that I understand things better when they're presented visually.
You also reminded me, I need to add Better Explained to my blog roll.
I love it... My dad taught me that trick many many years ago and I ended up using it primarily for the reverse approach (factorisation). It's a bit sluggish being a brute force but I usually
found it easier than polynomial division for the sort of problems we were given in high school.
Another favourite trick was the 'magic' triangle for the A = BC type equations. Draw a triangle like this (http://imagebin.ca/view/v05xNO.html) and to find the missing unknown just cover it and
the equation is displayed.
Just found this blog a couple weeks ago, and it is already one of my favorites.
This diagram can easily be relabeled to demonstrate (x+y)^2, or extended to demonstrate (x+y)^3.
Yes, this is definitely easier than polynomial division if you recognize that the polynomial fits the right pattern. Of course, my teachers in high school and college always made sure at least
one problem on every test required that the division be done long hand. :(
I can't believe I've never seen that magic triangle before. It's so simple! Thanks for sharing it.
Thanks for reading. I should have more stuff like this pretty soon, and I plan on increasing my SICP rate. My goal is to finish the book in the first half of the new year.
I had also realized that this diagram could be extended to cover (x+y)^2 shortly after I posted it. I hadn't thought to try (x+y)^3, though. Thanks pointing it out.
In Montessori school there are a lot of such visual representations for math.
Check out this post in my blog about it.
Bill, love it! So simple and understated--yet so undeniably correct. You can see it and touch it now. Goodbye rote memorization--hello Mr. Lightbulb. Have you considered teaming up with the Jason
Project? A dynamic duo you two would be. Like Batman and Robin.
I only recently heard of the Montessori school when it was mentioned on Hacker News. Thanks for sharing the link to your article about it.
I'd never heard of the Jason Project before, so thanks for mentioning it. I'll have to look into it further to see if there's something I can contribute.
This reminds me of the "proof" that Gauss came up with for 1+2+...+n=n*(n+1)/2
Just write the numbers in ascending and descending order in two rows. Each column adds up to (n+1). The formula is obtained naturally.
((x+1)x2)+1 = 9 squares
Please how can I solve (x + 1)12
What about 2- (x + 1)?
The answer given is -x +1
|
{"url":"http://www.billthelizard.com/2009/12/math-visualization-x-1-2.html","timestamp":"2014-04-17T04:24:53Z","content_type":null,"content_length":"109828","record_id":"<urn:uuid:89d1e0de-581b-4da5-a9a3-6ed4aef2baee>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
|
50. Jahrestagung der Deutschen Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie (gmds)
Fieller versus Efron - a comparison of two asymptotic approaches to ICER interval estimation in the presence of neglegible correlation between cost and efficacy data
Meeting Abstract
Suche in Medline nach
Veröffentlicht: 8. September 2005
Introduction and Purpose
Because of obvious financial ressource limitations in health care systems, therapeutic strategies are meanwhile not only evaluated from a clinical, but also from a health economic point of view to
link their clinical efficacy to the underlying costs. In this context, the estimation of incremental cost effectiveness ratios (ICERs) has earnt increasing attention during this decade. ICERs relate
the cost difference between therapeutical alternatives to the corresponding difference in clinical efficacy. Despite this intuitive interpretation of ICERs as “additional costs per additional benefit
unit”, their statistical treatment imposes severe problems because of the necessity to estimate the distribution of a ratio of stochastically dependent distributions: If two independent treatment
groups are contrasted alongside their relative cost effectiveness, the interval estimation of an ICER between these groups means the simultaneous treatment of four random variables (two cost and two
efficacy distributions in each sample), which are often highly correlated. Accordingly, standard density transformation techniques for their ratio’s overall density function as a basis for moment
estimation are no longer appropriate.
One approach to estimale ratio intervals in this setting is based on modifications of the Fieller theorem [Ref. 1]. However, validity of interval estimates derived by the latter are crucially based
on normal approximations, which must be questioned in the presence of real patient cost data: Such data are usually skewed and therefore clearly lack from normality assumption; normal approximation
will therefore imply severe requirements on sample sizes.
A different approach [Ref. 2] suggests the use of Efron’s Bootstrap, which seems quite promising in the actual setting: Note that the multivariate Bootstrap enables to imitate the multivariate
correlation structure [Ref. 3] of the underlying data distributions, and therefore can be expected to provide less biased ratio interval estimates. On the other hand, it is often ignored, that the
Bootstrap approach itself is an asymptotic procedure, validity of which is limited not only by the number of Bootstrap iterations, but also by the underlying simple sizes [Ref. 3], from which the
Bootstrap simulation was generated from. Multivariate Berry/Esséen-type bounds for the multivariate (!) Bootsrap indicate the necessity of sample sizes, which are much larger than the standard sizes
established to ensure validity of univariate normal approximations. In practice, it must be questioned, whether health economic data sets will suffice these requirements on sample size.
In summary, both strategies for ICER interval estimation crucially depend on the underlying data set’s sample size. Therefore this paper seeks to investigate the order of bias inherent in these
approaches; the investigation is based on real patient data from maxillofacial surgery [Ref. 4] and simulation studies based on this data set [Ref. 5].
Material and Methods
Model parametrization
The following will consider two therapeutic alternatives 1 and 2, where treatment 1 denotes an established standard procedure and treatment 2 is under discussion concerning possible recommendation
for founding by health care insurers. If then the random variables K[1] and K[2] denote the treatments’ costs and the corresponding random variables E[1] and E[2] the treatments’ respective efficacy
indicators, the following will assume K[2] > K[1] and E[2] > E[1] (such a treatment alternative 2 is usually called “admissable” for ressource allocation). The ratio K / E is refered to as the cost
effectiveness ratio (CER) and describes a treatment’s marginal costs per gained clinical benefit unit. The incremental cost effectiveness ratio (ICER) of a treatment 2 versus the standard treatment 1
is defined as ICER[21] = (K[2] – K[1]) / (E[2] – E[1]) and estimates the additional costs, which must be invested to achieve one additional clinical benefit unit under treatment 2 instead of the
Fieller and Bootstrap estimation
In the setting of two independent treatment samples, the ICER as defined here can be estimated by imputation of the samples’ respective mean estimates for K and E. The Fieller method for (one-sided)
interval estimation of the ICER then concentrates on the asymptotic normality of the mean imputed difference (K[2] – K[1]) – U (E[2] – E[1]), where U denotes the ICER’s upper confidence bound at an
appropriate confidence level. The standardised difference can be considere asymptotically normal, at least for sufficiently large sample sizes in the underlying evaluation study. Contrasting this
standardized difference to the appropriate normal distribution quartile reults in a quadratic equation for U, which can be solved numerically to derive a data based asymptotic interval estimate
[Ref. 5].
The Bootstrap approach suggests the simulation of bivariate (!) replicates from the original bivariate cost and efficacy data for each treatment sample, respectively. A Bootstrap point estimate for
the ICER can be derived by imputation of the Bootstrap estimates for the samples’ cost and efficacy means; the empirical distribution of the Bootstrap ICER is then simulated by repeating this
estimation process (in the following a simulation determinant of 10.000 Bootstrap replicates was installed).
Maxillofacial Surgery Data
The above estimation procedures will be contrasted alongside the cost effectiveness evaluation of the surgical versus the conservative treatment of collum fractures [Ref. 4]. The surgical
intervention means the implantation of a metal platelet to attach and stabilize the fractured parts of the collum for a period of nine months; the non-invasive procedure is based on joint fixing the
patient’s maxilla and mandibula by means of metal wires over three months. The latter approach does not afford surgical treatment and is therefore less cost-intensive. However, the one-year
re-treatment rate of this procedure must be expected higher because of the less direct fixation of the fractured area. Efficacy of both procedures was measured in terms of quality adjusted life years
(QALYs) by means of questionnaire-based interviews 36 months after end of the initial treatment. Direct treatment costs were estimated by means of the hospital documentation, costs for re-treatment
within the 36 months under observation were added. The data of 67 collum fractures were analyzed (35 patients underwent surgical treatment, 32 patients conservative treatment) at the Clinic for
Dentomaxillofacial Surgery at the University Hospital of Mainz.
Mean costs of 4855 € (standard deviation 312 €) versus 1970 € (290 €) were invested for surgical and conservative treatment, respectively, corresponding to a respective mean QALY gain of 27.5 QALYs
(6.5) versus 21.9 QALYs (7.2). The surgical treatment therefore implied incremental costs of 515 € per additional QALY when contrasted to the non-invasive procedure.
The Fieller estimate provided an upper bound for the one-sided 95% confidence interval of this ICER point estimate of 564 € per QALY. The Bootstrap estimate based on 10.000 simulation replicates
resulted in an upper confidence bound of 538 € per QALY.
This difference in interval estimates encouraged the simulation of patient data according to the above mean cost and efficacy estimates. The cost data was modeled by lognormal distributions (note
that the original cost data showed a skewness of 0.8 and 1.3 in the surgical and the non-invasive treatment sample, respectively), the efficacy data by normal distributions. To imitate the empirical
correlation between costs and efficacy as observed in the original data (Pearson correlation estimates 0.55 and 0.39, respectively), the bivariate distribution was generated from an appropriate
binormal distribution in the first place; the cost component was then transformed into the skewed lognormal analogue. The simulation study was performed using SAS^®, a total of 5.000 replications was
implemeted. Details on the simulation parameters are described in [Ref. 5].
Alongside the simulation, the deviation of both the Fieller and the Bootstrap upper 95% confidence level estimate from the simulated target ICER was computed; quartiles for the empirical distribution
of these 5.000 deviations were derived as indicators of estimation validity. In summary, the Fieller estimate of the upper confidence level for the simulated target ICER showed a median deviation of
6.1% (interquartile range 4.5 – 7.2%) from the simulated ICER; the Bootstrap estimate imitated the target ICER’s distribution more closely and resulted in a median deviation of 1.8% (1.0 – 3.1%).
Whereas the ICER-based approach to cost effectiveness evaluation provides somewhat instructive information, its statistical treatment imposes severe model assumptions: Both the Fieller and the
Bootstrap approach for interval estimation result in notably based confidence intervals. It can only be hypothesized, why the above simulation setting suggested a rather encouraging behaviour of the
Bootstrap estimate: The latter is based on multivariate replication, and therefore enables to introduce the multivariate correlation structure between the cost and efficacy data into the interval
estimation. Nevertheless, it must be remembered, that the Bootstrap point estimate might as well be biased in the real patient data setting: The rather small underlying sample sizes (35 versus 32
patients) do not necessarily allows for multivariate Bootstrap approximation [Ref. 3]. On the other hand, the Fieller estimate turned out even more biased in the simulation study, since its
dependence on asymptotic normality is even more crucial than for Bootstrap estimation.
In summary, the application of both Bootstrap and Fieller estimates to ICER interval estimation must be applied with caution. In larger sample size settings the Bootstrap approach will be less biased
due to its less restrive distribution assumptions and its ability to imitate the multivariate dependence structure of the underlying bivariate data. However, the need for alternative robust
approaches to interval estimation in incremental cost effectiveness evaluation [Ref. 1] is obvious.
Heitjan DF. Fieller's method and net health benefits. Health Economy 2000; 9: 327-35
Wakker P, Klaassen MP. Confidence intervals for cost-effectiveness ratios. Health Economy 1995; 4: 373-81
Bickel PJ, Freedman DA. Some asymptotic theory for the Bootstrap. Annals of Statistics 9, 1196-1217
Said S. Vergleich der inkrementellen Kosteneffektivität der offenen und der geschlossenen Versorgung von Collumfrakturen aus Perspektive der Leistungserstatter. Dissertation zur Erlangung des
Grades "Dr. med.", Fachbereich Medizin der Universität Mainz; 2004
Seither C. Vorschläge zur gesundheitsökonomischen Evaluation zahnärztlicher Präventionsprogramme im Kindesalter. Dissertation zur Erlangung des Grades "Dr. med. dent.", Fachbereich Medizin der
Universität Mainz; 2004
|
{"url":"http://www.egms.de/static/de/meetings/gmds2005/05gmds250.shtml","timestamp":"2014-04-18T12:39:54Z","content_type":null,"content_length":"25640","record_id":"<urn:uuid:a9479712-148f-4ac2-ad9a-1052295c1e8d>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bayesian coestimation of phylogeny and sequence alignment
• We are sorry, but NCBI web applications do not support your browser and may not function properly.
More information
BMC Bioinformatics. 2005; 6: 83.
Bayesian coestimation of phylogeny and sequence alignment
Two central problems in computational biology are the determination of the alignment and phylogeny of a set of biological sequences. The traditional approach to this problem is to first build a
multiple alignment of these sequences, followed by a phylogenetic reconstruction step based on this multiple alignment. However, alignment and phylogenetic inference are fundamentally interdependent,
and ignoring this fact leads to biased and overconfident estimations. Whether the main interest be in sequence alignment or phylogeny, a major goal of computational biology is the co-estimation of
We developed a fully Bayesian Markov chain Monte Carlo method for coestimating phylogeny and sequence alignment, under the Thorne-Kishino-Felsenstein model of substitution and single nucleotide
insertion-deletion (indel) events. In our earlier work, we introduced a novel and efficient algorithm, termed the "indel peeling algorithm", which includes indels as phylogenetically informative
evolutionary events, and resembles Felsenstein's peeling algorithm for substitutions on a phylogenetic tree. For a fixed alignment, our extension analytically integrates out both substitution and
indel events within a proper statistical model, without the need for data augmentation at internal tree nodes, allowing for efficient sampling of tree topologies and edge lengths. To additionally
sample multiple alignments, we here introduce an efficient partial Metropolized independence sampler for alignments, and combine these two algorithms into a fully Bayesian co-estimation procedure for
the alignment and phylogeny problem.
Our approach results in estimates for the posterior distribution of evolutionary rate parameters, for the maximum a-posteriori (MAP) phylogenetic tree, and for the posterior decoding alignment.
Estimates for the evolutionary tree and multiple alignment are augmented with confidence estimates for each node height and alignment column. Our results indicate that the patterns in reliability
broadly correspond to structural features of the proteins, and thus provide biologically meaningful information that is not present in the usual point estimate of the alignment. Our methods can handle input data of moderate size (10–20 protein sequences, each 100–200 residues long), which we analyzed overnight on a standard 2 GHz personal computer.
Joint analysis of multiple sequence alignment, evolutionary trees and additional evolutionary parameters can now be done within a single coherent statistical framework.
Two central problems in computational biology are the determination of the alignment and phylogeny of a set of biological sequences. Current methods first align the sequences, and then infer the
phylogeny given this fixed alignment. Several software packages are available that deal with one or both of these sub-problems. For example, ClustalW [1] and T-Coffee [2] are popular sequence
alignment packages, while MrBayes [3], PAUP* [4] and Phylip [5] all provide phylogenetic reconstruction and inference. Despite working very well in practice, these methods share some problems. First,
the separation into a multiple-alignment step and a phylogenetic inference step is fundamentally flawed. The two inference problems are mutually dependent, and alignments and phylogeny should
ideally be co-estimated, a point first made by Sankoff, Morel and Cedergren [6]. Indeed, a proper weighting of mutation events in multiple sequences requires a tree, which in turn can only be
determined if a multiple alignment is available. For instance, ClustalW and T-Coffee compute their alignments based on a neighbour-joining guide tree, biasing subsequent phylogenetic estimates based
on the resulting alignment. Moreover, fixing the alignment after the first step ignores the residual uncertainty in the alignment, resulting in an overconfident phylogenetic estimate.
This leads on to the second issue, which is that heuristic methods are used to deal with insertions and deletions (indels), and sometimes also substitutions. This lack of a proper statistical
framework makes it very difficult to accurately assess the reliability of the alignment estimate, and the phylogeny depending on it.
The relevance of statistical approaches to evolutionary inference has long been recognised. Time-continuous Markov models for substitution processes were introduced more than three decades ago [7].
Inference methods based on these have been considerably improved since then [8], and now have all but replaced older parsimony methods for phylogeny reconstruction. With alignments, progress towards
statistically grounded methods has been slower. The idea to investigate insertions and deletions in a statistical framework was first considered by Bishop and Thompson [9]. The first evolutionary
model, termed the TKF91 model, and corresponding statistical tools for pairwise sequence alignment were published by Thorne, Kishino and Felsenstein [10]. Its extension to multiple sequences related
by a tree has been intensively investigated in the last few years [11-17], and has recently also been extended to RNA gene evolution [18]. Current methods for statistical multiple alignment are often computationally demanding, and full maximum likelihood approaches are limited to small trees. Markov chain Monte Carlo techniques can extend these methods to practical problem sizes.
Statistical modelling and MCMC approaches have a long history in population genetic analysis. In particular, coalescent approaches to genealogical inference have been very successful, both in maximum
likelihood [19,20] and Bayesian MCMC frameworks [21,22]. The MCMC approach is especially promising, as it allows the analysis of large data sets, as well as nontrivial model extensions, see e.g. [23
]. Since divergence times in population genetics are small, alignment is generally straightforward, and genealogical inference from a fixed alignment is well-understood [20,24-26]. However, these
approaches have difficulty dealing with indels when sequences are hard to align. Indel events are generally treated as missing data [27], which renders them phylogenetically uninformative. This is
unfortunate as indel events can be highly informative of the phylogeny, because of their relative rarity compared to substitution events. Statistical models of alignment and phylogeny often refer to
missing data. Not all of these can be integrated out analytically (e.g. tree topology), and these are dealt with using Monte Carlo methods. The efficiency of such approaches depends to a great extent
on the choice of missing data. In previous approaches to statistical alignment, the sampled missing data were either unobserved sequences at internal nodes [28], or both internal sequences and
alignments between nodes [13], or dealt exclusively with pairwise alignments [29,30]. In all cases the underlying tree was fixed. In [31] we published an efficient algorithm for computing the
likelihood of a multiple sequence alignment under the TKF91 model, given a fixed underlying tree. The method analytically sums out all missing data (pertaining to the evolutionary history that
generated the alignment), eliminating the need for any data augmentation of the tree. This methodology is referred to in the MCMC literature as Rao-Blackwellization [32]. As a result, we can treat
indels in a statistically consistent manner with no more than a constant multiplicative cost over existing methods that ignore indels.
The only missing ingredient for a full co-estimation procedure is an alignment sampler. Unfortunately, there exists no Gibbs alignment sampler that corresponds to the analytic algorithm referred to
above. In this paper we introduce a partial importance sampler to resample alignments, based on a proposal mechanism built on a partial score-based alignment procedure. This type of sampler supports
the data format we need for efficient likelihood calculations, while still achieving good mixing in reasonable running time (see Results).
We implemented the likelihood calculator and the alignment sampler in Java, and interfaced them with an existing MCMC kernel for phylogenetics and population genetics [22]. We demonstrate the
practicality of our approach on an analysis of 10 globin sequences.
Definition of the TKF model
The TKF91 model is a continuous-time reversible Markov model describing the evolution of nucleotide (or amino acid) sequences. It models three of the main processes in sequence evolution, namely
substitutions, insertions and deletions of characters, approximating these as single-character processes. A sequence is represented as a string consisting alternately of links and characters
connected by these links. This string both starts and terminates with a link. Insertions and deletions are modeled through a time-continuous birth-death process of links. When a new link is born, its
associated character (by convention, its right neighbour) is chosen from the equilibrium distribution of the substitution process. (The original TKF91 model used a simple substitution process, the
Felsenstein-81 model [27]. It is straightforward to replace this by more general nucleotide or amino acid substitution models [33].) When a link dies, its associated character dies too. The leftmost
link of the sequence has no corresponding character to its left, and is never deleted. For this reason it is called the immortal link.
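As a concrete illustration of the birth-death dynamics described above, the sketch below simulates the TKF91 indel process along a single branch with a simple Gillespie-style algorithm. The rate values, the uniform character distribution and the function name are placeholders chosen for illustration only; substitutions are omitted.

```python
import random

def simulate_tkf91_indels(seq, t, lam, mu, alphabet="ACGT", seed=0):
    """Evolve `seq` for time `t` under the TKF91 indel process.

    Every link (including the immortal link at the left end) gives birth
    at rate `lam`; each mortal link dies, together with its associated
    character, at rate `mu`.  A newborn character is drawn from the
    equilibrium distribution (here simply uniform over `alphabet`).
    """
    rng = random.Random(seed)
    chars = list(seq)
    now = 0.0
    while True:
        n = len(chars)                  # number of mortal links = characters
        birth_rate = (n + 1) * lam      # n mortal links plus the immortal link
        death_rate = n * mu
        total = birth_rate + death_rate
        now += rng.expovariate(total)   # waiting time to the next event
        if now > t:
            return "".join(chars)
        if rng.random() < birth_rate / total:
            pos = rng.randint(0, n)     # link after which the insertion occurs
            chars.insert(pos, rng.choice(alphabet))
        else:
            del chars[rng.randrange(n)]  # only reachable when n > 0
```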
Since subsequences evolve independently, it is sufficient to describe the evolution of a single character-link pair. In a given finite time span, this pair evolves into a finite subsequence of
characters and links. Since insertions originate from links, only the first character of this descendant subsequence may be homologous to the original character, while subsequent ones will have been
inserted and therefore not be homologous to ancestral characters. The model as applied to pairwise alignments was solved analytically in [10], see also [34]. Conceptually, the model can be trivially
extended to trees, but the corresponding algorithms for likelihood calculations have been developed only recently [11,12,14-16].
Because the TKF91 model is time reversible, the root placement does not influence the likelihood, an observation known as Felsenstein's "Pulley Principle" [27]. Although the algorithms we developed
are not manifestly invariant under changes in root placement, in fact they are. We have used time reversibility to check correctness of our implementations.
Computing the likelihood of a homology structure
The concept of homology structure [31], also known as effective alignment [35], refers to an alignment of sequences at leaves without reference to the internal tree structure, and without specifying
the ordering of exchangeable columns (see below for more details). We derived a linear-time algorithm that computes the likelihood of observing a set of sequences and their homology structure, given a
phylogeny and evolutionary parameters, under the TKF91 model [31]. By definition, this likelihood is the sum of the probabilities of all evolutionary scenarios resulting in the observed data. It was
previously shown that such evolutionary scenarios can be described as a path in a multiple-HMM ([13,28]), and the likelihood can thus be calculated as the sum of path probabilities over all such
paths, in time polynomial in the number of states. However, this straightforward calculation is infeasible for practical-sized biological problems, since the number of states in the HMM grows
exponentially with the number of sequences [16]. Since our algorithm does not feature this exponential blow-up of Markov states, we termed it the one-state recursion. In contrast to previous
approaches [13,28], the one-state recursion relieves us from the need to store missing data at internal tree nodes, allowing us to change the tree topology without having to resample this missing
data. This enables us to consider the tree as a parameter, and efficiently sample from tree space. The concept of homology structure referred to above is key to our algorithm, and we will presently
define this concept more precisely. Let A[1], A[2], ..., A[m] be sequences, related by a tree T with vertex set V; write A[i]_j for the jth character of sequence A[i], and A[i]_{1..k} for its k-long prefix. A homology structure on A[1], ..., A[m] is an equivalence relation ~ ("is homologous to") on the set C of all characters of the sequences, C = {A[i]_j}, such that an ordering ≤_h exists on C that is compatible with the left-to-right order of the characters within each sequence. (Here, a =_h b is equivalent to: a ≤_h b and b ≤_h a.) In particular, these conditions imply that the characters constituting a single sequence are mutually nonhomologous. The ordering ≤_h corresponds to the ordering of columns of homologous characters in an alignment. Note that for a given homology structure, this ordering may not be unique (see Fig. 1). This many-to-one
relationship of alignment to homology structure is the reason for introducing the concept of homology structure, instead of using the more common concept of alignment.
Alignments and homology structure. (Left:) Two alignments representing the same homology structure. A "homology structure" is defined as the set of all homology relationships between residues from
the given sequences; residues are homologous if they appear ...
The one-state recursion, which calculates the likelihood of a homology structure, is a convolution of two dynamic programming algorithms. The top-level algorithm traverses the prefix set of the
multiple alignments representing the homology structure (see Figure 2). This repeatedly calls on a reverse traversal algorithm on the phylogenetic tree, which sums out the likelihood
contributions of substitutions and indels under the TKF91 model. See [31] for full details.
Dynamic programming table traversal. The multiple alignment prefixes (represented by o symbols) traversed by the one-state recursion, when the input is the homology structure of Fig. 1. (For clarity,
the vectors are plotted in two dimensions instead of ...
A partial Metropolized independence sampler
Because our algorithm does not require the phylogenetic tree to be augmented with missing data, proposing changes to the evolutionary tree is easy, and mixing in tree space is very good. The drawback
however is that without data augmentation, it is unclear how to perform Gibbs sampling of alignments, and we have to resort to other sampling schemes. One straightforward choice would be a standard
Metropolis-Hastings procedure with random changes to the alignment, but we expect slow mixing from such an approach. Another general approach is Metropolized independence sampling. Its performance
depends on the difference between the proposal distribution and the target distribution, and this will inevitably become appreciable with growing dimension of the problem, as measured by the number
and length of the sequences to be aligned. We therefore opted for a partial Metropolized independence sampler [36], where we partly defy the "curse of dimensionality" by resampling only a segment of
the current alignment. Besides increasing the acceptance ratio, this method has the added advantage of being a more efficient proposal scheme, since the time complexity of the algorithm is proportional
to the square of the window size, and so leads to an effective increase in mixing per processor cycle. Metzler et al. [29] followed a parallel approach, using a partial Gibbs sampler, and showed that
this resulted in faster mixing compared to a full Gibbs sampling step. Since the realignment step may change the window length (measured in alignment columns), to have a reversible Markov chain we
need all window sizes to have positive proposal probability. We chose a geometric length distribution, but other distributions can be considered equally well.
The proposal algorithm
The proposal algorithm is as follows. A window size and location is proposed, the alignment of subsequences within this window is removed, and a new alignment is proposed by a stochastic version of
the standard score-based progressive pairwise alignment method. First, dynamic programming (DP) tables are filled as for a deterministic score-based multiple alignment, starting at the tree tips and
working towards the root, aligning sequences and profiles. We used linear gap penalties, and a similarity scoring matrix that was obtained by taking the log-odds of a probabilistic substitution
matrix. The underlying phylogeny was used to define divergence times, and served as alignment guide tree. After filling the DP tables, we applied stochastic traceback. The probabilities for the three
possible directions at each position were taken to be proportional to exp(αs), where s is the deterministic score and α is a scale parameter (see Fig. 3). The set of paths that emerged in this
way then determined the multiple alignment. All possible alignments can be proposed in this manner, and the proposal as well as the back-proposal probabilities can be calculated straightforwardly.
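As an illustration of this proposal mechanism, the sketch below shows the stochastic traceback for the simplest case of two sequences. The scoring function, gap penalty and scale parameter alpha are placeholder choices; the actual proposal operates on profiles along the guide tree and only within the resampling window.

```python
import math
import random

def stochastic_pairwise_alignment(x, y, score, gap=-2.0, alpha=1.0, seed=0):
    """Fill a Needleman-Wunsch table deterministically, then trace back
    stochastically, choosing each step with probability proportional to
    exp(alpha * candidate_score), as described above."""
    rng = random.Random(seed)
    n, m = len(x), len(y)
    F = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            F[i][j] = max(F[i - 1][j - 1] + score(x[i - 1], y[j - 1]),
                          F[i - 1][j] + gap,
                          F[i][j - 1] + gap)
    # Stochastic traceback from the bottom-right corner.
    columns = []
    i, j = n, m
    while i > 0 or j > 0:
        cands = []
        if i > 0 and j > 0:
            cands.append(("M", F[i - 1][j - 1] + score(x[i - 1], y[j - 1])))
        if i > 0:
            cands.append(("X", F[i - 1][j] + gap))
        if j > 0:
            cands.append(("Y", F[i][j - 1] + gap))
        best = max(s for _, s in cands)
        weights = [math.exp(alpha * (s - best)) for _, s in cands]
        move = rng.choices(cands, weights=weights)[0][0]
        if move == "M":
            columns.append((x[i - 1], y[j - 1]))
            i, j = i - 1, j - 1
        elif move == "X":
            columns.append((x[i - 1], "-"))
            i -= 1
        else:
            columns.append(("-", y[j - 1]))
            j -= 1
    return list(reversed(columns))

# Toy usage with a +1/-1 match/mismatch score:
# stochastic_pairwise_alignment("GATTACA", "GCATGCA",
#                               lambda a, b: 1.0 if a == b else -1.0)
```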
Generating the proposal alignment. This figure illustrates the stochastic sequence aligner. In the deterministic fill-in process, the three scores are s[1], s[2 ]and s[3], hence the value in this
cell is max{s[1], s[2], s[3]}. In the stochastic traceback phase, the ...
Correctness of the sampler
There are two problems with the proposal sampler introduced above. First, we propose alignments instead of homology structures. We need the latter, since the algorithm derived in this paper
calculates the likelihood of the homology structure, not the particular alignment. Although it would be conceptually and (for the sampler) computationally simpler to use alignments, we are not aware
of any efficient algorithm that can calculate such alignment likelihoods. The second problem is that calculating the proposal probability of a particular alignment is not straightforward. Many choices
of window size and location may result in the same proposal alignment. To calculate the true proposal probability of particular alignments, we need to sum over all possible windows, which is
prohibitively expensive.
Fortunately, we can solve both problems efficiently. We can sample alignments uniformly inside a homology structure, and at the same time sample homology structures according to their posterior
probabilities. As biologically meaningful questions refer to homologies and not particular alignments, it seems reasonable to impose a simple uniform distribution over alignments within homology
structures. The second problem is solved by not calculating an alignment proposal probability, but the proposal probability of the combination of an alignment and a resampling window. For a proposal
of alignment X[2 ]and window w from a current alignment X[1], we use the following Metropolis-Hastings ratio:
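A plausible reconstruction of this ratio, assuming the standard Metropolis–Hastings form for the target distribution π(X) = π(H)/|H| used below, is

$$ r \;=\; \min\left\{ 1,\; \frac{\pi(H_2)/|H_2| \cdot T\big((X_1, w) \mid X_2\big)}{\pi(H_1)/|H_1| \cdot T\big((X_2, w) \mid X_1\big)} \right\} $$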
where H[1 ]and H[2 ]are homology structures corresponding to the alignments X[1 ]and X[2 ]respectively, |H[1]| and |H[2]| are their cardinalities (i.e. the number of alignments representing these
homology structures), and T is the proposal probability. Using this ratio, the Markov chain will converge to the desired distribution π(X) = π(H)/|H|, since the detailed balance condition is
satisfied. Indeed,
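a sketch of that computation, again under the assumption of the standard Metropolis–Hastings construction (this is a reconstruction, not the authors' own display), is

$$ \frac{\pi(H_1)}{|H_1|}\, T\big((X_2,w) \mid X_1\big)\, \min\{1, r\} \;=\; \min\left\{ \frac{\pi(H_1)}{|H_1|}\, T\big((X_2,w) \mid X_1\big),\; \frac{\pi(H_2)}{|H_2|}\, T\big((X_1,w) \mid X_2\big) \right\} \;=\; \frac{\pi(H_2)}{|H_2|}\, T\big((X_1,w) \mid X_2\big)\, \min\{1, r^{-1}\} $$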
where the final equality holds because of the symmetry of the left-hand side. The cardinality of a homology structure, |H[1]|, is the number of possible directed paths in the graph spanned by the
one-state recursion; in other words, the number of permutations of alignment columns that result in alignments compatible with the given homology structure (see Fig. 2). This number can be
calculated straightforwardly using a dynamic programming algorithm that traverses the one-state recursion graph [31,37].
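The path-counting step lends itself to a generic sketch: treating the one-state recursion graph as a DAG with a single source (the empty prefix vector) and a single sink (the complete alignment), the cardinality is the number of source-to-sink paths, computable by a topological-order recurrence. The graph encoding below is a placeholder; the actual graph is the one spanned by the recursion of [31].

```python
from collections import defaultdict, deque

def count_paths(edges, source, sink):
    """Count directed source-to-sink paths in a DAG.

    `edges` is an iterable of (u, v) pairs.  paths[v] accumulates the
    number of distinct paths from `source` to v, filled in topological
    order; paths[sink] is then the cardinality |H| discussed above.
    """
    succ = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
        nodes.update((u, v))
    paths = defaultdict(int)
    paths[source] = 1
    queue = deque(node for node in nodes if indeg[node] == 0)
    while queue:
        u = queue.popleft()
        for v in succ[u]:
            paths[v] += paths[u]
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return paths[sink]

# Tiny example: two alignment columns whose order can be swapped give 2 orderings.
# count_paths([("s", "a"), ("s", "b"), ("a", "t"), ("b", "t")], "s", "t") == 2
```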
The one-state recursion provides a method for calculating the likelihood L = Pr{A | T, Q, λ, μ} of observing the sequences together with their homology structure (loosely, the "alignment") given the tree and model
parameters. Here A are the amino acid sequences, T is the tree including branch lengths, Q is the substitution rate matrix, and λ, μ are the amino acid insertion and deletion rates. To demonstrate
the practicality of the new algorithm for likelihood calculation we undertook a Bayesian MCMC analysis of ten globin protein sequences (see Additional file: 1). We chose to use the standard Dayhoff
rate matrix to describe the substitution of amino acids. As initial homology structure we used the alignment computed by T-Coffee. We co-estimated homology structures, the parameters of the TKF91
model, and the tree topology and branch lengths. To do this we sampled from the posterior,
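presumably of the standard Bayesian form (the display is reconstructed from the surrounding description rather than quoted)

$$ h(\text{alignment}, T, \lambda, \mu) \;=\; \frac{1}{Z}\; \Pr\{A \mid T, Q, \lambda, \mu\}\; f(T, \lambda, \mu) $$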
where Z is the unknown normalising constant. We chose the prior distribution on our parameters, f (T, λ, μ), so that T was constrained to a molecular clock, and λ = μL/(L + 1) to make the expected
sequence length under the TKF91 model agree with the observed lengths; here L is the geometric average sequence length. All other parameters were sampled under uniform priors. We assume a molecular
clock to gain insight into the relative divergence times of the alpha-, beta- and myoglobin families. In doing so we incorporate insertion-deletion events as informative events in the evolutionary
analysis of the globin family. The posterior density h is a complicated function defined on a space of high dimension. We summarise the information it contains by computing the expectations, over h,
of various statistics of interest. We estimate these expectations by using MCMC to sample from h. Marginalizations for continuous variables can be done in a straightforward manner; see for example
Figure 4, which depicts the marginal posterior density of the μ parameter for two independent MCMC runs, showing excellent convergence.
Posterior distribution of deletion rate μ. Estimated posterior densities of the deletion rate μ sampled according to h (see text), for two independent runs, suggesting excellent convergence. The
sampled mean is 0.0207; the 95% highest ...
For alignments, the maximum a-posteriori alignment is very hard to estimate from an MCMC sample run, as there are typically far too many plausible alignments contributing to the likelihood. Indeed,
we found that almost all alignments in a moderately long MCMC run (50000 samples) were unique. However, it is possible to reconstruct a reliable maximum posterior decoding [38] alignment from such a
moderately long sampling run. This alignment uses the posterior single-column probabilities, which can be estimated much more reliably since many alignments share particular columns, to obtain an alignment that maximizes the product of individual column posteriors. This alignment can be obtained by a simple dynamic programming algorithm [39], see Fig. 5. It is hard to visualise
alternative suboptimal alignments, but the individual posterior column probabilities clearly reveal the more and less reliable regions of the alignment. We found that the reliable alignment regions
broadly correspond to the alpha helical structure of the globin sequences.
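As an illustration of the decoding step, the sketch below computes the score of a maximum posterior decoding alignment for the pairwise case from matrices of estimated column posteriors. The posterior inputs and the treatment of gap columns are simplifications of what is done for the multiple alignment in the text.

```python
import math

def posterior_decoding_score(post_match, post_gap_x, post_gap_y):
    """Pairwise maximum posterior decoding by dynamic programming.

    post_match[i][j]: estimated posterior that x[i] is aligned to y[j].
    post_gap_x[i]:    posterior that x[i] sits in a gap column.
    post_gap_y[j]:    posterior that y[j] sits in a gap column.
    Maximizes the product of column posteriors (sum of log posteriors);
    assumes all posteriors are strictly positive.
    """
    n, m = len(post_gap_x), len(post_gap_y)
    NEG = float("-inf")
    D = [[NEG] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i > 0 and j > 0:
                D[i][j] = max(D[i][j], D[i - 1][j - 1] + math.log(post_match[i - 1][j - 1]))
            if i > 0:
                D[i][j] = max(D[i][j], D[i - 1][j] + math.log(post_gap_x[i - 1]))
            if j > 0:
                D[i][j] = max(D[i][j], D[i][j - 1] + math.log(post_gap_y[j - 1]))
    return D[n][m]  # a traceback (omitted here) recovers the alignment itself
```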
Maximum posterior decoding alignment, and column reliabilities. The maximum posterior decoding alignment of ten globins (human, chicken and turtle alpha hemoglobin, beta hemoglobin, myoglobin and
bean leghemoglobin). Posterior probabilities for aligned ...
Figure 6 depicts the maximum a posteriori (MAP) estimate of the phylogenetic relationships of the sequences. This example exhibits only limited uncertainty in the tree topology; however, we
observed an increased uncertainty for trees that included divergent sequences, such as bacterial and insect globins (results not shown).
Maximum a-posteriori phylogeny. The maximum a posteriori tree (black) relating the ten globins of Fig. 5, and 95% confidence intervals of the node heights (grey boxes). Most of the tree's topology is
well determined, with the exception of the myoglobin ...
The estimated time of the most recent common ancestor of each of the alpha, beta and myoglobin families are all mutually compatible (result not shown), suggesting that the molecular clock hypothesis
is at least approximately valid. Analysis of a four-sequence dataset demonstrates consistency in μ estimates between MCMC and previous ML analyses [16] (data not shown). Interestingly, the current
larger dataset supports a lower value of μ. This is probably due to the fact that no indels are apparent within any of the subfamilies despite a considerable sequence divergence. The indel rate
estimated by the current cosampling procedure is greater than the estimate on a fixed multiple alignment [31] (0.0207 vs. 0.0187), but this discrepancy is not significant for the current dataset. It
should be stressed that the two MCMC analyses of the globin data set presented here are purely illustrative of the practicality of the algorithm described, and no novel biological results were
obtained. The two MCMC runs of 5 million states each required less than 12 hours of CPU time each on a 2.0 GHz G5 Apple Macintosh running OS X, using an unoptimised implementation of the algorithm.
From these runs we sampled 50000 states each. The estimated number of independent samples (estimated sample size, ESS) for the posterior probabilities was 250 and 240, respectively (see [22] for
methods), while for the indel rate μ the ESSs were calculated at 5400 and 4000. We expect analyses of data sets of around 50 sequences to be readily attainable with only a few days computation.
In this paper we present a new cosampling procedure for phylogenetic trees and sequence alignments. The underlying likelihood engine uses recently introduced and highly efficient algorithms based on
an evolutionary model (the Thorne-Kishino-Felsenstein model) that combines both the substitution and insertion-deletion processes in a principled way [31]. We show that the proposed method is
applicable to medium-sized practical multiple alignment and phylogenetic inference problems.
One motivation for using a fully probabilistic model, and for using a co-estimation procedure for alignments and phylogeny, is that this makes it possible to assess the uncertainties in the
inferences. Fixing either the alignment or the phylogeny leads to an underestimate of the uncertainty in the other, and score-based methods give no assessment of uncertainty whatsoever.
We show that the confidence estimates so obtained can contain biologically meaningful information. In the case of the multiple alignment of globin sequences, peaks in the posterior column
reliabilities correspond broadly to the various conserved alpha helices that constitute the sequences (see Fig. 5). In the case of the tree estimate, the non-traditional phylogeny supported
by the myoglobin subtree coincided with a significant polyphyly, as indicated by the posterior tree topology probabilities, and graphically represented by significantly overlapping 95% node height
confidence boxes (see Fig. 6). It is clear that such confidence information significantly contributes to the usefulness of the inference.
At the heart of the method lies a recently introduced algorithm, termed the "indel peeling algorithm", that extends Felsenstein's peeling algorithm to incorporate insertion and deletion events under
the TKF91 model [31]. This renders indel events informative for phylogenetic inference. Although incurring considerable algorithmic complications, the resulting algorithm is still linear-time for
biological alignments (see also Figure 1). Moreover, our approach allows efficient sampling of tree topologies, as no data is represented at internal nodes.
We also developed a method for sampling multiple alignments, which is applicable for the data augmentation scheme we used for the efficient likelihood calculations. By combining the two samplers, we
can co-sample alignments, evolutionary trees and other evolutionary parameters such as indel and substitution rates. The resulting samples from the posterior distribution can be summarized in
traditional ways. We obtained maximum a-posteriori estimates of alignment, tree and parameters, and augmented these with estimates of reliability.
As was already mentioned in [10], it would be desirable to have a statistical sequence evolution model that deals with 'long' insertions and deletions, instead of single nucleotides at a time. For
score-based algorithms, this is analogous to the contrast between linear and affine gap penalties. It is clear that the extension of the model to include long indels would result in considerable
improvements, but the algorithmic complexities are considerable. We have made progress on a full likelihood method for statistical sequence alignment under such an evolutionary model [17], but the
generalization of this method seems nontrivial. We believe that here too, Markov chain Monte Carlo approaches, combined with data augmentation, will be essential for practical algorithms. However, we
also believe that in certain restricted but biologically meaningful situations, such as highly conserved proteins, the TKF91 model is reasonably realistic for the co-estimation procedure presented
here to be of practical interest.
Availability and requirements
The BEAST package (AJ Drummond and A Rambaut), which includes the algorithm described in this paper, is available from http://evolve.zoo.ox.ac.uk/beast, with full installation and requirement
details. The data set used in this paper is available (see Additional file 1).
Authors' contributions
IM conjectured and GL proved the one-state recursion. GL and IM independently implemented the algorithms and wrote the paper. JLJ simplified the proof of the recursion, GL suggested using it within an MCMC phylogeny cosampler, and IM suggested using a Metropolised importance sampler and proved its correctness. GL and AD interfaced the Java algorithms to the BEAST phylogeny sampling package [40], and AD carried out the MCMC analysis. JH provided project management. All authors read and approved the final manuscript.
Supplementary Material
Additional File 1:
This XML file specifies the MCMC run for the example phylogeny and alignment co-estimation given in this paper (see Figs. 4, 5, 6). To run, download the BEAST package (AJ Drummond
and A Rambaut, http://evolve.zoo.ox.ac.uk/beast.)
The authors thank Yun Song, Dirk Metzler, Anton Wakolbinger and Ian Holmes for several useful suggestions and discussions. This research is supported by EPSRC (code HAMJW) and MRC (code HAMKA). I.M.
was further supported by a Békésy György postdoctoral fellowship.
• Thompson J, Higgins D, Gibson T. CLUSTAL-W: improving the sensitivity of multiple sequence alignment through sequence weighting, position specific gap penalties and weight matrix choice. Nucleic
Acids Res. 1994;22:4673–4680. [PMC free article] [PubMed]
• Notredame C, Higgins D, Heringa J. T-Coffee: A novel method for multiple sequence alignments. Journal of Molecular Biology. 2000;302:205–217. doi: 10.1006/jmbi.2000.4042. [PubMed] [Cross Ref]
• Huelsenbeck JP, Ronquist F. MRBAYES: Bayesian inference of phylogenetic trees. Bioinformatics. 2001;17:754–755. doi: 10.1093/bioinformatics/17.8.754. [PubMed] [Cross Ref]
• Swofford D. PAUP* 4.0. Sinauer Associates. 2001.
• Felsenstein J. PHYLIP version 3.63. Dept of Genetics, Univ of Washington, Seattle. 2004.
• Sankoff D, Morel C, Cedergren RJ. Evolution of 5S RNA and the non-randomness of base replacement. Nature New Biology. 1973;245:232–234. [PubMed]
• Jukes TH, Cantor CR. Evolution of Protein Molecules. In: Munro, editor. Mammalian Protein Metabolism. Acad Press; 1969. pp. 21–132.
• Whelan S, Lió P, Goldman N. Molecular phylogenetics: state-of-the-art methods for looking into the past. Trends in Gen. 2001;17:262–272. doi: 10.1016/S0168-9525(01)02272-7. [PubMed] [Cross Ref]
• Bishop M, Thompson E. Maximum likelihood alignment of DNA sequences. J Mol Biol. 1986;190:159–165. doi: 10.1016/0022-2836(86)90289-5. [PubMed] [Cross Ref]
• Thorne JL, Kishino H, Felsenstein J. An Evolutionary Model for Maximum Likelihood Alignment of DNA Sequences. J Mol Evol. 1991;33:114–124. [PubMed]
• Steel M, Hein J. Applying the Thorne-Kishino-Felsenstein model to sequence evolution on a star-shaped tree. Appl Math Let. 2001;14:679–684. doi: 10.1016/S0893-9659(01)80026-4. [Cross Ref]
• Hein J. An algorithm for statistical alignment of sequences related by a binary tree. Pac Symp Biocomp, World Scientific. 2001. pp. 179–190. [PubMed]
• Holmes I, Bruno WJ. Evolutionary HMMs: a Bayesian approach to multiple alignment. Bioinformatics. 2001;17:803–820. doi: 10.1093/bioinformatics/17.9.803. [PubMed] [Cross Ref]
• Hein J, Jensen JL, Pedersen CNS. Recursions for statistical multiple alignment. PNAS. 2003;100:14960–14965. doi: 10.1073/pnas.2036252100. [PMC free article] [PubMed] [Cross Ref]
• Miklós I. An Improved Algorithm for Statistical Alignment of Sequences related by a Star Tree. Bul Math Biol. 2002;64:771–779. doi: 10.1006/bulm.2002.0300. [PubMed] [Cross Ref]
• Lunter G, Miklós I, Song Y, Hein J. An efficient algorithm for statistical multiple alignment on arbitrary phylogenetic trees. J Comp Biol. 2003;10:869–889. doi: 10.1089/106652703322756122. [
PubMed] [Cross Ref]
• Miklós I, Lunter GA, Holmes I. A "Long Indel" model for evolutionary sequence alignment. Mol Biol Evol. 2004;21:529–540. doi: 10.1093/molbev/msh043. [PubMed] [Cross Ref]
• Holmes I. A probabilistic model for the evolution of RNA structure. BMC Bioinf. 2004;5 [PMC free article] [PubMed]
• Kuhner MK, Yamato J, Felsenstein J. Estimating effective population size and mutation rate from sequence data using Metropolis-Hastings sampling. Genetics. 1995;140:1421–1430. [PMC free article]
• Griffiths RC, Tavare S. Ancestral inference in population genetics. Stat Sci. 1994;9:307–319.
• Wilson IJ, Balding DJ. Genealogical Inference From Microsatellite Data. Genetics. 1998;150:499–450. [PMC free article] [PubMed]
• Drummond AJ, Nicholls GK, Rodrigo AG, Solomon W. Estimating mutation parameters, population history and genealogy simultaneously from temporally spaced sequence data. Genetics. 2002;161
:1307–1320. [PMC free article] [PubMed]
• Pybus OG, Drummond AJ, Nakano T, Robertson BH, Rambaut A. The epidemiology and iatrogenic transmission of hepatitis C virus in Egypt: a Bayesian coalescent approach. Mol Biol Evol. 2003;20
:381–387. doi: 10.1093/molbev/msg043. [PubMed] [Cross Ref]
• Felsenstein J. Estimating effective population size from samples of sequences: Inefficiency of pairwise and segregating sites as compared to phylogenetic estimates. Genetical Research Cambridge.
1992;59:139–147. [PubMed]
• Stephens M, Donnelly P. Inference in Molecular Population Genetics. J of the Royal Stat Soc B. 2000;62:605–655. doi: 10.1111/1467-9868.00254. [Cross Ref]
• Pybus OG, Rambaut A, Harvey PH. An integrated framework for the inference of viral population history from reconstructed genealogies. Genetics. 2000;155:1429–1437. [PMC free article] [PubMed]
• Felsenstein J. Evolutionary trees from DNA sequences: a maximum likelihood approach. J Mol Evol. 1981;17:368–376. [PubMed]
• Jensen J, Hein J. Gibbs sampler for statistical multiple alignment. Tech Rep 429, Dept of Theor Stat, U Aarhus. 2002.
• Metzler D, Fleißner R, Wakolbinger A, von Haeseler A. Assessing variability by joint sampling of alignments and mutation rates. J Mol Evol. 2001;53:660–669. doi: 10.1007/s002390010253. [PubMed]
[Cross Ref]
• Metzler D. Statistical alignment based on fragment insertion and deletion models. Bioinformatics. 2003;19:490–499. doi: 10.1093/bioinformatics/btg026. [PubMed] [Cross Ref]
• Lunter G, Miklós I, Drummond A, Jensen J, Hein J. Bayesian phylogenetic inference under a statistical indel model. Lecture Notes in Bioinformatics. 2003;2812:228–244.
• Casella G, Robert CP. Rao-Blackwellisation of sampling schemes. Biometrika. 1996;83:81–94. doi: 10.1093/biomet/83.1.81. [Cross Ref]
• Hein J, Wiuf C, Knudsen B, Møller MB, Wibling G. Statistical Alignment: Computational Properties, Homology Testing and Goodness-of-Fit. J Mol Biol. 2000;302:265–279. doi: 10.1006/jmbi.2000.4061.
[PubMed] [Cross Ref]
• Miklós I, Toroczkai Z. An improved model for statistical alignment. Lecture Notes on Computer Science. 2001;2149:1–10.
• Dress A, Morgenstern B, Stoye J. The number of standard and of effective multiple alignments. App Math Lett. 1998;11:43–49. doi: 10.1016/S0893-9659(98)00054-8. [Cross Ref]
• Liu JS. Monte Carlo Strategies in Scientific Computing. Springer; 2001.
• Giegerich R, Meyer C, Steffen P. A Discipline of Dynamic Programming over Sequence Data. Science of Computer Programming. 2004;51:215–263. doi: 10.1016/j.scico.2003.12.005. [Cross Ref]
• Durbin R, Eddy S, Krogh A, Mitchison G. Biological sequence analysis. Cambridge University Press; 1998.
• Holmes I, Durbin R. Dynamic programming alignment accuracy. J Comp Biol. 1998;5:493–504. [PubMed]
• Drummond AJ, Rambaut A. BEAST v1.2.2. 2004. http://evolve.zoo.ox.ac.uk/beast
• Hedges SB, Poling LL. A molecular phylogeny of reptiles. Science. 1999;283:945–946. doi: 10.1126/science.283.5404.998. [PubMed] [Cross Ref]
Math Without Borders
Geometry: A Guided Inquiry, a very special textbook!
Of all the geometry texts I have used over the past 35 years, this one stands out as by far the richest, most intuitive, and most interesting. This text is unique. (See the review on
• Most geometry textbooks present a long list of facts about geometric figures organized in a rigid logical order, working generally from simple to more complex. Applications of these facts may or
may not be made clear to the student. Geometry: A Guided Inquiry starts each chapter by posing an interesting geometric problem (puzzle), called the “Central Problem” for the chapter. Clusters
of geometric facts are introduced, as needed, in the process of solving these problems. The usefulness and relevance of the new facts are therefore apparent from the moment they are first introduced.
• Most geometry textbooks, especially those written under the influence of the “New Math” era of the 1960s, put heavy emphasis on precise use of technical vocabulary and mathematical notation.
Geometry: A Guided Inquiry emphasizes the underlying geometric and mathematical ideas and works to help the student understand them intuitively as well as logically. Overemphasis on technical
vocabulary and complex notation can actually stand in the way of understanding, so the authors use simplified vocabulary and notation wherever possible.
• Most geometry textbooks start each problem set with lots of routine, repetitive problems, gradually working up to an interesting problem or two at the end of the assignment. Geometry: A Guided
Inquiry puts the best problems right up front! From the very beginning the student is given problems worth solving.
• Most geometry textbooks read like they were written by a committee following a prescribed agenda. Most in fact are! The life is squeezed out of the narrative in the process. Geometry: A Guided
Inquiry has a distinct sense of authorship. The authors are good mathematicians, good teachers, and good writers. Their joy in the pursuit of mathematics shows through their writing.
Geometry: A Guided Inquiry makes frequent use of compass, protractor and ruler activities, data tables, guess and check methods, model-building, and other techniques of intuitive exploration in
preparation for general solutions. (The Geometer’s Sketchpad adds a new dimension to the opportunities for exploration with dynamic illustrations.) Each chapter begins with a “Central Problem” that
provides the focus and motivates the discussion in that chapter. The Central section presents all the essential new material. Along the way the student is led to a solution of the Central Problem,
while exploring its connections with other topics. After the Central section is a Review section, and each of the first seven chapters is followed by a short Algebra Review that stresses algebra
topics related to the current work.
Next comes the best part. Each chapter has an open ended Projects section with problems that are extensions to the material in the Central section, sometimes carrying the discussion in new
directions. (The Project sections include some of the most interesting material in the text!) In a classroom setting, where students work at their own pace, the quicker students would work on the
Project section while the slower students finish the Central and Review sections. In a home study environment the student should read through the whole Project section and work on as many of the
project problems as possible within the time frame available. Students who find the work easy should, rather than going faster, take more time and go deeper!
The textbook is available directly from Morton Publishing (2014 list price: $43.45) and a number of online retailers. It can also be found used online. (A number of used sources sell it for much more
than the new price from the publisher, so beware.) This text has been published by a series of publishers, but all versions are identical in content. Some of the early printings are hard cover. The
current printings are paperback.
Home Study Companion Geometry DVD
[The new version of the geometry course will be distributed on DVD starting in the summer of 2014. The files are available for download for those who purchase the course prior to the DVD release.]
The Home Study Companion Geometry DVD supplements the textbook in several important ways:
• It provides complete, worked out solutions (not just answers) to all problems in the Central and Project sections of the text.
• It provides additional commentary to supplement the presentation of the text, much as the lecture portion of a traditional course supplements the text.
• It provides a collection of nearly 300 demonstrations using Geogebra, covering most of the main concepts, and many additional explorations, in the Central and Projects sections of each chapter.
• Geometry: A Guided Inquiry was written long before the current obsession with standardized testing, and it marches to a different drummer. It covers many fascinating topics you will see in no
other high school Geometry textbook. The selection of topics in the text is excellent, but the authors’ choice of topics (in 1970) did not anticipate every choice of the Academic Standards
Commission at the end of the century. Therefore the Home Study Companion Geometry DVD adds Extensions to the chapters, as needed, to cover these additional topics. The text plus extensions cover
the standards for California and nearly all other states. (Students not affected by mandatory statewide testing can treat the extensions as optional topics.)
New! Geogebra
I previously used The Geometer’s Sketchpad for dynamic demonstrations of the concepts in the text. I have now converted this collection of demonstrations (with some new ones added) to Geogebra, a
free, downloadable alternative (download here) that offers some advantages over the Geometer's Sketchpad program. (Did I mention Geogebra is also free?) I have added activity sheets to accompany each
demo or cluster of demos. I plan to have a new Geometry DVD out for the fall, but until the new DVD is produced the complete set of files can be downloaded. (You will receive download instructions
when you purchase the course.) Anyone who would like to get a feel for Geogebra, download the program now and visit Geogebratube.org for demonstrations and activities I have shared with the public.
(Explore Geogebratube for many more demonstrations created by others as well.)
Teaching Tips
Each chapter of Geometry: A Guided Inquiry is divided into a Central section, a Review section, and a Project section, plus Algebra Review in the first few chapters. For each chapter, the Home Study
Companion Geometry DVD has:
• Pdf files with complete, worked-out solutions to every problem in the Central and Project sections of the text.
• (The pdf files contain additional commentary besides just the problem solutions.)
• A large collection of demonstrations using Geogebra.
• A video solution guide covering the Review section.
• Extension sections covering extra topics included in the California Standards.
Based on comments from users, we recommend that you:
1. Set a goal for how many weeks to spend on each chapter. A standard-length school year is about 185 school days, and there are 12 chapters. That divides out to approximately 15 school days (three
weeks) per chapter. However, there are extension sections on the DVD that have been added to Chapters 3, 4, 6, 8, 10, and 11. Allowing an average of 5 days for each extension section takes up 30
school days, leaving 155 school days to divide by 12. That comes out to about 2.5 weeks per normal chapter and 3.5 weeks for the chapters with extensions. This is just a rough guideline. Some
chapters will undoubtedly seem harder than others. Adjust the pace according to your own time constraints and the perceived difficulty of the material. Work at a comfortable but persistent pace.
2. The Geogebra demonstrations can be viewed at any time. Some of them can be understood on their own and can help motivate the material in the chapter. Others will make more sense after a certain
point in the chapter. Each demonstration, or cluster of demonstrations, is accompanied by a pdf activity sheet. So preview them at the beginning and view them again as you progress through the chapter.
3. Print out the pdf solution guide for the current chapter. This turns out to be an important point. If they are printed out, they will be more immediately accessible and you are more likely to
refer to them regularly. (However, some of the pdf files contain Internet hyperlinks that you may want to visit, so you may sometimes want to access them directly on your computer.)
4. Work through the Central section of the text as quickly as you are able, referring to the solution guide as necessary if you get stuck.
5. When you finish the Central section, go back and read through the entire pdf solution guide, both to check your work, and to digest the additional commentary that is included. This will serve as
a good review before going further.
6. Do the Review section, then view the video solution guide. Rework any problems that were missed.
7. Do the other review, self test, and algebra review items. (Answers in the text.)
8. Take the remainder of the allotted time working through selected problems from the Project section. (The Project section contains the most interesting material in the book, so don’t short-change
it!) The method here is the same as for the Central section: view the demonstrations at any time, print out the pdf solution guide, work through as many problems as you can, and at the end, read
through the entire pdf solution guide. It is best to try each project problem on your own first, but reading through the solutions of all the project problems at the end will still be of some value.
Many of the Geogebra activity sheets include tutorials showing how the demonstrations were implemented in Geogebra. These will help you become more familiar with the program and make it a useful
tool to use with this and later mathematics courses.
* * * * *
Errata (Corrections of errors and omissions)
On data banks and privacy homomorphisms
Results 1 - 10 of 97
, 1997
Cited by 415 (11 self)
Publicly accessible databases are an indispensable resource for retrieving up-to-date information. But they also pose a significant risk to the privacy of the user, since a curious database operator can follow the user's queries and infer what the user is after. Indeed, in cases where the users' intentions are to be kept secret, users are often cautious about accessing the database. It can be shown that when accessing a single database, to completely guarantee the privacy of the user, the whole database should be downloaded, namely n bits should be communicated (where n is the number of bits in the database). In this work, we investigate whether by replicating the database, more efficient solutions to the private retrieval problem can be obtained. We describe schemes that enable a user to access k replicated copies of a database (k ≥ 2) and privately retrieve information stored in the database. This means that each individual database gets no information on the identity of the item retrieved by the user. Our schemes use the replication to gain substantial saving. In particular, we have: • a two-database scheme with communication complexity O(n^{1/3}); • a scheme for a constant number, k, of databases with communication complexity O(n^{1/k}); • a scheme for (1/3)·log_2 n databases with polylogarithmic (in n) communication complexity.
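For intuition, here is a toy two-server information-theoretic PIR in the spirit of the replication idea above, using the classical XOR-of-random-subsets trick. It communicates O(n) bits per query rather than the O(n^{1/3}) of the scheme described in the abstract, and is purely illustrative.

```python
import secrets

def pir_query(db, i):
    """Retrieve db[i] (a bit) from two non-colluding replicated servers.

    Each server sees a uniformly random subset of indices, so neither
    learns anything about i on its own; XORing the two one-bit answers
    recovers db[i].
    """
    n = len(db)
    s1 = {j for j in range(n) if secrets.randbits(1)}   # random subset
    s2 = s1 ^ {i}                                       # differs from s1 only at i

    def server_answer(subset):                          # run independently by each server
        ans = 0
        for j in subset:
            ans ^= db[j]
        return ans

    return server_answer(s1) ^ server_answer(s2)

# db = [1, 0, 1, 1, 0, 0, 1, 0]
# assert all(pir_query(db, i) == db[i] for i in range(len(db)))
```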
, 1997
Cited by 270 (1 self)
. A key element of any mobile code based distributed system are the security mechanisms available to protect (a) the host against potentially hostile actions of a code fragment under execution and
(b) the mobile code against tampering attempts by the executing host. Many techniques for the first problem (a) have been developed. The second problem (b) seems to be much harder: It is the general
belief that computation privacy for mobile code cannot be provided without tamper resistant hardware. Furthermore it is doubted that an agent can keep a secret (e.g., a secret key to generate digital
signatures). There is an error in reasoning in the arguments supporting these beliefs which we are going to point out. In this paper we describe software-only approaches for providing computation
privacy for mobile code in the important case that the mobile code fragment computes an algebraic circuit (a polynomial). We further describe an approach how a mobile agent can digitally sign his...
- In Proc. STOC , 2009
Cited by 267 (11 self)
We propose a fully homomorphic encryption scheme – i.e., a scheme that allows one to evaluate circuits over encrypted data without being able to decrypt. Our solution comes in three steps. First, we
provide a general result – that, to construct an encryption scheme that permits evaluation of arbitrary circuits, it suffices to construct an encryption scheme that can evaluate (slightly augmented
versions of) its own decryption circuit; we call a scheme that can evaluate its (augmented) decryption circuit bootstrappable. Next, we describe a public key encryption scheme using ideal lattices
that is almost bootstrappable. Lattice-based cryptosystems typically have decryption algorithms with low circuit complexity, often dominated by an inner product computation that is in NC1. Also,
ideal lattices provide both additive and multiplicative homomorphisms (modulo a public-key ideal in a polynomial ring that is represented as a lattice), as needed to evaluate general circuits.
Unfortunately, our initial scheme is not quite bootstrappable – i.e., the depth that the scheme can correctly evaluate can be logarithmic in the lattice dimension, just like the depth of the
decryption circuit, but the latter is greater than the former. In the final step, we show how to modify the scheme to reduce the depth of the decryption circuit, and thereby obtain a bootstrappable
encryption scheme, without reducing the depth that the scheme can evaluate. Abstractly, we accomplish this by enabling the encrypter to start the decryption process, leaving less work for the
decrypter, much like the server leaves less work for the decrypter in a server-aided cryptosystem.
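As a minimal illustration of what a homomorphism on ciphertexts means (far short of the fully homomorphic scheme described above), textbook RSA is multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the plaintexts. The tiny key below is purely illustrative and entirely insecure.

```python
def rsa_homomorphism_demo():
    # Toy parameters: p = 61, q = 53, so n = 3233, with e = 17 and d = 2753.
    n, e, d = 3233, 17, 2753
    enc = lambda m: pow(m, e, n)
    dec = lambda c: pow(c, d, n)

    m1, m2 = 42, 55
    c1, c2 = enc(m1), enc(m2)
    # Multiplying ciphertexts corresponds to multiplying plaintexts mod n:
    assert dec(c1 * c2 % n) == (m1 * m2) % n
    return dec(c1 * c2 % n)
```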
, 2002
Cited by 203 (3 self)
Rapid advances in networking and Internet technologies have fueled the emergence of the "software as a service" model for enterprise computing. Successful examples of commercially viable software
services include rent-a-spreadsheet, electronic mail services, general storage services, disaster protection services. "Database as a Service" model provides users power to create, store, modify, and
retrieve data from anywhere in the world, as long as they have access to the Internet. It introduces several challenges, an important issue being data privacy. It is in this context that we
specifically address the issue of data privacy.
- Lecture Notes in Computer Science , 2001
Cited by 189 (10 self)
Informally, an obfuscator O is an (efficient, probabilistic) “compiler ” that takes as input a program (or circuit) P and produces a new program O(P) that has the same functionality as P yet is
“unintelligible ” in some sense. Obfuscators, if they exist, would have a wide variety of cryptographic and complexity-theoretic applications, ranging from software protection to homomorphic
encryption to complexity-theoretic analogues of Rice’s theorem. Most of these applications are based on an interpretation of the “unintelligibility ” condition in obfuscation as meaning that O(P) is
a “virtual black box, ” in the sense that anything one can efficiently compute given O(P), one could also efficiently compute given oracle access to P. In this work, we initiate a theoretical
investigation of obfuscation. Our main result is that, even under very weak formalizations of the above intuition, obfuscation is impossible. We prove this by constructing a family of efficient
programs P that are unobfuscatable in the sense that (a) given any efficient program P ′ that computes the same function as a program P ∈ P, the “source code ” P can be efficiently reconstructed, yet
(b) given oracle access to a (randomly selected) program P ∈ P, no efficient algorithm can reconstruct P (or even distinguish a certain bit in the code from random) except with negligible
probability. We extend our impossibility result in a number of ways, including even obfuscators that (a) are not necessarily computable in polynomial time, (b) only approximately preserve the
functionality, and (c) only need to work for very restricted models of computation (TC 0). We also rule out several potential applications of obfuscators, by constructing “unobfuscatable” signature
schemes, encryption schemes, and pseudorandom function families.
, 1989
Cited by 129 (15 self)
: We consider the problem of computing with encrypted data. Player A wishes to know the value f(x) for some x but lacks the power to compute it. Player B has the power to compute f and is willing to
send f(y) to A if she sends him y, for any y. Informally, an encryption scheme for the problem f is a method by which A, using her inferior resources, can transform the cleartext instance x into an
encrypted instance y, obtain f(y) from B, and infer f(x) from f(y) in such a way that B cannot infer x from y. When such an encryption scheme exists, we say that f is encryptable. The framework
defined in this paper enables us to prove precise statements about what an encrypted instance hides and what it leaks, in an information-theoretic sense. Our definitions are cast in the language of
probability theory and do not involve assumptions such as the intractability of factoring or the existence of one-way functions. We use our framework to describe encryption schemes for some
well-known function...
, 2004
"... Encryption is a well established technology for protecting sensitive data. However, once encrypted, data can no longer be easily queried aside from exact matches. We present an order-preserving
encryption scheme for numeric data that allows any comparison operation to be directly applied on encrypte ..."
Cited by 116 (2 self)
Encryption is a well-established technology for protecting sensitive data. However, once encrypted, data can no longer be easily queried aside from exact matches. We present an order-preserving
encryption scheme for numeric data that allows any comparison operation to be directly applied on encrypted data. Query results produced are sound (no false hits) and complete (no false drops). Our
scheme handles updates gracefully and new values can be added without requiring changes in the encryption of other values. It allows standard database indexes to be built over encrypted tables and
can easily be integrated with existing database systems. The proposed scheme has been designed to be deployed in application environments in which the intruder can get access to the encrypted
database, but does not have prior domain information such as the distribution of values and cannot encrypt or decrypt arbitrary values of his choice. The encryption is robust against estimation of
the true value in such environments.
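The order-preserving *property* itself can be sketched in a few lines: assign each plaintext in a bounded domain a ciphertext by accumulating random positive gaps, so every comparison and range query carries over to ciphertexts unchanged. All names are illustrative, and this toy table-based mapping is not the distribution-flattening scheme proposed in the paper.

```python
# Toy order-preserving mapping for a known, bounded plaintext domain.
# The whole monotone table is the secret key.
import secrets

DOMAIN = 1000                      # plaintexts are integers in [0, DOMAIN)

def keygen():
    table, acc = [], 0
    for _ in range(DOMAIN):
        acc += 1 + secrets.randbelow(1000)   # strictly positive random gap
        table.append(acc)
    return table                   # plaintext m encrypts to table[m]

key = keygen()
enc = lambda m: key[m]

assert all(enc(i) < enc(i + 1) for i in range(DOMAIN - 1))  # order preserved
# A range query [lo, hi] on plaintexts becomes [enc(lo), enc(hi)] on
# ciphertexts, so a standard B-tree index over encrypted values still works.
```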
1998
"... Mobile code technology has become a driving force for recent advances in distributed systems. The concept of mobility of executable code raises major security problems. In this paper we deal
with the protection of mobile code from possibly malicious hosts. We conceptualize on the specific cryptograp ..."
Cited by 99 (2 self)
Mobile code technology has become a driving force for recent advances in distributed systems. The concept of mobility of executable code raises major security problems. In this paper we deal with the protection of mobile code from possibly malicious hosts. We conceptualize the specific cryptographic problems posed by mobile code. We are able to provide a solution for some of these problems: we present techniques for achieving "non-interactive computing with encrypted programs" in certain cases and give a complete solution for this problem in important instances. We further present a way in which an agent can securely perform a cryptographic primitive, digital signing, in an untrusted execution environment. Our results are based on the use of homomorphic encryption schemes and function composition techniques.
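The flavor of "computing with an encrypted program" can be sketched with any additively homomorphic scheme. Below is a toy Paillier instance (insecure parameter sizes, hypothetical names) evaluating a polynomial whose coefficients travel encrypted, which is one way to read the abstract's combination of homomorphic encryption and function composition; it is a sketch under those assumptions, not the paper's protocol.

```python
# Toy additively homomorphic encryption (textbook Paillier, tiny primes)
# used to evaluate an encrypted polynomial on the host's plaintext input.
import random
from math import gcd

p, q = 1789, 1861                  # tiny primes -- demonstration only
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # decryption constant

def enc(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Enc(a)*Enc(b) = Enc(a+b) and Enc(a)^k = Enc(k*a), so the host can evaluate
# an *encrypted* polynomial on its own input without learning the
# coefficients; the agent owner decrypts the single returned ciphertext.
coeffs = [3, 0, 5]                              # hidden program: 3 + 5*x^2
enc_program = [enc(a) for a in coeffs]          # what the agent ships out
x = 7                                           # the host's input
c = 1
for i, ec in enumerate(enc_program):
    c = (c * pow(ec, x ** i, n2)) % n2          # homomorphic evaluation
assert dec(c) == 3 + 5 * x * x
```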
In Proc. 31st International Colloquium on Automata, Languages and Programming (ICALP’04), volume 3142 of LNCS, 2004
"... Abstract. The analysis of security protocols requires precise formulations of the knowledge of protocol participants and attackers. In formal approaches, this knowledge is often treated in terms
of message deducibility and indistinguishability relations. In this paper we study the decidability of th ..."
Cited by 81 (11 self)
The analysis of security protocols requires precise formulations of the knowledge of protocol participants and attackers. In formal approaches, this knowledge is often treated in terms of
message deducibility and indistinguishability relations. In this paper we study the decidability of these two relations. The messages in question may employ functions (encryption, decryption, etc.)
axiomatized in an equational theory. Our main positive results say that, for a large and useful class of equational theories, deducibility and indistinguishability are both decidable in polynomial
time.
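For one small, fixed theory -- pairing plus symmetric encryption with atomic keys -- deducibility can be decided by a decompose-then-recompose pass, which conveys why a polynomial-time bound is plausible. The sketch below uses hypothetical names and is far narrower than the class of equational theories treated in the paper.

```python
# Dolev-Yao-style deducibility for a tiny fixed theory.
# Terms are strings (atoms) or tagged tuples: ("pair", a, b), ("enc", m, k).
def analyze(knowledge):
    """Decomposition: project pairs, decrypt with already-known atomic keys."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(known):
            if not isinstance(t, tuple):
                continue
            if t[0] == "pair":
                for part in t[1:]:
                    if part not in known:
                        known.add(part); changed = True
            elif t[0] == "enc" and t[2] in known and t[1] not in known:
                known.add(t[1]); changed = True
    return known

def synthesizable(known, goal):
    """Composition: can the goal be rebuilt from the analyzed knowledge?"""
    if goal in known:
        return True
    if isinstance(goal, tuple) and goal[0] in ("pair", "enc"):
        return all(synthesizable(known, sub) for sub in goal[1:])
    return False

def deducible(knowledge, goal):
    return synthesizable(analyze(knowledge), goal)

# From {enc(s, k), k} the attacker deduces s, but not a fresh nonce n.
kb = [("enc", "s", "k"), "k"]
assert deducible(kb, "s")
assert not deducible(kb, "n")
assert deducible(kb, ("pair", "s", "k"))   # composition of known parts
```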
2000
"... This paper investigates one-round secure computation between two distrusting parties: Alice and Bob each have private inputs to a common function, but only Alice, acting as the receiver, is to
learn the output; the protocol is limited to one message from Alice to Bob followed by one message from Bob ..."
Cited by 71 (0 self)
This paper investigates one-round secure computation between two distrusting parties: Alice and Bob each have private inputs to a common function, but only Alice, acting as the receiver, is to learn the output; the protocol is limited to one message from Alice to Bob followed by one message from Bob to Alice. A model in which Bob may be computationally unbounded is investigated, which corresponds to information-theoretic security for Alice. It is shown that (1) for honest-but-curious behavior and unbounded Bob, any function computable by a polynomial-size circuit can be computed securely assuming the hardness of the decisional Diffie-Hellman problem; and (2) for malicious behavior by both (bounded) parties, any function computable by a polynomial-size circuit can be computed securely, in a public-key framework, assuming the hardness of the decisional Diffie-Hellman problem.
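The one-message-each-way shape and the role of the DDH assumption can be illustrated with a toy oblivious transfer in that style, a standard building block for such protocols rather than the paper's construction (toy group, hypothetical names): Alice's single message fixes two public keys of which she can know at most one secret key, and Bob's single reply encrypts one secret under each.

```python
# Toy one-round 1-out-of-2 oblivious transfer over a small prime-order group,
# in the spirit of DDH-based constructions. Parameters are insecure.
import secrets, hashlib

p, q, g = 2039, 1019, 4              # p = 2q + 1; g generates the order-q subgroup
C = pow(g, secrets.randbelow(q), p)  # public random group element (setup)

def H(x):                            # hash a group element to a one-byte pad
    return hashlib.sha256(str(x).encode()).digest()[0]

# --- Alice's single message: two public keys, only one secret key known to her.
b = 1                                # Alice's choice bit
k = secrets.randbelow(q)
pk = [0, 0]
pk[b] = pow(g, k, p)
pk[1 - b] = (C * pow(pk[b], -1, p)) % p   # forced relation pk[0]*pk[1] = C

# --- Bob's single reply: ElGamal-style encryptions of both one-byte secrets.
m0, m1 = 17, 99
def wrap(m, pubkey):
    r = secrets.randbelow(q)
    return pow(g, r, p), m ^ H(pow(pubkey, r, p))
c0, c1 = wrap(m0, pk[0]), wrap(m1, pk[1])

# --- Alice can open only the ciphertext for her choice.
gr, masked = (c0, c1)[b]
assert masked ^ H(pow(gr, k, p)) == (m0, m1)[b]
```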