Ppt Pythagoras Ppt Presentation
Pythagoras theorem
In mathematics, the Pythagorean theorem or Pythagoras' theorem is a relation in Euclidean geometry among the three sides of a right triangle (right-angled triangle). In terms of areas, it states:
In any right triangle, the area of the square whose side is the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares whose sides are the two legs (the two
sides that meet at a right angle).
The theorem can be written as an equation relating the lengths of the sides a, b and c, often called the Pythagorean equation: a² + b² = c², where c represents the length of the hypotenuse, and a and b represent the lengths of the other two sides.
Right-angled Triangle
The Pythagorean theorem is named after the Greek mathematician Pythagoras (569 B.C.?-500 B.C.?), who by tradition is credited with its discovery and proof, although it is often argued that knowledge
of the theorem predates him. There is evidence that Babylonian mathematicians understood the formula, although there is little surviving evidence that they fitted it into a mathematical framework.
The theorem has numerous proofs, possibly the most of any mathematical theorem. These are very diverse, including both geometric proofs and algebraic proofs, with some dating back thousands of years.
The theorem can be generalized in various ways, including higher-dimensional spaces, to spaces that are not Euclidean, to objects that are not right triangles, and indeed, to objects that are not
triangles at all, but n-dimensional solids. The Pythagorean theorem has attracted interest outside mathematics as a symbol of mathematical abstruseness, mystique, or intellectual power; popular
references in literature, plays, musicals, songs, stamps and cartoons abound.
Biography
Pythagoras was born in 572 BC on the island of Samos, Greece. In about 530 BC Pythagoras left Samos in hatred for its ruler Polycrates and settled in Crotona, Italy. He joined a religious group known as the Pythagoreans. He formed a philosophical and religious school where they studied mathematics, science and music. This attracted many followers. When involved with this group he discovered what is now known as Pythagoras’ Theorem. Also during his time there he worked out the mathematics of octaves and harmony. Because of the secrecy in the group, none of Pythagoras’ writings or books survive. Pythagoras was murdered at the age of 77, in 495 BC, and the religious school was scattered.
Contemporary importance
Pythagoras’ theorem has been used over thousands of years for many different aspects of human life. Today his theorem is used mainly in building, architecture, carpentry, navigation, astronomy and many other fields of work that involve mathematical calculations. Each of these fields uses his theorem to determine either the hypotenuse or the other two sides of a right-angled triangle.
- Builders use Pythagoras’ theorem to work out dimensions of different aspects of their constructions. This allows them to work out the exact requirements of building materials needed.
- Architects use his theorem to work out designs for the builders to use. His theorem may be used to work out exact lengths of roofs, and also the framework of a house, just to name a few.
- Carpenters use his theorem to work out the sizes of the sides of their timber structures, e.g. corner furniture.
- Navigators and astronomers use his theorem to establish distances between planets, towns, countries and stars.
Relevance
Pythagoras is often described as one of the pure mathematicians of his time and an extremely important figure in the expansion of mathematics. Pythagoras’ theorem is studied from Year 8 to Year 12 in NSW schools. Students today often wonder why geometry is so important. It allows people to think more logically and, as I have shown in Contemporary Importance, his theorem is used in numerous jobs and work areas. Students pursuing technical majors in college are expected to understand and extend this knowledge of geometry. Geometry proofs are also an important way to develop disciplined reasoning. As you can see, geometry is still just as important now as it was in Pythagoras’ time.
Mathematical achievements
Pythagoras contributed various theories to geometry, algebra, number theory, etc. All of these theories were discovered during Pythagoras’ time with the Pythagoreans. Pythagoras’ theorem is: the square on the hypotenuse in any right-angled triangle is equal to the sum of the squares on the other two sides. For example, using the formula a² + b² = c², find side c of a right triangle whose legs are 4 and 3: 4² + 3² = c²; 16 + 9 = c²; 25 = c²; c = √25 = 5.
Pythagoras believed: All things are numbers. Mathematics is the basis for everything, and geometry is the highest form of mathematical studies. The physical world can be understood through mathematics.
Certain symbols have a mystical significance. All members of the society should observe strict loyalty and secrecy.
4. The world depends upon the interaction of opposites, such as male and female, lightness and darkness, warm and cold, dry and moist, light and heavy, fast and slow. 5. The soul resides in the
brain, and is immortal. It moves from one being to another, sometimes from a human into an animal, through a series of reincarnations called transmigration until it becomes pure. Pythagoras believed
that both mathematics and music could purify.
Some of the students of Pythagoras eventually wrote down the theories, teachings and discoveries of the group, but the Pythagoreans always gave credit to Pythagoras as the Master for: 1. The five regular solids (tetrahedron, cube, octahedron, icosahedron, dodecahedron). It is believed that Pythagoras knew how to construct the first three but not the last two.
2. The sum of the angles of a triangle is equal to two right angles. 3. Pythagoras taught that Earth was a sphere in the center of the Kosmos (Universe), that the planets, stars, and the universe
were spherical because the sphere was the most perfect solid figure. He also taught that the paths of the planets were circular. Pythagoras recognized that the morning star was the same as the
evening star, Venus .
4. Pythagoras studied odd and even numbers, triangular numbers, and perfect numbers. Pythagoreans contributed to our understanding of angles, triangles, areas, proportion, polygons, and polyhedra .
5. Pythagoras also related music to mathematics. He had long played the seven string lyre, and learned how harmonious the vibrating strings sounded when the lengths of the strings were proportional
to whole numbers, such as 2:1, 3:2, 4:3. Pythagoreans also realized that this knowledge could be applied to other musical instruments.
The reports of Pythagoras' death are varied. He is said to have been killed by an angry mob, to have been caught up in a war between Agrigentum and Syracuse and killed by the Syracusans, or to have been burned out of his school in Crotona and then to have gone to Metapontum, where he starved himself to death. At least two of the stories include a scene where Pythagoras refuses to trample a crop of bean plants in order to escape, and because of this, he is caught. The Pythagorean Theorem is a cornerstone of mathematics, and continues to be so interesting to mathematicians that there are more than 400 different proofs of the theorem, including an original proof by President Garfield.
Statement of the Theorem
It is believed that a statement of the Pythagorean Theorem was discovered on a Babylonian tablet circa 1900-1600 B.C. The Pythagorean Theorem relates the three sides of a right triangle. It states that c² = a² + b², where c is the side opposite the right angle, referred to as the hypotenuse, and a and b are the sides adjacent to the right angle. In essence, the theorem simply stated is: the sum of the areas of the two small squares equals the area of the large one.
The theorem states that: For any right triangle, the square of the hypotenuse is equal to the sum of the squares of the other two sides.
Verification of Theorem
Now take out a square paper and a ruler. 1. Cut out a right triangle with base 4 cm and height 3 cm. 2. Measure the length of the hypotenuse: it is 5 cm.
Proof of Pythagoras’ Theorem: Consider a square PQRS with sides a + b. The square is cut into 4 congruent right-angled triangles (with legs a and b and hypotenuse c) and 1 smaller square with sides c.
Area of square ABCD = (a + b)², which equals the area of the 4 triangles plus the area of the smaller square: a² + 2ab + b² = 4(ab/2) + c² = 2ab + c², hence a² + b² = c².
The theorem states that: "The area of the square built upon the hypotenuse of a right triangle is equal to the sum of the areas of the squares upon the remaining sides." The Pythagorean Theorem asserts that for a right triangle, the square of the hypotenuse is equal to the sum of the squares of the other two sides: a² + b² = c². The figure at the right is a visual display of the theorem's conclusion. The figure at the left contains a proof of the theorem, because the area of the big, outer, green square is equal to the sum of the areas of the four red triangles and the little, inner white square: c² = 4(ab/2) + (a - b)² = 2ab + (a² - 2ab + b²) = a² + b².
Animated Proof of the Pythagorean Theorem
Below is an animated proof of the Pythagorean Theorem. Starting with a right triangle and squares on each side, the middle-sized square is cut into congruent quadrilaterals (the cuts pass through the center and are parallel to the sides of the biggest square). Then the quadrilaterals are hinged, rotated and shifted to the big square. Finally, the smallest square is translated to cover the remaining middle part of the biggest square. A perfect fit! Thus the sum of the squares on the smaller two sides equals the square on the biggest side. Afterward, the small square is translated back and the four quadrilaterals are directly translated back to their original positions. The process is repeated forever.
Pythagorean Triplets
The sides of a right triangle follow the Pythagorean Theorem, a² + b² = c², where a and b are the lengths of the legs of the right triangle and c is the length of the hypotenuse. A right triangle with sides of lengths 3, 4 and 5 is a special right triangle in that all the sides have whole number lengths. The three numbers 3, 4 and 5 form a Pythagorean triplet, or Pythagorean triple.
A Pythagorean triplet is a set of three whole numbers where the sum of the squares of the first two is equal to the square of the third number. Examples of Pythagorean triplets: 3, 4, 5 and 5, 12, 13.
One equation satisfying a Pythagorean triplet A, B, C: given A is odd, B = (A² - 1)/2 and C = (A² + 1)/2. Another equation, derived by Plato, is (m² + 1)² = (m² - 1)² + (2m)², where m is a natural number. This equation is called Plato's Formula.
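As a quick check (this expansion is an editorial addition, not part of the slide), both sides of Plato's Formula expand to the same thing: (m² + 1)² = m⁴ + 2m² + 1, while (m² - 1)² + (2m)² = m⁴ - 2m² + 1 + 4m² = m⁴ + 2m² + 1, so the formula is an identity for every natural number m.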
Why Memorize Pythagorean Triples?
Remember how much time it took to figure out 8 x 8 before you memorized it? (8 + 8 + 8 + 8 + 8 + 8 + 8 + 8 = 64) Think of all the work involved to solve this problem for a triangle with legs 3 and 4: a² + b² = c²; 3² + 4² = x²; 9 + 16 = x²; 25 = x²; x = 5. Wouldn’t it be nice to just know this is 5?
Good Pythagorean Triples to Memorize:
3, 4, 5 and multiples of it, like 3×2, 4×2, 5×2: 6, 8, 10.
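If you would rather generate such triples than memorize them, a small brute-force scan does the job. This sketch is an editorial addition (the presentation itself contains no code); it scans small legs and keeps whole-number hypotenuses:

  use strict;
  use warnings;

  # Scan all leg pairs up to 20 and print those with a whole-number hypotenuse.
  for my $a (1 .. 20) {
      for my $b ($a .. 20) {
          my $c = sqrt($a**2 + $b**2);
          printf "%d, %d, %d\n", $a, $b, $c if $c == int $c;
      }
  }

Running it prints 3, 4, 5; 5, 12, 13; 6, 8, 10; 8, 15, 17 and the other multiples in range.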
Application Of Pythagoras Theorem
Applications
The Pythagorean theorem has far-reaching ramifications in other fields (such as the arts), as well as practical applications. The theorem is invaluable when computing distances between two points, such as in navigation and land surveying. Another important application is in the design of ramps. Ramp designs for handicap-accessible sites and for skateboard parks are very much in demand.
Baseball Problem
A baseball “diamond” is really a square. You can use the Pythagorean theorem to find distances around a baseball diamond.
The distance between consecutive bases is 90 feet. How far does a catcher have to throw the ball from home plate to second base?
To use the Pythagorean theorem to solve for x, find the right angle. Which side is the hypotenuse? Which sides are the legs? Now use: a² + b² = c².
Baseball Problem Solution
The hypotenuse is the distance from home to second, or side x in the picture. The legs are from home to first and from first to second. Solution: x² = 90² + 90² = 16,200, so x = √16,200 ≈ 127.28 ft.
Ladder Problem
A ladder leans against a second-story window of a house. If the ladder is 25 meters long, and the base of the ladder is 7 meters from the house, how high is the window?
Ladder Problem Solution
First draw a diagram that shows the sides of the right triangle and label them: the ladder is 25 m (the hypotenuse) and the distance from the house is 7 m (one leg). Use a² + b² = c² to solve for the missing side.
7² + b² = 25²; 49 + b² = 625; b² = 576; b = 24 m. How did you do?
Indirect Measurement
Support Beam: The skyscrapers shown on page 535 are connected by a skywalk with support beams. You can use the Pythagorean Theorem to find the approximate length of each support beam.
Each support beam forms the hypotenuse of a right triangle. The right triangles are congruent, so the support beams are the same length. Use the Pythagorean Theorem to find the length of each support beam (x).
Solution: (hypotenuse)² = (leg)² + (leg)² (Pythagorean Theorem); x² = (23.26)² + (47.57)² (substitute values); x = √((23.26)² + (47.57)²) (multiply and find the positive square root); x ≈ 52.95 (use a calculator to approximate).
Let’s learn with Fun
Pythagoras Board Game
Rules: To begin, roll 2 dice. The person with the highest sum goes first. To move on the board, roll both dice. Substitute the numbers on the dice into the Pythagorean Theorem as the lengths of the legs to find the length of the hypotenuse. Using the Pythagorean Theorem a² + b² = c², a player moves around the board a distance that is the integral part of c. For example, if a 1 and a 2 were rolled, 1² + 2² = c²; 1 + 4 = c²; 5 = c². Since c = √5, or approximately 2.236, the player moves two spaces. Always round the value down. When the player lands on a ‘?’ space, a question card is drawn. If the player answers the question correctly, he or she can roll one die and advance the resulting number of places. Each player must go around the board twice to complete the game. A player must answer a ‘?’ card correctly to complete the game and become a Pythagorean.
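The move rule above is easy to mechanize. The following Perl sketch is an editorial addition (the game itself involves no programming), and move_spaces is a made-up helper name:

  use strict;
  use warnings;
  use POSIX qw(floor);

  # Distance moved: the integral part of the hypotenuse built from the two dice.
  sub move_spaces {
      my ($die1, $die2) = @_;
      my $c = sqrt($die1**2 + $die2**2);   # hypotenuse from the two legs
      return floor($c);                    # "always round the value down"
  }

  my ($a, $b) = (1 + int rand 6, 1 + int rand 6);
  printf "Rolled %d and %d: move %d spaces\n", $a, $b, move_spaces($a, $b);

For the 1-and-2 roll in the rules, move_spaces(1, 2) returns 2, matching the example.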
Pythagoras Board Game
Sample question cards:
What are the lengths of the legs of a 30-60-90 degree triangle with a hypotenuse of length 10? Answer: 5 and 5√3
If you hiked 3 km west and then 4 km north, how far are you from your starting point? Answer: 5 km
The square of the ______ of a right triangle equals the sum of the squares of the lengths of the two legs. Answer: hypotenuse
Find the missing member of the Pythagorean triple (7, __, 25). Answer: 24
What is the length of the legs in a 45-45-90 degree right triangle with hypotenuse of length √2? Answer: 1
Using a² + b² = c², find b if c = 10 and a = 6. Answer: b = 8
True or false? Pythagoras lived circa A.D. 500. Answer: false (circa 500 B.C.)
Have the person to your left pick two numbers for the legs of a right triangle. Compute the hypotenuse.
Can an isosceles triangle be a right triangle? Answer: yes
Pythagoras was of what nationality? Answer: Greek
Is (7, 8, 11) a Pythagorean triple? Answer: no
How do you spell Pythagoras?
The Pythagorean Theorem is applicable for what type of triangle? Answer: a right triangle
What is the name of the school that Pythagoras founded? Answer: The Pythagorean School
True or false? Pythagoras considered number to be the basis of creation. Answer: true
True or false? Pythagoras formulated the only proof of the Pythagorean Theorem. Answer: false (there are about 400 known proofs)
Thank You
By Rashmi Sharma, VIII-A
|
{"url":"http://www.authorstream.com/Presentation/aSGuest136625-1436603-ppt-pythagoras/","timestamp":"2014-04-20T06:03:47Z","content_type":null,"content_length":"144097","record_id":"<urn:uuid:e8fc8f59-e504-465c-8c71-5936d9820332>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
|
how do you solve:5/6(6x+12) + 1 = 2/3 (9x-3) +5 - WyzAnt Answers
how do you solve:
5/6 (6x+12) + 1 = 2/3 (9x-3) + 5 (these are FRACTIONS not division and I am LOST)
Here is a verbal explanation.
Let's do the left side first. The whole entity (6x+12) must be BOTH multiplied by 5 AND divided by 6. It doesn't matter which you do first, since they are reciprocal operations when performed on the same entity. So let's get rid of the 6, simply because it is easier (looking at the 6 and 12 in the enclosed entity). That leaves us with x+2. 5 times that? 5x+10. Oh wait... we have a +1 to add to
this. Since that is like saying 5x+10+1... well, that's just like saying 5x+11 isn't it? There it is for the left side.
Now the right side... doing it the same way we have 9x-3 getting divided by three, which leaves us with 3x-1. Times two? 6x-2. Add 5 and that makes it 6x+3.
So it looks like they are saying 5x+11 = 6x+3. Is there any number "x" which makes this true? We only need one. In fact there is only one real number which can make this true! All we have to do is
subtract one side from the other to find out what it is!
Let's do something to both sides. That's fair, since if both sides are equal I should be able to do what I want to both sides, as long as it's the same thing, and they'll still be equal.
Let's subtract the left from the right, since it will leave x by itself on the right in positive form, and the left will be zero (since it was subtracted from itself!). That gives us 0=x-8.
Let's get rid of that -8. How? Just add 8 to both sides! That cancels out the -8 leaving only x on that side, and puts a positive 8 on the other. It appears now that 8=x. Or, by the reflexive
property, x=8.
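A compact recap of the verbal steps above, written out symbolically (same algebra, just condensed):

5/6 (6x + 12) + 1 = 2/3 (9x - 3) + 5
5x + 10 + 1 = 6x - 2 + 5
5x + 11 = 6x + 3
0 = x - 8
x = 8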
THANK YOU! Now it makes sense....gotta see each level
1 Answer
Multiply both sides by LCD = 6,
5(6x+12) + 6 = 4(9x-3) + 30
Simplify by collecting variables in one side and numbers in the other side,
6x = 48
Answer: x = 8
can you show me the problem so I can see each side. I am getting stuck around the 3rd level....
5(6x+12) + 6 = 4(9x-3) + 30
30x+60+6 = 36x-12+30
30x+66 = 36x+18
Take 18 away from both sides,
30x + 48 = 36x
Take 30x away from both sides,
48 = 6x
Divide both sides by 6 to get x = 8.
|
{"url":"http://www.wyzant.com/resources/answers/14293/how_do_you_solve_5_6_6x_12_1_2_3_9x_3_5","timestamp":"2014-04-19T11:09:20Z","content_type":null,"content_length":"48282","record_id":"<urn:uuid:284d950e-885c-4bc3-b295-75aa932bae0c>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00618-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Test::RandomResults - Test non-deterministic functions
This module aims to provide ways of testing functions that are meant to return results that are random; that is, non-deterministic functions.
Some of the tests provided here might be easily achieved with other testing modules. The reason why they're here is that this way users become aware of how to test their non-deterministic functions.
This is a work in progress. Comments are welcome.
use Test::More tests => $Num_Tests;
use Test::RandomResults;
is_in( my_function, [ $list, $of, $items ], "result is inside list" );
in_between( my_function, sub { $_[0] cmp $_[1] }, 1, 10, "result between 1 and 10");
length_lt( my_function, $limit, "length less than $limit");
length_le( my_function, $limit, "length less or equal to $limit");
length_eq( my_function, $limit, "length equal to $limit");
length_ge( my_function, $limit, "length greater or equal to $limit");
length_gt( my_function, $limit, "length greater than $limit");
Whenever Test::RandomResults is invoked, a new seed is generated and outputed as diagnostics. This is done so that you can use it to debug your code, if needed.
Tests if an element belongs to an array.
is_in( my_function, [1, 2, 3], 'either 1, 2 or 3');
Tests if an element is within two boundaries.
The second parameter to this function is what it uses to do the comparisons.
To compare strings:
in_between( my_function, sub { $_[0] cmp $_[1] }, "aaa", "zzz",
'result is between "aaa" and "zzz"' );
To compare numbers:
in_between( my_function, sub { $_[0] <=> $_[1] }, 1, 10, 'result is between 1 and 10' );
To compare something else:
in_between( my_function, &your_function_here, $lower_boundary, $upper_boundary,
'result is between boundaries' );
As you can see, the function should use $_[0] and $_[1] to do the comparison. As with <=> and cmp, the function should return 1, 0 or -1 depending on whether the first argument ($_[0]) is greater,
equal to, or less than the second one ($_[1]).
in_between swaps the lower and upper limits, if need be (this means that checking whether a value is between 1 and 10 is the same as checking between 10 and 1).
Tests if length is less than a limit.
length_lt( my_function, $limit, "length less than $limit");
Tests if length is less or equal to a limit.
length_le( my_function, $limit, "length less or equal to $limit");
Tests if length is equal to a limit.
length_eq( my_function, $limit, "length equal to $limit");
Tests if length is greater or equal to a limit.
length_ge( my_function, $limit, "length greater or equal to $limit");
Tests if length is greater than a limit.
length_gt( my_function, $limit, "length greater than $limit");
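Here is a minimal, self-contained test script using these assertions. It is an editorial illustration rather than part of the module's documentation, and roll_d6 is a stand-in for whatever non-deterministic function you want to test:

  use strict;
  use warnings;
  use Test::More tests => 2;
  use Test::RandomResults;

  # A stand-in non-deterministic function: a six-sided die roll.
  sub roll_d6 { return 1 + int rand 6 }

  is_in( roll_d6(), [ 1 .. 6 ], 'result is a valid die face' );
  in_between( roll_d6(), sub { $_[0] <=> $_[1] }, 1, 6, 'result between 1 and 6' );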
* Check if N results of a function are evenly_distributed
* Allow the user to choose the seed when invoking Test::RandomResults
Jose Castro, <cog@cpan.org>
Please report any bugs or feature requests to bug-test-randomresults@rt.cpan.org, or through the web interface at http://rt.cpan.org. I will be notified, and then you'll automatically be notified of
progress on your bug as I make changes.
Copyright 2005 Jose Castro, All Rights Reserved.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
|
{"url":"http://search.cpan.org/~cog/Test-RandomResults-0.03/lib/Test/RandomResults.pm","timestamp":"2014-04-18T04:17:44Z","content_type":null,"content_length":"18568","record_id":"<urn:uuid:a1933179-4fec-44ef-8bad-1d09021ecdf5>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Factoring Quadratic Equations
Date: 7/22/96 at 23:38:0
From: Anonymous
Subject: Factoring Quadratic Equations
Can you explain in a simple form, how to do factoring? Following are
some examples of expressions I need to factor:
a). 3y^2 - 3
b). y^2 + 4y -12
c). 6x^2 + 30x - 900
d). 4x^2 + 24x + 36
Are there common points or rules I can apply in every situation?
Thanks for your help!
Date: 7/23/96 at 10:4:55
From: Doctor Paul
Subject: Factoring Quadratic Equations
a) I think the idea in factoring is to look for a number or a variable
that is present in every term of what needs factoring.
For example, in:
3y^2 - 3
the only thing that can be factored here is the 3 that is present in
each term. Factor the 3 out to the front and you get:
3 * ( y^2 - 1)
b) When factoring quadratic equations there generally is not anything
that is common in every term. In this case we cannot simply pull a
constant out front and rewrite the problem. What we *can* do is
this... you should know that your answer will look like this form:
(y + a) * (y + b)
The trick is knowing how to find a and b. Here's how to do it for the
following equation:
y^2 + 4y - 12
Take the term with no y's attached to it (the constant term). In this
particular case, the constant term is the -12. Divide the -12 into
what I call groups. Each group is a set of two numbers that when
multiplied together will give -12. With the number -12, I can think of
these groups:
{-1,12}, {-2,6}, {-3,4}, {1,-12}, {2,-6}, {3,-4}
Now you have six different groups. The next thing to do is to find out
what the coefficient of the 'y' term is. That is, we want to know
what number is being multiplied by 'y' in the quadratic equation. In
this case that number is 4, right? What we want to do is add the two
numbers of each group together and find out which one will give us 4.
If you add -1 and 12 you get 11. That's not right. So we try again:
if you add -2 and 6 then you get 4! That is the group we want.
Feel free to add the rest of the groups. You will not get 4 again.
Here is what is really important:
The numbers in the group you chose (in this case -2 and 6) will be
the a and the b that we have been looking for. Which one you chose for
a and which you chose for b does not matter. I will just use a = -2
and b = 6,
(y + (-2)) * (y + 6)
= (y - 2)(y + 6)
If you multiply it out and you get what you started with then that is
the factored answer.
Parts c and d require you to first divide by a constant and then
follow the procedure mentioned for part b. A hint: With quadratic
equations you always want to divide by the coefficient (or number)
that is multiplied by the y^2 term. So for part c you would divide
everything by 6, right? Then factor as I showed you for part b.
Good luck.
-Doctor Paul, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
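[Editor's worked check, not part of the original exchange: following the hints above, part (c) gives 6x^2 + 30x - 900 = 6(x^2 + 5x - 150); the group {15, -10} satisfies 15 * (-10) = -150 and 15 + (-10) = 5, so it factors as 6(x + 15)(x - 10). Part (d) gives 4x^2 + 24x + 36 = 4(x^2 + 6x + 9) = 4(x + 3)^2, using the group {3, 3}. Part (a) can be taken one step further by the difference of squares: 3(y^2 - 1) = 3(y - 1)(y + 1).]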
|
{"url":"http://mathforum.org/library/drmath/view/52970.html","timestamp":"2014-04-16T04:28:34Z","content_type":null,"content_length":"7832","record_id":"<urn:uuid:e03799e2-416c-4e88-b359-158e3f6e222c>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - Twin Paradox
I am trying to understand the twin paradox. So you have twin 1 and twin 2, both on planet earth. The twins are 23 years old and twin 2 leaves on a ship traveling close to the speed of light and then turns around (with or without an instantaneous turnaround time?). On his return home twin 2 finds that twin 1 has aged far more than he has.
Now, why is this? Twin 2 travels away from earth at nearly the speed of light. Let's say 10 minutes (in a universal time) passes. Even though twin 2 is traveling at nearly the speed of light, isn't he still traveling for 10 minutes? And twin 1 would still be waiting for 10 minutes. Now let's say twin 2 turns around and travels back to earth; this entire trip (from turnaround to landing) takes another 12 minutes. It still is 12 minutes for either twin 1 or 2, isn't it?
Just because he is traveling a distance, why should he be younger? Is this just our notion of time? (I understand if the time wasn't a 'universal time' it would make them much different in age.) But isn't that notion of time false anyway? Our bodies don't slow for time; they are always dying at an interval. So biologically wouldn't twin 1 and 2 be the same age, but theoretically (if we consider time as we conceive it, a real factor in our aging) their 'age' would be different?
|
{"url":"http://www.physicsforums.com/showpost.php?p=515405&postcount=1","timestamp":"2014-04-16T10:24:34Z","content_type":null,"content_length":"9514","record_id":"<urn:uuid:32587aef-02d8-4fe3-9de0-288e1600ec28>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts by Jonathon
Total # Posts: 22
In circle O, chord AB is 9 inches from the center. The diameter of the circle exceeds the length of AB by 2 inches. Find the length of AB
Write an equation using 2 as a base and 100,000 as a power of 10
How do you figure out what the slope of a graph is in the following question? The slope of the graph of y = x is what? Thanks
Can someone show me how to figure out the following problem? The slope of the graph of y = 5x is what? Thank you
Alice and Finn roll two number cubes. Which of the following rules will make the game fair? Alice wins if a total of 5 is rolled. Finn wins if a total of 9 is rolled. Alice wins if a total of 7 is
rolled. Finn wins if a total of 8 is rolled. Alice wins if a total of 3 is rolle...
Thank you so much JJ! I am home schooled and I have been struggling with these probability questions! You explained it and it made sense! Thanks!
There are 4 chocolate chip cookies and 12 oatmeal cookies in a jar. If you reach in an randomly choose 2 cookies without replacing the first, what is the probability that both will be chocolate chip?
The density of air near the earth's surface is 1.29 kg/m^3. If a helium balloon with a mass of 1 kg floats in air without rising or falling, what is the minimum volume of helium in the balloon? (Presume that the mass of the material making up the balloon is negligible.) I a...
Can anyone explain this to me? Is it correct to say that the difference in the patterns produced by monochromatic and white light is caused by the fact that monochromatic light waves interfere with
each other when they pass through a diffraction grating,while the rays of white...
At what speed do you approach your image if you walk towards a plane mirror at 1 m/s ? I am not looking for this to be answered for me. I am just looking for how to figure it out. Is there a specific
formula to use?
Describe the information you need in order to calculate a segment length in a right triangle
three apples and four bananas cost $4.85; three apples and ten bananas cost $8.75; find the cost of an apple
Replace each question mark with an appropriate expression so that the resulting expressions are equivalent. 9(a-b)/7a=?/7a^4(a-b)
Reduce each rational expression to lowest terms: 3x^2-12y^2 over x^2+4xy+4y^2
Given the formula Ca3(PO4)2 -- calcium phosphate, if you had 2.00 moles, how many moles of oxygen atoms would you have? I attempted and got an unsure answer of 16.0
The cliff is 60 m tall. The diver landed 5.6 m from its base.
The answer should follow such that lines/mm gives the slit spacing d: d = 1/(930 lines/mm) = 1.075e-6 m. Set sin(theta) = sin(90°) = 1 in the grating equation d sin(theta) = m*lambda with m = 2: lambda = (1.075e-6 * 1)/2 = 5.375e-7 m, or 537 nm.
|
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Jonathon","timestamp":"2014-04-16T11:20:06Z","content_type":null,"content_length":"10978","record_id":"<urn:uuid:5517d682-787d-4bdd-8826-e982e0239ae6>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Find equations of all tangents to the curve f(x)=9/x that have slope -1
At first you have to take the derivative of f. Do you understand why that is?
yes I do, I got -9/x^2
Ok, now what is the meaning of the derivative? Knowing this meaning, can you guess what the next step would be?
that is how a function changes as the input changes, but I'm stuck at the 2nd step. what shall I do next? plug in the slope -1 as x?
Yes, in other words it is the rate of change of a function. But the meaning I was thinking of is the same, just a little different. When the input increases by a really small amount, the value changes too. Since those two changes are very small, the rate of change is approximately the change in the value divided by the increase in the input. Do you understand what I just said?
not really, can you lead me into solving this? what is the second step I should take?
You need to understand it before getting to the next step. But if you want to know, the derivative of a function at x is the slope of its tangent line at that point; that was what I was trying to show you. Then, you should set -9/x^2 = -1, that is the slope, to find the points the line passes through.
Yes I got to this step; when I was trying to solve for x I got +-3
but the answer that it should be is y = -x - 6 and y = -x + 6
so I must have done something wrong, right?
Wait, you just found out one point in which the line passes; it is not the answer yet
What you did so far is correct
oh I see, okay, so the other point is 3, since 9/x = y and x is 3, therefore y = 3, right?
or -3, but yes, that's correct
so now what's next?
Now, you know a point that belongs to the line, and its slope. The general formula for a line is y = ax + b, and you already have a; now put in the point you know belongs to it, and you will find b.
use slope intercept form now
y = mx + b --> -3 = -1x + b
sorry, I'm kinda slow at this point, how do I set this = to 9?
You don't need to; you know x and y from the points (-3, -3) and (3, 3) that the tangents pass through, and you know a. Now: -3 = -1*(-3) + b1 and 3 = -1*3 + b2
oh okay
I got it from here
Before, you found out at which points the tangent has this slope; then you found out the value of the function at those points; and now you use that information to get the formulas for the tangent lines. You seem a little bit confused about what the numbers you are getting mean.
I got it, thanks so much for being so patient :)
You're welcome
can I ask you something else?
Of course
same type of problem but it's f(x) = square root of (x+9)
slope 1/2; I did the derivative and solved for x, I got x = -1
I plugged x = -1 back into the original equation and got y = +/- 2 sqrt 2; then I used slope-intercept to find the equations, and got y = 1/4 x + 9/4 sqrt 2
but the answer should be y = 1/4 x + 13/4. where did I go wrong?
You got the x wrong.
Or the derivative
the original f(x) is sqrt(x+9); the derivative is 1/2(x+9)^1/2
Where did I go wrong?
it's (1/2)(x+9)^(-1/2)
Yes I got that.. but solving it for x I got -1
Well, that's wrong, did you see the minus in the exponent?
No, I said that mine is 1/{2(x+9)^1/2}, same thing
\[\frac{ 1 }{ 2 }=\frac{ 1 }{ 2\sqrt{x+9} }\rightarrow \sqrt{x+9}=1\rightarrow x+9=\pm1\rightarrow x=-10 or -8\]
the slope is 1/4 not 1/2
sorry, didn't see you posting. Then the left side of the equation will be 1/4, and sqrt(x+9) = 2 and x+9 = +-4, which gives us: x = -5 and x = -13
Can you go from there?
let me try to do the calculation; I did the calculation but got +/- 2 sqrt 2.. give me 1 minute please
I got it, I think I got messed up when I was trying to square to get rid of the square root. Thanks. Do you have time to help me on other problems?
I'm almost leaving actually, but I'm here almost every day.
quick question on solving this problem \[z(4z+7) - x(4x+7) / (z-x)\]
I can combine (z-x) then cancel top and bottom, right?
so what I have left is (4z+7) * (4x+7), right?
No, be careful, think of what is being divided; when you have something that is not being divided you cannot sum them without a common denominator
oh maybe that's why. thanks
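[Editor's recap of the thread's mathematics, not part of the original conversation: for the first problem, the tangents to f(x) = 9/x with slope -1 touch the curve at (3, 3) and (-3, -3), giving y = -x + 6 and y = -x - 6. For the second, f(x) = sqrt(x+9) with slope 1/4 gives sqrt(x+9) = 2; since a square root is nonnegative, x + 9 = 4 only (x = -13 is extraneous), so the point of tangency is (-5, 2) and the tangent is y = (1/4)x + 13/4. The last expression simplifies directly: z(4z+7) - x(4x+7) = 4(z^2 - x^2) + 7(z - x) = (z - x)(4(z + x) + 7), so dividing by (z - x) leaves 4(z + x) + 7.]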
|
{"url":"http://openstudy.com/updates/50788dc1e4b02f109be43936","timestamp":"2014-04-19T04:27:07Z","content_type":null,"content_length":"149840","record_id":"<urn:uuid:5c3eb709-b1e9-421c-9b5a-e8e3e898e512>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Proposition 5
Pyramids of the same height with triangular bases are to one another as their bases.
Let there be pyramids of the same height with triangular bases ABC and DEF and vertices G and H.
I say that the base ABC is to the base DEF as the pyramid ABCG is to the pyramid DEFH.
For, if the pyramid ABCG is not to the pyramid DEFH as the base ABC is to the base DEF, then the base ABC is to the base DEF as the pyramid ABCG is either to some solid less than the pyramid DEFH or
to a greater solid.
Let it, first, be in that ratio to a less solid W.
Divide the pyramid DEFH into two pyramids equal to one another and similar to the whole and into two equal prisms.
Then the two prisms are greater than the half of the whole pyramid.
Again, divide the pyramids arising from the division similarly, and let this be done repeatedly until there are left over from the pyramid DEFH some pyramids which are less than the excess by which
the pyramid DEFH exceeds the solid W.
Let such be left, and let them be, for the sake of argument, DQRS and STUH. Therefore the remainders, the prisms in the pyramid DEFH, are greater than the solid W.
Divide the pyramid ABCG similarly, and the same number of times, as the pyramid DEFH. Therefore the base ABC is to the base DEF as the prisms in the pyramid ABCG are to the prisms in the pyramid DEFH.
But the base ABC is to the base DEF as the pyramid ABCG is to the solid W, therefore the pyramid ABCG is to the solid W as the prisms in the pyramid ABCG are to the prisms in the pyramid DEFH.
Therefore, alternately the pyramid ABCG is to the prisms in it as the solid W is to the prisms in the pyramid DEFH.
But the pyramid ABCG is greater than the prisms in it, therefore the solid W is also greater than the prisms in the pyramid DEFH.
But it is also less, which is impossible.
Therefore the pyramid ABCG is not to any solid less than the pyramid DEFH as the base ABC is to the base DEF.
Similarly it can be proved that neither is the pyramid DEFH to any solid less than the pyramid ABCG as the base DEF is to the base ABC.
I say next that neither is the pyramid ABCG to any solid greater than the pyramid DEFH as the base ABC is to the base DEF.
For, if possible, let it be in that ratio to a greater solid W.
Therefore, inversely the base DEF is to the base ABC as the solid W is to the pyramid ABCG.
But it was proved before that the solid W is to the solid ABCG as the pyramid DEFH is to some solid less than the pyramid ABCG. Therefore the base DEF is to the base ABC as the pyramid DEFH is to
some solid less than the pyramid ABCG, which was proved absurd.
Therefore the pyramid ABCG is not to any solid greater than the pyramid DEFH as the base ABC is to the base DEF.
But it was proved that neither is it in that ratio to a less solid.
Therefore the base ABC is to the base DEF as the pyramid ABCG is to the pyramid DEFH.
Therefore, pyramids of the same height with triangular bases are to one another as their bases.
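(A modern gloss, not part of Euclid's text: since the volume of a pyramid is V = (1/3)·(base area)·(height), two pyramids of the same height h on triangular bases B1 and B2 satisfy V1 : V2 = B1 : B2, which is exactly this proposition. Euclid, lacking the volume formula at this point, proves it by the method of exhaustion above.)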
|
{"url":"http://aleph0.clarku.edu/~djoyce/java/elements/bookXII/propXII5.html","timestamp":"2014-04-18T08:04:11Z","content_type":null,"content_length":"8437","record_id":"<urn:uuid:a30b523c-6474-456e-b9b8-2ff8a5ad7ec1>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Multiplying power series - Maclaurin series
March 7th 2009, 05:56 AM
Multiplying power series - Maclaurin series
Hi, I've run into some trouble trying to answer this power series question that asks:
Use multiplication or division of power series to find the first three nonzero terms in the Maclaurin series for the given function.
This is my work so far (all sums are n=0 to infinity):
$e^x = \sum \frac{x^n}{n!}$
$e^{7x} = \sum \frac{7^n x^n}{n!}$
$\ln(1+x) = \sum (-1)^{n-1}\frac{x^n}{n}$
$\ln(1-x/5) = \sum (-1)^{n-1}(-1)^n\frac{5^{-n}x^n}{n}$
$\ln(1-x/5) = \sum (-1)^{2n-1}\frac{x^n}{n5^n}$
To find the first 3 non-zero terms, I think I'm supposed to do something like the following:
n=3 $\longrightarrow$$a_0b_3+a_1b_2+a_2b_1+a_3b_0$
n=2 $\longrightarrow$$a_0b_2+a_1b_1+a_2b_0$
n=1 $\longrightarrow$$a_0b_1+a_1b_0$
n=0 $\longrightarrow$$a_0+b_0$
What I got from doing that was:
( $(-1)^{2n-1}$ becomes -1 for all n)
= $-\frac{x^3}{375}-\frac{7x^3}{50}-\frac{49x^3}{10}-\frac{343x^3}{0}$
= $-\frac{3781x^3}{375}-\frac{343x^3}{0}$
= $-\frac{x^2}{50}-\frac{7x^2}{5}-\frac{49x^2}{0}$
= $-\frac{71x^2}{50}-\frac{49x^2}{0}$
= $-\frac{x}{5}-\frac{7x}{0}$
How do I get my 3 terms from the above? I have a lot of fractions where I'm dividing by 0.
Edit: I just realized that I changed $e^{7x}$ into a power series, would the Maclaurin series for that function be the same?
March 7th 2009, 06:59 AM
I haven't checked your formulae, it looks long :D
However, note that the power series of the exponential starts at n=0, and the power series of the logarithm stars at n=1.
This should solve the problem of dividing by 0.
The MacLaurin series is a power series. The expansion of a function into a series is unique. So expanding a function as a power series is equivalent to getting the MacLaurin series. (I don't know
if I'm clear here ? :s)
To simplify calculations, you can use the Cauchy product for series. But again, be careful that the series should start at the same point
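(For reference, the Cauchy product mentioned here is, for two series starting at $n=0$: $\left(\sum_{n=0}^\infty a_n\right)\left(\sum_{n=0}^\infty b_n\right)=\sum_{n=0}^\infty c_n$ with $c_n=\sum_{k=0}^n a_k b_{n-k}$. This formula is an editorial gloss, not part of the original reply.)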
March 7th 2009, 09:43 AM
Alright, so if I changed the index for the logarithmic series to start at 1, the following:
n=3 $\longrightarrow$ $a_0b_3+a_1b_2+a_2b_1+a_3b_0$
n=2 $\longrightarrow$ $a_0b_2+a_1b_1+a_2b_0$
n=1 $\longrightarrow$ $a_0b_1+a_1b_0$
n=0 $\longrightarrow$ $a_0+b_0$
..would change to:
n=3 $\longrightarrow$$a_0b_3 + a_1b_2 + a_1b_1$
n=2 $\longrightarrow$$a_0b_2 + a_1b_1$
n=1 $\longrightarrow$$a_0b_1$
Would that be correct?
March 7th 2009, 04:05 PM
Actually, forget about the last reply.
I rechecked my work and fixed the power series for the ln function. I'll show my work for it.
$ln(1+x) = \sum (-1)^{n+1}\frac{x^{n+1}}{n+1}$
$ln(1-x/5) = \sum (-1)^{n+1}\frac{(-\frac{x}{5})^{n+1}}{(n+1)}$
$ln(1-x/5) = \sum (-1)^{n+1}(-1)^{n+1}\frac{x^{n+1}}{(n+1)5^{n+1}}$
$ln(1-x/5) = \sum (-1)^{2n+2}\frac{x^{n+1}}{(n+1)5^{n+1}}$
$ln(1-x/5) = \sum \frac{x^{n+1}}{(n+1)5^{n+1}}$
$a_n$ = $\sum \frac{(7x)^n}{n!}$
$b_n$ = $\sum \frac{x^{n+1}}{(n+1)5^{n+1}}$
With the n=3 $\longrightarrow a_0b_3 + a_1b_2 + ...$ etc. and same for n=2, n=1, n=0
I got:
$n=0 \longrightarrow 1+\frac{x}{5}$
$n=1 \longrightarrow \frac{71x^2}{50}$
$n=2 \longrightarrow \frac{3782x^3}{750}$
$n=3 \longrightarrow \frac{89568x^4}{7500}$
So are the first 3 non-zero terms of the series: $1, \frac{x}{5},$ and $\frac{71x^2}{50}$?
March 7th 2009, 11:24 PM
There is a little problem in the Taylor series for the logarithm !
The formula is $\ln(1+x)=\sum_{n=1}^\infty (-1)^{n-1} \frac{x^n}{n}$
which is $\sum_{n=0}^\infty (-1)^{\color{red}n} \frac{x^{n+1}}{n+1}$
so $\ln(1-x/5)={\color{red}-} \sum_{n=0}^\infty \frac{\left(\tfrac x5\right)^{n+1}}{n+1}$
this last modification should give you the correct answers (Wink)
Edit : there is another problem, $c_0=a_0b_0=-\frac x5$. There is no 1 ;)
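(Editorial check: carrying the corrected sign and starting index through the Cauchy product, the expansion comes out as $e^{7x}\ln(1-x/5) = -\frac{x}{5}-\frac{71x^2}{50}-\frac{1891x^3}{375}-\cdots$, so the first three nonzero terms are the ones computed above with all signs negative; note $\frac{3782}{750}=\frac{1891}{375}$.)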
|
{"url":"http://mathhelpforum.com/calculus/77328-multiplying-power-series-maclaurin-series-print.html","timestamp":"2014-04-18T23:50:31Z","content_type":null,"content_length":"22233","record_id":"<urn:uuid:166373c1-0403-4fbe-9f4d-7dc5fd9f415a>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Results 1 - 10 of 17
- SODA , 2006
"... Optimal spaced seeds were introduced by the theoretical computer science community to bioinformatics to effectively increase homology search sensitivity. They are now serving thousands of
homology search queries daily. While dozens of papers have been published on optimal spaced seeds since their in ..."
Cited by 27 (6 self)
Optimal spaced seeds were introduced by the theoretical computer science community to bioinformatics to effectively increase homology search sensitivity. They are now serving thousands of homology
search queries daily. While dozens of papers have been published on optimal spaced seeds since their invention, many fundamental questions still remain unanswered. In this paper, we settle several
open questions in this area. Specifically, we prove that when the length of a non-uniformly spaced seed is bounded by an exponential function of the seed weight, the seed outperforms strictly the
traditional consecutive seed in both (i) the average number of non-overlapping hits and (ii) the asymptotic hit probability. Then, we study the computation of the hit probability of a spaced seed,
solving three more open questions: (iii) hit probability computation in a uniform homologous region is NP-hard and (iv) it admits a PTAS; (v) the asymptotic hit probability is computable in
exponential time in seed length, independent of the homologous region length. 1
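As a toy illustration of the "hit" notion these abstracts rely on (an editorial sketch; the papers contain no such code, and the seed and region strings below are made up): a spaced seed is a 0/1 string whose 1-positions must land on identities of the similarity region.

  use strict;
  use warnings;

  # Does the spaced seed hit the 0/1 similarity region at offset $i?
  sub seed_hits_at {
      my ($seed, $region, $i) = @_;
      for my $j (0 .. length($seed) - 1) {
          next if substr($seed, $j, 1) eq '0';            # don't-care position
          return 0 if substr($region, $i + $j, 1) ne '1'; # required match fails
      }
      return 1;
  }

  my $seed   = '110101';        # 1 = must match, 0 = don't care
  my $region = '101101110111';  # 1 = identity at that alignment column
  my @hits = grep { seed_hits_at($seed, $region, $_) }
             0 .. length($region) - length($seed);
  print "hits at offsets: @hits\n";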
- in Proceedings of the 6th Asia Pacific Bioinformatics Conference (APBC , 2008
"... Spaced seed is a filter method invented to efficiently identify the regions of interest in similarity searches. It is now well known that certain spaced seeds hit (detect) a randomly sampled
similarity region with higher probabilities than the others. Assume each position of the similarity region is ..."
Cited by 9 (1 self)
Spaced seed is a filter method invented to efficiently identify the regions of interest in similarity searches. It is now well known that certain spaced seeds hit (detect) a randomly sampled
similarity region with higher probabilities than the others. Assume each position of the similarity region is identity with probability p independently. The seed optimization problem seeks for the
optimal seed achieving the highest hit probability with given length and weight. Despite that the problem was previously shown not to be NP-hard, in practice it seems difficult to solve. The only
algorithm known to compute the optimal seed is still exhaustive search in exponential time. In this article we put some insight into the hardness of the seed design problem by demonstrating the
relation between the seed optimization problem and the optimal Golomb ruler design problem, which is a well known difficult problem in combinatorial design.
, 2005
"... Using a seed to rapidly "hit" possible homologies for further scrutiny is a common practice to speed up homology search in molecular sequences. ..."
Cited by 8 (0 self)
Using a seed to rapidly "hit" possible homologies for further scrutiny is a common practice to speed up homology search in molecular sequences.
, 2007
"... We survey recent work in the seeding of alignments, particularly the follow-ups from the 2002 work of Ma, Tromp and Li that brought the concept of spaced seeds into the bioinformatics literature
[25]. Our focus is on the extensions of this work to increasingly complicated models of alignments, comin ..."
Cited by 3 (0 self)
We survey recent work in the seeding of alignments, particularly the follow-ups from the 2002 work of Ma, Tromp and Li that brought the concept of spaced seeds into the bioinformatics literature
[25]. Our focus is on the extensions of this work to increasingly complicated models of alignments, coming up to the most recent efforts in this area. 1
, 2005
"... I hereby declare that I am the sole author of this thesis. This is a true copy of the thesis, including any required final revisions, as accepted by my examiners. I understand that my thesis may
be made electronically available to the public. ii This thesis introduces new techniques for finding gene ..."
Cited by 3 (2 self)
I hereby declare that I am the sole author of this thesis. This is a true copy of the thesis, including any required final revisions, as accepted by my examiners. I understand that my thesis may be
made electronically available to the public. ii This thesis introduces new techniques for finding genes in genomic sequences. Genes are regions of a genome encoding proteins of an organism.
Identification of genes in a genome is an important step in the annotation process after a new genome is sequenced. The prediction accuracy of gene finding can be greatly improved by using
experimental evidence. This evidence includes homologies between the genome and databases of known proteins, or evolutionary conservation of genomic sequence in different species. We propose a
flexible framework to incorporate several different sources of such evidence into a gene finder based on a hidden Markov model. Various sources of evidence are expressed as partial probabilistic
statements about the annotation of positions in the sequence, and these are combined with the hidden Markov model to obtain the final gene prediction. The opportunity to
- WABI , 2009
"... Abstract. With Next Generation Sequencers, sequence based transcriptomic or epigenomic assays yield millions of short sequence reads that need to be mapped back on a reference genome. The
upcoming versions of these sequencers promise even higher sequencing capacities; this may turn the read mapping ..."
Cited by 2 (0 self)
Abstract. With Next Generation Sequencers, sequence based transcriptomic or epigenomic assays yield millions of short sequence reads that need to be mapped back on a reference genome. The upcoming
versions of these sequencers promise even higher sequencing capacities; this may turn the read mapping task into a bottleneck for which alternative pattern matching approaches must be experimented.
We present an algorithm and its implementation, called mpscan, which uses a sophisticated filtration scheme to match a set of patterns/reads exactly on a sequence. mpscan can search for millions of
reads in a single pass through the genome without indexing its sequence. Moreover, we show that mpscan offers an optimal average time complexity, which is sublinear in the text length, meaning that
it does not need to examine all sequence positions. Comparisons with BLAT-like tools and with six specialised read mapping programs (like Bowtie or ZOOM) demonstrate that mpscan also is the fastest
algorithm in practice for exact matching. Our accuracy and scalability comparisons reveal that some tools are inappropriate for read mapping. Moreover, we provide evidence suggesting that exact
matching may be a valuable solution in some read mapping applications. As most read mapping programs somehow rely on exact matching procedures to perform approximate pattern mapping, the filtration
scheme we experimented may reveal useful in the design of future algorithms. The absence of genome index gives mpscan its low memory requirement and flexibility that let it run on a desktop computer
and avoids a time-consuming genome preprocessing. 1
- BICOB , 2009
"... Spaced seeds have been extensively studied in the homology search field. A spaced seed can be regarded as a very special type of hash function on k-mers, where two k-mers have the same hash value if and only if they are identical at the w (w < k) positions ..."
Cited by 2 (1 self)
Spaced seeds have been extensively studied in the homology search field. A spaced seed can be regarded as a very special type of hash function on k-mers, where two k-mers have the same hash value if
and only if they are identical at the w (w <k) positions designated by the seed. Spaced seeds substantially increased the homology search sensitivity. It is then a natural question to ask whether
there is a better hash function (called hash seed) that provides better sensitivity than the spaced seed. We study this question in the paper. We propose a strategy to classify amino acids, which
leads to a better hash seed. Our results raise a new question about how to design the best hash seed.
- INFORMATION PROCESSING LETTERS , 2009
"... The spaced seed is a filtration method to efficiently identify the regions of interest in string similarity searches. It is important to find the optimal spaced seed that achieves the highest
search sensitivity. For some simple distributions of the similarities, the seed optimization problem was pro ..."
Cited by 1 (0 self)
The spaced seed is a filtration method used to efficiently identify the regions of interest in string similarity searches. It is important to find the optimal spaced seed, the one that achieves the highest search sensitivity. For some simple distributions of the similarities, the seed optimization problem was proved not to be NP-hard. On the other hand, no polynomial time algorithm has been found despite extensive research in the literature. In this article we examine the hardness of the seed optimization problem via a polynomial time reduction from the optimal Golomb ruler design problem, a well-known difficult (though not known to be NP-hard) problem in combinatorial design.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3690552","timestamp":"2014-04-25T02:37:21Z","content_type":null,"content_length":"36112","record_id":"<urn:uuid:34836ec1-7c53-40a2-af6e-16da07b424ee>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
|
CityU Institutional Repository: Analysis of statistical learning algorithms in data dependent function spaces
Please use this identifier to cite or link to this item: http://hdl.handle.net/2031/5786
Title: Analysis of statistical learning algorithms in data dependent function spaces
Other Titles: Shu ju xiang guan han shu kong jian zhong tong ji xue xi suan fa de fen xi; 數據相關函數空間中統計學習算法的分析 (Chinese: Analysis of statistical learning algorithms in data dependent function spaces)
Authors: Wang, Hongyan (王洪彥)
Department: Department of Mathematics
Degree: Doctor of Philosophy
Issue Date: 2009
Publisher: City University of Hong Kong
Subjects: Computational learning theory; Approximation theory; Function spaces.
CityU Call Number: Q325.7 .W36 2009
Notes: vi, 100 leaves 30 cm.
Thesis (Ph.D.)--City University of Hong Kong, 2009.
Includes bibliographical references (leaves [87]-100)
Type: thesis
Abstract: In this thesis we study some algorithms in statistical learning theory by methods from approximation theory. First we apply the moving least-square method to the regression problem. The moving least-square method is an approximation method for data smoothing, numerical analysis, statistics and some other purposes. It involves a weight function such as Gaussian weights and a finite dimensional space of real valued functions. In our setting the data points for the moving least-square algorithm are drawn from a probability distribution. We conduct error analysis for learning the regression function by imposing mild conditions on the marginal distribution and the hypothesis space. Then we consider a learning algorithm for regression with a data dependent hypothesis space and ℓ1-regularizer. The data dependent nature of the algorithm leads to an extra error term called the hypothesis error, which is essentially different from regularization schemes with data independent hypothesis spaces. By dealing with the regularization error, sample error and hypothesis error, we estimate the total error in terms of properties of the Mercer kernel, the input space, the marginal distribution and the regression function of the regression problem. Especially for the hypothesis error, we use some techniques of scattered data interpolation in multivariate approximation to improve the convergence rates. Better learning rates are derived by imposing high order regularities of the kernel and choosing suitable values of the regularization parameter. Finally a gradient descent algorithm for learning gradients is introduced in the framework of classification problems. Learning gradients is one approach for variable selection and feature covariation estimation when dealing with large data of many variables or coordinates. In the classification setting involving a convex loss function, a possible algorithm for gradient learning is implemented by solving convex quadratic programming optimization problems induced by regularization schemes in reproducing kernel Hilbert spaces. The complexity of such an algorithm might be very high when the number of variables or samples is huge. Our gradient descent algorithm is simple and its convergence is elegantly studied with learning rates explicitly presented. Deep analysis of approximation by reproducing kernel Hilbert spaces under some mild conditions on the probability measure for sampling allows us to deal with a general class of convex loss functions.
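As a rough, self-contained sketch of the moving least-square idea the first part of the thesis studies (a one-dimensional toy with Gaussian weights; the bandwidth and polynomial degree are arbitrary choices, not the thesis's setting):

import numpy as np

def mls(x, y, x0, degree=1, h=0.5):
    """Fit a local polynomial at x0 with Gaussian weights and evaluate it there."""
    w = np.exp(-((x - x0) / h) ** 2)           # Gaussian weight function
    V = np.vander(x, degree + 1)               # polynomial basis, highest power first
    W = np.diag(w)
    coef = np.linalg.solve(V.T @ W @ V, V.T @ W @ y)   # weighted least squares
    return np.polyval(coef, x0)

rng = np.random.default_rng(0)
x = np.linspace(0, 3, 50)
y = np.sin(x) + 0.1 * rng.standard_normal(50)  # noisy samples of sin
print(mls(x, y, 1.5))                          # close to sin(1.5) ~ 0.997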
Catalog http://lib.cityu.edu.hk/record=b2375053
Appears in MA - Doctor of Philosophy
Items in CityU IR are protected by copyright, with all rights reserved, unless otherwise indicated.
|
{"url":"http://dspace.cityu.edu.hk/handle/2031/5786","timestamp":"2014-04-20T06:09:24Z","content_type":null,"content_length":"24464","record_id":"<urn:uuid:d95c6a36-338b-47fc-a7c6-d8468606f5c1>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
|
BLCT 1132 - Estimating I
This course develops mathematical concepts and application skills necessary for the carpenter and mason to estimate building quantities and associated costs. Topics include arithmetic operations with
whole numbers, decimals, and fractional numbers. Formulas for area, volume, and board foot quantities, and basic geometry as it pertains to construction, will be studied. The quantities estimated are in the framing/sheathing stages of enclosing a building, including concrete, brick, and block calculations.
|
{"url":"http://www.alfredstate.edu/print/academics/courses/blct-1132-estimating-i","timestamp":"2014-04-19T08:25:10Z","content_type":null,"content_length":"8984","record_id":"<urn:uuid:6023094f-1cd3-4823-af6f-a8ff76e54c06>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00261-ip-10-147-4-33.ec2.internal.warc.gz"}
|
listography: cheat sheet (math- word problems)
acute angle: An angle that measures less than 90 degrees.
acute triangle: A triangle in which all three angles are acute angles; in other words, each angle measures less than 90 degrees.
addition keywords: Words that indicate addition.
addition method: Sometimes called the elimination method, it is a method for solving a system of two equations. One or both of the equations needs to be modified so that when the two equations are added, one of the variables is eliminated.
addition property of equations: An equation is still true if the same term is added to (or subtracted from) both sides of the equation.
adjacent keywords: Two keywords that are next to each other (even if a word such as "the" appears between the keywords).
algebraic equation: A statement that two algebraic expressions are equal.
algebraic expression: A collection of numbers, variables, operations, and grouping symbols.
amount: The number that is a percentage of the base. It is the number in the numerator in a percent ratio: a/b = p/100 (see also definitions of base and percent).
analogies: Similarities in relationships between two items.
angle: Two rays sharing the same endpoint. (A ray is a part of a line that looks like an arrow. It begins at one point and goes on forever in the opposite direction.)
area: The amount of surface enclosed by a closed figure.
base: The number that is in the denominator in a percent ratio: a/b = p/100 (see also definitions of amount and percent).
base (of a triangle): One of the three sides of a triangle, the one that is perpendicular to the height.
board method: This method can be used to help translate a word problem into an equation if there is a total given in the problem.
clearing fractions: A method of simplifying an equation by multiplying both sides of the equation by the least common denominator before solving it. This method results in an equation with only integers and no fractions.
coefficient: The number in front of a variable. For example, 4 is the coefficient in the term 4x.
combining like terms: The process of adding or subtracting like terms.
commutative property of addition: The order of the terms does not change the sum.
complement: The percentage that would add to 100%. For example, 42% is the complement of 58%.
concrete: Tangible, easily understood.
consecutive even integers: Integers that are adjacent in an ordered list of even integers. For example, 4 and 6 are two consecutive even integers.
consecutive integers: Integers that are adjacent on a number line. For example, 7, 8, and 9 are three consecutive integers.
consecutive odd integers: Integers that are adjacent in an ordered list of odd integers. For example, 1 and 3 are two consecutive odd integers.
direct translation strategy: This strategy is used when you can translate each word, one at a time in the same order as written, into its corresponding algebraic symbol.
distance problems: Word problems that involve traveling a distance. The formula d = rt is used for these problems.
distributive property of multiplication over addition: The property that allows a number or term to be distributed (by multiplication) to the sum of two terms in parentheses, for example, a(b + c) = ab + ac.
division keywords: Words that indicate division.
draw a picture: Create a drawing that helps visualize the word problem.
elimination method: Sometimes called the addition method, it is a method for solving a system of two equations. One or both of the equations needs to be modified so that when the two equations are added, one of the variables is eliminated.
equation: Two expressions set equal to each other. The easiest way to differentiate between an expression and an equation is that the equation has an equal sign.
equilateral triangle: A triangle with all three sides of equal length.
estimate: An educated guess of the solution. This guess is made before setting up an equation and solving the problem.
evaluation: To follow the order of operations and determine the value.
even integers: Integers that are even numbers.
even number: A number divisible by two.
expression: A collection of constants, variables, symbols of operations, and grouping symbols.
grouping symbols: Usually parentheses, but grouping is also indicated by brackets and braces.
harmless parentheses: Parentheses that are not necessary but that do not do any harm in the expression.
height (of a triangle): A line segment that is perpendicular to the base of a triangle, with one endpoint being the opposite vertex.
homogeneous units: When both numbers are measured with the same units.
identify the variable(s): The process of choosing a letter to represent the unknown value and describing that variable.
identity property of multiplication: Any number times one has the same value.
implied multiplication: Multiplication is implied when a number is placed next to a variable, or when a number is placed next to an expression surrounded by parentheses.
integer: A counting number or a negative whole number.
investment problems: Word problems that involve investing money at a simple interest rate.
isosceles triangle: A triangle with two sides of equal length.
keywords: Words that indicate a mathematical operation.
leading keywords: If the first word in an English phrase indicates an operation, it is a leading keyword; it "leads" the expression.
least common denominator: The least common multiple of all the denominators in the problem.
least common multiple: The smallest of the common multiples of more than one number. For example, 12 and 24 are both multiples of 3 and 4; 12 is the least common multiple of 3 and 4.
like terms: Terms with the same variables raised to the same powers.
linear equation: An equation that can be put in the form Ax + By + C = 0.
mixture problems: Word problems that involve mixing two or more items to create a new mixture. Often these are chemistry problems, but they can include problems in which two different types of nuts are mixed to create mixed nuts, and so on.
money problems: Word problems that concern money.
multiple equation word problems: Word problems that can be translated into more than one equation.
multiple operations: An expression has multiple operations when you see more than one symbol for addition, subtraction, multiplication, and/or division.
multiples of a number: An infinite list of the products of a number and each whole number. For example, the multiples of 3 are 3, 6, 9, 12, 15, 18 . . .
multiplication keywords: Words that indicate multiplication.
multiplication property of equations: An equation is still true if both sides of the equation are multiplied by (or divided by) the same term.
obtuse angle: An angle that measures more than 90 degrees and less than 180 degrees.
obtuse triangle: A triangle with one obtuse angle.
odd: A number that is not divisible by two.
odd integers: Integers that are odd numbers.
order of operations: When an expression has multiple operations, they must be performed in the following order: 1) operations within parentheses; 2) exponents; 3) multiplication and division, from left to right; and 4) addition and subtraction, from left to right.
percent(s): The part per one hundred in the percent proportion: a/b = p/100 (see also definitions of amount and base).
percentages: A given part in every hundred.
perimeter: The sum of the lengths of the sides of any closed figure.
perpendicular: Two line segments are perpendicular if they meet at a right angle, 90 degrees.
Polya's four-step process: A process of problem solving published in 1945 by George Polya. The steps are as follows: 1) Understand the problem; 2) Devise a plan; 3) Carry out the plan; 4) Look back over the results.
proportion(s): Two ratios set equal to each other.
ratio: A comparison of two quantities by division.
rectangle: A four-sided closed figure in which the opposite sides are parallel and of equal length. All pairs of adjacent sides of a rectangle meet at right angles.
right angle: An angle that measures 90 degrees.
right triangle: A triangle with one right angle.
scalene triangle: A triangle with all three sides of different lengths.
simplified: An algebraic expression is simplified by using the distributive property and combining like terms.
square: A special type of rectangle in which all sides are equal in length.
substitution method: A method for solving a system of two equations. One of the equations needs to be solved for one of the variables. That expression is then substituted into the other equation for the variable. The resulting equation has only one unknown.
summation problems: Word problems with a total.
subtraction keywords: Words that indicate subtraction.
systems of equations: More than one equation that are solved simultaneously.
term: One of the addends of an expression.
translate: Change a phrase written in English to an algebraic expression, using the correct symbols.
triangle: A three-sided closed figure.
tried-and-true method: A method for solving word problems that has been used for many years. Some examples are Polya's four-step process, identifying variables, and estimation.
turnaround words: Words that indicate a change in order from left to right. The expressions are turned around, and the second English phrase becomes the first algebraic expression (and vice versa). The basic turnaround words are TO (including INTO), FROM, and THAN.
units: The method of measurement used for the numbers in a word problem. For example: feet, inches, dollars, degrees, and so on.
variable: A symbol used to represent an unknown number, often x or n.
variable omission: An error that occurs when, in the process of translating a sentence in the word problem into an equation, one of the variables is left out.
variable reversal: This error is made if the variables are switched with each other.
vertices: The three endpoints of the line segments that make up a triangle.
work problems: Word problems that involve two people or machines working together at different rates to complete one whole job.
|
{"url":"http://listography.com/kouriesova/cheat_sheet/math-_word_problems","timestamp":"2014-04-20T13:19:38Z","content_type":null,"content_length":"22345","record_id":"<urn:uuid:f7b53873-af26-4ecc-8f21-f238576d601d>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
|
March 2000 Issue
Today's Editors: Jane Heckler and Dan Kalman
In this Issue.....
Liaison Breakfast
A Favorite Induction Proof
Elbert F. Cox, SUMMA
MathServe 2000
Upcoming Pubs
Liaison Breakfast
In spite of an early morning snow storm in Washington, DC, the third annual Liaisons Breakfast at the January Joint Mathematics Meetings was a great success by all accounts. Herb Kasube, an officer
of the Illinois Section and member of the MAA Committee on Liaisons, served as Master of Ceremonies for a brief program of announcements and discussion. Participating liaisons from all over the
country met to share ideas over breakfast. As a memento of the occasion, and in appreciation for their efforts as liaisons, participants received a special mouse-pad featuring an MAA Liaisons logo. We hope that the breakfast will become a tradition, and that all liaisons will have a chance to attend one at a national meeting.
A Favorite Induction Proof
One session at the January meeting featured presentations by Arthur T. Benjamin, Donald S. Passman, and Gary W. Towsley, winners of this year's Deborah and Franklin Tepper Haimo Awards for
Distinguished College or University Teaching of Mathematics. (See the MAA website for more information on the awards and recipients.)
Benjamin's presentation included his favorite induction proof, which is repeated here for your consideration.
A tromino is a plane figure composed of three squares:
 _
|_|_
|_|_|
The theorem states that if n is any power of 2, then an n x n square grid with one of the corner squares removed can be tiled using trominoes. The base case of this result, when n = 2 (the first power of 2), is clear: a 2 x 2 square with one corner removed IS a tromino. For the induction step, suppose that the result holds for an n x n grid and consider the next case: a 2n x 2n grid with one corner square removed. Partition this 2n x 2n grid into 4 parts, each an n x n grid. One of the four contains the corner with the removed square, and by the induction hypothesis, it is tilable with trominoes. Now there remain three untiled n x n grids. Position one tromino at the point where these 3 meet, so that one square of the tromino lies in each of the three untiled grids. That leaves untiled three n x n grids with one corner removed, and these are tilable again by the induction hypothesis.
That completes the proof.
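The proof is constructive, so it translates directly into a short program. Here is one rendering of the recursion in Python (my own sketch, not code from the talk); each tromino is printed as a distinct integer label:

from itertools import count

def tile(n, board, r0, c0, hole, ids):
    """Tile the n x n subgrid at (r0, c0), missing the cell `hole`, with trominoes."""
    if n == 2:
        tid = next(ids)                 # base case: the three remaining cells ARE a tromino
        for r in range(r0, r0 + 2):
            for c in range(c0, c0 + 2):
                if (r, c) != hole:
                    board[r][c] = tid
        return
    m = n // 2
    quads = [(r0, c0), (r0, c0 + m), (r0 + m, c0), (r0 + m, c0 + m)]
    centers = [(r0 + m - 1, c0 + m - 1), (r0 + m - 1, c0 + m),
               (r0 + m, c0 + m - 1), (r0 + m, c0 + m)]
    tid = next(ids)                     # the tromino bridging the three hole-free quadrants
    subholes = []
    for (qr, qc), ctr in zip(quads, centers):
        if qr <= hole[0] < qr + m and qc <= hole[1] < qc + m:
            subholes.append(hole)       # this quadrant already has the missing cell
        else:
            board[ctr[0]][ctr[1]] = tid # cover its central cell with the bridging tromino
            subholes.append(ctr)        # and treat that cell as the quadrant's hole
    for (qr, qc), h in zip(quads, subholes):
        tile(m, board, qr, qc, h, ids)

n = 8
board = [[0] * n for _ in range(n)]
tile(n, board, 0, 0, (0, 0), count(1))  # remove the top-left corner
for row in board:
    print(" ".join("%2d" % v for v in row))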
Elbert F. Cox, SUMMA
In the February Monthly, there is an absorbing article on the first African American to earn a math PhD in this country ("Elbert F. Cox: An Early Pioneer" by James A. Donaldson and Richard J.
Fleming). A brief summary of the article (available on the MAA website) states that Cox "...earned the degree from Cornell University in 1925 and went on to a distinguished career that ended with his retirement after 37 years at Howard University." Cox grew up in a
period when racism was openly expressed, including professions that blacks were incapable of a wide array of intellectual activities, higher mathematics among them. It is a sad chapter in the history
of our entire nation, and of our own MAA, that such attitudes were so widely repeated and accepted.
More recently, the MAA has recognized the importance of attracting people from all backgrounds to study and contribute to mathematics. An MAA program called SUMMA (Strengthening Underrepresented
Minority Mathematics Achievement) was established in 1990 to increase the representation of minorities in the fields of mathematics, science and engineering and improve the mathematics education of
minorities. Toward those ends, SUMMA secured outside funding for a variety of activities and programs, including professional development for minority faculty and intervention programs for secondary
students. More information about SUMMA is available on-line at the MAA website.
MathServe 2000 Contest
The purpose of MathServe is to build bridges that connect the mathematics community to the service community. The intent of the contest is to inspire collaborative projects that utilize mathematical
skills to address social, health, or environmental challenges.
All projects must be collaborative ventures between an educational institution--university or high school department of mathematical science--and a public, nonprofit or grassroots organization.
Work may be accomplished individually or in teams by faculty and/or students. Student participation is strongly encouraged.
Submissions are due at COMAP no later than May 1, 2000.
A copy of the Submission Form is available from the MathServe website at: http://www.comap.com/undergraduate/contests/mathserve/index.html
The MathServe Program is organized by the Consortium for Mathematics and Its Applications (COMAP), the sponsor of the Mathematical Contest in Modeling (MCM) and The Charles A. Dana Center of the
University of Texas at Austin.
MAA Publications coming soon in 2000:
Here are some books in production:
Contest Book VI: American High School Mathematics Examinations 1989-1994
- by Leo Schneider, The Anneli Lax New Mathematical Library (early May)
Using History to Teach Mathematics
- Edited by Victor Katz, MAA Notes (early summer)
Teaching Resources for Undergraduate Statistics
- edited by Thomas Moore, MAA Notes, (early summer)
Topology Meets Chemistry (Jointly published with Cambridge University Press)
- by Erica Flapan (early summer)
April....Mathematics Awareness Month (www.maa.org/news/mam00.html)
April 14....Project NExT Applications due (www.maa.org/news/next2000.html)
June 30...Undergrad student paper deadline, Mathfest 2000 (www.maa.org/students/students_index.html)
August 3-5...Mathfest 2000, UCLA
The Liaisons Newsletter is produced at the headquarters office of the MAA. Comments, suggestions, and questions are welcome. Please direct them to any of the following:
Jane Heckler
Senior Assistant for Programs
Dan Kalman
Member, Subcommittee on MAA/Departmental Liaisons
John Petro
Chair, Subcommittee on MAA/Departmental Liaisons
The Mathematical Association of America
1529 Eighteenth St., NW
Washington, DC 20036-1358
Voice: (800) 741-9415 Fax: (202) 483-5450
|
{"url":"http://www.maa.org/external_archive/Liaisons/marchNL.htm","timestamp":"2014-04-19T16:07:49Z","content_type":null,"content_length":"8539","record_id":"<urn:uuid:177dbaba-9ce4-437d-af1c-c5a35477c804>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Two-Wire Transmission Line block models the two-wire transmission line described in the block dialog box in terms of its frequency-dependent S-parameters. A two-wire transmission line is shown in
cross-section in the following figure. Its physical characteristics include the radius of the wires a, the separation or physical distance between the wire centers S, and the relative permittivity
and permeability of the wires. SimRF™ Equivalent Baseband software assumes the relative permittivity and permeability are uniform.
The block enables you to model the transmission line as a stub or as a stubless line.
Stubless Transmission Line
If you model a two-wire transmission line as a stubless line, the Two-Wire Transmission Line block first calculates the ABCD-parameters at each frequency contained in the modeling frequencies vector.
It then uses the abcd2s function to convert the ABCD-parameters to S-parameters.
The block calculates the ABCD-parameters using the physical length of the transmission line, d, and the complex propagation constant, k, using the following equations:

A = cosh(k d), B = Z[0] sinh(k d), C = sinh(k d)/Z[0], D = cosh(k d)

Z[0] and k are vectors whose elements correspond to the elements of f, a vector of modeling frequencies. Both can be expressed in terms of the resistance (R), inductance (L), conductance (G), and capacitance (C) per unit length (meters) as follows:

Z[0] = sqrt((R + j 2πf L) / (G + j 2πf C)) and k = sqrt((R + j 2πf L)(G + j 2πf C)).
In these equations:
● σ[cond] is the conductivity in the conductor.
● μ is the permeability of the dielectric.
● ε is the permittivity of the dielectric.
● ε″ is the imaginary part of ε, ε″ = ε[0]ε[r]tan δ, where:
○ ε[0] is the permittivity of free space.
○ ε[r] is the Relative permittivity constant parameter value.
○ tan δ is the Loss tangent of dielectric parameter value.
● δ[cond] is the skin depth of the conductor, which the block calculates as δ[cond] = sqrt(1/(π f μ σ[cond])).
● f is a vector of modeling frequencies determined by the Output Port block.
Shunt and Series Stubs
If you model the transmission line as a shunt or series stub, the Two-Wire Transmission Line block first calculates the ABCD-parameters at each frequency contained in the vector of modeling
frequencies. It then uses the abcd2s function to convert the ABCD-parameters to S-parameters.
Shunt ABCD-Parameters
When you set the Stub mode parameter in the mask dialog box to Shunt, the two-port network consists of a stub transmission line that you can terminate with either a short circuit or an open circuit
as shown here.
Z[in] is the input impedance of the shunt circuit. The ABCD-parameters for the shunt stub are calculated as A = 1, B = 0, C = 1/Z[in], D = 1.
Series ABCD-Parameters
When you set the Stub mode parameter in the mask dialog box to Series, the two-port network consists of a series transmission line that you can terminate with either a short circuit or an open
circuit as shown here.
Z[in] is the input impedance of the series circuit. The ABCD-parameters for the series stub are calculated as A = 1, B = Z[in], C = 0, D = 1.
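For readers following along outside MATLAB, the ABCD-to-S conversion that abcd2s performs is, for a single frequency point, the standard two-port textbook formula. A hedged Python sketch (standard relations, not MathWorks code):

import numpy as np

def abcd2s(A, B, C, D, z0=50.0):
    """Convert one ABCD matrix to S-parameters for reference impedance z0."""
    den = A + B / z0 + C * z0 + D
    s11 = (A + B / z0 - C * z0 - D) / den
    s12 = 2 * (A * D - B * C) / den
    s21 = 2 / den
    s22 = (-A + B / z0 - C * z0 + D) / den
    return np.array([[s11, s12], [s21, s22]])

# Shunt stub with input impedance Zin = 100j: A = 1, B = 0, C = 1/Zin, D = 1
print(abcd2s(1, 0, 1 / 100j, 1))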
|
{"url":"http://www.mathworks.se/help/simrf/ref/twowiretransmissionline.html?nocookie=true","timestamp":"2014-04-23T06:47:45Z","content_type":null,"content_length":"46059","record_id":"<urn:uuid:1454a924-ca74-4684-af6a-26ec3de23625>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Transforming Ground
Stand in any urban environment. Look around. All was once open countryside. We will show what is involved in moulding the ground ready for constructing roads and buildings.
When we stand in an urban setting and look around it is difficult to imagine the open countryside that was once there. Let us summarise a few of the initial steps in changing this countryside into
roads and buildings. Prior to the commencement of any construction work, the land’s shape must be transformed. If we are to construct a building then we would normally wish to start off with flat,
horizontal land. If we are to construct a road then the land upon which the road is to be laid can be quite a complex shape. The process of transforming the original land to the shape that we would
like it to be involves moving earth from one place to another. Earth may also have to be either removed from the site or brought to the site. Estimating how much earth is to be moved and how much of
it has to be brought in or taken away is important as its relocation involves a significant cost. Let us look at how we can start to estimate the volumes of earth. We take as our example the
construction of a small road.
Step 1: Model the original ground. Perhaps a contour map is available.
Step 2: Draw the outline of the road on the map.
Step 3: On the map, mark the points at which you wish to specify the height of the new road.
The mathematics used to calculate the volumes includes trigonometry, surface fitting, geometry, and integration.
We could use triangulation to model a surface:
This site [1] shows a triangulation taking place.
Props: Paper, ruler, pencil, eraser
Imagine that you are a land surveyor and that you have recorded the height of the ground at a number of locations on the construction site. Get your sheet of paper and mark on it a number of crosses,
randomly scattered over the paper. Each cross represents the position at which you have taken the height of the ground above sea level. Overall, therefore, your paper represents a plan view of the
ground. Next we need to perform a triangulation. This makes it easy for us to calculate the height of the ground anywhere, not just at the positions of the crosses. Below are two demonstrations for
achieving a good triangulation.
1. Take your pencil and ruler and start connecting the crosses to one another so as to form triangles. Each vertex of a triangle must only occur at a cross. It is best if you complete this exercise
quickly without too much thinking. Let us consider, for a moment, what makes a good triangulation. A good triangle is one where all of the sides are about equal. We must try to avoid long thin
triangles. When you have triangulated the whole surface, you can check that your triangulation is a good one and amend it where necessary. You can do this by comparing each triangle with each of its
neighbours. Two triangles side by side form a quadrilateral. Think to yourself "is the line splitting the quadrilateral in two a good one, or would it be better using the other diagonal?" In this way
you will need to erase some lines and insert other lines in order to improve the triangulation. Stop when you have the perfect triangulation.
2. Take your pencil and ruler and start connecting the crosses on the outside to one another so that when you are finished you have a polygon enclosing all of the other crosses. Now, look around the
edge of the region and try to find the best triangle that you can. (A good triangle is one where all of the sides are about equal. We must try to avoid long thin triangles.) This might be a triangle
that has one side on the polygon and a vertex inside the region. Alternatively, it could be a triangle that has two sides on the polygon. Connect the points to form this new triangle. Next, look for
another good triangle on the edge of the region (which keeps shrinking). Connect the points to form this new triangle. Keep adding triangles to the edge of the region until the triangulation is
complete. This exercise is rather like having a polygonal shaped pie and biting triangular pieces out of it until it is gone.
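If you want to check a hand triangulation against software, SciPy's Delaunay routine produces a triangulation with exactly the "no long thin triangles" flavour described above (the surveyed points here are invented):

import numpy as np
from scipy.spatial import Delaunay

# Each row is the plan position of one surveyed cross (heights are stored separately)
points = np.array([[0, 0], [4, 0], [4, 3], [0, 3], [2, 1], [1, 2]])
tri = Delaunay(points)
print(tri.simplices)   # each row lists the indices of one triangle's vertices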
You can read more about moving soil (from Wikipedia) or more on the mathematics of calculating volumes [2].
The ground on which we build,
Any built-up environment.
Any viewpoint that allows you to see the ground level.
|
{"url":"http://www.mathsinthecity.com/sites/transforming-ground","timestamp":"2014-04-21T14:40:01Z","content_type":null,"content_length":"37876","record_id":"<urn:uuid:9e16397e-11ab-4450-9b54-9e5a8d03fbbf>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
|
5.7 Shifting operations
The shifting operations have lower priority than the arithmetic operations:
shift_expr: a_expr | shift_expr ( "<<" | ">>" ) a_expr
These operators accept plain or long integers as arguments. The arguments are converted to a common type. They shift the first argument to the left or right by the number of bits given by the second argument.
A right shift by n bits is defined as division by pow(2,n). A left shift by n bits is defined as multiplication with pow(2,n); for plain integers there is no overflow check so in that case the
operation drops bits and flips the sign if the result is not less than pow(2,31) in absolute value. Negative shift counts raise a ValueError exception.
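For example (a quick interactive check of the definitions above; the behaviour of these cases is unchanged in later Python versions):

>>> 5 << 2          # multiplication by pow(2, 2)
20
>>> 20 >> 2         # division by pow(2, 2)
5
>>> -20 >> 2        # right shift rounds toward negative infinity
-5
>>> 5 << -1
Traceback (most recent call last):
  ...
ValueError: negative shift count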
See About this document... for information on suggesting changes.
|
{"url":"http://www.wingware.com/psupport/python-manual/2.1/ref/shifting.html","timestamp":"2014-04-21T02:11:46Z","content_type":null,"content_length":"4563","record_id":"<urn:uuid:59660db9-f228-4985-9b59-93a63f14bc62>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Straining flow of a micellar surfactant solution
Breward, C. J. W. and Howell, P. D. (2004) Straining flow of a micellar surfactant solution. European Journal of Applied Mathematics, 15 . pp. 511-531.
We present a mathematical model describing the distribution of monomer and micellar surfactant in a steady straining flow beneath a fixed free surface. The model includes adsorption of monomer
surfactant at the surface and a single-step reaction whereby monomer molecules combine to form micelles.
Previous studies of such systems have often assumed equilibrium between the monomer and micellar phases, i.e. that the reaction rate is effectively infinite. Our analysis shows that such an approach
inevitably fails under certain physical conditions and also cannot accurately match some experimental results. Our theory provides an improved fit with experiments and allows the reaction rates to be estimated.
|
{"url":"http://eprints.maths.ox.ac.uk/120/","timestamp":"2014-04-19T09:24:54Z","content_type":null,"content_length":"14170","record_id":"<urn:uuid:83d89cb0-d591-4f5b-a0e9-3b921d38f814>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Latest posts of: radman - Arduino Forum
The problem is that there will be inductive coupling between the antennas, so no amount of switching is going to isolate one token.
So if the muxes are used to select the transmitter in column 'B' and the receiver in row 'B', what you are saying is that the transmitters in columns 'A' and 'C' will also fire and the receiver will see multiple tags. Is that correct?
Is there no way round this by, say, reducing the transmitter power (the RFID tags will be right on top of the transmitters) and by grounding the transmitters that are not supposed to fire?
|
{"url":"http://forum.arduino.cc/index.php?action=profile;u=51043;sa=showPosts;start=60","timestamp":"2014-04-21T05:11:15Z","content_type":null,"content_length":"42927","record_id":"<urn:uuid:5e167cff-7661-4704-b88d-18824f098641>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
|
UTSA Department of Chemistry
Walter C. Ermler
Office Phone: (210) 458-7005
Office:BSE 1.104E
E-Mail: walter.ermler@utsa.edu
Curriculum Vitae
Areas of Specialization
• Computational Spectroscopy
• Modeling Heavy-element Compounds
• Theoretical Chemistry
Research Interests
• Research efforts encompass the disciplines of chemistry, physics, materials science and computational science, including the electronic structures and spectra of metal and semi-conductor clusters,
molecules and complexes comprised of heavy elements, vibrational analysis, polymer/surface interactions, radiative processes, and surface phenomena.
• Large scale computing, code development, and code implementation are ongoing. Applications codes for atomic and molecular electronic structure calculations and molecular vibrational-rotational
analysis have been published and are maintained.
• New developments in relativistic pseudopotential theory and high-order perturbation theory of molecular motion are applied in large-scale computational studies encompassing the characterization of
lanthanides and actinides in chemical waste, substrate/polymer interactions, heavy metal compounds, surface interactions and fullerene structures and spectra.
Difference densities from RuO2+ spin-orbit configuration
interaction wave functions.
"Configuration interaction calculations on the cyclic carbon clusters C8,C10, Pt@C8 and Pt@C10 and their anionic forms"
Douglas M. Dee and W.C. Ermler
Computational and Theoretical Chemistry 1030C (2014), pp. 33-37
" BOOK
R.S. Mulliken and W.C. Ermler
Academic Press, Inc., New York, 1977, 211 pp.
R.S. Mulliken and W.C. Ermler
Academic Press, Inc., New York, 1981, 447 pp.
W. C. Ermler and J. L. Tilson
Computational and Theoretical Chemistry, Vol. 1002, pp. 24-30, 2012.
"Electric dipole transition moments and permanent dipole moments for spin-orbit configuration interaction wave functions"
B. Roostaei and W. C. Ermler
Computer Physics Communications, Vol. 183, pp. 594-599, 2012.
J. L. Tilson, W. C. Ermler and R. J. Fowler
Chemical Physics Letters, Vol. 516, pp. 131-6, 2011
W.C. Ermler, R.B. Ross and P.A. Christiansen
Advances in Quantum Chemistry, Vol. 19, 1988, pp. 139-182
Relativistic Pseudopotentials and Nonlocal Effects
W. C. Ermler and M. M. Marino
New Methods in Quantum Theory, eds.
C. A. Tsipis, V. S. Popov, D. R. Herschbach and J. S. Avery, Kluwer Academic, Dordrecht, 1996
M. M. Marino, W. C. Ermler, C.W. Kern and V. Bondybey
The Journal of Chemical Physics, Vol. 96, pp. 3756-66, 1 992
J. M. Herbert and W. C. Ermler
Computers and Chemistry, Vol. 22 (2-3) 169-84, 1998
W.C. Ermler, H.H. Hsieh and L.B. Harding
Computer Physics Communications, Vol. 51, pp. 257-84, 1988.
C. Naleway, M. Seth, R. Shepard, A. F. Wagner, J. L. Tilson, W. C. Ermler and S. R. Brozell
The Journal of Chemical Physics, Vol. 116, pp. 5481-93, 2002
An ab Initio Study of AmCl^+1: f-f Spectroscopy and Chemical Binding
J. L. Tilson, C. Naleway, M. Seth, R. Shepard, A. F. Wagner, and W. C. Ermler
The Journal of Chemical Physics, Vol. 121, pp. 5661-5675, 2004.
J. L. Tilson, W.C. Ermler and R. M. Pitzer
Computer Physics Communications, Vol. 128, pp. 128-138, 2000.
Linux Cluster – 6-node Altus Dual Opteron system
|
{"url":"http://www.utsa.edu/chem/ermler.html","timestamp":"2014-04-17T12:28:58Z","content_type":null,"content_length":"24917","record_id":"<urn:uuid:9f35fb19-726d-4f62-8add-69be56bc54de>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00288-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Power estimation for Covariance Structure Models
SAS Macro Programs: csmpower
$Version: 1.1 (22 Jun 2000)
Michael Friendly
York University
The csmpower macro
Power estimation for Covariance Structure Models
csmpower carries out retrospective or prospective power computations for covariance structure models using the method of MacCallum, Browne and Sugawara (1996). Their approach allows for testing a
null hypothesis of 'not-good-fit', so that a significant result provides support for good fit.
Effect size in this approach is defined in terms of a null hypothesis and alternative hypothesis value of the root-mean-square error of approximation (RMSEA) index. These values, together with the
degrees of freedom (df) for the model being fitted, the sample size (n), and error rate (alpha), allow power to be calculated.
The values of RMSEA are printed by PROC CALIS as "RMSEA Estimate" among the many fit statistics. The statistic also appears in the OUTRAM= data set. Values of RMSEA <= .05 are typically considered
'close fit'; values .05-.08 are considered 'fair', .08-.10, 'mediocre', RMSEA > .10, 'poor'.
For a retrospective power analysis, the macro reads an OUTRAM= data set from a PROC CALIS run, and calculates power for the values of RMSEAEST and its lower and upper confidence limits. For a
prospective power analysis, values of RMSEA, DF and N must be provided through the macro arguments. The macro allows several values of rmseaa, alpha, df and sample size to be specified. Power is
calculated for each combination of these values.
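For readers without SAS, the underlying computation is small enough to sketch in a few lines of Python (my own translation of the MacCallum et al. noncentral chi-square method, not the macro itself; it assumes SciPy and covers only the direction rmseaa > rmsea0):

from scipy.stats import ncx2

def csm_power(df, n, rmsea0=0.05, rmseaa=0.08, alpha=0.05):
    """Power for H0: RMSEA = rmsea0 vs Ha: RMSEA = rmseaa (rmseaa > rmsea0)."""
    l0 = (n - 1) * df * rmsea0 ** 2        # noncentrality under H0
    la = (n - 1) * df * rmseaa ** 2        # noncentrality under Ha
    crit = ncx2.ppf(1 - alpha, df, l0)     # upper-tail critical chi-square value
    return 1 - ncx2.cdf(crit, df, la)

print(csm_power(df=6, n=649))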
csmpower is a macro program. Values must be supplied for either DATA=the name of an OUTRAM= dataset obtained from PROC CALIS, or sets of values for the RMSEAA, DF and N parameters. If the DATA=
parameter is supplied, the macro ignores values of the RMSEAA, DF and N parameters.
The arguments may be listed within parentheses in any order, separated by commas. For example:
%csmpower(data=outram_dataset, alpha=.05);
%csmpower(n=%str(200, 300, 400), df=23);
Default values are shown after the name of each parameter.
The name of an OUTRAM= data set from PROC CALIS, to carry out a retrospective power analysis using the RMSEA values, DF, and sample size from a given model. If not specified, you should supply
values for the N, DF, and RMSEAA parameters.
The name of the output dataset. If not specified, the new dataset is named CSMPOWER.
RMSEA0 =
Value of RMSEA under the null hypothesis
RMSEAA =
Value(s) of RMSEA under the alternative hypothesis
ALPHA =.05
Error rate (significance level) for tests
DF =
Degrees of freedom for the model
N =%str(40 to 100 by 20, 150 to 400 by 50)
Value(s) of sample size for which power is desired.
PLOT =%str(power * n = df)
This example fits a stringent model to Lord's Vocabulary Data, concerning two vocabulary tests, X and Y, each given under both speeded and unspeeded conditions.
We then consider the power of the tests of model fit.
%include macros(csmpower); *-- or include in an autocall library;
title "Power analysis: Lord's Vocabulary Data";
data lord(type=cov);
input _type_ $ _name_ $ x1 x2 y1 y2;
datalines;
n . 649 649 649 649
cov x1 86.3937 . . .
cov x2 57.7751 86.2632 . .
cov y1 56.8651 59.3177 97.2850 .
cov y2 58.8986 59.6683 73.8201 97.8192
mean . 0 0 0 0
;
title2 "Lord's data: H1- X1 and X2 parallel, Y1 and Y2 parallel, rho=1";
proc calis data=lord cov summary outram=ram1;
lineqs x1 = betax F1 + e1,
x2 = betax F1 + e2,
y1 = betay F2 + e3,
y2 = betay F2 + e4;
std F1 F2 = 1 1,
e1 e2 e3 e4 = vex vex vey vey;
cov F1 F2 = 1;
*-- Perform power analysis for the RMSEA statistics in this model;
title 'Retrospective power analysis';
%csmpower(data=ram1);
*--; title 'Prospective power analysis';
%csmpower(df=6, rmseaa=%str(.08 to .12 by .02),
plot=%str(power*n =rmseaa));
Here is the complete output from this example.
MacCallum, R., Browne, M., and Sugawara, H. M. (1996). Power Analysis and Determination of Sample Size for Covariance Structure Modeling, Psychological Methods, 1(2), 130-149.
See also
caliscmp Compare model fits from PROC CALIS
fpower Power computations for ANOVA designs
mpower Retrospective power analysis for multivariate GLMs
rpower Retrospective power analysis for univariate GLMs
A discussion of Statistical Power in Structural Equation Models by David Kaplan
|
{"url":"http://www.datavis.ca/sasmac/csmpower.html","timestamp":"2014-04-20T16:41:35Z","content_type":null,"content_length":"6745","record_id":"<urn:uuid:2e96c53e-504b-46b9-abd6-84185426206a>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00557-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Computational Biology Group
From Computational Biology Group
Background: Studies in Life Science traditionally have put less emphasis on mathematical training than other scientific disciplines like physics and engineering. Yet, an increasing fraction of
biological problems can only be addressed with some level of mathematical, statistical or computational support. Accordingly, today an expanding community of formally trained scientists studies
biological problems. At the same time many biologists lack basic knowledge and experience in applying mathematical concepts and tools to assist their research.
Concept: The fact that a significant number of biology students are uncomfortable using mathematics may be rooted in their high school education or even earlier. The frontal courses offered
to biology students at UNIL (which are usually taught by EPFL lecturers) may help brushing up basic mathematical skills of some students, but are unlikely to reach those who have long lost their
interest and self-confidence in solving mathematical problems.
The central idea of this course is to offer an alternative which aims at gaining mathematical strength by addressing a practical problem within a biological question which can only be solved with
some piece of mathematics. Thus the emphasis is on learning by doing rather than an abstract approach where mathematical insights are detached from biological applications: Small groups of two to
four students will be jointly supervised by a “biologist” and a “mathematician” (or people with a mutual background in some cases) in well-defined biological projects that require a particular
mathematical skill.
Target audience: This is an optional course open to UNIL students from their third semester in life science studies. While it is focused at BA students it is also open to other interested students.
(back to main page of Course: "Solving Biological Problems that require Math")
|
{"url":"http://www2.unil.ch/cbg/index.php?title=Concept","timestamp":"2014-04-19T04:27:14Z","content_type":null,"content_length":"18326","record_id":"<urn:uuid:a140b5d3-44fc-40db-9c59-ff998e08251f>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00503-ip-10-147-4-33.ec2.internal.warc.gz"}
|
3.4 Matrix Multiplication
3.4 Matrix Multiplication
A rectangular array of numbers, say $n$ by $m$, is called a matrix. The i-j-th element of the matrix $A$ is the element in the i-th row and j-th column, and is denoted $A_{ij}$.
Here are examples of matrices, one two by two and the other two by three:
$\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \qquad \begin{pmatrix} 1 & 4 & 2 \\ 2 & 1 & 0 \end{pmatrix}$
If matrix $A$ has the same number of columns as $B$ has rows, we define the product matrix $AB$ to be the matrix whose elements are dot products between the rows of $A$ and the columns of $B$. The element obtained by taking the dot product of the i-th row of $A$ and the j-th column of $B$ is denoted $(AB)_{ij}$. See also Section 32.2 for a fuller discussion of matrices and their properties.
3.7 Find the product of the two matrices above.
3.8 Build a spreadsheet that multiplies 4 by 4 matrices. Solution
3.9 In exercise 3.8:
1. Where is the matrix product $A B$?
2. What appears in columns $p , q , r$ and $s$ in the first four rows?
If you change any of the entries in $A$ or $B$ the product will change automatically, so you have built an automatic 4 by 4 matrix product finder.
3. Can you use this to find the product of a 2 by 3 matrix and a 3 by 4 one? How?
4. Find the tenth power of a matrix $A$ using your product finder. (Hint: use it for $A$ and for $B$ and look in the right place and you have it.)
A vector $v ⟶$ can be written either as a matrix consisting of a single row, or of a single column. When writing it as a column we will write $| v ⟶ >$ ; as a row, $< v
⟶ |$ . The square of the length of $v ⟶$ can then be written as the matrix product $< v ⟶ | | v ⟶ >$ .
A vector $v ⟶$ is an eigenvector of a matrix $M$ when $M v ⟶$ is a multiple of $v ⟶$. The multiple is called the eigenvalue of $M$ having eigenvector $v &
LongRightArrow;$. If the eigenvalue is $s$, then we have $M v ⟶ = s v ⟶$.
The applet here allows you to enter any 2 by 2 matrix, and move the vector $v ⟶$ around. When $M v ⟶$ lines up with $v ⟶$ , $v ⟶$ is an
eigenvector of $M$ with real eigenvalue which is given by the ratio of the length of $M v ⟶$ (called $v " ⟶$ in the applet) to that of $v ⟶$ , with a sign
that is positive when they point in the same direction.
Exercise 3.10 Choose a symmetric matrix and use the applet to determine the two eigenvectors, approximately. Draw them on a piece of paper. Can you notice something about them? What?
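The same experiments run in a few lines of NumPy, if a spreadsheet is not to hand (a sketch reusing the matrices above; the symmetric matrix is an arbitrary example):

import numpy as np

A = np.array([[1, 0], [1, 1]])
B = np.array([[1, 4, 2], [2, 1, 0]])
print(A @ B)                              # the product asked for in exercise 3.7

M = np.array([[2.0, 1.0], [1.0, 2.0]])    # a symmetric matrix
s, V = np.linalg.eig(M)                   # eigenvalues and eigenvectors
print(s)                                  # eigenvalues: 3 and 1
print(M @ V[:, 0], s[0] * V[:, 0])        # checks M v = s v numerically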
|
{"url":"http://ocw.mit.edu/ans7870/18/18.013a/textbook/MathML/chapter03/section04.xhtml","timestamp":"2014-04-21T09:54:42Z","content_type":null,"content_length":"15677","record_id":"<urn:uuid:c0254a7c-99d3-4fc6-8861-ef338a7d64f8>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Help with fractals
February 2nd 2010, 10:56 AM
Help with fractals
If you have a decreasing sequence of compact subsets of $X$ such that $C_1 \supset C_2 \supset \dots \supset C_m \supset \dots$, and $C = \bigcap_{m=1}^{\infty} C_m$, then show that the Hausdorff metric $D(C_n, C) \to 0$ as $n \to \infty$.
Assume $C \in S(X)$.
February 2nd 2010, 11:32 AM
Since $C$ is the intersection of the $C_m$s, it is sufficient to find a value of $m$ for which every element of $C_m$ is within distance $\varepsilon$ of $C$ (for some given $\varepsilon>0$).
Let $C_\varepsilon = \{x\in C_1 : d(x,C)\geqslant\varepsilon\}$. This is a closed (and therefore compact) subset of $C_1$. It is covered by the sets $U_n = \{x\in C_1 : x\notin C_n\}\ (n\geqslant1)$, which are open in $C_1$. By compactness there is a finite subcover, and since the sets $U_n$ form an increasing nest, there is in fact just one of them, say $U_m$, that contains $C_\varepsilon$.
It follows by taking complements that $C_m\subseteq \{x\in C_1 : d(x,C)<\varepsilon\}$. Thus $d(C_m,C)<\varepsilon$.
|
{"url":"http://mathhelpforum.com/differential-geometry/126814-help-fractals-print.html","timestamp":"2014-04-19T04:02:46Z","content_type":null,"content_length":"7301","record_id":"<urn:uuid:4024e64f-9168-41e0-ab85-6ca21ee1cc0e>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00410-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A proof concerning homogeneous DE's and integrating factors
hi The question in the attachment. OK, this question will destroy my brain. I want a start!
Because the original DE is homogeneous, use one of the two standard substitutions such as $y=ux,$ with $dy=x\,du+u\,dx.$ Moreover, the following may prove useful: $M(x,y)=M(x,ux)=x^{n}M(1,u),$ and
the same for $N,$ even to the same power of $x,$ because of the homogeneity of the DE. Then multiply through by your integrating factor, and see if you don't see some integrable combinations pop out
at you.
|
{"url":"http://mathhelpforum.com/differential-equations/175436-proof-concerning-homogeneous-de-s-integrating-factors.html","timestamp":"2014-04-17T18:39:14Z","content_type":null,"content_length":"35347","record_id":"<urn:uuid:ecb0c5a9-da0f-4422-869d-2d60c1f124d0>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Reply to comment
Quantum mechanics predicts the bizarrest things. Tiny particles like electrons can simultaneously be in two places, or, more generally, in two states that would seem mutually exclusive in our
everyday experience of physics. Similarly weirdly, particles that have once interacted can remain entangled even when they're moved far apart and then influence each other instantaneously, something
which Einstein called "spooky action at a distance". These seemingly magical properties could be exploited for exciting real-world applications, if it wasn't for another strange consequence of
quantum mechanics: that by simply looking at a quantum system you destroy many of its properties. (Find out more in this Plus article.)
The 2012 Nobel Prize for Physics has been awarded to Serge Haroche and David J. Wineland for (independently) finding ways of observing certain aspects of quantum systems without destroying them.
Haroche, of the Collège de France and Ecole Normale Supérieure in Paris, found a way of trapping individual photons (particles of light) for a record-breaking amount of time. Using extremely
reflective mirrors which bounce the photons back and forth, Haroche was able to keep the photons "alive" for almost a tenth of a second, during which time they would have travelled around 40,000km.
Cleverly devised experiments then allowed him to measure and count individual photons without destroying them. They also allowed him to use quantum entanglement to trace how a quantum system changes
from a state of superposition — being in two states at once — to the state of definite existence we expect based on our everyday experience.
David Wineland, from the University of Colorado, Boulder, used carefully tuned laser pulses to put electrically charged atoms in a state of superposition, for example occupying two different energy
levels at once.
Haroche and Wineland's work is interesting to theorists and experimentalists alike. On the theoretical side, it gives some insight into one of the greatest mysteries of quantum mechanics: exactly how
the act of measuring interferes with a quantum system, so that a particle which is in a state of superposition collapses into a single state.
On the practical side, their work may result in superfast quantum computers. While ordinary computers store information in bits which take on either the value 0 or the value 1, a quantum computer
would exploit the phenomenon of superposition to allow a quantum bit to take on both values at once. If a single quantum bit can simultaneously take on two values, then two of them can simultaneously
take on four values, three can simultaneously take on eight values, and so on. In general, n quantum bits can simultaneously take on 2^n values. It's this increased capacity to represent information
that may one day lead to computers much faster than anything around today. Wineland and his team were the first to show that a quantum operation involving two quantum bits is possible, thus paving
the way towards the superfast computers of the future.
Wineland has also used his lab techniques to build a clock that's 100 times more accurate than the clocks currently setting our time standards. Time can be defined in terms of the frequencies of
electromagnetic radiation emitted by atoms. Wineland's clock measures radiation that's within the visible light range of the spectrum, and it's therefore called an optical clock. Optical clocks are
incredibly accurate: if you had set one running at the moment of the Big Bang, it would now only be out by about five seconds.
According to Royal Swedish Academy of Sciences, who awards the Nobel Prizes, Haroche and Wineland have "opened the door to a new era of experimentation with quantum mechanics". Their methods for
probing the physical world at the smallest scales may one day help lift the veil on some of the biggest mysteries in physics.
You can find out more in this excellent write-up on the Nobel Prize website and read more about quantum mechanics on Plus.
|
{"url":"http://plus.maths.org/content/comment/reply/5788","timestamp":"2014-04-17T12:40:35Z","content_type":null,"content_length":"26063","record_id":"<urn:uuid:3f9fda50-0002-42fb-9255-ac85754aad93>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bob Stine's MBA Teaching
Follow the links below to get the the home pages for the courses that I teach in the Wharton MBA program.
This is a sample lecture, related to topics that we cover in Statistics 603 during the August pre-term. The data sets used in this sample are:
This is the August pre-term course. It covers basic descriptive statistics and the foundations for statistical inference (standard error and confidence intervals).
This is the accelerated version of Statistics 603. Same content, but covered more quickly. It is also taught during the August pre-term.
This is the waiver preparation course taught the last week of the August pre-term. It covers the material of Statistics 621.
This is the required statistics course that covers regression analysis and related ideas (analysis of variance and logistic regression). It is taught during the first half of the fall semester.
|
{"url":"http://www-stat.wharton.upenn.edu/~stine/mba-teaching.html","timestamp":"2014-04-16T10:10:18Z","content_type":null,"content_length":"2009","record_id":"<urn:uuid:62173230-4b18-4bf9-85b6-ada2e6cb1d31>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to calculate the number of transposon mutants for a genomic
How to calculate the number of transposon mutants for a genomic library?
Jerry M. jerrym at yahoo.fr
Tue Aug 16 11:31:31 EST 2005
On Sun, 14 Aug 2005 00:32:12 -0500, "JXB" <jxb at bellsouth.net> wrote:
> Hi All,
> I want to make a transposon library in a gram negative bacteria. The
> genome size in ~1.9 Mb and average gene size is 1 kb. Does anyone
> know how I calculate the theoritical number of transposon mutants I
> need to knockout every gene in this bacteria?
For this you have to use some simple statistical formulas derived from
Poisson's Law. Here is a simple one often used to calculate how many
clones are required:
N = ln(1 - P)/ln(1 - f)
where P is the probability of hitting a particular gene, f is the
fractional proportion of the genome represented by a single clone,
and N is the number of clones required to reach P.
In your case f = 1 / 1900 (you are lucky to have such a small
genome!), so here is a table showing how many (N) clones you will have
to screen to get a particular clone with a probability of P:
P N
0.100 200
0.500 1317
0.900 4374
0.990 8748
0.999 13121
As you see, if you screen about 1300 clones, you have a 50% chance to
get the one particular clone you are interested in, but you will need
to screen about 10 times more to get a 999 in 1000 chance to get it.
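If you want to reproduce the table, a few lines of Python will do it
(just the formula above; nothing fancy):

import math

f = 1.0 / 1900
for P in (0.100, 0.500, 0.900, 0.990, 0.999):
    N = math.log(1 - P) / math.log(1 - f)
    print("P = %.3f  N = %d" % (P, math.ceil(N)))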
Of course, this is in theory. In practice not all the genes have the
same size, there are DNA regions that do not carry any genes, there
are genes that are essential and cannot be disrupted, there are
transposon insertion hot spots, and so on: lots of factors that will
make your real statistics deviate from the theory.
I think the reference for this formula can be found in "The Maniatis"
(aka "The Sambrook"), but I am not sure.
Good luck!
More information about the Methods mailing list
|
{"url":"http://www.bio.net/bionet/mm/methods/2005-August/099742.html","timestamp":"2014-04-17T12:09:30Z","content_type":null,"content_length":"4195","record_id":"<urn:uuid:e592b176-b5a9-48ac-9033-32ac1ae012b3>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is the study of higher dimensions?
The methods of calculus that you probably recently learned can be generalized to higher dimensions. This means studying functions from ℝ^n to ℝ, that is, functions of the form y = f(x[1], x[2], ..., x[n]).
Ideas like limits, differentials, and integrals can be generalized to encompass functions of that form. In fact, considering higher dimensions allows us to use ideas we couldn't use with functions of
a single real variable.
Look up multi-variable calculus.
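For a concrete taste (a standard illustration, not from the original reply): for $f(x_1, x_2) = x_1^2 + x_2^2$, the single derivative of one-variable calculus generalizes to partial derivatives, one per input, collected into a gradient:

$$\frac{\partial f}{\partial x_1} = 2x_1, \qquad \frac{\partial f}{\partial x_2} = 2x_2, \qquad \nabla f = (2x_1,\, 2x_2).$$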
|
{"url":"http://www.physicsforums.com/showthread.php?p=3432851","timestamp":"2014-04-16T13:48:48Z","content_type":null,"content_length":"28415","record_id":"<urn:uuid:d452bc7b-6297-4a76-be7f-b5b56a0fa36b>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Weights of apples grown in an orchard are known to follow a normal distribution with mean 160 grams. It is known that approximately 99.7% of apples have weights between 124 and 196 grams. What is the standard deviation of weights of all apples grown in the orchard?
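A worked note (my addition, not part of the original post): for a normal distribution, about 99.7% of values lie within three standard deviations of the mean, so

$$\mu - 3\sigma = 124, \qquad \mu + 3\sigma = 196 \quad\Longrightarrow\quad 6\sigma = 196 - 124 = 72 \quad\Longrightarrow\quad \sigma = 12 \text{ grams.}$$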
|
{"url":"http://openstudy.com/updates/51437948e4b04cdfc581cfed","timestamp":"2014-04-19T15:25:36Z","content_type":null,"content_length":"146776","record_id":"<urn:uuid:c036a968-0981-4640-90ea-565458325de1>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
|
May 16th 2010, 11:15 AM #1
Junior Member
Apr 2010
I really need help with this, I have no idea at all. Help!
Use logarithms to find all solutions of the following equations
(a) e^z= e
(b) e^z= e^(-z)
a) $e^{z} = e \rightarrow z= 1 + 2 k \pi i$
b) $e^{z} = e^{-z} \rightarrow z = k \pi i$
... where in both cases k is an integer...
Kind regards
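For completeness, here is the standard derivation behind both answers (my addition, not spelled out in the reply above); it uses the fact that $e^{w} = 1$ exactly when $w = 2k\pi i$ for an integer $k$:

$$e^{z} = e \;\Longleftrightarrow\; e^{z-1} = 1 \;\Longleftrightarrow\; z = 1 + 2k\pi i,$$
$$e^{z} = e^{-z} \;\Longleftrightarrow\; e^{2z} = 1 \;\Longleftrightarrow\; 2z = 2k\pi i \;\Longleftrightarrow\; z = k\pi i.$$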
May 16th 2010, 11:46 AM #2
|
{"url":"http://mathhelpforum.com/differential-geometry/145016-logarithm.html","timestamp":"2014-04-17T10:20:55Z","content_type":null,"content_length":"32463","record_id":"<urn:uuid:8a80879f-c7b4-4e3d-a4d5-7e73fcf121ec>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bill the Lizard
From SICP section 1.3.1 Procedures as Arguments, Exercise 1.30 asks us to transform the now familiar recursive sum procedure (see exercise 1.29) into an iterative one. We're given the following template to get us started:
(define (sum term a next b)
  (define (iter a result)
    (if <??>
        <??>
        (iter <??> <??>)))
  (iter <??> <??>))
If you need a refresher on recursive and iterative processes, take a look back at SICP section 1.2.1 Linear Recursion and Iteration. The iterative example given in that section has a lot in common with our template:
(define (factorial n)
  (fact-iter 1 1 n))

(define (fact-iter product counter max-count)
  (if (> counter max-count)
      product
      (fact-iter (* counter product)
                 (+ counter 1)
                 max-count)))
The key to this procedure is the use of state variables, particularly product, which holds the result of multiplying all the values from 1 to n as the process moves from state to state. In our iterative sum procedure, result will serve as the same kind of state variable, storing the sum of all the terms.
The first two pieces are pretty easy to get. If a is greater than b, we just want to return the result.
(define (sum term a next b)
  (define (iter a result)
    (if (> a b)
        result
        (iter <??> <??>)))
  (iter <??> <??>))
The next two pieces are the really interesting parts. These are our state variables that make the iterative process work. We need to decide what values to store in a and result for the next iteration to work with. The value passed in for a should be the next term in the series, and the value passed in for result should just add the current term to the result.
(define (sum term a next b)
  (define (iter a result)
    (if (> a b)
        result
        (iter (next a) (+ (term a) result))))
  (iter <??> <??>))
Finally, we just need to define the starting values for the iterative process. The starting value for a should just be the same a passed in to the sum procedure, and since we're accumulating a sum, the starting value for result should be 0.
(define (sum term a next b)
  (define (iter a result)
    (if (> a b)
        result
        (iter (next a) (+ (term a) result))))
  (iter a 0))
If you substitute this procedure in for the old one that we used in exercise 1.29, you should see that you get the same results.
For links to all of the SICP lecture notes and exercises that I've done so far, see The SICP Challenge.
From SICP section 1.3.1 Procedures as Arguments
Exercise 1.29 asks us to define a procedure that uses Simpson's rule to approximate the value of the integral of a function between two values. (Informally, you may remember that the integral of a
function is the area under the curve of that function.)
We're given a big head start earlier in the section when an integral procedure is defined using a different method.
(define (sum term a next b)
  (if (> a b)
      0
      (+ (term a)
         (sum term (next a) next b))))

(define (integral f a b dx)
  (define (add-dx x) (+ x dx))
  (* (sum f (+ a (/ dx 2.0)) add-dx b)
     dx))
We're told to use this procedure to check our results by integrating cube between 0 and 1.
Simpson's rule states that the integral of a function f between a and b is approximated as
h(y[0] + 4y[1] + 2y[2] + 4y[3] + 2y[4] + ... + 2y[n-2] + 4y[n-1] + y[n]) / 3
where h = (b - a)/n, for some even integer n, and y[k] = f(a + kh).
This is the sum of a series, so we'll still be defining our new procedure in terms of the sum procedure used before. Using Simpson's rule, the sum of a series is multiplied by h and divided by 3, so
we'll start with that and a little of the "wishful thinking" that Professor Abelson spoke so highly of in lecture 2A.
(define (simpson f a b n)
  (/ (* h (sum term 0 inc n)) 3))
Now that we know how Simpson's rule can be defined in terms of sum, we just need to fill in the pieces that are missing.
The variable h is pretty easy to define from the description.
(define h (/ (- b a) n))
The procedure used to get from one term of the series to the next is even simpler. n is just incremented by one at each step, so we just need to define a procedure to do that.
(define (inc x) (+ x 1))
We know that the sum procedure takes two functions, term and next, and two values a and b, and computes the sum of the terms of the function from a to b. Defining the terms of the series in Simpson's
rule is a two-step process. First we have to define the function for computing y[k], which is given.
(define (y k)
  (f (+ a (* k h))))
Next we have to define a rule for computing the coefficient for each of the k terms. Once we know the coefficient we'll just multiply it by y[k] to get the complete term. The rules for defining the
coefficients are pretty simple. Notice that if k is odd, then the coefficient is always 4. If k is even, then the coefficient is usually 2, except for the first (0th) and last (nth) terms, where the
coefficient is 1.
(define (term k)
  (* (cond ((odd? k) 4)
           ((or (= k 0) (= k n)) 1)
           ((even? k) 2))
     (y k)))
Putting this all together, we have the complete procedure:
(define (simpson f a b n)
  (define h (/ (- b a) n))
  (define (inc x) (+ x 1))
  (define (y k)
    (f (+ a (* k h))))
  (define (term k)
    (* (cond ((odd? k) 4)
             ((or (= k 0) (= k n)) 1)
             ((even? k) 2))
       (y k)))
  (/ (* h (sum term 0 inc n)) 3))
The only thing left to do is to define a cube procedure and compare the results of simpson with those of the old integral procedure that we were given.
(define (cube x) (* x x x))
> (integral cube 0 1 0.01)
> (simpson cube 0 1 100.0)
> (integral cube 0 1 0.001)
> (simpson cube 0 1 1000.0)
As you can see from these results, Simpson's rule gives us a much better approximation to the integral when computing the same number of terms.
For links to all of the SICP lecture notes and exercises that I've done so far, see The SICP Challenge.
Structure and Interpretation of Computer Programs
Higher-order Procedures
Covers Text Section 1.3
You can download the video lecture or stream it on MIT's OpenCourseWare site. It's also available on YouTube.
This lecture is presented by MIT's Professor Gerald Jay Sussman.
So far in the SICP lectures and exercises, we've seen that we can use procedures to describe compound operations on numbers. This is a first-order abstraction. Instead of performing primitive
operations on the values to solve a distinct instance of a problem, we can write a procedure that performs the correct operations, then tell it what numbers to operate on when we need it. For
example, instead of writing (* 5 5) when we want to find the value of 5 squared, we can define the square procedure:
(define (square x) (* x x))
then just invoke it on any number we need to square.
In this section we'll start to think about higher-order procedures. Instead of simply manipulating data (or more precisely, what we are used to thinking of as data), as in the procedures we've
written so far, higher-order procedures manipulate other procedures by accepting them as arguments or returning them as values.
Procedures as Arguments
The video lecture explains this concept by looking at three very similar procedures, then showing how one idea can be refactored out of them to create a procedure that can be reused. The three
procedures are the sum of the integers from a to b, the sum of the squares from a to b (the text book uses the sum of cubes), and Leibniz's formula for finding π/8.
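The series behind pi-sum, stated explicitly (the lecture leaves it implicit), is a regrouping of the Leibniz series for $\pi/4$:

$$\frac{\pi}{8} = \frac{1}{1\cdot 3} + \frac{1}{5\cdot 7} + \frac{1}{9\cdot 11} + \cdots,$$

which is what pi-sum computes when called with a = 1.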
(define (sum-int a b)
  (if (> a b)
      0
      (+ a (sum-int (+ a 1) b))))

(define (sum-sq a b)
  (if (> a b)
      0
      (+ (square a) (sum-sq (+ a 1) b))))

(define (pi-sum a b)
  (if (> a b)
      0
      (+ (/ 1.0 (* a (+ a 2)))
         (pi-sum (+ a 4) b))))
As you can see, these procedures are all nearly identical. The only things that are different are the function being applied to each term in the sequence, how you get to the next term of the
sequence, and the names of the procedures themselves. We can use the similarities of these procedures to create a general pattern that describes all three.
(define (<name> a b)
  (if (> a b)
      0
      (+ <term> (<name> (<next> a) b))))
The general pattern is that the procedure:
• is given a name
• takes two arguments (a lower bound and an upper bound)
• performs a test to see if the upper bound is exceeded
• applies some procedure to the current term and adds the result to the recursive call to get the next term
• computes the next term of the sequence
The important thing to note here is that some of the things that change are procedures, not just numbers.
There's nothing very special about numbers. Numbers are just one kind of data.
Since procedures are also a type of data in Scheme, we can use them as arguments to other procedures as well. Now that we have a general pattern, we use it to define a sigma procedure.
(define (sum term a next b)
  (if (> a b)
      0
      (+ (term a)
         (sum term (next a) next b))))
The arguments term and next are procedures that are passed in to the sum procedure, and will be used to compute the value of the current term and how to get to the next one.
We can use the procedure above to rewrite sum-int as follows:
(define (sum-int a b)
  (define (identity x) x)
  (sum identity a inc b))
First, the identity procedure simply takes an argument and returns that argument. We need this because the sum-int procedure doesn't apply any function to each term of the sequence, it just sums up
the raw terms. Our sum procedure is expecting a procedure as its second argument, so we have to give it one. Next we simply call the sum procedure, passing in the identity and increment procedures
for term and next.
The sum-sq and pi-sum procedures can be rewritten as well (which is kind of the point).
(define (sum-sq a b)
  (sum square a inc b))
(define (pi-sum a b)
  (sum (lambda (i) (/ 1.0 (* i (+ i 2))))
       a
       (lambda (i) (+ i 4))
       b))
Note that in the pi-sum example, we use lambda notation to define the procedures for term and next as we are passing them in to sum. We can redefine pi-sum without lambda notation by explicitly
defining named procedures.
(define (pi-sum a b)
  (define (pi-term x)
    (/ 1.0 (* x (+ x 2))))
  (define (pi-next x)
    (+ x 4))
  (sum pi-term a pi-next b))
This makes no difference in how the code is evaluated. In the first example we passed a procedure by its definition, in the second we passed it by its name. In future articles on lectures and
exercises, I'll be using more and more lambda notation since we should be becoming more familiar with it.
Separating the abstract idea of sum into its own procedure allows us to reuse it in several situations where that idea is needed. Another important point is that we could reimplement sum itself, in
perhaps a more efficient way, and we would benefit from it everywhere it is used without having to rewrite all the procedures that use it. Here's an iterative implementation of sum.
(define (sum term a next b)
  (define (iter j ans)
    (if (> j b)
        ans
        (iter (next j)
              (+ (term j) ans))))
  (iter a 0))
Decomposing procedures in this way allows us to change one abstraction without needing to change every procedure where it is used. If we wanted to use the iterative approach to summation in the
original three procedures at the top, we'd have to rewrite all three of them individually. Putting the summation code in its own procedure allows you to change it in one place, but use it everywhere
the procedure is called.
In section 1.1.8 we looked at Heron of Alexandria's method for computing a square root.
(define (sqrt x)
  (define tolerance 0.00001)
  (define (good-enough? y)
    (< (abs (- (* y y) x)) tolerance))
  (define (improve y)
    (average (/ x y) y))
  (define (try y)
    (if (good-enough? y)
        y
        (try (improve y))))
  (try 1))
The algorithm for computing the square root of x is not intuitively obvious from looking at the code in this procedure. In this section we'll show how abstraction can be used to clarify how this
procedure works.
The procedure iteratively improves a guess until it is within a pre-defined tolerance of the correct answer. The function for improving the guess for the square root of x is to average the guess with x divided by the guess. In mathematical terms:
f(y) = (y + x/y) / 2
If you substitute √x in for y, you'll notice that we're looking for a fixed point of this function. (A fixed point of a function is a point that is mapped to itself by the function. If you put a fixed point into a function, you get the same value out.)
f(√x) = (√x + x/√x) / 2
f(√x) = (√x + √x) / 2
f(√x) = 2 * √x / 2
f(√x) = √x
We can use this to rewrite the sqrt procedure in terms of computing a fixed point. (Even though we don't have a procedure to compute a fixed point yet. That will be coming up next.)
(define (sqrt x)
  (fixed-point
   (lambda (y) (average (/ x y) y))
   1))
This procedure shows how to compute the square root of x in terms of computing a fixed point, but from this we only know that what we pass to the fixed-point procedure is another procedure and an
initial guess. At this point, fixed-point is only "wishful thinking." Here's one way to compute fixed points of a function:
(define (fixed-point f start)
  (define tolerance 0.00001)
  (define (close-enuf? u v)
    (< (abs (- u v)) tolerance))
  (define (iter old new)
    (if (close-enuf? old new)
        new
        (iter new (f new))))
  (iter start (f start)))
This procedure computes:
the fixed point of the function computed by the procedure whose name will be "f" in this procedure.
This is a key quote from the lecture because it illustrates how procedures can treated like any other data in Scheme. Here, the variable f can take on the value of any procedure you want to pass in.
The fixed point of the function passed in to fixed-point is computed by iteratively applying the function to its own result, starting at the initial guess, until the change in the result is smaller
than some tolerance. Note how the function that is passed in as the parameter f is evaluated in the iteration loop.
If you define an average procedure you can run the new sqrt procedure in a Scheme interpreter to test it out.
(define (average x y)
  (/ (+ x y) 2))
Procedures as Returned Values
There's another abstraction that we can pull out of the previous fixed-point procedure. Before we get to that, a simpler procedure for computing functions whose fixed point is the square root is:
g(y) = x/y
This has the same property as the function we looked at before, if you insert √x in for y, you get √x back. The reason that we didn't use this simpler function before is that for some inputs it will
oscillate. If x is 2, and your initial guess is 1, then the old and new values will oscillate between 2 and 1, never getting any closer together. The original function that uses the average procedure
is just damping out this oscillation. We can pull this damping concept out by first defining sqrt as a function of a fixed-point procedure that is itself a function of average damping.
(define (sqrt x)
  (fixed-point
   (average-damp (lambda (y) (/ x y)))
   1))
Here, average-damp takes a procedure as its argument and returns a procedure as its value. When given a procedure that takes an argument, average-damp returns another procedure that computes the
average of the values before and after applying the original procedure to its argument.
(define average-damp
  (lambda (f)
    (lambda (x) (average (f x) x))))
This is special because it's the first time we've seen a procedure that produces a procedure as its result.
Newton's method
A general method for finding roots (zeroes) of functions is called Newton's method. To find a y such that
f(y) = 0
the general procedure is to start with a guess y[0], then iterate the following expression using the function:
y[n+1] = y[n] - f(y[n]) / f'(y[n]),   where f'(y[n]) is df/dy evaluated at y = y[n]
This is a difference equation. Each term is the difference between the previous term and the function applied to the previous term divided by the derivative with respect to y of f evaluated at the
previous term. (The Scheme representation of this should be a lot more clear, but for now we just need to remember that the derivative of f with respect to y is a function.) We'll start the same way
we did before, by applying the method before we define it.
We can define sqrt in terms of Newton's method as follows:
(define (sqrt x)
  (newton (lambda (y) (- x (square y)))
          1))
The square root of x is computed by applying Newton's method to the function of y that computes the difference between x and the square of y. If we had a value of y for which the difference between x
and y^2 returned 0, then y would be the square root of x.
Now we have to define a procedure for Newton's method. Note that we're still using a method of iteratively improving a guess, just as we did in the earlier sqrt procedures. Look again at the
expression for Newton's method:
y[n+1] = y[n] - f(y[n]) / f'(y[n])
We'd like to find some value for y[n] such that when we plug it in on the right hand side of this expression, we get the same value back out on the left hand side (within some small tolerance). So
once again, we are looking for a fixed point.
(define (newton f guess)
  (define df (deriv f))
  (fixed-point
   (lambda (x) (- x (/ (f x) (df x))))
   guess))
This procedure takes a function and an initial guess, and computes the fixed point of the function that computes the difference of x and the quotient of the function of x and the derivative of the
function of x.
Wishful thinking is essential to good engineering, and certainly essential to good computer science.
Along the way we have to write a procedure that computes the derivative (a function) of the function passed to it.
(define deriv
  (lambda (f)
    (lambda (x)
      (/ (- (f (+ x dx))
            (f x))
         dx))))

(define dx 0.0000001)
You may remember from Calculus or a past life that the derivative of a function is the function of x that computes
(f(x + Δx) - f(x)) / Δx
for some small value of Δx. You might also remember (and perhaps a bit more readily) that the derivative of x^2 is 2x. So if we apply our deriv procedure to square, we should expect to get back a
procedure that computes 2x. Unfortunately when I tried it in a Scheme interpreter I got back:
> (deriv square)
#<procedure>
That's not very revealing, so we're not done experimenting yet. How can we figure out what that returned procedure does? Let's just apply it to some values.
> ((deriv square) 2)
> ((deriv square) 5)
> ((deriv square) 10)
> ((deriv square) 25)
> ((deriv square) 100)
That looks like a fair approximation to the expected procedure.
Abstractions and first-class procedures
The lecture wraps up with the following list by computer scientist Christopher Strachey, one of the inventors of denotational semantics.
The rights and privileges of first-class citizens:
• To be named by variables.
• To be passed as arguments to procedures.
• To be returned as values of procedures.
• To be incorporated into data structures.
We've seen that both values and procedures in Scheme meet the first three requirements. In later sections we'll see that they both meet the fourth requirement as well.
Having procedures as first class data allows us to make powerful abstractions that encode general methods like Newton's method in a very clear way.
For links to all of the SICP lecture notes and exercises that I've done so far, see The SICP Challenge.
|
{"url":"http://www.billthelizard.com/2010_04_01_archive.html","timestamp":"2014-04-20T13:20:42Z","content_type":null,"content_length":"126338","record_id":"<urn:uuid:1d94e020-3508-486b-9bce-0cff975e06b1>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Topic: Conditional distribution problem
Replies: 3 Last Post: Jun 23, 2013 6:59 PM
Messages: [ Previous | Next ]
From: quasi
Subject: Re: Conditional distribution problem
Posted: Jun 23, 2013 6:59 PM

Alexander Solla wrote:
>It turns out my mistake in doing this problem (on paper) was
>a calculator error.
The answer key: 0.417
Your calculated answer: 4.17
But you are calculating a (conditional) probability, so of
course it can't be more than 1.
Date Subject Author
6/22/13 Conditional distribution problem Alexander Solla
6/23/13 Re: Conditional distribution problem RGVickson@shaw.ca
6/23/13 Re: Conditional distribution problem Alexander Solla
6/23/13 Re: Conditional distribution problem quasi
|
{"url":"http://mathforum.org/kb/message.jspa?messageID=9143140","timestamp":"2014-04-21T16:10:58Z","content_type":null,"content_length":"19608","record_id":"<urn:uuid:cdf3de01-942c-481d-b434-bacd7a99c7e4>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
|
When the base of a ladder is 4 m from the wall, a window 16 m high can be reached. Where should the base be if a window 18 m high has to be reached.
When the base of a ladder is 4 m from the wall a window 16 m high can be reached. The wall, the ground and the ladder form a right triangle where the ladder is the hypotenuse. If the length of the
ladder is L, L^2 = 4^2 + 16^2 = 272
Let the distance from the wall where the base should be to allow the ladder to be used to reach a window 18 m high be x, using Pythagoras' Theorem:
x^2 + 18^2 = L^2 = 272
=> x^2 = 272 - 324 = -52
This gives a negative value for x showing that the ladder cannot be used to reach a window that is 18 m high. The maximum height that can be reached using the ladder is 16.49 m
The ladder cannot be used to reach a window 18 m high.
|
{"url":"http://www.enotes.com/homework-help/when-base-ladder-4-m-from-wall-window-16-m-high-324883","timestamp":"2014-04-18T01:27:48Z","content_type":null,"content_length":"25749","record_id":"<urn:uuid:3e3b0d37-2a6b-47ed-8272-0275caa93471>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Encyclopaedia Index
ShapeMaker Shapes which may be constructed
• "BFC grid image", reads and converts a BFC grid file
• "Cone", A pointed cone
• "Cube", strictly a truncated pyramid (the ends may be of differing size)
• "Cylinder", "Cylinder with spherical orifice", A cylinder wth optional hemisphere
• "Cylindrical bar turn", "Cylindrical pipe turn", a sharp corner in a bar or pipe (pipe has a hole down its centre)
• "Frustrum", "Frustrum with rectangular orifice", a frustum, cut off with an adjustable hole on its axis
• "Pyramid with cylindrical orifice", "Pyramid with rectangular orifice", "Pyramid with spherical orifice", same shape as the cube but with different holes cut into the pyramid
• "Ring","Ring pipe", a torus, and a toroidal pipe.
• "Sphere", "Sphere with cylindrical orifice", "Sphere with rectangular orifice", See pyramid; a sphere with holes.
• "Spherical shell", A hollowed out half sphere with adjustable hollowing factor
• "Spiral", "Spiral pipe", a long spiral (with a hole down the centre intube version)
• "T-junction bar", "T-junction pipe", a T junction in 2 pipes
• "Rectangular bar turn", "Rectangular pipe turn", a rectangle meets another rectangle
• "Flat spiral bar", "Flat spiral pipe", as the spiral but with a rectangular cross section
• "X-junction bar", "X-junction pipe", two bars intersect
• "Joukowsky Airfoil", "NACA 4 Digit Airfoil", two air foil shapes, one theoretical the other well tested
• "Design Workshop File" output from Design Workshop to convert to Facets format.
• "Perforated rectangular plate", a rectangular plate with holes.
Interested users are asked to suggest further simple shapes.
|
{"url":"http://www.cham.co.uk/phoenics/d_polis/d_info/shapemak/shapes.htm","timestamp":"2014-04-19T01:47:52Z","content_type":null,"content_length":"2458","record_id":"<urn:uuid:ca11dfb4-61fd-4388-8b97-8878bf61c8b4>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dundee, IL Geometry Tutor
Find a Dundee, IL Geometry Tutor
...I am able to help in pre-algebra, algebra, Geometry, College-algebra, Trigonometry and Calculus. I am also helping students who are planning to take the AP Calculus, ACT and SAT exams. Many
students who hated math started liking it after my tutoring.
12 Subjects: including geometry, calculus, statistics, algebra 1
...I have tutored my students with patience and provided them extra practice when needed for each topic they are struggling with. I make sure that students can enjoy and understand Pre-algebra and make math fun to learn. I have tutored regular pre-calculus and advanced pre-calculus to many high school students.
11 Subjects: including geometry, calculus, algebra 2, trigonometry
...Numbers, Algebra, Geometry, Data ... what more could you ask for in a refreshing review of HS math? A chance to polish up for the SAT Math. We'll see where you are, fill in some gaps, and keep
you moving toward that high score!
14 Subjects: including geometry, ASVAB, GRE, prealgebra
...I have also tutored Geometry and Calculus students. I have helped my nephew with his math in the past. I love helping students at this age - when they are just beginning their educational
7 Subjects: including geometry, algebra 1, algebra 2, trigonometry
...I’ve been teaching and/or tutoring math for almost 20 years now, and in that time, I’ve helped all ages and abilities to achieve their goals in a wide variety of topics in mathematics. My
primary goal as a tutor is to adapt to each individual's learning style in order to make learning as efficie...
25 Subjects: including geometry, calculus, statistics, algebra 1
|
{"url":"http://www.purplemath.com/dundee_il_geometry_tutors.php","timestamp":"2014-04-18T14:05:52Z","content_type":null,"content_length":"23825","record_id":"<urn:uuid:8f70c126-d170-4f66-b674-33b385b174b4>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Who wants an easy medal!
|
{"url":"http://openstudy.com/updates/51a5521ae4b0aa1ad888345c","timestamp":"2014-04-18T10:35:41Z","content_type":null,"content_length":"67638","record_id":"<urn:uuid:65034e10-de1e-473e-9e85-7fffdb13474b>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: Possible semantic bugs concerning domain and range
From: pat hayes <phayes@ai.uwf.edu>
Date: Tue, 24 Sep 2002 19:51:23 -0500
Message-Id: <p05111b57b9b6a167b6a1@[65.217.30.172]>
To: Ian Horrocks <horrocks@cs.man.ac.uk>
Cc: www-webont-wg@w3.org
>DAML+OIL, and I hope OWL, can be viewed a fragment of FOL, with atomic
>classes and properties corresponding to unary and binary predicates
>respectively. According to this correspondence, subClassOf axioms
>become implications, e.g., A subClassOf B corresponds to:
>forall x . A(x) -> B(x)
>Similarly, a property range axiom P range A corresponds to:
>forall x,y P(x,y) -> A(y).
>What could be simpler and clearer than that?
Nothing, I agree. Both of those are parts of RDFS, I note. But the
translation of an OWL restriction is a bit hairier. The OWL
restriction semantics (ignoring the object/datatype distinction for
now) says for example that for any class (ie any unary relation) A
and property (ie binary relation) B, a class (unary relation) C
exists such that for example (this is minCardinality n, the others
are similar)
(forall x, C(x)) <-> (exists y1 y2,....yn, (and B(x,y1) B(x,
y2),...B(x,yn) (not (= x1 x2)) ... (not (= xn-1 xn))))
Now, that is FO, if not quite so simple and clear; but that isn't a
statement of the restriction semantics, as it stands, because you
also have to say that this applies for *any* A and B, which comes out
(forall A, B, (exists C, ((forall x, C(x)) <-> (exists y1 y2,....yn,
(and B(x,y1) B(x, y2),...B(x,yn) (not (= x1 x2)) ... (not (= xn-1
xn)))) ))
which isn't FO , on your view, I gather (though it is in KIF/CL,
interestingly enough).
If you are going to reply that we don't need the outer quantifiers,
then please stop grousing about the need for domain closure
conditions and agree to put up with weak OWL entailment, because that
is then all you get. Then I will agree that OWL is a subset of FOL;
but then, there is no deep problem embedding OWL into RDF. Or, if you
want to say that outer-quantifier form really is FOL, then I can go
with that; but then you shouldn't have any trouble allowing classes
of classes and things like that into OWL. Make up your mind, and
maybe we can agree.
>The combination of these two sentences entails
>forall x,y P(x,y) -> B(y).
>What could be simpler and clearer than that?
>If you want some alternative semantics, could you please explain in
>similar terms what it is?
I'm just following what it says in the OWL semantics. I didn't write
it; better ask the author why he wrote it this way.
PS the above examples are written in a barbaric mixture of prefix and
infix notation, I hope they are readable.
IHMC (850)434 8903 home
40 South Alcaniz St. (850)202 4416 office
Pensacola, FL 32501 (850)202 4440 fax
Received on Tuesday, 24 September 2002 20:51:13 GMT
|
{"url":"http://lists.w3.org/Archives/Public/www-webont-wg/2002Sep/0403.html","timestamp":"2014-04-20T14:28:14Z","content_type":null,"content_length":"11624","record_id":"<urn:uuid:14f9d7ae-921f-4452-af58-002348e2b8bc>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Two-column proof vs. paragraph proof
Example of two-column proof vs. paragraph proof
Here is an example comparing a proof written in two-column form or written as text. And, I will also show you MY exact thought process when I was thinking about this. I have not done these type of
problems in recent years, so I do not have the proof memorized.
The idea is to show that two-column proof is NOT the only kind of proof there is, nor is it necessarily the 'best'. The idea of proving is to communicate clearly in a convincing way your argument.
Sometimes that might be easier to do in just plain prose.
PROBLEM: Prove that if the two diagonals in a quadrilateral bisect each other, then the quadrilateral is a parallelogram.
MY THOUGHT PROCESS:
Better draw a picture first of all. It's a quadrilateral with diagonals. We're supposed to prove that it is a parallelogram. I will try to draw a picture that doesn't look exactly like a
parallelogram; in other words a picture that is not exact.
So what you have is a quadrilateral with two diagonals that bisect each other. Meaning that the intersection point is a midpoint for both of the diagonals.
Well right there it sounds like some line segments will have equal lengths. And, two lines crossing always form two pairs of vertical angles... So I will have some same angles and some same line
segments. Sounds like I can easily prove that there are two congruent triangles and other two congruent triangles.
But how can one get from that to proving that the lines forming the quadrilateral are parallel?
It must be the corresponding angles stuff that will work there. I will have angles with same measure, so that makes that the lines must be parallel.
Okay, the proof is ready in my mind now. Just have to write it so others can understand.
PROOF WRITTEN IN 'PARAGRAPH' FORM:
The two yellow triangles are congruent by SAS: two pairs of sides are congruent because the diagonals bisect each other, and the angles between those sides are congruent because they are vertical angles. Since the triangles are congruent, angles A and A' have the same measure. And, angles A' and A'' are the same because they are vertical angles. So since A and A' are the same, and A' and A'' are the same, it follows that angles A and A'' are the same.
But this is equivalent to the two lines that form the top and bottom of the quadrilateral being parallel.
An identical argument using the two white triangles instead of the two yellow ones proves that the two sides of the quadrilateral are parallel.
So the quadrilateral is a parallelogram.
PROOF WRITTEN IN TWO-COLUMN FORM:
Argument | Reason why
1. The two lines marked with one brown little line are congruent. | 1. The two diagonals bisect (given).
2. The two lines marked with two brown little lines are congruent. | 2. The two diagonals bisect (given).
3. The two angles marked with blue lines are congruent. | 3. They are vertical angles.
4. The two yellow triangles are congruent. | 4. SAS theorem and 1, 2, and 3.
5. The angles A and A' are congruent. | 5. The two yellow triangles are congruent.
6. The angles A' and A'' are congruent. | 6. They are vertical angles.
7. The angles A and A'' are congruent. | 7. 5 and 6 together.
8. The lines that form bottom and top of the quadrilateral are parallel. | 8. 7 and the theorem that says that corresponding angles being the same is equivalent to lines being parallel.
9. The lines that form the two sides of the quadrilateral are parallel. | 9. Repeat steps 1-8 using the two white triangles.
10. The quadrilateral is a parallelogram. | 10. 8 and 9 together.
See also my article about What is proof?
|
{"url":"http://www.homeschoolmath.net/teaching/two-column-proof.php","timestamp":"2014-04-16T10:28:21Z","content_type":null,"content_length":"44429","record_id":"<urn:uuid:f7c47cbc-a24f-4a9d-b578-68e96b457707>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Number of digits in a decimal number
11-04-2007 #1
abyss - deep C
Join Date
Oct 2007
Number of digits in a decimal number
Hi All,
Given an integer n of ndigits, I want to write a function which returns a value which is 1 followed by (ndigits-1) number of zeroes.
Example: If I give 12345, I should get 10000
The below code does the same. However, I would like to know if there is a more efficient way(s) to obtain the same result. Kindly post your thoughts regarding this:
int func(int n)
{
    int i;
    for(i=1; n>10; n/=10, i*=10)
        ;
    return i;
}
As in say
if ( n >= 10000 && n < 100000 ) result = 10000;
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
I support http://www.ukip.org/ as the first necessary step to a free Europe.
Its not with only 5 digit integers, the integer can be any number between 1 and INT_MAX.
I would like to know if there is a way to reduce the number of iterations in the for loop, I just want to optimize the code so that it takes little time to give out the result.
Your efficiency shouldn't be a problem. Your loop will only execute 1 time for each power of 10, so n = 1,000,000 is only 6 loops. That's less than the blink of your eye, even on a slow
If your code is running slow, I'd certainly look for the cause, in it's other functions, not here.
Last edited by Adak; 11-04-2007 at 10:02 AM.
You can combine division and test into one statement
int func(int n) {
    int i;
    for(i=1; n/=10; i*=10);
    return i;
}
Hi All,
Thanks for all the replies.
Hi Adak,
My code is not slow because of this function...and I am sure other parts of the code is also fine.
However, I just wanted to know if there is some other mathematical way using which I could get the result.
One way I thought of is mentioned below:
n = 234567
chop off the left most significant digit
subract the number 34567 from 234567
divide the resulting number 200000 by the left most significant digit 2
I am finding it difficult to put the above into C code.
Also, I am not sure if this would improve the efficiency.
If there are any other ways to implement, please let me know.
I just wanted to know various methods to implement the same.
Take my code and copy/paste it 9 times, for each of the possible powers of 10 which can be represented in 32-bits.
For even more efficiency, you can reduce this to at MOST 4 comparisons by arranging the cascade of if/else branches in the right way.
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
I support http://www.ukip.org/ as the first necessary step to a free Europe.
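To make that concrete, here is one possible arrangement of the cascade (my sketch, not Salem's actual code; it assumes n is a non-negative int on a platform where int is 32 bits):

int func(int n)
{
    /* Binary cascade: at most 4 comparisons for any 32-bit value. */
    if (n < 100000) {
        if (n < 1000) {
            if (n < 10) return 1;
            else if (n < 100) return 10;
            else return 100;
        } else {
            if (n < 10000) return 1000;
            else return 10000;
        }
    } else {
        if (n < 10000000) {
            if (n < 1000000) return 100000;
            else return 1000000;
        } else {
            if (n < 100000000) return 10000000;
            else if (n < 1000000000) return 100000000;
            else return 1000000000;
        }
    }
}

Whether this actually beats the 6-to-10 iteration loop is something you'd have to measure; as noted above, the loop is already far from being a bottleneck.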
If you want the mathematical way, you can use logarithms, but Salem's way will be faster.
{"url":"http://cboard.cprogramming.com/c-programming/95376-number-digits-decimal-number.html","timestamp":"2014-04-16T22:09:44Z","content_type":null,"content_length":"67217","record_id":"<urn:uuid:644c1ca2-d551-419b-b8a8-909dc62206fc>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Image text transcribed for accessibility: The angular acceleration of a body that rotates about a fixed axis is given by the function alpha = 3t - 5, where alpha is in radians per second squared and t is in seconds. At t = 0 the angular displacement and velocity are zero. (a) Find the angular velocity and angular displacement when t = 1 s and t = 3 s. (b) Find the angular velocity and displacement when alpha = 0. (c) Sketch the variations of theta, omega and alpha for 0 ≤ t ≤ 3 s. (d) Find the average values of angular velocity and angular acceleration in the interval 1 ≤ t ≤ 3 s.
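A worked sketch of the kinematics (my integration, not part of the transcribed problem): with omega(0) = theta(0) = 0,

$$\omega(t) = \int_0^t (3s - 5)\,ds = \tfrac{3}{2}t^2 - 5t, \qquad \theta(t) = \int_0^t \omega(s)\,ds = \tfrac{1}{2}t^3 - \tfrac{5}{2}t^2,$$

so omega(1) = -3.5 rad/s and theta(1) = -2 rad; omega(3) = -1.5 rad/s and theta(3) = -9 rad; alpha = 0 at t = 5/3 s, where omega = -25/6 ≈ -4.17 rad/s and theta = -125/27 ≈ -4.63 rad. Over 1 ≤ t ≤ 3 s the average angular velocity is Δtheta/Δt = -3.5 rad/s and the average angular acceleration is Δomega/Δt = 1 rad/s².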
|
{"url":"http://www.chegg.com/homework-help/questions-and-answers/angular-acceleration-body-rotates-fixed-axis-given-function-alpha-3t-5-alpha-radians-per-s-q800725","timestamp":"2014-04-17T19:46:47Z","content_type":null,"content_length":"20531","record_id":"<urn:uuid:be4545a6-6464-442f-896b-e7704757e52c>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Congruence Classes of Orientable $2$-Cell Embeddings of Bouquets of Circles and Dipoles
Two $2$-cell embeddings $\imath : X \to S$ and $\jmath : X \to S$ of a connected graph $X$ into a closed orientable surface $S$ are congruent if there are an orientation-preserving surface
homeomorphism $h : S \to S$ and a graph automorphism $\gamma$ of $X$ such that $\imath h =\gamma\jmath$. Mull et al. [Proc. Amer. Math. Soc. 103(1988) 321–330] developed an approach for enumerating
the congruence classes of $2$-cell embeddings of a simple graph (without loops and multiple edges) into closed orientable surfaces and as an application, two formulae of such enumeration were given
for complete graphs and wheel graphs. The approach was further developed by Mull [J. Graph Theory 30(1999) 77–90] to obtain a formula for enumerating the congruence classes of $2$-cell embeddings of
complete bipartite graphs into closed orientable surfaces. By considering automorphisms of a graph as permutations on its dart set, in this paper Mull et al.'s approach is generalized to any graph
with loops or multiple edges, and by using this method we enumerate the congruence classes of $2$-cell embeddings of a bouquet of circles and a dipole into closed orientable surfaces.
|
{"url":"http://www.combinatorics.org/ojs/index.php/eljc/article/view/v17i1r41/0","timestamp":"2014-04-20T07:02:48Z","content_type":null,"content_length":"16108","record_id":"<urn:uuid:908b117f-3a4e-43ee-9a8b-7bbb99d54661>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00276-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Gravitational capture
It is a consequence of the math ... try to find a situation where two objects are not gravitationally bound to each other but move into a position where they are gravitationally bound.
Basically, to do that, one or both objects has to lose energy ... where to?
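In symbols (a standard two-body relation, added here for clarity; $\mu = m_1 m_2/(m_1+m_2)$ is the reduced mass, $v$ the relative speed and $r$ the separation): the pair is gravitationally bound exactly when the total orbital energy is negative,

$$E = \tfrac{1}{2}\mu v^2 - \frac{G m_1 m_2}{r} < 0,$$

so a pair that starts unbound, with $E \ge 0$, can only become bound if that total somehow decreases.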
|
{"url":"http://www.physicsforums.com/showthread.php?s=541d9710fd27878579cb4f2c898675e0&p=4420135","timestamp":"2014-04-20T18:29:17Z","content_type":null,"content_length":"22545","record_id":"<urn:uuid:d480f7f6-bf58-4739-820c-a4280b9a4a7e>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chains-into-bins processes
Batu, Tugkan, Berenbrink, Petra and Cooper, Colin (2010) Chains-into-bins processes. arXiv.org.
Full text not available from this repository.
The study of {\em balls-into-bins processes} or {\em occupancy problems} has a long history. These processes can be used to translate realistic problems into mathematical ones in a natural way. In
general, the goal of a balls-into-bins process is to allocate a set of independent objects (tasks, jobs, balls) to a set of resources (servers, bins, urns) and, thereby, to minimize the maximum load.
In this paper, we analyze the maximum load for the {\em chains-into-bins} problem, which is defined as follows. There are $n$ bins, and $m$ objects to be allocated. Each object consists of balls
connected into a chain of length $\ell$, so that there are $m \ell$ balls in total. We assume the chains cannot be broken, and that the balls in one chain have to be allocated to $\ell$ consecutive
bins. We allow each chain $d$ independent and uniformly random bin choices for its starting position. The chain is allocated using the rule that the maximum load of any bin receiving a ball of that
chain is minimized. We show that, for $d \ge 2$ and $m\cdot\ell=O(n)$, the maximum load is $((\ln \ln m)/\ln d) +O(1)$ with probability $1-\tilde O(1/m^{d-1})$.
|
{"url":"http://eprints.lse.ac.uk/31302/","timestamp":"2014-04-19T06:56:20Z","content_type":null,"content_length":"19987","record_id":"<urn:uuid:cb792c4f-f121-4ed4-81bb-5f24707098ac>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fort Myer, VA ACT Tutor
Find a Fort Myer, VA ACT Tutor
...Early on, I had a behavioral problem. I was bored. With the specialized help I received from caring teachers and tutors, I went on to become the first in my coal mining family to go to
82 Subjects: including ACT Math, reading, Spanish, English
...Are you struggling with Organic Chemistry? I have taught Inorganic and Organic Chemistry within formal and informal educational and enrichment settings, in order to help students gain an
in-depth understanding and increase performances. Tasks such as identifying polymers, naming products,balancing equations, manipulating reactants, and much more can be successfully undertaken
with support.
64 Subjects: including ACT Math, chemistry, English, reading
...I have taken the ACT previously and I have a mastery of all of the concepts covered in the ACT Math section, reinforced by my math classes throughout my chemistry degree. I have a master's
degree in chemistry from American University, and I provided independent tutoring for organic chemistry whi...
11 Subjects: including ACT Math, chemistry, geometry, algebra 2
...I like to read, watch movies, traveling, and practice Taekwondo in my spare time.I am a native Chinese with full proficiency in listening, reading, writing, and speaking. I have tutored
Chinese to more than 5 students in the past 2 years. I graduated with a Bachelor of Science in Computer Science from the George Washington University in May 2012.
27 Subjects: including ACT Math, chemistry, geometry, calculus
My name is Bekah and I graduated from BYU with a degree in Math Education. While I was in college, I was a professor's assistant for 3 years in a calculus class, which included me lecturing twice
a week, and working one-on-one with students. After graduating, I taught high school math for one year...
10 Subjects: including ACT Math, calculus, geometry, algebra 1
|
{"url":"http://www.purplemath.com/fort_myer_va_act_tutors.php","timestamp":"2014-04-19T14:54:15Z","content_type":null,"content_length":"24102","record_id":"<urn:uuid:b30faf76-0b2f-46c5-9669-d334fee1864c>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
|
dividing remainder w/exponents
November 6th 2008, 12:57 PM
dividing remainder w/exponents
For n is greater than or equal to 1, use congruence theory to find remainder of:
$7 \mid 5^{2n} + (3)(2^{5n-2})$
so here's what I did - I'm new to mod(s) so I wanted to make sure it goes right:
$= (5^2)^n + (3)(2^{5n})(2^{-2})$
$= (25)^n + (3)(2^5)^n (2^{-2})$
$\equiv 4^n + (-4)(4)^n (2^{-2})$ (mod 7)
$\equiv 4^n + (-1)(4^n)$(mod 7)
$\equiv 4^n (1 + -1)$ (mod 7)
$\equiv 0$ (mod 7)
yes??? does it work?
November 6th 2008, 01:46 PM
For n is greater than or equal to 1, use congruence theory to find remainder of:
$7 | 5^{2n} + (3)(2^{5n-2})$
so here's what I did - I'm new to mod(s) so I wanted to make sure it goes right:
$= (5^2)^n + (3)(2^{5n})(2^{-2})$
$= (25)^n + (3)(2^5)^n (2^{-2})$
$\equiv 4^n + (-4)(4)^n (2^{-2})$ (mod 7)
$\equiv 4^n + (-1)(4^n)$(mod 7)
$\equiv 4^n (1 + -1)$ (mod 7)
$\equiv 0$ (mod 7)
yes??? does it work?
This works, if you know what you meant when you wrote $2^{-2}$. This does not mean 0.25 of course, but it refers to the inverse modulo 7, which is well defined since 2 and 7 are relatively prime.
By the way, $2^{-1}=4\,({\rm mod}\, 7)$.
If what you've just read does not make sense for you (which wouldn't be surprising if you're new to "moduli"), you should avoid writing negative powers. For instance, you can notice that $2^3\equiv 1\,({\rm mod}\,7)$, hence $3\cdot 2^{5n-2}\equiv 2^3\cdot 3 \cdot 2^{5n-2}\equiv 3\cdot 2^{5n+1}\equiv 6 \cdot 2^{5n}\,({\rm mod}\,7)$. And you can go on like you did.
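Spelled out (my completion of that hint, mirroring the steps in the first post):

$$5^{2n} + 3\cdot 2^{5n-2} \equiv 4^n + 6\cdot 2^{5n} \equiv 4^n + 6\cdot 4^n = 7\cdot 4^n \equiv 0 \pmod 7.$$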
November 6th 2008, 01:52 PM
ok, but if before I turn it all into mod 7: $2^{-2}$ is equivalent to 1/4
so how is it that I didn't know what you just told me, and it can still be right?
November 6th 2008, 02:45 PM
Using moduli only makes sense for integers. So that $\frac{1}{4}=0.25$ shouldn't be used with mod 7.
But in fact what you wrote is still right because $2^{-1}$ can be understood as an inverse with respect to the multiplication modulo 7 (and not with respect to the multiplication in $\mathbb{R}$
). An inverse of $x$ modulo 7 is an integer $y$ such that $xy\equiv 1\,({\rm mod}\, 7)$. This happens to exist for every $x$ which is not divisible by 7, and we denote it by $x^{-1}$. The
notation is the same as the notation for $\frac{1}{x}\in\mathbb{Q}$ but the "mod 7" prevents from any confusion.
Anyway, you'll soon learn about this in more details. For now, just be conscious that dividing is not straightforward when using moduli. You can add, multiply, but not always divide. Use the
trick I said about $2^3\equiv 1$ to circumvent this problem elementarily.
November 6th 2008, 02:47 PM
ok, thank you!
|
{"url":"http://mathhelpforum.com/number-theory/58060-dividing-remainder-w-exponents-print.html","timestamp":"2014-04-20T21:17:45Z","content_type":null,"content_length":"13194","record_id":"<urn:uuid:92721edb-7479-460a-a5cd-b94c2e4083c1>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00137-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Darien, CT Algebra Tutor
Find a Darien, CT Algebra Tutor
...I love helping students find meaning in their learning. So, you won't be bored. We will develop a plan each session, and work hard to complete it -- with stops along the way to try out our
learning with games, visuals and fun!
48 Subjects: including algebra 1, reading, English, writing
...For the past 10 years I have been working at Washingtonville High School in Orange County New York. Among the events I have experienced there were the transitions from New York State's
Sequential Math Program to New York State's Math A and Math B program in my first year and the more rec...
10 Subjects: including algebra 2, algebra 1, geometry, SAT math
...I currently teach software engineering to middle school students. I have a Master of Science degree in Engineering. I have four plus years of experience automating Excel spreadsheets and
Access databases using Visual Basic.
23 Subjects: including algebra 2, elementary (k-6th), geometry, computer programming
...As a tutor it is very rewarding to be sitting next to a student at the exact moment they "GET IT" and then go on to do well on their test. I work from a baseline of diagnostic test results and
design instructions that will address deficiencies. A course of study to achieve a goal is usually the best approach but I can work under deadlines as well.
16 Subjects: including algebra 2, algebra 1, geometry, GED
...I stress an understanding of the fundamental concepts, since I believe once these are grasped effectively, the student can attack many more questions with a higher degree of confidence in
their problem solving approach.I have more than 6 College Semesters' and 2 High school Semesters' Experience ...
34 Subjects: including algebra 1, English, algebra 2, chemistry
|
{"url":"http://www.purplemath.com/Darien_CT_Algebra_tutors.php","timestamp":"2014-04-17T19:48:43Z","content_type":null,"content_length":"23835","record_id":"<urn:uuid:93ab6c1a-0ca2-4236-a1b0-1ac3f4a4b8db>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Should numpy.sqrt(-1) return 1j rather than nan?
Stefan van der Walt stefan at sun.ac.za
Wed Oct 11 19:41:08 CDT 2006
On Wed, Oct 11, 2006 at 08:24:01PM -0400, A. M. Archibald wrote:
> What is the desired behaviour of sqrt?
> Should it return a complex array only when any entry in its input is
> negative? This will be even *more* surprising when a negative (perhaps
> even -0) value appears in their matrix (for example, does a+min(a)
> yield -0s in the minimal values?) and suddenly it's complex.
Luckily sqrt(-0.) gives -0.0 and not nan ;)
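For reference, the behaviour Stefan points out follows IEEE 754 semantics, which plain C exposes the same way (a small demo of mine, not from the thread):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* IEEE 754 defines sqrt(-0.0) to be -0.0; sqrt of any other
       negative value yields a NaN. */
    printf("sqrt(-0.0) = %g\n", sqrt(-0.0));  /* prints -0 */
    printf("sqrt(-1.0) = %g\n", sqrt(-1.0));  /* prints nan */
    return 0;
}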
More information about the Numpy-discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-October/011349.html","timestamp":"2014-04-19T09:26:56Z","content_type":null,"content_length":"3621","record_id":"<urn:uuid:fa3e8f1b-ecd9-4cec-b49b-67a628eb8cbb>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00297-ip-10-147-4-33.ec2.internal.warc.gz"}
|
On Behalf of the Unhappy Reader: A Response to Lee F. Werth
by Elizabeth M. Kraus
Elizabeth M. Kraus is Professor of Philosophy at Fordham University, Bronx, N.Y. The following article appeared in Process Studies, pp. 125-133, Vol. 9, Numbers 3&4, Fall and Winter, 1979. Process
Studies is published quarterly by the Center for Process Studies, 1325 N. College Ave., Claremont, CA 91711. Used by permission. This material was prepared for Religion Online by Ted and Winnie
Werth’s attack on the tenability of Whitehead’s theory of extensive connection (PS 8:37-44) constitutes a serious challenge to the coherence of the philosophy of organism and therefore demands
serious consideration. At the same time, both the attack and the doctrine attacked are so arcane and abstruse as to render them inaccessible and/or uninteresting to all but a few specialists in the
philosophical community, with the end result that both are, in practice, passed over. This article attempts an enterprise as risk-laden as it is necessary: to present a more intuitive version of the
relevant issues in PR IV, 2 and, on that basis to evaluate Werth’s arguments. The risk is obvious. No intuitive translation of a purely formal demonstration can capture the rigor central to such an
argument, and hence is always romantic. All it can hope to do is to render the major ideas and general contours of the demonstration "interesting" and thereby provoke a more widespread investigation
of its content.
If PR IV, 2, the locus of the derivation under attack, is to become intelligible, the reader must be aware of the sort of undertaking in which Whitehead is engaged. That the subject matter is
geometry is immediately obvious. Not so obvious, however, is the type of geometry within the parameters of which the undertaking is conducted. Throughout his extended discussion of the properties of
regions, it is not metric properties which are the focus of Whitehead’s concern: not their dimensionality, not their "shapiness" (e.g., straightness or flatness) as grasped in sense perception, not
their size, not, in fact, any property which is visualizable. Hence not only has he moved beyond Euclidean geometry, but beyond the non-Euclidean varieties as well, into the sphere of the more
general, nonmetric geometries which have arisen since the Renaissance.
There are three species of metric geometry, species distinguished from each other and ranged in a hierarchy of generality on the basis of their intent: to discover the properties of regions which
persist through more and more drastic sorts of transformations.
The least general species, affine geometry, isolates those properties which remain constant when a figure is uniformly stretched or shrunk, For example, parallel lines remain parallel when viewed
through a telescope or microscope, yet lose their parallelism in the distortion produced by a fish-eye lens. They are affine invariants.
Projective geometry, the geometry of perspective, allows more dramatic transformations and hence reveals more general invariants. It was this sort of geometry which da Vinci intuitively grasped and
illustrated in his sketches of the "bird’s-eye view" (angolo inferiore) and the "worm’s eye view" (angolo superiore) in the Codex Huygens. Anyone who has ever observed the apparent convergence of
railroad tracks has had first hand experience that at least one affine invariant -- parallelism -- does not survive a shift of perspective, whereas straightness does. What additional properties
(projective invariants) remain constant through perspectival transformations is the question at issue in projective geometry. It is to be noted that it is this type of geometry which dominates PR IV,
3, with its concern for the formal definitions of straightness and flatness. It should also be noted that affine transformations are special cases of projective transformations, ones retaining the
same line of sight" but increasing or decreasing the distance separating viewer and object viewed.
When formalized by Arthur Cayley at the turn of the century, projective geometry was taken to be the most general variety of geometry. However, at the same time topology was growing from its
embryonic condition and was soon seen to be dealing with even more general invariants: those properties conserved no matter how a figure is distorted provided (a) it is not cut (i.e., points added)
and (b) points are not made to coincide (points subtracted). "Anything goes" in a topological transformation, so long as points remain points, lines remain lines, connections remain connections. All
other transformations and invariants are simply special cases of topological transformations and invariants. It is precisely this species of geometry which forms the backdrop of PR IV, 2. In seeking
the formal definitions of point, segment, surface, and volume, Whitehead seeks those properties of the building blocks of geometry which remain the same no matter how those geometric elements are
stretched, shrunk, twisted, warped, crumpled, or otherwise brutalized provided the topological rubrics are not violated.^1 What constitutes the pointness of a point if it can be stretched to galactic
size? What counts as the segmentness of a segment, the surfaceness of a surface, the voluminousness of a volume when each can undergo uncountable distortions? These are Whitehead’s questions, and PR
IV, 2 his attempt to derive answers by means of a formal deductive process.
Like any formal deduction, the process begins with the establishment of a set of primitive notions from which all further definitions and assumptions are derived. These primitives -- in this case,
"region" and "connection" -- must be viewed as purely topological notions. Thus, "region" includes no note of dimensionality nor any suggestion of precise boundary, since both entail notions (e.g.,
point, line, surface, volume) yet to be defined. "Region" is to be taken only in the sense of a finite extensity with a vaguely differentiated "inside" and "outside." "Connection" has to do with the
relation of regions thus vaguely bounded: how they can be "inside" or "outside" each other in such ways that no topological transformation can alter that insideness or outsideness.
From these primitives, Whitehead aims to deduce the most general types of connection among regions -- mediate connection, inclusion, overlap, external connection, tangential, and nontangential
inclusion -- so that in terms of these topologically invariant relations, he can formulate purely formal definitions of sets of topologically equivalent regions (abstractive sets [Each member region
in an abstractive set can be deformed into any other member.]) and sets of topologically equivalent sets (geometric elements [For example, an abstractive set of squares can be deformed into an
abstractive set of circles, as also can an abstractive set of triangles, hexagons, or pentagons. They are all topologically equivalent sets and hence belong to the same geometric element.]). From
this base, he can move to a formal definition of the projective properties of straightness and flatness in the derivations of PR IV, 3, apply those notions to the doctrine of strains (PR IV, 4),
demonstrating that the shrinking of a set of linear relations into the microcosm of a strain seat does not distort those relations, and hence that the measurement of a strain locus in the
presentational immediacy of the measurer says something objective about the contemporaneous world (PR IV, 5).
The derived relations among regions which Werth singles out as implicated in Whitehead’s derivation of the definitions of point and segment as geometric elements are four in number: inclusion, its
two variants, and the relation of incidence. Inclusion refers to a relationship among regions such that one (A) is "inside" another (B). This is seen to be the case whenever any region "outside" of
but connected to A is likewise connected to B. The inclusion is tangential when a third region, C, is "outside" and yet connected to both A and B. (In Whitehead’s terminology, C is externally
connected to A and B.) In more intuitive language, A and B share an "outside" in common. If no such shared "outside" is present, i.e., if B is "inside" A in such a way that every region (C, for
instance) which is "outside" B yet externally connected with it is in whole or part "inside" A as well, then B is said to be nontangentially included in A.
On the assumption that all regions include other regions, if a region is such that given any two of its member regions one includes the other nontangentially and there is no "smallest" region
included in all member regions, no ultimate real region to which all regions can be shrunk, towards which they converge, the region meets the criteria specifying it as an abstractive set of regions:
a nest of regions including regions including regions . . . , all approaching an ideal limit which defines the set by being the ideally simplified instance of its properties. To paraphrase Whitehead,
"this ideal is in fact the ideal of a nonentity. What the set is in fact doing is to guide thought to the consideration of the progressive simplicity of [extensive] relations as we progressively
diminish the [extensity] of the [region] considered" (cf. CN 61). For an intuitive example, consider line segment AB in FIGURE 1. It is constituted of its end points, A and B, which define it as this
segment, together with an extensive region "between" A and B. Allow AB to be shrunk continuously, and for the sake of illustration, isolate any two stages in the shrinkage: A[1]B[1] and A[2]B[2].
Both are nontangentially included in AB. Furthermore, A[2]B[2] is similarly included in A[1]B[1]. If the shrinking is allowed to continue, each smaller segment, e.g., A[4]B[4], will be
nontangentially included in the larger segments, and all will consist of two defining end points and a "between," In other words, all real members of the abstractive set of segments will be segments:
i.e., they will have a "between" separating the end points. However, the ideal limit toward which the set converges and from which the set derives its defining characteristic is a pair of ideal end
points -- a segment in ideal simplicity, a segment viewed only from the standpoint of what constitutes its segmental character: its possession of a pair of end points.
The relation Whitehead calls "covering" (PR 454f) has to do with a relation among superimposed abstractive sets. In FIGURE 2, triangular surface ABC has been superimposed on segment AB and both
continuously shrunk. A simple inspection will show that every member of the abstractive set of surfaces, A[2]B[2]C[2] for instance, contains some members of the abstractive set of segments, in this
case those members further down the "converging tail" of AB: A[3]B[3], A[4]B[4] . . . . In this example, the converse is not true. Although ABC covers AB, no member of AB includes any member of ABC.
AB does not cover ABC. Why? An examination of the ideal limits of both sets reveals the answer: AB converges toward two points, ABC toward three. A triad cannot be deformed into a dyad without
causing points to coincide, thereby violating one of the already established topological rubrics. In other words, a triangular surface (or any surface for that matter) and a segment are not
topologically equivalent. Their topological difference derives from the nonidentity of their respective ideal limits.
However, in some instances, symmetrical coverage is possible, its possibility a function of the identity of the ideal limits of the sets in question. Consider a set of triangular surfaces and a set
of circular surfaces, Each, as a surface, is defined by three points. The differences which constitute one surface triangular and the other circular are further specifications of the primary
condition defining them as surfaces: the presence of three points not in the same segment. The fact that both varieties of surfaces have the same primary defining conditions makes each deformable
into the other: they are topologically equivalent. Their equivalence becomes intuitively obvious when one set is superimposed on the other, as in FIGURE 3, and their symmetrical coverage noted.
Any member of the set of circular surfaces, C[1] for instance, contains some members of the set of triangular surfaces (T[1], T[2], T[3], . . .) and every member of the set of triangular surfaces, T[1] for instance, contains some members of the set of circular surfaces (C[2], C[3], . . .). As mutually covering and hence sharing the same ideal limit, which ideal limit makes the mutual coverage
possible, the sets are topologically equivalent. A geometric element is, quite simply, the set of all topologically equivalent sets, of all sets "prime" to the same formative conditions. Note
therefore that, strictly speaking, equivalence is a relationship among sets, whereas identity is a relation among the ideal limits of those sets, and hence among geometric elements.
The final relevant definition has to do with a relation between geometric elements, which relation Whitehead terms "incidence" and defines in this fashion: ‘The geometric element a is said to be
‘incident’ in the geometric element b when every member of b covers [i.e., includes some member regions of all sets of] a, but a and b are not identical [i.e., have non-identical ideal limits]"
(Definition 15, PR 456). Returning to an earlier illustration, consider the set of triangular surfaces of FIGURE 2 as the set of all sets defined by points not in the same segment (the set of all
topologically equivalent surfaces), and the set of segments as the set of all sets defined by two points (the set of all topologically equivalent segments). What Whitehead is affirming is quite
simple: any given class of surfaces contains some members of any given class of segments; segments are one of the building blocks of surfaces (the other being the noncollinear point); segments are
incident in surfaces. That surfaces are not topologically equivalent to segments is a function of their respective ideal limits: two points for a segment, as opposed to three for a surface. Although
two is a part of three, two does not equal three; although a segment defined by its end points is a constitutive element in a surface, it is not the only element; a segment and a noncollinear point
cannot be shrunk to a segment. The two geometric elements are topologically nonidentical despite the incidence of one in the other.
In terms of these and other conceptual tools, Whitehead demonstrates that the foundational geometric element -- the point -- is topologically definable as having "no geometric element incident in it"
(Definition 16, PR 456); as having the "sharpest" convergence, as an "absolute prime" (PR 457) incident in various ways in the more complex geometric elements, i.e., those defined by pairs, triads,
and tetrads of points (segments, surfaces, and volumes).
Werth’s attack on Whitehead questions not Whitehead’s conclusions but the validity of his demonstration. Werth suggests that the "covering" relation central to Whitehead’s definition of incidence
must always be symmetrical. If it can be shown to be the case the only conclusion validly deducible from Whitehead’s premises (i.e., from his primitives, definitions, and assumptions) is that "to
cover" equals "to be covered," then incidence is impossible, all geometric elements are points (with no incident elements), and Whitehead has made a logical mistake of sufficient gravity to topple
the edifice of Process and Reality.
Werth’s argument is most impressive at first glance. In fact, if the only relevant steps in Whitehead’s derivation of the definition of a point are those to which Werth refers the reader,^2 then
Werth may be correct in his criticism. Only a careful logical analysis of Werth’s argument can settle this particular issue, and such is not the intent of this paper, I will argue that because Werth
omits several key steps in Whitehead’s derivation, Werth begins his proofs in the context of an assumption not to be found in Whitehead’s argument. That Werth introduces an extraneous assumption is
apparent the moment he derives his beta set from his alpha set (in terms of my earlier illustration, when he derives his set of triangular surfaces from his set of circular surfaces), To derive one
abstractive set from a topologically equivalent abstractive set is always to produce mutually covering sets. Thus Werth’s example itself bespeaks the background assumption of his argument -- that all
abstractive sets are equivalent -- which assumption is precisely what he wishes to demonstrate via his argument. For him to have selected his beta set from a nonequivalent alpha set would have
required him (a) to explore what constituted that nonequivalence, and (b) to have conceded in advance that geometric elements can be nonidentical, thereby assuming the possibility of the very
incidence relation whose impossibility he wishes to demonstrate.
My criticism of Werth centers around the fact that every step in Whitehead’s deduction is critical, that no definition or assumption can be ignored, for each contributes to the argument and its
conclusion. For example, Werth makes good use of the initial portion of assumption 9 ("Every region includes other regions") but totally disregards its terminal portion: "a pair of regions thus
included in one region are not necessarily connected with each other. Such pairs can always be found included in any given region" (PR 452). What is Whitehead asserting? That any region includes not
merely simple, monadic regions, but more complex classes of regions as well -- paired regions, dyadic regions; that among the abstractive sets in a region there are complex sets as well as simple
sets. By means of this assumption he lays the foundation for later assertions that regions are pervaded by lines as well as by points, that point pairs are as much a part of the constitution of a
region as are points. Given the point and the point pair, the monadic and dyadic relations, surfaces (point triads) and volumes (point tetrads) can be constructed by simple combination.
Let us examine the implications of assumption 9 further by isolating a pair of regions in a given region, a pair "not necessarily connected with each other" (ibid) and hence constituting a region
"between."^3 In FIGURE 4, the regions paired (B and C) include other regions. Each of the paired regions is itself an abstractive set which is an element in the original dyad BC. The set of pairs --
the dyad and its "between" (D) -- is likewise an abstractive set: i.e., any two pairs and their respective "betweens" are such that one includes the other nontangentially. There is no ultimate pair
with its "between" included in all pairs, although the limit of convergence is an ideal pair. (Note, only in ideality does the "between" vanish.) It is easily seen that the abstractive set of pairs
is more complex than the abstractive sets paired. Whitehead has immediately grounded the possibility of asymmetrical coverage by allowing in advance for the possibility of nonequivalent sets, sets
with nonidentical ideal limits. Thus he can maintain in assumption 15, "There are many dissections of any given region" (PR 452), each dissection revealing one of the several classes of regions
constituting the region -- punctual regions and segmental regions -- according to the provisions of definition 4. (PR 452)
Put in simple language, what Whitehead asserts in assumptions 9 and 15 and in definition 4 is precisely the critical step whose absence renders Werth’s proof of symmetrical coverage valid and whose
presence invalidates that proof. These "many dissections of a given region" (PR 452), these dissections of a region into segments or surfaces or volumes as well as into points, reveal "the only
relations which are interesting, . . . those which if they commence anywhere, continue throughout the remainder of the infinite series" (PR 455): monadic relations, dyadic relations, triadic
relations, tetradic relations.
Thus, Werth’s argument is flawed before it begins. By failing to realize the difference between "equivalent" and "identical" in Whitehead’s argument, by failing to see that Whitehead has already
established the fact that the regions contained in other regions are not always topologically equivalent regions, he summarily lumps all abstractive sets into one set. Inasmuch as this constitutes a
denial of any "interesting" relations in his resultant set, it seems to be defined solely by its "abstractiveness" -- its infinite contractability. By implicitly denying that abstractive sets have
defining characteristics, he has constructed a pseudo-set which literally contracts to nothing: nothing ideal as well as nothing actual. It is no wonder that he can prove with equal facility that all
geometric elements are points and that no geometric elements are points! So long as regions are not constituted solely of simple, monadic regions, so long as regions always contain pairs of regions
as well as simple regions, and hence always contain nonequivalent as well as equivalent abstractive sets of regions, it is entirely illegitimate to assume that statements made concerning relations
among some of the abstractive sets contained in any region -- i.e., those which are equivalent -- can be generalized into statements concerning relations among all the abstractive sets in the region,
thereby collapsing them into a single geometric element.
It follows, therefore, that Whitehead’s definition of incidence does not involve an inconsistency or a surreptitiously introduced premise. In adding to his definition the proviso "but a and b are not
identical," he is referring the reader to already established assumptions and definitions which have provided the condition for the possibility of nonidentity and nonequivalence and have indicated
what the differentiating "interesting" relations might be. Werth, on the other hand, by beginning with an overly restrictive notion of the types of regions included in other regions, has been trapped
in his own self-fulfilling prophecy. His conclusions can be read only as a critique of his own unwarranted assumption.
1. Note that these rubrics have been laid down already in two of the Categoreal Obligations (PR 39). When the Category of Objective Identity is emptied of all save purely formal content, it asserts that no element in a region can be duplicated. If the same procedure is performed on the Category of Objective Diversity, what remains is the statement "no elements in a region can be coalesced."
2. Definitions 2, 7, 9-13, 15-17, and assumptions 6-9, 23-26.
3. Whitehead already developed the axioms regarding "between" in CN 64 and in PNK 114f.
|
{"url":"http://www.religion-online.org/showarticle.asp?title=2473","timestamp":"2014-04-19T22:08:12Z","content_type":null,"content_length":"25100","record_id":"<urn:uuid:405d5b52-f939-4e97-bde8-7462d43966cb>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: Jump size optimization info...
anton@mips.complang.tuwien.ac.at (Anton Ertl)
12 Jan 2007 16:58:14 -0500
From comp.compilers
| List of all articles for this month |
From: anton@mips.complang.tuwien.ac.at (Anton Ertl)
Newsgroups: comp.compilers
Date: 12 Jan 2007 16:58:14 -0500
Organization: Institut fuer Computersprachen, Technische Universitaet Wien
References: 07-01-023
Keywords: assembler, optimize
Posted-Date: 12 Jan 2007 16:58:14 EST
"Orlando Llanes" <Orlando.Llanes@gmail.com> writes:
[Moderator's note:]
>The general problem is NP-complete
For such a general definition of the problem that it does not occur
with relative branches, and typically only rarely elsewhere in
linking: IIRC you need to allow subtractions between arbitrary
addresses in the code, and want to optimize the size needed for the
resulting numbers in order to construct an NP-complete problem.
If you only want to optimize relative branch sizes, this problem is
polynomial: Just start with everything small, then make everything
larger that does not fit, and reiterate until everything fits.
Because in this case no size can get smaller by making another size
larger, you have at worst as many steps as you have branches, and the
cost of each step is at most proportional to the program size.
This approach is also a good heuristic for the NP-complete problem, and will usually produce the optimal result in the cases occurring in practice.
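[A minimal sketch of that grow-only relaxation (not from the original post; the lengths and reach are made-up placeholders for a rel8/rel32-style encoding):

def assign_branch_sizes(code, short_reach=127, short_len=2, long_len=5):
    # `code` is a list of ('insn', length) or ('branch', target_index) items,
    # where target_index is a position in `code`. Start every branch short,
    # grow any whose displacement does not fit, and iterate to a fixed point.
    size = {i: short_len for i, item in enumerate(code) if item[0] == 'branch'}
    changed = True
    while changed:
        changed = False
        addr, pc = [], 0                 # addresses under the current sizes
        for i, item in enumerate(code):
            addr.append(pc)
            pc += size[i] if item[0] == 'branch' else item[1]
        for i, item in enumerate(code):
            if item[0] == 'branch' and size[i] == short_len:
                disp = addr[item[1]] - (addr[i] + short_len)
                if not (-short_reach - 1 <= disp <= short_reach):
                    size[i] = long_len   # a size only ever grows
                    changed = True
    return size

Because a size never shrinks, the outer loop runs at most once per branch, which matches the cost bound given above.]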
This is a very nice example of a harmful NP-completeness result:
People just remember that "size optimization is NP-complete" or worse
"linking is NP-complete", without remembering the exact problem for
which the NP-completeness was proven. And even for the NP-complete
problem, a polynomial algorithm usually produces the optimal result
for problems occurring in practice; and I guess that an optimal
algorithm would run in acceptable time on practical cases (but is
probably not worth implementing).
However, mentioning the NP-completeness often stops people from
attacking the problem in the detail they would otherwise apply, and
that's the harm that the NP-completeness result does.
- anton
M. Anton Ertl
[To get the general NP-complete subtraction problem I'd think you
could add branch chaining, e.g., if you have A->C and B->C, you can
change that to A->B and B->C. -John]
|
{"url":"http://compilers.iecc.com/comparch/article/07-01-040","timestamp":"2014-04-19T12:17:22Z","content_type":null,"content_length":"7269","record_id":"<urn:uuid:63040368-2e7b-424e-b9cd-870b06b687fa>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00616-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Euless Algebra 2 Tutor
Find an Euless Algebra 2 Tutor
...There are some topics in AP courses such as details of bonds and thermo-chemistry for which I do not have adequate knowledge. For Hubble observation statistics I used various Excel features
such as writing cell formulas, generating plots, and doing import from/export to text files. I still use the formula capability often.
15 Subjects: including algebra 2, chemistry, physics, calculus
I graduated from Brigham Young University in 2010 with a degree in Statistical Science and I am looking into beginning a master's program soon. I have always loved math and took a variety of math
classes throughout high school and college. I taught statistics classes at BYU for over 2 years as a TA and also tutored on the side.
7 Subjects: including algebra 2, statistics, geometry, SAT math
...Later, my wife and I decided to educate our children at home. We started with Kindergarten and continued through high school. I assisted them with a variety of subjects over the years as well
as with Math.
82 Subjects: including algebra 2, English, chemistry, algebra 1
...Sometimes, this is all a student needs in order to achieve success in the classroom.I have a degree in elementary education from Texas Wesleyan University, as well as a grade 1 - 8 Texas
teaching certificate. I have more than 10 years of public school teaching experience, a master's degree in gi...
39 Subjects: including algebra 2, reading, English, chemistry
...D. in political science. I also have taught political science and history at universities. At a university in Monterrey, Mexico, I taught a modern world history course for three semesters in
English and Spanish.
37 Subjects: including algebra 2, Spanish, reading, statistics
|
{"url":"http://www.purplemath.com/Euless_Algebra_2_tutors.php","timestamp":"2014-04-21T11:12:15Z","content_type":null,"content_length":"23808","record_id":"<urn:uuid:225618c8-b067-4bab-9142-e079436039d4>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[plt-scheme] critique of some code
From: Matthias Felleisen (matthias at ccs.neu.edu)
Date: Sun Dec 6 22:48:14 EST 2009
You can also assign it to those of your students who understand accumulators (which is what many loops require, but we let students loose on for/do/while/repeat/rinse/one-more-time with nothing but intuition).
(define (string-index-of str sub)
  (local ((define l2 (explode sub))
          ;; [Listof 1String] [Listof 1String] -> Boolean
          ;; is the second string-list a prefix of the first?
          (define (> l1 l2)
            (cond
              [(empty? l2) true]
              [(empty? l1) false]
              [else (and (string=? (first l1) (first l2))
                         (> (rest l1) (rest l2)))]))
          ;; [Listof 1String] Nat -> Nat u false
          ;; does l2 occur in l
          ;; accumulator: i = (- (length l1) (length l))
          (define (each-position l i)
            ;; could stop when (length l) < (length l2), needs 2nd accu
            (cond
              [(empty? l) -1]
              [else (if (> l l2) i (each-position (rest l) (+ i 1)))])))
    (each-position (explode str) 0)))
(check-expect (string-index-of "abcd" "a") 0)
(check-expect (string-index-of "abc" "bc") 1)
(check-expect (string-index-of "abcd" "d") 3)
(check-expect (string-index-of "abcd" "de") -1)
(check-expect (string-index-of "abcd" "abcde") -1)
|
{"url":"http://lists.racket-lang.org/users/archive/2009-December/037035.html","timestamp":"2014-04-17T22:42:54Z","content_type":null,"content_length":"6554","record_id":"<urn:uuid:c963e3f0-0ee2-4b0b-a114-a4e4978359f1>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] np.choose() question
Andreas Hilboll lists@hilboll...
Tue Jun 8 11:24:51 CDT 2010
Hi there,
I have a problem, which I'm sure can somehow be solved using np.choose()
- but I cannot figure out how :(
I have an array idx, which holds int values and has a 2d shape. All
values inside idx are 0 <= idx < n. And I have a second array times,
which is 1d, with times.shape = (n,).
Out of these two arrays I now want to create a 2d array having the same
shape as idx, and holding the values contained in times, as indexed by idx.
A simple np.choose(idx,times) does not work (error "Need between 2 and
(32) array objects (inclusive).").
idx = [[4,2],[3,1]]
times = [100,101,102,103,104]
From these two I want to create an array
result = [[104,102],[103,101]]
How can this be done?
Thanks a lot for your insight!
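[A sketch for later readers: NumPy's integer "fancy" indexing does exactly this, and it also sidesteps choose's limit of 32 choice arrays.

import numpy as np

idx = np.array([[4, 2], [3, 1]])
times = np.array([100, 101, 102, 103, 104])
result = times[idx]   # index a 1-d array with a 2-d integer array
# result -> array([[104, 102],
#                  [103, 101]])]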
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2010-June/050810.html","timestamp":"2014-04-20T10:56:40Z","content_type":null,"content_length":"3208","record_id":"<urn:uuid:6f757372-e18a-4bcc-819b-7f4af81ab806>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: use of tempfile
From Eric Booth <eric.a.booth@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: use of tempfile
Date Thu, 19 Apr 2012 09:18:07 -0500
On Apr 19, 2012, at 8:45 AM, Prakash Singh wrote:
> If you bye this it ok otherwise let me spend some time pondering about
> how best I can put my query to get solved
I think you were on the right track with showing the 'before' and 'after' data examples - but:
(1) you never explained how you get from the 'before' to the 'after' dataset or why you would want to do this (e.g., new variables appear out of nowhere in the 'after' dataset - where did they come from?; there is no indication of the decision rule you used to select observations in the dataset that would be merged back in , nor how/why you would merge back in some observations in your dataset and not others, nor what variable you used to perform the merge; there is no logic or decision rule given for how you renamed those variables in the 'after' dataset; there are apparently missing & crucial variables (e.g., common_id, sate_code) that might tell us how you are restructuring your data if they were included and properly explained, how item_code relates to state_code_*, and so on…).
(2) The code snippet you shared has more new variables(what is 'common_id'?), different variable names (and different spellings), you use the file extension ".data" instead of ".dta", and mystery datasets that get merged in (e.g., you first create sales1_1 and sales1_20 and then merge sales1_1 to mystery dataset sales1_2 (which contains what exactly?) and then you merge mystery dataset sales1_1_19 (again ?) to sales1_20 -- how could we know what these other datasets contain or why you are using this merge process?). This code only made all this less clear.
The only way you could get better input on this is to provide clearer explanations and a clearer 'before' and 'after' example - we cannot guess at all the things I've described above and give you useful advice.
If you explain how you get (or want to get) from the 'before' to the 'after' data step-by-step _and_ the rationale for doing so, I think you might get some good advice here.
- Eric
Eric A. Booth
Public Policy Research Institute
Texas A&M University
|
{"url":"http://www.stata.com/statalist/archive/2012-04/msg00844.html","timestamp":"2014-04-20T11:31:33Z","content_type":null,"content_length":"11583","record_id":"<urn:uuid:d561df7d-58af-4d76-8771-076ff9bd1952>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts by
Posts by TT
Total # Posts: 61
Solve for u: u − 2/ u − 4 + 6 = u + u − 6/ 4 − u
Simplify (2/3k)+(k/k+1) divided by (k/k+1) -(3/k)
Solve for u: u − 2 u − 4 + 6 = u + u − 6 4 − u
what is the equal groups of 20
A microwave draws 5.0 A when it is connected to a 120 V outlet. If electrical energy costs $0.090 per kW·h, what is the cost of running the microwave for exactly 6 h?
2/3a +5/a
Undergraduates in a large university class were asked to record the time they spent on the most recent homework. In a histogram of the recorded times, the height of the bar over the interval 1 hour
to 3 hours is 4% per hour. The height of the bar over the interval ...
Lake superiorhas a depth of 1.333 thousand feet.there are 5,280 feet in one mile. How deep is Lake Superior?
Whats the lcm of 32 40 and 49
At the local animal shelter, there are 30 cats and 36 dogs. What is the ratio of dogs to cats at the local animal shelter
The length of a rectangle is 2 inches more than its width. The area of the rectangle is 15 square inches. What are the length and width of the rectangle?
simplest form
what is 1/2+5/6 in simplest form
Kate is at a T-shirt sale where she can buy one and get a second one for 30% off. She wants to buy two T-shirts listed at $19 each. How much will she pay?
Life Orientation
Observe one hour in a classroom (preschool, pre-kindergarten, kindergarten
Free Energy and Temperature A certain reaction is non-spontaneous at lower temperatures but becomes spontaneous when the temperature is raised sufficiently. For this reaction, ΔS is positive. ΔS is
negative. ΔH is negative. ΔG becomes positive at higher temperatures. ΔG is always p...
1.) For which of the following substances is the standard enthalpy of formation equal to zero? a) water [H2O(l)] d) carbon dioxide [CO2(g)] b) lead [Pb(l)] e) tin [Sn(s)] c) carbon dioxide [CO2(s)]
2.) Consider the following four equations: 1. C6H6(l) + O2(g) → 6CO2(g) + 3H2O(l)...
1. If its molar solubility is designated by x, then which of the following expressions best represents the solubility product for the substance Ag2SO4? a) x^2 b) 27x^4 c) 4x^3 d) 108x^5 e) 3x^4 2.
Consider the equilibrium, 4HCl(g) + O2(g) ⇌ 2H2O(g) + 2Cl2(g). The equilibri...
1. A solution of sodium oxalate has a pH of 7.82. The [OH-] in mol/L must be which of the following: a) 6.18 b) 1.5 x 10-8 c) 6.6 x 10-7 d) 7.82 e) -7.82 2. When 0.93 mol of O2 and 0.56 mol of NH3
are mixed together and allowed to come to equilibrium according to the equation:...
For the equilibrium system below, which of the following would result in an increase in the concentration of CO(g)? 2H2(g) + CO(g) ⇌ CH3OH(g) + 92 kJ a) decreasing temperature b) adding some CH3OH c)
both b and d d) decreasing the volume of the container e) both a and b
examples ways to apply things like punctuation, my grammar overall and apply it in the professional writing.
write a paragraph explaining how you can apply writing elements such as evaluation, summary, synthesis, analysis to enhance your academic and professional writing.
3rd grade math
the equal groups of 20
OK I just need help with two final problems that I am stuck on: The problem states the calcium carbonate in limestone reacts with HCl to produce a calcium chloride solution and carbon dioxide gas
CaCO3(s)+2HCl(aq)-->CaCl2(aq)+H2O(l)+CO2(g) 1)Find how many moles of CO2 form ...
I just figured out the first one shortly after I answered the second question thank you
I think the answer for #2 may be 0.221 M? Is this correct?
I am having trouble getting this starting and figuring out the correct steps...the problem is Calculate the final concentration of the solution in each question 1) Water is added to 0.270 L of a
6.00M HCl solution to give a volume of 3.00 L (Express answers with the appropiate...
Your right i figured it out it was due to the sig figs
Calculate the molarity of the following solutions: 5.0g of KOH in 4.0L of KOH solution I went from grams of KOH to moles of KOH to molarity but still got the wrong answer... 5.0g KOH X 1 mol KOH/
56.108g KOH = 5.0 mol KOH/56.108=0.0891 mol KOH/4.0L and got the answer of 0.02 M ...
when they fall the ball
Thank you i minused 18.0 from both sides to get 182g of water
I need help with this problem please Calculate the amount of reagents you take to prepare 200 g of 9.0% (m/m) glucose solution? I already found the mass of the glucose which was part one and got 18.0
g of glucose. Now they want the mass of water and I am stuck.
Thank you I just figured it out
What is the largest atom in period 2 of the periodic table?
Thank you so much I finally got it!!
If aluminum has a density of 2.7g/cm^3,what is the mass,in grams of the foil? It states that a package of aluminum foil is 54.3 yd long, 14 inches wide, and 0.00035 inches thick.
Ok so it would be 69.7221 lb or 70lb since it only wants two sig figs
If a person has a mass of 62 kg, how many pounds of water does she have in her body? And it gives the typical adult body containing 51% water
Yes sorry again I hit the button too quick...it states a package of aluminum foil is 54.3 yd long, 14 inches wide,and 0.00035 inches thick.
If aluminum has a density of 2.7g/cm3,what is the mass,in grams of the foil?
Yes, I'm sorry I left out the fact that it stated a typical adult body contains 51% water.
If a person has a mass of 62 kg, how many pounds of water does she have in her body?
A particle of mass 86 g and charge 24 μC is released from rest when it is 67 cm from a second particle of charge −12 μC. Determine the magnitude of the initial acceleration of the 86 g particle.
What is the wavelength of a 1400 kg truck traveling at a rate of 72 km/h?
Math - Trig
Given that a measures 25.1 mm, x measures 51.2 mm and angle b measures 82°, find the length of y -
Ethane, C2H6, has a molar heat of vaporization of 15 kJ/mole. How many kilojoules or energy are required to vaporize 5 g of ethane?
Subjunctive, infinitive, or indicative??? can anyone please correct me??? I try to practice it before my test but I'm really confused about it, and also please explain why I did it wrong??? muchas
gracias!!!! Put ir in the nosotros subjunctive form vayamos Put haber in the...
please check my grammar, and please tell me which one I can change to imperfect, because I try to write some sentences in preterit, and some in imperfect, thank you!!! Aprendí español porque quise
mejorar mi español. Visité...
6x sq.-24
What factors enabled congress to dominate the American government in the nineteenth century? Was the problem a series of weak president?
12th Calculus
a 13-ft ladder is leaning against a wall. suppose that the base of the ladder sides away from the wall at the constant rate of 3ft/sec 1. explain why the motion of the two ends of the ladder can be
represented by the parametric equations: x1(t)=3t, y2(t)=0 x2(t)=0, y2(t)= radi...
During myasthenia gravis, the body attacks the Ach receptors so they do not work; this could cause?
calculus adn geometry
how close does the semicircle y=radical(16-x^2) come to the point (1,radical3)?
How would you resolve the following conflicts between rights of the individuals and why? [g] Mexican-American students at Orange Coast College insist that tacos be served once a week at the Captain's
Table. [h] Fountain Valley High School has 80% minorities, but no minorit...
math riddle ALGEBRA WITH PIZZAZZ
do any body no da ansa
give an example of a situation were the students right might override the teachers responsibilities?
us history
In what ways did Wilson support progressivism? In what ways did President Taft support the progressive agenda? Briefly describe Theodore Roosevelt's progressive record. What were some of the reform
ideas at the turn of the century?
us history
what factors finally turn the tide for womens suffrage? On what grounds did people resist womens suffrage? what is the significance of the phrase"the perfect thirty-sixth"?
hang in there don't give up
|
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=TT","timestamp":"2014-04-17T21:42:45Z","content_type":null,"content_length":"17715","record_id":"<urn:uuid:e28e5a1e-f769-4fdd-af2d-64d06074ffbd>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
|
CGTalk - Polygon Intersection Detection
02-08-2004, 06:05 PM
I have coded up a routine to check how many polygons will intersect a ray shot from the mouse cursor (so I can make polygon selection code)
The result from the routine produces the correct answers most of the time, so if the ray intercepts 2 polygons, then 2 is returned.
The problem is that my code has been made to work with triangles and not quads, because I didn't think about quads until after I started to test it.
I only have to work with tri's and quads so ngons is not a problem.
The quads that cause problems are invalid quads that lie on 2 planes
Polygon = Vector(-1,0,-1), Vector(-1,0,1), Vector(1,0,1), Vector(1,50,-1)
The last vertex is too high and no longer lies on the same plane as the first 3.
Casting a ray onto this polygon will only intercept the first half of the polygon, because I'm checking only the first 3 points for the interception.
The way I'm doing this is checking if the ray intercepts the polygon's plane. The polygon only has one plane when all 4 vertices are lined up correctly, so I don't have a problem there.
What I'm asking is: what shall I do to overcome this?
I've thought about casting two rays: if it doesn't find the polygon the first time, try again on the second plane of the polygon.
The problem is that essentially the polygon has 4 planes, and I don't know how to figure out the correct 2 planes.
Maybe I haven't thought this through enough yet, but I thought I'd just ask and see if anyone has an idea.
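[One standard answer (a sketch, not from the thread): treat the quad as two triangles sharing a diagonal and run a ray/triangle test such as Möller-Trumbore on each. Which diagonal you split on is a modeling choice for non-planar quads; pick one convention and keep it.

def ray_triangle(orig, dirn, v0, v1, v2, eps=1e-9):
    # Moller-Trumbore ray/triangle intersection; returns t or None
    def cross(a, b):
        return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
    def dot(a, b):
        return sum(x*y for x, y in zip(a, b))
    e1 = [v1[i] - v0[i] for i in range(3)]
    e2 = [v2[i] - v0[i] for i in range(3)]
    p = cross(dirn, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None                  # ray parallel to the triangle's plane
    inv = 1.0 / det
    s = [orig[i] - v0[i] for i in range(3)]
    u = dot(s, p) * inv
    if u < 0 or u > 1:
        return None
    q = cross(s, e1)
    v = dot(dirn, q) * inv
    if v < 0 or u + v > 1:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None

def ray_quad(orig, dirn, a, b, c, d):
    # split the (possibly bent) quad along the a-c diagonal and test both halves
    return ray_triangle(orig, dirn, a, b, c) or ray_triangle(orig, dirn, a, c, d)

A hit on either triangle counts as a hit on the quad, so the bent half of the example polygon is no longer missed.]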
|
{"url":"http://forums.cgsociety.org/archive/index.php/t-121459.html","timestamp":"2014-04-19T02:07:33Z","content_type":null,"content_length":"8714","record_id":"<urn:uuid:8b457a46-95db-43b7-8090-cd849a46e25c>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Working With Functions in FSharp
Functions in F# group code into a single unit and reduce the complexity of large code.
Introduction: A function is a subprogram; it groups code into a single unit and reduces the complexity of large programs. It provides a modular approach to programming.
Defining a Function: In F#, a function is defined with the let keyword. The syntax for a function definition is:
let function_name parameter_list = function_body
For example: let add x y = x + y.
If the function is recursive, then the definition starts with the let rec keywords:
let rec function_name parameter_list = function_body
For example: let rec fact x = function_body.
Function Calling: Write the following code.
let add a b = a + b
let sub a b = a - b
let mul a b = a * b
let div a b = a / b
let result result1 result2 result3 result4 =
    printfn "Addition: %i" result1
    printfn "Subtraction: %i" result2
    printfn "Multiplication: %i" result3
    printfn "Division: %i" result4
result (add 42 6) (sub 42 6) (mul 4 5) (div 10 2)
Function with Returning Value: We make a function for adding two numbers in F# by writing the below code.
let add x y=x + y
let result=add 5 5
printfn "%d" result
Here, add is the name of a function with two parameters, x and y. It is a simple example of a function for the addition of two numbers. A function is called by writing its arguments after the function name, as in "add 5 5", and its value is assigned to result. The program prints 10.
Recursive Function: A recursive function is a function which makes calls to itself. The function calls itself within its definition. In F#, the declaration of a recursive function begins with the rec
keyword after the let keyword. Now we will make a recursion function to calculate a factorial. We write the following code
let rec fact x = if x <= 1 then 1 else x * fact (x - 1)
let result= fact 5
printfn "factorial : %d" result
The output is "factorial : 120".
Anonymous Functions: When there is no need to give a name to a function, we declare the function with the fun keyword. This type of function is also known as a lambda function, or lambda. In this type of function, the argument list and function body are separated by the token ->. We write the following code:
let x = (fun x y -> x + y) 5 7
printfn "%d" x
The output is 12.
|
{"url":"http://www.c-sharpcorner.com/uploadfile/718fc8/working-with-functions-in-fsharp/","timestamp":"2014-04-20T21:08:33Z","content_type":null,"content_length":"106755","record_id":"<urn:uuid:da8039c3-cce6-49b7-a2b9-15c2de8c2ee1>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
|
AMS Chelsea Publishing
1979; 639 pp; hardcover
Volume: 361
ISBN-10: 0-8218-4262-5
ISBN-13: 978-0-8218-4262-1
List Price: US$72
Member Price: US$64.80
Order Code: CHEL/361.H
The main purpose of this book is to provide help in learning existing techniques in combinatorics. The most effective way of learning such techniques is to solve exercises and problems. This book
presents all the material in the form of problems and series of problems (apart from some general comments at the beginning of each chapter). In the second part, a hint is given for each exercise,
which contains the main idea necessary for the solution, but allows the reader to practice the techniques by completing the proof. In the third part, a full solution is provided for each problem.
This book will be useful to those students who intend to start research in graph theory, combinatorics or their applications, and for those researchers who feel that combinatorial techniques might
help them with their work in other branches of mathematics, computer science, management science, electrical engineering and so on. For background, only the elements of linear algebra, group theory,
probability and calculus are needed.
Graduate students and research mathematicians interested in graph theory, combinatorics, and their applications.
"Lovász provides extensive help to those wishing to learn existing techniques in combinatories, approaching the topic in a participatory lecture format and providing hundreds of progressive
-- SciTech Book News
|
{"url":"http://cust-serv@ams.org/bookstore?fn=20&arg1=chelsealist&ikey=CHEL-361-H","timestamp":"2014-04-16T11:37:03Z","content_type":null,"content_length":"16177","record_id":"<urn:uuid:2497131d-7f57-4ffc-a1f1-497e9f4bcac3>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
|
General Chemistry/Reaction Rates
Reaction rates of a chemical system provide the underpinnings of many theories in thermodynamics and chemical equilibria.
Elementary reactions are one-step processes in which the reactants become the products without any intermediate steps. The reactions are unimolecular (A → products) or bimolecular (A + B → products).
Very rarely, they could be trimolecular (A + B + C → products), but this is not common due to the rarity of three molecules colliding at the same time.
A complex reaction is made up of several elementary reactions, with the products of one reaction becoming the reactants of the next until the overall reaction is complete.
Rate Equation
$m\hbox{A} + n\hbox{B} \rightarrow p\hbox{C} + q\hbox{D}$ Consider an arbitrary chemical reaction.
$r = k[\hbox{A}]^m[\hbox{B}]^n$ The rate at which the products will form from the reactants is given by this rate equation.
$r = k[\hbox{C}]^p[\hbox{D}]^q$ The rate of the reverse reaction (which also occurs to a lesser extent) has its own rate equation.
Note [A] is raised to the power of m, its coefficient, just like an equilibrium expression. The rate of the reaction may rely on the molar coefficients of the reactant species, but it might not.
However, for an elementary reaction, the concentrations of the species A and B are always raised to their molar coefficients. This only applies to elementary reactions, which is a very important
distinction to make.
$k$ is the rate reaction coefficient, which is reaction-specific. It can be considered a constant, although it does change with temperature (and possible other factors).
The order of a reaction with respect to a substance is the power to which that substance's concentration is raised in the rate equation. The greater the order, the greater the effect that concentration has on the rate; a substance with order zero, for example, does not affect the rate at all. To find an order, alter one concentration while keeping the rest the same; dividing the two resulting rate equations gives an equation that can be solved for the order. To find the overall order, simply add all the individual orders together.
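As a concrete sketch (illustrative numbers, not from the text), the division step boils down to taking a ratio of two measured rates:

from math import log

# two runs in which only [A] changes; rates from initial-rate measurements
conc1, rate1 = 0.10, 2.0e-3
conc2, rate2 = 0.20, 8.0e-3

# r = k[A]^m  =>  rate2/rate1 = (conc2/conc1)^m, so solve for m
m = log(rate2 / rate1) / log(conc2 / conc1)
print(m)   # 2.0, i.e. second order in A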
Zero-Order Equations
Zero-order equations do not depend on the concentrations of the reactants.
$r = k \,$ There is only a rate coefficient with no concentrations. The rate probably depends on temperature, and possibly other factors like surface area, sunlight intensity, or
anything else except for concentration. These reactions usually occur when a substance is reacting with some sort of catalyst or solid surface.
$[\hbox{A}] = [\hbox{A}]_0 - kt \,$ The integrated rate law tells us how much reactant will remain after a given amount of time. Integrated rate laws can be found using calculus, but that isn't necessary. In this zero-order integrated rate law, $k$ is the rate coefficient from the rate equation, $t$ is time, and $[\hbox{A}]_0$ is the starting concentration.
If you make a graph of concentration vs. time, you will see a straight line. The slope of that line is equal to $-k$. This is how you can identify a zero-order rate.
First-Order Equations
First-order equations depend on the concentration of a unimolecular reaction.
$r = k [\hbox{A}] \,$ There is a rate coefficient multiplied by the concentration of the reactant. As with a zero-order equation, the coefficient can be thought of as a constant, but it actually varies with the other factors like temperature. There can be other reactants present in the reaction, but their concentrations do not affect the rate. First-order equations are often seen in decomposition reactions.
$\ln [\hbox{A}] = -kt + \ln [\hbox{A}]_0 \,$ This is the integrated rate law.
$t_{1/2} = \frac{\ln 2}{k} \,$ The half-life of a reaction is the amount of time it takes for one half of the reactants to become products. One half-life is 50% completion, two half-lives would lead to 75% completion, three half-lives 88%, and so on. The reaction never quite reaches 100%, but it does come close enough. To find the half-life, you can algebraically manipulate the integrated rate law.
If you make a graph of the logarithm of concentration vs. time, you will see a straight line with a slope of $-k$. This is how you can identify a first-order rate.
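A short numerical check of the two first-order formulas (illustrative values):

from math import exp, log

k, A0 = 0.35, 1.00                   # rate coefficient (1/s), starting concentration (M)
half_life = log(2) / k
conc = lambda t: A0 * exp(-k * t)    # exponentiating ln[A] = -kt + ln[A]0
print(half_life, conc(half_life))    # conc(half_life) comes out to A0/2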
Second-Order Equations
Second-order equations depend on the two concentrations of a bimolecular reaction.
$r = k [\hbox{A}] [\hbox{B}] \,$ This is the rate law for a second-order equation.
$r = k [\hbox{A}]^2 \,$ If there are two molecules of the same kind reacting together, the rate law can be simplified.
$\frac{1}{[\hbox{A}]} = \frac{1}{[\hbox{A}]_0} + kt \,$ In that case, this is the integrated rate law.
$t_{1/2} = \frac{1}{k[\hbox{A}]_0} \,$ This is the half-life for a second-order reaction (with only one reactant).
To see a graph with a straight line of slope $k$, graph the reciprocal of concentration vs. time.
Equilibrium will occur when the forward and reverse rates are equal. As you may have already noticed, the equilibrium expression of a reaction is equal to the forward rate equation divided by the reverse one.
$2 \hbox{NO}_2 \to \hbox{N}_2\hbox{O}_4$ Consider this reaction, the dimerization of nitrogen dioxide into dinitrogen tetraoxide.
$r_f = k_f[\hbox{NO}_2]^2 \,$
The forward reaction rate is second-order, and the reverse reaction rate is first-order.
$r_r = k_r[\hbox{N}_2\hbox{O}_4] \,$
$k_f[\hbox{NO}_2]^2 = k_r[\hbox{N}_2\hbox{O}_4] \,$ The rate coefficients may be different for the two reactions. If the reaction is in equilibrium, the forward and reverse rates
must be equal.
$K_{eq} = \frac{k_f}{k_r} = \frac{[\hbox{N}_2\hbox{O}_4]}{[\hbox{NO}_2]^2} \,$ Rearranging the equation gives the equilibrium expression.
Understanding kinetics explains various concepts of equilibrium. Now it should make sense why increasing the reactant concentration will make more products. The forward rate increases, which uses up
reactants, which decreases the forward rate. At the same time, products are made, which increases the reverse reaction, until both reaction rates are equal again.
Arrhenius Equation
The Arrhenius equation determines a rate coefficient based on temperature and activation energy. It is surprisingly accurate and very useful. The Arrhenius equation is:
$k = Ae^{-E_a / RT}$
$E_a$ is the activation energy for the reaction, in joules per mole. $R$ is the Universal Gas Constant, $T$ is the temperature (in kelvin), and $A$ is the prefactor. Prefactors are usually determined experimentally.
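A small sketch of the equation in use (illustrative numbers):

from math import exp

R = 8.314                            # gas constant, J/(mol K)

def arrhenius(A, Ea, T):
    # k = A * exp(-Ea / (R*T)); Ea in J/mol, T in kelvin
    return A * exp(-Ea / (R * T))

# with Ea around 53 kJ/mol, a 10 K rise near room temperature roughly doubles k
print(arrhenius(1e13, 5.3e4, 308) / arrhenius(1e13, 5.3e4, 298))   # ~2.0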
|
{"url":"http://en.m.wikibooks.org/wiki/General_Chemistry/Reaction_Rates","timestamp":"2014-04-18T10:57:20Z","content_type":null,"content_length":"31467","record_id":"<urn:uuid:7864d908-f3f3-44b7-9a80-9dc428892952>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Python SciPy: optimization issue fmin_cobyla : one constraint is not respected
I have the below optimisation problem:
The objective function is quite simple: given a vector SPREAD, I try to find the vector W to maximize sum(W.SPREAD).
As an example, in dimension 3, this mean I try to maximize w1 x spread1 + w2 x spread2 + w3 x spread3.
Plus, I have three constraints c1, c2 & c3 not on W, but on a POS vector where POS = W2POS(W).
As an example, in dimension 3, contraints are:
1. |pos1 + pos2 + pos3| < 5
2. |pos1| + |pos2| + |pos3| < 500
3. Max(pos1, pos2, pos3) < 5
I wrote the code below, which performs some optimization; however, constraint 3 is not respected. How can I solve this problem while respecting my constraints?
I wrote the below code:
from scipy.optimize import fmin_cobyla
import numpy as np
import pandas as pd
def W2POS(W, PRICE, BETA):
    POS = (PRICE * BETA).T.dot(W)
    return POS

def objective(W, SPREAD, sign=1):
    er = sum((W * SPREAD.T).sum())
    return sign * er

def c1(x, *args):
    """ abs(sum(c)) < 500 """
    POS = W2POS(x, args[0], args[1])
    return POS.apply(abs).sum()

def c2(x, *args):
    """ abs(sum()) < 5 """
    POS = W2POS(x, args[0], args[1])
    return 5. - abs(POS.sum())

def c3(x, *args):
    """ abs(max(pos)) < 5 """
    POS = W2POS(x, args[0], args[1])
    return 5. - POS.apply(abs).max()

# optim
W0 = np.zeros(shape=(len(BETA), 1))
sign = -1
W = fmin_cobyla(objective, W0, cons=[c1, c2, c3], args=(SPREAD, sign),
                consargs=(PRICE, BETA), maxfun=100, rhobeg=0.02).T
print 'Solution:', W
args = [PRICE, BETA]
pos = W2POS(W.T, args[0], args[1])
print 'c1 < 5:', abs(pos.sum())[0]
print 'c2 < 500:', pos.apply(abs).sum()[0]
print 'c3 < 5:', pos.apply(abs).apply(max)[0]
You can play with some dummy data that illustrates constraint 3 not being respected with this code: http://pastebin.com/gjbeePgt
python optimization scipy
1 Answer
Reading the documentation in the original Fortran 77 file cobyla2.f (available in this package), lines 38 and 39, it is stated:
C1,C2,...,CM denote the constraint functions that should become nonnegative eventually, at least to the precision of RHOEND
If I interpret the scipy API documentation for fmin_cobyla correctly, RHOEND is by default set to 1.0E-4.
If the observed constraint violations are indeed less than RHOEND but still unacceptably large, a simple solution to the issue would be to incorporate the value of RHOEND in the constraint formulations, i.e. C[i] + RHOEND >= 0.
In this particular case, it does appear like the constraint violation is larger than RHOEND, which has been thoroughly illustrated by a new test case in the scipy repository,
constructed by Pauli Virtanen, and corresponding to the above question.
To avoid constraint violation in this particular case, the solution appears to be to start the optimization with a smaller value of RHOBEG, for example 0.01.
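Putting both suggestions together, the call from the question might look like the sketch below (my own adaptation, not tested against the asker's data; RHOEND is passed explicitly so the slack added to the constraint matches it):

RHOEND = 1.0e-6   # tighter final trust-region radius

def c3(x, *args):
    """ max(|pos|) < 5, with slack so violations within RHOEND are tolerated """
    POS = W2POS(x, args[0], args[1])
    return 5. - POS.apply(abs).max() + RHOEND

W = fmin_cobyla(objective, W0, cons=[c1, c2, c3], args=(SPREAD, sign),
                consargs=(PRICE, BETA), maxfun=1000,
                rhobeg=0.01, rhoend=RHOEND).T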
|
{"url":"http://stackoverflow.com/questions/18782092/python-scipy-optimization-issue-fmin-cobyla-one-constraint-is-not-respected?answertab=votes","timestamp":"2014-04-24T08:23:41Z","content_type":null,"content_length":"65515","record_id":"<urn:uuid:fb6e7e4c-093c-45bd-8a56-e0fffd751f7a>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Does the Rex Cube have two solutions?
Monopoly wrote:
You end up with two centers switched because you cannot have two centers switched if you solve it normally. (I have no idea what the technical term is for this, or why it should be. I just know that
that's the way it is...)
Therefore, remember your color scheme or you will have a hard time digging yourself out later!
On the Rex Cube, Dino Cube, and many other puzzles, every move you do is an even permutation of the pieces. On the Dino cube, when you twist a vertex you move 3 edges in a cycle. All 3-cycles are
even permutations because a 3-cycle swaps two pairs of edges (in a 3-cycle each pair shares a piece). On the Rex cube the centers also 3-cycle. Since the only move you can do is vertex twists (a
slice is just two vertex twists) the edges and centers must stay in an even permutation.
Solving into the opposite color scheme is equivalent to swapping two opposite faces. Since there are no 3-color/sided pieces to uniquely define the color scheme, you can accidentally do this with the edges like on the Dino Cube. If you count the number of swaps, it is just 4 pairs of opposite edges, and 4 swaps is an even permutation.
But in the centers, swapping two faces requires swapping just two centers which is an odd permutation (takes an odd number of swaps). Since you can only do moves that swap centers in pairs you won't
be able to solve the centers into the opposite color scheme.
For what it's worth, this is also the cause of the edge parity on the void cube. If you move a middle layer then you have 4-cycled the edges which is an odd permutation (needs 3 swaps). This is how
you can wind up with two edges swapped at the end but the corners are solved.
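The parity bookkeeping in the posts above is easy to check mechanically. The sketch below (my own illustration, not from the thread) counts the transpositions needed to build a permutation from its cycles:

def parity(perm):
    # 0 for an even permutation, 1 for odd; perm maps index i to perm[i]
    seen, swaps = set(), 0
    for i in range(len(perm)):
        if i in seen:
            continue
        length, j = 0, i
        while j not in seen:        # a cycle of length L needs L-1 swaps
            seen.add(j)
            j = perm[j]
            length += 1
        swaps += length - 1
    return swaps % 2

print(parity([1, 2, 0, 3]))              # a 3-cycle: even (prints 0)
print(parity([1, 0, 2, 3]))              # a single swap: odd (prints 1)
print(parity([1, 0, 3, 2, 5, 4, 7, 6]))  # 4 disjoint swaps: even (prints 0)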
Prior to using my real name I posted under the account named bmenrigh.
To Jared:
You can read about the Rex Cube here:
There is also a kind of Rex Cube whose sides are at the wrong orientation relative to each other:
Read the messages in this section more attentively!
Thank you! I didn't read that thread closely because I didn't want to spoil anything for when I got my own...
|
{"url":"http://www.twistypuzzles.com/forum/viewtopic.php?f=8&t=19974&p=241832","timestamp":"2014-04-20T01:13:54Z","content_type":null,"content_length":"38185","record_id":"<urn:uuid:d2863636-403a-4d64-933c-b13565934ecc>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Department of Mathematics Colloquium
A random walk on a group may be performed by beginning with any group element and multiplying by random group generators. A simple example is a random walk on the integers with generators {-1, 1}, so
that in each "step" you take either one step forward or one step back. In this talk we explore the behavior "at infinity" of a random walk on a group and describe a measure-theoretic boundary called
the Poisson Boundary. Most of the talk is accessible to anyone with basic knowledge of groups and probability.
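A minimal simulation of that simple example (my own sketch, not part of the talk):

import random

def walk(steps):
    # simple random walk on the integers with generators {-1, 1}
    position = 0
    for _ in range(steps):
        position += random.choice((-1, 1))
    return position

print([walk(1000) for _ in range(5)])   # typical displacement is on the order of sqrt(1000)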
|
{"url":"http://www.webpages.uidaho.edu/~barannyk/Seminars/Colloquium_May10_2012.html","timestamp":"2014-04-17T09:46:52Z","content_type":null,"content_length":"2193","record_id":"<urn:uuid:bc93c5c7-9b8d-47e3-ac22-89ce25aa6e29>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00229-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Indirect Proof
AKA Proof by contradiction
Indirect proof is synonymous with proof by contradiction. A keyword signalling that you should consider indirect proof is the word 'not'. Usually, when you are asked to prove that a given statement is NOT true, you can use indirect proof by assuming the statement is true and arriving at a contradiction. The idea behind the indirect method is that if what you assumed creates a contradiction, the opposite of your initial assumption is the truth.
Example of an Indirect Proof
Given: AD ⊥ BC
Prove: ∠ADB is not a straight angle
1) Assume ∠ADB is a straight angle. (Assume the opposite of the conclusion)
2) m∠ADB = 180°. (Definition of a straight angle)
3) AD ⊥ BC. (Given)
4) ∠ADB is a right angle. (Definition of perpendicular lines)
5) m∠ADB = 90°. (Definition of a right angle)
6) m∠ADB cannot be both 180° and 90°. (Contradiction of steps 2 and 5)
7) Due to the contradiction between steps 2 and 5, we know that the assumption that WE INTRODUCED in the first step is false: ∠ADB is not a straight angle.
|
{"url":"http://www.mathwarehouse.com/geometry/proof/indirect-proof.php","timestamp":"2014-04-19T06:52:22Z","content_type":null,"content_length":"20388","record_id":"<urn:uuid:e2a2409f-3cdd-45d0-96a7-955baebdbe0a>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Applications of Geometric Measure Theory to Variational Problems with Surfaces
Abstract : Geometric measure theory offers compactness theorems for candidate objects in variational problems, giving existence results in variational problems. Additionally it offers regularity
information about limits of regular objects.
I will introduce the objects of geometric measure theory; rectifiable sets, currents and varifolds. Key examples of each and their associated topologies will be given.
The primary application of geometric measure theory was minimal surface theory, but I will also explain my own research into geometric measure theory techniques and my motivating applications.
|
{"url":"http://www.ima.umn.edu/~antar/seminars/Oct14.html","timestamp":"2014-04-20T21:26:01Z","content_type":null,"content_length":"1167","record_id":"<urn:uuid:5b1adb76-f8a8-4ffb-82b9-8215a5d376fc>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00176-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts about Consensus on A Fine Theorem
If you know the probability theorist Bruno de Finetti, you know him either for his work on exchangeable processes, or for his legendary defense of finite additivity. Finite additivity essentially replaces the Kolmogorov assumption of countable additivity of probabilities. If Pr(i) for i=1 to N is the probability of event i (the events being disjoint), then the probability of the union of all i is just the sum of each individual probability under either countable or finite additivity, but countable additivity requires that property to hold for a countably infinite set of events.
What is objectionable about countable additivity? There are three classic problems. First, countable additivity restricts me from some very reasonable subjective beliefs. For instance, I might
imagine that a Devil is going to pick one of the integers, and that he is equally likely to predict any given number. That is, my prior is uniform over the integers. Countable additivity does not
allow this: if the probability of any given number being picked is greater than zero, then the sum diverges, and if the probability any given number is picked is zero, then by countable additivity
the sum of the grand set is also zero, violating the usual axiom that the grand set has probability 1. The second problem, loosely related to the first, is that I literally cannot assign
probabilities to some objects, such as a nonmeasurable set.
The third problem, though, is the really worrying one. To the extent that a theory of probability has epistemological meaning and is not simply a mathematical abstraction, we might want to require
that it not contradict well-known philosophical premises. Imagine that every day, nature selects either 0 or 1. Let us observe 1 every day until the present (call this day N). Let H be the hypothesis
that nature will select 1 every day from now until infinity. It is straightforward to show that countable additivity requires that as N grows large, continued observation of 1 implies that Pr(H)->1.
But this is just saying that induction works! And if there is any great philosophical advance in the modern era, it is Hume’s (and Goodman’s, among others) demolition of the idea that induction is
sensible. My own introduction to finite additivity comes from a friend’s work on consensus formation and belief updating in economics: we certainly don’t want to bake in ridiculous conclusions about
beliefs that rely entirely on countable additivity, given how strongly that assumption militates for induction. Aumann was always very careful on this point.
It turns out that if you simply replace countable additivity with finite additivity, all of these problems (among others) go away. Howson, in a paper in the newest issue of Synthese, asks why, given
that clear benefit, anyone still finds countable additivity justifiable. Surely there are lots of pretty theorems, from Radon-Nikodym on down, that require countable additivity, but if the theorem
critically hinges on the basis of an unjustifiable assumption, then what exactly are we to infer about the justifiability of the theorem itself?
Two serious objections are tougher to deal with for de Finetti acolytes: coherence and conditionalization. Coherence, a principle closely associated with de Finetti himself, says that there should
not be “fair bets” given your beliefs where you are guaranteed to lose money. It is sometimes claimed that a uniform prior over the naturals is not coherent: you are willing to take a bet that any
given natural number will not be drawn, but the conjunction of such bets for all natural numbers means you will lose money with certainty. This isn’t too worrying, though; if we reject countable
additivity, then why should we define coherence to apply to non-finite conjunctions of bets?
Conditionalization is more problematic. It means that given prior P(i), your posterior P(f) of event S after observing event E must be such that P(f)(S)=P(i)(S|E). This is just "Bayesian updating" off of a prior. Lester Dubins pointed out the following. Let A and B be two mutually exclusive hypotheses, such that P(A)=P(B)=.5. Let the random quantity X take positive integer values such that P(X=n|B)=0 for every n (you have a uniform prior over the naturals conditional on B obtaining, which finite additivity allows), and P(X=n|A)=2^(-n). By the law of total probability over the finite partition {A,B}, for all n, P(X=n)>0, and therefore by Bayes' Theorem, P(A|X=n)=1 and P(B|X=n)=0, no matter which n obtains! Something is odd here. Before seeing the resolution of X, you would take a fair bet on B obtaining. But once some n obtains (no matter which n!), you are guaranteed to lose money by betting on B.
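A quick numerical check of the Bayes computation (a sketch; the finitely additive uniform distribution under B can only be represented here through its pointwise-zero likelihoods):

prior_A, prior_B = 0.5, 0.5

def posterior_A(n):
    # P(A | X = n) via Bayes, with P(X=n|A) = 2^-n and P(X=n|B) = 0
    like_A = 2.0 ** (-n)   # strictly positive for every n
    like_B = 0.0           # pointwise zero under the uniform 'prior' on the naturals
    return like_A * prior_A / (like_A * prior_A + like_B * prior_B)

print([posterior_A(n) for n in (1, 10, 100)])   # 1.0 for every n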
Here is where Howson tries to save de Finetti with an unexpected tack. The problem in Dubins' example is not finite additivity, but conditionalization – Bayesian updating from priors – itself! Here's why. By a principle called "reflection", if, using a suitable updating rule, your future probability of an event is p with certainty, then your current probability of that event must also be p. By Dubins' argument, then, P(B)=0 must hold before X realizes. But that means your prior on B must be 0, which means that whatever independent reasons you had for the prior being .5 must be rejected. If we are to give up one of Reflection, Finite Additivity, Conditionalization, Bayes' Theorem or the Existence of Priors, Howson says we ought to give up conditionalization. Now, there are lots of good reasons why conditionalization is sensible within a utility framework, so at this point, I will simply point you toward the full paper and let you decide for yourself whether Howson's conclusion is sensible. In any case, the problems with countable additivity should be better known by economists.
Final version in Synthese, March 2014 [gated]. Incidentally, de Finetti was very tightly linked to the early econometricians. His philosophy – that probability is a form of logic and hence
non-ampliative (“That which is logical is exact, but tells us nothing”) – simply oozes out of Savage/Aumann/Selten methods of dealing with reasoning under uncertainty. Read, for example, what Keynes
had to say about what a probability is, and you will see just how radical de Finetti really was.
“Wall Street and Silicon Valley: A Delicate Interaction,” G.-M. Angeletos, G. Lorenzoni & A. Pavan (2012)
The Keynesian Beauty Contest – is there any better example of an “old” concept in economics that, when read in its original form, is just screaming out for a modern analysis? You’ve got coordination
problems, higher-order beliefs, signal extraction about underlying fundamentals, optimal policy response by a planner herself informationally constrained: all of these, of course, problems that have
consumed micro theorists over the past few decades. The general problem of irrational exuberance when we start to model things formally, though, is that it turns out to be very difficult to generate
“irrational” actions by rational, forward-looking agents. Angeletos et al have a very nice model that can generate irrational-looking asset price movements even when all agents are perfectly
rational, based on the idea of information frictions between the real and financial sector.
Here is the basic plot. Entrepreneurs get an individual signal and a correlated signal about the “real” state of the economy (the correlation in error about fundamentals may be a reduced-form measure
of previous herding, for instance). The entrepreneurs then make a costly investment. In the next period, some percentage of the entrepreneurs have to sell their asset on a competitive market. This
may represent, say, idiosyncratic liquidity shocks, but really it is just in the model to abstract away from the finance sector learning about entrepreneur signals based on the extensive margin
choice of whether to sell or not. The price paid for the asset depends on the financial sector’s beliefs about the real state of the economy, which come from a public noisy signal and the trader’s
observations about how much investment was made by entrepreneurs. Note that the price traders pay is partially a function of trader beliefs about the state of the economy derived from the total
investment made by entrepreneurs, and the total investment made is partially a function of the price at which entrepreneurs expect to be able to sell capital should a liquidity crisis hit a given
firm. That is, higher order beliefs of both the traders and entrepreneurs about what the other aggregate class will do determine equilibrium investment and prices.
What does this imply? Capital investment is higher in the first stage if either the state of the world is believed to be good by entrepreneurs, or if the price paid in the following period for assets
is expected to be high. Traders will pay a high price for an asset if the state of the world is believed to be good. These traders look at capital investment and essentially see another noisy signal
about the state of the world. When an entrepreneur sees a correlated signal that is higher than his private signal, he increases investment due to a rational belief that the state of the world is
better, but then increases it even more because of an endogenous strategic complementarity among the entrepreneurs, all of whom prefer higher investment by the class as a whole since that leads to
more positive beliefs by traders and hence higher asset prices tomorrow. Of course, traders understand this effect, but a fixed point argument shows that even accounting for the aggregate strategic
increase in investment when the correlated signal is high, aggregate capital can be read by traders precisely as a noisy signal of the actual state of the world. This means that when when
entrepreneurs invest partially on the basis of a signal correlated among their class (i.e., there are information spillovers), investment is based too heavily on noise. An overweighting of public
signals in a type of coordination game is right along the lines of the lesson in Morris and Shin (2002). Note that the individual signals for entrepreneurs are necessary to keep the traders from
being able to completely invert the information contained in capital production.
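The signal-extraction channel is easy to mimic numerically. The sketch below is a stylized caricature, not the authors' model: investment responds linearly to both signals with hypothetical weights, idiosyncratic noise washes out in the aggregate, and the common noise survives, so traders reading aggregate investment are partly reading noise:

import numpy as np

rng = np.random.default_rng(0)
theta = 1.0                  # true fundamental
N = 100000                   # entrepreneurs
a, b = 0.5, 0.5              # hypothetical weights on private and common signals

eta = rng.normal()           # common noise (the correlated signal's error)
eps = rng.normal(size=N)     # idiosyncratic noise
x = theta + eps              # private signals
y = theta + eta              # correlated signal

K = np.mean(a * x + b * y)   # aggregate investment
print(K, theta + b * eta)    # nearly identical: only the common noise survives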
What can a planner who doesn’t observe these signals do? Consider taxing investment as a function of asset prices, where high taxes appear when the market gets particularly frothy. This is good on
the one hand: entrepreneurs build too much capital following a high correlated signal because other entrepreneurs will be doing the same and therefore traders will infer the state of the world is
high and pay high prices for the asset. Taxing high asset prices lowers the incentive for entrepreneurs to shade capital production up when the correlated signal is good. But this tax will also lower
the incentive to produce more capital when the actual state of the world, and not just the correlated signal, is good. The authors discuss how taxing capital and the financial sector separately can
help alleviate that concern.
Proving all of this formally, it should be noted, is quite a challenge. And the formality is really a blessing, because we can see what is necessary and what is not if a beauty contest story is to
explain excess aggregate volatility. First, we require some correlation in signals in the real sector to get the Morris-Shin effect operating. Second, we do not require the correlation to be on a
signal about the real world; it could instead be correlation about a higher order belief held by the financial sector! The correlation merely allows entrepreneurs to figure something out about how
much capital they as a class will produce, and hence about what traders in the next period will infer about the state of the world from that aggregate capital production. Instead of a signal that
correlates entrepreneur beliefs about the state of the world, then, we could have a correlated signal about higher-order beliefs, say, how traders will interpret how entrepreneurs interpret how
traders interpret capital production. The basic mechanism will remain: traders essentially read from aggregate actions of entrepreneurs a noisy signal about the true state of the world. And all this
beauty contest logic holds in an otherwise perfectly standard Neokeynesian rational expectations model!
2012 working paper (IDEAS version). This paper used to go by the title “Beauty Contests and Irrational Exuberance”; I prefer the old name!
“The Nash Bargaining Solution in Economic Modeling,” K. Binmore, A. Rubinsten & A. Wolinsky (1986)
If we form a joint venture, our two firms will jointly earn a profit of N dollars. If our two countries agree to this costly treaty, total world welfare will increase by the equivalent of N dollars.
How should we split the profit in the joint venture case, or the costs in the case of the treaty? There are two main ways of thinking about this problem: the static bargaining approach developed
first by John Nash, and bargaining outcomes that form the perfect outcome of a strategic game, for which Rubinstein (1982) really opened the field.
The Nash solution says the following. Let us have some pie of size 1 to divide. Let each of us have a threat point, S1 and S2. Then if certain axioms are followed (symmetry, invariance to unimportant
transformations of the utility function, Pareto optimality and something called the IIA condition), the bargain is the one that maximizes (u1(p)-u1(S1))*(u2(1-p)-u2(S2)), where p is the share of the
pie of size 1 that accrues to player 1. So if we both have linear utility, player 1 can leave and collect .3, and player 2 can leave and collect 0, but a total of 1 is earned by our joint venture,
the Nash bargaining solution is the p that maximizes (p-.3)*(1-p-0); that is, p=.65. This is pretty intuitive: 1-.3-0=.7 of surplus is generated by the joint venture, and we each get our outside
option plus half of that surplus.
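Checking that worked example numerically (a sketch using scipy; linear utilities as in the text):

from scipy.optimize import minimize_scalar

s1, s2 = 0.3, 0.0   # threat points from the example

# maximize the Nash product (p - s1)(1 - p - s2) by minimizing its negative
res = minimize_scalar(lambda p: -(p - s1) * (1.0 - p - s2),
                      bounds=(s1, 1.0 - s2), method='bounded')
print(res.x)   # ~0.65: each side gets its outside option plus half the surplus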
The static outcome is not very compelling, however, as Tom Schelling long ago pointed out. In particular, the outside option looks like a noncredible threat: If player 2 refused to offer player 1
more than .31, then Player 1 would accept given his outside option is only .3. That is, in a one-shot bargaining game, any p between .3 and 1 looks like an equilibrium. It is also not totally clear
how we should interpret the utility functions u1 and u2, and the threat points S1 and S2.
Rubinstein bargaining began to fix this. Let players make offers back and forth, and let there be a time period D between each offer. If no agreement is reached after T periods, we both get our
outside options. Under some pretty compelling axioms, there is a unique perfect equilibrium whereby player 1 gets p* if he makes the first offer, and p** if player 2 makes the first offer. Roughly,
if the time between offers is D, player 1 must offer player 2 a high enough share that player 2 is indifferent between that share today and the amount he could earn when he makes an offer in the next
period. Note that the outside options do not come into play unless, say, player 1′s outside option is higher than min{p*,p**}. Note also that as D goes to 0, all of the difference in bargaining power
has to do with who is more patient. Binmore et al modify this game so that, instead of discounting the future, rather there is a small chance that the gains from negotiation will disappear
(“breakdown”) in between every period; for instance, we may want to form a joint venture to invent some product, but while we negotiate, another firm may swoop in and invent it. It turns out that
this model, with von Neumann-Morganstern utility functions for each player (though perhaps differing levels of risk aversion) is a special case of Rubinstein bargaining.
Binmore et al prove that as D goes to zero, both strategic cases above have unique perfect equilibria equal to a Nash bargaining solution. But a Nash solution for what utility functions and threat
points? The Rubinstein game limits to Nash bargaining where the difference in utilities has to do with time preference, and the threat points S1 and S2 are equal to zero. The breakdown game limits to
Nash bargaining where the difference in utilities has to do with risk aversion, and the threat points S1 and S2 are equal to whatever utility we would get from the world after breakdown.
Two important points: first, it was well known that a concave transformation of a utility function leads to a worse outcome in Nash bargaining for that player. But we know from the previous paragraph
that this concave transformation is equivalent to a more impatient Rubinstein bargainer: a concave transformation of the utilities in the Nash outcome has to do with changing the patience, not the
risk aversion, of players. Second, Schelling was right when he argued that the Nash threat points involve noncredible threats. As long as players prefer their Rubinstein equilibrium outcome to their
outside option, the outside option does not matter for the bargaining outcome. Take the example above where one player could leave the joint venture and still earn .3. The limit of Rubinstein
bargaining is for each player to earn .5 from the joint venture, not .65 and .35. The fact that one player could leave the joint venture and still earn .3 is totally inconsequential to the
negotiation, since the other player knows that this threat is not credible whenever the first player could earn at least .31 by staying. This point is often wildly misunderstood when people apply
Nash bargaining solutions: properly defining the threat point matters!
Final RAND version (IDEAS). There has been substantial work since the 80s on the problem of bargaining, particularly in trying to construct models where delay is generated, since Rubinstein
guarantees agreement immediately and real-world bargaining rarely ends in one step; unsurprisingly, these newer papers tend to rely on difficult manipulation of theorems using asymmetric information.
“Being Realistic about Common Knowledge: A Lewisian Approach,” C. Paternotte (2011)
(Site note: apologies for the recent slow rate of posting. In my defense, this is surely the first post in the economics blogosphere to be sent from Somalia, where I am running through a bunch of
ministerial and businessman meetings before returning to the US for AEA. The main AEA site is right down the street from my apartment, so if you can’t make it next week, I will be providing daily
updates on any interesting presentations I happen across. Of course, I will post some brief thoughts on the Somali economy as well.)
We economists know common knowledge via the mathematical rigor of Aumann, but priority for the idea goes to a series of linguists in the 1960s and to the superfamous philosopher David Lewis and his
1969 book “Conventions.” Even within philosophy, the formal presentation of Aumann has proven more influential. But the economic conception of common knowledge is subject to some serious critiques as
a standard model of how we should think about knowledge. One, it is equivalent to an infinite series of epistemic iterations: I know X, know you know that I know X, and so on. Second, and you may
know this argument via Monderer and Samet, the standard “common knowledge is created when something is announced publicly” is surely spurious: how do I know that you heard correctly? Perhaps you were
daydreaming. Third, Aumann-style common knowledge is totally predicated on deductive reasoning: every agent correctly deduces the effect of every new piece of information on their own knowledge
partition. This is asking quite a bit, to say the least. The first objection is not too worrying: any student of game theory knows the self-evident event definition of common knowledge, which implies
the epistemic iteration definition. Indeed, you can think of the "I know, know that you know, know you know that I know, etc." iterations as the consequence of knowing some public event. Paternotte
gives the great example of any inductive proof in mathematics: knowing X holds for the first element and X holding for element i implies it holds for i+1 is not terribly cognitively demanding, but
knowing those two facts implies knowledge of an infinite string of implications. The second objection, fallibility, has been treated with economists using p-belief: assign a probability distribution
to the state space, and talk about having .99-common belief rather than common knowledge. The third, it seems, is less readily handled.
But how did Lewis think of common knowledge? And can we formalize his ideas? What is then represented? This paper is similar to Cubitt and Sugden (2003, Economics and Philosophy), though it strikes
me as the more interesting take. Lewis said the following:
It is common knowledge among a population that X iff some state of affairs A holds such that
1: Everyone has reason to believe that A holds
2: A indicates to everyone that everyone has reason to believe that A holds, and
3: A indicates to everyone that X.
Note that the Lewisian definition is not susceptible to the three arguments noted above. Agents don't necessarily believe something, but rather just have reason to do so. They know how the others reason, but the method of reasoning is not necessarily deductive. Let's try to formalize those conditions in a standard state space world. Let B(p,i)E be the belief operator of agent i: B(.7,John):"It rains today" means John believes with probability .7 that it will rain today. Condition 1 in Lewis looks like claiming that all agents believe with p>.5 that A holds (have a "reason to believe A"). The phrase "A indicates X" should mean that there is a reasoning function of agent i, f(i), such that if A is believed with p>.5, then so is X (we will need some technical conditions here to ensure the function f(i) is defined uniquely for a given reasoning standard).
What is interesting is that this definition is tightly linked to standard Monderer-Samet common p-belief. For every common p-belief, p>.5, there is a set of parameters for which Lewisian common
knowledge exists. For every set of parameters where Lewisian common knowledge exists, there is at least .5-common belief. Thus, though Lewisian common knowledge appears to be not that strict, it in
fact is in a strong sense equivalent to common p-belief, and thus implies any of the myriad results published using that simpler concept. What an interesting result! I take this to mean that many
common complaints about common knowledge are not that serious at all, and that p-belief, quite standard these days in economics, is much more broadly applicable than I previously believed.
http://www.springerlink.com/content/n81219v23334n610/ (GATED. Philosophy community: you have to do something about the lack of working papers freely accessible! Final version in Synthese 183.2 – if
you are a micro theorist, you should definitely be reading this journal, as it is definitely the top journal in philosophy publishing analytic, formal results in theory of knowledge.)
“Centralizing Information in Networks,” J. Hagenbach (2011)
Ah…strategic action on network topologies. There is a wily problem. Tons of work has gone into the problem of strategic action on networks in the past 15 years, and I think it’s safe to say that the
vast majority is either trivial or has proved too hard of a problem to say anything useful at all. This recent paper by Jeanne Hagenbach is a nice exception: it’s not all obvious, and it addresses an
important question.
There is a fairly well-known experimental paper by Bonacich in the American Sociological Review from 1990 in which he examines how communications structure affects the centralizing of information. A
group of N players attempt to gather N pieces of information (for example, a 10-digit string of numbers). They each start with one piece. A communication network is endowed on the group. Every
period, each player can either share each piece of information they know with everyone they are connected to, or hide their information. When some person collects all the information, a prize is
awarded to everybody, and the size of the prize decreases in the amount of time it took to gather the info. The person (or persons) who have all of the information in this last period are awarded a
bonus, and if there are multiple solvers in the final period, the bonus is split among them. Assume throughout that the communications graph is undirected and connected.
Hagenbach formalizes this paper as a game, using SPNE instead of Nash as a solution concept in order to avoid the oft-seen problem of networks where “everybody do nothing” is an equilibrium. She
proves the following. First, if the maximum game length is at least N-1 periods, then every SPNE involves information being aggregated. Second, in any game where a player i could potentially solve
the puzzle first (i.e., the maximum length of shortest paths of player i to other players is less than the maximum time T the game lasts), there is an SPNE where she does win, and further she wins in
the shortest possible amount of time. Third, for a class of communication networks that includes graphs like the tree and the complete graph, every SPNE has some player solving the puzzle in no more than N-1 periods. Fourth, for other simple graph structures, there are SPNEs in which an arbitrary amount of time passes before some player solves the game.
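The condition in the second result is a pure graph property (a bound on each player's maximum shortest-path distance), so it is easy to check. The sketch below (my own illustration, using networkx, with a hypothetical horizon T) tests it on the four-player square discussed next:

import networkx as nx

# four players arranged in a square: A-B, B-C, C-D, D-A
G = nx.cycle_graph(['A', 'B', 'C', 'D'])

T = 2   # hypothetical maximum game length
for player, ecc in nx.eccentricity(G).items():
    # a player can potentially solve first only if everyone is within T hops
    print(player, ecc, ecc <= T)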
The intuition for all of these results boils down to the following. Every complete graph involves at least two agents connected to each other who will potentially each hold every piece of information
the opponent lacks. When this happens, we are in the normal Game of Chicken. Since the problem has a final period T and we are looking for SPNE, in the final period T the two players just play
chicken with each other, and chicken has two pure strategy Nash equilibria: I go straight, you swerve, or you go straight and I swerve. Either way, one of us “swerves”/shares information, and the
other player solves the puzzle. The second theorem just relies on the strategy where whichever player we want to solve the puzzle refuses to share ever; every other player can only win nonzero payoff
by getting their information to her, and they want to do so as quickly as possible. The fourth result is pretty interesting as well. Consider a 1000 period game, with four players arranged in a
square: A talks to B and D, B talks to A and C, C to B and D, and D to A and C. We can be in a situation where B needs what A has, and A needs what B has, but not be in a duel. Why? Because A may be
able to get the information from C, and B the information he needs from D. Consider the following hypothesized SPNE, though: everyone hides until period 999, then everyone passes information on in
999 and 1000. In this SPNE, everyone solves the puzzle simultaneously in period 1000 and gets one-fourth of the bonus reward. If any player deviates and, say, shares information before period 999,
then the other players all play an easily constructed strategy whereby the three of them solve the following period but the deviator does not. If the reward is big enough, then all the discounting we
need to get to period 1000 will not be enough to make anyone want to deviate.
What does this all mean for social science? Essentially, if I want information to be shared and I have both team and individual bonuses, then no matter what individual and team bonuses I give, the
information will be properly aggregated by strategic agents quite quickly if I make communication follow something like a hierarchy. Every (subgame perfect) equilibrium involves quick coordination.
On the other hand, if the individual and team bonuses are not properly calibrated and communication involves cycles, it may take arbitrarily long to coordinate. I think a lot more could be done with
these ideas applied to traditional team theory/multitasking.
One caveat: I am not a fan at all of modeling this game as having a terminal period. The assumption that the game ends after T periods is clearly driving the result, and I have some hunch that simply
using a different equilibrium concept than SPNE and allowing an infinite horizon, you could solve for very similar results. If so, that would be much more satisfying. I always find it strange when
hold-up problems or bargaining problems are modeled as having necessarily a firm “explosion date”. This avoids much of the great complexity of negotiation problems!
http://hal.archives-ouvertes.fr/docs/00/36/78/94/PDF/09011.pdf (2009 WP – final version with nice graphs in GEB 72 (2011). Hagenbach and a coauthor also have an interesting recent ReStud where they
model something like Keynes’ beauty contest allowing cheap talk communication about the state among agents who have slight heterogeneity in preferences.
“The Temporal Structure of Scientific Consensus Formation,” U. Shwed & P. Bearman (2010)
This great little paper about the mechanics of scientific consensus appeared in the last copy of the American Sociological Review. The problem is the following: how can we identify when the
scientific community – seen here as an actor, following the underrated-among-economists-yet-possibly-crazy philosopher Bruno Latour – achieves consensus on a topic? When do scientists agree that
cancer is caused by sun exposure, or that smoking is carcinogenic, or that global warming is caused by man? The perspective here is a school in sociology known as STS that is quite postmodern, so
there is certainly no claim that this scientific consensus means we have learned the “truth”, somehow defined. Rather, we just want to know when scientists have stopped arguing about the basics and
have moved on to extensions and minor issues. Kuhn would call this consensus “normal science”, but Latour and the STS guys often refer to it as “black boxing,” in which scientific consensus allows
scientists to state something like "smoking causes cancer" without having to defend it. Economics contains many such black-boxed facts in its current paradigm: agents are expected utility
maximizers, for example. (Note that “the black box”, in economics, can also refer to growth in multifactor productivity, as in the title of Nate Rosenberg’s book on innovation; this is somewhat, but
not entirely, the same concept).
But how do we identify which facts have been black boxed? Traditionally, sociologists of science have used expert conclusions. For instance, IPCC reports survey experts on climate change. The first
IPCC report in the 1980s did not identify climate change as anthropogenic, but all future reports did. The problem is that such expert reports are not available for all problems where we wish to
investigate consensus, and second that it is in some sense “undemocratic” to rely on expert judgments alone. It would be better to have a method whereby an ignorant observer can look down from on
high at the world of science and pronounce that “topic A is in a state of consensus and topic B is not.”
This is precisely what Bearman and Shwed do. They construct citation networks, using keyword searches, on a number of potentially contested ideas over time, ranging from those which are traditionally
considered to have little epistemic rivalry (coffee causes cancer) to those with well-known scientific debates (the nature of gravitational waves). They use a dynamic method where the sample at time
t includes all papers which were cited within the past X years (where X is the median age of papers cited by new research at time t), as well as any older articles in the same field which were cited
by those papers. They then examine the modularity of the citation network. A high level of modularity, in network terms, essentially means that the network is made up of relatively distinct
communities vis-a-vis a random graph. Since citations to papers viewed favorably are known to be more common, high modularity means there are multiple cliques who cite each other, but do not cite
their epistemic rivals.
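In network terms the measurement boils down to community detection plus a modularity score. A toy sketch (my own, using networkx; the paper's graphs are citation networks built from keyword searches):

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# toy 'citation' graph: two dense camps joined by a single bridge
G = nx.barbell_graph(10, 0)

communities = greedy_modularity_communities(G)
print(modularity(G, communities))   # high modularity ~ rival camps citing within themselves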
With this in hand, the authors show that areas considered by expert studies to have little rivalry do indeed have flat and low levels of modularity. Those traditionally considered to be contentious
do indeed show a lot of variance in their modularity, and a high absolute level thereof. The “calibrating” examples show evidence, in the citation network, of consensus being reached before any
expert study proclaimed such consensus. In some sense, then, network evaluation can pinpoint scientific consensus faster, and with less specialized knowledge, than expert studies. Applying the
methodology to current debates, the authors see little contention over the non-carcinogenicity of cell phones or the lack of a causal relation between MMR vaccines and autism. The methodology could
obviously be applied to other fields – literature and philosophy would both be interesting cases to examine.
One final note: this article is published in a sociology journal. I would greatly encourage economists to sign up for eTOC emails from our sister fields, which often publish content on econ-style
topics, though often with data, tools and methodologies an economist wouldn’t think of using. In sociology, I get the ASR and AJS, though if you like network theory, you may want to also look at a
few of the more quantitative field journals. In political science, APSR is the journal to get. In philosophy, Journal of Philosophy and Philosophical Review, as well as Synthese, are top-notch, and
all fairly regularly publish articles on knowledge which would not be out of place in a micro theory journal. I read Philosophy of Science as well, which you might want to take a look at if you like
methodological questions. The hardcore econometricians and math theory guys surely would want to look at journals in stats and mathematics; I don’t read these per se, but I often follow citations to
interesting (for economics) papers in JASA, Annals of Statistics and Biometrika. I’m sure experimentalists should be reading the psych and anthropology literature as well, but I have both little
knowledge and little interest in that area, so I’m afraid I have no suggestions; perhaps a commenter can add some.
http://asr.sagepub.com/content/75/6/817.full.pdf+html (Final version, ASR December 2010. GATED. Not only could I not find an ungated working paper version, I can’t even find a personal webpage at all
for either of the authors; there’s nothing for Shwed and only a group page including a subset of Bearman’s work. It’s 2011! And worse, these guys are literally writing about epistemic closure and
scientific communities. If anyone should understand the importance of open access for new research, it’s them, right?)
“On Consensus through Communication with a Commonly Known Protocol,” E. Tsakas & M. Voorneveld (2010)
(Site note: I will be down in Cuba until Dec. 24, so posting will be light until then, though I do have a few new papers to discuss. I’m going to meet with some folks there about the recent economic
reforms and their effects, so perhaps I’ll have something interesting to pass along on that front.)
A couple weeks ago, I posted about the nice result of Parikh and Krasucki (1990), who show that when communication is pairwise, beliefs can fail to converge under many types of pre-specified orders
of communication. In their paper, and in every paper following it that I know of, common knowledge of the order of communication is always assumed. For instance, if Amanda talks with Bob and then Bob
talks with Carol, since only common knowledge of the original information partitions is assumed, for Carol to update "properly" she needs to know whether Bob has talked to Amanda previously.
In a paper pointed out by a commenter, Tsakas and Voorneveld point out through counterexample just how strict this requirement is. They expand the state space to include knowledge of the order of
communication (using knowledge in the standard Aumann way). It turns out that with all of the necessary conditions of Parikh and Krasucki holding, and uncertainty about whether a single act of
communication occurred, consensus can fail to be reached. What's worrying here from a modeling perspective is that it is really convenient to model communication as a directed graph, where A links to B if A talks to B infinitely many times. I see the Tsakas and Voorneveld result as giving some pause to that assumption. In particular, in the example, all agents have common knowledge of the communications graph, since the only uncertainty is in one period and there is therefore no uncertainty about the structure of the graph.
There is no positive result here: we don’t have useful conditions guaranteeing belief convergence under uncertainty about the protocol. In the paper I’m working on, I restrict all results to
“regular” communication, meaning the only communication is through formal channels that occur infinite times, and because of this I only need to assume knowledge of the graph.
http://edocs.ub.unimaas.nl/loader/file.asp?id=1490 (Working Paper. Tsakas and Voorneveld also have a 2007 paper on this topic that corrects some erroneous earlier work: https://gupea.ub.gu.se/dspace/bitstream/2077/4576/1/gunwpe0255.pdf. In particular, even if consensus is reached, information only becomes common knowledge among the agents under really restrictive assumptions. This is important if, for instance, you are studying mechanisms on a network, since many results in game theory require common knowledge about what opponents will do: see Dekel and Brandenburger (1987) and Aumann and Brandenburger (1995), for instance. I'll have more to say about this once I get a few more results proved.)
|
{"url":"https://afinetheorem.wordpress.com/category/consensus/","timestamp":"2014-04-21T10:43:42Z","content_type":null,"content_length":"94310","record_id":"<urn:uuid:2b21d9c8-96e9-40a4-9ea1-88334ee8e96b>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Magnetic Dipole and Electric Quadrupole Radiation Fields
The next term in the multipolar expansion is the magnetic dipole and electric quadrupole term.
When you (for homework, of course)
1. use the small
2. cancel the
3. explicitly write out the hankel function in exponential form
you will get equation (J9.30, for - recall - distributions with compact support):
Of course, you can get it directly from J9.9 (to a lower approximation) as well, but that does not show you what to do if the small
There are two important and independent pieces in this expression. One of the two pieces is symmetric in
|
{"url":"http://phy.duke.edu/~rgb/Class/Electrodynamics/Electrodynamics/node99.html","timestamp":"2014-04-17T21:26:59Z","content_type":null,"content_length":"7595","record_id":"<urn:uuid:b7018d60-740d-43c1-abe5-901c9009ae18>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Geometry Proof
September 5th 2008, 04:36 AM
Complete this proof using axioms and already proven theorems.
Given: Point P is equidistant from endpoints X and Y of segment XY.
Prove: P is on the perpendicular bisector of XY
Case 1: P is on XY. By the Given, P is the midpoint of XY so it is on the perpendicular bisector.
Case 2:
1. Draw PX and PY. (On 2 points there is exactly 1 line)
2. Let M be the midpoint of XY. (Midpoint Thm)
3. Draw PM (On 2 points there is exactly 1 line)
I'm stuck after that. Any help would be greatly appreciated.
September 5th 2008, 05:14 AM
Complete this proof using axioms and already proven theorems.
Given: Point P is equidistant from endpoints X and Y of line XY..
Prove: P is on the perpendicular bisector of XY
Case 1: P is on XY. By the Given, P is the midpoint of XY so it is on the perpendicular bisector.
Case 2:
1. Draw PX and PY. (On 2 points there is exactly 1 line)
2. Let M be the midpoint of XY. (Midpoint Thm)
3. Draw PM (On 2 points there is exactly 1 line)
I'm stuck after that. Any help would be greatly appreciated.
I don't know if you have already proven the theorems I will be using...
1. Since P is not on XY, there is a line through P that is perpendicular to XY. (Parallel/Perpendicular Postulate)
2. Let M be the point of intersection of XY and the perpendicular passing P. (Non-parallel lines intersect)
3. PM is perpendicular to XY (By 1 and 2)
4. PMX and PMY form right triangles. (Def. of Right triangles)
5. PX is congruent to PY (Given)
6. PM is congruent to itself (Reflexive)
7. triangles PMX and PMY are congruent (Hypotenuse-Leg Theorem)
8. MX is congruent to MY. (CPCTC)
9. M is the midpoint of XY (Mdpt theorem (or def of mdpt of a segment))
10. PM is the perpendicular bisector. (by 3 and 9)
Tell me what theorems are not yet proven so that I can revise it for you...
September 5th 2008, 02:41 PM
Complete this proof using axioms and already proven theorems.
Given: Point P is equidistant from endpoints X and Y of line XY..
Prove: P is on the perpendicular bisector of XY
Case 1: P is on XY. By the Given, P is the midpoint of XY so it is on the perpendicular bisector.
Case 2:
1. Draw PX and PY. (On 2 points there is exactly 1 line)
2. Let M be the midpoint of XY. (Midpoint Thm)
3. Draw PM (On 2 points there is exactly 1 line)
4. PX = PY (Given)
5. XM = YM (Definition of midpoint)
6. PM = PM (Reflexive Property of equality)
7. $\triangle XPM \cong \triangle YPM$ (SSS Postulate)
8. $\angle XMP \cong \angle YMP$ and $m\angle XMP = m\angle YMP$ (CPCTC and definition of congruency)
9. $\angle XMP \ \ and \ \ \angle YMP$ make up a Linear Pair (Definition of Linear Pair)
10. $m\angle XMP + m\angle YMP = 180$ (If two angles form a linear pair then they are supplementary)
11. $m\angle XMP + m\angle XMP = 180$ (Substitution using #8 and #10)
12. $2 m\angle XMP = 180$ (Addition)
13. $m\angle XMP = 90$ (Division)
Similarly, you can show that $m\angle YMP = 90$
14. $\angle XMP$ is a right angle. (Definition of right angle)
15. $\overline{PM} \perp \overline {XY}$ (If two lines meet to form right angles, then they are perpendicular)
16. P lies on $\overline{PM}$ (Step #3)
Q.E.D. P lies on the perpendicular bisector of $\overline {XY}$
I'm stuck after that. Any help would be greatly appreciated.
Here's another approach. I don't know how much detail you need. Sometimes, geometry teachers can be pretty picky.
|
{"url":"http://mathhelpforum.com/geometry/47802-geometry-proof-print.html","timestamp":"2014-04-19T05:02:24Z","content_type":null,"content_length":"11182","record_id":"<urn:uuid:e181cda6-d1ef-4719-89a4-be2e8c2f231d>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00426-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Morton Grove ACT Tutor
Find a Morton Grove ACT Tutor
...There is nothing more rewarding as a teacher than instilling a confidence in a student which they never thought they could have in math. My expertise is tutoring any level of middle school,
high school or college mathematics. I can also help students who are preparing for the math portion of the SAT or ACT.
12 Subjects: including ACT Math, calculus, geometry, algebra 1
...Most have attained scholarships. I'm proud that of the many students who have continued lessons with university instructors, virtually all of them have been told that the quality of tone we
worked to achieve would not be touched. I've played for and attended their weddings and remained friends with many.
15 Subjects: including ACT Math, reading, grammar, writing
...Algebra 2 builds on the skills learned in Algebra 1 and digs further into variable mathematics. Topics include: functions and graphing (linear, quadratic, logarithmic, exponential), complex
numbers, systems of equations and inequalities, and relations. This can also include beginning trigonometry and probability and statistics.
11 Subjects: including ACT Math, calculus, geometry, algebra 1
...I have a good knowledge of the theoretical and applied applications of the subject. I have taken several Probability courses as an undergrad and graduate student. I have learned the basics of
Probability Theory, as well as the applications.
5 Subjects: including ACT Math, statistics, prealgebra, probability
...Ideally, a teacher is not only very knowledgeable, but genuinely interested in their subject matter, as well. Such educators are able to translate not just information, but energy, and this is
my objective when I teach. I believe learning is at its best when the process is an engaging one.
64 Subjects: including ACT Math, reading, English, chemistry
|
{"url":"http://www.purplemath.com/morton_grove_act_tutors.php","timestamp":"2014-04-20T21:43:05Z","content_type":null,"content_length":"23813","record_id":"<urn:uuid:7bf7b61b-af90-40ce-9369-a71886a2b553>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Tractable Reasoning in First-Order Knowledge Bases with Disjunctive Information
Yongmei Liu, Hector J. Levesque
This work proposes a new methodology for establishing the tractability of a reasoning service that deals with expressive first-order knowledge bases. It consists of defining a logic that is weaker
than classical logic and that has two properties: first, the entailment problem can be reduced to the model checking problem for a small number of characteristic models; and second, the model
checking problem itself is tractable for formulas with a bounded number of variables. We show this methodology in action for the reasoning service previously proposed by Liu, Lakemeyer and Levesque
for dealing with disjunctive information. They show that their reasoning is tractable in the propositional case and decidable in the first-order case. Here we apply the methodology and prove that the
reasoning is also tractable in the first-order case if the knowledge base and the query both use a bounded number of variables.
Content Area: 10. Knowledge Representation & Reasoning
Subjects: 11. Knowledge Representation; 3. Automated Reasoning
Submitted: May 8, 2005
|
{"url":"http://www.aaai.org/Library/AAAI/2005/aaai05-100.php","timestamp":"2014-04-18T18:49:11Z","content_type":null,"content_length":"3056","record_id":"<urn:uuid:bfc28ef4-ed2b-41cd-ac97-4c789fb66c13>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
|
PIRSA - Perimeter Institute Recorded Seminar Archive
Neutrino as Majorana zero modes
Abstract: The existence of three generations of neutrinos and their mass mixing is a deep mystery of our universe. Majorana's elegant work on the real solution of the Dirac equation predicted the existence of Majorana particles in nature; unfortunately, these Majorana particles have never been observed. In this talk, I will begin with a simple 1D condensed matter model which realizes a T^2 = -1 time-reversal-symmetry-protected superconductor and then discuss the physical properties of its boundary Majorana zero modes. It is shown that these Majorana zero modes realize T^4 = -1 time reversal doublets and carry 1/4 spin. Such a simple observation motivates us to revisit the CPT symmetry of those ghost particles--neutrinos--by assuming that they are topological Majorana particles made of four
Majorana zero modes. Interestingly, we find that Majorana zero modes will realize a P^4=-1 parity symmetry as well. It can even realize a nontrivial C^4=-1 charge conjugation symmetry, which is a big
surprise from a usual perspective that the charge conjugation symmetry for a Majorana particle is trivial. Indeed, such a C^4=-1 charge conjugation symmetry can be promoted to a Z_2 gauge symmetry
and its spontaneous breaking leads to the origin of neutrino mass. We further attribute
the origin of three generations of neutrinos to three distinguishable ways of defining two complex fermions from four Majorana zero modes.
The above assumptions lead to a D2 symmetry in the generation space and uniquely determine the mass mixing matrix with no adjustable parameters! In the absence of CP violation, we derive
theta_12 = 32 degrees, theta_23 = 45 degrees and theta_13 = 0 degrees, which is intrinsically close to the current experimental results. We further predict an exact mass ratio of the three mass eigenstates: m_1/m_3 = m_2/m_3 = 3/sqrt{5}.
Date: 14/05/2013 - 3:30 pm
Bedford, NH Math Tutor
Find a Bedford, NH Math Tutor
...Looking forward to working with you to improve your skills! I am an Elementary Teacher who has experience teaching ESL to children and adults. I have worked with students from Cambodia, Korea,
and Saudi Arabia.
22 Subjects: including prealgebra, algebra 1, algebra 2, reading
...I recently helped a student start Spanish 1, and assisted him when, after about a month and a half, he was finally enrolled in the VLACS Spanish 1 class. My Spanish is very useful for a first
year student, or a struggling second year student, but probably not beyond that, and my accent is good f...
20 Subjects: including algebra 1, English, prealgebra, reading
...My tutoring method is giving a pre-test, then look over their notes and past work to see where I need to begin. I look at their notes to see how they have been taught and try to come at the
material in a different angle. I am able to connect both math and science to real-world scenarios.
10 Subjects: including algebra 2, physics, basketball, probability
...I successfully completed a bachelor's in environmental science and a master's in earth science as an A student. During my work on my master's, I was a teaching assistant who successfully
helped college freshmen learn how to think about learning the subject they studied and how to study the material. Two of my recent students were not doing well in their classes.
41 Subjects: including algebra 1, chemistry, English, reading
...He told us that he was going to take us all the way back to kindergarten and reteach age appropriate topics in a way that would make more sense. He would then do the same for each grade in
elementary, junior high, and high school and bring us to where we needed to be for the course that he was s...
6 Subjects: including algebra 1, algebra 2, geometry, precalculus
Shoreline, WA Calculus Tutor
Find a Shoreline, WA Calculus Tutor
...This is underscored by the numerous times students have come back to share good news with me: "That paper you helped me revise? I got an A on it!" "The geometry exam we looked over together? I
just aced my next one!" "The spelling and grammar you worked on with my son?
35 Subjects: including calculus, English, writing, reading
...For the past two years, I have been employed at Western Washington University's Tutoring Center and for two years before that I worked at the Math Center at Black Hills High School in Olympia.
I am certified level 1 by the College Reading and Learning Association, and have tutored subjects rangi...
13 Subjects: including calculus, physics, statistics, geometry
If you want someone who can teach your child/student with great real world experience in Math and Physics, then that's me. I've worked at NASA Johnson Space Center training Astronauts in Space
Shuttle Systems like Guidance, Propulsion and Flight Controls. I have a Bachelor's in Aerospace Engineeri...
12 Subjects: including calculus, physics, geometry, algebra 1
...For proof, we need to find an explanation, using logic. (This is the deductive part of science.) I make heavy use of the computer program Geometer's Sketchpad for carrying out our experiments
during tutoring sessions. I recommend that you get it for your child to help them on their homework. You can rent it for $10 per year or purchase it for $35.
6 Subjects: including calculus, chemistry, physics, geometry
...This is material with which I'm supremely comfortable identifying and correcting mistakes. In my sophomore year at Tufts I took a course on Abstract Linear Algebra. Since then the concepts
from that class have reappeared through my applied math curriculum, from Analysis to Differential Geometry to PDE's.
25 Subjects: including calculus, chemistry, physics, geometry
Internal Demagnetizing Factor in Ferrous Metals
Journal of Metallurgy
Volume 2012 (2012), Article ID 752871, 5 pages
Research Article
Internal Demagnetizing Factor in Ferrous Metals
^1Department of Engineering Science, University of Oxford, Parks Road, Oxford OX1 3PJ, UK
^2Department of Materials, Physics of Eotvos Lorand University, P.O. Box 32, Budapest 1518, Hungary
^3Research Institute for Solid State Physics and Optics, P.O. Box 49, Budapest 1525, Hungary
Received 26 July 2012; Accepted 19 September 2012
Academic Editor: Chih-Ming Chen
Copyright © 2012 Jenő Takács et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Modelling the saturation major loop of a ferrous metal produces the intrinsic magnetization parameters; fitting the measured commutation curve, however, can yield different results. The relation of
the intrinsic loci of the vertices of the minor loops () to the experimental curve () is investigated. The two-way transformation between the two curves is formulated in closed mathematical form with
the help of the internal demagnetization factor, . The method is applied to four ferrous metals, with widely different intrinsic properties (soft nonoriented Fe-Si steel, normalized low carbon steel,
and Finemet in nanocrystalline and amorphous state) supporting the predictions of the proposal. The developed relationship is model independent and it is shown that the factor depends linearly on
coercivity based on experimental evidence.
1. Introduction
A large number of the measurements of ferrous substances are aimed at finding the intrinsic material properties [1] of the tested ferrous sample (as defined by Fiorillo). Due to the ever-presence of
demagnetization field, various measuring methods have been developed to minimize its effect. The most commonly accepted way is to make the sample turn into a closed magnetic circuit, such as a toroid
or an Epstein square [1–3]. Although these two methods are not completely free from the ever-present internal demagnetization, they suffer the least from it [1]. Researchers have gone to great lengths to include the internal demagnetization force in current models such as Preisach, Jiles and Stoner-Wohlfarth [2, 4, 5], leading to complicated, so-called dynamic versions.
The saturated major hysteresis loop of the sample carries all the intrinsic magnetic parameters directly recoverable from the measured data. Within this loop lie the un-hysteretic loci of the
vertices of the symmetrical minor loops, the only curve, which belongs to both the ascending and descending branches of the hysteresis loops [5, 6].
A proposal is put forward in this paper to show the relationship between the intrinsic curve and the loci of vertices of the measured minor loops. This relationship between the two curves,
independent of models, is formulated in closed mathematical form and its prediction is verified by the experimental data obtained from four different ferrous samples.
Once the intrinsic locus ( for ) is modelled from saturation or minor loop data, by using any of the static models, the measured curve () can be calculated from the proposed formulation below with
optimization of the parameter value as specified by Jiles [2] and used by Fiorillo [1].
2. The Intrinsic Loci of Vertices
The intrinsic commutation curve () is the locus of the return points or the maxima of the set of symmetrical minor loops. It is a single-valued function and in spite of having no hysteresis, it
carries all the hysteretic properties of the ferrous material obtained from the saturation data [7].
is the arithmetic mean of the ascending and descending magnetization functions, and , with equal application to case as well. Due to the difference between and the measured commutation curve,
hysteresis loop modelling based solely on the measured commutation curve can produce different parameters, when no molecular interaction is assumed [6, 8, 9].
3. The Effective Field and Its Implications
For the description of the effect of the internal demagnetizing field we will use the concept of the effective field, which is analogous to the Weiss mean field as defined and used by other authors [
2, 10, 11]. Although in simple cases, proportionality is assumed between interaction field and magnetization vector, this can only be regarded as a linear approximation [12, 13] expressed as = in
scalar form, leading to where is the internal demagnetization factor.
For simplicity we will use normalised quantities in further calculations, where the lower case letters will represent the normalised quantities of the physical equivalents, denoted by the same
capital letters.
denote the normalised function in (1). With the normalized effective field, , can be expressed as: where is also normalized. (1 Tesla = 8·10^5 A/m. The internal demagnetization factor has unity dimension only when the magnetization is measured in A/m.)
The first derivative of by in (3) leads to an expression, which shows a character similar to the feedback in an electrical circuit [14].
This expression describes a well-known relationship between the inherent () and the effective () permeabilities [1, 2].
For most magnetic substances the value of is small in the order of ~−10^−5 with unity dimension [1]. Consider By using the hysteretic model [6] (see also the appendix), the integral of (4) by leads
us to the following expression for , when
After integration: When the integration constants are appropriately chosen, this form of is equivalent to the one given in .
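Since the symbols in the preceding equations did not survive extraction, the following is a hedged reconstruction of the standard effective-field relations the text describes; the notation (H for field, M for magnetization, χ for susceptibility, α for the internal demagnetization factor) is ours and follows the conventions of [1, 2]:
\[
H_{\mathrm{eff}} = H + \alpha M, \qquad
M = \chi_i H_{\mathrm{eff}}
\;\Longrightarrow\;
M = \frac{\chi_i}{1 - \alpha \chi_i}\, H ,
\]
so the inherent and effective permeabilities are related by the feedback-like formula
\[
\mu_{\mathrm{eff}} = \frac{\mu_i}{1 - \alpha \mu_i}.
\]
Since α is negative (of order −10^−5), the effective response is always weaker than the inherent one, matching the feedback analogy of Section 3.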
The intrinsic locus is entirely a theoretical concept. It was introduced for the free (Gibbs) energy calculations [2]. It assumes zero internal demagnetization in a system, where a moment can freely
move around under the influence of external excitation field without any hindrance from the interaction between the magnetic moments.
We must remind the reader that, for various ferromagnetic substances, this internal demagnetization constant is given traditionally in a numerical value with unity dimension (i.e., when both the and
measured in A/m). When different unitary system is used, has a different physical dimension and must be normalised (see as normalised ).
4. Experimental Verification
To verify the predictions of the proposed method, it was applied initially to two ferrous materials with very different characters. The first was a soft steel NO Fe-Si with 67.5A/m coercivity [15],
shown in Figure 1. The second material was a normalized low carbon steel (AISI 1040) with coercivity of 450A/m (see Figure 5). Following the excellent results, the same experiment was also repeated
later on two other samples; Finemet in nanocrystalline and amorphous (as cast) state. The detailed data of those samples are not included in this paper due to its limited size. All measurements were
carried out under identical conditions by using triangular excitation of Hz on toroid samples with geometrical details as follows: mm, mm and thickness mm [1]. In making the identical toroid
samples, extreme care was taken to avoid any changes in magnetic properties due to mechanical handling of the materials.
For numerical calculations we used the hyperbolic model for its simplicity and speed and the Mathematica program interactively. A brief summary of the model is given in the appendix. For further
details we refer the reader to the literature [6, 16, 17].
4.1. NO Fe-Si
Starting with conveniently chosen parameter values at the beginning with subsequent changes of the parameters, new curves are calculated and compared with the measured one.
When the iteration produced the best fit to the measured curve, the normalized and the equivalent physical values can be easily read from the two coordinate systems (normalization) as shown in
Figures 1, 2, and 3.
The first sample (see Figure 1) was modelled with the following normalized and physical parameters: , , , , , , , , , equivalent to: T, T, T, A/m, A/m, A/m, A/m, with normalization of 1 h ≙ H = 100 A/m and 1 m ≙ M = 0.2647 T.
The measured and the modelled curves for (A/Tm, or −7·10^−5) are depicted in Figure 2. For the symbols, see the appendix.
In order to check the accuracy of the transformation, between the two curves, 25 of the minor loops were measured with maximum field excitation values between = 5.01, Hm = 501A/m and = 0.152, Hm =
15.2A/m. For all the minor loops measured, the corresponding loops were calculated for the reduced maximum magnetization at = 0 and = −0.151.
For clarity, only one of the minor loops is shown in Figure 3, for the peak excitation field value of 111.3 A/m.
All the calculated loops had an excellent fit to the equivalent measured loops for the same value. The proposed method is applicable to all measured hysteretic data, where the external
demagnetization field is eliminated or reduced to a negligible level.
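As an illustration of the iterative fitting loop described above, here is a minimal sketch using a single tanh-shaped branch as a stand-in for the full multi-process hyperbolic model; the function and parameter names (A, C, hc, b) are our own and the data are synthetic placeholders, not the paper's measurements:

import numpy as np
from scipy.optimize import curve_fit

# Schematic single-process ascending branch of a tanh-type ("hyperbolic") model.
def ascending_branch(h, A, C, hc, b):
    return A * np.tanh(C * (h - hc)) + b

# Placeholder "measured" normalized field / magnetization samples.
h_meas = np.linspace(-5.0, 5.0, 201)
m_meas = ascending_branch(h_meas, 1.0, 0.8, 0.675, 0.0)

# Start from conveniently chosen values and refine until the model fits best.
p0 = [0.9, 1.0, 0.5, 0.0]
popt, _ = curve_fit(ascending_branch, h_meas, m_meas, p0=p0)
print("fitted (A, C, hc, b):", popt)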
4.2. Low Carbon Steel Toroid (AISI 1040)
For the second sample, we selected a toroid from low carbon steel with coercivity of 450A/m. The use of the same experimental setup has yielded the following parameters in normalized and physical
units:, , ,, , ,, , and ,equivalent to:T, T, T, A/m, A/m, A/m, and A/m, with normalization of 1A/m and 10.24T.
The measured and the modelled hysteresis loops are depicted in Figure 4 with the intrinsic and the measured commutation curves. The curves are shown in the first quadrant only, for better visual clarity.
Following the calculation of the intrinsic parameters, the measured commutation curve was modelled with (A/Tm or −4.1·10^−4), which yielded the best result, giving an excellent fit to the measured curve.
5. The Internal Demagnetization Factor as a Function of Coercivity
The experimental results have indicated that is greatly dependent on the coercivity of the sample. To verify this dependency, four samples, listed under Section 4, were tested for this purpose, with
coercivity ranging between 1.6A/m and 450A/m. Figure 5 depicts the relationship between and .
The graph in Figure 5 was constructed by using the parameters of the samples listed below, whose coercivity values change in steps of approximately an order of magnitude:
Finemet: A/m, A/Tm or −0.9·10^−6;
Finemet in amorphous state: A/m, A/Tm or −6.2·10^−6;
NO Fe-Si: A/m, A/Tm or −7·10^−5;
Low carbon steel (AISI 1040): A/m, A/Tm or −4.16·10^−4.
The graph shows a linear dependence on coercivity; based on the experimental evidence, it can be approximated as α ≈ -H_c/M_s, where M_s represents the saturation magnetization.
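A quick numerical check of this linear dependence, using the coercivities and factor values quoted in this paper (the amorphous-Finemet coercivity is not stated explicitly in this excerpt, so that sample is omitted); reading the elided formula as α ≈ -H_c/M_s is our reconstruction, not necessarily the authors' exact expression:

import numpy as np

# (H_c in A/m, dimensionless alpha) pairs quoted above:
# nanocrystalline Finemet, NO Fe-Si, low carbon steel (AISI 1040).
hc    = np.array([1.6, 67.5, 450.0])
alpha = np.array([-0.9e-6, -7e-5, -4.16e-4])

# Least-squares slope of a line through the origin; under alpha ≈ -H_c/M_s
# the slope is roughly -1/M_s with M_s expressed in A/m.
slope = np.sum(hc * alpha) / np.sum(hc ** 2)
print("slope:", slope)                       # ~ -9e-7 per (A/m)
print("fit residuals:", alpha - slope * hc)  # small -> linear dependence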
6. Conclusions
The relation between the intrinsic () and the measured () loci of the vertices of the minor loops was investigated. By using the effective field, this relationship was formulated. The method was
subjected to tests on four magnetic materials with widely ranging magnetic properties. The test results show that the mathematical approach, presented here, describes the relation well. The factor
linear dependence on coercivity, based on empirical evidence, was also demonstrated. The paper shows that magnetic parameters ( and ) can be estimated very close to the real value.
The proposal’s aim is to recover the intrinsic magnetization properties from the measured commutation curves.
The characteristic equations of the hyperbolic model in canonic form: Here and signify the ascending and descending magnetization, respectively, represents the field excitation, is the coercivity of
the th process. is the amplitude of the magnetization, is a scaling factor, and is the integration constant, while is the maximum field excitation, common to all. The index refers to the individual
processes and is the number of total processes involved (for most substances ).
The parameters are calculated by changing the model parameters until the best fit to the measured curve is achieved. When the iteration gives the best fit, the normalized and the equivalent physical
values can be read from the two coordinate systems (normalized and measured), as shown in Figures 1, 2, and 3.
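The symbols in the characteristic equations above were lost in extraction; a single-process version consistent with the verbal description (the variable names are ours) would read:
\[
m_{\uparrow}(h) = A \tanh\big(C\,(h - h_c)\big) + b_{\uparrow}, \qquad
m_{\downarrow}(h) = A \tanh\big(C\,(h + h_c)\big) + b_{\downarrow},
\]
where h is the field excitation, h_c the coercivity, A the magnetization amplitude, C a scaling factor, and the integration constants b are fixed by requiring the two branches to close at the maximum excitation h = ±h_max.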
1. F. Fiorillo, Measurement and Characterization of Magnetic Materials, Elsevier, Oxford, UK, 2004.
2. D. Jiles, Introduction to Magnetism and Magnetic Materials, Chapman and Hall, New York, NY, USA, 1998.
3. T. Nakata, N. Takahashi, K. Fujiwara, M. Nakano, Y. Ogura, and K. Matsubara, “An improved method for determining the DC magnetization curve using a ring specimen,” IEEE Transactions on Magnetics, vol. 28, no. 5, pp. 2456–2458, 1992.
4. F. Preisach, “Über die magnetische Nachwirkung,” Zeitschrift für Physik, vol. 94, pp. 277–302, 1935.
5. E. C. Stoner and E. P. Wohlfarth, “A mechanism of magnetic hysteresis in heterogeneous alloys,” IEEE Transactions on Magnetics, vol. 27, no. 4, pp. 3475–3518, 1991.
6. J. Takács, “The Everett integral and its analytical approximation,” in Advanced Magnetic Materials, L. Malkinski, Ed., pp. 203–230, Intech, 2012.
7. J. Takács, “A phenomenological mathematical model of hysteresis,” Compel, vol. 20, no. 4, pp. 1002–1014, 2001.
8. I. Meszaros, private communication on m[0].
9. H. Hauser, Y. Melikhov, and D. C. Jiles, “Examination of the equivalence of ferromagnetic hysteresis models describing the dependence of magnetization on magnetic field and stress,” IEEE Transactions on Magnetics, vol. 45, no. 4, pp. 1940–1949, 2009.
10. G. Bertotti, Hysteresis in Magnetism, Academic Press, San Diego, Calif, USA, 1998.
11. D. C. Jiles and D. L. Atherton, “Theory of ferromagnetic hysteresis,” Journal of Magnetism and Magnetic Materials, vol. 61, no. 1-2, pp. 48–60, 1986.
12. J. A. Brug and W. P. Wolf, “Demagnetizing fields in magnetic measurements. I. Thin discs,” Journal of Applied Physics, vol. 57, no. 10, pp. 4685–4694, 1985.
13. A. Benabou, J. V. Leite, S. Clénet, C. Simão, and N. Sadowski, “Minor loops modelling with a modified Jiles-Atherton model and comparison with the Preisach model,” JMMM, vol. 320, no. 20, pp. e1034–e1038, 2008.
14. R. G. Harrison, “Physical theory of ferromagnetic first-order return curves,” IEEE Transactions on Magnetics, vol. 45, no. 4, pp. 1922–1939, 2009.
15. L. K. Varga, Gy. Kovacs, and J. Takács, “Anhysteretic and biased first magnetization curves for Finemet-type toroidal samples,” JMMM, vol. 320, no. 3-4, pp. L26–L29, 2008.
16. J. Takács, “Barkhausen instability and its implication in T(x) modelling of hysteresis,” Compel, vol. 24, no. 4, pp. 1180–1190, 2005.
17. J. Takács, Mathematics of Hysteretic Phenomena, Wiley, Berlin, Germany, 2003.
2.1. An honest cosmological constant
The simplest interpretation of the dark energy is that we have discovered that the cosmological constant is not quite zero: we are in the lowest energy state possible (or, more properly, that the
particles we observe are excitations of such a state) but that energy does not vanish. Although simple, this scenario is perhaps the hardest to analyze without an understanding of the complete
cosmological constant problem, and there is correspondingly little to say about such a possibility. As targets to shoot for, various numerological coincidences have been pointed out, which may some
day find homes as predictions of an actual theory. For example, the observed vacuum energy scale M[vac] = 10^-3 eV is related to the 1 TeV scale of low-energy supersymmetry breaking models by a "supergravity suppression factor":
M[vac] ~ (M[SUSY] / M[Planck]) M[SUSY].
In other words, M[SUSY] is the geometric mean of M[vac] and M[Planck]. Unfortunately, nobody knows why this should be the case. In a similar spirit, the vacuum energy density is related to the Planck energy density by the kind of suppression factor familiar from instanton calculations in gauge theories:
ρ[vac] ~ e^(-2/α) ρ[Planck],
where α ≈ 1/137 is the fine-structure constant. In other words, the natural log of 10^120 is twice 137. Again, this is not a relation we have any right to expect to hold (although it has been suggested that nonperturbative effects in
non-supersymmetric string theories could lead to such an answer [10]).
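A quick numerical sanity check of the two coincidences just quoted (our own sketch; the scales are the order-of-magnitude values given in the text):

import math

# Geometric-mean relation: M_SUSY ~ sqrt(M_vac * M_Planck), all in eV.
M_vac, M_Planck = 1e-3, 1e27
print(math.sqrt(M_vac * M_Planck))   # ~1e12 eV = 1 TeV, the SUSY-breaking scale

# Instanton-style suppression: ln(10^120) vs. 2 * 137.
print(120 * math.log(10), 2 * 137)   # 276.3 vs. 274, close as claimed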
Theorists attempting to build models of a small nonzero vacuum energy must keep in mind the requirement of remaining compatible with some as-yet-undiscovered solution to the cosmological constant
problem. In particular, it is certainly insufficient to describe a specific contribution to the vacuum energy which by itself is of the right magnitude; it is necessary at the same time for there to
be some plausible reason why the well-known and large contributions from the Standard Model could be suppressed, while the new contribution is not. One way to avoid this problem is to imagine that an
unknown mechanism sets the vacuum energy to zero in the state of lowest energy, but that we actually live in a distinct false vacuum state, almost but not quite degenerate in energy with the true
vacuum [11, 12, 13]. From an observational point of view, false vacuum energy and true vacuum energy are utterly indistinguishable - they both appear as a strictly constant dark energy density. The
issue with such models is why the splitting in energies between the true and false vacua should be so much smaller than all of the characteristic scales of the problem; model-building approaches
generally invoke symmetries to suppress some but not all of the effects that could split these levels.
The only theory (if one can call it that) which leads a vacuum energy density of approximately the right order of magnitude without suspicious fine-tuning is the anthropic principle -- the notion
that intelligent observers will not witness the full range of conditions in the universe, but only those conditions which are compatible with the existence of such observers. Thus, we do not consider
it unnatural that human beings evolved on the surface of the Earth rather than on that of the Sun, even though the surface area of the Sun is much larger, since the conditions are rather less
hospitable there. If, then, there exist distinct parts of the universe (whether they be separate spatial regions or branches of a quantum wavefunction) in which the vacuum energy takes on different
values, we would expect to observe a value which favored the appearance of life. Although most humans don't think of the vacuum energy as playing any role in their lives, a substantially larger value
than we presently observe would either have led to a rapid recollapse of the universe (if negative) or prevented galaxies from ever forming (if positive) [14, 15, 16, 17]. Many physicists find it unappealing to think that an apparent constant of nature would turn
out to simply be a feature of our local environment that was chosen from an ensemble of possibilities, although we should perhaps not expect that the universe takes our feelings into account on these
matters. More importantly, relying on the anthropic principle involves the invocation of a large collection of alternative possibilities for the vacuum energy, closely spaced in energy but not
continuously connected to each other (unless the light scalar fields implied by such connected vacua is very weakly coupled, as it must also be in the quintessence models discussed below). It is by
no means an economical solution to the vacuum energy puzzle.
As an interesting sidelight to this issue, it has been claimed that a positive vacuum energy would be incompatible with our current understanding of string theory [18, 19, 20, 21]. At issue is the
fact that such a universe eventually approaches a de Sitter solution (exponentially expanding), which implies future horizons which make it impossible to derive a gauge-invariant S-matrix. One
possible resolution might involve a dynamical dark energy component such as those discussed in the next section. While few string theorists would be willing to concede that a definitive measurement
that the vacuum energy is constant with time would rule out string theory as a description of nature, the possibility of saying something important about fundamental theory from cosmological
observations presents an extremely exciting opportunity.
Although the observational evidence for dark energy implies a component which is unclustered in space as well as slowly-varying in time, we may still imagine that it is not perfectly constant. The
simplest possibility along these lines involves the same kind of source typically invoked in models of inflation in the very early universe: a scalar field rolling slowly in a potential, sometimes
known as "quintessence" [22, 23, 24]. There are also a number of more exotic possibilities, including tangled topological defects and variable-mass particles (see [1, 7] for references and
There are good reasons to consider dynamical dark energy as an alternative to an honest cosmological constant. First, a dynamical energy density can be evolving slowly to zero, allowing for a
solution to the cosmological constant problem which makes the ultimate vacuum energy vanish exactly. Second, it poses an interesting and challenging observational problem to study the evolution of
the dark energy, from which we might learn something about the underlying physical mechanism. Perhaps most intriguingly, allowing the dark energy to evolve opens the possibility of finding a
dynamical solution to the coincidence problem, if the dynamics are such as to trigger a recent takeover by the dark energy (independently of, or at least for a wide range of, the parameters in the model).
At the same time, introducing dynamics opens up the possibility of introducing new problems, the form and severity of which will depend on the specific kind of model being considered. The most
popular quintessence models feature scalar fields with masses of order the present Hubble scale, m ~ H[0] ~ 10^-33 eV.
(Fields with larger masses would typically have already rolled to the minimum of their potentials.) In quantum field theory, light scalar fields are unnatural; renormalization effects tend to drive scalar masses up to the scale of new physics. The well-known hierarchy problem of particle physics amounts to asking why the Higgs mass, thought to be of order 10^11 eV, should be so much smaller than the grand unification/Planck scale, 10^25-10^27 eV. Masses of 10^-33 eV are correspondingly harder to understand. At the same time, such a low mass means the field is effectively massless on accessible scales, so any coupling to ordinary matter would mediate long-range forces that are tightly constrained by experiment [25].
The need for delicate fine-tunings of masses and couplings in quintessence models is certainly a strike against them, but is not a sufficiently serious one that the idea is not worth pursuing; until
we understand much more about the dark energy, it would be premature to rule out any idea on the basis of simple naturalness arguments. One promising route to gaining more understanding is to
observationally characterize the time evolution of the dark energy density. In principle any behavior is possible, but it is sensible to choose a simple parameterization which would characterize dark
energy evolution in the measurable regime of relatively nearby redshifts (order unity or less). For this purpose it is common to imagine that the dark energy evolves as a power law with the scale factor:
ρ[DE] ∝ a^-n.   (11)
Even if the true evolution is not exactly a power law, this serves as a useful approximation over the observable range. Using the equation of energy-momentum conservation,
dρ/dt = -3(ȧ/a)(1 + w)ρ,
a constant exponent n of (11) implies a constant w with
w = n/3 - 1.
As n varies from 3 (matter) to 0 (cosmological constant), w varies from 0 to -1. (Imposing mild energy conditions implies that |w| ≤ 1 [26]; however, models with w < -1 are still worth considering [27].)
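A small sketch (ours) of the relation just derived, mapping the power-law index n to w and evolving the density:

# w = n/3 - 1 relates the index n of rho ∝ a^{-n} to the equation of state.
def w_of_n(n):
    return n / 3.0 - 1.0

def rho_of_a(a, n, rho0=1.0):
    return rho0 * a ** (-n)            # normalized to rho0 at a = 1

for n in (3.0, 2.0, 0.0):              # matter, an intermediate case, Lambda
    print(f"n={n}: w={w_of_n(n):+.2f}, rho(a=0.5)={rho_of_a(0.5, n):.2f}")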
Some limits from supernovae and large-scale structure from [28] are shown in Figure 2. These constraints apply to the w plane under the assumption that the universe is flat, and they are consistent with w = -1. However, there is plenty of room for alternatives; one of the most important tasks of observational cosmology will be to reduce the error regions on plots such as these, to pin down precise values of these parameters.
Figure 2. Limits on the equation-of-state parameter w in a flat universe. From [28].
To date, many investigations have considered scalar fields with potentials that asymptote gradually to zero, of the form e^(1/φ) or 1/φ [29]; they can also be derived from particle-physics models, such as
the dilaton or moduli of string theory. They do not, however, provide a solution to the coincidence problem, as the era in which the scalar field begins to dominate is still set by finely-tuned
parameters in the theory. There have been two scalar-field models which come closer to being solutions: "k-essence", and oscillating dark energy. The k-essence idea [30] does not put the field in a
shallow potential, but rather modifies the form of the kinetic energy. We imagine that the Lagrange density is of the form
L = f(φ) g(X),
where X = (1/2)(∇φ)^2 is the conventional kinetic term. For certain choices of the functions f(φ) and g(X), the k-essence field naturally tracks the evolution of the total radiation energy density during radiation
domination, but switches to being almost constant once matter begins to dominate. In such a model the coincidence problem is explained by the fact that matter/radiation equality was a relatively
recent occurrence (at least on a logarithmic scale). The oscillating models [31] involve ordinary kinetic terms and potentials, but the potentials take the form of a decaying exponential with small
perturbations superimposed:
V(φ) ∝ e^(-λφ) [1 + A sin(νφ)].
On average, the dark energy in such a model will track that of the dominant matter/radiation component; however, there will be gradual oscillations from a negligible density to a dominant density and
back, on a timescale set by the Hubble parameter. Consequently, in such models the acceleration of the universe is just something that happens from time to time. Unfortunately, in neither the k
-essence models nor the oscillating models do we have a compelling particle-physics motivation for the chosen dynamics, and in both cases the behavior still depends sensitively on the precise form of
parameters and interactions chosen. Nevertheless, these theories stand as interesting attempts to address the coincidence problem by dynamical means.
Rather than constructing models on the basis of cosmologically interesting dynamical properties, we may take the complementary route of considering which models would appear most sensible from a
particle-physics point of view, and then exploring what cosmological properties they exhibit. An acceptable particle physics model of quintessence would be one in which the scalar mass was naturally
small and its coupling to ordinary matter was naturally suppressed. These requirements are met by Pseudo-Nambu-Goldstone bosons (PNGB's) [23], which arise in models with approximate global symmetries
of the form
φ → φ + constant.
Clearly such a symmetry should not be exact, or the potential would be precisely flat; however, even an approximate symmetry can naturally suppress masses and couplings. PNGB's typically arise as the
angular degrees of freedom in Mexican-hat potentials that are "tilted" by a small explicit symmetry breaking, and the PNGB potential takes on a sinusoidal form:
V(φ) = M^4 [1 + cos(φ/f)].
As a consequence, there is no easily characterized tracking or attractor behavior; the equation of state parameter w will depend on both the potential and the initial conditions, and can take on any
value from -1 to 0 (and in fact will change with time). We therefore find that the properties of models which are constructed by taking particle-physics requirements as our primary concern appear
quite different from those motivated by cosmology alone. The lesson to observational cosmologists is that a wide variety of possible behaviors should be taken seriously, with data providing the
ultimate guidance.
Given the uncomfortable tension between observational evidence for dark energy on one hand and our intuition for what seems natural in the context of the standard cosmological model on the other,
there is an irresistible temptation to contemplate the possibility that we are witnessing a breakdown of the Friedmann equation of conventional general relativity (GR) rather than merely a novel
source of energy. Alternatives to GR are highly constrained by tests in the solar system and in binary pulsars; however, if we are contemplating the space of all conceivable alternatives rather than
examining one specific proposal, we are free to imagine theories which deviate on cosmological scales while being indistinguishable from GR in small stellar systems. Speculations along these lines
are also constrained by observations: any alternative must predict the right abundances of light elements from Big Bang nucleosynthesis (BBN), the correct evolution of a sensible spectrum of
primordial density fluctuations into the observed spectrum of temperature anisotropies in the Cosmic Microwave Background and the power spectrum of large-scale structure, and that the age of the
universe is approximately twelve billion years. Of these phenomena, the sharpest test of Friedmann behavior comes from BBN, since perturbation growth depends both on the scale factor and on the local
gravitational interactions of the perturbations, while a large number of alternative expansion histories could in principle give the same age of the universe. As an example, Figure (3) provides a
graphical representation of alternative expansion histories in the vicinity of BBN ( H[BBN] ~ 0.1 sec^-1) which predict the same light element abundances as the standard picture [32]. The point of
this figure is that expansion histories which are not among the family portrayed, due to differences either in the slope or the overall normalization, will not give the right abundances. So it is
possible to find interesting nonstandard cosmologies which are consistent with the data, but they describe a small set in the space of all such alternatives.
Figure 3. The range of allowed evolution histories during Big Bang nucleosynthesis (between temperatures of 1 MeV and 50 keV), expressed as the behavior of the Hubble parameter H = ȧ/a as a function of
a. Changes in the normalization of H can be compensated by a change in the slope while predicting the same abundances of ^4He, ^2D, and ^7Li. The extended thin line represents the standard
radiation-dominated Friedmann universe model. From [32].
Rather than imagining that gravity follows the predictions of standard GR in localized systems but deviates in cosmology, another approach would be to imagine that GR breaks down whenever the
gravitational field becomes (in some sense) sufficiently weak. This would be unusual behavior, as we are used to thinking of effective field theories as breaking down at high energies and small
length scales, but being completely reliable in the opposite regime. On the other hand, we might be ambitious enough to hope that an alternative theory of gravity could explain away not only the need
for dark energy but also that for dark matter. It has been famously pointed out by Milgrom [33] that the observed dynamics of galaxies only requires the introduction of dark matter in regimes where
the acceleration due to gravity (in the Newtonian sense) falls below a certain fixed value, a[0] ≈ 10^-8 cm/s^2.
Meanwhile, we seem to need to invoke dark energy when the Hubble parameter drops approximately to its current value, H[0] ≈ 10^-18 s^-1.
A priori, there seems to be little reason to expect that these two phenomena should be characterized by timescales of the same order of magnitude; one involves the local dynamics of baryons and
non-baryonic dark matter, while the other involves dark energy and the overall matter density (although see [34] for a suggested explanation). It is natural to wonder whether this is simply a
numerical coincidence, or the reflection of some new underlying theory characterized by a single dimensionful parameter. To date, nobody has succeeded in inventing a theory which comes anything close
to explaining away both the dark matter and dark energy in terms of modified gravitational dynamics. Given the manifold successes of the dark matter paradigm, from gravitational lensing to structure
formation to CMB anisotropy, it seems a good bet to think that this numerical coincidence is simply an accident. Of course, given the incredible importance of finding a successful alternative theory,
there seems to be little harm in keeping an open mind.
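A back-of-the-envelope check (ours) of the numerical coincidence between Milgrom's acceleration scale and the present Hubble rate, using standard order-of-magnitude values:

# Compare Milgrom's a_0 with c * H_0.
c  = 3.0e8       # speed of light, m/s
H0 = 2.3e-18     # present Hubble parameter, 1/s (~70 km/s/Mpc)
a0 = 1.2e-10     # Milgrom's acceleration scale, m/s^2 (~1e-8 cm/s^2)

print("c * H0   =", c * H0, "m/s^2")   # ~7e-10 m/s^2
print("a0/(cH0) =", a0 / (c * H0))     # O(0.1): the same order of magnitude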
It was mentioned above, and bears repeating, that modified-gravity models do not hold any unique promise for solving the coincidence problem. At first glance we might hope that an alternative to the
conventional Friedmann equation might lead to a naturally occurring acceleration at all times; but a moment's reflection reveals that perpetual acceleration is not consistent with the data, so we
still require an explanation for why the acceleration began recently. In other words, the observations seem to be indicating the importance of a fixed scale at which the universe departs from
ordinary matter domination; if we are fortunate we will explain this scale either in terms of combinations of other scales in our particle-physics model or as an outcome of dynamical processes, while
if we are unfortunate it will have to be a new input parameter to our theory. In either case, finding the origin of this new scale is the task for theorists and experimenters in the near future.
Rényi Institute
Turán Week
in the Alfréd Rényi Institute
2-6 October, 2000
To commemorate the 90th anniversary of Paul Turán's birth
All Institute Seminar on the Power Sum Method
Monday 14:00 Introduction to the power sum method by G. Halász
Extensions of Turán's first main theorem by F. Nazarov
Applications of the power sum method in number theory by J. Pintz
Estimation of a pure power sum by A. Bíró
Turán Memorial Lectures of 2000
H.L. Montgomery:
Tuesday 14:00 The local distribution of prime numbers and the zeros
of the Riemann zeta function
Wednesday 14:00 Beurling's generalized primes
Thursday 14:00 Greedy sums of distinct squares
Number Theory Seminar
Tuesday 15:30 Mellin transforms of the second and the fourth powers of
Riemann's zeta-function by M. Jutila
The fourth moment of the Dedekind zeta-function of imaginary
quadratic fields by Y. Motohashi
On additive arithmetic functions by I.Z. Ruzsa
On multiplicative properties of sum sets
by K. Győry and A. Sárközy
Analysis Seminar
Monday 17:00 Turan and special functions by R. Askey
Wednesday 15:30 On a Tchebyshev type problem by Á. Elbert
The rough and fine theory of interpolation by P. Vértesi
On rational approximation by J. Szabados
Combinatorics Seminar
Thursday 15:30 Turán type extremal problems. An introduction by M. Simonovits
Ramsey-Turán type theorems by E. Szemerédi
Turán's theorem, 60 years later by B. Bollobás
On Turán numbers by A. Sidorenko
Extremal hypergraph results for triple systems by Z. Füredi
Friday 16:30 Two applications of Turán's theorem by G.O.H. Katona
Turán and the Sylvester-Hadamard conjecture by G. Szekeres
Algebra Seminar
Friday 15:00 Statistical properties of partitions by M. Szalay
Statistical group theory by P.P. Pálfy
Set Theory Seminar
Friday 14:00 Set mappings by P. Komjáth
Friday 19:00 Everyone may speak of his or her relation to Turán, his or her
memories of him.
Those wishing to take part or speak at the banquet should indicate this wish to
G. Halász (turanhet@renyi.hu)
Report in Wirtschaftsmathematik (WIMA Report)
127 search hits
Testing for parameter stability in nonlinear autoregressive models (2011)
Claudia Kirch Joseph Tadjuidje Kamgaing
In this paper we develop testing procedures for the detection of structural changes in nonlinear autoregressive processes. For the detection procedure we model the regression function by a single
layer feedforward neural network. We show that CUSUM-type tests based on cumulative sums of estimated residuals, that have been intensively studied for linear regression, can be extended to this
case. The limit distribution under the null hypothesis is obtained, which is needed to construct asymptotic tests. For a large class of alternatives it is shown that the tests have asymptotic
power one. In this case, we obtain a consistent change-point estimator which is related to the test statistics. Power and size are further investigated in a small simulation study with a
particular emphasis on situations where the model is misspecified, i.e. the data is not generated by a neural network but some other regression function. As illustration, an application on the
Nile data set as well as S&P log-returns is given.
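As an illustration of the CUSUM-type statistic the abstract refers to, here is a minimal sketch (ours, not the authors' procedure); the residuals would come from whatever fitted regression is used, for example a feedforward network:

import numpy as np

def cusum_statistic(residuals):
    """Maximum absolute normalized cumulative sum of (estimated) residuals."""
    n = len(residuals)
    s = np.cumsum(residuals - residuals.mean())
    return np.max(np.abs(s)) / (residuals.std(ddof=1) * np.sqrt(n))

rng = np.random.default_rng(0)
res = rng.normal(size=500)
res[250:] += 1.0                 # inject a structural change halfway through
print(cusum_statistic(res))      # a large value signals parameter instability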
On a Cardinality Constrained Multicriteria Knapsack Problem (2011)
Florian Seipp Stefan Ruzika Luis Paquete
We consider a variant of a knapsack problem with a fixed cardinality constraint. There are three objective functions to be optimized: one real-valued and two integer-valued objectives. We show
that this problem can be solved efficiently by a local search. The algorithm utilizes connectedness of a subset of feasible solutions and has optimal run-time.
Generalized Multiple Objective Bottleneck Problems (2010)
Jochen Gorski Kathrin Klamroth Stefan Ruzika
We consider multiple objective combinatorial optimization problems in which the first objective is of arbitrary type and the remaining objectives are either bottleneck or k-max objective
functions. While the objective value of a bottleneck objective is determined by the largest cost value of any element in a feasible solution, the kth-largest element defines the objective value
of the k-max objective. An efficient solution approach for the generation of the complete nondominated set is developed which is independent of the specific combinatorial problem at hand. This
implies a polynomial time algorithm for several important problem classes like shortest paths, spanning tree, and assignment problems with bottleneck objectives which are known to be NP-hard in
the general multiple objective case.
Universal Shortest Paths (2010)
Lara Turner Horst W. Hamacher
We introduce the universal shortest path problem (Univ-SPP) which generalizes both - classical and new - shortest path problems. Starting with the definition of the even more general universal
combinatorial optimization problem (Univ-COP), we show that a variety of objective functions for general combinatorial problems can be modeled if all feasible solutions have the same cardinality.
Since this assumption is, in general, not satisfied when considering shortest paths, we give two alternative definitions for Univ-SPP, one based on a sequence of cardinality constrained
subproblems, the other using an auxiliary construction to establish uniform length for all paths between source and sink. Both alternatives are shown to be (strongly) NP-hard and they can be
formulated as quadratic integer or mixed integer linear programs. On graphs with specific assumptions on edge costs and path lengths, the second version of Univ-SPP can be solved as classical sum
shortest path problem.
Weak Dependence of Functional INGARCH Processes (2010)
Jürgen Franke
We introduce a class of models for time series of counts which include INGARCH-type models as well as log linear models for conditionally Poisson distributed data. For those processes, we
formulate simple conditions for stationarity and weak dependence with a geometric rate. The coupling argument used in the proof serves as a role model for a similar treatment of integer-valued
time series models based on other types of thinning operations.
Maximum Likelihood Estimators for Markov Switching Autoregressive Processes with ARCH Component (2009)
Jürgen Franke Joseph Tadjuidje Kamgaing
We consider a mixture of AR-ARCH models where the switching between the basic states of the observed time series is controlled by a hidden Markov chain. Under simple conditions, we prove
consistency and asymptotic normality of the maximum likelihood parameter estimates, combining general results on asymptotics of Douc et al (2004) with the geometric ergodicity results of Franke et al.
Mixtures of Nonparametric Autoregression, revised (2009)
Jürgen Franke Jean-Pierre Stockis Joseph Tadjuidje W.K. Li
We consider data generating mechanisms which can be represented as mixtures of finitely many regression or autoregression models. We propose nonparametric estimators for the functions
characterizing the various mixture components based on a local quasi maximum likelihood approach and prove their consistency. We present an EM algorithm for calculating the estimates numerically
which is mainly based on iteratively applying common local smoothers and discuss its convergence properties.
Mixtures of Nonparametric Autoregressions (2009)
Jürgen Franke Jean-Pierre Stockis Joseph Tadjuidje W.K. Li
We consider data generating mechanisms which can be represented as mixtures of finitely many regression or autoregression models. We propose nonparametric estimators for the functions
characterizing the various mixture components based on a local quasi maximum likelihood approach and prove their consistency. We present an EM algorithm for calculating the estimates numerically
which is mainly based on iteratively applying common local smoothers and discuss its convergence properties.
A Note On Inverse Max Flow Problem Under Chebyshev Norm (2009)
Cigdem Güler Horst W. Hamacher
In this paper, we study the inverse maximum flow problem under \(\ell_\infty\)-norm and show that this problem can be solved by finding a maximum capacity path on a modified graph. Moreover, we
consider an extension of the problem where we minimize the number of perturbations among all the optimal solutions of Chebyshev norm. This bicriteria version of the inverse maximum flow problem
can also be solved in strongly polynomial time by finding a minimum \(s - t\) cut on the modified graph with a new capacity function.
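For illustration, here is a standard maximum-capacity (widest) path routine of the kind mentioned in this abstract: a Dijkstra variant that maximizes the bottleneck capacity. This sketch is ours and omits the reduction from the inverse problem itself:

import heapq

def max_capacity_path(graph, s, t):
    """Value of the widest s-t path; graph: node -> list of (neighbor, capacity)."""
    best = {s: float("inf")}
    heap = [(-best[s], s)]                     # max-heap via negated capacities
    while heap:
        cap, u = heapq.heappop(heap)
        cap = -cap
        if u == t:
            return cap
        if cap < best.get(u, 0):
            continue                           # stale heap entry
        for v, c in graph.get(u, []):
            w = min(cap, c)                    # bottleneck along extended path
            if w > best.get(v, 0):
                best[v] = w
                heapq.heappush(heap, (-w, v))
    return 0

g = {"s": [("a", 4), ("b", 7)], "a": [("t", 5)], "b": [("t", 3)]}
print(max_capacity_path(g, "s", "t"))          # 4, via s-a-t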
A Class of Switching Regimes Autoregressive Driven Processes with Exogenous Components (2008)
Joseph Tadjuidje Kamgaing Hernando Ombao Richard A. Davis
In this paper we develop a data-driven mixture of vector autoregressive models with exogenous components. The process is assumed to change regimes according to an underlying Markov process. In
contrast to the hidden Markov setup, we allow the transition probabilities of the underlying Markov process to depend on past time series values and exogenous variables. Such processes have
potential applications to modeling brain signals. For example, brain activity at time t (measured by electroencephalograms) can be modeled as a function of both its past values and
exogenous variables (such as visual or somatosensory stimuli). Furthermore, we establish stationarity, geometric ergodicity and the existence of moments for these processes under suitable
conditions on the parameters of the model. Such properties are important for understanding the stability properties of the model as well as deriving the asymptotic behavior of various statistics
and model parameter estimators.
how to find real solutions to x^2 + 3x - 4 = 0
Easy question, just missed one part of the lesson:
I know that x^2 + 3x - 4 = 0 has 2 solutions because of the degree of the equation. So my first thought was to factor out an x and rewrite the problem as x(x + 3) - 4, but I realized that would give me 3 solutions. So how exactly do I go about getting the real solutions?
Re: x^2+3x-4=0
Hi Germo, welcome to the forums.
The lesson on Factoring Quadratics explains how to do this.
Re: how to find real solutions to x^2 + 3x - 4 = 0
You didn't factor the equation properly.
The equation is this in factored form: (x + 4)(x - 1) = 0. You have to ask yourself: which two numbers whose product is -4 add up to 3 (the coefficient of the x term)? The answer is 4 and -1, and thus the equation factors as (x + 4)(x - 1) = 0.
Then when you set each of the factors equal to zero, you get the two following solutions: x = {-4, 1}
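As a quick cross-check (a worked step added here, not part of the original thread), the quadratic formula gives the same two roots:
x = [-3 ± sqrt(3^2 - 4(1)(-4))] / [2(1)] = (-3 ± sqrt(25)) / 2 = (-3 ± 5) / 2, so x = 1 or x = -4.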
Functional Analysis
January-April 2008
Homework: 40% (due before the first lecture of every week).
Homework scores will be based on four problems chosen at random from those assigned every week. Late homework will not be corrected.
Midsem exam: 20% (25th February, 2008; 9:30am-12:30pm).
Final exam: 40% (24th April, 2008; 9:30am-12:30pm).
Grades: A=90-100%; B=80-89%; C=70-79%; D=60-69%; F=0-59%.
Recommended texts
Introductory Real Analysis, by A. N. Kolmogorov and S. V. Fomin.
Real and Complex Analysis, by Walter Rudin.
Introduction to Topology and Modern Analysis, by G. F. Simmons.
Functional Analysis, by Kosaku Yosida.
Essential Results of Functional Analysis, by Robert J. Zimmer.
Assignments (listed by due date)
January 14
January 21
January 28
February 4
February 11
February 18
March 10
March 17
March 24
March 31
April 7
April 14
April 21
Last assignment
Topics covered
7th January: Weierstrass Approximation theorem. Stone-Weierstrass theorem. Baire's theorem.
10th January: Baire's theorem. Semi-norms. Convex balanced absorbing sets. The Minkowski functional.
14th January: Locally convex topological vector spaces. Characterization through sufficient families of seminorms.
16th January: Locally convex topology on smooth functions. Convolution.
21st January: Smoothing by convolution. Approximation of compactly supported continuous functions by compactly supported smooth functions. A topology on compactly supported smooth functions.
24th January: Norms. Quasi-norms. L^p norms.
28th January: Total variation of a signed measure. Jordan's decomposition. Hahn decomposition.
31st January: Pre-Hilbert spaces. Inner products. Complete topological vector spaces (Banach, Frechet and Hilbert spaces).
4th February: Completeness of L^p. Continuous linear operators. Bornologic spaces.
7th February: Distributions. Distributional derivatives.
12th February: Sobolev's spaces. Completeness of Sobolev's spaces. The completion of a quasi-normed space.
14th February: Density of smooth functions in Sobolev space for R^n. Factor space of a Banach space.
19th February: Partition of unity.
25th February: Compactly supported distributions as linear functionals on the space of smooth functions. Substitution of variables in a distribution. Homogeneous distributions. Distributions
invariant under a group action.
10th March: Rapidly decreasing functions. Schwartz space. Fourier transform of rapidly decreasing functions. Fourier inversion formula.
13th March: Parseval's relation. Poisson summation formula. Tempered distributions.
18th March: Fourier transform of tempered distributions. Riesz representation theorem (for Hilbert spaces). Plancherel's theorem.
20th March: Compact operators. Integral operators. Integral operators on compact spaces are compact.
26th March: Hilbert-Schmidt operators (compactness of). Integral operators with L^2 kernels are Hilbert-Schmidt. Spectral theorem for self-adjoint compact operators.
27th March: Spectral theorem for commuting families of compact self-adjoint operators. Normal operators. Spectral theorem for compact normal operators. Spectral theorem for commuting families of
compact normal operators.
31st March: Topologies on operator spaces: uniform, weak and strong.
3rd April: Topological groups. Representations of topological groups.
7th April: Strong continuity of representations by translation operators on compactly supported continuous functions. Lusin's theorem. Density of compactly supported continuous functions in L^p of a
finite measure space. Strong continuity of representations by translation operators on L^p of a finite measure space.
11th April: Weak and weak* topologies. The Hahn-Banach theorem. The Banach-Alaoglu theorem. Compactness of the space of Probability measures on a compact metric space in the weak* topology.
16th April: The Kakutani-Markov fixed point theorem. Fixed point theorem for the action of a compact group on a closed convex set in the unit ball of the dual of a Banach space with weak* topology.
17th April: Von Neumann's theorem on the existence of an invariant probability measure on a compact space on which a compact group operates. Translation-invariant probability measures for compact
groups; bi-invariance and uniqueness of such measures. The Peter-Weyl theorem.
21st April: The Krein-Millman theorem. Existence of invariant ergodic measures for abelian group actions on compact metric spaces.
Further reading
This is a rather random list of material I feel would be fun to read now:
Harmonic Analysis on Phase Space, by Gerald B. Folland.
The Fourier Integral and some of its Applications, by Norbert Wiener.
On the role of the Heisenberg group in harmonic analysis by Roger Howe.
|
{"url":"http://www.imsc.res.in/~amri/functional_analysis/","timestamp":"2014-04-17T19:11:39Z","content_type":null,"content_length":"5984","record_id":"<urn:uuid:db40846e-30ec-4cc0-a548-a97b7b69422c>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Engineer
posted on October 23, 2006
Thermal radiation is energy emitted by matter that is at a finite temperature. Although we focus primarily on radiation from solid surfaces, emission may also occur from liquids and gases. Regardless
of the form of matter, the emission may be attributed to changes in the electron configurations of the constituent atoms or molecules. The energy of the radiation field is transported by
electromagnetic waves (or alternatively, photons). While the transfer of energy by conduction or convection requires the presence of a material medium, radiation does not. In fact, radiation transfer
occurs most efficiently in a vacuum.
The maximum flux (W/m^2) at which radiation may be emitted from a surface is given by the Stefan–Boltzmann law

E[b] = σT[s]^4

where T[s] is the absolute temperature (K) of the surface and σ is the Stefan–Boltzmann constant (σ = 5.67 × 10^-8 W/m^2·K^4). Such a surface is called an ideal radiator or blackbody. The heat flux emitted by a real surface is less than that of the ideal radiator and is given by

E = εσT[s]^4

where ε is a radiative property of the surface called the emissivity. This property, whose value is in the range 0 ≤ ε ≤ 1, indicates how efficiently the surface emits compared to an ideal radiator. Conversely, if radiation is incident upon a surface at a rate G per unit area (the irradiation), a portion will be absorbed, and the rate at which energy is absorbed per unit surface area may be evaluated from the knowledge of a surface radiative property termed the absorptivity α. That is,

G[abs] = αG

where 0 ≤ α ≤ 1. Whereas radiation emission reduces the thermal energy of matter, absorption increases this energy.

Assuming the surface to be one for which α = ε, the net rate of radiation heat exchange between the surface and its surroundings, expressed per unit area of the surface, is

q″[rad] = εσ(T[s]^4 − T[sur]^4)

where T[sur] is the temperature of the surroundings. The surface within the surroundings may also simultaneously transfer heat by convection to the adjoining gas. The total rate of heat transfer from the surface is then the sum of the heat rates due to the two modes. That is,

q = hA(T[s] − T[∞]) + εAσ(T[s]^4 − T[sur]^4)

where h is the convection coefficient, A the surface area, and T[∞] the gas temperature.
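As a quick numerical illustration (an addition to the excerpt, not part of the original text), the relations above can be evaluated directly; the surface temperature, emissivity, and convection coefficient below are made-up example values:

```python
# Sketch: combined radiation + convection loss from a gray surface (alpha = epsilon).
# All numbers below are illustrative assumptions, not values from the excerpt.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2.K^4

def net_heat_flux(T_s, T_sur, T_inf, emissivity, h):
    """Net heat flux (W/m^2) leaving the surface by radiation to surroundings
    at T_sur plus convection to an adjoining gas at T_inf."""
    q_rad = emissivity * SIGMA * (T_s**4 - T_sur**4)  # radiation exchange
    q_conv = h * (T_s - T_inf)                        # Newton's law of cooling
    return q_rad + q_conv

# Example: a 350 K surface with emissivity 0.8, surroundings and air at 300 K,
# and an assumed convection coefficient h = 10 W/m^2.K (roughly free convection in air).
print(net_heat_flux(T_s=350.0, T_sur=300.0, T_inf=300.0, emissivity=0.8, h=10.0))
```

With these assumed numbers the radiative and convective contributions are of comparable size, about 310 and 500 W/m^2 respectively.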
Excerpt from: Incropera, Frank and De Witt, David P. Introduction to Heat Transfer. Second Edition. New York: John Wiley & Sons, Inc. 1985, 1990.
Copyright © 1985, 1990, by John Wiley & Sons, Inc.
This material is used by permission of John Wiley & Sons, Inc.
|
{"url":"http://www.engineering.com/Library/ArticlesPage/tabid/85/ArticleID/142/categoryId/11/Radiation.aspx","timestamp":"2014-04-20T09:12:04Z","content_type":null,"content_length":"44903","record_id":"<urn:uuid:cb043bcb-dfe2-416a-9a7b-a05bb24a0ea1>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
|
One-to-one and onto
To me this problem doesn't seem right. Here it is:
Is the following function one-to-one, onto, both, or neither?
f: ℝ → ℕ, f(x) = ⌈2x/3⌉
My answer: onto
Although, wouldn't this function be invalid since it produces negative numbers and the set of natural numbers doesn't include negatives? Consider f(-1.5) = -1.
Am I misunderstanding a concept?
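A quick numeric check (my addition, not part of the original thread) makes the worry concrete; Python's math.ceil plays the role of the ceiling function:

```python
import math

def f(x: float) -> int:
    # The function from the post: f(x) = ceil(2x/3)
    return math.ceil(2 * x / 3)

for x in (-1.5, -0.1, 0.0, 0.5, 3.0):
    print(x, "->", f(x))
# f(-1.5) prints -1, which is not a natural number -- exactly the
# poster's observation that f maps some reals outside of N as stated.
```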
|
{"url":"http://www.physicsforums.com/showthread.php?p=4165071","timestamp":"2014-04-17T21:31:28Z","content_type":null,"content_length":"24285","record_id":"<urn:uuid:b5aaca2a-85c3-4011-b5dc-c33a36708ce6>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Evaluation Process
When evaluating an expression the Mathematica kernel applies a collection of rules to the expression until the expression no longer changes. In the discussion below I explain the evaluation process in greater depth. This discussion of the evaluation process is based on a tutorial by David Withoff which is posted at http://library.wolfram.com/infocenter/Conferences/4683/. That tutorial applied to Mathematica 2.0, and further details reflecting changes in Mathematica 3.0 come from Power Programming with Mathematica: The Kernel (by David Wagner). The book by David Wagner is apparently out of print. As far as I know there were no changes to the evaluation process in Mathematica 4 or Mathematica 5.
This discussion doesn't cover the use of MakeExpression, $PreRead, $Pre, $Post, $PrePrint, FormatValues, and MakeBoxes. That is all discussed in an earlier section. Each step discussed there requires
completing the evaluation process discussed in this section.
Furthermore, this discussion doesn't address the pattern matching process.
Steps of Evaluating h[a1, a2, a3]
Below I talk about "external DownValues", "internal DownValues" and likewise for UpValues, SubValues, and NValues. External values are those that are not part of the Mathematica kernel while internal
values are those that are part of the kernel.
The 17 steps below are repeated until an expression no longer changes, and the decision to end the process happens in step 3 below. When the expression being evaluated does change, the whole process starts again at the beginning on the new expression. In pathological cases, such as evaluating x after the assignment x = x + 1, evaluation completes only because $IterationLimit is exceeded. In other cases evaluation completes due to exceeding $RecursionLimit. I don't discuss how counting iterations and recursions fits into the evaluation process because I have never seen an explanation of when that happens.
This discussion overlooks the fact that NHoldAll, NHoldRest and NHoldFirst prevent N from approximating one or more arguments of affected expressions. Also the functions Plus and Times use internal UpValues and internal DownValues before external definitions, but that detail is overlooked here.
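Before the step-by-step walk-through, here is a toy Python sketch of just the outer "apply rules until a fixed point" loop (my simplification, not Mathematica's actual algorithm; plain string replacement stands in for real pattern matching):

```python
# Toy fixed-point evaluator: repeatedly apply rewrite rules to a string
# expression until it stops changing or an iteration limit is exceeded.
# This only models the outer loop; heads, attributes, etc. are ignored.

def evaluate(expr: str, rules: dict[str, str], iteration_limit: int = 4096) -> str:
    for _ in range(iteration_limit):
        new = expr
        for lhs, rhs in rules.items():
            new = new.replace(lhs, rhs)   # crude stand-in for pattern matching
        if new == expr:                   # fixed point reached: evaluation ends
            return expr
        expr = new
    raise RecursionError("iteration limit exceeded (compare $IterationLimit)")

print(evaluate("f[f[x]]", {"f[x]": "x"}))   # -> "x" after two rewrite passes
# evaluate("x", {"x": "x+1"}) would grow forever and hit the limit,
# just as evaluating x after x = x + 1 does in Mathematica.
```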
Step 1
If the expression being evaluated is a symbol with an OwnValue replace it with the OwnValue. OwnValues are used when some value is assigned to a symbol (e.g. x=5).
Step 2
Evaluate the head of the expression.
Step 3
If no part of the expression has changed during the last time around this procedure, return the expression. This is an optimization that prevents unnecessary reevaluation of large expressions.
Step 4
If (h) has the HoldAllComplete attribute skip to step 11 below. HoldAllComplete prevents any changes to the arguments of a function.
If (h) doesn't have attributes that prevent evaluation of some or all arguments, the arguments evaluate from left to right. When (h) has one of the attributes (HoldFirst, HoldRest, HoldAll), evaluation continues without evaluating the affected arguments. If any of the arguments have the head Unevaluated, then the head Unevaluated is removed and further evaluation of the argument is prevented. In step 17 the head Unevaluated may be restored to the argument.
If (h) has one of the attributes (HoldFirst, HoldRest, HoldAll), and an argument of (h) has the head Evaluate, then the argument evaluates even if the attribute would have prevented evaluation.
Step 5
If (h) has the Flat attribute, flatten nested layers of (h); for example, h[a, h[b, c]] becomes h[a, b, c].
Step 6
If (h) does not have the SequenceHold attribute, splice sequences into the argument list; for example, h[a, Sequence[b, c]] becomes h[a, b, c]. If (h) does have the SequenceHold attribute, the Sequence expression is left in place as an ordinary argument.
Step 7
If (h) has the Listable attribute, thread (h) over Lists; for example, h[{a1, a2}, b] becomes {h[a1, b], h[a2, b]}. This has the same effect as evaluating Thread[ h[a1,a2,a3] ] when (h) isn't Listable.
Step 8
If (h) has the Orderless attribute, sort the arguments of (h) into canonical order. Note that canonical order may not be the same as numeric order when the arguments are numeric.
Step 9
Use external UpValues for the symbolic head of each argument of the expression. For example, in evaluating h[g1[a], g2[a], f'[a]] we would use the external UpValues of (g1), then the external UpValues of (g2), then the external UpValues of Derivative.
The "symbolic head" of an expression is the result of nesting Head until a symbol is returned. Notice Derivative is the symbolic head of f'[a]. Here we have to nest Head three times to get a symbol: the head of f'[a] is (f'), the head of that is Derivative[1], which finally has the head Derivative.
Step 10
Use internal UpValues for the symbolic head of each argument of the expression. For the example above, we would use the internal UpValues of (g1), then the internal UpValues of (g2), then the internal UpValues of Derivative.
Step 11
If (h) in h[a1, a2, a3] is a symbol, the external DownValues for (h) are used.
Step 12
If the head of the expression is not a symbol, the external SubValues of its symbolic head are used. For example, the head of h[1][x] is h[1]; since (h[1]) isn't a symbol, the SubValues of (h) would be used.
Step 13
If (h) in h[a1, a2, a3] is a symbol, the internal DownValues for (h) are used.
Step 14
If the head of the expression is not a symbol, the internal SubValues of its symbolic head are used.
Step 15
Use external NValues if the expression being evaluated has the head N. More precisely the NValues of the symbolic head of the first argument of N are used.
Step 16
Use internal NValues if the expression being evaluated has the head N. As in the previous step the NValues of the symbolic head of the first argument of N are used.
Step 17
If no UpValues, DownValues, SubValues, or NValues were used, and any argument of (h) had the head Unevaluated, then the head Unevaluated is restored to that argument. For example, if (h) has no applicable definitions, h[Unevaluated[1 + 1]] returns h[Unevaluated[1 + 1]].
Unevaluated also prevents sequences from splicing, and prevents attributes such as Flat and Orderless from taking effect. However, if (h) has a DownValue that applies, we don't see Unevaluated in the result.
Some Examples
From the discussion above it may be hard to understand the order in which the parts of a complicated expression evaluate. Two kinds of examples are worth working through after making some definitions: ordinary evaluation of an expression such as h[x], and evaluation when the head (h) in h[x] is itself a non-trivial expression. You could see the order of evaluation using Trace or a similar function, but I find the output of these functions difficult to read. Instead I like to insert Print statements into the definitions so that each rule reports when it fires.
Created by Mathematica (May 17, 2004)
|
{"url":"http://www.verbeia.com/mathematica/tips/HTMLLinks/Tricks_Misc_3.html","timestamp":"2014-04-16T13:03:03Z","content_type":null,"content_length":"23039","record_id":"<urn:uuid:79d4f13e-7e87-4f0f-a36d-e7d4b171349c>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: Rings and Algebras Problem set #11. Dec. 1, 2011.
1. Suppose e ∈ R is a basic idempotent in a semiperfect ring R and suppose that M is a generator of R-Mod, i.e.,
for each R-module N there is an epimorphism f : ⊕_I M → N. Then Re is isomorphic to a direct summand of M.
2. Let R be the ring of all N×N matrices over R which can be written as the sum of a scalar matrix and a strictly
lower triangular matrix with only finitely many nonzero entries. Show that R is left perfect but not right perfect.
3. Let A = KΓ/I be a finite dimensional path algebra defined by relations. Let e_1, . . . , e_n denote the idempotents
corresponding to the vertices.
a) Show that the module Ae_i is indecomposable projective and it is the projective cover of the simple module
Ae_i/J(A)e_i.
b) Show that D(e_iA) = Hom_K(e_iA, K) is indecomposable injective and it is the injective envelope of
Ae_i/J(A)e_i.
4. a) Show that the following are equivalent for a module _R M:
(i) M is faithful;
(ii) M cogenerates _R R;
(iii) M cogenerates every finitely generated projective module.
5. Show that if F : R-Mod → S-Mod is a categorical equivalence then a module _R M is faithful if and only if _S F(M)
is faithful. Derive from this that R is (semi)primitive if and only if S is (semi)primitive.
6. Let _R P_S and _S Q_R be bimodules satisfying the conditions in the theorem characterizing Morita equivalence.
Prove the natural isomorphisms:
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/843/5001945.html","timestamp":"2014-04-16T13:34:50Z","content_type":null,"content_length":"8581","record_id":"<urn:uuid:99babfeb-d0df-440d-a610-ae1809a816e7>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Petaluma Precalculus Tutors
...As a doctoral student in clinical psychology, I was hired by the university to teach study skills, test taking skills, and time management to incoming freshmen students. As part of this job, I
was trained in and provided materials for each of these topics. I often find, when working with my stu...
20 Subjects: including precalculus, calculus, Fortran, Pascal
...They can be explained in simple terms and there are ways to remember key concepts. This can be a fun subject when the student has confidence about the basics. Understanding Precalculus and
Trigonometry concepts need not be daunting.
18 Subjects: including precalculus, calculus, geometry, statistics
...I have a Masters in mathematics and a PhD in economics which requires a good understanding of both topics. I understand both the theoretical basis and practical application of both subjects. In
particular, I understand the relationship between the two subjects.
49 Subjects: including precalculus, calculus, physics, geometry
...This is my way of paying forward all the tutoring and advice I received as an undergraduate during my tenure as a Ronald E. McNair Scholar, which was undoubtedly the program that changed
my academic career path towards the doctorate degree and helped me get where I am today. APPROACH TO TUTORING: There is no one-size-fits-all approach when it comes to learning.
24 Subjects: including precalculus, chemistry, physics, calculus
...In other words, I am a strong believer in either writing it down or drawing a picture to help illustrate what is said. Doing things one-on-one rather than in a large group setting I'm inclined
to have the student do example problems while I ask them questions to get them to think about how to ap...
10 Subjects: including precalculus, chemistry, calculus, physics
|
{"url":"http://www.algebrahelp.com/Petaluma_precalculus_tutors.jsp","timestamp":"2014-04-18T11:49:26Z","content_type":null,"content_length":"25001","record_id":"<urn:uuid:a39f338d-30a7-4daa-a305-a2baacde45b8>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Exercises
1. Execute the hypercube summation algorithm by hand for N=8, and satisfy yourself that you obtain the correct answer (see the simulation sketch after this list).
2. Use Equations 11.1 and 11.2 to identify problem size, processor count, and machine parameter regimes in which each of the two vector reduction algorithms of Section 11.2 will be more efficient.
3. Implement the hybrid vector reduction algorithm described in Section 11.2. Use empirical studies to determine the vector length at which the switch from recursive halving to exchange algorithm
should occur. Compare the performance of this algorithm with pure recursive halving and exchange algorithms.
4. A variant of the parallel mergesort algorithm performs just the initial compare-exchange steps before switching to a bubblesort phase [174]. In the bubblesort phase, tasks are connected in a logical ring and each task performs compare-exchange operations with its neighbors until a global reduction shows that no exchanges occurred. Design an implementation of this algorithm, using hypercube and ring structures as building blocks.
5. Implement the modified parallel mergesort of Exercise 4. Compare its performance with regular parallel mergesort for different input sequences and for a variety of P and N .
6. Extend Equations 11.3 and 11.4 to account for bandwidth limitations in a one-dimensional mesh.
7. Modify the performance models developed for the convolution algorithm in Section 4.4 to reflect the use of the hypercube-based transpose. Can the resulting algorithms ever provide superior performance?
8. Use the performance models given in Section 11.2 for the simple and recursive halving vector reduction algorithms to determine situations in which each algorithm would give superior performance.
9. Design and implement a variant of the vector sum algorithm that does not require the number of tasks to be an integer power of 2.
10. Develop a CC++ , Fortran M, or MPI implementation of a ``hypercube template.'' Use this template to implement simple reduction, vector reduction, and broadcast algorithms. Discuss the techniques
that you used to facilitate reuse of the template.
11. Implement a ``torus template'' and use this together with the template developed in Exercise 10 to implement the finite difference computation of Section 4.2.2.
12. Develop a performance model for a 2-D matrix multiplication algorithm that uses the vector broadcast algorithm of Section 11.2 in place of the tree-based broadcast assumed in Section 4.6.1.
Discuss the advantages and disadvantages of this algorithm.
13. Implement both the modified matrix multiplication algorithm of Exercise 12 and the original algorithm of Section 4.6.1, and compare their performance.
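As a cross-check for Exercise 1, here is a small Python sketch (my addition, not from the book) that simulates the hypercube summation; in step i each task exchanges its partial sum with the neighbor whose id differs in bit i, so with N=8 and inputs 0..7 every task should end holding 28:

```python
# Simulate hypercube allreduce (summation) for N tasks, N a power of 2.
def hypercube_sum(values):
    n = len(values)
    assert n & (n - 1) == 0, "task count must be a power of 2"
    sums = list(values)
    d = n.bit_length() - 1             # dimension of the hypercube
    for i in range(d):                 # one exchange per dimension
        nxt = sums[:]
        for task in range(n):
            partner = task ^ (1 << i)  # neighbor across dimension i
            nxt[task] = sums[task] + sums[partner]
        sums = nxt
    return sums

print(hypercube_sum(list(range(8))))   # every entry should be 28
```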
© Copyright 1995 by Ian Foster
|
{"url":"http://www.mcs.anl.gov/~itf/dbpp/text/node129.html","timestamp":"2014-04-16T16:30:07Z","content_type":null,"content_length":"6460","record_id":"<urn:uuid:edbae707-d0e8-4565-a2f3-4b7dcab4b590>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
|
So-called Bayesian hypothesis testing is just as bad as regular hypothesis testing
Steve Ziliak points me to this article by the always-excellent Carl Bialik, slamming hypothesis tests. I only wish Carl had talked with me before so hastily posting, though! I would’ve argued with
some of the things in the article. In particular, he writes:
Reese and Brad Carlin . . . suggest that Bayesian statistics are a better alternative, because they tackle the probability that the hypothesis is true head-on, and incorporate prior knowledge
about the variables involved.
Brad Carlin does great work in theory, methods, and applications, and I like the bit about the prior knowledge (although I might prefer the more general phrase “additional information”), but I hate
that quote!
My quick response is that the hypothesis of zero effect is almost never true! The problem with the significance testing framework–Bayesian or otherwise–is in the obsession with the possibility of an
exact zero effect. The real concern is not with zero, it’s with claiming a positive effect when the true effect is negative, or claiming a large effect when the true effect is small, or claiming a
precise estimate of an effect when the true effect is highly variable, or . . . I’ve probably missed a few possibilities here but you get the idea.
In addition, none of Carl’s correspondents mentioned the “statistical significance filter”: the idea that, to make the cut of statistical significance, an estimate has to reach some threshold. As a
result of this selection bias, statistically significant estimates tend to be overestimates–whether or not a Bayesian method is used, and whether or not there are any problems with fishing through
the data.
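To see the filter in action, here is a quick simulation (my addition, not part of the original post); the effect size, noise level, and cutoff are arbitrary choices:

```python
import random

random.seed(1)
true_effect, se = 0.1, 0.5           # small true effect, noisy estimates
estimates = [random.gauss(true_effect, se) for _ in range(100_000)]

# Keep only "statistically significant" estimates (|z| > 1.96).
signif = [est for est in estimates if abs(est) > 1.96 * se]
pos = [est for est in signif if est > 0]

print(f"share significant: {len(signif) / len(estimates):.3f}")
print(f"mean significant positive estimate: {sum(pos) / len(pos):.2f}")
# With these assumed numbers the positive survivors average roughly 1.2 --
# more than ten times the true effect of 0.1 -- and a fair share of the
# significant estimates even land on the wrong sign.
```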
Bayesian inference is great–I’ve written a few books on the topic–but, y’know, garbage in, garbage out. If you start with a model of exactly zero effects, that’s what will pop out.
I completely agree with this quote from Susan Ellenberg, reported in the above article:
You have to make a lot of assumptions in order to do any statistical test, and all of those are questionable.
And being Bayesian doesn’t get around that problem. Not at all.
P.S. Steve Stigler is quoted as saying, “I don’t think in science we generally sanction the unequivocal acceptance of significance tests.” Unfortunately, I have no idea what he means here, given the
two completely opposite meanings of the word “sanction” (see the P.S. here.)
25 Comments
1. The nice thing about hypothesis tests OTHER than those based on p values, though (including non-Bayesian AIC, factor, and others) is that they are based on comparative logic. Comparing the
relative fit of two models through likehood can be informative. We have to always keep in mind our assumptions, but this is true of any procedure, as you point out. Every model is "wrong",
including those with nonzero effect parameters. When you examine the posterior distribution, you implictly use your eye to do exactly the same comparisons you could do with a hypothesis test.
You're comparing posterior values to one another. None of these values are "true" in any sense, but it is still useful to compare them.
I think when you say that the real question is whether an effect is positive or negative, that's certainly often a valuable question, but does nothing to call into question the usefulness of
Bayesian hypothesis tests. Bayes factors or posterior odds for effect sign are perfectly plausible and useful. In other cases, null models are useful to compare against.
You're stating the case too strongly when you say Bayesian hypothesis tests are "just as bad". Their comparative logic makes them much more useful. P values, on the other hand, test the null in
isolation, which is of dubious usefulness.
2. I think it's clear from the way Bialik deployed the quote that Stigler means sanction in the sense of "approve."
But yes: you and my mom are right.
3. It is certainly possible to do BHT with an approximate point null, where the prior on the null is spread out to a degree that is consistent with other information that you have. And, as Berger
and Delampady show in their 1987 paper, an exact point null is a good approximation to an approximate point null under some circumstances (specified in the paper).
That said, the point of hypothesis testing is, presumably, to decide on what to do, what action to take. No reasonable person would do hypothesis testing, say "it looks like the null is false,"
and stop there. (A lot of unreasonable people do this, however.)
That is, the proper way to frame this is in terms of decision theory, with losses as well as probabilities. p-values are completely useless and in fact quite wrong in this context. BHT with
*appropriate* priors (see above) is just the first necessary step in applying decision theory, to be followed by evaluating the expected posterior loss for each action under consideration.
So I don't agree that BHT is just as bad as p-values and classical significance tests. As the first step towards a proper application of decision theory, it is much, much better than either of
them. Neither p-values nor classical significance tests are appropriate foundations for making decisions.
My quick response is that the hypothesis of zero effect is almost never true! The problem with the significance testing framework–Bayesian or otherwise–is in the obsession with the
possibility of an exact zero effect.
Ooh, I am so using that argument the next time I write about SSVS.
5. Hey Andrew — tx for the heads up — lots to reply to, most of it not about statistics…
First, I think you may be getting a teensy bit overwrought here again, "hating" on that one quote. Carl is a pop science writer and when he (or guys like him) call me, I feel very tenderly toward
them because my own father was a journalist (spent his whole career in magazine and newspaper writing and editing). So I always talk to these guys and write back carefully worded emails (like Don
Berry also apparently did)…. and inevitably the results are disappointing: I get at most one sentence in the final article. Journalists do not ask the experts that they interview for their
stories to preview their work (it would be impractical, working on deadlines as they do) so unfortunately this just sort of goes with the territory. It happened to me again the other day, when I
spent at least 30 minutes on the phone with a nontechnical sports writer for the Columbus Dispatch who was one of the many to call me before the NCAA tourney got underway. the final article,
barely mentions me and does not include all the wonderful, intuitive, easy-to-understand math stuff I gave him. Sigh.
*Anyway*, I share your concern about "the point null is *never* true", but I agree with the previous commenters that the Bayesian approach still scores over the frequentist approach here since
the point null is *not* really required in a Bayesian setting, and offers a way of directly comparing evidence that is model-based, rather than design-based. I also think you're reading much more
into Carl's remark than he intended; he's aiming at a mathematically more sophisticated audience than that reading Post Dispatch sports stories, but this is still a "pop science" article, IMHO.
Moreover the FDA has decided that point nulls are here to stay, so like them or not, the thing for us to do as applied Bayesians is work within that system and improve the existing science as
best we can. That's what the Berrys and I and Team Ibrahim and DJS and Beat Neuenschwander and a whole lot of Bayesians are trying to do right now. I think we're on the right track; CDRH
(devices) is on board, CBER (safety) is coming along, and even CDER (drugs) is changing.
PS So where's the April Fool's Day blog this year? I figured March 31 for you is like the day before the NCAA tourney starts for me. ;) I was gratified to see another journal editor (JCGS) copied
my idea of using one of your earlier AFD blogs as the genesis of a discussion paper; I also think my journal got the better article ;)
6. Hi, Brad. The April Fool's pressure was too much for me so I decided to run a serious entry that day. Expected squared jumped distance is an important idea, I think.
7. I'm wondering if you've seen the Hoijtink et al book, Bayesian Evaluation of Informative Hypotheses
I haven't yet had time to read the book, but I saw Herbert Hoijtink speak on the idea at IMPS last year, and thought it was a pretty good idea.
Just curious what your thoughts are…
8. I think it is mistaken to suppose one doesn't want and need a way to evaluate the inferences warranted and unwarranted by data quite apart from decisions that might be taken. One first, and
separately, needs to know what the data indicate about the state of affairs in order to decide what to do about it. Of course, any "inference" could be described as a decision: e.g., I decide
that the data indicate radiated water spilling into the ocean, but that is merely a verbal point, and not what people mean when they say everything should be reduced to decision-making. I and
others regard that conception as abrogating the fundamental purpose for employing evidence to find out about what is the case, rather than what is best to do—where things like utilities should
enter. Even where an actual decision is being contemplated, it is undesirable to mix the evidential appraisal with criteria relevant for judging decisions, and any account that precludes doing so
is, I would argue, inadequate for scientific inference.
9. Brad: Believe Andrew has a point that applies to many technical Bayesian publications as well.
OK "tackle[s] the probability that the hypothesis is true head-on" but rarely does this result in a highly/widely credible posterior – for anything.
Bayesian approaches provide a lot that's pragmatic (purposeful) but the perhaps formally understandable "salesman's puffing" about the value of _the_ posterior (for everyone?) that one gets – well maybe it's time to start losing that.
(Thinking of using the reworked quote "Every time I hear the word posterior I want to reach for my pistol" in a talk promoting Bayesian approaches some day)
I tried to clarify some of these concerns in Two Cheers for Bayes in Controlled Clinical Trials back in the 90's.
Anyway, I might not be understanding what Andrew is getting at and I also was waiting for the April Fools' post.
10. I'd also like to hear your thoughts on Hoijtink's work on informative hypothesis testing. I heard him give this talk last month and found it interesting.
11. April, Frank:
I glanced at Hoijtink's slides. The method could be useful but I don't quite see the point. I'd rather just fit a model directly and then get inferences about any quantities of interest and not
worry about hypotheses.
12. I thought that I made it clear: Sure, you can come to an opinion about what state of nature is likely to be true (specified by a posterior distribution), that's fine. But what are you going to DO
about it? Publish a paper? Reject a drug? Accept a medical device under specific circumstances?
Even if you are only interested in "what is the case," you are still going to take some action about it. Like, publishing a paper that says "that is the case," thus risking (or enhancing) your
reputation if ultimately you turn out to be wrong (or right).
There's always a decision involved, and even one that has consequences.
13. Bill:
I agree that decision making is important, and we discuss it in BDA, second edition (chapter 22, I believe). But I don't think point hypotheses are needed to do Bayesian decision analysis.
Let me state this another way: I can well believe that Bayesian inference with point hypotheses, in the hands of a skilled practitioner, can be a useful tool in decision analysis. But you can
definitely do Bayesian decision analysis using straight BDA tools–no point hypotheses, no Bayes factors, just posterior probabilities and loss functions. The decision is the decision of what do
to, not the decision of whether to accept or reject a null hypothesis.
14. What you're missing is that if you have an account that interprets data by using so and so's loss function to go straight to a decision (maximizing utility or whatever), then there is NO distinct
evaluation of evidence! You rob others from putting in their loss functions—who says yours is best? The drug company has its loss function, and I have mine, and the result is that evidence simply
drops out of the picture! The debate that was thought to be about, say the existence or risk of a certain type, becomes a disagreement between personal losses. This is a highly popular view among
personalists and all manner of radical relativists, post-modernists and the like. It's old as the hills, and dangerous.
15. Mayo:
I agree that it's good practice to separate data from inference and to separate inference from decision analysis. Each step should be transparent: the client should be able to see where the data
came from, to see how the inferences were derived from data, and to see how the decision recommendations were derived from the inferences. Performing a decision analysis–exploring the
implications of your inference under a hypothesized loss function–can be useful, and the client can and should always be able to go back and perform a different decision analysis if so desired.
I never recommended otherwise, nor am I a personalist, radical relativist, post-modernist, etc. I do like to think of myself as old as the hills and dangerous, though!
16. Andrew: I'm pretty much in agreement with your last comment; but I do see a role for point null hypotheses in decision theory *if* they are understood to be approximations to a state of nature
"no significant effect", and compatible with the Berger/Delampady comment (in the sense that the actual prior we have on "no significant effect" can be approximated adequately by a point null),
and if that state of nature is one that you need for the decision.
In addition, I think that various analyses involving Bayesian point null testing do much to undermine the rationale for using classical two-sided tests, e.g., p-values, for measuring the evidence
against the "no significant effect" null. I've always found Dennis Lindley's 1957 paper to be quite compelling.
[Just discovered that the spell checker in this version of Firefox doesn't have 'analyses' in its database!]
17. I'm so glad you say you agree with me: "that it's good practice to separate data from inference and to separate inference from decision analysis. Each step should be transparent…" I hope your
readers take note, for it is at odds with what Jefferys seemed to be saying, and certainly at odds with Ziliak and McCloskey. I never intended you under the umbrella of
"a personalist, radical relativist, post-modernist, " but had in mind others, notably the "cultists" Z & M. What they fail to realize, ironically, among much else, is that if one takes seriously
their idea that losses should be introduced even in interpreting what the data say, then the drug company found at fault was actually doing just what Z&M recommend and endorse! After all, if
there's no fact of the matter, but only "oomph" feeling, based on the losses of the data interpreter, then the charge of ignoring or downplaying the evidence goes by the wayside. We are not old,
Andrew, but are very dangerous (as are all sham-busters).
18. Unfortunately, a Bayesian two-sided test (e.g., in Normal iid testing) can be wrong with high or maximal probability: e.g., keep collecting data until the point null is excluded from the
confidence interval. Berger and Wolpert concede this in their book on the Likelihood Principle. So it's not at all clear superiority has been shown over the frequentist error statistical
(two-sided) test, where this could not happen.
19. Mayo:
I'm not talking about EXACT point nulls.
I'm talking about APPROXIMATE point nulls.
And I have specifically said that the APPROXIMATE point null has to be adequately approximated by the EXACT one, for calculation.
If you keep taking data and taking data and taking data, then the EXACT point null will no longer be an adequate approximation to the APPROXIMATE point null at some point. So then you have to
choose an appropriate "no significant effect" prior.
But, I also mentioned "various" analyses of the two-sided frequentist tests. All you have to do is to put a point null on the alternative where the data happen to fall. That is about as
supportive of the alternative hypothesis as you can get (and it is cheating). And even if you do this, the p-value (one- or two-sided) still significantly overstates the evidence against the
(approximate) point null.
Furthermore, the Lindley paradox still applies, even if you choose an approximate point null with a fixed "no significant effect" prior, whereas we know that the classical two-sided point null
(even if you let it be "fixed and approximate") is guaranteed to reject eventually if you take enough data (this will not happen with the Bayesian test with a fixed "no significant effect" prior).
20. Hi Andrew & others,
I agree that point nulls may be a priori improbable in *observational* studies. However, in *experimental* studies, where all factors are tightly controlled by design, point nulls do make sense.
Also, many theories specifically state a point null to be true (a law, or something that is invariant across conditions). For instance, Bowers, Vigliocco, and Haan (1998) have proposed that
priming depends on abstract letter identities. Hence, their account predicts that priming effects are equally large for words that look the same in lower- and uppercase (e.g., kiss/KISS) or that
look different (e.g., edge/EDGE). This account does not predict that the experimental effect will be small; it predicts that it is completely absent. In fact, for theoretical purposes it often
does not matter how large an effect is, as long as it is reliably detected. For instance, if priming effects were larger for words that look the same in lower- and uppercase (e.g., kiss/KISS)
than for those that look different (e.g., edge/EDGE), this would undermine the hypothesis that letters are represented abstractly, no matter whether the effect size was 100 ms or 10 ms. Of
course, it is much more difficult for a 10-ms effect to
gain credence in the field, but this issue is orthogonal to the argument. Should the 10-ms effect be found repeatedly in different laboratories across the world, the effect would at some point be
deemed reliable and considered strong evidence against any theoretical account that predicted its absence. [a more elaborate response to the issue of point nulls is on p. 175 of http://
21. Bill:
To me, the Lindley paradox falls apart because of its noninformative prior distribution on the parameter of interest. If you really think there's a high probability the parameter is nearly
exactly zero, I don't see the point of the model saying that you have no prior information at all on the parameter. In short: my criticism of so-called Bayesian hypothesis testing is that it's
insufficiently Bayesian.
Thanks for the comment. I want to write more on this, but just for now let me say that the issue is not just that effects are never zero but that effects vary. A tiny effect of size "epsilon"
could well correspond to an effect of -epsilon for some people and +2*epsilon for another person. In which case the idea of there being a single effect size is off track.
22. Possibly a nice case study for this post.
For unrelated reasons I read Sander's paper below and to _me_ it seemed to critically work through some of what is being raised here in the context of evaluating evidence for effects of vitamin E.
Additionally he seems to have put the point I was trying to raise perhaps somewhat more clearly (quote below).
S. Greenland. Weaknesses of Bayesian model averaging for meta-analysis in the study of vitamin E and mortality Clin Trials February 2009 6: 42-46
"Bayesian analysis may mislead readers if it objectifies probability statements. Objectification invites deceptively unconditional claims about probabilities of hypotheses, such as BWN’s claim
that ‘Vitamin E intake is unlikely to affect mortality regardless of dose.’ This statement sounds as if the subjective conclusion is a discovered biological fact, when it is only a psychological
fact, a posterior belief about the null hypothesis based on priors that others (such as myself) find objectionable."
23. Andrew: I agree that the prior on the alternative hypothesis is critical (and everyone who seriously thinks about this knows this).
But still, the essence of Lindley's paper and of this issue is how we compare a (relatively) sharp hypothesis (regarded as a "state of nature" that will affect our decisions) to a (relatively) vague hypothesis (with the same parenthetical comment applying).
Yes, you have to think seriously about the prior on the alternative hypothesis. Everyone knows this.
But your original objection was that exact point null hypotheses are implausible. We know this. We know how to evaluate whether they are a reasonable approximation to a plausible (tight but not
exact) null.
So, I am not sure exactly what you are arguing here. Are you arguing that exact point null hypotheses are almost always wrong? I agree. Are you arguing that you have to be careful about how you
assign a prior on the alternative hypothesis? I agree also.
But the Lindley paradox goes further. It says, assign your priors however you wish. You don't get to change them. Then take data and take data and take data… There will be times when the
posterior probability of the (approximate) point null will be at least (1-alpha), where you chose alpha very small in advance, and at the same time the classical test will reject at significance level alpha.
This will not happen, regardless of priors, for the Bayesian test. The essence of the Lindley paradox is that "sampling to a foregone conclusion" happens in the frequentist world, but not in the
Bayesian world.
So, I don't understand why you say, "my criticism of so-called Bayesian hypothesis testing is that it's insufficiently Bayesian."
What would make it more "Bayesian" to you? What is lacking in my analysis above?
24. Bill:
As I wrote in my discussion of Efron's paper, I suspect some of the differences have to do with what sorts of problems one is studying. I see the virtue of sharp hypotheses in astronomy–either
that smudge on the image is a planet, or it's not–and in genetics, and in some other fields. In the problems where I work, in social and environmental sciences, sharp hypotheses of this sort
don't come up at all.
I'm not questioning the mathematics of Jeffreys and Lindley; I'm questioning the relevance of the problem they are trying to solve.
25. "I completely agree with this quote from Susan Ellenberg, reported in the above article:
'You have to make a lot of assumptions in order to do any statistical test, and all of those are questionable.'
And being Bayesian doesn't get around that problem. Not at all."
Would nonparametric statistics, mixed with frequentist or Bayesian techniques, get around that problem more?
|
{"url":"http://andrewgelman.com/2011/04/02/so-called_bayes/","timestamp":"2014-04-21T14:40:44Z","content_type":null,"content_length":"63576","record_id":"<urn:uuid:4ed8b08b-be40-443a-9319-5d300db01732>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wolfram Demonstrations Project
Parametric Equation of a Circle in 3D
A circle in 3D is parameterized by six numbers: two for the orientation of its unit normal vector n, one for the radius r, and three for the circle center c.

While a 2D circle is parameterized by only three numbers (two for the center and one for the radius), in 3D six are needed. One set of parametric equations for the circle in 2D is given by

x(t) = c[x] + r cos(t),  y(t) = c[y] + r sin(t)

for a circle of radius r and center (c[x], c[y]).

In 3D, a parametric equation is

P(t) = c + r cos(t) u + r sin(t) (n × u)

for a circle of radius r, center c, and unit normal vector n (× is the cross product). Here, u is any unit vector perpendicular to n. Since there are an infinite number of vectors perpendicular to n, using a parametrized u is helpful. If the orientation is specified by a zenith angle θ and azimuth φ, then n, u, and n × u can have simple forms:

n = (sin θ cos φ, sin θ sin φ, cos θ)
u = (−sin φ, cos φ, 0)
n × u = (−cos θ cos φ, −cos θ sin φ, sin θ)
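A short Python sketch (my addition, not part of the Demonstration) implements this parameterization; the particular radius, center, and angles are arbitrary test values:

```python
import numpy as np

def circle_3d(radius, center, zenith, azimuth, num=100):
    """Points on a 3D circle whose unit normal is given by zenith/azimuth angles."""
    st, ct = np.sin(zenith), np.cos(zenith)
    sp, cp = np.sin(azimuth), np.cos(azimuth)
    n = np.array([st * cp, st * sp, ct])   # unit normal
    u = np.array([-sp, cp, 0.0])           # unit vector with u . n = 0
    nxu = np.cross(n, u)                   # completes the orthonormal frame
    t = np.linspace(0.0, 2.0 * np.pi, num)
    return (np.asarray(center)
            + radius * np.outer(np.cos(t), u)
            + radius * np.outer(np.sin(t), nxu))

pts = circle_3d(2.0, (1.0, -1.0, 0.5), zenith=0.7, azimuth=1.3)
# Sanity check: every point lies at distance `radius` from the center.
center = np.array([1.0, -1.0, 0.5])
print(np.allclose(np.linalg.norm(pts - center, axis=1), 2.0))
```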
|
{"url":"http://demonstrations.wolfram.com/ParametricEquationOfACircleIn3D/","timestamp":"2014-04-21T07:06:57Z","content_type":null,"content_length":"48135","record_id":"<urn:uuid:3ab265d6-3ed2-4be1-bba7-a27e0c083fad>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Evolutions of polarization and nonlinearities in an isotropic nonlinear medium
Optics Express, Vol. 16, Issue 11, pp. 8144-8149 (2008)
The evolutions of polarization and nonlinearities in an isotropic medium induced by the anisotropy of the third-order nonlinear susceptibility were studied experimentally and theoretically. The anisotropy of the imaginary part of the third-order susceptibility was verified to exist through the change of the ellipticity of the polarization ellipse in the isotropic nonlinear medium CS[2]. The changes of nonlinear refraction and nonlinear absorption depending upon the ellipticity of the polarization ellipse are also presented. Numerical simulations based on two coupled nonlinear Schrödinger equations (NLSE) provide an excellent quantitative agreement with the experimental results.
© 2008 Optical Society of America
1. Introduction

Since the pioneering experimental work of Maker, et al. [1], the dynamical evolution of the polarization state of light due to the anisotropy of the real part of the third-order nonlinear susceptibility χ^(3) (the components χ^(3)[xxyy], χ^(3)[xyxy], and χ^(3)[xyyx]) has been studied extensively in isotropic media as well as anisotropic media [2]. The existence of χ^(3)[xyyx] can induce a rotation of the axis position as a polarization ellipse propagates through the medium. Some nonlinear effects relating to nonlinear polarization rotation, such as bistable, unstable, and chaotic behaviors, were realized [8, 9]. A fundamental elliptically polarized vector soliton was also observed in the spatial domain in a CS[2] liquid slab planar waveguide [10]. However, reports on the anisotropy of the imaginary part of χ^(3) are few. Although several theoretical analyses [3] predicted that the anisotropy of the imaginary part of χ^(3) can lead to the ellipticity change of an elliptically polarized beam in isotropic media, to the best of our knowledge no experimental observation has supported such a prediction, and most experimental reports involved only anisotropic media [13, 14].

The anisotropy of χ^(3) can lead to a dynamical evolution of the polarization state of light; meanwhile, the change of the polarization state also has a drastic effect on the third-order susceptibility. However, most reports on polarization-dependent nonlinear refraction and nonlinear absorption were concentrated on anisotropic media [13, 14]. In this letter, we present the anisotropy of the imaginary part of χ^(3) and the ellipticity-dependent nonlinear refraction and nonlinear absorption in the isotropic medium CS[2]. Like the anisotropy of the real part of χ^(3), the anisotropy of the imaginary part is very important to the evolution of nonlinear polarization dynamics.
2. Experimental details

Our experimental setup is shown in Fig. 1. A commercial optical parametric oscillator (Continuum Panther Ex OPO) pumped by the third harmonic (355 nm) from a Continuum Surelite-II is used to generate 4–5 ns pulses with a repetition rate of 10 Hz, tunable in the range of 420–2500 nm. A Glan prism (G1) was used to generate a linearly polarized light. The elliptic polarization state of the input beam was adjusted by the angle between G1 and a quarter-wave plate. The input beam has a nearly Gaussian transverse shape and was focused by a 150 mm focal length lens to form a beam waist of 19 µm. To determine the axis position of the global polarization state, we directly measured the transmitted energy as the analyzer G2 rotates. The experiment is carried out on the isotropic nonlinear medium CS[2]. This molecule has been thoroughly studied in nonlinear ellipse rotation and exhibits a large molecular reorientation nonlinearity. The nonlinear susceptibility tensor of CS[2] at 440 nm, 470 nm, and 532 nm was studied because CS[2] exhibits a large nonlinear absorption at 440 nm and 470 nm in the nanosecond regime [15]. The 5 mm length CS[2] cell was fixed on the focus.
3. Results and discussions

Figures 2(a)–2(c) show the experimental normalized transmittance T as a function of the orientation of G2 for nonlinear output and linear output at 440 nm, 470 nm, and 532 nm. The values of the ellipticity e of the polarization ellipse can be obtained by using the relationship e^2 = T[min]/T[max], where T[min] and T[max] are the minimum and maximum of the transmittance T, respectively. The x-coordinates relative to T[min] and T[max] represent the positions of the minor and major axes of the polarization ellipse. Relative to the case of linear output, an obvious shift of the axis position of the polarization ellipse in nonlinear output can be observed at these wavelengths, which indicates the existence of the anisotropy of Re(χ^(3)). This is consistent with the large molecular reorientation nonlinearity of CS[2] observed in the subnanosecond and nanosecond regimes. Re(χ^(3)[xyyx]) mainly contributes to the shift of the axis position, i.e., the rotation of the polarization ellipse. Additionally, the change of T[min] is much larger than that of T[max], as shown in Figs. 2(a) and 2(b) at 440 nm and 470 nm. Therefore, the ellipticity e changes as an elliptically polarized beam propagates through the medium, and the nonlinear absorption is anisotropic due to Im(χ^(3)[xyyx]) ≠ 0.

To picture the evolutions of the polarization state more clearly, the ellipticity e and the rotation angle θ as functions of the input intensity I[0] at 440 nm, 470 nm and 532 nm are given in Figs. 2(d), 2(e), and 2(f), respectively. The rotation angle θ increases with I[0] at these wavelengths due to Re(χ^(3)[xyyx]) ≠ 0. At 440 nm and 470 nm, the value of e also increases with I[0], and the slope of the curve e(I[0]) at 440 nm is larger than that at 470 nm. However, the value of e remains almost unchanged at 532 nm since CS[2] has no obvious nonlinear absorption there. The increase of e with I[0] further verifies the existence of the anisotropy of Im(χ^(3)) of CS[2] at 440 nm and 470 nm.
To model the evolutions of polarization and nonlinearities as an elliptically polarized beam propagates in a nonlinear medium, the following coupled NLSEs are employed:

∂E[±]/∂z = (i/2k)(∂^2/∂r^2 + (1/r)∂/∂r)E[±] + (2iπk/n[0]^2)χ[±]^(NL)E[±]     (1), (2)

where r is the radial coordinate, z is the longitudinal coordinate, n[0] is the linear refractive index, k = 2πn[0]/λ is the wave vector, and λ is the wavelength. E[+] and E[-] are the left- and right-hand circularly polarized components of the electric field. Following the notation for the nonlinear polarization of Maker, et al. [1], the effective nonlinear susceptibilities of the two circular components can be written as

χ[±]^(NL) = A|E[±]|^2 + (A + B)|E[∓]|^2     (3)

where the coefficients A and B are combinations of the tensor components χ^(3)[xyxy], χ^(3)[xxyy], and χ^(3)[xyyx]. The solid lines in Fig. 2 are the results of numerical simulations using Eqs. (1) and (2). The parameters used in the simulations are Re(A) = 13, 8, 3.5×10; Im(A) = 7, 2.5, 0×10; Re(B) = 27, 20, 14×10; and Im(B) = 19, 6, 0×10 at 440 nm, 470 nm, and 532 nm, respectively. The total third-order nonlinear susceptibility |χ^(3)| (with χ^(3) = (2A + B)/6) is 10.1, 6.3, and 3.5×10 at 440 nm, 470 nm, and 532 nm, respectively. The value of |χ^(3)| at 532 nm agrees well with that in a previous report [17].

As mentioned above, the nonlinear susceptibility component B induces the evolution of the ellipticity and axis position of the polarization ellipse. Meanwhile, a different polarization state also affects the change of nonlinear refraction and nonlinear absorption [18]. Using the relationship between the dielectric constant ε and the nonlinear susceptibility χ^(NL), ε = ε[0] + 4πχ^(NL), where ε[0] is the linear dielectric constant, we can write the differences in refractive index (Δn[±]) and absorption (Δα[±]) due to the nonlinear reaction as follows:

Δn[±] = (2π/n[0])Re(χ[±]^(NL)),  Δα[±] = (4πk/n[0])Im(χ[±]^(NL))     (4), (5)

Note that the differences of refraction and absorption between the two circular components depend upon only the coefficient B and not the coefficient A.

Unlike in an anisotropic medium, the changes of nonlinear refraction and absorption in an isotropic medium depend only on the ellipticity of the polarization ellipse, not on the polarization orientation. First, for circularly polarized light with e=1, only one of the two circular components is present, and the changes in refractive index and absorption are given by Δn = 2π/n[0] Re(A)|E|^2 and Δα = 4πk/n[0] Im(A)|E|^2. Second, for linearly polarized light with e=0, the changes of refractive index and absorption are given by Δn = 2π/n[0] Re(A+B/2)|E|^2 and Δα = 4πk/n[0] Im(A+B/2)|E|^2, since linearly polarized light is a combination of equal amounts of left- and right-hand circular components (i.e., |E[+]|^2 = |E[-]|^2), where E denotes the total field amplitude of the linearly polarized radiation with |E|^2 = 2|E[+]|^2 = 2|E[-]|^2.
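To illustrate the model numerically, here is a stripped-down split-step sketch (my addition, not the authors' code) that propagates the two circular components through a thin cell, keeping only the nonlinear term of Eqs. (1)-(2) and ignoring diffraction; the coefficient values and prefactor are placeholders:

```python
import numpy as np

def propagate(Ep, Em, A, B, L=0.5, steps=2000, g=1.0):
    """March E+ and E- through the cell using only the nonlinear term of
    Eqs. (1)-(2); g lumps the 2*pi*k/n0^2 prefactor (placeholder units)."""
    dz = L / steps
    for _ in range(steps):
        chi_p = A * abs(Ep) ** 2 + (A + B) * abs(Em) ** 2  # Eq. (3), + component
        chi_m = A * abs(Em) ** 2 + (A + B) * abs(Ep) ** 2  # Eq. (3), - component
        Ep = Ep * np.exp(1j * g * chi_p * dz)  # Im parts give nonlinear absorption
        Em = Em * np.exp(1j * g * chi_m * dz)
    return Ep, Em

# Elliptical input; complex A and B model anisotropic absorption (made-up values).
Ep, Em = propagate(1.0 + 0j, 0.4 + 0j, A=1.0 + 0.3j, B=2.5 + 1.0j)
e = (abs(Ep) - abs(Em)) / (abs(Ep) + abs(Em))  # ellipticity from |E+|, |E-|
theta = 0.5 * np.angle(np.conj(Ep) * Em)       # ellipse orientation (sign per convention)
print(f"e = {e:.3f}, theta = {theta:.3f} rad")
```

Because Im(B) makes the two circular components decay at different rates, e drifts during propagation while Re(B) rotates the ellipse, mirroring the trends in Fig. 2.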
Open- and closed-aperture Z-scan [17] experiments were carried out to determine the ellipticity-dependent nonlinear refraction and absorption. The experimental results at 440 nm are shown in Fig. 3. The on-axis intensity I[0] used in our Z-scan experiments is 3.2×10. For a linearly polarized light source in our Z-scan measurements, the nonlinear refraction coefficient n[2lin] and absorption coefficient β[lin] were determined to be 13.5×10 cm^2/W and 17.4×10 cm/W, respectively, more than twice those of circular polarization, n[2cir] and β[cir]. Moreover, from the results of the Z-scan with circularly polarized light, one can obtain the value of the complex nonlinear susceptibility component A, because the changes of refractive index and absorption depend only on A in the case of circular polarization. The real and imaginary parts of A are 13×10 and 8.0×10, respectively. The coefficient B can then be determined from the Z-scan experimental results of linear or elliptical polarization, and the values of Re(B) and Im(B) are 24×10 and 18×10, respectively, which agree well with the results obtained from the nonlinear polarization experiments shown in Fig. 2.

Once n[2lin] of linearly polarized light and n[2cir] of circularly polarized light are determined, from Eqs. (4) and (5) one can obtain the expressions of n[2ell] and β[ell] of elliptically polarized light as functions of the ellipticity (Eqs. (6) and (7)). The Z-scan curves of nonlinear refraction and absorption with e = 0.41 are shown in Fig. 3, from which we can get the corresponding n[2ell] and β[ell]. Other fitting parameters are the same as those of linearly polarized light.
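For intuition about how Eqs. (6) and (7) could look, here is a sketch of one plausible derivation (my assumption: the measured nonlinearity is the intensity-weighted average of the two circular components; the paper's actual expressions may differ):

```latex
% Splitting the total intensity between circular components via the ellipticity e:
%   |E_{\pm}|^2 = \frac{(1 \pm e)^2}{2(1+e^2)}\,|E|^2,
% the intensity-weighted effective susceptibility from Eq. (3) is
\chi^{\mathrm{eff}}
  = \frac{\chi^{NL}_{+}\,|E_{+}|^{2} + \chi^{NL}_{-}\,|E_{-}|^{2}}{|E|^{2}}
  = \left[A + \frac{B}{2}\,\frac{(1-e^{2})^{2}}{(1+e^{2})^{2}}\right]|E|^{2}
% which reduces to (A + B/2)|E|^2 at e = 0 (linear) and to A|E|^2 at e = 1
% (circular), so n_2(e) and \beta(e) interpolate between their linear and
% circular values with the factor (1-e^2)^2/(1+e^2)^2.
```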
Figure 4 gives the experimental and theoretical results of the changes of n[2] and β as functions of e at 440 nm, 470 nm, and 532 nm. The symbols represent the experimental results, and they agree well with the solid lines obtained by theoretical simulations using Eqs. (6) and (7). The change of the nonlinearities indicates that nonlinear refraction and nonlinear absorption are tunable by controlling the ellipticity of the elliptically polarized beam.

The relative magnitude of B with respect to A depends upon the nature of the physical process behind the optical nonlinearities. For molecular orientation nonlinearities the ratio of the real part of B to that of A is 6; this is the case for the optical nonlinearities of CS[2] in the nanosecond and picosecond regimes. However, the ratios Re(B)/Re(A) and Im(B)/Im(A) obtained in our nanosecond experiments at 440 nm are 2.1 and 2.7, respectively; Re(B)/Re(A) = 2.5 and Im(B)/Im(A) = 2.4 were obtained at 470 nm, and Re(B)/Re(A) = 4 was obtained at 532 nm. The decreasing ratio of B to A indicates that another nonlinear mechanism should exist in the nanosecond regime besides molecular orientation. The origin of the different physical characters of the two contributions (A and B) to the nonlinear susceptibility can be understood in terms of the energy levels. One-photon-resonant processes contribute only to the coefficient A, while two-photon-resonant processes contribute to both coefficients A and B. In Ref. [15] we reported that the large nonlinear absorption of CS[2] in the short-wavelength region and the nanosecond regime can arise from a combination of two-photon absorption and the excited-state absorption induced by two-photon absorption. The excited-state nonlinearity can cause the decrease of B/A since such effective third-order nonlinearities are sequential one-photon processes and independent of the change of polarization state.
4. Conclusion
In summary, we present the evolutions of polarization and nonlinearities in the isotropic medium CS₂. In the early sixties, Maker et al. planned to simultaneously study the polarization dependence of the intensity-induced absorption and the intensity-induced rotation in order to obtain accurate relative values of Im(A), Re(A), Im(B) and Re(B). In our work the complex third-order susceptibility tensors of CS₂ at 440 nm, 470 nm, and 532 nm were measured. To our knowledge, our results offer the first experimental evidence of Im(B)-induced nonlinear polarization dynamics and ellipticity-dependent nonlinearities in an isotropic medium. Further experiments aimed at studying the influence of spatial-temporal effects on self-induced polarization changes due to the complex third-order nonlinear susceptibility are expected to sharpen this analysis. Many interesting extensions are possible, including the tuning of optical limiting, optical switching, and photonic crystals by controlling the polarization state.
This work is supported by the Natural Science Foundation of China (No. 60708020, 10574075), Chinese National Key Basic Research Special Fund (No. 2006CB921703), and the Program for Changjiang
Scholars and Innovative Research Team in University (IRT0149).
References and links
1. P. D. Maker, R. W. Terhune, and C. M. Savage, “Intensity-dependent changes in the refractive index of liquids,” Phys. Rev. Lett. 12, 507–509 (1964). [CrossRef]
2. P. D. Maker and R. W. Terhune, “Study of optical effects due to an induced polarization third order in the electric field strength,” Phys. Rev. 137, A801–818 (1965). [CrossRef]
3. P. X. Nguyen and G. Rivoire, “Evolution of the polarization state of an intense electromagnetic field in a nonlinear medium,” Opt. Acta. 25, 233–246 (1978). [CrossRef]
4. P. X. Nguyen, J. L. Ferrier, J. Gazengel, and G. Rivoire, “Polarization of picosecond light pulses in nonlinear isotropic media,” Opt. Commun. 46, 329–333 (1983). [CrossRef]
5. A. J. van Wonderen, “Influence of transverse effect on self-induced polarization changes in an isotropic Kerr medium,” J. Opt. Soc. Am. B 14, 1118–1130 (1997). [CrossRef]
6. M. Lefkir and G. Rivoire, “Influence of transverse effects on measurement of third-order nonlinear susceptibility by self-induced polarization state changes,” J. Opt. Soc. Am. B 14, 2856–2864
(1997). [CrossRef]
7. M. V. Tratnik and J. E. Sipe, “Nonlinear polarization dynamics. I. The single-pulse equations,” Phys. Rev. A 35, 2965–2975 (1987). [CrossRef] [PubMed]
8. D. David, D. D. Holm, and M. V. Tratnik, “Hamiltonian chaos in nonlinear optical polarization dynamics,” Phys. Rep. 187, 281–367 (1990). [CrossRef]
9. A. L. Gaeta and R. W. Boyd, “Transverse instabilities in the polarizations and intensities of counterpropagating light waves,” Phys. Rev. A 48, 1610–1624 (1993). [CrossRef] [PubMed]
10. M. Delqué, G. Fanjoux, and T. Sylvestre, “Polarization dynamics of the fundamental vector soliton of isotropic Kerr media,” Phys. Rev. E 75, 016611 (2007). [CrossRef]
11. M. Delqué, T. Sylvestre, H. Maillotte, C. Cambournac, P. Kockaert, and M. Haelterman, “Experimental observation of the elliptically polarized fundamental vector soliton of isotropic Kerr media,”
Opt. Lett. 30, 3383–3385 (2005). [CrossRef]
12. C. Cambournac, T. Sylvestre, H. Maillotte, B. Vanderlinden, P. Kockaert, Ph. Emplit, and M. Haelterman, “Symmetry-Breaking Instability of Multimode Vector Solitons,” Phys. Rev. Lett. 89, 083901
(2002). [CrossRef] [PubMed]
13. R. DeSalvo, M. Sheik-Bahae, A. A. Said, D. J. Hagan, and E. W. Van Stryland, “Z-scan measurements of anisotropy of nonlinear refraction and absorption in crystals,” Opt. Lett. 18, 194–196 (1993).
[CrossRef] [PubMed]
14. Sean J. Wagner, J. Meier, A. S. Helmy, J. Stewart Aitchison, M. Sorel, and D. C. Hutchings, “Polarization-dependent nonlinear refraction and two-photon absorption in GaAs/AlAs superlattice
waveguides below the half-bandgap,” J. Opt. Soc. Am. B 24, 1557–1563 (2007). [CrossRef]
15. Z. B. Liu, Y. L. Liu, B. Zhang, W. Y. Zhou, J. G. Tian, W. P. Zang, and C. P. Zhang, “Nonlinear absorption and optical limiting properties of carbon disulfide in short-wavelength region,” J. Opt.
Soc. Am. B 24, 1101–1104 (2007). [CrossRef]
16. R. W. Boyd, Nonlinear Optics, second edition (Academic Press, San Diego, 2003).
17. M. Sheik-Bahae, A. A. Said, T. H. Wei, D. J. Hagan, and E. W. Van Stryland, “Sensitive measurement of optical nonlinearities using a single beam,” IEEE J. Quantum Electron. 26, 760–769 (1990).
18. Z. B. Liu, X. Q. Yan, J. G. Tian, W. Y. Zhou, and W. P. Zang, “Nonlinear ellipse rotation modified Z-scan measurements of third-order nonlinear susceptibility tensor,” Opt. Express 15,
13351–13359 (2007). [CrossRef] [PubMed]
OCIS Codes
(190.0190) Nonlinear optics : Nonlinear optics
(190.3270) Nonlinear optics : Kerr effect
(190.4180) Nonlinear optics : Multiphoton processes
(190.4400) Nonlinear optics : Nonlinear optics, materials
ToC Category:
Nonlinear Optics
Original Manuscript: March 24, 2008
Revised Manuscript: April 17, 2008
Manuscript Accepted: April 18, 2008
Published: May 20, 2008
Zhi-bo Liu, Xiao-Qing Yan, Wen-Yuan Zhou, and Jian-Guo Tian, "Evolutions of polarization and nonlinearities in an isotropic nonlinear medium," Opt. Express 16, 8144-8149 (2008)
Seifert fiberable manifolds with several Seifert fiberings
I have a question on Theorem 2.3 on page 34 of Hatcher's notes on 3-manifolds: Hatcher: Notes on Basic 3-Manifold Topology.
Regarding the class d), it follows from Proposition 2.1 on page 31 that $M(0,0;1/2,-1/2,\alpha/\beta)$ is fiber-diffeomorphic to $M(0,0;1/2,-1/2,\alpha'/\beta')$ if and only if $\alpha/\beta=\pm\alpha'/\beta'$, if and only if $M(-1,0;\beta/\alpha)$ is fiber-diffeomorphic to $M(-1,0;\beta'/\alpha')$. But it is not clear to me that for different $q,q'\in\mathbb{Q}$ with $q,q'\geq 0$ the manifolds $M(-1,0;q)$ and $M(-1,0;q')$ are not diffeomorphic, and also not diffeomorphic to the manifolds under a), b), c), e), i.e. not diffeomorphic to the solid torus, the twisted $I$-bundle or the twisted $S^1$-bundle over the Klein bottle, or any lens space.
Is this true anyway? And do those manifolds under d) have a name (i.e. are known under a certain name like lens spaces)?
at.algebraic-topology 3-manifolds seifert-fiber-spaces
Did you try looking at their fundamental groups? – janmarqz Feb 16 at 20:36
1 Answer
As explained on page 37 of the notes, a complete proof of the full classification of orientable Seifert manifolds (Theorem 2.2) is not given in the notes. What is missing is the classification of the manifolds that fiber over $S^2$ with exactly three multiple fibers. The statement here is that these Seifert manifolds are all distinguished by their fundamental groups (which are not cyclic so they are not lens spaces, $S^3$, or $S^1\times S^2$), and their fiberings are unique apart from the exceptions listed in part (d). A proof of this can be found in the reference given, namely Orlik's Springer Lecture Notes volume #291.
The manifolds of type (d) in Theorem 2.2 are among these manifolds. They are closed manifolds so they are not of type (a) or (b). They are also not of type (e) since they do not contain incompressible tori, as shown earlier in the notes. They are not of type (c) since their fundamental groups are noncyclic as noted above. Manifolds of type (d) have 2-sheeted covering spaces which are lens spaces, so they have finite fundamental group. Sometimes they are called prism manifolds, from a way of constructing them by identifying faces of a prism.
Dear Prof. Hatcher, thank you for your reply. Did I understand correctly that each of those exceptional prism manifolds has exactly two Seifert fibrations? – Werner Thumann Feb 18 at 20:36
@Werner Thumann: That is correct, each prism manifold has exactly two Seifert fiberings. This is an interesting contrast to lens spaces, $S^3$, and $S^1\times S^2$, each of which has
infinitely many different Seifert fiberings. – Allen Hatcher Feb 19 at 15:15
Thank you very much! – Werner Thumann Feb 19 at 16:48
Burien, WA ACT Tutor
Find a Burien, WA ACT Tutor
...I have experience tutoring in this area at a high school level. In my approach to the biological sciences, I focus largely on the terminology used in the subject. Learning terms and their
definitions is a critical step in understanding biology.
22 Subjects: including ACT Math, reading, English, writing
...I am currently applying to medical school and I enjoy teaching in my free time. I am applying to medical school because I feel that it is a great way to integrate my passion for teaching with my passions for science and social justice. In the meantime, I am excited to be able to work with so many fantastic students!
27 Subjects: including ACT Math, chemistry, reading, writing
...I make sure to fill in those gaps and then help the student make as much progress as possible. I have tutored in math, especially SAT math, for many years, and am currently tutoring college
math for ITT Technical Institute. Students who struggle with math have almost always had a teacher who forgot to tell them about a crucial fact or step.
38 Subjects: including ACT Math, English, writing, geometry
...I am open to teach any language that also has value in the real world. I have worked as an IT industry professional for over 30 years. I have programmed in various languages, installed and
configured hardware and software, and recently have focused on computer network security issues.
43 Subjects: including ACT Math, chemistry, physics, calculus
...I have been working in MS Windows for over 20 years. I have been coaching students from the community college in C#. I have found that irrespective of the language used, the challenge in tackling assignments and in understanding programming lies in breaking down problems and thinking logically...
16 Subjects: including ACT Math, geometry, algebra 1, algebra 2
oglm: Ordinal Generalized Linear Models
Richard Williams, University of Notre Dame
Note: Those who are interested in oglm may also be interested in its older sibling, gologit2. Much of the material on the gologit2 page will also apply to oglm. oglm requires Stata 9 while
gologit2 requires Stata 8.2.
Readings. Allison (1999) showed that comparisons of logit and probit coefficients across groups were potentially problematic. Using Heterogeneous Choice Models To Compare Logit and Probit
Coefficients Across Groups (revised March 2009; a final version is in the May 2009 Sociological Methods and Research) critiques Allison's proposed solution and shows how it can be improved upon
using heterogeneous choice models that can be estimated by the author's Stata program oglm (Ordinal Generalized Linear Models). Download these zip files if you want to replicate the Simulations
of Table 4 or the Other Analyses.
This working paper (last updated on October 17, 2010; forthcoming in The Stata Journal), Estimating heterogeneous choice models with oglm, illustrates how oglm can be used to estimate
heterogeneous choice and related models. It shows that two other models that have appeared in the literature (Allison’s model for group comparisons and Hauser and Andrew’s logistic response model
with proportionality constraints) are special cases of a heterogeneous choice model and alternative parameterizations of it. The paper further argues that heterogeneous choice models may
sometimes be an attractive alternative to other ordinal regression models, such as the generalized ordered logit model estimated by gologit2. Finally, the paper offers guidelines on how to
interpret, test and modify heterogeneous choice models. If you want to replicate the analyses, you can download oglm_examples.do and lrpc.dta. I thank J. Scott Long, Robert Hauser and Megan
Andrew for making their data sets available for this paper. (NOTE: Several additional variables not used in the paper can be found in the 74 MB lrpcfull.dta. More information on the OCG II data
used in the lrpc files can be found at ICPSR.)
This paper from The Stata Journal discusses the closely related gologit2 program, which estimates many of the same models as oglm. Also, this July 2006 presentation (powerpoint or pdf; related
handout) discusses "Interpreting and using heterogeneous choice & generalized ordered logit models." It raises several issues that users of gologit may not be aware of and argues that
heterogeneous choice models (which can be estimated with oglm) can sometimes be an attractive alternative to gologit models.
I also highly recommend this working paper by Keele and Park. It reviews the literature on some of the models estimated by oglm , and also critiques these methods. I asked the authors how much I
should be worried about these problems, and they told me that "I wouldn't worry that much in that the ordinal probit is quite a bit better overall. In political science people use fairly
complicated specifications for the variance model. I think if you know pretty clearly that the heterogeneity is from some set of groups then the specification is a lot easier to get right." This
chapter on SPSS Plum (which oglm is patterned after) is also quite good, but keep in mind that oglm and PLUM do some things differently. The oglm help file includes several examples and documents
its options.
Overview. oglm estimates Ordinal Generalized Linear Models. When these models include equations for heteroskedasticity they are also known as heterogeneous choice/ location-scale /
heteroskedastic ordinal regression models. oglm supports multiple link functions, including logit (the default), probit, complementary log-log, log-log and cauchit.
When an ordinal regression model incorrectly assumes that error variances are the same for all cases, the standard errors are wrong and (unlike OLS regression) the parameter estimates are biased.
Heterogeneous choice/ location-scale models explicitly specify the determinants of heteroskedasticity in an attempt to correct for it. Further, these models can be used when the variance/
variability of underlying attitudes is itself of substantive interest. Alvarez and Brehm (1995), for example, argued that individuals whose core values are in conflict will have a harder time
making a decision about abortion and will hence have greater variability/error variances in their responses.
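For readers who want the model written out explicitly, the usual location-scale parameterization behind heterogeneous choice models can be sketched as follows. This is an editorial summary in the spirit of Williams (2009), not text from the oglm documentation:

% Location-scale (heterogeneous choice) ordinal regression model.
% kappa_j are cutpoints; F is the CDF of the chosen link (logistic by
% default in oglm); the variance equation exp(z_i * gamma) lets the
% error scale differ across cases.
\[
P(y_i = j) \;=\;
F\!\left(\frac{\kappa_j - \mathbf{x}_i\boldsymbol{\beta}}
              {\exp(\mathbf{z}_i\boldsymbol{\gamma})}\right)
\;-\;
F\!\left(\frac{\kappa_{j-1} - \mathbf{x}_i\boldsymbol{\beta}}
              {\exp(\mathbf{z}_i\boldsymbol{\gamma})}\right).
\]
% With gamma = 0 this collapses to the ordinary ologit/oprobit model,
% which is why oglm can be used to test the homoskedasticity constraint.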
Several special cases of ordinal generalized linear models can also be estimated by oglm, including the parallel lines models of ologit and oprobit (where error variances are assumed to be
homoskedastic), the heteroskedastic probit model of hetprob (where the dependent variable must be a dichotomy and the only link allowed is probit), the binomial generalized linear models of
logit, probit and cloglog (which also assume homoskedasticity), as well as similar models that are not otherwise estimated by Stata. This makes oglm particularly useful for testing whether
constraints on a model (e.g. homoskedastic errors) are justified, or for determining whether one link function is more appropriate for the data than are others.
Other features of oglm include support for linear constraints, making it possible, for example, to impose and test the constraint that the effects of x1 and x2 are equal. oglm works with several
prefix commands, including by, nestreg, xi, svy and sw. Its predict command includes the ability to compute estimated probabilities. The actual values taken on by the dependent variable are
irrelevant except that larger values are assumed to correspond to "higher" outcomes. Up to 20 outcomes are allowed. oglm was inspired by the SPSS PLUM routine but differs somewhat in its
terminology, and labeling of links.
To install oglm : From within Stata, type
findit oglm
Suggested citations if using oglm in published work
oglm is not an official Stata command. It is a free contribution to the research community, like a paper. Please cite it as such.
Williams, Richard. 2009. "Using Heterogeneous Choice Models To Compare Logit and Probit Coefficients Across Groups." Sociological Methods & Research 37(4): 531-559. A pre-publication version can
be found at http://www.nd.edu/~rwilliam/oglm/RW_Hetero_Choice.pdf .
Williams, Richard. 2010. "Estimating heterogeneous choice models with oglm." Working paper. Last updated October 17, 2010. http://www.nd.edu/~rwilliam/oglm/oglm_Stata.pdf . A final version is
forthcoming in The Stata Journal.
I would appreciate an email notification if you use oglm in published work, as well as a citation of one or more of the sources listed above. Also feel free to email me if you have comments about
the program or its documentation.
Sam Vandervelde's Math Home Page
I am an Associate Professor of Mathematics at St. Lawrence University, located in Canton, NY; I joined the department in the fall of 2007. I enjoy any beautiful mathematical ideas, although I tend
to be inclined towards problems in number theory, graph theory, combinatorics, and Euclidean geometry. Current research projects include work on paradoxical decompositions and new partition
identities, while my dissertation focused on Mahler measure.
I will be teaching MATH 136 (Integral Calculus) and MATH 205 (Multivariable Calculus) during the Fall 2013 semester. At St. Lawrence I have also taught Bridge to Higher Math, College Geometry,
Number Theory, and a Combinatorics seminar.
Contact Info • Publications • Current CV • Snow Bowl
High School Math Contests • Math Circles
I have recently completed a textbook for use with the proofs course taught at St.Lawrence University. My goal in writing the text was to create an excellent, engaging, inexpensive book for my
students. Click on the picture of the cover at left for more information about the book, such as how to place an order or how to download portions of the book for review, including the front
matter, first three chapters, and answers for all corresponding exercises.
Contact Info
Dr. Sam Vandervelde Office: Valentine 212
Dept of Math, CS & Stats Phone: 315-229-5946
St. Lawrence University Fax: 315-229-7413
23 Romoda Drive Email: svandervelde@stlawu.edu
Canton, NY 13617
• Fun With FWANADS, Math Horizons, 21(3) (2013) 10–11.
• On the Divisibility of Fibonacci Sequences by Primes of Index Two, Fibonacci Quarterly, 50(3) (2012) 207–216. PDF
• Jacobi Sum Matrices, American Mathematical Monthly, 119(2) (2012) 100–115. PDF
• Balanced Partitions, Ramanujan Journal, 23(1) (2010) 297–306. PDF
• Bridge to Higher Mathematics, self published at www.lulu.com, 2010. More info
• Circle in a Box, American Mathematical Society, Providence, RI, 2009. Link to PDF
• Expected Value Road Trip, Mathematical Intelligencer, 30(2) (2008) 17–18. PDF
• The Mahler Measure of Parametrizable Polynomials, Journal of Number Theory, 128(8) (2008) 2231–2250. PDF
• A Formula for the Mahler Measure of axy+bx+cy+d, Journal of Number Theory, 100(1) (2003) 184–202. PDF
• Mathematics as a Liberal Art, Journal of Education, 183(3) (2002) 7–15.
• The First Five Years, Greater Testing Concepts, Cambridge, MA, 2004. Order
• The Mandelbrot Problem Book, Greater Testing Concepts, Cambridge, MA, 2002. Order
• Mandelbrot Morsels, Greater Testing Concepts, Potsdam, NY, 2010. Order
To appear
• A Rational Function Whose Integral Values Are Sums of Two Squares, Rocky Mountain Journal of Mathematics. PDF
Curriculum Vitae
My curriculum vitae was last updated during the summer of 2013. PDF
The Snow Bowl
During the fall of 2007 I initiated a friendly math rivalry among Colgate, Hamilton, Skidmore, and St. Lawrence. The Snow Bowl rotates among the math departments at these four schools, awarded
annually to the school whose top five students have the highest total score on the Putnam Competition. The bowl returned to SLU for the 2010-11 academic year. The purpose of the rivalry is to
promote interest and participation in mathematical problem solving for students at these schools.
High School Math Competitions
I am the author and coordinator of the Mandelbrot Competition, a math contest taken by over 6000 high school students from across the country last year. I oversee all aspects of the competition,
including test composition, web site maintenance, and contest administration. My goal is to introduce students to exciting topics outside the normal curriculum and engage them in mathematical
writing. I also write questions for and serve on the committee that produces the US Math Olympiad (USAMO).
Math Circles
I am an enthusiastic supporter and promoter of math circles, an extracurricular activity which brings students interested in mathematics together with professional mathematicians who are able to
engage them in exciting mathematical investigations. I founded the Stanford Math Circle in 2005 and recently led a math circle at A.A. Kingston Middle School in Potsdam, NY. I was also the first
director of the board that created The Teacher's Circle, an analogous activity for middle school math teachers. I organized a minicourse at JMM 2013 designed to aid participants in launching their
own successful math circles. Most recently I was one of the organizers for the Great Circles Workshop hosted by MSRI in April 2009. For more information, visit the National Association of Math
Circles web site.
Definitions for octal (ˈɒk tl)
This page provides all possible meanings and translations of the word octal
Random House Webster's College Dictionary
oc•tal (ˈɒk tl) (adj.)
1. of or pertaining to a number system with base 8, employing the numerals 0 through 7.
Category: Computers, Math
Origin of octal:
1935–40; < L oct(ō) or Gk okt(ṓ) eight+ -al1
Princeton's WordNet
1. octal(adj)
of or pertaining to a number system having 8 as its base
"an octal digit"
1. octal(Noun)
The number system that uses the eight digits 0, 1, 2, 3, 4, 5, 6, 7.
2. octal(Adjective)
Concerning numbers expressed in octal or mathematical calculations performed using octal.
1. Octal
The octal numeral system, or oct for short, is the base-8 number system, and uses the digits 0 to 7. Octal numerals can be made from binary numerals by grouping consecutive binary digits into groups of three. For example, the binary representation for decimal 74 is 1001010, which can be grouped into 1 001 010 – so the octal representation is 112. In the decimal system each decimal place is a power of ten: for example, 74 = 7×10¹ + 4×10⁰. In the octal system each place is a power of eight: for example, 112 in octal means 1×8² + 1×8¹ + 2×8⁰. By performing this calculation in the familiar decimal system we see why 112 in octal is equal to 64+8+2 = 74 in decimal.
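The remainder-and-divide procedure behind that conversion is easy to mechanize. The short C++ sketch below is an editorial addition, not part of the dictionary entry; the function name to_octal is ours:

#include <iostream>
#include <string>

// Convert a non-negative integer to octal by repeatedly taking the
// remainder modulo 8 (the last octal digit) and dividing by 8,
// prepending each digit as it is found.
std::string to_octal(unsigned int n) {
    if (n == 0) return "0";
    std::string digits;
    while (n > 0) {
        digits.insert(digits.begin(), static_cast<char>('0' + n % 8));
        n /= 8;
    }
    return digits;
}

int main() {
    std::cout << to_octal(74) << '\n';  // prints 112, matching the example above
    return 0;
}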
Mechanics Slope Q
January 22nd 2008, 04:10 PM #1
Junior Member
Apr 2007
Mechanics Slope Q
The grey circle is a pivot, and the ground is rough.
Why are the vertical and horizontal components of the reaction "R", Rcos(a) and Rsin(a) respectively?
Last edited by Apex; January 22nd 2008 at 04:11 PM. Reason: attachment
A little basic geometry:
The horizontal line at the bottom of the diagram is parallel to the horizontal line coming from where R meets the inclined plane (the Rx component). Thus the angle between the Rx component and the inclined plane is the same as the angle between the inclined plane and the horizontal, a. (Alternate interior angles are equal.)
Since R makes a right angle with the inclined plane, the angle between R and Rx is 90 - a. Again, since Rx and Ry are perpendicular, the angle between R and Ry is a. Now find your components. The angle a is on the side adjacent to the Ry component, so $R_y = R\cos(a)$, etc.
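For reference, the components can be summarized in one line (an editorial addition to the thread, restating the geometry above):

% Reaction R normal to a plane inclined at angle a to the horizontal;
% R therefore makes angle a with the vertical.
\[
R_x = R\sin a \quad\text{(horizontal)}, \qquad
R_y = R\cos a \quad\text{(vertical)}, \qquad
R_x^2 + R_y^2 = R^2 .
\]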
Makes sense now
percentile help with CDF
November 6th 2009, 02:14 PM #1
Junior Member
Sep 2009
percentile help with CDF
I need to understand part b.
How did they get n(p)? In other words, how did they go from
p = F(n(p))
to n(p)? See the attachment!
That's just a quadratic equation. Let x be the $p^{\text{th}}$ percentile,
so $p=F(x)$, i.e. $p=2\left(x+\frac{1}{x}-2\right)$.
Multiply by x to create a quadratic equation:
$px=2(x^2+1-2x)$
$x^2-\left(2+\frac{p}{2}\right)x+1=0$
Now solve for x and discard the extraneous root.
Last edited by matheagle; November 6th 2009 at 10:36 PM.
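To make the "discard the bad solution" step concrete, here is a small numerical sketch (an editorial addition; the CDF F(x) = 2(x + 1/x − 2) on [1, 2] is taken from the thread, and the root choice follows from that support — the two roots of the quadratic are reciprocals, so only the larger one lies in [1, 2]):

#include <cmath>
#include <cstdio>

// p-th percentile of F(x) = 2(x + 1/x - 2), valid for 1 <= x <= 2.
// Solving p = F(x) gives x^2 - (2 + p/2)x + 1 = 0; keep the larger root.
double percentile(double p) {
    double b = 2.0 + p / 2.0;
    return (b + std::sqrt(b * b - 4.0)) / 2.0;  // the root in [1, 2]
}

int main() {
    std::printf("median (p=0.5): %.4f\n", percentile(0.5));  // ~1.6404
    std::printf("p=0:            %.4f\n", percentile(0.0));  // 1.0000
    std::printf("p=1:            %.4f\n", percentile(1.0));  // 2.0000
    return 0;
}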
Efficiently supporting ad hoc queries in large datasets of time sequences
- In proceedings of ACM SIGMOD Conference on Management of Data , 2002
"... Similarity search in large time series databases has attracted much research interest recently. It is a difficult problem because of the typically high dimensionality of the data.. The most
promising solutions' involve performing dimensionality reduction on the data, then indexing the reduced data w ..."
Cited by 235 (28 self)
Add to MetaCart
Similarity search in large time series databases has attracted much research interest recently. It is a difficult problem because of the typically high dimensionality of the data. The most promising solutions involve performing dimensionality reduction on the data, then indexing the reduced data with a multidimensional index structure. Many dimensionality reduction techniques have been proposed, including Singular Value Decomposition (SVD), the Discrete Fourier Transform (DFT), and the Discrete Wavelet Transform (DWT). In this work we introduce a new dimensionality reduction technique which we call Adaptive Piecewise Constant Approximation (APCA). While previous techniques (e.g., SVD, DFT and DWT) choose a common representation for all the items in the database that minimizes the global reconstruction error, APCA approximates each time series by a set of constant value segments of varying lengths such that their individual reconstruction errors are minimal. We show how APCA can be indexed using a multidimensional index structure. We propose two distance measures in the indexed space that exploit the high fidelity of APCA for fast searching: a lower bounding Euclidean distance approximation, and a non-lower bounding, but very tight Euclidean distance approximation, and show how they can support fast exact searching, and even faster approximate searching, on the same index structure. We theoretically and empirically compare APCA to all the other techniques and demonstrate its superiority.
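The simplest member of this family of piecewise-constant representations is easy to state in code. The sketch below is an editorial addition implementing the fixed-length variant, usually called Piecewise Aggregate Approximation; APCA as described in the abstract additionally lets segment lengths vary to minimize per-segment error:

#include <cstddef>
#include <vector>

// Piecewise Aggregate Approximation: reduce a series of length n to m
// segment means over fixed, (near-)equal-length segments. APCA is the
// adaptive generalization with variable-length segments.
// Assumes 1 <= m <= series.size().
std::vector<double> paa(const std::vector<double>& series, std::size_t m) {
    std::vector<double> reduced(m, 0.0);
    const std::size_t n = series.size();
    for (std::size_t j = 0; j < m; ++j) {
        const std::size_t lo = j * n / m;        // segment start (inclusive)
        const std::size_t hi = (j + 1) * n / m;  // segment end (exclusive)
        for (std::size_t i = lo; i < hi; ++i) reduced[j] += series[i];
        reduced[j] /= static_cast<double>(hi - lo);
    }
    return reduced;
}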
, 2002
"... The problem of indexing time series has attracted much research interest in the database community. Most algorithms used to index time series utilize the Euclidean distance or some variation
thereof. However is has been forcefully shown that the Euclidean distance is a very brittle distance me ..."
Cited by 234 (30 self)
Add to MetaCart
The problem of indexing time series has attracted much research interest in the database community. Most algorithms used to index time series utilize the Euclidean distance or some variation thereof.
However, it has been forcefully shown that the Euclidean distance is a very brittle distance measure. Dynamic Time Warping (DTW) is a much more robust distance measure for time series, allowing
similar shapes to match even if they are out of phase in the time axis.
- In proc. of SDM Int’l Conf , 2004
"... It has long been known that Dynamic Time Warping (DTW) is superior to Euclidean distance for classification and clustering of time series. However, until lately, most research has utilized
Euclidean distance because it is more efficiently calculated. A recently introduced technique that greatly miti ..."
Cited by 62 (20 self)
Add to MetaCart
It has long been known that Dynamic Time Warping (DTW) is superior to Euclidean distance for classification and clustering of time series. However, until lately, most research has utilized Euclidean
distance because it is more efficiently calculated. A recently introduced technique that greatly mitigates DTWs demanding CPU time has sparked a flurry of research activity. However, the technique
and its many extensions still only allow DTW to be applied to moderately large datasets. In addition, almost all of the research on DTW has focused exclusively on speeding up its calculation; there
has been little work done on improving its accuracy. In this work, we target the accuracy aspect of DTW performance and introduce a new framework that learns arbitrary constraints on the warping path
of the DTW calculation. Apart from improving the accuracy of classification, our technique as a side effect speeds up DTW by a wide margin as well. We show the utility of our approach on datasets
from diverse domains and demonstrate significant gains in accuracy and efficiency.
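For reference, the classic dynamic-programming formulation of DTW that these papers build on fits in a few lines. This sketch is an editorial addition; it computes the unconstrained distance, without the warping-window or learned constraints the abstract discusses:

#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// Unconstrained Dynamic Time Warping distance between two series,
// using squared pointwise cost and an O(n*m) dynamic program.
double dtw(const std::vector<double>& a, const std::vector<double>& b) {
    const std::size_t n = a.size(), m = b.size();
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<std::vector<double>> d(n + 1, std::vector<double>(m + 1, INF));
    d[0][0] = 0.0;
    for (std::size_t i = 1; i <= n; ++i)
        for (std::size_t j = 1; j <= m; ++j) {
            const double cost = (a[i-1] - b[j-1]) * (a[i-1] - b[j-1]);
            // best of: insertion, deletion, match on the warping path
            d[i][j] = cost + std::min({d[i-1][j], d[i][j-1], d[i-1][j-1]});
        }
    return std::sqrt(d[n][m]);
}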
- In Proceedings of the 7th ACM SIGKDD International Conference on Knowledge Discovery , 2001
"... The problem of similarity search (query-by-content) has attracted much research interest. It is a difficult problem because of the inherently high dimensionality of the data. The most promising
solutions involve performing dimensionality reduction on the data, then indexing the reduced data with a m ..."
Add to MetaCart
The problem of similarity search (query-by-content) has attracted much research interest. It is a difficult problem because of the inherently high dimensionality of the data. The most promising
solutions involve performing dimensionality reduction on the data, then indexing the reduced data with a multidimensional index structure. Many dimensionality reduction techniques have been proposed,
including Singular Value Decomposition (SVD), the Discrete Fourier Transform (DFT), the Discrete Wavelet Transform (DWT) and Piecewise Polynomial Approximation. In this work, we introduce a novel
framework for using ensembles of two or more representations for more efficient indexing. The basic idea is that instead of committing to a single representation for an entire dataset, different
representations are chosen for indexing different parts of the database. The representations are chosen based upon a local view of the database. For example, sections of the data that can achieve a
high fidelity representation with wavelets are indexed as wavelets, but highly spectral sections of the data are indexed using the Fourier transform. At query time, it is necessary to search several
small heterogeneous indices, rather than one large homogeneous index. As we will theoretically and empirically demonstrate this results in much faster query response times.
Removing largest item in BST tree?
How would I remove the largest node in a Binary Search tree?
Function prototype:
bool remove_largest(node*& root)
I know that if there's no right child, then the root node has to be deleted.
If there is a right child, then traverse that path for the largest node.
But how do I turn this into code that works?
You just described what to do, where are you stuck?
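Since the description in the question is already the whole algorithm, a minimal sketch is enough to show the recursion. This is an editorial addition, assuming a simple node struct with left/right pointers and heap-allocated nodes — adjust to your actual class:

struct node {
    int value;
    node* left;
    node* right;
};

// The largest node in a BST is the rightmost one: follow right children
// until there are none, then splice that node's left subtree into its
// parent's pointer. Passing the pointer by reference (node*&) is what
// makes the splice work without tracking the parent explicitly.
bool remove_largest(node*& root) {
    if (root == nullptr) return false;       // empty tree: nothing to remove
    if (root->right != nullptr)
        return remove_largest(root->right);  // keep walking the right spine
    node* doomed = root;                     // no right child: this is the max
    root = root->left;                       // replace it with its left subtree
    delete doomed;
    return true;
}

int main() {
    node* root = new node{2, new node{1, nullptr, nullptr},
                             new node{3, nullptr, nullptr}};
    remove_largest(root);  // deletes the node holding 3
    return 0;              // remaining nodes intentionally leaked in this demo
}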
Topic archived. No new replies allowed.
How many different ways can a teacher line up 5 students for lunch?
How many different ways can a teacher line up 5 students for lunch?
Sunday, December 20, 2009 at 11:50pm by Michelle
MATH Prob.
How many different ways can a teacher line up 5 students for lunch?
Thursday, August 20, 2009 at 10:09pm by Twg
math for the middle school and elementary teacher
Which one of the following situations is a permutation? a) The number of ways 10 people can be lined up in a line for basketball tickets. b) The number of possible four-member bowling teams from a family of six siblings. c) The number of ways to choose a committee of three from ...
Tuesday, February 28, 2012 at 8:58pm by liz
Matt, Bob, Amy, and John are waiting in line at a fast food restaurant. How many different ways can they line up to place their order
Tuesday, October 20, 2009 at 7:07pm by marie
Can someone tell me if I have the correct answers? I am really confused. Three boys and three girls line up to go in the front door. In how many ways can they line up? My answer: 720 How many ways
can they line up if the first one in line is a girl and then alternates by ...
Saturday, August 23, 2008 at 3:41pm by Tim
We would line them up one person at a time. We have 8 choices for the first person. That makes 8 choices. For the second person, we are left with 7 choices, together with the first, we have 8*7=56
ways. For the third person, we are left with 6 people from whom to choose, so ...
Wednesday, December 23, 2009 at 5:27pm by MathMate
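Several of the answers on this page use the same multiplication-principle argument, so a tiny program makes a handy cross-check. This is an editorial sketch, not part of any original answer; the function name arrangements is ours:

#include <cstdio>

// Number of ways to line up k people chosen from n, by the
// multiplication principle: n * (n-1) * ... * (n-k+1).
unsigned long long arrangements(unsigned n, unsigned k) {
    unsigned long long ways = 1;
    for (unsigned i = 0; i < k; ++i) ways *= (n - i);
    return ways;
}

int main() {
    std::printf("%llu\n", arrangements(5, 5));   // 120 ways to line up 5 students
    std::printf("%llu\n", arrangements(8, 8));   // 40320 ways to line up 8 people
    std::printf("%llu\n", arrangements(12, 4));  // 11880 seatings of 4 in 12 chairs
    return 0;
}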
I fell asleep trying to figure this one out ... if you could help I would appreciate it ... Here is the problem .. If Jon, Mac, and Heather are taking a group photo, how many different ways can the
photographer line them up? .. okay, I have that one .. = 6 ... but now i have ...
Wednesday, December 19, 2007 at 7:45am by Dylan
Okay, for this one, I know it's either one of two answers. In how many different ways can 7 different people line up for a picture? 49, or 823543.
Friday, January 1, 2010 at 12:58pm by Anna
Ok, so my math teacher didn't tell us ways to do this faster, and it's late enough as it is so I don't want to be up all night doing homework. D; *Note: A permutation is where the order matters. A
combination is where it doesn't. Find the number of permutations. 1.Ways to ...
Thursday, February 14, 2008 at 12:33am by Who cares?
Cecile tosses 5 coins one after another. a. How many different outcomes are possible? b. Draw a tree to illustrate the different possibilities. c. In how many ways will the first coin turn up heads and the last coin turn up tails? d. In how many ways will the second and third and ...
Thursday, August 28, 2008 at 7:52pm by steven james
How many different ways can Kathy, Jessie,and Daniella line up, single file,at the drinking fountain? Thanks
Tuesday, December 15, 2009 at 6:12pm by Ada
Math [Probability]
I'm in a Grade 12 Data Management course. The current unit is Factorials and Permutations. I'm having trouble with this question. The answer is supposed to be 120 but I don't know how. How many different ways can 6 people be seated at a round table? Explain your reasoning. If ...
Sunday, September 10, 2006 at 3:24pm by Jerry
Math 109
How many ways can a teacher give 5 different prizes to 5 different of her 25 students
Thursday, March 21, 2013 at 10:07pm by Darcy
In how many ways can 7 basketball players of different heights line up in a single row so that no player is standing between 2 people taller than she is?
Wednesday, January 27, 2010 at 7:00pm by joe
a)in how many different ways can 8 exam papers be arranged in a line so that the best and worst papers are not together? b)in how many ways can 9 balls be divided equally among 3 students? c)There
are 10 true-false questions. By how many ways can they be answered? d)lim(x ...
Wednesday, August 5, 2009 at 12:24pm by Aayush
HELP ME pLEASE. DUE TOMORROW D:
there are 4! ways or 24 ways for them to line up
Sunday, March 1, 2009 at 11:42pm by Reiny
Stamps come in large sheets with perforations in between. How many different ways can you buy 4 attached square stamps? ( Two ways to put them together are considered the same if one way can be
turned or flipped so that its outline looks like the other way.) You can attach ...
Tuesday, January 23, 2007 at 8:54pm by Preston
discrete math
Nine people on a baseball team are trying to decide who will play which position. a. In how many different ways could they select a person to be pitcher? b. After someone has already been selected as pitcher, how many different ways could they select someone else to be ...
Monday, September 15, 2008 at 3:07pm by kennedy
How many ways can seven basketball players of different heights line up in a single row so no player is standing between two players taller then herself?
Tuesday, January 26, 2010 at 5:08pm by robin
How many ways can seven basketball players of different heights line up in a single row so no player is standing between two players taller then herself?
Sunday, January 31, 2010 at 9:39pm by robin
8th math
How many ways can seven basketball players of different heights line up in a single row so no player is standing between two players taller then herself?
Thursday, January 28, 2010 at 12:21pm by Robin
math 12
Seven year-old Sarah has nine crayons; three blue, one red, two green and three yellow. a) In how many ways can she line up the crayons on her desk? b) She is to pick up some of the crayons. How many
different choices of some crayons could she make?
Saturday, November 27, 2010 at 10:03pm by kitkat
All three constructions are pyramids. Study pictures of them carefully and find the ways they are alike and ways they are different. For instance, the Ziggurat was built on a platform and then it
went up to the top like stair steps. The Pyramids of Giza rose smoothly from the ...
Tuesday, October 13, 2009 at 11:11pm by Ms. Sue
Ms. Sue please help
5.9(line over it) 0.1515... 1.20(line over it) 0.35(line over it) 0.72(line over it) 1.12(line over it) need to change these from decimals to fractions, im confused as to how to do it and get the
answer, sub teacher taught it different and I am not getting the same answer that...
Tuesday, November 5, 2013 at 9:18pm by Anonymous
How many ways can 3 students be chosen from a class of 20 to represent their class at a banquet? (1 point) 6,840 3,420 1,140 2,280 15. You and 3 friends go to a concert. In how many different ways
can you sit in the assigned seats? (1 point) a.6 ways b.12 ways c.24 ways d.10 ...
Wednesday, April 17, 2013 at 8:17am by Anonymous
You can roll two dice at a time, a white one and a red one, and there are 36 different ways for the "up faces" to land. How many of ways will give a sum of 7 on the two up faces?
Tuesday, April 6, 2010 at 8:43pm by Mia
I need to re-write this sentence in a different light that would make it seem more correct. I am suppose to write it five different ways, I have written it in three other ways but I cannot come up
with two more. The sentence is: Teen marketed films, try to apply different ...
Sunday, November 9, 2008 at 11:31pm by Danielle
One thing to keep in mind (if you TRULY want to be a successful teacher) is that it's not the student's job to adapt to the teacher's thinking and explanation. It's the teacher's job to explain
concepts in different ways until the student DOES understand. One thing to do is to...
Saturday, November 24, 2007 at 10:52am by Writeacher
This is a tricky question. The teacher prepares 5 books for 7 students, but did not say that all five books HAVE to be distributed. So my interpretation is that 0 to 5 books could have been
distributed. For the case of 0 book distributed, there is only one way: no one gets any...
Tuesday, May 8, 2012 at 9:14am by MathMate
I need to know how "clusters of words" are defined in your text or by your teacher. There are many different ways this could be interpreted.
Wednesday, March 21, 2012 at 7:03pm by Writeacher
A teacher prepares 5 different books for 7 students. If each student can get one or no book from the teacher, find the number of ways of distributing the books among the students. Can you explain it to me in detail? I have thought about this question for several hours but I still...
Tuesday, May 8, 2012 at 9:14am by Nada
I'm not sure, but shouldn't the answer for Part A be 2! = 2 * 1 = 2 different possible ways I mean, just think about the question. If the digits follow the letters KRIS, how can you arrange them in a
hundred different ways? Let's say the numbers are 5 and 6. You can arrange ...
Wednesday, May 15, 2013 at 10:39pm by Cullen
Each person can stand 4 different ways I believe So I'm thinking 16 different ways all together.
Friday, February 24, 2012 at 6:59pm by L.Bianchessi
Tell me how to figure out these probabilities In how many different orders can 9 people stand in line? In how many ways can 4 people be seated in a row of 12 chairs? Thank you For the first problem,
we have nine people and want to arrange them in a line. For the first position...
Thursday, September 21, 2006 at 10:42pm by Haley
Geography (Ms. Sue)
Trust me. I'm a geography teacher and you should recognize many different ways that people have changed their environments. If your teacher only wants you to copy information from your textbook, o.k.
But I want to make sure that you understand other examples also.
Saturday, September 28, 2013 at 6:31pm by Ms. Sue
English (Reading)
My question has to do with the book, The Secret Life of Bees . . . . My teacher wants us to list three ways the main character (Lily) changed throughout the book. I've already come up with two ways
(the way she views her mother and the way she handles herself in difficult ...
Sunday, August 30, 2009 at 11:19am by Myriah
Discuss three different ways a teacher can foster syntactic or semantic development in students. Provide a classroom example of each.
Saturday, September 20, 2008 at 7:37pm by Melvenia
algebra 2
A store sells rings with birthstones selected from the 12 different months of the year. The stones are arranged in a row. Part A: Mary wants a ring with a topaz, a sapphire, and a ruby set in a line.
Write and evaluate an expression to show how many ways the 3 stones can be ...
Wednesday, May 23, 2012 at 4:23pm by jenny
Probability again
follow the basic methodology that Drwls showed you in the previous post. Assume each color ball can be distinguished from each other. (e.g., name the green balls g1 and g2) First, the denominator.
How many different ways can 3 balls from 12 be chosen. 12-choose-3 is 12!/3!(12-...
Sunday, November 29, 2009 at 7:59pm by economyst
Algebra II
A store sells rings with birthstones selected from the 12 different months of the year. The stones are arranged in a row. Part A: Mary wants a ring with a topaz, a sapphire, and a ruby set in a line.
Write and evaluate an expression to show how many ways the 3 stones can be ...
Sunday, May 27, 2012 at 6:33pm by Sharon
A parking lot has 5 spots remaining and 5 different cars in line waiting to be parked. If one of the cars is too big to fit in the two outermost spots, in how many different ways can the five cars be
Wednesday, July 27, 2011 at 11:08pm by Jennifer
List 2 ways in which prokaryotic and eukaryotic cells are alike and 2 ways in which they are different. Is this correct? Alike= cell membrane and DNA Different=ribosomes and mitochondria
Wednesday, October 19, 2011 at 7:51pm by Ty
List 2 ways in which prokaryotic and eukaryotic cells are alike and 2 ways in which they are different. Is this correct? Alike= cell membrane and DNA Different=ribosomes and mitochondria
Wednesday, October 19, 2011 at 8:06pm by Ty
I suppose you mean how many ways they could line up. Think of who goes first: 5 choices; who goes next: 4 choices; ... who goes last: 1 choice. So by the multiplication principle, there are 5·4·3·2·1 = 120 ways.
Thursday, February 16, 2012 at 2:25pm by MathMate
there are 3 different ways of travelling from A to B and 5 different ways of travelling from B to C. How many different ways are there of travelling from A to C through B ?
Saturday, November 17, 2012 at 8:20am by charlie
there are 3 different ways of travelling from A to B and 5 different ways of travelling from B to C. How many different ways are there of travelling from A to C through B ?
Thursday, November 22, 2012 at 7:39pm by charlie
what do you do if a teacher and a behavior analyst want you to teach a child two different ways? (Autistic child)
Thursday, April 14, 2011 at 9:17pm by celli
Interesting question. let's call our boxes A,B, and C and our books 1,2,3,4, and 5 let's do b) first our books are all the same, so we will just separate them into 3 piles adding to 5 1 1 3 1 2 2 1 3
1 2 1 2 2 2 1 3 1 1 so if our first column is box A, second column is B etc ...
Thursday, April 22, 2010 at 10:45am by Reiny
Neptune High
I am in a cosmetology class. My teacher wants me to set up a time line with dates and what was created or accomplished, who created it, and to describe the product or accomplishment. What does a time line look like, and how do you set it up on paper with a horizontal time line?
Saturday, February 26, 2011 at 8:39am by kamisha
A history teacher gives a 20 question T-F exam. In how many different ways can the test be answered if the possible answers are T or F, or possibly to leave the answer blank?
Wednesday, May 23, 2012 at 5:51pm by Randy
A history teacher gives a 27 question T-F exam. In how many different ways can the test be answered if the possible answers are T or F, or possibly to leave the answer blank?
Sunday, August 26, 2012 at 12:13am by TRAY
A history teacher gives a 22 question T-F exam. In how many different ways can the test be answered if the possible answers are T or F, or possibly to leave the answer blank?
Monday, December 3, 2012 at 8:34pm by jacque
com 220
explains how using statistics, graphs, and illustrations can strengthen our arguments in many different ways. In this case, the author used many different stats. He backed up claims with facts, which
makes his argument more convincing!
Wednesday, January 20, 2010 at 9:46pm by luz
How many ways can 6 cars line up? Is it 6 × 6, i.e., 6 squared?
Wednesday, May 12, 2010 at 1:55pm by Cooper
1. How many different permutations can you make with the letters in the word s e v e n t e e n? 2. A teacher has a set of 12 problems to use on a math exam. The teacher makes different versions of the exam by putting 10 questions on each exam. How many different exams can the ...
Saturday, September 5, 2009 at 2:51pm by Tuhin
Math 157
no the girls can be arranged 9 different ways and the boys can be arranged 49 different ways
Wednesday, January 5, 2011 at 10:18am by Andrew
Keith, Annie, Elizabeth, and Marcus are taking a picture. How many different ways can the friends stand in a horizontal line?
Friday, February 24, 2012 at 6:59pm by unknown
For the second: let's consider one of the ways the numbers can show up: 1, 2, 3, 4, 5, 6. The probability of that happening = (1/6)*(1/6)*...*(1/6), six times, = (1/6)^6. But it did not have to come up in that order; as a matter of fact there are 6! ways for the 1, 2, 3, 4, 5, 6 to come up, or 720 ways, so ...
Tuesday, March 24, 2009 at 8:18pm by Reiny
Here is a fun way to learn vocab. Get 2 packs of index cards of 2 different colors, say blue and yellow. On the yellow cards write the first vocab word, skip a line write the next, skip a line and
keep going. On the blue index cards, write the definitions. Using the same ...
Sunday, February 16, 2014 at 5:31pm by Ms. Glenn, M.S.Ed.
do you mean different ways? 24x18=12x36 24x18=(30-6)x(10+8) Just write things in different ways, not changing the actual values 24x18=8x3x9x2 etc.
Monday, March 25, 2013 at 4:26pm by Steve
child development
discuss three different ways a teacher can foster syntactic or semantic development in students. Provide a classroom example of each
Thursday, February 26, 2009 at 9:08pm by sha
child development
Discuss three different ways a teacher can foster syntactic or semantic development in students. Provide a classroom example of each.
Tuesday, December 15, 2009 at 5:38pm by lorraine
In how many ways can 7 people line up for plane tickets?
Saturday, March 26, 2011 at 5:24pm by Valerie
In how many ways can 7 people line up for play tickets
Saturday, March 26, 2011 at 5:24pm by Valerie
how many ways can you line up 11 people for a picture?
Tuesday, March 12, 2013 at 11:52pm by Regina
How do you figure this problem out? How many different committees can be formed from 5 professors and 15 students if each committee is made up of 2 professors and 10 students? THANKS!!!!! The
limiting factor is the students. You only have enough for one committee. There are ...
Monday, September 25, 2006 at 5:36pm by Haley
1a. 3∪6 represents 2 successful events out of 6 possible outcomes. If the numbers chosen were 2 and 4 instead of 3 and 6, the probability would remain the same (2 out of 6). 1b. Correct. 2. Committee of
5 out of 8 persons: There are C(8,5)=8!/(5!3!)=56 ways to choose 5 ...
Sunday, March 6, 2011 at 6:08pm by MathMate
Finite Math
3 ways to pick the bread, 5 ways to pick the meat, and 2 ways for toppings no of sandwiches = 3x5x2 or 30 different ones.
Monday, March 2, 2009 at 12:45am by Reiny
Language Arts
In poetry, a word or line can have more than one meaning. "I fumble with the task to no avail." What different ways can you interpret this?
Tuesday, April 30, 2013 at 9:00pm by Stephanie
A bakery makes 4 different kinds of cake. Each cake can have 3 different kinds of frosting. Each frosted cake can be decorated in 2 different ways. How many ways are there of ordering a decorated,
frosted cake?
Thursday, December 2, 2010 at 7:50pm by Dillard
A bakery makes 4 different kinds of cake. Each cake can have 3 different kinds of frosting. Each frosted cake can be decorated in 2 different ways. How many ways are there of ordering a decorated,
frosted cake?
Friday, December 3, 2010 at 9:12am by Dillard
data management grade 12
Cecile tosses 5 coins, one after the other. a) How many different outcomes are possible? b) In how many ways will the first coin turn up heads and the last coin turn up tails? c) In how many ways will
the second, third, and fourth coins all turn up heads? d) would the same ...
Friday, February 5, 2010 at 8:35pm by Paul
Ways to get 2 socks out of 18 = C(18,2) = 18!/(2!(18-2)!) = 153. Ways to get 2 navy socks = C(5,2). Ways to get 2 white socks = C(7,2). Ways to get 2 black socks = C(6,2). So ways to get two different coloured
socks = 153 - C(5,2) - C(7,2) - C(6,2), out of a total possible 153 ways. Calculate the ...
Thursday, April 21, 2011 at 12:59pm by MathMate
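Finishing the calculation above with Python's math.comb (available in Python 3.8+):

from math import comb

total = comb(18, 2)                           # 153 ways to draw any 2 of 18 socks
same = comb(5, 2) + comb(7, 2) + comb(6, 2)   # 10 + 21 + 15 same-colour pairs
print(total - same, "out of", total)          # 107 out of 153 mixed-colour pairs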
For chemistry, I have to make a line graph. My teacher specifically instructed us to make one with smooth, fluid lines, not with choppy ones, so it would form a curve. I am supposed to find the
slope of the line, but I don't know how to, because the line goes up and down and ...
Monday, February 16, 2009 at 6:11pm by Coralie
How many ways can 6 people line up for a race? This is a combination, but I do not know how to solve it. Help is appreciated.
Sunday, March 22, 2009 at 3:02pm by Gaby
Math Word Problem
How many ways can 8 people line up for play tickets?
Saturday, December 5, 2009 at 6:00pm by Ashley
Math 4
Teresa has 4 flowers, 4 designs. How many ways can she display/mix them up? Thank you ... but 4x4 seems too simple. If I line up abcd, bcda, cdab, etc. (naming each design abcd) ??? Is it
possibly more?
Tuesday, September 22, 2009 at 6:31pm by Noah
my teacher told me that the only way you can square EVERYTHING was if you ISOLATE the radicals. What you're doing is something completely different. You could have told me you made up a completely
different problem.
Monday, March 31, 2008 at 7:41pm by Casey
Poorly worded question, so I will assume she can choose either 1, 2, or 3 colours. (It said "she can", not "she must", choose 3 colours.)
one colour: 4 ways
two colours: C(4,2) = 6 ways
three colours: C(4,3) = 4 ways
Of course, arranging the colours of one of the choices would yield a ...
Wednesday, August 22, 2012 at 9:56pm by Reiny
center dot: x=0, line up 1
to the left of y: x=-2, line up to 2.5
to the left of y: x=-3, line up to 4.5
to the right of y: x=2, line up to 2.5
to the right of y: x=3, line up to 4.5
Does this help?
Tuesday, September 18, 2012 at 11:04am by lee
Data management
1. There are n = 12! ways to line up 12 distinct objects into 12 ordered positions. If positions 1, 2, 3 form one pile, then n would overcount by 3! times. Therefore n must be divided by 3! for each
group of 3. The total number of ways is therefore 12!/(3!3!3!3!). 2. Follow the...
Thursday, October 27, 2011 at 4:43pm by MathMate
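The count described in step 1 evaluates as follows (a one-line check in Python):

from math import factorial

# 12 objects into four piles of 3, dividing by 3! for the order within each pile
print(factorial(12) // factorial(3) ** 4)  # 369600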
4th grade math
I have to make a table or list to solve these problems. 1. Tina and Bruce are each rolling a 1 to 6 number cube. They are looking for different ways to roll two factors whose product is greater than
14. How many different ways will they find? 2. Trenton has 2 hats and 6 ...
Tuesday, September 10, 2013 at 6:27pm by Danny
Each card has equal probability to end up in the hand of the different players. So, for each ace you have 4 equally likely choices for the player it will end up at; there are thus 4^4 = 2^8 ways
the aces can end up in the hands of the players. There are 4! ways to ...
Saturday, November 9, 2013 at 5:47pm by Count Iblis
In how many different ways can 8 riders be matched up to 8 horses? Explain answer please.
Monday, March 8, 2010 at 9:07pm by Mitchell
In how many ways can Kwan line up her carvings of a duck, a gull, and a pelican on a shelf?
Thursday, August 25, 2011 at 8:21pm by Mia
I am working on my introduction part of my speech and here is my transition sentence: With my voice, I can make a difference in 5 different ways. Would these be like the examples of the "5 different
ways"? preventing people from doing a bad action, helping a friend with their ...
Saturday, December 1, 2012 at 4:12pm by Emma
what exactly is the midpoint formula? I am taking an online summer school course in geometry and every website I go to tells me something different. For any two points (a,b) and (c,d) the midpoint is
((a+c)/2, (b+d)/2); in words, add up the x's and divide by 2, then add up the y...
Thursday, July 26, 2007 at 11:11am by Kieran
Keith, Anne, Elizabeth, and Marcus are taking a picture. How many different ways can the friends stand in a horizontal line for the picture?
Saturday, February 25, 2012 at 3:11pm by shion
Math (Data Management)
The student council is ordering pizza for their next meeting. There are 20 council members, 7 of whom are vegetarian. A committee of 3 will order 6 pizzas from a pizza shop that has a special price
for large pizzas with up to three toppings. The shop offers 10 different ...
Tuesday, May 27, 2008 at 2:27pm by Megan
6th grade math
Let's see. You can get 36 different combinations. Of those, the ways to get six are 1,5 2,4 3,3 4,2 5,1. So five ways to get a sum of six, 31 ways of not getting a sum of six. Odds is ways to get it
divided by ways not to get it. Odds = 5:31 of getting a six total.
Monday, January 18, 2010 at 4:55pm by bobpursley
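A quick enumeration of the 36 equally likely rolls confirms the 5:31 odds:

from itertools import product

hits = sum(1 for a, b in product(range(1, 7), repeat=2) if a + b == 6)
print(hits, ":", 36 - hits)  # 5 : 31 odds of rolling a sum of six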
The Downtown Theater has 1 ticket window. In how many ways can 2 people line up to buy tickets?
Thursday, April 5, 2012 at 2:02pm by Cheril
The Downtown Theater has 1 ticket window. In how many ways can 3 people line up to buy tickets?
Monday, April 9, 2012 at 5:34pm by Cheril
Four accounting majors, two economics majors, and three marketing majors have interviewed for five different positions with a large company. Find the number of different ways that five of these could
be hired. One accounting major, one economics major, and one marketing major ...
Sunday, May 4, 2008 at 9:57pm by Dana
Honors Geometry
Okay, on a number line I have points A, B, C, D, E, F, and G. If AG is different from GA, how many different names can you give the line? My question is: How can AG be different from GA?
Wednesday, September 1, 2010 at 9:31pm by Carsyn
In how many different ways can the top eight new indie bands be ranked on a top eight list? The top hit song for each of the eight bands will compete to receive monetary awards of $1000, $500 and
$250, respectively. In how many ways can the awards be given out? I think the ...
Friday, July 29, 2011 at 6:32pm by Jen
Data management math
7: The basketball team has 14 players in total: 3 first-year players, 5 second-year players, and 6 third-year players. (a) In how many ways can the coach choose a starting lineup (5 players) with at least one first-year
player? (b) In how many ways can he set up a starting lineup with two 2nd ...
Monday, January 28, 2008 at 12:13pm by Samir
8th grade
Laura is training her pet white rabbit, Sugar, to climb up a flight of 25 steps. Sugar can hop up one or two steps at a time. She never hops back down, only up. How many different ways can Sugar hop
up the flight of steps? PLEASE HELP!!!! I'M TOTALLY LOST!!!
Monday, January 18, 2010 at 9:18pm by Lola
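This is the classic staircase recurrence: the number of ways to climb n steps taking 1 or 2 at a time satisfies ways(n) = ways(n-1) + ways(n-2), since the last hop was either 1 or 2 steps, with ways(1) = 1 and ways(2) = 2. A short sketch:

def ways_up(n):
    # ways(n) = ways(n-1) + ways(n-2): the last hop was 1 or 2 steps
    a, b = 1, 2  # ways(1), ways(2)
    for _ in range(n - 2):
        a, b = b, a + b
    return b if n >= 2 else a

print(ways_up(25))  # 121393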
Simplify and evaluate using the distributive property of multiplication: 3x-4(2x-5). I've tried many ways and can't come up with the same answer as the teacher. Help please?
Tuesday, September 15, 2009 at 3:02pm by soliman
Data management math
I will assume engine up front and caboose at the rear so the different ways relate to the other cars. Now we do not know if the tankers are all picked up together and the flatcars together etc. or
not. In practice that would be likely because they would come from different ...
Tuesday, January 22, 2008 at 7:12pm by Damon
Apr 12, 2014
Knowledge is power, but just knowing alone cannot lead us to victory. We must also take action if we are to succeed.
This is because the only person who can give you confidence is yourself.
If someone proved to me that they had consistently made huge profits from trading copper futures, would I jump in head first? Probably not. That's because I haven't done enough research on
the futures market yet to build up the confidence I need to decide whether or not that's a good idea for me.
Apr 09, 2014
Last fall I made some bold predictions that low interest rates would stay until 2016, which would keep the housing market stable. I also suggested that investing in parts manufacturers like Magna
International would be a profitable venture due to the consumer's love for cars.
Fast forward to today and it looks like events are unfolding as expected; the stock is thus far 25% higher since last year's post.
Anyway, the International Monetary Fund (IMF) recently published their growth projections for countries in 2014. Canada's economy is expected to grow at 2.3% this year, lower than that of the U.S. at
2.8% and the U.K. at 2.9%.
So we must create a plan to make the best of this current economic situation, because if we fail to plan, then we plan to fail.
Today I will make some more predictions.
Apr 07, 2014
I’m pretty confident that there will always be a healthy number of frugal people in this world. So last Friday, as some of you may already know, I purchased 15 shares of Dollarama Inc (DOL). Each
share was purchased at $87.61 for a total investment amount of $1,324.
This is a dollar store chain with over 800 retail locations across Canada. The reason I decided to invest in this company is that I'm really impressed with how fast it's expanding, and I don't want
to miss out any longer on that growth. It's also a recession-proof company. There are frugal consumers when times are good, and there are even more of them when times are bad. So Dollarama has a very
solid customer base that is not going anywhere. Here's a look at how much profit Dollarama made in the last several years.
2010 – $73 million
2011 – $117 million
2012 – $173 million
2013 – $217 million
2014 – ??? (not released yet)
That looks like a pretty good track record of growing profitability to me.
Apr 05, 2014
Remember last year when I explained how investing in coffee businesses was an awesome idea? Well, good news, because it looks like the daily grind is paying off.
Tim Hortons just announced a 23% dividend increase! Holy hamburgers! That's amazing, eh (゜∀゜) I only bought 20 shares of Tim Hortons at $50 per share, so all it took was just $1K of my personal savings to make this
happen. Ain't it great that we don't need a ton of money to start investing :) The first payment under the new increased rate was distributed last month. Here's a look at what that dividend payment
looked like for me.
If you buy your doughnuts or coffee from Timmy's, I would like to say thank you on behalf of all Tim Hortons shareholders.
Apr 04, 2014
To become successful investors we have to think like burglars, because we must be constantly on the lookout for windows of opportunity.
In a recent poll, I asked if readers would prefer to receive a guaranteed $3,000 or an 80% chance of getting $4,000. If you didn't get a chance to vote, make a choice now, and remember your decision.
You might be asked about it later.
85% of you who voted chose the first outcome.
*Sigh* (-_-) Folks, this is NOT okay (>_<) Dagnabbit, you guys.
In probability theory, Expected Value means the long-run average outcome if an experiment were repeated an infinite number of times. And the law of large numbers dictates that the average of the results
obtained from a large number of trials should be close to the Expected Value.
For example, a 6-sided die produces one of 6 numbers when rolled, each with equal probability. Thus, the expected value of a single die roll is (1+2+3+4+5+6)/6 = 3.5.
According to the law of large numbers, if we rolled a die a large number of times (like 1,000 times) then the average of the produced values is likely to be very close to 3.5, with the precision
increasing the more times it’s rolled.
Below is a graph showing a series of 1,000 rolls of a single die. As the number of rolls increases, the average of the values of all the results will automagically approach 3.5.
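In place of the original graph, here is a minimal simulation sketch (Python; the seed and checkpoints are my own arbitrary choices, not from the original post) that shows the same effect:

import random

random.seed(42)  # arbitrary seed, only for repeatability
total = 0
for n in range(1, 1001):
    total += random.randint(1, 6)  # one fair die roll
    if n in (10, 100, 1000):
        print(n, "rolls -> running average", round(total / n, 3))
# The running average settles toward the expected value of 3.5.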
The same thing would happen with a coin toss. The expected proportion of heads is 50%. So if we flipped a coin a gazillion times, the actual data that we witness would come
closer and closer to that expected 50%.
Now that we understand what expected value and the law of large numbers mean, we can approach the poll again from a mathematical angle. The expected value of the first outcome is $3,000, as there is a 100%
chance of receiving exactly $3,000 every single time. The expected value of the second choice is $3,200, since (80% x $4,000) + (20% x $0) = $3,200.
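A tiny sketch making the comparison explicit (amounts from the poll above; the helper name is just for illustration):

def expected_value(outcomes):
    # outcomes: iterable of (probability, payoff) pairs
    return sum(p * x for p, x in outcomes)

sure_thing = expected_value([(1.00, 3000)])         # guaranteed $3,000
gamble = expected_value([(0.80, 4000), (0.20, 0)])  # 80% chance of $4,000
print(sure_thing, gamble)  # 3000.0 3200.0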
If math isn’t your strong suit allow me to sum it up for you
Simulation and transfer results in modal logic – a survey
Results 1 - 10 of 17
- Conference on Tableaux Calculi and Related Methods (TABLEAUX), 1998
"... . We define cut-free display calculi for nominal tense logics extending the minimal nominal tense logic (MNTL) by addition of primitive axioms. To do so, we use a translation of MNTL into the
minimal tense logic of inequality (MTL 6= ) which is known to be properly displayable by application of Krac ..."
Cited by 16 (7 self)
We define cut-free display calculi for nominal tense logics extending the minimal nominal tense logic (MNTL) by addition of primitive axioms. To do so, we use a translation of MNTL into the minimal
tense logic of inequality (MTL≠) which is known to be properly displayable by application of Kracht's results. The rules of the display calculus δMNTL for MNTL mimic those of the display
calculus δMTL≠ for MTL≠. Since δMNTL does not satisfy Belnap's condition (C8), we extend Wansing's strong normalisation theorem to get a similar theorem for any extension of δMNTL by
addition of structural rules satisfying Belnap's conditions (C2)-(C7). Finally, we show a weak Sahlqvist-style theorem for extensions of MNTL, and by Kracht's techniques, deduce that these Sahlqvist
extensions of δMNTL also admit cut-free display calculi. 1 Introduction. Background: The addition of names (also called nominals) to modal logics has been investigated recently with different
motivations; see...
- Advances in Modal Logic - Volume 5, 2005
"... abstract. We study and give a summary of the complexity of 15 basic normal monomodal logics under the restriction to the Horn fragment and/or bounded modal depth. As new results, we show that:
a) the satisfiability problem of sets of Horn modal clauses with modal depth bounded by k ≥ 2 in the modal ..."
Cited by 9 (2 self)
We study and give a summary of the complexity of 15 basic normal monomodal logics under the restriction to the Horn fragment and/or bounded modal depth. As new results, we show that: a) the
satisfiability problem of sets of Horn modal clauses with modal depth bounded by k ≥ 2 in the modal logics K4 and KD4 is PSPACE-complete, and in K is NP-complete; b) the satisfiability problem of modal
formulas with modal depth bounded by 1 in K4, KD4, and S4 is NP-complete; c) the satisfiability problem of sets of Horn modal clauses with modal depth bounded by 1 in K, K4, KD4, and S4 is
PTIME-complete. In this work, we also study the complexity of the multimodal logics Ln under the mentioned restrictions, where L is one of the 15 basic monomodal logics. We show that, for n ≥ 2: a)
the satisfiability problem of sets of Horn modal clauses in K5n, KD5n, K45n, and KD45n is PSPACE-complete; b) the satisfiability problem of sets of Horn modal clauses with modal depth bounded by k ≥
2 in Kn, KBn, K5n, K45n, KB5n is NP-complete, and in KDn, Tn, KDBn, Bn,
- In Advances in Modal Logic, 2004
"... In this paper we give the notion of modularity of a theory and analyze some of its properties, especially for the case of action theories in reasoning about actions. We propose algorithms to
check whether a given action theory is modular and that also make it modular, if needed. Completeness, correc ..."
Cited by 7 (7 self)
In this paper we give the notion of modularity of a theory and analyze some of its properties, especially for the case of action theories in reasoning about actions. We propose algorithms to check
whether a given action theory is modular and that also make it modular, if needed. Completeness, correctness and termination results are demonstrated.
"... Traditionally, consistency is the only criterion for the quality of a theory in logicbased approaches to reasoning about actions. This work goes beyond that and contributes to the metatheory of
actions by investigating what other properties a good domain description should have. We state some metath ..."
Cited by 7 (4 self)
Traditionally, consistency is the only criterion for the quality of a theory in logic-based approaches to reasoning about actions. This work goes beyond that and contributes to the metatheory of
actions by investigating what other properties a good domain description should have. We state some metatheoretical postulates concerning this sore spot. When all postulates are satisfied we call the
action theory modular. Besides being easier to understand and more elaboration tolerant in McCarthy’s sense, modular theories have interesting properties. We point out the problems that arise when
the postulates about modularity are violated, and propose algorithmic checks that can help the designer of an action theory to overcome them.
- Journal of Symbolic Logic, 2001
"... We define an interpretation of modal languages with polyadic operators in modal languages that use monadic operators (diamonds) only. We also define a simulation operator which associates a
logic in the diamond language with each logic in the language with polyadic modal connectives. We prove that t ..."
Cited by 6 (2 self)
We define an interpretation of modal languages with polyadic operators in modal languages that use monadic operators (diamonds) only. We also define a simulation operator which associates a logic in
the diamond language with each logic in the language with polyadic modal connectives. We prove that this simulation operator transfers several useful properties of modal logics, such as
finite/recursive axiomatizability, frame completeness and the finite model property, canonicity and first-order definability.
- Proceedings of TABLEAUX 2002, LNAI 2381, 2002
"... We give sound and complete analytic tableau systems for the propositional bimodal logics KB , KB C , KB 5 , and KB 5C . These logics have two universal modal operators K and B , where K stands
for knowing and B stands for believing. The logic KB is a combination of the modal logic S5 (for K ) an ..."
Cited by 4 (4 self)
We give sound and complete analytic tableau systems for the propositional bimodal logics KB, KBC, KB5, and KB5C. These logics have two universal modal operators K and B, where K stands for
knowing and B stands for believing. The logic KB is a combination of the modal logic S5 (for K) and KD45 (for B) with the interaction axioms I : K → B and C : B → KB. The logics KBC, KB5,
KB5C are obtained from KB respectively by deleting the axiom C (for KBC), the axiom 5 (for KB5), and both of the axioms C and 5 (for KB5C). As analytic sequent-like tableau systems, our calculi
give simple decision procedures for reasoning about both knowledge and belief in the mentioned logics.
- Journal of Logic and Computation, 2004
"... For every Kripke complete modal logic L we define its hybrid companion LH . For a reasonable class of logics, we present a satisfiability-preserving translation from LH to L. We prove that for
this class of logics, complexity, (uniform) interpolation, finite axiomatization transfer from L to LH . ..."
Cited by 3 (3 self)
For every Kripke complete modal logic L we define its hybrid companion LH. For a reasonable class of logics, we present a satisfiability-preserving translation from LH to L. We prove that for this
class of logics, complexity, (uniform) interpolation, and finite axiomatization transfer from L to LH.
- Frontiers of Combining Systems, volume 1794 of Lecture Notes in Artificial Intelligence, 1999
"... In a previous work, we describe a method to combine decision procedures for the word problem for theories sharing constructors. One of the requirements of our combination method is that
constructors be collapse-free. This paper removes that requirement by modifying the method so that it applies t ..."
Cited by 3 (2 self)
In a previous work, we describe a method to combine decision procedures for the word problem for theories sharing constructors. One of the requirements of our combination method is that constructors
be collapse-free. This paper removes that requirement by modifying the method so that it applies to non-collapse-free constructors as well. This broadens the scope of our combination results
considerably, for example in the direction of equational theories corresponding to modal logics.
- Bull. Section Logic, 2002
"... this paper concerns the uncertainty as to how substitutivity should be de ned in the product of PDL and S5. Because PDL has a two-sorted language over actions and propositions, a key question is
the following. In axiom schemata, do we allow substitution of all action terms into action variables, or ..."
Cited by 2 (2 self)
This paper concerns the uncertainty as to how substitutivity should be defined in the product of PDL and S5. Because PDL has a two-sorted language over actions and propositions, a key question is the
following. In axiom schemata, do we allow substitution of all action terms into action variables, or do we allow only substitution of atomic action terms into action variables? If the answer is 'yes'
we speak of full substitutivity, whereas if the answer is 'no' we speak of weak substitutivity. For PDL we can prove that weak substitutivity implies full substitutivity. (1) We regard this as a good
property, because it allows us to reason about all actions in a uniform way.