Posts by maryam
Total # Posts: 126
An aluminium block of mass 2.1 kg rests on a steel platform. A horizontal force of 15 N is applied to the block. (a) Given that the coefficient of limiting friction is 0.61, will the block move? (b) If it moves, what will be its acceleration?
Problem 1- A 4 kg puck can slide with negligible friction over a horizontal surface, taken as the xy plane. The puck has a velocity of 3i m / s at one instant ti = 0 . Eight seconds later, its
velocity is to be 8i + 10 j m / s . Find the magnitude and direc...
Problem 3- Blocks A, B, and C are connected by ropes and placed as in the figure. Both A and B weigh 22.6 N each, and the coefficient of kinetic friction between each block and the surface is 0.40.
Block C descends with constant velocity. a) Draw free-body diagram of objects A...
A 4 kg puck can slide with negligible friction over a horizontal surface, taken as the xy plane. The puck has a velocity of 3i m / s at one instant ti = 0 . Eight seconds later, its velocity is to
be 8i+10 j m / s . Find the magnitude and direction of the n...
The maximum possible value for the coefficient of static friction ?
A 100 kg box is placed on a ramp. As one end of the ramp is raised, the box begins to move downward just as the angle of inclination reaches 15 degrees. What is the coefficient of static friction between the box and the ramp?
A 500 N man stands at the center of a rope such that each half of the rope makes an angle of 10 degrees with the horizontal. What is the tension in the rope?
A 15 N crate rests on a ramp; the maximum angle just before it slips is 25 degrees with the horizontal. What is the coefficient of static friction between the crate and the ramp surfaces?
Thank you
Acceleration due to gravity on the moon's surface is 1/6th that on the earth. An object weighs 900 N on the earth. What does it weigh on the moon?
At t=0, a particle leaves the origin with a velocity of 6 m/s in the positive x direction and moves in the x-y plane with a constant acceleration of (-2i - 4j) m/s^2. How far is the particle from the origin at t = 1?
two parallel chords lie on opposite sides of the centre of a circle of radius 13cm.their lengths are 10cm and 24cm. what is the distance between the chords
Problem 2- 1. As a ship is approaching the dock at 45 cm/s, an important piece of landing equipment needs to be thrown to it before it can dock. This equipment is thrown at 15.0 m/s at 60∘ above the
horizontal from the top of a tower at the edge of the water, 8.75 m abov...
Problem 2- A boat has a velocity of 0.54 m/s southeast relative to the earth. The boat is on a river that is flowing 0.60 m/s east relative to the earth
Problem 1- A dog running in an open field has components of velocity vx = 2.2m/s and vy = -2.5m/s at time t1 = 12 s. For the time interval from t1 = 12 s to t2 = 24 s , the average acceleration of
the dog has magnitude 0.55 m/s2 and direction 25.5∘ measured from the +x...
A student adds two vectors with magnitudes of 200 and 40. What is the resultant?
Ahmad throws a ball straight up. For which situation is the vertical velocity zero? [Note: Neglect the air resistance] Answer at the top on the way up none of the above on the way down
1) The speedometer on a car's dashboard measures: average speed, average velocity, instantaneous acceleration, or instantaneous speed? 2) A car moving on a straight road increases its speed from 30 m/s to 50 m/s in a distance of 180 m. If the acceleration is constant, how ...
A rocket, initially at rest, is fired vertically with an upward acceleration of 10 m/s2. At an altitude of 500 m, find its velocity. 1000 m/s 500 m/s 100 m/s 10 m/s
Problem 4- Speedy car, driving at 30 m/s. The driver suddenly observes a slow-moving van 155 m ahead traveling at 5 m/s in the same direction. The driver of the car immediately applies the brakes
causing a constant acceleration of 2 m/s2 in a direction opposite to the cars vel...
Can I solve it in the equation of motion in one dimension ????
Problem 3- If a bug can jump straight up to a height of 0.5 m. a) what is its initial speed as it leaves the ground? b) How long is it in the air?
Thank you
Problem 2- A car is stopped at a traffic light. It then travels along a straight road so that its position from the light is given by x(t) = 2.5t - 0.13t^2 in meters. c. What is the position of the particle when it reverses its motion?
Physics plz
Does any teacher here have an email, please? Because I want to send some pictures for a question... physics, please. Or how can I attach a picture here?
1)answer : alternative and solution
Solution-dependence-alternative (find) Answer:alternative and dependence Resources-technology-stocks(deplete) Answer:Resources and stocks Now is it correct ??
Development-environment-figures. (Recent) Answer:development and environment Now ?
Development-environment-figures. (Recent) Answer:environment and figures Situation-quantity-interest. (Sufficient ) Answer:quantity and interest Now is it correct ??
Thanks a lot sir
1)better 2)important 3)distinct (Advantages) Important and distinct Last choice
Hhhhh yes, because I don't know the rule
1)affordable 2)rising 3)rapid. (Prices) Rising and affordable 1)better 2)important 3)distinct (Advantages) Better and distinct Now ?
1)upward 2)growing 3)recent (Problem) Recent and growing 1)steady 2)severe 3)reliable. (Supply) Steady and reliable 1)available 2)increasing 3)public. (Concern) Public and increasing 1)immediate 2)
effective 3)sufficient. (Solution) Immediate and effective 1)affordable 2)risin...
Can you tell me what's correct ?
Circle the adjectives that can be used with the nouns on the right. 1)upward 2)growing 3)recent (Problem) Upward and growing 1)steady 2)severe 3)reliable. (Supply) Severe and reliable 1)available 2)increasing 3)public. (Concern) Available and increasing 1)immediate 2)effective...
Circle the nouns that can be used with the adjectives on the right , Development-environment-figures. (Recent) Answer:development and environment Crisis-supply-estimates. (Reliable) Answer:supply and
estimates Awareness-theory-concern. (Public) Answer:Awareness and concern Reg...
Thank you
Circle the nouns that can be used with the verbs on the right . Engine-consumption-dependence (reduce) Answer : consumption and dependence Solution-dependence-alternative (find) Answer:solution and
dependence Resources-technology-stocks(deplete) Answer:Resources and technology...
(c) the line x = 2 (d) the line y = −2
5. Find the volume of the solid generated by revolving the region bounded by y = x2, y = 0 and x = 1 about (a) the x-axis (b) the y-axis
4. Find the volume of a pyramid of height 160 feet that has a square base of side 300 feet
3. Find the volume of the solid with cross-sectional area A(x) = 10 e^0.01x, 0 ≤ x ≤ 10.
Any answer
(c) y = x^2, y = 2 − x, and the vertical line x = 2
Find the integral of sin(x)cos(x)/(sin^2(x) - 4) dx
(c) find the integral of 1/sqrt(x^2 + 2x + 10) dx
(c) find the integral of e^x/(e^(3x) + e^x) dx
(c) find the integral of e^(3x) + e^x dx
How ?
Circle the nouns that can be used with the adjectives on the right , Development-environment-figures. (Recent) Crisis-supply-estimates. (Reliable) Awareness-theory-concern. (Public)
Region-prices-solution. (Affordable) Protection-needs-sanitation. (Adequate ) Situation-quantit...
Circle the nouns that can be used with the verbs on the right . Engine-consumption-dependence (reduce) Solution-dependence-alternative (find) Resources-technology-stocks(deplete)
Quantity-problems-question(solve) Energy-crisis-shortage (face) Possibility-problems-century(exami...
Circle the nouns that can be used with the adjectives on the right , Development-environment-figures. (Recent) Crisis-supply-estimates. (Reliable) Awareness-theory-concern. (Public)
Region-prices-solution. (Affordable) Protection-needs-sanitation. (Adequate ) Situation-quantit...
From where?
Circle the adjectives that can be used with the nouns on the right . 1)upward 2)growing 3)recent (Problem) 1)steady 2)severe 3)reliable. (Supply) 1)available 2)increasing 3)public. (Concern) 1)
immediate 2)effective 3)sufficient. (Solution) 1)affordable 2)rising 3)rapid. (Price...
2-A 10.0 Kg block is released from point A in the figure below. The block travels along a smooth frictionless track except for the rough 1 m long portion between points B and C. It makes a collision
with stationary 10 kg ball after passing point C. The ball moves forward with ...
1- A 5.0 kg block moving east with speed of 6 m/s collides head on with another 8.0 kg block moving west with speed of 4 m/s. After the collision the 5.0 kg block was moving west with speed of 5 m/s
while the 8.0 kg block was moving east. a) What is the speed of the 8.0 kg blo...
1. Evaluate the following integrals (a) 4x2 +6x−12 / x3 − 4x dx
3- A 5.0 Kg block is released from point A in the figure below. It travels along the smooth surface A E except for the part B C where the surface is rough. Point A is 3 m above the lowest point
C. The block continues in its motion up the surface until reaches...
Please I need the answer
2-A 10.0 Kg block is released from point A in the figure below. The block travels along a smooth frictionless track except for the rough 1 m long portion between points B and C. At the end of the
track, the 10 kg block hits a spring of force constant (k) = 2000 N/m, and compre...
1- A mass m is pushed against a spring of spring constant k = 12000 N/m. The mass slides on a horizontal smooth surface except for a rough one meter long section, and then it encounters a loop of
radius R as shown in the figure. The mass just barely makes it around the loop wi...
A long wire carries a current of 20 A along the directed axis of a long solenoid. The field due to the solenoid is 4 mT. Find the field at a point 3 mm from the solenoid axis.
The Age Of Jefferson To The American Expansion
Which one of the answers isn't right? Tell me so I can know and maybe help you!!
HELPPPP!!!! This is a lab, and I need help with the equations please!! Thank you!! Heat of fusion data table: Mass of foam cup = 3.57 g; Mass of foam cup + warm water = 104.58 g; Mass of foam cup + warm water + melted ice = 129.1 g; Temperature of warm water = 37 degrees Celsius. ...
Write a two column proof proving figure ABCD is a rhombus. Given - diagonal BC.
A 35.0 g block of copper at 350.0 degrees Celsius is added to a beaker containing 525 g of water at 35.0 degrees Celsius. What is the final temperature of the water?
A 335 g block of copper cools from 95.0 degrees Celsius to 27.2 degrees Celsius when it is put in a beaker containing water at 22.4 degrees Celsius. What is the mass of the water in the beaker?
7750 J of energy is added to a 325 g stone at 24.0 degrees Celsius. The temperature increases by 75.0 degrees Celsius. What is the specific heat of the stone?
How much heat is required to raise the temperature of a 35.0 g block of aluminium from 125 degrees Celsius to 322 degrees Celsius?
How much heat is required to raise the temperature of a 125 g block of iron from 17.5 degrees Celsius to 82.3 degrees Celsius?
A 25.0 g block of copper at 350.0 degrees Celsius has 857 J of heat added to it. What is the final temperature of the block of copper?
How much heat is required to raise the temperature of a 35 g block of aluminum from 125 to 322 degrees?
The molar enthalpy of vaporization for water is 40.79 kJ/mol. Express this enthalpy of vaporization in joules per gram. b. The molar enthalpy of fusion for water is 6.009 kJ/mol. Express this
enthalpy of fusion in joules per gram.
humanities s.s
What are two pitfalls when examining opposing viewpoints
What is the energy of an electron with a principal quantum number of n=2?
Michael is twice as old as his sister Sylvia. In four years, three times Michael's age will be equal to five times Sylvia's age. How old is Michael now?
What is the possible ionic equation of TrisEDTA solution? REDOX reaction.
A block of aluminum that has dimensions 2.07 cm by 2.55 cm by 5.00 cm is suspended from a spring scale. The density of aluminum is 2702 kg/m^3. (a) What is the weight of the block? 0.699 N (b) What
is the scale reading when the block is submerged in oil with a density of 817 k...
Part (a) should be convex. Part (b)You have to put the height of your image over the height of your object, like a ratio. Once you do that, multiply that answer by the "20cm." Your answer should be a
negative. Part (c) Use the equation: 1/do+1/di=1/f Part (d) Multipl...
Thank You Very Much
What steps would you take in order to be a writer when you grow up? PLEASE HELP, I DON'T KNOW HOW TO ANSWER THIS QUESTION
Algebra 2
x^(1/2) - 5 sqrt(3) x^(1/4) + 18 = 0
S.S (help please)
thank you!
S.S (help please)
Which language is spoken by most people in Brazil because of that country's colonial heritage? I think the answer is Spanish.. am I right?
Math 9th grade
x^(1/2) - 5 sqrt(3) x^(1/4) + 18 = 0
Monica has 8 more than 3 times the number of marbles Regina has. If Regina has r marbles, how many does Monica have? __________ marbles. If Regina has 12 marbles, how many does Monica have? ________ marbles PLEASE HELP!
I think the answers are t - 3 and d/2? Am I right? Please help!
Trisha is t years old. If Kyle is 3 years younger than Trisha, then Kyle is ___________ years old. Damien has d DVDs. If Nadia has half as many DVDs as Damien, then Nadia has _______ DVDs. Please help
math 6th grade
The base of a rectangle is twice the length of the height. If the height of the rectangle is h inches,what is the length of the base? ______________ inches If the height of the rectangle is 4 inches
what is the length of the base? ________ inches PLEASE HELP!
3^2*3^3=3^5 7*0.1=7/10 5^2*5^3=5^5 3*0.1=3/10 13^2*13^3=13^5 4*0.1=4/10 For each set of special cases, write a general pattern. HELP I DONT GET THIS!!
About the difference between life in the past and life nowadays
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=maryam","timestamp":"2014-04-16T08:09:59Z","content_type":null,"content_length":"27419","record_id":"<urn:uuid:133aad47-b188-4df1-9532-3cb07deef91d>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
Analytical solution or Monte-carlo simulation to find shortfall probability?
Hi everyone, I have a question regarding calculating the shortfall probability in the context of a retirement plan and would greatly appreciate any useful advice on how to solve it.
Imagine you have a retirement plan with a financial institution. At t=0, you deposit $10,000 with the institution. At the start of every month, you withdraw 5% of your account balance for your own consumption. The remaining funds are carried over to the next period with an interest rate of 2%. On top of that, the first $6,000 of your account balance will receive an additional 1% as a bonus.
Since the amount you withdraw every month (x) is a percentage of the account balance brought forward from the previous month, it varies from month to month. Hence, you are interested in the probability that the amount withdrawn will be less than your benchmark value of $400 (this is known as the shortfall probability, i.e. P(x < $400)). Given these assumptions, what is the shortfall probability?
In my opinion, it is not possible to solve this question analytically and obtain a closed-form solution, because the evolution of the account balance depends on the interest rate earned and we cannot tell in advance what the probability is that it will earn "(2% + 1%) on the full amount" or "2% on the full amount + 1% on the first $6,000". Since we cannot solve for the account balance and the amount withdrawn analytically, we also cannot solve for the shortfall probability analytically. As such, I believe we have to rely on a Monte Carlo simulation to run, say, 10,000 independent paths and see what the probability of the shortfall is. Nevertheless, this is just my opinion and I may be mistaken. Therefore, any advice from anyone with ideas will be greatly appreciated.
Thank you!
Re: Analytical solution or Monte-carlo simulation to find shortfall probability?
The situation as you wrote it has no random elements, are you sure you typed it correctly?
Re: Analytical solution or Monte-carlo simulation to find shortfall probability?
Hi SpringFan25, I will attempt to write it in mathematical form. V_t is the value of your account at the beginning of period t. ω is your withdrawal rate (5% in our example). R_t is the interest rate of 2% (which is not static; we can just assume mean = 2% with st. dev. = 1% for simplicity).
You withdraw a fraction ω of your account value at the start of every period, so you carry forward (1-ω)·V_(t-1) from period t-1. This amount earns R_t, with an additional 1% on the first $6,000 only:
V_t = (1-ω)·V_(t-1)·(1+R_t) + (1-ω)·V_(t-1)·1%   if (1-ω)·V_(t-1) ≤ 6,000
V_t = (1-ω)·V_(t-1)·(1+R_t) + 6,000·1%           if (1-ω)·V_(t-1) > 6,000
As you can see, the value at the beginning of time t (V_t) depends on the value from the previous period (V_(t-1)), and this value from the previous period earns an additional interest which depends on whether it is more or less than $6,000. Therefore, we need to somehow know how V has evolved previously to determine what V is in the current period; only then can we take ω·V_t to determine the amount withdrawn this period. Since we do not know ex ante the probability that (1-ω)·V_(t-1) is more or less than $6,000, I believe this is where we need a Monte Carlo simulation to generate numerous independent paths, to find different evolutions of V, and finally to get the probability P(x < $400) at each t. If you do not agree, then perhaps, for illustration purposes, you can show how you would solve for V and the shortfall probability in period 10 analytically (i.e. without simulation)?
Thank you!
Re: Analytical solution or Monte-carlo simulation to find shortfall probability?
If the interest rate is random (which you did not say in post #1), then Monte Carlo simulation is appropriate.
Depending on the exact distribution of R_t there may be a closed-form alternative, but it would be pretty hard to get at, if it exists at all.
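For what it's worth, here is a minimal sketch of that Monte Carlo approach in C. The Normal(2%, 1%) rate, the 10,000 paths and the period of interest t = 10 follow the suggestions in the posts above; the Box-Muller sampler and the fixed seed are just convenient illustrative choices, not part of the original problem.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Standard normal draw via the Box-Muller transform (good enough for a sketch). */
static double randn(void)
{
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * 3.14159265358979323846 * u2);
}

int main(void)
{
    const int    n_paths   = 10000;   /* independent simulated paths            */
    const int    horizon   = 10;      /* period whose withdrawal we examine     */
    const double w         = 0.05;    /* withdrawal rate omega                  */
    const double r_mean    = 0.02;    /* mean interest rate per period          */
    const double r_sd      = 0.01;    /* st. dev. of the interest rate          */
    const double bonus     = 0.01;    /* extra rate on the first $6,000         */
    const double cap       = 6000.0;  /* bonus only applies up to this balance  */
    const double benchmark = 400.0;   /* shortfall threshold for the withdrawal */

    srand(12345);                     /* fixed seed so the run is repeatable    */

    int shortfalls = 0;
    for (int p = 0; p < n_paths; ++p) {
        double v = 10000.0;           /* V_0, the initial deposit               */
        double withdrawal = 0.0;
        for (int t = 1; t <= horizon; ++t) {
            withdrawal = w * v;                    /* omega * V_(t-1)           */
            double carried = (1.0 - w) * v;        /* (1 - omega) * V_(t-1)     */
            double r = r_mean + r_sd * randn();    /* random rate for period t  */
            double bonus_base = carried < cap ? carried : cap;
            v = carried * (1.0 + r) + bonus_base * bonus;   /* V_t              */
        }
        if (withdrawal < benchmark)
            ++shortfalls;
    }
    printf("Estimated P(x < $%.0f) at t = %d: %.4f\n",
           benchmark, horizon, (double)shortfalls / n_paths);
    return 0;
}

Each path applies the recursion above and records the withdrawal ω·V_(t-1) made at the start of period 10; the printed fraction of paths whose withdrawal falls below $400 is the Monte Carlo estimate of the shortfall probability, and its accuracy improves roughly like 1/sqrt(number of paths).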
Re: Analytical solution or Monte-carlo simulation to find shortfall probability?
Thanks lots!
{"url":"http://mathhelpforum.com/advanced-statistics/201911-analytical-solution-monte-carlo-simulation-find-shortfall-probability.html","timestamp":"2014-04-16T04:49:27Z","content_type":null,"content_length":"43488","record_id":"<urn:uuid:c7ca35ab-e468-452d-bbce-7e39a3bdcab7>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
subtraction of vector subspaces??
I know that subtraction of vector subspaces is not defined.
Let's have two subspaces A and B such that A+B=C.
I would like to have A' such that A' $\oplus$ (A $\cap$ B)=A and A' $\oplus$ B=C.
In other words, A' $\cap$ B=0 and (A + B)=(A' $\oplus$ B).
As an example, let's say that the vectors [a1,a2,a3] form a basis of A, and the vectors [b1,b2] are a basis for B. Now, let's say that it is possible to write the vector a3=v1*b1+v2*b2, where v1 and v2 are scalars, but it is not possible to obtain a1 or a2 from linear combinations of b1 and b2. Then a basis of (A $\cap$ B) would be [a3], and a basis of A' would be [a1,a2]. A basis for C would be [a1,a2,b1,b2].
Is this operation defined in vector space algebra? Which symbol should I use? This is a sort of subtraction, A'=A-(A $\cap$ B), but I don't know how to write it properly.
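For what it's worth, here is the standard dimension count (finite-dimensional case) behind this construction; it is a generic linear-algebra sketch, not a claim about any established notation:

Since $A' \oplus (A \cap B) = A$, we have $\dim A' = \dim A - \dim(A \cap B)$, and because $A' \subseteq A$ it follows that $A' \cap B = A' \cap (A \cap B) = \{0\}$. Hence
\[ \dim(A' \oplus B) = \dim A' + \dim B = \dim A + \dim B - \dim(A \cap B) = \dim(A + B), \]
and since $A' + B \subseteq A + B$, equality of dimensions gives $A' \oplus B = A + B$. In the example above, $A' = \mathrm{span}(a_1, a_2)$ and $[a_1, a_2, b_1, b_2]$ is a basis for $C$. Such an $A'$ is usually described in words as "a complement of $A \cap B$ in $A$"; it always exists but is not unique, which is part of why there is no standard subtraction symbol for it.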
{"url":"http://mathhelpforum.com/advanced-algebra/193242-subtraction-vector-subspaces-print.html","timestamp":"2014-04-21T04:12:36Z","content_type":null,"content_length":"5028","record_id":"<urn:uuid:5638c954-2711-446b-815b-0f50bd2ada2d>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
A math equation
I need help creating a math equation.
"It will take me x energy to do one pushup at x gravity if I am x strong."
Where gravity and strength are supplied and energy needs to be figured.
It needs to work at least for gravity 1-1000 and strength 1-1000000000, all combinations.
I know I'm pretty much asking for someone to create this for me, and great thanks to anyone that does. I've been experimenting for 4 hours now and can't even seem to get close.
What is this for?
"The most overlooked advantage of owning a computer is that if they foul up there's no law against whacking them around a bit."
Eric Porterfield.
I've been experimenting for 4 hours now and can't even seem to get close.
Lol sure, mate. OK, show us these things that you've 'tried'. We don't like to just give people answers outright, so I'm sure somebody would be glad to help you as long as they know that you are putting effort into this.
Please direct all complaints regarding this post to the nearest brick wall
Considering strength has nothing to do with weight (at least not strictly), it will take you the same amount of energy to push yourself up if you were as strong as, say, Ron Coleman (Mr Olympia, not sure if current though) compared to how strong you are now, as long as your weight is constant.
C Code. C Code Run. Run Code Run... Please!
"Love is like a blackhole, you fall into it... then you get ripped apart"
Plus, there are no units of strength...
(I guess that's not a problem if you can find a way to equate calories and g's)
It's actually for a game I'm creating. I don't plan on it having any real relation to physics.
The only equation I've gotten so far that even comes close to what I want is:
E = ((G * (G / S)) * 1000)
This is by pure luck. I've just been doing random things with division and multiplication hoping to get lucky. This won't work for 1000 G and 100000000000 S though, and I don't really understand the math well enough to see how to modify it so that it does.
The energy necessary to lift yourself up is the energy (Work) necessary to overcome the force of gravity.
W = PE_f - PE_i = mgh
where m is your mass, g is the acceleration due to gravity (about 9.81 m/s^2, or 32.2 ft/s^2) and h is the height you travel.
Of course you have to take into consideration the fact that you're not lifting your total body weight, and the force along the path you travel varies, which complicates things exponentially...
Last edited by *ClownPimp*; 12-17-2002 at 09:24 AM.
If I remember correctly, doing push-ups in proper form you lift about 70% of your body mass, i.e. if you weigh 165 pounds (like me) then you lift:
int weight = 165;                    /* body weight in pounds */
double pushupWeight = weight * 0.7;  /* use a double: assigning to an int would truncate the product */
PHP and XML
Let's talk about SAX
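Putting the W = mgh reply and the 70% estimate above together, here is a hedged C sketch. SI units are assumed, and the 75 kg body mass and 0.30 m vertical travel per push-up are made-up illustrative numbers; the 70% figure is the rough estimate quoted above, not a measured constant.

#include <stdio.h>

int main(void)
{
    const double g_earth = 9.81;  /* acceleration due to gravity, m/s^2               */
    const double mass    = 75.0;  /* body mass in kg (illustrative value)             */
    const double height  = 0.30;  /* vertical travel per push-up in m (illustrative)  */
    const double frac    = 0.70;  /* rough fraction of body mass lifted in a push-up  */

    /* Mechanical work against gravity for one push-up: W = frac * m * g * h */
    double work = frac * mass * g_earth * height;
    printf("Approximate work per push-up: %.1f J\n", work);
    return 0;
}

The game-specific part (how a "strength" stat converts this physical work into the player's energy cost) is a design choice; one simple option is to divide the work by the strength value and scale gravity into g.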
Originally posted by dizolve
It's actually for a game I'm creating. I don't plan on it having any real relation to physics.
Dude, if it's for a game, just code in whatever makes you feel good. If you're going to have monsters or aliens or robots beating the crap out of each other, don't worry about getting your
calculations correct. Just try to get close to something that seems plausible and makes the game simple enough for the player to win, but challenging enough to make it a good game.
"The computer programmer is a creator of universes for which he alone is responsible. Universes of virtually unlimited complexity can be created in the form of computer programs." -- Joseph
"If you cannot grok the overall structure of a program while taking a shower, you are not ready to code it." -- Richard Pattis.
{"url":"http://cboard.cprogramming.com/brief-history-cprogramming-com/30846-math-equation.html","timestamp":"2014-04-19T23:38:17Z","content_type":null,"content_length":"70427","record_id":"<urn:uuid:c00ddcc7-5a2a-4ebd-9beb-6c381d834111>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
2 Math Question Involving Inverse Trig
Last edited by qbkr21; September 23rd 2007 at 07:07 PM.
Let $\sin^{-1}x = \theta$ $\Rightarrow \sin \theta = x$, so now we can construct a right triangle with an acute angle $\theta$, where the side opposite that angle is $x$ and the hypotenuse is 1. Thus, by Pythagoras' theorem, we can find the adjacent side, which will be $\sqrt {1 - x^2}$. Now, finally, we can say $\cos \left( \sin^{-1} x \right) = \cos \theta =$ ?? That final piece is for you.
You can try a similar method for the second.
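For completeness, using the triangle just described, that final piece works out to:
$\cos \left( \sin^{-1} x \right) = \cos \theta = \frac{\text{adjacent}}{\text{hypotenuse}} = \frac{\sqrt{1-x^2}}{1} = \sqrt{1-x^2}$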
To Show: $\cos (\sin^{-1} x ) = \sqrt{1-x^2}$. Do the following steps in order. 1) Define the function $f(x) = \cos(\sin^{-1} x )$. 2) Show that $[-1,1]$ is in the domain of this function. 3) Show that $f(x)$ is continuous on $[-1,1]$. 4) Show that $f(x)$ is differentiable on $(-1,1)$. 5) Show that $f'(x) = \frac{-x}{\sqrt{1-x^2}}$ on $(-1,1)$. 6) Notice that $\left( \sqrt{1-x^2} \right)' = f'(x)$ on $(-1,1)$. 7) Therefore argue that $\sqrt{1-x^2}+C = f(x)$ on $(-1,1)$. 8) Prove that $C=0$. 9) Therefore, $\cos (\sin^{-1} x ) = f(x) = \sqrt{1-x^2}$ on $(-1,1)$. 10) But $\cos (\sin^{-1} 1) = \sqrt{1-1^2}$ and $\cos (\sin^{-1} (-1)) = \sqrt{1-(-1)^2}$. 11) Finally conclude that $\cos(\sin^{-1} x ) = \sqrt{1-x^2}$ on $[-1,1]$.
Or we can take $\cos x=\sqrt{1-\sin^2x}$, and evaluate $\cos(\arcsin x)$ -- For the second problem, find the sine in terms of the tangent, then apply the same method as above.
This equation is not correct! What if $x = \pi$? The correct equation is much more difficult: $\cos x = \mbox{Kriz}(x) \sqrt{1-\sin^2x}$, where $\mbox{Kriz}(x)$ is the so-called "Krizalid function", defined as follows: $\mbox{Kriz}(x) = \left\{ \begin{array}{c}1 \mbox{ for }x\in [-\pi/2,\pi/2] \\ -1 \mbox{ otherwise } \end{array} \right.$ and $\mbox{Kriz}(x + 2\pi) = \mbox{Kriz}(x)$, i.e. the function is periodic. So, for example, we get $1$ on $[-\pi/2,\pi/2]$; if we add $2\pi$ we get that $[3\pi/2,5\pi/2]$ is also $1$, and if we subtract $2\pi$ we get that $[-5\pi/2,-3\pi/2]$ is $1$ here also. But $\pi$ is not in any of these periodic intervals. So it means $\mbox{Kriz}(\pi) = -1$, and thus $\cos \pi = \mbox{Kriz}(\pi) \sqrt{1 - 0^2} = -1$.
The hypotenuse is $1$ and one side is $x$; call the other side $y$. Then $x^2+y^2 = 1$ by Pythagoras, so $y = \sqrt{1-x^2}$.
One last thing. If we are looking for cos(θ), that would be adjacent/hypotenuse. However, this wouldn't be the solution I am looking for. Adjacent/Hypotenuse would instead give me
Sorry...I was thinking of Problem #2. Let me get to work on it so I can see if I have any questions. Thanks, -qbkr21
{"url":"http://mathhelpforum.com/calculus/19398-2-math-question-involving-inverse-trig.html","timestamp":"2014-04-19T07:01:01Z","content_type":null,"content_length":"91068","record_id":"<urn:uuid:d240c7b7-efbe-42f2-92f5-e0687dcccf4d>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
Directory tex-archive/macros/latex/contrib/fc_arith
The fc_arith Package
fc_arith is a LaTeX package used to create an arithmetic flash card.
Addition, subtraction, multiplication, and division problems are randomly
generated. There is a menu system for setting the intervals from which to randomly
draw numbers, and the number of decimal places these numbers should have. The user
can optionally compete against the clock, and is awarded points as a function of how fast the problem is correctly solved.
The flash card created by fc_arith can be customized in many ways. Design and build
your own flash card using a dvips (or dvipsone)/Acrobat Distiller work flow, or
a pdftex work flow.
Requirements are
1. The eforms package (version 2.5c or later, dated 2010/03/21 or later), which is available from http://www.math.uakron.edu/~dpstory/webeq.html.
2. The popupmenu package
D. P. Story
Download the complete contents of this directory in one zip archive (592.9k).
fc-arith – Create an arithmetic flash card
The package is used to create an arithmetic flash card. Addition, subtraction, multiplication and division problems are randomly generated.
The flash cards may be customised. Design and build your own flash card using a dvips/Acrobat Distiller work flow, or a pdftex work flow.
Documentation Readme
Version 0.1a
License The LaTeX Project Public License 1
Copyright 1999-2002 D. P. Story
Maintainer D. P. Story
Contained in MiKTeX as fc-arith
Topics typesetting ‘flash’ cards for teaching and learning
{"url":"http://www.ctan.org/tex-archive/macros/latex/contrib/fc_arith","timestamp":"2014-04-16T17:19:06Z","content_type":null,"content_length":"8970","record_id":"<urn:uuid:a2e698b2-8e55-48c9-9ce0-efe5d070367e>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
Turán’s problem 10 revisited
, 2008
"... We use an estimate for character sums over finite fields of Katz to solve open problems of Montgomery and Turán. Let h ≥ 2 be an integer. We prove that inf max n∑ z ν k ∣ ≤ (h − 1)√n + O ( n
0.2625+ǫ). (ǫ> 0) |zk|≥1 ν=1,...,nh k=1 This improves on a bound of Erdős-Renyi by a factor of the order √ l ..."
We use an estimate for character sums over finite fields of Katz to solve open problems of Montgomery and Turán. Let h ≥ 2 be an integer. We prove that inf_{|z_k| ≥ 1} max_{ν = 1,...,n^h} |∑_{k=1}^{n} z_k^ν| ≤ (h − 1)√n + O(n^{0.2625+ε}) (ε > 0). This improves on a bound of Erdős-Rényi by a factor of the order √(log n).
, 2008
"... We use an estimate for character sums over finite fields of Katz to solve open problems of Montgomery and Turán. Let h ≥ 2 be an integer. We prove that inf |zk|=1 maxν=1,...,nh | ∑n k=1 zν k | ≤
(h − 1 + o(1))√n. This gives the right order of magnitude for the quantity and improves on a bound of Er ..."
We use an estimate for character sums over finite fields of Katz to solve open problems of Montgomery and Turán. Let h ≥ 2 be an integer. We prove that inf_{|z_k| = 1} max_{ν = 1,...,n^h} |∑_{k=1}^{n} z_k^ν| ≤ (h − 1 + o(1))√n. This gives the right order of magnitude for the quantity and improves on a bound of Erdős-Rényi by a factor of the order √(log n).
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=14544665","timestamp":"2014-04-18T08:22:50Z","content_type":null,"content_length":"13512","record_id":"<urn:uuid:e4dd6db5-c211-49a2-b54d-ae4c46d03df9>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Jake
Total # Posts: 1,682
Gen. Chemistry
A 0.887 g sample of a mixture of NaCl and KCl is dissolved in water and the solution is treated with an excess of AgNO3, producing 1.822 g of AgCl. What is the percent by mass of each component in the mixture?
3rd grade
5 / 10 = ? / 20 Multiply the denominator by 2 to get 20. Multiply the numerator by the same number (2) to get 10. Answer: 10 / 20
If 150 mL of 1.73 M aqueous NaI and 235 g of Pb(NO3)2 are reacted stoichiometrically according to the balanced equation, how many grams of Pb(NO3)2 remain? Pb(NO3)2(aq) + 2NaI(aq) → PbI2(s) + 2NaNO3
(aq) Molar Mass (g/mol) NaI 149.89 Pb(NO3)2 331.21 Density (g/mL) - Molar...
What volume of 0.1209 M HCl is necessary to neutralize 0.5462 grams Ca(OH)2? Include a balanced chemical equation.
A 0.2861 gram sample of an unknown acid (HX) required 32.63 mL of 0.1045 M NaOH for neutralization to a phenolphthalein endpoint. What is the molar mass of the acid?
In a acid-base titration, 22.81 mL of an NaOH solution were required to neutralize 26.18 mL of a 0.1121 M HCl solution. What is the molarity of the NaOH solution?
I'm going over some post-laboratory questions for Conductivity of Solutions. One of the questions is... From your measurements, what can you say about a) the presence of ionic impurities in tap
water? b) the presence of molecular impurities in tap water? I measured the con...
URGENT MATH help
(x^2y^6-3)^2 .....PLEASE HELP ME I HAVE NO IDEA HOW TO SOLVE THIS IVE BEEN DOING IT FOR HOURS
Never mind, I figured that out, but I can't seem to figure this out: (x^2y^6-3)^2. Thanks for your help, I really need it.
Hi, what is (x^-3)^2?
What is (x^2y^6-3)^2
Is (5(x+1)/3) a simplified answer, or is it (5x+5)/3? Sorry, I forgot the parentheses, so it might have looked confusing!
Is 5(x+1)/3 a simplified answer, or is it (5x+5)/3?
I have to rewrite the fraction without an exponent, but I'm not for sure how to do the negative exponent. pls help! (3/2)^-2
AP Chem
A 0.23 M sodium chlorobenzoate (NaC7H4ClO2) solution has a pH of 8.68. Calculate the pH of a 0.23 M chlorobenzoic acid (HC7H4ClO2) solution
Calculate the mass of the solid you must measure out to prepare 100 mL of 0.025 M CuSO4. Note that this salt is a hydrate, so its formula is CuSO4⋅5H2O. You must include the waters of hydration when
calculating the formula weights. Would the following set-up yield the co...
What are some similarities between the stories "Where Are You Going, Where Have You Been?" and "Young Goodman Brown"... as in similar characters or recurring themes? Thank you so much :)
Precipitation Reactions Write a molecular equation for the precipitation reaction that occurs (if any) when the following solutions are mixed. If no reaction occurs, write NO REACTION. Express your
answer as a chemical equation. Identify all of the phases in your answer. cesiu...
The diameter of a small pizza is 16 centimeters. This is 2 cm more than 2/5 of the diameter of the large pizza, so how big is the large pizza?
Wait, wouldn't you add them because they are both going south?
Chem Help .plz
First, you would set up the rate law equation: r = k[OH-]^1[ClO2]^2. Next, plug in the numbers you have already been given and have figured out: r = (230 L^2/(mol^2·s)) × (0.175)^2 × (0.0844). Then you should get 5.94×10^-1 as your answer.
Simplify the exponential expression. Assume the variables represent nonzero real numbers. [(2^-1 x^-5 y^-5)^-2 (2x^-2 y^4)^-2 (4x^-2 y^4)^0] / [(2x^-5 y^-5)^2] (the line, by the way, means to divide). I have no idea how to solve :( All I know is the...
Foam fingers sold for $5. Spirit bracelets for $4. You sold 40 more foam fingers than bracelets and earned $965. Write a system of equations to describe this situation. Solve this system using as
least 2 different methods. Explain each method.
Doesn't the line go on forever, so how could you choose 3 points above it?
Graph y=2x+1, then choose 3 points above the line. Give the coordinates of each point and tell wether it is greater than, less than or equal to 2x + 1 at each point.
Hi i am writing a response paper for "Cathedral" by Raymond Carver. Im not sure but is this a proper thesis statement. In the short story Cathedral, by Raymond Carver, a man refers to Robert as "this
blind man." Not only is he categorizing Robert, but he...
Um, this was our first day of philosophy, so I don't know anything you just said, bobpursley. I'm just confused in general... I'm not sure if I should talk about how humans are what make up the earth, and that they have different purposes in life to attain, and stuff like that. Not sure though...
I have to write a 1-page paper "presenting what I believe the world is made of and explaining why I believe that to be the case," and he said it doesn't deal with things like water and land, so I'm not sure what he means. Any help would be great.
college Chemistry
Ok this is the question... Complete the combustion of butane... CH3CH2CH2CH3 + O2 ====> ____________
1) which scientist's work did Copernicus contradict? 2) what made it difficult for him to prove his theory? i researched some site but still i dont get it...
1) which scientist was later imprisoned for openly supporting the Copernican theory? 2) what did he discover that helped prove that Copernicus was correct?
1) which scientist's work did Copernicus contradict? 2) what made it difficult for him to prove his theory?
40 + 40 = 80 so 2 1/2
social studies
There are 32 people.
Algebra I
Stephen is walking from his house to his friend Sharon's house. When he is 12 blocks away, he looks at his watch. He looks again when he is 8 blocks away and finds that 6 minutes have passed. Write 2
ordered pairs for these data in the form (time, blocks). Write a linear e...
Write the equation of a line in slope intercept form that contains (3/4, 1/2) and has the same slope as the line described by y + 3x = 6.
mathh help pleaseee (: ! <345
11. It is $20 to start, every hike, you are charged $2 more from the second company. If you divide 20 by 2, 10 will be even and at 11 it will be 2 $ more.
its 2/1000
Science - gr. 10 physics
How do you find displacement when given a velocity/time graph?
7th grade
is this like a homework question or are you just wondering?? lol
Do you mean 58g/mole?
How could knowing that 35/7=5 help you find 42/7? Explain
Math-Gr.11 urgent
Hi, I was wondering where you got 20/2 [2a + 19d] from. I'm in Year 11 but want to start revising A-level because I want to get a PhD in maths, so I just wanted to know where you got it from. If you could answer this, then thanks.
Because it can
Ok I know now that the answer is If they left England, they would be Separatists instead of Puritans, and that bothered them. thank you
Ok the first one is B? The reason I was confused on it was because that all 3 of them could work. I just didn't know which one was best I guess. And with the other question, I'm still not sure but I
think it was fear of Mohawk Revolt?
Well I didn't just copy and paste those even if it might look like it. And I only asked 2 questions. Anyway for the first one I am currently thinking that it is probably A or C? Tuxedo's aren't a
necessity, yet they are a nice contribution, and the bravery that C s...
If someone says that Pilgrims didn't do much for us, what should we say to them? A. Tuxedos were invented because of those black and white outfits the Pilgrims wore. B.That Mayflower Compact they
made was the first constitution ever written in North America, and it was sim...
should i add more information 2 my side....or equal information about both sides thanks
thank you, also i have to be able to sympathsize with the other side, but last time i wrote an essay and did that for this class, it sounded like i was for both sides. how do i make it so that i can
strongly agree with 1 side, but also can understand where people are coming fr...
im writing a research essay about the legalization of drugs are there any good websites you have for this topic? Also i have to say which side i take without saying the other side is wrong, and dont
know how to make this paper sound too general.
If I have $550.00 a week, how much are the state deductions in the state of California for one week?
Given: H3PO4(aq) + 3 NaOH(aq) yields 3 H2O(l) + Na3PO4(aq) change in H = -166.5 kJ What is the value for q (heat) if 4.00 g of NaOH reacts with an excess of H3PO4? Please can you explain to me HOW to
do this, not just give me an answer? Thanks!
Given: H3PO4(aq) + 3 NaOH(aq) → 3 H2O(l) + Na3PO4(aq), ΔH = -166.5 kJ. What is the value for q (heat) if 4.00 g of NaOH reacts with an excess of H3PO4? Please can you explain to me HOW to do this, not just give me an answer? Thanks!
True/False - A generator that doubles the electrical energy from a wall outlet without any additional input of energy will work. - The total amount of energy in the universe is constant. thank you
A peptide contains the amino acids Phe, Gly, Met, Ala, and Ser. N-terminal analysis yielded the PTH derivative of alanine. Partial hydrolysis of the pentapeptide yielded the dipeptides Gly-Ser, Ala-Gly, and Ser-Met. What is the peptide sequence? Ala-Gly-Ser-Phe-Met Ala-Gly-Ser-Met-Phe...
which is false about naturally occurring a-amino acids? they belong to the D family of chiral compounds related to D-glyceraldehyde all are chiral except for glycine each has a characteristic
isoelectric point the ph at which there is no net migration of the amino acid toward ...
what is the best synthesis of 2-heptanol CH3CH2CH2MgBr(2 moles)+formaldehyde in diethyl ether followed by H3O+ CH3CH2CH2MgBr+butanal in diethyl ether followed by H3O+ CH3CH2CH2CH2MgBr+acetone in
diethyl ether followed by H3O+ CH3CH2CH2CH2CH2MgBr+ethanal in diethyl ether follow...
what is the best synthesis of 4-heptanol (CH3CH2CH2)2CHOH CH3CH2CH2MgBr (2 moles)+formaldehyde (CH2=O) in diethyl ether followed by H3O+ (CH3CH2CH2)2ChMgBr+formaldehyde (CH2=O) in diethyl ether
followed by H3O+ CH3CH2CH2MgBr+butanal (CH3CH2CH2CH=O) in diethyl ether followed by...
Which of the craftsmen is in charge of the rehearsals?
how do you square numbers when your typing?
AP Chemistry!!!!
For our test in a few days, we heard that there was something for pi and sigma bonds overlapping...can someone explain to me the "hybridizaiton" of where they overlap?? I dont understand it.
Find the roots of z^6 + 1 and hence resolve z^6 + 1 into real quadratic factors; deduce that cos 3x = 4[cos(x) - cos(pi/6)][cos(x) - cos(pi/2)][cos(x) - cos(5pi/6)]
Explain why glucose undergoes mutarotation
Are any of the following a tertiary amine 2-methyl-2-butanamine N-Methyl-1-butanamine N-Methyl-2-butanamine
C6H5NO2+? = C6H5N2Cl
The reaction of sec-butylmagnesium bromide with formaldehyde followed by hydrolysis yields? 2,2-dimethyl-1-pentanol 2-butanol 2-methyl-1-butanol 2-pentanol 2,2-dimethyl-1-propanol
what is the best combination for preparing CH3CH2CH2CH2OCH3 is CH3CH2CH2CH2OH+NaOCH3 CH3CH2CH2CH2ONa+NaOCH3 CH3CH2CH2CH2Br+NaOCH3 CH3CH2CH2CH2Br+CH3Br
which phenol is the most acidic? phenol 2-methyl-6-nitrophenol 3-methyl-5-nitrophenol 2,6-dimethyl-3-nitrophenol 2,4,6-trimethylphenol
what is the best synthesis of 2-methyl-2-pentene (CH3)2C=CHCH2CH3? 1.(CH3)2CHCH2CH2CH3+F2,heat 2. KOC(CH3)2 1.(CH3)2CHCH2CH2CH3+Cl2,heat2.NaOCH2CH3 1.(CH3)2CHCH2CH2CH3+Br2,heat2.NaOCH3 1.(CH3)
o chem
Which is not correct? glucose is a reducing sugar glucose has six carbon atoms glucose undergoes mutarotation glucose is a disaccharide glucose is an aldose
o chem
a-D-glucose and b-d-glucose can best be described as? -diasteromers -constitutional isomers -different conformations -enantiomers -keto-enol isomers
7th grade pre algebra
The Answer is: Chicken Wings
estimate quotient and divide (how do you estimate the quotient? 552 divided by 71,520 I dont understand the steps.
why is mitosis important?
7th grade algebra
Are you trying to say this disproves the Associative property? because the associative property is only if the operation contains all and ONLY addition signs or all and ONLY multiplication signs. Not
if it contains both.
Using only addition and subtraction, without changing the order of the digits, and having just three operations, form an equation out of 123456789 that equals 100.
What bird is considered a sign of the coming spring?
3rd grade art
He is doing a book report. (Old school.) LOL
just took test 100 answers are crrect
math 7th grade
Well, if you're trying to find the difference: 20 - (-10) = 20 + 10, so the difference is 30. You can just add its opposite. Ex. -30 - (-20) = -30 + 20, therefore the answer is -10.
science plzzz help
Mass vs. Weight: Although the terms mass and weight are used almost interchangeably, there is a difference between them. Mass is a measure of the quantity of matter, which is constant all over the
universe. Weight is proportional to mass but depends on location in the universe...
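As a small, hedged illustration of that distinction in C (the 70 kg mass is an arbitrary example, and g on the Moon is taken as roughly one sixth of Earth's value):

#include <stdio.h>

int main(void)
{
    const double g_earth = 9.81;        /* m/s^2                                    */
    const double g_moon  = 9.81 / 6.0;  /* roughly one sixth of Earth's value       */
    const double mass    = 70.0;        /* kg; the mass itself is the same anywhere */

    printf("Weight on Earth: %.0f N\n", mass * g_earth);  /* W = m * g */
    printf("Weight on Moon:  %.0f N\n", mass * g_moon);
    return 0;
}

Running it shows the weight dropping to about one sixth on the Moon while the mass stays 70 kg.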
science plzzz help
I'm not entirely sure but I think the main difference is that mass is used in relation to the metric system (metric syst. is used universally in compiling data from scientific findings)and I think
weight is used in the customary system. LOL! All I remember is our Chem teac...
i can't find these patterns. my mom and dad tried to. first problem 64,__,___,___,32 second problems 61,___, ___, ___, 81
How do you solve the following equation? x^2 - 5x = 14. x^2 - 5x - 14 = 0; (x - 7)(x + 2) = 0; x = 7 or x = -2. How do I explain this to my son, who was not taught this way???
It is an imaginary line that goes around the center of the earth to separate the northern hemisphere (top) from the southern hemisphere (bottom)
I would say temperature because increasing temperature would cause molecules speed up and increase volume and decreasing temperature usually decreases molecular speed and causes volume to decrease
(except for water)
Chemistry/hydrolysis of ATP
A freshman studying medicine at an Ivy League College is a part of his class crew team and exercises regularly. After a particularly strenuous exercise session, he experiences severe cramps in his
thighs and pain in his biceps. Explain the chemical process that occurred in hi...
Men's Health
I was wondering if someone know's aof a web-site that may be able to help me I have to look up the following and explian them also benefits and risks of them thanks ( Basically what effects good/bad
have on Men's health) Epimedium Leaf Chinese Dodder Seed Ginseg ophiop...
by x in the first one did you mean OH
WHAT IS THE BEST COMBINATION OF REACTANTS FOR CH3CH2CH2CH2OCH3
1-BROMO-2-METHYLBUTANE 2-BROMO-2-METHYLBUTANE
organic chemistry
How many stereogenic carbon atoms does the alkaloid morphine have?
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Jake&page=15","timestamp":"2014-04-18T13:43:14Z","content_type":null,"content_length":"29054","record_id":"<urn:uuid:9e7998c0-b820-43f4-85b6-e172fd57a6ce>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
On this page we explain the precise mathematical relationship between the angle of the sun in the sky and the portion of the available insolation that will strike a horizontal surface. We also
explain the formula that calculates the optimal solar collector tilt angle for a given moment.
Our plan of attack for this page is to first give the formulas and show how to use them and then demonstrate why they're true.
Related Pages
An introduction to the concepts on this page (and more!) is at sa&i intro. Examples relating various sun angles to the corresponding insolation levels (and more!) are at sa&i data.
The Formulas
1. Sun Angle and Insolation:
[sin(SEA°) = (portion of the available sunlight that will fall on a horizontal surface)], where "SEA" = "solar elevation angle" which is the angle of the sun above the horizon.
For example, if the sun is 30 degrees above the horizon, we have sin(30°) = .5 and so the sunlight on a horizontal surface will be half as intense as the sunlight in the air. So, if due to say a few
clouds and the effects of air mass, the strength of the sunlight in the air was about 700 W/m^2, 350 W/m^2 would strike a horizontal surface.
2. Optimal Tilt Angle for a given moment:
[Optimal Tilt Angle = 90° - SEA°].
For example, if on a given moment, the sun was 40 degrees above the horizon, at that moment the optimal tilt angle would be 90° - 40° = 50°.
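As a quick sketch, both formulas drop straight into code. The C program below simply reproduces the 30° / 700 W/m^2 example above and computes the corresponding optimal tilt; it ignores air mass, clouds and diffuse radiation, which are discussed further down this page.

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.14159265358979323846;

    double sea_deg   = 30.0;   /* solar elevation angle in degrees            */
    double available = 700.0;  /* strength of the sunlight in the air, W/m^2  */

    /* Formula 1: fraction of the available sunlight striking a horizontal surface */
    double fraction   = sin(sea_deg * PI / 180.0);
    double horizontal = fraction * available;

    /* Formula 2: optimal collector tilt for this moment */
    double tilt_deg = 90.0 - sea_deg;

    printf("SEA = %.1f deg -> %.0f%% of available, %.0f W/m^2 on a horizontal surface\n",
           sea_deg, 100.0 * fraction, horizontal);
    printf("Optimal tilt at this moment: %.1f deg\n", tilt_deg);
    return 0;
}

Swap in any other elevation angle to see how quickly the horizontal-surface share falls off at low sun angles.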
We will now explain why these two equations are true.
Using 1D to talk about 2D
In all that follows, we will be talking about "surface area" (a two-dimensional quantity) but often mixing up "surface area" with "surface lengths" (a one-dimensional quantity). We are allowed to do this because we are assuming the unmentioned dimension is always the same. [This means that the unnamed dimension always cancels out: ((L1 * L3)/(L2 * L3)) = (L1/L2).]
Quick Review: Stating the Problem
As we discuss in more detail on sa&i intro, the problem is that the lower the sun is in the sky, the more "spread-out" the sunlight falling on a horizontal surface is.
That means that the sunlight falling on a horizontal surface is less concentrated than the sunlight in the air and so if we just lay our solar panels out flat on the ground, they will miss out on a
lot of solar radiation.
We want to be able to use the angle of the sun above the horizon (SEA°) to solve for:
(portion of the available sunlight that will fall on a horizontal surface)
Insolation on an Optimally Tilted Surface
First use the color-coded diagram below to see that the intensity of the sunlight striking an optimally tilted solar collector (one that is tilted so that the sunlight strikes it perpendicularly and
thus isn't spread-out at all) is equal to the intensity of the sunlight in the sky. In both cases the sunlight is as concentrated as possible.
(Optimally Tilted Surface)/(Horizontal Surface)
Therefore (as the following example will help clarify), we can learn how intense the sunlight striking a horizontal surface is in relationship to how strong it is in the air by comparing the surface
area covered by a sunbeam of sunlight falling on a horizontal surface with the surface area that the same sunbeam would cover on an optimally tilted surface.
If, for example, you need a three meter horizontal solar panel to collect the same amount of solar radiation as you can collect with one meter of an optimally tilted solar panel, it must be that per
area the solar radiation falling on the horizontally tilted solar panels is 1/3 as concentrated as the solar radiation falling on the optimally tilted solar panel.
Therefore, the intensity of the solar radiation falling on a horizontal surface will be 1/3 the intensity of the sunlight in the air.
[(area of optimally tilted surface)/(area of horizontal surface covered by same amount of sunlight)] = (portion of the available sunlight that will fall on a horizontal surface)
The Trigonometry
The diagram is color-coded to match our explanation.
We draw a right triangle with the hypotenuse equal to the
(length of the horizontal surface covered by the sunbeam).
We then see that the line across from the angle of the sun above the horizon (SEA) is the trigonometric "side opposite". Furthermore, the length of this line is equal to the length of an optimally
tilted solar collector being hit by the same sunbeam.
Therefore, [(side opposite)/(hypotenuse)] is equal to [(length of optimally tilted surface)/ (length of horizontal surface covered by same amount of sunlight)]. This, however, is equal to what we
are trying to discover, namely: (portion of the available sunlight that will fall on a horizontal surface).
But according to trigonometry,
sine = (side opposite)/(hypotenuse).
And so,
sin(SEA°) = [(length of optimally tilted surface)/ (length of horizontal surface covered by same amount of sunlight)] = (portion of the available sunlight that will fall on a horizontal surface).
Calculating the Optimal Tilt Angle
The optimal tilt for a given moment, is the tilt that keeps the sunlight on the surface of the solar collector as intense as the sunlight in the air at that moment. It is the angle that keeps the
sunlight on the surface as concentrated (or "un-spread-out") as the sunlight in the air. As you can see from the diagram, this angle is also part of the right triangle we just drew.
It is the only "unknown" angle in our right triangle. In a right triangle, one angle will of course always be 90° and in our right triangle one angle is SEA° and so it is easy to find the optimal
tilt. It must equal (180° - 90° - SEA°) = (90° - SEA°).
We have several pages on tilt angles.
Limitations of Optimum Tilt formulas
The optimum tilt for a given day is the tilt that factors in all the different optimum tilts for a given moment and comes up with the tilt that makes the most sunlight hit the solar panel as directly
as possible for as much of the day as possible. The optimum tilt for a given month would factor in all the days in that month. To find the optimum tilt angle for a year you'd factor in all the days
of the year.
However, a tilt angle that is designed solely to compensate for the "spreading-out" effect of lower sun angles fails to account for many other factors such as clouds, diffuse radiation and air mass.
As we discuss in sa&i intro, since diffuse radiation is distributed evenly throughout the sky, the steeper your tilt, the more diffuse radiation you miss out on and so you gather the most diffuse
radiation with a horizontal solar panel. Diffuse radiation accounts for about half of the radiation in many high latitude, cloudy places like Northern Europe.
Air Mass is a measure of how much atmosphere the sunlight has to pass through on its way to the ground, and the lower the sun is in the sky, the more the effect of "air mass" reduces how much insolation
makes it to the surface of the earth. Unlike clouds and diffuse radiation, "air mass" is directly related to the angle of the sun in the sky and so it can be calculated in the general case (without
factoring in local weather patterns). [More on tilt angles.]
Air Mass also Important
Air mass's effect on insolation is often significant. It is important to factor in the effects of air mass if you want to get a realistic idea of how much solar radiation will fall on a horizontal
surface with a given sun angle under clear skies.
This is explained, and many examples of how insolation on a horizontal surface varies with sun angle are given, at sa&i data.
Updated 5/26/2011
{"url":"http://www.ftexploring.com/solar-energy/sun-angle-and-insolation3.htm","timestamp":"2014-04-17T03:54:09Z","content_type":null,"content_length":"19517","record_id":"<urn:uuid:c0289053-9a0e-4e94-a9b8-ff95fd8d952c>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
Hellrazor surface area?
Anyone know the surface area of a 6'7" HR?
□ Jesus is coming, look busy!
Moving forward or just spinning faster? 6'0 90kg
5'8 RF Vanguard 32.2L FCS R2 quad
6'0 FST Dominator 34.8L FCS PC SF4 quad + stabiliser
6'0 FST Spitfire 34.9L FCS PC R2 quad + stabiliser
6'2 FST Jacknife 33.3L FCS PC quad front/rear TC Redlines/TC Aqualines
6'2 FST Hellfire 33.7L FCS PC R2 quad + stabiliser (x 2, yes 1 here, 1 there!)
6'4 FST Alternator 34.?L FCS PG7
6'5 FST Hellrazor 33.5L FCS R2 quad + stabiliser
6'7 FST Hellrazor 36.5L FCS R2 thruster
I'm on a bad connection right now so I can't get the compare function to work within a reasonable amount of time but here try this
this will compare the 6'5 and 6'7 HZs and give you their surface areas as part of the comparison in the Tech View.
Actually it just came back: 8.2 sq feet for the 6'7" HZ (7.9 for the 6'5").
• Stupid Questions
Sorry maybe this is a stupid question but what is the point of knowing the surface area of a board? Does it affect performance or tell you something particular about a board?
By itself it doesn't mean a whole lot but when you are comparing one board with another of similar rocker and bottom contour it probably helps decide which would be faster because theoretically
there will be more planing area. But a flat board with less surface area may still be faster than, say, a highly rockered board with more surface area. Rocker will affect how much surface area is in contact with the water, and things like concaves can flatten out rocker, thus giving back some of the surface area that rocker might have taken away.
Things like volume and plan shape also have an effect. These are the things shapers or designers play with to get the right balance for a board for a particular purpose, and with their years of
experience you have to take their word on what works how and for what - along with your own experience too of course. But if you'd never surfed I don't think knowing any (or all) of these
parameters and their design principles would allow you to work out just how a board would perform. It's when you've experimented and experienced a few combinations of these parameters that you
can start to work it out.
Dr. Emmett Brown
□ 6'0 - 175lbs - 30yrs - SoCal -
503 TG Baked Potato - 506 RF Potatonator - 506 WRF Vanguard - 510 FST Unibrow - 600 FST Michel Bourez - 600 FST Alternator - 602 FST Alternator Round - 603 FST Artillery
Firewire Social:
Hey All, use my Firewire email,
[email protected]
for emergency issues, not my forum inbox. However, please avoid contacting me directly with questions on choosing a board, just use the glorious forum. Cheers!
yeah well put slowman!!
its a little hard to use it as an exact spec because boards vary and surface area doesn't tell you anything about volume, rocker or foam distribution. I personally like to look at where the area
is over the length of a board. is it in the nose, center, tail etc. That will tell you more about how the board rides than just the total value of the surface area...
• Thanks Slowman & Chris.
The reason I asked about the surface area is that I learned that a lower surface area board (of the same model) requires a bit more juice to plane and project a bigger surfer. I'm 6'3" 210lbs.
The 6'7" HR I have does go well in the juice but at my size and the average SoCal wave, it loses some of its projection & planing in the slower flatter stuff so I may move up a bit with the same
model, maybe a 6'9". The FW bigger boards seem to put on too much thickness and float for my taste. I'd rather do a 6'11" HR at 2 5/8 thickness, but it's not available, and while I'm wishing, let's do a rounded pin, eh? Say about 6'11", roughly 1375 sq. in., 38-39L HR. The 6'7" floats and paddles me fine, no issues there.
Thanks for the info! still learning...
|
{"url":"http://www.firewiresurfboards.com/f/forum/board-and-tech-talk/hellrazor/1302-hellrazor-surface-area","timestamp":"2014-04-19T04:55:14Z","content_type":null,"content_length":"92704","record_id":"<urn:uuid:dec00ecd-46dd-4abd-8a22-1a3029a697a6>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
|
"Probability & Measure Theory" by Robert B. Ash, Catherine A. Doléans-Dade
Second Edition
Academic Press | 1999 | ISBN: 0120652021 9780120652020 | 532 pages | PDF/djvu | 7/3 MB
This is a text for a graduate-level course in probability that includes essential background topics in analysis. It provides extensive coverage of conditional probability and expectation, strong laws
of large numbers, martingale theory, the central limit theorem, ergodic theory, and Brownian motion.
|
{"url":"http://avaxsearch.com/?q=problems%20solutions%20measure%20theory","timestamp":"2014-04-19T19:34:05Z","content_type":null,"content_length":"24766","record_id":"<urn:uuid:608f3ea5-b312-4c01-ae7f-a3702e5fb6d6>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Coupled Chemotaxis-Fluid Models
Seminar Room 1, Newton Institute
We consider coupled chemotaxis-fluid models aimed to describe swimming bacteria, which show bio-convective flow patterns on length scales much larger than the bacteria size. This behaviour can be
modelled by a system consisting of chemotaxis equations coupled with viscous incompressible fluid equations through transport and external forcing. The global-in-time existence of solutions to the
Cauchy problem in two and three space dimensions is established. Precisely, for the chemotaxis-Navier-Stokes system, we obtain global existence and convergence rates of classical solutions near
constant states. When the fluid motion is described by Stokes equations, we derive some free energy functionals to prove global-in-time existence of weak solutions for cell density with finite mass,
first-order spatial moment and entropy provided that the potential is weak or the substrate concentration is small. Moreover, with nonlinear diffusion for the bacteria, we give global-in-time
existence of weak solutions in two space dimensions.
|
{"url":"http://www.newton.ac.uk/programmes/KIT/seminars/2010090915301.html","timestamp":"2014-04-19T17:14:40Z","content_type":null,"content_length":"4663","record_id":"<urn:uuid:496ca94f-886e-43bd-b3b7-d8dcdeb67040>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Student Support Forum: 'Can Mathematica 4 do what my TI-89 do?' topic
I own both a TI-89 hand-held calculator and Mathematica for Students 4.
I can't figure out how to do things with Mathematica that I can easily do
with my TI-89.
Here is a list of such things I would love to be able to do with Mathematica:
1. Plot a 2D or 3D graph and trace through it. That is, plot a graph and then
move a cursor on the plot in small steps to see the values of f(x) or f(x,y), and so on.
2. Compute an integral graphically. On my TI-89, I can plot a 2D graph and
compute the area under the curve graphically. I see the area that I want
to compute getting grayed so I can see what is going on.
3. Could it be possible with Mathematica to graphically observe the integral
of a 3D function? For example, if I plot a cylinder and want to compute the
integral from z=2 to z=3.7, would there be a way for the cylinder part from
z=2 to z=3.7 to get ''filled''? I can't do that with my TI-89.
4. Is there a quick and easy way that I can zoom on a graph plot? With my
TI-89 I can quickly select a ''box'' on a region I want to zoom on and have it
enlarged on the fly.
5. This I can't do with my TI-89: Plot a plane, then, plot a vector (for
example, the normal vector to the plane) on the plane. All that on the same
graph of course.
6. The last thing I miss from my TI-89: Is it possible to do Units
conversion? With my TI-89 I can enter something like 12_ft->_m and have a
conversion from 12 feet to meters. I tried Mathematica's Convert[] function
but it doesn't seem to work. I entered Convert[12 Foot, Meter] and it just
prints back Convert[12 Foot, Meter]. I did load that '<<ConversionPackage' thing before (the one they tell you to load in the Convert[] on-line help).
thing before (the one they tell you to load in the Convert[] on-line help
Can I do all that in Mathematica 4?
Thanks in advance for your time and help !
(p.s.: please, forgive my bad English! It ain't my native language :)
|
{"url":"http://forums.wolfram.com/student-support/topics/4202","timestamp":"2014-04-18T13:25:07Z","content_type":null,"content_length":"27587","record_id":"<urn:uuid:6a3e9512-ddfc-4425-8b65-e18dd72dae3f>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00246-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Topic: problem with find
Replies: 1 Last Post: Jan 22, 2012 8:00 PM
problem with find
Posted: Jan 22, 2012 2:36 PM
I'm having a problem with using find, that's probably super easy to resolve but I'm forgetting the correct format. I have a matrix (A) 100*7. I have identified a subset of the matrix using find and
have the appropriate indices (ind=50*1 matrix). What I'm trying to do is simply extract that subset B=A(ind). If I do this I get a 50*1 matrix, however, what I would like is a 50*7 matrix. What's
the correct format for deriving this?
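(The reply to this post is not preserved in this copy, but the usual fix is to keep all the columns when indexing by row: in MATLAB, B = A(ind, :) returns the 50*7 submatrix, whereas B = A(ind) treats ind as linear indices and returns a 50*1 column. The same idea in NumPy, purely for comparison:)

import numpy as np

A = np.random.rand(100, 7)            # stand-in for the 100*7 matrix
ind = np.flatnonzero(A[:, 0] > 0.5)   # row indices, analogous to MATLAB's find()

B = A[ind, :]    # keep every column for the selected rows (MATLAB: B = A(ind, :))
print(B.shape)   # (number of selected rows, 7)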
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2335057&messageID=7650852","timestamp":"2014-04-20T19:27:44Z","content_type":null,"content_length":"17315","record_id":"<urn:uuid:63db7461-b0f5-4a42-85fb-c60766a1cda6>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Manchester, NH Algebra 2 Tutor
Find a Manchester, NH Algebra 2 Tutor
...Although my educational background is in Chemistry, my other undergraduate major was Applied Math, and I have used much of the subject matter from Algebra II in my own career as well as
science tutoring. I recently (03/2013) passed the Massachusetts Test for Educator Licensing (MTEL) subject 09 ...
12 Subjects: including algebra 2, chemistry, calculus, physics
...As someone who has been teaching algebra for many years, I often see deficiencies in prealgebra skills that need to be remedied. I enjoy drilling those who study math on the concepts in
precalculus that give them the most trouble. With practice, these calculations and simplifications become second nature.
55 Subjects: including algebra 2, English, reading, algebra 1
...I have been a successful tutor or teaching assistant at each university I attended. I received Summa Cum Laude undergraduate honors at Middlebury College majoring in Mathematics with a minor
in Economics. I then did 1 year of graduate work at Dartmouth College, but decided to switch to a Master's in Computer Science program at UNH.
12 Subjects: including algebra 2, physics, calculus, statistics
...Most of what they really needed was a plan of study and to learn organizational skills. I have written a guide for students and parents on how to achieve success in school. Included in this
booklet was how to study for tests, write papers and how to organize classwork for study at a later time.
24 Subjects: including algebra 2, writing, geometry, GED
...I have a B.S in computer Science & Ms in Physics with Electronics as one of the subjects. Logic is used in both Electronics as well as in computer programming. I have worked as a software
program for seven years.
18 Subjects: including algebra 2, chemistry, calculus, geometry
|
{"url":"http://www.purplemath.com/Manchester_NH_Algebra_2_tutors.php","timestamp":"2014-04-20T07:01:29Z","content_type":null,"content_length":"24091","record_id":"<urn:uuid:0f0c58ad-3480-4beb-9cdc-5e85bd42d823>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Overflow error
Michael Hudson wrote:
(E-Mail Removed)
(Jane Austine) writes:
>>>>>from math import e
>>Traceback (most recent call last):
>> File "<pyshell#15>", line 1, in -toplevel-
>> e**710
>>OverflowError: (34, 'Result too large')
>>What should I do to calculate e**710?
> Well, it's too big for your platform's C double, so you need a
> different representation. I don't know if there are big float
> packages out there that handle such things (likely, though) or if
> there are Python interfaces to the same (less likely). Or you could
> store the logarithms of the numbers you are interested in. Why do you
> need such huge nubmers?
> Cheers,
> mwh
Using a little bit of magic:
First get a good approximation of e:
Using my bits package, I can do:
import bits, math
scaling = bits.lsb(math.e)
characteristic = bits.extract(math.e, scaling, 10)
# This has the same effect as:
# scaling, characteristic = -51, 6121026514868073L
# Now e = characteristic * 2.**scaling
result_scaling = scaling * 710
result_characteristic = characteristic ** 710
intpart = result_characteristic >> -result_scaling
# and you'll have to grab as much fractpart as you want.
Similarly, for decimal, type in enough digits (for your taste) of e
from some reference book, and omit the decimal point.
Then track the exponent in base ten, and you can obtain similar results:
scaling, characteristic = -5, 271828
result_scaling = scaling * 710
result_characteristic = characteristic ** 710
intpart = result_characteristic // 10 ** -result_scaling
So, you needn't use floating point if you are willing to type in
constants. Both of these give a number which is 2.23 * 10 ** 308 to
three digits, and they differ in the fourth digit (4 or 3 if you round).
The binary-based version (the first above) produces:
While the decimal-based version produces:
This is certainly due to using so few decimal places for e in the
decimal version.
In Knuth's Art of Computer Programming (at least volume 3, which I
happen to have at hand) Appendix A, you can get 41 decimal digits for e,
or 45 octal digits if you prefer to work with binary. I believe
(without checking) that each of the volumes contains this appendix.
The big advantage of using decimal is (a) more readily available tables
of constants come in decimal than in binary, and (b) if you _do_ want
to print some of the fractpart, it is easier just to change the division
to include the extra digits, while for the binary versions you'll have
to multiply by 10**digits before the division.
-Scott David Daniels
(E-Mail Removed)
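(A present-day footnote, not part of the original thread: Python's standard-library decimal module can now evaluate e**710 directly at whatever precision you ask for, which sidesteps the hand-scaling above.)

from decimal import Decimal, getcontext

getcontext().prec = 50          # 50 significant digits is plenty here
result = Decimal(710).exp()     # e**710 in arbitrary-precision decimal arithmetic
print(result)                   # about 2.234E+308, too large for a C double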
|
{"url":"http://www.velocityreviews.com/forums/t333633-overflow-error.html","timestamp":"2014-04-20T19:12:20Z","content_type":null,"content_length":"51971","record_id":"<urn:uuid:15ad51cb-c7fa-4321-b78d-4a6dc03271cd>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Variable - a finding that can take on different values during the course of a study. When used without specification, refers to a measurement taken as part of the study (e.g., AGE, AGREE, SEX).
Variables should not be confused with values, which represent realized measurements (e.g., 31, neutral, female).
There are numerous ways to classify variables, and no standard taxonomy exists. As used in this E-book, we speak of three classes of variables. These are:
• Continuous variables (also called quantitative variables, scale variables, interval variables), such as AGE
• Ordinal variables (rank ordered categories), such as OPINION (1 = strongly agree, 2 = agree, 3 = neutral, 4 = disagree, 5 = strongly disagree)
• Categorical variables (also called qualitative variables, nominal variables, discrete unordered categories), such as SEX (1 = male, 2 = female).
Categorical variables with only two possible outcomes (e.g., SEX) are called binary, dichotomous, or indicator variables.
Researchers also speak of the outcome variable (dependent variable, study outcome, "disease," Y) and main predictor variable (independent variable, explanatory variable, "exposure," X). For example,
we may be interested in the effect of high blood pressure (predictor variable) on the incidence of cardiovascular disease mortality (outcome variable). All variables other than the study outcome and predictor are called extraneous variables or "potential confounders."
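To make the classes concrete, here is a small illustrative sketch in Python (the variable names mirror the examples above; the values are invented):

record = {
    "AGE": 31,        # continuous (quantitative) variable
    "OPINION": 3,     # ordinal variable: 1 = strongly agree ... 5 = strongly disagree
    "SEX": "female",  # categorical (nominal) variable; with two levels it is binary
}
print(record)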
|
{"url":"http://www.sjsu.edu/faculty/gerstman/EpiInfo/variable.htm","timestamp":"2014-04-20T14:20:34Z","content_type":null,"content_length":"2129","record_id":"<urn:uuid:2c5a567c-a6d5-4e21-ae48-eb292b3e492b>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Web Resources
Newton's First Law of Motion
Khan Academy podcast on Newton's First Law of Motion. Part 1
Lunar Landing
Can you avoid the boulder field and land safely, just before your fuel runs out, as Neil Armstrong did in 1969? Our version of this classic video game accurately simulates the real motion of the
lunar lander with the correct mass, thrust, fuel consumption rate, and lunar gravity. The real lunar lander is very hard to control.
Motion in 2D
Students will learn about position, velocity, and acceleration vectors. Move the ball with the mouse or let the simulation move the ball in four types of motion.
Motion in 1D
Explore the forces at work when you try to push a filing cabinet. Create an applied force and see the resulting friction force and total force acting on the cabinet. Charts show the forces, position,
velocity, and acceleration vs. time. View a Free Body Diagram of all the forces (including gravitational and normal forces).
|
{"url":"http://alex.state.al.us/weblinks_category.php?stdID=41194","timestamp":"2014-04-19T22:15:44Z","content_type":null,"content_length":"22446","record_id":"<urn:uuid:b41104b5-bb1e-4aa2-8f80-2f8cdc2bec16>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Professor Bill Parry
11-plus-failure mathematician FRS
William Parry, mathematician: born Coventry 3 July 1934; Lecturer, Birmingham University 1960-65; Senior Lecturer in Mathematics, Sussex University 1965-68; Reader in Mathematics, Warwick University
1968-70, Professor of Mathematics 1970-99 (Emeritus); FRS 1984; married 1958 Benita Teper (one daughter): died Coventry 20 August 2006.
Bill Parry had a meagre school education but went on be an outstanding mathematician in the field of dynamical systems and a Fellow of the Royal Society. He specialised in ergodic theory, which has
close connections with probability theory, statistical mechanics, number theory, differential equations and information theory.
Parry was born in Coventry in 1934, the sixth of a family of seven children. He failed his 11-plus examination and at the age of 13 went to a technical school which specialised in woodwork and
metalwork but where a teacher noticed his mathematical ability and persuaded him to stay in the sixth form. Because the school was unable to provide proper tuition in mathematics, Parry was obliged
to take classes at Birmingham Technical College, and, after obtaining the requisite passes, he was, despite the limitations of his schooling, admitted to University College London to study
Mathematics, where he was encouraged by Hyman Kestelman. After graduating in 1955 he did a one-year MSc course at Liverpool University before studying for his doctorate at Imperial College, London,
under the guidance of Yael Dowker. His first post was as a Lecturer in the Mathematics Department at Birmingham University.
The academic year 1962-63, spent at Yale University, was very important in Parry's mathematical development because he had close contact with Shizuo Kakutani and several young American mathematicians
who were working in the same area. He returned to Birmingham with an enhanced enthusiasm for mathematics and began to supervise research students. Parry's early research work was in several areas of
ergodic theory that turned out to be of major importance. He was the first to study topological Markov chains, later called subshifts of finite type, and these became significant in some coding
theory problems and as models for parts of smooth dynamical systems with hyperbolic behaviour. He showed that each irreducible topological Markov chain has a unique measure of maximal entropy and
these measures, which are now called Parry measures, can be described in a simple way using matrix theory.
In 1965 he moved to the newly founded Sussex University as Senior Lecturer. There he worked on entropy theory showing, amongst other things, that each aperiodic measure-preserving transformation
could be viewed as the shift on the realisation space of a stationary, countable state, stochastic process indexed by the integers or the natural numbers. He moved to Warwick University, in his home
city of Coventry, in the spring of 1968 and spent the remainder of his career there.
Warwick had been founded at much the same time as Sussex and had a thriving research environment through its Mathematical Research Centre, and Parry, who previously had disliked the pretensions of
common rooms, was at home in the atmosphere of discussions, both mathematical and general, in the large comfortable space adorned with many large blackboards in the Mathematics Institute.
Among his contributions during these years was fundamental work on codings between symbolic systems. Sometimes the efficiency of a code is very important. For example, in the theory of computing,
sentences in English need to be changed into strings of zeros and ones, and conversely strings of zeros and ones need to be translated into sentences in English, and this needs to be done as
efficiently as possible. Parry was instrumental in developing a theory of different types of codings between symbolic systems.
In another area he also showed how to use ideas and techniques from analytic number theory to study the distribution of periodic orbits of dynamical systems with hyperbolic properties. He showed that
an analogue of the Prime Number Theorem holds. Over about 40 years, he trained a steady stream of excellent research students, most of whom have academic positions in Britain and other countries, and
he kept a keen interest in their careers. He knew how to motivate people to do mathematics, when to cajole and when to criticise, and he had an infectious enthusiasm for beautiful mathematical
constructions and theories.
He was appointed Professor at Warwick in 1970 and elected FRS in 1984. He published over 80 research papers and four books. After his retirement in 1999, he continued to teach an advanced course for
a further three years and he attended seminars until a few weeks before his death.
Bill Parry's father and brothers were active trade unionists, and some members of his family were members of the Communist Party, which he joined while a student at University College. When in
Liverpool he came into contact with the Socialist Labour League which, to his later regret, he joined. On the Aldermaston March in 1958 he met the love of his life, Benita, who had just arrived from
South Africa, and they married later that year, urged on by Gerry Healy, who frowned on cohabiting as Bohemian. Their daughter, Rachel, was born in 1967.
Through Benita, who works in the field of postcolonial studies and whose academic career began as Bill was nearing retirement, Bill enlarged his circle of friends, participating in discussions on
history, politics, philosophy, literature and art. It was his habit to work at home in his study, with Benita working in hers a short distance away. But their lives were not all earnest talk, and he
and Benita always said they experienced the Sixties in the Seventies and Eighties, having spent the Sixties extricating themselves from the austerities of their previous political commitments,
despite which they remained strongly and uncompromisingly socialist. Both in Sussex and Warwick they lived in the countryside, and it was Bill's great pleasure to visit his daughter and
granddaughters in North Wales. Before and after retirement he immersed himself in poetry and latterly wrote poems himself, some of which have been published.
Peter Walters
Published: 08 September 2006 © The Independent
|
{"url":"http://www-groups.dcs.st-and.ac.uk/~history/Obits2/Parry_Independent.html","timestamp":"2014-04-19T19:37:15Z","content_type":null,"content_length":"7004","record_id":"<urn:uuid:88bd4fcf-bb7e-466c-ba3a-ac3666355b57>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
|
applications of machine learning / data mining
What are some areas of application of machine learning and data mining algorithms which are directly used to generate revenue for a business?
For example, I know that a lot of data companies simply see machine learning and data mining scientists as must-haves, even though their role is not a major contribution of wealth to the business.
However, are there types of business which directly use machine learning to generate revenue? Or maybe other statistical algorithms.
As another example, a lot of banks employ quants for regulatory purposes, but those roles are simply used to please regulators. In other words, most of the bank's direct revenue comes from other
departments, from traders in front office, etc. But most quants work support roles, which are not even directly relevant to the business itself.
What are some areas of application of machine learning and data mining algorithms, which is directly used to generate revenue for the business?
How google ranks their results, how amazon creates recommendations, how facebook grows their network, how some use advertising (and where they may choose to place it based upon their mining
results) are all examples - these are all core technologies of these companies and while each is different, each leads to sales in one form or another (direct sales of products, advertising,
etc...). Another example: in the biomedical industry data mining can be used to identify biomarkers - markers which can be used to diagnose and study diseases, which in turn leads to products (eg
sales) in the form or kits for diagnosis, drugs for treatment, or even better: cures.
Last edited by copeg; October 31st, 2012 at 09:48 AM.
How google ranks their results, how amazon creates recommendations, how facebook grows their network, how some use advertising (and where they may choose to place it based upon their mining
results) are all examples - these are all core technologies of these companies and while each is different, each leads to sales in one form or another (direct sales of products, advertising,
etc...). Another example: in the biomedical industry data mining can be used to identify biomarkers - markers which can be used to diagnose and study diseases, which in turn leads to products (eg
sales) in the form or kits for diagnosis, drugs for treatment, or even better: cures.
I guess what I also want to know is how profitable some of these jobs are. For example, someone writing ML algos for stock market prediction is probably being paid a lot more than a bio data miner, because a bio data miner is further from the money and the lion's share of profits probably goes to the sales team.
Are there many ML jobs which have a closer involvement to the financial aspect of things?
I can tell you that the finance market highly recruits anyone with a strong mathematical background. Personally I find this annoying because I get a rather large quantity of email and physical
mail asking me to pursue an MBA or some other finance degree, or get involved with some math/CS portion of a financial system, etc.
I can't comment on how lucrative such positions are, but there must be a good financial incentive since the demand is fairly large. Somehow I kind of doubt you'd be paid significantly better than
you would at a bio firm or a large corporation like Google or Amazon. These are all multi-billion/trillion dollar markets and companies understand the significance of being able to manage lots of
data effectively.
I guess what I also want to know is how profitable are some of these jobs. For example, someone writing ML algos for stock market prediction is probably being paid a lot more than a bio data
miner, because bio data miner is further from the money and the lion share of profits probably goes to the sales team.
Are there many ML jobs which have a closer involvement to the financial aspect of things?
While I'm not in the finance sector professionally, my understanding is that in the data mining field the efficient-market hypothesis actually deters folks from data mining (the best example I've heard of is one in which a neural network was trained with the mood on Twitter; it was reported that the model could accurately predict the Dow Jones (I believe the next day's closing) - whether this could be used to make a profit is a different story, my guess is no). I'd agree with helloworld that math and statistical aspects of someone's experience would be
valuable in this sector.
Are there many ML jobs which have a closer involvement to the financial aspect of things?
Are you talking specifically about the financial sector, or just the profit of a company, because I'd say the examples above are pretty closely tied to profit. Here's two more
- Supermarket chains. Do an experiment, collect the data, and evaluate it using mining approaches for higher profits. For instance, in some stores place items lower on shelves, or move certain
items further away from each other, color or place signs differently, place certain items on sale while others are not, customize coupons, whatever...the key is you have a lot of data of
different conditions and the result from those changes (profit). You can mine this data to look for patterns and correlations that help increase profit.
- Credit card companies - have to pay for fraudulent charges. They lose money when this happens, so it's in their best interest to detect fraud as early as possible. Thus, they data mine cardholders' purchases and categorize the purchases as legitimate or fraud, which helps them detect fraud extremely quickly (with less monetary loss == higher profit).
I'm actually working as a quant for a major bank, where I get to build statistical models for market risk. And I'd like to switch away from finance. I went into more details here, as to why:
Nuclear Phynance
In theory, data mining and machine learning type jobs are supposed to be paying a lot of money, but in reality that's only if you end up working for a top company, where the hiring process is
fairly stringent.
I'm convinced that I want to switch away from finance and more into a machine learning / IT type of role, but I'm not sure how to make the switch at this stage. As posted on Nuclear Phynance, I'm pretty much sold on switching to IT, but the question is how do I get into more ML-type roles. I have a pretty basic ML background. I can review C++, data structures, statistics and ML (hidden
Markov models, mixture of Gaussians, logistic regression, Monte Carlo methods (rich background in MC methods), Naive Bayes).
From what I've seen so far though, most ML positions already require knowledge of collaborative filtering, for example. Some roles already require Hadoop/MapReduce as well. Any ideas? I need to
find more basic ML roles to start.
{"url":"http://www.javaprogrammingforums.com/totally-off-topic/18864-applications-machine-learning-data-mining.html","timestamp":"2014-04-16T06:03:53Z","content_type":null,"content_length":"77933","record_id":"<urn:uuid:0a58d4ae-eb75-40a7-a8e7-c4034424693e>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mechanical System and SimMechanics Simulation
Darina Hroncova^1, Miroslav Pastor^1
^1Department of applied mechanics and mechatronics, Technical University of Košice / Faculty of mechanical engineering, Košice, Slovakia
The work shows the use of SimMechanics for modeling of mechanical systems. As an example a mechanical model of one degree of freedom is solved by this toolbox for the Matlab/ Simulink environment.
This paper describes how to simulate the dynamics of mechanical system with SimMechanics. Mechanical systems are represented by connected block diagrams. Unlike normal Simulink blocks, which
represent mathematical operations, or operate on signals, physical modeling blocks represent physical components in SimMechanics, and geometric and kinematic relationships directly. This is not only
more intuitive, it also saves the time and effort to derive the equations of motion. SimMechanics models, however, can be interfaced seamlessly with ordinary Simulink block diagrams. This enables the
user to design e.g. the mechanical and the control system in one common environment.
Keywords: SimMechanics, simulation, mechanical system, Oscilator, Equation of the Motion
American Journal of Mechanical Engineering, 2013 1 (7), pp 251-255.
DOI: 10.12691/ajme-1-7-20
Received October 17, 2013; Revised October 31, 2013; Accepted November 22, 2013
© 2013 Science and Education Publishing. All Rights Reserved.
1. Introduction to SimMechanics
As already mentioned, the SimMechanics blocks do not directly model mathematical functions but have a definite physical - mechanical meaning. The block set consists of block libraries for bodies,
joints, sensors and actuators, constraints and drivers, and force elements. Besides simple standard blocks there are some blocks with advanced functionality available, which facilitate the modeling
of complex systems enormously. An example is the Joint Actuator with event handling for locking and unlocking of the joint. Modeling such a component in traditional ways can become quite difficult.
The machine is assembled automatically at the beginning of the simulation ^[6].
All blocks are configurable by the user via graphical user interfaces as known from Simulink. The option to generate or change models from Matlab programs with certain commands is not implemented
yet. It might be added in future releases. It is possible to extend the block library with custom blocks, if a problem is not solvable with the provided blocks. These custom blocks can contain other
preconfigured blocks or standard Simulink S-functions.
Standard Simulink blocks have distinct input and output ports. The connections between those blocks are called signal lines, and represent inputs to and outputs from the mathematical functions. Due
to Newton’s third law of action and reaction, this concept is not sensible for mechanical systems. If a body A acts on a body B with a force F, B also acts on A with a force −F, so that there is no
definite direction of the signal flow. Special connection lines, anchored at both ends to a connector port have been introduced with this toolbox. Unlike signal lines, they cannot be branched, nor
can they be connected to standard blocks. To do the latter, SimMechanics provides Sensor and Actuator blocks. They are the interface to standard Simulink models ^[6].
Actuator blocks transform input signals in motions, forces or torques. Sensor blocks do the opposite, they transform mechanical variables into signals.
2. The Mechanism Example
Some of the features of SimMechanics will now be shown on an example. Figure 1 shows the mechanical system of interest. It is mechanical system with one degree of freedom. It consists of a mass m
attached to a ground which slides on a smooth horizontal plane.
Mass m is attached to ground with a linear spring/ damper system with the spring stiffness k and the damping coefficient b. The system has one translational degree of freedom, q[1] = x. The equations
of motion for that mechanism can be derived by Hamilton’s, Lagrange equations or Newton’s second law.
2.1. Formulation of the Equation of Motion
Equation of the motion of the mechanical system in axis x ^[1, 2, 3]:

m x'' = -F[r] - F[d]     (1)

Where: F[r], F[d] – reaction force of the spring and damping force of the system.

The reaction force of the spring and the damping force are:

F[r] = k x     (2)
F[d] = b x'     (3)

Equation of the motion of the mechanical system:

m x'' + b x' + k x = 0     (4)

We obtain the equation of motion in the form ^[7]:

x'' = -(b/m) x' - (k/m) x     (5)
The aim is to determine the response of the oscillating system - the displacement x = x(t). This, however, requires solving a system of differential equations. For a mechanical system with one degree of freedom it is a homogeneous 2nd-order linear differential equation with constant coefficients.
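Although the model itself is built graphically in Simulink and SimMechanics, the free response of equation (5) can also be cross-checked numerically outside Matlab. The short Python/SciPy sketch below is only an illustration; the mass, damping, stiffness and initial conditions are assumed values, not parameters taken from the paper.

import numpy as np
from scipy.integrate import solve_ivp

m, b, k = 1.0, 0.5, 20.0     # assumed mass, damping and stiffness
x0, v0 = 0.1, 0.0            # assumed initial displacement and velocity

def rhs(t, y):
    x, v = y
    return [v, -(b * v + k * x) / m]   # from m*x'' + b*x' + k*x = 0

sol = solve_ivp(rhs, (0.0, 5.0), [x0, v0], t_eval=np.linspace(0.0, 5.0, 500))
print(sol.y[0][:5])   # first few samples of the decaying oscillation x(t)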
3. Simulink Solution
The block diagram built in Simulink from the equation of motion (5) of the mechanical system:
The block diagram in an alternative form:
M-file for the graphical results x(t), v(t) and a(t):
Export Setup window with Properties:
The window in Simulink (Figure 1) with the result parameters in graphical form:
M-file for the result x(t) in graphic form ^[4, 5]:
Result parameter of the velocity in graphical form:
4. SimMechanics Solution
The Physical Modeling environment SimMechanics makes the task easy ^[6]. The Simulink block diagram shown in Figure 11 describes the model of the mechanism without writing any equations ^[8, 9]. Let us
have a closer look at the diagram. Obviously, every block corresponds to one mechanical component. The properties of the blocks can be entered by double-clicking on them. These are for example mass
properties, dimensions and orientations for the bodies, the axis of rotation for the rotational joint and the spring/ damper coefficients for the Spring & Damper block. The initial conditions are
given directly by specifying the initial position and orientations of the rigid bodies.
This model of the mechanism is simple, but it lacks some important features. It is not possible to specify nonzero initial conditions for velocities and accelerations, and no data is output to the workspace for further processing. This can be done by adding Sensor blocks and a Joint Initial Condition block ^[6]. A model which is functionally completely equivalent to the one in Figure 11 is built. It is shown in Figure 11. The Joint Initial Condition blocks in SimMechanics let the users define arbitrary initial conditions, and the Joint Sensor blocks measure the position, velocity, and
acceleration of the two independent motion variables. If desired, the forces and torques transmitted by the joints can be sensed, too. Remarkable are the different types of connectors between the
blocks. It is clearly visible that the input of the Sensor blocks are the special SimMechanics connection lines, while the output are normal Simulink signals. The signal from SimMechanics can be
conducted to the Simulink workspace.
Block Parameters for Body_1:
SimMechanics is an interesting and important add-on to the simulation environment Simulink ^[6]. It allows to easily include mechanical subsystems into Simulink models, without the need to derive the
equations of motion. Especially non-experts will benefit from the visualization tools, as they facilitate the modeling and the interpretation of results. SimMechanics is certainly not a replacement
for specialized multibody dynamics software packages due to the limitations of the physical modeling approach and the restrictions of the Simulink environment. It is, however, well suited for many
practical problems in education and industry, where Matlab became the standard language of technical computing.
5. Conclusion
The paper describes the compilation of the equations of motion of a mechanical system with one degree of freedom in Matlab/Simulink and by SimMechanics.
The results of all models are of course identical. Figure 8, Figure 9 and Figure 10 show the results of the simulation. Results are shown graphically. The calculation is done for a model in Matlab/
Simulink, to illustrate the methodology and in SimMechanics.
In this paper, the Matlab toolbox SimMechanics was introduced, it basic functionalities were shown on an example.
The contribution of this work is primarily educational, especially in the field of Applied Mechanics and Mechatronics. The above procedure presents the possibility of practical implementation of this
solution to simple equations of motion of a mechanical system in Matlab/Simulink and SimMechanics.
This work was supported by grant project VEGA No. 1/1205/12 and by project "Research modules for intelligent robotic systems " (IMTS: 26220220141), on the basis of Operational Programme Research and
Development financed by European Regional Development Fund.
[1] Brepta, R., Púst, L., Turek, F.: Mechanické kmitání, Sobotáles, Praha, 1994.
[2] Grepl, R., Modelování mechatronických systémů v Matlab SimMechanics. Praha, 2007.
[3] Juliš, K., BREPTA, R., Mechanika II.díl, Dynamika, SNTL, Praha, 1987.
[4] Karban, P., Výpočty a simulace v programech Matlab a Simulink, Computer Press, Brno, 2006.
[5] Kozák, Š., KAJAN, S., Matlab - Simulink I, STU, Bratislava, 2006.
[6] Schlotter, M., Multibody System Simulation with SimMechanics.
[7] Záhorec, O., Caban, S., Dynamika, Olymp, Košice, 2002.
[8] Virgala, Frankovský, P., Kenderová, M., Friction Effect Analysis of a DC Motor. In: American Journal of Mechanical Engineering. 2013. Vol. 1, no. 1 (2013), p. 1-5.
[9] Vittek, J., Matlab pre elektrické pohony, Žilina, 1997, 83 p.
|
{"url":"http://pubs.sciepub.com/ajme/1/7/20/index.html","timestamp":"2014-04-24T09:10:39Z","content_type":null,"content_length":"84746","record_id":"<urn:uuid:ece75eb7-5066-4183-ae19-9476bfc7ccd3>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
|
August 23rd 2012, 10:36 AM
Hello Forum! I have the following problem, which is related to the ring of quaternions. I tried but had no luck with it; I hope you can help:
Let I be the ring of integer Hamilton quaternions and define:
$N : I\rightarrow{}Z\ with\ N(a+bi+cj+dk)=a^2+b^2+c^2+d^2$
(The N is called norm)
Prove that an element of R is a unit if and only if it has norm +1. In addition, show that $I^{\times}$ is isomorphic to the quaternion group of order 8.
August 23rd 2012, 08:08 PM
Re: Unit
I don't understand the question. the quaternions form a division ring, which means every element except zero has inverse, and thus is a unit.
August 23rd 2012, 11:44 PM
Re: Unit
ModusPonens, I think it's because $I$ is the quaternions with integer coordinates. It is similar to the Gaussian integers.
Suppose that $x$ is a unit of $I$. That is, there exists a $y\in I$ such that $xy = yx = 1_I$, where $1_I = 1$ is the identity of $I$. Next, $N$ is a map from $I$ into the non-negative integers.
You should prove that $N(ab) = N(a)N(b)$ where the multiplication on the LHS is in $I$ and the multiplication on the RHS is in $\mathbb{Z}$. Then you have
$N(xy) = N(x)N(y) = N(1) = 1$
Can you take it from here?
August 23rd 2012, 11:47 PM
Re: Unit
he means, presumably, the ring of quaternions with integer coefficients.
what you need to do is show the norm N is multiplicative. that is, for integral quaternions q,q', N(qq') = N(q)N(q').
then if q is a unit in R, we have qp = 1, for some integral quaternion p. so N(q)N(p) = N(qp) = N(1) = 1.
now N(q) and N(p) are integers, so they are units of Z, so N(q) = 1 or -1, since those are the only units of Z.
but N(q) ≥ 0, so N(q) = 1.
this means that precisely one of a, b, c, or d is ±1 (and the rest are 0), that is, the units of R are: ±1, ±i, ±j, ±k, eight elements in all, forming the quaternion group of order 8.
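A quick numerical sanity check of the multiplicativity of N, in plain Python (no quaternion library assumed):

def qmul(p, q):
    # Hamilton product of quaternions given as (a, b, c, d) = a + bi + cj + dk
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def norm(q):
    return sum(t*t for t in q)

p, q = (1, 2, 3, 4), (2, -1, 0, 5)
assert norm(qmul(p, q)) == norm(p) * norm(q)   # N(pq) = N(p)N(q)

# The eight integral quaternions of norm 1: +1, -1, +i, -i, +j, -j, +k, -k
units = [tuple(s if i == j else 0 for i in range(4)) for j in range(4) for s in (1, -1)]
print(units)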
|
{"url":"http://mathhelpforum.com/advanced-algebra/202472-unit-print.html","timestamp":"2014-04-20T11:14:33Z","content_type":null,"content_length":"8177","record_id":"<urn:uuid:18a7f4cc-2eeb-4649-8056-de18574fadb1>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Franklin Park, IL Math Tutor
Find a Franklin Park, IL Math Tutor
...I've been tutoring test prep for 15 years, and I have a lot of experience helping students get the score they need on the GRE. I've helped students push past their goal scores in both the
Quant and Verbal. I took the revised version of the GRE the first day it was offered and I scored a 170 on the Quant and a 168 on Verbal.
24 Subjects: including differential equations, discrete math, linear algebra, algebra 1
...I went through the process myself, successfully gaining admission to Northside College Prep, and I am fully up to date on the current CPS protocol for applying to these schools. I am able to
walk you through the application process, give you an overview of prospective schools, and provide academ...
38 Subjects: including algebra 1, reading, trigonometry, statistics
...Thank you for considering my tutoring services. I have a diverse background that makes me well suited to help you with your middle school through college level math classes, as well as
physics, mechanical engineering, intro computer science and Microsoft Office products. I have 4 years of teaching experience: 2 years as a middle school math teacher and 2 years as a high school
math teacher.
17 Subjects: including algebra 1, algebra 2, calculus, Microsoft Excel
...Thus I bring first hand knowledge to your history studies. I won the Botany award for my genetic research on plants as an undergraduate, and I have done extensive research in Computational
Biology for my Ph.D. dissertation. I was a teaching assistant for both undergraduate and graduate students for a variety of Biology classes.
41 Subjects: including SAT math, ACT Math, geometry, prealgebra
...As a recent graduate of Biology from Elmhurst College, science of all kinds has been my life for the past four years. And before that, it was always my favorite subject. I used to help my
friends understand concepts that they didn't understand, and we always did better when we worked as a team to understand the material.
14 Subjects: including prealgebra, algebra 1, chemistry, reading
|
{"url":"http://www.purplemath.com/Franklin_Park_IL_Math_tutors.php","timestamp":"2014-04-18T23:51:46Z","content_type":null,"content_length":"24327","record_id":"<urn:uuid:aae21879-17d2-4e51-9ac6-f736c60620df>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Changing Areas, Changing Perimeters
Copyright © University of Cambridge. All rights reserved.
'Changing Areas, Changing Perimeters' printed from http://nrich.maths.org/
Isabelle from South Wilts and Natalie from St.Andrews International School in Thailand both answered the first challenge correctly.
Here is Natalie's arrangement of the shapes:
Here is how Isabelle described her strategy:
a) Write down the areas and perimeters of each shape.
b) The only three shapes that share an area are G, A & C therefore they must occupy the middle column.
c) B, D & I have areas less than 14 so must occupy the left column and E, F & H have areas greater than 14 so must occupy the right column.
d) The shapes with perimeter 20 (B, C & H) must go in the bottom row, those with perimeter 18 must go in the middle row and those with perimeter 16 must go in the top row.
There is only one way this can be achieved so by a process of elimination the solution is as above.
Isabelle also answered the second challenge correctly:
                AREA -     AREA =     AREA +
PERIMETER -     2 by 7     4 by 4     3 by 6
PERIMETER =     1 by 9     2 by 8     5 by 5
PERIMETER +     1 by 15    1 by 16    3 by 8
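As a quick check of the arrangement, every rectangle's area and perimeter can be compared with the centre rectangle (2 by 8); the comparisons should match the column and row labels. A short Python script to do so:

grid = [[(2, 7), (4, 4), (3, 6)],     # perimeter less than the centre's
        [(1, 9), (2, 8), (5, 5)],     # perimeter equal to the centre's
        [(1, 15), (1, 16), (3, 8)]]   # perimeter greater than the centre's

area = lambda r: r[0] * r[1]
perim = lambda r: 2 * (r[0] + r[1])
centre = grid[1][1]

for i, row in enumerate(grid):
    for j, rect in enumerate(row):
        # columns: area -, =, + ; rows: perimeter -, =, +
        assert (area(rect) < area(centre)) == (j == 0)
        assert (area(rect) == area(centre)) == (j == 1)
        assert (area(rect) > area(centre)) == (j == 2)
        assert (perim(rect) < perim(centre)) == (i == 0)
        assert (perim(rect) == perim(centre)) == (i == 1)
        assert (perim(rect) > perim(centre)) == (i == 2)

print("arrangement checks out")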
Krystof from Uhelny Trh, Prague, filled in one of the squares of the extended grid:
It's possible to fill in the box on the left too. I wonder if you can think of a way, and convince yourself that it's impossible to fill in the top and right boxes.
|
{"url":"http://nrich.maths.org/7534/solution?nomenu=1","timestamp":"2014-04-20T13:33:27Z","content_type":null,"content_length":"5599","record_id":"<urn:uuid:89cdd41f-2a1f-4cbe-aa22-ad4c01ee30f2>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
|
SMPS transformer overwinding - diyAudio
corrieb :
Sorry, I misunderstood you, I thought you were working in an offline application. Well, you have to find out where is coming so much voltage drop.
Do you have an oscilloscope at hand?
Are you using a fully regulated push-pull with inductors connected just between the output diodes and the output capacitors?
More details would be appreciated.
Concerning your transformer, with 6+6 turns my calculations predict approx. 100mT peak flux. This is quite low for most materials, so you may consider going for 4+4 primary turns and 160mT; this will
also have a great impact on voltage drop, since leakage inductance will be halved. Note that the flux density that should be considered is not peak to peak, but just the peak value in one direction,
and up to 200mT may be ok for most materials.
Furthermore, if you are using a fully regulated topology with output inductors, you can go for 3 turns and 200mT peak since maximum duty cycle and maximum input voltage won't happen at the same time
(except during load transients, but this is not a concern from the point of view of core losses).
The best way to place the windings is to spread all them evenyly around the toroid. Primaries should be bifilar and as symmetrical as possible, and same applies for the secondaries.
Should 3 turns be too hard to wind, you can reduce the switching frequency to 30 kHz (60 kHz clock) and use 4 turns instead.
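For anyone wanting to reproduce numbers like these, the underlying relationship is just Faraday's law: the flux swing per half-cycle is delta_B = V * t_on / (N * Ae), and since a push-pull stage drives the core symmetrically, the peak flux in one direction is half of that swing. A rough sketch with placeholder values (the input voltage, core area and on-time below are assumptions, not the figures from this thread):

def peak_flux_push_pull(v_in, n_turns, core_area_m2, t_on_s):
    # Faraday's law: delta_B = V * t_on / (N * Ae); symmetric push-pull drive
    # means the peak one-direction flux density is half the total swing.
    delta_b = v_in * t_on_s / (n_turns * core_area_m2)
    return delta_b / 2.0

# Assumed example only: 12 V input, 100 mm^2 core area, 8 us on-time per half-cycle.
for turns in (6, 4, 3):
    b_pk = peak_flux_push_pull(12.0, turns, 100e-6, 8e-6)
    print(turns, "turns ->", round(b_pk * 1000, 1), "mT")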
|
{"url":"http://www.diyaudio.com/forums/power-supplies/66801-smps-transformer-overwinding.html","timestamp":"2014-04-17T13:27:12Z","content_type":null,"content_length":"81852","record_id":"<urn:uuid:30c62f11-ff2c-40aa-aa48-7013c8858009>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Problem
Wouldn't it be nice to have symbolic matrix operations in Perl? Instead of composing matrices of numbers, we could compose matrices of expressions. Consider the following:
use strict;
use warnings;
use Math::MatrixReal;
my $x = 1;
my $m = Math::MatrixReal->new_from_rows([ [$x, 2*$x], [3*$x, 4*$x] ]);
for (; $x < 4; ++$x) {
    print "$m\n";
}
The output of this code is:
[ 1.000000000000E+00 2.000000000000E+00 ]
[ 3.000000000000E+00 4.000000000000E+00 ]
[ 1.000000000000E+00 2.000000000000E+00 ]
[ 3.000000000000E+00 4.000000000000E+00 ]
[ 1.000000000000E+00 2.000000000000E+00 ]
[ 3.000000000000E+00 4.000000000000E+00 ]
This isn't surprising. After all, we're just passing a scalar value for the matrix elements. If we wanted the expression to be re-evaluated each time, we'd need to enclose it in a coderef. However,
this won't solve the problem because Math::MatrixReal wants numbers, not coderefs.
By now, you've decided to re-implement Math::MatrixReal. However, if you define a matrix with coderef elements, what happens when you multiply it with another matrix?
my $m1 = Math::MatrixReal->new_from_rows([ [sub {$x}, sub {2*$x}], [sub {3*$x}, sub {4*$x}] ]);
my $m2 = Math::MatrixReal->new_from_rows([ [sub {10*$x}, sub {20*$x}], [sub {30*$x}, sub {40*$x}] ]);
my $product = $m1->multiply($m2);
There are three options for how this would work: (The examples are dumbed down because matrix multiplication is rather long.)
1. Each element is evaluated for the multiplication:
my $a = sub {2};
my $b = sub {3};
my $result = $a->() * $b->();
This sort of defeats the purpose of using coderefs, since the expressions won't carry all of the way through.
2. Multiply the coderefs:
my $a = sub {2};
my $b = sub {3};
my $result = $a * $b;
This "works", in the sense that it doesn't blow up. However, $a and $b are essentially pointers to functions. Doing multiplication on them makes Perl treat them as actual numbers, and the $result
has no physical or logical meaning.
3. Use a tree to keep track of the operations within the expression, and then traverse the tree to evaluate the expression when printing.
This will work! However, it wouldn't be very much fun to implement or maintain.
So we have a workable solution, but it's not fun, and it's not really elegant. You'd have to mix tree operations in with your matrix math code. I've already lost interest.
Enter Perl 6
In Perl 6, everything is an object. Yes, even coderefs. (Now they're called Blocks.) What does this mean for us? First of all, we can no longer multiply Blocks:
my $code = {2};
say $code*$code;
Method 'Num' not found for invocant of class 'Block'
in 'Cool::Numeric' at line 1739:CORE.setting
in 'Cool::Numeric' at line 1740:CORE.setting
in 'infix:<*>' at line 6460:CORE.setting
in main program body at line 14:./perl6-test.pl
This error reveals a key piece of information: Num() is not defined for Block, so it doesn't make sense to try to multiply them.
Monkey Typing
Perl 6 supports "monkey typing", or augmenting a class with additional functionality:
use MONKEY_TYPING;
augment class Block {
    method Num {
        return self.();
    }
}
my $code = {2};
say $code*$code;
See how MONKEY_TYPING is in all caps? That's (I'm assuming) to indicate to you that you're doing something weird and unusual. Monkey typing has a global effect. Using this approach, every Block will
get the Num() method. This isn't necessarily a bad thing, but it will make debugging an accidental Block in a numeric context much harder. It's best to avoid monkey typing, especially when
interacting with other people's code.
If you're familiar with Perl 6's object model, or Perl 5's Moose, you'll remember that you can assign roles to individual objects. What if we made a role with a Num() method?
role BlockNumify {
    method Num {
        return self.();
    }
}
my $code = {2};
$code does BlockNumify;
say "I expect this to work: ", $code*$code;
my $code2 = {3};
say "I expect this to fail: ", $code2*$code2;
I expect this to work: 4
Method 'Num' not found for invocant of class 'Block'
in 'Cool::Numeric' at line 1739:CORE.setting
in 'Cool::Numeric' at line 1740:CORE.setting
in 'infix:<*>' at line 6460:CORE.setting
in main program body at line 16:./perl6-test.pl
This allows us to call Num() on Blocks that we are expecting to be treated as numbers, while also preserving the usual type safety with Blocks that we don't touch.
Putting it all Together
I'm not going to implement an entire Matrix class here because it would be long and tedious. I do have an example that should illustrate the point of all of this though:
my @operations;
my $x = 0.3;
my $code = {sin($x)};
$code does BlockNumify;
for 0..10 -> $i {
    @operations[$i] = {$i + $code};
    @operations[$i] does BlockNumify;
}
my $sum = { [+] @operations };
$sum does BlockNumify;
for 1..5 -> $i {
    $x *= $i; # We modify $x all the way down here!
    my $average = $sum/@operations.elems;
    say $average;
}
The function of this code doesn't matter, but it does show off some important things. Perhaps most important is that $x is set at the beginning, and used in $code. Stuff happens, and at the end,
we're looping around some code that doesn't modify $code anymore, but does modify $x, and results in different output. This shows that it works as intended. Hurray!
Why did I do the sum in such a strange way? The intuitive way would be to do something like:
my $sum = $code;
for 0..10 -> $i {
    $sum = {$i + $sum};
    $sum does BlockNumify;
}
However, this causes infinite recursion, for reasons I don't fully understand. Thanks to the new reduction operators, it's easy enough to get around this.
A Quick Experiment
To show that this does behave as I've said it does, let's also output some debugging information:
my @operations;
my $x = 0.3;
my $code = {say "--> sin($x)"; sin($x)};
$code does BlockNumify;
for 0..2 -> $i {
    @operations[$i] = {say "-> $i + $code"; $i + $code};
    @operations[$i] does BlockNumify;
}
my $sum = { say "Summing"; [+] @operations };
$sum does BlockNumify;
for 1..2 -> $i {
    $x *= $i;
    my $average = $sum/@operations.elems;
    say $average;
}
-> 0 + _block141
--> sin(0.3)
-> 1 + _block141
--> sin(0.3)
-> 2 + _block141
--> sin(0.3)
-> 0 + _block141
--> sin(0.6)
-> 1 + _block141
--> sin(0.6)
-> 2 + _block141
--> sin(0.6)
If we look at this as a tree, we can see that we do a depth-first traversal. Each new unknown value is immediately evaluated for the parent operation until we reach an actual number. This is exactly
what we wanted at the beginning, but without explicitly messing around with trees.
|
{"url":"http://blogs.perl.org/users/ryan_fox/atom.xml","timestamp":"2014-04-20T08:17:14Z","content_type":null,"content_length":"11383","record_id":"<urn:uuid:b6ca636f-2304-438b-8a30-c12812658411>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Raritan, NJ Math Tutor
Find a Raritan, NJ Math Tutor
...I like to read and research American History in my spare time as well. I am an American History buff with emphasis on the Revolutionary War period and the Civil War period. I have taught
grammar to foreign nationals in Germany; to foreign area officers while serving in the US Military, and to citizen candidates studying for their citizenship exam.
73 Subjects: including algebra 2, French, public speaking, business
...The level of difficulty is increased as multiple concepts are related. I take the time to make sure that my students are comfortable with basics before we can move ahead. When I teach
trigonometry, I like to use a variety of techniques and strategies to investigate, reason and then interpret solutions to problems.
10 Subjects: including ACT Math, algebra 1, algebra 2, geometry
...This program includes grammar as well, and is not just geared for the younger grades, but scaffolds higher-level phonics & vocabulary up through 12th grade. It is an individualized, naturally
differentiated program. Through my education and teaching experience, I've gained deep insight into how to best motivate students to create strong habits and schedules for themselves.
22 Subjects: including algebra 1, algebra 2, vocabulary, grammar
I have over 15 years of experience teaching and tutoring physics, and have a PhD. I am very patient and can teach at all levels - from high school through college. I believe in active learning -
using many examples, pictures, and simulations to illustrate key concepts while making it fun to learn as well.
15 Subjects: including algebra 2, Microsoft Excel, prealgebra, precalculus
I'm an experienced certified teacher in NJ, currently pursuing a Doctorate in Math Education and Applied Mathematics. I've been teaching and tutoring for over 15 years in several subjects
including pre-algebra, algebra I & II, geometry, trigonometry, statistics, math analysis, pre-calculus, calculu...
83 Subjects: including discrete math, SAT math, Java, statistics
|
{"url":"http://www.purplemath.com/raritan_nj_math_tutors.php","timestamp":"2014-04-18T11:24:50Z","content_type":null,"content_length":"23999","record_id":"<urn:uuid:9fbb420e-f3dc-42dd-ae80-eb1371909103>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Kernel k-means - File Exchange - MATLAB Central
File Information
This function performs the kernel version of the kmeans algorithm. When the linear kernel (i.e., the inner product) is used, the algorithm is equivalent to the standard kmeans algorithm.
K: an n x n positive semi-definite matrix computed by a kernel function on all sample pairs
m: the number of clusters k (1 x 1) or the initial labels of the samples (1 x n, 1<=label(i)<=k)
reference: [1] Kernel Methods for Pattern Analysis, by John Shawe-Taylor and Nello Cristianini
sample code:
load data;
K = x'*x;              % use linear kernel
label = knkmeans(K,3); % (call as shown in the comments below)
MATLAB release MATLAB 7.9 (R2009b)
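For readers who want the algorithm itself rather than the MATLAB submission, here is a rough, simplified Python sketch of the core kernel k-means assignment step (my own illustration, not the submitted code; it handles empty clusters differently and takes the same K and k inputs described above):
import numpy as np

def kernel_kmeans(K, k, iters=100, seed=0):
    # K: n x n kernel matrix; k: number of clusters
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=n)   # random initial labels
    diag = np.diag(K)
    for _ in range(iters):
        dist = np.full((n, k), np.inf)
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if idx.size == 0:
                continue                   # an empty cluster is never chosen
            # ||phi(x_i) - mu_c||^2 = K_ii - 2*mean_j K_ij + mean_{j,l} K_jl
            dist[:, c] = diag - 2 * K[:, idx].mean(axis=1) + K[np.ix_(idx, idx)].mean()
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels
With a linear kernel K = X'X this reduces to ordinary k-means, which is the equivalence the description mentions.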
Comments and Ratings (10)
10 Jan 2012: Hi, Phillip, your computation is right, only not very efficient. Check my new code.
11 Apr 2011: Hello, I encountered the same problem as John; luckily I had the book, so I added the following code. The energy is the sum-of-squares clustering cost function. I have been optimizing my kernel hyper-parameters to minimise this energy. It has been working fairly well, so thanks. Not an expert, so I could be wrong.
for i = 1:size(K,1)
    A(i, label(i)) = 1;
end
energy = trace(K) - trace(sqrt(D)*A'*K*A*sqrt(D));
15 Nov 2010: The code appears broken to me:
>> load data;
K = x'*x; % use linear kernel
label = knkmeans(K,3);
scatterd(x,label)
??? Undefined function or variable 'val'.
Error in ==> knkmeans at 31
energy = sum(val)+trace(K);
20 Mar 2010: Hi Mathieu, as indicated in the description, this algorithm is explained in reference [1], Kernel Methods for Pattern Analysis by John Shawe-Taylor and Nello Cristianini.
23 Feb 2010: Mathieu, you can refer to machine learning and pattern recognition by Bishop, 2005. Alternatively this is free: www-stat.stanford.edu/~hastie/Papers/ESLII.pdf
23 Feb 2010: I see; reading the code, I do not manage to understand the principles behind the algorithm. Do you have a reference that I could get from the web, or do you advise buying the book?
11 Feb 2010: This happens for standard kmeans too, which is caused by the nature of the algorithm. The reason is that when you set a very big number for k, after several iterations some clusters might become empty.
11 Feb 2010: It seems that if I request N clusters, the algorithm outputs k clusters, k<=N, and most of the time k<<N. I was wondering if this is by construction. If so, could you provide me with an explanation?
25 Dec 2009 add sample data and detail description
30 Sep 2010 remove empty clusters
03 Feb 2012 fix a minor bug of returning energy
03 Feb 2012
03 Feb 2012 Improve the code and fix a bug of returning energy
|
{"url":"http://www.mathworks.com/matlabcentral/fileexchange/26182-kernel-k-means","timestamp":"2014-04-19T19:38:25Z","content_type":null,"content_length":"31828","record_id":"<urn:uuid:8439b662-8a0d-4def-b0be-450a6ff3a8d1>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is Euclidean space an affine space?
Hi everyone,
I have a question that I'm not sure about. I wanted to know if it is standard to think of Euclidean space as a linear vector space, or a (more general) affine space? In some places, I see Euclidean
space referred to as an affine space, meaning that the mathematical definition of the space allows us to make translations without affecting our system.
But on the other hand, if we have a Lagrangian like ##1/2 \ m v^2+mgy## then a translation ##y \rightarrow y+d## changes the Lagrangian to ##L \rightarrow L+mgd##. Now, I know that this does not
affect the physics of the system, since it does not matter if we add a constant to the Lagrangian. But when people talk about Noether's theorem, they say that due to the form of this Lagrangian, we
do not have 'translational invariance'. So are they including the Lagrangian as a physical observable of the system?? And so, is our space a linear vector space, not an affine space? Maybe they are
using the term 'translational invariance' to mean something different to the 'translational invariance' that allows us to call our space affine? (and if so, then that is pretty darn confusing).
thanks in advance :)
|
{"url":"http://www.physicsforums.com/showthread.php?t=713914","timestamp":"2014-04-19T12:34:06Z","content_type":null,"content_length":"20829","record_id":"<urn:uuid:d7bb1c7b-4501-44df-b621-6ed49f8cbe0a>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00583-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Laplace Transform: Evaluate L{f(t)} \[\begin {align*} f(t) &= 1 ,\ t \ge 0, \quad t \neq 1, \ t \neq 2 \\ &= 3,\ t = 1\\ &= 4,\ t = 2\end {align*} \] Would appreciate someone explaining how to set
this up and evaluate.
Is the 'del' symbolic of using the delta function?
Absolutely yes.
Could you please explain the logic you approach this with and how you handled the intervals where the function equaled a constant?
In my textbook, the delta function is not introduced for 4 more sections.
Do you know extended derivative with delta function?
Mathematicians say there is no derivative at a discontinuous point, but engineers say there is.
@ali110 is one of them.
So far what I know of piecewise functions is that L{f(x)} = L{f_1(x)} + L{f_2(x)} + L{f_3(x)}
Those should be f(t)...
no, there is no information in an alone point for Laplace, but I made derivative to make an information then used the Laplace.
How did you make a derivative of a function that has only a constant value?
L(f(t))=L(1)=1/s if for f(t)=t then L(f(t))=1/s^2
@ali110 is one of them.
mohan gholami in which of them?
If it never equals 't', then do I only have \[\frac{1}{s} +\frac{1}{s} +\frac{1}{s}\] ?
I believe he was saying that you were an engineer.
No, you have two points not two functions!
oh i am an electrical engineering student of 5th semester who got 71 marks out of 100 in laplace transform in his 4th semster:))) @eSpeX
Unfortunately Laplace transform doesn't sense points unless with delta function.
Laplace does not make sense to me on how to handle them, and with respect to this piecewise function I do not see how it will be done if we have not been shown the delta function. Is it something like (or similar to) the Heaviside step function?
According to the book, the answer is 1/s. Does this mean that the laplace of t=1 and t=2 equate to 0?
can we take laplace inverse at the end?
No the answer is just 1/s because Laplace transform can not sense limit points, and it just follow the infinity points which defined with delta function.
So you would have: L{f(t)} = L{1} + L{3} + L{4} -> 1/s + 0 + 0 ?
ok I solve it with integral. int(0 inf) f(t)=1/s
You know the integral change the limit points to continues function and never sense them.
f'(t) = 3del(t-1) - 3del(t-1) + 4del(t-2) - 4del(t-2) and f(0) = 1, so L(f'(t)) = 3e^-s - 3e^-s + 4e^-2s - 4e^-2s = sL(f(t)) - 1, hence L(f(t)) = (1/s)(0 + 0 + 1) = 1/s.
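(For reference, and not part of the thread: the same answer also falls straight out of the defining integral, because changing the integrand at finitely many points does not change the value of the integral:)
\[\mathcal{L}\{f\}(s)=\int_0^\infty e^{-st}f(t)\,dt=\int_0^\infty e^{-st}\cdot 1\,dt=\frac{1}{s},\qquad s>0.\]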
I thought in two points 1 and 2 it has jumped so they were two alone points.
And Laplace has problem with the single points.
Ah. Okay, I will see if I can't apply this to the rest of my problems. Thank you very much.
You're welcome.
If the points at 1 and 2 were jumps, the solution would be: f'(t) = 3del(t-1) + 4del(t-2) with f(0) = 1, so L(f'(t)) = 3e^-s + 4e^-2s = sL(f(t)) - 1, hence L(f(t)) = (1/s)(3e^-s + 4e^-2s + 1).
But at this point I would have needed to use the integral approach since we have not reached the delta function?
It appears that all of those examples have a range that the integral is evaluated over. So none of the laplace methods evaluate a point.
But @ali110 there is no unit step or delta function! I guess Oppenheim is the best reference. Isn't it?
Check page 11: every problem is solved as the writer shows that for t not equal to 1 and 2, as in the above question, F = 0. And Agha! I love Alan V. Oppenheim; I have taken all his video lectures on signals and systems. But in our engineering college we study the Indian professor Sumarjit Ghosh; check his book on signals and systems, and his treatment of Fourier series is more interesting than Oppenheim's.
L{f(t)} = L{1} + L{3} + L{4} -> 1/s + 0 + 0=1/s
@eSpeX I GUESS
@eSpeX CHECK laplace transform linearity property (in which one to one property)
Ok. And I should mention that don't use L(3)=0 because it is not true. You can write L*(3)=0 and define L* means Laplace for limit points.
Alright, I'll have to look up limit points then because I have not seen them yet as I recall.
You have to use the unit step function for parts 2 and 3: 3u(t-1) and 4u(t-2). For the first one I'd break it up: u(t-1) - u(t-1 minus) + u(t-1 plus) - u(t-2 minus), etc. With these there are direct transformations.
In EE the slope of the step function is an indication of bandwidth, if there was infinite bandwidth it would be a unit step
|
{"url":"http://openstudy.com/updates/50b18053e4b09749ccac6dfe","timestamp":"2014-04-17T10:00:24Z","content_type":null,"content_length":"139385","record_id":"<urn:uuid:383e4438-51f2-41f3-9588-683c5c64b71a>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
|
We will plot a surface two ways. First we will use easyviz. Consider the following code:
import numpy
from scitools import easyviz
x = numpy.arange(-8,8,.2)
xx,yy = numpy.meshgrid(x,x)
r = numpy.sqrt(xx**2+yy**2) + 0.01
zz = numpy.sin(r)/r
easyviz.surfc(x, x, zz)   # plotting call restored from the description below; exact options may differ
The function surfc takes a list of x coordinates, a list of y coordinates, and a numpy array z. It plots a surface that has height z[i,j] at the point (x[i], y[j]). Note the use of meshgrid, and vectorized numpy functions that let us evaluate \(\frac{\sin(\sqrt{x^2+y^2}+0.01)}{\sqrt{x^2+y^2}+0.01}\) over the grid very easily. We discussed meshgrid at the beginning when we were talking about numpy. Note that you can drag the plot around with your mouse and look at it from different angles.
We can make this plot look a bit nicer by adding some shading and nicer coloring and some labels as follows.
import numpy
Integer =int
from scitools import easyviz
x = numpy.arange(-8,8,.2)
xx,yy = numpy.meshgrid(x,x)
r = numpy.sqrt(xx**2+yy**2) + 0.01
zz = numpy.sin(r)/r
l = easyviz.Light(lightpos=(-10,-10,5), lightcolor=(1,1,1))
Let us now try to plot some vector fields. Consider the following code
import numpy
from scitools import easyviz
# (the grid and the quiver3 call below are restored from the description that
#  follows; the exact ranges and options are a guess)
xx, yy, zz = numpy.mgrid[-2:2:0.5, -2:2:0.5, -2:2:0.5]
easyviz.quiver3(xx, yy, zz, 0*xx, 0*yy, 0*zz + 1)
This should plot a vector field that points up everywhere. The arguments to quiver3 are six \(n\times n\times n\) arrays. The first three arrays are the locations of the vectors; that is, there will be a vector at \((xx[i,j,k],yy[i,j,k],zz[i,j,k])\) for \(0\le i,j,k < n\). The second three arrays are the directions, i.e., the vector at \((xx[i,j,k],yy[i,j,k],zz[i,j,k])\) points in the direction given by the corresponding entries of the last three arrays.
Now let us give some examples with MayaVi. First lets see how to plot a function like we did with easyviz.
import numpy
from mayavi.tools import imv
def f(x,y):
    r = numpy.sqrt(x**2 + y**2) + 0.01   # r was undefined in the original snippet
    return numpy.sin(r)/r
x = numpy.arange(-8, 8, .2)
imv.surf(x, x, f)   # call restored from the description below; exact form may differ
This will open mayavi, and display the plot of the function. The first two arguments to surf are arrays \(x\) and \(y\), s.t. the function will be evaluated at \((x[i],y[j])\). The last argument is
the function to graph. It probably looks a bit different than the easyviz example. Lets try to make it look similar to the easyviz example. First note that on the left there is a list of filters and
modules. Double-click the warpscalars button in the filters menu, and change the scale factor from \(1\) to say \(5\). This should redraw the graph similar to how easyviz drew it. There are quite a
few other options you can play around with. For example, next click on the module surfacemap, and you will see you can make the graph transparent by changing the opacity. You can also change it to a
wireframe or make it plot contours.
TODO: More examples
|
{"url":"http://www.sagemath.org/doc/numerical_sage/plotting.html","timestamp":"2014-04-18T00:14:31Z","content_type":null,"content_length":"20903","record_id":"<urn:uuid:6a3230a1-20a3-4349-8596-d9e3468a23c3>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Roots of cubic equation
October 10th 2009, 05:25 PM #1
Sep 2009
Roots of cubic equation
Kindly help me with this roots question...please
If the roots of the equation 4x³ + 7x² - 5x - 1 = 0 are α, β and γ, find the equation whose roots are
i. α + 1, β + 1 and γ + 1
ii. α², β², γ²
We have $x_1=\alpha, \ x_2=\beta, \ x_3=\gamma$. Then
1) $y_1=x_1+1, \ y_2=x_2+1, \ y_3=x_3+1$
In this case the easiest way is to let $y=x+1$. Then $x=y-1$ and replace x in the equation:
Now continue.
2) Let $S_1=y_1+y_2+y_3=x_1^2+x_2^2+x_3^2=(x_1+x_2+x_3)^2-2(x_1x_2+x_1x_3+x_2x_3)$
$S_2=y_1y_2+y_1y_3+y_2y_3=x_1^2x_2^2+x_1^2x_3^2+x_2^2x_3^2=(x_1x_2+x_1x_3+x_2x_3)^2-2x_1x_2x_3(x_1+x_2+x_3)$ and $S_3=y_1y_2y_3=(x_1x_2x_3)^2$.
Then the equation is $y^3-S_1y^2+S_2y-S_3=0$
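For concreteness, here is how the numbers come out for this particular cubic (worked out here as an illustration; it was not in the original replies). By Vieta's formulas, $\alpha+\beta+\gamma=-\tfrac{7}{4}$, $\alpha\beta+\beta\gamma+\gamma\alpha=-\tfrac{5}{4}$ and $\alpha\beta\gamma=\tfrac{1}{4}$.
(i) Substituting $x=y-1$ into $4x^3+7x^2-5x-1=0$ and expanding gives $4y^3-5y^2-7y+7=0$.
(ii) $S_1=\tfrac{49}{16}+\tfrac{10}{4}=\tfrac{89}{16}$, $S_2=\tfrac{25}{16}+\tfrac{7}{8}=\tfrac{39}{16}$ and $S_3=\tfrac{1}{16}$, so the required equation is $y^3-\tfrac{89}{16}y^2+\tfrac{39}{16}y-\tfrac{1}{16}=0$, i.e. $16y^3-89y^2+39y-1=0$.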
I'm having trouble with similar problems. How do you get:
|
{"url":"http://mathhelpforum.com/pre-calculus/107247-roots-cubic-equation.html","timestamp":"2014-04-20T03:33:01Z","content_type":null,"content_length":"46594","record_id":"<urn:uuid:7e608f3e-df5b-4cf0-b6ab-9ec1f51cdec3>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Distribution of Digits in Irrational Numbers - Physics and Mathematics
I wanted to come up with a fun and frivolous but scientific topic for my 100th thread, and while I was out on my jog this afternoon the following popped into my head:
Decimal numbers
come in two varieties:
• Finite decimals which have nothing but trailing zeros after some point (e.g. [imath]\frac 1 2 = 0.5[/imath])
• Infinite (or "unbounded") decimals which continue on indefinitely given some definition of how they are constructed (e.g. [imath]\frac 1 3 = 0.33333.....[/imath] or for those mathematicians out
there more correctly [imath]0.\overline{3}[/imath])
This latter group can then be divided into
Irrational Numbers
Now, an unbounded decimal is rational exactly when its digits eventually repeat, so the fraction [imath]\frac 1 3[/imath] repeats the three on and on endlessly. Irrational numbers, on the other hand, have digit sequences that never settle into a repeating pattern (which is not quite the same thing as being random).
Here's where my pondering came in: for any particular decimal number, you can create a distribution map of the digits in the decimal representation. For the finite representations, this is just
counting appearances so for the number [imath]\frac 5 {32} = 0.15625[/imath] you have the distribution of digits:
0: 0
1: 1
2: 1
3: 0
4: 0
5: 2
6: 1
7: 0
8: 0
9: 0
Which is not only limited but boring to boot!
With unbounded numbers, you have an infinite number of digits, so things get a bit more interesting. In order to "compute" the distribution, you need to start working with ratios of each digit to the
others. With unbounded rational numbers, this is relatively easy since you can take just the "repeating sequence" and count the occurrences of each digit and then divide by the length of the
sequence, so the number [imath]\frac {41} {333} = 0.123123123... = 0.\overline{123}[/imath] has the distribution:
0: 0
1: [imath]\frac 1 3[/imath]
2: [imath]\frac 1 3[/imath]
3: [imath]\frac 1 3[/imath]
4: 0
5: 0
6: 0
7: 0
8: 0
9: 0
This is obviously a skewed distribution!
On the other hand, the lovely repeating decimal [imath]0.\overline{0123456789} = \frac{123456789}{9999999999}[/imath] has a perfectly uniform distribution of digits. (The often-quoted [imath]\frac 1 {81} = 0.\overline{012345679}[/imath] just misses, because the digit 8 never appears in it.) Its distribution is:
0: [imath]\frac 1 {10}[/imath]
1: [imath]\frac 1 {10}[/imath]
2: [imath]\frac 1 {10}[/imath]
3: [imath]\frac 1 {10}[/imath]
4: [imath]\frac 1 {10}[/imath]
5: [imath]\frac 1 {10}[/imath]
6: [imath]\frac 1 {10}[/imath]
7: [imath]\frac 1 {10}[/imath]
8: [imath]\frac 1 {10}[/imath]
9: [imath]\frac 1 {10}[/imath]
When you bring in Irrational Numbers though, there's no immediately obvious way to figure this out. Being to lazy to do it right away, I thought I'd throw it out to all the math whizzes out there:
What are the distributions of digits in Irrational Numbers?
Can we make any interesting generalizations? Since the digits in an Irrational number never repeat, they *could* all be uniformly distributed, but are they? How about [imath]\pi[/imath]?
Any references to cool Abstract Algebra or Number Theory proofs that are relevant (or have solved this one completely!)?
Anyone want to contribute some code to test any theories?
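As a starting point for that experiment (added here; it is not part of the original post): a number whose digits are uniformly distributed in a given base is called simply normal in that base, and [imath]\pi[/imath], [imath]e[/imath] and [imath]\sqrt 2[/imath] are all conjectured, but not proven, to be normal. A quick standard-library Python check on [imath]\sqrt 2[/imath] (chosen because the decimal module can compute its square root to any precision) looks like this:
from collections import Counter
from decimal import Decimal, getcontext

getcontext().prec = 10000                 # number of significant digits to compute
digits = str(Decimal(2).sqrt())           # "1.4142135623..."
digits = digits.replace(".", "")[1:]      # drop the decimal point and the leading 1
counts = Counter(digits)
for d in "0123456789":
    print(d, counts[d], round(counts[d] / len(digits), 4))
In practice each ratio comes out close to 0.1, which is consistent with (but of course does not prove) normality.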
The creator of the universe works in mysterious ways. But he uses a base ten counting system and likes round numbers,
|
{"url":"http://www.scienceforums.com/topic/13015-distribution-of-digits-in-irrational-numbers/?p=213566","timestamp":"2014-04-16T19:38:18Z","content_type":null,"content_length":"110762","record_id":"<urn:uuid:312b10d2-d84c-4c39-a042-8510148d0a2f>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Help
March 4th 2013, 06:51 PM #1
Sep 2012
Given a right triangle ABC with the right angle at C, draw the line from vertex C perpendicular to the hypotenuse, meeting it at point H. A circle of radius r is inscribed in the triangle. Two more circles, of radii r[1] and r[2], are inscribed between the triangle, the line CH and the circle of radius r. Find r[1] in terms of r and b = AC, and r[2] in terms of r and a = BC.
|
{"url":"http://mathhelpforum.com/geometry/214232-triangle.html","timestamp":"2014-04-18T01:11:30Z","content_type":null,"content_length":"28953","record_id":"<urn:uuid:6bcb7dbd-3468-46ce-b884-aaa38f3562f7>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Track & Field News
Statistics books
Moving around, I need to thin out my bookshelf a bit. Is anyone interested in the following books:
USA T&F Annuals: 1996 and 1997
USATF Media Guide & FAST Annual: 2000 and 2001
ATFS Annual: 1998 and 1999
IAAF Statistical Handbook from Athens '97
IAAF World Athletics Series 1996-1999, Complete Results (book)
TAFWA All-Time World Indoors Lists (1997 and 2004)
I'm likely to recycle them, but figured I'd offer them here first. Cover the postage, and they (or any subset of them) are yours. First come first serve, or something.
Re: Statistics books
IAAF Statistical Handbook from Athens '97
How much are you selling this one for?
Re: Statistics books
clay, methinks they are being given away just for postage, at least that's how I interpreted the description...
Re: Statistics books
Do any of the USATF books contain full results from the US Olympic Trials? In particular, 1988 thru 2000. I'd be interested if they did.
Re: Statistics books
You can get the complete history of the OT in the fabulous Richard Hymans book published by USATF for $24 (new).
http://www.usatf.org/store/showProductD ... OLYTRLS-04
Re: Statistics books
Why am I always out of town when the good stuff appears on here??!
Re: Statistics books
Yes, just postage. I'll bring one by the PO today and find out what the rates are, and post here. I'd rather keep them out of a pulp mill than realize a profit (since I paid for none of them
directly); bear in mind that they're "as-is" and though I've removed as many of my sticky-notes as I could find, there may be the odd highlighting or scribbled note.
Clay, you have first call on the Athens book; anyone else, drop me an email (pjm@parkermorse.net).
OT results: Garry's probably got the best route to those. I *suspect* that the '97 and '01 FAST Annuals will have the '96 and '00 Trials, respectively, but I'll check on that when I'm home. (Does
USATF not have the full '00 results online? They should, if they don't. But perhaps my sense of what "should" be online is a bit distorted.)
Re: Statistics books
Ballpark postage is about $2 for a single book. Looks like the rate improves for multiples.
Re: Statistics books
You can write to me about the USA Guides and FAST Annuals. I still have some in stock. Not all years are sold out.
Scott Davis
Publisher - USATF Media Guide and FAST Annual
|
{"url":"http://www.trackandfieldnews.com/discussion/viewtopic.php?f=7&t=12869","timestamp":"2014-04-20T19:52:51Z","content_type":null,"content_length":"33417","record_id":"<urn:uuid:a72c3d55-4467-4dd1-a2ee-789ecb7b8b3c>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wheaton, IL Algebra 2 Tutor
Find a Wheaton, IL Algebra 2 Tutor
I have a Master's and Bachelor's in Mathematics and I've tutored elementary students while in high school and tutored high school and college students while in college and graduate school. I
enjoy tutoring students of all ages and especially enjoy helping students who have always struggled with mat...
10 Subjects: including algebra 2, geometry, statistics, GRE
...These guides have been used to improve scores all over the midwest. Science Reasoning is one of the most difficult sections of the ACT. Proper time management and attention to detail are the
keys to a high score.
24 Subjects: including algebra 2, calculus, physics, GRE
...Rhetorical questions can be tricky, and even subjective. I've developed an approach to these types of questions that make them considerably easier for students to tackle. I truly enjoy helping
students to master the Reading portion of the ACT test.
20 Subjects: including algebra 2, reading, English, writing
...Qualification: Masters in Computer Applications My Approach : I assess the child's learning ability in the first class and then prepare an individual lesson plan. I break down math problems
for the child, to make him/her understand in an easy way. I work with the child to develop his/her analytical skills.
8 Subjects: including algebra 2, geometry, algebra 1, ACT Math
...I enjoy discovering ways to make subject material simple to grasp and understand. Please message me if you think I can be of assistance. My education in math includes Algebra, Geometry,
Trigonometry, Calculus and medical-based Statistics.
13 Subjects: including algebra 2, geometry, biology, algebra 1
|
{"url":"http://www.purplemath.com/Wheaton_IL_algebra_2_tutors.php","timestamp":"2014-04-19T02:25:31Z","content_type":null,"content_length":"23909","record_id":"<urn:uuid:2fb12298-5537-4387-bb66-3927b86376f9>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Somerset, MD Trigonometry Tutor
Find a Somerset, MD Trigonometry Tutor
...I have worked with middle school and high school students in this subject. I am very patient and can help you become more confident in your math skills! I have tutored Algebra 2 for several
46 Subjects: including trigonometry, English, Spanish, algebra 1
...I completed AP Calculus BC through my Junior year of high school as well. Please feel free to contact with questions. I scored very high on my math portions of the SAT and ACT (750 out of 800
SAT, 35 out of 36 ACT). I have helped several other students with study tips and practice problems for these exams as well.
16 Subjects: including trigonometry, geometry, statistics, algebra 2
...I can effectively instruct all the math-related aspects of the GMAT. I am a former high school math teacher with well over 10 years of full time teaching & tutoring experience. I can also
assist with some chemistry and physics.
28 Subjects: including trigonometry, chemistry, calculus, physics
...Thank you, Yu. I'm a Chessmaster. I've been tutoring chess for the last 10 years outside of WyzAnt. I have a US Chess Federation ID number.
24 Subjects: including trigonometry, chemistry, reading, calculus
For more than 17 years, I have coached, taught and tutored many students for both high school and graduate tests such as the SAT, ACT, GRE, GMAT (math sections; Quantitative Reasoning, Math
subject test).I am a PhD candidate and have a Master's degree in engineering from USC. Also, I am a qualified...
15 Subjects: including trigonometry, calculus, statistics, GRE
|
{"url":"http://www.purplemath.com/somerset_md_trigonometry_tutors.php","timestamp":"2014-04-21T00:01:49Z","content_type":null,"content_length":"24098","record_id":"<urn:uuid:dcf539c3-b2aa-496e-a18d-ea57e0c40f78>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
|
In Python you have the following arithmetic operators:
Symbol Name
+ Addition
- Subtraction
* Multiplication
/ Division
// Floor division
% Modulo
** Power
Now, let's try to use Python as a calculator. We will calculate $3*8/6$:
Here are a few more examples:
The rules of arithmetic precedence are the same as with calculators; Python evaluates expressions from left to right, but things enclosed in brackets are evaluated first:
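The original page's interactive code cells are not reproduced here; a plain Python session covering the calculator examples and the precedence rule above might look like this (results shown in comments):
print(3 * 8 / 6)      # 4.0  (in Python 3, / always returns a float)
print(7 // 2, 7 % 2)  # 3 1  (floor division and remainder)
print(2 ** 10)        # 1024 (power)
print(2 + 3 * 4)      # 14   (multiplication binds tighter than addition)
print((2 + 3) * 4)    # 20   (things enclosed in brackets are evaluated first)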
You can also do some calculations using built-in elementary functions. For this we will need to import a module (i.e. a piece of code) which contains such functions. For elementary functions, we will need to import the module math. Let's calculate $\sqrt{100}$:
Note we call “math.sqrt”, not simply “sqrt”.
Alternatively, you can simply import all of its functions with "from math import *" (in this case you do not need to type "math." in front of each call):
Let us consider a logarithmic function:
Exercise: Change 100 to some other number and check the output
The math module provides access to mathematical constants and functions.
How do we know which functions can be used? There is a special command “dir()” which prints all implemented Python functions in the module “math”:
Run this commands and you will see familiar math functions of Python
Let us find out the maximum value for integer numbers:
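The interactive cell that followed is missing from this copy. Note that in Python 3 integers have no fixed maximum at all (the old Python 2 sys.maxint, which this page may originally have shown, no longer exists); the related limits can be inspected like this:
import sys

print(sys.maxsize)         # e.g. 9223372036854775807 on a 64-bit build
print(2 ** 100)            # arbitrary-precision integers just keep growing
print(sys.float_info.max)  # largest representable float, about 1.8e308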
|
{"url":"http://jwork.org/learn/doc/doku.php?id=python:arithmetic","timestamp":"2014-04-19T13:01:43Z","content_type":null,"content_length":"26691","record_id":"<urn:uuid:58eb4e89-c910-49f7-a5a2-d8ce22bdec2c>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00357-ip-10-147-4-33.ec2.internal.warc.gz"}
|
- Physic college
Posted by meera on Friday, October 1, 2010 at 7:03am.
The equation for the change of position of a train starting at x = 0 is given by x = (1/2)at^2 + bt^3. Find the dimensions of a and b.
If y = C1 sin(C2 t), where y is a distance and t is the time, what are the dimensions of C1 and C2?
• - Physic college - bobpursley, Friday, October 1, 2010 at 9:00am
x=1/2 a t^2 + bt^3
if at^2 is distance, then a must be distance/time^2
b has to be distance per time^3
If y is distance, C1 must be distance. C2 must be 1/time
• - Physic college - meera, Friday, October 1, 2010 at 10:34am
if at^2 is distance, then a must be distance/time^2
okay but from where can i get distance ?!
i mean as a number !
also here sin (C2 t) i think it is constant so we can't take dimension or i am wrong :(
Again Thank u a lot :D
• - Physic college - bobpursley, Friday, October 1, 2010 at 10:46am
You are solving dimensional analysis here, just the dimensions. There are no numbers.
The argument in a trig function is in radians, So C2*time must divide out all units, so C2 is 1/time.
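Summarizing the dimensional bookkeeping explicitly (added for clarity, with L for length and T for time): [x] = L and [a t^2] = L, so [a] = L T^-2; likewise [b t^3] = L, so [b] = L T^-3. Since [y] = L, [C1] = L, and because the argument of sin must be dimensionless, [C2 t] = 1 and [C2] = T^-1.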
• - Physic college - meera, Saturday, October 2, 2010 at 3:48pm
Should we divide it? I mean, solve it like x = (1/2)at^2 and x = bt^3 and take each part individually?
How can I get rid of the (sin) in this formula?
Also, should I handle that part individually here?
• - Physic college - sa, Sunday, September 18, 2011 at 9:07pm
|
{"url":"http://www.jiskha.com/display.cgi?id=1285930982","timestamp":"2014-04-20T19:03:22Z","content_type":null,"content_length":"9931","record_id":"<urn:uuid:904a837a-2fbf-4a39-95eb-6fb18a33fb60>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
|
21-110: Working systematically
We often want to make a complete list of all of the different types of some object, or search for a solution to a puzzle by trial and error. A very helpful technique to follow in cases such as these
is to work systematically. Unless the problem is especially simple, it will often save you a lot of frustration if you have a systematic way to record what you have done and to determine what to try
How I listed the 35 free hexominoes
I wanted to make a list of all 35 of the free hexominoes. How did I do it?
At first I just drew hexominoes more or less randomly. By the time I had about 20 of them, however, it was becoming rather difficult to make sure I wasn’t listing duplicates, and when I got stuck and
couldn’t think of any more, I had no idea what kind of shapes I should be looking for.
Idea 1: Classifying hexominoes according to their longest line of squares
I realized that I needed a more systematic way of listing the hexominoes. So I started over, this time more carefully. I began with the hexomino consisting of six squares in a straight line:
Then I listed all the hexominoes having five squares in a straight line. This was pretty easy, because I only had to think of all the ways to add one square to a line of five. I got three more:
Next I listed all the hexominoes having four squares in a straight line. I did this by thinking of all the ways I could add two squares to a line of four. I found 12 more hexominoes like this:
Now, I could have moved on to listing all hexominoes having three squares in a straight line, but I thought, “If I do this, I am going to have to think of all the ways to add three squares to a line
of three, and it seems that I have lost the advantage I had previously, where I had to add only a few squares to a basic ‘skeleton.’ But I think I have listed a lot of the hexominoes—it shouldn’t be
very hard to figure out all the rest.” So I started listing hexominoes randomly again:
But it didn’t take long before I was stuck again, and as before I had no idea what I was missing. I needed a better idea.
Idea 2: Extending pentominoes
I reviewed my first idea, and I saw that the reason it was easy to list the hexominoes with five squares in a line was that I was adding only one square to a starting shape. So I tried to use this
idea by thinking about adding one square to each of the 12 free pentominoes.
First I had to list the pentominoes. There are only 12 of these, so I was able to list them in a few minutes without requiring a systematic approach. The 12 free pentominoes are shown below, in the
order I found them.
Then I examined each pentomino individually and considered all the ways to add a single square somewhere around its perimeter. Rather than restarting my list from scratch, I just added onto the list
I found using my first idea. The first thing I discovered was that I had missed one of the hexominoes with four squares in a line!
It took me a while to work through all of the pentominoes, adding squares all the way around their perimeters, but I had a system to follow, and so I was never at a loss for what to do next. (It was
still difficult to avoid duplicates, and a few of them slipped by before I caught them later.) My completed list of all 35 free hexominoes is shown below.
Idea 3: Classifying hexominoes by layers
A few days later I wanted a list of the free hexominoes again, but I was at school and had left my list at home. So I started with Idea 1 again, this time being more systematic about the polyominoes
with three squares in a line. But when I got to 34 hexominoes, I couldn’t figure out what I was missing, so I tried a different idea.
I imagined each hexomino as made up of several layers, and then examined the various ways a hexomino could be formed in these ways. I decided that I would rotate each of the hexominoes so that the
“long” dimension (when this made sense) would be horizontal. I made the following table.
Layers Squares in each layer Hexominoes
One 6
1 + 5
Two 2 + 4
3 + 3
1 + 1 + 4
1 + 2 + 3
Three 1 + 3 + 2
1 + 4 + 1
2 + 1 + 3
2 + 2 + 2
After a bit of thought, I saw that it was impossible for a hexomino to have four layers along its “short” dimension, because six squares simply aren’t enough to make a polyomino measuring 4 × 4.
Note that I organized the table in a systematic way: first by the number of layers, then by the number of squares in each of the layers, from top to bottom. I didn’t have to include possibilities
such as 3 + 2 + 1, because that’s the same as 1 + 2 + 3 (since these are free hexominoes). There is also a systematic way the hexominoes in each row are ordered (can you see it?), so that I could be
sure I wasn’t missing any or listing any twice. I did not include the 2 × 3 rectangle (
After I finished my table, I counted and found that I had listed 40 hexominoes! So apparently I had duplicates after all. Crossing out these duplicates gave me my final list. Can you find them? Do
you see why these hexominoes appeared more than once in my table, despite my careful efforts to avoid duplicates? Can you think of an additional rule I could have followed to avoid this duplication?
Listing other polyominoes
The following table gives the number of polyominoes of the first few sizes. Can you use a systematic approach to list all of the free heptominoes? (This will take some work!)
Name Area Fixed polyominoes One-sided polyominoes Free polyominoes Free polyominoes with holes Free polyominoes without holes
Monomino 1 1 1 1 0 1
Domino 2 2 1 1 0 1
Tromino 3 6 2 2 0 2
Tetromino 4 19 7 5 0 5
Pentomino 5 63 18 12 0 12
Hexomino 6 216 60 35 0 35
Heptomino 7 760 196 108 1 107
Octomino 8 2,725 704 369 6 363
Nonomino 9 9,910 2,500 1,285 37 1,248
Decomino 10 36,446 9,189 4,655 195 4,460
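As a check on the counts in the table above (and a starting point for the heptomino exercise), here is a small Python sketch, written for this page rather than taken from it, that counts free polyominoes by growing them one square at a time and keeping a canonical form of each shape under the eight rotations and reflections:
def canonical(cells):
    def normalize(cs):
        minx = min(x for x, y in cs)
        miny = min(y for x, y in cs)
        return tuple(sorted((x - minx, y - miny) for x, y in cs))
    variants = []
    cs = list(cells)
    for _ in range(4):
        cs = [(y, -x) for x, y in cs]                         # rotate 90 degrees
        variants.append(normalize(cs))
        variants.append(normalize([(-x, y) for x, y in cs]))  # and its mirror image
    return min(variants)

def free_polyominoes(n):
    shapes = {canonical([(0, 0)])}                            # start from the monomino
    for _ in range(n - 1):
        bigger = set()
        for shape in shapes:
            occupied = set(shape)
            for x, y in shape:
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if (nx, ny) not in occupied:
                        bigger.add(canonical(occupied | {(nx, ny)}))
        shapes = bigger
    return len(shapes)

print([free_polyominoes(n) for n in range(1, 8)])   # [1, 1, 2, 5, 12, 35, 108]
The output matches the "free polyominoes" column, and collecting the size-6 shapes instead of just counting them reproduces the list of 35 hexominoes.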
Backtracking is a very useful approach to solving certain problems. It is essentially a systematic form of trial and error. The advantage of backtracking over random guessing is that backtracking
gives you a way to keep track of the things you’ve tried so far and a guide for what you should try next.
The eight queens puzzle
Can you place eight queens on an 8 × 8 chessboard in such a way that no queen is attacking any other queen? (A queen on a chessboard can attack any piece on a square it can reach by moving in a
straight line vertically, horizontally, or diagonally.)
A good way to solve this puzzle is by backtracking. Since we know that no two queens can occupy the same column, there must be one queen in each column. So we can start by placing a queen in the
first square of the first column.
Next we want to place a queen somewhere in the second column. The position of the first queen makes some of the squares in the second column off limits, but there are six squares available for the
second queen; we place the second queen in the first available square.
We can continue in this way, placing each queen in the first available square in the appropriate column (keeping in mind that the squares in each column are restricted by the positions of all
previously placed queens). If we do this, we can fill up the first five columns:
But now we are stuck. There is no available square in the sixth column—every square in the sixth column is under attack by at least one of the five queens already on the board. So we backtrack. We
undo the last thing we did, which was to place the fifth queen in the fourth row, and we try the next available option: placing it in the eighth row.
However, we have a problem again: there is still no place to put the sixth queen. Since we have exhausted all of the possibilities for the fifth queen (with the first four queens in their current
positions), we backtrack, undoing the placement of the fourth queen, and trying the next possibility.
Now we can go as far as placing the seventh queen before we get stuck with no place to put the eighth:
Since we are stuck, we backtrack again, undoing the placement of the seventh queen, trying the next possibility, and continuing.
Can you continue this process to find a solution to the eight queens puzzle?
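Here is one way to turn that process into code, as a small Python sketch written for this page (it is not part of the original handout): place one queen per column, try the rows of each column in order, and back up whenever a column has no safe row.
def solve(n=8):
    rows = []                          # rows[c] = row of the queen in column c
    def safe(col, row):
        return all(row != r and abs(row - r) != col - c
                   for c, r in enumerate(rows))
    def place(col):
        if col == n:
            return True                # all queens placed
        for row in range(n):
            if safe(col, row):
                rows.append(row)
                if place(col + 1):
                    return True
                rows.pop()             # backtrack: undo this queen, try the next row
        return False                   # no row works, so the caller must backtrack
    place(0)
    return rows

print(solve())   # e.g. [0, 4, 7, 5, 2, 6, 1, 3], one queen per column, no attacks
The sequence of append and pop calls is exactly the "place, get stuck, undo, try the next square" story told above.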
More applications of backtracking
Backtracking is often used to search for solutions to problems that require the arrangement or selection of objects according to certain rules. Here are a few types of puzzles that can be approached
using a backtracking technique.
• Form a 6 × 10 rectangle from the 12 free pentominoes.
• Use tetrominoes (as many copies of each piece as you need) to tile a rectangular grid with some squares removed. There is a very cool “Tetris Tiler” at http://www.gfredericks.com/main/sandbox/
tetris that does exactly this. Play around with it—when it gets stuck, you can see it backtracking.
• Find your way through the following maze (from http://www.xefer.com/maze-generator). (This maze generator will also solve the mazes it produces, using a backtracking technique. Try it!)
• Write the number 21110 as the sum of three squares (in other words, find three integers a, b, and c so that 21110 = a^2 + b^2 + c^2).
• Fit words from a word list into a crossword grid (see Problem 6 on Homework 2).
• Solve the “knapsack problem” (see Problem 7 on Homework 2).
• Analyze the game of Tic-Tac-Toe. What is the best first move to make? What is the best second move?
• How many ways are there to tile a 2 × 3 checkerboard with fixed polyominoes, of any size? (There are 12 ways to tile a 2 × 2 checkerboard with fixed polyominoes.)
Last updated 25 January 2010. Brian Kell <bkell@cmu.edu>
|
{"url":"http://www.math.cmu.edu/~bkell/21110-2010s/systematically.html","timestamp":"2014-04-16T21:58:53Z","content_type":null,"content_length":"31160","record_id":"<urn:uuid:a7678913-51c9-4a1f-b072-82964c7d6323>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is the difference/significance between the Pythagorean Theory and Distance Formula? - Homework Help - eNotes.com
What is the difference/significance between the Pythagorean Theory and Distance Formula?
In a Cartesian two dimensional plot, the distance d between two points `(x_1,y_1)` and `(x_2,y_2)` is `d=sqrt((x_2-x_1)^2+(y_2-y_1)^2)`
Consider the two points in a coordinate system.
(1) If the x coordinates are the same the points lie on a vertical line. The distance between points on a line is `|y_2-y_1|=sqrt((y_2-y_1)^2)` which is the distance formula where the difference of
the x values is zero.
(2) Similarly, if the y coordinates are the same the points lie on a horizontal line and the distance is `d=|x_2-x_1|=sqrt((x_2-x_1)^2)`
(3) If both of the x and y coordinates differ then you can connect the two points with a line segment. Without loss of generality assume that `x_1<x_2,y_1<y_2` . Label `(x_1,y_1)=A,(x_2,y_2)=B`
Draw a horizontal line through `x_1` and drop a perpendicular line from `x_2` to the horizontal line. Label the intersection C. ABC is a right triangle (the created segments are perpendicular.) By
the Pythagorean theorem `AC^2+BC^2=AB^2` where d=AB.
Then `d=sqrt(AB^2)=sqrt(AC^2+BC^2)`
`AC=x_2-x_1` and `BC=y_2-y_1` . Substituting we get
`d=sqrt((x_2-x_1)^2+(y_2-y_1)^2)` which is the distance formula.
In three dimensions we can use the generalized distance formula and repeated uses of the Pythagorean theorem.
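To make the three-dimensional remark concrete (a short sketch added here, in the same notation): first apply the planar formula to the horizontal displacement to get `h=sqrt((x_2-x_1)^2+(y_2-y_1)^2)`, then apply the Pythagorean theorem once more to the right triangle with legs h and `z_2-z_1`, giving `d=sqrt(h^2+(z_2-z_1)^2)=sqrt((x_2-x_1)^2+(y_2-y_1)^2+(z_2-z_1)^2)`.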
So the Pythagorean theorem can be used to find the distance between two points in a 2-dimensional real coordinate system if both the x and y values differ. Otherwise, there is no triangle. The
distance formula can be used in any case of two points in a real 2-dimensional coordinate system.
|
{"url":"http://www.enotes.com/homework-help/what-difference-significance-between-pythagorean-439534","timestamp":"2014-04-20T03:37:08Z","content_type":null,"content_length":"26387","record_id":"<urn:uuid:2667e800-3b0b-4e2b-8497-e781ab6d6f11>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
|
question on finite sets
May 20th 2009, 06:32 AM #1
May 2009
question on finite sets
I'm reading through a book on Real Analysis (Intro.) and got a question on finite sets. The definition I'm using for a finite set is: a set A is finite iff there is a one-to-one function f on Nk
onto A (Nk is read N sub k where k is a subscript and Nk is a subset of the natural numbers from 1 to k) for some k.
Let A = {1, 2, 3, 2}, Nk = {1, 2, 3, 4}, and assume that f is bijective from Nk onto A. Observe that f(2) = f(4) implies 2 ≠ 4. It appears that f is not bijective. Is A finite or not? If A is
finite, could you describe a bijective function f from Nk onto A.
I'm reading through a book on Real Analysis (Intro.) and got a question on finite sets. The definition I'm using for a finite set is: a set A is finite iff there is a one-to-one function f on Nk
onto A (Nk is read N sub k where k is a subscript and Nk is a subset of the natural numbers from 1 to k) for some k.
Let A = {1, 2, 3, 2}, Nk = {1, 2, 3, 4}, and assume that f is bijective from Nk onto A. Observe that f(2) = f(4) implies 2 ≠ 4. It appears that f is not bijective. Is A finite or not? If A is
finite, could you describe a bijective function f from Nk onto A.
Hi klmsurf.
Yes, $A$ is finite, but you want to take $N_3=\{1,2,3\},$ not $N_4.$ Then f is a bijection $A\to N_3$ with, e.g., $1\mapsto1,\ 2\mapsto2,\ 3\mapsto3.$
If a bijection exists between two sets, we say they have the same cardinality. A = {1, 2, 3, 2} has exactly four elements, regardless of the distinctness between the elements. Can you explain why
the cardinality of A = {1, 2, 3, 2} is 3? Maybe I'm missing a definition or proposition on sets.
$A = \{1, 2, 3, 2\}= \{1, 2, 3\}$ it just has an element listed twice.
$B = \{c,c,c,d,c,d\}= \{c,d\}$ has only two elements.
May I suggest that you study a foundations of mathematics text before you try Intro to Analysis.
May 20th 2009, 06:36 AM #2
May 20th 2009, 08:46 AM #3
May 2009
May 20th 2009, 09:03 AM #4
|
{"url":"http://mathhelpforum.com/differential-geometry/89783-question-finite-sets.html","timestamp":"2014-04-18T01:54:50Z","content_type":null,"content_length":"40872","record_id":"<urn:uuid:50747a95-fe85-40e3-b7a9-4beb5dda4e74>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Starting from rest, a 15 kg object on a horizontal surface acquires kinetic energy due to a constant horizontal force of 300 N. Find the kinetic energy acquired after the object has travelled 6 m if the coefficient of friction is 0.10.
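No replies are recorded in this copy of the thread. For reference (added here, taking g = 9.8 m/s^2), the work-energy theorem gives the answer directly, since the object starts from rest and the kinetic energy acquired equals the net work done: \[KE = (F - \mu m g)d = (300 - 0.10 \times 15 \times 9.8) \times 6 \approx 1.7 \times 10^{3}\ \text{J}.\]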
|
{"url":"http://openstudy.com/updates/50711f83e4b0c2dc8340c88d","timestamp":"2014-04-20T00:48:23Z","content_type":null,"content_length":"96733","record_id":"<urn:uuid:16b689f4-49c7-42c1-b81b-17857a7e7a29>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematics Tutors
Allston, MA 02134
Physics and mathematics tutor; experienced with applications
I am an experienced high school physics and mathematics teacher, retired, with current Massachusetts certification in both areas. I have taught various levels of mathematics--through calculus--as standalone courses and as part of the content of physics courses,...
Offering 7 subjects including algebra 1, algebra 2 and calculus
|
{"url":"http://www.wyzant.com/Hyde_Park_MA_Mathematics_tutors.aspx","timestamp":"2014-04-23T16:15:25Z","content_type":null,"content_length":"62059","record_id":"<urn:uuid:b364d703-0a09-43b4-8b83-414f7849e125>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: RE: st: RE: Automatically changing -ylabel()- values using -grap
Re: RE: st: RE: Automatically changing -ylabel()- values using -graph-
From "Clive Nicholas" <Clive.Nicholas@newcastle.ac.uk>
To statalist@hsphsun2.harvard.edu
Subject Re: RE: st: RE: Automatically changing -ylabel()- values using -graph-
Date Fri, 10 Jun 2005 01:42:35 +0100 (BST)
Scott Merryman replied:
> 1. This is a panel data set with 140 unique ids so -twoway line- will
> produce 140 lines per variable which will be rather messy, maybe even
> violent.
I chose -abdata- because it was the only -webuse- data I could find that
contained plentiful data on several percentage variables, much like the
ones I have in my own data. I also chose it because I thought that Nick
Cox was suggesting to me that it might be instructive to use such data so
that all interested parties (at least, those who have -mylabels-!) could
replicate the problem I was having for themselves (as you've very kindly
done: thanks very much). That said, I should have -collapse-d the data
before running the graphs.
> 2. Why are you manually specifying the ylabel option when -mylabels-
> will do it for you?
Initially, I was doing that, and I'm not sure I could give a cogent,
intelligent answer as to why I didn't do so there. Let's call it an
> 3. In your ylabel option, should it not be: ylabel(1 "100" .75 ....)
> not ylabel(.1 "100" .75 ....).
Yes, it should, but that's settled by -ylabel(`label100'...)-, as you
point out.
> 4. Does this produce the graph you want:
> use http://www.stata-press.com/data/r8/abdata.dta,clear
> collapse (mean) emp wage, by(year)
> replace emp= emp/100
> replace wage= wage/100
> mylabels 0(25)100, myscale(@/100) local(label100)
> twoway line emp wage year , xtitle("") ///
> ylabel(`label100', angle(0)) ///
> clpattern(dash) scheme(s1mono)
It certainly produces the correct _graph_ (i.e., only two lines, and
scaled in the right proportions). But notice that the y-axis labels are
_still_ normalized to the 0-1 scale (do you get the same?), when they
should be labelled from 0-100.
I take very seriously Nick Cox's rejoinder that you get in Stata what you
ask it to do, but I'm now running what everybody would now agree is the
correct code, and I'm still not getting quite the desired output. I'm not
sure what else I can say.
CLIVE NICHOLAS |t: 0(044)7903 397793
Politics |e: clive.nicholas@ncl.ac.uk
Newcastle University |http://www.ncl.ac.uk/geps
Whereever you go and whatever you do, just remember this. No matter how
many like you, admire you, love you or adore you, the number of people
turning up to your funeral will be largely determined by local weather
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
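For readers outside Stata, here is a rough matplotlib sketch of the relabeling idea that -mylabels- automates: the data stay on the 0-1 scale while the tick labels read 0(25)100. The series and variable names below are illustrative stand-ins, not values taken from abdata.

import numpy as np
import matplotlib.pyplot as plt

year = np.arange(1976, 1985)
emp = np.linspace(0.2, 0.9, len(year))     # stand-in for the collapsed emp/100 series
wage = np.linspace(0.3, 0.7, len(year))    # stand-in for wage/100

fig, ax = plt.subplots()
ax.plot(year, emp, label="emp")
ax.plot(year, wage, linestyle="--", label="wage")

ticks = np.arange(0, 1.01, 0.25)           # tick positions on the 0-1 data scale
ax.set_yticks(ticks)
ax.set_yticklabels([f"{100 * t:g}" for t in ticks])  # ...but labelled 0, 25, ..., 100
ax.legend()
plt.show()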
|
{"url":"http://www.stata.com/statalist/archive/2005-06/msg00283.html","timestamp":"2014-04-18T02:58:52Z","content_type":null,"content_length":"8049","record_id":"<urn:uuid:99207d01-31df-435b-b875-0769d2f2f61f>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
|
piecewise function
dilou wrote:
> I need your help for implementing the pwl in FPGA using VHDL. In a
> study I have found this approximation of the sigmoid function:
> It can be approximated as:
> y_i(v) = m_i (v - v_{i-1}) + n_{i-1}, for v in [v_{i-1}, v_i]
> n_i = m_i (v_i - v_{i-1}) + n_{i-1}, i = 1, 2, 3, ...
> with v_0 = 0, n_0 = 0
> i : number of sections of the interval of (v) of the function
> yi : linear approximation of the function in the section i
> mi : slope in the section i
> ni : ordered in the origin of section i
> It said that the hardware for implementing this version is:
> · comparators to determine the region,
> · multipliers,
> · an adder, and a set of registers to store the different values
> of slope and offset (the m_i and n_i).
> My problem is:
> I don't know how can I choose the slope and the interval, also if
> this method is best or no?
> Mu project is to implement a Hopfield network with the pwl function
> instead of the look up table function.
I think you should look at my other reply. A sequence of the sort:
if (value >= v0 and value < v1) then
-- Use region v0 to v1 slope and intercept to calculate
elsif (value >= v1 and value < v2) then
-- Use region v1 to v2 slope and intercept
elsif (...
-- Last region
end if;
will generate comparators to check "value" against the breakpoints v0, v1, ... vn,
and within each section you will multiply and add to get the output. As far as I
can tell, that will generate the comparators, multipliers, and summation structure
you describe.
Apart from this, I don't know what more to tell you. If the syntax
looks unfamiliar, I would advise cracking open a book on VHDL.
Best regards,
Mark Norton
Mark Norton <(E-Mail Removed)>
Concept Development, Inc.
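As a rough illustration of how the breakpoints and slopes can be chosen (my own sketch in Python rather than VHDL; the uniform grid over [0, 8] and the four sections are assumptions, not something stated in the original post):

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def pwl_table(v_min=0.0, v_max=8.0, sections=4):
    # Breakpoints v_i, chord slopes m_i and left-end values n_{i-1} for v in [v_min, v_max].
    v = np.linspace(v_min, v_max, sections + 1)
    y = sigmoid(v)
    m = np.diff(y) / np.diff(v)      # slope of each section
    n = y[:-1]                       # value at the left end of each section
    return v, m, n

def pwl_eval(x, v, m, n):
    # Evaluate y_i(x) = m_i (x - v_{i-1}) + n_{i-1} in the section containing x.
    i = int(np.clip(np.searchsorted(v, x, side="right") - 1, 0, len(m) - 1))
    return m[i] * (x - v[i]) + n[i]

v, m, n = pwl_table()
worst = max(abs(pwl_eval(x, v, m, n) - sigmoid(x)) for x in np.linspace(0, 8, 81))
print("max error on a fine grid:", worst)

Using more (or non-uniform) sections near the steep part of the sigmoid reduces the error; the cost is more comparators and more stored slope/offset pairs.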
|
{"url":"http://www.velocityreviews.com/forums/t294278-piecewise-function.html","timestamp":"2014-04-17T12:30:44Z","content_type":null,"content_length":"42893","record_id":"<urn:uuid:d5943d28-98c4-4d87-84c7-030b41a926d8>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
|
April 21st 2010, 07:33 AM
show if A and B are connected in (M,d) with AnB!=empty set, AUB is also connected.
that just seems obvious.....how would I go about proving this one?
April 21st 2010, 03:33 PM
Let's prove it in general. Let $X$ be a topological space and $\left\{U_\alpha\right\}_{\alpha\in\mathcal{A}}$ be a collection of connected subspaces of $X$ such that $\bigcap_{\alpha\in\mathcal{A}}U_\alpha\ne\varnothing$. Then, $\bigcup_{\alpha\in\mathcal{A}}U_\alpha=\Omega$ is connected. To see this, suppose not and that $(E\cap \Omega)\cup (G\cap\Omega)=\Omega$ is a separation with $E,G\subseteq X$ open. Then, since each $E\cap \Omega,G\cap \Omega\ne\varnothing$, we must have that a part of some $U_{\alpha_1}$ and $U_{\alpha_2}$ is in each. But $U_{\alpha_1}\cap U_{\alpha_2}\ne\varnothing$; this in particular implies that $E\cap U_{\alpha_1},G\cap U_{\alpha_1}\ne\varnothing$, they are open in $U_{\alpha_1}$, and evidently $(E\cap U_{\alpha_1})\cup (G\cap U_{\alpha_1})=U_{\alpha_1}$, but this clearly contradicts that $U_{\alpha_1}$ is connected. The conclusion follows.
April 21st 2010, 05:58 PM
I don't quite understand the proof, sorry
Why is $(E\cap \Omega)\cup (G\cap\Omega)=\Omega$ a separation?
Do we get $E\cap \Omega,G\cap \Omega\ne\varnothing$ since any intersection of connected sets in X aren't empty?
April 21st 2010, 06:29 PM
I don't quite understand the proof, sorry
Why is $(E\cap \Omega)\cup (G\cap\Omega)=\Omega$ a separation?
I'm arguing by contradiction, so I'm assuming it's a separation.
Do we get $E\cap \Omega,G\cap \Omega\ne\varnothing$ since any intersection of connected sets in X aren't empty?
Yes, if it were to be a separation, $E\cap \Omega,G\cap \Omega$ must be non-empty, open and disjoint. I forgot to mention disjointness, but since $E\cap \Omega,G\cap \Omega$ are disjoint, evidently so are $E\cap U_{\alpha_1},G\cap U_{\alpha_1}$
April 21st 2010, 06:53 PM
Here, I think I can say it so you'll better understand.
Let $E,G\subseteq X$ be open and be such that $\left(E\cap \Omega \right)\cap\left(G\cap \Omega\right)=\varnothing$ and $\left(E\cap \Omega\right)\cup\left(G\cap\Omega\right)=\Omega$. We claim that at least one of $E\cap\Omega,G\cap\Omega$ must be empty. Since the union of their intersections with $\Omega$ is non-empty (assuming $\Omega$ is non-empty, but then the conclusion is immediate) we must have that at least one of them is non-empty; assume WLOG that it's $E$. Then, we have that $U_{\alpha_1}\cap E\ne\varnothing$ for some $\alpha_1\in\mathcal{A}$. But, it must be true that $U_{\alpha_1}\subseteq E\cap\Omega$, otherwise we'd have that some point of $U_{\alpha_1}$ is not in $E\cap\Omega$, and since $E\cap\Omega,G\cap\Omega$ cover $\Omega$ it must be that that point is in $G\cap\Omega$. In other words, $G\cap U_{\alpha_1}$ and $E\cap U_{\alpha_1}$ are both non-empty. But, they are apparently disjoint, open in $U_{\alpha_1}$, and their union is $U_{\alpha_1}$. This clearly contradicts that $U_{\alpha_1}$ is connected, from where it follows that $U_{\alpha_1}\subseteq E\cap \Omega$.
Now, the rest is easy. For, suppose that $G\cap \Omega$ is non-empty. Then, by the previous analysis it follows that $U_{\alpha_2}\cap (G\cap \Omega)\ne\varnothing$ and so $U_{\alpha_2}\subseteq G\cap \Omega$. But, this is a contradiction since $G\cap\Omega$ and $E\cap\Omega$ are disjoint and $U_{\alpha_1}\cap U_{\alpha_2}$ is non-empty. It follows that $G\cap\Omega$ must be empty and thus no separation of $\Omega$ is possible. The conclusion follows.
April 21st 2010, 06:57 PM
I think I kind of get it....
But if I just had 2 sets, like A and B, I would say suppose AUB is disconnected.
So there exist sets X and Y such that
XnAUB != YnAUB != empty
XnAUB U YnAUB = AUB
XnAUB n YnAUB = empty
so some part of A, B is in X, and also some other part of A, B is in Y since XnAUB, YnAUB is not empty and their intersection is empty (so the same part of A, B can't be in both Y and X)
so XnA, YnA is not empty
so XnA U YnA = A??
So A is disconnected because XnA and YnA are disjoint?
And same goes for B
Is this correct? :)
April 21st 2010, 06:59 PM
I think I kind of get it....
But if I just had 2 sets, like A and B, I would say suppose AUB is disconnected.
So there exist sets X and Y such that
XnAUB != YnAUB != empty
XnAUB U YnAUB = AUB
XnAUB n YnAUB = empty
so some part of A, B is in X, and also some other part of A, B is in Y since XnAUB, YnAUB is not empty and their intersection is empty (so the same part of A, B can't be in both Y and X)
so XnA, YnA is not empty
so XnA U YnA = A??
So A is disconnected because XnA and YnA are disjoint?
And same goes for B
Is this correct? :)
Clean it up, but yeah!
April 23rd 2010, 08:30 AM
would the intersection or A and B be connected?
I'm kind of stuck on this because I don't know if the empty set is connected or not. For example, if A=(0,1) and B=(2,3) then their intersection would be empty....but is that connected?
April 23rd 2010, 11:20 AM
The empty set is sometimes connected sometimes not depending on the author.
But, we can prove a nice little theorem.
Theorem: Let $E,G\subseteq\mathbb{R}$ be connected and $E\cap G\ne\varnothing$; then $E\cap G$ is connected.
Proof:This follows since the intersection of two intervals is an interval.
In fact this is true in any linear continuum. But, this is a very special case.
Permit me if you will to describe a picture for you.
Imagine two crescent rolls (look here) it is easy to think of their general shape projected into $\mathbb{R}^2$, right? Now, think about taking two of them and having the concave sides face each
other. Now, move them until just their tips are touching. Clearly each crescent roll is connected but their intersection will be the area in the overlapping tips, but this is clearly disconnected.
April 24th 2010, 08:15 PM
But if we did not have the condition that http://www.mathhelpforum.com/math-he...2c2570a0-1.gif, then the intersection of E and G would not be connected, correct?
I'm looking at your crescent roll example for that: the intersection of the crescent rolls is empty....kind of like the intersection of two open sets, so the empty set in this case would be disconnected?
April 25th 2010, 07:06 PM
No, the two do intersect.
I guess an easier example would be two circles in $\mathbb{R}^2$ that intersect at two points. Clearly both are connected but their intersection is two disconnected points.
|
{"url":"http://mathhelpforum.com/differential-geometry/140504-connectedness-print.html","timestamp":"2014-04-24T16:15:28Z","content_type":null,"content_length":"25362","record_id":"<urn:uuid:f8473954-bcae-4375-83f1-29bf174fdec5>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] How can I constrain linear_least_squares to integer solutions?
Stefan van der Walt stefan@sun.ac...
Wed Nov 28 01:59:37 CST 2007
On Tue, Nov 27, 2007 at 11:07:30PM -0700, Charles R Harris wrote:
> This is not a trivial problem, as you can see by googling mixed integer least
> squares (MILS). Much will depend on the nature of the parameters, the number of
> variables you are using in the fit, and how exact the solution needs to be. One
> approach would be to start by rounding the coefficients that must be integer
> and improve the solution using annealing or genetic algorithms to jig the
> integer coefficients while fitting the remainder in the usual least square way,
> but that wouldn't have the elegance of some of the specific methods used for
> this sort of problem. However, I don't know of a package in scipy that
> implements those more sophisticated algorithms, perhaps someone else on this
> list who knows more about these things than I can point you in the right
> direction.
Would this be a good candidate for a genetic algorithm? I haven't
used GA before, so I don't know the typical rate of convergence or its
applicability to optimization problems.
More information about the Numpy-discussion mailing list
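For concreteness, here is a minimal numpy sketch of the "round, then re-fit" heuristic Charles describes, with a crude plus/minus one coordinate sweep standing in for the annealing or GA step. The function name, and the assumption that the first k coefficients are the integer ones, are mine and not part of any numpy/scipy API:

import numpy as np

def mixed_integer_lstsq(A, b, k, sweeps=3):
    # Approximately solve min ||A x - b|| subject to x[:k] being integers.
    x, *_ = np.linalg.lstsq(A, b, rcond=None)       # unconstrained solution
    z = np.round(x[:k])                             # round the integer block

    def refit(z):
        # With the integer part fixed, the continuous part is an ordinary LS fit.
        r = b - A[:, :k] @ z
        c, *_ = np.linalg.lstsq(A[:, k:], r, rcond=None)
        return c, float(np.linalg.norm(A[:, :k] @ z + A[:, k:] @ c - b))

    c, best = refit(z)
    for _ in range(sweeps):                         # local search: jiggle each integer by +/-1
        for i in range(k):
            for step in (-1.0, 1.0):
                z_try = z.copy()
                z_try[i] += step
                c_try, res = refit(z_try)
                if res < best:
                    z, c, best = z_try, c_try, res
    return np.concatenate([z, c]), best

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 4))
b = A @ np.array([2.0, -3.0, 0.7, 1.4]) + 0.01 * rng.normal(size=30)
print(mixed_integer_lstsq(A, b, k=2))

A genetic algorithm or simulated annealing would simply replace the plus/minus one sweep with a smarter search over the integer block.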
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2007-November/030097.html","timestamp":"2014-04-17T03:56:03Z","content_type":null,"content_length":"4033","record_id":"<urn:uuid:38cc42aa-50b7-4b5d-a278-9f9e7daf1738>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
|
West Tawakoni, TX Math Tutor
Find a West Tawakoni, TX Math Tutor
I hold a BS in biochemistry and a PhD in cell biology, so I have extensive education in science and math. I like to adapt my tutoring style to the student and work with him/her to figure out the
method that they best respond to. I like to provide structured tutoring experiences, so please send as much detail about the subject matter that requires tutoring as possible.
9 Subjects: including algebra 2, geometry, precalculus, biochemistry
...I received an A+ and an A in both semesters of biochemistry and recently graduated Cum Laude from the University of Texas at Dallas with a B.S. in Biology. I believe that success in
biochemistry really boils down to truly functional understanding of both chemistry and biology. I have taught swimming lessons for the better part of 10 years.
15 Subjects: including algebra 1, algebra 2, chemistry, physics
...I believe that all children are capable of learning if given the opportunity and the correct method of instruction. I use a hands-on approach to assist children in making learning come alive and relevant to them. For this reason I utilize the student's strengths and learning styles to assist them in overcoming their difficulty with a subject.
21 Subjects: including geometry, English, prealgebra, algebra 1
...I am a certified elementary teacher for grades 1-8 with 20 years of experience in the classroom. I will take your child from the level he/she is on and lead them on an exciting journey to the
next level. Study skills include learning how to organize and schedule projects, tests, and daily assignments.
8 Subjects: including algebra 1, vocabulary, grammar, prealgebra
...His method is geared towards understanding of the material, rather than just rote completion of homework. This is, as the story goes - and so many people like this reference - like "teaching
the man to fish." While it may sound cliche, this goes a long way in helping students to achieve true suc...
37 Subjects: including calculus, SAT math, chemistry, algebra 2
Related West Tawakoni, TX Tutors
West Tawakoni, TX Accounting Tutors
West Tawakoni, TX ACT Tutors
West Tawakoni, TX Algebra Tutors
West Tawakoni, TX Algebra 2 Tutors
West Tawakoni, TX Calculus Tutors
West Tawakoni, TX Geometry Tutors
West Tawakoni, TX Math Tutors
West Tawakoni, TX Prealgebra Tutors
West Tawakoni, TX Precalculus Tutors
West Tawakoni, TX SAT Tutors
West Tawakoni, TX SAT Math Tutors
West Tawakoni, TX Science Tutors
West Tawakoni, TX Statistics Tutors
West Tawakoni, TX Trigonometry Tutors
|
{"url":"http://www.purplemath.com/West_Tawakoni_TX_Math_tutors.php","timestamp":"2014-04-17T07:37:27Z","content_type":null,"content_length":"24128","record_id":"<urn:uuid:b2dcd184-a75a-4f3c-806d-65174ebad206>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00284-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pedal Triangle
Pedal Triangle Explorations
Rachael Brown
First, let's talk about the definition of a pedal triangle. A pedal triangle is created by the intersections of perpendicular lines from a point in the plane to the sides of a given triangle. Here is
an example of one...
The pedal triangle in the above picture is orange. The given triangle is blue and the given point is P.
I began exploring pedal triangles by making a script in Geometer's Sketchpad that created pedal triangles. I used this script to see what happens when we place P in different places. (To try the
script for yourself in a GSP file, click here.)
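For readers without Geometer's Sketchpad, the same construction is easy to reproduce numerically. The short Python sketch below (with arbitrarily chosen coordinates) drops a perpendicular from P to the line through each pair of vertices and returns the three feet, which are the vertices of the pedal triangle:

import numpy as np

def foot_of_perpendicular(p, a, b):
    # Foot of the perpendicular from point p to the line through a and b.
    ab = b - a
    t = np.dot(p - a, ab) / np.dot(ab, ab)
    return a + t * ab

def pedal_triangle(p, a, b, c):
    return (foot_of_perpendicular(p, b, c),
            foot_of_perpendicular(p, c, a),
            foot_of_perpendicular(p, a, b))

A = np.array([0.0, 0.0])
B = np.array([6.0, 0.0])
C = np.array([2.0, 5.0])
P = np.array([2.5, 1.5])
for foot in pedal_triangle(P, A, B, C):
    print(foot)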
What happens when P is on a side of the given triangle?
First, I looked to see what happens when P is on one of the sides of triangle ABC. Using GSP, I made the following conjecture: the angle in the pedal triangle at P is equal to the sum of the two
angles in the given triangle that are on the same side as P. Look at the picture below for clarification.
Here is a proof of my conjecture. Since P lies on side BC, the foot of the perpendicular from P to BC is P itself, so the pedal triangle has vertices P, E (the foot of the perpendicular to AB), and F (the foot of the perpendicular to AC). In quadrilateral AEPF the angles at E and F are both right angles, so the angle at P is 360 - 90 - 90 - A = 180 - A degrees, which is exactly the sum of the two angles of the original triangle that lie on the same side as P.
What happens if P is on one of the vertices of the original triangle?
Again, to explore this I used GSP. The results should not be surprising. The pedal triangle does not exist; it's a straight line.
Here P is on sides BC and AB, so P itself is the foot of the perpendicular from P to each of those two sides. The only side it makes sense to look at is AC. Thus, when P is at one of the vertices of the original triangle, the pedal triangle degenerates to the straight segment from P to E, the foot of the perpendicular to AC.
What happens when P is the orthocenter of the original triangle?
In this case, the pedal triangle is the same as the orthic triangle. This is because the orthic triangle is made of the intersection points of the altitudes of the original triangle. These are
exactly the same points that make the pedal triangle. Here is a picture of it.
What happens if P is the circumcenter?
After doing some experimenting, I discovered that the pedal triangle in this case is the same as the medial triangle. This is because the circumcenter is created by the perpendicular bisectors of
each side. That means the intersection of the perpendicular from P to a side is at the midpoint. The definition of a medial triangle is the triangle that is created by the midpoints of the sides of
the original triangle.
What happens if P is the incenter?
The incenter is created by the intersection of the three angle bisectors. That means the incenter is equidistant from the sides of the original triangle. Because of the special characteristics of the
incenter, we can create an incircle that is tangent to the triangle's sides. The pedal triangle, in this case, could also be constructed by the points where the incircle intersects the sides of the
triangle. The distance from the incenter to the sides has to be the same and the way we measure distance is along a perpendicular! This means that P is the circumcenter to our pedal triangle. Pretty
cool, huh? Here is a picture of the pedal triangle and the incircle.
What happens when P is the centroid?
The centroid is a tough one. It is constructed by the intersection of the medians of the triangle. The centroid is not equidistant from the vertices of sides of the original triangle. It also is not
constructed using perpendiculars, like the orthocenter. The centroid's claim to fame is that it splits the median into two parts where one part is 1/3 of the length and the other is 2/3 the length of
the median. Could this be influencing the pedal triangle? I couldn't seem to find a special property when P was the centroid. Maybe you'll be able to figure it out. Click here to open a GSP file with
this problem already constructed in it.
Educational Value:
I found these explorations very intriguing. My knowledge of the triangle centers concepts and my understanding of them were tested and strengthened through this process. Not only do I understand what
a pedal triangle is, I also feel much more confident in my understanding of the triangle centers. I recommend this activity to geometry teachers.
|
{"url":"http://jwilson.coe.uga.edu/EMAT6680Fa05/Brown/Pedal%20Triangle/pedal.html","timestamp":"2014-04-21T12:08:37Z","content_type":null,"content_length":"6560","record_id":"<urn:uuid:e2c4a532-182c-43b0-a801-b9be1250c0ed>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A cow is tethered by a 100-ft rope to the inside corner of an L-shaped building, as shown in the figure. Find the area that the cow can graze. a=20, b=50, c=100, d=60, e=50
could you check my answer i got 4950pi ft^2
Could you show your solutions if I got a wrong answer :)
I used the formula of 1/4 of a circle :)
That's correct.
Thanks! :)
You computed for 4 areas right? then added it :)
Actually you should use area of a sector, where the angle is Pi/4.
Sorry, Pi/2 :P
ohhh, so it is better to use the area of sector formula?
I tried it and it is the same :)
Area of a sector is (theta/2) x r^2, where theta is in radians.
Well done :)
@ganeshie8 Thanks, I made an arithmetic error with the area with radius 40 :)
okay :)
@Champs Which do you think is the better thing to use (although the result will be the same)? the area of sector formula or the area of 1/4 of a circle? please explain :)
hmm i have used 1/4 of circle area as it looked obvious at first glance. but as you said it is the same thing as the sector formula, eh? in the sector formula we multiply the area by a factor of \(\frac{\pi/2}{2\pi}\) = 1/4, so both the sector formula and the area of 1/4 of a circle give the same factor :P
@Champs @ganeshie8 Thanks to both of you. :)
yup ! good thinking !! yw :)
It is better to use the area of a sector, otherwise you'd face a problem when the angle is not 90 deg.
You are welcome. :)
@Champs oh. That makes sense. Thanks for the reminder, I will remember that. :) I'll also try to solve this with area of sector. :)
@Moongazer what is your answer for that problem? Thanks.
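A quick numeric check of the 4950 pi answer. The original figure is not reproduced here, so the four quarter-circle radii below (one guess at how the rope wraps around the corners of the L-shaped building) are an assumption; they do reproduce the accepted result:

import math

a, b, c, d, e = 20, 50, 100, 60, 50
rope = 100
radii = [rope, rope - a, rope - a - b, rope - e]   # assumed sweeps: 100, 80, 30, 50
area = sum(0.25 * math.pi * r ** 2 for r in radii)
print(area / math.pi, area)                        # 4950.0, about 15550.9 sq ft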
|
{"url":"http://openstudy.com/updates/500554dce4b0fb99113496ce","timestamp":"2014-04-16T13:09:33Z","content_type":null,"content_length":"85275","record_id":"<urn:uuid:9b928807-d6ae-4673-b529-a1fa84315a38>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] suggestions for Matrix-related changes
Sven Schreiber svetosch at gmx.net
Wed Jul 12 04:44:46 CDT 2006
JJ schrieb:
> Travis Oliphant <oliphant <at> ee.byu.edu> writes:
>> Svd returns matrices now. Except for the list of singular values
>> which is still an array. Do you want a 1xn matrix instead of an
>> array?
Although I'm a matrix supporter, I'm not sure here. Afaics the pro
argument is to have *everything* a matrix when you're in that camp. Fair
enough. But then it's already not clear if you want a row or a column,
and you carry an extra dimension around, which is sometimes annoying
e.g. for cumulation of the values, which I do a lot (for eigenvalues,
that is). So for my personal use I came to the conclusion that the
status quo of numpy (array for the value list, matrix for the decomp) is
just fine.
So maybe the people in favor of values-in-1xn-matrices can tell why they
need to matrix-multiply the value array afterwards, because that's the
only benefit I can see here.
> I had just tried this with my new version of numpy, but I had used svd
> as follows:
> import scipy.linalg as la
> res = la.svd(M)
> That returned arrays, but I see that using:
> res = linalg.svd(M)
> returns matrices. Apparently, both numpy and scipy have linalg
> packages, which differ. I did not know that. Whoops.
I'm trying to get by with numpy (good that kron was brought over!), but
eventually I will need scipy -- I was hoping that all the matrix
discussion in the numpy list implicitly applied to scipy as well. Is
that not true?
More information about the Numpy-discussion mailing list
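A quick way to check, on whichever numpy/scipy versions happen to be installed, what container types and shapes the two svd implementations hand back (the thread above is from 2006, so its description may not match current releases):

import numpy as np
import scipy.linalg as la

M = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])

U, s, Vt = np.linalg.svd(M, full_matrices=False)
print(type(U).__name__, type(s).__name__, s.shape)    # with ndarray input: ndarrays, s is 1-D

U2, s2, Vt2 = la.svd(M, full_matrices=False)
print(type(U2).__name__, type(s2).__name__, s2.shape)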
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-July/021780.html","timestamp":"2014-04-20T02:17:50Z","content_type":null,"content_length":"4282","record_id":"<urn:uuid:e6eca0d7-ed36-4eef-a35e-2a291eec7f1f>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Trading Momenta In A Collision: Two Particles Move Perpendicular... | Chegg.com
Trading Momenta in a Collision
Part A
Suppose that after the collision, the particles "trade" their momenta, as shown in the figure. That is, particle 1 now has magnitude of momentum
Express your answer in terms of
Part B
Consider an alternative situation: This time the particles collide completely inelastically. How much kinetic energy
Express your answer in terms of
|
{"url":"http://www.chegg.com/homework-help/questions-and-answers/trading-momenta-collision-two-particles-moveperpendicular-collide-part-suppose-collision-t-q132543","timestamp":"2014-04-17T14:41:45Z","content_type":null,"content_length":"33523","record_id":"<urn:uuid:2c6a1e65-aa37-401e-9371-87baff50ee61>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Villa Rica, PR Algebra 2 Tutor
Find a Villa Rica, PR Algebra 2 Tutor
I have been a teacher for seven years and currently teach 9th grade Coordinate Algebra and 10th grade Analytic Geometry. I am up to date with all of the requirements in preparation for the EOCT. I
am currently finishing up my master's degree from KSU.
4 Subjects: including algebra 2, geometry, algebra 1, prealgebra
...Because of my peer-tutoring, I helped others improve their grades in Chemistry. Chemistry is one of my favorite science subjects that I'm great at helping others with. I have a vast knowledge
of Accounting, as I have taken Accounting courses in high school and in college, and I made mostly A's in Accounting.
17 Subjects: including algebra 2, chemistry, calculus, geometry
Teacher of mathematics concepts to students (K thru 12 and college levels) for over 20 years. Booker’s love for mathematics goes back to his childhood, where he developed his interest in math
while working beside his grandfather in a small grocery store. He is a Christian who loves God, life, people and serving others.
5 Subjects: including algebra 2, geometry, algebra 1, prealgebra
...I also ran cross country in high school and participated or led several service organizations in college and high school, so I can easily relate to a wide variety of interests and backgrounds.
I bring a professional, optimistic, and energetic attitude to every session and I think that everyone c...
17 Subjects: including algebra 2, chemistry, physics, geometry
...While enjoying the classroom again, I also passed 6 actuarial exams covering Calculus (again), Probability, Applied Statistics, Numerical Methods, and Compound Interest. It's this spectrum of
mathematics, from high school through post baccalaureate, which I feel most comfortable tutoring. I also became even more proficient with Microsoft Excel, Word, and PowerPoint.
21 Subjects: including algebra 2, calculus, statistics, geometry
|
{"url":"http://www.purplemath.com/Villa_Rica_PR_Algebra_2_tutors.php","timestamp":"2014-04-16T13:38:25Z","content_type":null,"content_length":"24292","record_id":"<urn:uuid:0729ac3a-a55a-4990-a034-68033a91ff10>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Need help on a cos/sin explanation problem!
January 20th 2009, 07:12 PM #1
Junior Member
Sep 2008
Need help on a cos/sin explanation problem!
Explain why the value of $(\cos\theta, \sin\theta) \cdot (\cos(90^\circ + \theta), \sin(90^\circ + \theta))$ is independent of $\theta$.
^^ this is one of the problems I have due tomorrow. Problem is I don't even know what it's asking. independent? Are they saying that if you dot those, you can remove the theta and it still works?
If this is what its asking I have no idea. i've been thinking about this for like half an hour and i think I'm missing something key? Please help =(
Explain why the value of $(\cos\theta, \sin\theta) \cdot (\cos(90^\circ + \theta), \sin(90^\circ + \theta))$ is independent of $\theta$.
^^ this is one of the problems I have due tomorrow. Problem is I don't even know what it's asking. independent? Are they saying that if you dot those, you can remove the theta and it still works?
If this is what its asking I have no idea. i've been thinking about this for like half an hour and i think I'm missing something key? Please help =(
Take the dot product and remember that
$\cos{(x - y)} = \cos{x}\cos{y} + \sin{x}\sin{y}$.
You should find that you get something that doesn't involve a $\theta$.
Explain why the value of $(\cos\theta, \sin\theta) \cdot (\cos(90^\circ + \theta), \sin(90^\circ + \theta))$ is independent of $\theta$.
^^ this is one of the problems I have due tomorrow. Problem is I don't even know what it's asking. independent? Are they saying that if you dot those, you can remove the theta and it still works?
If this is what its asking I have no idea. i've been thinking about this for like half an hour and i think I'm missing something key? Please help =(
If this is supposed to be the dot product of two vectors $(\cos(\theta), \sin(\theta))$ and $(\cos(\theta + 90), \sin(\theta + 90))$, then:
$(\cos(\theta), \sin(\theta)) \cdot (\cos(\theta + 90), \sin(\theta + 90)) = \cos(\theta)\cos(\theta + 90)+\sin(\theta)\sin(\theta + 90)$
$= \cos(\theta)(\cos(\theta)\cos(90)-\sin(\theta)\sin(90))+\sin(\theta)(\sin(\theta)\cos(90) + \sin(90)\cos(\theta))$
Since $\sin(90) = 1$ and $\cos(90) = 0$:
$= \cos(\theta)(-\sin(\theta))+\sin(\theta)\cos(\theta)$
$= -\cos(\theta)\sin(\theta)+\sin(\theta)\cos(\theta) = 0$
You should expect this result, since the 2nd vector is the same as the first vector, except it has been rotated through and angle of 90 degrees, and is hence perpendicular. And the dot product of
two perpendicular vectors is always 0.
If this is supposed to be the dot product of two vectors $(\cos(\theta), \sin(\theta))$ and $(\cos(\theta + 90), \sin(\theta + 90))$, then:
$(\cos(\theta), \sin(\theta)) \cdot (\cos(\theta + 90), \sin(\theta + 90)) = \cos(\theta)\cos(\theta + 90)+\sin(\theta)\sin(\theta + 90)$
$= \cos(\theta)(\cos(\theta)\cos(90)-\sin(\theta)\sin(90))+\sin(\theta)(\sin(\theta)\cos(90) + \sin(90)\cos(\theta))$
Since $\sin(90) = 1$ and $\cos(90) = 0$:
$= \cos(\theta)(-\sin(\theta))+\sin(\theta)\cos(\theta)$
$= -\cos(\theta)\sin(\theta)+\sin(\theta)\cos(\theta) = 0$
You should expect this result, since the 2nd vector is the same as the first vector, except it has been rotated through and angle of 90 degrees, and is hence perpendicular. And the dot product of
two perpendicular vectors is always 0.
My way is easier :P
Thanks both of you =) I understand it now!
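A quick numerical sanity check of the same fact, in numpy: the dot product comes out as 0 no matter what value of theta is used, which is exactly what "independent of theta" means here.

import numpy as np

for theta in np.linspace(0.0, 2.0 * np.pi, 7):
    u = np.array([np.cos(theta), np.sin(theta)])
    v = np.array([np.cos(theta + np.pi / 2), np.sin(theta + np.pi / 2)])
    print(round(float(np.dot(u, v)), 12))   # 0.0 each time, up to floating-point rounding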
{"url":"http://mathhelpforum.com/trigonometry/69146-need-help-cos-sin-explanation-problem.html","timestamp":"2014-04-16T05:29:13Z","content_type":null,"content_length":"55158","record_id":"<urn:uuid:e391f54c-be62-4885-adcb-4ad653625f91>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00398-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Comparing Statistical Methods for Constructing Large Scale Gene Networks
The gene regulatory network (GRN) reveals the regulatory relationships among genes and can provide a systematic understanding of molecular mechanisms underlying biological processes. The importance
of computer simulations in understanding cellular processes is now widely accepted; a variety of algorithms have been developed to study these biological networks. The goal of this study is to
provide a comprehensive evaluation and a practical guide to aid in choosing statistical methods for constructing large scale GRNs. Using both simulation studies and a real application in E. coli
data, we compare different methods in terms of sensitivity and specificity in identifying the true connections and the hub genes, the ease of use, and computational speed. Our results show that these
algorithms performed reasonably well, and each method has its own advantages: (1) GeneNet, WGCNA (Weighted Correlation Network Analysis), and ARACNE (Algorithm for the Reconstruction of Accurate
Cellular Networks) performed well in constructing the global network structure; (2) GeneNet and SPACE (Sparse PArtial Correlation Estimation) performed well in identifying a few connections with high specificity.
Citation: Allen JD, Xie Y, Chen M, Girard L, Xiao G (2012) Comparing Statistical Methods for Constructing Large Scale Gene Networks. PLoS ONE 7(1): e29348. doi:10.1371/journal.pone.0029348
Editor: Petter Holme, Umeå University, Sweden
Received: April 13, 2011; Accepted: November 25, 2011; Published: January 17, 2012
Copyright: © 2012 Allen et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original author and source are credited.
Funding: This work was supported by NSF grant DMS-0907562 and NIH UL1 RR024982, NNJ05HD36G, NCI SPORE in Lung Cancer (P50CA70907), 1R21DA027592 and 5R01CA152301. The funders had no role in study
design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Gene regulatory networks describe interactions among genes and how they work together to form modules to carry out cell functions. GRNs provide a systematic understanding of molecular mechanisms
underlying biological processes [1]–[6]; the visualization of direct dependencies facilitates systematic interpretation and comprehension of the relationships among genes. In the GRN, genes that
interact with many other genes are called hub genes. The hub genes are likely to be drivers of the disease status due to their key positions in the GRNs. Recently, analysis of hub genes has shown to
be a promising approach in identifying key tumorigenic genes [7]–[10].
Gene expression microarrays monitor the transcription activities of thousands of genes simultaneously, which provides great opportunities to explore large scale regulatory networks. Genetic
dependency graphs can and have been constructed through a variety of approaches. Four categories of statistical methods have been proposed to construct the GRN from gene expression microarray data:
(1) Probabilistic networks-based approaches, mainly Bayesian networks (BN), (2) correlation-based methods, (3) partial-correlation-based methods, and (4) Information-theory-based methods. We give the
detailed description of each type in the Methods section.
In this paper, we compared several statistical methods for constructing GRNs. Our goal is to provide a comprehensive evaluation and a practical guide to help investigators choose between different
methods for constructing large scale GRNs. The main contributions of this paper include: (1) The performance on constructing large scale GRNs is compared with a wide range of sample sizes and numbers
of genes in the network; (2) The performance of identifying correct hub genes, which are likely to be the disease driver genes, is compared among different methods; (3) In addition to previously
reviewed methods (Bayesian Networks [11] and GeneNet [12]), three recently developed programs (Sparse PArtial Correlation Estimation (SPACE) [13], Weighted Correlation Network Analysis (WGCNA) [14],
and ARACNE (Algorithm for the Reconstruction of Accurate Cellular Networks) [15], [16]) are included in the comparison.
In this study, we are interested not only in comparing the performances of various network construction methods, but also in how the number of microarray experiments affects the accuracy of the
constructed network. In the simulation study, we simulated different numbers of microarray experiments for each simulation setting to study the effect of sample size on the performance of various
Statistical Methods
Here we give a brief summary of four categories of GRN construction approaches; the detailed methodology for each approach has been described in other papers [11]–[13], [17]–[19]. For fair
comparisons, the default parameters were used for each algorithm without additional tuning. We have provided Sweave documents to accompany this study as shown in Sweave S1; Sweave is a literate
programming framework which combines the source code (in R) and documentation (in LaTeX) in one file, to facilitate the reproduction of our results.
Correlation-based methods [14], [17], [20] are the most straightforward way to explore the gene co-expression network. They usually define a gene co-expression similarity matrix $S = [s_{ij}]$, where $s_{ij}$ is the pair-wise transcription correlation coefficient between genes $i$ and $j$ taken from the correlation matrix. Then either a hard [21] or soft threshold [14], [17] is applied to $S$ to determine the biological
meaningfulness of the connections. These co-expression-based methods have been used in several studies and have shown their usefulness in interpreting biological results and identifying important
gene modules [6], [20], [22]–[24]. WGCNA is a relatively new statistical approach based on correlations and has been used to identify several novel disease-related genes. Therefore, we will use WGCNA
as a representative method for the correlation-based approach. The WGCNA R package implements both weighted and unweighted correlation networks and identifies modules/sub-networks using hierarchical
clustering approaches. Aside from the functions for network construction and module/sub-network identification, the R package also provides functions for calculating topological properties and
network visualization [14]. Furthermore, the WGCNA R package includes interfaces with several commonly used bioinformatics tools for network visualization (e.g. VisANT [25] and Cytoscape [26]) and
enrichment analysis (e.g. DAVID [27]). The WGCNA method has been successfully applied in several studies [28]–[31].
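As a concrete illustration of the weighted-correlation idea only (the analyses in this study used the R packages cited above), a bare-bones Python sketch follows; the soft-thresholding power beta = 6 is a common illustrative choice, not a value taken from this paper:

import numpy as np

def weighted_adjacency(X, beta=6):
    # X: samples x genes expression matrix -> genes x genes weighted adjacency.
    S = np.abs(np.corrcoef(X, rowvar=False))   # co-expression similarity matrix
    A = S ** beta                              # soft threshold
    np.fill_diagonal(A, 0.0)
    return A

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 10))                  # 50 arrays, 10 genes
A = weighted_adjacency(X)
connectivity = A.sum(axis=1)                   # per-gene connectivity: sum of edge weights
print(np.round(connectivity, 2))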
Partial-correlation-based methods are based on Gaussian graphical model [32] theory. They infer the conditional dependency from the non-zero entries in the concentration matrix, $\Sigma^{-1}$, also called the precision matrix, which is the inverse of the covariance matrix. The zero entries in the concentration matrix imply conditional independence between the expression levels of gene $i$ and gene $j$ given the expression
of all other genes; in other words, two genes do not interact directly with each other. Two recently published methods: SPACE [13] and GeneNet [12] will be used to represent partial-correlation-based
methods. GeneNet uses Moore-Penrose pseudoinverse [33] and bootstrap methods to obtain a shrunk estimate of the concentration matrix. The SPACE algorithm converts the concentration matrix estimation
problem to a regression problem and optimizes the results with a symmetric constraint and an L1 penalization. Therefore, SPACE tends to get more globally optimized results when compared to GeneNet. In
this study, the partial correlation referred to first order partial correlation.
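For intuition, here is a minimal unregularised sketch of the underlying idea: estimate the concentration matrix and convert it to partial correlations, so that near-zero entries correspond to conditionally independent gene pairs. GeneNet and SPACE add shrinkage and L1 penalisation, respectively, on top of this:

import numpy as np

def partial_correlations(X):
    # X: samples x genes -> genes x genes matrix of partial correlations.
    prec = np.linalg.pinv(np.cov(X, rowvar=False))   # concentration (precision) matrix
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 8))
print(np.round(partial_correlations(X), 2))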
Information-theory-based methods, such as ARACNE, use mutual information (MI) to determine the dependency among the genes and then remove indirect interactions using data processing inequality (DPI).
ARACNE has been successfully applied to construct gene regulatory networks in the context of specific cellular types, and demonstrated good performance. Since the calculation of mutual information
does not assume a monotonic relationship, an advantage of information-theory-based methods is the ability to identify the non-linear or irregular dependencies, which will be missed by Pearson
correlation. Therefore, the information-theory-based methods could out-perform correlation-based methods if the gene network contains many non-monotonic dependencies.
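A toy illustration of the two ingredients ARACNE combines, using a simple histogram-based mutual-information estimate (ARACNE's own estimator is more sophisticated) and the data processing inequality applied to one fully connected triple, where the weakest of the three edges is declared indirect:

import numpy as np

def mutual_information(x, y, bins=10):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

rng = np.random.default_rng(4)
x = rng.normal(size=500)
y = x + 0.3 * rng.normal(size=500)         # y regulated by x
z = y + 0.3 * rng.normal(size=500)         # z regulated by y, so the x-z edge is indirect
mi = {("x", "y"): mutual_information(x, y),
      ("y", "z"): mutual_information(y, z),
      ("x", "z"): mutual_information(x, z)}
weakest = min(mi, key=mi.get)              # DPI: drop the weakest edge of the triangle
print(mi, "removed:", weakest)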
Probabilistic networks take a wholly different approach by attempting to search through the space of all the possible topological network arrangements given certain constraints. BNs are based on a
probabilistic graphical model that represents a set of variables and their probabilistic independencies and are applicable to many areas in science and technology [34]. The probabilistic nature of
BNs allows them to handle noise inherent in both biological processes and microarray experiments. The gene expression profiles could provide a complete joint distribution of gene expression levels,
while a BN expands the joint probability in terms of simpler conditional probabilities. In our study, we have applied BNArray [35], B-course [36], BNT [37], and Werhli's implementation of BN [11].
BNArray does not run appropriately in our computation settings. Werhli's implementation of BN uniformly outperformed others, which is probably due to the fact that Werhli's method is specifically
developed for constructing GRNs, while BNT and B-course are designed for general use. So, in this study, Werhli's BN implementation was used to represent the best performance of BN methods. The
statistical methods used in this study, as well as their inference categories and implementation platforms are summarized in Table 1.
Table 1. Method Comparison.
Performance Metrics
Some types of networks require that connections be acyclic. Other types of networks may differ on whether or not the connections are directed (causal). BN methods are acyclic and impose a direction
on each edge; for the purposes of this study, these directions are ignored. The GRNs without directions are also called gene association networks.
We used the receiver operating characteristic (ROC) curves to study the sensitivity and specificity of each algorithm to minimize the influence of any default thresholds or cutoff values, and the
area under the curve (AUC) was used to quantify the performance of each method. Clearly, the larger the area under the curve, the better the algorithm performed. ROC curves were determined by
changing the threshold for connection strength (for example, connection strength for the SPACE algorithm refers to the absolute values of the estimated partial correlations). Two genes with a
connection strength higher than the threshold were deemed to be connected.
The AUC measures the performance of the algorithm across all sensitivity and specificity ranges. In practice, biological researchers are more interested in a small subset of that performance curve –
specifically, the part of the curve with high specificity. In order to calculate a metric more relevant to this application, we can use a partial AUC. This metric calculates the AUC for the ROC curve
only where specificity is greater than some threshold. In this study we use the region in which specificity is greater than 99.5% (i.e. the false positive rate is less than 0.005) to calculate the
pAUC. We also examined the pAUC with false positive rate less than 0.05 and obtained very similar results. The global AUC is more intuitive in measuring the overall predictive performance, while the
pAUC provides a useful metric in measuring predictive performance at high specificity, which is usually the focus for biological researchers. In this study, we will use both AUC and pAUC to
comprehensively evaluate the performance of network construction.
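For readers who want to reproduce these metrics on their own output, the sketch below shows how the AUC and the partial AUC over the high-specificity corner (false positive rate below 0.005) can be computed from a matrix of predicted connection strengths and the true 0/1 adjacency. The use of scikit-learn's roc_curve and the function name are choices of this sketch, not of the paper:

import numpy as np
from sklearn.metrics import roc_curve

def network_auc(true_adj, strength, max_fpr=None):
    # true_adj, strength: symmetric genes x genes arrays; only the upper triangle is scored.
    iu = np.triu_indices_from(true_adj, k=1)
    fpr, tpr, _ = roc_curve(true_adj[iu], np.abs(strength[iu]))
    if max_fpr is not None:                     # partial AUC, no interpolation at the boundary
        keep = fpr <= max_fpr
        fpr, tpr = fpr[keep], tpr[keep]
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))   # trapezoid rule

# usage (with true_adj and strength defined elsewhere):
# full_auc = network_auc(true_adj, strength)
# pauc = network_auc(true_adj, strength, max_fpr=0.005)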
Another aspect of performance evaluated in this study was the detection of “hub genes”. A hub gene is a highly connected gene in a network; such genes are often of biological interest because of
their critical involvement in regulatory pathways or sub-networks and these genes often incur a substantial effect on the pathways as a whole. Thus, we also evaluate each method's ability to identify
hub genes in each network using gene “connectivity.” Gene connectivity (or the degree of a gene) is a way of stating how connected a gene is within a network. Some methods produced adjacency matrices
with entirely non-zero entries. Such networks are “complete” and all nodes in each graph have the same “degree,” in that each gene is connected to each other gene. Due to this we define each gene's
connectivity score in a given network by computing the sum of the weights of all connections associated with that gene. This score can then be compared to the actual connectivity score of a gene in
the true network. Second, we also calculate the sensitivity and specificity of each method's connectivity predictions. To do this, we first classify each gene as a hub gene or not based on the true
network using some cutoff. We then utilize an ROC curve which discloses the threshold-independent performance of a method on a given network and quantify this curve using the AUC.
Computational equipment used in this study included a Dell T300 server with 16 GB 667 MHz, DDR2 RAM, Dual Core Intel Xeon E 3113 (3.0 GHz) CPU and the Windows Server 2008 Operating System; and a
RedHat Enterprise Linux server with 48 GB of RAM and two Intel Xeon X5650 (2.66 GHz) CPUs.
Simulation Studies
In the simulation studies, the network structures were simulated based on the real protein-protein interaction networks [38], [39], with an approximately scale free topology. The strengths of
dependencies were randomly simulated from a normal distribution $N(0.5, 0.2)$ with the sign (positive or negative regulation) simulated from a binomial distribution with probability 0.5. Specifically, $x_i$, the expression of gene $i$, was simulated from the conditional normal distribution $x_i \mid x_{\mathrm{pa}(i)} \sim N\left(\sum_{j \in \mathrm{pa}(i)} \beta_{ij} x_j,\ \sigma^2\right)$, where $\mathrm{pa}(i)$ refers to the set of genes that regulate gene $i$ based on the simulated network structure, $\beta_{ij}$ is the strength of dependency of gene $i$ on gene $j$, and $x_j$ is the expression level of gene $j$; the resulting expression values are normally distributed, which is true for most microarray studies [40].
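A compact Python version of this simulation scheme; the five-gene toy structure, the unit noise variance, and the reading of N(0.5, 0.2) as mean and standard deviation are assumptions of the sketch:

import numpy as np

rng = np.random.default_rng(3)
parents = {0: [], 1: [0], 2: [0], 3: [1, 2], 4: [3]}         # toy acyclic regulatory structure
beta = {(i, j): rng.choice([-1, 1]) * rng.normal(0.5, 0.2)   # signed strength of j -> i
        for i, ps in parents.items() for j in ps}

def simulate(n_samples, sigma=1.0):
    X = np.zeros((n_samples, len(parents)))
    for i in sorted(parents):                                # regulators are simulated first
        mean = sum(beta[(i, j)] * X[:, j] for j in parents[i]) if parents[i] else 0.0
        X[:, i] = rng.normal(mean, sigma, size=n_samples)
    return X

X = simulate(100)
print(np.round(np.corrcoef(X, rowvar=False), 2))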
In each study, the datasets were simulated across two independent variables: (1) network size, and (2) numbers of samples. The number of genes represented in the networks varied over a wide range.
Simulated networks had one of six sizes: 17 genes with 20 connections, 44 genes with 57 connections, 83 genes with 114 connections, 231 genes with 311 connections, 612 genes with 911 connections, or
1344 genes with 1511 connections as shown in Figure 1. We base these networks off of real protein-protein interaction networks and, to construct networks of different sizes, vary the number of
references required to support each connection. The other variable in the simulated data was the number of samples (microarrays) in a dataset. Datasets had 20, 50, 100, 200, 500, or 1,000 simulated
microarray samples. Obviously, the intuition is that, as the number of samples increases, the algorithms would be able to perform better [41]. Datasets were generated for all combinations of these
variables, producing 36 total data sets for each simulation study.
Figure 1. Diagram depicting the network structures of each of the six network sizes used in this study.
Network Construction Performance
We first calculated the ROC curves for each combination of network size, sample size, and construction algorithm; the results of the 17-gene network are shown in Supplementary Figure S1. We use both
AUC and pAUC to evaluate the network construction performance of different methods. Figure 2 shows the performance on the 1344-gene network, and the detailed performance on all other simulation
settings can be seen in Supplementary Figures S2 and S3. From all simulation settings, we can see that as the sample size increased, the performance of all methods tended to improve; this is expected
and consistent with previous research [41].
Figure 2. The AUCs and pAUCs for 1344-gene network in simulation 1.
Left: The area under the curve using various network construction methods across various sample sizes on a network with 1344 genes; Right: The partial area under the curve for FPR < 0.005 for the same methods and sample sizes.
The differences decreased as the number of samples increased. When the sample size reached 1000, all the method performed very well (with AUC close to 1). In the five other sizes of networks (Figure
S2), the performance of various methods were similar for smaller networks. The Bayesian method could only compute the smaller networks and failed for all the datasets involving 1,000 samples on our
computing equipment. It performed comparably to the other methods on the smallest network (17 genes) as can be seen in Figure S2.
For identifying a few connections with high specificity, SPACE outperformed the other methods across all simulation settings, followed by GeneNet and then WGCNA (Figure 2B). In the five other sizes of
networks (Figure S3), SPACE and GeneNet were both the best-performing methods; SPACE slightly outperformed GeneNet for smaller numbers of samples (100).
The performance of the Bayesian networks is inconsistent with the general belief that Bayesian networks produce the most accurate networks. That is probably because BN methods perform better for
smaller gene network construction and it could also be impacted by the simulation setups. In order to verify this, we studied the performance of constructing GRNs using the same 11-gene network as
was used by Werhli et al [11]. We found that on this 11-gene network, the Bayesian method outperformed the other methods, which is consistent with the conclusion of Werhli's study (Figure S4). This
may also be due to the underlying assumptions of each of these methods: Bayesian inference algorithms (typically) rely on categorical variables while partial-correlation and correlation-based
algorithms assume a normal distribution for their variables. The data were simulated from a normal distribution to more accurately represent true gene expression experiments [40], thus we would
expect degraded performance for Bayesian inference algorithms. A more specific comparison of performance is available in the supplementary material.
Hub Gene Detection
Another important metric of interest in this study was the ability of a method to detect highly connected (or “hub”) genes within a network. These genes are often of particular biological interest as
the activity of such hub genes may affect many genes in the biological network and hence drive disease status.
Performance in this area was measured by first calculating each gene's predicted connectivity (as described earlier) and comparing this against the binary classification of whether or not a gene was
truly a hub gene in the true network. We computed ROC curves by changing the connectivity thresholds and used the AUC to measure the performance of detecting hub genes. We experimented with various
cutoffs for the determination of hub genes, using either 4, 5, or 6 connections as the threshold. We obtained similar results on all three except for the smallest network (17 genes). This is because
this network had only one gene which was classified as a hub gene when the threshold was set at either 5 or 6; on this network most methods performed perfectly or almost perfectly when using these
thresholds. Three genes were classified as hub genes using a threshold of 4, which made for more meaningful performance measurements on this network, so we opted to use this threshold throughout the
duration of the study.
When examining the AUC for hub genes (Figure 3 and Supplementary Figure S5), SPACE was consistently the top performer for nearly all numbers of samples on all network sizes. The Bayesian method
performed well on the smallest network, but was not competitive on the other network sizes. WGCNA performed well with very small numbers of samples, but was quickly outperformed by SPACE in every
Figure 3. Comparison of the AUC performance on detecting hub genes.
Measuring the performance of each method at detecting hub genes as measured by the Area Under the ROC Curve (AUC). Hub genes were classified as having 4 or more connections in the true network.
GeneNet exhibited somewhat strange performance when dealing with hub genes. It was fairly competitive on the smaller networks, but produced severely degraded performance on the larger networks with
AUCs well below 0.5 (which is the value of a random guess). This is likely due to the connectivity values produced by GeneNet. Most methods produced networks for which most connections were zeros or
near-zero which produces near-zero connectivity values for most genes. When viewed as a histogram, the connectivity of all other algorithms was skewed to the right, while GeneNet had many more genes
with high connectivity scores as shown in Supplementary Figure S6.
Simulation Studies Under Non-Normal Distribution
To evaluate the performance of different methods when the underlying distribution is non-normal, we also simulated data under a non-normal distribution. In this simulation study, the expression data
were simulated from a bimodal mixture of two normal distributions, which models the possible "on" and "off" status of a gene's expression. The mixture probability for each status is 0.5. The AUC curves for the 1344-gene network with various methods and different numbers of samples are shown in Figure 4. The AUC and pAUC on all other simulation settings can be seen in Supplementary Figures S7 and S8. For the performance in constructing the global network structure, WGCNA and GeneNet still performed best, followed by ARACNE, and SPACE still performed the worst. The results were consistent with simulation study 1. If we use the performance of GeneNet and WGCNA as references, we notice that the performance of ARACNE improves while that of SPACE gets worse. This could be due to the fact that ARACNE does not rely on the normality assumption, while SPACE depends on it heavily, so under a non-normal distribution the performance of ARACNE improves while that of SPACE decreases relative to GeneNet and WGCNA.
Figure 4. The AUCs for 1344-gene network in simulation 2 with non-normal distribution.
The area under the curve using various network construction methods across various sample sizes on a network with 1344 genes.
Computational Complexity and Program Usability
Aside from accuracy, one of the important attributes of each algorithm is computational complexity. In environments lacking a strong computational infrastructure, certain algorithms may be unfeasible
(especially when processing a large dataset). The Bayesian algorithm was the only algorithm that caused concerns. Most other methods would finish computing within minutes on standard desktop hardware
for any of the datasets we examined. The Bayesian algorithm, on the other hand, typically took hours or even days to compute and required advanced hardware.
Program usability is also a consideration, especially among groups with no special expertise in computer programming. Among the selected implementations, no specific one stands out as more or less
usable than the others. Each provides a command-line interface; usability would largely be determined by a user's familiarity with a particular platform (R and/or C, or Matlab or JAVA). The only
notable user-friendly feature offered in these packages is that the WGCNA package and ARACNE software provide many useful network analysis and visualization functions, which are very convenient.
Also important is the ability to process and store the resultant networks in either adjacency list or matrix format. SPACE, which is designed to operate on sparse matrices, produced networks in which
only approximately 10% of the network was non-zero, making it much easier to store in a compressed format than the networks produced by the other methods (which typically had 99% non-zero matrices).
Empirical Study In E. coli
The predictive performance of our approach was tested using the Escherichia coli (E.coli) gene expression database entitled M3D (Many Microbe Microarrays Database [42]). The dataset contains 524
arrays measured under 264 experimental conditions. The data were measured using Affymetrix GeneChip E.coli Genome arrays with 4292 gene probes. The arrays measured under the same experimental
conditions were averaged. From the gene expression data matrix, we used SPACE, GeneNet, WGCNA and ARACNE methods to derive the gene network in E. coli. To evaluate the performance, we used the
transcriptional regulatory network from RegulonDB [43], which provides the regulation targets of the transcription factors in E. coli. An overview of the network is shown in Figure 5. The ROC
curves of the various methods are shown in Figure 6. For this real data example, the thresholds for a false positive rate of 0.005 are 0.05, 2.4E-7, 0.12, and 0.37 for GeneNet, SPACE, WGCNA and ARACNE,
respectively. For constructing the global network structure, WGCNA and ARACNE performed the best, followed by GeneNet, with SPACE performing the worst. On the other hand, for identifying a few
connections with high specificity, GeneNet and SPACE performed better than the others. Overall, the results were relatively consistent with the simulation studies.
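As an illustration of how such operating points can be derived, the sketch below (our own illustration, not the evaluation code used in the study; the function and variable names are placeholders) finds the score threshold that keeps the false positive rate at or below a chosen target, given predicted edge scores and gold-standard edge labels such as those from RegulonDB.

#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>

// Find a score threshold that keeps the false positive rate at or below a target
// (e.g. 0.005), given predicted edge scores and gold-standard labels.  An edge is
// called "present" when its score is strictly greater than the threshold; ties
// are ignored for simplicity.
double thresholdForFPR(const std::vector<double>& scores,     // one score per candidate edge
                       const std::vector<bool>& isTrueEdge,   // gold standard, same order
                       double targetFPR) {
    std::vector<double> negScores;                            // scores of true non-edges
    for (std::size_t i = 0; i < scores.size(); ++i)
        if (!isTrueEdge[i]) negScores.push_back(scores[i]);
    if (negScores.empty()) return 0.0;                        // degenerate: no negatives

    std::sort(negScores.begin(), negScores.end(), std::greater<double>());
    // Allow at most floor(targetFPR * #negatives) false positives.
    std::size_t allowedFP =
        static_cast<std::size_t>(targetFPR * static_cast<double>(negScores.size()));
    if (allowedFP >= negScores.size()) allowedFP = negScores.size() - 1;
    return negScores[allowedFP];                              // scores above this give FPR <= target
}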
Figure 5. The transcriptional regulatory network for E. coli derived from the RegulonDB database.
Each red dot is a gene, and a blue line between genes indicates a connection.
Figure 6. The performance in constructing gene regulatory network in E. coli.
Left: The entire ROC curves using various network construction methods; Right: The corner of ROC with high specificity.
We have measured the performance of various gene regulatory network construction methodologies against various sizes of simulated data with different numbers of samples. From this, a few conclusions
can be drawn.
First, WGCNA and ARACNE performed well in constructing the global network, while SPACE did well in identifying a few connections with high specificity. GeneNet performed well in both aspects, but it
is not suitable for identifying the hub genes, which can often be of biological interest. In the simulation study, SPACE performed well in identifying the hub genes, as shown in Figure 3. Since no
single method outperforms the others in all aspects, the user should choose an appropriate method based on the purpose of the study.
In applying these methods to the real E. coli data, WGCNA and ARACNE performed best, which may indicate that these two methods are relatively more robust. Overall, the performance in real data seemed
to be worse than that in the simulation study, and there are several possible reasons: (1) the real biological network is much more complex than the simulation study; (2) many true connections in
this network are still unknown; (3) some of the connections in RegulonDB may not be supported by gene expression data [44]. Surprisingly, SPACE performed poorly in constructing the global network,
because the SPACE algorithm uses an L1 penalty that shrinks most of the partial correlations to zero. When we manually decreased the penalty term, performance improved as fewer partial correlations
were shrunk to zero, but the computation also became much more intensive. In this study, we used default parameters or recommended settings for each method whenever possible for a fair comparison,
so we still present the results based on the default settings of the SPACE algorithm.
Another conclusion is that accuracy increases with sample size. For the range of sample sizes tested (20–1,000), the largest performance gains were obtained at the smaller sample sizes, and the gains
began to saturate as the number of samples approached 1,000. This suggests that having thousands of samples may not offer substantial additional performance improvements.
Also, this study demonstrates that it is feasible to use current techniques to generate accurate, informative networks even with dozens or hundreds of genes. Several algorithms scaled to such
environments well without requiring sophisticated computational resources.
One disadvantage of probabilistic-network-based methods is that they require the data to be discretized. When using probabilistic networks, it is generally preferable to discretize into a small number
of “buckets” that directly represent an underlying biological observation. To this end, data are typically discretized into binary buckets (implying that a gene is either “on” or “off”) or ternary
buckets (signifying “under-expressed,” “normally expressed,” and “over-expressed”). Unfortunately, fitting the data into any reasonable number of buckets results in substantial information loss.
Finally, we found that the Bayesian methods did not scale to larger networks well. Because of the computational complexity as well as the memory requirements, these methods – as currently implemented
– are not the ideal choice for such large networks. WGCNA, GeneNet, ARACNE and SPACE, on the other hand, were designed to construct gene networks at very large scales. It is also worth mentioning
that the WGCNA package provides several useful tools to facilitate the analysis and visualization of resulting networks, including tools to identify subnetworks and an interface to Cytoscape. The
WGCNA package can be used for not only constructing gene networks but also for detecting modules/sub-networks, identifying hub genes, and selecting candidate genes as biomarkers.
Supporting Information
ROC Curves for the 17-Gene Network. The Receiver Operating Characteristic (ROC) curves for the 17-gene network, which are quantified using the Area Under the Curve (AUC).
AUCs for All Network Sizes. The relationship between sample size and the area under the ROC curve (AUC) values for each network size and network construction method.
pAUCs for All Network Sizes. The relationship between sample size and the partial area under the ROC curve (pAUC) values, computed for FPR up to 0.005, for each network size and network construction method.
AUCs on 11-gene network. The AUCs for each method on Werhli's 11 gene network. As Werhli had demonstrated, the Bayesian method performs quite well compared to other network construction methods.
Hub Gene Performance. For all network sizes, the figure shows the relationship between the sample size and the area under the ROC curve (AUC) for each method's classification of hub genes, where a
hub gene is defined as a gene with 4 or more connections.
Histograms of Connectivity Scores for Various Methods. Depicts the differences in the distributions of the genes' connectivity values (weighted degree) across the different methods on the 44-gene
network with 200 samples. Scores were normalized to [0,1] by dividing all predicted connectivity scores by the maximum connectivity score in that setup. Note that GeneNet is skewed such that most
genes are highly connected compared to the other methods, which complicates the evaluation of the AUC scores for hub-gene classification for this method.
AUCs for All Network Sizes in Simulation Study 2. The relationship between sample size and the area under the ROC curve (AUC) values for each network size and network construction method in
simulation study 2, which uses a non-normal distribution for expression values.
pAUCs for All Network Sizes in Simulation Study 2. The relationship between sample size and the partial area under the ROC curve (pAUC) values, computed for FPR up to 0.005, for each network size and
network construction method in simulation study 2, which uses a non-normal distribution for expression values.
Sweave Documentation for all Analysis. Documents the creation and analysis of the reverse-engineered methods for all network types and network construction methods.
Author Contributions
Conceived and designed the experiments: JDA YX GX. Performed the experiments: JDA YX GX. Analyzed the data: YX MC LG GX. Wrote the paper: JDA YX MC LG GX.
Comparison and Inheritance
February 22, 2013
We continue last week's discussion of comparison functions by thinking about how to compare objects from different parts of an inheritance hierarchy.
Suppose we have a Vehicle class from which we have derived Car, Truck, and other classes such as Airplane. How do we go about defining comparisons between objects of these various classes that will
meet the C++ ordering requirements in a useful way?
Implementing these comparisons will have its own problems, because C++ objects are not polymorphic by themselves. To take advantage of polymorphism in C++, we must use pointers or references, perhaps
directly, or perhaps through some kind of smart pointer class. Nevertheless, it is probably useful to think about the characteristics that such comparison functions might have before we worry about
the details of implementing them.
The obvious problem in defining such operations is that there might well be an obvious strategy for comparing one Car with another, or one Truck with another — but it might not be obvious how to
extend this strategy to allow comparing a Car with a Truck. For example, every Car might have a serial number, and every Truck might have a serial number, but these two kinds of serial numbers might
be in completely different formats.
One's first temptation might be to use the obvious methods of comparing serial numbers between two objects of the same type, and then to say that two objects of different types are unrelated. In
other words, define the < operation on two Vehicle objects to return false unless the two objects have the same type. In that case, < should compare the objects' contents (serial numbers, or any
other of the objects' aspects we might choose).
This strategy is a disaster. To see why, consider two Car objects c1 and c2, and a Truck object t. Under this definition, c1 is unrelated to t, t is unrelated to c2, but one of c1 < c2 and c2 < c1
will be true. This behavior violates the C++ rules for order relations, because the unrelated operation is not transitive. In effect, when you're designing a comparison function, you should think of
unrelated as meaning conceptually equivalent.
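A small illustration makes the failure concrete (the class and function names below are ours, not from the column):

#include <string>
#include <typeinfo>

struct Vehicle {
    std::string serial;
    virtual ~Vehicle() {}
};
struct Car   : Vehicle {};
struct Truck : Vehicle {};

// The flawed strategy: objects of different dynamic types are "unrelated",
// i.e. neither compares less than the other.
bool flawedLess(const Vehicle& a, const Vehicle& b) {
    if (typeid(a) != typeid(b)) return false;   // different types: never "less than"
    return a.serial < b.serial;                 // same type: compare serial numbers
}
// Take two Cars c1, c2 with serials "A" and "B", and any Truck t.  Then
// flawedLess(c1, t) and flawedLess(t, c1) are both false, so c1 is "equivalent"
// to t; likewise c2 is "equivalent" to t.  But flawedLess(c1, c2) is true, so
// c1 and c2 are not equivalent to each other: equivalence is not transitive,
// which violates the strict weak ordering the standard library requires.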
With this insight, let's revisit our original problem: We know how to compare two Cars, we know how to compare two Trucks, and we need to figure out how to compare a Car with a Truck. We can imagine
this problem as if we had two stacks of cards, representing Cars and Trucks, respectively, and we want a way to shuffle these stacks of cards together into a single stack, while still preserving the
ordering within each of the original stacks.
If the problem is described that way, it should be obvious that the easiest way to solve it is to put one stack on top of the other. Translating this solution into a comparison function is trivial:
We decree that an object of one type is always less than an object of the other type, and continue to use the comparisons that we have already defined within each type. Another way to look at this
solution is that each object becomes a string of two symbols, where the first one is the object's type, and we define ordering as dictionary ordering over these two-symbol strings.
The more general kind of "shuffling" is harder to define, because we need to be able to compare objects of different types while ensuring that the results of such comparisons are always consistent
with each other. However, using dictionary order suggests one way of doing it: We find a string — such as a serial number — that describes each object, and then create a pair in which the first
component is that string and the second component represents the type.
This strategy suggests a general technique: Represent each object's contents with a canonical value — a value of a single, well-defined type that already has an order relation defined on it. Then use
the canonical values for comparison, perhaps appending the object's type if we want two objects of different types with the same canonical value to be ordered rather than conceptually equivalent.
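One possible rendering of this strategy in code, reworking the small Vehicle hierarchy from the previous sketch, compares a canonical string first and falls back on the dynamic type as a tie-breaker. This is an illustrative sketch rather than a prescribed implementation.

#include <string>
#include <typeinfo>
#include <utility>

struct Vehicle {
    virtual ~Vehicle() {}
    // A canonical value: a single, well-ordered representation of the contents.
    virtual std::string canonical() const = 0;
};

struct Car : Vehicle {
    std::string serial;
    explicit Car(std::string s) : serial(std::move(s)) {}
    std::string canonical() const override { return serial; }
};

struct Truck : Vehicle {
    std::string serial;
    explicit Truck(std::string s) : serial(std::move(s)) {}
    std::string canonical() const override { return serial; }
};

// Dictionary order over (canonical value, type): objects whose canonical values
// differ are ordered by those values; objects of different types with the same
// canonical value are ordered by type instead of being left "unrelated".
bool vehicleLess(const Vehicle& a, const Vehicle& b) {
    const std::string ca = a.canonical(), cb = b.canonical();
    if (ca != cb) return ca < cb;
    // type_info::before gives an implementation-defined but consistent ordering.
    return typeid(a).before(typeid(b));
}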
That's the general strategy. Now it's your turn. Imagine an inheritance hierarchy with Number at its root and Integer and Real each derived from Number. Assume that an Integer is a container for an
int, and Real is a container for a float. How would you define a comparison operation between two Numbers? How would you implement it? Beware: This problem is nowhere near as simple as it looks.
Data Structures
About the course
This is the homepage for CS 241: Data Structures, a course offered by Fritz Ruehr at the Computer Science Department of Willamette University.
The study of data structures and algorithms serves as a basic foundation to a Computer Science education. As a second course in programming, it enriches a student's understanding of the basic
processes involved in computing. But it also begins to focus attention on deeper and more abiding issues. In this course, we shift our attention from simple coding techniques to the analysis of
algorithms in terms of resource use (time and space), generic solutions to recurring problems and larger-scale program design. A good portion of our time will be spent becoming familiar with the
discipline's standard repertoire of data structures and algorithms. In order to support larger-scale design, we will stress principles of abstraction and modularity. We will try to divide our
programs into cleanly separated components, with narrow interfaces, and consider the specification of their behavior separate from its possible implementations. Through all of this, our programming
vehicle will be the modern, object-oriented programming language Java. Students should come out of this course with a solid capability for programming and design and a good foundation for future
study of Computer Science in general.
• Welcome to the Spring 2014 teaching of the course!
• Hey! You can find correct solutions to the Lab 4 princess data here (as text data output) and here (as a PDF of a graphic of the tree).
• Curious about new ways to program? Check out this video on Vimeo by Bret Victor (of Apple, among other places) and his ideas on how immediacy of access to runtime behavior affects coding.
(This demo is, by the way, 2 years old ... .)
• Lab 5 is now posted!
• You can find a nice, relatively browser-compatible interactive AVL tree animation at this link from qmatica (possibly useful in studying for the Final Exam).
• Here is a very sketchy overview of how pointers in an AVL tree might be re-assigned so as to effect a double-rotation “all at once”; this is definitely not optimized, and the diagrams could be
more helpfully annotated, but it should provide some clues as to how to solve that last exam problem (many other orderings are possible as well). Details are in this PDF file.
• Study for the midterm using these notes!
• Lab 4 is now posted!
• The link to the “illustrated code” for tree traversals is in this hot-linked diagram.
• Looking for sample code from lecture?
You should now be able to find it in in this folder (.java files only, though).
(Warning! This is code straight from lecture: little documentation, not nec. ideal code, either!)
• Lab 3 is now posted!
• Lab 2 is now posted!
• Check out these graphics with some program design advice: part one and part two.
• (Just for fun: here is the final Zombie Game project from the last time I taught 141.)
• The I/O examples are here.
• The "retro mash-up" version of the windows lab is here.
(Click repeatedly on the arrow in the lower left.)
• Here are the sample console and option pane programs.
• Here is Joel Spolsky and Schlemiel the painter on Java, C and what you need to know to succeed ...
(and more from Spolsky here, plus discussion from readers, and some other thoughts)
• Here is a PDF version of the corrected syllabus!
• Check out this Java entities diagram and this outline (in progress) of Java ontology.
• Remember to check the CS tutoring page for information about regular tutoring hours as well as individual, one-on-one tutoring via the Learning Center (all free of charge!)
• You can find a short example of using Scanners to open and read a file here.
• Here is the Java Tutorials “trail” on interfaces; see also here.
(Older news below)
• Study guides from the 2012 teaching of the class for the midterm and final exams are still available.
• Here is a the key to sorting algorithms (for exam study).
• Here is a quick overview of sorting (more to be added later) … and here is a page with sorting algorithm animations.
• See the AVL tree (etc.) animations here.
• Here’s a couple of more direct links to the new windows applet and the form applet (now with code embedded … at least for the second one!).
• You can find the Applet examples in this directory.
• Relevant memes?
• This directory contains (a variation on) the sample code from lecture on Tuesday 7 Feb regarding call stacks and exceptions.
(Thanks to Kendra for requesting it.)
• ... and you can find sample output for Lab 2 here for Fibonacci and here for Collatz.
(Thanks to Kendra again!)
• Stephanie Jones, graduating senior in CS and Math sends along this video gem on a Fibonacci analysis of Spongebob's pineapple (!).
• Maybe we should use this old lab as a foil for learning about interfaces and anonymous inner classes?
(See also Section 1.6 of the Weiss textbook.)
• Know your tools! What's new in Java 7? Check out these links:
□ Wikipedia’s Java version history page
□ As far as I know, we don’t have Java 7 installed in our labs; you are welcome to install it on your own machine, however.
• The Wikipedia entry on the Collatz Conjecture (not available on Wednesday 18 January 2012).
• Check out the Wikipedia list of data structures.
• You can find the detailed case analysis of AVL tree rotations on Roger Whitney's site at San Diego State University.
Wikipedia also has a nice illustration of tree rotations as well as a good summary of splay trees.
• OK, wrt visualization of sorting algorithms, we have here the more fair side-by-side comparison I was hoping for in lecture
and, over here, the static (non-animated) pictures I emailed about (which may nevertheless not give a good sense of
overall running time differences, as discussed in the comments to the reddit post).
• Check out these animated data structures: here they are.
• By the way, speaking of logarithms, exponents and numerals, here's what a trillion dollars looks like (a Public Service Announcement). [LINK BROKEN]
• The link for the lecture on tree traversals is this hot-linked diagram (some code updated with syntax highlighting!).
• Those of you still working on Java exceptions will want to check out this handy list. ☺
• Special thanks to Blake Lavender for this link to a nice site with good information & tutorials on applets, double-buffering, etc.; check it out!
• Here is The Tragedy of Linked Lists.
Labs and written homework (in reverse chron. order)
Some on-line references
Miscellaneous links
Here are a few fun, interesting or otherwise difficult to categorize links that came up in lecture at various points.
• Here's a nice little demonstration of the power of exponentiation (in base 10): The Powers of Ten.
Controlling Dengue with Vaccines in Thailand
PLoS Negl Trop Dis. Oct 2012; 6(10): e1876.
Alison P. Galvani, Editor
Dengue is a mosquito-borne infectious disease that constitutes a growing global threat with the habitat expansion of its vectors Aedes aegypti and A. albopictus and increasing urbanization. With no
effective treatment and limited success of vector control, dengue vaccines constitute the best control measure for the foreseeable future. With four interacting dengue serotypes, the development of
an effective vaccine has been a challenge. Several dengue vaccine candidates are currently being tested in clinical trials. Before the widespread introduction of a new dengue vaccine, one needs to
consider how best to use limited supplies of vaccine given the complex dengue transmission dynamics and the immunological interaction among the four dengue serotypes.
Methodology/Principal Findings
We developed an individual-level (including both humans and mosquitoes), stochastic simulation model for dengue transmission and control in a semi-rural area in Thailand. We calibrated the model to
dengue serotype-specific infection, illness and hospitalization data from Thailand. Our simulations show that a realistic roll-out plan, starting with young children then covering progressively older
individuals in following seasons, could reduce local transmission of dengue to low levels. Simulations indicate that this strategy could avert about 7,700 uncomplicated dengue fever cases and 220
dengue hospitalizations per 100,000 people at risk over a ten-year period.
Vaccination will have an important role in controlling dengue. According to our modeling results, children should be prioritized to receive vaccine, but adults will also need to be vaccinated if one
wants to reduce community-wide dengue transmission to low levels.
Author Summary
An estimated 40% of the world's population is at risk of infection with dengue, a mosquito-borne disease that can lead to hospitalization or death. Dengue vaccines are currently being tested in
clinical trials and at least one product will likely be available within a couple of years. Before widespread deployment, one should plan how best to use limited supplies of vaccine. We developed a
mathematical model of dengue transmission in semi-rural Thailand to help evaluate different vaccination strategies. Our modeling results indicate that children should be prioritized to receive
vaccine to reduce dengue-related morbidity, but adults will also need to be vaccinated if one wants to eliminate local dengue transmission. Dengue is a challenging disease to study because of its
four interacting serotypes, seasonality of its transmission, and pre-existing immunity in a population. Models such as this one provide a useful, coherent framework for synthesizing these complex issues and
evaluating potential public health interventions such as mass vaccination.
Dengue is a mosquito-borne disease, caused by a flavivirus with four serotypes, responsible for an estimated 500,000 hospitalizations and 20,000 deaths per year, mostly in the tropics [1], although
these are probably conservative estimates. The toll of dengue may rise with the increasing range of its primary vectors, Aedes aegypti and A. albopictus, because of climate change and increasing
urbanization in the developing world. Severe dengue cases (i.e., dengue shock syndrome (DSS) and dengue hemorrhagic fever (DHF)) occur primarily among children [2]. Although the mortality rate for
dengue cases is low, even uncomplicated dengue fever causes considerable suffering and loss of productivity despite its short duration [3]–[5]. Because vector control has achieved only limited
success so far in reducing the transmission of dengue [6]–[8], an effective tetravalent vaccine against all four dengue serotypes may be the only means to effectively control dengue. Such a vaccine
could drive dengue rates to very low levels, as has the vaccine against yellow fever, which is also caused by flavivirus [9]. Since urban and sylvatic dengue transmission are not tightly linked [10],
it is not inconceivable that dengue could be eliminated in urban areas with the targeted use of a highly efficacious vaccine.
Several dengue vaccine candidates are currently in development or in clinical trials [11]–[13]. Once vaccine becomes available, initially there will not be sufficient quantities to cover the up to
2.5 billion people at risk [1]. Vaccine will need to be introduced gradually, allowing evaluation of vaccine effectiveness and safety [14]. To reduce disease burden most efficiently with a limited
supply of vaccine, it may be necessary to prioritize certain geographic regions or age groups for vaccination while taking into account the constraints of government vaccination programs and
finances. However, with up to four competing dengue serotypes [15]–[17], seasonal vectors [18], [19], complex and potentially harmful immune responses to infections with heterologous serotypes [20]–
[22], and the difficulty in formulating a tetravalent vaccine that protects against all four serotypes [13], [23], it is important to anticipate how the deployment of such vaccines will affect dengue
virus transmission, and morbidity and hospitalizations caused by the disease [23]–[25].
Here, we investigate the potential effectiveness of different dengue vaccination strategies using a model of dengue transmission in a Thai population. The individual-level stochastic model was
developed to match the epidemiology of dengue in a population in semi-rural Thailand that has experienced hyperendemic dengue transmission for many years. We modeled both single-year campaigns, in
which part of the population is vaccinated well before the dengue season, and multi-year roll-outs, in which young children are vaccinated first and progressively older individuals are vaccinated in
subsequent years as part of a catch-up campaign.
Simulation model
We developed an agent-based model of dengue transmission. The model is described in detail in Text S1. In brief, the model uses a synthetic population based on the demography of Ratchaburi, Thailand.
In the model, individual humans spend time at home, work, or school, and can be susceptible, exposed, infectious, or recovered with respect to each of the four dengue serotypes. Uninfected
mosquitoes, which can not transmit dengue, reside in buildings until they become infected by biting a viremic human host, at which point the mosquito may travel among nearby buildings. Exposed
mosquitoes become infectious to humans after an extrinsic incubation period and remain infectious until they die (Figure 1A). Humans are immune to all serotypes for 120 days after recovering from
infection. After 120 days, they are susceptible to serotypes to which they had not been exposed [26].
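As a rough illustration of how such per-serotype state might be tracked in an agent-based simulation (this is a sketch of ours, not the authors' implementation; the field names and the encoding of the 120-day window are assumptions):

#include <cstdint>

// One possible per-person state: a bitmask of serotypes already experienced plus
// a counter for the temporary cross-protection window after recovery.
struct Person {
    std::uint8_t immuneTo = 0;        // bit i set => already infected with DENV-(i+1)
    int daysSinceRecovery = 10000;    // large value => no recent infection

    bool susceptibleTo(int serotype) const {          // serotype in 0..3
        if (daysSinceRecovery < 120) return false;    // 120-day cross-immunity
        return (immuneTo & (1u << serotype)) == 0;    // no prior exposure to it
    }
    void recoverFrom(int serotype) {
        immuneTo |= static_cast<std::uint8_t>(1u << serotype);
        daysSinceRecovery = 0;
    }
};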
Computer simulation model of dengue transmission.
Secondary infections lead to severe outcomes (i.e., DSS/DHF) in an age-specific proportion of cases (Text S1). Secondary infections are otherwise treated the same as primary infections in the model,
except that viremia resolves one day faster [27].
We describe the synthetic population created for the model in detail in Text S1. Briefly, the model represents a 20 km × 20 km region (Figure 1B). We populate each square kilometer with households to match population
density estimates [28]. The households are randomly drawn from the household microdata from the census of Ratchaburi province. By drawing households from census microdata, we obtain realistic age and
gender distributions both within the households and in the overall population (Text S1). The synthetic population has 207,591 individuals.
Within each square kilometer, individual households, schools, and workplaces are assigned random locations. Children of the appropriate age are sent to the elementary school (ages 5 to 10 years),
lower secondary school (ages 11 to 14 years), or upper secondary school (15 to 17 years). People of the appropriate age are assigned workplaces according to a gravity model in which people tend to
commute to locations that are nearby and have a relatively high population density. Workplaces have an average of 20 workers, who occupy the same location during the workday.
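A gravity-style assignment can be sketched as follows; the specific distance kernel and the structure names here are our illustrative assumptions, since the paper's exact functional form is given only in Text S1.

#include <cstddef>
#include <random>
#include <vector>

struct Site { double x, y, density; };   // candidate work locations (1-km cells, say)

// Assign a workplace with probability proportional to density / (1 + distance^2),
// so nearby, densely populated locations are favoured.  This particular kernel is
// only illustrative, not the calibrated form used in the paper.
std::size_t pickWorkplace(double homeX, double homeY,
                          const std::vector<Site>& sites, std::mt19937& rng) {
    std::vector<double> weight(sites.size());
    for (std::size_t j = 0; j < sites.size(); ++j) {
        double dx = sites[j].x - homeX, dy = sites[j].y - homeY;
        weight[j] = sites[j].density / (1.0 + dx * dx + dy * dy);
    }
    std::discrete_distribution<std::size_t> choose(weight.begin(), weight.end());
    return choose(rng);
}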
During the morning and evening hours, people are at home, and they may go to school or work during the rest of the day (Figure 1C). Individuals symptomatically infected with dengue may stay at home
until they recover. One consequence of this behavior is that there is more dengue transmission in households than at workplaces when dengue is symptomatic. Mosquitoes tend to stay in the same
location (i.e., house, workplace, or classroom), but may migrate to adjacent locations with a fixed probability per day (Figure 1D and Text S1). Occasionally, the simulated infected mosquitoes will
migrate to a random distant location to account for occasional long-distance travel. Because simulated mosquitoes migrate to adjacent locations with the same probability regardless of distance, they
will travel farther in more sparsely inhabited regions.
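The movement rule described above might be sketched as follows; the migration and long-distance jump probabilities shown are placeholders rather than the calibrated values from Text S1.

#include <cstddef>
#include <random>
#include <vector>

// Daily movement of a simulated mosquito: usually it stays where it is, sometimes
// it migrates to an adjacent location, and rarely it jumps to a random distant
// location.
std::size_t moveMosquito(std::size_t here,
                         const std::vector<std::vector<std::size_t>>& neighbors,
                         std::size_t numLocations, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    const double r = u(rng);
    if (r < 0.001) {                                     // rare long-distance jump
        std::uniform_int_distribution<std::size_t> any(0, numLocations - 1);
        return any(rng);
    }
    if (r < 0.15 && !neighbors[here].empty()) {          // migrate to a neighbour
        std::uniform_int_distribution<std::size_t> pick(0, neighbors[here].size() - 1);
        return neighbors[here][pick(rng)];
    }
    return here;                                         // otherwise stay put
}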
To simulate multi-year epidemics, we make two simplifying assumptions: 1) there is no correlation of prior exposure to dengue within households and 2) household structures do not change over time.
After simulating a single year of dengue transmission, we “age” the population by setting the immune status (both prior infections and vaccination) of all individuals of age [29]. Therefore, to
minimize the effects assuming a constant population structure, we do not run the model beyond ten years. The advantage of our approach is that the complex dynamics of household structures such as
births, deaths, and marriages do not need to be included in the model. These processes are extremely difficult to simulate realistically but would be required to maintain plausible age distributions
within households, schools, and the workforce. Also, the correlation of immune statuses within households and within geographic areas is disrupted in the multi-year model [30]. It also makes it
impossible to trace the immune history of an individual person, since an individual's prior exposure to dengue and vaccination history will be copied from a randomly selected younger person each
year. However, the population-level history of exposure to the circulating strains of dengue will be correct.
Estimated dengue serotype-specific exposure in Thailand
In the model, individuals are assigned to have immunity from prior exposure to the four serotypes of dengue based on their age. The age-specific immune profile is based on two sources of data on the
prevalence of serotypes in Thailand. Thailand's Ministry of Public Health releases an “Annual epidemiological surveillance report” that summarizes dengue serotype surveillance data. Reports from
2000–2009 are available at epid.moph.go.th, which we summarize in Table S1. For 1973–1999, we use data from a surveillance study based on children hospitalized at the Queen Sirikit National Institute
of Child Health in Bangkok, as published in [31] (Table S2). Although we should be cautious about concatenating data from different sources, many of the cases reported to the Ministry of Public
Health are 10–14 years old, so the populations in these two datasets are reasonably comparable.
We estimate the age-specific immunity to the four dengue serotypes in our model. We assume that the level of exposure to dengue each year was such that 11% of naïve individuals would be infected,
based on studies in nearby Vietnam [32], [33]. To determine the contribution of the four serotypes to this constant annual exposure to infection, we estimate the relative prevalence of the 4
serotypes by combining the Thailand's Health Ministry's national data from 2000–2009 (available at http://epid.moph.go.th) and Queen Sirikit National Institute of Child Health in Bangkok from
1973–1999 [31] (Figure 2).
Estimated relative prevalence of the 4 serotypes in Thailand.
For each of the years for which we have serotype prevalence estimates, we randomly selected 11% of the population who was alive in that year (i.e., was 0 years old or older) to be exposed to dengue,
and for each individual simulated exposure to a single serotype drawn from that year's prevalence data. Individuals exposed to a serotype are considered to be permanently immune. For years before
1973, we performed the same procedure, except that we assumed that the serotype prevalence was the mean serotype prevalence from 1973–2009. The mean serotype prevalences are 9.8%, 14.6%, 7.5%, and
5.2% for DENV-1, DENV-2, DENV-3, and DENV-4, respectively. In other words, we assumed a constant 11% annual exposure to dengue (sufficient to infect) for all individuals, regardless of age or immune
status: each person exposed in a given year is exposed to exactly one serotype, and he or she gains sterilizing immunity to that serotype if not already immune from prior exposure.
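The assignment procedure described in this and the preceding paragraph can be sketched as follows (an illustrative reconstruction, not the authors' code; the bitmask representation and function names are our assumptions).

#include <array>
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// Assign prior exposure: for each historical year, 11% of the people alive in that
// year are exposed to exactly one serotype drawn from that year's relative serotype
// prevalence, and exposure confers permanent immunity to that serotype.
void assignExposureHistory(std::vector<std::uint8_t>& immuneTo,    // bitmask per person
                           const std::vector<int>& age,            // current age per person
                           const std::vector<std::array<double, 4>>& prevalenceByYear,
                           std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    const int numYears = static_cast<int>(prevalenceByYear.size());
    for (int y = 0; y < numYears; ++y) {
        const int yearsAgo = numYears - 1 - y;           // 0 = the most recent season
        std::discrete_distribution<int> serotype(prevalenceByYear[y].begin(),
                                                 prevalenceByYear[y].end());
        for (std::size_t i = 0; i < immuneTo.size(); ++i) {
            if (age[i] < yearsAgo) continue;             // not yet born in that year
            if (u(rng) < 0.11)                           // 11% annual exposure
                immuneTo[i] |= static_cast<std::uint8_t>(1u << serotype(rng));
        }
    }
}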
Because the four serotypes have different symptomatic fractions, surveillance data give a skewed representation of the number of individuals infected by each serotype. We re-scaled the number of
cases for each of the four serotypes in the historical data as described in Text S2. By scaling the historical surveillance data, the population-level immunity to the four serotypes changes, with
increased levels of immunity to less pathogenic serotypes than if the unadjusted surveillance data were used. Figure 3 shows the age-specific immunity to dengue in the synthetic population.
Prior exposure to the four dengue serotypes in the model's synthetic population.
Simulating a single dengue season
We simulated a single year of dengue transmission in Ratchaburi, Thailand (Figure 1B). Dengue seasonality was simulated by modeling the monthly mosquito population to conform to mosquito count data
from Thailand (Text S2). To seed the epidemic, we randomly selected two people to expose to each of the four dengue serotypes for each simulation day (i.e., eight total per day, or 1.4% of the
population per year). Pre-existing immunity protects many of these individuals (Figure 3), so only a few actually become infected each day. This constant seeding represents the repeated introduction
of dengue from neighboring unvaccinated regions and prevents dengue from being eradicated in the model. Simulated epidemics peak in July–August (Figure 4A), about two months later than the peak in
the mosquito population, which is in May–June (Text S2). This delay of dengue activity after mosquito activity is consistent with observations [34]–[36]. The lag is caused by the long mean generation
time, i.e., time between when one human infects another through infected mosquitoes, of 24 days (Text S2).
Simulated dengue incidence in a single year.
The simulated dengue season produced a 5% infection attack rate with some stochastic variation among runs (Table 1). Because of age-specific immunity from prior exposure (Figure 3), most of the
infections occur in children (Figure 5A). The 1.7% dengue illness attack rate is consistent with the estimated 2% observed in children in Ratchaburi in the 2006–2007 season [37], [38]. There were 39
severe cases requiring hospitalization per 100,000 individuals in a simulated dengue season, primarily among school-aged children (Figure 5C). The age distribution of severe cases is largely a
consequence of the high inherent risk of severe outcome upon secondary infection for this age group, as described in Text S1.
Simulated incidence of dengue infection and illness by age in a simulated season.
Single year dengue simulation results.
We report the total number of uncomplicated and severe (DSS/DHF) cases produced by our model assuming perfect surveillance. Estimates of reporting rates would be needed to compare our modeling
results with actual surveillance data. Wichmann et al. estimated that, among children, total dengue cases in Thailand may be underreported by a factor of 8.7 and severe (inpatient) dengue cases by
2.6, with less underreporting in school-aged children than in younger children [37]. Underreporting among adults is likely higher [39] but is difficult to quantify due to the lack of prospective
cohort or active surveillance studies that include adults [25], [40]. The age distribution of symptomatic cases produced by our model is older than we had anticipated (Figure 5B). This discrepancy
may be due to underreporting of adult dengue cases by routine surveillance, which would skew the age distribution downward. It is also possible that the model overestimates cases among older
individuals. Antibodies from exposure to multiple serotypes may be cross-protective, so third and fourth dengue infections may be rare or only mildly symptomatic [41]. The model is sensitive to
changes in the maximum permissible infection parity (Text S3). Reducing the maximum infection parity to two or three not only greatly reduces the attack rate, but also shifts the age distribution of
cases downward.
During the simulated seasonal peak of dengue transmission, a single person infected an average of 1.9 to 2.3 others, depending on the serotype (Text S2). This is the reproductive number, R [14]. To
interrupt transmission, the effectively immunized fraction of the population must exceed roughly (1 - 1/R)/VE, where VE is the vaccine efficacy against infection; for example, a vaccine with 70%
efficacy against infection would need to cover approximately 80% of the population.
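For concreteness, the standard threshold calculation with the values quoted here (this arithmetic is ours, not reproduced verbatim from the paper) gives

\[
  p_c \;=\; \frac{1 - 1/R}{VE} \;=\; \frac{1 - 1/2.3}{0.7} \;\approx\; 0.81 .
\]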
Simulating vaccination before a single dengue season
We simulated vaccinating the population to protect them before a single dengue season. Recently, an observer-blind, randomized, controlled, phase 2b vaccine trial was conducted with a tetravalent
dengue vaccine [38]. The serotype-specific estimated vaccine efficacy for confirmed dengue illness ranged from 55–90% for serotypes 1, 3 and 4, but was close to 0% for serotype 2. Partially based on
this, we investigate a vaccine with 70% efficacy against infection for each of the four serotypes, delivered as a three-dose series [12]. If one conservatively assumes that an individual is only protected after receiving all three doses, then only those 2 years and older could be protected by vaccine.
Therefore, in the simulation results presented below, we simulated the vaccination of individuals between 2 and 46 years old.
Vaccinating 70% of children 2 to 14 years old would reduce the number of dengue infections by 48%, uncomplicated dengue fever cases by 41%, and severe dengue cases (DSS/DHF) by 54% in a single year (
Table 1 and Figure 4B). The proportion of uncomplicated cases prevented is lower than the proportion of infections because infected children are less likely to become symptomatic with dengue fever
than adults (Text S1), but the proportion of severe cases prevented is higher than the proportion of infections because children are more likely to develop DSS or DHF upon secondary infection than
adults (Figure 5 and Text S1). Because children from ages 2 to 14 years comprise only 22.2% of the population, vaccinating them does not reach the estimated 80% coverage required to control dengue.
Extending the vaccination to include adults up to 46 years old reduced the number of infections by 82%, dengue fever cases by 81%, and severe cases by 83% (Table 1). Vaccinating 70% of individuals
aged 2 to 46 years would result in 52% coverage of the total population. Thus, vaccinating 70% of this population greatly reduces the seasonal peak (Figure 4C), while vaccinating a smaller fraction
of this population is less effective. Simulations in which the vaccine had higher efficacy produced better, but similar, results (Text S4). However, a vaccine that protects against only three of the
four serotypes is substantially less effective than one that offers good protection against all four (Text S4). Because the four serotypes compete in our model, reduction in the circulation of three
of the serotypes could result in increased transmission of the remaining serotype, at least in the short term.
Those who are not vaccinated receive indirect protection when enough of the remaining population is vaccinated. In our simulations, those who are over 46 years old are never vaccinated, but people in
this age group were 44%, 61%, and 71% less likely to become infected when 30%, 50%, and 70% of those from ages 2 to 46 years were vaccinated. Unvaccinated individuals from ages 2 to 46 were 60%, 80%,
and 91% less likely to become infected when 30%, 50%, and 70% of this age cohort were vaccinated.
Certain age groups could be prioritized to receive vaccine. Younger people have the least prior exposure, so they would be the most likely to become infected with and transmit dengue. Simulations
demonstrated that vaccinating children (2–14 years old) would reduce dengue infections in the total population more than using the same number of doses to cover both children and adults (2–46 years
old) (Figure 6A). However, dengue is more likely to be symptomatic in older individuals than younger (Text S1). Thus, the advantage of concentrating vaccine in children was less pronounced when
observing symptomatic dengue (Figure 6B). Children are more likely than adults to have severe outcomes (DSS/DHF) upon secondary dengue infection (Text S1), and vaccinating children was more effective
in reducing severe cases than vaccinating adults (Figure 6C). For example, vaccinating 70% of children from ages 2 to 14 years would reduce the overall severe case rate to 18.0 per 100,000, compared
to 22.8 per 100,000 if the same number of individuals from ages 2 to 46 years were vaccinated. Vaccinating 70% of those from ages 15 to 46 years would reduce the overall severe case rate to 13.7 per
100,000, compared to 10.6 per 100,000 if the same number of individuals from ages 2 to 46 were vaccinated. In other words, concentrating vaccine among children should reduce hospitalizations more
than vaccinating both children and adults.
The simulated effects of pre-vaccinating different age cohorts against dengue.
Simulating multi-year vaccine roll-outs
Due to limited vaccine availability and the logistics of mass vaccination programs, dengue vaccine will probably be deployed in multi-year vaccine roll-out campaigns [42]. We simulated a vaccine
roll-out that covers only children, reaching 70% of children from ages 2 to 14 years within three years, after which point only 2-year-olds are vaccinated. Specifically, we simulated the vaccination
of 70% of 2 to 4 year olds in the first year, 2 year olds and 6 to 9 year olds in the second year, 2 year olds and 11 to 14 year olds in the third year, then only 2-year-olds for the following six
years, as shown in Figure S1A. The incidence of dengue infections drops sharply for the first three years, after which incidence declines slowly (Figure 7B).
The simulated effect of vaccination on daily infection and illness incidence over ten years.
We also simulated a vaccine roll-out that extended the catch-up to include adults up to age 46. This roll-out targets the same age groups for the first three years as the previously described
roll-out, but after this point both 2-year-olds and the youngest four unvaccinated age cohorts are vaccinated, as shown in Figure S1B. Including young adults in the catch-up caused the incidence of
dengue to continue dropping rapidly after children were covered by the third year (Figure 7C and Table 2). For the roll-out that includes adults, 7,699 uncomplicated cases and 217 severe cases per
100,000 people at risk would be averted by vaccination over a ten-year period.
Multi-year dengue simulation results.
We used a dengue simulation model to estimate that vaccination of 50% of the population of rural Thailand could be sufficient to reduce local dengue transmission to low levels. Based on our modeling
study, we conclude that at least 70% efficacy against infection for all serotypes is desirable if one wants to control dengue in a hyperendemic area, and a higher efficacy vaccine would require less
careful targeting of vaccine to reduce community-wide transmission of dengue. We further showed that vaccinating children is the most efficient use of vaccine to reduce cases and hospitalizations,
but control of dengue transmission would also require vaccinating adults. In addition, both vaccinated and unvaccinated people would receive protection from mass vaccination because of the
considerable indirect effects of dengue vaccination. A vaccine that protects against only three serotypes could lead to a significant reduction in overall vaccine effectiveness. Further work
will be needed in order to understand how to use vaccines that may not protect against all four serotypes. Using a detailed model of dengue transmission allows one to explore strategies that target
vaccines most efficiently.
To capture the complex interactions required to evaluate the effectiveness of mass vaccination with tetravalent dengue vaccines, the model includes vector population seasonality [34], [43], human
mobility [44], [45], population heterogeneities, and individual vectors [19]. Thus, we have a coherent framework for modeling both dengue transmission and the effects of vaccination in a complex
population. The model by necessity includes a number of assumptions and simplifications, such as the model structure, parameterization, and vaccine efficacy. The model may be sensitive to assumptions
we made regarding unresolved questions about dengue immunology, such as the susceptibility of individuals after sequential infection by more than two serotypes (Text S3). Although our model
qualitatively captures the epidemic dynamics of a single season of dengue in semi-rural Thailand, there are complex multi-year dynamics that we can only approximate. More realistic modeling of the
prevalence cycles of the four dengue serotypes would require more complex and calibrated inter-serotype interactions (e.g., [15], [46]), and further studies are needed to quantify these effects.
Furthermore, our results apply to dengue transmission in a hyperendemic area, which has a high incidence of dengue and multiple circulating serotypes. In regions with lower transmission, the levels
of population immunity to the various serotypes and the force of infection would be lower, resulting in different effectiveness of mass vaccination. Models that require a great deal of regional data
such as ours may need to be adapted to the specific regions of interest to produce useful results. However, our model agrees with previous model-based estimates that 50–85% of a population need to be
vaccinated to reduce transmission to negligible levels [47]. Therefore, our model produces results qualitatively similar to those from simpler models that assume homogeneous mixing of the human population.
An estimated 40% of the world's population is at risk of dengue infection [48], and vaccinating this population is not feasible in the short term. The greatest need for dengue control is in areas
where dengue disease is hyperendemic, primarily South-east Asia, Latin America, and the Caribbean and Pacific Islands. A coalition of non-governmental organizations, national health ministries, and
vaccine manufacturers could establish priorities for allocating vaccine in publicly funded mass vaccination campaigns. Private demand might be sufficient to cover enough of the remaining population
to reduce dengue transmission to manageable levels [49].
Large-scale vaccination campaigns would be both challenging and costly but could be more cost-effective than relying solely on vector control and other non-pharmaceutical interventions [50], [51].
Vaccination would not only reduce the local disease burden, but may also slow the accumulation of dengue viral genetic changes. Education campaigns and aggressive vector control measures could
complement vaccination if it is not feasible to vaccinate enough individuals to eliminate local dengue transmission. However, such non-pharmaceutical strategies are difficult to sustain, and there
have been doubts about their effectiveness [6]–[8], [52]. Novel vector control strategies that involve releasing parasitic bacteria or genetically engineered mosquitoes are promising [53]–[56], but
their deployment may be controversial. Given the difficulty of controlling dengue with currently available technologies, we believe that vaccination will become an essential component of dengue
reduction efforts.
We thank Aubree Gordon for helpful conversations. The authors wish to acknowledge Thailand's National Statistical Office for providing part of the population data that made this research possible.
Funding Statement
This work was partially supported by the National Institute of General Medical Sciences MIDAS grant U01-GM070749 and the Dengue Vaccine Initiative. The funders had no role in study design, data
collection and analysis, decision to publish, or preparation of the manuscript.
Gubler DJ (1998) Dengue and dengue hemorrhagic fever. Clin Microbiol Rev 11: 480–96. [PMC free article] [PubMed]
Lum LCS, Suaya JA, Tan LH, Sah BK, Shepard DS (2008) Quality of life of dengue patients. Am J Trop Med Hyg 78: 862–7. [PubMed]
Shepard DS, Coudeville L, Halasa YA, Zambrano B, Dayan GH (2011) Economic impact of dengue illness in the Americas. Am J Trop Med Hyg 84: 200–7. [PMC free article] [PubMed]
Martelli CM, Nascimento NE, Suaya JA, Siqueira JB Jr, Souza WV, et al. (2011) Quality of life among adults with confirmed dengue in Brazil. Am J Trop Med Hyg 85: 732–8. [PMC free article] [PubMed]
Heintze C, Velasco Garrido M, Kroeger A (2007) What do community-based dengue control programmes achieve? A systematic review of published evaluations. Trans R Soc Trop Med Hyg 101: 317–25. [PubMed]
Horstick O, Runge-Ranzinger S, Nathan MB, Kroeger A (2010) Dengue vector-control services: How do they work? A systematic literature review and country case studies. Trans R Soc Trop Med Hyg 104:
379–86. [PubMed]
Esu E, Lenhart A, Smith L, Horstick O (2010) Effectiveness of peridomestic space spraying with insecticide on dengue transmission; systematic review. Trop Med Int Health 15: 619–31. [PubMed]
Monath TP (2001) Yellow fever: an update. Lancet Infect Dis 1: 11–20. [PubMed]
10. Halstead SB (2008) Epidemiology. In: Halstead SB, editor, Dengue, London: Imperial College Press. pp. 75–122.
Schmitz J, Roehrig J, Barrett A, Hombach J (2011) Next generation dengue vaccines: A review of candidates in preclinical development. Vaccine 29: 7276–84. [PubMed]
Guy B, Barrere B, Malinowski C, Saville M, Teyssou R, et al. (2011) From research to phase III: Preclinical, industrial and clinical development of the Sanofi Pasteur tetravalent dengue vaccine.
Vaccine 29: 7229–41. [PubMed]
Sabchareon A, Wallace D, Sirivichayakul C, Limkittikul K, Chanthavanich P, et al. (2012) Protective efficacy of the recombinant, live-attenuated, CYD tetravalent dengue vaccine in Thai schoolchildren:
a randomised, controlled phase 2b trial. Lancet. In press. [PubMed]
14. Halloran ME, Longini IM Jr, Struchiner CJ (2010) Design and analysis of vaccine studies. New York: Springer.
Ferguson NM, Donnelly CA, Anderson RM (1999) Transmission dynamics and epidemiology of dengue: insights from age-stratified sero-prevalence surveys. Philos Trans R Soc Lond B Biol Sci 354: 757–68. [
PMC free article] [PubMed]
Cummings DAT, Schwartz IB, Billings L, Shaw LB, Burke DS (2005) Dynamic effects of antibodydependent enhancement on the fitness of viruses. Proc Natl Acad Sci U S A 102: 15259–64. [PMC free article]
Adams B, Holmes EC, Zhang C, Mammen MP Jr, Nimmannitya S, et al. (2006) Cross-protective immunity can account for the alternating epidemic pattern of dengue virus serotypes circulating in Bangkok.
Proc Natl Acad Sci U S A 103: 14234–9. [PMC free article] [PubMed]
Yasuno M, Tonn RJ (1970) A study of biting habits of Aedes aegypti in Bangkok, Thailand. Bull World Health Organ 43: 319–25. [PMC free article] [PubMed]
Focks DA, Daniels E, Haile DG, Keesling JE (1995) A simulation model of the epidemiology of urban dengue fever: literature analysis, model development, preliminary validation, and samples of
simulation results. Am J Trop Med Hyg 53: 489–506. [PubMed]
Halstead SB, Nimmannitya S, Yamarat C, Russell PK (1967) Hemorrhagic fever in Thailand; recent knowledge regarding etiology. Jpn J Med Sci Biol 20 Suppl: 96–103. [PubMed]
Dejnirattisai W, Jumnainsong A, Onsirisakul N, Fitton P, Vasanawathana S, et al. (2010) Cross-reacting antibodies enhance dengue virus infection in humans. Science 328: 745–8. [PMC free article] [PubMed]
Murphy BR, Whitehead SS (2011) Immune response to dengue virus and prospects for a vaccine. Annu Rev Immunol 29: 587–619. [PubMed]
Halstead SB (2012) Dengue vaccine development: a 75% solution? Lancet. In press. [PubMed]
Thomas SJ, Endy TP (2011) Critical issues in dengue vaccine development. Curr Opin Infect Dis 24: 442–50. [PubMed]
Beatty M, Boni MF, Brown S, Buathong R, Burke D, et al. (2012) Assessing the potential of a candidate dengue vaccine with mathematical modeling. PLoS Negl Trop Dis 6: e1450. [PMC free article] [PubMed]
Sabin AB (1952) Research on dengue during World War II. Am J Trop Med Hyg 1: 30–50. [PubMed]
Vaughn DW, Green S, Kalayanarooj S, Innis BL, Nimmannitya S, et al. (2000) Dengue viremia titer, antibody response pattern, and virus serotype correlate with disease severity. J Infect Dis 181: 2–9.
Center for International Earth Science Information Network (CIESIN), Columbia University; International Food Policy Research Institute (IPFRI); the World Bank; and Centro Internacional de Agricultura
Tropical (CIAT) (2004). Global rural-urban mapping project, version 1 (GRUMPv1). Available: http://sedac.ciesin.columbia.edu/gpw/
Cummings DAT, Iamsirithaworn S, Lessler JT, McDermott A, Prasanthong R, et al. (2009) The impact of the demographic transition on dengue in Thailand: insights from a statistical analysis and
mathematical modeling. PLoS Med 6: e1000139. [PMC free article] [PubMed]
Salje H, Lessler J, Endy TP, Curriero FC, Gibbons RV, et al. (2012) Revealing the microscale spatial signature of dengue transmission and immunity in an urban population. Proc Natl Acad Sci U S A 109
(24): 9535–8. [PMC free article] [PubMed]
Nisalak A, Endy TP, Nimmannitya S, Kalayanarooj S, Thisayakorn U, et al. (2003) Serotypespecific dengue virus circulation and dengue disease in Bangkok, Thailand from 1973 to 1999. Am J Trop Med Hyg
68: 191–202. [PubMed]
Thai KTD, Binh TQ, Giao PT, Phuong HL, Hung LQ, et al. (2005) Seroprevalence of dengue antibodies, annual incidence and risk factors among children in southern Vietnam. Trop Med Int Health 10:
379–86. [PubMed]
Tien NTK, Luxemburger C, Toan NT, Pollissard-Gadroy L, Huong VTQ, et al. (2010) A prospective cohort study of dengue infection in schoolchildren in Long Xuyen, Viet Nam. Trans R Soc Trop Med Hyg 104:
592–600. [PubMed]
Halstead SB (2008) Dengue virus–mosquito interactions. Annu Rev Entomol 53: 273–91. [PubMed]
Sanchez L, Vanlerberghe V, Alfonso L, del Carmen Marquetti M, Guzman MG, et al. (2006) Aedes aegypti larval indices and risk for dengue epidemics. Emerg Infect Dis 12: 800–6. [PMC free article] [PubMed]
Pham HV, Doan HTM, Phan TTT, Minh NNT (2011) Ecological factors associated with dengue fever in a Central Highlands province, Vietnam. BMC Infect Dis 11: 172. [PMC free article] [PubMed]
Wichmann O, Yoon IK, Vong S, Limkittikul K, Gibbons RV, et al. (2011) Dengue in Thailand and Cambodia: An assessment of the degree of underrecognized disease burden based on reported cases. PLoS Negl
Trop Dis 5: e996. [PMC free article] [PubMed]
Sabchareon A, Sirivichayakul C, Limkittikul K, Chanthavanich P, Suvannadabba S, et al. (2012) Dengue infection in children in Ratchaburi, Thailand: A cohort study. I. Epidemiology of symptomatic
acute dengue infection in children, 2006–2009. PLoS Negl Trop Dis 6: e1732. [PMC free article] [PubMed]
Meltzer MI, Rigau-Pérez JG, Clark GG, Reiter P, Gubler DJ (1998) Using disability-adjusted life years to assess the economic impact of dengue in Puerto Rico: 1984–1994. Am J Trop Med Hyg 59(2): 265–71.
Porter KR, Beckett CG, Kosasih H, Tan RI, Alisjahbana B, et al. (2005) Epidemiology of dengue and dengue hemorrhagic fever in a cohort of adults living in Bandung, West Java, Indonesia. Am J Trop Med
Hyg 72(1): 60–6. [PubMed]
Gibbons RV, Kalanarooj S, Jarman RG, Nisalak A, Vaughn DW, et al. (2007) Analysis of repeat hospital admissions for dengue to estimate the frequency of third or fourth dengue infections resulting in
admissions and dengue hemorrhagic fever, and serotype sequences. Am J Trop Med Hyg 77(5): 910–3. [PubMed]
Zorlu G, Fleck F (2011) Dengue vaccine roll-out: getting ahead of the game. Bull World Health Organ 89: 476–7. [PMC free article] [PubMed]
Wearing HJ, Rohani P (2006) Ecological and immunological determinants of dengue epidemics. Proc Natl Acad Sci U S A 103(31): 11802–7. [PMC free article] [PubMed]
Stoddard ST, Morrison AC, Vazquez-Prokopec GM, Soldan PV, Kochel TJ, et al. (2009) The role of human movement in the transmission of vector-borne pathogens. PLoS Negl Trop Dis 3: e481. [PMC free
article] [PubMed]
Barmak DH, Dorso CO, Otero M, Solari HG (2011) Dengue epidemics and human mobility. Phys Rev E Stat Nonlin Soft Matter Phys 84: 011901. [PubMed]
Zhang C, Mammen MP Jr, Chinnawirotpisan P, Klungthong C, Rodpradit P, et al. (2005) Clade replacements in dengue virus serotypes 1 and 3 are associated with changing serotype prevalence. J Virol 79:
15123–30. [PMC free article] [PubMed]
Johansson MA, Hombach J, Cummings DAT (2011) Models of the impact of dengue vaccines: A review of current research and potential approaches. Vaccine 29: 5860–8. [PubMed]
Amarasinghe A, Wichmann O, Margolis HS, Mahoney RT (2010) Forecasting dengue vaccine demand in disease endemic and non-endemic countries. Human Vaccines 6: 745–53. [PMC free article] [PubMed]
Shepard DS, Suaya JA, Halstead SB, Nathan MB, Gubler DJ, et al. (2004) Cost-effectiveness of a pediatric dengue vaccine. Vaccine 229–10: 1275–80. [PubMed]
Lee BY, Connor DL, Kitchen SB, Bacon KM, Shah M, et al. (2011) Economic value of dengue vaccine in Thailand. Am J Trop Med Hyg 84: 764–72. [PMC free article] [PubMed]
Al-Muhandis N, Hunter PR (2011) The value of educational messages embedded in a communitybased approach to combat dengue fever: A systematic review and meta regression analysis. PLoS Negl Trop Dis 5:
e1278. [PMC free article] [PubMed]
Phuc HK, Andreasen MH, Burton RS, Vass C, Epton MJ, et al. (2007) Late-acting dominant lethal genetic systems and mosquito control. BMC Biol 5: 11. [PMC free article] [PubMed]
Kambris Z, Cook PE, Phuc HK, Sinkins SP (2009) Immune activation by life-shortening Wolbachia and reduced filarial competence in mosquitoes. Science 326: 134–6. [PMC free article] [PubMed]
Fu G, Lees RS, Nimmo D, Aw D, Jin L, et al. (2010) Female-specific ightless phenotype for mosquito control. Proc Natl Acad Sci U S A 107: 4550–4. [PMC free article] [PubMed]
Walker T, Johnson PH, Moreira LA, Iturbe-Ormaetxe I, Frentiu FD, et al. (2011) The wMel Wolbachia strain blocks dengue and invades caged Aedes aegypti populations. Nature 476: 450–3. [PubMed]
Articles from PLoS Neglected Tropical Diseases are provided here courtesy of Public Library of Science
Your browsing activity is empty.
Activity recording is turned off.
See more...
|
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3493390/?tool=pubmed","timestamp":"2014-04-20T17:10:40Z","content_type":null,"content_length":"139593","record_id":"<urn:uuid:058aa533-072c-420f-8427-93f27a6b4d7e>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
|
French resources for (Geometric) Group Theory
I am looking for ways to improve my mathematical French while learning more material about either finite group theory or geometric group theory. In particular, I would love to find a French
equivalent to Rotman's group theory book, if possible. As far as geometric group theory goes, I am open to about anything, but I love Cayley graphs and hyperbolic groups. I apologize if this request
is too vague, but do any of you have suggestions for French texts in these areas? In terms of mathematical background, I would consider myself a beginner with a decent foundation in these subjects.
Thanks in advance!
gr.group-theory geometric-group-theory
4 Serre's book on representation theory or the one on trees. – Felipe Voloch May 16 '12 at 22:46
10 I'm surprised you haven't come across: ams.org/mathscinet-getitem?mr=1086648 – Ian Agol May 16 '12 at 22:51
1 Also: amazon.com/Geometrie-theorie-groupes-hyperboliques-Mathematics/… – Steve D May 17 '12 at 0:01
@Agol: Some years ago I had to give a course in French to the local group theorists so that they could read the book by Ghys and de la Harpe. – Chandan Singh Dalawat May 17 '12 at 3:36
1 Answer
I suggest the survey articles of Séminaire Bourbaki, such as
Ghys, Étienne Groupes aléatoires (d'après Misha Gromov,…). Astérisque No. 294 (2004), viii, 173–204.
Ghys, Étienne Les groupes hyperboliques. Séminaire Bourbaki, 32 (1989-1990), Exposé No. 722, 36 p.
Tits, Jacques Groupes à croissance polynomiale. Séminaire Bourbaki, 23 (1980-1981), Exposé No. 572, 13 p.
At a more elementary level, there are short articles on various topics in the Gazette des Mathématiciens, such as
Topologie, théorie des groupes et problèmes de décision by Pierre de la Harpe in volume 125.
At an even more elementary level, there are many highly readable and visually appealing online articles in the Images des Mathématiques
such as this one:
Un concept mathématique, trois notions : Les groupes au XIXe siècle chez Galois, Cayley, Dedekind, by Caroline Ehrhardt.
Happy reading !
|
{"url":"http://mathoverflow.net/questions/97163/french-resources-for-geometric-group-theory?sort=newest","timestamp":"2014-04-19T12:26:16Z","content_type":null,"content_length":"57580","record_id":"<urn:uuid:546d3ace-4859-42f6-96a7-3d3b4a43f235>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
|
|
math - a fortune file
At the end of a proof you write Q.E.D, which stands not for
Quod Erat Demonstrandum as the books would have you believe, but
for Quite Easily Done.
-- R. Ainsley in Bluff your way in Maths, 1988
Euler's formula: A connected plane graph with n vertices, e edges and f faces
satisfies n - e + f = 2
Proof. Let T be the edge set of a spanning tree for G. It is a subset of the
set E of edges. A spanning tree is a minimal subgraph that connects all the
vertices of G. It contains so no cycle. The dual graph G* of G has a vertex
in the interior of each face. Two vertices of G* are connected by an edge if
the correponding faces have a common boundary edge. G* can have double edges
even if the original graph was simple. Consider the collection T* of edges
E* in G* that correspond to edges in the complement of T in E. The edges of
T* connect all the faces because T does not have a cycle. Also T* does not
contain a cycle, since otherwise it would separate some vertices of G,
contradicting that T is a spanning subgraph (edges of T and T* don't
intersect). Thus T* is a spanning tree for G*. Clearly e(T)+e(T*)=e.
For every tree, the number of vertices is one larger than the number of
edges. Applied to the tree T, this yields n = e(T)+1, while for the tree
T* it yields f=e(T*)+1. Adding both equations gives n+f=(e(T)+1)+(e(T*)+1)=e+2.
-- from M.Aigner, G. Ziegler "Proofs from THE BOOK"
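A trivial numeric check of the formula on a few familiar plane graphs (the vertex, edge, and face counts below are the standard values for the tetrahedron, cube, and octahedron; the class name is just illustrative):

public class EulerCheck {
    public static void main(String[] args) {
        // {vertices n, edges e, faces f} for the tetrahedron, cube, and octahedron
        int[][] graphs = {{4, 6, 4}, {8, 12, 6}, {6, 12, 8}};
        for (int[] g : graphs) {
            int n = g[0], e = g[1], f = g[2];
            System.out.println(n + " - " + e + " + " + f + " = " + (n - e + f)); // always 2
        }
    }
}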
|
{"url":"http://www.dynamical-systems.org/fortune/index.html","timestamp":"2014-04-20T11:34:14Z","content_type":null,"content_length":"3945","record_id":"<urn:uuid:3accb336-14e1-4783-aa26-15530eacf294>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Double-six dominoes, but with weather words instead of dots. Great for early literacy/language instruction.
"The duckling lived in a farm with 45 animals. There were 10 pigs, 12 goats, 5 dogs, 2 horses and the rest were sheep." Choose the equation that best expresses the word problem.
Solve the simple addition problems. Use the answers to color the picture.
Students fill in missing sums for illustrated basic fact number sentences.
Solve six each difficult addition and subtraction problems, answer sheet included.
"Use the number from the table above to complete the problems below."
Our math "machines" make addition drills fun. Cut out the two shapes and practice simple addition.
Students fill in missing sums for illustrated basic facts number sentences.
• [member-created with abctools] 27 pages of worksheets introducing multiplication to 12.
Students fill in missing sums for illustrated basic facts number sentences.
Roll a die with coin pictures on the faces (available on abcteach) or draw coins out of a bag. Write the value of each coin in the row, then add up the row.
Students fill in missing sums for illustrated basic facts number sentences.
Students fill in missing sums for illustrated basic facts number sentences.
Students fill in missing sums for illustrated basic facts number sentences
Students fill in missing sums for illustrated basic facts number sentences.
• Make 8 copies of the kites and bows. Write an "answer" to a math problem on each kite. Write various combinations of math problems (that match up with each answer) on the bows.
Solve simple addition problems using In and Out boxes. Numbers are within 10.
Students fill in missing sums for illustrated basic facts number sentences.
Students fill in missing sums for illustrated basic facts number sentences.
Students fill in missing sums for illustrated basic facts number sentences.
This penguin theme unit is a great way to practice counting and adding to 20. This 21-page unit includes: tracing numbers, cut and paste, finding patterns, ten frame activity, in and out boxes
and much more! CC: Math: K.CC.B.4
This set of worksheets introduces the fundamentals of addition with concrete examples, story problems, and colorful pictures.
Commutative property addition worksheet. Includes ten frames, addition, and word problems using the common core standards in first grade math. Common Core Math: 1.OA.B.3, 2.OA.C.4
Fill in the appropriate operator symbols, addition or subtraction.
Addition Activity:; Add the sums (up to 18) and use the key to color the picture of two runners.
Students fill in missing sums for illustrated basic fact number sentences.
|
{"url":"http://www.abcteach.com/directory/subjects-math-addition-650-6-3","timestamp":"2014-04-17T15:45:33Z","content_type":null,"content_length":"144806","record_id":"<urn:uuid:4eb74bbd-6bff-42db-bbd4-f7b649a6078f>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00553-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Northlake, TX Prealgebra Tutor
Find a Northlake, TX Prealgebra Tutor
...In all, I take great joy in seeing my students grasp the concepts and become successful in their endeavors. I look forward to helping and serving you or your child's learning needs and to
assist them on their road to success. VERY IMPORTANT: If you submit to me questions on a future assignment ...
40 Subjects: including prealgebra, reading, English, calculus
...In college I would help my classmates figure out problems. I enjoy helping others to learn. I grew up on the SRA phonics program where you learn how to sound out each word rather than just
remember what a word is.
10 Subjects: including prealgebra, geometry, algebra 1, algebra 2
...SAT Math problems focus on testing analytical skills as well as math knowledge. For this reason, it’s important to know how to quickly reason out a plan for solving an SAT Math problem. I help
students identify strategies and work through problems to develop this skill.
15 Subjects: including prealgebra, reading, writing, geometry
Hi! Let me tell you a bit about myself. I obtained my Master's degree in Microbiology at the University of Texas at Arlington, where I also attended as an undergraduate and received my Bachelor's
degree in Biology.
6 Subjects: including prealgebra, biology, ESL/ESOL, algebra 1
...In fact I am currently working with one of the students I tutor to help him become more organized in his study habits. Developing good study skills involves learning good time management;
using a calendar to schedule appropriate time to study and complete lessons, such as term papers; learning w...
82 Subjects: including prealgebra, English, chemistry, calculus
Related Northlake, TX Tutors
Northlake, TX Accounting Tutors
Northlake, TX ACT Tutors
Northlake, TX Algebra Tutors
Northlake, TX Algebra 2 Tutors
Northlake, TX Calculus Tutors
Northlake, TX Geometry Tutors
Northlake, TX Math Tutors
Northlake, TX Prealgebra Tutors
Northlake, TX Precalculus Tutors
Northlake, TX SAT Tutors
Northlake, TX SAT Math Tutors
Northlake, TX Science Tutors
Northlake, TX Statistics Tutors
Northlake, TX Trigonometry Tutors
Nearby Cities With prealgebra Tutor
Argyle, TX prealgebra Tutors
Bartonville, TX prealgebra Tutors
Colleyville prealgebra Tutors
Copper Canyon, TX prealgebra Tutors
Corinth, TX prealgebra Tutors
Corral City, TX prealgebra Tutors
Denton, TX prealgebra Tutors
Highland Village, TX prealgebra Tutors
Justin prealgebra Tutors
Oak Point, TX prealgebra Tutors
Roanoke, TX prealgebra Tutors
Saginaw, TX prealgebra Tutors
Shady Shores, TX prealgebra Tutors
Southlake prealgebra Tutors
University Park, TX prealgebra Tutors
|
{"url":"http://www.purplemath.com/Northlake_TX_prealgebra_tutors.php","timestamp":"2014-04-18T16:02:17Z","content_type":null,"content_length":"24082","record_id":"<urn:uuid:d333b34c-8a18-4a47-b2c7-03562ad9d234>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Please - Need help tonight on a couple more calc problems
Quote: Originally Posted by JohnSena This is an exponential problem: removing 17% each period is the same as keeping 83% of the total amount, thus $A(t)=100(0.83)^t$. For the second part you need to find $100(0.83)^3\approx 57.18$.
Quote: Originally Posted by JohnSena a) The ball hits the ground when $f(t)=-16t^2+96t+6=0$. This will have two roots, one positive and the other negative; the negative root is unphysical, but the positive one is what you want. The roots are $-0.06186$ and $6.06186$, so the ball is in the air $\approx 6.06$ seconds. b) The maximum height is achieved when $f(t)$ is a maximum. As this is a quadratic, we know this occurs midway between the roots, at $(-0.06186+6.06186)/2=3$, so the maximum height is $f(3)=-16(3)^2+96(3)+6=150$ feet. c) Already answered: at $t=3$ seconds. RonL
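For what it's worth, here is a small self-contained check of those corrected numbers (a throwaway sketch; the class and variable names are just illustrative, the coefficients are those of the problem above):

public class BallCheck {
    public static void main(String[] args) {
        double a = -16, b = 96, c = 6;                  // f(t) = -16t^2 + 96t + 6
        double disc = Math.sqrt(b * b - 4 * a * c);
        double t1 = (-b + disc) / (2 * a);              // ~ -0.0619 (unphysical)
        double t2 = (-b - disc) / (2 * a);              // ~  6.0619 (time in the air)
        double tMax = -b / (2 * a);                     // vertex at t = 3
        double hMax = a * tMax * tMax + b * tMax + c;   // 150 feet
        System.out.println("roots: " + t1 + ", " + t2);
        System.out.println("max height at t = " + tMax + " is " + hMax + " ft");
    }
}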
|
{"url":"http://mathhelpforum.com/calculus/3084-please-need-help-tonight-couple-more-calc-problems-print.html","timestamp":"2014-04-18T17:23:59Z","content_type":null,"content_length":"8255","record_id":"<urn:uuid:7adc1a5e-9112-4da7-8a49-10a620e2e5c8>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Yann LeCun
Deep Learning has taken over all the big search companies, including Google, Microsoft and Baidu, as well as a few companies that produce technology for them, such as IBM.
They all have deployed DL systems for speech recognition and image content analysis (for search and other things). And there are several efforts to use DL for language modeling and ranking for search
and ad placement.
Google has a large number of people in several groups working to develop and deploy DL applications. Microsoft has several groups working on it for speech, image and search, and Baidu is setting up
their Institute for Deep Learning in Cupertino.
It's interesting how Deep Learning has spread like wildfire in industry (probably because it works so well), while its spread in academic research circles has been somewhat slower, relatively speaking.
It's a rather unusual phenomenon.
Well, the phrase "taken over" seems to imply that DL has replaced other ML technology completely. Is that really the case? If yes, it would indeed be very striking. Of course I don't know what these
companies are doing internally, but seeing that their research departments regularly put out papers using other methods, it doesn't seem likely that this is what is happening.
If, on the other hand, DL is merely one tool that is being applied for particular problems, then this would more closely match what is being done in academia, wouldn't it?
People in universities are more and more involved in filling forms and requesting grants and publication races. So they (we ?.. :-) ) prefer things that you can study after 20 lines of code before
starting to write the corresponding article. Deep learning is not something easy with 20 lines of code. It's a bit similar in discrete time control, in my humble opinion; things which are good for
your publication records are not always things which are good for science.
I think the enthusiasm from industry is because deep learning really shines when you have a lot of training data available.
As a company, I think it is the sense of crisis that drives them to study and apply new technology much more quickly.
Maybe this is a case where universities can't compete with industry? I cite the fact that Geoff Hinton is mostly moving to Google as evidence. Put another way: What are the DL research questions that
industry is not going to explore and where university researchers could play an important role?
It is difficult to analyze DL theoretically in the same way that one can analyze linear regression, logistic regression, or SVM. One reason for this is the lack of convexity and another is the
deviation from linearity.
University researchers are more concerned with theory than industry scientists and engineers, which is why you see a lot of papers on linear regression and such in statistics and ML literature.
Adding neurons is a solution for avoiding local minima; at least one neuron should be initialized in the right direction for avoiding local minima :-) It is hard to formalize, but this principle makes sense, I guess. There are also studies on the expressive power of neural networks depending on their size (separating the effect of the number of layers and the number of neurons). I am not involved enough in this literature to point out references, but I remember this kind of work from a long time ago.
Many of the deep learning techniques that are responsible for the success of the field were developed by accident or by trying things that intuitively made sense. After they are shown to work, some
sort of theoretical justification is loosely formed to say why doing these things made sense. The theory is still very informal and intuition-based. I wonder if the lack of academic interest is
more because theoretical justification is uninteresting for things that already work and were developed without a theoretical basis, or if it's just that the analysis of such networks is just too
dang hard.
Every new learning technique is developed through a combination of intuitive insight and guidance from theoretical insight. A few (very few) come up through inspiration from biology (also combined
with intuitive and theoretical insight). Many come about from the need to solve a new practical problem.
In my experience, it never occurs "by accident".
In all cases that I know, the theory always comes after the insight/intuition. But that very much depends on what you mean by "theory".
If by "theory" you mean a generalization bound, then every deep learning algorithm in wide use has one by default, because every fixed-size deep learning system has a finite VC dimension. The general
VC bounds apply. It's only for things like SVM that you need special bounds because they are non-parametric (the number of parameters grows with the number of samples). Even the maximum margin
regularizer comes from a very natural intuition (it's basically L2 regularization). Incidentally, the idea of L2 and L1 regularization on parameters are much, much older than most people think (way
older than any of the theoretical papers written about them).
Interestingly, theoretical insights can also prevent you from doing the right thing. And deep learning has fallen victim to this too: for a long time, people thought that neural nets shouldn't be too
big to avoid over-fitting. But it turns out the best strategy (for speech or image recognition) is to make the network ridiculously large and regularize the hell out of it (e.g. with drop out and
other methods).
The main problem with being too enamored with theory is that it can restrict your thinking to models that you can analyze. This pretty much limits you to generalized linear models and convex losses.
For theorists, deep learning is a wide open field. There are huge opportunities for theory to analyze and understand what goes on in deep learning systems. Now that deep learning have shown to work
very well, and now that there is large commercial interest in them, there might be enough of a motivation for theorists to crack that nut.
About DL: I'm not good at it yet. I talked to my friends (most of them are industry engineers) and they say, "It's so difficult!" That is because if one wants to understand DL thoroughly, one has to know a great many things. (They are all listed well in Bengio's review paper.) Personally, I am struck by the fact that so many achievements now converge on DL, and that it can indeed show good performance.
+Thomas Dietterich
You don't need huge means to make a big difference. As witness, the neural net trained by Alex Krizhevsky et al on his laptop that has made a big splash in the last few months.
+Yoshua Bengio
Are you talking about his ImageNet entry? Wasn't that two GTX 580's strapped together?
|
{"url":"https://plus.google.com/+YannLeCunPhD/posts/7gM7C6XEjQC","timestamp":"2014-04-18T17:09:46Z","content_type":null,"content_length":"100825","record_id":"<urn:uuid:54ef4052-2242-455c-84fa-d723ff36d401>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Evelyn Boyd Granville, second african american woman mathematican
Evelyn Boyd Granville
Born: May 1, 1924
Birthplace: Washington, D.C.
Thesis: On Laguerre Series in the Complex Domain; Advisor: C. Einar Hille
[Photo: 1997, by Margaret Murray]
Granville was born in Washington, D.C., on May 1, 1924. Her father, William Boyd, worked as a custodian in their apartment building; he did not stay with the family, however, and Granville was raised
by her mother, Julia Walker Boyd, and her mother's twin sister, Louise Walker, both of whom worked as examiners for the U.S. Bureau of Engraving and Printing. Granville and her sister Doris, who was
a year and a half older, often spent portions of their summers at the farm of a family friend in Linden, Virginia.
Evelyn Boyd grew up in Washington, D.C., and attended the segregated Dunbar High School (from which she graduated as valedictorian), which maintained high academic standards. Several of its faculty held
degrees from top colleges, and they encouraged the students to pursue ambitious goals. Granville's mathematics teachers included Ulysses Basset, a Yale graduate, and Mary Cromwell, a University of
Pennsylvania graduate; Cromwell's sister, who held a doctorate from Yale, taught in Dunbar's English department.
Inspired by her high school teachers and with the encouragement of her family and teachers, Granville entered Smith College with a small partial scholarship from Phi Delta Kappa, a national sorority
for black women. During the summers, she returned to Washington to work at the National Bureau of Standards. After her freshman year, she lived in a cooperative house at Smith, sharing chores rather
than paying more expensive dormitory rates. Granville majored in mathematics and physics, but was also fascinated by astronomy after taking a class from Marjorie Williams. She considered becoming an
astronomer, but chose not to commit herself to living in the isolation of a major observatory, which was necessary for astronomers of that time. Though she had entered college intending to become a
teacher, she began to consider industrial work in physics or mathematics. She graduated summa cum laude in 1945 and was elected to Phi Beta Kappa.
With help from a Smith College fellowship, Granville began graduate studies at Yale University, for which she also received financial assistance. She earned an M.A. in mathematics and physics in one
year, and began working toward a doctorate at Yale. For the next two years she received a Julius Rosenwald Fellowship, which was awarded to help promising black Americans develop their research
potential. The following year she received an Atomic Energy Commission Predoctoral Fellowship. Granville's doctoral work concentrated on functional analysis, and her dissertation was titled On
Laguerre Series in the Complex Domain. Her advisor, Einar Hille, was a former president of the American Mathematical Society. Upon receiving her Ph.D. in mathematics in 1949, Granville was elected to
the scientific honorary society Sigma Xi.
1949 was only the second year in which an African American woman received a Ph.D. in Mathematics (the first was 1943, when Euphemia Lofton-Haynes earned a Ph.D.). That same year, Marjorie Lee Browne finished
her Ph.D. thesis at the University of Michigan, but was not awarded the degree until February of the next year, 1950.
Granville then undertook a year of postdoctoral research at New York University's Institute of Mathematics and Science. Apparently because of housing discrimination, she was unable to find an apartment in New York, so she moved in with a friend of her mother. Despite attending segregated schools, Granville had not encountered discrimination based on race or gender in her professional preparation. Only years later would she learn that her 1950 application for a teaching position at a college in New York City was turned down for such a reason. A female adjunct faculty member eventually told biographer Patricia Kenschaft that the application was rejected because of Granville's race; however, a male mathematician reported that despite the faculty's support of the application, the dean rejected it because Granville was a woman.

In 1950, Granville accepted the position of associate professor of mathematics at Fisk University, a noted black college in Nashville, Tennessee. At Fisk, she taught two students, Vivienne Malone Mayes and Etta Zuber Falconer, who would be, respectively, the seventh and eleventh African American women to receive Ph.D.'s in Mathematics.
After two years of teaching, Granville went to work for the Diamond Ordnance Fuze Laboratories as an applied mathematician, a position she held for four years. From 1956 to 1960, she worked for IBM
on the Project Vanguard and Project Mercury space programs, analyzing orbits and developing computer procedures. Her job included making "real-time" calculations during satellite launchings. "That
was exciting, as I look back, to be a part of the space programs--a very small part--at the very beginning of U.S. involvement," Granville told Loretta Hall in a 1994 interview.
On a summer vacation to southern California, Granville met the Reverend Gamaliel Mansfield Collins, a minister in the community church. They were married in 1960, and made their home in Los Angeles.
They had no children, although Collins's three children occasionally lived with them. In 1967, the marriage ended in divorce.
Upon moving to Los Angeles, Granville had taken a job at the Computation and Data Reduction Center of the U.S. Space Technology Laboratories, studying rocket trajectories and methods of orbit
computation. In 1962, she became a research specialist at the North American Aviation Space and Information Systems Division, working on celestial mechanics, trajectory and orbit computation,
numerical analysis, and digital computer techniques for the Apollo program. The following year she returned to IBM as a senior mathematician.
Because of restructuring at IBM, numerous employees were transferred out of the Los Angeles area in 1967; Granville wanted to stay, however, so she applied for a teaching position at California State
University in Los Angeles. She happily reentered the teaching profession, which she found enjoyable and rewarding. She was disappointed in the mathematics preparedness of her students, however, and
she began working to improve mathematics education at all levels. She taught an elementary school supplemental mathematics program in 1968 and 1969 through the State of California Miller Mathematics
Improvement Program. The following year she directed a mathematics enrichment program that provided after-school classes for kindergarten through fifth grade students, and she taught grades two
through five herself. She was an educator at a National Science Foundation Institute for Secondary Teachers of Mathematics summer program at the University of Southern California in 1972. Along with
colleague Jason Frand, Granville wrote Theory and Application of Mathematics for Teachers in 1975; a second edition was published in 1978, and the textbook was used at over fifty colleges.
In 1970, Granville married Edward V. Granville, a real estate broker. After her 1984 retirement from California State University in Los Angeles, they moved to a sixteen-acre farm in Texas, where they
sold eggs produced by their eight hundred chickens.
From 1985 to 1988, Granville taught mathematics and computer science at Texas College in Tyler. In 1990, she accepted an appointment to the Sam A. Lindsey Chair at the University of Texas at Tyler,
and in subsequent years continued teaching there as a visiting professor. Smith College awarded Granville an honorary doctorate in 1989, making her the first black woman mathematician to receive such
an honor from an American institution.
Throughout her career Granville shared her energy with a variety of professional and service organizations and boards. Many of them, including the National Council of Teachers of Mathematics and the
American Association of University Women, focused on education and mathematics. Others, such as the U.S. Civil Service Panel of Examiners of the Department of Commerce and the Psychology Examining
Committee of the Board of Medical Examiners of the State of California, reflected broader civic interests.
When asked to summarize her major accomplishments, Granville told Hall, "First of all, showing that women can do mathematics." Then she added, "Being an African American woman, letting people know
that we have brains too."
Most important is her biography, My Life as a Mathematician by Evelyn Boyd Granville, which can be read on Agnes Scott College's "Women in Mathematics" website.
References: [Giles], [Granville], [Kenschaft1981], [Kenschaft1987], [Kenschaft1993]
{"url":"http://www.math.buffalo.edu/mad/PEEPS/granville_evelynb.html","timestamp":"2014-04-16T20:04:29Z","content_type":null,"content_length":"12627","record_id":"<urn:uuid:e8bc413a-838c-4a60-b3ad-411082eb0c1b>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Jackknife bias reduction for polychotomous logistic regression
, 1999
"... We study rare events data, binary dependent variables with dozens to thousands of times fewer ones (events, such as wars, vetoes, cases of political activism, or epidemiological infections) than
zeros (“nonevents”). In many literatures, these variables have proven difficult to explain and predict, a ..."
Cited by 56 (4 self)
We study rare events data, binary dependent variables with dozens to thousands of times fewer ones (events, such as wars, vetoes, cases of political activism, or epidemiological infections) than
zeros (“nonevents”). In many literatures, these variables have proven difficult to explain and predict, a problem that seems to have at least two sources. First, popular statistical procedures, such
as logistic regression, can sharply underestimate the probability of rare events. We recommend corrections that outperform existing methods and change the estimates of absolute and relative risks by
as much as some estimated effects reported in the literature. Second, commonly used data collection strategies are grossly inefficient for rare events data. The fear of collecting data with too few
events has led to data collections with huge numbers of observations but relatively few, and poorly measured, explanatory variables, such as in international conflict data with more than a
quarter-million dyads, only a few of which are at war. As it turns out, more efficient sampling designs exist for making valid inferences, such as sampling all available events (e.g., wars) and a
tiny fraction of nonevents (peace). This enables scholars to save as much as 99 % of their (nonfixed) data collection costs or to collect much more meaningful explanatory
, 2000
"... Some of the most important phenomena in international conflict are coded as "rare events data," binary dependent variables with dozens to thousands of times fewer events, such as wars, coups,
etc., than "nonevents". Unfortunately, rare events data are difficult to explain and predict, a problem that ..."
Cited by 5 (2 self)
Some of the most important phenomena in international conflict are coded as "rare events data," binary dependent variables with dozens to thousands of times fewer events, such as wars, coups, etc.,
than "nonevents". Unfortunately, rare events data are difficult to explain and predict, a problem that seems to have at least two sources. First, and most importantly, the data collection strategies
used in international conflict are grossly inefficient. The fear of collecting data with too few events has led to data collections with huge numbers of observations but relatively few, and poorly
measured, explanatory variables. As it turns out, more efficient sampling designs exist for making valid inferences, such as sampling all available events (e.g., wars) and a tiny fraction of
non-events (peace). This enables scholars to save as much as 99% of their (non-fixed) data collection costs, or to collect much more meaningful explanatory variables. Second, logistic regression, and
other commonly ...
, 2010
"... Description Extends the approach proposed by Firth (1993) for bias reduction of MLEs in exponential family models to the multinomial logistic regression model with general covariate types.
Modification of the logistic regression score function to remove first-order bias is equivalent to penalizing t ..."
Description Extends the approach proposed by Firth (1993) for bias reduction of MLEs in exponential family models to the multinomial logistic regression model with general covariate types.
Modification of the logistic regression score function to remove first-order bias is equivalent to penalizing the likelihood by the Jeffreys prior, and yields penalized maximum likelihood estimates
(PLEs) that always exist. Hypothesis testing is conducted via likelihood ratio statistics. Profile confidence intervals (CI) are constructed for the PLEs.
"... Discipline is the highest of all virtues. Only so may strength and desire be counterbalanced and the endeavors of man bear fruit. N. KAZANTZAKIS, ..."
Discipline is the highest of all virtues. Only so may strength and desire be counterbalanced and the endeavors of man bear fruit. N. KAZANTZAKIS,
, 2009
"... In logistic regression analyses of small or sparse data sets, results obtained by maximum likelihood methods cannot be generally trusted. In such analyses, although the likelihood meets the
convergence criterion, at least one parameter may diverge to plus or minus infinity. This situation has been t ..."
In logistic regression analyses of small or sparse data sets, results obtained by maximum likelihood methods cannot be generally trusted. In such analyses, although the likelihood meets the
convergence criterion, at least one parameter may diverge to plus or minus infinity. This situation has been termed ’separation’. Examples of two studies are given, where the phenomenon of separation
occurred: the first one investigated whether primary graft dysfunction of lung transplants is associated with endothelin-1 mRNA expression measured in lung donors and in graft recipients. In the
second example, conditional logistic regression was used to analyze a randomized animal experiment in which animals were clustered into sets defined by equal follow-up time. I show that a penalized
likelihood approach provides an ideal solution to both examples, and provide comparative analyses including possible alternative approaches. The estimates obtained by the penalized likelihood
approach have reduced bias compared to their maximum likelihood counterparts, and inference using penalized profile likelihood is straightforward. Finally, I provide an overview of software that can
be used to apply the proposed penalized likelihood approach. Avoiding infinite estimates with logistic regression -
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2849492","timestamp":"2014-04-18T00:42:01Z","content_type":null,"content_length":"22942","record_id":"<urn:uuid:97d08561-a41c-439c-bdca-f1b8a4ff1959>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Multiplication by a constant - how?
Sergey Solyanik <Sergey.Solyanik@bentley.com>
14 Mar 1997 00:30:31 -0500
From: Sergey Solyanik <Sergey.Solyanik@bentley.com>
Newsgroups: comp.compilers
Date: 14 Mar 1997 00:30:31 -0500
Organization: BSI
Keywords: arithmetic, optimize
Microsoft Visual C seems to be capable of generating multiplication by a
constant using a sequence of shifts and adds no longer than 6 operations
(total), using at most two registers. The optimization is always used
and does not seem to be confined to any special cases. Does anybody
know what algorithm is used and where I can look for information?
a = a * 127; // 1111111 bin
=> mov eax, dword ptr [a]
mov ecx, eax
shl eax, 7
sub eax, ecx
a = a * 11101; // 10101101011101 bin
=> mov eax, dword ptr [a]
mov ecx, eax
lea eax, [eax+eax*8]
lea eax, [ecx+eax*4]
lea eax, [eax+eax*4]
lea eax, [eax+eax*4]
lea eax, [eax+eax*2]
lea eax, [ecx+eax*4]
a = a * 11229; // 10101111011101 bin
=> mov eax, dword ptr [a]
mov ecx, eax
lea eax, [eax+eax*2]
lea eax, [ecx+eax*4]
lea eax, [eax+eax*8]
shl eax, 05
sub eax, ecx
lea eax, [eax+eax*2]
Thanks and regards --
Sergey Solyanik
Bentley Systems, Inc
[In the computer theory biz this topic is known as addition chains. I know
lots of heuristics but no general optimal scheme. -John]
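For comparison, here is a minimal sketch (my own, not the algorithm Visual C uses) of the obvious baseline: decompose the constant via its binary representation into one shift-and-add per set bit. Optimal sequences like the ones above come from addition-chain search, which this does not attempt.

// Emit the product a*k by scanning the binary representation of k.
// One add per set bit: not optimal (e.g. 127 is better done as (a<<7) - a),
// but it shows the basic idea behind shift/add multiplication.
static long multiplyByConstant(long a, long k) {
    long result = 0;
    int shift = 0;
    while (k != 0) {
        if ((k & 1) != 0) {
            result += a << shift;   // add a * 2^shift for each set bit
        }
        k >>>= 1;
        shift++;
    }
    return result;
}

A real code generator would also consider subtraction (turning a run of ones into a single subtract, as in the 127 example) and the x86 lea forms that multiply by 3, 5, or 9 in one instruction.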
|
{"url":"http://compilers.iecc.com/comparch/article/97-03-062","timestamp":"2014-04-20T15:53:58Z","content_type":null,"content_length":"6466","record_id":"<urn:uuid:52d909af-4855-4cf8-ac82-1dad81f5b1a3>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Factorial calculation optimization
For a problem I'm working on, I need to calculate some very, very large factorials; or at least the first five non-zero digits of said factorial. I've worked out the function:
F(n) = (n! / 10^S_5(n)) mod 10^5
S_5(n) = sum over i >= 1 of floor(n / p^i), with p = 5, which counts the trailing zeros of the factorial.
The Algorithm:
public static void main(String[] args) {
    long start = System.currentTimeMillis();   // simple timer (the author's suggested replacement for the custom Timer class)
    long fact = 1;                             // running value, holding only the needed trailing digits
    long limit = 100000L;                      // the n in n!
    long pow10 = 10;                           // next power of ten at which to print progress
    for (long i = 2; i <= limit; i++) {        // build i! from (i-1)!
        fact *= i;                             // (i-1)! * i = i! at that point
        while (fact % 10 == 0) {               // strip the trailing zeros, i.e. divide by 10^(S_5(i))
            fact /= 10;
        }
        fact %= 100000;                        // take the modulus to keep the numbers in bounds
        if (i % pow10 == 0) {                  // simply print the value at powers of ten
            System.out.println(i + ": " + fact);
            pow10 *= 10;
        }
    }
    System.out.println(fact);                  // print the result
    System.out.println("Elapsed time: " + (System.currentTimeMillis() - start) / 1000.0 + " seconds.");
}
I can't use De Polignac's formula because the required sieve would take too much memory and brute forcing every prime takes too long. This is the output of F(1000000000):
10: 36288
100: 16864
1000: 53472
10000: 79008
100000: 2496
1000000: 4544
10000000: 51552
100000000: 52032
1000000000: 64704
Elapsed time: 13.476 sec.
I need to calculate F(1000000000000), or F of 1 trillion. That would take a very long time, so there has to be some optimization or tweak that I'm missing somewhere.
Hmm, interesting problem...
So you want to find the last 5 digits of n! after all the trailing zeros?
The first thing to do is eliminate all the trailing zero.
Each trailing zero equates to a prime factor pair of (2,5).
So first order of business is to eliminate all of these pairs. An easy way to do this is count up how many times n! is divisible by 5. Since this is always going to be smaller than the number of
times n! is divisible by 2, we know that the factor 5 will be the limiting value of the (2,5) pairs.
Then as you're multiplying through, if a value is divisible by 2/5 and you haven't reached the limiting value for that number, divide the value by the appropriate value.
For example, let's take 10!:
There are two 5 prime factors of the result (5, 10).
Then as we multiply through, removing factors of 2 up to the limit:
num = 1
num = num * 2 / 2, two_count = 1
num = num * 3
num = num * 4 / 2, two_count = 2
num = num * 5 / 5, five_count = 1 (not actually necessary to keep track of five_count since every factor of 5 will be removed)
num = num * 6
num = num * 7
num = num * 8
num = num * 9
num = num * 10 / 5, five_count = 2 (not actually necessary to keep track of five_count since every factor of 5 will be removed)
Once you guarantee that the least significant digit is non-zero, you can use a simple math trick to calculate the last x digits of a*b:
You only need to multiply the last x digits from each number a, b to find the last x digits of the result.
For example, to find the last 2 digits of 123456 * 789012:
56 * 12 = 672
Last two digits are 72.
This last step isn't necessary for smaller factorials but is absolutely vital for larger numbers because there's the potential for a*b to overflow, especially for larger factorials.
The algorithm is O(1) space and ~O(n) runtime (possibly O(n log(n)), not sure about this).
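A minimal sketch of the approach described above (the method name and structure are mine, not the exact code behind the timings below): it pre-counts the factors of 5 with De Polignac's/Legendre's formula, cancels a matching number of 2s while multiplying, and keeps only the last five digits.

// Last five digits of n! after removing all trailing zeros.
static long lastFiveNoZeros(long n) {
    // De Polignac / Legendre: number of factors of 5 in n!, i.e. the trailing zeros to cancel.
    long fives = 0;
    for (long p = 5; p > 0 && p <= n; p *= 5) {
        fives += n / p;
    }
    long twosToCancel = fives;                       // cancel one factor of 2 per factor of 5
    long result = 1;
    for (long i = 2; i <= n; i++) {
        long v = i;
        while (v % 5 == 0) v /= 5;                   // strip every factor of 5
        while (twosToCancel > 0 && v % 2 == 0) {
            v /= 2;                                  // strip factors of 2, up to the limit
            twosToCancel--;
        }
        result = (result * (v % 100000)) % 100000;   // only the last five digits matter
    }
    return result;
}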
Some run statistics (not comparable with your times because we likely have different hardware):
10: 36288
100: 16864
1000: 53472
10000: 79008
100000: 62496
1000000: 12544
10000000: 94688
100000000: 54176
1000000000: 38144
Time taken: 21.404 s
Interestingly, there are some discrepancies between my answers and your answers, especially for larger factorials (possibly due to numerical errors, or my misunderstanding of the program requirements). I checked using Wolfram Alpha and it looks like my answers are correct.
There might be some optimization for figuring out how many factors of 5 there are, I'm not sure.
You could also try optimizing by using divide and conquer and parallelizing, just make sure you ensure your counts are computed for each division (if you're going to try multi-threading).
Ok, well, you can use De Polignac's formula to figure out how many factors of a prime factor there are in n!.
Could you also explain this a different way?
So first order of business is to eliminate all of these pairs. An easy way to do this is count up how many times n! is divisible by 5. Since this is always going to be smaller than the number of
times n! is divisible by 2, we know that the factor 5 will be the limiting value of the (2,5) pairs.
Then as you're multiplying through, if a value is divisible by 2/5 and you haven't reached the limiting value for that number, divide the value by the appropriate value.
For example, let's take 10!:
There are two 5 prime factors of the result (5, 10).
Then as we multiply through, removing factors of 2 up to the limit:
num = 1
num = num * 2 / 2, two_count = 1
num = num * 3
num = num * 4 / 2, two_count = 2
num = num * 5 / 5, five_count = 1 (not actually necessary to keep track of five_count since every factor of 5 will be removed)
num = num * 6
num = num * 7
num = num * 8
num = num * 9
num = num * 10 / 5, five_count = 2 (not actually necessary to keep track of five_count since every factor of 5 will be removed)
This is a project Euler problem, so there is a solution possible that takes less than a few minutes possible. I noticed that there are a few thousand final five values under 100000! that have
quite a few repetitions, so that says to me that there may be a way to predict when and where those repetitious values come up.
Hmm, after close examination of your code you're right they are very similar. The only difference is that you use De Polignac's Formula which is faster, but you failed to properly handle the
factorial computation part.
In either case, much slower than should be expected for a Project Euler problem. Which problem number is it?
Problem 160 - Project Euler
Problem 160. I've been working on it for a while. First attempt was to brute force the factorial using JScience's LargeInteger class and huge multi-threading. That took too long. Then I moved to
trying De Polignac's algorithm specifically, but brute forcing prime numbers took too long and a large enough sieve is impossible to do in Java. This is the only method that's come close to
what's needed. I then noticed that there were last-five-digit-combos that never came up, and a whole lot that came up quite often. That's probably the key to figuring out the solution, but I
don't know how to apply it.
Here's something I just thought of:
One of the key tricks we're taking advantage of is by only keeping track of the last 5 digits for multiplication. Even with 1 trillion factorial we're going to be repeatedly multiplying
essentially these same 5 digit numbers over and over again. You might have some luck with either a quick integer power function or look-up tables to quickly compute the repeated multiplications.
Dunno if it will work, but may be worth a shot.
|
{"url":"http://www.javaprogrammingforums.com/algorithms-recursion/24368-factorial-calculation-optimization.html","timestamp":"2014-04-18T22:08:35Z","content_type":null,"content_length":"84146","record_id":"<urn:uuid:df879bed-14f8-4c7a-beff-efdddd0ccb50>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Direction finding systems
A DF system using a circular array of three to eight antennas, analyses the received signals digitally by first calculating at (8) the Fourier transform of the received signals to obtain frequency
information and deriving at (10) from the Fourier transform the relative phases of the received signals at each of a number of spaced sample frequencies. This phase information is then fed to a stage
(12) which takes the spatial Fourier series of the phases from which the required bearing information is derived as π/2 minus the arctan of the ratio of the real and imaginary parts of the Fourier
series taken to suitable moduli. For a four antenna array the diameter of the circular array is constrained to be less than half of the wavelength at the highest frequency of interest. For a three
antenna array, the diameter is constrained to be less than one third the wavelength at the highest frequency of interest in order that the analysis should yield accurate and unambiguous bearing
Inventors: Stansfield; Edward V. (Reading, GB2)
Assignee: Racal Research Limited (Berkshire, GB2)
Appl. No.: 06/655,637
Filed: September 28, 1984
Current U.S. Class: 342/442 ; 342/444; 342/445
Current International Class: G01S 3/14 (20060101); G01S 3/48 (20060101); G01S 005/04 ()
Field of Search: 343/423,442,444,445,5SA,5DP,5FT,394,465
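One plausible reading of the abstract, as a rough simulation (this is not the patented implementation; the antenna count, spacing, and sign conventions are illustrative guesses): phases of a plane wave are generated at a four-element circular array, the first coefficient of their spatial Fourier series is formed, and the bearing is read off from its real and imaginary parts.

public class BearingSketch {
    public static void main(String[] args) {
        int n = 4;                                 // antennas on a circle
        double radiusOverLambda = 0.2;             // keeps the diameter under half a wavelength
        double trueBearing = Math.toRadians(70);   // simulated arrival direction

        // Relative phase at each antenna for a plane wave arriving from trueBearing.
        double[] phase = new double[n];
        for (int k = 0; k < n; k++) {
            double antennaAngle = 2 * Math.PI * k / n;
            phase[k] = 2 * Math.PI * radiusOverLambda * Math.cos(antennaAngle - trueBearing);
        }

        // First coefficient of the spatial Fourier series of the phases.
        double re = 0, im = 0;
        for (int k = 0; k < n; k++) {
            double antennaAngle = 2 * Math.PI * k / n;
            re += phase[k] * Math.cos(antennaAngle);
            im += phase[k] * Math.sin(antennaAngle);
        }

        double estimate = Math.atan2(im, re);      // equivalent to pi/2 - arctan(re/im), up to quadrant
        System.out.println("true bearing: " + Math.toDegrees(trueBearing)
                + "  estimate: " + Math.toDegrees(estimate));
    }
}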
|
{"url":"http://patents.com/us-4626859.html","timestamp":"2014-04-19T19:41:02Z","content_type":null,"content_length":"38264","record_id":"<urn:uuid:9a73d19c-b74a-449d-ac76-b6029f56e6e8>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bubbles and Math Olympiads
Predicting the geometric shapes of soap bubble clusters can lead to surprisingly difficult mathematical problems.
Frank Morgan of Williams College in Williamstown, Mass., recently illustrated such difficulties when he invited an audience of mathematicians, students, and others to vote on which one of a given
pair of different representations of the same number of clustered planar bubbles would have a smaller total perimeter. Assembled for a ceremony at the National Academy of Sciences in Washington,
D.C., to honor the 12 winners of the 2001 U.S.A. Mathematical Olympiad (USAMO), audience members were wrong as often as they were right.
"These are very tricky questions," Morgan says. "You often can't even come up with reasonable conjectures."
Even the case when two bubbles join to form a double bubble–a sight familiar to any soap-bubble aficionado–has posed problems for mathematicians. In this case, the two bubbles share a disk-shaped
wall, and this divider meets the individual bubbles' walls at an angle of 120 degrees. Mathematicians call this configuration the standard double bubble. If the bubbles are of equal size, the
interface is flat. If one bubble is larger than the other, the rounded surface of the boundary film bulges into the bigger bubble.
Soap bubbles naturally assume the standard double bubble configuration. However, this geometric structure isn't the only candidate for the most economical way of packaging a pair of volumes. For
instance, one bubble may ring the other–like an inner tube fitting snugly around a peanut's waist–to form a two-chambered torus bubble (see http://www.math.uiuc.edu/~jms/Images/double/).
Mathematicians have long known that the circle is the shortest way to enclose a given area and that the sphere is the most economical way to enclose a given volume. "The double bubble, however, was
long neglected: classical mathematics preferred smooth surfaces, not pieces of surfaces meeting at angles in unpredictable ways," Morgan notes. "Only with the advent of geometric measure theory in
the 1960s were mathematicians ready to think seriously about such problems."
The campaign to settle the double bubble question began in earnest in 1990 when Morgan suggested to a group of undergraduate students that they tackle the two-dimensional case. Joel Foisy and his
coworkers proved that the standard double bubble in two dimensions–that is, two circles squeezed up against each other–has the least possible perimeter. There are no bizarrely curved configurations
that do any better.
Going to three dimensions, Michael L. Hutchings, now at Stanford University, proved a conjecture that greatly restricted the possibilities for area-minimizing double bubbles, especially in the case
when the two volumes are equal. In effect, his result narrowed the candidates to the standard double bubble and the torus bubble.
Then, in 1995, Joel Hass of the University of California, Davis, and Roger Schlafly of Real Software in Soquel, Calif., proved that the standard double bubble represents the least surface area when
the two bubble volumes are equal. Such a two-chambered geometric structure triumphs over any other possible geometric form as the most efficient way of enclosing and separating two equal volumes of
Hass and Schlafly used a computer to check all the torus bubble configurations. They established criteria for comparing the surface areas of the various enclosures and wrote a program to conduct the
search. Normally, mathematicians don't use computers to obtain mathematical proofs, partly because computers generally make slight rounding-off errors when they do calculations. However, Hass and
Schlafly found a way of circumventing this deficiency. In the end, they showed that no torus bubble does better than the standard double bubble.
The year 2000 finally saw a proof of the double-bubble conjecture for the case where the two volumes are unequal. Morgan, Hutchings, and Manuel Ritoré and Antonio Ros of the University of Granada in
Spain developed an efficient, pencil-and-paper method for checking alternative configurations to establish whether they are unstable or fail to beat the standard double bubble as the most economical
"The main idea of the proof is to prove every competing double bubble unstable by rotating different pieces of the bubble at different rates around a carefully chosen axis in a way that preserves
volumes, but decreases the total area," Morgan says.
Meanwhile, during the summer of 1999, undergraduates Ben Reichardt, Yuan Lai, Cory Heilmann, and Anita Spielman extended the three-dimensional double bubble theorem to four-dimensional bubbles. Last
summer, Andrew Cotton and David Freeman proved the double bubble conjecture for equal volumes in hyperbolic and spherical space.
Despite such progress, many questions about soap bubble configurations remain unanswered. For example, no one has yet proved that the standard triple bubble in two dimensions is the least perimeter
way to enclose and separate three given areas, even for equal areas. The situation is even more unsettled for clusters made up of more than three bubbles, whatever the dimension.
Undergraduates have contributed significantly in the past to resolving questions about bubble configurations, Morgan noted. Perhaps someone among the 12 high school students who were present in the
audience as the winners of the 2001 U.S.A. Mathematical Olympiad may make further advances, he added. As an incentive, Morgan handed out "soap bubble research kits" (bottles of soap solution and
plastic wands for generating bubbles) to the students.
The 2001 USAMO top scorers were Reid Barton of Arlington, Mass., Gabriel Carroll of Oakland, Calif., and Tiankai Liu of Saratoga, Calif. The other winners were Daniel Kane of Madison, Wis., Oaz Nir
of Saratoga, Calif., Po-Ru Loh of Madison, Wis., Luke Gustafson of Breckenridge, Minn., Ian Le of Princeton, N.J., David Shin of West Orange, N.J., Stephen Guo of Cupertino, Calif., Ricky Liu of
Newton, Mass., and Gregory Price of Falls Church, Va.
Michael Hamburg of South Bend, Ind., received a special award for the most original correct solution to one of the six problems in this year's contest. Here's the problem: Each point in the plane is
assigned a real number such that, for any triangle, the number at the center of its inscribed circle is equal to the arithmetic mean of the three numbers at its vertices. Prove that all points in the
plane are assigned the same number. Hamburg's proof can be seen at http://www.claymath.org/awards/cmiolympiadscholar.htm.
The 12 USAMO finalists are now attending a 6-week summer program at Georgetown University to prepare for the International Mathematical Olympiad (IMO), which is being held in the United States for
the first time in 20 years. Organizers expect teams from more than 80 countries to come to Washington, D.C., to participate in the competition, starting on July 4.
|
{"url":"https://www.sciencenews.org/article/bubbles-and-math-olympiads-0","timestamp":"2014-04-17T11:58:14Z","content_type":null,"content_length":"78683","record_id":"<urn:uuid:fd0a35a6-9afe-4e09-864e-478e0a7dc014>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00002-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wolfram Demonstrations Project
Exterior Tangents to Three Circles
Two nonintersecting circles have one pair of common interior tangents and one pair of common exterior tangents. The interior tangents intersect at a point on the segment between the centers; the
exterior tangents intersect outside that segment when the circles have different radii (otherwise they are parallel to the line joining the centers).
For three circles, there are three pairs of exterior tangents, one for each pair of circles. The intersections of those three pairs of exterior tangents lie on a line.
(You can drag the three circles.)
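A quick way to convince yourself of the collinearity numerically is to use the fact that the two exterior tangents of a pair of circles with different radii meet at the pair's external center of similitude. The short Python sketch below is not part of the Demonstration; the three circles are an arbitrary example, and the external-center formula is the standard one.

import numpy as np

# three circles with distinct radii: (center, radius)
circles = [((0.0, 0.0), 1.0), ((5.0, 1.0), 2.0), ((2.0, 6.0), 0.5)]

def external_center(c1, r1, c2, r2):
    # the exterior tangents of two circles with different radii meet at this point
    c1, c2 = np.array(c1), np.array(c2)
    return (r2 * c1 - r1 * c2) / (r2 - r1)

p, q, r = (external_center(*circles[i], *circles[j]) for i, j in [(0, 1), (1, 2), (0, 2)])

# the three points are collinear iff this 2D cross product is (numerically) zero
print(np.cross(q - p, r - p))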
D. Wells, The Penguin Book of Curious and Interesting Puzzles, New York: Penguin, 1993, p. 150.
|
{"url":"http://demonstrations.wolfram.com/ExteriorTangentsToThreeCircles/","timestamp":"2014-04-19T07:11:38Z","content_type":null,"content_length":"41962","record_id":"<urn:uuid:444dcfd6-0fae-41f8-ab4d-d9211bf217b6>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00338-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chart In Focus
Correlations May Not Be What They Seem
August 06, 2010
A lot of what technical analysts do deals with correlation. The idea of a divergence has to do with two things which move together, and then one of them does something different. Diversification is
the idea of putting money into "non-correlated" asset classes. It is an important concept.
The simplest tool for quantifying correlation is known as Pearson's Correlation Coefficient. It expresses the quality of the correlation of two sets of data on a scale from -1.00 to +1.00. If you
took two sets of data and arrayed them on a "scatter plot" chart, the correlation coefficient would tell you how linear the dots are. But using correlation coefficients can get an analyst into
trouble if one only looks at the number, and with this week's chart I want to give you a great example of why this is a problem.
The top chart this week shows two sets of data which are perfectly inversely correlated. I know this because I calculated a sine wave in a spreadsheet, and then created a second column which was just
the same number with the plus or minus sign reversed. The correlation coefficient for these two sets of data is -1.00, meaning that it is a perfectly inverse correlation.
But if we add an upward slope to these two same series of data, then everything changes. The second chart shows the same two sine waves used in the first chart, but I have added an upward slope.
Now, the correlation coefficient flips around to +0.96, meaning that they have a nearly perfect positive correlation. Even though the two are still moving in opposition to each other, the calculation
implies that they are positively correlated.
This happens because of a principle known as "autocorrelation", which is a word that has different meanings when used in different fields of statistical study. Here, I am using it to refer to the
effect of increasing the correlation coefficient of two inverse sets of data by sloping them upward. The same effect could be achieved by sloping them downward together.
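The same effect is easy to reproduce in a few lines of Python/NumPy. This is only an illustrative sketch, not the spreadsheet used for the charts above, and the slope value is arbitrary, so the second coefficient will be close to +1 rather than exactly the +0.96 shown in the chart.

import numpy as np

t = np.arange(500)
a = np.sin(2 * np.pi * t / 50)
b = -a                                   # the same wave with the sign reversed

print(np.corrcoef(a, b)[0, 1])           # essentially -1: a perfectly inverse correlation

trend = 0.05 * t                         # an arbitrary upward slope added to both series
print(np.corrcoef(a + trend, b + trend)[0, 1])   # close to +1, despite the opposite wiggles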
And here is a great example of why this principle is important for technicians. Below is a price pattern analog that has been watched for years by a lot of analysts. I wrote about this recently in a
prior Chart In Focus article. It involves a comparison of the current price plot of the Nasdaq Composite against the history of Japan's Nikkei 225 Index. The point of the alignment is the Internet
bubble top in 2000 versus the Nikkei's peak in 1989.
The overall correlation coefficient for the 12 year history shown here is +0.64, which is a pretty strong positive correlation. But when we look more closely at the two plots, we can find significant
periods when correlation seems to go away. For the first third of the chart, the correlation was much stronger, at +0.85. But in the middle of this study period, it drops to approximately zero.
Within that middle period, there appear to be price movements which are almost perfectly inverse.
In the final third of the chart, the positive correlation seems to be reestablished, with a coefficient of +0.71.
The key point to take from this is that correlation can be a somewhat variable attribute, and the fact that a correlation seems to exist over a long period does not necessarily mean anything about
what happens to the data for shorter periods within that long period.
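One practical way to see this in your own data is to compute the correlation over a rolling window rather than over the whole history. The pandas sketch below uses synthetic data standing in for the two indexes; the window length and the trend slope are placeholder choices, so treat the printed values as illustrative only.

import numpy as np
import pandas as pd

t = np.arange(600)
a = np.sin(2 * np.pi * t / 50) + 0.02 * t     # two trending series that wiggle in opposition
b = -np.sin(2 * np.pi * t / 50) + 0.02 * t
df = pd.DataFrame({'a': a, 'b': b})

print(df['a'].corr(df['b']))                              # full-history correlation: strongly positive
print(df['a'].rolling(window=40).corr(df['b']).tail())    # short-window correlation: typically negative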
It may not be apparent from a visual inspection of this chart, but the two patterns right now are showing signs of inversion once again. For the past 2 months ending August 6, the correlation
coefficient is -0.21. That is not a very good track record on which to base daily trading decisions.
So be careful next time someone offers you a correlation coefficient in order to prove a point. Like most statistics, correlation coefficients can often show you things that are not really there.
Tom McClellan
Editor, The McClellan Market Report
Related Charts
Jul 30, 2010: Not the Great Depression, But an Interesting Facsimile
May 28, 2010: Nikkei-Nasdaq Analog Update
Oct 23, 2009: An Interesting Divergence
|
{"url":"http://www.mcoscillator.com/learning_center/weekly_chart/correlations_may_not_be_what_they_seem/","timestamp":"2014-04-20T13:18:47Z","content_type":null,"content_length":"20209","record_id":"<urn:uuid:08221d2b-7595-4fbd-bca3-1ebdeb130116>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00137-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sum of reciprocals of squarefree numbers
June 15th 2009, 09:55 PM #1
Sum of reciprocals of squarefree numbers
This one is for fun. I'll give my solution later.
Let $S$ be the set of squarefree positive integers. Show, as simply as you can, that $\sum_{s \in S} \frac{1}{s} = \infty$.
Of course an instant solution is given by the fact that the sum of the reciprocals of the primes diverges. However try using another way for fun!
Anyways here is my solution.
(1) every integer can be uniquely represented by the product of a square and a square-free number.
(2) $\sum_{j=1}^\infty\frac{1}{j^2}$ converges. This is easy to show and we don't need to know what it converges to.
But by (1) we have that
$\Big(\sum_{j=1}^\infty\frac{1}{j^2}\Big)\Big(\sum_{s\in S} \frac{1}{s}\Big)$
is the harmonic series; hence $\sum_{s\in S} \frac{1}{s}$ diverges.
This can be generalized as follows. Suppose we have $n$ sets of integers, $S_1,...,S_n$, such that each integer can be uniquely represented as a product $s_1...s_n$. Then one of $\sum_{s \in S_j}
\frac{1}{s}$ diverges.
This is quite interesting because if we have an infinite family of sets $S_0,S_1,...$ instead of a finite one then the above does not necessarily hold anymore! For instance if we take the nth set
to be the set of powers of the nth prime (so that we have the usual representation as a product of prime powers), then all of the sums converge (they're just geometric series).
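A quick numerical illustration of the divergence (this snippet is an addition, not part of the original post; it uses brute-force trial division, and the comparison column is the standard asymptotic for this sum):

from math import isqrt, log, pi

def squarefree(n):
    return all(n % (p * p) for p in range(2, isqrt(n) + 1))

for N in (10**3, 10**4, 10**5):
    partial = sum(1.0 / s for s in range(1, N + 1) if squarefree(s))
    # the partial sums keep growing, roughly like (6/pi^2)*log(N) plus a constant
    print(N, partial, 6 / pi**2 * log(N))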
Last edited by Bruno J.; June 20th 2009 at 11:30 AM.
|
{"url":"http://mathhelpforum.com/number-theory/92983-sum-reciprocals-squarefree-numbers.html","timestamp":"2014-04-19T03:15:13Z","content_type":null,"content_length":"40408","record_id":"<urn:uuid:328ede97-1fba-45c1-b4db-fdd06fa04afc>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: Jiří Adámek and Lurdes Sousa
Dedicated to Horst Herrlich with a wish of a nice start into the second half of a good life span
For each concrete category (K; U) an extension LIM(K; U) is constructed, and under certain "smallness conditions" it is proved that LIM(K; U) is a solid hull of (K; U), i.e., the least finally dense solid extension of (K; U). A full subcategory of Top² is presented which does not have a solid hull.
AMS Subj. Class.: 18A22, 18A30, 18A40, 18B30, 18E15, 54B30.
Key words: Concrete category, MacNeille completion, solid hull, limit closure.
It has been clear from the early development of the theory of concrete categories that the concept of solid category, introduced by V. Trnková [21] and R.-E. Hoffmann [12], is of major impact because all "reasonable" concrete categories encountered in algebra and general topology are solid, and yet, solidness has a number of important consequences. In one respect, however, solidness is far less satisfactory than other properties (e.g. topological, mono-topological, or cartesian closed topological): it is the question of solid hulls, i.e., the smallest solid finally dense extension of a given concrete category. It was Horst Herrlich who started in [9] and [10] a systematic study of various hulls of concrete categories. Let us recall that in the above three mentioned cases of topological hull, mono-topological hull and CCT hull a general construction assigning to a concrete category (K; U) an extension of it is known, and the basic result is: if
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/133/3893700.html","timestamp":"2014-04-18T19:16:11Z","content_type":null,"content_length":"8798","record_id":"<urn:uuid:51c55928-91e4-4802-a8dc-227f79306144>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
|
RE: pages with MathML
From: <juanrgonzaleza@canonicalscience.com> Date: Fri, 14 Apr 2006 08:27:19 -0700 (PDT) Message-ID: <3309.217.124.69.209.1145028439.squirrel@webmail.canonicalscience.com> To: <www-math@w3.org>
Romeo Anghelache,
It is rather surprising that one can claim that HERMES is generating
semantic content, when articles generated from HERMES look like
--------------------- REAL CODE
<p class="abstract">
<span class="fn"> </span><span class="fb">Abstract </span><span
class="fn">We review the present status of black hole thermodynamics. Our
review includes discussion of classical black hole thermodynamics, Hawking
radiation from black holes, the generalized second law, and the issue
of entropy bounds. A brief survey also is given of approaches to the
calculation of black hole entropy. We conclude with a discussion of
some unresolved open issues. </span>
Is the use of empty paragraphs for simulating layouts, headings of level 3
for encoding dates, and other points what you mean by “semantic”?
Do you name “semantic” the next encoding generated by HERMES
<h3><a href="http://surubi.fis.uncor.edu/reula">Oscar A. Reula</a></h3>?
Uff! Author encoded as heading of the document!
Moreover, the mathematical code present in the articles generated by
Hermes does not verify accessibility, the structure is far from good, and
several equations are rendered via “tricks”.
For example, in “Hyperbolic methods for Einstein’s Equations”
one reads (before equation 2):
\epsilon _{abcd} is the Levi-Civita tensor corresponding to the physical
The underlying math is not encoded via tensors but
<math xmlns="http://www.w3.org/1998/Math/MathML">
<span class="fi"> </span><span class="fn">is the Levi-Civita tensor
corresponding to the physical metric, </span>
Sorry, but I cannot call that "good code", because the Tensor is being
rendered via a ***visual*** forcing of subscripts instead via multiscript
tag of MathML 2.0
And what about the redundancy of MathML ½ in equation 2? and what about
the "terrorific" code of equation 3?
Do you name “semantic” content to encoding of “integral on s” like
(equation 10 of [http://hermes.aei.mpg.de/1998/1/article.xhtml])
Do you consider correct the l_Planck of equation (24)? Do you know for
what was <mtext> designed?
And what about the equation (25) of
The Gamma *there* is a tensor, but is encoded as subscript ab and
superscript j with several redundant mrows.
Is that what you call good semantic content?
And what about the metric equation just after the section 2.1? This is one
of my favourites: accessibility, structure, "semantics", encoding, and
rendering are all wrong.
One finds a line element ds^2. If my math is correct, ds^2 = (ds)^2, but the
code appearing in the journal article generated via HERMES is
That is, d{s}^2 (or 2s ds), which is VERY different from the (ds)^2 that is
supposed to be encoded via your "semantic" approach.
and all that even ignoring that one would type the differential using the
MathML entity instead of identifier "d".
Really do I need to continue writing samples of incorrect output you are
serving to the world?
I wrote to you in the past, because I was criticizing the HERMES approach and I
consider that when one is criticized, one should be informed so that one can
reply if one considers it needed.
That is also the reason I said that about NAG and the New York Journal of
Mathematics recently: I consider that people should have the opportunity to
read what I am writing and reply if they consider it needed.
About the “canonical science site” I already said on many occasions that the site
was experimental and very wrong in many points.
Juan R.
Center for CANONICAL |SCIENCE)
Received on Friday, 14 April 2006 15:27:32 GMT
This archive was generated by hypermail 2.2.0+W3C-0.50 : Saturday, 20 February 2010 06:12:58 GMT
|
{"url":"http://lists.w3.org/Archives/Public/www-math/2006Apr/0033.html","timestamp":"2014-04-17T13:19:51Z","content_type":null,"content_length":"12771","record_id":"<urn:uuid:4b329466-c0d3-4ee4-9672-8db46205bd12>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Euclidean Approach to the FTC - Gregory's Proof of the FTC
This Euclidean definition of the tangent is pivotal to Gregory's contradiction. For if KC is not tangent to OEK, it means that some point a on KC must lie above the curve OEK. In particular, if DF is
the line segment perpendicular to OA which intersects OA at D, OEK at E, KC at a, and OFL at F, then DE < Da by choice of a. This is shown in the figure below (adapted from Gregory's proof-see the
Appendix ).
Actually, the figure tells only half of the story. It's also possible that a could lie on the other side of K on the line CK. However, the argument for that case is similar, and the details make a
good exercise (or see the Appendix ).
By the definition of the curve OEK, if area(OFD) denotes the area enclosed by the curve OF and the line segments OD and DF, then DE = area(OFD). Likewise, IK = area(OFLI). Thus, IK/DE = area(OFLI)/
area(OFD). Since DE < Da, it follows that IK/Da < IK/DE and so
IK/Da < area(OFLI)/area(OFD).
Notice that C, a, and K are collinear and that Da and IK are both perpendicular to OA. Thus, the triangles CDa and CIK are similar. So by corresponding sides, Da/IK = DC/IC, and hence IK/Da = IC/DC. Now multiply the numerator and denominator in IC/DC by IL to obtain (IC·IL)/(DC·IL). It follows that
(IC·IL)/(DC·IL) < area(OFLI)/area(OFD).
This last step may seem mysterious, but it serves the purpose of converting the original inequality into an inequality about areas which will lead Gregory to an obvious geometrical contradiction.
Specifically, DC·IL is the area of the rectangle with sides IL and DC, and IL·IC is the area of the rectangle with sides IL and IC. Inverting the ratios in the last inequality yields:
area(OFD)/area(OFLI) < (DC·IL)/(IC·IL).
Now we can make some observations about these areas to arrive at a contradiction: Note that area(OFD) = area(OFLI) - area(DFLI) and likewise, since DC = IC - DI, DC·IL = IC·IL - DI·IL. Substituting these expressions into the numerators of the last inequality and doing some algebra, we have
(area(OFLI) - area(DFLI))/area(OFLI) < (IC·IL - DI·IL)/(IC·IL)
⇔ 1 - area(DFLI)/area(OFLI) < 1 - (DI·IL)/(IC·IL)
⇔ (DI·IL)/(IC·IL) < area(DFLI)/area(OFLI).
But recall the defining equality for C, namely IK/IC = IL. This implies that IK = IC·IL. Since IK = area(OFLI) by definition of the curve OEK, the denominators in the last inequality are the same.
Therefore, the numerators must satisfy DI·IL < area(DFLI). But remember that OFL is increasing by assumption. Thus, the rectangle with sides IL and DI must circumscribe the region DFLI.
Hence area(DFLI) < DI ·IL as well. This contradiction shows that KC must in fact be tangent to OEK. Hence the fundamental theorem of calculus follows.
|
{"url":"http://www.maa.org/publications/periodicals/convergence/a-euclidean-approach-to-the-ftc-gregorys-proof-of-the-ftc?device=mobile","timestamp":"2014-04-23T19:00:26Z","content_type":null,"content_length":"29327","record_id":"<urn:uuid:4e6a6279-881d-4d53-8a89-35c417141135>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00313-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Many Markov components
In my recent preprint, I showed that once $r \ge c / n$ the Metropolis Markov chain on configurations of hard discs is no longer ergodic. In particular, even for reasonably small radius there are
stable configurations of discs. In this note the goal is to make a more quantitative statement about the number of Markov components, in particular when $r > c_2 / \sqrt{n}$ (which is the usual range of interest in statistical mechanics).
For the main construction, start with the following configuration of $24$ discs in a square. (It is not relevant to the following discussion, but it is fun to point out that this is the densest
packing of $24$ equal sized discs in a square.)
This configuration of $24$ discs is easily seen to be stable (meaning that each disc is individually held in place by its neighboring discs and the boundary of the square), and in fact it is still
stable if we delete a carefully chosen set of four discs as follows.
By stacking $k^2$ of these together one has stable configurations with $20 k^2$ discs.
Finally one can add discs in any of the “holes” in these kinds of configurations, leading to more and more possibilities for stable configurations. For example, here is one of the ${36 \choose 18} \approx 9.08 \times 10^9$ possibilities for dropping discs in $18$ of the $36$ holes in the previous illustration.
By continuing constructions like this for larger and larger $k$ we observe two facts.
(1) As $n \to \infty$ the number of components can grow quite fast. To make this quantitative we expand on the example. Suppose there are $20 k^2$ discs to start out, then there are $4 k^2$ holes.
Suppose we drop $2 k^2$ discs into those holes. Then the number of discs $n = 22 k^2$ in total, and then the number of Markov components is at least ${4 k ^2 \choose 2 k^2 } = { 2n/11 \choose n/11 }$
. (This is for unlabeled discs. For labeled discs, multiply this by $n!$.) By Stirling’s formula, this number grows exponentially fast in $n$ (something like $1.134^n$).
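As a quick numeric check of that rate (a side computation added here, not part of the preprint): with $m = n/11$, ${2m \choose m}^{1/n}$ tends to $2^{2/11} \approx 1.134$, which a few lines of Python confirm.

from math import comb, exp, log

for k in (5, 20, 50):
    n = 22 * k * k                          # total number of discs in the construction
    count = comb(4 * k * k, 2 * k * k)      # lower bound on the number of components
    print(n, exp(log(count) / n), 2 ** (2 / 11))   # the middle column approaches ~1.1343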
In this particular example the discs have total area $\pi r^2 n \approx 0.67$, but that could obviously be adjusted by changing the ratio of extra discs to holes.
(2) For every $\lambda \in (0.60, 0.75)$ there is an $\alpha = \alpha(\lambda) > 1$ and a sequence of configurations $C_1, C_2, \ldots$ with $C_i \in$ Config(i, r(i)), the number of Markov components
greater than $\alpha ^ i$ for sufficiently large $i$, and $\pi r^2 n \to \lambda$ as $i \to \infty$.
It turns out that what are described here are only separate components in the sense that a Markov chain that only moves one disc at a time can not mix from one state to the other. (The four discs in
the center of a square can rotate about the center of the square!) But a small modification of this construction would seem to give exponentially many components in the sense of $\pi_0$, each with
small positive measure.
Is exponentially many components the maximal possible? The Milnor-Thom Theorem gives an upper bound much larger than this, something like $O(e^{c n^2})$ for some $c > 0$. It would also be very
interesting to know any bounds on the topological complexity of these path components.
|
{"url":"http://matthewkahle.wordpress.com/2010/02/22/many-markov-components/?like=1&source=post_flair&_wpnonce=7ebcc2514d","timestamp":"2014-04-21T04:31:26Z","content_type":null,"content_length":"54074","record_id":"<urn:uuid:1362afd9-7aa2-4d98-aa36-18b059d7dcf7>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Gibbs sampling
Gibbs sampling is a special case of the Metropolis-Hastings algorithm. From a given state of the variables it randomly moves to another state, preserving the desired equilibrium distribution. The
simplest version is single-site Gibbs sampling, where each variable is visited in turn and sampled from its conditional distribution given the current state of its neighbors. Variables may be visited
in any order and some variables may be sampled more than others, as long as each variable gets sampled sufficiently often.
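To make the idea concrete, here is a minimal single-site Gibbs sampler written as a generic Python sketch. It is not Infer.NET code; the target is a toy bivariate normal with correlation rho, chosen only because its conditional distributions are easy to write down, and the burn-in and thinning values are arbitrary.

import math, random

rho = 0.8                          # correlation of the toy target distribution
sigma = math.sqrt(1 - rho * rho)   # conditional standard deviation
burn_in, thin, kept = 100, 10, 2000

x, y, samples = 0.0, 0.0, []
for it in range(burn_in + thin * kept):
    # visit each variable in turn, sampling from its conditional given the other
    x = random.gauss(rho * y, sigma)
    y = random.gauss(rho * x, sigma)
    if it >= burn_in and (it - burn_in) % thin == 0:
        samples.append((x, y))

# the empirical correlation of the retained samples should be close to rho
n = len(samples)
mx = sum(s[0] for s in samples) / n
my = sum(s[1] for s in samples) / n
num = sum((s[0] - mx) * (s[1] - my) for s in samples)
den = math.sqrt(sum((s[0] - mx) ** 2 for s in samples) * sum((s[1] - my) ** 2 for s in samples))
print(num / den)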
A more efficient version of Gibbs sampling, especially in the case of deterministic constraints, is the block Gibbs sampler. Here the variables are grouped into acyclic blocks, each block is visited
in turn, and the entire block is sampled from its conditional distribution given the state of its neighbors. This procedure is valid even if blocks overlap; variables in multiple blocks will simply
be sampled more often. Typically you will not have to specify the blocking for Gibbs sampling as default blocking is automatically based on the deterministic factors and constraints in your graphical
model. However, you do have the option of explicitly specifying blocks using the Group method on the inference engine.
The Gibbs sampling algorithm object has two configurable properties:
• BurnIn: number of samples to discard at the beginning
• Thin: reduction factor when constructing sample and conditional lists in order to avoid correlated samples
Gibbs sampling typically requires many more iterations than EP or VMP (each iteration is one sample).
// Use Gibbs sampling
GibbsSampling gs = new GibbsSampling();
gs.BurnIn = 100;
gs.Thin = 10;
InferenceEngine ie = new InferenceEngine(gs);
ie.NumberOfIterations = 2000;
When choosing an inference algorithm, you may wish to consider the following properties of Gibbs sampling:
• Gibbs sampling will eventually converge to the correct solution, but may take a long while to get there.
• Speed of convergence may be helped by careful initialisation.
• Gibbs sampling only supports conjugate factors - i.e. marginal posteriors and conditional distributions over any variable in the factor have the same parametric form as the prior for that variable.
• Non-conjugate factors require more sophisticated Monte Carlo methods which are not currently supported by Infer.NET.
• Gibbs sampling allows you to query for (a) posterior marginals, (b) a list of conditional distributions, and/or (c) a list of samples. See QueryType in Running inference.
• Gibbs sampling does not support variables defined within a gate, but does support mixture models which only have factors within a stochastic If, Case, or Switch statement.
• Gibbs sampling is stochastic - it will give different samples depending on the random seed for different initialisations. You can read about how to change message initialisation
|
{"url":"http://research.microsoft.com/en-us/um/cambridge/projects/infernet/docs/Gibbs%20sampling.aspx","timestamp":"2014-04-20T21:09:49Z","content_type":null,"content_length":"15913","record_id":"<urn:uuid:00594452-fe29-4b61-8b68-7366be59f255>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
|
'The pea and the sun'
Issue 36
September 2005
The Pea and the sun: a mathematical paradox
by Leonard M. Wapner
The topic of this book - the Banach-Tarski Paradox - is a result so strange and counterintuitive that the author says he didn't believe it when he first saw it. The "paradox" - in fact an impeccable
mathematical theorem - says that a small sphere, for example a pea, can be cut into as few as five pieces which can then be reassembled so as to make a far bigger sphere, for example the sun.
The result seems to suggest that mathematicians have discovered what alchemists never did: a way to make something out of nothing. In fact, the secret lies in the strange dissection used - the "pea"
is cut into five pieces according to a method that could never in real life be implemented. The pieces are so-called "non-measurable" sets, which means just what it sounds like: sets that cannot be
assigned any size (or "measure") at all without causing contradiction. By their nature, non-measurable sets cannot be constructed via explicit geometric steps, and, as the author says: "Some might
find it disagreeable that such sets exist, but that is mathematics!"
The author gives a fascinating account, in a journalistic style, of the history of the Banach-Tarski Theorem, devoting a chapter to the cast of characters, including Georg Cantor, Kurt Gödel, Paul
Cohen, and of course Stefan Banach and Alfred Tarski. Chapter 3, on the different sorts of paradoxes, is particularly entertaining. Some of the paradoxes described are fallacies where the error is
hard to spot, but others, like the Banach-Tarski paradox and many other counterintuitive results that arise from the seemingly innocuous "Axiom of choice", are correct but seem absurd. I particularly
liked "Hotel Infinity", a song written by Lawrence M. Lesser that is reproduced in full here, which sets Hilbert's "Hotel Paradox" to the tune of "Hotel California"!
Wapner quotes Mark Twain as saying: "Truth is stranger than fiction, but it is because fiction is obliged to stick to possibilities; truth isn't," and it is certainly true that the development of
this area of mathematics - measure theory - has been a tale of astonishment and disbelief. Georg Cantor, who proved many of the early "paradoxical" results, wrote to a friend of one of his results
(that there is a one-to-one correspondence between the points on a line segment and the interior points of a square): "I see it, but I don't believe it." And in an echo of the famous attempt in 1897
by the Indiana House of Representatives to legislate for a rational value of Pi, a citizen once demanded of the Illinois legislature that it outlaw the teaching of the Banach-Tarski Theorem in
Illinois schools.
This is, however, not just a book about maths - it is a book of maths. The reader must be willing to take the time to understand deductive steps and constructions, and must not be frightened off by
theorems, proofs and professional mathematical language such as "without loss of generality" (which, as a general rule, we try to avoid on Plus). The Banach-Tarski Theorem itself is proved in full.
What is presented in this book is maths for its own sake: beautiful, elegant, artistic, astonishing. I doubt whether it would appeal particularly to those who think of maths as a "useful tool" -
although it might open their minds! But it would surely make a great present for a budding pure mathematician - and what a present it would be, to give someone their first inkling of the wonders that
lie at the heart of pure mathematics.
Book details:
The pea and the sun: a mathematical paradox
Leonard M Wapner
hardback - 232 pages (2005)
A K Peters Ltd
ISBN: 1568812132
Helen Joyce is a past editor of Plus.
|
{"url":"http://plus.maths.org/content/pea-and-sun","timestamp":"2014-04-18T05:51:47Z","content_type":null,"content_length":"27086","record_id":"<urn:uuid:156f2483-2a94-44d8-a773-086d88780d30>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Course Descriptions
MATH 5103
Linear Algebra II
Prerequisite: MATH 4003 or consent of the department of mathematics.
A continuation of MATH 4003 with emphasis on abstract vector spaces, inner product spaces, linear transformations, kernel and range, and applications of linear algebra.
Note: MATH 5103 may not be taken for credit after completion of MATH 4103 or equivalent.
MATH 5153
Applied Statistics II
Prerequisite: MATH 3153.
This course is a continuation of Math 3153 with emphasis on experimental design, analysis of variance, and multiple regression analysis. Students will be required to design and carry out an
experiment, use a current statistical software package to analyze the data, and make inferences based upon the analysis.
Note: Math 5153 may not be taken for credit after completion of Math 4153 or equivalent.
MATH 5173
Advanced Biostatistics
Prerequisite: An introductory statistics course or permission of instructor.
This course will include analysis of variance, one factor experiments, experimental design with two or more factors, linear and multiple regression analysis, and categorical data analysis.
MATH 5243
Differential Equations II
Prerequisites: MATH 3243 and MATH 4003 or consent of the instructor.
A continuation of MATH 3243 with emphasis on higher order and systems of differential equations.
MATH 5273
Complex Variables
Prerequisite: MATH 2943.
An introduction to complex variables. This course will emphasize the subject matter and skills needed for applications of complex variables in science, engineering, and mathematics. Topics will
include complex numbers, analytic functions, elementary functions of a complex variable, mapping by elementary functions, integrals, series, residues and poles, and conformal mapping.
Note: May not be taken for credit after the completion of MATH 4273 or equivalent.
MATH 5343
Introduction to Partial Differential Equations
Prerequisites: MATH 2934 and MATH 3243.
This course is an introduction to partial differential equations with emphasis on applications to physical science and engineering. Analysis covers the equations of heat, wave, diffusion, Laplace,
Dirichlet and Neumann equations. Course is suitable for senior level or first year graduate students in Mathematics, Physics, and Engineering.
MATH 6213
Methods in Teaching Middle School Mathematics
Prerequisite: Permission of instructor.
The course is an exploration of inductive teaching models, techniques, strategies, and research for teaching mathematics in the middle school. Emphasis will be placed on constructivist learning.
MATH 6323
Methods in Teaching Secondary Mathematics
Prerequisite: Permission of the instructor.
The course is a study of materials, methods, and strategies for teaching mathematics in the secondary school. Emphasis will be placed on activity-based learning.
MATH 6881, 6882, 6883
Prerequisite: Permission of instructor.
The workshop will require the equivalency of fifteen clock hours of instruction per credit hour.
MATH 6891, 6892, 6893, 6894
Independent Study
Open to graduate students who wish to pursue individual study or investigation of some facet of knowledge which complements the purpose of the University's graduate program. Students will be required
to plan their studies and prepare formal written reports of their findings.
Note: The selected topic may not constitute any duplication of study leading to the accomplishment of a thesis.
MATH 6991
Project or Thesis Research Continuation
This course allows students additional time to research and compose their capstone project/portfolio.
|
{"url":"http://www.atu.edu/academics/catalog-graduate/archive/2012/descriptions/course_desc_page.php?pre=MATH","timestamp":"2014-04-19T14:36:33Z","content_type":null,"content_length":"14496","record_id":"<urn:uuid:533050f1-5eb4-4a25-ae48-9e2ef929faa7>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Perfect Reconstruction Two-channel Filter Bank
Often in digital signal processing the need arises to decompose signals into low and high frequency bands, which afterwards need to be recombined to reconstruct the original signal. Such an example is found
in subband coding (SBC). This demo shows an example of perfect reconstruction of a two-channel filter bank, also known as the Quadrature Mirror Filter (QMF) Bank since it uses power complementary
filters. We will simulate our perfect reconstruction process by filtering a signal made up of Kronecker deltas. Plots of the input, output, and error signal will be provided, as well as the magnitude
spectra of the signals. The mean-square error will also be computed to measure the effectiveness of the perfect reconstruction filter bank.
Perfect Reconstruction
Perfect reconstruction is a process by which a signal is completely recovered after being separated into its low frequencies and high frequencies. Below is a block diagram of a perfect reconstruction
process which uses ideal filters. The perfect reconstruction process requires four filters, two lowpass filters (H0 and G0) and two highpass filters (H1 and G1). In addition, it requires a
downsampler and upsampler between the two lowpass and between the two highpass filters. Note that we have to account for the fact that our output filters need to have a gain of two to compensate for
the preceding upsampler.
Perfect Reconstruction Two-Channel Filter Bank
The DSP System Toolbox™ provides a specialized function, called FIRPR2CHFB, to design the four filters required to implement an FIR perfect reconstruction two-channel filter bank as described above.
FIRPR2CHFB designs the four FIR filters for the analysis (H0 and H1) and synthesis (G0 and G1) sections of a two-channel perfect reconstruction filter bank. The design corresponds to so-called
orthogonal filter banks also known as power-symmetric filter banks, which are required in order to achieve the perfect reconstruction.
Let's design a filter bank with filters of order 99 and passband edges of the lowpass and highpass filters of 0.45 and 0.55, respectively:
N = 99;
[H0,H1,G0,G1] = firpr2chfb(N,.45);
Note that the analysis path consists of a filter followed by a downsampler, which is a decimator, and the synthesis path consists of an upsampler followed by a filter, which is an interpolator. So,
we can use the multirate filter objects available in the DSP System Toolbox to implement our analysis and synthesis filter bank by using a decimator followed by an interpolator, respectively.
% Analysis filters (decimators).
Hlp = mfilt.firdecim(2,H0);
Hhp = mfilt.firdecim(2,H1);
% Synthesis filters (interpolators).
Glp = mfilt.firinterp(2,G0);
Ghp = mfilt.firinterp(2,G1);
Looking at the first lowpass filter we can see that it meets our 0.45 cutoff specification.
hfv = fvtool(Hlp);
legend(hfv,'Hlp Lowpass Decimator');
set(hfv, 'Color', [1 1 1])
Let's look at all four filters.
set(hfv, 'Filters', [Hlp,Hhp,Glp,Ghp]);
legend(hfv,'Hlp Lowpass Decimator','Hhp Highpass Decimator',...
'Glp Lowpass Interpolator','Ghp Highpass Interpolator');
For the sake of the demo let p[n] denote the three-sample pulse p[n] = delta[n] + delta[n-1] + delta[n-2],
and let the signal x[n] be defined by x[n] = p[n] + 2p[n-7] + 3p[n-15] + 4p[n-23] + 3p[n-31] + 2p[n-39] + p[n-47], as constructed in the code below.
NOTE: Since MATLAB® uses one-based indexing, delta[n]=1 when n=1.
x = zeros(512,1);
x(1:3) = 1; x(8:10) = 2; x(16:18) = 3; x(24:26) = 4;
x(32:34) = 3; x(40:42) = 2; x(48:50) = 1;
set(gcf, 'Color', [1 1 1])
Now let's compute the signal's magnitude spectra using a periodogram spectrum object and plot it.
h = spectrum.periodogram;
hopts = psdopts(h); hopts.CenterDc=true; hopts.NormalizedFrequency=false;
hpsdx = psd(h,x,hopts);
set(gcf, 'Color', [1 1 1])
Simulation of Perfect Reconstruction
Using MATLAB's FILTER command with the multirate filters designed above we will implement the perfect reconstruction two-channel filter bank and filter the signal x[n] defined above.
% Lowpass frequency band.
x0 = filter(Hlp,x); % Analysis filter bank output
x0 = filter(Glp,x0); % Synthesis filter bank output
% High frequency band.
x1 = filter(Hhp,x); % Analysis filter bank output
x1 = filter(Ghp,x1); % Synthesis filter bank output
xtilde = x0+x1;
Perfect Reconstruction Output Analysis
We can see from the plot of xtilde[n] below that our perfect reconstruction two-channel filter bank completely reconstructed our original signal x[n].
set(gcf, 'Color', [1 1 1])
Another way to verify that we achieved perfect reconstruction is by plotting the error defined by the difference between the original signal x[n] and the output of the two-channel filter bank xtilde
[n], i.e., e[n] = xtilde[n]-x[n]. We see that the error is very small.
% Delay x[n] so that it aligns with the filtered output, xtilde[n], which
% was delayed due to the filtering (the overall analysis/synthesis delay is assumed to be N samples).
xshifted = [zeros(N,1); x(1:end-N)];
e = xtilde-xshifted;
stemplot(e,'Error e[n]')
set(gcf, 'Color', [1 1 1])
We can also verify that we achieved perfect reconstruction by comparing the magnitude spectra of x[n] with that of xtilde[n] by overlaying both spectras. We will do this by creating a DSP data object
(dspdata) with the spectra from both PSD objects created above, and then plot it.
hpsdxtilde = psd(h,xtilde,hopts);
hpsd = dspdata.psd([hpsdx.Data,hpsdxtilde.Data]);
legend('PSD of x[n]','PSD of xtilde[n]')
set(gcf, 'Color', [1 1 1])
As we can see from time plot of our output signal as well as its spectral content, our perfect reconstruction two-channel filter bank did an excellent job. Moreover, the mean-square error (MSE) is
mse = sum(abs(e).^2)/length(e)
mse =
See also FIR Halfband Filter Design.
Listing of helper functions used above.
type stemplot.m
function stemplot(x,varname)
%STEMPLOT Plots the signal specified using the STEM function.
% Copyright 1999-2004 The MathWorks, Inc.
stem(x);   % draw the stem plot (restored line; assumed from the function's help text)
title(['Signal ',varname]);
xlabel('Samples (n)');
% Zoom-in to samples of interest.
ylim = get(gca,'ylim');
idx = find(abs(x)>=0.001);
if ~isempty(idx),
axis([idx(1) idx(end) ylim]);
end
% [EOF]
|
{"url":"http://www.mathworks.de/products/dsp-system/code-examples.html?file=/products/demos/shipping/dsp/pr2chfilterbankdemo.html&nocookie=true","timestamp":"2014-04-25T00:08:54Z","content_type":null,"content_length":"31477","record_id":"<urn:uuid:17eb1c7a-3a68-452b-be13-56657e61d784>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
|
st: AW: AW: AW: xtline - How can I control add a note and a caption
st: AW: AW: AW: xtline - How can I control add a note and a caption
From "Alexander Brunner" <alexander.brunner@uni-wuerzburg.de>
To <statalist@hsphsun2.harvard.edu>
Subject st: AW: AW: AW: xtline - How can I control add a note and a caption
Date Tue, 18 May 2010 13:44:55 +0200
Problem solved. The solution provided by Martin works. Thanks.
Kind regards,
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu
[mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Martin Weiss
Sent: Tuesday, May 18, 2010 11:09
To: statalist@hsphsun2.harvard.edu
Subject: st: AW: AW: xtline - How can I control add a note and a caption
If such solutions are not apparent from the help file - which in this case I
do not think they are - you can always take a close look at the dialog box
at -db xtline-, and more often than not, you would come up with the correct
answer after repeated use of the "Submit" button...
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu
[mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Martin Weiss
Sent: Tuesday, May 18, 2010 11:04
To: statalist@hsphsun2.harvard.edu
Subject: st: AW: xtline - How can I control add a note and a caption
xtline calories, byopts(caption(this is a caption) note(this is a note))
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu
[mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Alexander
Sent: Tuesday, May 18, 2010 11:00
To: statalist@hsphsun2.harvard.edu
Subject: st: xtline - How can I control add a note and a caption
Dear all,
I am trying to modify a simple xtline graph, but unfortunately it does not
work. Consider these simple commands:
sysuse xtline1
xtset person day
xtline calories
I'd like to modify the graph in the following way:
In the default setting the caption says "Graphs by person" and I want to
replace that. I thought adding a caption or a note will do it. So I used the
following command:
xtline calories, note("this is a note") caption("this is a caption")
But this adds a note and a caption under each separate plot. So how can I
add a "general" note or caption for the whole graph?
Thanks in advance!
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2010-05/msg00987.html","timestamp":"2014-04-17T10:11:24Z","content_type":null,"content_length":"11588","record_id":"<urn:uuid:94a7637d-f97e-4639-aff9-6608c4f30291>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
|
series solution!
September 10th 2012, 03:58 AM #1
Jul 2011
series solution!
Please, folks: I have attempted the problem above and got the indicial equation c = 1 or -1, and the following series while holding c = 1;
what would this be in closed form? Please correct me if I made any mistake. Thanks!
Re: series solution!
Did you just compute the first coefficients, or did you get a recurrence relation? (We can get the latter by first multiplying the equation by $x^2$.)
Re: series solution!
Just the first coefficient, to obtain that relation. I want to know if I am on the right track.
|
{"url":"http://mathhelpforum.com/differential-equations/203212-series-solution.html","timestamp":"2014-04-16T20:21:21Z","content_type":null,"content_length":"34421","record_id":"<urn:uuid:5d9445bf-5800-4d47-ad75-7c7da1411bce>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
|
L'Hospital's Rule
lim x -> infinity of (3x+4)/(ln(4+4e^x))
lim x -> 0 of (2/x^4)-(5/x^2)
lim x -> infinity of (1+(10/x))^(x/4)
Come on...you don't need L'Hopital's rule for any of these. Just think about this. As x gets incredibly huge, $4+4e^x\to{4e^x}$, so $\ln(4+4e^x)\to\ln(4)+x$. So $\frac{3x+4}{\ln(4+4e^x)}\to\frac{3x+4}{\ln(4)+x}\to{3}$. For the second one, combining the fractions gives $\frac{2-5x^2}{x^4}$; now when you put 0 in there it should be pretty obvious what the answer is. For the third one, I'm sure you know that $\lim_{x\to\infty}\left(1+\frac{1}{x}\right)^x=e$. Well, in your limit, if we let $\frac{10}{x}=u$ we have that $\lim_{x\to\infty}\left(1+\frac{10}{x}\right)^{\frac{x}{4}}=\lim_{u\to{0}}\left(1+u\right)^{\frac{10}{4u}}$. Now we can see that this is equivalent to $\left[\lim_{u\to{0}}\left(1+u\right)^{\frac{1}{u}}\right]^{\frac{10}{4}}$. On the inner limit, letting $\frac{1}{x}=u$ again, we get $\lim_{x\to\infty}\left(1+\frac{1}{x}\right)^x=e$, so the overall limit is $e^{10/4}$.
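For what it's worth, all three answers can be double-checked symbolically; this SymPy snippet is an addition for convenience and is not from the original replies.

import sympy as sp

x = sp.symbols('x')
print(sp.limit((3*x + 4) / sp.log(4 + 4*sp.exp(x)), x, sp.oo))   # 3
print(sp.limit(2/x**4 - 5/x**2, x, 0))                           # oo
print(sp.limit((1 + 10/x)**(x/4), x, sp.oo))                     # exp(5/2)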
|
{"url":"http://mathhelpforum.com/calculus/56453-l-hospital-s-rule.html","timestamp":"2014-04-23T11:04:49Z","content_type":null,"content_length":"34938","record_id":"<urn:uuid:2046f4d2-51ce-4986-82ce-50467de097cd>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematica Tutorial
Bill Titus, Carleton College
Department of Physics and Astronomy, Northfield, MN 55057-4025 March 1, 2003
These nine self-paced tutorials will expose you to various Mathematica tools and skills. Note: You must have Mathematica installed on your computer to open these files. During these tutorials, you will:
1. Learn a subset of Mathematica by example
2. Get a feeling for the structure of Mathematica and some of its potential
3. Learn how to find out information about Mathematica and its various commands
4. Develop the ability and confidence to pick Mathematica up again, even if some time has gone by (but not too much time)
5. Have a sense of the power and scope of Mathematica and whether you want to invest the time and effort in learning this software package.
Here are the 9 tutorials as well as the introductory material from the workshop for which these were created.
|
{"url":"http://serc.carleton.edu/quantskills/tools_data/btitus.html","timestamp":"2014-04-18T14:22:03Z","content_type":null,"content_length":"20242","record_id":"<urn:uuid:f4e8f5e7-644f-4482-aebd-f79f6079881a>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Spring 2013 Courses
For each of the courses, syllabi are to be handed out in class. Please note that it is possible my teaching assignment could change. Also note that the information here is subject to change (e.g. a
classroom could be moved).
Mathematics 367/367Z (Discrete Probability)
Class numbers 2359 (for AMAT 367) and 2360 (for AMAT 367Z). MWF 9:20-10:15 AM in ES 245. The textbook is Discrete Probability, by Gordon.
Mathematics 487/587 (Topics in Modern Mathematics)
Class numbers 9526 (for AMAT 487) and 9528 (for AMAT 587), MWF 11:30 AM-12:25 PM in PH 123. The topic focussed on is probability on finite groups, and the background expected of students is
undergraduate courses in abstract algebra and probability. The textbook is Group Representations in Probability and Statistics, by Diaconis. This textbook is available online by clicking here.
Questions: Send me e-mail. The e-mail address is mhildebrand AT albany.edu (where you should replace AT with @).
|
{"url":"http://www.albany.edu/~martinhi/spring2013courses.html","timestamp":"2014-04-19T03:23:10Z","content_type":null,"content_length":"1512","record_id":"<urn:uuid:0b70a748-ef38-4fca-b9bb-fbcfa00a74c7>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
|
equivalence classes
What about saying it a little better. The equivalence class of a subset of the naturals under this relation is the class of all sets such that there exists a bijection between that set and the
class representative.
Maybe. Or at least insist that questions are not posted in image files, unless there are accompanying diagrams or something.
I don't really see a better way to describe the classes other than to reuse the language of the problem. Something like,
For $A \in \mathcal P (\mathbb N),~ [A] = \{ B \in \mathcal P (\mathbb N) ~:~ |A| = |B| \}$
The post says "whenever $|A|-|B|$." What does that mean?
That is not a relation. Please reply with a complete question.
Maybe. Or at least insist that questions are not posted in image files, unless there are accompanying diagrams or something.
I don't really see a better way to describe the classes other than to reuse the language of the problem. Something like,
For $A \in \mathcal P (\mathbb N),~ [A] = \{ B \in \mathcal P (\mathbb N) ~:~ |A| = |B| \}$
that is fine i suppose. i wanted to emphasize that we are dealing with sets here. and what you described is what |A| = |B| means by definition. so it's a matter of taste, i think... which i think
is also what you're saying.
A~B iff A and B have the same cardinality.
Therefore, there are countably many equivalence classes:
1-class: the collection of all sets that contain only one element.
2-class: the collection of all sets that contain two elements.
n-class: the collection of all sets that contain n elements.
infinite-class: the collection of all sets that contain countably infinitely many elements.
A further question: What is P(N)/~? I leave it for you to solve. Solve it, and you will understand cardinality better.
|
{"url":"http://mathhelpforum.com/discrete-math/124591-equivalence-classes.html","timestamp":"2014-04-20T07:18:40Z","content_type":null,"content_length":"73926","record_id":"<urn:uuid:fdbc12ed-50ba-4eb1-8cea-aa07daf34aef>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
|
kirchhoff's law
Is there, precisely, a detailed mathematical deduction (proof) of Kirchhoff's law?
I have read around what I can find in a couple of text books and on the www. It seems as much an article of faith as anything, based on the second law of thermodynamics. You may have to dig further
than just what Google has to offer at the first level. And it's not just a matter of a mathematical derivation.
This link
refers to the original paper and
this link
has a lot about the studies at the time. It points out the big error in Kirchhoff's approach.
Hope it's of some use.
|
{"url":"http://www.physicsforums.com/showthread.php?s=90f3e92f774720618b817b2bc7d43586&p=4658788","timestamp":"2014-04-19T07:26:23Z","content_type":null,"content_length":"49643","record_id":"<urn:uuid:1c950d26-af6e-4804-b313-4f94ee3d12e1>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mapping a polygon to a unit sphere
I need some suggestions about mapping a polygon to a unit sphere. Kindly help me.
I would look at spherical texture mapping assuming the polygon is a 2d object
mapping a polygon to a unit sphere. You've asked several questions about "mapping" this to that, but you don't really explain what you mean by that. Are you talking about producing a set of texture
coordinates? Are you talking about somehow turning a polygon into a sphere? Are you talking about generating a polygonal representation of a sphere? Describe the result you're trying to achieve.
I think I need to clarify my problem. I have one cross section. I would like to give it the shape of another cross section which is totally different in shape. I thought about unitizing both cross
sections by enclosing them within a unit bounding box or sphere, then map the vertices of one cross section to another. I am not sure whether my approach is ok or not. I need some suggestions.
Are you talking about morphing one cross-section into another? If so, you will need to have the same number of vertices on each cross-section. One way I can think of is as follows. Let's call the
cross-section with the most vertices XA and the other XB; express the vertices of XA as a percentage along the path of the cross-section, find the corresponding percentage along XB, and insert a vertex there.
Now you can morph XA into XB or vice versa using a linear function applied to each vertex. There is a caveat - this works well for 2D, but you can get some strange in-betweens in 3D depending on the
relative start and end positions of the cross-sections. You can notice this when you do skinning/lofting in a 3D modelling package.
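A rough Python/NumPy sketch of that resampling-and-interpolation idea follows. It is an illustration of the suggestion above rather than production code, and the two example outlines are arbitrary.

import numpy as np

def fractions_along(poly):
    # cumulative fraction of the closed outline's length at each vertex
    closed = np.vstack([poly, poly[:1]])
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    return cum[:-1] / cum[-1]

def point_at(poly, f):
    # point a fraction f of the way around the closed outline of poly
    closed = np.vstack([poly, poly[:1]])
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    d = f * cum[-1]
    i = min(np.searchsorted(cum, d, side='right') - 1, len(seg) - 1)
    t = (d - cum[i]) / seg[i]
    return (1 - t) * closed[i] + t * closed[i + 1]

XA = np.array([[0, 0], [2, 0], [2, 1], [1, 1.5], [0, 1]], float)   # cross-section with more vertices
XB = np.array([[0, 0], [1, -1], [2, 0], [1, 2]], float)            # cross-section with fewer vertices

# give XB one vertex for each of XA's, at matching fractions of the outline
XB_matched = np.array([point_at(XB, f) for f in fractions_along(XA)])

alpha = 0.5                                   # 0 gives XA, 1 gives XB
in_between = (1 - alpha) * XA + alpha * XB_matched
print(in_between)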
Actually it is something like morphing, but I have one X-section with a known set of vertices, and the one to which I would like to map is an extracted boundary from an image. I would like to morph my
X-section to give it the shape of the image. I am not sure whether I am on the right track. As both are of different sizes, I wanted to unitize them to a unit bounding box, centred at the center of the
bounding box. Then I pass a line from the center through each vertex of the cross section up to the bounding box. This intersects the extracted image boundary somewhere. The corresponding X-section vertex is pushed
or pulled (as necessary) to the intersection point. Does it sound ok? Please give me some more suggestions.
|
{"url":"http://www.opengl.org/discussion_boards/showthread.php/180881-Mapping-a-polygon-to-a-unit-sphere","timestamp":"2014-04-21T10:03:00Z","content_type":null,"content_length":"55992","record_id":"<urn:uuid:59a1b937-baf0-4896-8564-cde32f8d0c46>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Advanced Mathematics for Computer Science - Page 2 - Math and Physics
For a lot of the topics mentioned (topology, differential geometry, nonlinear dynamics, etc), basically anything where there is a continuum instead of just finite structures, it will be difficult
to make much progress without a solid grounding in real analysis. There's a great set of video lectures by Francis Su from Harvey Mudd where I did my undergrad,
(or http:/ /www.youtube.com/watch?v=sqEyWLGvvdw and click through to the other videos)
Cool beans
Been looking for an easy i.e. video intro to real analysis. I tried to read Mandelbrot's Fractal book and Kip Thorne's gravitation books a while back and both of them quickly lost me since right off
the bat they both go into metric spaces
I can also brush up on monoids too now
|
{"url":"http://www.gamedev.net/topic/621988-advanced-mathematics-for-computer-science/page-2","timestamp":"2014-04-23T21:45:38Z","content_type":null,"content_length":"134715","record_id":"<urn:uuid:dfbf3ac5-b010-4e2b-a48d-46318b1165cb>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
|
In Geometry, the Nine-Point Circle is a circle that can be constructed for any given triangle. It is so named because it passes through nine significant points. They include: the midpoint of each
side of the triangle, the foot of each altitude and the midpoint of the segment of each altitude from its vertex to the orthocenter (where the three altitudes meet).
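The construction is easy to check numerically. The short Python sketch below is an illustration with an arbitrary example triangle, not part of this page; it computes the nine points directly from their definitions and verifies that they are equidistant from a common center.

import numpy as np

A, B, C = map(np.array, [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)])   # an arbitrary triangle

def foot(P, Q, R):
    # foot of the perpendicular from P onto line QR
    d = R - Q
    return Q + np.dot(P - Q, d) / np.dot(d, d) * d

def orthocenter(A, B, C):
    # solve (H-A).(C-B) = 0 and (H-B).(A-C) = 0 for H
    M = np.array([C - B, A - C])
    return np.linalg.solve(M, [np.dot(A, C - B), np.dot(B, A - C)])

def circumcenter(P, Q, R):
    # intersection of two perpendicular bisectors
    M = 2 * np.array([Q - P, R - P])
    return np.linalg.solve(M, [np.dot(Q, Q) - np.dot(P, P), np.dot(R, R) - np.dot(P, P)])

H = orthocenter(A, B, C)
nine = [(A + B) / 2, (B + C) / 2, (C + A) / 2,            # midpoints of the sides
        foot(A, B, C), foot(B, C, A), foot(C, A, B),      # feet of the altitudes
        (A + H) / 2, (B + H) / 2, (C + H) / 2]            # midpoints from vertices to orthocenter

N = circumcenter(nine[0], nine[1], nine[2])               # center of the circle through three of them
print([round(float(np.linalg.norm(p - N)), 6) for p in nine])   # all nine distances agree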
The Nine-point Circle is also known as Feuerbach's Circle, Euler's Circle, Terquem's Circle, the Six-Points Circle, the Twelve-points Circle, the N-Point Circle, the Medioscribed Circle, the Mid
Circle or the Circum-Midcircle.
Here are some triangle centers that lie on the Nine-Point Circle:
X(115), X(116), X(117), X(118), X(119), X(120), X(121), X(122), X(123), X(124), X(125), X(126), X(127), X(128), X(129), X(130), X(131), X(132), X(133), X(134), X(135), X(136), X(137), X(138), X
(139), X(1312), X(1313), X(1560), X(1566), X(2679).
|
{"url":"http://www.uff.br/trianglecenters/nine-point-circle.html","timestamp":"2014-04-16T10:21:50Z","content_type":null,"content_length":"6632","record_id":"<urn:uuid:d1d8f799-d88e-4c96-b674-e9ce320809f8>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00456-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pitch and pitch strength of iterated rippled noise.
ASA 129th Meeting - Washington, DC - 1995 May 30 .. Jun 06
3aPP1. Pitch and pitch strength of iterated rippled noise.
William A. Yost
Sandra J. Guzman
Stanley Sheft
Parmly Hear. Inst., Loyola Univ. Chicago, 6525 N. Sheridan Rd., Chicago, IL 60626
A cascade of add, delay (d ms), and attenuate (-1 ≤ g ≤ 1) circuit excited with noise produces iterated rippled noise (IRN) stimuli. The matched pitch and
discriminability between pairs of IRN stimuli were studied as a function of g, d, and the number of circuit iterations (n). For g>0, the pitch of all IRN stimuli equals 1/d. For g<0, pitch depends on
n: For small n, there were two pitches in the region of 1/d, while for large n there was a single pitch equal to 1/2d. Peaks in the autocorrelation function of IRN stimuli accounted for all of the
results. Peaks in the autocorrelation functions for IRN stimuli indicate the number of intervals in the waveform with durations pd (p=1,2,...,n), and for g<0 intervals related to peaks near 1/md (m=
odd integers) caused by assumed auditory filtering. The number of intervals (i.e., the heights of the autocorrelation peaks) determines the discriminability between IRN stimuli, while the reciprocal
of the interval duration determines the matched pitch. These results support a temporal rather than a spectral account of the pitch of IRN stimuli. [Work supported by NIH.]
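A minimal sketch of that generation process in Python/NumPy follows. It is an illustration based on the description above, written for the "add-same" cascade in which each iteration feeds the previous output back in; the sampling rate and parameter values are arbitrary assumptions, not the ones used in the study.

import numpy as np

def irn(noise, delay_samples, g, n):
    # one pass = delay the current signal, attenuate by g, and add it back in
    y = noise.copy()
    for _ in range(n):
        delayed = np.zeros_like(y)
        delayed[delay_samples:] = y[:-delay_samples]
        y = y + g * delayed
    return y

fs = 20000                       # assumed sampling rate in Hz
d_ms = 4.0                       # delay d in milliseconds; for g > 0 the pitch is heard near 1/d = 250 Hz
x = np.random.randn(fs)          # one second of Gaussian noise
y = irn(x, int(fs * d_ms / 1000), g=1.0, n=8)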
|
{"url":"http://www.auditory.org/asamtgs/asa95wsh/3aPP/3aPP1.html","timestamp":"2014-04-17T04:07:01Z","content_type":null,"content_length":"1851","record_id":"<urn:uuid:4acf2689-166d-4596-bd3e-643d68841e1f>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathFiction: Mefisto: A Novel (John Banville)
Although the mathematics is only discussed in this novel in the vaguest terms, it is of the greatest importance to the book. Gabriel Swan, the main character/narrator is so focused on numbers and
equations that even his descriptions of non-mathematical situations are described in mathematical terms. He attributes his fascination with numbers to his knowledge of his twin who died at birth:
(quoted from Mefisto: A Novel)
It seems out of all this somehow that my gift for numbers grew. From the beginning, I suppose, I was obsessed with the mystery of the unit, and everything else followed. Even yet I cannot see a one
and a zero juxtaposed without feeling deep within me the vibration of a dark, answering note. Before I could talk I had been able to count, laying out my building blocks in ranked squares, screaming
if anyone dared to disturb them...My party piece was to add up large numbers instantly in my head, frowning, a hand to my brow, my eyes downcast. It was not the manipulation of things that pleased
me, the mere facility, but the sense of order I felt, of harmony, of symmetry and completeness.
We follow Gabriel through a very messy childhood and adolescence, encountering the unpleasant side of life far too often. His mother's unexpected death in a car accident, his own tragic accident
leaving him burned and scarred over his entire body, his first girlfriend's deafness and his first lover who was a drug addict. Throughout all of this, Gabriel seeks order in his mathematics and
hopes to apply it to the real world around him.
(quoted from Mefisto: A Novel)
Oh, I worked. Ashburn, Jack Kay, my mother, the black dog, the crash, all this, it was not like numbers, yet it too must have rules, order, some sort of pattern. Always I had thought of number
falling on the chaos of things like frost falling on water, the seething particles tamed and sorted, the crystals locking, the frozen lattice spreading outwards in all directions. I could feel it in
my mind, the crunch of things coming to a stop, the creaking stillness, the stunned white air. But marshal the factors how I might, they would not equate now.
This is what the author is trying to show us: the chaos of the real world cannot be tamed by numbers. Of course, the chaos he refers to has nothing to do with the mathematical definition of chaos.
This is not sensitive dependence, nor transitivity. It is just the unpleasantness of reality which Swan cannot tame with numbers. Nevertheless, one of his mathematical mentors, Professor Kosok, is
working on research which precisely demonstrates this lack of usefulness of numbers. When a representative of the government comes to complain about his use of the grant money
they have given him (to perform calculations on an old, teletype main frame computer) the following conversation ensues:
(quoted from Mefisto: A Novel)
-We're only asking, she said, the minister is only asking, for some sort of statement of your precise aims in this programme? Everything you show us seems so...well, so hazy, so...uncertain?
At this the professor made a violent whooshing noise, like a breathless swimmer breaking the surface, and turned on her in a fury.
- There is no certainty! he cried. That is the result! Why don't you understand that, you you you ...! Ach, I am surrounded by fools and children. Where do you think you are living, eh? This is the
world, look around you, look at it! You want certainty, order, all that? Then invent it.
- There! he said. Him [Swan]! He is the one you need, he thinks that numbers are exact, and rigourous, tell your minister about him!
You won't learn any mathematics from this book, but I still recommend it as a well written piece of literature in which math plays a fundamental role.
Contributed by andy
: "Mathematics aside, Banville creates a world within which the reader breathes a different air. It is the work that introduced me to Banville's brilliance with words, and his newest book Eclipse is no
disappointment. For those of you of a more scientific bent than myself I recommend Banville's Dr Copernicus. Cheers"
Contributed by Anonymous
wonderful piece of literature!
|
{"url":"http://kasmana.people.cofc.edu/MATHFICT/mfview.php?callnumber=mf147","timestamp":"2014-04-17T07:40:45Z","content_type":null,"content_length":"13207","record_id":"<urn:uuid:35ebdf3b-1e32-4b42-a8df-0db889387fb1>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A stream of primes as Comonad
I figured I've beaten enough around this bush to encircle it with a path half-a-meter in depth. So, I've finally decided to bear down enough to study comonad types and to create my own instance.
The above is a roundabout way of saying: this is an introductory guide from a beginner's perspective. YMMV. But, perhaps, as the other material out there on comonads is rather scarce to begin with, and rather intimidating where present,
this entry may spark the interest of the reader wishing to start off on comonads and doing some basic things with them.
The comonad definition I use is from Edward Kmett's Control.Comonad library. What a sweet library it is; ⊥-trophy is in the ether-mail.
Let's talk a bit about what a comonad is, first. A comonad is a special way of looking at a data structure that allows one to work with the individual members of that data structure in the context of
the entire data structure. It takes the opposite approach to the data in the data structure from the one monads take, so the monadic bind (>>=) ...
>>= :: Monad m ⇒ m a → (a → m b) → m b
... has its dual in the comonad (commonly called cobind, but sensibly named extend, written =>> in operator form, in the library I'm using). Its signature is as follows:
=>> :: Comonad w ⇒ w a → (w a → b) → w b
I started gaining insight into comonads by studying the extender function — it takes the comonad in its entirety and resolves that to a value in context. That value is then used to reassemble the
comonad into its new form. In short, the extender extends the comonad.
Now, just as monads have the unit function (return) that creates a monad from a plain value ...
return :: Monad m ⇒ a → m a
... so, too, comonad has the dual of that function, called counit (or, again, more sensibly called extract for the library I'm using) ...
extract :: (Copointed f, Functor f) ⇒ f a → a
... which, instead of injecting a value into the comonad, extracts a value from the comonad.
That's comonad, in a nutshell: the dual of monad.
Okay, then, let's jump right in to creating and using a Comonad instance. We'll start with the list data type:
> import Control.Comonad
> import Control.Arrow
> import List
> instance Copointed [] where
>   extract = head
> instance Comonad [] where
>   extend fn []         = []
>   extend fn list@(h:t) = fn list : (t =>> fn)
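To get a feel for what extend does here (a worked example of my own, not from the original post): each element of the result is the supplied function applied to the corresponding suffix of the list, so [1,2,3] =>> sum gives [6,5,3], that is, the sum of [1,2,3], then of [2,3], then of [3].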
Just like for monads, where m >>= return is the (right) identity, we can show that, for comonads, w =>> extract is an identity:
[1..10] =>> extract
What's the answer that you obtain? Why?
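(A hint, mine rather than the author's: extend applies its function to the list and then to each successive tail, and extract is head, so the head of every suffix comes back in order. The result is [1,2,3,4,5,6,7,8,9,10], the original list, which is exactly the comonadic identity law at work.)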
Now that we can use the whole list to create each element of the new list, thanks to the Comonad protocol, let's solve the example problem from Why Attribute Grammars Matter, which is that we must replace each element with the difference from the average of the list. With comonads, that's easy to express!
> avg list = sum list / genericLength list
> diff list = list =>> (extract &&& avg >>> uncurry (-))
What does diff [1..10] give you, and why?
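A spoiler, and a sketch of my own rather than the original author's: because extend hands the inner function each successive suffix, avg here is recomputed over ever-shorter tails, so diff [1..10] yields [-4.5,-4.0 .. 0.0] instead of the intended [-4.5,-3.5 .. 4.5]; one of the comments below makes exactly this point. A comonadic variant that computes the average of the whole list just once could look like this:
> -- Sketch only (not from the post): precompute the mean of the full list,
> -- then subtract it from the head of each suffix.
> diff' list = let m = avg list in list =>> (subtract m . extract)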
Now, the comonad did not eliminate the problem of multiple list traversals that Wouter points out in his article (please do read it!), but comonads do show a nice, simple way to synthesize the whole to build each part. Beautiful!
A stream ...
Streams can be considered infinite lists, and are of the form:
a :< b :< ...
Uustalu and Vene, of course, discuss the Stream Comonad, but I obtained my implementation from a site with a collection of Comonad types, modifying it to work with Kmett's protocol:
> module Control.Comonad.Stream where
> import Control.Comonad
> data Stream a = a :< Stream a
> instance Show a ⇒ Show (Stream a) where
>   show stream = show' (replicate 5 undefined) stream
>     where
>       show' [] _ = "..."
>       show' (_:t) (x :< xs) = show x ++ " :< " ++ show' t xs
> instance Functor Stream where
>   fmap f (x :< xs) = f x :< fmap f xs
> instance Copointed Stream where
>   extract (v :< _) = v
> instance Comonad Stream where
>   extend f stream@(x :< xs) = f stream :< extend f xs
> produce :: (a → a) → a → Stream a
> produce f seed = let x = f seed in x :< produce f x
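(An aside of my own, not in the original post: produce gives another quick way to build simple streams; produce succ 0, for instance, is the stream 1 :< 2 :< 3 :< ..., since the function is applied to the seed before the result is consed on.)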
Um, it is the user's responsibility to guard these functions below ...
> toList :: Stream a → [a]
> toList (x :< xs) = x : toList xs
> mapS :: (a → b) → Stream a → Stream b
> mapS = fmap
As for lists, it is quite easy to make streams an instance of the Comonad class. So, a stream of 1s is
let ones = 1 :< (ones =>> (extract >>> id))
ones ≡ 1 :< 1 :< 1 :< 1 :< 1 :< ...
The natural numbers are
let nats = 0 :< (nats =>> (extract >>> succ))
nats ≡ 0 :< 1 :< 2 :< 3 :< 4 :< ...
And a stream of primes is ... um, yeah.
... of primes
The ease at which we generated the stream of 1s and natural numbers would lead one to believe that generating primes would follow the same pattern. The difference here is that a prime number is a
number not divisible evenly by
any other
prime number. So, with the schema for streams as above, one would need to know all the prime numbers to find this prime number. A problem perfectly suited for comonads, so it would seem, but, as the
current element of the stream depends on the instantiation of the entire breadth of the stream, we run into a bit of a time problem waiting for our system to calculate the
stream of primes in order to compute the current prime. Hm.
One needs to put some thought into how to go about computing a stream of primes. Uustalu and Vene created the concept of anticipation, along with supporting machinery. All that is outside the scope of this article. Instead, let's consider the problem in a different light: why not embed the "history" (the primes we know) into the stream itself. And what is the history? Is it not a list?
> primeHist :: Stream [Integer]
> primeHist = [3,2] :< primeHist =>> get'
With that understanding, our outstanding function to find the current prime (get') reduces to the standard Haskell hackery:
> get' :: Stream [Integer] → [Integer]
> get' stream =
>   let primus    = extract stream
>       candidate = head primus + 2
>   in  getsome' candidate primus
>   where getsome' :: Integer → [Integer] → [Integer]
>         getsome' candidate primus
>           = if all (λp . candidate `rem` p ≠ 0) primus
>             then candidate : primus
>             else getsome' (2 + candidate) primus
So, now we have a stream of histories of primes; to convert that into a stream of primes is a simple step:
> primes = 2 :< (3 :< fmap head primeHist)
And there you have it, a comonadic stream of primes!
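As a quick sanity check, and as my addition rather than the original author's, here is a small helper for viewing a finite prefix of a stream:
> -- Not in the original post: view the first n elements of a stream.
> takeS :: Int → Stream a → [a]
> takeS n = take n . toList
Assuming the definitions above load as-is, takeS 10 primes should give [2,3,5,7,11,13,17,19,23,29].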
Uustalu and Vene's implementation of primes suffers for the layers of complexity, but my implementation suffers at least as much for its relative simplicity. Each element of the stream contains the history of all the primes up to that prime. A much more efficient approach, both in space and time, would be to use the State monad or comonad to encapsulate and to grow the history as the state ... or, for that matter, just use a list comprehension.
And, of course, there's the "Genuine Sieve of Eratosthenes" [O'Neill, circa 2006] that this article blithely ignored.
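For completeness, and as my own addition rather than the author's: the "just use a list comprehension" alternative is presumably the folklore two-liner below, which is precisely the program O'Neill's paper examines and shows is not really Eratosthenes' sieve:
> -- The folklore "sieve" via a list comprehension; simple, but not the
> -- Genuine Sieve of Eratosthenes.
> listPrimes :: [Integer]
> listPrimes = sieve [2..]
>   where sieve (p:xs) = p : sieve [x | x ← xs, x `rem` p ≠ 0]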
4 comments:
For comparison, and to demonstrate how this works in the imperative D language, of which I am a fanboy:
// difference of values from average
// works with any kind of collection
ElemType!(T)[] diff_avg(T)(T inp) {
  auto res = new ElemType!(T)[inp.length];
  real avg = 0.0;
  foreach (v; inp) avg += v;
  avg /= inp.length;
  foreach (i, v; inp) res[i] = abs(v - avg);
  return res;
}
And to define a stream of primes:
int delegate() primes() {
  return stuple([2], 3) /apply/ (ref int[] primes, ref int current) {
    while (true) {
      current ++;
      auto limit = cast(int) (sqrt(current) + 1);
      bool found;
      foreach (prime; primes) {
        if (prime > limit) break;
        if (current % prime == 0) {
          found = true; break;
        }
      }
      if (found) continue;
      primes ~= current;
      return current;
    }
  };
}
A bit longer, I guess, but not that much.
Sorry for the strange indentation. It appears Blogger lacks a code tag.
Your diff is wrong. The average of the list keeps getting taken on a different list... i.e.
diff [1..10]
[-4.5,-4.0 .. 0.0]
when it should return
[-4.5, -3.5 .. 4.5]
And how does expressing this computation comonadically help to eliminate extra passes of the list? I'll make two points:
1. Your incorrect implementation of "diff" makes many, many needless passes. ;-) It's quadratic in the length of the list when the real diff is linear. Even the incorrect version could be
implemented linearly.
2. Laziness and tying-the-knot can indeed be employed, sometimes, to eliminate multiple passes of data. However, applying that technique to "diff", a la "Why Attribute Grammars Matter", is a red
herring. That computation _cannot_ be done in a single pass. Period.
In that paper, the "optimized" version of diff builds a list of thunks, not Nums, making the second necessary pass implicit. Compiled with GHC -O, the "optimized" version (1 explicit pass + 1
implicit pass) takes basically the same amount of time as the naive version. (3 explicit passes.)
Moreover, the "optimized" version breaks GHC's list fusion scheme, whereas the naive version is a "good producer." (though not a good consumer.) Thus the optimized version can actually perform
_worse_ than the naive version when inlined inside the right context.
You are much better off combining the two passes in the naive version of "avg" into a single pass, implementing "diff" as 2 explicit passes. This will actually speed things up.
("Why Attribute Grammars Matter" is an ok paper, probably worth reading. But that example is bad and my feeling is that it doesn't really make its case of why they matter as a result.)
I'd love to see a comonadic version of diff, with the optimizations needed to reduce the number of passes to 2.
This comment has been removed by a blog administrator.
|
{"url":"http://logicaltypes.blogspot.com/2008/09/stream-of-primes-as-comonad.html?showComment=1230642900000","timestamp":"2014-04-17T04:00:45Z","content_type":null,"content_length":"101707","record_id":"<urn:uuid:c112efb1-3704-43a7-b498-c45bd843e5ab>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Get a Half Life: Exponential Equations: Overview of the Lesson
Get a Half Life
Exponential Equations: Overview of the Lesson
• The students modeled an exponential decay situation in this activity.
• Students were given M&Ms which they poured onto a plate
• They removed the M&Ms that had the Ms facing upwards and counted the ones remaining on the plate.
• They repeated the procedure until they only had 1 or 2 M&Ms on their plates.
• The data was put into the graphing calculators and they attempted to graph a line using a linear equation.
• Then they did the same thing for an exponential equation and saw a curve.
• This helped them see that an exponential equation matched their data better than the line.
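• In other words, if a is the number of M&Ms a group started with, the data should roughly follow the exponential model y = a(0.5)^x, where x is the trial number; a group starting with 60 candies, for example, would expect counts near 60, 30, 15, 8, 4, 2, 1, which is why the exponential curve hugs the data while a straight line cannot.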
|
{"url":"http://fcit.usf.edu/fcat8m/resource/notes/halflife.htm","timestamp":"2014-04-21T02:40:06Z","content_type":null,"content_length":"3182","record_id":"<urn:uuid:09610410-139c-49f0-95ab-9c8c3b1652f9>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Discrete Random Walks, DRW'03
Cyril Banderier and Christian Krattenthaler (eds.)
DMTCS Conference Volume AC (2003), pp. 325-332
author: Valentin Topchii and Vladimir Vatutin
title: Individuals at the origin in the critical catalytic branching random walk
keywords: catalytic branching random walk; critical two-dimensional Bellman-Harris process
abstract: A continuous time branching random walk on the lattice
is considered in which individuals may produce children at the origin only. Assuming that the underlying random walk is symmetric and the offspring reproduction law is critical we prove
a conditional limit theorem for the number of individuals at the origin.
If your browser does not display the abstract correctly (because of the different mathematical symbols) you may look it up in the PostScript or PDF files.
reference: Valentin Topchii and Vladimir Vatutin (2003), Individuals at the origin in the critical catalytic branching random walk, in Discrete Random Walks, DRW'03, Cyril Banderier and Christian
Krattenthaler (eds.), Discrete Mathematics and Theoretical Computer Science Proceedings AC, pp. 325-332
bibtex: For a corresponding BibTeX entry, please consider our BibTeX-file.
ps.gz-source: dmAC0130.ps.gz (32 K)
ps-source: dmAC0130.ps (92 K)
pdf-source: dmAC0130.pdf (96 K)
The first source gives you the `gzipped' PostScript, the second the plain PostScript and the third the format for the Adobe accrobat reader. Depending on the installation of your web browser, at
least one of these should (after some amount of time) pop up a window for you that shows the full article. If this is not the case, you should contact your system administrator to install your
browser correctly.
Due to limitations of your local software, the two formats may show up differently on your screen. If eg you use xpdf to visualize pdf, some of the graphics in the file may not come across. On the
other hand, pdf has a capacity of giving links to sections, bibliography and external references that will not appear with PostScript.
|
{"url":"http://www.dmtcs.org/dmtcs-ojs/index.php/proceedings/article/viewArticle/dmAC0130/1417","timestamp":"2014-04-20T23:46:39Z","content_type":null,"content_length":"14200","record_id":"<urn:uuid:1080a2f9-f028-4555-8fcc-3ff73104676a>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Section: Linux Programmer's Manual (3) Updated: 2010-09-20
ldexp, ldexpf, ldexpl - multiply floating-point number by integral power of 2
#include <math.h>
double ldexp(double x, int exp);
float ldexpf(float x, int exp);
long double ldexpl(long double x, int exp);
Link with -lm.
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
ldexpf(), ldexpl():
_BSD_SOURCE || _SVID_SOURCE || _XOPEN_SOURCE >= 600 || _ISOC99_SOURCE || _POSIX_C_SOURCE >= 200112L;
or cc -std=c99
The ldexp() function returns the result of multiplying the floating-point number x by 2 raised to the power exp.
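For example, ldexp(0.75, 4) returns 0.75 * 2^4 = 12.0, and ldexp(6.0, -1) returns 3.0.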
On success, these functions return x * (2^exp).
If exp is zero, then x is returned.
If x is a NaN, a NaN is returned.
If x is positive infinity (negative infinity), positive infinity (negative infinity) is returned.
If the result underflows, a range error occurs, and zero is returned.
If the result overflows, a range error occurs, and the functions return HUGE_VAL, HUGE_VALF, or HUGE_VALL, respectively, with a sign the same as x.
See math_error(7) for information on how to determine whether an error has occurred when calling these functions.
The following errors can occur:
Range error, overflow
errno is set to ERANGE. An overflow floating-point exception (FE_OVERFLOW) is raised.
Range error, underflow
errno is set to ERANGE. An underflow floating-point exception (FE_UNDERFLOW) is raised.
C99, POSIX.1-2001. The variant returning double also conforms to SVr4, 4.3BSD, C89.
frexp(3), modf(3), scalbln(3)
This page is part of release 3.27 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at http://www.kernel.org/doc/man-pages/.
|
{"url":"http://www.makelinux.net/man/3/L/ldexpf","timestamp":"2014-04-17T09:39:34Z","content_type":null,"content_length":"10518","record_id":"<urn:uuid:71229e75-424f-4f10-8ff7-70a87fa060b9>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
|
December 2008
This issue's editorial topics
Support Plus — make a difference to mathematics
Were you aware that Plus is entirely funded by grants and donations? It is the generosity of our sponsors, individuals and organisations committed to the public understanding of mathematics, that
enables us to bring you new stories from the fascinating world of maths every week, and to keep Plus free of charge for all our readers.
Unfortunately, our current core funding runs out in March 2009, and that's why we are launching a fundraising campaign to sustain the future of Plus. If you enjoy Plus, then please consider making a
donation via our parent organisation, the Millennium Mathematics Project, based at the University of Cambridge. Your money will go towards the salaries of our editorial team and the technical staff
who maintain the Plus website.
We're committed to keeping Plus as it is: interesting, intelligent, fun and free, and one of the only regular popular maths publications on the Web. Every donation, no matter how small, will help us
to achieve this aim and make a difference to mathematics as a whole.
Thank you for your support!
Common sense
Common sense, so dictionaries agree, is sound practical judgment arising from native intelligence. Some aspects of common sense are instinctual and sensory. They prevent us from having to re-learn
the effects of gravity every day, and from trying to mate with bicycles. In that sense, animals can be said to have common sense, too. But the more interesting aspects of common sense are unique to
humans. They refer to our ability to apply reasonable checks and balances to our fanciful imaginations. Much of our success as a species can be attributed to our ability to agree on a common sense
view of the world.
But common sense has little to do with wisdom, let alone truth. Not so long ago, common sense dictated that all but well-born white males needed protection from the effects of education and
self-determination. Today, a considerable number of people still find nothing more natural than the idea that homosexuality is unnatural. Common sense is culturally subjective. When touted by
politicians and tabloids, to peddle immigration policy, anti-terrorism laws or to stop political correctness from going mad, common sense is rarely more than the glorified status quo, and/or a tacit
agreement not to think too hard. People like Joe the plumber make great campaign tools because their disarming ability to spot the obvious and ignore the rest can be re-packaged as common sense and
used to sound a populist note. As an argumentative tool, common sense is popular for its emotiveness, but hardly convincing.
Homophobia, racism, oppression of women — they've all been justified using common sense.
One area that illustrates the limitations of common sense is science. "Sound judgment" of evidence is clearly an important part of scientific thought, but science is essentially iconoclastic. Flat
worlds have become round, time relative, and a dice-playing god a matter of accepted scientific fact. There are countless examples of scientific discoveries that fly in the face of common sense.
Without the willingness to transcend common sense in the light of new evidence (and sometimes even without it), science would be stuck in the stone age. Unsurprisingly, Einstein had a dim view of
common sense, calling it the "set of prejudices we have acquired by age eighteen".
But perhaps we shouldn't make too much of the fact that common sense is irrelevant to theoretical physics. After all we have no direct experience of the very small, the very large, or the very fast.
When it comes to plumbing, Joe is much better advised to use a common sense approach to physics — the uncertainty principle doesn't apply to toilets. Mathematics has a more ambivalent, possibly more
relevant, relationship with common sense. Mathematics is all about questioning your assumptions, but as an axiomatic system it needs a collection of self-evident truths (axioms) to base itself on.
These self-evident truths might include statements like "things that are equal to the same thing are also equal to one another," one of Euclid's common notions postulated around 2300 years ago.
Without axioms there is no mathematics, and in this sense mathematics is based on some sort of common sense.
Common sense comes into mathematics in other ways, too. Hardly any mathematical proof published in an academic journal follows stringent axiomatic rules, which dictate that every statement must be
derived from a clearly defined set of axioms using only the rules of logic. Mathematicians rely on intuition, assumptions of what's "obvious" or "trivial", sometimes even pictures. In fact, Gödel's
incompleteness theorem puts a theoretical limit on what can be proved within an axiomatic system. What's more, mathematicians routinely ignore profound philosophical issues. They happily talk about
things they cannot possibly know, like the infinite decimal expansion of Pi in its entirety, and perform leaps of faith, for example when arguing that if premise A leads to a contradiction, then
premise not A must be true — what if there's an alternative to A and not A? Mathematics is riddled with philosophical holes patched up by a common sense of what's an acceptable argument. If
mathematicians insisted on never, ever using some sort of common intuition, then mathematics, too, would be stuck in the stone age.
So, if common sense enters mathematics, the very study of what's knowable, isn't it legitimate to assume and to occasionally appeal to a wider common human sense, one that is perhaps best left whole
and undefined? After all, statements like Euclid's common notion concerning identity are undeniably true to every single human being, even though we can't prove them. But statements such as Euclid's
are the bare sediments of a thorough process of scrutiny. Philosophers over the centuries have struggled hard with common sense. After weeding out its not-so-common aspects, they often end up with
skeletal statements that have a mathematical ring to their precision. Try, for example, George Edward Moore's "There exists at this time a living human body which is my body". Common sense by
definition needs to be self-evident, and self-evident things, by their very nature, are not worth appealing to. Common sense on its own certainly can't sort out complex issues such as immigration.
We clearly do need a common yardstick by which to measure the products of our fanciful imaginations, but that yardstick is about method, rather than content. Working mathematicians use their very own
brand of common sense to deal with shaky grounds: scrutinise your argument for hidden assumptions (the bad kind of common sense), identify your explicit assumptions (your mathematical common sense),
and be aware of the fact that your argument won't work for those who don't accept those assumptions. If something works in practice, then use it, but remember that unturned stones can hide doors to
new worlds. It's about separating what you know from what you believe and making the best of certainties by being aware of their limitations. If common sense tells us anything at all, it's that
things are probably more complicated than we think, and that others view the world differently — not attitudes usually taken by self-proclaimed defenders of common sense.
|
{"url":"http://plus.maths.org/content/editorial-8","timestamp":"2014-04-16T11:30:19Z","content_type":null,"content_length":"27059","record_id":"<urn:uuid:8a775b99-337c-4e89-b5a9-61685f518cfc>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
|