Green Oaks, IL Math Tutor
Find a Green Oaks, IL Math Tutor
...I have taught basic discrete/finite math (probability, combinatorics, naive set theory, basic linear algebra, linear programming) at the college level. I also wrote a study guide covering all
the above topics. Used copies of the study guide are still available on amazon.com.
24 Subjects: including differential equations, discrete math, linear algebra, algebra 1
...I have tutored students of varying levels and ages for more than six years. While I specialize in high school and college level mathematics, I have had success tutoring elementary and middle
school students as well. I have experience working with ACT College Readiness Standards and have been successful improving the ACT scores of students.
19 Subjects: including calculus, statistics, discrete math, GRE
...Homeschool group or private lessons available for pre-algebra - calculus students. Please contact me for more information. I love math and teaching. I have taught and tutored Algebra I for 8 years.
24 Subjects: including trigonometry, differential equations, discrete math, dyslexia
...I have a very laid back personality and I love my subject area. Looking for tutoring opportunities nearby. I have extensive experience in helping students with study skills. I have taught
resource classes for students for 8 years in which we work on study skills.
12 Subjects: including algebra 2, chemistry, physics, prealgebra
...It has its own vocabulary, syntax, and grammar. For example, when we say x=2, everybody *just knows* that we're talking about a single x equaling 2. 1x is written as x. Because it's more
"efficient." (Sounds better than saying "lazy", doesn't it?) Once the language of algebra becomes comfortable, we can start playing around with it, which leads us to the wild world of algebra.
14 Subjects: including algebra 1, algebra 2, geometry, prealgebra
Related Green Oaks, IL Tutors
Green Oaks, IL Accounting Tutors
Green Oaks, IL ACT Tutors
Green Oaks, IL Algebra Tutors
Green Oaks, IL Algebra 2 Tutors
Green Oaks, IL Calculus Tutors
Green Oaks, IL Geometry Tutors
Green Oaks, IL Math Tutors
Green Oaks, IL Prealgebra Tutors
Green Oaks, IL Precalculus Tutors
Green Oaks, IL SAT Tutors
Green Oaks, IL SAT Math Tutors
Green Oaks, IL Science Tutors
Green Oaks, IL Statistics Tutors
Green Oaks, IL Trigonometry Tutors
Gaseous system: Meaning of this integral eq.
What is the physical or statistical meaning of the following integral
[tex]\int^{a}_{0} g(\vartheta)\, d\vartheta[/tex] = [tex]\int^{\infty}_{a} g(\vartheta)\, d\vartheta[/tex]
where [tex]g(\vartheta)[/tex] is a Gaussian in [tex]\vartheta[/tex] describing the transition frequency fluctuation in a gaseous system (assume two-level and inhomogeneous) .
[tex]\vartheta = \omega_{0} -\omega[/tex], where [tex]\omega_{0}[/tex] is the peak frequency and [tex]\omega[/tex] the running frequency.
I understand that the integral finds a point [tex]\vartheta = a[/tex] for which the areas under the curve (the Gaussian) from 0 to a and from a to [tex]\infty[/tex] are equal.
But is there a statistical meaning to this integral? Does it find something like the most-probable value [tex]\vartheta = a[/tex]? But the most probable value should be [tex]\vartheta = 0[/tex] in my
understanding! So what does the point [tex]\vartheta = a[/tex] tell us?
I will be grateful if somebody can explain this and/or direct me to a reference.
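One statistical reading: since g is symmetric about [tex]\vartheta = 0[/tex], the point a that splits the area on [0, ∞) in half is the median of [tex]|\vartheta|[/tex] — equivalently the 75th percentile of the full Gaussian, the classical "probable error" point at roughly 0.6745 times the width. A quick numerical sketch, assuming a zero-mean Gaussian of standard deviation sigma (the numbers here are illustrative):

```python
from statistics import NormalDist

sigma = 1.0
nd = NormalDist(0.0, sigma)

# The point a with  integral_0^a g = integral_a^inf g  is the median of
# |theta|, i.e. the 75th percentile of the full Gaussian: a ~ 0.6745 * sigma.
a = nd.inv_cdf(0.75)

left = nd.cdf(a) - nd.cdf(0.0)    # area from 0 to a
right = 1.0 - nd.cdf(a)           # area from a to infinity
print(round(a, 4))                # 0.6745
```

So a tells you the half-width within which half of the (positive-side) frequency fluctuations fall — a dispersion measure, not a most-probable value.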
Generating an Array of Random Floats
While numerically testing a solution for the February 2007 IBM Ponder This challenge I had to generate a large n×2 Array of random floats uniformly distributed in [0,1).
My first attempt was to use structured flavors in the RandomTools package:
picks[4] := proc(n::posint)
uses RandomTools;
, datatype=float[8]);
end proc:
While this worked, it was rather slow. I then noticed that there is a listlist flavor in the RandomTools package. Using it gave a significant improvement (about 2×) over using nested list flavors.
picks[3] := proc(n::posint)
uses RandomTools;
, 'datatype=float[8]'
end proc:
By specifying a 2×n listlist structure and using the transpose option to the Array constructor this could be slightly improved:
picks[2] := proc(n::posint)
uses RandomTools;
, 'datatype=float[8]'
, 'transpose=true'
end proc:
While the structured flavors are convenient, they are not as fast as I'd like. My next attempt was to use the GenerateFloat64 procedure in the MersenneTwister subpackage of RandomTools. It calls (I
believe) compiled code that is significantly faster than the random number generator used to generate structured flavors. Unlike the structured flavor approach, GenerateFloat64 returns only a single
random float that is (thankfully) uniformly distributed on [0,1) so we'll have to use an appropriate constructor to initialize the Array.
picks[1.3] := proc(n::posint)
uses RandomTools;
Array(1..n, 1..2, (i,j) -> MersenneTwister:-GenerateFloat64()
, 'datatype=float[8]'
end proc:
That is significantly faster (about 5× with n=50000) than the previous method. As you may have guessed by the numbering scheme, there are still a few improvements. Each is somewhat faster than the one before.
picks[1.2] := proc(n::posint)
uses RandomTools;
local i;
, MersenneTwister:-GenerateFloat64()]
, 'datatype=float[8]'
end proc:
picks[1.1] := proc(n::posint)
uses RandomTools;
local i;
, [seq(MersenneTwister:-GenerateFloat64(), i=1..n)]
, [seq(MersenneTwister:-GenerateFloat64(), i=1..n)]
, 'transpose = true'
, 'datatype=float[8]'
end proc:
Jacques Carette suggests the following trick, which directly calls the compiled procedure used by GenerateFloat64.
picks[1.0] := proc(n::posint) local t;
uses RandomTools; kernelopts(opaquemodules=false):
t := (a,b) -> MersenneTwister:-MTKernelInterface(3);
Array(1..n, 1..2
, t
, 'datatype'=float[8]
end proc:
Here is a minor improvement. First, I restored the setting of opaquemodules. I also used the faster technique of passing a listlist structure rather than using a functional initializer. Finally, to
generate the lists quickly, two seq commands were used and the structure transposed, using the transpose option to Array.
picks[0.8] := proc(n::posint)
local i,rnd,opacity;
uses RandomTools;
opacity := kernelopts('opaquemodules'=false);
rnd := MersenneTwister:-MTKernelInterface;
, [seq(rnd(3), i=1..n)]
, [seq(rnd(3), i=1..n)]
, 'transpose=true'
, 'datatype=float[8]'
end proc:
This, too, can be further improved. For this case it is faster to create an empty Array and then use the map function to initialize random data.
picks[0.7] := proc(n::posint)
local opacity,rnd;
opacity := kernelopts('opaquemodules'=false);
rnd := RandomTools:-MersenneTwister:-MTKernelInterface;
map(()->rnd(3), Array(1..n,1..2,'datatype=float[8]'));
end proc:
A bit more speed can be gained by using a hack. A number, including a hardware float, that is applied to any argument evaluates to itself. For example, 3.2(0.0) = 3.2. We can use this trick to avoid
having to evaluate the operator ()->rnd(3), instead we use the unevaluated function 'rnd(3)' which, when evaluated, returns a hardware float that is then applied to 0.0 (the initial value of the
Array) and so evaluates to itself:
picks[0.65] := proc(n::posint)
local opacity,rnd;
opacity := kernelopts('opaquemodules'=false);
rnd := RandomTools:-MersenneTwister:-MTKernelInterface;
map('rnd(3)', Array(1..n,1..2,'datatype=float[8]'));
end proc:
Acer points out that rtable has a built-in frandom initializer. While the documentation for this feature is deficient (it doesn't mention whether the generated random numbers are uniformly
distributed within the range, or other details) it is orders of magnitude faster than the previous approach.
picks[0.1] := proc(n::posint)
rtable(1..n, 1..2
, frandom(0..1, 1)
, 'subtype = Array'
, 'datatype=float[8]'
end proc:
Here is a performance comparison of all the techniques. The fastest procedure is about 3 orders of magnitude faster than my first attempt. All tests were run with Maple 11 on a Linux box.
Generate a 50000 x 2 Array of random floats uniformly distributed in [0,1)
----- timing ------- ---------- memory ------------
total (s) maple (s) used (MB) alloc (MB) gctimes proc
--------- --------- --------- ---------- ------- ----
0.02 0.02 0.76 0.81 0 picks[.1]
0.85 0.82 18.32 10.87 1 picks[.65]
1.03 0.92 18.96 10.69 1 picks[.7]
1.14 1.11 20.82 13.19 1 picks[.8]
1.25 1.20 21.97 14.62 1 picks[.85]
1.37 1.37 21.64 11.75 1 picks[.9]
1.50 1.46 21.74 11.75 1 picks[1.0]
1.41 1.37 23.43 14.31 1 picks[1.1]
1.53 1.50 24.12 16.06 1 picks[1.2]
1.68 1.61 23.41 13.25 1 picks[1.3]
7.79 7.58 125.29 39.74 4 picks[2]
8.51 8.36 129.21 41.49 4 picks[3]
    16.63     16.01    199.61      43.93       6   picks[4]
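For readers outside Maple, the per-element vs. bulk-generation pattern carries over directly. Here is a rough Python/NumPy analog — an illustration, not the author's code; the array size and seed are just for the demo:

```python
import numpy as np

n = 10000

# Analog of picks[1.3]: one interpreter-level generator call per entry
# (like calling GenerateFloat64() once for each element).
rng = np.random.default_rng(0)
loop_arr = np.empty((n, 2))
for i in range(n):
    for j in range(2):
        loop_arr[i, j] = rng.random()

# Analog of picks[0.1]: one bulk, compiled-level call
# (like the rtable frandom initializer).
bulk_arr = np.random.default_rng(0).random((n, 2))

# Both yield an n x 2 array of floats uniform on [0, 1); the bulk call
# is typically orders of magnitude faster, mirroring the Maple timings.
print(bulk_arr.shape)  # (10000, 2)
```

The moral is the same in both systems: pushing the whole generation loop into compiled code beats repeated per-element calls from the interpreted level.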
Ring Functions
June 10th 2010, 01:06 PM #1
Jun 2010
Ring Functions
I had posted this problem before, however the expression didn't appear as expected. The final expression should read f ^2=f
Let the commutative ring F(R) be the set of all functions R → R equipped with the operations of pointwise addition and pointwise multiplication: given f, g members of F(R), define functions f+g
and fg by
f+g: a → f(a) + g(a) and fg: a → f(a)g(a)
(Notice fg is not their composite)
Pointwise addition and multiplication are the operations on functions occurring in calculus for example.
∫(f(x) + g(x)) dx = ∫f(x) dx + ∫g(x) dx
And D(fg) = D(f)g + fD(g), where D denotes the derivative.
The sum f+g in the first integrand is pointwise addition while the product fg in the derivative formula is pointwise multiplication.
Show that the above commutative ring F(R) contains elements f ≠ 0, 1 with f^2 = f.
The expression should read: the square of f equals f. What I mean here is that f multiplied by f equals f. Symbolically this is f^2 = f.
Hmmm... this would mean that for all $x\in\mathbb{R}\,,\,\,f^2(x)=f(x)\iff f(x)\left(f(x)-1\right)=0\Longrightarrow f(x)=0\,\,\,or\,\,\,f(x)=1$ , since $\mathbb{R}$ is an integral domain (in fact
a field, of course).
We can thus define $f(x):=\left\{\begin{array}{ll}0&\,if\,\,\,x\leq 0\\1&\,if\,\,\,x>0\end{array}\right.$ , and there we have a non-trivial idempotent.
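The idempotent identity is easy to spot-check numerically for that step function (a throwaway sketch, not part of the original thread):

```python
def f(x):
    # The non-trivial idempotent proposed above: 0 for x <= 0, 1 for x > 0.
    return 0 if x <= 0 else 1

# Pointwise multiplication: (f*f)(x) = f(x)*f(x).  Idempotency means
# f(x)*f(x) == f(x) for every x, which holds because f only takes the
# values 0 and 1 -- the two real solutions of t^2 = t.
for x in [-2.5, -1, 0, 0.001, 3, 100]:
    assert f(x) * f(x) == f(x)

# And f is neither the zero function nor the constant-1 function:
assert f(-1) == 0 and f(1) == 1
```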
Area Models of Multiplication
Area Model Video
What is an area model for multiplication? I learned this week that the area model for multiplication is a pictorial way of representing multiplication. In the area model, the length and width of a
rectangle represent factors, and the area of the rectangle represents their product. According to Math and Teaching.org "The area model for multiplication establishes the groundwork for helping
visual learners in the conceptual understanding of the traditional algebraic skills of polynomial multiplication and factoring."
Here is an example of the area model for multiplication:
I think the area model is a great way to show students that multiplication is nothing more than adding up partial products. Base 10 blocks are useful manipulatives for teaching kids about
multiplication. The blocks allow students to get their hands on the numbers to see how the numbers add up to the final product. I can see how cooperative learning groups would be a good way to let
kids practice the area model. Start with groups of 3-4 students and have them use the base 10 blocks together to solve the equations.
Math and Teaching - Area Model
Websites like Kidspiration have manipulatives that would be fun for students to use while learning the area model for multiplication. Many classrooms have SmartBoards that students could use to practice on during instruction. Students would solve a 2-digit multiplication problem by drawing an area model and writing the partial products as they go. The process of adding the partial products would help students see the step-by-step process. I think the area model is a method that will make sense to kids and allow them to see their answers.
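The "adding up partial products" idea is easy to make concrete in code; here is a small sketch for two-digit factors (the numbers are chosen only for illustration):

```python
def area_model(a, b):
    """Multiply two 2-digit numbers by summing the four partial products,
    mirroring the four rectangles in an area model."""
    a_tens, a_ones = divmod(a, 10)
    b_tens, b_ones = divmod(b, 10)
    partials = [
        (a_tens * 10) * (b_tens * 10),  # big rectangle (tens x tens)
        (a_tens * 10) * b_ones,         # tens x ones
        a_ones * (b_tens * 10),         # ones x tens
        a_ones * b_ones,                # small rectangle (ones x ones)
    ]
    return partials, sum(partials)

parts, total = area_model(23, 14)
print(parts, total)  # [200, 80, 30, 12] 322
```

Each entry of `parts` is the area of one rectangle in the model, and their sum is the product, just as students see with base 10 blocks.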
Math Forum Discussions
Topic: Rotating a vector about a new origin
Replies: 0
Liger
Posted: Jan 10, 2011 11:33 AM
Posts: 1
Registered: 1/10/11

I'm trying to do something like this: http://img28.imageshack.us/i/vectorq.jpg/
The pyramid is linked to the box. I have the positions and rotations of both the green box and the green pyramid. The axis intersecting the green box is at (0, 0, 0) with a rotation of
(0, 0, 0). I want to move and rotate this group to the new axis centered in the purple box. I have the position and rotation of the purple box. What I need to know is how to calculate the
new rotation and position of the red pyramid.
It's been quite a while since I've done stuff like this, so if someone could help me with formulas and how to use them to calculate the new rotation and position, I would appreciate it.
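A common way to set this up, sketched in 2-D for clarity (the 3-D case is the same idea with 3×3 rotation matrices; all the numbers below are made up): the pyramid's new world position is the purple box's position plus the box's rotation applied to the pyramid's offset from the green box, and the pyramid's new rotation is the box's rotation composed with the pyramid's original rotation.

```python
import math

def rot(theta):
    """2-D rotation matrix as a nested list."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def apply(m, v):
    return [m[0][0]*v[0] + m[0][1]*v[1],
            m[1][0]*v[0] + m[1][1]*v[1]]

# Pyramid's offset and rotation relative to the green box (local frame):
offset = [2.0, 0.0]
pyramid_local_angle = 0.5

# New parent (purple box) pose:
box_pos = [5.0, 3.0]
box_angle = math.pi / 2            # rotated 90 degrees

# New world pose of the pyramid:
R = rot(box_angle)
rotated_offset = apply(R, offset)
pyramid_pos = [box_pos[0] + rotated_offset[0],
               box_pos[1] + rotated_offset[1]]
pyramid_angle = box_angle + pyramid_local_angle   # rotations add in 2-D

print(pyramid_pos, pyramid_angle)
```

In 3-D the only change is that rotations compose by matrix (or quaternion) multiplication rather than by adding angles.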
Probability of Compound Events
Table of Contents
ACOS Objective 13
Determine the probability of a compound event.
ARMT Possible Points = 6 (MC, GR, OE)
• The drawings of one or more spinners may be used.
• Coins may be used.
• Compound events with replacement or without replacement will be required.
• Word problems/real-life situations may be used.
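The with/without-replacement distinction called out above can be shown by exact enumeration. A small sketch (the bag contents here are invented for illustration, not taken from the item specs):

```python
from fractions import Fraction
from itertools import product, permutations

bag = ["R", "R", "R", "B", "B"]   # 3 red marbles, 2 blue marbles

def prob(outcomes, want):
    hits = sum(1 for o in outcomes if o == want)
    return Fraction(hits, len(outcomes))

# With replacement: every ordered pair of draws is possible.
with_rep = prob(list(product(bag, repeat=2)), ("R", "R"))

# Without replacement: ordered pairs of two *different* marbles.
without_rep = prob(list(permutations(bag, 2)), ("R", "R"))

print(with_rep, without_rep)  # 9/25 3/10
```

The second draw's probability changes once a marble is removed (3/5 · 2/4 instead of 3/5 · 3/5), which is exactly the compound-event idea students need to see.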
Sample problems from Item Specs
The AMSTI unit
What Do You Expect
is a great resource for teaching this objective.
The 8th grade unit
Clever Counting
has several activites appropriate for 7th grade.
Step Right Up and Win a Prize
What are my chances of winning? You probably ask yourself that question any time you play a game that offers a prize for winning. You're about to embark on a gaming adventure. You'll investigate the
mathematical probabilities of winning various carnival games. You'll also research and design a game of your own. So, come on and take a chance! Sharpen up that hand-eye-coordination and grab your
probability tool kit. This adventure is a win-win situation!
Game Challenge 1:
Using the given carnival games and their data, complete different probability calculations and make predictions based on your calculated probabilities.
Game Challenge 2:
Second, research other carnival games and design a carnival game of your own. Then, prepare a report detailing why your game would be good to include in a school carnival.
Game Challenge 3:
Third, create presentation of your findings that includes a scale model or scale drawing(s) of your game.
Math Content:
compound probability, theoretical probability, experimental probability, simulations, scale drawing or model
Language arts (persuasive writing)
Cool Problems
A Throw of the Dice
If you roll two dice, what number is most likely to come up?
The Daily Spark: Critical Thinking, p. 25.
Drawing Socks
It's time to get up. You roll out of bed, eyes still closed, and stagger over to your sock drawer. You know that you have three green socks, five red socks, eight blue socks, nine black socks, and
twelve white socks scattered at random in the drawer. How many socks will you need to withdraw (keeping your eyes closed) in order to be sure you've got a matching pair?
The Daily Spark: Critical Thinking, p. 27.
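The sock problem is a pigeonhole argument: with five colors in the drawer, the worst case is drawing one sock of each color, so one more draw — six — guarantees a matching pair. A quick brute-force check (illustrative only):

```python
import random

colors = {"green": 3, "red": 5, "blue": 8, "black": 9, "white": 12}
guaranteed = len(colors) + 1   # 5 colors -> 6 draws always give a pair

drawer = [c for c, k in colors.items() for _ in range(k)]
rng = random.Random(0)
for _ in range(1000):
    draw = rng.sample(drawer, guaranteed)
    # Among 6 socks drawn from only 5 colors, some color must repeat:
    assert len(set(draw)) < len(draw)

print(guaranteed)  # 6
```

Note the exact counts of each color don't matter; only the number of colors does.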
Flip a Coin
You have flipped a perfectly normal coin ten times and gotten heads every time. What is the probability that you will get heads the next time you flip it?
The Daily Spark: Critical Thinking, p. 36.
The High Stakes World of Statistics
This 26-minute video from United Streaming (requires membership) contains 8 segments. They are targeted to grades 9-12, but some segments would be suitable for middle school.
What's the chance you'll draw a face card out of a 52-card deck? That's one of many questions related to probability! Find out about probability and more. © 2002 Standard Deviants
Stick or Switch
In this TI-activity, students will simulate an experiment to determine the best strategy for winning a game, whether it is staying with the card originally picked or switching to the other card. Each
strategy and outcome is given a number. The numbers will be collected in a list.
My Filamentality
Hotlist Determining Simple Probability
From AAA math...Explanation and practice with feedback, timed games
Fundamental Counting Principle
More from AAA
Probability Problems from Figure This Math Challenge Majority Vote
What percentage does it take to win a vote?
Matching Birthdays
In any group of six people, what is the probability that everyone was born in different months?
Does drinking soda affect your health?
Two Points
Probability question about basketball free throws
I Win!
Is this game fair?
How many fish in the pond?
How could I send the check and not pay the bill?
National Library of Virtual Manipulatives Probability Activities
Coin toss, spinners, stick or switch, and more
Shodor Interactivate: Marbles
This activity allows the user to pull marbles from a 'bag' in a variety of ways in order to explore several different concepts involving randomness and probability. The user controls the number and
color of marbles located in the "bag". The user can also change whether the marbles are replaced after each draw and if the order in which the marbles are drawn matters. A table presents the user
with a summary of the trials, and allows them to explore the difference between experimental and theoretical results.
How to Break Down a Cubic Difference or Sum
After you’ve checked to see if there’s a Greatest Common Factor (GCF) in a given polynomial and discovered it’s a binomial that isn’t a difference of squares, you should consider that it may be a
difference or sum of cubes.
A difference of cubes sounds an awful lot like a difference of squares, but it factors quite differently. A difference of cubes is a binomial that is of the form (something)^3 – (something else)^3.
To factor any difference of cubes, you use the formula a^3 – b^3 = (a – b)(a^2 + ab + b^2).
A sum of cubes is a binomial of the form: (something)^3 + (something else)^3. When you recognize a sum of cubes a^3 + b^3, it factors as (a + b)(a^2 – ab + b^2).
For example, to factor 8x^3 + 27, you first look for the GCF. You find none, so now you use the following steps:
1. Check to see if the expression is a difference of squares.
You want to consider the possibility because the expression has two terms, but the plus sign between the two terms quickly tells you that it isn’t a difference of squares.
2. Determine if you must use a sum or difference of cubes.
The plus sign tells you that it may be a sum of cubes, but that clue isn’t foolproof. Time for some trial and error: Try to rewrite the expression as the sum of cubes; if you try (2x)^3 + (3)^3,
you’ve found a winner.
3. Break down the sum or difference of cubes by using the factoring shortcut.
Replace a with 2x and b with 3. The formula becomes [(2x) + (3)] [(2x)^2 – (2x)(3) + (3)^2].
4. Simplify the factoring formula.
This example simplifies to (2x + 3)(4x^2 – 6x + 9).
5. Check the factored polynomial to see if it will factor again.
You’re not done factoring until you’re done. Always look at the leftovers to see if they’ll factor again. Sometimes the binomial term may factor again as the difference of squares. However, the
trinomial factor never factors again.
In this example, the binomial term 2x + 3 is a first-degree binomial (the exponent on the variable is 1) without a GCF, so it won't factor again. Therefore, (2x + 3)(4x^2 – 6x + 9) is your final answer.
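Step 4's result can be double-checked by expanding the factors. A quick sketch using coefficient lists in ascending powers (illustrative, not part of the original article):

```python
def poly_mul(a, b):
    """Multiply polynomials given as ascending-power coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

binomial = [3, 2]        # 2x + 3
trinomial = [9, -6, 4]   # 4x^2 - 6x + 9

print(poly_mul(binomial, trinomial))  # [27, 0, 0, 8]  ->  8x^3 + 27
```

The middle coefficients cancel to zero, recovering the original sum of cubes; the difference-of-cubes pattern checks out the same way with the signs flipped.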
Total Ordering on Real Function
April 13th 2008, 08:21 AM #1
Global Moderator
Nov 2005
New York City
Total Ordering on Real Function
I have been trying to find a total ordering on $[0,1]^{[0,1]}$, the set of all functions from $[0,1]$ to $[0,1]$. The cardinality of this set is $2^{ 2^{\aleph_0}} > 2^{\aleph_0}$. I tried a few
ideas and they all failed. I happen to know a total ordering exists by the axiom of choice (this is correct, right?). But I am trying to find an explicit ordering on this set for a
One of the most important consequences of the Axiom of Choice is the Well –Ordering Theorem: Every set can be well ordered. However, the proof is an outstanding example of a nonconstructive
proof. That is, the proof does not give any indication of how the order relation is constructed. For example, it is not known how the set of real numbers can be well ordered.
For homework, I have the following problem:
If $(P,<)$ is a totally ordered set which has a countable dense subset then $|P|\leq 2^{\aleph_0}$
At first I tried proving this but failed. I assume my textbook had a mistake in it. Here is the counterexample: Let $P_1 = [0,1]^{[0,1]}$ which is totally ordered (okay, so we need the axiom of
choice). Let $P_2 =\mathbb{Q}$ ordered in the natural way. Define $P = P_1\cup P_2$ where any element in $P_2$ is less than any element in $P_1$. This ordering is a total ordering and it has a
countable dense subset, i.e. $P_2$, yet $|P| > 2^{\aleph_0}$. Thus, it is wrong.
But I am unsure of this counterexample. Because all the set theory we done so far never used the axiom of choice. Maybe without the axiom of choice this statement is true? Can that be? Or if it
is possible to construct counterexample using axiom of choice (eventhough we never used it before) it must mean this problem is faulty. This is the reason why I tried to find an explicit ordering
on the real functions, but that seems really hard to do. Do you happen to know of any total ordering for sets with cardinality greater than the countinuum?
Last edited by ThePerfectHacker; April 13th 2008 at 09:38 AM.
How about this: I have been trying to find an explicit total ordering on $[0,1]^{[0,1]}$. I found a way to do that if we can find an explicit well-ordering of the interval $[0,1]$.
Suppose that we can explicitly well-order $[0,1]$; call it $<$. Let $f,g$ be elements of $[0,1]^{[0,1]}$. Say $f \neq g$; then the set $\Delta(f,g) = \{ x \in [0,1] : f(x) \neq g(x) \}$ is a
non-empty subset of $[0,1]$ which means there is a least element $x_0$. If $f(x_0) < g(x_0)$ then we define $f \prec g$. This defines a total ordering on $\left( [0,1]^{[0,1]}, \prec \right)$.
This reduces the problem to well-ordering the continuum. So are there any well-known (pun) well-orderings of the continuum?
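On a finite domain the comparison in that construction is easy to mimic (a toy sketch; the real set-theoretic difficulty is, of course, well-ordering $[0,1]$ itself, which is exactly what is not known explicitly):

```python
def precedes(f, g, domain):
    """Compare two functions (given as dicts) at the least point of
    disagreement, where `domain` lists the points in well-order."""
    for x in domain:                 # scan points in the given well-order
        if f[x] != g[x]:
            return f[x] < g[x]       # f precedes g iff f is smaller there
    return False                     # f == g on all of the domain

domain = [0.0, 0.25, 0.5, 1.0]       # a (trivially) well-ordered finite domain
f = {0.0: 0.1, 0.25: 0.3, 0.5: 0.9, 1.0: 0.2}
g = {0.0: 0.1, 0.25: 0.7, 0.5: 0.0, 1.0: 0.2}

print(precedes(f, g, domain))  # True: first disagreement at 0.25, and 0.3 < 0.7
```

Totality follows because distinct functions always have a least disagreement point under a well-order, and the function values there are comparable reals.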
static pressure and airflow at reduced fan speeds
This is probably a stupid question but I've been going around in circles for a while now and have gotten myself completely confused.
I have a fan and a pump, with performance charts (static pressure vs. air flow) for both. I need to try and determine flow rates for each device when run at reduced speeds.
The fan/pump will be pulling a gas flow through a small tube, and exhausting to atmosphere. I know that's not the right way to use a fan, but that's what I'm stuck with right now. I can measure the
pressure at the fan/pump inlet while running at different voltages.
Searching around the internet, I found that air flow is linearly proportional to fan speed and static pressure is proportional to fan speed squared.
So I'm guessing that if I'm at half the fan speed I should be at a quarter (.5^2) of the static pressure. I can measure the pressure, multiply by 4 and look it up on the chart. Then take the
corresponding air flow and knock it in half to get the actual flow rate.
That's a complete guess. If anyone can steer me in the right direction, I would appreciate it because I'm really stuck here. Thanks for your help.
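For what it's worth, the guess matches the standard fan affinity laws (flow ∝ speed, static pressure ∝ speed²). A sketch of the lookup procedure — all numbers below are invented for illustration:

```python
def full_speed_pressure(measured_p, speed_ratio):
    """Scale a pressure measured at reduced speed up to the full-speed
    curve: pressure goes with the square of the speed ratio."""
    return measured_p / speed_ratio**2

def actual_flow(chart_flow, speed_ratio):
    """Flow read off the full-speed chart scales linearly with speed."""
    return chart_flow * speed_ratio

# Example: running at half speed we measure 25 Pa of static pressure.
p_chart = full_speed_pressure(25.0, 0.5)    # 100 Pa on the full-speed curve
# Suppose the chart says 100 Pa corresponds to 120 CFM at full speed:
q = actual_flow(120.0, 0.5)                 # 60 CFM at half speed

print(p_chart, q)  # 100.0 60.0
```

One caveat: the affinity laws assume the system resistance curve stays fixed, which is reasonable for a fixed tube pulling to atmosphere.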
3(a + b) = 3a + 3b is an example of which algebraic property?
Distributive Property
Associative Property of Addition
Commutative Property of Addition
Symmetric Property
wow for a change it is not C go with Distributive Property
you can tell because it has both \(+\) and \(\times \) in it in english (well math english) you say "multiplication distributes over addition" i.e \(a(b+c)=ab+ac\)
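And a trivial numeric spot-check of the identity (illustration only):

```python
# Distributive property: a*(b + c) == a*b + a*c for any numbers.
for a, b, c in [(3, 4, 5), (-2, 0.5, 7), (0, 9, -1)]:
    assert a * (b + c) == a * b + a * c

# The specific instance from the question, with a = 3:
assert 3 * (4 + 5) == 3 * 4 + 3 * 5 == 27
```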
Geometry and the imagination
In winter and spring of 2001, Nathan Dunfield and I ran a seminar at Harvard whose purpose was to go through Thurston’s proof of the geometrization theorem for Haken manifolds. This was a very useful
and productive exercise, and there was wide participation from faculty and students. As well as talks by Nathan and myself, there were talks by David Dumas, Laura de Marco, Maryam Mirzakhani, Curt
McMullen, Dylan Thurston, and John Holt. At the conclusion of the semester, Bill Thurston agreed to come out and lead a discussion on geometrization, in which he ended up talking a bit about what had
led him to formulate the conjecture in the first place, what ideas had played into it, how and when he had gone about proving it, his ideas about exposition, and so on.
I had recently bought a video camera, and decided to tape Bill’s talk. I never did anything with it until now (in fact, I don’t think I ever re-watched anything that I taped), but it turned out to be
not too difficult to transfer the file from tape to computer. Since this seems like an interesting fragment of intellectual history, I thought it might be worthwhile to post the result to YouTube —
the video link is here.
A Wealth of Numbers: An Anthology of 500 Years of Popular Mathematics Writing
Despite what we may sometimes imagine, popular mathematics writing didn’t begin with Martin Gardner. In fact, it has a rich tradition stretching back hundreds of years. This entertaining and
enlightening anthology – the first of its kind – gathers nearly one hundred fascinating selections from the past 500 years of popular math writing, bringing to life a little-known side of math
history. Ranging from the late fifteenth to the late twentieth century, and drawing from books, newspapers, magazines, and websites, A Wealth of Numbers includes recreational, classroom, and work
mathematics; mathematical histories and biographies; accounts of higher mathematics; explanations of mathematical instruments; discussions of how math should be taught and learned; reflections on the
place of math in the world; and math in fiction and humor.
Featuring many tricks, games, problems, and puzzles, as well as much history and trivia, the selections include a sixteenth-century guide to making a horizontal sundial; ‘Newton for the Ladies’
(1739); Leonhard Euler on the idea of velocity (1760); ‘Mathematical Toys’ (1785); a poetic version of the rule of three (1792); ‘Lotteries and Mountebanks’ (1801); Lewis Carroll on the game of logic
(1887); ‘Maps and Mazes’ (1892); ‘Einstein’s Real Achievement’ (1921); ‘Riddles in Mathematics’ (1945); ‘New Math for Parents’ (1966); and ‘PC Astronomy’ (1997). Organized by thematic chapters, each
selection is placed in context by a brief introduction.
A unique window into the hidden history of popular mathematics, A Wealth of Numbers will provide many hours of fun and learning to anyone who loves popular mathematics and science.
‘[F]or the enthusiast for the history of popular maths writing this is a must-have book.’ Brian Clegg, Popular Science
‘In A Wealth of Numbers, we have the end product of what must have been a lot of challenging research. … This book works well for random browsing as well as for sustained reading; purely recreational
essays and puzzle problems are well-mixed with more serious topics such as an article explaining Cantor’s diagonalization proof and “Cubic equations for the practical man.” There’s something in here
for everyone, and it’s a great contribution to the mathematics literature to have it all in one place.’ Mark Bollman, MAA Reviews
‘This accessible and inviting anthology shows how entertaining it can be to think about mathematics. The selection, organization, and commentaries result in a unique book that is equal to far more
than the sum of its parts.’ Paul C. Pasles, author of Benjamin Franklin’s Numbers
Really Stuck, graphs of functions???
November 24th 2008, 02:26 AM #1
Junior Member
Nov 2008
Really Stuck, graphs of functions???
The Question Reads:
Draw the graphs of the functions listed over the given ranges of x, in steps of one unit. From these, estimate the nature of the roots.
(i) y = x^2 + 2x - 6 (range of x = -5 to +3)
I understand pretty much none of this. I don't know how to draw the graph, because I don't know what to plot or how to calculate the data in order to plot it. I've got no literature, had no worked examples, and a useless lecturer who likes to show how clever he is by baffling me and the other 70 students with utter gobbledygook.
Anyway, enough about the poor quality of my lecturers. Can anyone please help? I know it's a very basic question, but I really just don't know where to start. Many thanks.
You just plug in a value of x from the range -5 to +3, i.e. -5, -4, -3, -2, -1, 0, 1, 2, 3. That's all you do; then you will get the value of y. You should do at least 3 to 5 points.
I've plotted those as I understand from the above:
so I should have calculations such as -5^2 + (2 x -5) - 6 = -41
-3^2 + (2 x -3) - 6 = -21
3^2 + (2 x 3) - 6 = 9
Etc. This would give me what looks to be a straight line, or possibly a curved graph, whereas I had expected a negative curve such as:
Can I also ask: would I begin plotting at -5 on the negative x-axis?
Many thanks
No, you can't just put -5 straight on the axis. You have to plug -5 in as the value of x and then solve for y, i.e.:
y = x^2 + 2x - 6
y = (-5)^2 + 2(-5) - 6
y = 25 - 10 - 6
y = 9, so the first point would be (-5, 9). (Careful: squaring -5 gives +25, not -25.)
Usually I just use the smallest values of x (-2, -1, 0, 1, 2), just so I can plot it on a piece of paper and look to see how the graph is going to go.
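The substitution described above can be tabulated with a short script (a hypothetical illustration, assuming the "x(2)" in the question means x squared); note in particular that squaring a negative x gives a positive value, e.g. (-5)^2 = 25, so y = 25 - 10 - 6 = 9 at x = -5:

```python
# Tabulate y = x^2 + 2x - 6 for x = -5, -4, ..., 3 (steps of one unit).
def f(x):
    return x**2 + 2*x - 6

points = [(x, f(x)) for x in range(-5, 4)]
for x, y in points:
    print(f"x = {x:2d}  ->  y = {y:3d}")

# A sign change in y between consecutive integers brackets a root,
# which is the "nature of the roots" the question asks about.
brackets = [(x, x + 1) for x, y in points[:-1] if y * f(x + 1) < 0]
print(brackets)  # [(-4, -3), (1, 2)]: two real roots, one in each interval
```

Plotting these nine points gives an upward-opening parabola crossing the x-axis twice, once between x = -4 and -3 and once between x = 1 and 2, which answers the "nature of the roots" part: two real roots.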
Yep, with you on that; I don't think I worded it very well.
What I meant was: when x = -5, y = -41, so the coordinates for the first plot would be x = -5 and y = -41, and so on?
One more question :0)
With an equation such as y = -6 + 4x - x^2, is it better to transpose this in order to bring x^2 to the front, e.g. y = -x^2 + 4x - 6?
I'm not sure why, but I feel that quadratic functions should start with the x^2 term.
Thanks again Brent, really appreciate your time, buddy.
Polymath7 discussion thread
Filed under: discussion,hot spots — Terence Tao @ 5:50 am
The “Hot spots conjecture” proposal has taken off, with 42 comments as of this time of writing. As such, it is time to take the proposal to the next level, by starting a discussion thread (this one)
to hold all the meta-mathematical discussion about the proposal (e.g. organisational issues, feedback, etc.), and also starting a wiki page to hold the various facts, strategies, and bibliography
around the polymath project (which now is “officially” the Polymath7 project).
I’ve seeded the wiki with the links and references culled from the original discussion, but it was a bit of a rush job and any editing would be greatly appreciated. From past polymath experience,
these projects can get difficult to follow from the research threads alone once the discussion takes off, so the wiki becomes a crucial component of the project as it can be used to collate all the
progress made so far and make it easier for people to catch up. (If the wiki page gets more complicated, we can start shunting off some stuff into sub-pages, but I think it is at a reasonable size
for now.)
One thing I see is that not everybody who has participated knows how to make latex formatting such as $\Delta u = \lambda u$ appear in their comments. The instructions for that (as well as a
“sandbox” to try out the code) are at this link.
Once the research thread gets long enough, we usually start off a new thread (with some summaries of the preceding discussion) to make it easier to keep the discussion at a manageable level of
complexity; traditionally we do this at about the 100-comment mark, but of course we can alter this depending on how people are able to keep up with the thread.
[...] 7 is officially underway. It is here. It has a wiki here. There is a discussion page here. [...]
Pingback by Polymath 7 « Euclidean Ramsey Theory — June 9, 2012 @ 5:22 pm
My ignorance is embarrassing, but I’ll ask anyway. Could someone point me to directions on how to edit a comment I make on the research thread after I’ve posted it? I’d like to save moderators and
readers the irritation of fixing my typographical/LaTeX errors.
Comment by Nilima Nigam — June 9, 2012 @ 7:38 pm
• Unfortunately, the hosting company for this blog doesn’t allow editing of comments by users :-(. So I’ll be editing comments manually, which has worked well enough in the past.
For really lengthy computations, though, it may be a good idea to put the details on the wiki (maybe creating a subpage if necessary) and just put a link and a summary on the blog, since the wiki
is easier to edit and format.
Comment by Terence Tao — June 9, 2012 @ 8:29 pm
□ OK, thanks!
Comment by Nilima Nigam — June 10, 2012 @ 1:44 am
• It took me a while to find something that worked… the trick seems to be to type $ latex [YOUR LATEX CODE] $
but without a space between the first $ and the word ‘latex’.
Comment by Chris Evans — June 11, 2012 @ 3:03 am
As there has been a lot of talk about approaching the conjecture using Bessel functions I decided to finally learn about them. I wrote up a summary (basically for my own benefit to understand the
material better) which can be seen at
The case of a sector is considered at the end, which ties back to the discussion in the research thread.
Comment by Chris Evans — June 10, 2012 @ 9:20 pm
The research thread is getting rather lengthy, so I will probably roll it over tomorrow by starting a new research thread that tries to summarise the progress so far, and then direct all discussion
to the new thread. This is just a heads-up, though; in the meantime, keep using the current research thread. :-)
Comment by Terence Tao — June 12, 2012 @ 2:45 am
• Hmm, it’s only been three days and I think I may have to roll over the thread again, maybe sometime tomorrow. (From past experience with polymath projects, the first week or two are quite hectic
and chaotic, with lots of people pursuing lots of possible angles of attack but after that things settle down, focusing on a core group of people pursuing a core set of strategies; the progress
becomes less exciting, but more steady.)
Comment by Terence Tao — June 15, 2012 @ 2:04 am
Continuing the comments of Hung (and moving the discussion to the discussion thread):
I met with Hung today to discuss analytic proof of Isosceles Triangle – Special Case – Corollary 4, so maybe I can clarify some of our confusion.
Hung suggested that an argument could be made via the scalar maximum principle by instead of considering the vector-valued gradient of $u$, considering the directional derivative $u_{\xi}$ in a
direction $\xi$ which “points within the cone” (i.e. $0\leq\arg(\xi)\leq \Theta$, where $\Theta$ is the angle ABD).
This seems reasonable: $u_{\xi}$ itself solves the heat equation. By considering, say $u_0\equiv 1$, we have that $u_{\xi}\leq 0$ at the bottom of the parabolic boundary. Along the Dirichlet boundary
$u_{\xi}\leq 0$ for all time (because $u$ is non-negative in the interior). But we couldn’t figure out why $u_{\xi}\leq 0$ for all time on the Neumann part of the parabolic boundary. If we only had
that we would be done by the scalar maximum principle… Maybe there is a simple reason we missed it.
My understanding of the vector-valued (weak) maximum principle is that if you have a vector valued function which solves the heat equation, and its parabolic boundary data lies in some convex set $K$
, then so too will the function on the entire parabolic domain. Switching between this and the scalar valued maximum principle is only a matter of projecting onto an axis $\xi$ or equivalently
considering your convex set $K$ to be the space between two hyperplanes (Things get messier for, say, reaction-diffusion equations but for the heat equation I believe considering the gradient and
directional derivatives are equivalent?)
So in arguing via the vector-valued maximum principle we are taking as our convex set the infinite sector $S$ it seems. But then I wasn’t sure what the importance of $\epsilon$ is. Is an argument
with $\epsilon$ necessary if we take for granted the weak maximum principle in the previous paragraph? Or is it that the argument is going beyond the basic weak maximum principle (in either the
scalar of vector-valued case)? Part of the confusion for me I think is in parsing things as both the domain and range of $u$ were in the sector $S$.
Comment by Chris Evans — June 16, 2012 @ 7:32 am
• (Thanks, by the way, for moving these sorts of discussions to the discussion thread rather than the research thread – I should have mentioned earlier that this thread is intended in part to help
explain and clarify all the hectic stuff that goes on on the research thread.)
It is a bit odd that one can’t use an “off the shelf” maximum principle, either in the scalar or vector setting, to establish this result, but has instead to “roll one’s own” principle by
adapting the proof (and this is why the epsilons have to come in, usually they are hidden in the proof of the textbook maximum principle). I’m not sure exactly why this is the case, but I think
it is because the Neumann boundary condition does not directly place the gradient of u inside S on the boundary, but instead offers the option of either lying in the interior of S or in the
exterior, and one then has to rule out the second possibility by an additional argument (taking advantage of either the Neumann boundary conditions or on a reflection argument) to conclude.
I think considering just a single derivative $u_\xi$ doesn’t work because when one bounces off of a boundary, this derivative somehow gets mixed up with other derivatives, so one has to control
the whole gradient at once or else one can’t predict what happens on a boundary.
Comment by Terence Tao — June 16, 2012 @ 4:35 pm
Ok, I see your point about losing information when only considering one directional derivative; it seems that part of the argument is to take advantage of the fact that $\nabla u$ points parallel to the boundary for points on the Neumann boundary (something which cannot be reflected in a property of a single derivative $u_{\xi}$).
It seems you take advantage of this by expanding the sector to the set $S_{\epsilon(t+1)}$, which then allows you to identify a unique point $v$ that $\nabla u(x,t)$ is equal to for $x\in$ BD. But then I don’t see where you take advantage of knowing the particular location of $v$ (besides knowing that it is on the boundary).
Is it correct that the idea of reflecting across BD is just to justify that the equality $\partial_t\left(\nabla u\right)=\Delta\left(\nabla u\right)$ holds on the boundary BD as well?
Another point I don’t get is why you consider the set $S_{\epsilon(t+1)}$ and not just $S_{\epsilon}$. Is the idea that “as the set $S_{\epsilon(t+1)}$ is expanding in time, the only way for $\nabla u$ to catch it is if it is actively moving towards the (receding) boundary”… which prevents it from touching the boundary with “zero derivative”?
P.S. Looking at the argument I don’t see any place that the acuteness of the triangle is used. Is it correct to say the same argument would work for an obtuse mixed Dirichlet-Neumann triangle?
Comment by Chris Evans — June 16, 2012 @ 6:39 pm
• The location of v is important (as pointed out by Hung) because it is both on the boundary of $S_{\varepsilon(t+1)}$ and on its reflection, which allows one to keep the direction of $\Delta \nabla u$ pointing inwards or tangentially. As you say, the receding nature of the boundary is to make sure that the time derivative $\partial_t \nabla u$ points strictly outwards, rather than tangentially, since otherwise one doesn’t quite get a contradiction. (One could of course use some other increasing function of t than $\varepsilon(t+1)$, e.g. $\varepsilon e^t$, if desired.)
I think the argument works fine for obtuse triangles, though one has to be a little careful with regularity. Once one has an obtuse Neumann angle, one no longer has C^2 regularity at the
vertices, but only $C^{1,\alpha}$ for some $\alpha$ (basically, one has $\pi/\theta$ degrees of regularity at a vertex of angle $\theta$, except when $\theta$ divides evenly into $\pi$ when one
has smoothness instead). But this may still be enough regularity to run the argument.
Comment by Terence Tao — June 16, 2012 @ 7:24 pm
□ Ok, I think I follow the argument now: The idea is that if $(x_0,t_0)$ is the first point with $v=\nabla u(x_0,t_0)$ on $\partial S_{\epsilon(t+1)}$ and $x_0\in$ BD, then (considering reflections) points of the form $(x,t_0)$ in the reflected domain have $\nabla u(x,t_0)$ lying in the union of $S_{\epsilon(t+1)}$ and its reflection, which is a *convex* set. Thus the “pull” from these nearby $x$, i.e. $\Delta \nabla u$, is into/tangent to this convex set and in particular doesn’t point in the direction of $\partial_t \nabla u$.
I don’t think this argument would work for the obtuse case then as the union of $S_{\epsilon(t+1)}$ and its reflection wouldn’t be convex (and so while $\Delta \nabla u$ might be horizontal, it might still point in the same direction as $\partial_t \nabla u$).
Also, whether my understanding above is correct or not, because the level curves of $u$ are roughly concentric circles at the corner, in the obtuse case even if we could show that $\nabla u$ stayed within certain angle bounds, it would not be the case that the *directional derivative* in the expected directions would have constant sign. So, it definitely seems considering $\nabla u$ has its advantages!
Comment by Chris Evans — June 17, 2012 @ 4:29 am
☆ This may be somewhat tangential to the original aim of this project, but it may be of interest to try to explore the vector maximum principle/coupled Brownian motion connection further.
Two obvious directions to pursue are (a) finding a maximum principle version of mirror coupling arguments and (b) finding analogues of both coupled Brownian motion and maximum principle
arguments in the discrete graph setting. (There must presumably be some literature on this sort of thing already – I would find it hard to believe that the two most fundamental tools in
parabolic PDE have not been previously linked together!)
Comment by Terence Tao — June 17, 2012 @ 7:21 am
○ Well actually my introduction to the hot spots problem came from David Jerison who suggested that his graduate student, Nikola Kamburov, and I try to come up with a purely analytic
replacement for the coupling arguments used in hotspots problems (specifically those in the paper on Lip domains by Atar and Burdzy). We approached it by trying to consider maximum
principles etc on the product domain, but we never formalized anything concrete… it seems that we should have been looking at the gradient vector (although as your argument shows some
finesse is required even there)
Comment by Chris Evans — June 17, 2012 @ 7:33 am
Here is another tool that may be useful. As I don’t see a direct application to solving the main conjecture, I will just give a brief explanation here in the discussion thread.
The paper “Scaling coupling of Reflecting Brownian Motion and the Hot Spots Conjecture” by Mihai Pascu (http://www.ams.org/journals/tran/2002-354-11/S0002-9947-02-03020-9/home.html) introduces an
exotic coupling known as the “Scaling Coupling” and uses it to identify the extremum of the first eigenfunction for certain mixed Dirichlet-Neumann domains.
The idea of a “Scaling Coupling” is as follows (see also http://www.diaspora-stiintifica.ro/prezentari/wks04/Pascu.pdf for pictures/further explanation):
Consider the unit disk $U$. Then if $X_t$ is (free) Brownian motion started at $x$, it turns out that $Y_t=\frac{X_t}{M_t}$, where $M_t =a\vee\sup_{s\leq t}\{\vert X_s\vert\}$ for $a\leq1$, is (after
a time-scaling) a reflected Brownian motion started at $y=\frac{x}{a}$. In fact, this still holds if $X_t$ itself was a reflected Brownian motion to begin with!
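As a sanity check on the definition above, here is a minimal simulation sketch (hypothetical code: it uses free planar Brownian motion started inside the disk, and it ignores both the boundary reflection and the time-change needed to make $Y_t$ a genuine reflected Brownian motion):

```python
import math
import random

random.seed(1)

def scaling_coupling(x, a, steps=2000, dt=1e-4):
    """Run X as planar Brownian motion from x and set Y = X / M,
    where M_t = max(a, sup_{s<=t} |X_s|)."""
    X = list(x)
    M = a
    path_Y = []
    for _ in range(steps):
        X[0] += random.gauss(0.0, math.sqrt(dt))
        X[1] += random.gauss(0.0, math.sqrt(dt))
        M = max(M, math.hypot(X[0], X[1]))
        path_Y.append((X[0] / M, X[1] / M))
    return path_Y

Y = scaling_coupling((0.3, 0.2), a=0.8)
# Since M_t >= |X_t| by construction, Y_t never leaves the closed unit disk:
print(max(math.hypot(*y) for y in Y) <= 1.0)
```

The only property being checked here is the elementary one that $M_t \geq \vert X_t\vert$ forces $Y_t$ to stay in the closed unit disk.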
Suppose now we consider the upper half of the unit disk, $U^+$ and assign Dirichlet boundary conditions to its flat bottom and Neumann boundary conditions to its arc-boundary. Considering the paths
of $X_t$ and $Y_t$, we see that the *paths* will meet up when $X_t$ first hits the arc-boundary of $U^+$, or otherwise both *paths* will terminate at the same time when they hit the Dirichlet
boundary. But now recall that for $Y_t$ to be a true Brownian motion, we need to scale its speed; to be a Brownian motion, $Y_t$ must travel slower than $X_t$ until they meet. But this implies that
$X_t$ will always hit the Dirichlet boundary before $Y_t$ (since $Y_t$ is further behind on its path). By the usual adjoint/duality between Brownian motion and the heat equation it therefore follows
that the first eigenfunction for $U^+$ is monotone radially out from the origin!
Then, by conformal mapping, Pascu extends this scaling coupling to more general domains. The basic idea is that, as conformal mappings preserve angles, reflected Brownian motion gets mapped to
reflected Brownian motion up to a time-scaling (the idea is that the “reflection angles” at the boundary are preserved, and in the interior “angles of direction of motion” are not “squished” so the
Brownian motion still is equally likely to head in any direction). Therefore, if we consider a $C^{1,\alpha}$ (I think this matters for conformal mappings?) domain $D$ with an axis of symmetry
(whence we can consider the domain $D^+$ with mixed boundary), we can identify it with $U$ via conformal mapping, and define a scaling coupling on $D^+$ via the scaling coupling on $U^+$.
The crucial issue however, is that while the *paths* of $X_t$ and $Y_t$ in $D^+$ are nicely coupled, we need also control on the time-scaling of $X_t$ and $Y_t$ to ensure that $Y_t$ moves slower
along its path than $X_t$ does along its path until the paths meet. This can be ensured provided that $D$ is a *convex* domain (This is the content of Proposition 2.13). Under this additional
assumption, we can argue as before to show that the extremum of the first eigenfunction of $D^+$ lies on its Neumann boundary.
Points of Interest:
1) It may be useful to keep in mind the scaling coupling as a tool (at least at the heuristic level… i.e. we can define a scaling coupling and move it to other domains via conformal maps provided we
are wary of time-scaling effects) especially as this is a somewhat exotic coupling which I don’t believe has a direct analytic counterpart (though I would of course be happy to see one).
2) I don’t really see where in the argument we need to consider all of $U$ and $D$. It seems that we only need the convexity of $D^+$ (for time-scaling purposes) so we could consider just it provided
it is conformally equivalent to $U^+$. This then would give information on the location of the extremum of the first eigenfunction of such a convex mixed-boundary domain.
However, the paper
of Banuelos, Pang, and Pascu seems to argue such an extension and requires that the Dirichlet boundary and Neumann boundary meet at acute angles, so there is probably something I am overlooking.
Comment by Chris Evans — June 18, 2012 @ 5:43 am
I seem to be unable to leave comments on the research thread. Is there an FAQ section I can look up to fix this?
Comment by Nilima Nigam — June 22, 2012 @ 4:51 am
• seems to be working again. [Huh, for some reason your comment got trapped in the spam filter, but I managed to retrieve it - T.]
Comment by Nilima Nigam — June 22, 2012 @ 4:52 am
□ to clarify: I was able to post anonymously, but not after signing in.
Comment by Nilima Nigam — June 22, 2012 @ 5:07 am
Since it may get lost in the shuffle: I’d made an error in the data I reported earlier today. Bartlomiej Siudeja very kindly identified the problem. I’ve subsequently updated the data file http://
Apologies for the error and any confusion caused, and thanks to Bartlomiej for catching it.
Comment by Nilima Nigam — June 23, 2012 @ 6:13 am
For some reason I cannot post anything to the research thread.
Comment by Bartlomiej Siudeja — June 25, 2012 @ 5:39 pm
• Hmm, your comments got caught by the spam filter (presumably because of the link). Sometimes it has false positives. Nilima had a similar issue; I will try to upgrade both of your user statuses to
try to get past the filter.
Comment by Terence Tao — June 25, 2012 @ 5:46 pm
I am a bit behind in all that’s been done with regards to the rigorous numerics argument, but I am confused on one point:
As I understand it, the goal is to get C^0 continuity of the eigenfunction with respect to the domain in order to get control over where the extrema of the eigenfunction go as we perturb the domain.
And for the moment, all we have is continuity with respect to the L^2 and H^1 norms.
But the paper of Banuelos and Pang gives a C^0 continuity result. So why can’t we apply their result as it stands?
Comment by Chris Evans — June 29, 2012 @ 11:55 pm
• The Banuelos-Pang paper does in principle give explicit C^0 bounds, but they depend on bounds on the heat kernel on the triangles, which are presumably in the literature but it would take a fair
amount of effort to make all the constants explicit (and I would imagine that the final constants would be terrible, making it much harder to use them for numerics as one would have to use an
enormous number of reference triangles). My belief is that by working with the explicit nature of the triangular domains one can get better constants.
Also, there is a possibility that we may also get explicit C^1 or even C^2 bounds as well, which would also be helpful in locating extrema, though it is probably going to be simplest to try for C
^0 bounds first.
Comment by Terence Tao — June 30, 2012 @ 12:09 am
□ I see, thanks!
Comment by Chris Evans — June 30, 2012 @ 4:02 am
Terry, I have looked over your recent notes “Stability Theory for Neumann Eigenfunctions” and had a few comments/questions:
1) (Typo) In Lemma 1.2 you talk about P and Q but then call them X and Y.
2) (Typo?) On page 6 in the equation after the line “From the orthonormality (2.5) and the Bessel inequality, we conclude that” shouldn’t it be $e^{-4\omega}$ in the term on the left?
3) (Question) At the bottom of page 6, why is it that when we differentiate $\frac{d^2}{dt^2}\int_H e^{2\omega}$ we don’t have an extra term of the form $\int_H 2 \omega'' e^{2\omega}$?
4) (General Question) It seems that getting a bound on $|\dot{u_2}|_{L^\infty}$ will control how much the values of $u_2$ change… but I don’t see how that would immediately give control of the
*location* of the extrema. Is the argument to appeal to the fact that “if the extrema is near the corner it must be at the corner”?
Comment by Chris Evans — July 15, 2012 @ 8:20 pm
• (1) thanks for the correction, it will appear in the next revision of the notes.
(2) I think the factor is $e^{-2\omega} = (e^{-2\omega})^2 \times e^{2\omega}$, the extra $e^{2\omega}$ factor coming from the weight in (2.5).
(3) $\omega$ depends linearly on time (assuming $\alpha,\beta$ vary linearly in time) and so the second derivative is zero.
(4) Yes, one also needs to separately exclude extrema occurring near the corner in addition to L^infty variation bounds to completely control all extrema; this is the rationale behind my previous
comment at Comment 11. Unfortunately I am beginning to be a bit worried that the bounds there are a bit weak and will lead to requiring the mesh density to be huge…
Comment by Terence Tao — July 15, 2012 @ 9:52 pm
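On point (3) above: if $\omega = a + bt$ with $a, b$ constant, then $\frac{d^2}{dt^2} e^{2\omega} = (2\omega')^2 e^{2\omega} = 4b^2 e^{2\omega}$, and the would-be $2\omega'' e^{2\omega}$ term vanishes since $\omega'' = 0$. A quick finite-difference check of this identity (the values of $a$ and $b$ are hypothetical):

```python
import math

a, b = 0.3, -0.7                      # hypothetical: omega(t) = a + b*t, so omega'' = 0
f = lambda t: math.exp(2 * (a + b * t))

t0, h = 0.5, 1e-4
d2_numeric = (f(t0 + h) - 2 * f(t0) + f(t0 - h)) / h**2   # central second difference
d2_exact = (2 * b) ** 2 * f(t0)       # (2*omega')^2 * e^{2*omega}; no omega'' term
print(abs(d2_numeric - d2_exact) < 1e-4)
```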
□ Is your concern about the mesh spacing required in parameter space, or that required for the computation on any given triangle? In other words, are you concerned about the bounds on the
variation, or on the location of the extrema near the corners?
I should be ready to post some numerical results by Wednesday on computing the bounds you calculated on the variation, as well as the (numerically computed) variation. Based on the
preliminary results, no surprises so far.
Comment by nilimanigam — July 16, 2012 @ 2:20 am
☆ I guess both, because the net error in our L^infty control on an eigenfunction on a triangle will depend on both (a) the distance in parameter space to the nearest reference triangle, and (b) the accuracy of our eigenfunction approximation in the reference triangle, as well as (c) the spectral gap bounds. Then this has to be compared against (d) our numerical bounds on how far the extrema of the numerical eigenfunctions are from the extremal vertices, and (e) the neighbourhoods around the extremal vertices for which we may rigorously exclude extrema. The hope is that (c), (d), (e) are strong enough that we can use numerically feasible mesh sizes for (a) and (b).
Comment by Terence Tao — July 16, 2012 @ 3:44 am
○ Agreed.
A while ago I was trying to get a handle on (d) numerically. My approach was to use an overlapping Schwarz iteration. The idea is to iterate on eigenvalue problems on subdomains of
the triangle – I partitioned the triangle into regions which are wedges and a full circle. My hope was that the tools of Siudeja and Banuelos-Pang would help in establishing that the method converged. Unfortunately, I was unable to rigorously prove this.
But I think the approach could work, using tools from PDE analysis. http://www.math.sfu.ca/~nigam/polymath-figures/Schwarz.pdf
Comment by Nilima Nigam — July 16, 2012 @ 7:51 pm
■ Well, perhaps we don’t need a rigorous guarantee that the numerical algorithm converges, but instead go with a numerical recipe that in practice gives a numerical eigenfunction $\
tilde u_2$ and numerical eigenvalue $\tilde \lambda_2$ with very good residual $\| -\Delta \tilde u_2 - \tilde \lambda_2 \tilde u_2 \|_{L^2}$, and then do some a posteriori
analysis to rigorously conclude that the error is small. Indeed, if one has a demonstrable gap between the numerical eigenvalue $\tilde \lambda_2$ and the true third eigenvalue $\lambda_3$, then some simple playing around with eigenvalue decomposition (computing the inner product of $-\Delta \tilde u_2 -\tilde \lambda_2 \tilde u_2$ against other true
eigenfunctions $u_k$ via integration by parts) shows that the residual controls the error $\| \tilde u_2 - u_2 \|_{H^2}$ in H^2 norm (and hence in L^infty norm, by the Sobolev
inequality in my notes), at least if one can ensure that $\tilde u_2$ obeys the Neumann condition exactly.
Comment by Terence Tao — July 16, 2012 @ 9:53 pm
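The a posteriori bound sketched in the comment above (residual divided by spectral gap controls the eigenfunction error) can be illustrated in a toy finite-dimensional model; everything below is a hypothetical stand-in, with a diagonal matrix playing the role of $-\Delta$:

```python
import math

# Diagonal stand-in for -Delta: spectrum {1, 4, 10}; the true eigenvector
# for the middle eigenvalue 4 is e_2 = (0, 1, 0).
spectrum = [1.0, 4.0, 10.0]

u_t = [0.02, 0.999, 0.03]   # approximate eigenvector (hypothetical numbers)
lam_t = 4.01                # approximate eigenvalue

# Residual ||A u_t - lam_t u_t|| and the gap from lam_t to the rest of the spectrum.
residual = math.sqrt(sum(((mu - lam_t) * c) ** 2 for mu, c in zip(spectrum, u_t)))
gap = min(abs(lam_t - mu) for i, mu in enumerate(spectrum) if i != 1)

# Error = size of the component of u_t orthogonal to the true eigenvector.
err = math.sqrt(u_t[0] ** 2 + u_t[2] ** 2)
print(err <= residual / gap)   # the a posteriori bound holds
```

With these numbers the guaranteed bound residual/gap is roughly 0.063, comfortably above the actual error of roughly 0.036; in the PDE setting the same mechanism, combined with the Sobolev inequality from the notes, would upgrade this to the L^infty control mentioned above.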
★ The challenge with this is in how I compute the residual. Numerically, my strategy was to approximate $u_i$ (in the notes) by finite linear combinations of Fourier-Bessel
functions. The trace of the approximations on the arcs can be written down readily; the application of the Laplacian on the sub-domains is also OK. However, to compute the L2
inner products, I used a quadrature. This is how I assemble the matrices to get the approximate eigenfunctions. Also, the conditioning of the eigenvalue problems wasn’t great.
Since one is looking at minimizing the residual in $L^2$, the all-critical traces of $u_i^n \frac{\partial u_0^n}{\partial \nu}$ on the common interfaces play a role, but not as important a role as one might want. While I want to believe this method gives a good approximation by looking at the numerical residual, I am not 100% convinced.
Once I have the approximate eigenfunction by this method, I still have to locate the extrema. I do this by interpolating the function by piecewise linears onto a mesh of the
triangle, and then doing a search. This can be improved.
Let me add in some of the details of the implementation in the notes. Perhaps some collective trouble-shooting will help.
Using the finite element method, the quadratures are exact (since I use piecewise polynomials). The search proceeds as above. Since I’m using a quasi-regular discretization,
both the Galerkin errors and the Lanczos errors are well-understood and the methods are provably convergent. This is a reliable, if not super-fast, work-horse.
Comment by nilimanigam — July 16, 2012 @ 10:13 pm
◎ I’ve updated http://www.math.sfu.ca/~nigam/polymath-figures/Schwarz.pdf to include the implementation details. As a numerical method, this is OK (not great because of
conditioning issues!)
Comment by nilimanigam — July 16, 2012 @ 11:21 pm
◎ Here’s one possibility. You’re dividing the triangle into three sectors and a disk, and on each of these regions one can create an exact eigenfunction with Neumann
conditions on the original boundary (and some garbage on the new boundaries). Now with some explicit C^2 partition of unity, one can splice together these exact
eigenfunctions on the subregions into an approximate eigenfunction on the whole triangle, and the residual will be controlled by the H^1 error between the exact
eigenfunctions on the intersection between the subregions.
To illustrate what I mean by this, let us for simplicity assume that the triangle $\Omega$ is covered by just two subregions $\Omega_1, \Omega_2$ instead of four. Let $u_1, u_2$ be exact eigenfunctions on $\Omega_1,\Omega_2$ respectively with the same eigenvalue $\lambda$, and obey the Neumann condition exactly on $\partial \Omega \cap \Omega_1$ and $\partial \Omega \cap \Omega_2$ respectively. We then glue these together to create a function $u := \eta u_1 + (1-\eta) u_2$ on the entire triangle $\Omega$, where $\eta$ is a C^2 bump function that equals 1 outside of $\Omega_2$ and equals 0 outside of $\Omega_1$. Then we may compute
$-\Delta u = \lambda u + 2\nabla \eta \cdot \nabla (u_1-u_2) + \Delta \eta \cdot (u_1-u_2)$.
Also u obeys the Neumann conditions exactly. Thus if u_1 and u_2 are close in H^1 norm on the common domain $\Omega_1 \cap \Omega_2$, the global residual $\| -\Delta u - \lambda u \|_{L^2}$ will be small.
One advantage of this approach is that we don’t need to care too much about the boundary traces of u_1,u_2. But one does need a certain margin of overlap between the
subregions so that the cutoffs $\eta$ lie in C^2 with reasonable bounds, it’s not enough for them to be adjacent.
Comment by Terence Tao — July 18, 2012 @ 7:25 pm
● Yes, this is certainly one way to analyze the overlapping strategy: the partitions of unity will assure convergence of the Schwarz iteration in one step.
In the set-up I tried using numerically, the domains have non-trivial overlap. Solving boundary value problems this way would ensure nice convergence of the iteration.
My misgiving came from the conditioning of the eigenvalue problems on the sub-domains; since the computations were in floating-point arithmetic, poor conditioning is a concern.
My thinking was that since the actual eigenfunction is C^2 in the interior, the non-standard eigenvalue problem for the disk will have smooth coefficients. My
rationale for not using the partition of unity was that the approximation functions I used in each region satisfy $-\Delta u = \Lambda u$ exactly (but potentially not
the boundary data). However, for the purpose of an analytical treatment, the partition of unity strategy may be easier to work with.
Comment by nilimanigam — July 18, 2012 @ 7:42 pm
□ Thanks for the clarifications!
Comment by Chris Evans — July 16, 2012 @ 3:20 am
I’ve posted something twice on the research thread- my first attempt did not show up after refreshing the page. Would it be possible to remove the duplicate post? I’m not sure how to do this.
[Done. - T.]
Comment by nilimanigam — July 21, 2012 @ 4:44 am
Just a short note to say that I’m still interested in this problem, but am preparing for a two-week vacation starting on Saturday and so unfortunately have had to prioritise my time. But I will
definitely return to this project afterwards…
Comment by Terence Tao — August 9, 2012 @ 3:07 am
• Apologies about the delay from my end- I’ve been writing up some notes to summarize the numerical strategy, include some validation experiments, and discuss the results so far.
The conjecture has been (numerically) examined and (numerically) verified on a fine, non-uniform grid in parameter space away from the equilateral triangle. The grid spacing is chosen so that
the variation of the eigenfunctions is controlled to 0.001. At each of these points, we have numerical upper and lower bounds on the second eigenvalue; these bounds provide an interval of width
1e-7 around the true eigenvalue. The eigenfunctions are computed so the Ritz residual is under 1e-11.
I have *something* coded up which uses the bounds near the equilateral triangle, but am not confident enough about these yet to present them.
Comment by nilimanigam — August 9, 2012 @ 4:30 am
I just wanted to say that Bartlomiej and I are at a stochastic analysis conference at the moment and we are discussing ideas for the problem (the discussion has been restricted to analytic approaches
though) along with some other interested people (Mihai Pascu, Rodrigo Banuelos, Chris Burdzy, etc) at the conference.
My internet access is limited (I am on a public computer at the moment) but I/we will try to write a summary of our discussion after the conference!
Comment by Chris Evans — September 12, 2012 @ 9:18 am
• Nice! An analytic approach would be great.
Comment by nilimanigam — September 12, 2012 @ 4:54 pm
vznvzn on Two polymath (of a sort) propo… | {"url":"http://polymathprojects.org/2012/06/09/polymath7-discussion-thread/?like=1&source=post_flair&_wpnonce=f5309b4960","timestamp":"2014-04-20T08:58:58Z","content_type":null,"content_length":"152809","record_id":"<urn:uuid:c3020869-a49e-4531-83bf-ef63fef03656>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00522-ip-10-147-4-33.ec2.internal.warc.gz"} |
help solve 4
May 20th 2010, 10:30 PM #1
Junior Member
May 2010
let p(x)=(x+1)(x-3) Q(x)+a(x+1)+b where Q(X) is a polynomial and a and b are real numbers when p(x) is divided by (x+1) the remainder is -11
when p(x) is divided by (x-3) the remainder is 1
i) what is the value of b?
ii) what is the remainder when p(x) is divided by (x+1)(x-3)?
with working out please
let p(x)=(x+1)(x-3) Q(x)+a(x+1)+b where Q(X) is a polynomial and a and b are real numbers when p(x) is divided by (x+1) the remainder is -11
And this means that $p(-1)=-11$
when p(x) is divided by (x-3) the remainder is 1
And this means $p(3)=1$ . Now just substitute and evaluate directly the values you want.
i) what is the value of b?
ii) what is the remainder when p(x) is divided by (x+1)(x-3)?
with working out please
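For what it's worth, a quick numeric sketch (in Python) of the substitution approach suggested above; the values a = 3 and b = -11 follow from the two remainder conditions:

```python
# p(x) = (x+1)(x-3)*Q(x) + a*(x+1) + b
# Remainder conditions from the problem: p(-1) = -11 and p(3) = 1.
b = -11            # p(-1) = a*0 + b = -11, answering part i)
a = (1 - b) // 4   # p(3) = 4a + b = 1, so a = 3

def remainder(x):
    """Remainder on division by (x+1)(x-3): the degree-<2 part a(x+1)+b."""
    return a * (x + 1) + b  # = 3x - 8, answering part ii)

print(a, b)                         # 3 -11
print(remainder(-1), remainder(3))  # -11 1
```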
May 21st 2010, 06:00 AM #2
Oct 2009 | {"url":"http://mathhelpforum.com/pre-calculus/145796-help-solve-4-a.html","timestamp":"2014-04-18T04:19:52Z","content_type":null,"content_length":"32610","record_id":"<urn:uuid:9a2db9f2-b83e-4830-8cbb-c4a3ad0ab825>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00614-ip-10-147-4-33.ec2.internal.warc.gz"} |
Combinations 6 of 49 without the results with numbers in sequence
Re: Combinations 6 of 49 without the results with numbers in sequence
Casework is never simple and you must always be thinking of the form of your solution. Most math types are notoriously bad problem solvers. Just take a look at all those equations that they think are beautiful and are not worth a dime computationally.
What if this solution takes 1.2 million years to run?
Last edited by bobbym (2013-02-21 07:15:40)
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof. | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=254459","timestamp":"2014-04-18T18:30:29Z","content_type":null,"content_length":"18751","record_id":"<urn:uuid:0398d563-0940-4858-ba1a-f55ad8f42784>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00044-ip-10-147-4-33.ec2.internal.warc.gz"} |
Method, apparatus and article for identification and signature - Yeda Research and Development Company Limited
What is claimed:
1. A method of creating a unique identifier for use by an entity which cannot be forged by others including those capable or verifying the entity, comprising the steps of:
(a) selecting a modulus n which is the product of at least two secret primes;
(b) selecting a pseudo random function f capable of mapping arbitrary strings to numbers;
(c) preparing a string I containing information unique to an entity;
(d) selecting k distinct values of j so that each v[j] =f(I,j) is a residue (mod n) having a root s[j] ;
(e) computing roots s[j] of v[j]^-1 (mod n);
(f) recording on a retrievable medium of an identifier I, k, s[j] and related indices j.
2. The method of claim 1 wherein the recording on the identifier is in binary form.
3. The method of claim 1 wherein the recording is in a ROM and the identifier includes microprocessing and input/output features.
4. A method of utilizing the identifier of claim 1 comprising:
(a) placing the identifier of claim 1 in communication with a verifier having recorded therein modulus n and pseudo random function f;
(b) transmitting I and the indices j from the identifier to the verifier;
(c) generating in the verifier v[j] =f(I,j) for the indices j;
(d) selecting in the identifier a random r[i] ε (O,n);
(e) computing x[i] =r[i]^2 (mod n) in the identifier and sending x[i] to the verifier;
(f) selecting a random binary vector e[i1] . . . e[ik] from a predetermined set of such vectors in the verifier and sending to the identifier;
(g) computing in the identifier ##EQU5## and sending y[i] to the verifier; (h) checking in the verifier ##EQU6## (i) repeating steps (d) through (h) t times, where t≥1.
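By way of illustration only, the round structure of claim 4 can be sketched in Python with toy numbers. The modulus n = 77 and the secret roots s_j below are invented for the demo, and the public v_j are derived directly from them instead of via the pseudo random function f of the claims; a real system needs a much larger modulus (claim 7 requires at least 512 bits):

```python
import random

# Toy parameters -- purely illustrative, NOT secure.
n = 7 * 11                                     # n = p*q with p, q "secret"
s = [3, 5]                                     # prover's secret roots s_j
v = [pow(pow(sj, -1, n), 2, n) for sj in s]    # public v_j with (s_j^2)*v_j = 1 (mod n)

def prove_round(e):
    """One identification round: commit to x = r^2, answer challenge bits e."""
    r = random.randrange(1, n)
    x = (r * r) % n                            # step (e): commitment
    y = r
    for ej, sj in zip(e, s):                   # step (g): y = r * prod_{e_j=1} s_j
        if ej:
            y = (y * sj) % n
    return x, y

def verify_round(x, y, e):
    """Step (h): accept iff y^2 * prod_{e_j=1} v_j == x (mod n)."""
    z = (y * y) % n
    for ej, vj in zip(e, v):
        if ej:
            z = (z * vj) % n
    return z == x

e = [random.randrange(2) for _ in s]           # step (f): verifier's challenge
x, y = prove_round(e)
print(verify_round(x, y, e))                   # True
```

The check passes because y^2 times the selected v_j collapses to r^2 = x, since each s_j^2 * v_j is 1 (mod n).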
5. The method of claim 4 wherein the transmissions between the identifier and verifier are carried out in binary form.
6. The method of claim 4 wherein all steps are carried out using binary signals.
7. The method of claim 6 wherein modulus n is composed of at least 512 bits.
8. The method of claim 6 wherein only a hashed version of x[i] is used.
9. The method of claim 6 wherein steps (d) through (h) are repeated at least two times.
10. A method of signing a message m exchanged between an identifier created according to claim 1 and verifier comprising:
(a) selecting in the identifier random r[i] . . . r[t] ε (O,n);
(b) computing in the identifier x[i] =r[i]^2 (mod n);
(c) computing in the identifier f(m, x[1] . . . x[t]) and extracting from it kt bits as e[ij] values;
(d) computing in the identifier ##EQU7## for i=1 . . . t; (e) sending to the verifier I, the indices j, m, the e[ij] matrix and all the y[i] values;
(f) computing in the verifier v[j] =f(I,j) for the indices j;
(g) computing in the verifier ##EQU8## and (h) verifying the signature to message m by determining whether the kt bits extracted from f(m,z[1] . . . z[t]) are the same as e[ij].
11. The method of claim 10 wherein the first kt bits of f(m, z[1] . . . z[t]) are used as e[ij].
12. The method of claim 10 wherein the exchange is in binary form.
13. The method of claim 10 wherein the product kt is at least 72.
14. The method of claim 10 wherein k is at least 18 and t is at least 4.
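Purely as an illustrative sketch of the signature flow in claims 10 and 36 (again with the toy modulus n = 77 and invented secrets; SHA-256 stands in for the unspecified pseudo random function f, and the toy sizes t = 4, k = 2 fall far short of the kt of at least 72 required by claim 13):

```python
import hashlib
import random

n = 7 * 11                                     # toy modulus; NOT secure
s = [3, 5]                                     # prover's secret roots s_j
v = [pow(pow(sj, -1, n), 2, n) for sj in s]    # public v_j with (s_j^2)*v_j = 1 (mod n)
t, k = 4, len(s)                               # rounds and bits per round (toy-sized)

def bits(m, xs):
    """Stand-in for f: hash message plus x-values, return a t-by-k bit matrix e."""
    h = hashlib.sha256((m + "|" + ",".join(map(str, xs))).encode()).digest()
    stream = "".join(f"{byte:08b}" for byte in h)
    return [[int(stream[i * k + j]) for j in range(k)] for i in range(t)]

def sign(m):
    r = [random.randrange(1, n) for _ in range(t)]
    x = [(ri * ri) % n for ri in r]            # step (b): commitments x_i = r_i^2
    e = bits(m, x)                             # step (c): e_ij from f(m, x_1 .. x_t)
    y = []
    for i in range(t):                         # step (d): y_i = r_i * prod s_j^{e_ij}
        yi = r[i]
        for j in range(k):
            if e[i][j]:
                yi = (yi * s[j]) % n
        y.append(yi)
    return e, y

def verify(m, e, y):
    z = []
    for i in range(t):                         # z_i = y_i^2 * prod v_j^{e_ij} = x_i
        zi = (y[i] * y[i]) % n
        for j in range(k):
            if e[i][j]:
                zi = (zi * v[j]) % n
        z.append(zi)
    return bits(m, z) == e                     # step (h): re-derive the e bits

e, y = sign("hello")
print(verify("hello", e, y))                   # True
```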
15. Apparatus for creating a unique identifier for use by an entity and unforgeable by others including those capable of verifying the entity, comprising:
(a) means for selecting k distinct indices of j so that each v[j] =f(I,j) is a quadratic residue (mod n);
(b) where f is a pseudo random function f capable of mapping arbitrary strings to numbers in the range (0,n) and n is a modulus which is the product of at least two secret primes and I is a string
containing information unique to an entity;
(c) means for computing roots s[j] of v[j]^-1 (mod n); and
(d) means for recording on a retrievable medium of an identifier I, s[j] and related indices.
16. The apparatus of claim 15 wherein the recording on the identifier is in binary form.
17. The apparatus of claim 15 wherein the recording is in a ROM and the identifier includes microprocessing and input/output features.
18. Apparatus for utilizing the identifier of claim 15 comprising:
(a) means for placing the identifier of claim 1 in communication with a verifier having recorded therein modulus n and pseudo random function f;
(b) means for transmitting I and the indices j from the identifier to the verifier;
(c) means for generating in the verifier v[j] =f(I,j) for the selected indices j;
(d) means for selecting in the identifier a random r[i] ε (0,n);
(e) means for computing x[i] =r[i]^2 (mod n) in the identifier and sending x[i] to the verifier;
(f) means for selecting a random vector e[i1] . . . e[ik] in the verifier and sending to the identifier;
(g) means for computing in the identifier ##EQU9## and sending to the verifier; (h) means for checking in the verifier ##EQU10## and (i) means for repeating steps (d) through (h) at least once.
19. The apparatus of claim 18 wherein the transmissions between the identifier and verifier are carried out in binary form.
20. The apparatus of claim 18 wherein all steps are carried out using binary signals.
21. The apparatus of claim 20 wherein modulus n is composed of at least 512 bits.
22. The apparatus of claim 20 wherein only a hashed version of x[i] is used.
23. The apparatus of claim 20 wherein steps (d) through (h) are repeated at least two times.
24. Apparatus for signing a message m exchanged between an identifier created according to claim 1 and a verifier comprising:
(a) means for selecting in the identifier random r[i] . . . r[t] ε(0,n);
(b) means for computing in the identifier x[i] =r[i]^2 (mod n);
(c) means for computing in the identifier f(m, x[i] . . . x[t]) and extracting from it kt bits as e[ij] values;
(d) means for computing in the identifier ##EQU11## for i=1 . . . t; (e) means for sending to the verifier I, m, the e[ij] matrix and all the y[i] values;
(f) means for computing in the verifier v[j] =f(I,j) for the indices j;
(g) means for computing in the verifier ##EQU12## for i=1 . . . t; and (h) means for verifying the signature to message m by determining whether the kt bits extracted from f(m, z[1] . . . z[t]) are
the same as e[ij].
25. The apparatus of claim 24 wherein the exchange is in binary form.
26. The apparatus of claim 24 wherein the product kt is at least 72.
27. An identifier comprising microprocessor means, memory means and I/O means and having recorded in said memory means a string I containing information unique to an entity, a modulus n which is the product of at least two secret primes, a pseudo random function f capable of mapping arbitrary strings to numbers, indices j; and values v[j] which are quadratic residues (mod n), values s[j] which are roots of v[j]^-1 (mod n), said microprocessor means including selection means for selecting a number r[i] ε (O,n), and computing means for computing x[i] =r[i]^2 (mod n) and ##EQU13## responsive to receiving a binary vector e[i1] . . . e[ik].
28. An identifier according to claim 27, wherein the microprocessor means includes loop means for repeating the selection of r[i] and computing of x[i] and y[i].
29. An identifier according to claim 27 wherein the microprocessor means includes selection means for selecting random r[i] . . . r[t] ε (O,n), computing means for computing x[i] =r[i]^2 (mod n), computing means for computing f(m, x[i] . . . x[t]), selection means for extracting from it kt bits as e[ij] values, and computing means for computing ##EQU14## for i=1 . . . t.
30. A verification device for use with the identifier of claim 27, comprising microprocessor means, memory means and I/O means and having recorded in said memory means modulus n and function f, said microprocessor means including generating means for generating values of v[j] =f(I,j) for the indices j; selection means for selecting a binary vector e[i1] . . . e[ik], and checking means for checking that ##EQU15##.
31. A verification device according to claim 30 for use with the identifier of claim 27, wherein the microprocessor means includes computing means for computing ##EQU16## for i=1 . . . t and comparing means for comparing that the kt bits extracted from f(m, z[1] . . . z[t]) are e[ij].
32. The method of claim 1 including the step of placing the numbers v[j] in a public key directory.
33. The method of claim 4 including the steps of placing the numbers v[j] in a public key directory, and retrieving the numbers v[j] from the public key directory.
34. A method of utilizing the identifier of claim 1 comprising:
(a) placing the identifier of claim 1 in communication with a verifier having recorded therein modulus n and pseudo random function f;
(b) transmitting the numbers v[j] along with a signature of a trusted center from the identifier to the verifier;
(c) selecting in the identifier a random r[i] ε (O,n);
(d) computing x[i] =r[i]^2 (mod n) in the identifier and sending x[i] to the verifier;
(e) selecting a random binary vector e[i1] . . . e[ik] from a predetermined set of such vectors in the verifier and sending to the identifier;
(f) computing in the identifier ##EQU17## and sending y[i] to the verifier; (g) checking in the verifier ##EQU18## and (h) repeating steps (d) through (h) at least once.
35. The method of claim 4 wherein the repetition of steps (d) through (h) are carried out in parallel.
36. A method of signing a message m by an identifier created according to claim 1 comprising:
(a) selecting in the identifier random r[i] . . . r[t] ε (O,n);
(b) computing in the identifier x[i] =r[i]^2 (mod n);
(c) computing in the identifier f(m, x[i] . . . x[t]) and extracting from it kt bits as e[ij] values (1≤i≤t, 1≤j≤k);
(d) computing in the identifier ##EQU19## for i=1 . . . t; and (e) storing I, indices j, m, and e[ij] matrix and all the y[i] values.
37. A method of verifying the stored signature of a stored message m as defined in claim 36 including the steps of:
(a) retrieving I, the indices j, m, and e[ij] matrix and all the y[i] values from storage;
(b) computing in the verifier v[j] =f(I,j) for the indices j;
(c) computing in the verifier ##EQU20## and (d) verifying the signature to message m by determining whether the kt bits extracted from f(m, z[1] . . . z[t]) are the same as e[ij].
38. Apparatus as defined in claim 15 further including means for establishing a public key directory and means for recording the I, v[j] and related indices in said public key directory.
39. Apparatus as defined in claim 38 further including means for retrieving the I, v[j] and related indices from said public key directory.
40. Apparatus for utilizing the identifier of claim 15 comprising:
(a) means for placing the identifier of claim 1 in communication with a verifier having recorded therein modulus n and pseudo random function f;
(b) means for transmitting the numbers v[j] along with a signature of a trusted center from the identifier to the verifier;
(c) means for selecting in the identifier a random r[i] ε(0,n);
(d) means for computing x[i] =r[i]^2 (mod n) in the identifier and sending x[i] to the verifier;
(e) means for selecting a random vector e[i1] . . . e[ik] in the verifier and sending to the identifier;
(f) means for computing in the identifier ##EQU21## and sending to the verifier; (g) means for checking in the verifier ##EQU22## (h) means for repeating steps (d) through (h) at least once.
41. Apparatus for signing a message m exchanged between an identifier created according to claim 1 and a verifier comprising:
(a) means for selecting in the identifier random r[i] . . . r[t] ε (O,n);
(b) means for computing in the identifier x[i] =r[i]^2 (mod n);
(c) means for computing in the identifier f(m, x[i] . . . x[t]) and extracting from it kt bits as e[ij] values;
(d) means for computing in the identifier ##EQU23## for i=1 . . . t; and (e) means for storing the verifier I, the indices j, m, the e[ij] matrix and all the y[i] values.
42. The apparatus according to claim 41 including
(a) means for retrieving the verifier I, the indices j, m, the e[ij] matrix and all the y[i] values from storage;
(b) means for computing in the verifier v[j] =f(I,j) for the indices j;
(c) means for computing in the verifier ##EQU24## for i=1 . . . t; and (d) means for verifying the signature to message m by determining whether the kt bits extracted from f(m, z[1] . . . z[t]) are
the same as e[ij]. | {"url":"http://www.freepatentsonline.com/4748668.html","timestamp":"2014-04-20T15:59:38Z","content_type":null,"content_length":"64649","record_id":"<urn:uuid:01a6b226-ef7c-4899-89d3-f619149dd63b>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00042-ip-10-147-4-33.ec2.internal.warc.gz"} |
Joint or Combined Variation (with videos, worksheets, games & activities)
Joint or Combined Variation
Videos, worksheets, games and activities to help Algebra students learn about joint or combined variation.
Joint Variation
Joint Variation
Suppose y varies jointly as x and z. What is y when x = 2 and z = 3, if y = 20 when x = 4 and z = 3?
Joint Variation
Joint Variation Application
The energy that an item possesses due to its motion is called kinetic energy. The kinetic energy of an object (which is measured in joules) varies jointly with the mass of the object and the square
of its velocity.
If the kinetic energy of a 3 kg ball traveling 12 m/s is 216 Joules, what is the mass of a ball that generates 250 Joules of energy when traveling at 10 m/s?
Direct, Inverse and Joint Variation
Determine whether the data in the table is an example of direct, inverse or joint variation. Then, identify the equation that represents the relationship.
Combined Variation
In Algebra, sometimes we have functions that vary in more than one element. When this happens, we say that the functions have joint variation or combined variation. Joint variation is direct
variation to more than one variable (for example, d = (r)(t)). With combined variation, we have both direct variation and indirect variation.
How to set up and solve combined variation problems.
Lesson on combining direct and inverse or joint and inverse variation
How to solve problems involving joint and combined variation
y varies jointly as x and z and inversely as w, and y = 3/2, when x = 2, z =3 and w = 4. Find the equation of variation.
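A minimal Python sketch working two of the examples above (solving for the constant of variation, then substituting):

```python
# Kinetic energy example: KE = k * m * v^2 varies jointly with m and v^2.
k = 216 / (3 * 12 ** 2)   # from KE = 216 J at m = 3 kg, v = 12 m/s -> k = 0.5
m = 250 / (k * 10 ** 2)   # mass giving 250 J at 10 m/s -> 5.0 kg

# Combined variation example: y = c*x*z/w (jointly as x and z, inversely as w).
c = 1.5 * 4 / (2 * 3)     # from y = 3/2 at x = 2, z = 3, w = 4 -> c = 1.0

print(k, m, c)            # 0.5 5.0 1.0
```

So the last equation of variation is simply y = xz/w.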
We welcome your feedback, comments and questions about this site - please submit your feedback via our Feedback page. | {"url":"http://www.onlinemathlearning.com/joint-variation-algebra.html","timestamp":"2014-04-17T18:45:38Z","content_type":null,"content_length":"21682","record_id":"<urn:uuid:72d92e10-4e05-468c-bb72-bb8314e0bd7c>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00353-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: RE: RE: RE: interaction
From "Scott Merryman" <smerryman@kc.rr.com>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: RE: RE: interaction
Date Sun, 9 Oct 2005 14:44:21 -0500
> -----Original Message-----
> From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-
> statalist@hsphsun2.harvard.edu] On Behalf Of lm335@drexel.edu
> Sent: Thursday, October 06, 2005 6:46 PM
> To: statalist@hsphsun2.harvard.edu
> Subject: st: RE: RE: interaction
> Scott,
> I'm sorry if I wasn't clear about my question. Since the dependent
> variable is log transformed, the percentage increase is (exp(.29)-1)*100=
> 34 percent for group=1
Thanks. I was thinking in terms of a continuous variable, in which
ln(Y) = b1 + b2X2 + e
b2 = dln(Y)/dX2 = (dY/Y)/dX2
With a dummy variable the usual interpretation is:
ln(Y) = b1 + b2D + e
Percentage change = (Y1/Y0) -1 = exp{b1 + b2*1 + e} / exp{b1 + b2*0 + e} - 1
= exp{b2} - 1
In taking a look at the issue interpreting dummy variables in loglinear
models, I came across a few papers on this topic. The following is taken
from van Garderen and Shah (2002).
Halvorsen and Palmquist (1980) pointed out that the unlike a continuous
variable, the coefficient of a dummy variable, multiplied by 100, is
not the usual percentage effect of that variable on the dependent variable
(the naïve estimator). Instead it should be calculated as:
p = exp(b} -1
This is the case if the value of b is known. In practice, however, b is
unknown and has to be estimated. Kennedy (1981) pointed out that this
transformation results in a biased estimator for p.
If the error term is assumed to be normally distributed then the OLS
estimator of b is efficient and unbiased. Goldberger (1968) noted that the expected value of exp{b_hat} is exp{b + .5*V(b_hat)}, where V(b_hat) is the variance of b_hat.
This led Kennedy to suggest:
p = 100*(exp{b_hat - .5*V(b_hat)} - 1),
where V(b_hat) is the OLS estimate of the variance of b_hat.
Van Garderen and Shah go on to develop the Exact Minimum Variance Unbiased
Estimator of the percentage change in Y due to a change in D from 0 to 1:
p = 100*(exp{b_hat} * 0_F_1(m; -.5*V(b_hat)) - 1),
where m = (n-k)/2, n is the sample size, k is the number of regressors and 0_F_1 is the hypergeometric function.
The term 0_F_1(m; -.5*V(b_hat)) tends to exp{-.5*V(b_hat)} as the sample size grows.
The authors also give the variance of Kennedy's estimator as:
V(p) = 100^2*exp{2*b_hat}*[exp{-V(b_hat)} - exp{-2*V(b_hat)}]
Stata does not have hypergeometric function so the exact estimator cannot be
directly calculated, but the program listed below calculates the unbiased
percentage change using Kennedy's method.
. sysuse auto, clear
(1978 Automobile Data)
. gen lnprice = ln(price)
. qui reg lnprice fore mpg gear
. disp "HP estimator =" exp(_b[fore])-1
HP estimator =.57716448
. semidum fore
Unbiased Estimated Percentage Change in Dependent Variable
Kennedy's (1981) approximation method for semilogarithmic equations
variable | % Change Std. Err. t P>|t| [ 95% Conf. Interval ]
foreign | 56.70733 17.6987 3.20 0.002 21.40832 92.00633
Giles, D.E.A. (1982). The interpretation of dummy variables in
semilogarithmic equations. Economics Letters, 10, 77-79.
Goldberger, A.S. (1968). The interpretation and estimation of
Cobb-Douglas functions. Econometrica 36, 464-472.
Halvorsen, R. and Palmquist, R. (1980). The interpretation of dummy
variables in semilogarithmic equations. American Economic Review,
70, 474-75.
Kennedy, P. E. (1981). Estimation with correctly interpreted dummy
variables in semilogarithmic equations. American Economic Review,
71, 801.
van Garderen, K. J. (2001). Optimal prediction in loglinear models. Journal
of Econometrics, 104, 119-140.
van Garderen, K. J. and Shah, C. (2002). Exact interpretation of dummy
variables in semilogarithmic equations. The Econometrics Journal, 5,
*! version 1.0.0 October 9, 2005
*! Scott Merryman
program semidum
version 9.1
syntax varname, [level(integer 95)]
scalar v_hat = _se[`varlist']^2
local kennedy = 100*(exp(_b[`varlist'] -.5*v_hat)-1)
local var = 100^2*exp(2*_b[`varlist'])*[exp(-v_hat) - exp(-2*v_hat)]
local se= sqrt(`var')
local t = `kennedy'/`se'
local pvalue = 2*ttail(`=e(df_r)', abs(`t'))
local ul =`kennedy' + invttail(`=e(df_r)',(1- `level'/100)/2)*`se'
local ll =`kennedy' - invttail(`=e(df_r)',(1- `level'/100)/2)*`se'
disp ""
disp in smcl in gr "Unbiased Estimated Percentage Change in Dependent Variable"
disp in smcl in gr "Kennedy's (1981) approximation method for semilogarithmic equations"
disp in smcl in gr "{hline 15}{c TT}{hline 68}"
disp in smcl in gr "{ralign 14:variable}" _col(15) " {c |} " ///
_col(20) "% Change" ///
_col(30) `"Std. Err."' ///
_col(44) "t" ///
_col(52) "P>|t|" ///
_col(62) `"[ `level'% Conf. Interval ]"' ///
_n "{hline 15}{c +}{hline 68}"
di in smcl in gr "{ralign 14: `varlist' }" _col(15) " {c |} " ///
_col(18) in ye %-9.0g `kennedy' ///
_col(30) in ye %8.0g `se' ///
_col(42) in ye %5.2f `t' ///
_col(52) in ye %5.3f `pvalue' ///
_col(62) in ye %9.0g `ll' " " in ye %9.0g `ul' ///
_n in gr "{hline 15}{c BT}{hline 68}"
end
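For readers without Stata, here is a rough Python sketch of the same Kennedy (1981) correction (the coefficient 0.29 and standard error 0.10 below are illustrative numbers, not estimates from the auto dataset):

```python
import math

def kennedy_pct_change(b_hat, se_b):
    """Kennedy's (1981) approximately unbiased percentage effect of a dummy in a
    semilog regression, with the variance from van Garderen & Shah (2002)."""
    v_hat = se_b ** 2
    p = 100.0 * (math.exp(b_hat - 0.5 * v_hat) - 1.0)
    var_p = 100.0 ** 2 * math.exp(2.0 * b_hat) * (math.exp(-v_hat) - math.exp(-2.0 * v_hat))
    return p, math.sqrt(var_p)

naive = 100.0 * (math.exp(0.29) - 1.0)   # Halvorsen-Palmquist: ~33.64%
p, se = kennedy_pct_change(0.29, 0.10)   # bias-corrected: ~32.98%
print(round(naive, 2), round(p, 2))
```

The correction always shrinks the naïve estimate, by roughly half the estimated variance of the coefficient.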
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2005-10/msg00272.html","timestamp":"2014-04-17T04:41:24Z","content_type":null,"content_length":"10433","record_id":"<urn:uuid:b93d1d8b-eb34-4a3e-9ef5-788cef6b3cbf>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00599-ip-10-147-4-33.ec2.internal.warc.gz"} |
Find all solutions to the equation. cos2x + 2 cos x + 1 = 0
• one year ago
Best Response
You've already chosen the best response.
For cos(2x) * (2cos(x) + 1) = 0, use the double angle identity for cos(2x), which is cos^2 x - sin^2 x = cos^2 x - (1-cos^2) = 2cos^2 x - 1. So we have (2cos^2 x - 1)(2cos x + 1) = 0. So 2cos^2 x
-1 = 0 or x = 0 and 2pi. For 2sec^2 x + tan^2 x - 3 = 0, use the identity sec^2 x = tan^2 x + 1, so we have 2(tan^2 x + 1) + tan^2 x - 3 = 0 or 2tan^2 x + tan^2 x - 1 = 0 or 3 tan^2 x = 1. So x =
pi/2, pi/2 + pi = 3pi/2.
hi the first thing I tried to solve this problem was to factor it so I have (cos x+1)(cos x+1) but now I don't know what to do next
Unclear how to proceed. What is the first term, \(\cos(2x)\) or \(\cos^{2}(x)\)?
\(\cos^{2}x\)
Perfect. The we have only to solve - by factoring. \(\cos^{2}x + 2\cos x + 1 = 0\) \((\cos(x) + 1)^{2} = 0\) Now what?
That is where I got stuck I am not sure what to do after factoring
What's the point of factoring? Why do we do that at all? If a*b = 0, what do we know about a or b? By contrast, if a*b = 4, what do we NOT know about either a or b?
what each of them equals?
Kind of. a*b = 0 tells us that either a = 0 or b = 0. What can we tell about a and b when it isn't zero? a*b = 4, for example? Is one of them zero? Absolutely not, but that's about all we can tell. Is a = 4? Maybe. Is b = 12? Maybe. Not a clue. Only "=0" is particularly helpful. So, what should we do with that factored expression?
tkhunny your explanations are more complex than the problem itself :)
That's why we invent notation, so that problems can be simpler than explanations. If we have this: Factor1 * Factor2 = 0, then we must have either Factor1 = 0 or Factor2 = 0. How can we apply this to your factored equation?
cos x+1(0)=0 I am not sure
\((\cos(x)+1)^{2} = 0\) We have two factors, but they are exactly the same. This limits the possible results. \(\cos(x) + 1 = 0\) \(\cos(x) = -1\) \(x = \dfrac{3\pi}{2} + 2k\pi\) where \(k\in \mathbb{Z}\) Do you know what all that means?
I do... but if cos(x) = -1, wouldn't \(x=\pi\)?
anyway I got it ... thanks for the help!
Awesome. How did I do that?! Right you are. \(x = \pi + 2k\pi\)
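As a quick numerical check of the algebra in this exchange (an editorial addition using Python's math module, not part of the original thread), the expression cos²x + 2·cos x + 1 really does equal (cos x + 1)² and vanishes exactly at x = π + 2kπ:

```python
import math

# Check that cos^2(x) + 2*cos(x) + 1 equals (cos(x) + 1)^2, and that it
# vanishes at x = pi + 2*k*pi (the solutions found in the discussion).
for k in range(-2, 3):
    x = math.pi + 2 * k * math.pi
    lhs = math.cos(x) ** 2 + 2 * math.cos(x) + 1
    assert abs(lhs - (math.cos(x) + 1) ** 2) < 1e-12
    assert abs(lhs) < 1e-12   # cos(x) = -1 here, so the expression is 0

# At x = 3*pi/2 (the slip corrected above) cos(x) = 0, so the
# expression equals 1, not 0.
print(round(math.cos(3 * math.pi / 2) ** 2
            + 2 * math.cos(3 * math.pi / 2) + 1, 6))   # → 1.0
```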
OSGi™ Service Platform
Overview Package Class Tree Deprecated Index Help Release 4 Version 4.2
Class Measurement
All Implemented Interfaces: java.lang.Comparable
public class Measurement
extends java.lang.Object
implements java.lang.Comparable
Represents a value with an error, a unit and a time-stamp.
A Measurement object is used for maintaining the tuple of value, error, unit and time-stamp. The value and error are represented as doubles and the time is measured in milliseconds since midnight,
January 1, 1970 UTC.
Mathematical methods are provided that correctly calculate results taking the error into account. A runtime error will occur when two measurements are used in an incompatible way, e.g., when a speed (m/s) is added to a distance (m). The measurement class will correctly track changes in unit during multiplication and division, always coercing the result to the simplest form. See Unit for more information on the supported units.
Errors in the measurement class are absolute errors. Measurement errors should use the P95 rule: actual values must fall within the range value ± error at least 95% of the time.
A Measurement object is immutable in order to be easily shared.
Note: This class has a natural ordering that is inconsistent with equals. See compareTo(java.lang.Object).
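The (value, error) tuple semantics described above can be sketched in a few lines. The Python below is an illustration only, not the OSGi source: the worst-case (interval-arithmetic) error rules used here are an assumption for demonstration purposes; the authoritative behaviour of each operation is specified in the method documentation below.

```python
# Illustrative sketch of the value/error tuple semantics described above.
# The worst-case error-combination rules are an assumption, not the
# OSGi reference implementation.
from dataclasses import dataclass

@dataclass(frozen=True)   # immutable, like Measurement
class Meas:
    value: float
    error: float = 0.0    # absolute error, always non-negative

    def add(self, other: "Meas") -> "Meas":
        # worst-case absolute errors add under addition
        return Meas(self.value + other.value, self.error + other.error)

    def mul(self, other: "Meas") -> "Meas":
        v = self.value * other.value
        # worst-case bound: |a|*eb + |b|*ea + ea*eb
        e = (abs(self.value) * other.error
             + abs(other.value) * self.error
             + self.error * other.error)
        return Meas(v, e)

d = Meas(10.0, 0.1)                   # e.g. a length: 10 ± 0.1
s = d.add(Meas(1.0, 0.02))
print(s.value, round(s.error, 12))    # 11.0 0.12
```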
Constructor Summary
Measurement(double value)
    Create a new Measurement object with an error of 0.0, a unit of Unit.unity and a time of zero.
Measurement(double value, double error, Unit unit)
    Create a new Measurement object with a time of zero.
Measurement(double value, double error, Unit unit, long time)
    Create a new Measurement object.
Measurement(double value, Unit unit)
    Create a new Measurement object with an error of 0.0 and a time of zero.
Method Summary
Measurement add(double d)
    Returns a new Measurement object that is the sum of this object added to the specified value.
Measurement add(double d, Unit u)
    Returns a new Measurement object that is the sum of this object added to the specified value.
Measurement add(Measurement m)
    Returns a new Measurement object that is the sum of this object added to the specified object.
int compareTo(java.lang.Object obj)
    Compares this object with the specified object for order.
Measurement div(double d)
    Returns a new Measurement object that is the quotient of this object divided by the specified value.
Measurement div(double d, Unit u)
    Returns a new Measurement object that is the quotient of this object divided by the specified value.
Measurement div(Measurement m)
    Returns a new Measurement object that is the quotient of this object divided by the specified object.
boolean equals(java.lang.Object obj)
    Returns whether the specified object is equal to this object.
double getError()
    Returns the error of this Measurement object.
long getTime()
    Returns the time at which this Measurement object was taken.
Unit getUnit()
    Returns the Unit object of this Measurement object.
double getValue()
    Returns the value of this Measurement object.
int hashCode()
    Returns a hash code value for this object.
Measurement mul(double d)
    Returns a new Measurement object that is the product of this object multiplied by the specified value.
Measurement mul(double d, Unit u)
    Returns a new Measurement object that is the product of this object multiplied by the specified value.
Measurement mul(Measurement m)
    Returns a new Measurement object that is the product of this object multiplied by the specified object.
Measurement sub(double d)
    Returns a new Measurement object that is the subtraction of the specified value from this object.
Measurement sub(double d, Unit u)
    Returns a new Measurement object that is the subtraction of the specified value from this object.
Measurement sub(Measurement m)
    Returns a new Measurement object that is the subtraction of the specified object from this object.
java.lang.String toString()
    Returns a String object representing this Measurement object.
Methods inherited from class java.lang.Object
clone, finalize, getClass, notify, notifyAll, wait, wait, wait
public Measurement(double value,
double error,
Unit unit,
long time)
Create a new Measurement object.
value - The value of the Measurement.
error - The error of the Measurement.
unit - The Unit object in which the value is measured. If this argument is null, then the unit will be set to Unit.unity.
time - The time measured in milliseconds since midnight, January 1, 1970 UTC.
public Measurement(double value,
double error,
Unit unit)
Create a new Measurement object with a time of zero.
value - The value of the Measurement.
error - The error of the Measurement.
unit - The Unit object in which the value is measured. If this argument is null, then the unit will be set to Unit.unity.
public Measurement(double value,
Unit unit)
Create a new Measurement object with an error of 0.0 and a time of zero.
value - The value of the Measurement.
unit - The Unit in which the value is measured. If this argument is null, then the unit will be set to Unit.unity.
public Measurement(double value)
Create a new Measurement object with an error of 0.0, a unit of Unit.unity and a time of zero.
value - The value of the Measurement.
public final double getValue()
Returns the value of this Measurement object.
The value of this Measurement object as a double.
public final double getError()
Returns the error of this Measurement object. The error is always a positive value.
The error of this Measurement as a double.
public final Unit getUnit()
Returns the Unit object of this Measurement object.
The Unit object of this Measurement object.
public final long getTime()
Returns the time at which this Measurement object was taken. The time is measured in milliseconds since midnight, January 1, 1970 UTC, or zero when not defined.
The time at which this Measurement object was taken or zero.
public Measurement mul(Measurement m)
Returns a new Measurement object that is the product of this object multiplied by the specified object.
m - The Measurement object that will be multiplied with this object.
A new Measurement that is the product of this object multiplied by the specified object. The error and unit of the new object are computed. The time of the new object is set to the time of
this object.
java.lang.ArithmeticException - If the Unit objects of this object and the specified object cannot be multiplied.
public Measurement mul(double d,
Unit u)
Returns a new Measurement object that is the product of this object multiplied by the specified value.
d - The value that will be multiplied with this object.
u - The Unit of the specified value.
A new Measurement object that is the product of this object multiplied by the specified value. The error and unit of the new object are computed. The time of the new object is set to the time
of this object.
java.lang.ArithmeticException - If the units of this object and the specified value cannot be multiplied.
public Measurement mul(double d)
Returns a new Measurement object that is the product of this object multiplied by the specified value.
d - The value that will be multiplied with this object.
A new Measurement object that is the product of this object multiplied by the specified value. The error of the new object is computed. The unit and time of the new object is set to the unit
and time of this object.
public Measurement div(Measurement m)
Returns a new Measurement object that is the quotient of this object divided by the specified object.
m - The Measurement object that will be the divisor of this object.
A new Measurement object that is the quotient of this object divided by the specified object. The error and unit of the new object are computed. The time of the new object is set to the time
of this object.
java.lang.ArithmeticException - If the Unit objects of this object and the specified object cannot be divided.
public Measurement div(double d,
Unit u)
Returns a new Measurement object that is the quotient of this object divided by the specified value.
d - The value that will be the divisor of this object.
u - The Unit object of the specified value.
A new Measurement that is the quotient of this object divided by the specified value. The error and unit of the new object are computed. The time of the new object is set to the time of this object.
java.lang.ArithmeticException - If the Unit objects of this object and the specified object cannot be divided.
public Measurement div(double d)
Returns a new Measurement object that is the quotient of this object divided by the specified value.
d - The value that will be the divisor of this object.
A new Measurement object that is the quotient of this object divided by the specified value. The error of the new object is computed. The unit and time of the new object is set to the Unit
and time of this object.
public Measurement add(Measurement m)
Returns a new Measurement object that is the sum of this object added to the specified object. The error and unit of the new object are computed. The time of the new object is set to the time of
this object.
m - The Measurement object that will be added with this object.
A new Measurement object that is the sum of this and m.
java.lang.ArithmeticException - If the Unit objects of this object and the specified object cannot be added.
public Measurement add(double d,
Unit u)
Returns a new Measurement object that is the sum of this object added to the specified value.
d - The value that will be added with this object.
u - The Unit object of the specified value.
A new Measurement object that is the sum of this object added to the specified value. The unit of the new object is computed. The error and time of the new object is set to the error and time
of this object.
java.lang.ArithmeticException - If the Unit objects of this object and the specified value cannot be added.
public Measurement add(double d)
Returns a new Measurement object that is the sum of this object added to the specified value.
d - The value that will be added with this object.
A new Measurement object that is the sum of this object added to the specified value. The error, unit, and time of the new object is set to the error, Unit and time of this object.
public Measurement sub(Measurement m)
Returns a new Measurement object that is the subtraction of the specified object from this object.
m - The Measurement object that will be subtracted from this object.
A new Measurement object that is the subtraction of the specified object from this object. The error and unit of the new object are computed. The time of the new object is set to the time of
this object.
java.lang.ArithmeticException - If the Unit objects of this object and the specified object cannot be subtracted.
public Measurement sub(double d,
Unit u)
Returns a new Measurement object that is the subtraction of the specified value from this object.
d - The value that will be subtracted from this object.
u - The Unit object of the specified value.
A new Measurement object that is the subtraction of the specified value from this object. The unit of the new object is computed. The error and time of the new object is set to the error and
time of this object.
java.lang.ArithmeticException - If the Unit objects of this object and the specified object cannot be subtracted.
public Measurement sub(double d)
Returns a new Measurement object that is the subtraction of the specified value from this object.
d - The value that will be subtracted from this object.
A new Measurement object that is the subtraction of the specified value from this object. The error, unit and time of the new object is set to the error, Unit object and time of this object.
public java.lang.String toString()
Returns a String object representing this Measurement object.
toString in class java.lang.Object
a String object representing this Measurement object.
public int compareTo(java.lang.Object obj)
Compares this object with the specified object for order. Returns a negative integer, zero, or a positive integer if this object is less than, equal to, or greater than the specified object.
Note: This class has a natural ordering that is inconsistent with equals. For this method, another Measurement object is considered equal if there is some x such that
getValue() - getError() <= x <= getValue() + getError()
for both Measurement objects being compared.
Specified by:
compareTo in interface java.lang.Comparable
obj - The object to be compared.
A negative integer, zero, or a positive integer if this object is less than, equal to, or greater than the specified object.
java.lang.ClassCastException - If the specified object is not of type Measurement.
java.lang.ArithmeticException - If the unit of the specified Measurement object is not equal to the Unit object of this object.
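The documented interval-overlap ordering can be restated procedurally. The helper below is our own sketch of those semantics for two (value, error) pairs with equal units, not the OSGi source:

```python
# Sketch of the documented compareTo semantics: two measurements compare
# "equal" when some x lies in both intervals [v - e, v + e]; otherwise
# order by value. (Our own helper, not the OSGi implementation.)
def compare(v1: float, e1: float, v2: float, e2: float) -> int:
    if v1 - e1 <= v2 + e2 and v2 - e2 <= v1 + e1:
        return 0          # intervals overlap -> "equal" for ordering
    return -1 if v1 < v2 else 1

print(compare(10.0, 0.5, 10.4, 0.2))   # 0  (overlap on 10.2..10.5)
print(compare(10.0, 0.5, 11.0, 0.2))   # -1 (10.5 < 10.8, no overlap)
```

This also makes visible why the ordering is inconsistent with equals: overlap is not transitive, so three measurements can pairwise compare "equal" without having equal values.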
public int hashCode()
Returns a hash code value for this object.
hashCode in class java.lang.Object
A hash code value for this object.
public boolean equals(java.lang.Object obj)
Returns whether the specified object is equal to this object. Two Measurement objects are equal if they have same value, error and Unit.
Note: This class has a natural ordering that is inconsistent with equals. See compareTo(java.lang.Object).
equals in class java.lang.Object
obj - The object to compare with this object.
true if this object is equal to the specified object; false otherwise.
Copyright © OSGi Alliance (2000, 2010). All Rights Reserved. Licensed under the OSGi Specification License, Version 1.0
Practical Applications in Digital Signal Processing: Review of Digital Frequency
This chapter explains how to define a digital frequency and how to mathematically represent a digital frequency.
It is easy to mathematically represent an analog frequency on paper. The range of frequencies in the analog domain is both continuous and theoretically infinite. If we use the symbol f_O to represent some arbitrary analog frequency, all we need to do is equate it with any one of an infinite number of available frequencies. We could, for example, choose f_O to be equal to 23.456 Hz, or we could just as easily choose f_O to be equal to 1.005 MHz. We could choose just about any other value to any precision that we can dream up. As long as we remain realistic, there is no limit on the values that f_O can take on.
However, a digital system operates on digital data and generates digital results that are valid only at discrete increments of time equal to the period of the system sample clock. Therefore the values that a digitally generated discrete frequency can take on form a small subset of the range of values available to analog frequencies. The discrete frequency values within this subset are directly related to and dependent on the sample rate of the digital system clock.
This leads to some confusion when people deal with digital frequencies for the first time. Much of the confusion can be summed up with three frequently asked questions:
1. How do I define a digital frequency?
2. How do I mathematically represent a digital frequency?
3. How do I synthesize a digital frequency in hardware or software?
The scope of this chapter is to provide an answer for the first two of these questions. The answer to question number 3 requires its own chapter and is dealt with in detail in Chapter 8, “Digital
Frequency Synthesis.”
1.1. Definitions
In this chapter, we will make the following symbol definitions:
1. f defines any arbitrary analog frequency in hertz.
2. f_O defines a specific analog frequency in hertz.
3. f_K defines a specific digital frequency in hertz.
4. ω_O defines a specific analog radian frequency in radians/second.
5. ω_K defines a specific digital radian frequency in radians/second.
6. f_S defines the sample rate or the frequency of a digital system clock.
7. T defines the period of the digital sample clock, T = 1/f_S.
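To make the sample-rate dependence concrete, the small sketch below (an illustration, not from the chapter; the rate f_S = 48 kHz and the block length N = 16 are assumed example values) lists the discrete frequency grid k·f_S/N that a simple N-point lookup synthesizer clocked at f_S could produce, a small, sample-rate-dependent subset of the analog continuum:

```python
# Illustration: for a synthesizer stepping through an N-point table at
# sample rate f_S, the available digital frequencies are f_K = k*f_S/N.
# Both f_S and N below are assumed example values.
f_S = 48000.0   # sample rate in Hz
N = 16          # table length

digital_freqs = [k * f_S / N for k in range(N // 2 + 1)]  # up to Nyquist
print(digital_freqs[:4])   # [0.0, 3000.0, 6000.0, 9000.0]
```

Changing f_S rescales every entry of this grid, which is the dependence described in the text.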
A032465 - OEIS
%S 2,4,6,12,16,20,23,32,38,48,54,60,100,116,118,119,140,150,163,170,190,
%T 244,271,294,299,320,334,414,439,442,468,724,794,842,864,1032,1750,
%U 2050,3980,4010,4756,5096,5963,5966,6836,14160,16748,16844,19814,25398
%N Numbers n such that 177*2^n+1 is prime.
%H Ray Ballinger, <a href="http://www.prothsearch.net/index.html">Proth Search Page</a>
%H Ray Ballinger and Wilfrid Keller, <a href="http://www.prothsearch.net/riesel.html">List of primes k.2^n + 1 for k < 300</a>
%H Wilfrid Keller, <a href="http://www.prothsearch.net/riesel2.html">List of primes k.2^n - 1 for k < 300</a>
%H <a href="/index/Pri#riesel">Index entries for sequences of n such that k*2^n-1 (or k*2^n+1) is prime</a>
%K nonn,hard
%O 1,1
%A _N. J. A. Sloane_.
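The definition on the %N line can be checked directly against the first terms on the %S line; a minimal stdlib-only sketch (the is_prime helper is ours, not part of the entry):

```python
def is_prime(m: int) -> bool:
    """Deterministic trial division; fine for the small values checked here."""
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    d = 3
    while d * d <= m:
        if m % d == 0:
            return False
        d += 2
    return True

# Numbers n such that 177*2^n + 1 is prime: reproduces the start of
# the %S line for n up to 20.
terms = [n for n in range(1, 21) if is_prime(177 * 2**n + 1)]
print(terms)   # [2, 4, 6, 12, 16, 20]
```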
Towards a Prototype of a Spherical Tippe Top
Journal of Applied Mathematics
Volume 2012 (2012), Article ID 268537, 34 pages
Research Article
Towards a Prototype of a Spherical Tippe Top
^1Howest, ELIT, University College West Flanders, G. K. De Goedelaan 5, 8500 Kortrijk, Belgium
^2Department of Mathematical Analysis, Research Group NaM2, University of Ghent, Galglaan 2, 9000 Ghent, Belgium
^3Department of Architecture, Sint-Lucas Visual Arts, Institute for Higher Education in the Sciences and the Arts, 9000 Ghent, Belgium
^4Howest, Industrial Design Center, University College West Flanders, Marksesteenweg 58, 8500 Kortrijk, Belgium
Received 14 April 2011; Revised 6 October 2011; Accepted 7 October 2011
Academic Editor: Yuri Sotskov
Copyright © 2012 M. C. Ciocci et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Among spinning objects, the tippe top exhibits one of the most bizarre and counterintuitive behaviours. The commercially available tippe tops basically consist of a section of a sphere with a rod.
After spinning on its rounded body, the top flips over and continues spinning on the stem. The commonly used simplified mathematical model for the tippe top is a sphere whose mass distribution is
axially but not spherically symmetric, spinning on a flat surface subject to a small friction force due to sliding. Three main dynamical behaviours are distinguished: tipping, nontipping, and hanging, that is, the top rises but converges to an intermediate state instead of rising all the way to the vertical state. Subclasses can further be distinguished according to the stability of relative equilibria. Because our concern is the degree of confidence in the predictions of the mathematical model, we applied 3D printing and rapid prototyping to manufacture a "3-in-1 toy" that captures the three main characteristics defining the three main groups in the classification of spherical tippe tops mentioned above. We propose three designs. This "toy" is suitable for validating the mathematical model both qualitatively and quantitatively.
1. Introduction
Spinning toys are among the most ancient toys, and there is a great variety of them. It is quite simple to start spinning objects like a top or a gyroscope, and though it is simple to explain their
motion in general, it is challenging to write down the detailed equations of motion. Among spinning objects, the tippe top exhibits one of the most bizarre and counterintuitive behaviours. The
commercially available tippe tops, patented in Denmark in the 50s, basically consist of a section of a sphere with a rod. After spinning on its rounded body, the top flips over and continues spinning
on the stem. It is the friction with the bottom surface and the position of the center of mass below the centre of curvature that cause the tippe top to rise its centre of mass while continuing
rotating around its symmetry axis (through the stem). See Figure 1 for an illustration. Remarkably, at the inverted state, the center of mass lies higher than at the initial condition, defying
gravity. Experimentally, it is known that such a transition occurs only if the initial spin exceeds a certain critical threshold.
The commonly used simplified mathematical model for the tippe top is a sphere whose mass distribution is axially but not spherically symmetric, spinning on a flat surface subject to a small friction
force that is due to sliding. Adopting a bifurcation theory point of view we reach a global geometric understanding of the phase diagram of this dynamical system. According to the eccentricity of the
sphere and the Jellett invariant (which includes information on the initial angular velocity) six main classes of tops can be identified within three main groups according to the distinguished
dynamical behaviours: tipping, nontipping, and hanging; see Figure 3. Note that objects displaying inversion properties such as the tippe top have been known since the 1800s, see, for example, [1].
After the type of tippe top as in Figure 1 was introduced in Denmark, several theoretical articles have been published since then, see, for example, [2–6] for a survey of the literature. Since it was
established that sliding friction was necessary to explain the tippe top inversion [2, 7, 8], many studies have been dedicated to the analysis of models for tippe tops, involving linear stability
analysis of the relative equilibria, numerical simulations, and so forth. Some studies have addressed the occurrence of transitions between rolling and sliding during the motion, see [3, 4, 9]. In
this paper the presented mathematical results mainly reproduce those in [10–15] but our approach is inspired by the hands-on numerical approach as first attempted by Cohen in [2]. We believe this
approach is the best choice in giving a clear view of the role of the different parameters that is necessary during the design process of an actual three-dimensional object that effectively
demonstrates the model. We remark that in the mathematical model we stick to the common assumption that the only external force acting on the system consists of a normal reaction force and a
frictional force of viscous type opposing the motion of the contact point in the supporting plane. This is the most common assumption in the literature, though in [4] the inclusion of a nonlinear
Coulomb-type friction is discussed. It is shown there by numerical simulations that the Coulomb term contributes to the tippe top inversion but the effect is weaker compared to the viscous term. The
nonlinear Coulomb term results in algebraic destabilization of the initially spinning top, whereas the viscous friction gives exponential destabilization, see also [5]. This argument motivates our
choice of including viscous friction only.
The phase diagram and bifurcation diagrams illustrate the main results that confirm the findings described in [10]; the type of asymptotic dynamics is a function of the Jellett invariant (which
includes information on the initial angular velocity) and eccentricity of the sphere. Either the asymptotic state is unique or the system is bistable. Three main different regimes are distinguished:
no tippe top phenomenon occurs no matter what the initial spin is, tippe top dynamics may occur if the Jellett invariant (which is proportional to the initial spin) is sufficiently large, or
incomplete tippe top behaviour occurs, where the top rises but converges to an intermediate state instead of rising all the way to the vertical state.
We underline that though the classification results can be obtained in a less cumbersome way by using the Routhian reduction as in [10], the approach used in this paper is standard and
straightforward to implement from a prototyping point of view. Also, it is amenable to extensions that include, for example, transitions from sliding to rolling and vice versa. Our concern in this paper
is the degree of confidence in the mathematical model predictions. We wanted to be sure that the mathematical model as presented here and in [10] reflected the reality. Our goal was to investigate if
it was possible to make a “3-in-1 toy” that could catch the three main characteristics tipping, nontipping, and hanging that define the three main groups in the classification of spherical tippe tops
as mentioned above. As far as we know such a toy does not exist yet. We successfully applied the methodology for efficient use of prototyping during the design process as presented in [16]. To the
best of our knowledge, this is the first time that 3D printing and rapid prototyping have been applied to design and produce a "toy" suitable for validating, qualitatively and quantitatively, the mathematical model describing the behaviour of a dissipative nonlinear dynamical system. From the bifurcation diagram it was clear that it should be possible to hit three out of the six classes of
tippe tops (one type for each main group) by keeping one of the characterizing parameters of the system fixed and varying the other. We believe that the realization of an actual toy is a more
powerful validation of the model than software simulations which are directly affected by the underlying mathematical idealization assumptions.
Since the two parameters on which the whole classification is based, inertia ratio and eccentricity, are not independent, the challenge was to come up with a feasible prototype which could be easily
mechanically driven. After detailed mathematical calculations and the development of 3D animations (see electronic attachments or http://cage.ugent.be/~bm/tippetop/tippetop.html) we used 3D printing
to create a functional model giving us a quick and easy hands-on demonstration capability.
2. Mathematical Results
In this paper we consider a sphere whose mass distribution is axially but not spherically symmetric, spinning on a flat surface subject to a small friction force that is due to sliding. In [13],
Ebenfeld and Scheck presented a detailed analysis of the dynamics of the eccentric spinning sphere on a flat surface where friction is assumed to be only due to sliding, see also [14]. Without making
any other assumptions, we show that their results imply a full qualitative understanding of the asymptotic long-term dynamics. Whereas the treatment of [13, 14] is mainly analytical, here we adopt a
bifurcation theory point of view leading to a global geometric understanding of the phase diagram. The phase diagram in Figure 2 and bifurcation diagrams in Figure 3 illustrate our main results.
Recall that an -limit set of a dynamical system is a closed invariant set that is accumulated by a (forward) trajectory [17]. Our main result is summarized in the following theorem.
Theorem 2.1. A spinning eccentric sphere on a flat surface with small slipping friction admits three types of (asymptotically stable) -limit sets:(i)vertically spinning top (), which has its center
of mass straight below its geometric center,(ii)vertically spinning top (), which has its center of mass straight above its geometric center,(iii)intermediate spinning top (), whose center of mass is
neither straight below nor straight above its geometric center.These are solutions of constant energy that are purely rolling due to the assumption on sliding friction (i.e., they display no
slipping). The vertical states are periodic, whereas, in general, the intermediate states are quasiperiodic.
At most two of the above types of solutions can be stable at the same time. In case the stable solution is unique its basin of attraction consists of almost the entire phase space (subset of full
measure). If the system is bistable, the separatrix between the two different domains of attraction for the asymptotically stable states is expected to be formed by the stable manifold of an unstable
intermediate spinning top solution.
All the analytical results needed to arrive at this conclusion can in principle be found in [13, 14]. However, these papers stop short of drawing the full global conclusions as formulated in the
above theorem, and also crucially they did not present the phase diagram and bifurcation diagrams that we present in Figures 2 and 3.
We note that for the eccentric sphere in regime I, the state is always asymptotically stable and thus does not display the tippe top phenomenon. Similarly, no such dynamics arises in regime III since
there the inverted state is always unstable. Tippe top dynamics may occur in regime II if the Jellett invariant (which is proportional to the initial spin) is sufficiently large, corresponding to the
empirical observation that tippe top dynamics requires a sufficiently large initial spin. In the subregimes IIb and III it is also possible to observe incomplete tippe top behaviour, where the top
rises but converges to an intermediate state instead of rising all the way to the vertical state . Note that, in regime I, tipping might occur if the top is initially spun sufficiently fast under an
angle not close to .
It is important to recognize the existence of symmetries. Recall that symmetries are transformations of the phase space that map solutions of a system to other solutions. In our model of the
eccentric sphere on a flat surface, symmetries arise due to the homogeneity of the surface on which the sphere moves and the rotational symmetry of the eccentric sphere. The combined symmetries are
thus the Euclidean group (acting as translations and rotations in the plane) and the rotation group acting as rotation of the sphere around its axis of symmetry.
It turns out that the ω-limit sets mentioned above are all relative equilibria with respect to the symmetry group , see Section 4. Recall that relative equilibria with respect to a group are
equilibria for the associated flow on a reduced phase space that is obtained from the original phase space by taking the quotient with respect to the action of . The existence and type of such
relative equilibria depend solely on the inertia ratio, the eccentricity of the sphere, and the Jellett integral of motion. We identify a number of regimes characterizing the relative equilibria as a
function of the Jellett invariant (which is proportional to the initial angular velocity). The vertical states and are always relative equilibria and their stability depends on the inertia ratio, the
eccentricity of the sphere, and the Jellett integral of motion. In addition, intermediate states may exist, which branch off from the vertically spinning solutions. We sketch the phase diagram in
Figure 2. For the labeled regions in this phase diagram, the corresponding bifurcation diagrams for the relative equilibria are presented in Figure 3.
The proof of Theorem 2.1, which builds upon the results by [13, 14], can be found in the appendix. In order to present our point of view clearly and in a self-contained way, in Section 3 we also
present a derivation of the equations of motion of the eccentric sphere model of the tippe top, including a discussion of the symmetries and their consequences. Here also, one finds a precise
description of the assumed nature of the friction and a definition of all the relevant variables that appear as parameters in Figures 2 and 3. In Sections 4 and 5 the relative equilibria of the
system and their stability are discussed. The readers who are acquainted with the topic can start reading from Section 6.
We would like to point out that the strategy of proof used here may well be applicable to a large number of similar mechanical systems under the influence of some kind of friction, such as the Rattleback [18] or the Hycaro tippe top of Tokieda [19]. The key observation is that for mechanical systems under the influence of friction, the energy naturally becomes a Lyapunov function
since friction causes energy loss. The next observation is that orbits which do not dissipate energy need to lie entirely on a subvariety of the phase space that is defined by the condition that
friction is absent. Equilibria naturally lie on this subset since they have zero velocity and friction is absent at zero velocity. However, one would expect that typically no solution lies entirely
on this subvariety, unless the solution lies on the orbit of a symmetry group that leaves the zero friction subvariety invariant. (One can make this precise by constructing a small local perturbation
that moves a solution off the zero friction subvariety.) In many cases, the set of such relative equilibria can be accurately analyzed, either analytically or numerically, and the local stability
properties can be deduced from a dissipation-induced instability point of view, based on the local stability properties of the relative equilibria in the absence of friction. We note in this respect
that the set of ω-limit sets on the zero friction subvariety is independent of the form and size of the friction. The final step is to draw global conclusions from this local information. The latter
is within reach if one has a good understanding of the ω-limit sets. We would like to stress that Theorem 2.1 concerns the asymptotic dynamics. In an experiment with small friction, the observation
may well be dominated by transient dynamics which bears strong resemblance (on short time scales) to the dynamics of the spherical top without friction. The dynamics of the latter is rather
complicated as it is a nonintegrable Hamiltonian system. In Section 7 we present some results from numerical simulations demonstrating explicitly some examples where the transient dynamics does not
appear to prevent fast convergence to the asymptotic states (although of course for sufficiently small friction coefficient the transient dynamics would dominate on finite time intervals).
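This interplay between frictionless-like transients on short time scales and dissipative behaviour on long ones can be illustrated on a much simpler mechanical system. The sketch below is not the tippe top model: it compares a frictionless and a weakly damped planar pendulum started from the same initial condition, integrated with a classical Runge–Kutta step; all names and parameter values are our own illustrative choices. The two trajectories stay close on a short time interval but drift far apart over long times.

```python
import math

def rk4_step(f, y, dt):
    # one classical fourth-order Runge-Kutta step for y' = f(y)
    k1 = f(y)
    k2 = f([yi + 0.5*dt*ki for yi, ki in zip(y, k1)])
    k3 = f([yi + 0.5*dt*ki for yi, ki in zip(y, k2)])
    k4 = f([yi + dt*ki for yi, ki in zip(y, k3)])
    return [yi + dt/6.0*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def pendulum(mu):
    # damped pendulum: theta'' = -sin(theta) - mu * theta'
    return lambda y: [y[1], -math.sin(y[0]) - mu*y[1]]

dt, mu, t_end = 0.001, 0.01, 50.0     # illustrative values
f_free, f_damp = pendulum(0.0), pendulum(mu)
y_free, y_damp = [2.0, 0.0], [2.0, 0.0]
dev_short = dev_long = 0.0
for n in range(int(t_end / dt)):
    y_free = rk4_step(f_free, y_free, dt)
    y_damp = rk4_step(f_damp, y_damp, dt)
    dev = abs(y_free[0] - y_damp[0])
    if n * dt < 1.0:
        dev_short = max(dev_short, dev)   # deviation on a short window
    dev_long = max(dev_long, dev)         # deviation over the full run
```

For small mu the short-window deviation stays tiny, while the accumulated phase drift over many periods makes the long-time deviation of order one, mirroring the remark that transients of the weakly damped system resemble the frictionless dynamics only on finite time intervals.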
3. The Equations of Motion
We consider an eccentric sphere as in Figure 4, where denotes the center of mass and the center of the sphere. The line joining the center of mass and the geometrical center is an axis of inertial
symmetry: in the plane perpendicular to this axis the moment of inertia tensor of the sphere has two equal principal moments of inertia.
We describe the motion of the sphere using three reference frames:
(I) an inertial (laboratory) frame , where is some point on the table and the -axis is the vertical;
(II) a (noninertial) rotating frame whose origin is at the center of mass and whose third axis is always parallel to the vertical (the and axes are specified below);
(III) a principal axis system whose -axis is the symmetry axis of the sphere.
(Note that in [5] the origin of the reference frames (II) and (III) is at the center of the sphere, and not at the center of mass.)
The reference frames (II) and (III) are indicated in Figure 4. The eccentricity is the distance between the center of mass and the geometric center of the sphere, with , where denotes the radius of
the sphere. We denote the moments of inertia and . The point of contact with the plane of support is denoted by .
Let be the Euler angles of the body with respect to , see Figure 5 for an illustration. The -plane contains the vector which joins the center of mass to the point of contact. The plane is inclined at
angle to the vertical -plane and precesses with angular velocity around the vertical . We choose the horizontal in so that is perpendicular to . For the rotating frame, is in and perpendicular to the symmetry axis and the axis coincides with . (Note that the axes and are principal axes, but they are not body fixed.) The angle between the vertical and the axis of the top is denoted by . The
angular velocity describes the nutation of the body in the vertical plane . The angle describes the orientation of the body with respect to the frame and is the spin of the sphere around its symmetry
axis. We denote by the unit vectors along and by those along , and . Note that . Because of the inherent translational symmetry (of the body on the plane), it is convenient to describe the body in
terms of the relative (moving) reference frames (II) and (III), rather than in the absolute reference frame (I). By doing so we thus ignore the translational motion on the plane and focus on the
relative motion of the body, which captures the tippe top behaviour.
The (relative) position vector of the body is or . Note that the coordinates of the reference frames (II) and (III) are related by the relations . The reference frames (II) and (III) rotate with respective angular velocities . The angular velocity of the body involves, in addition, the angular velocity : , where denotes the component of about , better known as the spin. For later use we introduce the notation . Consequently, with denoting the inertia tensor of the sphere, the angular momentum of the sphere is given by . The point of contact has coordinates . The velocity of the point of contact is , where denotes the vector from the center of mass to the point of contact and is the velocity of the center of mass. We set and use the fact that to obtain . Hence, we obtain . The fact that the sphere remains in contact with the table is expressed by the (holonomic) constraint , where denotes the height of the center of mass above the table; compare Figure 4. From this constraint it follows that the component of is , so that the -component of vanishes, consistent with the constraint.
We note that the physical interpretation of concerns the phenomenon of slipping. In case the body slips on the surface. In contrast, a rolling motion of the body is characterized by the fact that .
The equations of motion will be derived, in Newton’s spirit, as a consequence of the action of external forces. We distinguish the following forces acting on the sphere:
(i) the gravitational force , where is the total mass of the sphere;
(ii) a force acting on the point of contact , where is the normal reaction force at (due to the stiffness of the surface) and is a friction force.
For completeness we mention [20], where a mathematical model for the tippe top is proposed that takes the elasticity properties of the table and tippe top into account.
Friction is the resistive force acting between bodies that tends to oppose and damp out motion. Friction is usually distinguished as being either static friction (the frictional force opposing
placing a body at rest into motion) or kinetic friction (the frictional force tending to slow a body in motion). Importantly, we assume that the friction force is entirely due to the slipping of the
sphere on the surface and neglect all other sources of friction. Friction forces can be complicated, and there are various models in circulation. We adopt the assumption of viscous friction [9, 15,
21] and assume the friction force to be given by where is the coefficient of sliding friction with the dimension of (velocity)^−1. is proportional to the size of the normal reaction force and
vanishes smoothly when . (An alternative model for the friction force is the so-called Coulomb friction . This model is not appropriate when due to the singular nature of this force when .) Euler’s
equations of motion for the sphere, govern the evolution of the angular momentum in a noninertial reference frame, rotating with frequency , due to the influence of the external torque . The equation
of motion for the center of mass in the rotating frame is . In terms of the coordinates in reference frame (III), the equations of motion (3.13) yield , where and . From the equation for the motion of the center of mass (3.14), in terms of reference frame (II), we obtain . Recalling that , from the last of these equations we may derive an expression for : . The equations of motion (3.15) and (3.16)
can be written as a system of six coupled first-order nonlinear ordinary differential equations in the variables , where , , , and .
Setting for simplicity, these may be arranged in the standard form (when ). It should be remembered that , , and are still functions of the other variables. For instance, from (3.17) and (3.18) one finds . Recall that we require that ; if this condition fails, the sphere loses contact with the surface. Expressions for and follow similarly from (3.12).
It is important to recognize that some of the structure of the equations of motion (3.18) is due to symmetry. We recall that the symmetries are the Euclidean group (acting as translations, rotations,
and reflections in the plane) and the rotation group acting as rotation of the sphere around its axis of symmetry. The effect of the Euclidean symmetry is that the right-hand side of the equations of
motion contains no reference to the position of the sphere on the surface. In a similar way, due to the rotational symmetry, the equations of motion do not depend explicitly on . The system can be
viewed as three coupled systems, where the coupling is of skew product type: the evolution of , and does not depend on , and , and the evolution of , and does not depend on . Moreover, note that the
position of the center of mass relative to the surface (in coordinates) could in principle be obtained by integrating the velocities and over time. Because of the fact that we take friction into
account, Noether’s theorem does not apply, so the continuous symmetries we observe need not (and do not) give rise to conserved quantities. However, it was discovered by Jellett [8] by an approximate
argument, and later proved by Routh [6], that the system (3.18) has the following conserved quantity: . Indeed, it follows from (3.13) that , so that . Note that the Jellett invariant can be written as
([10, 15, 21])
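Conservation of a quantity such as the Jellett invariant along solutions can be checked numerically. Since the explicit tippe top right-hand side is not reproduced in this sketch, we use the planar Kepler problem as a hypothetical stand-in, with its conserved angular momentum playing the role of the invariant; all names and values below are our own illustrative assumptions.

```python
import math

def rk4_step(f, y, dt):
    # one classical fourth-order Runge-Kutta step for y' = f(y)
    k1 = f(y)
    k2 = f([yi + 0.5*dt*ki for yi, ki in zip(y, k1)])
    k3 = f([yi + 0.5*dt*ki for yi, ki in zip(y, k2)])
    k4 = f([yi + dt*ki for yi, ki in zip(y, k3)])
    return [yi + dt/6.0*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def kepler(y):
    # planar Kepler problem: acceleration = -x / |x|^3
    x, z, vx, vz = y
    r3 = (x*x + z*z) ** 1.5
    return [vx, vz, -x/r3, -z/r3]

def ang_mom(y):
    # conserved angular momentum, playing the role of the invariant
    x, z, vx, vz = y
    return x*vz - z*vx

y = [1.0, 0.0, 0.0, 1.2]          # a bound, mildly eccentric orbit
L0 = ang_mom(y)
drift = 0.0
for _ in range(20000):             # integrate up to t = 20
    y = rk4_step(kepler, y, 0.001)
    drift = max(drift, abs(ang_mom(y) - L0))
# drift remains at the level of the integration error
```

The same pattern (evaluate the candidate invariant along a numerically integrated orbit and monitor its drift) applies verbatim to the Jellett invariant once the right-hand side of (3.18) is coded up.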
4. ω-Limit Sets Are Relative Equilibria
Our aim is to describe the asymptotic dynamics of the eccentric sphere. Recall that a subset of the phase space is an ω-limit set if this set is accumulated by (forward) orbits. While the friction
force destroys the Hamiltonian nature of the dynamics, it greatly simplifies the asymptotic dynamics. This follows from the fact that in the presence of friction the energy, which is conserved in the
absence of friction, is almost always decreasing along solutions.
The energy is given by , where is the kinetic energy with its rotational and translational part and is the potential energy. With our choice of variables we may write and , where .
Lemma 4.1 (see [13]). The energy is a Lyapunov function (recall that a Lyapunov function is nonincreasing along orbits) for (3.18). In particular,
As is parallel (and opposite) to , vanishes if and only if vanishes. Observe that decreases monotonically and hence is a suitable Lyapunov function. Moreover, is analytic and therefore along orbits
it is either strictly decreasing or constant. The energy is constant only if , that is, in the absence of friction. Thus, the ω-limit sets must consist of orbits which do not experience friction. We
show that such orbits are necessarily relative equilibria.
Proposition 4.2. Solutions have constant energy only if they are relative equilibria with respect to the action of .
Proof. We already concluded that needs to be equal to 0 along any orbit in an ω-limit set. A straightforward calculation shows that indeed implies that , so that such a solution must be a relative equilibrium.
This observation is in fact what one would generically expect to find. If is a submanifold of the phase space that corresponds to the absence of friction, in general it would be quite unexpected to
find a nonequilibrium solution that lies entirely inside .
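The argument of this section can be mimicked on a minimal mechanical analogue. The sketch below uses a damped pendulum as a stand-in for the eccentric sphere (the friction coefficient and initial condition are illustrative assumptions, not tippe top parameters): it records the largest single-step energy increase along the orbit, which should stay at the level of the integration error since the exact energy is nonincreasing, and checks that the final state lies on the equilibrium set.

```python
import math

def rk4_step(f, y, dt):
    # one classical fourth-order Runge-Kutta step for y' = f(y)
    k1 = f(y)
    k2 = f([yi + 0.5*dt*ki for yi, ki in zip(y, k1)])
    k3 = f([yi + 0.5*dt*ki for yi, ki in zip(y, k2)])
    k4 = f([yi + dt*ki for yi, ki in zip(y, k3)])
    return [yi + dt/6.0*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

mu = 0.5                           # illustrative friction coefficient
f = lambda y: [y[1], -math.sin(y[0]) - mu*y[1]]
energy = lambda y: 0.5*y[1]**2 + 1.0 - math.cos(y[0])  # Lyapunov function

y = [2.5, 0.0]                     # released near the inverted position
e_prev = energy(y)
max_increase = 0.0
dt = 0.001
for _ in range(60000):             # integrate up to t = 60
    y = rk4_step(f, y, dt)
    e = energy(y)
    max_increase = max(max_increase, e - e_prev)  # should be ~0
    e_prev = e
theta, omega = y
# the orbit converges to the zero-friction (equilibrium) set
```

In the full tippe top model the same monotonicity argument singles out the relative equilibria rather than plain equilibria, because of the symmetry group discussed above.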
5. Stability and Bifurcations of Relative Equilibria
Having determined that the relative equilibria are the only possible asymptotic states in the presence of friction, in this section we derive these solutions of constant energy using the explicit equations of motion (3.15)-(3.16); see also [5, 13, 15]. With and , the equations of motion yield , , , , , and . These equations have the following three types of solutions. The linear stability
analysis differs from [13, 14] in methodology.
Vertical States
(1)Vertical state : The top is spinning about its axle with center of mass straight below the geometric center.(2)Vertical state : The top is spinning about its axle with center of mass straight
above the geometric center.
Intermediate States
For these solutions we have , , and are related by . Elimination of from the above yields . Hence, the condition for the existence of intermediate states is .
It is natural to divide the solutions into three groups, according to regimes of the parameters and [21]:
Group I: . Intermediate states exist with .
Group II: . Intermediate states exist with any .
Group III: . Intermediate states exist with .
As in [10] we further refine this classification. Note that the intermediate states discussed here correspond to the tumbling solutions discussed in [13].
The intermediate states are completely determined by (5.4), (5.5), and the Jellett invariant . More precisely, combining the square of (3.22) with (5.4) and (5.6), they are obtained by solving . The next theorem summarizes the linear stability and local bifurcation results, as depicted in Figure 3 (cf. [21]); the proof is sketched in the Appendix. We identify six different groups according
to how the value of is related to the eccentricity . In the literature, results have previously been expressed in terms of variables and , referring to the spin of an initial condition in a vertical
state. We define , which is the value of the spin at for motion with Jellett invariant . Similarly, denotes the spin of the solution with Jellett invariant at . Note that, for a fixed value of the
Jellett invariant , these spins are related by . We further define and . Furthermore we define and .
Theorem 5.1. The bifurcation diagrams of the eccentric sphere spinning on a flat surface with small friction fall into one of the following six categories (Figure 3).
Group I: .
(i) The vertical state is stable for any value of .
(ii) The vertical state is stable if and unstable otherwise.
(iii) Intermediate states exist for all values of satisfying .
Group Ia: . The entire branch of intermediate states is unstable.
Group Ib: . The branch of intermediate states has a fold point at . The branch with is stable, while the branch with is unstable.
Group II: .
(i) The vertical state is stable if and unstable otherwise.
(ii) The vertical state is stable if and unstable otherwise.
(iii) Intermediate states exist for all . We distinguish the following three subgroups.
Group IIa: and . A fold bifurcation of intermediate states occurs.
Group IIb: or . The entire branch of intermediate states is stable.
Group IIc: and . The entire branch of intermediate states is unstable.
Group III: .
(i) The vertical state is stable if .
(ii) The vertical state is unstable for all .
(iii) Intermediate states exist for and are all stable.
The proof of Theorem 5.1 can largely be recovered from [10]; for completeness we provide the calculations based on a direct approach in the appendix.
6. Prototype of a Spherical Tippe Top
Rapid prototyping (RP) technologies enable solid models to be obtained from designs generated with CAD applications. Their increasing popularity in industry is due to the reduction in cost and time
associated with the use of these models when verifying product development stages and improvements in end quality. These technologies can also be applied to verify the correctness and/or accuracy of
mathematical models and, last but not least, to enhance students’ active learning in the framework of a learning-by-doing approach. Students can bring their designs to fruition and develop a deeper insight into abstract concepts. We made a prototype of a spherical tippe top for educational use in the Product Development Laboratory of Howest.
As pointed out earlier, to realize a 3-in-1 toy an axially symmetric sphere is needed for which one has control over and . We considered three possible designs:
(1) a solid sphere with a cylindrical hole through the center in which a setscrew can move;
(2) a hollow sphere with a cylindrical rod on which a weight can move;
(3) a hollow sphere with a toroidal band at the equator, fitted with a cylindrical rod on which a weight can be screwed.
From the bifurcation diagram in Figure 2, it is clear that the three main groups can be reached by fixing and changing . Therefore, it is important to understand, for each of the three designs, how and vary with respect to each other when the weight is moved. We set up a Maple worksheet based on the given mathematical description and calculated and as functions of the position of the midpoint of the moving weight with respect to the center of the sphere; this position will from now on be denoted by . We took into account the physical parameters: the dimensions of the different parts (radii, heights, and thickness) and the densities of the materials.
From this we concluded that for the solid sphere all three types are within reach, whereas for the hollow sphere the design has to be modified. Our modifications resulted in the third design given above. We now discuss our findings for the realized prototypes. Our realizations were all printed in ABSplus with the commercially available Dimension SST1200es, whose printing technology is based on the FDM principle (fused deposition modeling).
6.1. Sphere with Cylindrical Hole and Setscrew
For the first design we realized three different tops, varying the geometrical dimensions. This was done because the calculations showed that for the given materials some zones are hard to achieve or are very narrow; see Figure 7. The prototype consists of a sphere with a cylindrical hole through the center, together with a piece of adjustable cylindrical iron wire (a setscrew); see Figure 6. With a caliper, it can be checked how deep the setscrew sits in the hole. The position of the midpoint of the setscrew with respect to the center of the sphere is denoted by . The hole is suitable for an M12 setscrew. The dimensions of the toy were chosen based on the mathematical calculations derived from the model. The diameter of the sphere was chosen so that one can comfortably spin the toy by
hand. With a sphere of diameter 50mm, good values for the chosen design are a hole of radius 5.5mm, filled with the setscrew of height 15mm, or a hole of radius 1.5mm, filled with the setscrew of
height 3mm, see Figure 7. The densities are 1.08g/cm^3 for ABSplus and 7.87g/cm^3 for the setscrew.
The prototype is axially symmetric; therefore, only the eccentricity and the moment of inertia are functions of , and they are easily calculated; remains constant when moving the setscrew up and down. In Figure 7 the quantities , , and are plotted as functions of , (a) for the prototype with an M12 setscrew and (b) for the prototype with an M3 setscrew. The printed prototype in Figure 6 is of the first type and, according to the mathematical calculations, will exhibit the following behaviour: for between mm and mm the toy does not show tippe top dynamics no matter what the initial spin is (type I), while for between mm and mm complete tippe top dynamics is observed (type IIc). For above mm the top is of type IIb (incomplete tipping is observed if the initial spin is not sufficiently large).
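The dependence of the mass properties on the setscrew position can be sketched with a simplified composite-body model: a solid ABS sphere of diameter 50mm, the hole approximated by a full-diameter cylinder of radius 5.5mm, and the setscrew modeled as an iron cylinder of the same radius and height 15mm, its midpoint at height d (our notation) above the geometric center. This is not the Maple worksheet used for the prototype: the full-length-hole approximation and all helper names are our own; only the dimensions and the densities 1.08g/cm^3 and 7.87g/cm^3 are taken from the text.

```python
import math

# dimensions (cm) and densities (g/cm^3) as quoted above
R, r, h = 2.5, 0.55, 1.5          # sphere radius, hole/setscrew radius, setscrew height
RHO_ABS, RHO_FE = 1.08, 7.87
L = 2.0 * R                       # hole approximated as a full-diameter cylinder

def mass_properties(d):
    """Eccentricity and moments of inertia when the setscrew midpoint sits
    a height d (cm) above the geometric center, on the symmetry axis."""
    m_s = RHO_ABS * (4.0/3.0) * math.pi * R**3    # full ABS sphere
    m_h = RHO_ABS * math.pi * r**2 * L            # ABS removed by the hole
    m_w = RHO_FE * math.pi * r**2 * h             # iron setscrew
    M = m_s - m_h + m_w
    z_cm = m_w * d / M                            # center-of-mass height
    eps = abs(z_cm)                               # eccentricity
    # axial moment of inertia (about the symmetry axis): independent of d
    I3 = 0.4*m_s*R**2 - 0.5*m_h*r**2 + 0.5*m_w*r**2
    # transverse moment about the center of mass (parallel-axis theorem)
    I1 = (0.4*m_s*R**2 + m_s*z_cm**2
          - (m_h*(3*r**2 + L**2)/12.0 + m_h*z_cm**2)
          + (m_w*(3*r**2 + h**2)/12.0 + m_w*(d - z_cm)**2))
    return eps, I1, I3
```

For instance, mass_properties(0.0) gives zero eccentricity, while moving the setscrew outward increases the eccentricity without changing the axial moment of inertia, consistent with the remark above that the axial moment remains constant when the setscrew is moved up and down.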
For the different positions of the weight, we launched the toy by hand many (≥50) times and registered each time tipping, nontipping, or hanging. In Table 1 we report the typical results for 5 launches. Note that the tippe top is hand spun, so there will be a deviation from the starting position . Tipping and nontipping were mostly observed in the setups of type IIa and I, respectively. For the setup of type IIb, the expected behaviour (tipping) was not observed; we always observed the hanging behaviour. This is because we were not able to launch the toy fast enough by hand and also because the setscrew sticking out of the toy does not allow an ideal launching position. Our observations indicate that the prototype behaves as predicted by the model. In detail, tipping and hanging at can be explained by the presence of a branch of intermediate states and a stable position that one could hit if the toy is not launched exactly from the position; see Figure 3. The hanging at is due to the fact that we did not launch the top fast enough.
For a prototype fit for an M3 screw, the intervals for are as follows: for between mm and mm the toy does not show tippe top dynamics no matter what the initial spin is (type I); for between mm and mm complete tippe top dynamics is observed (type II). For above mm the top is of type III (incomplete tipping is observed). This prototype was also spun ≥50 times, and we registered observations similar to those for the previous one.
We conclude that this prototype can give a working 3-in-1 toy but has some disadvantages:
(i) the setscrew can come loose after intensive use;
(ii) when three zones are present, at least one of the zones is small;
(iii) using a caliper to check whether the setscrew is in the center is not practical.
Several attempts were made in the computations to improve the design, for example, by adding holes to the solid sphere. These attempts were not successful, so no other prototype was printed. Instead, we concentrated on the sphere with a cylindrical rod.
6.2. Sphere with Cylindrical Rod
The second design consists of a spherical shell with a cylindrical solid rod through the center along which a symmetric bead is spun; this bead can be put at different heights along the rod. In this
design, the user must open the sphere and change the position of the weight by screwing it up or down, after which the sphere can be closed and spun. See Figure 8.
The advantage of this design is that different ABS colors can easily be used for both sides, making the tipping more visible, and that the rod can be marked at the critical positions. Whereas in the first design setscrews of different lengths can be used, in this design weights of different lengths and different widths can be considered within one toy. Also, different materials for the rod can be used.
However, computations show that a 3-in-1 toy is difficult to obtain with the chosen materials: ABS, iron, and nylon. We briefly summarize the findings that form the basis of the further improvements leading to the third prototype. We tried three different materials for the rod: iron, nylon, and ABS. Nylon and ABS turned out to work best. With an iron rod and physical parameters that allow easy playing with the toy, we did not succeed in capturing all three zones. See below for the specific values of the parameters. As Figure 9(a) shows, only zone three is hit, which means that this tippe top never shows complete inversion but may tip up to a certain angle.
The prototype is axially symmetric; therefore, only the eccentricity and the moment of inertia are functions of , and they are easily calculated; remains constant when moving the weight up and down. The physical parameters for the construction are: radius of the spherical shell 25mm, thickness of the shell mm, radius of the cylindrical rod mm, radius of the weight 10mm, and height of the weight mm.
In the case of a nylon or ABS rod, it was possible to obtain a tipping top. As illustrated in Figure 9(b), both zones II and III are hit, which means that complete and incomplete tipping may be observed, depending on the position of the weight. Note that the section of the blue curve in zone III is very small, which makes it very difficult to observe the hanging phenomenon. A similar observation holds for the section of the blue curve in zone I; this section is so small that nontipping behaviour cannot be observed in practice. Many launches of this prototype only confirmed these observations. This seems to be a good type II tippe top, but we were not able to observe the other behaviours.
6.3. Sphere with a Toroidal Band and a Cylindrical Rod
The third design consists of a spherical shell with a toroidal band around the equator and a cylindrical solid rod through the center along which a symmetric bead is spun. Also in this design, the
user must open the sphere and change the position of the weight by screwing it up or down, after which the sphere can be closed and spun. See Figure 10.
Adding a toroidal band was a way to find a compromise between a spherical shell and a solid sphere. The prototype is still axially symmetric, and the band provides a better click system to open and
close the toy. The physical parameters for the construction are radius of the spherical shell of mm, thickness of the shell of mm, radius of the cylindrical rod of mm, radius of the weight of mm, and
height of the weight of mm. The band has the form of a solid of revolution generated by an ellipse rotating around the rod, the semiaxes of the ellipse measure, respectively, mm and mm. The rod is
made of iron. According to the mathematical model, the top should behave as follows: for between mm and mm the toy does not show tippe top dynamics (type I); for between mm and mm complete tippe top dynamics is observed (type II). For above mm the top is of type III (incomplete tipping is observed).
The toy was launched many times, allowing us to observe without problems zones I and II predicted by the model. Zone III, however, is difficult to observe: the maximum that can be obtained with the constructed prototype was only 15mm (due to the fastening system of the rod), and moreover the effect is not so easy to see. Different weights can, however, be used so that zone III becomes visible; for example, this is the case when using an iron weight with radius mm and height mm. We conclude that this last prototype is a good candidate for the 3-in-1 toy, although some further optimizations of the physical parameters can be considered (Figure 11).
7. Numerical Illustrations
7.1. System Trajectories
In this section we present some simulations of (3.15)-(3.16). We focus on the parameter regime of Group II since these are the tops exhibiting “tipping” behaviour. Indeed, if the initial spin , then
tipping occurs. The trajectories lie in the (reduced) 6-dimensional phase space (we ignore the equation); here we show their projections in the 3-dimensional subspace of the variables .
Figure 12 shows a number of trajectories for a tippe top of Group IIb starting from initial conditions and . Other input parameters are , the friction coefficient , the eccentricity , and inertia
ratio . Points with are stable whereas those for which are unstable. Let be the value of the initial spin calculated for the angle at the Jellett where the change in stability for the inverted
position () occurs. Trajectories originating near an unstable noninverted position are attracted either to one of the intermediate states at an angle when or when to a steady state for which ; in
this case, the ball rises fully to a stable inverted vertical (tippe top) position with a final spin determined by the Jellett. The blowups in the insets show oscillations in the immediate
neighborhood of the fixed points; these depend on the precise initial conditions. We note also that changing does not affect the final destination of the trajectories, but it does affect the time needed to traverse these trajectories in phase space, at least within the parameter range of our computations.
Figure 14 shows a number of trajectories for a tippe top of Group IIa with friction coefficient , starting now under an angle close to the inverted position . Physical parameters are , , and .
Recall from Figure 3 that trajectories starting near the inverted position will, depending on the initial spin, remain in the neighborhood of , go all the way down to the noninverted position , or reach a stable intermediate state. For a clearer overview of the possible behaviours we use the symmetry and sketch in Figure 13 the curve of intermediate states (5.10) in the -plane, also indicating the essential ’s at which changes in stability type occur. Recall that for a given the relations (5.6) and (5.4) determine and .
The depicted trajectories have been obtained with the initial conditions , is arbitrary, and is one of the following: , , or . These choices were made to reduce the initial oscillations in the -direction to a minimum in the drawings. We start near the equilibrium on the eigenvector of the positive eigenvalue(s) so that the motion quickly evolves in the unstable direction away from the
initial position. Trajectories originating near an unstable inverted position will either reach a stable intermediate state at when or fall in the noninverted vertical position when . Here is the
value of the initial spin corresponding to the angle at which intermediate states change stability type. Points on with are stable and trajectories starting near the inverted state will be attracted
to it.
7.2. 3D Animations
In this section we comment on the 3D animations illustrating the phenomena of “tipping” and “hanging on an intermediate state” for an eccentric sphere (http://cage.ugent.be/~bm/tippetop/tippetop.html). In the films, the eccentric sphere is drawn as a transparent ball with a top inside it. We focus on eccentric spheres belonging to Groups IIb and IIa; see Figure 3 for the corresponding bifurcation diagrams. The films were made using Maple to solve the ODE system and feeding the results to POV-Ray, ImageMagick, and FFmpeg. For the sake of clarity the films are in 5x slow motion, so 1 second of motion takes 5 seconds in the animation, at 30 frames per second. The evolution of the nutation angle is also shown in each film.
The films for a top of Group IIb show a complete flip (see http://cage.ugent.be/~bm/tippetop/tippetop_IIb_flip.mpg) and the rising to a stable intermediate state (http://cage.ugent.be/~bm/tippetop/tippetop_IIb_IntSt_Comp.mpg). For a top of Group IIa, the film shows how the top, launched upside-down, migrates to a stable intermediate state (http://cage.ugent.be/~bm/tippetop/tippetop_IIaComb.mpg).
In the first film one sees a complete flip (tippe top effect) of the sphere; the physical parameters used are gram, cm, , mg/m^2, and mg/m^2. The friction coefficient is 0.3. We show 90 seconds,
which is presented as 7.5 minutes of the film.
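As a sanity check on the timing, the slow-motion arithmetic can be sketched as follows (the helper name is ours; this is not part of the authors' Maple/Povray/ffmpeg pipeline):

```python
def film_timing(sim_seconds, slowdown=5, fps=30):
    """Running time and frame count for a slow-motion film:
    each simulated second becomes `slowdown` seconds of film,
    rendered at `fps` frames per second."""
    film_seconds = sim_seconds * slowdown
    frames = film_seconds * fps
    return film_seconds, frames

film_timing(90)  # (450, 13500): 90 s of motion -> 7.5 min of film
```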
The second film shows the motion towards a stable intermediate state from the unstable noninverted or unstable inverted position. The initial data is chosen so that the Jellett coincides with that of
a stable intermediate state. It corresponds to 45 seconds of the tippe top movement. The physical parameters are here as in the first film except for the friction coefficient which is now . The
initial conditions around and are chosen to correspond to a Jellett of approximately . The intermediate state is at degrees.
The animations for a top from Group IIa are meant to illustrate how, depending on the initial conditions, a top started at the inverted position () can fall either to an intermediate state or to the noninverted position (); see Figure 3. The physical parameters are gram, cm, eccentricity , mg/(m^2), and mg/(m^2). The friction coefficient is 0.08. On the left of the animation we see the behaviour in which the tippe top flips towards the noninverted state; however, oscillation of theta occurs. On the right, we see the movement for a slightly different initial state, with motion towards an intermediate state. Here too oscillation of theta occurs.
8. Further Remarks
Global Dynamics
Concerning Theorem 2.1 we wish to stress the importance of the local bifurcation diagrams for the global dynamics. Clearly, if there is a unique asymptotically stable -limit set, its basin of attraction is the complement of the stable manifolds of all coexisting (unstable) -limit sets, and hence a set of full measure in the phase space.
It thus remains to analyze the situation when we have two coexisting asymptotically stable -limit sets. From the bifurcation analysis we know that in such case the coexisting stable -limit sets are
the vertical states.
Theorem 8.1. The -limit sets of the eccentric sphere on a flat surface with small friction are asymptotically stable relative equilibria (with respect to ). Either the -limit set is a unique relative equilibrium, in which case the basin of attraction is the complement of the stable manifolds of the unstable relative equilibria (and hence dense, and of full measure, in the phase space), or there are at most two stable relative equilibria (the vertical states) and the system is bistable. In the latter case, the union of the basins of attraction of the two vertical states is the complement of the stable manifold of the unstable relative equilibrium, which is an intermediate state. This union is dense and of full measure in the phase space. The separatrix between the two basins of attraction (inside a level set of the Jellett invariant) consists of the stable manifold of an unstable intermediate state.
Proof. Most of the above statement is a direct consequence of the existence of the energy as a Lyapunov function (through LaSalle’s invariance principle). One readily verifies (from the local bifurcation analysis) that the stable manifold of the intermediate state coexisting with two asymptotically stable vertical states has codimension one (inside the level set of the Jellett invariant) and divides the phase space into two parts.
The regions of bistability (as a function of the Jellett invariant ) follow from the local bifurcation diagrams discussed in Theorem 5.1.
Remark 8.2. Note that the specific ’s where the changes in stability type of the steady states occur do not depend on . The viscous friction influences the time needed for an orbit to reach such a point. This fact was already clear in [13] and could have been proved in advance in our setup as well. The result remains true for more general friction laws proportional to , such as those proposed in [4]. This suggests that the study of the asymptotic dynamics of other (mechanical) problems, for example the rattleback, might be notably simplified by introducing viscous friction into the model.
Rolling Model
A “rolling” eccentric ball does not tip. In this section we give a simple argument showing that, if pure rolling is assumed, then the tippe top phenomenon cannot occur.
Solving the tippe top under the constraint of pure rolling (i.e., when the nonholonomic constraint is satisfied) allows for a complete reduction of the equations of motion to a second-order . See [3] for a discussion of this approach. In the pure rolling regime the system is no longer dissipative and admits three conserved quantities: the energy, , the Jellett as before, and the Routhian, Routh, given by [3]
“Tipping” in the rolling model would violate the conservation of . Indeed, from the Jellett invariant we know that the sign of has to change in a complete inversion since and . But this is not
allowed if Routh = constant has to hold.
The motion in the rolling model is governed by a functional relation of the type . Indeed, the conserved quantities give three relations for the components of the angular velocity . In detail, for a given , the Routhian (8.1) fixes , then the Jellett fixes , and finally the energy fixes the tipping rate (cf. (3.4)), yielding a functional relation of the type . Note further that the constraint gives , from (3.9). In this approach, as mentioned in [3], one has to check whether a solution found in this way is physically possible, that is, one has to take into account that rolling cannot be sustained if , where is the coefficient of static friction. In [3] it is remarked that only a few pure rolling precessional solutions satisfy this condition. The analysis, however, leaves open the possibility of pure rolling periodic motions around the intermediate states, as we discuss below.
Sliding versus Rolling
A debatable issue is whether transitions between sliding and rolling are possible during the motion of the top. As was pointed out in [9], such transitions must also be considered when setting up a realistic model to describe the dynamics of the tippe top. To the different regimes there correspond different sets of equations. A switch between sliding and rolling occurs as the absolute contact velocity vanishes. In the Coulomb-friction model, a switch from rolling to sliding occurs when the tangent reaction force required to maintain rolling exceeds . We refer to [4] for considerations and simulations on this topic and to [3] for a detailed analysis of the pure rolling model. We consider two test cases: the pendulum motion and the behaviour of the tippe top around a stable intermediate state.
(i) The pendulum motion is easily observed by placing the tippe top on the ground, rotating it under an angle , and releasing it. We have in this regime. Normally a pure rolling motion is observed. Only when is large might some slipping be observed initially. Solved under the pure rolling constraint, the solution is pendulum-like. In contrast, the sliding equation of motion (3.18) gives a qualitatively different solution. For , the solution is the pure sliding pendulum, where the center of mass remains fixed but the tippe top makes a pure slipping periodic pendulum motion. (Recall that for periodic solutions are possible, which was also clear from the Hopf bifurcation. However, they all disappear when .) On activation of , we have that is stable, and the slipping pendulum solution slowly degrades towards the stable point. In this case, the tippe top is best modeled with the pure rolling equations.
(ii) Periodic motion around an intermediate state is characterized by the precession of the tippe top axle around the -axis combined with a nutation where . In the pure rolling case periodic solutions can be obtained exactly. Using a point on this periodic solution as the initial condition for (3.18) allows one to investigate the persistence of this solution when friction is added; see Figure 15. For a quasiperiodic motion is obtained around the pure rolling solution. Activating makes this motion unstable, and the solution goes towards the intermediate state; this behaviour is already dominant for (note also the high value). However, for ever larger the decay slows down, the solution remaining for a very long time in the neighborhood of the pure rolling solution. The value in this case is very small, indicating that the condition for transition from slipping to rolling is satisfied.
A. Proof of Theorem 5.1
For the interested reader, the following sections contain the calculations needed for a straightforward linear stability analysis of the steady states. These form the proof of Theorem 5.1.
A.1. Stability of the Vertical State
With the Taylor expansions in , linearizing in and noting that and , the linearization of the equations of motion (3.16) and (3.15) at yields Introducing the complex coordinates , (A.3)–(A.6) can be reduced to two complex equations. The addition (A.3)+(A.4) yields whereas (A.5)+(A.6) leads to
These equations admit a solution of the form when satisfies the determinant equation When the roots of (A.10) are where we set In the absence of friction, that is, when , the vertical state is
marginally stable as are purely imaginary since .
We now analyze the effect of small friction () by examining how the roots (A.11) are perturbed to first order in : As , the vertical state is stable if . This is the case when It follows that, in
Group I (), the vertical state is always stable, while for Group II and Group III () stability requires that .
It remains to be shown how yields the relation (A.14). We focus on the inequality ; the arguments are similar for . The inequality yields Using (A.12), this gives Note that if the above condition is satisfied for all . If on the other hand the inequality holds, then ; squaring both sides yields where . Since we are in the case , we can rewrite this last condition as (A.14).
Remark A.1. Ignoring translational effects, that is, throwing everything in the variable away (cf. [5]), one is left with where we set for simplicity , . Equation (A.19) is of Maxwell-Bloch type [5]
and allows us to recover the analysis carried out in [5]. An analogous result holds when linearizing around .
A.2. Stability of the Vertical State
The stability of the vertical state is studied in a similar way as in Section A.1. From the equation of motion (3.15)-(3.16), introducing complex coordinates we obtain the coupled complex equations
The corresponding determinant equation for eigenvalue is given by When , the roots of (A.22) are Thus, at , the vertical state is at most marginally stable, when is purely imaginary, that is, when
This is of course also a necessary condition for stability if is small. Provided , corresponding to a resonance at , the roots of (A.22) are perturbed at order to Thus, for stability we have to
require , which yields Condition (A.26) is never satisfied for Group III, so is unstable. In the case of Groups I and II, when , the condition for stability is
Note that for tippe tops of Group I and II , with . The equality holds when .
A.3. Stability of Intermediate States
In this section we consider intermediate asymptotic states, which exist if condition (5.7) is satisfied. Such a state, if it exists, is of the form , with constant and related by (5.4). In order to study the stability properties, we examine the eigenvalues of the -reduced equations of motion, obtained from (3.18) by omitting the equation.
With denoting the corresponding Jacobian of this reduced equation, eigenvalues satisfy the determinant equation where is the (six-by-six) identity matrix and is a polynomial of degree 5 in . So, 0 is
always a solution of (A.28).
Remark A.2. It is not possible to reduce the system to a “Maxwell-Bloch” form around an intermediate state, in contrast to the situation around the two vertical spin states.
When the six roots of (A.28) are where and is given by (5.6). Note that from (5.7). All the eigenvalues , are on the imaginary axis, and the intermediate states at are marginally stable.
Remark A.3. It is worth noting that for the resonance occurs when . Using the expressions for and given before, one checks that this equality is satisfied when the equation admits a real root between
−1 and 1. The resonance disappears when higher-order terms in are added.
When is small, , we write the first-order perturbation of the roots as From (A.28) it follows that and are as before and The coefficients and are given by with where The coefficients are calculated with the help of Maple. For tippe tops of Group II they have a fixed sign for varying in . Note that and . The first inequality is obvious; the second is less straightforward and is proved below. The friction is stabilizing at when .
The sign of depends on only, since the two terms , are never zero for . The zero of is the bifurcation point where the change in stability type happens. When the intermediate states are no longer stable. For the sake of brevity we refer to [10] for the details on the behaviour of , where the same crucial function is encountered in the stability analysis via the Routhian reduction. This analysis completes the proof of Theorem 5.1.
Proof of . To prove that it is sufficient to prove that , see (A.33), since and . The sign is determined by the sign of . Considering as a quadratic polynomial in , , we have that the discriminant of is given by which is negative for all , . Hence, the sign of remains fixed. It is easily verified that for , or , the sign is always negative (e.g., , hence for all , and ). We conclude that .
1. J. Perry, Spinning Tops and Gyroscopic Motions, Dover, New York, NY, USA, 1957.
2. C. M. Cohen, “The tippe top revisited,” American Journal of Physics, vol. 45, pp. 12–17, 1977.
3. C. G. Gray and B. G. Nickel, “Constants of the motion for nonslipping tippe tops and other tops with round pegs,” American Journal of Physics, vol. 68, no. 9, pp. 821–828, 2000.
4. A. C. Or, “The dynamics of a Tippe top,” SIAM Journal on Applied Mathematics, vol. 54, no. 3, pp. 597–609, 1994.
5. N. M. Bou-Rabee, J. E. Marsden, and L. A. Romero, “Tippe top inversion as a dissipation-induced instability,” SIAM Journal on Applied Dynamical Systems, vol. 3, no. 3, pp. 352–377, 2004.
6. E. J. Routh, Dynamics of a System of Rigid Bodies, MacMillan, New York, NY, USA, 1905.
7. C. M. Braams, “On the influence of friction on the motion of a top,” Physica, vol. 18, pp. 503–514, 1952.
8. J. H. Jellett, A Treatise on the Theory of Friction, MacMillan, London, UK, 1872.
9. T. R. Kane and D. Levinson, “A realistic solution of the symmetric top problem,” Journal of Applied Mechanics, vol. 45, pp. 903–909, 1978.
10. M. C. Ciocci and B. Langerock, “Dynamics of the tippe top via Routhian reduction,” International Journal of Bifurcation and Chaos, vol. 12, no. 6, pp. 602–614, 2007.
11. M. Branicki, H. K. Moffatt, and Y. Shimomura, “Dynamics of an axisymmetric body spinning on a horizontal surface. III. Geometry of steady state structures for convex bodies,” Proceedings of the Royal Society of London. Series A, vol. 462, no. 2066, pp. 371–390, 2006.
12. M. Branicki and Y. Shimomura, “Dynamics of an axisymmetric body spinning on a horizontal surface. IV. Stability of steady spin states and the ‘rising egg’ phenomenon for convex axisymmetric bodies,” Proceedings of the Royal Society of London. Series A, vol. 462, no. 2075, pp. 3253–3275, 2006.
13. S. Ebenfeld and F. Scheck, “A new analysis of the tippe top: asymptotic states and Liapunov stability,” Annals of Physics, vol. 243, no. 2, pp. 195–217, 1995.
14. S. Rauch-Wojciechowski, M. Sköldstam, and T. Glad, “Mathematical analysis of the tippe top,” Regular & Chaotic Dynamics, vol. 10, no. 4, pp. 333–362, 2005.
15. H. K. Moffatt, Y. Shimomura, and M. Branicki, “Dynamics of an axisymmetric body spinning on a horizontal surface. I. Stability and the gyroscopic approximation,” Proceedings of The Royal Society of London. Series A, vol. 460, no. 2052, pp. 3643–3672, 2004.
16. R. Bastiaens, J. Detand, O. Rysman, and T. Defloo, “Efficient use of traditional-, rapid- and virtual prototyping in the industrial product development process,” in Proceedings of the 3rd International Conference on Advanced Research in Virtual and Rapid Prototyping (VRAP '09), Leiria, Portugal, 2009.
17. V. I. Arnol’d, Dynamical Systems. Encyclopedia of Mathematical Sciences, vol. 3, Springer, New York, NY, USA, 1988.
18. H. K. Moffatt and T. Tokieda, “Celt reversals: a prototype of chiral dynamics,” Proceedings of the Royal Society of Edinburgh Section A, vol. 138, no. 2, pp. 361–368, 2008.
19. T. Tokieda, “Private communications,” in Proceedings of the Geometric Mechanics and its Applications (MASIE), Lausanne, Switzerland, July 2004.
20. C. Friedl, Der Stehaufkreisel, Zulassungsarbeit zum 1. Staatsexamen, Universität Augsburg, http://www.physik.uniaugsburg.de/%18wobsta/tippetop/index.shtml.en.
21. T. Ueda, K. Sasaki, and S. Watanabe, “Motion of the tippe top: gyroscopic balance condition and stability,” SIAM Journal on Applied Dynamical Systems, vol. 4, no. 4, pp. 1159–1194, 2005.
The Complete Circular Arc Calculator
This calculator solves for the radius, length, width or chord, height or sagitta, apothem, angle, and area of an arc or circle segment, given any two inputs. Please enter any two values and leave the values to be calculated blank. There may be more than one solution for a given set of inputs; please be guided by the angle subtended by the arc. If the angle is greater than 180 degrees, then the arc length described is greater than the arc length of a semicircle (click here for an illustration). The millimeter unit (mm) has been added to the choice of length units.
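As a sketch of the geometry behind such a calculator, the snippet below takes the width (chord) and height (sagitta) as the two known inputs and derives the rest. This is our own illustration, not the site's actual code:

```python
import math

def arc_properties(width, height):
    """Derive the remaining circular-segment quantities from the
    chord (width) and sagitta (height). Valid for 0 < height < diameter."""
    radius = width**2 / (8 * height) + height / 2     # from h(2r - h) = (c/2)^2
    angle = 2 * math.asin(width / (2 * radius))       # subtended angle, radians
    if height > radius:                               # sagitta taller than radius:
        angle = 2 * math.pi - angle                   # the arc exceeds a semicircle
    return {
        "radius": radius,
        "angle_deg": math.degrees(angle),
        "length": radius * angle,                     # arc length
        "apothem": radius - height,                   # center-to-chord distance
        "area": 0.5 * radius**2 * (angle - math.sin(angle)),  # segment area
    }
```

Other input pairs (for example radius and chord) admit two solutions, the minor and the major arc, which is exactly the ambiguity the note about the subtended angle warns about.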
For an iPhone-optimized version of this calculator, click here.
U.S.S. Mariner
A Fun Thought
Dave · April 16, 2009 at 11:45 am
Whether the M’s deserve to be 7-2 right now or not, the fact is that those wins don’t get taken off the board. They’re in the bank, and they aren’t going anywhere.
Because of that, even if you haven’t changed your opinion one iota about the strength of the roster (and honestly, you shouldn’t have changed it much – nine games is too small of a sample to mean
much), you need to add three wins to whatever you thought the team’s final record was going to be. Math requires you to.
You thought they were a 75 win team on Opening Day? That would be a .463 winning percentage. If they play .463 ball over the rest of the season, they’ll win 71 more games. 71 + 7 = 78.
You though they were a 78 win team on Opening Day (hey, me too!)? That would be a .481 winning percentage. If they play .481 ball over the rest of the season, they’ll win 74 more games. 74 + 7 = 81.
You can do this for basically any expected record. Almost everyone should just add three wins to their expected record to find their new expected record. If you were really high on this team and thought they would win 90, you only add two wins (.555 × 153 ≈ 85; 85 + 7 = 92).
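The arithmetic above fits in one line; here's a quick sketch (the function name is mine, nothing official):

```python
def updated_projection(preseason_wins, wins_so_far, games_played, season_games=162):
    """Fold a banked record into a preseason projection, assuming the
    true-talent winning percentage hasn't changed."""
    pct = preseason_wins / season_games          # preseason talent estimate
    remaining = season_games - games_played
    return wins_so_far + pct * remaining         # wins in the bank + expected rest

# The examples from the post, after a 7-2 start:
round(updated_projection(75, 7, 9))   # 78
round(updated_projection(78, 7, 9))   # 81
round(updated_projection(90, 7, 9))   # 92
```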
Most of us, we’ll add three wins. So, now, I “expect” the M’s to finish 81-81, based on their current roster, assuming no injuries/trades/etc…
Given that, I’d say it’s likely – not possible, likely – that the team will still be playing meaningful baseball in September. Seriously – get ready for some kind of pennant race. The M’s are in this
thing, and barring a summer sell-off of all the expiring contracts, they should be all year.
67 Responses to “A Fun Thought”
1. sass on April 16th, 2009 11:49 am
Not to mention that 5 of those wins are against division rivals. As was discussed last year, those are more important, in that being dominant in those is a good way to win the division, if all
other things are more or less equal.
2. BrianV on April 16th, 2009 11:51 am
Many thought this team would be fun to watch going into the season, and it’s turned out to be the case. No, they’re not this good (and please, let’s not succumb to “they should be 8-1 because of
the blown save!”), they’ve gotten lucky, and some guys are playing above their heads.
but man, it’s been a blast watching this outfield, Felix and especially Bedard, and even Griffey, whose signing I was against.
Stranger things have happened than a ~.500 team playing a bit above its head and taking a weak division, and this looks like it will be a really fun summer.
3. hub on April 16th, 2009 12:14 pm
At what point is it feasible for the M’s to become potential ‘buyers’ this summer? And if so, what position is the most likely to be upgraded?
4. JakeSuds on April 16th, 2009 12:19 pm
It is great to imagine Safeco packed every night during a playoff run… I’ll be there.
5. stevie_j13 on April 16th, 2009 12:22 pm
I’m not sure I entirely understand this point (which probably has more to do with having taken only one stats class six years ago). Regression to the mean recently justified USSMariner’s lack of
panic over Silva, and the results were much better in game two. By the same logic, the more the M’s win, shouldn’t I become more and more fearful that a massive, team-wide slump is coming? If a
7-2 team has 75-win talent, isn’t a 4-10 stretch just around the corner?
6. Noonan on April 16th, 2009 12:23 pm
I don’t understand. Mostly, my lack of understanding is probably due to my above average lifetime consumption level of alcohol and lead-based paint chips. What I don’t get is this: we can expect someone like Chavez’s batting average to regulate back to its norms as the season progresses. Why can’t we expect the same with our winning percentage? Isn’t 9 games of 162 too small a sample to adjust the predicted win total for the year?
7. Noonan on April 16th, 2009 12:25 pm
stevie_j13 – well put. I wish i had seen your post before hitting submit.
8. arbeck on April 16th, 2009 12:27 pm
Noonan and Stevie,
Future results have no memory of the past. If I flip a fair coin and receive heads 10 times in a row, then proceed to flip it 1000 more times, I’m probably going to end up with about 510 heads versus 500 tails.
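arbeck's coin-flip point is easy to check with a quick simulation (my own sketch): the 10 banked heads just carry over, and the remaining flips stay 50/50.

```python
import random

def mean_total_heads(streak=10, more_flips=1000, trials=500):
    """Average total heads for a fair coin that already landed heads
    `streak` times and is then flipped `more_flips` more times.
    The mean comes out near streak + more_flips / 2 (about 510 here):
    the past streak isn't 'paid back' by extra tails."""
    total = 0
    for _ in range(trials):
        heads = sum(1 for _ in range(more_flips) if random.random() < 0.5)
        total += streak + heads
    return total / trials
```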
9. jld on April 16th, 2009 12:27 pm
I don’t quite agree that everyone has to tack 3 wins on at this point. If someone thought this was going to be a .500 team, there isn’t necessarily a reason to change that now.
The M’s could go 2-7 over the next 9 games. At the end of the year won’t matter if a .500 team won every other game or had an 81 game winning streak followed by an 81 game losing streak.
10. Luc on April 16th, 2009 12:29 pm
Streaks this long are rare, whether it’s a losing streak or a winning streak. I think it’s reasonable to think that streaking for six in a row at the beginning of the season is more than enough
to adjust your predictions at +3 wins. Makes sense to me.
11. Utah911 on April 16th, 2009 12:33 pm
If the Mariners can play .600 ball throughout the injuries of Oakland’s starting rotation and Angel’s starting rotation, there’s going to be some serious competition in late August and Sept.
Clutch hitting and injuries will be key, but I am starting to get confidence!
Silva giving up 2 runs?? are you kidding me??
12. Noonan on April 16th, 2009 12:34 pm
arbeck – makes sense, but why isn’t this same reasoning applied to the players’ averages? Seems like we expect players’ various averages to regulate out to their expected level over the course of a season.
13. Gregor on April 16th, 2009 12:37 pm
If someone thought this was going to be a .500 team, there isn’t necessarily a reason to change that now.
That’s precisely the point of this post though. If you assume that this is a .500 team, you would expect them to win half their remaining games, which would mean they stay about 5 games above
.500 for the rest of the year.
14. arbeck on April 16th, 2009 12:40 pm
It is applied to players’ averages. Let’s say Ichiro gets off to a horrid start and bats .120 over his first 100 at bats. That would give him 12 hits over his first 100 at bats. But Ichiro is really a true talent .330 hitter and will get 550 more at bats over the season. If he regresses to the mean, we would then expect him to get 182 hits in those at bats (.33 x 550). His average for the year would then be .298. Regression to the mean does not mean that he will end with a .330 average.
15. Viper on April 16th, 2009 12:42 pm
The problem is a flawed understanding of regression to the mean…
If a fair coin is flipped heads 10 times in a row, the coin is not more likely to land tails on the 11th try. The probability is still 50-50 on where it will land. Regression to the mean does not
mean a hot streak is likely to be followed by a cool streak, but rather that a player or team will be more likely to return to their true talent level.
This means that if you thought that the team’s true talent level was a .463 winning percentage, you would still expect them to win 46.3% of their remaining games. Since 9 games are already out of
the way and have no bearing on future performance, you would expect the team to win 46.3% of the remaining 153 games.
16. tkight on April 16th, 2009 12:43 pm
Meaningful September baseball seems like a breath of fresh air. I think a lot of it will center around our much improved defense. Like this guy.
17. Doc Baseball on April 16th, 2009 12:43 pm
What Dave, and others, are saying is correct statistically speaking. If you thought the M’s were a .500 team, and you had said BEFORE the first game that the M’s might go 7-2 at one point and
then 2-7 later, that would be correct and this streak would not influence your expectation. However, once they have gone 7-2, if you want to now estimate the future, that record is already in the
past. Estimating the future now — statistically speaking — cannot take that record into account. Unless you think these 9 games have shown the M’s are a different team than you thought, your
winning % number should stay the same, but be applied to the remaining games, as Dave did, and as arbeck illustrated. It would only be predictions BEFORE the fact that would place this into an
expected range.
18. domovoi on April 16th, 2009 12:44 pm
Some of you have fallen victim to what is known as the Gambler’s fallacy.
19. Steve T on April 16th, 2009 12:46 pm
Well, you losers can adjust all you want, but *I* thought from the beginning that this was going to be a 126-win team, so I don’t have to adjust nuthin’!
20. Gregor on April 16th, 2009 12:49 pm
Some of you have fallen victim to what is known as the Gambler’s fallacy.
Aka the broadcaster’s fallacy, as in “he’s due for a big hit”.
21. Ralph_Malph on April 16th, 2009 12:53 pm
Here’s my example: I flip a coin 30 times in a row and it comes up heads every time.
Noonan or Stevie (no offense) would bet on tails, because that coin is really, really due.
I would bet on heads, because I think it might be a two-headed coin.
22. domovoi on April 16th, 2009 12:58 pm
Seems like we expect players various averages to regulate out to their expected level over the course of a season.
“regression to the mean” only means that we expect the player’s numbers to gradually get closer to what we initially expected. The Law of Large Numbers says if we gave them enough “trials” (e.g.,
for batters, ABs), then their numbers will end up being very close to their true talent, no matter what they initially started at. The issue is whether or not there are enough “trials.”
So for example, let’s say hitter A will get 500 at-bats and is a true .300 hitter. If he goes 7/10 in his first 10 at-bats, we’d expect his final average to be (.300*490 + 7)/500 = .308, very
close to our prior expectation. If the same hitter goes 70/100 in his first 100 at-bats, even in the unlikely situation where our expectations of him don’t change (i.e. he’s still a true .300),
he’ll end up hitting (.3*400 + 70)/500 = .380, which is significantly better than .300, but a lot closer to .300 than the .700 average he displayed in his first 100 at-bats. Give him a couple
thousand more at-bats, and he’ll end up at .300.
So whenever Dave or anyone else says “regression to the mean” they don’t mean the player’s final stats will actually end up being their “mean” (i.e. their true talent level), but rather their
final numbers will be closer to their “mean” than they currently are. How close depends on how many games remain.
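Both worked examples in the thread reduce to the same one-liner (a sketch; the function name is mine):

```python
def expected_final_avg(true_avg, hits, at_bats, season_abs=500):
    """Expected end-of-season average if the hitter performs at his
    true talent level for the rest of his at-bats: the early numbers
    get diluted, pulling the final line toward the mean, not to it."""
    return (hits + true_avg * (season_abs - at_bats)) / season_abs

expected_final_avg(0.300, 7, 10)    # 0.308 after a 7-for-10 start
expected_final_avg(0.300, 70, 100)  # 0.380 after a 70-for-100 start
expected_final_avg(0.330, 12, 100, season_abs=650)  # ~0.298, arbeck's Ichiro case
```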
23. Noonan on April 16th, 2009 1:00 pm
but…but… i have a feeling!
24. Coolalvin206 on April 16th, 2009 1:00 pm
Hey, whatever happens..its been a fun ride so far. Who would think that on April 16th, the M’s would have the best record in the AL.
25. eponymous coward on April 16th, 2009 1:02 pm
Most of us, we’ll add three wins. So, now, I “expect” the M’s to finish 81-81, based on their current roster, assuming no injuries/trades/etc.
Well, the thing is, I did it more as a range. I expected a 75-85 win team (coming out just under .500). Now, I’d adjust it to being more likely slightly OVER .500, more like 77-87 wins (not
adding all 3 wins because a 7-2 start was a scenario partially baked into my initial estimate).
Just looking at it, it seems the starting pitching and defense will be strengths of the team, assuming that The Interview keeps performing at a high level, King Felix stays on his throne and the
rotation does OK. Washburn benefiting from the improved defense in a contract year is somewhat less than surprising, and while our 3/4/5 are pretty generic back end starters, they also aren’t
really much better or worse than our 6/7/8 (Olson, Vargas, Jakubauskas).
The problems that I suspect will have to be addressed if this team stays in contention are as follows, just by looking at the team…
a) the offense is going to have 2-3 week stretches where not enough people are hitting (Ichiro will have his usual slice of .270/.320/.350 at some point, Endy isn’t going to hit like this all
year, Beltre and Branyan will have times where they are swinging at pitches in different time zones, and so on). It’s also really heavily RHB. Figuring out a way to get another LH bat in this
lineup in such a way that it doesn’t kill the defense would be nice (in this scenario, Endy becomes a Mark McLemore/Stan Javier supersub in CF/LF).
b) the bullpen is a mostly RHP who throw plus fastballs with poor command (with the possible exception of Shawn Kelley- I’d like to see him do this for longer, though, before anointing a bullpen
ace). I think they’re going to cough up more games like Game 2 before we’re done.
The nice thing is adding a LH bat and bullpen depth are things that can usually be done pretty easily during the season without mortgaging your farm system (unless you’re Bill Bavasi). As such, I
like where this team is. If we’re in good shape by Memorial Day, there’s no reason why we shouldn’t be able to make a run for a playoff spot in a fairly weak division, and a team with a very
strong 1-2 punch in the rotation has the capability of going deep in the playoffs (once you have the ability to be picky with your backend starters come playoff time).
26. domovoi on April 16th, 2009 1:06 pm
Here’s my example: I flip a coin 30 times a row and it comes up heads every time.
Noonan or Stevie (no offense) would bet on tails, because that coin is really, really due.
I would bet on heads, because I think it might be a two headed coin.
This is a slightly different scenario than what we’ve been discussing, as your prior expectations have adjusted. So far we’ve been assuming that our priors haven’t adjusted, specifically, going 7-2
hasn’t changed our initial expectations of how the team would perform. Analogized to your scenario, the 30 heads hasn’t changed our expectation that the coin is still a fair coin.
Of course, 30 heads probably would make us question whether the coin is fair, so it’s a better analogy of what we should do if the Mariners go 23-7. At that point, we probably would adjust our
expectations upwards.
27. Zero Gravitas on April 16th, 2009 1:17 pm
This is a great point and I also like the contrast to last year, when they started so poorly but kept making the ‘well, it’s still early’ excuse. This team is already 5-0 against the AL West!
It’s certainly not too early to talk about the playoff implications of this fast start, given the weakness of the division.
28. JH on April 16th, 2009 1:18 pm
Noonan: maybe this will help. Most people say regress TOWARDS the mean, with the understanding that a statistically insignificant but still practically insignificant blip doesn’t have any effect
on the probable outcomes of future events. So, when Brian Roberts hit 5 HRs in his first month way back when, everyone knew that he didn’t really have 30HR power, but everyone also knew it would
likely be the best power output of his career.
It’s the same with a player like Endy Chavez. His personal stats aren’t enough to be a huge factor in his overall season line, but it’s enough that if he hits like he’s expected to the rest of
the way, his final season line will be a little better than we’d normally expect from him.
29. Ralph_Malph on April 16th, 2009 1:20 pm
By the way, coolstandings.com puts the M’s expected record at 81-81 (28.0% chance of winning the division, 3.5% chance of the wild card).
I’m not sure of their exact methodology, but it’s interesting to see how this has increased with each early win (2 days ago they had us at 77-85/20.5%/3.1%). You can pull up their projections as
of each day of the season. As of opening day, they projected the M’s at 68-94.
30. JH on April 16th, 2009 1:22 pm
Wow, that was poorly edited. I was going for the difference between regression TOWARDS the mean and regression TO the mean. If the Mariners were supposed to be a .450 team, a 7-2 run might not
be statistically significant enough to change that expectation of “true probability.” BUT, it’s significant enough to alter the final result.
31. Soonerman22 on April 16th, 2009 1:24 pm
Here is another wild thought only 2 starts in.
Let’s say the Mariners are competitive enough this season and they don’t trade Bedard, would you want him re-signed? If Erik Bedard ends up being, in the words of Dennis Green “who we thought he
was,” and he was open to coming back would you want him?
It doesn’t look like Morrow will ever be a starter, and who knows what will happen with RRS. Would you be interested in a rotation with Felix and Erik at the top for the next 4 years, or would
you prefer to wash your hands of Bedard and move on with the rest of our lives?
Just another fun thought/debate.
32. DizzleChizzle on April 16th, 2009 1:32 pm
Is it fair to say that winning the AL West is the only chance that the M’s have of making the postseason? How many wins do you forecast it would take to win the division at this point in the
season? I know it’s early but I don’t think the Angels are gonna win 100 again.
33. wabbles on April 16th, 2009 1:38 pm
So, 85 wins then. Kewl with a capital K. Given what we’ve seen of the Angels juggernaut (Does their outfield REALLY average 34 years old?), September could be very interesting indeed. I guess
this is where the FO makes the big bucks. Do we continue building towards the future regardless of our September fortunes, sell off something (one of the expiring contracts) to try making a run
for it, sell off something to build more for the future or…Stand Pat?
34. mln on April 16th, 2009 1:59 pm
“Because of that, even if you haven’t changed your opinion one iota about the strength of the roster (and honestly, you shouldn’t have changed it much – nine games is too small of a sample to
mean much), you need to add three wins to whatever you thought the team’s final record was going to be. Math requires you to.”
Great. I’ll pencil in the M’s for 119 wins this year then! Woo-hoo. World Series here we come!
35. eponymous coward on April 16th, 2009 2:01 pm
Do we continue building towards the future regardless of our September fortunes, sell off something (one of the expiring contracts) to try making a run for it, sell off something to build more
for the future or…Stand Pat?
Um, why would you trade Bedard or Beltre to “make a run for it”?
Realistically, let’s say we wanted a serviceable OF at the deadline. Look at what Griffey or Winn cost (yes, I know about Griffey in the OF- but the point is he didn’t cost a ton). For a relief
pitcher, look at what Ron Villone or Armando Benitez cost. For a back-end rotation starter, check out what it cost for Jamie Moyer (just some examples).
Really, given that Zdurencik is pretty good at trades from what we can tell (and a marked improvement from the previous administration), I’m not particularly concerned that he’ll give away the
farm come July if we have a reasonable shot at contention and an obvious roster hole (and given that Clement’s still in Tacoma, we even have internal options if we need an LH bat). If we give
away talent in a trade, I’m pretty confident we’ll get talent back (and it will be a “trade that helps both teams” trade, not a “Bill Bavasi trades away the farm for the wrong player” trade).
36. Pete Livengood on April 16th, 2009 2:04 pm
I think they have to consider signing Bedard if they’re at all in contention.
I was very much against the Bedard trade at the time, unless the M’s knew something I didn’t – that they would be able to re-sign him. It was a horrible trade unless he could be locked up. Given
what they gave up, that is still very much the case. Why wouldn’t you make a run at signing him before even considering trading him? And if you’re in contention, why would you trade him (and run
up the white flag at the end of July) even if you can’t sign him?
37. Alaskan on April 16th, 2009 2:13 pm
Here’s what I don’t like about this statistical concept, as it applies to baseball:
We’re not talking about a typical coin flip. Rather, let’s imagine we have 29 coins (one for every MLB team besides us), all weighted according to our direct team-to-team comparison. So, our Red
Sox coin will come up ‘WIN’ 30% of the time, and our Royals coin will come up ‘WIN’ 70% of the time. If we look at the probability that way, team-by-team rather than simply over 162 games, then
the specific teams we played become significant.
For instance, let’s say we are 7-3 after 10 games. Even if we expect this team to be .500 over the entire season, starting out at .700 would only require us to adjust our expectation if we
thought the talent of our opponents should lead to less than a .700 average. What if we were playing the Royals the first ten games of the season? We expect to win 70% of the time against them,
and 10 games into the year we have. That doesn’t mean that I don’t think we’ll end up at .500 once we’ve played all the other, primarily better, teams.
If before the season started you asked me what the M’s record would be after 10 games, I would not simply guess 5-5, just because I expect them to finish at 81-81. Rather, I would ask you “who
are they playing?”
Now, this question doesn’t really apply to the M’s, in this case: they are outperforming my expectations against better teams. If you asked me before the season, about these 9 games, I probably
would have arrived at 4-5, which is 3 games worse than where we are, so I should raise my expectations for the year by 3. But I don’t understand why we shouldn’t look at the strength of schedule
first, before determining that we’re 3 games ahead of where we thought we would be.
38. stevie_j13 on April 16th, 2009 2:13 pm
I just want to make sure I am understanding the point of those who are saying I have fallen prey to the “gambler’s fallacy”: If I said at the beginning of the season that the team would win 75
games because of my statistical research, and I understand that teams will go through hot streaks and cold spells in reaching those 75 wins. Hypothetically, those “secondary” stats that predict a
75-win team are still in place, only the team has experienced some good luck and has started strong. Now, I should change my prediction? They lose their next eight games, and now they are a
72-win team? They then win eight of their next ten, and they are a 78-win team again.
This seems to me to be precisely the small sample size fluctuations that are supposed to be avoided. If I predicted at the beginning of the season that the M’s are going to win 75 wins, that
means really that I predicted that you could put a 75-win trend line right through the middle of a graph of the M’s win percentage. It’s not so much about saying that I think the M’s are “due” so
much as saying that I think my original prediction still holds.
39. CCW on April 16th, 2009 2:25 pm
stevie_j13: Google “gambler’s fallacy”. You just described it. Here it is, from wikipedia:
“The gambler’s fallacy, also known as the Monte Carlo fallacy or the fallacy of the maturity of chances, is the belief that if deviations from expected behaviour are observed in repeated
independent trials of some random process then these deviations are likely to be evened out by opposite deviations in the future.”
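That definition is easy to check numerically. Here is a small simulation (an editorial illustration, not from the thread) showing that a streak of heads does nothing to the odds of the next flip:

```python
import random

random.seed(1)
# Simulate a long run of fair coin flips, then look only at the flips that
# immediately follow a streak of 5 heads. If the gambler's fallacy were
# right, tails would be "due" and the rate of heads would dip below 0.5.
flips = [random.random() < 0.5 for _ in range(200_000)]
after_streak = [flips[i] for i in range(5, len(flips)) if all(flips[i - 5:i])]
rate = sum(after_streak) / len(after_streak)
print(round(rate, 2))  # stays close to 0.5 -- past streaks don't matter
```

The deviations from 0.5 here are only sampling noise; they shrink as the number of flips grows, which is the Law of Large Numbers, not any evening-out force.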
On another note, I’ve seen enough positives from Bedard (and no contrasting negatives) to revise my personal projection of the M’s upwards. They went from an 80-win team to an 82-win team, so I
think they’re going to win 84-85 games. Yay!
40. wildmaxd on April 16th, 2009 2:28 pm
The justification for adding 3 wins is flawed for the simple fact that the “odds” and “anticipated outcome” are totally different.
Just because someone thought they were going to finish the season at .463 doesn’t mean that they have a 46.3% chance of winning each game.
41. fairweatherfan on April 16th, 2009 2:36 pm
“you need to add three wins to whatever you thought the team’s final record was going to be. Math requires you to.”
This also assumes that either “opponent difficulty” for the first nine games is roughly the same as for the remaining 153 (seems reasonable in this case – not so if first nine games all happened
to be against the Nationals), or that the odds of winning remains the same, regardless of opponent.
Regardless, the prediction of finishing near .500 ball and playing meaningful games in September is ample reason for more fans to start tuning in again.
42. stevie_j13 on April 16th, 2009 2:36 pm
Also from wikipedia: The Law of Large Numbers is important because it “guarantees” stable long-term results for random events. For example, while a casino may lose money in a single spin of the
roulette wheel, its earnings will tend towards a predictable percentage over a large number of spins. Any winning streak by a player will eventually be overcome by the parameters of the game. It
is important to remember that the LLN only applies (as the name indicates) when a large number of observations are considered. There is no principle that a small number of observations will
converge to the expected value or that a streak of one value will immediately be “balanced” by the others. See the Gambler’s Fallacy.
Is 162 games a large enough sample? That’s not snarky, but a genuine question.
43. Uncle Ted on April 16th, 2009 2:37 pm
I don’t think that Stevie is motivated by the typical Gambler’s fallacy. That is, he’s not saying that there are causal processes that even things out in the end. Rather, he wants to know why
some random streak alters the expectations of wins because it happens at the beginning of the season, given that there are random streaks as a standard part of baseball. At least I think this is
his question.
So, Stevie, the point is that the random streaks alter the end expectations regardless of where they happen in the season. To use an extreme example to illustrate the point in the original post,
suppose that the team got off to an 0-50 start. Given your line of reasoning, you’d still say that you’d expect them to win 75 games, or whatever, but for them to do that, they’d have to go 75 and
37 over the rest of the season. That’s a 67% win percentage, approximately. Which would be a real stretch for a team that you think should win at a 46% clip.
44. georgmi on April 16th, 2009 2:37 pm
Yes, you should change your prediction. It is specifically not possible to predict luck, and having some good luck does not increase your likelihood of seeing some bad luck to make up for it.
Looking forward, seeing continued good luck is exactly as likely as seeing some bad luck. As a result, it can’t factor into your predictions.
In ’01, we had pretty much nothing but good luck for the first 162 games of the season. Except for that awful, awful night in Cleveland. Last year, we had pretty much nothing but bad luck.
45. IMFinksPa on April 16th, 2009 2:38 pm
To be fair, Dave’s post is not entirely accurate unless you assume that the competition faced to date is comparable to the overall level of competition that you anticipated that they would face
throughout the season. If, on the other hand, your projection of this team’s talent level was that, against these 3 teams in particular, the M’s could expect to go, say, 5-4, then the over
performance is only 2 games. Bear in mind, this is not a small sample size issue, it is a sampling issue. The M’s have not faced their entire schedule, they have merely faced a small sample of their
entire schedule, a schedule that will include 9 game stretches on the road against such teams as Toronto, Boston, and Tampa, where their expected win total based on their actual talent level may
only be 2-7 or 1-8. It is not falling for the gambler’s fallacy if you believe they are due to balance out not as a consequence of regression, but as a consequence of the sample.
46. Uncle Ted on April 16th, 2009 2:39 pm
And no, 162 games isn’t a large enough sample. The point of the LLN is that as the number of samples approaches infinity, you will tend towards the average.
47. georgmi on April 16th, 2009 2:42 pm
I think it’s fair to consider games against Oakland and Anaheim as representative of the average level of competition we’re going to face this year; a quarter of our games will be against those
two teams specifically.
48. joser on April 16th, 2009 2:47 pm
The other key thing to remember is that the M’s don’t have to be one of the best 8 teams in baseball. They don’t have to be one of the best four in the AL. They just have to have the best record
in the AL West. Thus already going 5 for 5 (hopefully 6 for 6) against two of those three other teams is huge. Not only do those wins not get taken off the board for the M’s, those losses don’t
get taken off the board for the A’s and Angels. To paraphrase what Dave said on Spokane radio yesterday: if the Mariners can open up a (say) 3 game lead on the other AL West teams and hold on to
it into May, it’s going to be hard for anybody in this seemingly weak division to catch them by the end of the summer. Though I wouldn’t discount one of Beane’s patented second-half pushes, and
it’s always possible the Rangers will find some pitching under a rock somewhere and surprise everybody.
And, weirdly enough, the M’s are fairly well built for the playoffs: Bedard and Felix, plus Washburn and the amazing velcro outfield starting the third game, sets up for a pretty favorable short series.
49. Mike Snow on April 16th, 2009 2:53 pm
it’s always possible the Rangers will find some pitching under a rock somewhere
The only thing you’ll find under a rock in Texas in the summer is a scorpion.
50. Uncle Ted on April 16th, 2009 3:01 pm
That’s actually a really good point, Joser. I mean, let’s be honest, we’re not competing for the wild card, so really all we need to look at is the comparative records of teams in the AL West.
51. stevie_j13 on April 16th, 2009 3:02 pm
I think Uncle Ted summarized my thinking pretty well. If he is right about LLN (and I suspect he might be, as Dave and the others are probably right about the stats), then I should have more
license to feel good about this team.
Part of me just worries about the example of the 2007 Mariners. Many who followed that team, including myself, kept saying, “They’re not this good. They’re not this good.” Then they lost 15 of
17, and it hurt even more because they had outperformed expectations so much and were so close to riding that good luck into the playoffs.
52. Oolon on April 16th, 2009 3:10 pm
While I might have to add 3 games due to the hot start, I’m tempted to subtract 3 games due to the team’s fascination with “doing the little things”…
If we start consistently bunting in the 4th inning of scoreless games with our 6th place hitter to move a runner from 2nd to 3rd – it has to have a negative effect on our total wins. My preseason
estimate (77) assumed that Wakamatsu would be a capable manager; these small ball tactics have me wondering.
53. Wilder83 on April 16th, 2009 3:26 pm
The real question is, how many games do you take away for the Washington Nationals?!
This season has already been full of excitement. It really does have a 2001 feel to it. I just hope we don’t absolutely bomb to finish out April.
54. skjes on April 16th, 2009 3:28 pm
I’m trying to avoid reading too much into the current series against the Angels. On the one hand, yeah, the M’s are winning and are looking good doing it. On the other hand, the Angels have three
starters on the DL, plus Nick Adenhart was killed in a senseless DUI accident (with the entire team dealing with the fact that his next start would have been that Tuesday and his impending
funeral). These really are games the M’s are supposed to win.
55. Ron Stevens on April 16th, 2009 3:40 pm
Given that they have won 7 of nine,and
with your premise that they are a .481
team,I calculated that they have a 32%
chance of finishing the regular season
with 82 to 89 wins;they’ve got a 50%
chance of winning 82 or more.
56. Ron Stevens on April 16th, 2009 3:52 pm
I should have one less win;
it should be 81 to 88,and 50%
chance of winning 81 or more.
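For anyone who wants to check arithmetic like this, here is a quick sketch (an editorial illustration, not necessarily Ron’s actual method) that treats the remaining games as independent trials won at the team’s assumed true-talent rate:

```python
from math import comb

def prob_win_range(wins, losses, talent, lo, hi, total=162):
    """P(final win total lands in [lo, hi]) given the record so far, with the
    remaining games modeled as independent trials won at rate `talent`."""
    remaining = total - wins - losses
    # exact binomial distribution over wins in the remaining games
    pmf = [comb(remaining, k) * talent**k * (1 - talent)**(remaining - k)
           for k in range(remaining + 1)]
    return sum(p for k, p in enumerate(pmf) if lo <= wins + k <= hi)

# 7-2 start for an assumed .481 true-talent team
print(round(prob_win_range(7, 2, 0.481, 81, 162), 2))  # ~0.5 chance of 81+ wins
```

The exact binomial comes out in the same ballpark as the figures quoted above; small differences are expected if the original numbers used a normal approximation.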
57. Mike Snow on April 16th, 2009 3:56 pm
Ron, I don’t know what you’re using that affects your formatting, but please remember to use the space bar after you type a comma.
58. droppedrod on April 16th, 2009 4:07 pm
It’s been some time since I took a stats class, but does Dave’s logic work when the estimated record for the team isn’t based on a true sample? In other words, is a prediction that the team will
finish .500 really a “statistic” in the true sense of the term?
Saying, before the season starts, that the team will finish 81 and 81, regardless of how you get there, isn’t based on a sample of this team’s past performance. Once you have an adequate sample
of the team’s performance, then you can apply Dave’s math. If the team goes .500 through the first 80 games, adding 3 wins to the predicted win total after a 6 game streak in August seems to work.
59. DLCheeZ on April 16th, 2009 4:15 pm
But I don’t understand why we shouldn’t look at the strength of schedule first, before determining that we’re 3 games ahead of where we thought we would be.
So let’s look at strength of schedule. In April, the Mariners face the following:
The 2008
AL West Champs
AL Central Champs
AL Central Runners Up
AL East Champs and AL Pennant Winners
It’s not all A’s and Tigers. Though let’s acknowledge that the Tigers currently are tied for first in their division.
As it turns out, the teams we’ve beaten so far all have sub .500 records. Why is that? At least part of that is because the 2009 Mariners are handing out beatings like Halloween candy.
NOBODY thought the M’s would finish April above .500 and somehow, they’re a 5-8 run away from making that a reality. You’d better believe this team is 3 games ahead of where we thought they would be.
60. Dylan S on April 16th, 2009 4:51 pm
“This also assumes that either “opponent difficulty” for the first nine games is roughly the same as for the remaining 153…”
“To be fair, Dave’s post is not entirely accurate unless you assume that the competition faced to date is comparable to the overall level of competition that you anticipated that they would
face throughout the season”
No, Dave’s post doesn’t assume that. It assumes that the remaining 153 games are as difficult as the whole 162 games (which they are). The post assumes nothing about the first nine games and gives
them no consideration going forward. Dave’s post is making the same prediction he made before the season except now assuming that the M’s play a 153 game season and start off with a record of 7-2.
61. Gregor on April 16th, 2009 5:15 pm
No, Dave’s post doesn’t assume that. It assumes that the remaining 153 games are as difficult as the whole 162 games (which they are).
Of course, the assumption you state implies that the first nine games are also as difficult as the whole 162 games, and hence as the remaining 153 games.
62. beckya57 on April 16th, 2009 6:30 pm
Calm down, Dave. They will have injuries, after all; for starters, Ichiro’s ulcer could recur, and Jr is only one hamstring pull away from a lengthy DL stint. I agree that this is a .500 team,
and that’s a huge improvement from last year (plus they’re a whole lot more fun to watch), but playoff talk is WAY premature. Lots of teams start strong and then regress to their true level
later. I think Jack Z and Wakamatsu are building for 2010 or 2011, and that’s exactly what they should be doing.
63. joser on April 16th, 2009 7:24 pm
Yeah, because if there’s one thing Dave has been guilty of over the past few years, it’s his excessive optimism.
The folks that run this blog just can’t win. When they state an unpopular opinion and are proven right, they never have anyone who disagreed come back and acknowledge that. When they’re realistic
about the limitations of the team, they’re accused of excessive negativity, of being “haters” or somehow wanting the team to fail.
And now, when they express a little optimism, they’re told to “calm down.”
Sometimes I wonder why they bother. (And then I remember: they’re fans.)
(FWIW, I think the brain trust is building for 2010-11 too; but clearly they’re also trying to balance that with winning now and turning the fanbase around — which, for their employers, is just
as important. Hence the Griffey signing.)
64. joser on April 17th, 2009 1:49 am
An unfun thought: In the opening days of the 2008 season, the Royals went 6-2 and held first place in the AL Central for 11 days, until April 14th. A week later they had fallen to last place, on
their way to a 72-90 record.
65. DMZ on April 17th, 2009 1:53 am
That’s not going to happen again since they have Bloomquist to keep them focused and motivated.
66. joser on April 17th, 2009 12:35 pm
Oh *smacks forehead* of course! How could I have overlooked that?
67. joser on April 17th, 2009 12:36 pm
Wait, the M’s had Bloomquist last year. Are you saying he’s going to focus the Royals on losing 100+ games this year?
Method of images
April 27th 2010, 09:03 PM
For the heat equation on a semi-infinite domain we have been shown the method of images, which to my understanding is basically using an even or odd extension to match the boundary condition, i.e.,
for $U_x(0,t)=0$
we use the even extension, which meets this condition. And when the derivative is not specified but instead $U(0,t)=0$, we use the odd extension, which has nonzero derivative at x=0 but meets this
boundary condition.
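As a sketch of the standard construction (an editorial addition, assuming the usual heat equation $U_t = kU_{xx}$ on $x>0$ with initial data $f(x)$): with the whole-line heat kernel $G(z,t)=\frac{1}{\sqrt{4\pi kt}}e^{-z^2/4kt}$, the image-method solution is

$U(x,t)=\int_0^\infty\left[G(x-y,t)\mp G(x+y,t)\right]f(y)\,dy$,

where the minus sign (odd extension) applies for the condition $U(0,t)=0$ and the plus sign (even extension) for $U_x(0,t)=0$.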
For example:
Then using the general solution to solve this I get $2\sin(x)e^{-kt}$
The solution does not have a two, which implies that the initial condition has been extended to an infinite domain and there is only one integral. As we were shown in class I get two integrals,
and since $-\sin(-x)=\sin x$ I get the factor of two. I am wondering why there isn't a two in the solution.
Patent US8218687 - Frequency dependent I/Q imbalance estimation
1. Technical Field
This disclosure relates to devices having frequency dependent I/Q imbalance estimation, and to corresponding methods, software and integrated circuits.
2. Description of the Related Art
RF receivers or transmitters of any type typically have mixers driven by a local oscillator to generate I and Q signals in quadrature, i.e., 90 degrees out of phase with each other, with the result
that there is no correlation between the two signals. As explained in US patent application 2003/206603, it is well known in the art that deviations from the ideal I and Q signals can occur in the
form of gain or magnitude imbalances, and relative phase imbalances. This can cause an unwanted image in a negative frequency part of a spectrum of the received signal, which can lead to interference
or errors in subsequent processing stages such as demodulation, or filtering or algorithms to extract wanted signal components.
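As a rough numerical illustration of this effect (an editorial sketch, not taken from the patent): a small gain and phase error on the Q path turns a single complex tone into the wanted tone plus an attenuated image at the mirrored negative frequency.

```python
import numpy as np

fs = n = 1024                    # 1024 samples at 1024 Hz -> 1 Hz FFT bins
f0 = 100                         # wanted tone at +100 Hz
t = np.arange(n) / fs
g, phi = 1.05, np.deg2rad(3.0)   # 5% gain imbalance, 3 degree phase imbalance
i_path = np.cos(2 * np.pi * f0 * t)
q_path = g * np.sin(2 * np.pi * f0 * t + phi)
spec = np.fft.fft(i_path + 1j * q_path) / n
wanted = abs(spec[f0])           # bin at +100 Hz
image = abs(spec[n - f0])        # mirrored bin at -100 Hz
print(f"image rejection: {20 * np.log10(wanted / image):.1f} dB")  # about 29 dB here
```

With perfect balance (g = 1, phi = 0) the image bin would be zero; the residual image is what can interfere with demodulation or other downstream processing.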
The phase imbalance can arise from local oscillator (LO) input signals to mixers not being exactly 90 degrees out of phase, or from path length differences, or errors caused by unbalanced parasitic
capacitance, for example. Prior attempts to calibrate and correct for magnitude and phase imbalances have involved applying dedicated calibration signals at the receiver, for example, by switching
the local oscillator to generate in-phase signals rather than out-of-phase signals. The resulting signals are analyzed to produce correction factors to be applied to received signals carrying data of
interest, either at the analogue side or the digital side. R. A. Green, in “An Optimized Multi-tone Calibration Signal for Quadrature Receiver Communication Systems,” 10th IEEE Workshop on
Statistical Signal and Array Processing, pp. 664-667, Pocono Manor, Pa., August 2000, shows an optimized multi-tone calibration signal to which linear regression techniques are applied to generate
correction factors to update adaptive filters that are intended to compensate for gain and phase imbalances. Special circuitry typically needs to be used to produce, analyze, and correct for the
results of analysis on such calibration signals. A further drawback is that the quadrature receiver typically cannot continue to actively receive normal transmitted data while the calibration is
US patent application 2003/206603 tries to provide I/Q calibration at a quadrature receiver that does not require that a separate calibration signal be transmitted to the receiver and that does not
necessarily involve additional analogue components prior to the analogue to digital converters (ADCs). It achieves this by having an initial calibration period during which the local oscillator
outputs are switched to feed in-phase signals to both mixers. Two switches (204, 206) and two phase shifters (208, 210) as well as the oscillator itself (280) are arranged to provide either (a) two
LO output signals that are 90 degrees out of phase or (b) two LO output signals that are in phase (0 degrees out of phase). Mode (a) is used for normal operation of the receiver and mode (b) is used
for the calibration period. One disadvantage of this is that the receiver cannot be used while it is being calibrated. I/Q imbalance factors, in terms of phase and amplitude, are determined at
different frequency bands, during the calibration period, using either frequency domain or time domain separation of frequency bands. These are then used to correct the I/Q imbalance when receiving
signals in normal operation of the radio, with the LO signals 90 degrees out of phase.
One embodiment, shown in FIG. 9 of US2003/206603, is said to be capable of performing I/Q calibration and compensation and does not require a special switched local oscillator LO arrangement for an
initial calibration period, for frequency independent imbalance estimation. However, it states that for frequency dependent phase mismatch (FDPM), the system does need the initial calibration period
with the LO being switched to provide signals of equivalent phase to the mixers.
U.S. Pat. No. 5,872,538 (Fowler) shows frequency dependent I/Q imbalance correction, and shows a method of implementing it using a single complex FFT. Correction factors are obtained using a prior
calibration process.
According to one embodiment, there is provided signal processing apparatus for estimating I/Q imbalance in in-phase (I) and quadrature-phase (Q) input signals, comprising circuitry arranged to
separate different frequency components of the I and Q input signals to represent different parts of a frequency spectrum of the input signals, and estimation circuitry arranged to estimate I/Q
imbalance at the different parts of the frequency spectrum of the input signals.
Compared to the above referenced existing methods of frequency dependent estimation, this apparatus has the notable consequence of enabling a “fully passive” estimation of frequency dependent
imbalance. In other words an estimation which does not need a prior calibration period, or a test signal for calibration of the estimation, and may continuously estimate a changing imbalance while
receiving live input signals carrying a useful information payload. This can help reduce errors in reception and can improve the performance of downstream processing such as demodulation, while
avoiding or reducing the inconvenience of sending test signals or the expense or losses or uncorrected signal delays which can be introduced by switched LO circuitry.
Any additional features may be added, and some such additional features are described and claimed.
Other aspects include a corresponding integrated circuit, a corresponding receiver and a corresponding method of estimation, and corresponding software for carrying out the method. Additional
features and advantages will be described below. Any of the additional features can be combined together or with any of the embodiments, as would be apparent to those skilled in the art. Other
advantages may be apparent to those skilled in the art, especially over other prior art not known to the inventors.
Embodiments will now be described by way of example, and with reference to the accompanying drawings, in which:
FIG. 1 shows a schematic view of a receiver to which some embodiments can be applied,
FIG. 2 shows a schematic view of a known example for use in frequency independent estimation, and adapted in some embodiments,
FIG. 3 shows a schematic view of an imbalance estimation arrangement according to an embodiment,
FIG. 4 shows a schematic view of an imbalance estimation and correction arrangement according to an embodiment,
FIG. 5 shows a schematic view of an imbalance estimation arrangement according to an embodiment,
FIG. 6 shows a schematic view of another imbalance estimation arrangement according to an embodiment,
FIG. 7 shows a schematic view of a further imbalance correction arrangement according to an embodiment,
FIG. 8 shows a schematic view of an imbalance estimation and correction arrangement according to an embodiment,
FIGS. 9 and 10 show graphs of frequency spectra with and without frequency dependent correction according to an embodiment, and
FIGS. 11 and 12 show further embodiments.
The present disclosure relates to an apparatus for estimating frequency dependent I/Q imbalance errors. Some of the additional features of embodiments can be summarized as follows. The circuitry
arranged to separate the frequency bands may comprise transform circuitry for generating a frequency domain representation of the input signals, with the separating of the frequency bands
carried out in the frequency domain.
The separating of the frequency bands may involve separating transformed values into a number of frequency bins representing values in negative and positive frequency bands, and combining
corresponding values in bins representing corresponding negative and positive frequency bands. The apparatus may have a converter to convert the separated frequency domain representation of each
frequency band to a time domain representation before the estimation.
The estimation circuitry may be arranged to determine an estimate of a phase imbalance and an estimate of an amplitude imbalance. The phase imbalance determining circuitry may be arranged to
determine an average amount of correlation of the I input signal and the Q input signal, over a period of time. The amplitude imbalance determining circuitry may be arranged to determine an average
difference between the amplitudes of the input signals over a period of time. Where the phase imbalance is determined on input signals before correction for amplitude imbalance, a polynomial
correction may be carried out to compensate for the effects of amplitude imbalance on the phase imbalance.
The apparatus may have correction circuitry arranged to use the estimated imbalance to correct the input signals. The estimation may be carried out before or after the correction. The correction may
be carried out in the frequency domain or the time domain. The correction may be based on equations of the form (applied per frequency band):

x′ = x + beta·y − alpha·x
y′ = y + beta·x + alpha·y

where alpha is a vector of amplitude imbalances at different frequency bands, beta is a vector of phase imbalances at different frequency bands, x is the I input signal and y is the Q input signal, x
and y being in the form of vectors of signal values at different frequency bands. The apparatus may be implemented in the form of an integrated circuit. The correction based on the detected imbalance
may be carried out anywhere upstream or downstream of the detection.
Embodiments may be in the form of a receiver and may incorporate a local oscillator to produce quadrature signals, a pair of mixers to mix a received signal with the quadrature signals, to generate I
and Q signals, an analogue to digital converter, to convert the I and Q signals, and the above signal processing apparatus to detect imbalance in the I and Q signals.
Embodiments may be in the form of a method of receiving involving using a local oscillator to produce quadrature signals, mixing a received RF signal with the quadrature signals, to generate I and Q
signals, converting the I and Q signals to digital form, and detecting an imbalance in the I and Q signals by separating different frequency components of the I and Q input signals, to represent
different parts of a frequency spectrum of the input signals, and estimating I/Q imbalance at the different parts of the frequency spectrum of the input signals.
Embodiments may be in the form of computer software for use by the apparatus and arranged to detect an imbalance in the I and Q signals by separating different frequency components of the I and Q
input signals, to represent different parts of a frequency spectrum of the input signals, and to estimate I/Q imbalance at the different parts of the frequency spectrum of the input signals.
References to circuits or circuitry can include any kind of hardware including analogue or digital circuitry, general purpose microprocessors, digital signal processor (DSP) circuitry, application
specific integrated circuits (ASICs) and so on, and can encompass circuitry and associated software to be executed by the circuitry to implement a given function. Items shown as separate circuits may
be separated at different places or integrated in different ways.
In some embodiments, the frequency components of an incoming signal are separated using a Fast Fourier Transform (FFT). Positive and negative frequency components with the same magnitude are then
recombined, and the resulting signals processed using a known algorithm to obtain a vector of correction values for different frequencies that can be applied to compensate the incoming signal. This
arrangement can correct frequency dependent signal path errors more accurately than systems based on a correction at one frequency. A further improvement on the known estimation algorithm is also described below.
The apparatus can be used to improve the image rejection of modern CMOS-implemented radio receivers. A modern integrated radio receiver is shown in FIG. 1. The device includes an antenna 8 that
receives RF signals, followed by a band-defining filter 13 that filters out all signals except those in a band around the wanted signal. The low noise amplifier (LNA) 4 amplifies all the signals
passed by the band pass filter 13, and the purpose of the rest of the radio receiver is to select the wanted signal from any other signals that are not filtered out by the band-defining band pass
filter 13.
In order to process the radio signal further, the signal from the LNA 4 is separated into what are known as I and Q components. In order to do this, using mixers 5, the signal is multiplied by
(“mixed with”) two local oscillator signals that are ideally in quadrature (90 degrees apart and generated by local oscillator 7), and thus can be represented as sin ω[L]t and cos ω[L]t. In the radio
receiver of FIG. 1, a mismatch error has been shown and the local oscillator signals are not 90 degrees apart; there is an error represented by ε. This is only one possible type of error that can
occur in the signal path, but it will lead to errors in the processing that follows if not corrected. An I/Q imbalance estimation and correction part 6 is shown, following conversion to digital
signals by analogue to digital converters (ADCs) 9.
Errors in the form of imbalance in I/Q signals of radio systems can often be detected relatively easily, based on the fact that the outputs of the I and Q branches should be uncorrelated, no matter
what the signal input to the radio is. If however there is an error as shown above, there is some correlation. This is because:
cos(ωLt+ε)=cos ωLt cos ε−sin ωLt sin ε
and so there is a component in the cos ω[L]t branch that has been multiplied by sin ω[L]t. A previous patent disclosed a method of detecting frequency independent errors: GB2215544 (Cheer), published
on 20th Oct. 1989: “Apparatus for the correction of frequency independent errors in quadrature zero IF radio architectures.” A diagram from the patent is shown in FIG. 2. This will now be explained
further, as a similar algorithm is used in some embodiments of the invention, with adaptations to enable frequency dependent operation, in other words detection and correction at a number of
different frequencies. FIG. 2 shows three stages, a first stage to apply DC correction, a second stage to estimate and correct phase imbalance, and a third stage to estimate and correct amplitude imbalance.
Following the DC correction (which is important in Zero IF receivers, but less important in other applications) the next stages shown in FIG. 2 work as follows: the phase is corrected by multiplying
together the I and Q components, integrating the result and adding a small amount of the I channel into the Q channel. Next the amplitudes of the two paths are corrected by detecting an average
difference in absolute values then simply applying an appropriate amount of amplification in one path. Although this approach seems in GB2215544 to be somewhat modular and ad-hoc, it can be shown
that it does effectively detect the correlation between I and Q channels. However it is unable to deal with frequency dependent errors. Although the variation of the errors may be quite small, due to
the fact that the total bandwidth of the receiver is much smaller than the RF bandwidth, this variation will become important when high accuracy is required, as when attempting to achieve very good
image rejection in a modern CMOS based radio receiver. Some embodiments make use of a similar algorithm, but arranged to deal with frequency dependent errors.
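As a numerical illustration of the correlation argument above (not part of the original text): with a quadrature error ε, the time-averaged product of the two local oscillator branches is approximately −sin(ε)/2 rather than zero, which is exactly the residual that an averaging estimator can integrate.

```python
import numpy as np

# Hedged numerical sketch (not from the patent): the averaged product of the
# two LO branches is ~0 in perfect quadrature, but ~ -sin(eps)/2 when the cos
# branch carries a phase error eps, matching the identity
# cos(w t + eps) = cos(w t) cos(eps) - sin(w t) sin(eps).
t = np.arange(100_000)
w = 2 * np.pi * 0.01      # arbitrary normalized tone frequency (illustrative)
eps = 0.05                # 50 mrad quadrature error (illustrative)

corr_ideal = np.mean(np.cos(w * t) * np.sin(w * t))
corr_error = np.mean(np.cos(w * t + eps) * np.sin(w * t))
# corr_ideal is ~0; corr_error is ~ -sin(eps)/2, a measurable correlation.
```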
A component of some embodiments of this disclosure is a Fast Fourier Transform (FFT) to separate out the frequency components of the I and Q signals. However, the algorithm applied in GB2215544
(Cheer) cannot be applied to the separated out components directly, as it requires knowledge of both the positive and negative components of a given frequency value. The output of the FFT will
consist of positive and negative frequencies, and the basis of the new algorithm is to collect FFT bins that have identical frequency magnitudes but opposite signs. These components are added
together and effectively put into an inverse FFT, converting the signal back into a time-based waveform that only contains frequencies (positive and negative) in the band defined by the FFT bin. If
the original FFT has 2N bins, there will be N−1 such signals (DC does not require correction). These N−1 time-based waveforms are then applied to an algorithm corresponding to that shown in Cheer,
resulting in N−1 sets of correction coefficients.
The correction coefficients are then applied to the signal components output by the FFT (for frequency domain correction as shown in Fowler) or as output by the inverse FFT, for time domain
correction as shown in Cheer. Each set of correction coefficients is used twice, once for the negative frequency component and once for the positive frequency component.
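The bin-pairing and per-band inverse transform described above can be sketched as follows (a hedged illustration using NumPy; the indexing convention, with bin N−k holding the −k frequency, follows the usual FFT layout rather than anything stated in the text):

```python
import numpy as np

N = 64
rng = np.random.default_rng(1)
z = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # I + jQ samples

Z = np.fft.fft(z)

# For each magnitude k = 1 .. N/2 - 1, keep only the +k and -k bins and
# inverse-transform, giving a time waveform containing only that band
# (positive and negative frequencies together, as the frequency-independent
# estimator requires).
bands = []
for k in range(1, N // 2):
    Zk = np.zeros(N, dtype=complex)
    Zk[k] = Z[k]          # positive-frequency bin
    Zk[N - k] = Z[N - k]  # matching negative-frequency bin
    bands.append(np.fft.ifft(Zk))
# N/2 - 1 band waveforms; DC (bin 0) and the Nyquist bin (N/2) are left out.
```

The band waveforms, together with the untouched DC and Nyquist bins, still sum back to the original signal, which is what allows correction to be applied per band and the results recombined.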
Embodiments of the present disclosure show an I & Q demodulation system with frequency dependent quadrature mismatch correction by: FFT conversion of the I & Q signals, adding the positive and negative
frequency bins of the same frequency magnitude, and carrying out an IFFT on each of these summed bins to obtain time domain signals per frequency bin, from which an imbalance value is generated for each frequency band. The
imbalance values may be used in any way to compensate the input signal, or to compensate downstream applications using the input signal. One way of using these values is to apply the values to each
frequency bin of the FFT output, to generate frequency dependent mismatch corrected signals. These corrected signals relating to different frequency bins may be combined to create corrected I and Q signals.
In some embodiments, I and Q signals in a radio apparatus are processed using a Fast Fourier Transform (FFT) to separate out the frequency components of these signals. Positive and negative frequency
components with the same magnitude are combined and an inverse FFT applied to convert the signals back into the time domain, resulting in a set of time-domain waveforms each containing only
frequencies with magnitudes corresponding to a pair of FFT bins. These frequency-dependent waveforms are used to generate correction coefficients using any known method for frequency-independent
correction, and then the correction coefficients are used for frequency-dependent correction of the signal components output by the FFT. This can provide improved image rejection in applications such
as narrowband radio receivers, e.g., for GSM/EDGE.
FIG. 3 shows a first embodiment, to estimate I/Q imbalance in I and Q input signals. These input signals are fed to separation circuitry 15 to separate different frequency components of the I and Q
input signals, to represent different parts of a frequency spectrum of the input signals. The separated signals are fed to estimation circuitry 18 arranged to estimate I/Q imbalance at the different
parts (in other words frequency bands) of the frequency spectrum of the input signals. Correction using the estimated values may be added at various places in the I and Q signal paths. Various ways
to implement the circuitry can be envisaged and many additions can be envisaged, and some will be described in more detail.
FIG. 4 shows another embodiment for carrying out estimation. ADC converters 9 to convert I and Q inputs to digital domain are followed by separation circuitry 15 to separate different frequency
components of the I and Q input signals, to represent different parts of a frequency spectrum of the input signals. Signals for each band are converted by conversion circuitry 20 into time domain
signals, and time domain I/Q imbalance estimation for each band is carried out by imbalance estimation part 40. The estimation values for each frequency band are fed to I/Q imbalance correction part
50 for correction of each frequency band. In combining part 60, the corrected I/Q signals of different frequency bands are recombined to output a corrected signal.
FIG. 5 shows another embodiment of estimation circuitry. The I and Q signals are fed to transform circuitry 110 in the form of an FFT circuit for example. Other transforms could be used. Outputs in
the form of frequency domain signals are separated into frequency bins, shown as +1 to +N and −1 to −N, according to positive and negative frequency bands 1 to N. Each of these bands is converted to
a time domain signal by conversion circuitry 120, and kept separate. Then they are combined by combining circuitry 130 for combining the time domain signals of corresponding positive and negative
frequencies, to output time domain signals for bands 1 to N. Each of these is fed to time domain imbalance estimation circuitry 140. Each arrow shown in FIG. 5 represents two signal streams, for I
and Q values, or complex values representing both I and Q signals. The outputs are I and Q imbalance signals, each representing imbalance at frequencies 1 to N.
FIG. 6 shows a schematic view of an example of how to implement the estimation of imbalance as in box 18 of FIG. 3, box 40 of FIG. 4, or box 140 of FIG. 5 for the preceding embodiments or other
embodiments, to calculate alpha (the amplitude correction) and beta (the phase correction). These are similar to estimation values W1 and W2 of Cheer. As in the above mentioned Cheer patent, I and Q
are multiplied together and integrated to give the beta parameter, and the difference in the magnitudes is used to calculate alpha. Both parameters are scaled with the amplitude of the original signal.
As shown in FIG. 6, the I and Q input signals in the form of a time series of complex values are first of all separated by the part 150 labelled complex to real-imag. Three multipliers are shown,
Product 153, Product1 152 and Product2 151. Product2 is used to square the real part, effectively to obtain an absolute value, as is also carried out by Cheer as shown in the amplitude correction
part of FIG. 2. Product1 is used to obtain a square of the imaginary part of the input, again as done in FIG. 2. Parts Mean1 154 and Mean2 155 take average values of the squares output by Product2
and Product1, respectively, equivalent to the integrating function in the amplitude correction part of FIG. 2. Part Add1 157 in FIG. 6 obtains a difference in the averaged values output by the Mean1
and Mean2 parts. Add2 158 and Divide1 160 and Gain1 161 parts are used to scale this difference, which is then output as the amplitude imbalance value alpha. This output may be calculated for each of
the different frequency bands by providing as the input to FIG. 6, a time domain signal for a given frequency band, and repeating the algorithm in series or parallel for different input signals for
different frequency bands.
In FIG. 6 the phase imbalance estimation is carried out using the multiplier labelled Product 153 to carry out a correlation. The output of this part is averaged by the averaging part labelled Mean
156. The output of this is scaled by parts Add2 158 and Divide2 159. This much follows the principles of the phase estimation shown in FIG. 2.
Regarding the phase imbalance estimation, there is an improvement on the original algorithm shown in FIG. 2. As the amplitude of the signal used for scaling is calculated using the uncorrected
signal, this is slightly wrong, by an amount that can be determined to a first order using the value of beta from the initial calculation. This can be corrected to a first order by the polynomial
evaluation block 162 shown in FIG. 6, which implements the polynomial y=x+x^3, slightly correcting the input value x when x is small. The output of the polynomial evaluation is the value beta. Again
this beta output can be calculated for each of the different frequency bands by providing as the input to FIG. 6, a time domain signal for a given frequency band, and repeating the algorithm in
series or parallel for different input signals for different frequency bands.
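A minimal numerical sketch of this estimator is given below. The exact scaling of the Gain1/Divide1/Divide2 blocks is not stated in the text, so the sign and scaling conventions here are assumptions, and the polynomial correction block 162 is omitted for brevity:

```python
import numpy as np

def estimate_imbalance(i_sig, q_sig):
    """Sketch of a FIG.6-style estimator (conventions assumed, not from the
    patent). Phase imbalance: average of the I*Q product (the correlation).
    Amplitude imbalance: difference of the mean squared branch amplitudes.
    Both are scaled by the mean signal power."""
    power = np.mean(i_sig**2) + np.mean(q_sig**2)
    alpha = (np.mean(i_sig**2) - np.mean(q_sig**2)) / power
    beta = -2.0 * np.mean(i_sig * q_sig) / power
    return alpha, beta

# Illustrative check on a tone with a known injected imbalance.
t = np.arange(10_000)
w = 2 * np.pi * 0.01
g, eps = 0.02, 0.03                       # injected gain and phase errors
i_sig = (1 + g) * np.cos(w * t)
q_sig = np.sin(w * t + eps)
alpha, beta = estimate_imbalance(i_sig, q_sig)
# alpha comes out close to g, beta close to -eps (small-error approximation).
```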
The imbalance correction parameters alpha and beta calculated by the circuitry shown in FIG. 6 may be used by the imbalance correction block 50 of FIG. 4. FIG. 7 shows a schematic view of an example
of how to implement the correction block for the preceding embodiments or other embodiments, using the alpha and beta correction components of imbalance as generated according to FIG. 6 or other
embodiments. FIG. 7 shows the estimated imbalance parameters alpha and beta being applied to correct the I and Q signals. Note that the correction would be done to each of the time domain signals at
different frequencies as shown in FIG. 5.
The equations implemented here are:

x′(t) = x(t) + beta·y(t) − alpha·x(t)
y′(t) = y(t) + beta·x(t) + alpha·y(t)
The parameter alpha is used to adjust the relative amplitude of the signals. If x(t) is bigger than y(t), alpha can be increased to compensate for this. The phase difference between the two paths is
adjusted by adding beta times one path into the other, as in much previous work, including [Cheer].
FIG. 7 shows four multipliers, Product1 172, Product2 173, Product3 174 and Product4 175, and two adders, Add2 176 for the real part of the signal and Add1 177 for the imaginary part. The input
signal 3 in the form of a time series of digital samples representing complex values is separated into real and imaginary components by the item labelled complex to real-imag1 171. The real part is
fed to Product1 and Product3, and the imaginary part is fed to Product2 and Product4. The alpha component 1 (amplitude imbalance) is fed to Product1 and Product4 while the beta component 2 (phase
imbalance) is fed to Product2 and Product3. Part Add2 176 produces a sum of the real part of the input and the output of Product2, and subtracts the output of Product1. Part Add1 177 produces a sum
of the outputs of Product3, Product4 and the imaginary part of the input. The real and imaginary parts output by respectively Add1 and Add2 are put together into complex values by the part labelled
real-imag to complex 178.
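The arithmetic of this correction network, as described for FIG. 7, can be sketched as follows (variable names are mine; x and y denote the real and imaginary parts of the input):

```python
import numpy as np

def correct_iq(x, y, alpha, beta):
    """Sketch of the FIG. 7 network: Product1 = alpha*x, Product2 = beta*y,
    Product3 = beta*x, Product4 = alpha*y."""
    x_corr = x + beta * y - alpha * x   # Add2: real input + Product2 - Product1
    y_corr = beta * x + alpha * y + y   # Add1: Product3 + Product4 + imag input
    return x_corr, y_corr

# With zero imbalance estimates the signal passes through unchanged.
x = np.array([1.0, 0.5])
y = np.array([2.0, -0.5])
```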
Note that the time domain algorithm suggested above is not the only way of doing this, and [Fowler] shows a method of achieving the same result by correction of a frequency domain representation of
the signal.
FIG. 8 shows a schematic view of signal processing according to an embodiment and having estimation and correction. As with any of the embodiments it can be implemented in any kind of conventional
processing hardware such as a general purpose microprocessor or DSP or ASIC or dedicated logic circuitry, for example, and software for such processing hardware can be written in any conventional
computer language. The starting point for the signal processing is, exactly as in [Fowler], an N-point FFT labelled block 201 in FIG. 8.
The input to this block 201 consists of the time-domain I and Q signals of the radio apparatus in question, x[i](n) and x[q](n) using notation from [Fowler], where n represents sample number which
increases with time.
The output from block 201 is a frequency domain representation of the signal which is labelled, as in [Fowler], X^R(k) and X^I(k) where k represents frequency bin and increases with frequency, and R
and I indicate real and imaginary, respectively.
The signal processing operation performed by the N-point FFT block 201 is well-known, converting N time domain complex samples to N frequency domain complex samples.
In a simple uncorrected radio apparatus, the signal would be recovered from the complex samples directly. In the example shown in the graphs of FIGS. 9 and 10 described below, an interferer is shown
at positive frequencies, causing an unwanted image at negative frequencies. This makes it difficult to recover wanted signals at negative frequencies. Therefore, the purpose of the embodiments of the
invention in this instance is to correct the input signal, either in the time domain or the frequency domain, so that the interferer does not appear at negative frequencies (note that the negative
frequencies cannot simply be removed, because this is where parts of the wanted signal will be).
Block 202 takes each complex frequency bin pair produced by block 201 and produces a time domain waveform from each X^R(k) and X^I(k) pair. This is a new feature, and is not done in [Fowler] where
corrections are performed in the frequency domain.
The processing in Block 202 involves multiplication of each X^R(k) and X^I(k) pair with a time domain waveform. This operation can be subject to many simplifications and optimizations known to those
working in the signal-processing field, but these are not explored here.
The output of Block 202 consists of N time-domain signals at positive and negative frequencies. Each signal consists of a cisoid (complex sinusoid) of constant amplitude and frequency defined by the
input X^R(k) and X^I(k) pair.
In Block 203, the time domain signals generated in Block 202 are combined in the following manner. Each positive frequency cisoid is added to the negative frequency cisoid with the same magnitude.
This results in (N/2)−1 time domain waveforms. The number of waveforms is halved by the adding together operation. The algorithm cannot be applied to the DC (zero frequency) bin and the
half-sampling-rate (Nyquist) bin, resulting in the “−1” in the number of time domain waveforms. The above operation results in time domain waveforms with both positive and negative frequency information, which can
be used to calculate the correction coefficients in the manner described by [Cheer]. Note that the sorting operation described above is completely different to the “Reverse Order” operation described
by [Fowler], which operates on signals in the frequency domain. The purpose of the operation described by [Fowler] is different, as described below.
In Block 204, the techniques described in [Cheer] are applied to each of the time domain signals generated by Block 203. The output of Block 203 consists of (N/2−1) I+Q time domain waveforms each
containing information from a narrow band of frequencies. Since [Cheer] operates on one I+Q time domain waveform, it can be applied to each of these signals individually resulting in (N/2−1)
weighting coefficients W1 and W2, each applicable to a narrow band of frequencies. Block 204 can be implemented as shown in FIG. 6, for example.
In Block 205, the weighting coefficients W1 and W2 as described in [Cheer] are applied to each of the time domain waveforms generated by Block 202 in precisely the manner described in [Cheer]. This is
done for both the positive and the negative frequencies, resulting in (N−2) time-domain corrected I+Q waveforms. These signals are then added together to give a single time domain corrected waveform
which is the output of the system. Block 205 can be implemented as shown in FIG. 7, for example. The outputs of block 205 are z[i](n) and z[q](n), and correspond to the outputs of FIG. 7. All of the
I, and all of the Q values for the different frequency bands can be added, to produce two (I & Q) time domain corrected waveforms, typically each in the form of a series of digital values.
Embodiments of the frequency dependent imbalance estimation system can start from a baseband signal being fed into a 64 point FFT, resulting in a 64 frequency bin output vector. The time waveforms
corresponding to each bin are then calculated using the output vector to scale an inverse FFT matrix. The result of this process is a 64×64 matrix the columns of which correspond to time values and
the rows of which correspond to frequency values. Thus each individual row of the matrix is a time waveform with frequency components from one bin of the FFT. The bins corresponding to negative
frequencies are then selected and their signals are added to the matrix row for the corresponding positive frequency. This results in a 31×64 matrix, as no calculation is performed for DC and
half-sampling frequency components. The 31 resulting time waveforms are then input to the above mentioned algorithm of [Cheer], resulting in 31 sets of correction coefficients, one set for each
non-zero frequency magnitude of the original FFT.
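The matrix procedure above might be rendered as follows (a hypothetical NumPy sketch; the patent does not prescribe an implementation, and the row/column layout here is an assumption):

```python
import numpy as np

N = 64
rng = np.random.default_rng(2)
z = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # baseband samples

Z = np.fft.fft(z)                                   # 64 frequency bins

# Scale the rows of an inverse-FFT matrix by the bin values: row k of M is
# the time waveform contributed by bin k alone (64 x 64: rows correspond to
# frequency bins, columns to time samples).
M = Z[:, None] * np.fft.ifft(np.eye(N), axis=1)

# Add each negative-frequency row (bin N-k) into the matching
# positive-frequency row (bin k); DC (k = 0) and the half-sampling bin
# (k = N/2) are skipped, leaving 31 band waveforms.
paired = np.array([M[k] + M[N - k] for k in range(1, N // 2)])
```

Each of the 31 rows of `paired` is then a candidate input to the per-band estimator, and summing the rows (plus the skipped DC and Nyquist contributions) recovers the original time signal.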
These correction coefficients are then re-ordered and used as input to a correction block, which can be implemented as described in [Cheer], for example, with each frequency band treated separately.
An example of a spectrum of the input signal and the same signal corrected by the frequency dependent correction system with a 64 point FFT, is shown in FIGS. 9 and 10. As well as the wanted signal
at +200 kHz, it is easy to see the unwanted image of the signal at −200 kHz, which could prevent wanted signals from being received at this frequency. The image is about 20 dB below the wanted signal.
FIG. 11 shows another embodiment, similar to FIG. 5, but having the separating circuitry arranged to operate in the time domain. The input I and Q signals are fed to time domain bandpass filters 320,
330, 340, each for a different frequency band, 1, 2 . . . N. The time domain signals output by these parts are then each fed to a time domain I/Q imbalance estimation circuit 140, as in FIG. 5. The
filters can be implemented following established practice.
In another embodiment shown in FIG. 12, the imbalance estimation part 18 for each band is arranged to be fed from imbalance corrected output signals, rather than carrying out the correction
downstream of the estimation. The I/Q imbalance correction part 50 carries out correction at each frequency band. Combining part 60 recombines the corrected I and Q signals of the different frequency
bands. The corrected outputs are fed back to the I/Q estimation part 18 via a separation part 15 for separating the frequency bands. Clearly another alternative arrangement is to use the outputs of
the imbalance correction part 50 as inputs to the imbalance estimation part 18, before the recombination by combining part 60, to avoid the need for separation part 15.
In the embodiments described, the I/Q mismatch compensation factors may be used to adjust the magnitude and phase response in the time domain or in the frequency domain, in the analogue or in the
digital portion of a receiver. The passive I/Q mismatch calibration system can calibrate frequency dependent gain or magnitude imbalance, frequency independent magnitude imbalance, frequency
dependent phase imbalance, and frequency independent phase imbalance, or combinations of these. The calibration may occur for a set number of samples followed by, or accompanied by, compensation based
on the analysis. In other embodiments, an iterative approach may be used to provide adaptive compensation for I/Q mismatch.
Some of the embodiments can be applied to image rejection of a low IF radio receiver for application in GSM/EDGE cellular networks. The system could also be used for many other types of radio system,
such as the Zero IF receiver described in GB2215544 (Cheer).
The embodiments of the disclosure have been conceived in the context of the transmitter and receiver in cellular radio handsets targeted at the 2.5G and 3G standards. It is of potential application
to any wireless communication systems and can include systems using frequency division multiple access (FDMA), time division multiple access (TDMA), and various spread spectrum techniques, such as
code division multiple access (CDMA) signal modulation. GSM systems use a combination of TDMA and FDMA modulation techniques. Wireless communication devices incorporating wireless technology can
include cellular radiotelephones, PCMCIA cards incorporated within portable computers, personal digital assistants (PDAs) equipped with wireless communication capabilities, and the like.
In the present specification and claims the word “a” or “an” preceding an element does not exclude the presence of a plurality of such elements. Further, the word “comprising” does not exclude the
presence of other elements or steps than those listed.
From reading the present disclosure, other modifications will be apparent to persons skilled in the art. Such modifications may involve other features which are already known in the design,
manufacture and use of radio apparatus and component parts thereof, and which may be used instead of or in addition to features already described herein.
The various embodiments described above can be combined to provide further embodiments. All of the U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents,
foreign patent applications and non-patent publications referred to in this specification and/or listed in the Application Data Sheet are incorporated herein by reference, in their entirety. Aspects
of the embodiments can be modified, if necessary to employ concepts of the various patents, applications and publications to provide yet further embodiments.
These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to
the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are
entitled. Accordingly, the claims are not limited by the disclosure.
Measurement unit conversion: million cubic foot/minute
›› Measurement unit: million cubic foot/minute
Full name: million cubic foot/minute
Plural form: million cubic feet/minute
Category type: volume flow rate
Scale factor: 471.94745
›› SI unit: cubic meter/second
The SI derived unit for volume flow rate is the cubic meter/second.
1 cubic meter/second is equal to 0.00211887997276 million cubic foot/minute.
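As a sanity check, the scale factor quoted above can be exercised in a few lines of Python (the function names below are purely illustrative, not part of the converter site):

```python
# Conversion between million cubic feet per minute and cubic metres per second,
# using the scale factor quoted above: 471.94745 m^3/s per million cubic foot/minute.
SCALE_M3S_PER_MMCFM = 471.94745

def mmcfm_to_m3_per_s(q_mmcfm):
    """Million cubic feet per minute -> cubic metres per second."""
    return q_mmcfm * SCALE_M3S_PER_MMCFM

def m3_per_s_to_mmcfm(q_m3s):
    """Cubic metres per second -> million cubic feet per minute."""
    return q_m3s / SCALE_M3S_PER_MMCFM

# 1 m^3/s comes out as roughly 0.0021189 million cubic foot/minute,
# consistent with the figure quoted on the page.
print(m3_per_s_to_mmcfm(1.0))
```

Note that the reciprocal of the scale factor reproduces the page's stated equivalence to about twelve decimal places.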
›› Convert million cubic foot/minute to another unit
Convert million cubic foot/minute to [ ] [Go!]
Valid units must be of the volume flow rate type.
You can use this form to select from known units:
›› Sample conversions: million cubic foot/minute
million cubic foot/minute to barrel/second [UK]
million cubic foot/minute to cubic millimetre/day
million cubic foot/minute to billion cubic foot/minute
million cubic foot/minute to barrel/hour [petroleum]
million cubic foot/minute to cubic millimetre/hour
million cubic foot/minute to barrel/second [US]
million cubic foot/minute to hectare metre/second
million cubic foot/minute to barrel/minute [US]
million cubic foot/minute to ounce/minute [UK]
million cubic foot/minute to cubic dekametre/second | {"url":"http://www.convertunits.com/info/million+cubic+foot/minute","timestamp":"2014-04-16T16:32:21Z","content_type":null,"content_length":"29115","record_id":"<urn:uuid:1e855f03-b4c9-4b09-8a04-df83d3647d25>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00524-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fragments of First-Order Logic over Infinite Words
when quoting this document, please refer to the following
http://drops.dagstuhl.de/opus/volltexte/2009/1818/ Diekert, Volker
Kufleitner, Manfred
Fragments of First-Order Logic over Infinite Words
We give topological and algebraic characterizations as well as language theoretic descriptions of the following subclasses of first-order logic $\mathrm{FO}[<]$ for $\omega$-languages: $\Sigma_2$, $\Delta_2$, $\mathrm{FO}^2 \cap \Sigma_2$ (and by duality $\mathrm{FO}^2 \cap \Pi_2$), and $\mathrm{FO}^2$. These descriptions extend the respective results for finite words. In particular, we relate
the above fragments to language classes of certain (unambiguous) polynomials. An immediate consequence is the decidability of the membership problem of these classes, but this was shown before by
Wilke (1998) and Boja{\'n}czyk (2008) and is therefore not our main focus. The paper is about the interplay of algebraic, topological, and language theoretic properties.
BibTeX - Entry
author = {Volker Diekert and Manfred Kufleitner},
title = {{Fragments of First-Order Logic over Infinite Words}},
booktitle = {26th International Symposium on Theoretical Aspects of Computer Science},
pages = {325--336},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-939897-09-5},
ISSN = {1868-8969},
year = {2009},
volume = {3},
editor = {Susanne Albers and Jean-Yves Marion},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2009/1818},
URN = {urn:nbn:de:0030-drops-18185},
doi = {http://dx.doi.org/10.4230/LIPIcs.STACS.2009.1818},
annote = {Keywords: Infinite words, Regular languages, First-order logic, Automata theory, Semigroups, Topology}
Keywords: Infinite words, Regular languages, First-order logic, Automata theory, Semigroups, Topology
Seminar: 26th International Symposium on Theoretical Aspects of Computer Science
Issue date: 2009
Date of publication: 2009 | {"url":"http://drops.dagstuhl.de/opus/volltexte/2009/1818/","timestamp":"2014-04-25T07:43:46Z","content_type":null,"content_length":"8187","record_id":"<urn:uuid:13a9696b-7db0-4028-a1b4-1fff90d97fea>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00104-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is so "spectral" about spectra?
What is the background of the terminology of spectra in homotopy theory? To what extent does the name "spectrum" fit the definition and the properties? Also, are there relations to other spectra
in mathematics (algebraic geometry, operator theory)?
PS: The title is an allusion to this question ;-)
terminology homotopy-theory
Although I've been using spectra for some time now, I have never read or heard why this name was chosen, even in a history text like May's math.uiuc.edu/K-theory/0321/history.pdf . As described
there, the concept of a spectrum has its origin in Lima's thesis. Perhaps one should have a look there. – Lennart Meier May 10 '10 at 16:57
To your other questions: there are of course articles, where several meanings of spectra are used, say in derived algebraic geometry or more generally in modern stable homotopy theory or also in
K-Theory stuff, which can be defined by operators. But I don't think the topological meaning has a real mathematical relationship to the other meanings. – Lennart Meier May 10 '10 at 16:58
I always thought that the terminology "spectrum" in algebraic geometry and operator theory has been chosen because of the analogy between the Gelfand Naimark construction for commutative $C^*$
algebras and the way we represent varieties in terms of algebra hom's. Does the above construction of the spectra not realize a similar construction for the homotopy functor? – plusepsilon.de Apr
19 '11 at 14:19
1 Answer
It seems reasonable to me that in operator theory the term "spectrum" comes from the Latin verb spectare (paradigm: specto, -as, -avi, -atum, -are), which means "to observe". After all
in quantum mechanics the spectrum of an observable, i.e. the eigenvalues of a self adjoint operator, is what you can actually see (measure) experimentally.
Edit: after having a look to an online etymological dictionary, it seems the relevant Latin verb is another: spècere (or interchangeably spicere)= "to see", from which comes the root
spec- of the latin word spectrum= "something that appears, that manifests itself, vision". Furthermore, spec- = "to see", -trum = "instrument" (like in spec-trum). Also the term
"spectrum" in astronomy and optics has the same origin.
In algebraic geometry, I believe the term "spectrum", and the corresponding concept, has been introduced after the development of quantum mechanics became well known. In this context,
the concept of spectrum as a space made of ideals is perfectly analogous of that in operator theory (think of Gelfand-Naimark theory, and that the Gelfand spectrum of the abelian C-star
algebra generated by one operator is nothing but the spectrum of that operator).
I wouldn't be surprised if the term "spectral sequence" had something to do with "inspecting" [b.t.w. also "to inspect" comes from in + spècere...] step by step the deep properties of
some cohomological constructions.
Maybe the term "spectrum" in homotopy theory and generalized (co)homology -but I don't know almost anything about these- has to do with "probing", "testing", a space via maps from (or
to?) certain standard spaces such as the Eilenberg-MacLane spaces or the spheres. Does it sound reasonable?
Edit: The following paragraph from the wikipedia article on "primon gas" seems to support my guess:
"The connections between number theory and quantum field theory can be somewhat further extended into connections between topological field theory and K-theory, where, corresponding
to the example above, the spectrum of a ring takes the role of the spectrum of energy eigenvalues, the prime ideals take the role of the prime numbers, the group representations take
the role of integers, group characters taking the place the Dirichlet characters, and so on"
In the 1920s people in topology considered "projective spectra", which were just what some now call inverse systems of topological spaces whose index set is the natural numbers. An inverse
system approximates its limit. Similarly spectral sequences approximate homologies. There are similarities of such definitions with the spectra in stable homotopy. – Zoran Skoda Jan
20 '11 at 19:04
On the other hand the etymology above is a bit misleading. The spectrum of a system is a generalization of the spectrum of light, related to energies the same way. For light it is
well known that the spectrum was named by Newton, who was doing the experiment with the prism, which he could not explain as he followed the corpuscular theory of light. The meaning was
apparition, for its ghostly appearance from the prism. – Zoran Skoda Jan 20 '11 at 19:16
@zoran: so the motivation would be different from the one I suggested (roughly, "appearing" instead of "inspecting"), but the etymology could be the same. – Qfwfq Mar 11 '11 at 14:24
Hi, does anyone know about the spectrum in topoi theory? I met someone named John Kennison who spoke about this at a conference recently and I wanted to figure out the origins of the
term. If people think this would be better as a full question I can post it, but I didn't want another "What's so spectral" question after this great answer was given here. Kennison
deals with the cyclic spectrum of a Boolean flow, which is a special case of a Cole Spectrum. He references P.T. Johnstone's "Topos Theory" which I can't find a copy of. Has anyone
studied this kind of spectrum? – David White May 20 '11 at 18:39
| {"url":"http://mathoverflow.net/questions/24090/what-is-so-spectral-about-spectra?sort=oldest","timestamp":"2014-04-23T23:56:59Z","content_type":null,"content_length":"63630","record_id":"<urn:uuid:d1aacbee-159b-4843-82ec-b7e6837cc317>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00532-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Calculate Volume
Method 1 of 6: Calculate the Volume of a Cube
1. 1
Write down the formula for finding the volume of a cube. The formula is simply V = S^3, where V represents Volume and S represents the length of one side of the cube.
2. 2
Find the length of one side of a cube. Any side will do. Write it down.
3. 3
Cube it. Since all the lengths of the sides of a cube are equal, you can just cube the length of one side of a cube to find its volume. You can think of this as multiplying the length, width, and
height when they all just happen to be the same.
□ Ex: (5 in.)^3 = 5 in. * 5 in. * 5 in. = 125 in.^3
4. 4
State your answer in cubic units. Don't forget to put your final answer in cubic units. The final answer is 125 in.^3
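The cube calculation above condenses into a couple of lines of Python (the function name is just for illustration):

```python
def cube_volume(side):
    """Volume of a cube: V = s^3, since all edges are equal."""
    return side ** 3

print(cube_volume(5))  # 125, i.e. 125 cubic inches for a 5 in. side
```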
Method 2 of 6: Calculate the Volume of a Rectangular Prism
1. 1
Write down the formula for finding the volume of a rectangular prism. The formula is simply V = L * W * H, where V = Volume, L = Length. W = Width, and H = Height.
2. 2
Find the length. The length is the longest side of the flat surface of the rectangle on the top or bottom of the rectangular prism.
3. 3
Find the width. The width of the rectangular prism is the shorter side of the flat surface of the rectangle on the top or bottom of the shape.
4. 4
Find the height. The height is the part of the rectangular prism that rises up. You can imagine the height of the rectangular prism as the part that stretches up a flat rectangle and makes it three-dimensional.
5. 5
Multiply the length, the width, and the height. You can multiply them in any order to get the same result.
□ Ex: 4 in. * 3 in. * 6 in = 72 in.^3
6. 6
State your answer in cubic units. The final answer is 72 in.^3
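A minimal Python sketch of the prism formula (names are illustrative); as the steps note, the multiplication order does not matter:

```python
def rectangular_prism_volume(length, width, height):
    """Volume of a rectangular prism: V = L * W * H (any multiplication order works)."""
    return length * width * height

print(rectangular_prism_volume(4, 3, 6))  # 72
```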
Method 3 of 6: Calculate the Volume of a Cylinder
1. 1
Write down the formula for finding the volume of a cylinder. The formula is V = πr^2h. V represents Volume, h represents height, and πr^2 represents the area of the circular base.
□ You can think of it as finding the volume by multiplying the base times the height. In this case, the base just happens to be circular. A simpler formula is V = B * H, where the B represents
the base, or the area of the circle, and H represents the height.
2. 2
Calculate the area of the circular base. To calculate the area of the circular base, you simply have to use the formula A = πr^2. The symbol r represents the radius of the circle. Let's say this
particular cylinder has a circular base with a radius of 4 in.
□ To find the area of this circle, just plug 4 into the formula A = πr^2 to get A = π * 16 in^2. You should use π on your calculator to get the most accurate results. If you don't have a
calculator, you can use 3.14, the first three digits of π, but your answer will be slightly less accurate.
☆ Ex: A = 16 in.^2 * π.
☆ Ex: A = 50.24 in.^2
3. 3
Find the height. The height of the cylinder is simply how high the cylinder extends. Let's say that the height of this same cylinder is 10 in. The cylinder has a base with the area 50.24 in.^2
and a height of 10 in.
4. 4
Multiply the area of the base times the height. Just plug in the known constants into the original formula, V = πr^2h. Remember that the cylinder has a base with the area 50.24 in.^2 and a height
of 10 in.
□ Ex: V = 50.24 in.^2 * 10 in.
□ Ex: V = 502.4 in.^3
5. 5
State your answer in cubic units. Don't forget to state your final answer in cubic units since you're still working in a three-dimensional space. The final answer is 502.4 in.^3
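The same calculation in Python, using `math.pi` for full precision. Note this gives about 502.65 in.^3 rather than the 502.4 in.^3 obtained above, because the worked example rounds π to 3.14:

```python
import math

def cylinder_volume(radius, height):
    """Volume of a cylinder: V = pi * r^2 * h (circular base area times height)."""
    return math.pi * radius ** 2 * height

print(cylinder_volume(4, 10))  # ~502.65; the article's 502.4 comes from using pi = 3.14
```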
Method 4 of 6: Calculate the Volume of a Cone
1. 1
Write down the formula for finding the volume of a cone. The formula is V = 1/3πr^2h. V represents Volume, h represents height, and πr^2 represents the area of the circular base.
□ You can think of it as finding the volume by multiplying 1/3 times the base times the height. In this case, the base just happens to be circular. A simpler formula is V = 1/3 * B * H, where
the B represents the area of the base, or the area of the circle, and H represents the height.
2. 2
Calculate the area of the circular base. To calculate the area of the circular base, you simply have to use the formula A = πr^2. The symbol r represents the radius of the circle at the base of
the cone. Let's say this particular cone has a circular base with a radius of 3 in.
□ To find the area of this circle, just plug 3 into the formula A = πr^2 to get A = π * 9 in^2. You should use π on your calculator to get the most accurate results. If you don't have a
calculator, you can use 3.14, the first three digits of π, but your answer will be slightly less accurate.
☆ Ex: A = 9 in.^2 * π.
☆ Ex: A = 28.27 in.^2
3. 3
Find the height. The height of the cone is simply how high the cone extends. Let's say that the height of this same cone is 5 in. The cone has a base with the area 28.27 in.^2 and a height of 5 in.
4. 4
Multiply the area of the base times the height. Just plug in the known constants into the original formula, V = 1/3 * πr^2h. Remember that the cone has a base with the area 28.27 in.^2 and a
height of 5 in.
□ Ex: V = 28.27 in.^2 * 5 in.
□ Ex: V = 141.37 in.^3
5. 5
Multiply your answer by 1/3. You have essentially found the volume of a cylinder and now have to divide that answer by 3. You can think of how a cone can fit into a cylinder three times. Instead
of multiplying your answer by 1/3, just divide it by 3 to get the same result.
□ Ex: 141.37 in.^3/ 3 = 47.12 in.^3
6. 6
State your answer in cubic units. Don't forget to state your final answer in cubic units since you're still working in a three-dimensional space. The final answer is 47.12 in.^3
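The cone formula as a Python sketch (function name illustrative); with `math.pi` the result matches the 47.12 in.^3 above:

```python
import math

def cone_volume(radius, height):
    """Volume of a cone: V = (1/3) * pi * r^2 * h -- one third of the matching cylinder."""
    return math.pi * radius ** 2 * height / 3

print(cone_volume(3, 5))  # ~47.12, matching the worked example
```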
Method 5 of 6: Calculate the Volume of a Sphere
1. 1
Write down the formula for finding the volume of a sphere. The formula is V = 4/3πr^3.
2. 2
Find the radius of the sphere. Write it down.
□ Let's say the radius of the sphere is 3 in.
3. 3
Cube the radius. Just multiply 3 in. by itself three times to cube it.
□ Ex: (3 in.)^3 = 3 in. * 3 in. * 3 in. = 27 in.^3
4. 4
Multiply your answer by 4/3. Multiply your new answer, 27 in.^3, by 4/3. You can do this by multiplying the whole number with the fraction: Simply create a fraction with the whole number as the
numerator and 1 on the denominator, and multiply it by 4/3, taking care to multiply the numerators and denominators of both fractions. Here's how you do it.
□ Ex: 27/1 in.^3 * 4/3 = 108/3 in.^3
□ Simplify. 108/3 in.^3 = 36 in.^3
5. 5
Multiply your answer by π. Simply take your answer, 36, and multiply it by the digits represented by π. You should use π on your calculator to get the most accurate results. If you don't have a
calculator, you can use 3.14, the first three digits of π, but your answer will be slightly less accurate. Here's how to do it:
□ Ex: 36 in.^3 * π = 113.09 in.^3
6. 6
State your answer in cubic units. The final answer is 113.09 in.^3
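A short Python sketch of the sphere formula (illustrative name); `math.pi` gives ~113.10 in.^3, in line with the 113.09 in.^3 above:

```python
import math

def sphere_volume(radius):
    """Volume of a sphere: V = (4/3) * pi * r^3."""
    return 4.0 / 3.0 * math.pi * radius ** 3

print(sphere_volume(3))  # ~113.10
```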
Method 6 of 6: Calculate the Volume of a Regular Pyramid
1. 1
Write down the formula for finding the volume of a regular pyramid. The formula for finding the volume of a regular pyramid is V = 1/3 * area of base * height.
2. 2
Calculate the area of the base. The base of a regular pyramid can be triangular, square, or hexagonal, but each side of the base will be equal in length. Let's use a pyramid with a square base
whose sides are all 6 inches long. To find the area of a square, you just have to multiply its base times its height, or just square one of the sides, since that will give you the same answer.
□ Ex: 6 in. * 6 in. = 36 in.^2
3. 3
Find the height. Let's say the height of this pyramid is 10 in.
4. 4
Multiply the area of the base times the height. Now take the area of the base of the pyramid and multiply it by its height.
□ Ex: 36 in.^2 * 10 in. = 360 in.^3
5. 5
Multiply your answer by 1/3. Just multiply your answer by 1/3 to find the volume of the pyramid. You can also just divide your answer by 3, since that's a slightly easier way of multiplying a
number by 1/3.
□ Ex: 360 in.^3/3 = 120 in.^3.
6. 6
State your answer in cubic units. The final answer is 120 in.^3
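The pyramid steps above, condensed into Python (assuming a square base, as in the worked example; the function name is illustrative):

```python
def square_pyramid_volume(base_side, height):
    """Volume of a regular pyramid with a square base: V = (1/3) * base_area * height."""
    return base_side ** 2 * height / 3

print(square_pyramid_volume(6, 10))  # 120.0
```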
• Always use cubic units.
• Take care with units when using any formula. For example, if you are calculating the volume of a cuboid, the length, width, and height should all be in the same units.
Article Info
Featured Article
Categories: Featured Articles | Volume
Recent edits by: LeahlovesGod, Lugia2453, Daniel Bauwens
In other languages:
Italiano: Come Calcolare il Volume, Русский: находить объем, Español: Cómo calcular el volumen, Nederlands: Volume berekenen, Deutsch: Volumen berechnen, 中文: 计算三维物体体积, Français: Comment
calculer un volume, Português: Como Calcular Volume
Thanks to all authors for creating a page that has been read 146,999 times.
| {"url":"http://www.wikihow.com/Calculate-Volume","timestamp":"2014-04-19T05:22:58Z","content_type":null,"content_length":"85338","record_id":"<urn:uuid:b8751982-f30a-486c-b087-c93e2fdeb4b5>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00205-ip-10-147-4-33.ec2.internal.warc.gz"} |
1999-2000 UAF Catalog
Degrees and Programs
College of Science, Engineering and Mathematics
Department of Physics
(907) 474-7339
www.uaf.edu/physics/
Degrees: B.A., B.S., M.S., M.A.T., Ph.D.
Minimum Requirements for Degrees: B.A.: 130 credits; B.S.: 130 credits; M.S.: 30-33 credits; M.A.T.: 36 credits; Ph.D.: open
The science of physics is concerned with the nature of matter and energy and encompasses all phenomena in the physical world from elementary particles to the structure and origin of the universe.
Physics provides, together with mathematics and chemistry, the foundation of work in all fields of physical science and engineering, and contributes to other fields such as biology and medicine.
The undergraduate curriculum provides a solid foundation in general physics with emphasis on its experimental aspects. A student completing this curriculum should be prepared for careers in education
and industry, and for advanced work in the fields of physics, applied physics and related sciences.
The M.S., M.A.T., and Ph.D. degrees are offered in physics, atmospheric sciences, space physics and general science.
Graduate work is offered in various areas of physics and applied physics including many of the research areas found in the UAF Geophysical Institute. Faculty and student research programs currently
emphasize investigation of auroral, ionospheric, magnetospheric and space plasma physics, the physics and chemistry of the upper and middle atmosphere, radio wave propagation and scattering,
solar-terrestrial relations, condensed matter physics and polar meteorology.
Teaching and research assistantships are available on a competitive basis. Contact the department or individual faculty members for more information.
The physics department is responsible for the physics, atmospheric sciences, space physics and the general science programs. See atmospheric sciences and space physics listings for more information
on degree requirements in these disciplines.
Undergraduate Program
Physics -- B.A. Degree
1. Complete the general university requirements.
2. Complete the B.A. degree requirements.
3. Complete the following program (major) requirements:
a. Complete the following:*
PHYS 113 -- Concepts of Physics (1 credit)
PHYS 211X -- General Physics (4 credits)
PHYS 212X -- General Physics (4 credits)
PHYS 213X -- Elementary Modern Physics (4 credits)
PHYS approved electives (20 credits)
b. Complete the following minor in mathematics:
MATH 200X -- Calculus** (4 credits)
MATH 201X -- Calculus** (4 credits)
MATH 202X -- Calculus (4 credits)
MATH electives at the 300-level or above (6 credits)
4. Minimum credits required (130 credits)
* Student must earn a "C" grade or better in each course.
** Satisfies core curriculum or B.A. degree requirements, but not both.
Physics -- B.S. Degree
1. Complete the general university requirements. (As part of the core curriculum requirements, these courses are suggested: CHEM 105X and CHEM 106X; or GEOS 101X; or BIOL 105X.)
2. Complete the B.S. degree requirements.
3. Complete the following program (major) requirements:*
PHYS 113 -- Concepts of Physics (1 credit)
PHYS 211X -- General Physics (4 credits)
PHYS 212X -- General Physics (4 credits)
PHYS 213X -- Elementary Modern Physics (4 credits)
PHYS 311 -- Mechanics (4 credits)
PHYS 312 -- Mechanics (4 credits)
PHYS 313 -- Thermodynamics and Statistical Physics (4 credits)
PHYS 331 -- Electricity and Magnetism (3 credits)
PHYS 332 -- Electricity and Magnetism (3 credits)
PHYS 381W,O -- Physics Laboratory (3 credits)
PHYS 382W -- Physics Laboratory (3 credits)
PHYS 411 -- Modern Physics (4 credits)
PHYS 412 -- Modern Physics (4 credits)
PHYS 445 -- Solid State Physics and Physical Electronics (4 credits)
PHYS 462 -- Geometrical and Physical Optics (4 credits)
4. Complete the following program (major) requirements:
MATH 200X -- Calculus** (4 credits)
MATH 201X -- Calculus** (4 credits)
MATH 202X -- Calculus (4 credits)
MATH 302 -- Differential Equations (3 credits)
MATH electives at the 300-level or above*** (9 credits)
5. Minimum credits required (130 credits)
* Student must earn a "C" grade or better in each course.
** Satisfies core curriculum or B.A. degree requirements, but not both.
*** Suggested electives: MATH 314, 421 and 422.
Note: Suggested core courses GEOS 101X and BIOL 105X require completion of the second semester sequential course to fulfill the natural science depth emphasis.
Note: Other courses suggested to fulfill minimum credit requirements: ES 201, 307 and 308.
1. Complete the following:
PHYS 103X-104X -- College Physics
or PHYS 211X-212X -- General Physics (8 credits)
2. Complete the following:
PHYS 213X -- Elementary Modern Physics (4 credits)
Electives at the 300- 400-level (8 credits)
3. Minimum credits required (20 credits)
Graduate Program
Physics -- M.S. Degree
1. Complete the general university requirements.
2. Complete the master's degree requirements.
3. Complete the thesis or non-thesis requirements:
a. Complete the following:
PHYS 699 -- Thesis (6-12 credits)
b. Complete 4 of the following:
PHYS 611 -- Mathematical Physics (3 credits)
PHYS 612 -- Mathematical Physics (3 credits)
PHYS 621 -- Classical Mechanics (3 credits)
PHYS 622 -- Statistical Mechanics (3 credits)
PHYS 631 -- Electromagnetic Theory (3 credits)
PHYS 632 -- Electromagnetic Theory (3 credits)
PHYS 651 -- Quantum Mechanics (3 credits)
PHYS 652 -- Quantum Mechanics (3 credits)
c. Complete 12 credits from the following:
Approved PHYS 600-level courses
Approved ATM 600-level courses
d. Minimum credits required (30 credits)
a. Complete the following:
PHYS 698 -- Research (3-6 credits)
Approved courses (18 credits)
b. Complete 4 of the following:
PHYS 611 -- Mathematical Physics (3 credits)
PHYS 612 -- Mathematical Physics (3 credits)
PHYS 621 -- Classical Mechanics (3 credits)
PHYS 622 -- Statistical Mechanics (3 credits)
PHYS 631 -- Electromagnetic Theory (3 credits)
PHYS 632 -- Electromagnetic Theory (3 credits)
PHYS 651 -- Quantum Mechanics (3 credits)
PHYS 652 -- Quantum Mechanics (3 credits)
c. Minimum credits required* (33 credits)
* At least 30 credits must be regular coursework.
Physics -- M.A.T. Degree
1. Complete the general university requirements.
2. Complete the M.A.T. degree requirements.
3. Contact the department head for specific degree requirements.
4. Minimum credits required (36 credits)
Physics -- Ph.D. Degree
1. Complete the general university requirements.
2. Complete the Ph.D. degree requirements.*
3. Minimum credits required open
* Demonstrate competency in a foreign language or a research tool.
See Applied Physics.
See General Science.
Catalog Index | Class Schedule | Admissions | UAF Home | UAF Search | News and Events
Send comments or questions to the UAF Admissions Office.
Last modified March 10, 1999 | {"url":"http://www.uaf.edu/catalog/catalog_99-00/programs/physics.html","timestamp":"2014-04-21T04:39:46Z","content_type":null,"content_length":"10595","record_id":"<urn:uuid:4e690524-5c88-4621-83a0-0f0019c26d8d>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00559-ip-10-147-4-33.ec2.internal.warc.gz"} |
RE: st: Are you a Bayesians?
From Paul Millar <paul.millar@shaw.ca>
To statalist@hsphsun2.harvard.edu
Subject RE: st: Are you a Bayesians?
Date Sat, 08 Apr 2006 17:35:59 -0600
I didn't notice Nick's response to this (sorry for the lengthy delay).
I agree generally with his analysis. The Bayesian take on things is a real change in paradigm for most researchers, some of whom find existing frequentist techniques challenging enough as they are. A
new learning curve is a barrier for many. On the other hand, ideas from the Bayesian side of things have a lot to offer. The world isn't necessarily universally normally distributed as many models
assume (although recent frequentist approaches, such as XTGEE, allow different distributions for the dependent variable), leaving aside the need for priors. I agree that the gauntlet lies before
those who straddle both worlds to some extent and who can come up with applications influenced by the Bayesian paradigm. I hope to be among those, eventually. The -bic- and -bicdrop1- commands (for
example) from ssc are examples of using Bayesian concepts in a frequentist way that is easy and powerful for use in conjunction with conventional models. I agree that Stata's role is to implement
accepted techniques, not invent new ones or to try to implement rarely used esoteric algorithms.
- Paul
At 05:53 AM 08/03/2006, you wrote:
One main thrust of Bayesian work seems to have been the insistence
that each problem ideally requires its own model, which commands fairly
easy assent as an abstract principle. This would be made easier by MCMC
engines, etc. but the extent to which it can be automated is
questionable. I don't think this is mainly because analysts need to reach
into their subconscious to pull out prior distributions: it is mainly
because of the need to customise a model according to the
structure of the scientific or substantive problem.
So, ultimately each researcher needs to write their own
"program", which I put in quotation marks because
in Stata that need not necessarily mean a program
in Stata's sense. That's why WinBugs and R are languages widely
used for this: WinBugs is designed for the purpose,
and R is designed mainly for statistical people willing to write
their own programs. Or so I perceive.
I have to guess that most researchers using statistics are
most unlikely to want to write their own program. Also,
the prevailing mindset, as shown by many, many posts on this
list, is that there is a "correct" analysis that can be
obtained by plugging your data into a pre-existing program.
Just tell me what it is, please!
While Bayesian stuff seems to be growing on an exponential,
I predict that exponential will turn into a sigmoid,
given the likely mass unwillingness of people applying
statistics to adopt it. The intellectual arguments are
secondary here.
That all said, the crunch really is this.
1. There is no detectable interest on the part of StataCorp in
providing the tools. If this is true, they probably won't say so,
or say much more than that there is no present intent to do
Bayesian stuff in a major way. StataCorp prefer positive
statements to zero or negative ones.
2. Regardless of 1, StataCorp do not like doing token efforts
or playing with something. (When they have done something
that ended up as a token effort, usually by accident, they
have regretted it bitterly.) So if StataCorp go Bayesian, they
will go Bayesian in a massive way, and that's a long-term project.
3. Regardless of 1 and 2, user-programmers could do a lot more
than they have done already, but there is very little interest.
My main guess here is, as above and as mentioned previously
on this thread, interested people just do it in other
4. It is always nice when people say, "Well, I use X for Y, but
I would rather use Stata." However, when other programs are years
ahead of Stata, it is not clear why Stata should play catch-up.
I don't write as anti-Bayes or non-Bayes. I have, in a minor way,
implemented Bayesian ideas in Stata for one problem. (See -cij-
on SSC, most of which was adopted in official -ci-.) I have seen work
in which the frequentist answer was a heap of garbage and the Bayesian
solution neat and elegant and scientifically much more acceptable.
I have also seen Bayesian projects that seemed to take up many, many times
more effort than a frequentist solution that got most of the way.
Paul Millar
> In the past I have used R or Winbugs for Bayesian problems. I agree
> Stata could be better equipped for this approach. In fact, I don't
> think Bayesian approaches will, despite their power compared to
> frequentist techniques, get into the mainstream until people develop
> routines for packages like Stata that make it easy for the researcher
> to take advantage of.
> >If you are a Bayesian using stata, please respond with raised voice.
> >
> >Most of my work is frequentist in nature, but I apply Bayesian
> >techniques for some of my more onerous problems. As was mentioned in
> >the fall, "Stata is not much of a vehicle for doing Bayesian
> >things." Should this change?
> >
> >The paucity of interest in Bayesian techniques, or its appearance,
> >may represent an area of development for stata. Bayesians, if you
> >are out there, I personally would like to know how you manage. Maybe
> >stata and its users will develop greater tools if we can show that
> >there is a market.
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2006-04/msg00300.html","timestamp":"2014-04-18T13:09:17Z","content_type":null,"content_length":"11277","record_id":"<urn:uuid:f81ee458-ae12-4dc1-93cf-cf362a6f7c2b>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
My current research interest is Tropical Algebra and Geometry. See my research papers (some on arXiv) regarding tropical issues:
Formerly I was interested in Real Algebraic Geometry. My Master's thesis (UCM, 1984, advisor: J.M. Ruiz Sancho) dealt with Zariski-dense curves. In my Ph.D. thesis (Stanford University, 1988, advisor: G.W. Brumfiel), I defined and studied spaces of valuations on rings compatible with orderings (I called these spaces "real Riemann surfaces", following O. Zariski's terminology) and spaces of orderings together with involutions (called "the complex spectrum of a ring"), similar to the real spectrum of M. Coste and M.F. Roy. I tried to relate these spaces to several compactifications of algebraic varieties known at the time (due to G. Bergman, and to J. Morgan and P. Shalen). Later on, I studied atypical values of real polynomial functions on the plane. More recently, I pursued the task of making a few facts about real plane algebraic curves (well known to experts) available to a wider audience. See my list of related published papers:
• Real plane algebraic curves. Expo. Math. 20 (2002), no. 4, 291--314. 14P05, MR1940009(2003h:14088)
• Multiplicidad de intersección y resultantes. (Spanish) Mathematical contributions: volume in honor of Professor Enrique Outerelo Domínguez (Spanish), 333--348, Homen. Univ. Complut., Editorial
Complutense, Madrid, 2004. 14H50 (13P99), MR2212975(2006k:14052)
• Curvas algebraicas reales planas. (Spanish) Mathematical contributions: volume in honor of Professor Joaquín Arregui Fernández (Spanish), 249--263, Homen. Univ. Complut., Editorial Complutense,
Madrid, 2000. 14P05, MR 1803907
• (with M. Coste) Atypical values at infinity of a polynomial function on the real plane: an erratum, and an algorithmic criterion. J. Pure Appl. Algebra 162 (2001), no. 1, 23--35. 14P25 (14P20
26C99), MR1844807(2002f:14078)
• (with J. Ferrera) The asymptotic values of a polynomial function on the real plane. J. Pure Appl. Algebra 106 (1996), no. 3, 263--273. 26C99 (14P99), MR1298760(95j:14076)
• (with J. Ferrera) Level curves of open polynomial functions on the real plane. Comm. Algebra 22 (1994), no. 14, 5973--5981. 14P25 (26C05), MR1298760(95j:14076)
• A complex version of the Baer-Krull theorems. Comm. Algebra 28 (2000), no. 8, 3727--3737. 12J15 (12J10 12L12), MR1767584(2001i:12010)
• The complex spectrum of a ring. Real algebraic geometry and ordered structures (Baton Rouge, LA, 1996), 235--249, Contemp. Math., 253, Amer. Math. Soc., Providence, RI, 2000. 13J30 (12D15),
MR1747588 (2001f:13039)
• Specializations and a local homeomorphism theorem for real Riemann surfaces of rings. Pacific J. Math. 176 (1996), no. 2, 427--442. 14P10 (12D15), MR1435000 (98b:14044)
• The compatible valuation rings of the coordinate ring of the real plane. Recent advances in real algebraic geometry and quadratic forms (Berkeley, CA, 1990/1991; San Francisco, CA, 1991),
231--242, Contemp. Math., 155, Amer. Math. Soc., Providence, RI, 1994. 13F30 (13A18), MR1260710 (94k:13029)
Prior to getting interested in Tropical Mathematics, I devoted some years to writing a textbook (undergraduate level) on plane algebraic curves (complex and real). It is written in Spanish. About half of the exercises in it are solved in full detail. Also, I stress some techniques to draw algebraic curves. Here are some | {"url":"http://www.mat.ucm.es/~mpuente/research.html","timestamp":"2014-04-16T15:10:16Z","content_type":null,"content_length":"13302","record_id":"<urn:uuid:519b7241-9de1-4802-840e-82243debc8fb>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
Middletown Twp, PA Statistics Tutor
Find a Middletown Twp, PA Statistics Tutor
...The reasoning and language parts present the major stumbling blocks for students. I believe that geometry is among the hardest subjects to teach and to tutor. But... I love it!
23 Subjects: including statistics, English, calculus, algebra 1
I have been working as a statistician at the University of Pennsylvania since 1991, providing assistance to researchers in various areas of health behavior. I am proficient in several statistical
packages, including SPSS, STATA, and SAS. One of my particular strengths is the ability to explain sta...
1 Subject: statistics
I am a recent college graduate from Duquesne University in Pittsburgh, PA. I received my undergraduate degree in Elementary Education (K-6), and I am certified to teach in the state of
Pennsylvania. I am athletic and energetic and have a unique style of tutoring.
20 Subjects: including statistics, reading, calculus, algebra 1
...During my tenure with WyzAnt, I have been assisting students with their preparation for the Florida State Licensing Exam for Registered Nurses (NCLEX). In addition, I have especially enjoyed
teaching English as a Second Language to non-native English speakers. Some of my students have been native speakers of Spanish, Portuguese and Swedish. It is a privilege to teach.
51 Subjects: including statistics, English, reading, geometry
My name is Brian. I have a master's in mathematics and a bachelor's in mathematics and psychology. I have six years of tutoring experience and three years of training and mentoring on a professional level.
Related Middletown Twp, PA Tutors
Middletown Twp, PA Accounting Tutors
Middletown Twp, PA ACT Tutors
Middletown Twp, PA Algebra Tutors
Middletown Twp, PA Algebra 2 Tutors
Middletown Twp, PA Calculus Tutors
Middletown Twp, PA Geometry Tutors
Middletown Twp, PA Math Tutors
Middletown Twp, PA Prealgebra Tutors
Middletown Twp, PA Precalculus Tutors
Middletown Twp, PA SAT Tutors
Middletown Twp, PA SAT Math Tutors
Middletown Twp, PA Science Tutors
Middletown Twp, PA Statistics Tutors
Middletown Twp, PA Trigonometry Tutors
Nearby Cities With statistics Tutor
Abington, PA statistics Tutors
Bensalem statistics Tutors
Burlington Township, NJ statistics Tutors
Burlington, NJ statistics Tutors
Croydon, PA statistics Tutors
Delran Township, NJ statistics Tutors
Fairless Hills statistics Tutors
Florence, NJ statistics Tutors
Horsham statistics Tutors
Hulmeville, PA statistics Tutors
Langhorne statistics Tutors
Levittown, PA statistics Tutors
Penndel, PA statistics Tutors
Rockledge, PA statistics Tutors
Tullytown, PA statistics Tutors | {"url":"http://www.purplemath.com/Middletown_Twp_PA_Statistics_tutors.php","timestamp":"2014-04-17T01:09:15Z","content_type":null,"content_length":"24160","record_id":"<urn:uuid:30276f70-1f12-42bb-a012-976234ed7fe3>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00255-ip-10-147-4-33.ec2.internal.warc.gz"} |
American Mathematical Society
AMS Sectional Meeting Program by Day
Current as of Tuesday, April 12, 2005 15:21:31
Program | Deadlines | Registration/Housing/Etc. | Inquiries: meet@ams.org
2001 Fall Western Section Meeting
Irvine, CA, November 10-11, 2001
Meeting #972
Associate secretaries:
Bernard Russo, AMS
Sunday November 11, 2001
• Sunday November 11, 2001, 8:00 a.m.-12:00 p.m.
Meeting Registration
Lobby, Rowland Hall
• Sunday November 11, 2001, 8:00 a.m.-10:50 a.m.
Special Session on Operator Spaces, Operator Algebras, and Applications
Room 118, Multipurpose Science and Technology
Marius Junge, University of Illinois, Urbana-Champaign junge@math.uiuc.edu
Timur Oikhberg, University of Texas and University of California Irvine timur@mail.ma.utexas.edu
• Sunday November 11, 2001, 8:00 a.m.-12:00 p.m.
Exhibit and Book Sale
Room 188, Rowland Hall
• Sunday November 11, 2001, 8:30 a.m.-10:50 a.m.
Special Session on Quantum Topology
Room 114, Rowland Hall
Louis Kauffman, University of Illinois at Chicago kauffman@uic.edu
Jozef Przytycki, George Washington University przytyck@research.circ.gwu.edu
Fernando Souza, University of Waterloo fsouza@cacr.math.uwaterloo.ca
• Sunday November 11, 2001, 8:30 a.m.-10:50 a.m.
Special Session on Groups and Covering Spaces in Algebraic Geometry
Room 114, Multipurpose Science and Technology
Michael Fried, University of California Irvine mfried@math.uci.edu
Helmut Voelklein, University of Florida helmut@math.ufl.edu
• Sunday November 11, 2001, 8:30 a.m.-10:50 a.m.
Special Session on Harmonic Analysis and Complex Analysis
Room 124, Multipurpose Science and Technology
Xiaojun Huang, Rutgers University huangx@math.rutgers.edu
Song-Ying Li, University of California, Irvine sli@math.uci.edu
□ 8:30 a.m.
□ 9:00 a.m.
Equivalence of real submanifolds in complex space.
Linda P Rothschild*, UCSD
□ 9:30 a.m.
CR embeddings into strictly pseudoconvex hypersurfaces.
Peter F Ebenfelt*, University of California, San Diego
□ 10:00 a.m.
Normalization of complex-valued planar vector fields which degenerate along a real curve.
Paulo D Cordaro*, University of S.Paulo - Brazil
Xianghong Gong, University of Wisconsin, Madison
□ 10:30 a.m.
A nonlinear Fourier transform.
Camil Muscalu, UCLA
Terence Tao, UCLA
Christoph Thiele*, UCLA
• Sunday November 11, 2001, 8:35 a.m.-10:50 a.m.
Special Session on Random and Deterministic Schr\"odinger Operators
Room 122, Multipurpose Science and Technology
Svetlana Jitomirskaya, University of California Irvine szhitomi@math.uci.edu
Abel Klein, University of California Irvine aklein@math.uci.edu
• Sunday November 11, 2001, 8:40 a.m.-10:50 a.m.
Special Session on Extremal Metrics and Moduli Spaces
Room 108, Rowland Hall
Steven Bradlow, University of Illinois, Urbana-Champaign bradlow@uiuc.edu
Claude LeBrun, State University of New York, Stony Brook claude@math.sunysb.edu
Yat Sun Poon, University of California Riverside ypoon@math.ucr.edu
• Sunday November 11, 2001, 9:00 a.m.-10:40 a.m.
Special Session on Topology of Algebraic Varieties
Room 210, Physical Sciences Classroom Bldg
Eriko Hironaka, Florida State University hironaka@math.fsu.edu
Grigory Mikhalkin, University of Utah mikha@math.utah.edu
□ 9:00 a.m.
On arrangements of a plane real quintic curve with respect to a pair of lines.
Anatoly B. Korchagin*, Texas Tech University
□ 10:00 a.m.
On the Mapping Class Group of $S^3\times S^3$.
Nikolai A Krylov*, University of Illinois at Chicago
• Sunday November 11, 2001, 9:00 a.m.-10:55 a.m.
Special Session on Partial Differential Equations and Applications
Room 120, Multipurpose Science and Technology
Edriss S. Titi, University of California Irvine etiti@math.uci.edu
• Sunday November 11, 2001, 9:00 a.m.-10:50 a.m.
Special Session on Harmonic Analyses and Partial Differential Equations
Room 220, Physical Sciences Classroom Bldg
Gustavo Ponce, University of California Santa Barbara ponce@math.ucsb.edu
Gigliola Staffilani, Stanford University gigliola@math.stanford.edu
□ 9:00 a.m.
A characterization of finite sets that tile the integers.
Izabella Laba*, University of British Columbia
□ 9:40 a.m.
Besov-Morrey spaces and applications to PDE.
Anna L Mazzucato*, Yale University and MSRI, Berkeley
□ 10:20 a.m.
On the initial value problem for the fully non-linear 1-D Schr\"odinger equation.
Gustavo Ponce*, University of California-Santa Barbara
Wee Keong Lim, University of California-Santa Barbara
• Sunday November 11, 2001, 9:00 a.m.-10:50 a.m.
Special Session on Dynamical Systems of Billiard Type
Room 240, Physical Sciences Classroom Bldg
Marek Rychlik, University of Arizona rychlik@u.arizona.edu
Andrew Torok, University of Houston torok@math.uh.edu
□ 9:00 a.m.
Periodic billiard orbits in non-euclidean polygons.
Eugene Gutkin*, Santa Monica, CA
□ 9:40 a.m.
Mushrooms and Other Billiards With Divided Phase Space.
Leonid A Bunimovich*, Georgia Institute of Technology
□ 10:20 a.m.
Statistics and ergodicity on a class of partially hyperbolic symmetric attractors.
Michael Field*, University of Houston
• Sunday November 11, 2001, 9:00 a.m.-10:45 a.m.
Special Session on Symplectic Geometry
Room 230, Physical Sciences Classroom Bldg
Jonathan Weitsman, University of California, Santa Cruz weitsman@cats.ucsc.edu
□ 9:00 a.m.
Constrained quantization of the harmonic oscillator.
Ranee Brylinski*, Penn State Univ
□ 10:00 a.m.
Cohomological induction and the quantization of noncompact coadjoint orbits.
Gregg J Zuckerman*, Mathematics Department, Yale University
• Sunday November 11, 2001, 11:10 a.m.-12:00 p.m.
Invited Address
Dispersive Equations and Almost Conservation Laws.
Room 104, Rowland Hall
Gigliola Staffilani*, Brown University (Providence, RI) and Stanford University (Stanford, CA)
• Sunday November 11, 2001, 1:30 p.m.-2:20 p.m.
Invited Address
The topology of Hamiltonian Loop Group spaces.
Room 104, Rowland Hall
Jonathan Weitsman*, University of California, Santa Cruz
• Sunday November 11, 2001, 3:00 p.m.-5:20 p.m.
Special Session on Quantum Topology
Room 114, Rowland Hall
Louis Kauffman, University of Illinois at Chicago kauffman@uic.edu
Jozef Przytycki, George Washington University przytyck@research.circ.gwu.edu
Fernando Souza, University of Waterloo fsouza@cacr.math.uwaterloo.ca
• Sunday November 11, 2001, 3:00 p.m.-5:25 p.m.
Special Session on Partial Differential Equations and Applications
Room 120, Multipurpose Science and Technology
Edriss S. Titi, University of California Irvine etiti@math.uci.edu
• Sunday November 11, 2001, 3:00 p.m.-5:50 p.m.
Special Session on Groups and Covering Spaces in Algebraic Geometry
Room 114, Multipurpose Science and Technology
Michael Fried, University of California Irvine mfried@math.uci.edu
Helmut Voelklein, University of Florida helmut@math.ufl.edu
□ 3:00 p.m.
Basic algebras for sporadic simple groups in GAP.
Klaus Lux*, University of Arizona
T Hoffman, University of Arizona
□ 3:40 p.m.
Maximal automorphism groups of curves of small genus.
Kay Magaard, Wayne State University
Sergey Shpectorov*, Bowling Green State University
Helmut Voelklein, University of Florida
□ 4:20 p.m.
□ 4:40 p.m.
Computing braid orbits with GAP.
Kay Magaard*, Wayne State University
Sergey Shpectorov, Bowling Green State University
Helmut Voelklein, University of Florida
□ 5:20 p.m.
Families of covers of the sphere and their images in $M_g$.
Helmut Voelklein*, University of Florida
• Sunday November 11, 2001, 3:00 p.m.-5:55 p.m.
Special Session on Random and Deterministic Schr\"odinger Operators
Room 122, Multipurpose Science and Technology
Svetlana Jitomirskaya, University of California Irvine szhitomi@math.uci.edu
Abel Klein, University of California Irvine aklein@math.uci.edu
• Sunday November 11, 2001, 3:00 p.m.-5:20 p.m.
Special Session on Extremal Metrics and Moduli Spaces
Room 108, Rowland Hall
Steven Bradlow, University of Illinois, Urbana-Champaign bradlow@uiuc.edu
Claude LeBrun, State University of New York, Stony Brook claude@math.sunysb.edu
Yat Sun Poon, University of California Riverside ypoon@math.ucr.edu
□ 3:00 p.m.
Three remarks on the geometry of special Lagrangian $3$-cycles.
Robert L Bryant*, Duke University
□ 3:50 p.m.
On Moduli Spaces of Sasakian Structures.
Charles P Boyer*, University of New Mexico
□ 4:20 p.m.
□ 4:30 p.m.
Deformations of Hypercomplex Structures associated to Heisenberg Groups.
Gueo V Grantcharov*, University of Connecticut
Henrik Pedersen, University of Southern Denmark
Yat Sun Poon, University of California, Riverside
□ 5:00 p.m.
Mirror symmetry, Langlands duality and Hitchin systems.
Tamas Hausel*, Miller Institute, University of California, Berkeley
• Sunday November 11, 2001, 3:00 p.m.-4:20 p.m.
Special Session on Harmonic Analysis and Complex Analysis
Room 124, Multipurpose Science and Technology
Xiaojun Huang, Rutgers University huangx@math.rutgers.edu
Song-Ying Li, University of California, Irvine sli@math.uci.edu
□ 3:00 p.m.
Convex mappings in several complex variables.
Sheng Gong*, University of California, San Diego
□ 3:30 p.m.
Bundle rigidity of tangent bundles over Kähler manifolds.
Bun Wong*, UC Riverside
Wing Sum Cheung, Univ. of Hong-Kong
Stephen Yau, Univ. of Illinois at Chicago
□ 4:00 p.m.
Nonlinear Riemann-Hilbert problems on domains in several complex variables.
Marshall A Whittlesey*, California State University, San Marcos
• Sunday November 11, 2001, 3:00 p.m.-5:50 p.m.
Special Session on Operator Spaces, Operator Algebras, and Applications
Room 118, Multipurpose Science and Technology
Marius Junge, University of Illinois, Urbana-Champaign junge@math.uiuc.edu
Timur Oikhberg, University of Texas and University of California Irvine timur@mail.ma.utexas.edu
• Sunday November 11, 2001, 3:00 p.m.-5:55 p.m.
Special Session on Symplectic Geometry
Room 230, Physical Sciences Classroom Bldg
Jonathan Weitsman, University of California, Santa Cruz weitsman@cats.ucsc.edu | {"url":"http://ams.org/meetings/sectional/2075_program_sunday.html","timestamp":"2014-04-21T13:12:24Z","content_type":null,"content_length":"68468","record_id":"<urn:uuid:ed645eee-822f-4020-93ae-5bfb195dd3dc>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
Abstract and Applied Analysis
Volume 2013 (2013), Article ID 380657, 6 pages
Research Article
Some Curvature Properties of $(LCS)_n$-Manifolds
Department of Mathematics, Faculty of Arts and Science, Gaziosmanpasa University, 60100 Tokat, Turkey
Received 14 January 2013; Revised 4 March 2013; Accepted 6 March 2013
Academic Editor: Narcisa C. Apreutesei
Copyright © 2013 Mehmet Atçeken. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
The object of the present paper is to study $(LCS)_n$-manifolds with vanishing quasi-conformal curvature tensor. $(LCS)_n$-manifolds satisfying the Ricci-symmetric condition are also characterized.
1. Introduction
Recently, in [1], Shaikh introduced and studied Lorentzian concircular structure manifolds (briefly, $(LCS)_n$-manifolds), which generalize the notion of LP-Sasakian manifolds introduced by Matsumoto [2], and established their existence together with applications to the general theory of relativity and cosmology. Shaikh and his coauthors subsequently studied various types of $(LCS)_n$-manifolds by imposing curvature restrictions (see [3–6]). In [7, 8], the authors also studied $(LCS)_n$-manifolds.
Submanifolds of $(LCS)_n$-manifolds have been studied by Atceken and Hui [9, 10] and by Shukla et al. [11]. In [12], Yano and Sawaki introduced the quasi-conformal curvature tensor, which was later studied by many authors under curvature restrictions on various structures [13].
Later, the same author studied weakly symmetric $(LCS)_n$-manifolds through several examples and obtained various results for such manifolds. In [7], the authors showed that pseudo-projectively flat and pseudo-projectively recurrent manifolds are $\eta$-Einstein manifolds.
On the other hand, in [5], the authors proved the existence of $\varphi$-recurrent $(LCS)_n$-manifolds which are neither locally symmetric nor locally $\varphi$-symmetric, via nontrivial examples. Furthermore, they also gave necessary and sufficient conditions for an $(LCS)_n$-manifold to be locally $\varphi$-recurrent.
In this study, we investigate quasi-conformally flat $(LCS)_n$-manifolds satisfying conditions such as Ricci symmetry, local symmetry, and the $\eta$-Einstein condition. Finally, we give an example of an $\eta$-Einstein manifold.
2. Preliminaries
An -dimensional Lorentzian manifold is a smooth connected paracompact Hausdorff manifold with a Lorentzian metric tensor , that is, it admits a smooth symmetric tensor field of type such that, for each , is a nondegenerate inner product of signature . In such a manifold, a nonzero vector is said to be timelike (resp., nonspacelike, null, and spacelike) if it satisfies the corresponding condition (resp., ≤0, =0, >0). These cases are called the causal characters of the vectors.
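Written out in standard notation — the inline formulas did not survive the HTML extraction, so the symbols $g_p$ (the metric at a point $p$) and $v$ (a tangent vector) are supplied here — the four causal-character conditions read:

```latex
g_p(v,v) < 0 \ \text{(timelike)}, \quad
g_p(v,v) \le 0 \ \text{(nonspacelike)}, \quad
g_p(v,v) = 0 \ \text{(null)}, \quad
g_p(v,v) > 0 \ \text{(spacelike)}.
```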
Definition 1. In a Lorentzian manifold , a vector field defined by for any is said to be a concircular vector field if for , where is a nonzero scalar function, is a 1-form, is also closed 1-form,
and denotes the Levi-Civita connection on [7].
Let be a Lorentzian manifold admitting a unit timelike concircular vector field , called the characteristic vector field of the manifold. Then we have Since is a unit concircular unit vector field,
there exists a nonzero 1-form such that The equation of the following form holds: for all , where is a nonzero scalar function satisfying being a certain scalar function given by .
Let us put then from (6) and (8), we can derive which tells us that is a symmetric -tensor. Thus the Lorentzian manifold together with the unit timelike concircular vector field , its associated 1-form , and -type tensor field is said to be a Lorentzian concircular structure manifold.
A differentiable manifold of dimension is called an $(LCS)_n$-manifold if it admits a -type tensor field , a covariant vector field , and a Lorentzian metric which satisfy for all . In particular, if we take the scalar function to be $1$, then we obtain the LP-Sasakian structure of Matsumoto [2].
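For reference, the defining relations of the structure $(\phi,\xi,\eta,g)$ on an $(LCS)_n$-manifold, as given in Shaikh [1], are the following (reproduced in standard notation, since the displayed equations were lost in extraction):

```latex
\phi^{2}X = X + \eta(X)\,\xi, \qquad \eta(\xi) = -1, \qquad
\phi\,\xi = 0, \qquad \eta\circ\phi = 0, \\
g(\phi X,\phi Y) = g(X,Y) + \eta(X)\,\eta(Y), \qquad g(X,\xi) = \eta(X),
```

for all vector fields $X$, $Y$.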
Also, in an -manifold , the following relations are satisfied (see [3–6]): for all vector fields on , where and denote the Riemannian curvature tensor and Ricci curvature, respectively, is also the
Ricci operator given by .
Now let be an -dimensional Riemannian manifold; then the concircular curvature tensor , the Weyl conformal curvature tensor , and the pseudo projective curvature tensor are, respectively, defined by
where and are constants such that , and is also the scalar curvature of [7].
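In standard notation — with $R$ the curvature tensor, $S$ the Ricci tensor, $Q$ the Ricci operator, $r$ the scalar curvature, and $a,b$ the constants mentioned above (all symbols supplied here, as the displayed formulas were lost in extraction) — the concircular, Weyl conformal, and pseudo-projective curvature tensors are usually written as:

```latex
\tilde{Z}(X,Y)W = R(X,Y)W - \frac{r}{n(n-1)}\bigl[g(Y,W)X - g(X,W)Y\bigr], \\[4pt]
C(X,Y)W = R(X,Y)W - \frac{1}{n-2}\bigl[S(Y,W)X - S(X,W)Y + g(Y,W)QX - g(X,W)QY\bigr] \\
\qquad\qquad + \frac{r}{(n-1)(n-2)}\bigl[g(Y,W)X - g(X,W)Y\bigr], \\[4pt]
\bar{P}(X,Y)W = a\,R(X,Y)W + b\bigl[S(Y,W)X - S(X,W)Y\bigr]
 - \frac{r}{n}\Bigl(\frac{a}{n-1} + b\Bigr)\bigl[g(Y,W)X - g(X,W)Y\bigr].
```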
For an -dimensional -manifold the quasi-conformal curvature tensor is given by for all [14].
The notion of the quasi-conformal curvature tensor was introduced by Yano and Sawaki [12]. If the constants satisfy $a=1$ and $b=-\tfrac{1}{n-2}$, then the quasi-conformal curvature tensor reduces to the conformal curvature tensor.
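A sketch of the formula being referred to, in the standard form of Yano and Sawaki [12] (with $R$, $S$, $Q$, $r$ denoting the curvature tensor, Ricci tensor, Ricci operator, and scalar curvature; the display itself was lost in extraction):

```latex
C_{*}(X,Y)W = a\,R(X,Y)W + b\bigl[S(Y,W)X - S(X,W)Y + g(Y,W)QX - g(X,W)QY\bigr] \\
\qquad - \frac{r}{n}\Bigl(\frac{a}{n-1} + 2b\Bigr)\bigl[g(Y,W)X - g(X,W)Y\bigr].
```

Substituting $a = 1$ and $b = -\tfrac{1}{n-2}$ makes the last coefficient equal to $\tfrac{r}{(n-1)(n-2)}$, recovering the conformal curvature tensor as stated.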
3. Quasi-Conformally Flat -Manifolds and Some of Their Properties
For an -dimensional quasi-conformally flat -manifold, we know for from (23), Here, taking account of (16), we have Let be in (25); then also by using (18) we obtain Taking the inner product on both sides of the last equation by , we obtain that is, Now we are in a position to state the following.
Theorem 2. If an -dimensional -manifold is quasi-conformally flat, then is an -Einstein manifold.
Now, let be an orthonormal basis of the tangent space at any point of the manifold. Then putting in (28), and taking summation for , we have In view of (28) and (29), we obtain which is equivalent to
for any .
By using (29) and (31) in (23) for a quasi-conformally flat -manifold , we get for all . If we consider Schur's Theorem, we can give the following theorem.
Theorem 3. A quasi-conformally flat -manifold M is a manifold of constant curvature provided that .
Now let us consider an -manifold which is conformally flat. Thus we have from (21) that for all vector fields tangent to . Setting in (33) and using (16), (18) we have If we put in (34) and also
using (18), we obtain
Corollary 4. A conformally flat -manifold is an -Einstein manifold.
Generalizing the notion of a manifold of constant curvature, Chen and Yano [15] introduced the notion of a manifold of quasi-constant curvature which can be defined as follows:
Definition 5. A Riemannian manifold is said to be a manifold of quasi-constant curvature if it is conformally flat and its curvature tensor of type is of the form for all , where and are scalars of which is nonzero, and is a nonzero 1-form (for more details, we refer to [13, 16]).
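The Chen–Yano condition in Definition 5 is commonly displayed as follows (the symbols $a$, $b$ for the two scalars and $A$ for the 1-form are supplied here, since the original display was lost):

```latex
R(X,Y,Z,W) = a\bigl[g(Y,Z)g(X,W) - g(X,Z)g(Y,W)\bigr] \\
\quad + b\bigl[g(X,W)A(Y)A(Z) - g(X,Z)A(Y)A(W) + g(Y,Z)A(X)A(W) - g(Y,W)A(X)A(Z)\bigr].
```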
Thus we have the following theorem for -conformally flat manifolds.
Theorem 6. A conformally flat -manifold is a manifold of quasi-constant curvature.
Proof. From (33) and (35), we obtain This implies (36) for This proves our assertion.
Next, differentiating (19) covariantly with respect to , we get for any . Making use of the definition of and (8), we have Thus we have Here, taking account of (17), we arrive at Again, by using (13), (18), and (19), we reach Thus we have the following theorem.
Theorem 7. If an $(LCS)_n$-manifold is Ricci-symmetric, then is constant.
Proof. If -dimensional -manifold is Ricci-symmetric, then from (43) we conclude that It follows that from which which is equivalent to that is, which proves our assertion.
Since implies that , we can give the following corollary.
Corollary 8. If an -dimensional -manifold is locally symmetric, then is constant.
Now, taking the covariant derivative of both sides of (18) with respect to , we have From the definition of the covariant derivative of the Ricci tensor, we have If an -manifold is Ricci symmetric, then Theorem 7 and (43) imply that This leads us to state the following.
Theorem 9. If an -manifold is Ricci symmetric, then it is an Einstein manifold.
Corollary 10. If an -manifold is locally symmetric, then it is an Einstein manifold.
In this section, an example is used to demonstrate that the method presented in this paper is effective. But this example is a special case of Example 6.1 of [6].
Example 11. Now, we consider the 3-dimensional manifold where denote the standard coordinates in . The vector fields are linearly independent at each point of . Let be the Lorentzian metric tensor defined by for . Let be the 1-form defined by for any . Let be the (1,1)-tensor field defined by Then using the linearity of and , we have , for all . Thus for , defines a Lorentzian paracontact structure on .
Now, let be the Levi-Civita connection with respect to the Lorentzian metric , and let be the Riemannian curvature tensor of . Then we have Making use of the Koszul formula for the Lorentzian metric tensor , we can easily calculate the covariant derivatives as follows: From the above, it can easily be seen that is an -structure on , that is, is an -manifold with and . Using the previous relations, we can easily calculate the components of the Riemannian curvature tensor as follows: By using the properties of and the definition of the Ricci tensor, we obtain Thus the scalar
curvature of is given by On the other hand, for any , and can be written as and , where and are smooth functions on . By direct calculations, we have Since and and , we have This tells us that is an $\eta$-Einstein manifold.
The author would like to thank the reviewers for their extremely careful reading and for many important comments, which improved the paper considerably.
1. A. A. Shaikh, “On Lorentzian almost paracontact manifolds with a structure of the concircular type,” Kyungpook Mathematical Journal, vol. 43, no. 2, pp. 305–314, 2003. View at Zentralblatt MATH ·
View at MathSciNet
2. K. Matsumoto, “On Lorentzian paracontact manifolds,” Bulletin of Yamagata University, vol. 12, no. 2, pp. 151–156, 1989. View at Zentralblatt MATH · View at MathSciNet
3. A. A. Shaikh, “Some results on ${\left(LCS\right)}_{n}$-manifolds,” Journal of the Korean Mathematical Society, vol. 46, no. 3, pp. 449–461, 2009. View at Publisher · View at Google Scholar ·
View at MathSciNet
4. A. A. Shaikh and S. K. Hui, “On generalized $\rho$-recurrent ${\left(LCS\right)}_{n}$-manifolds,” in Proceedings of the ICMS International Conference on Mathematical Science, vol. 1309 of
American Institute of Physics Conference Proceedings, pp. 419–429, 2010. View at Publisher · View at Google Scholar
5. A. A. Shaikh, T. Basu, and S. Eyasmin, “On the existence of $\varphi$-recurrent ${\left(LCS\right)}_{n}$-manifolds,” Extracta Mathematicae, vol. 23, no. 1, pp. 71–83, 2008. View at MathSciNet
6. A. A. Shaikh and T. Q. Binh, “On weakly symmetric ${\left(LCS\right)}_{n}$-manifolds,” Journal of Advanced Mathematical Studies, vol. 2, no. 2, pp. 103–118, 2009. View at Zentralblatt MATH · View
at MathSciNet
7. G. T. Sreenivasa, Venkatesha, and C. S. Bagewadi, “Some results on ${\left(LCS\right)}_{2n+1}$-manifolds,” Bulletin of Mathematical Analysis and Applications, vol. 1, no. 3, pp. 64–70, 2009. View
at MathSciNet
8. S. K. Yadav, P. K. Dwivedi, and D. Suthar, “On ${\left(LCS\right)}_{2n+1}$-manifolds satisfying certain conditions on the concircular curvature tensor,” Thai Journal of Mathematics, vol. 9, no.
3, pp. 597–603, 2011. View at MathSciNet
9. M. Atceken, “On geometry of submanifolds of ${\left(LCS\right)}_{n}$-manifolds,” International Journal of Mathematics and Mathematical Sciences, vol. 2012, Article ID 304647, 11 pages, 2012. View
at Publisher · View at Google Scholar · View at MathSciNet
10. S. K. Hui and M. Atceken, “Contact warped product semi-slant submanifolds of ${\left(LCS\right)}_{n}$-manifolds,” Acta Universitatis Sapientiae. Mathematica, vol. 3, no. 2, pp. 212–224, 2011.
View at MathSciNet
11. S. S. Shukla, M. K. Shukla, and R. Prasad, “Slant submanifold of ${\left(LCS\right)}_{n}$-manifolds,” to appear in Kyungpook Mathematical Journal.
12. K. Yano and S. Sawaki, “Riemannian manifolds admitting a conformal transformation group,” Journal of Differential Geometry, vol. 2, pp. 161–184, 1968. View at Zentralblatt MATH · View at
13. A. A. Shaikh and S. K. Jana, “On weakly quasi-conformally symmetric manifolds,” SUT Journal of Mathematics, vol. 43, no. 1, pp. 61–83, 2007. View at Zentralblatt MATH · View at MathSciNet
14. R. Kumar and B. Prasad, “On ${\left(LCS\right)}_{n}$-manifolds,” to appear in Thai Journal of Mathematics.
15. B.-Y. Chen and K. Yano, “Hypersurfaces of a conformally flat space,” Tensor, vol. 26, pp. 318–322, 1972. View at Zentralblatt MATH · View at MathSciNet
16. A. A. Shaikh and S. K. Jana, “On weakly symmetric Riemannian manifolds,” Publicationes Mathematicae Debrecen, vol. 71, no. 1-2, pp. 27–41, 2007. View at Zentralblatt MATH · View at MathSciNet | {"url":"http://www.hindawi.com/journals/aaa/2013/380657/","timestamp":"2014-04-19T08:12:36Z","content_type":null,"content_length":"568728","record_id":"<urn:uuid:10c2fd29-fbaa-4a97-ae0d-bb89ade68a41>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00300-ip-10-147-4-33.ec2.internal.warc.gz"} |
Trigonometric Equations
Search Results: 'Trigonometric Equations'
A "Sine" of the Times: Designing with Trigonometric Graphs
Paperback: $23.12
Ships in 3-5 business days
How do mathematical equations translate into physical form? What formulas are needed to create a work of art? Students in A “Sine” of the Times chose their inspiration, developed designs using oscillation graphs, and wrote reflections about how they learned to see math as well as themselves as mathematicians. With patterns as unique as their creators, this publication will inspire students and teachers to find creative applications for math in their own classrooms.
Math Notebook for Students: 350 Essential Mathematical Formulas and Equations
eBook (PDF): $7.95
This is a little book for students to have the essential formulas and equations of mathematics in a single easily accessible source. In about 50 pages, the 350 most essential mathematical formulas are listed. Unlike other large books on this topic, there is no need to go through hundreds of pages and thousands of formulas for the student to get the basic equations. The author has searched several books on mathematical formulas and tables and selected only those equations which are essential to the student. The mathematical formulas and equations listed in this book are useful for students and researchers in various fields including mathematics, physics, engineering, etc. Only the most elementary and basic topics are covered including formulas for various geometric shapes, several types of functions (trigonometric, hyperbolic, exponential, logarithmic, etc), the quadratic equation, analytic geometry, derivatives and integrals, arithmetic series, geometric series, and Taylor series.
Math Notebook for Students: 350 Essential Mathematical Formulas and Equations
Paperback: $16.95
Ships in 3-5 business days
College Trigonometry
Hardcover: $32.94
Ships in 6-8 business days.
College Trigonometry (40 Lessons) covers: Review of Functions; Review of Geometry; Right Triangle Trigonometry; Angles of Elevation and Depression; Bearing; Linear Interpolation; Trigonometric Functional Value of any Angle; Trigonometric Functional Values of Quadrantal Angles; Trigonometry of Oblique Triangles; Laws of Sines and Cosines; Applications of Trigonometry to Vectors; Representation of Vectors; Addition (Sum, Resultant, or Composition) of Vectors; Trigonometry of Real Numbers; Radian Measure; Arc Length; Reference Number; Trigonometric Functional Values of Angles and of Real Numbers; Graphs of Trigonometric Functions; Periodicity of Trigonometric Functions; Inverse Trigonometric Functions; Operations Involving Inverse Trigonometric Functions; Graphs of Inverse Trigonometric Functions; Trigonometric Identities and Proving Trigonometric Identities; Solutions of Trigonometric Equations; and Measurements.
MATHEMATICS: The Calculus: Book 2: GCE Advanced Level
eBook (PDF): $7.74
The Calculus - Book 2 (63 pages) is a study of the calculus at GCE Advanced Level (approximately year 14). It follows on from The Calculus - Book 1, which covers calculus from GCSE Level (year 12) up to GCE Advanced Subsidiary Level (year 13). It contains clearly explained teaching text, worked examples (in graduated order of difficulty), followed by exercises (with fully worked answers).
MATHEMATICS: 'Calculus ... the way to do it'
eBook (PDF): $12.17
'Calculus...the way to do it' (140 pages) is a study of the calculus from basics up to GCE Advanced Level (years 12, 13 and 14). Section 1 is appropriate for students of GCSE Additional Mathematics (year 12) and GCE Advanced Subsidiary Level (year 13). Section 2 is appropriate for GCE Advanced Level (year 14). It contains clearly explained teaching text, worked examples (in graduated order of difficulty), followed by exercises (with fully worked answers).
Intermediate Mathematics
Hardcover: List Price: $68.08; Sale Price: $51.06 (You Save 25%)
Ships in 6-8 business days.
Intermediate Mathematics covers the following topics: Review of Operations; Exponents, Radicals, and Operations on Radical and Fractional Exponents; Reduction of Indices; Factoring Polynomials; Solving Quadratic Equations and Applications; Graphs, Slopes, Intercepts, and Equations of Straight Lines; Graphs of Parabolas; Linear Inequalities; Compound Inequalities; Inequality Word Problems; Reduction, Multiplication, Division, and Addition of Algebraic Fractions; Solving Fractional or Rational Equations; Radical Equations; Complex Numbers; Absolute Value Equations; Absolute Value Inequalities; Logarithms; Logarithmic Equations and Exponential Equations; Variation and Variation Problems; Basic Areas and Perimeters of Triangles, Rectangles, Trapezoids, Circles, and Composite Figures; Congruency Theorems; Similar Triangles; Right Triangle Trigonometry; Functional Value of any Angle; Laws of Sines and Cosines; Trigonometric Identities; Trigonometric Equations.
MATHEMATICS: 'Trigonometry ... the way to do it'
eBook (PDF): $12.14
Trigonometry book (98 pages) covers material from GCSE Level (approximately age 16 years) to GCE Advanced Level (approximately age 18 years) - teaching text, worked examples, exercises (with fully worked answers) - ideal for examination revision. It is available also from our website at www.mathslearning.com and on iPad from the iBookstore.
College Algebra & Trigonometry
Hardcover: List Price: $71.02; Sale Price: $60.37 (You Save 15%)
Ships in 6-8 business days.
This book covers both algebra and trigonometry. The topics include the following: Polynomial, Nonlinear, and Radical Equations; Sets, Relations, Functions; Absolute Value Equations and Inequalities; Linear Programming; Graphs of Functions; Asymptotes; Logarithms; Exponential and Logarithmic Equations; Graphs of Exponential and Logarithmic Functions; Matrix and Matrix Methods; Determinants; Complex Numbers and Operations; Polar Form of Complex Numbers; Roots of Complex Numbers; Graphing Polar Coordinates and Equations; Conic Sections; Remainder and Factor Theorems; Rational Roots; Partial Fractions; Sequences and Series; Binomial Theorem; Permutations and Combinations; Mathematical Induction; Right Triangle Trigonometry; Trigonometry of Real Numbers; Graphs of Trigonometric Functions; Graphs of Inverse Trigonometric Functions; Trigonometric Identities and Equations.
Paperback: $11.00
Ships in 3-5 business days
This is a text on elementary trigonometry, designed for students who have completed courses in high-school algebra and geometry. Though designed for college students, it could also be used in high schools. The traditional topics are covered, but a more geometrical approach is taken than usual. Also, some numerical methods (e.g. the secant method for solving trigonometric equations) are discussed. A brief tutorial on using Gnuplot to graph trigonometric functions is included.
“My Day Timeline” is a lesson on a.m. and p.m. and the events that occur in each
Title – My Day Timeline
By – Rose Stewart
Primary Subject – Math
Grade Level – 2
Time Requirement – 2 days, 45 minutes each day
I. Concept to be taught:
Measurement of time and determining activities done in the a.m. and p.m.
II. Nevada State Mathematics Standards — 2nd Grade
3.2.1 – Compare and order objects by various measurable attributes (e.g., time, temperature, length, weight, capacity and area) communicating their similarities and differences.
3.2.6 – Read time to the nearest quarter hour; distinguish between A.M. and P.M.
2.2.2 – Generate and solve problems based on various numerical sentences; represent mathematical situations using numbers, symbols, and words.
2.2.4 – Generate and solve problems based on various numerical sentences; represent mathematical situations using numbers, symbols, and words.
7.1 – Discuss and exchange ideas about mathematics as a part of learning.
7.2 – Use inquiry techniques (e.g. discussion, questioning, research, data gathering) to solve mathematical problems.
7.4 – Use pictorial representations to identify mathematical operations and concepts.
7.7 – Use physical materials, models, pictures, or writing to represent and communicate mathematical ideas.
7.16 – Express mathematical ideas and use them to define, compare, and solve problems orally and in writing.
7.17 – Use mathematical notation to communicate and explain mathematical situations.
III. Behavioral Objectives:
1. To determine time on an analog clock
2. To order the events that occur in a day.
3. To distinguish between events that occur in the a.m. and p.m.
IV. Materials:
• Large Model of analog clock for class
• 12″ x 18″ sheet of white construction paper
• Small analog clock for each student
• Sheet of blank clocks
• Pencil
• Glue
• Scissors
V. Teaching/Learning Process:
A. Prior Knowledge:
Introduce the terms of analog, a.m. and p.m. and discuss the difference between the two and brainstorm activities for each. Develop prior knowledge by asking for suggestions of activities
that occur in the a.m. and p.m. Create a tree map to display students’ suggestions.
B. Instructional Procedure:
1. Give each student a small analog clock to review time to the hour, half hour, and quarter hour. Use large class clock for teacher modeling.
2. Using students’ previous suggestions of events, ask students to demonstrate the time on their clock and name whether it is a.m. or p.m.
3. Hand out white construction paper and model for students on how to fold into 8 sections. Students will follow step by step directions by folding once lengthwise and twice widthwise to
create the 8 sections.
4. Students will cut the handout of paper blank clocks and glue one on to each section of the paper.
5. Students will choose 4 events that happen in the a.m. and record the time to the hour, half hour, or quarter hour on their blank analog clock with pencil. Under each time, students
will write the digital time.
6. Students will write a sentence describing the event and the time it occurs under the digital time. For example, “I get up in the morning at 8:15 a.m.”
a. Extension: Students can elaborate on their sentence by describing what and how they do each activity in their sentence.
7. Students will complete steps 5-6 for the p.m. times on the second day.
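For teachers preparing an answer key, the a.m./p.m.-to-digital conversions in steps 5 and 6 can be checked with a short script. This is a hypothetical helper; the function name and the "8:15 a.m." time format are illustrative, not part of the lesson plan:

```python
# Convert the lesson's "8:15 a.m." style times to 24-hour digital form,
# handling the 12 a.m. / 12 p.m. special cases.
def to_24_hour(time_str):
    clock, period = time_str.split()              # e.g. "8:15", "a.m."
    hour, minute = (int(p) for p in clock.split(":"))
    if period == "a.m.":
        hour = 0 if hour == 12 else hour          # 12:xx a.m. -> 00:xx
    else:
        hour = hour if hour == 12 else hour + 12  # 12:xx p.m. stays 12:xx
    return f"{hour:02d}:{minute:02d}"

print(to_24_hour("8:15 a.m."))   # 08:15
print(to_24_hour("8:15 p.m."))   # 20:15
print(to_24_hour("12:30 a.m."))  # 00:30
```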
C. Closure:
Once students have completed their "My Day" Timeline, they will get together with a partner to read their timeline. The partner will ask them questions after each time entry to encourage social
and communication skills.
VI. Evaluation:
The students will be evaluated based on the completion of the “My Day” Timeline. Teacher will monitor as students complete closure activity and check for accurate use of time and a.m./p.m.
• It's a tension problem
A traffic light at an intersection is hanging from two wires of equal length, each making an angle of 10 degrees below the horizontal. The traffic light weighs 2500 N. What are the tensions in the wires?
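A sketch of the usual statics solution, assuming the two wires are symmetric so each carries the same tension T and their vertical components together balance the weight:

```python
import math

# Equilibrium: 2 * T * sin(10 degrees) = W, so T = W / (2 * sin(10 degrees)).
W = 2500.0                       # weight of the light, N
angle = math.radians(10.0)       # wires make 10 degrees below horizontal
T = W / (2.0 * math.sin(angle))
print(round(T))                  # roughly 7200 N in each wire
```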
• What's the slope form?
What's the Slope form?
• Find the limit
\(\lim_{x \rightarrow 0^{+}} \sqrt{x}^{\sqrt{x}}\) If you can't tell, this is x approaching 0 from the right. I got it to \(e^{\lim_{x \rightarrow 0^{+}} {\sqrt{x} \ln (\sqrt{x})}}\) but I don't know how to proceed from here... I need a detailed explanation, please, so I can understand this better!! Thank you
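One standard way to finish the exponent limit is the substitution \(t = \sqrt{x}\) followed by l'Hôpital's rule:

```latex
\lim_{x\to 0^{+}} \sqrt{x}\,\ln\sqrt{x}
  = \lim_{t\to 0^{+}} t\ln t
  = \lim_{t\to 0^{+}} \frac{\ln t}{1/t}
  = \lim_{t\to 0^{+}} \frac{1/t}{-1/t^{2}}
  = \lim_{t\to 0^{+}} (-t) = 0,
\qquad\text{so}\qquad
\lim_{x\to 0^{+}} \sqrt{x}^{\sqrt{x}} = e^{0} = 1.
```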
• What can you say about the ratio z1/z2?
Suppose z1; z2 are two complex numbers that satisfy abs(z1 + z2) = abs(z1) + abs(z2)What can you say about the ratio z1/z2?
• aKwu1 Need Help
The rate of growth dP/dt of a population of bacteria is proportional to the square root of t, where P is the population size and t is the time in days (0 ≤ t ≤ 10), as shown below. The initial size of the population is 500. After 1 day the population has grown to 600. Estimate the population to the nearest whole number after 9 days.
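One consistent reading of the problem (the proportionality constant \(k\) is my notation): integrate \(dP/dt = k\sqrt{t}\) and fit the two given values:

```latex
\frac{dP}{dt} = k\sqrt{t}
\;\Longrightarrow\;
P(t) = \frac{2k}{3}\,t^{3/2} + C,
\qquad
P(0)=500 \Rightarrow C = 500,
\qquad
P(1)=600 \Rightarrow \frac{2k}{3} = 100,
```

so \(P(9) = 500 + 100 \cdot 9^{3/2} = 500 + 2700 = 3200\).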
• Random Variable
Let the r.v.'s and represent the temperature of a certain object in degrees Celsius and Fahrenheit, repectively. Then, it is known that , so that .(i) If , determinethe distribution of .(ii) If ,
thenalso , forsome . Determinethe numbers and .(iii) We know that: , ,and . Calculate theintervals , for which is, respectively , equal to .
• Rolling Dice
Given a fair, six-sided dice, what is the probability of rolling the dice twice and getting a "1" each time?
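By independence, the answer to the question above is \((1/6)\times(1/6) = 1/36\); a brute-force enumeration confirms it:

```python
from fractions import Fraction
from itertools import product

# Enumerate all 36 equally likely outcomes of two rolls and count
# the single outcome (1, 1).
outcomes = list(product(range(1, 7), repeat=2))
hits = sum(1 for roll in outcomes if roll == (1, 1))
p = Fraction(hits, len(outcomes))
print(p)  # 1/36
```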
• critical values, maximum, minimum, or neither
Find the critical value(s) and determine whether each corresponds to a local maximum, a local minimum, or neither.
• Moment generating functions
Let Y = Σ from i=1 to n of Xi, where the Xi are independent geometric random variables with parameter p. Find the moment generating function of Y and use the moment generating function of Y to find E(Y) and V(Y). I know I have to put the summation into the M_Y(t) equation, so E(e^{tΣXi}), but I don't know where to go from here.
• probability in excel
7. The amount of moisture content (in kilograms) in a 25-kilogram wheel of cheese is distributed N(9, 0.9).
a. What is the probability that the moisture content of a wheel:
i. Exceeds 10 kilograms?
ii. Is less than 8.5 kilograms?
iii. Differs from the mean by more than 0.45 kilograms?
b. 99% of wheels contain less than what weight of moisture?
• Finding area of the shared region by two polar equations
Given the two polar equations and , find the area of the region common to both curves. Show all work and steps. If someone has the solution manual for Calculus of a Single Variable, 9th ed., it's problem number 42, Chapter 10, Section 5.
• interior angles of a triangle
Find the interior angles of triangle ABC where A=(2,-3), B=(4,2), C=(-5,-2).
• find the value of the expression
Using the periodic properties of trigonometric functions, find the value of the following:
1.
2.
3.
4.
• Types of data
Instructions: Form a conclusion about statistical significance. Do not make any formal calculations. Either use the results provided or make subjective judgments about the results. Question: Mendel's genetics experiments: One of Gregor Mendel's famous hybridization experiments with peas yielded 580 offspring, with 152 of those peas (or 26%) having yellow pods. According to Mendel's theory, 25% of the offspring peas should have yellow pods. Do the results of the experiment differ from Mendel's claimed rate of 25% by an amount that is statistically significant? Help with this question, please.
• -4(x+4)=-2(2x+8)
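Expanding both sides of the equation above shows it is an identity, satisfied by every real \(x\):

```latex
-4(x+4) = -4x - 16,
\qquad
-2(2x+8) = -4x - 16,
```

so the two sides are equal for all \(x\).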
4. Find the value of z such that approximately 47.93% of the distribution lies between it and the mean.
5. Assume that the average annual salary for a worker in the United States is $35,000 and that the annual salaries for Americans are normally distributed with a standard deviation equal to $6,500. Find the following:
(A) What percentage of Americans earn below $26,000?
(B) What percentage of Americans earn above $40,000?
Please show all of your work.
6. X has a normal distribution with a mean of 80.0 and a standard deviation of 4.0. Find the following probabilities:
(A) P(x < 78.0)
(B) P(75.0 < x < 85.0)
(C) P(x > 82.0)
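For question 5, a quick standard-library check is possible by computing the normal CDF via `math.erf` (the values in the comments are rounded):

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma = 35000.0, 6500.0

below_26k = phi((26000 - mu) / sigma)        # about 0.083 -> roughly 8.3%
above_40k = 1.0 - phi((40000 - mu) / sigma)  # about 0.221 -> roughly 22.1%
print(below_26k, above_40k)
```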
Vector analysis
From Encyclopedia of Mathematics
A branch of vector calculus in which scalar and vector fields are studied (cf. Scalar field; Vector field).
One of the fundamental concepts in vector analysis for the study of scalar fields is the gradient. A differentiable scalar field $U(x,y,z)$ has the gradient $\operatorname{grad} U = \left(\frac{\partial U}{\partial x}, \frac{\partial U}{\partial y}, \frac{\partial U}{\partial z}\right)$, a vector field pointing in the direction of fastest increase of $U$.
The concepts of divergence and curl are also employed in the study of vector fields. Let a vector field $\mathbf{a} = (a_x, a_y, a_z)$ be given. Its divergence is the scalar field $\operatorname{div} \mathbf{a} = \frac{\partial a_x}{\partial x} + \frac{\partial a_y}{\partial y} + \frac{\partial a_z}{\partial z}$.
The curl (rotor) of $\mathbf{a}$ is the vector field $\operatorname{curl} \mathbf{a} = \nabla \times \mathbf{a} = \left(\frac{\partial a_z}{\partial y} - \frac{\partial a_y}{\partial z},\; \frac{\partial a_x}{\partial z} - \frac{\partial a_z}{\partial x},\; \frac{\partial a_y}{\partial x} - \frac{\partial a_x}{\partial y}\right)$.
For vector and scalar fields of class $C^2$ one has the identities $\operatorname{curl} \operatorname{grad} U = 0$, $\operatorname{div} \operatorname{curl} \mathbf{a} = 0$ and
$\operatorname{div} \operatorname{grad} U = \Delta U$, where $\Delta$ is the Laplace operator.
Gradient, divergence and curl together are usually known as the basic differential operations of vector analysis. See Curl; Gradient; Divergence for their properties and expressions in special
coordinate systems.
Fundamental integral formulas, connecting volume, surface and contour integrals, can be written down in terms of the basic operations of vector analysis. Let a vector field be continuously
differentiable in a bounded connected domain.
If $S$ is a piecewise-smooth oriented surface in that domain, bounded by a closed contour $\gamma$, the Stokes formula will be applicable:
$$\oint_{\gamma} \mathbf{a} \cdot d\mathbf{r} = \iint_{S} (\operatorname{curl} \mathbf{a}) \cdot \mathbf{n} \, dS,$$
where the vector $\mathbf{n}$ is the unit normal to $S$, oriented consistently with the direction in which
the points of the contour $\gamma$ are traversed.
Let the vector field be continuously differentiable in a domain $V$ bounded by a closed piecewise-smooth surface $S$ with outward unit normal $\mathbf{n}$; then the Ostrogradski formula reads as follows:
$$\iiint_{V} \operatorname{div} \mathbf{a} \, dV = \iint_{S} \mathbf{a} \cdot \mathbf{n} \, dS.$$
The integral $\iint_{S} \mathbf{a} \cdot \mathbf{n} \, dS$ is called the flux of the vector field $\mathbf{a}$ through the surface $S$.
If the divergence and the curl of a vector field are defined at each point of a domain, then, under suitable regularity conditions, the field can be recovered from them up to a field that is simultaneously solenoidal and irrotational.
Vector fields for which $\operatorname{curl} \mathbf{a} = 0$ are called potential fields, and those for which $\operatorname{div} \mathbf{a} = 0$ are called solenoidal fields (cf. Vector calculus).
Ostrogradski's formula is commonly called Gauss' formula.
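As a small numeric illustration of the Ostrogradski (Gauss) formula, one can check it on the unit cube for the sample field $\mathbf{a} = (xy,\, yz,\, zx)$, whose divergence is $x + y + z$; the field and grid sizes below are chosen only for illustration:

```python
# Midpoint-rule check that the volume integral of div a equals the
# outward flux of a through the boundary, for a = (xy, yz, zx) on [0,1]^3.
def volume_integral(n=20):
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            for k in range(n):
                z = (k + 0.5) * h
                total += (x + y + z) * h ** 3   # div a = x + y + z
    return total

def surface_flux(n=200):
    h = 1.0 / n
    flux = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        for j in range(n):
            v = (j + 0.5) * h
            flux += u * h * h   # x=1 face: a.n = 1*y, integrand y
            flux += u * h * h   # y=1 face: a.n = 1*z, integrand z
            flux += v * h * h   # z=1 face: a.n = 1*x, integrand x
            # faces x=0, y=0, z=0 contribute 0 for this field
    return flux

print(volume_integral(), surface_flux())   # both come out near 1.5
```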
The condition $\operatorname{curl} \mathbf{a} = 0$ in a simply-connected domain is necessary and sufficient for $\mathbf{a}$ to be a potential field.
The notions of gradient, divergence, Laplace operator, and flux of a vector field, and the given integral formulas, can easily be extended to higher-dimensional Euclidean spaces and Riemannian manifolds,
and all other notions can be extended to Riemannian manifolds as well.
In this context, the given integral formulas appear in a unified way as Stokes' formula, saying that the integral of a differential form over the boundary of an oriented manifold equals the integral of its exterior derivative over the manifold itself.
How to Cite This Entry:
Vector analysis. A.B. Ivanov (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Vector_analysis&oldid=16895
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
meta-infographic - Statistical Modeling, Causal Inference, and Social Science
7 Comments
1. I don’t like to swear, but that graph is fuckin brilliant.
2. I’m inspired
(A wordle with 1) the line “posted by …”, and 2) the words “comment”/”comments” and “filed under” removed. It needs a java plugin to view.)
3. Brilliant, indeed. But if we ask “which one doesn’t fit?” I’d have to say the Periodic Table of the Elements, which shows up everywhere because it is such a useful infographic (if that’s what you
want to call it).
I even have the Periodic Table on a t-shirt, which I wear periodically.
□ No—they’re talking about the Periodic Table of “something.” That’s the infographic that is in the form of the periodic table but not displaying the elements. Here’s an example which I found
in a quick google search. As with the other examples above, it was a clever idea the first time somebody did it.
4. I was being ironic with the wordle but ….
I actually thought it was quite interesting because in the centre it had a cross of “people”, “data”, “think”.
And “people” wasn’t a word I thought would have been highlighted that much in a statistics blog.
□ Megan:
This is probably coming from a bunch of sentences like, “I hate when people think . . .” and “People think they know about data but really they don’t . . .” and “Data show that people today
can’t think . . .” and other sorts of curmudgeonly thoughts.
Logic of cladistics (was: ANNOUNCEMENT : TREECON 3.0 ... )
Mark Siddall mes at zoo.toronto.edu
Thu Jun 9 15:58:47 EST 1994
apologies for not continuing this thread immediately, I've been doing
the proverbial chicken without a head thing recently...
...the above also pertains to Eric's concerns regarding lack of
follow-up... (in article 1759 of this group)...
In article <2sri9k$eh4 at news.u.washington.edu> joe at evolution.u.washington.edu (Joe Felsenstein) writes:
>"Cladists" (phylogenetic systematists of the Willi Hennig Society persuasion)
>usually say you should look only at the most parsimonious tree or trees. But
>they acknowledge that these don't have a 100% probability of including the
>true tree. I can't quite put this together unless they believe that
>a statistical approach would be valid, but that existing ones are not, so one
>should avoid looking at the confidence intervals they suggest. However I
>don't hear that from cladists, but rather a complete rejection of the framework
>of statistics instead. Perhaps I miscontstrue.
I agree with Joe here that there is some strong resistance to the notion
of pursuing a statistical framework in cladistics (ASIDE: I actually
like the word cladist... and am an ardent one :-) ...); witness
Carpenter's diatribe against all-things-random in the journal _Cladistics_
about 2 yr ago. Though definitely a cladist myself, I don't fall into
that group that eschews issues of probability and have even taken a stab
at some of this myself (see below). Perhaps the perspective is that
there is a feeling that one cannot ever know the "true" phylogeny so
how does one go about looking to empirically measure the performance of
a statistical approach to phylogeny reconstruction?
Parsimony is seen by many (like myself) to be a logical "path of least
resistance" approach to phylogeny reconstruction. That is, why
propose widespread convergence (for example) when there's a simpler explanation?
I think that one would likely find that those that resist a
probabilistic approach have their grounding in morphology and not in
sequence data.
Whereas an argument can be (and has been) made to think of the phylogenetic
signal in sequence data as a very jumbled one and in need of some
filtering, perhaps by a maximum likelihood approach, or by transversional
weighting or whatever, the equivalent cannot really be said of
the evolution of a femur. Mind you, the latter may well be revisited
at differing taxonomic levels; what may be clear at one level may not
be at others.
I confess, once again, to not really understanding max-like and cannot
therefore criticize it. Then again, my work to date has been largely
morphological (it's changing, though) and thus I have dug deep into
a very comfortable parsimony pit, one that I believe in strongly.
Of the "statistical" approaches to cladograms, I believe that there is
great merit to investigating the sensitivity of a data set and the
resulting tree(s) to disturbances. The bootstrap tests a particular
type of disturbance (i.e., can we simulate what parts of the tree(s)
might be unstable to further character data). Mind you, whereas I
agree with the use of a bootstrap to investigate this "variance" I do
not buy into using it to construct some other tree (as proposed by Joe
in his seminal 1985 paper). One reason for this is that by analogy,
where statisticians may use a bootstrap to get a better handle on variance
in a small or complete population, I have never seen it used to reject
the mean as the best-available estimate.
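The statistical analogy above can be sketched concretely: a bootstrap of the mean gives a handle on its variance, while the observed mean remains the point estimate. The data values here are made up purely for illustration:

```python
import random

random.seed(1)
data = [4.1, 3.8, 5.0, 4.6, 4.2, 3.9, 4.8, 4.4]
observed_mean = sum(data) / len(data)   # 4.35

# Resample the data with replacement many times and look at the spread
# of the resampled means.
reps = []
for _ in range(2000):
    resample = [random.choice(data) for _ in data]
    reps.append(sum(resample) / len(resample))

reps.sort()
lo, hi = reps[int(0.025 * len(reps))], reps[int(0.975 * len(reps))]
print(f"mean = {observed_mean:.2f}, 95% bootstrap interval ~ ({lo:.2f}, {hi:.2f})")
```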
My other reason, related, is that I believe that the bootstrap, though
it can tell you how well a clade IS supported, cannot tell you how well
it is NOT supported. It, thus, IMHO, allows you to accept certain
clades but does not allow you to reject clades in your most parsimonious
trees. Thus, constructing a bootstrap tree, which by excluding them,
rejects certain clades in those most parsimonious, is not a valid
approach. Assigning bootstrap values to clades in most parsimonious
tree(s) is.
>If one is using the trees for some secondary analysis such as looking at
>host-parasite coevolution, and one concentrates only on most parsimonious
>trees, it would seem that if a statistical framework is allowed even in
>principle, then one is effectively assuming a 100% probability for the
>set of most parsimonious trees.
This IS disturbing, I agree. The above, however, assumes that the
coevolutionary biologist is concerned with confidence in their
coevolutionary hypothesis. Rarely are they. Rod Page is. So am I.
The extent to which I have taken it (submitted... and crossing my fingers)
is to ask can I get a fit of the host and parasite cladograms as good
or better when I randomize the observed associations of host(s) and
parasite(s). The upshot of this is, that where the answer is:
observed is no better than random... then less than 100% confidence
in the contributing cladograms isn't going to make it any better.
Where the answer is: non-random association... one could make the argument
that this is only a partial probability of the system.
I do not profess to have a solution to this.
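The randomization described above can be sketched as a permutation test. The fit measure here is a stand-in (a count of matches against a toy cospeciation table), since the real statistic would come from comparing the host and parasite cladograms; all names and data are illustrative:

```python
import random

random.seed(7)
hosts     = ["h1", "h2", "h3", "h4", "h5"]
parasites = ["p1", "p2", "p3", "p4", "p5"]
# Toy table of host-parasite pairs treated as congruent between the
# two cladograms; a real analysis would score tree fit instead.
cospeciating = {("h1", "p1"), ("h2", "p2"), ("h3", "p3"), ("h4", "p4")}

def fit(assoc):
    return sum(pair in cospeciating for pair in assoc)

observed = fit(list(zip(hosts, parasites)))   # 4 matching pairs

n_perm, count_ge = 5000, 0
for _ in range(n_perm):
    shuffled = parasites[:]
    random.shuffle(shuffled)                  # randomize the associations
    if fit(list(zip(hosts, shuffled))) >= observed:
        count_ge += 1

p_value = count_ge / n_perm
print(observed, p_value)   # a small p-value: fit better than random
```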
>I think all statistical types agree that the best-fitting trees are those that
>are most probable. I don't know anyone who argues for preferring a less
>well-fitting tree. But that may or may not be the same thing as a less
>parsimonious tree, as parsimony may well not be the best measure of goodness
>of fit.
Agreed. Like all issues of fitting a function to a set of data points,
things are not what they appear. Take a set of corresponding values.
Measure the performance of a linear, logistic and logarithmic
function to the data... find that logistic fits best (and has a
significant p-value)... one would choose the logistic. BUT... there
is an infinite number of better fitting uninvestigated functions!
It all comes down to what one considers to be (hopefully a priori)
the suite of defensible approaches to the data. So, the cladists,
like myself vis-à-vis morphology, contend that the ONLY defensible approach
is parsimony. This tends to put a deadening tone to the argument.
Parsimony finds the best fit given parsimony, thus all equally
parsimonious trees are equally well supported. One can't reject them
any more than one can reject an observed mean as the best available
estimate of the parametric mean given the availability of data.
However, being my sort of cladist involves an underlying conviction
that would be like a conviction that the mean is the best estimate
of central tendency. And, of course, make certain assumptions about
the nature and structure of the data. I am not afraid of such things...
I wish my fellow cladists would more often own up to it though.
That last comments approaches addressing the following:
>If one believes that degree of parsimony is by definition always the best
>measure of goodness of fit, then of course ML would not always be g-o-f,
>but I would argue against that way of defining g-o-f.
>I'm delighted to find some discussion of this -- as far as I can see
>cladists are currently extremely reluctant to openly discuss the logical
>foundations of their approach. See my book review in Cladistics, volume
>8 pages 191-196, 1993 and the lack so far of the discussion it pointedly
>calls for. Mark is to commended for rising to the occasion.
Thanks to Joe for the kind words. I too am glad of some discussion.
I would invite Joe to (as his time permits) submit a synopsis, in layman's
terms of course for those who like me may not make it past the
formulae, ... that ran on some, sorry... a synopsis of maximum likelihood...
I would definitely be interested!
If only to arm myself against it ;-)
That was long - ouch - I'll try for brevity in the future.
Mark E. Siddall "I don't mind a parasite...
mes at vims.edu I object to a cut-rate one"
Virginia Inst. Marine Sci. - Rick
Gloucester Point, VA, 23062
More information about the Mol-evol mailing list
Gunter Math Tutor
Find a Gunter Math Tutor
...He has even more years experience as an ESL instructor for various companies and universities. He provides tutoring in subjects ranging from elementary math up to calculus and statistics, earth
science to chemistry, grammar to essay writing, and also physics and Spanish. He is currently completing the last few classes necessary for a degree in Biochemistry.
37 Subjects: including calculus, SAT math, chemistry, algebra 2
...I have been doing algebra since elementary school and it was taught to me daily by my father, an engineer, on a dry erase board during our family meals. Because of this early familiarity I
usually can see at least 2 to 4 different ways to solve any algebra problem and have found that students wi...
15 Subjects: including algebra 1, algebra 2, chemistry, physics
...Sometimes, this is all a student needs in order to achieve success in the classroom.I have a degree in elementary education from Texas Wesleyan University, as well as a grade 1 - 8 Texas
teaching certificate. I have more than 10 years of public school teaching experience, a master's degree in gi...
39 Subjects: including ACT Math, English, statistics, reading
...I achieved a 5 on the AP Calculus BC exam and a 4 on the AP Statistics exam. During my time in college, I have completed various courses in math including Multivariable Calculus, Linear
Algebra, Differential Equations, Theoretical Concepts of Calculus, Abstract Algebra, Mathematical Analysis, Pr...
7 Subjects: including algebra 1, algebra 2, prealgebra, SAT math
...In addition, I have tutored kids in every level of math from Kindergarten counting to Algebra I. I have enjoyed helping students master geographic memory work such as states and capitals,
countries, and bodies of water. From tutoring a young mother from Burundi during my college days in Bryan/C...
41 Subjects: including prealgebra, study skills, GED, elementary (k-6th)
Integral Applications
July 6th 2009, 07:22 AM #1
Junior Member
May 2009
Integral Applications
I am trying to solve this question where I need to find the y-coordinate of the center of mass. Once again, I am having trouble dealing with the integral with the maximum function.
I am assuming that to find the y coordinate, you multiply the function by y, since with only 2 integrals with two variables that is what we are supposed to do.
Any suggestions on how to solve this problem, or mainly on how to eliminate the maximum would be great!
y-co-ordinate of centre of mass=
$\frac{\int\int\int (y\rho) dxdydz}{\int\int\int(\rho) dxdydz}$
when integrating with respect to z, break the integral into two different integrals
in one integral, set the limits of y from 0 to 1
in the other integral, set the limits of y from 1 to 2.
in the first of the above max(1,y^2) will be 1
in the second of the above, it will be $y^2$
y-co-ordinate of centre of mass=
$\frac{\int\int\int (y\rho) dxdydz}{\int\int\int(\rho) dxdydz}$
when integrating with respect to z, break the integral into two different integrals
in one integral, set the limits of y from 0 to 1
in the other integral, set the limits of y from 1 to 2.
in the first of the above max(1,y^2) will be 1
in the second of the above, it will be $y^2$
That makes sense. So would it basically be like a preset condition like for all y such that 0 < y < 1...the integration would look one way and then another way for greater y values?
Therefore, when you do the final integration with respect to y, for y = 0 you would sub it into the first integrated equation, and for y = 2 you would sub into the second integrated equation.
If I misunderstood you, please let me know...and thanks again for your help!
$\frac{\int_1^2\int_{y^2}^4 \int_{-y}^{\frac{-yz}{y+z}} (y\rho)\, dx\,dz\,dy+\int_0^1\int_{1}^4 \int_{-y}^{\frac{-yz}{y+z}} (y\rho)\, dx\,dz\,dy}{\int_1^2\int_{y^2}^4 \int_{-y}^{\frac{-yz}{y+z}} (\rho)\, dx\,dz\,dy+\int_0^1\int_{1}^4 \int_{-y}^{\frac{-yz}{y+z}} (\rho)\, dx\,dz\,dy}$
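Outside the original thread, here is a small numerical sanity check in Python — assuming a constant density ρ = 1, which the thread never specifies — that the split pair of integrals above agrees with a single integral keeping max(1, y²) as the lower z-limit:

```python
def triple(f, y_lo, y_hi, z_lo_of_y, ny, nz=100):
    """Midpoint rule over y and z; the inner x-integral of a constant is just the interval length."""
    total = 0.0
    hy = (y_hi - y_lo) / ny
    for i in range(ny):
        y = y_lo + (i + 0.5) * hy
        z_lo = z_lo_of_y(y)
        hz = (4.0 - z_lo) / nz
        for j in range(nz):
            z = z_lo + (j + 0.5) * hz
            x_len = (-y * z / (y + z)) - (-y)   # from x = -y up to x = -yz/(y+z)
            total += f(y) * x_len * hy * hz
    return total

# One pass with the max() kept in the z-limit...
num_max = triple(lambda y: y,   0.0, 2.0, lambda y: max(1.0, y * y), ny=80)
den_max = triple(lambda y: 1.0, 0.0, 2.0, lambda y: max(1.0, y * y), ny=80)

# ...and the same thing split at y = 1, exactly as described above.
num_split = (triple(lambda y: y, 0.0, 1.0, lambda y: 1.0,   ny=40) +
             triple(lambda y: y, 1.0, 2.0, lambda y: y * y, ny=40))
den_split = (triple(lambda y: 1.0, 0.0, 1.0, lambda y: 1.0,   ny=40) +
             triple(lambda y: 1.0, 1.0, 2.0, lambda y: y * y, ny=40))

y_bar = num_split / den_split   # y-coordinate of the centre of mass for rho = 1
```

Both passes sample the same midpoints, so they agree to floating-point precision; with a non-constant ρ the same splitting argument applies unchanged.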
July 6th 2009, 07:35 AM #2
July 6th 2009, 12:14 PM #3
Junior Member
May 2009
July 6th 2009, 07:14 PM #4 | {"url":"http://mathhelpforum.com/calculus/94524-integral-applications.html","timestamp":"2014-04-16T06:39:56Z","content_type":null,"content_length":"39630","record_id":"<urn:uuid:1772f650-1152-4f88-b6e2-b8d266c870be>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00526-ip-10-147-4-33.ec2.internal.warc.gz"} |
Developmental Education Descriptions
Academic Support Services:
Developmental Education Descriptions
DEVELOPMENTAL COURSE DESCRIPTIONS
US 10003 – Reading Strategies for College Success (3 credit hours)
This course is designed to enable students to use reading to teach themselves, a skill necessary for college success. Students will develop a college-level vocabulary, a daily newspaper
reading habit, and a positive attitude about reading and learning. Successful completion of this course will empower students to establish and maintain an interest in a topic, allowing them
to concentrate and thus understand and remember what they read. Since these students have had very little experience reading to learn, students who take US 10003 should not take heavy reading
content courses, because those courses require 80 pages of textbook reading each week. Students earning a 'D' or 'F' grade must repeat the course. A grade of 'C' or better is required to move on to the
next course in the sequence, US 10006 – Study Strategies for College Success.
US 10006 – Study Strategies for College Success (3 credit hours)
This course is about applying the right strategy or technique to different courses in order to make the most efficient use of study time. Placement into this course may include what is described for
US 10003 and/or one or more of the following: a student who "got by" without reading high school textbooks or doing much studying; a student who did not take high school courses that prepared him/her
for the demands of college coursework; or a returning student who has been away from school for a period of time. Students will benefit most if they take a heavy content reading course during the
same semester they enroll for US 10006. Students who place into US 10006 and also enroll in a reading content course such as psychology, history or sociology do better in the reading content course
because those courses demand 80 pages of textbook reading each week.
A study of Kent State students who placed into US 10006 and completed the course reports that those students' average GPA at the end of their first semester was 2.32, while students who placed into US
10006 and did not take the course earned an average probationary GPA of 1.85.
ENG 11001 – Intro to College Writing I S
(3 credit hours)
ENG 11002 – College Writing I S (3 credit hours)
In this writing "stretch" sequence the student will remain with the same instructor for an entire academic year. ENG 11001 will emphasize developmental work, including how to choose a topic, make a
point and support ideas with clear details, the essentials for successful college writing. A minimum grade of 'C' is required, along with passing a portfolio assessment to move onto ENG 11002 –
College Writing I S. ENG 11002 will emphasize reading, thinking, writing and technological skills to prepare students for ENG 21011 – College Writing II. A minimum grade of 'C' is required to advance
to ENG 21011.
MATH 10020 – Pre-Core Mathematics (2 credit hours)
Properties of whole numbers, fractions, decimals, percents, signed numbers and order of operations, to a greater degree than in Core Math I and Core Math II. Mental math and elementary algebraic
thinking skills are emphasized and calculators are not allowed. Hours do not count toward graduation. Prerequisite: none.
MATH 10006 – Core Mathematics I & II
(4 credit hours)
This math placement indicates the student's need for review of basic math, pre-algebra and high school Algebra I and II. Course content includes operations on integers, fractions, decimals, and
percents, properties of real numbers. Introduction to variables, first degree equations and problem-solving with formulas. Equations and inequalities in one variable, linear equations, rate of change
and slope, graphing in the Cartesian plane. Also includes an introduction to functions, systems of linear equations, exponents, polynomial operations, scientific notation. Factoring polynomials, solving
quadratics by factoring, radicals and rational exponents. Students must successfully pass this course with a grade of 'C' or better before moving on to the next course in the sequence which is Math
10007 – Core Mathematics III & IV.
MATH 10007 – Core Mathematics III & IV (4 credit hours)
This math placement indicates the student's need for review of high school Algebra I and II. Course content includes some of what is described in Core Mathematics II, but also includes Zeros of
functions, rational expressions and equations, problem-solving with rational expressions, intermediate factoring techniques. Quadratics: functions, graphs, equations, inequalities, "quadratic type"
equations and problem-solving. Furthermore, this course includes advanced factoring techniques, rational functions, radical equations, absolute value equations and inequalities. Exponential and
logarithmic functions: introduction, graphing, problem-solving. Students must complete each course with a grade of 'C' or better to move on to Math 11009 – Modeling Algebra or Math 11010 – Algebra for
Calculus, or another math course if required by their major. | {"url":"http://www.geauga.kent.edu/academics/support/developmental-education-descriptions.cfm","timestamp":"2014-04-17T00:50:31Z","content_type":null,"content_length":"45487","record_id":"<urn:uuid:95f7f2d2-2c73-4273-ae43-0e2ef4b9531c>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00487-ip-10-147-4-33.ec2.internal.warc.gz"} |
El Sobrante Algebra 1 Tutor
Find an El Sobrante Algebra 1 Tutor
...Public speaking remains many people's greatest fear. Let me help you turn it into your greatest strength. I have a great deal of experience with students who have ADD and ADHD.
24 Subjects: including algebra 1, English, reading, writing
...I often find, when working with my students, that an important component of the tutoring is attention to these skills in addition to the specific subject areas for which tutoring had been
requested. I have a BA in mathematics from UCLA, where linear algebra (matrices, vector analysis, etc.) were...
20 Subjects: including algebra 1, calculus, statistics, trigonometry
...My undergraduate education was completed at Pennsylvania State University, where I earned a Bachelor of Science degree in Mechanical Engineering in 1983. After working in the HVAC industry as a
consulting engineer, I returned to graduate school at West Chester University to complete my teaching ...
32 Subjects: including algebra 1, reading, ACT Math, elementary math
...I will get your child back on track if he or she has fallen back temporarily. I also work with students who are looking for a leg up in college admissions by accelerating their progress. WHY I
AM A GOOD MATH TUTOR I am passionate about teaching and about math.
12 Subjects: including algebra 1, calculus, algebra 2, geometry
...I also tutor Math and Biology for high-school and college students. I am a very patient, kind, understanding person and will go out of my way to help kids do their very best in school. I also
help teach how to study well for tests, how to take notes, how to be organized and what to expect and how to prepare for college.
8 Subjects: including algebra 1, calculus, geometry, biology | {"url":"http://www.purplemath.com/el_sobrante_ca_algebra_1_tutors.php","timestamp":"2014-04-21T14:56:54Z","content_type":null,"content_length":"23899","record_id":"<urn:uuid:9637388c-aa08-4a2e-95b8-f83e56f55648>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00184-ip-10-147-4-33.ec2.internal.warc.gz"} |
FOM: Mathematical Certainty: reply to Silver
Joe Shipman shipman at savera.com
Tue Dec 15 14:01:31 EST 1998
Charles Silver wrote:
> This criterion may be right in all cases that we know of, but it
> is still possible for it to come out wrong. This establishes that
> agreement and correctness are distinct.
I agree with you of course, but Hersh doesn't; I am not defending Hersh's position,
just talking about what arguments can be used against it.
> I think you are doing something very different from what Hersh
> wanted to do. Hersh wanted to capture the *meaning* of mathematical
> truth. For him, agreement of a certain sort simply *is* mathematical
> truth. I don't think you are claiming that your criteria capture the
> *meaning* of mathematical truth. The very fact that you are asking
> whether anyone knows any counterexamples shows that the concepts
> "mathematically true" and "satisfy the criteria" are distinct.
Yes; but if no counterexamples can be found Hersh can maintain that this is a
distinction without a difference! The point is that a counterexample would show
that his notion of mathematical truth did not entail a property of mathematical
truth that we would all agree on (namely incorrigibility) and therefore could not be
correct; without such a counterexample he is free to redefine what mathematicians
are "really" doing.
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/1998-December/002487.html","timestamp":"2014-04-20T16:38:13Z","content_type":null,"content_length":"3770","record_id":"<urn:uuid:092b6490-fb1c-4e6c-926b-580ad5b0e59c>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00258-ip-10-147-4-33.ec2.internal.warc.gz"} |
Total # Posts: 7
Geometry Triangle Coordinate Plane Question
Ok thanks :)
Geometry Triangle Coordinate Plane Question
What kind of triangle is made by connecting the points A(0, 6), B(3, 6), and C(3, 2)? (equilateral / right / isosceles / right and isosceles) Thank You :)
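The posts above only ask the questions; as an aside, the triangle one can be settled with the distance formula. A short sketch:

```python
from math import dist  # Python 3.8+

A, B, C = (0, 6), (3, 6), (3, 2)
ab, bc, ca = dist(A, B), dist(B, C), dist(C, A)   # side lengths: 3, 4, 5

# A 3-4-5 triangle satisfies the Pythagorean relation, so it is right;
# since all three sides differ, it is not isosceles.
is_right = abs(ab ** 2 + bc ** 2 - ca ** 2) < 1e-9
```

The sides come out 3, 4, 5, so the answer is "right" (but not "right and isosceles").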
Geometry Quadrilateral Coordinate Plane Question
What type of quadrilateral is formed by connecting the points (0, 9), (3, 6), (0, 1), and (−3, 6)? (rhombus / trapezoid / kite / quadrilateral)
Geometry Solving a Kite Question PLEASE HELP!
Find the variables and lengths of the sides of this kite. {top left is y-4, top right is x+5, bottom left is 2x+5, and bottom right is x+12} Thank you (:
Geometry kite solving question
Find the variables and lengths of the sides of this kite. {top left is y-4, top right is x+5, bottom left is 2x+5, and bottom right is x+12} Thank you (:
Geometry rhombus variable question
Thank you (:
Geometry rhombus variable question
In the rhombus, m<1=18x, m<2=x+y, m<3=30z. Find the value of the variables x, y, and z. {angle 1 is on bottom right, angle 2 is on bottom left, on the upper half is angle 3 on the left. the figure is
shaped like a wide diamond} Thank You ^0^ It means alot! | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=sLEXANDria","timestamp":"2014-04-20T09:03:20Z","content_type":null,"content_length":"7367","record_id":"<urn:uuid:1501d709-9b17-47f0-b4f8-95ddbd469db9>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00660-ip-10-147-4-33.ec2.internal.warc.gz"} |
-2,_____,______,54; geo sequence
October 27th 2007, 02:43 PM #1
Oct 2007
-2,_____,______,54; geo sequence
and I don't know how to do this one either:
Or this one:
_____,_______,_______,2.5 r=2
My textbook doesn't go over this either....
recall that the terms of a geometric sequence are given by:
$a_n = ar^{n - 1}$ for $n = 1,2,3,4...$
where $a_n$ is the nth term, $a$ is the first term, $r$ is the common ratio, and $n$ is the index of the term.
here we see that $a = \frac 14$ and $a_3 = ar^2 = \frac 14r^2 = 4$
solve for $r$ and you can find the middle term.
Or this one:
_____,_______,_______,2.5 r=2
My textbook doesn't go over this either....
here you have $a_4 = 2.5 = ar^3 = 8a$
solve for $a$ and you can find the other terms
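To make the formula concrete (my own worked sketch, not part of the thread), here is how $a_n = ar^{n-1}$ fills in all three of the blanks asked about above:

```python
# First problem: -2, _, _, 54.  a = -2 and a*r**3 = 54, so r**3 = -27 and r = -3.
a, r = -2, -3
seq1 = [a * r ** n for n in range(4)]       # the full sequence: -2, 6, -18, 54

# Second problem: 1/4, _, 4.  a = 1/4 and a*r**2 = 4, so r**2 = 16 and r = +/-4.
a = 1 / 4
middle = a * 4                              # r = 4 gives 1; r = -4 would give -1

# Third problem: _, _, _, 2.5 with r = 2.  a * 2**3 = 2.5, so a = 2.5 / 8.
a3 = 2.5 / 8
seq3 = [a3 * 2 ** n for n in range(4)]      # 0.3125, 0.625, 1.25, 2.5
```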
October 27th 2007, 03:23 PM #2 | {"url":"http://mathhelpforum.com/algebra/21454-2-_____-______-54-geo-sequence.html","timestamp":"2014-04-17T16:12:58Z","content_type":null,"content_length":"35575","record_id":"<urn:uuid:a549c1c8-5910-4221-9e7d-cc2970637c09>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00413-ip-10-147-4-33.ec2.internal.warc.gz"} |
Statistical Physics: Ensembles
Just as a system is defined as a collection of a large number of particles, an "ensemble" can be defined as a collection of a large number of macroscopically identical but essentially independent systems.
Here the term macroscopically identical means that each of the systems constituting the ensemble satisfies the same macroscopic conditions, such as volume, energy, pressure, temperature and total
number of particles. The term essentially independent means that the systems in the ensemble are mutually non-interacting; they may differ in microscopic conditions such as parity, symmetry, quantum states, etc.
There are three types of ensembles:
1. Micro-Canonical Ensemble
2. Canonical Ensemble
3. Grand Canonical Ensemble
Micro-canonical Ensemble
It is the collection of a large number of essentially independent systems having the same energy E, volume V and total number of particles N.
The systems of a micro-canonical ensemble are separated by rigid, impermeable and insulated walls, such that the values of E, V and N are not affected by the presence of the other systems.
Here all the borders are impermeable and insulated.
Canonical Ensemble
It’s the collection of a large number of essentially independent systems having the same temperature T, volume V and the number of particles N.
The equality of temperature of all the systems can be achieved by bringing all the systems in thermal contact. Hence, in this ensemble the systems are separated by rigid impermeable but conducting
walls, the outer walls of the ensemble are perfectly insulated and impermeable though.
This ensemble is as shown in figure:
Systems 1 through 25 are arranged in a 5×5 grid; each system has the same temperature T, volume V, and number of particles N. The outer borders are insulated and impermeable, while the inner separating borders are conducting and impermeable.
Grand Canonical Ensemble
It is the collection of a large number of essentially independent systems having the same temperature T, volume V & chemical potential μ.
The systems of a grand canonical ensemble are separated by rigid permeable and conducting walls. This ensemble is as shown in figure:
Systems 1 through 25 are arranged in a 5×5 grid; each system has the same temperature T, volume V, and chemical potential μ.
Here the inner borders are rigid, permeable and conducting while the outer borders are impermeable as well as insulated. As the inner separating walls are conducting and permeable, the exchange of heat
energy as well as of particles between the systems takes place, in such a way that all the systems achieve the same common temperature T and chemical potential μ.
Ensemble Average
Every statistical quantity has not an exact but only an average value. The time average of a statistical quantity during the motion is equal to its ensemble average.
If R(x) is a statistical quantity along the x-axis and N(x) is the number of phase points in phase space, then the ensemble average of the statistical quantity R is defined as,
$ \bar{R} := \dfrac{\int_{-\infty}^{\infty} R(x) N(x) \mathrm{d} x}{\int_{-\infty}^{\infty} N(x) \mathrm{d} x}$ | {"url":"http://gauravtiwari.org/2013/01/25/statistical-mechanics-ensembles/","timestamp":"2014-04-20T03:24:34Z","content_type":null,"content_length":"97159","record_id":"<urn:uuid:d39549c5-7b9a-49ea-b456-37a94783b74f>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00140-ip-10-147-4-33.ec2.internal.warc.gz"} |
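For a discrete set of phase points the integral definition above reduces to a weighted mean, $\bar{R} = \sum_i R_i N_i \big/ \sum_i N_i$. A tiny sketch (the numbers are invented purely for illustration):

```python
def ensemble_average(R, N):
    # Discrete analogue of the ensemble average: sum(R_i * N_i) / sum(N_i),
    # where N_i is the number of phase points carrying the value R_i.
    return sum(r * n for r, n in zip(R, N)) / sum(N)

R_values = [1.0, 2.0, 3.0]   # hypothetical values of the quantity R
N_points = [1, 1, 2]         # hypothetical phase-point counts
avg = ensemble_average(R_values, N_points)   # (1 + 2 + 6) / 4 = 2.25
```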
Matrices and Transformations
This book presents an elementary and concrete approach to linear algebra that is both useful and essential for the beginning student and teacher of mathematics. Here are the fundamental concepts of
matrix algebra, first in an intuitive framework and then in a more formal manner. A variety of interpretations and applications of the elements and operations considered are included. In particular,
the use of matrices in the study of transformations of the plane is stressed. The purpose of this book is to familiarize the reader with the role of matrices in abstract algebraic systems, and to
illustrate its effective use as a mathematical tool in geometry.
The first two chapters cover the basic concepts of matrix algebra that are important in the study of physics, statistics, economics, engineering, and mathematics. Matrices are considered as elements
of an algebra. The concept of a linear transformation of the plane and the use of matrices in discussing such transformations are illustrated in Chapter #. Some aspects of the algebra of
transformations and its relation to the algebra of matrices are included here. The last chapter on eigenvalues and eigenvectors contains material usually not found in an introductory treatment of
matrix algebra, including an application of the properties of eigenvalues and eigenvectors to the study of the conics. Considerable attention has been paid throughout to the formulation of precise
definitions and statements of theorems. The proofs of most of the theorems are included in detail in this book.
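As a taste of the eigenvalue material mentioned above (this example is mine, not taken from the book): for a 2×2 matrix the eigenvalues follow from the characteristic polynomial λ² − (trace)λ + det = 0, and for the symmetric matrix of a conic their signs classify the curve.

```python
from math import sqrt

def eigvals_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the characteristic polynomial."""
    tr, det = a + d, a * d - b * c
    disc = sqrt(tr * tr - 4 * det)   # real for any symmetric matrix
    return sorted(((tr - disc) / 2, (tr + disc) / 2))

# The conic 2x^2 + 2xy + 2y^2 = 1 has matrix [[2, 1], [1, 2]]:
lam = eigvals_2x2(2, 1, 1, 2)        # eigenvalues 1 and 3: both positive -> an ellipse
```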
Matrices and Transformations
assumes only that the reader has some understanding of the basic fundamentals of vector algebra. Pettofrezzo gives numerous illustrative examples, practical applications, and intuitive analogies.
There are many instructive exercises with answers to the odd-numbered questions at the back. The exercises range from routine computations to proofs of theorems that extend the theory of the subject.
Originally written for a series concerned with the mathematical training of teachers, and tested with hundreds of college students, this book can be used as a class or supplementary text for
enrichments programs at the high school level, a one-semester college course, individual study, or for in-service programs.
Reprint of the 1966 edition.
Availability Usually ships in 24 to 48 hours
ISBN 10 0486636348
ISBN 13 9780486636344
Author/Editor Anthony J. Pettofrezzo
Format Book
Page Count 142
Dimensions 5 5/8 x 8 1/4 | {"url":"http://store.doverpublications.com/0486636348.html","timestamp":"2014-04-17T15:27:49Z","content_type":null,"content_length":"38121","record_id":"<urn:uuid:4243b119-7a83-42ac-8ab6-9112fb47732f>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00214-ip-10-147-4-33.ec2.internal.warc.gz"} |
A highway makes an angle of 6° with the horizontal. This angle is maintained for a horizontal distance of 5 miles. To the nearest hundredth of a mile, how high does the highway rise in this 5-mile
section? Show the steps you use to find the distance.
• 8 months ago
I got .53 Using sin
Not sin :P We need to use a trig function that includes a known quantity as well as the unknown one. Sin is opposite over hypotenuse, but the hypotenuse gives us no information. We use tangent
because tangent is opposite over adjacent, which includes our x as well as our known quantity
Sine is opposite over hypotenuse. You have opposite side over adjacent side which is NOT sine.
I thought we would do...|dw:1377069953819:dw|
Interesting, law of sines O.o
Not necessary, though.
I dont know then...:(
Checked it, answer is the same, just.....a much longer way of doing it that isnt needed xD
@OpenSessame Go ahead and do the Law of Sines and see what the answer is.
its .53
Yeah, it's the same answer doing it the faster way with tangent. It's nice to know the law of sines, but you should also know the more efficient way of doing this problem :3
I thought that was more efficient cause i can do it really easy that way...
Well, do whats comfortable of course. But do you understand what I did by chance?
Go with what you know. nearest hundredth of a mile --> .52552... --> .53 as you said @OpenSessame
Yes you used tan and then multiplied.
Mhm. As long as you know what I did, then thats fine. The more you know the better, but use whats comfortable in the end. You got the right answer, so good job!
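For reference, the arithmetic the thread settles on — rise = run × tan(angle), with the 6° and 5 miles taken from the problem statement — checks out:

```python
from math import tan, radians

rise = 5 * tan(radians(6))       # opposite = adjacent * tan(angle)
print(round(rise, 2))            # 0.53, matching the thread's answer
```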
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/52146adee4b0450ed75e0a97","timestamp":"2014-04-19T07:34:03Z","content_type":null,"content_length":"100434","record_id":"<urn:uuid:8099f0d9-9201-4346-8e44-79dd86353663>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00004-ip-10-147-4-33.ec2.internal.warc.gz"} |
[gd,w] = grpdelay(b,a)
[gd,w] = grpdelay(b,a,n)
[gd,w] = grpdelay(sos,n)
[gd,w] = grpdelay(d,n)
[gd,f] = grpdelay(...,n,fs)
[gd,w] = grpdelay(...,n,'whole')
[gd,f] = grpdelay(...,n,'whole',fs)
gd = grpdelay(...,w)
gd = grpdelay(...,f,fs)
[gd,w] = grpdelay(b,a) returns the group delay response, gd, of the discrete-time filter specified by the input vectors, b and a. The input vectors are the coefficients for the numerator, b, and
denominator, a, polynomials in z^-1. The Z-transform of the discrete-time filter is

H(z) = (b(1) + b(2)z^-1 + ... + b(n+1)z^-n) / (a(1) + a(2)z^-1 + ... + a(m+1)z^-m)
The filter's group delay response is evaluated at 512 equally spaced points in the interval [0,π) on the unit circle. The evaluation points on the unit circle are returned in w.
[gd,w] = grpdelay(b,a,n) returns the group delay response of the discrete-time filter evaluated at n equally spaced points on the unit circle in the interval [0,π). n is a positive integer.
[gd,w] = grpdelay(sos,n) returns the group delay response for the second-order sections matrix, sos. sos is a K-by-6 matrix, where the number of sections, K, must be greater than or equal to 2. If
the number of sections is less than 2, grpdelay considers the input to be the numerator vector, b. Each row of sos corresponds to the coefficients of a second-order (biquad) filter. The ith row of
the sos matrix corresponds to [bi(1) bi(2) bi(3) ai(1) ai(2) ai(3)].
[gd,w] = grpdelay(d,n) returns the group delay response for the digital filter, d. Use designfilt to generate d based on frequency-response specifications.
[gd,f] = grpdelay(...,n,fs) specifies a positive sampling frequency fs in hertz. It returns a length-n vector, f, containing the frequency points in hertz at which the group delay response is
evaluated. f contains n points between 0 and fs/2.
[gd,w] = grpdelay(...,n,'whole') and [gd,f] = grpdelay(...,n,'whole',fs) use n points around the whole unit circle (from 0 to 2π, or from 0 to fs).
gd = grpdelay(...,w) and gd = grpdelay(...,f,fs) return the group delay response evaluated at the angular frequencies in w (in radians/sample) or in f (in cycles/unit time), respectively, where fs is
the sampling frequency. w and f are vectors with at least two elements.
grpdelay(...) with no output arguments plots the group delay response versus frequency.
grpdelay works for both real and complex filters. | {"url":"http://www.mathworks.com.au/help/signal/ref/grpdelay.html?nocookie=true","timestamp":"2014-04-23T08:47:58Z","content_type":null,"content_length":"45637","record_id":"<urn:uuid:e503cc9e-c1ec-4b16-9ca4-5e79baf989d8>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00235-ip-10-147-4-33.ec2.internal.warc.gz"} |
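The MATLAB reference above doesn't show an implementation, but the quantity itself — group delay as the negative derivative of the phase of H(e^{jω}) — can be sketched numerically in a few lines (Python here; the filter and frequency are illustrative, not from the docs):

```python
import cmath

def freq_response(b, w):
    """H(e^{jw}) for an FIR filter with coefficients b."""
    return sum(bn * cmath.exp(-1j * w * n) for n, bn in enumerate(b))

def group_delay(b, w, dw=1e-6):
    """-d(phase)/dw, estimated by a forward difference (valid away from phase wraps)."""
    p0 = cmath.phase(freq_response(b, w))
    p1 = cmath.phase(freq_response(b, w + dw))
    return -(p1 - p0) / dw

b = [1, 2, 3, 2, 1]          # symmetric (linear-phase) FIR of length 5
gd = group_delay(b, 0.5)     # constant group delay of (5 - 1) / 2 = 2 samples
```

A length-N symmetric FIR filter has constant group delay (N−1)/2, which the finite-difference estimate recovers.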
is apple lossless real lossless?
post #16 of 101
2/12/13 at 12:29pm
Originally Posted by
yep, i think he was thinking compressed lossless is not really lossless because it's compressed, but compressed lossless should sound exactly the same as uncompressed lossless while taking up less
space... that's probably where he misunderstood apple lossless as not being a lossless format
Compress lossless gets decompressed on the fly, so what you actually hear is the decompressed signal.
If I compress a file using either a lossy or a lossless approach and play (or see) such file without decompressing it, I would likely get some random noise stuff coming out.
Originally Posted by ultrabike
Compress lossless gets decompressed on the fly, so what you actually hear is the decompressed signal.
If I compress a file using either a lossy or a lossless approach and play (or see) such file without decompressing it, I would likely get some random noise stuff coming out.
does that have anything to do with "upsampling"?
Not really, it's just that it's a totally different format when compressed. It's only a representation of what the audio data is rather than actual audio data, and a DAC couldn't read the compressed
data on its own. If it could it would just be a garbled mess I believe.
This shouldn't be news to anyone, but basically any encoded audio file needs to be decoded (usually to LPCM - what WAVs normally contain) during playback.
The input might be 10, 100 or 1000 kbps but the output will always be 1411.2 kbps for 44.1 kHz, 16 bit, 2 channel audio.
Edited by xnor - 2/12/13 at 2:34pm
for example, i heard the colorfly c4 has upsampling option, what is the use of that though? since upconverting shouldnt provide better sound quality
I'm not 100% sure on this, but upsampling would provide a better representation of the signal (higher sampling rate provides a better "picture" of the true signal). I don't think this necessarily
means that it would have better sound quality, but the thought of having a higher sampling rate would make sense to provide a more accurate signal. Of course the upsampling probably isn't perfect so
it might degrade the signal actually.
Audirvana Plus on Mac OS X provides an over sampling option. I don't use it though.
As for ALAC or FLAC, it's just like a .zip file. You put a file in it and it gets compressed. Later when you need that file, you unzip the .zip file and you get your original, uncompressed file back
in pristine condition. Likewise, for ALAC, you put a lossless music file in a .zip-like container that happens to be called ALAC, and when you need that music the computer decodes/unzips it to the
original uncompressed state.
Edited by miceblue - 2/12/13 at 3:13pm
Upsampling makes as much sense as "upconverting" MP3 to WAV before playback. Even less, actually, because modern DACs oversample internally, and a low input sample rate is a hint that they can
sacrifice ultrasonic performance for better audio-band performance.
FWIW, my soundcard measures slightly better in RMAA at lower rates. Dunno if it's real or caused by some RMAA bug, but generally there is no reason for a high-rate-capable DAC to perform worse at a
low rate, and noise shaping could be a reason for better performance at a low rate.
An exception would be avoiding OS resampler (if you can't do it any other way) or converting 44.1k to 48k on Xonars if their higher noise floor at 44.1k bothers you.
Edited by mich41 - 2/12/13 at 3:47pm
I don't see why upsampling would ever be a good thing. Correct me if I'm wrong, but the only audible change that could result is the addition of aliasing.
On the Xonars I think that might have just been a bit of a screw-up on their part... at any rate, on my STX there was no audible noise floor at 44.1.
Edited by chewy4 - 2/12/13 at 3:58pm
Apple has not come out with any new ideas in 5 years. Samsung is now the world technology leader. In 4 years people will start to notice that Apple is just repackaging their old tech. They are
history but still cute, just like Twinkies.
B-b-b-b-but they'll have the iWatch. And then they'll make an iPad Mini 2 with a "revolutionary" advanced processor.
The Retina Display is actually pretty nice, but not really ideal for the hardware they offer on their computers. Ironically, I'm typing this on a Retina MacBook Pro.
I still don't understand why they decided to make ALAC open source... not that many people use it, from what I understand.
Data compression and upsampling are definitely two different things. Upsampling can be used to simplify the analog part of the DAC, at the expense of making the digital part a bit more complicated. A
bit of a different story.
One way to achieve lossless compression is through Huffman coding (among other, more elaborate and powerful techniques). Consider a pretty stream of 1's and 0's:

001011100000000011010001000000110000

Let's take this stream for a date. Get acquainted by breaking the stream into pairs of 1's and 0's:

00 10 11 10 00 00 00 00 11 01 00 01 00 00 00 11 00 00
After first impressions, the first thing that comes to our attention is that this beautiful stream of bits is actually hollow and superficial. Too many 0's there. Lets replace the pairs of 1's and
0's by sequences of different sizes according to the following rule:
00 => 1
11 => 01
01 => 001
10 => 000
Note that fewer bits are assigned to the pairs of bits that happen more often. Fondle this stream by substituting pairs of 1's and 0's using the above rule:

1 000 01 000 1 1 1 1 01 001 1 001 1 1 1 01 1 1
So, we went from:
001011100000000011010001000000110000 (36 bits)
to:
10000100011110100110011110111 (29 bits)
Our 1's and 0's stream dropped 7 bits of clothes, and is now bit naked.
We scored. Let's be nice and help the bit stream cover up (decode). We go from left to right. Found 3 consecutive zeros? Substitute with 10. Found 2 consecutive zeros followed by a one? Substitute
with 01. Found 1 zero followed by a one? Substitute with 11. Found a one with no zeros before it? Substitute with 00.
I think this is right in a weird kind of way.
Originally Posted by miceblue
I own two MacBooks, an iPad 2, and 5 iPods. I use iTunes but still don't really see the imagination like the old days. I do think they have great DAP interfaces, but it's all too proprietary.
They will never change. I feel Samsung will have some weird cool piece of glass DAP in four years. Apple will be in the dust. They are too busy being at war with the world to make a better phone or
Originally Posted by ultrabike
lol, that's still confusing as hell since I have no background in computers, hence don't know much about binary
Originally Posted by Redcarmoose
Think Samsung will make a high-end DAP in the future?
Well, where I live there is a Samsung store almost every 200 yards as you walk down the street in places. It's actually overwhelming. I just see them using the technology to invent a small multimedia
device. Yes, I think it will have great sound quality but they will not stop there. It will be like a Touch but totally better. Wait and see in 4 years time!!
They have more talent, I feel. Their TVs and fridges are fantastic. Their laptops are great. I just see the quality. Still, this is just my humble opinion after being super happy with their products
for years!!
They had 1080p plasma TVs with connections for PCs and USB stick ports that would play pictures and MP3s way back in 2010. They make smart TVs now. I just think they are cutting edge. Give 'em 4
more years and see what they do.
Help on how to do equation.
I need to know how to solve for S in the following:
I've tried what I can think of as a current Algebra 2 student, but I can't figure it out. Plugged into a TI-89, it shows the answer as S = (H^2 - Z)^(1/2) - H (sorry, I don't know how to insert a
square root into the equation).
Re: Help on how to do equation.
Just transfer all the terms to one side: S^2 - 2HS + Z = 0. It is a quadratic in S. Use the formula for solving a quadratic equation and you will get the result.
One Man's Trash is Another Man's Blog
How to use Unity 3D’s Linear Interpolation (Vector3.Lerp) correctly
Posted by BlueRaja on February 8, 2014 – 3:09 pm
A lot of Unity 3D tutorials online use Unity’s linear interpolation incorrectly, including the official video tutorials(!). However, it’s actually very easy to use, once you understand how it works.
The prototype for Vector3.Lerp looks like this:
static Vector3 Lerp(Vector3 start, Vector3 finish, float percentage)
What it does is simple: it returns a point between start and finish, based on the value of percentage:
• At percentage = 0.0, Lerp() returns start
• At percentage = 1.0, Lerp() returns finish
• At 0.0 < percentage < 1.0, Lerp() returns a point between start and finish.
• So at percentage = 0.5, it returns the point exactly halfway between start and finish.
• And at percentage = 0.10, it returns a point very near to start
This explains why it’s called linear interpolation: It moves smoothly (interpolates) at a constant speed (linearly) between two points!
How to use Lerp correctly
To use Lerp() correctly, you simply have to make sure that you pass the same start and finish values every frame while moving percentage up from 0.0 to 1.0 each frame.
Here’s an example. When the spacebar is pressed, we’ll lerp our object 10 spaces forward over a period of 1 second.
using UnityEngine;
using System.Collections;
public class LerpOnSpacebarScript : MonoBehaviour
{
    /// <summary>
    /// The time taken to move from the start to finish positions
    /// </summary>
    public float timeTakenDuringLerp = 1f;

    /// <summary>
    /// How far the object should move when 'space' is pressed
    /// </summary>
    public float distanceToMove = 10;

    //Whether we are currently interpolating or not
    private bool _isLerping;

    //The start and finish positions for the interpolation
    private Vector3 _startPosition;
    private Vector3 _endPosition;

    //The Time.time value when we started the interpolation
    private float _timeStartedLerping;

    /// <summary>
    /// Called to begin the linear interpolation
    /// </summary>
    void StartLerping()
    {
        _isLerping = true;
        _timeStartedLerping = Time.time;

        //We set the start position to the current position, and the finish to 10 spaces in the 'forward' direction
        _startPosition = transform.position;
        _endPosition = transform.position + Vector3.forward * distanceToMove;
    }

    void Update()
    {
        //When the user hits the spacebar, we start lerping
        if (Input.GetKeyDown(KeyCode.Space))
        {
            StartLerping();
        }
    }

    //We do the actual interpolation in FixedUpdate(), since we're dealing with a rigidbody
    void FixedUpdate()
    {
        if (_isLerping)
        {
            //We want percentage = 0.0 when Time.time = _timeStartedLerping
            //and percentage = 1.0 when Time.time = _timeStartedLerping + timeTakenDuringLerp
            //In other words, we want to know what percentage of "timeTakenDuringLerp" the value
            //"Time.time - _timeStartedLerping" is.
            float timeSinceStarted = Time.time - _timeStartedLerping;
            float percentageComplete = timeSinceStarted / timeTakenDuringLerp;

            //Perform the actual lerping. Notice that the first two parameters will always be the same
            //throughout a single lerp-process (ie. they won't change until we hit the space-bar again
            //to start another lerp)
            transform.position = Vector3.Lerp(_startPosition, _endPosition, percentageComplete);

            //When we've completed the lerp, we set _isLerping to false
            if (percentageComplete >= 1.0f)
            {
                _isLerping = false;
            }
        }
    }
}
How to use Lerp incorrectly
In our above example, we call Vector3.Lerp() like this:
transform.position = Vector3.Lerp(_startPosition, _endPosition, percentageComplete);
However, the method for calling Lerp you see in many tutorials online looks like this:
transform.position = Vector3.Lerp(transform.position, _endPosition, speed*Time.deltaTime);
Before I explain what's wrong with this, please take a moment and try to figure out for yourself what this code will do, and why it's wrong.
The issue with the above code is that the first parameter to Lerp(), transform.position, changes every frame! Additionally, the percentage parameter (the third parameter) does not increase from 0.0
to 1.0, but instead is set to the completely arbitrary value speed*Time.deltaTime.
What this code actually does is move the object speed*Time.deltaTime-percent closer to _endPosition every frame.
There are several problems with this.
1. It’s non-linear ie. the speed is not constant. This is easy to see in the official video tutorial; the object moves quickly at first, and slows down as it nears its destination
2. The object’s speed varies with the user’s framerate. The higher the user’s framerate, the faster the object will move. This defeats the purpose of using FixedUpdate() to begin with, which is
supposed to alleviate this problem.
3. The object never reaches its destination since we only move a percentage closer each frame. Actually, due to floating-point rounding, it may or may not ever reach its destination. Additionally,
because of problem 2, whether or not it ever reaches its destination depends on the user’s framerate. Ouch!
The implementation of Lerp
As a final note, I’d like to share how Lerp() is typically implemented. This section is not required, but mathematically-inclined readers will find it interesting.
Implementing Lerp() is actually remarkably simple:
public static Vector3 Lerp(Vector3 start, Vector3 finish, float percentage)
{
    //Make sure percentage is in the range [0.0, 1.0]
    percentage = Mathf.Clamp01(percentage);

    //(finish-start) is the Vector3 drawn between 'start' and 'finish'
    Vector3 startToFinish = finish - start;

    //Multiply it by percentage and set its origin to 'start'
    return start + startToFinish * percentage;
}
How fast are computers in everyday terms?
Posted by BlueRaja on July 24, 2013 – 11:50 pm
It’s hard for humans to understand just how freakishly fast modern computers are. Here’s an analogy that has helped me:
Imagine that it takes you 1 second to do a simple computation, like adding two numbers. If you were to sit at your desk and add numbers all day non-stop, with no sleep, every day of every month of
every year, it would take you 95 years straight to do what a computer can do in one second.^†
95 years. That’s probably longer than your entire life. Stop to think about how many seconds have passed in your life, and how many more will need to pass before you hit 95 years old. A computer does
that much work every second.
Now that we have a relatable timescale, here are some approximate computer-times for some programming tasks (and other things). Keep in mind that 95 years of computer-time is one second of real-life
time. The following are all in computer-time (Based on Intel i7 x86 timings):
Time for light to travel 1 cm (in vacuum) 0.1 seconds
Addition/subtraction 1 second
If statement 2 seconds
Integer multiplication 5 seconds
Cache miss (L1 to L2) 10 seconds
Function call^†† 8-12 seconds
Virtual function call^†† 12-14 seconds
If statement (branch prediction failure) 20 seconds
Integer division 17-28 seconds
Cache miss (L2 to L3) 40 seconds
Cache miss (L3 to DRAM) 2 minutes
SHA1 hash of 64-byte message 15 minutes
Call to malloc() (rough estimate) 30 minutes
Time for a speeding bullet to travel one inch 3 days
Fastest Windows Timer Resolution (0.5ms) 2 weeks
Read 1MB file with high-end SSD drive (1.9ms) 2 months
Fastest blip the human eye can see (2ms) 2.25 months
Executing a large-ish SQL statement (3ms) 3.5 months
Time between frames in a 60fps video game (16.6ms) 1.5 years
Time it takes an incandescent lightbulb to turn on (90ms) 8.5 years
Average human reaction time (220ms) 21 years
† That’s assuming a 3 GHz x86 CPU with one core and no throttling or instruction level parallelism. If we took into account the multiple cores and ILP that modern CPUs have, the time would be in the
1000′s of years!
†† Includes cost of pushing 2 parameters, plus the stack-frame setup. Note that the compiler can sometimes optimize virtual-methods to be called non-virtually.
A Heap-Based C# Priority Queue Optimized for A* Pathfinding
Posted by BlueRaja on July 3, 2013 – 1:44 am
In fine-tuning my Pathery application, I found the need for a faster priority queue. Since this could be useful to others (pathfinding is often a bottleneck for video games), I finally got around to
publishing it.
You can find it here; or read the documentation here. Enjoy!
Note: If you found this page hoping to optimize your pathfinder, you may want to check out this post first, which has a few tips on optimizing pathfinders for games.
Branchless Conditionals (Compiler Optimization Technique)
Posted by BlueRaja on May 10, 2011 – 2:51 am
One of the neater and lesser-known optimizations a compiler can perform is a “branchless conditional” – a conditional statement that contains no branches (go figure).
To understand what this means, take a look at the following C-snippet:
if(SomeFunc() == 4)
    return 54;
else
    return 2;
When compiled with no optimizations in Visual Studio, this is the output
  if(SomeFunc() == 4)
00A0140E  call SomeFunc (0A01145h)
00A01413  cmp  eax, 4
00A01416  jne  0A01421h
  return 54;
00A01418  mov  eax, 54
00A0141D  jmp  0A01426h
  else return 2;
00A01421  mov  eax, 2
00A01426  ret
Nothing surprising here – almost a literal translation from C to assembly. Small, fast, efficient – what could we possibly improve upon?
One possible improvement could be removing the branches – the jmp and jne instructions.
Branch instructions are not in-and-of themselves particularly slow; all a jmp instruction does is write a value to the Program Counter (PC) register, and on a simple CPU which reads and executes one
instruction at a time, jmp wouldn’t be slower than any other instruction which writes to a register. However, modern CPUs are anything but simple.
On modern CPUs, instructions are not read then immediately executed, one-by-one. Rather, the process is split into multiple stages, called the Instruction Pipeline. This allows, for example, the
instruction two instructions in the future to be fetched (read) while the next instruction is being decoded and the current instruction is being executed, and the previous instruction is writing its
results to memory! This allows incredible speedups in program execution time, but now that the CPU is essentially executing multiple instructions at once, the CPU designers must take extreme
precautions to make sure that the results of the execution are still correct!
And herein lies the problem with conditional jumps like jne: How does the processor know what the “next” instruction will be before it actually executes the conditional? The answer is: it doesn’t. It
can guess (I believe I read that the Intel x86 guesses correctly ~60% of the time), but if it guesses incorrectly, the entire Instruction Pipeline must be flushed: all the work the CPU did fetching
and decoding the instructions it thought were going to be executed is invalidated and completely removed from the pipeline, and the entire pipeline sits around doing nothing while it waits for the
jne instruction to complete. Essentially, a jne instruction with a failed prediction causes our fancy expensive pipelined CPU to act as though it were a boring old non-pipelined CPU.
(In fact, the situation is even more complicated than that: modern CPUs essentially have multiple pipelines, meaning they literally execute more than one instruction at once, even if there is only
one core and the program is single-threaded. When a conditional-jump prediction fails, all these pipelines must be flushed and stalled, so all the pipelines sit around doing nothing when the
conditional-jump executes)
Knowing all of this, let’s see how Visual Studio compiles the above code with optimizations enabled:
0012100B  call SomeFunc (0A01145h)
0012101A  sub  eax, 4
0012101D  neg  eax
0012101F  sbb  eax, eax
00121021  and  eax, -52
00121024  add  eax, 54
Whoa, what happened there? There are no longer any branch instructions! Let’s translate this back to C, to help see what’s going on:
//A rough translation of the above assembly code
eax = SomeFunc() - 4;
eax = -(eax != 0); //See below
eax = (eax & -52) + 54;
It appears the branches have been replaced with some clever math and bit-fiddling. Here’s how it works (don’t feel bad if you have to read this over a few times, this is complex stuff!):
• We set eax = SomeFunc() - 4. Thus, if SomeFunc() == 4, eax = 0; otherwise, eax != 0.
• neg eax will set the carry-flag to 0 if eax == 0, or 1 otherwise. Thus, the carry-flag gets set to (eax != 0).
• sbb A, B (sbb means "subtract with borrow") essentially does A = A - (B + carryFlag). Thus, sbb eax, eax sets eax = 0 if the carry-flag was 0, or eax = -1 if the carry-flag was 1. At this point, if
SomeFunc() == 4, eax == 0; otherwise, eax == -1.
• Finally, we take the bitwise-AND of eax with -52, and add 54. Since 0 & -52 == 0 and -1 & -52 == -52 (remember, in 2's complement -1 has all bits set, so -1 & x always equals x), this means that
if SomeFunc() == 4 (and thus eax == 0), the result will be 0 + 54 = 54; while if SomeFunc() != 4 (and thus eax == -1), the result will be -52 + 54 = 2. This is the exact same result as our
original function!
So now the important question: Are branchless conditionals really faster?
Well, that depends. If the branch-predictor in the CPU is consistently right for our code, the instruction pipeline would never need to be flushed, and the code with branches would be faster; it
does, after all, involve a lot fewer operations. If, however, the branch-predictor is wrong often enough, the instruction pipeline will need to be flushed all/most of the time the code is run, which
would make the branchless code faster. And unfortunately, there is no way to determine if the branch predictor will guess correctly for your code. Thus, if this is an absolutely critical portion of
your code: Profile! Test both code-chunks (branchless and branching), and see which is faster in your case!
And, for the love of god, don’t write your code like the “rough translation” above to try to force the compiler to write branchless code. Not only will you confuse the compiler and probably end up
making your code slower, but you will drive whoever has to maintain the code absolutely insane.
Just for reference, here are some more examples of branchless conditionals:
Example from Eldad Eilam’s “Reversing: Secret of Reverse Engineering”:
if(LocalVariable & 0x00001000)
    return 1;
else
    return 0;

mov eax, [ebp - 10]
and eax, 0x00001000
neg eax
sbb eax, eax
neg eax
ret
Example from http://stackoverflow.com/questions/539836/emulating-variable-bit-shift-using-only-constant-shifts
int isel(int a, int x, int y)
{
    return (a >= 0 ? x : y);
}

int isel(int a, int x, int y)
{
    int mask = a >> 31; // arithmetic shift right, splat out the sign bit
                        // mask is 0xFFFFFFFF if (a < 0) and 0x00 otherwise.
    return (x & (~mask)) + (y & mask);
}
Example from http://stackoverflow.com/questions/1610836/branchless-code-that-maps-zero-negative-and-positive-to-0-1-2
int Compare(int x, int y)
{
    int diff = x - y;
    if (diff == 0)
        return 0;
    else if (diff < 0)
        return 1;
    else
        return 2;
}

int Compare(int x, int y)
{
    int diff = y - x;
    return (!!diff) << ((diff >> 31) & 1);
}
Making Use of Spare Hard-Drives
Posted by BlueRaja on August 21, 2009 – 6:20 am
1+2+3+… = -1/12
Posted by BlueRaja on August 13, 2009 – 6:19 am
The following is from an essay on Ramanujan I wrote a few years back. Enjoy.
…they recognized Ramanujan’s enormous talent for mathematics, and together they convinced Ramanujan to write to English mathematicians about his discoveries. [...] The third letter he wrote, to M. J.
M. Hill, received a reply, but was not very encouraging. In his reply, Hill stated,
“Mr. Ramanujan is evidently a man with a taste for Mathematics, and with some ability, but he has got on two wrong lines. He does not understand the precautions which have to be taken in dealing with
divergent series, otherwise he could not have obtained the erroneous results you send me, viz:–
However, the problem was not with Ramanujan’s result itself, but rather with his lack of explanation of the result – or perhaps with Hill’s lack of knowledge on divergent series.
Before Ramanujan’s result can be explained, though, we have to first take a step back over 150 years in history. In 1749, the mathematics giant Leonhard Euler wrote a paper in which he derived the
paradoxical result that 1 – 2 + 3 – 4 + … = 1/4.
Theorem (Euler) 1 – 2 + 3 – 4 + … = 1/4
Proof. Notice that the power series for 1/(x+1)^2 is

1/(x+1)^2 = 1 - 2x + 3x^2 - 4x^3 + …

If we assume this to be true for all x, we can plug in the value x=1 to obtain the desired result: 1 - 2 + 3 - 4 + … = 1/4. ◊
In order for his result to make sense, Euler proposed (several times throughout his life) that an extended definition of the word “sum” be made. In his own words,
“Let us say, therefore, that the sum of any infinite series is the finite expression, by the expansion of which the series is generated. In this sense the sum of the infinite series 1 − x + x^2 − x^3
+ … will be 1/(1+x), because the series arises from the expansion of the fraction, whatever number is put in place of x. If this is agreed, the new definition of the word sum coincides with the
ordinary meaning when a series converges; and since divergent series have no sum in the proper sense of the word, no inconvenience can arise from this new terminology. Finally, by means of this
definition, we can preserve the utility of divergent series and defend their use from all objections.”
(It should be noted that Euler’s proof would not be valid by today’s standards, because 1 – 2x + 3x^2 – 4x^3 + … does not define a valid function when x=1. However, it would be valid to take the
limit as x approaches 1 from the left; doing this yields the same result. Today there are many methods of finding the “sum” of a divergent series; the method just mentioned, laid out by Euler, is now
known as the “Abel summation” of the series).
Now that we know what it means for a divergent series to “sum” to a certain finite value, we can ask the question of whether or not Ramanujan’s results were in any way correct.
Theorem 1+2+3+…+∞ = -1/12
Proof. This proof is due to Ramanujan – though he didn’t send it to Hill originally, he did write it down in his first notebook.
Say we set c = 1 + 2 + 3 + … Then

 c = 1 + 2 + 3 + 4 + 5 + 6 + …
4c =     4     + 8     + 12 + …

Subtracting the bottom row from the top,

-3c = 1 - 2 + 3 - 4 + 5 - 6 + … = 1/4

ala Euler. Thus, c = -1/12. This result is correct to the extent that Euler's result is correct as well. ◊
(Un)Common Questions about Electricity
Posted by BlueRaja on July 27, 2009 – 6:18 am
When I first began learning about electronics and electricity, there were a number of seemingly obvious questions that never seemed to be answered by my professors, books, or articles on electricity;
I had to scrape together answers gradually, gathering knowledge from every source I could find.
I’m collecting these questions and answers here in the hopes that they will someday save some poor internet-goer like yourself from all the trouble I had to go through.
How to Give Someone Elf Ears and Vampire Fangs in Photoshop
Posted by BlueRaja on June 21, 2009 – 6:16 am
As if the Internet doesn’t have enough weird fetishes floating around, here’s a quick tutorial for beginners on how to give someone elf ears and vampire fangs in photoshop.
For this tutorial, I’ve used a stock photograph of the beautiful American model, Valerie Hatfield.
Those interested in her work can contact her here.
1. Open your image in Photoshop
2. As always, duplicate the background layer before anything else.
3. Highlight the ear. This can be done rather easily using the quick-select tool (alt+click to remove from selection).
4. Now warp the ear (edit->transform->warp) to your liking.
(Note that you may have to remove part of the ear from the background. See any tutorial on removing objects from images – you could, for example, simply copy and paste a piece of hair over the
old ear)
That takes care of the elf ears – now let’s give her vampire teeth as well. We could use the same technique we used for the ears, but that leaves the tooth looking flat and two-dimensional; I
prefer a technique which causes the tooth to affect the lips as well.
5. Select the tooth in the same way you selected the ear.
6. Now, using the lasso tool, add to the selection (by holding shift) a small portion of the lip and chin just below and around the tooth, as so:
7. Open the liquify window (Filter->liquify). Using the forward-warp tool and a brush about the size of the tooth, drag the tooth downwards. Try to do it one one swift motion, using undo
(ctrl+alt+z) to go back as many times as necessary.
8. Select the Reconstruct Tool on the left, and use it to give your tooth a point. You may want to lower your brush density and experiment with different reconstruct modes (under 'Tool Options') to
get it just right.
How to Double the Length of Any Essay (Without Writing a Word!)
Posted by BlueRaja on June 15, 2009 – 6:11 am
As promised, here are a few tips to help double the length of any essay.
All of these statistics/instructions are for Microsoft Word 2007, but they apply equally well to older versions of Word or OpenOffice.
Replace All the Periods
Increase in size: 42.9%
How to do it: Go to edit->replace and place a period (.) in both boxes. Highlight the period in the "replace with" box, and click on "more" in the lower-right hand corner. Then click format->font.
Under "size," increase the font-size significantly – I used 16 in this example.
Click OK, then hit "replace all."
Increase the Paragraph Spacing
Increase in size: 21.6%
How to do it: Highlight everything (edit->select all), right-click->Paragraph. Set "Line Spacing" to multiple, and set it to something between 2 and 3 (or between 1 and 2 if it's a single-spaced
essay). I set it to 2.5 for this example.
Change the Font Size
Increase in size: 9.1%
How to do it: The font size is right next to the font face, at the top. After highlighting everything, increase it by up to a whole point – I set it a half-point larger (11.5) for this example.
Use a Different Font
Increase in size: 9.1%
How to do it: Highlight everything, and just change the font to something other than the default, Calibri. I changed it to the old default, Times New Roman (12 pt font), for the 9.1% increase, but
there are probably other similar-looking fonts that will increase that even more.
Change the Margins
Increase in size: 7.2%
How to do it: Go to Page Layout->Margins->Custom and increase the margins. They default to 1" all around – I changed it to 1.15" all around.
Change the Character Spacing
Increase in size: 7.1%
How to do it: Select everything (ctrl+a), then right-click->Font->Character Spacing. Change the spacing to something small (Half a point or less). I use 0.3pt
All effects put together:
Increase in size: 114.5% (over double!)
How to Save Thousands on Textbooks
Posted by BlueRaja on January 23, 2009 – 6:07 am
At the start of every new college semester, I inevitably find myself listening to dozens of whiney Freshmen complain about how it costs (their parents) $800 for new textbooks. As I enter my final
semester, I feel I should pass my secrets down to the coming generations. See, I haven’t spent that much on textbooks over the course of my entire college career. And today I share my secrets with
all of you. Why? Simply for the joy of seeing you smile (because now you can afford those dental bills).
1. Don’t Buy Your Books
This may seem like useless non-advice, but it’s actually the most important tip, which is why I placed it first. Ask any college senior, and you’ll likely find that as many as 50% of their textbooks
have gone completely unopened over the years. Many professors recommend a book simply for the sake of recommending a book; it may be awful even as a reference, or have very little to do with the
class. Find someone who has taken the class before and ask them whether they needed the book for homework or assigned reading, used it often as a reference, or simply never opened it. This
little bit of anticipatory research can save you thousands over the course of four years.
2. The Library is Your Best Reference
Even if you need the book for homework or the occasional reference, you can still get away with not buying it. Most school libraries carry copies of the books required for each class. Though you
probably won’t be able to check it out for any extended period of time, you can usually check it out long enough to do your homework each week or, if that’s too inconvenient for you, you could
photocopy all the homework-pages ahead of time for only a few dollars, saving hundreds.
3. Borrow From a Friend
Okay, so you absolutely need the textbook, and can’t stand walking to the library every week. I have good news for you – you can still get out of buying your textbook! If, at least, you happen to
make friends with someone who has taken the course previously. Ask around, find someone who has taken the class already, and bring them out for a drink. Even if they don’t have their textbook anymore
or refuse to lend (or rent..?) it out to you, they can still tell you whether you can save in other ways on this class’s textbook.
4. Buy Old Editions
A lot of the more greedy book publishers have realized that if they don’t reissue a new edition of their book every few years, the same few copies will continuously circulate in online trading sites,
killing their sales. Thus, every few years they add a new paragraph or two, fix a few mistakes, and jumble the chapter problems (not even change them!) in order to re-release the same book as the "
new edition." Since the demand for old editions of textbooks is so low, the price of even the previous edition is usually dirt-cheap.
The basics of Calculus have been the same for over 300 years – so do we really need a new Calculus textbook every six months?
On the off-chance that you actually need to do the chapter problems, you can still go to the library and photocopy them – or, even better, figure out which problems from your book correspond to the
homework problems in the new book.
If you were planning on using the book only as a reference, you may want to consider buying a completely different book. The textbook I used for Calculus was nearly identical to the one we were
supposed to use, but only cost me four dollars (including shipping)!
5. Buy Used or International Copies
An international version of a book has the same content as the non-international version, but is meant to be sold in places where it’s illegal to artificially jack up the price of textbooks (I’m
looking at you, America). These textbooks can be found at any online book-trading site; one good place to compare prices from these sites is www.dealoz.com, though there are many others (Facebook has
a marketplace as well).
6. Sell Your Books at the Start of the Next Semester
Buying a book for $50 and selling it for $40 is basically the same as renting the book for $10. Google is a better reference than most of your textbooks anyways, so why keep them around after you’re
finished with them?
If you’re going to sell a textbook, the best time is when they’re in highest demand – at the beginning of the next semester. Based on the number of users, the best places to sell are probably amazon
and half.com.
That’s it – with all the cash you’ll be saving, you can finally afford that solid gold replica of Billy Joel you’ve been eyeing on ebay for the past week. You better start bidding before the auction ends!
Next week I’ll show you how to double the length of an essay without writing a word. | {"url":"http://www.blueraja.com/blog/","timestamp":"2014-04-20T16:12:46Z","content_type":null,"content_length":"124668","record_id":"<urn:uuid:52537898-bbe2-4edb-b753-25c3a634a30e>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00049-ip-10-147-4-33.ec2.internal.warc.gz"} |
Why can't I xor strings?
Grant Edwards grante at visi.com
Mon Oct 11 21:35:55 CEST 2004
On 2004-10-11, Jeremy Bowers <jerf at jerf.org> wrote:
>>> With respect, this isn't something to doubt or not doubt.
>>> There is one, and only one, way to represent any positive
>>> number in base two, since encoding sign is not an issue.
>>> Assuming an extra bit to show sign, there is one and only one
>>> way to represent any negative number, too.
>> That's news to me. I've used three different base-2
>> representations for negative numbers in the past week, and I
>> can think of at least one other one I've used in the past.
> I am aware of only one encoding that uses a single bit to
> represent sign, as I stipulated, and discarding endianness
> issues I'm having a hard time imagining what reasonable
> alternatives there are.
* Two's complement.
* One's complement.
* Signed-magnitude with a "1" sign bit being positive.
* Signed-magnitude with a "1" sign bit being negative.
* Excess-N notation.
Four of the five are in use in the software I'm working on.
>>> (Zero gets to be the exception since then you can have
>>> positive and negative zero,
>> That depends on which base-2 representation you've chosen. In
>> two's complement and excess-N representations, there is only
>> one zero value. In signed-magnitude there may be two.
> I explicitly only discussed signed-magnitude: "Assuming an
> extra bit to show sign".
I don't know what you mean. Two's complement, one's
complement, signed-magnitude, and excess-N _all_ use a single
bit for sign.
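(Not part of the original thread: the fixed-width schemes under discussion can be sketched in a few lines of Python, shown here encoding -5 in 8 bits.)

```python
def twos_complement(n, bits):
    # wraps negatives modulo 2**bits
    return n & ((1 << bits) - 1)

def ones_complement(n, bits):
    mask = (1 << bits) - 1
    return n & mask if n >= 0 else ~(-n) & mask  # bitwise NOT of the magnitude

def sign_magnitude(n, bits):
    # sign bit "1" meaning negative (the other convention just flips the bit)
    return n if n >= 0 else (1 << (bits - 1)) | -n

def excess_n(n, bits):
    # excess-2^(bits-1): add a fixed bias
    return n + (1 << (bits - 1))

for enc in (twos_complement, ones_complement, sign_magnitude, excess_n):
    print(enc.__name__, format(enc(-5, 8), "08b"))
```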
Grant Edwards grante Yow! Kids, don't gross me
at off... "Adventures with
visi.com MENTAL HYGIENE" can be
carried too FAR!
More information about the Python-list mailing list | {"url":"https://mail.python.org/pipermail/python-list/2004-October/246783.html","timestamp":"2014-04-21T14:55:28Z","content_type":null,"content_length":"4631","record_id":"<urn:uuid:068efd3f-85ef-474c-ab82-17edd48b0b74>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00187-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solutions of Right Triangles
Triangles are made up of three line segments. They meet to form three angles. The
sizes of the angles and the lengths of the sides are related to one another.
If you know the size (length) of three out of the six parts of the triangle (at least one
side must be included), you can find the sizes of the remaining sides and angles.
If the triangle is a right triangle, you can use simple trigonometric ratios to find the
missing parts.
In a general triangle (acute or obtuse), you need to use other techniques, including
the law of cosines and the law of sines. You can also find the area of triangles by
using trigonometric ratios.
All triangles are made up of three sides and three angles. If the three angles of the
triangle are labeled ∠ A, ∠ B and ∠ C, then the three sides of the triangle should be
labeled as a, b, and c.
Tutorcircle.com Page No. : 1/4
Figure 1 illustrates how lowercase letters are used to name the sides of the triangle
that are opposite the angles named with corresponding uppercase letters.
If any three of these six measurements are known (other than knowing the measures
of the three angles), then you can calculate the values of the other three
The process of finding the missing measurements is known as solving the triangle. If
the triangle is a right triangle, then one of the angles is 90°.
Therefore, you can solve the right triangle if you are given the measures of two of the
three sides or if you are given the measure of one side and one of the other two
Example 1: Solve the right triangle shown in Figure 1 (b) if ∠ B = 22°
Because the three angles of a triangle must add up to 180°, ∠A = 90° - ∠B; thus ∠A = 68°.
Example 2: Solve the right triangle shown in Figure 1 (b) if b = 8 and a = 13.
You can use the Pythagorean theorem to find the missing side, but trigonometric
relationships are used instead. The two missing angle measurements will be found
first and then the missing side.
In many applications, certain angles are referred to by special names. Two of these
special names are angle of elevation and angle of depression. The examples shown
in Figure 2 make use of these terms.
Example 3: A large airplane (plane A) flying at 26,000 feet sights a smaller plane
(plane B) traveling at an altitude of 24,000 feet. The angle of depression is 40°. What
is the line of sight distance ( x) between the two planes?
Example 4: A ladder must reach the top of a building. The base of the ladder will be
25′ from the base of the building. The angle of elevation from the base of the ladder to
the top of the building is 64°. Find the height of the building (h) and the length of the
ladder ( m).
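Each of the two examples above reduces to a single trigonometric ratio. A quick numerical check (my own, using the numbers given in Examples 3 and 4):

```python
import math

# Example 3: altitude difference 26,000 - 24,000 = 2,000 ft, depression 40 deg
x = 2000 / math.sin(math.radians(40))   # line-of-sight distance between planes
# Example 4: base 25 ft, angle of elevation 64 deg
h = 25 * math.tan(math.radians(64))     # building height
m = 25 / math.cos(math.radians(64))     # ladder length

print(round(x), round(h, 1), round(m, 1))  # 3111 51.3 57.0
```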
| {"url":"http://www.docstoc.com/docs/122792562/Solutions-of-Right-Triangles","timestamp":"2014-04-24T20:11:41Z","content_type":null,"content_length":"54779","record_id":"<urn:uuid:cabb171e-0d36-4457-b7f2-6cb46e3845b3>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
should be simple, where am i going wrong?
September 16th 2008, 07:30 AM #1
Dec 2007
should be simple, where am i going wrong?
[(5)/(x-1) - ((2x)/(x+1))] - 1 is less than 0
when I simplify (get common denom to subtract the 1), i get (-3x^2 + 7x + 6)/(x^2 - 1).
i am assuming i am making an error in the simplification here, but i have checked 3 times. the answers in the back of the book (for zeros) are 1, -1 (from denom, i get this part) and -(2/3) and
3. it's these two i can not get from my numerator. where is my error???
$\bigg[\frac{5}{x-1}-\frac{2x}{x+1}\bigg] -1 < 0$
$\frac{5x + 5 -2x^2 + 2x -(x^2 -1)}{x^2 -1} < 0$
$\frac{-3x^2 +7x +6}{(x+1)(x-1)} < 0$
Nope, that is correct, but you have to test and see in which intervals it would be positive and in which it would be negative.
Look at this post you can use a chart like the one Moo used to see what is going on
thanks for the response, thought i was losing my mind. i had checked it 3 times.
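A quick numeric confirmation (mine, not from the thread) that the numerator vanishes at the book's values x = 3 and x = -2/3, and of the sign of the quotient on each interval:

```python
import math

# roots of -3x^2 + 7x + 6 = 0, equivalently 3x^2 - 7x - 6 = 0
disc = 7 * 7 + 4 * 3 * 6            # 121
r1 = (7 + math.sqrt(disc)) / 6      # 3
r2 = (7 - math.sqrt(disc)) / 6      # -2/3

def f(x):
    return (-3 * x * x + 7 * x + 6) / (x * x - 1)

# sample one point inside each interval cut by the critical points -1, -2/3, 1, 3
print(r1, r2, [f(t) < 0 for t in (-2, -0.9, 0, 2, 4)])  # signs alternate
```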
| {"url":"http://mathhelpforum.com/algebra/49317-should-simple-where-am-i-going-wrong.html","timestamp":"2014-04-21T08:13:18Z","content_type":null,"content_length":"35218","record_id":"<urn:uuid:b7e09867-fe01-461a-9810-c6aec09b8952>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
8^8*8^4 What is each expression written using each base only once?
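(The thread contains no posted answer; the sketch below is my own, using the product rule for powers, a^m * a^n = a^(m+n).)

```python
# 8^8 * 8^4 = 8^(8+4) = 8^12, so the base 8 appears only once
assert 8 ** 8 * 8 ** 4 == 8 ** 12
print(8 ** 12)  # 68719476736
```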
| {"url":"http://openstudy.com/updates/4f8adffce4b09e61bffc3ddc","timestamp":"2014-04-18T11:02:40Z","content_type":null,"content_length":"52937","record_id":"<urn:uuid:232e13bf-8781-4069-95f5-be2acadd377b>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
Disproofs of Riemann's Hypothesis
Authors: Chun-Xuan Jiang
As it is well known, the Riemann hypothesis on the zeros of the ζ(s) function has been assumed to be true in various basic developments of the 20-th century mathematics, although it has never been
proved to be correct. The need for a resolution of this open historical problem has been voiced by several distinguished mathematicians. By using preceding works, in this paper we present
comprehensive disproofs of the Riemann hypothesis. Moreover, in 1994 the author discovered the arithmetic function J[n](ω) that can replace Riemann's ζ(s) function in view of its proved features: if
J[n](ω) ≠ 0, then the function has infinitely many prime solutions; and if J[n](ω) = 0, then the function has finitely many prime solutions. By using the Jiang J[2](ω) function we prove the twin
prime theorem, Goldbach's theorem and the prime theorem of the form x^2 + 1. Due to the importance of resolving the historical open nature of the Riemann hypothesis, comments by interested colleagues
are here solicited.
Comments: 13 pages
Download: PDF
Submission history
[v1] 5 Apr 2010
Unique-IP document downloads: 171 times
| {"url":"http://vixra.org/abs/1004.0028","timestamp":"2014-04-16T10:11:15Z","content_type":null,"content_length":"7617","record_id":"<urn:uuid:9ea1c0a7-98ab-41c0-909a-4e6ea363a23b>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
Counting knots with fixed number of crossings
How to obtain an upperbound for knots up to k crossings? I think I've found something which involves the genus but I'm not sure.
There are some known exponential bounds on the number. For example, if $k_n$ is the number of prime knots with $n$ crossings, then Welsh proved in "On the number of knots and links" (MR1218230) that
$$2.68 \le \liminf k_n^{1/n} \le \limsup k_n^{1/n} \le 13.5.$$
The upper bound holds if you replace $k_n$ by the much larger number $l_n$ of prime $n$-crossing links.
Sundberg and Thistlethwaite ("The rate of growth of the number of prime alternating links and tangles," MR1609591) also found asymptotic bounds on the number $a_n$ of prime alternating $n$-crossing links: $\lim a_n^{1/n}$ exists and is equal to $(101+\sqrt{21001})/40$.
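(My addition, not part of either answer: the Sundberg-Thistlethwaite constant evaluates to about 6.148, which indeed sits inside Welsh's window $[2.68, 13.5]$.)

```python
import math

growth = (101 + math.sqrt(21001)) / 40  # Sundberg-Thistlethwaite growth rate
print(round(growth, 4))                  # 6.1479
assert 2.68 < growth < 13.5              # consistent with Welsh's bounds
```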
Dowker codes can be used to get an (over)estimate for the number of knots with $k$ crossings. Hoste has written a few, extremely clear, papers on using Dowker codes for enumeration of knot tables. I don't see how genus could be used - crossing number is an invariant defined in terms of diagrams while genus is much more topological... Very curious!
| {"url":"http://mathoverflow.net/questions/19745/counting-knots-with-fixed-number-of-crossings/19759","timestamp":"2014-04-24T00:22:30Z","content_type":null,"content_length":"57396","record_id":"<urn:uuid:2b520c25-d2be-4b20-9922-29d64802b887>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
[Maxima] Integrating abs(x)?
Barton Willis willisb at unk.edu
Mon Jul 14 17:21:28 CDT 2008
-----maxima-bounces at math.utexas.edu wrote: -----
>On Mon, Jul 14, 2008 at 5:11 PM, Alasdair McAndrew <amca01 at gmail.com>
>I can probably use pmint, but the results it produces aren't really
>useful. For example:
>(%i1) pmint(abs(sin(x)),x);
>(%o1) -cos(x)*abs(sin(x))/sin(x)
>which is true, but doesn't allow me to enter values for which sin(x)=0.
>This is too restrictive.
If you are looking for a fun summer project, write a function that
*tries* to determine if an expression is continuous on a given
interval (or at a given point). It would have to give up (return
unknown) for many expressions, but Maxima could at least do as well as
an average UNK undergraduate student. Maybe Maxima already has such
logic (for definite integrals and limits), but I don't think it is a
user-level function.
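A numeric spot-check of the pmint result quoted above (my addition): away from the zeros of sin(x), differentiating -cos(x)*abs(sin(x))/sin(x) really does give back abs(sin(x)):

```python
import math

def F(x):
    # pmint's antiderivative, defined wherever sin(x) != 0
    return -math.cos(x) * abs(math.sin(x)) / math.sin(x)

h = 1e-6
for x in (1.0, 2.5, 4.0):                  # sample points avoiding sin(x) = 0
    dF = (F(x + h) - F(x - h)) / (2 * h)   # central-difference derivative
    assert abs(dF - abs(math.sin(x))) < 1e-6
print("matches |sin(x)| away from its zeros")
```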
More information about the Maxima mailing list | {"url":"http://www.ma.utexas.edu/pipermail/maxima/2008/012538.html","timestamp":"2014-04-19T04:24:08Z","content_type":null,"content_length":"3527","record_id":"<urn:uuid:e2e0b014-f5f7-4bfd-afc7-8ee6d91900b0>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00296-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Case Study of Economic Load Dispatch for a Thermal Power Plant using Particle Swarm Optimization
This paper discusses the possible applications of particle swarm optimization (PSO) in the Power system. One of the problems in Power System is Economic Load dispatch (ED). The discussion is carried
out in view of the money savings, computational speed-up and expandability that can be achieved by using the PSO method. The general approach of this paper is that of the Dynamic Programming
Method coupled with the PSO method. The feasibility of the proposed method is demonstrated, and it is compared with the lambda iterative method in terms of the solution quality and computation
efficiency. The experimental results show that the proposed PSO method was indeed capable of obtaining higher quality solutions efficiently in ED problems.
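The abstract gives no formulas, so the following is my own minimal sketch of the comparison it describes: a toy three-generator ED problem with quadratic fuel costs (all coefficients invented), solved both by lambda iteration and by a bare-bones PSO; demand feasibility is enforced by rescaling each particle.

```python
import random

# Invented quadratic fuel costs F_i(P) = a*P^2 + b*P + c and a 500 MW demand
A = [0.008, 0.009, 0.007]
B = [7.0, 6.3, 6.8]
C = [200.0, 180.0, 140.0]
DEMAND = 500.0

def cost(P):
    return sum(a * p * p + b * p + c for a, b, c, p in zip(A, B, C, P))

def lambda_dispatch():
    # classic lambda iteration: bisect the incremental cost 2*a*P + b
    lo, hi = 0.0, 50.0
    for _ in range(200):
        lam = (lo + hi) / 2
        P = [(lam - b) / (2 * a) for a, b in zip(A, B)]
        lo, hi = (lam, hi) if sum(P) < DEMAND else (lo, lam)
    return P

def pso(particles=30, iters=300, seed=1):
    rng = random.Random(seed)
    def repair(P):  # clamp positive, then rescale so generation meets demand
        P = [max(p, 1.0) for p in P]
        s = sum(P)
        return [p * DEMAND / s for p in P]
    X = [repair([rng.uniform(50, 250) for _ in range(3)]) for _ in range(particles)]
    V = [[0.0] * 3 for _ in range(particles)]
    pbest = [x[:] for x in X]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i in range(particles):
            for d in range(3):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (0.72 * V[i][d]
                           + 1.49 * r1 * (pbest[i][d] - X[i][d])
                           + 1.49 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            X[i] = repair(X[i])
            if cost(X[i]) < cost(pbest[i]):
                pbest[i] = X[i][:]
        gbest = min(pbest, key=cost)
    return gbest

P_lam, P_pso = lambda_dispatch(), pso()
print(round(cost(P_lam), 1), round(cost(P_pso), 1))  # nearly identical totals
```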
| {"url":"http://www.slideshare.net/ijsrneteditorial/a-case-study-of-economic-load-dispatch-for-a-thermal-power-plant-using-particle-swarm-optimization","timestamp":"2014-04-20T17:13:31Z","content_type":null,"content_length":"226579","record_id":"<urn:uuid:f97b5b48-0206-406b-a6b9-deac1930afd3>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
algebraic model category
The structure of an algebraic model category is a refinement of that of a model category.
Where a bare model category structure is a category with weak equivalences refined by two weak factorization systems ($(cofibrations, acyclic fibrations)$ and $(acyclic cofibrations, fibrations)$) in
an algebraic model structure these are refined further to algebraic weak factorization systems plus a bit more.
This extra structure supplies more control over constructions in the model category. For instance its choice induces a weak factorization system also in every diagram category of the given model
An algebraic model structure on a homotopical category $(M,W)$ consists of a pair of algebraic weak factorization systems $(C_t, F)$, $(C,F_t)$ together with a morphism of algebraic weak
factorization systems
$(C_t,F) \to (C,F_t)$
such that the underlying weak factorization systems form a model structure on $M$ with weak equivalences $W$.
A morphism of algebraic weak factorization systems consists of a natural transformation
$\array{ & \text{dom} f & \\ {}^{C_{t}f}\swarrow & & \searrow {}^{{C}{f}} \\ Rf & \stackrel{\xi_f}{\to} & Qf \\ {}_{{F}{f}}\searrow & & \swarrow {}_{F_{t}f} \\ & \text{cod} f & }$
comparing the two functorial factorizations of a map $f$ that defines a colax comonad morphism $C_t \to C$ and a lax monad morphism $F_t \to F$.
Every cofibrantly generated model category structure can be lifted to that of an algebraic model category.
Any algebraic model category has a fibrant replacement monad $R$ and a cofibrant replacement comonad $Q$. There is also a canonical distributive law $RQ \to QR$ comparing the two canonical bifibrant
replacement functors.
The notion was introduced in
The algebraic analog of monoidal model categories is discussed in | {"url":"http://www.ncatlab.org/nlab/show/algebraic+model+category","timestamp":"2014-04-19T07:11:24Z","content_type":null,"content_length":"47962","record_id":"<urn:uuid:93e43c39-af64-4c6d-9683-f22d3878053a>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00127-ip-10-147-4-33.ec2.internal.warc.gz"} |
(a) Find the time constant τ.
(b) Write the first-order differential equation for the voltage and current for t > 0.
(c) Solve the first-order equation.
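The circuit values are not given in the excerpt, so here is a generic sketch (values invented) of the standard first-order result: the equation τ·dv/dt + v = 0 for t > 0 has solution v(t) = v(0)·e^(-t/τ), checked numerically below.

```python
import math

tau, v0 = 2.0, 10.0                  # invented time constant and initial voltage

def v(t):
    return v0 * math.exp(-t / tau)   # candidate solution, v(0) = v0

# verify tau * dv/dt + v = 0 numerically at a few times t > 0
h = 1e-6
for t in (0.5, 1.0, 3.0):
    dv = (v(t + h) - v(t - h)) / (2 * h)
    assert abs(tau * dv + v(t)) < 1e-6
print(round(v(tau) / v0, 4))         # 0.3679: about 36.8% left after one tau
```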
Electrical Engineering | {"url":"http://www.chegg.com/homework-help/questions-and-answers/find-time-constant-t-b-write-first-order-differential-equation-voltage-andcurrent-t-0-c-so-q234217","timestamp":"2014-04-17T05:03:31Z","content_type":null,"content_length":"21480","record_id":"<urn:uuid:441ba056-4070-44f4-aa4f-a6e940d0c274>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00188-ip-10-147-4-33.ec2.internal.warc.gz"} |
Glen Ellen SAT Math Tutors
...Andrew G. B.S. Civil Engineering, Carnegie-Mellon University M.S., Ph.D.
13 Subjects: including SAT math, calculus, physics, geometry
...From August of 2001 to October 2004 I worked for TCD Tutorial Services in Bangkok Thailand. I primarily taught private lessons to students of all ages. I taught ESL in a government school
(Visuttarangsi School) in Kanchanaburi, Thailand from August 2004- March 2006.
10 Subjects: including SAT math, geometry, statistics, algebra 1
...My doctoral degree is in psychology. I think this is a wonderful combination: I can relate to students, understand their frustrations and fears, and at the same time I deeply understand math
and take great joy in communicating this to reluctant and struggling students, as well as to able student...
20 Subjects: including SAT math, calculus, geometry, statistics
...Typical math courses include algebra, geometry, trigonometry, pre-calculus and undergraduate calculus, statistics and more. I also offer a great deal of applied experience in industry, and I've
taught a wide range of employees in the work environment, including the applications of electronics an...
30 Subjects: including SAT math, calculus, statistics, geometry
Access the Power of Math and Science: Math and Science are most often taught in a manner that makes each difficult to understand, much less remember. This need not be so! Equations do not merely
drop out of the sky.
22 Subjects: including SAT math, chemistry, reading, physics | {"url":"http://www.algebrahelp.com/Glen_Ellen_sat_math_tutors.jsp","timestamp":"2014-04-17T21:27:37Z","content_type":null,"content_length":"24676","record_id":"<urn:uuid:5b4ef51c-e182-4edc-bf59-c28fe77664b4>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00525-ip-10-147-4-33.ec2.internal.warc.gz"} |
Generation of entangled photon pairs using small-coherence-time continuous wave pump lasers
We address the generation of entangled photon pairs by parametric downconversion from solid state cw lasers with small coherence time. We consider a compact and low-cost setup based on a two-crystal
scheme with type-I phase matching. We reconstruct the full density matrix by quantum tomography and analyze in detail the entanglement properties of the generated state as a function of the crystal’s
length and the coherence time of the pump. We verify the possibility to improve the visibility using a purification protocol based on a compensation crystal.
© 2008 Optical Society of America
OCIS Codes
(270.1670) Quantum optics : Coherent optical effects
(270.5290) Quantum optics : Photon statistics
ToC Category:
Quantum Optics
Original Manuscript: November 12, 2007
Revised Manuscript: January 17, 2008
Manuscript Accepted: January 18, 2008
Published: April 8, 2008
Simone Cialdi, Fabrizio Castelli, Ilario Boscolo, and Matteo G. Paris, "Generation of entangled photon pairs using small-coherence-time continuous wave pump lasers," Appl. Opt. 47, 1832-1836 (2008)
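As illustrative context (my addition, not from the paper): for a |Φ⁺⟩-like polarization state the CHSH parameter scales linearly with the visibility V of the two-photon correlations, S = 2√2·V, which is why raising the visibility via the compensation-crystal purification directly strengthens the Bell violation:

```python
import math

def E(a, b, V):
    # polarization correlation at analyzer angles a, b (degrees), visibility V
    return V * math.cos(2 * math.radians(a - b))

def chsh(V):
    a, ap, b, bp = 0.0, 45.0, 22.5, 67.5  # standard CHSH analyzer settings
    return abs(E(a, b, V) - E(a, bp, V) + E(ap, b, V) + E(ap, bp, V))

print(round(chsh(1.0), 4), round(chsh(0.9), 4))  # 2.8284 2.5456
```

Any visibility above 1/√2 ≈ 0.707 keeps S above the classical bound of 2.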
| {"url":"http://www.opticsinfobase.org/ao/abstract.cfm?uri=ao-47-11-1832","timestamp":"2014-04-21T05:04:41Z","content_type":null,"content_length":"131271","record_id":"<urn:uuid:e8ea4440-5f66-4adc-83a2-9a66982bf50a>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
Explicit-Formulas Database
Short Weierstrass curves
An elliptic curve in short Weierstrass form [database entry; Sage verification script; Sage output] has parameters a b and coordinates x y satisfying the following equations:
y^2 = x^3+a*x+b
Affine addition formulas: (x1,y1)+(x2,y2)=(x3,y3) where
x3 = (y2-y1)^2/(x2-x1)^2-x1-x2
y3 = (2*x1+x2)*(y2-y1)/(x2-x1)-(y2-y1)^3/(x2-x1)^3-y1
Affine doubling formulas: 2(x1,y1)=(x3,y3) where
x3 = (3*x1^2+a)^2/(2*y1)^2-x1-x1
y3 = (2*x1+x1)*(3*x1^2+a)/(2*y1)-(3*x1^2+a)^3/(2*y1)^3-y1
Affine negation formulas: -(x1,y1)=(x1,-y1).
The neutral element of the curve is the unique point at infinity, namely (0:1:0) in projective coordinates.
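As a quick machine check (my addition, not part of the EFD page): the affine addition and doubling formulas above can be verified on an arbitrary small curve over a prime field, here y^2 = x^3 + 2x + 3 over F_97:

```python
p = 97
a, b = 2, 3  # small test curve y^2 = x^3 + a*x + b over F_p (invented example)

def inv(n):
    return pow(n, p - 2, p)  # modular inverse via Fermat's little theorem

def on_curve(P):
    x, y = P
    return (y * y - (x ** 3 + a * x + b)) % p == 0

def add(P, Q):
    # affine addition formula from above, valid for x1 != x2
    (x1, y1), (x2, y2) = P, Q
    lam = (y2 - y1) * inv(x2 - x1) % p
    x3 = (lam * lam - x1 - x2) % p
    y3 = ((2 * x1 + x2) * lam - lam ** 3 - y1) % p
    return (x3, y3)

def double(P):
    # affine doubling formula from above, valid for y1 != 0
    x1, y1 = P
    lam = (3 * x1 * x1 + a) * inv(2 * y1) % p
    x3 = (lam * lam - 2 * x1) % p
    y3 = (3 * x1 * lam - lam ** 3 - y1) % p
    return (x3, y3)

# brute-force some affine points, then exercise the formulas
points = [(x, y) for x in range(p) for y in range(1, p) if on_curve((x, y))]
P, Q = points[0], points[5]  # two points with distinct x-coordinates
assert on_curve(add(P, Q)) and on_curve(double(P))
assert add(P, Q) == add(Q, P)
print("formulas check out on", len(points), "affine points")
```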
Representations for fast computations
Jacobian coordinates with a4=0 [more information] make the additional assumptions
a = 0
and represent x y as X Y Z satisfying the following equations:
x = X/Z^2
y = Y/Z^3
Jacobian coordinates with a4=-3 [more information] make the additional assumptions
a = -3
and represent x y as X Y Z satisfying the following equations:
x = X/Z^2
y = Y/Z^3
Jacobian coordinates [more information] represent x y as X Y Z satisfying the following equations:
x = X/Z^2
y = Y/Z^3
Modified Jacobian coordinates [more information] represent x y as X Y Z T satisfying the following equations:
x = X/Z^2
y = Y/Z^3
T = a*Z^4
Projective coordinates with a4=-1 [more information] make the additional assumptions
a = -1
and represent x y as X Y Z satisfying the following equations:
x = X/Z
y = Y/Z
Projective coordinates with a4=-3 [more information] make the additional assumptions
a = -3
and represent x y as X Y Z satisfying the following equations:
x = X/Z
y = Y/Z
Projective coordinates [more information] represent x y as X Y Z satisfying the following equations:
x = X/Z
y = Y/Z
W12 coordinates with a6=0 [more information] make the additional assumptions
b = 0
and represent x y as X Y Z satisfying the following equations:
x = X/Z
y = Y/Z^2
XYZZ coordinates with a4=-3 [more information] make the additional assumptions
a = -3
and represent x y as X Y ZZ ZZZ satisfying the following equations:
x = X/ZZ
y = Y/ZZZ
ZZ^3 = ZZZ^2
XYZZ coordinates [more information] represent x y as X Y ZZ ZZZ satisfying the following equations:
x = X/ZZ
y = Y/ZZZ
ZZ^3 = ZZZ^2
XZ coordinates [more information] represent x y as X Z satisfying the following equations:
x = X/Z
A natural way of thinking of the definition of an Artin $L$-function?
Emil Artin knew that given a finite extension $L/\mathbb{Q}$, the local factor of the zeta function $\zeta_{L/\mathbb{Q}}$ at the prime $p$ should be $\displaystyle\prod_{\mathfrak{p}|p}\frac{1}{1 - N(\mathfrak{p})^{-s}}$. He also knew that if $L/K$ is a class field then $\displaystyle\prod_{\mathfrak{P}|\mathfrak{p}}\frac{1}{1 - N(\mathfrak{P})^{-s}} = \displaystyle\prod_{\chi}\frac{1}{1 - \chi(Frob_\mathfrak{p})\cdot N(\mathfrak{p})^{-s}}$ where $\mathfrak{P}$ runs over all primes in $L$ lying above $\mathfrak{p}$ and $\chi$ runs over all characters of $Gal(L/K)$.
It's natural then to
1. Define $L$-series attached to characters on $Gal(L/K)$.
2. Recognize that the definition makes sense whether or not $L/K$ is a class field.
3. In light of the fact that characters are $1$-dimensional representations of $Gal(L/K)$, ask whether there's a good definition of the $L$-series attached to a higher dimensional representation of
non-abelian $Gal(L/K)$.
But having come this far, how does one then arrive at the definition of the local factor of an $L$-series attached to a representation $\rho: Gal(L/K) \to GL_{n}(\mathbb{C})$ at a prime $\mathfrak{p}$ of $K$ unramified in $L$ as
$\displaystyle \frac{1}{\det(Id - \rho(Frob_\mathfrak{p})N(\mathfrak{p})^{-s})}$
To be sure
1. It specializes to the definition of the $L$-series attached to a character on $Gal(L/K)$.
2. It's well-defined (independent of which member of the conjugacy class $Frob_\mathfrak{p}$ one chooses).
3. One has the theorem $\zeta_{L/\mathbb{Q}}(s) = \prod_{\rho} L(\rho, s)^{\dim \rho}$ where $\rho$ ranges over irreducible representations of $Gal(L/\mathbb{Q})$, generalizing the analogous fact for characters on Galois groups of class fields.
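For a concrete sense of why the determinant is the right object, note that $\rho(Frob_\mathfrak{p})$ has finite order, hence is diagonalizable, say with (hypothetical) eigenvalues $\alpha_1,\dots,\alpha_n$; the local factor then splits into degree-one pieces and, for $n = 1$, reduces to the abelian factor:

```latex
\frac{1}{\det\!\left(Id - \rho(Frob_\mathfrak{p})\, N(\mathfrak{p})^{-s}\right)}
  = \prod_{i=1}^{n} \frac{1}{1 - \alpha_i\, N(\mathfrak{p})^{-s}},
\qquad
n = 1:\quad \frac{1}{1 - \chi(Frob_\mathfrak{p})\, N(\mathfrak{p})^{-s}}.
```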
And perhaps the three properties listed above are sufficient to uniquely determine the definition. (Maybe one needs more than the above three; I would have to think about it.) Maybe this is how Artin discovered the definition. This line of thinking is similar to Feynman's heuristic derivation of Heron's formula. But I somehow feel as though this doesn't get at the essence of things. Is there a way of thinking about the definition of an Artin L-series that gives it more of a sense of inevitability and canonicity?
[Reposted from mathstackexchange.]
nt.number-theory algebraic-number-theory rt.representation-theory
How about staring at the Lefschetz zeta function? – Qiaochu Yuan Nov 7 '12 at 22:24
2 4. It behaves correctly with respect to inducing characters. – anon Nov 8 '12 at 0:46
3 Answers
Artin's work on zeta functions began in 1923 (actually zeta functions had already played a role in his thesis on quadratic extensions of the rational function field) with an article "On the
zeta functions of certain algebraic number fields". There he studied a problem due to Dedekind which asked whether the zeta function of a number field is always divisible (in the sense that
the quotient is entire) by the zeta function of any of its subfields. Dedekind had proved this for purely cubic fields, and for abelian (and in fact metabelian, then called metacyclic)
extensions it follows from the decomposition of Dedekind's zeta functions into a product of abelian L-series due to Takagi's class field theory.
Artin then computed explicitly the zeta functions for subfields of an $S_4$-extension, where the factor contributed by a prime ideal ${\mathfrak p}$ depends on the decomposition group of ${\mathfrak p}$, and then he sketched a similar calculation for the icosahedral group. For unramified primes, these factors all have a natural interpretation in terms of the Frobenius
automorphisms, or, in other words, come from a Galois representation. One can do worse than read Harold Stark's beautiful article in the book "From number theory to physics", where even
simpler examples are presented in all their glory.
In Artin's first article on L-series (On a new kind of L-series, 1923) Artin defined the Euler factors of the L-series attached to a Galois representation only for unramified primes. This was
sufficient (if not very satisfying) for the following reason: Artin could write the zeta function of $K$ and all of its subfields as products of his L-series. Hecke had shown in 1917 that
L-series whose Euler factors agree up to at most finitely many primes actually have equal Euler factors if both L-series satisfy the same functional equation. So if you can show that Artin's
L-series satisfy a functional equation with suitably defined (but not explicitly known) factors at the ramified and infinite primes, then everything is fine. At the end of this article, Artin
takes up his example of the icosahedral group again.
In his sequel "On the theory of L-series with general group characters" from 1930, Artin observed that the state of the theory was not satisfactory and proceeded to define the "local" factors (local class field theory was being developed simultaneously by Hasse; Artin's reciprocity law had allowed a new approach to the norm residue symbols, and this led more or less automatically to local class field theory) from the start. He does this by starting with a Galois representation, observing that for ramified primes, the "Frobenius automorphism" is only defined up to elements from the inertia group $T$, and then constructs a representation of $Z/T$, the quotient of the decomposition group by the inertia group; then he uses this "piece" of the representation for defining the local factors at ramified primes. Parts of the necessary arguments can be found in Artin's article on the group-theoretic structure of the discriminant in algebraic number fields that appeared in print in 1931.
In his letter to Hasse from Sept. 18, 1930, Artin gives the following explanation (the notation is essentially the same as in his articles):
Let ${\mathfrak p}$ be a prime ideal, $\sigma$ the associated substitution in $K/k$, which is not uniquely determined, ${\mathfrak T}$ the inertia group, and $e$ its order. Set $$ \chi({\mathfrak p}^\nu) = \frac{1}{e} \sum_{\tau \in {\mathfrak T}} \chi(\sigma^\nu\tau)\ , $$ which is the mean of all possible values. Then $$ \log L(s,\chi) = \sum_{{\mathfrak p},\nu} \frac{\chi({\mathfrak p}^\nu)}{\nu N{\mathfrak p}^{\nu s}} $$ is the complete definition also for divisors of the discriminant. $L(s,\chi)$ can be written as usual as a product of the form $$ L(s,\chi) = \prod_{\mathfrak p} \frac{1}{|E-N{\mathfrak p}^{-s} A_{\mathfrak p}|}, $$ where $A_{\mathfrak p}$ is a certain matrix attached to ${\mathfrak p}$ (which may be $0$) and only has roots of unity as characteristic roots.
This explains the naive idea behind the definition: since the Frobenius is not well defined, take the mean over all possible values. Finally, Noah Snyder has written a very nice thesis on
Artin L-functions, which contains a translation of Artin's 1923 article on L-series.
(1) One possible way to do this is to start from the axiom that the local factor only depends on the local behavior of the Galois representation. Locally at an unramified prime, the Galois
representation is thus a representation of $\hat{\mathbb Z}$, in other words a matrix. Since we don't have a canonical basis that this matrix acts on, and for the
conjugacy-class-of-Frobenius reason you state, the only numbers that we have access to are the coefficients of the characteristic polynomial of Frobenius.
When all you've got are the coefficients of a polynomial, plugging something in to that polynomial seems like a pretty obvious step. If you normalize it so that you get the right answer for
characters, you get the Artin L-function.
(2) A slightly more rigorous argument uses the axiom that $L(\rho_1)L(\rho_2)=L(\rho_1\oplus \rho_2)$. Since (semisimple) Galois representations are locally direct sums of characters, you
get a unique definition.
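The axiom in (2) is forced on the local factors by the multiplicativity of determinants on block direct sums; in the notation above:

```latex
\det\!\left(Id - (\rho_1 \oplus \rho_2)(Frob_\mathfrak{p})\, N(\mathfrak{p})^{-s}\right)
  = \det\!\left(Id - \rho_1(Frob_\mathfrak{p})\, N(\mathfrak{p})^{-s}\right)
    \det\!\left(Id - \rho_2(Frob_\mathfrak{p})\, N(\mathfrak{p})^{-s}\right),
```

so $L(\rho_1)L(\rho_2) = L(\rho_1 \oplus \rho_2)$ holds factor by factor.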
(3) As Qiaochu suggests, you can use the Lefschetz zeta function as motivation. However, I can't think of an argument why that should be related to number theory without going through Weil's
zeta function and the Weil conjectures. Since Artin discovered his zeta function before Weil made his conjectures, this is unsatisfying as a historical approach, though it is critical to a
modern understanding of the notion and can even serve as a motivation for the Riemann zeta function!
Perhaps Qiaochu can think of a better reason to use the Lefschetz zeta function.
(4) The stupidest possible way to do this is just to try to generalize the formula as directly as possible. The old formula was $(1-\rho(Frob_p) N(p)^{-s})$. $\rho(Frob_p)$ is still a matrix. $1$ and $N(p)^{-s}$ are still scalars, but everyone knows that the higher-dimensional generalization of a scalar is the corresponding scalar matrix. Then you're left with a bunch of matrices you want to multiply together. You could just directly multiply them, but this is bad for two reasons. First, you would really like a nice simple complex analytic function, and second, as you pointed out, the matrices are only defined up to conjugacy.
The obvious thing to do here is to take the determinant. In particular, it does not even matter whether you take the determinant before or after multiplying the matrices, even with the
conjugation indeterminacy.
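That conjugation-invariance is easy to check concretely. In this sketch (the matrices are arbitrary illustrative choices), the coefficients of $\det(I - tA)$ for a $2\times 2$ matrix — the trace and the determinant — are unchanged when $A$ is replaced by a conjugate:

```python
# Coefficients of det(I - t*A) for 2x2 A are 1, -tr(A), det(A);
# both tr and det are conjugation invariants.
def tr(A):
    return A[0][0] + A[1][1]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [0, 3]]
P = [[1, 1], [0, 1]]        # unimodular, so its inverse is integral
Pinv = [[1, -1], [0, 1]]
B = mul(mul(P, A), Pinv)    # a conjugate of A
```

The characteristic polynomial, and hence the local factor, sees only the conjugacy class.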
Of course you probably screw up and try some wrong things, like $1-tr(\rho(Frob_p) N(p)^{-s})$. However the wrong things, because they're terrible, don't have any nice properties, so it's
pretty easy to discard them and settle on the correct one. This is probably a more accurate representation of the historical process.
+1 for the last paragraph – David Corwin Nov 7 '12 at 23:47
1 @Will: Nice answer. Do you think that your first point about local representations is historically coherent? I do not know the details, but Wikipedia says Artin introduced his function in
1923/4. By that time, was global CFT already thought idelically? Without that, or at least without a compatibility between local and global reciprocity laws, would you find it natural to
"split a global representations into local ones" as it became customary later on? – Filippo Alberto Edoardo Nov 8 '12 at 0:55
This "local representation" is not meant to imply anything $p$-adic. All I'm doing is using the fact that there is a well-defined up to conjugacy action of $Frob_p$, and viewing this
action as a representation of $\mathbb Z$ or $\hat{\mathbb Z}$. The first half of the sentence was certainly known to him, as the definition doesn't make sense without it, and the second
half is a very natural thing to do when you have the first. – Will Sawin Nov 8 '12 at 1:53
As @anon noted, it is very important to have the formation of these L-functions be compatible with induction (inducing repns...). A notion of induced repn was indeed available since the
time of Frobenius et alia, and the proof of meromorphy (Artin and Brauer) used exactly that idea. In contrast, as @Filippo A.E. noted, an appreciation of "local computations" was less
available at the time.
Thus, the "definition" of Artin L-functions was completely determined by compatibility with classfield theory and with induction (and certainly more-than-completely so after the Brauer
theorem and its application here).
That viewpoint, with local-global ideas adjoined, and a bit more, was what Weil used for his extended class of L-functions.
Also, I think in the early 20th century people thought quite a lot about assembling abelian extensions into non-abelian towers to try to divine what "non-abelian classfield theory" should
be, so the already decades-old Frobenius notion of "induction" would have been in play.
1 Language nitpick: et alia means "and other things", while et alii means "and other people". – René Nov 8 '12 at 2:24
:) ! ........... – paul garrett Nov 8 '12 at 2:50
Dataset Papers in Physics
Volume 2013 (2013), Article ID 236421, 4 pages
Dataset Paper
Judd-Ofelt Calculations for Nd^3+-Doped Fluorozirconate-Based Glasses and Glass Ceramics
^1Centre for Innovation Competence SiLi-nano, Martin Luther University of Halle-Wittenberg, Karl-Freiherr-von-Fritsch-Straße 3, 06120 Halle (Saale), Germany
^2Fraunhofer Center for Silicon Photovoltaics CSP, Walter-Hülse-Straße 1, 06120 Halle (Saale), Germany
^3Department of Electrical Engineering, South Westphalia University of Applied Sciences, Lübecker Ring 2, 59494 Soest, Germany
Received 12 September 2012; Accepted 8 October 2012
Academic Editors: F. Charra, P. Kluth, F. Song, and H. Yang
Copyright © 2013 U. Skrzypczak et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
A Judd-Ofelt analysis is performed to calculate the optical properties of Nd^3+ ions embedded in a fluorozirconate glass matrix. The changes in the Judd-Ofelt parameters were determined as a function
of the size of BaCl[2] nanocrystals grown inside the matrix. From these data, the radiative decay rates and the branching ratios of every transition in the energy range from 25,000 cm^−1 to the ground state are calculated. This was accomplished for samples containing nanocrystals with average sizes ranging from 10 to 40 nm.
1. Introduction
Photonic glasses doped with rare-earth ions such as erbium, neodymium, or europium gather widespread interest because of their applicability in photonic devices. Application as, for example, the
active ion in a laser medium or as frequency converter in up- and downconverters [1, 2] requires efficient radiative decays of the ion with only minor losses to multiphonon relaxation (MPR).
Therefore, the ion needs to be embedded in a low phonon energy environment which still remains both stable and transparent. Among appropriate materials are fluorozirconate (FZ) glasses with maximum
phonon energies of less than 580cm^−1 [3–5]. They have already proved to be a convenient choice for several applications.
As shown previously [6, 7], a uniform growth of BaCl[2] nanocrystals inside such glasses can be induced by thermal treatment as soon as additional chloride is introduced at the expense of fluoride.
Since BaCl[2] has a maximum phonon energy on the order of 200cm^−1 [3, 8], MPR is rendered even less probable and, thus, rare-earth ions tend to favor radiative decays [6, 7, 9].
The optical properties of Nd^3+-doped fluorochlorozirconate (FCZ) glasses with differently sized BaCl[2] nanocrystals were studied with Judd-Ofelt theory. The sizes of the embedded nanocrystals were
obtained from X-ray diffraction in combination with Scherrer analysis. It is known that Nd^3+ ions are strongly affected by the BaCl[2] nanocrystals, but until now only speculations were possible as
to the location of rare-earth ions at the edge of nanocrystals or even the possible inclusion into them [9, 10].
From Judd-Ofelt [11–13] analysis, the phenomenological Judd-Ofelt parameters can be determined. With them a quantitative measure for the influence of BaCl[2] nanocrystals on the Nd^3+ can be
established (see [14] for a detailed discussion). Here, the radiative decay rates for each transition were calculated and are given in a compressed form.
2. Methodology
Experimental details are as follows.The FCZ glass samples [14] investigated in this study are comprised of 52ZrF[4]-10BaF[2]-10BaCl[2]-19NaCl-3.5LaF[3]-3AlF[3]-0.5InF[3]-1KCl-1NdF[3] (values in
mol%). The named chemicals were melted under an inert atmosphere, poured into a preheated (200°C) mold to avoid cracks, and then slowly cooled down to room temperature. Subsequently, the glasses were
treated thermally at a temperature between 240°C and 270°C for 20 minutes to initiate the growth of BaCl[2] nanocrystals.
The visible and near-infrared transmittance spectra, from which the oscillator strengths were obtained, were recorded at room temperature with a double-beam spectrophotometer (Perkin Elmer Lambda
Judd-Ofelt details are as follows. From the absorption cross-section spectra, the oscillator strengths are calculated from an expression in which $m$ is the mass of the electron; $c$, the speed of light; and $\sigma$, the absorption cross section.
The theoretical oscillator strengths feature contributions from electric and magnetic dipole transitions. Quadrupole effects are very weak and have been neglected. The dipolar oscillator strengths involve the mean frequency and the total angular momentum quantum number $J$. From these, the line strengths are calculated using the doubly reduced matrix elements of the electric dipole tensor operator and the elements of the magnetic dipole operator. These values do not depend much on the host and are given in the literature [13, 15–19]. Local field correction factors are applied [20]. A Gaussian least-squares minimization then yields the intensity parameters $\Omega_t$ (with $t = 2, 4, 6$).
The effective refractive index of a composite material made of an FCZ glass matrix with inclusions of nanometric BaCl[2] crystallites has been calculated using a Maxwell-Garnett approach [21], in which the dielectric constants of the matrix and of the inclusion enter (data taken from [22, 23]).
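For reference, the standard Maxwell-Garnett mixing rule for spherical inclusions can be sketched as follows (the permittivity values and volume fraction here are illustrative, not the ones used in the paper):

```python
def maxwell_garnett(eps_m, eps_i, f):
    # Standard Maxwell-Garnett effective permittivity for spherical
    # inclusions of permittivity eps_i at volume fraction f in a host
    # of permittivity eps_m.
    num = eps_i + 2 * eps_m + 2 * f * (eps_i - eps_m)
    den = eps_i + 2 * eps_m - f * (eps_i - eps_m)
    return eps_m * num / den
```

The rule interpolates between the host (f = 0) and the inclusion (f = 1) permittivities.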
The radiative emission rates are calculated accordingly [14]. When more than one final state is available, the branching ratios give the percentage of the total rate going into each possible relaxation channel.
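The branching-ratio definition amounts to normalizing each radiative rate by the total rate out of the level; a minimal sketch (the rate values are illustrative, not from the dataset):

```python
def branching_ratios(rates):
    # beta_ij = A_ij / sum_j A_ij: the fraction of the total radiative
    # decay rate out of a level that goes into each relaxation channel.
    total = sum(rates)
    return [A / total for A in rates]
```

By construction the ratios for one source level sum to one.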
The tabular data given here are sectioned into samples with different BaCl[2] nanocrystal sizes. This starts with a determination for FZ glass and an untreated FCZ glass for reference and comparison.
Then, in order of increasing nanocrystal size, the different samples are given. For each source level, the target levels of the transition are listed along with the radiative decay rates and the corresponding branching ratios. This is repeated for each sample.
3. Dataset Description
The dataset associated with this Dataset Paper consists of 6 items which are described as follows.
Dataset Item 1 (Table). Data of the radiative decay parameters in the FZ glass matrix.
• Column 1: Source level
• Column 2: Target level
• Column 3: Radiative decay rate (s^-1)
• Column 4: Branching ratio
Dataset Item 2 (Table). Data of the radiative decay parameters in the untreated FCZ glass matrix.
• Column 1: Source level
• Column 2: Target level
• Column 3: Radiative decay rate (s^-1)
• Column 4: Branching ratio
Dataset Item 3 (Table). Data of the radiative decay parameters in the FCZ glass matrix, thermally treated at 240°C. The average nanocrystal size was determined to be 12 nm [14].
• Column 1: Source level
• Column 2: Target level
• Column 3: Radiative decay rate (s^-1)
• Column 4: Branching ratio
Dataset Item 4 (Table). Data of the radiative decay parameters in the FCZ glass matrix, thermally treated at 250°C. The average nanocrystal size was determined to be 14 nm [14].
• Column 1: Source level
• Column 2: Target level
• Column 3: Radiative decay rate (s^-1)
• Column 4: Branching ratio
Dataset Item 5 (Table). Data of the radiative decay parameters in the FCZ glass matrix, thermally treated at 260°C. The average nanocrystal size was determined to be 25 nm [14].
• Column 1: Source level
• Column 2: Target level
• Column 3: Radiative decay rate (s^-1)
• Column 4: Branching ratio
Dataset Item 6 (Table). Data of the radiative decay parameters in the FCZ glass matrix, thermally treated at 270°C. The average nanocrystal size was determined to be 42 nm [14].
• Column 1: Source level
• Column 2: Target level
• Column 3: Radiative decay rate (s^-1)
• Column 4: Branching ratio
4. Concluding Remarks
A Judd-Ofelt analysis, taking into account the effective refractive index of an FZ-based glass ceramic with BaCl[2] nanocrystals of different sizes, has been performed in order to evaluate the optical properties of Nd^3+ ions embedded therein. The radiative decay rates and the branching ratios of every transition in the energy range from 25,000 cm^−1 to the ground state are calculated. This is accomplished for samples containing nanocrystals with average sizes ranging from 10 to 40 nm. Using these data, the dynamics of these systems can be studied using rate equations.
Dataset Availability
The dataset associated with this Dataset Paper is dedicated to the public domain using the CC0 waiver and is available at http://dx.doi.org/10.1155/2013/236421/dataset.
This work was supported by the FhG Internal Programs under Grant no. Attract 692 034. In addition, the authors would like to thank the German Federal Ministry for Education and Research
(Bundesministerium für Bildung und Forschung) for the financial support within the Centre for Innovation Competence SiLi-nano (Project no. 03Z2HN11).
1. C. Paßlick, B. Henke, I. Császár et al., “Advances in up-and down-converted fluorescence for high efficiency solar cells using rare-earth doped fluorozirconate-based glasses and glass ceramics,”
in Next Generation (Nano) Photonic and Cell Technologies for Solar Energy Conversion, vol. 7772 of Proceedings of SPIE, August 2010. View at Publisher · View at Google Scholar · View at Scopus
2. B. Ahrens, P. T. Miclea, and S. Schweizer, “Upconverted fluorescence in Nd^3+-doped barium chloride single crystals,” Journal of Physics: Condensed Matter, vol. 210, no. 12, p. 125501, 2009. View
at Publisher · View at Google Scholar
3. C. Pfau, C. Bohley, P. T. Miclea, and S. Schweizer, “Structural phase transitions of barium halide nanocrystals in fluorozirconate glasses studied by Raman spectroscopy,” Journal of Applied
Physics, vol. 109, no. 8, Article ID 083545, 2011. View at Publisher · View at Google Scholar · View at Scopus
4. B. Bendow, P. K. Banerjee, M. G. Drexhage, J. Goltman, S. S. Mitra, and C. T. Moynihan, “Comparative study of vibrational characteristics of fluorozirconate and fluorohafnate glasses,” Journal of
the American Ceramic Society, vol. 65, no. 1, pp. C-8–C-9, 1982. View at Publisher · View at Google Scholar
5. B. Bendow, M. G. Drexhage, and H. G. Lipson, “Infrared absorption in highly transparent fluorozirconate glass,” Journal of Applied Physics, vol. 52, no. 3, pp. 1460–1461, 1981. View at Publisher
· View at Google Scholar · View at Scopus
6. B. Ahrens, P. Löper, J. C. Goldschmidt et al., “Neodymium-doped fluorochlorozirconate glasses as an upconversion model system for high efficiency solar cells,” Physica Status Solidi A, vol. 205,
no. 12, pp. 2822–2830, 2008. View at Publisher · View at Google Scholar · View at Scopus
7. C. Pfau, U. Skrzypczak, M. Miclea, C. Bohley, P. T. Miclea, and S. Schweizer, “Low phonon energy BaCl[2] nanocrystals in Nd^3+-doped fluorozirconate glasses and their influence on the
photoluminescence properties,” in Proceedings of the Materials Research Society Symposium, vol. 1404, 2012.
8. C. Bohley, J.-M. Wagner, C. Pfau, P.-T. Miclea, and S. Schweizer, “Raman spectra of barium halides in orthorhombic and hexagonal symmetry: an ab initio study,” Physical Review B, vol. 83, no. 2,
Article ID 024107, 6 pages, 2011. View at Publisher · View at Google Scholar · View at Scopus
9. U. Skrzypczak, M. Miclea, A. Stalmashonak et al., “Time-resolved investigations of erbium ions in ZBLAN-based glasses and glass ceramics,” Physica Status Solidi C, vol. 8, no. 9, pp. 2649–2652,
2011. View at Publisher · View at Google Scholar
10. G. Soundararajan, C. Koughia, A. Edgar, C. Varoy, and S. Kasap, “Optical properties of erbium-doped fluorochlorozirconate glasses,” Journal of Non-Crystalline Solids, vol. 357, no. 11–13, pp.
2475–2479, 2011. View at Publisher · View at Google Scholar · View at Scopus
11. B. R. Judd, “Optical absorption intensities of rare-earth ions,” Physical Review, vol. 127, no. 3, pp. 750–761, 1962. View at Publisher · View at Google Scholar
12. G. S. Ofelt, “Intensities of crystal spectra of rare-earth ions,” Journal of Chemical Physics, vol. 37, no. 3, p. 511, 1962. View at Publisher · View at Google Scholar
13. B. Walsh, “Judd-Ofelt Theory: Principles and Practices,” in Advances in Spectroscopy For Lasers and Sensing, pp. 403–433, Springer, New York, NY, USA, 2006.
14. U. Skrzypczak, C. Pfau, C. Bohley, G. Seifert, and S. Schweizer, “Particle size monitoring of BaCl[2] nanocrystals in fluorozirconate glasses,” Journal of Non-Crystalline Solids, vol. 363, pp.
205–208, 2013. View at Publisher · View at Google Scholar
15. T. Suzuki, H. Kawai, H. Nasu et al., “Spectroscopic investigation of Nd^3+-doped ZBLAN glass for solar-pumped lasers,” Journal of the Optical Society of America B, vol. 28, no. 8, pp. 2001–2006,
2011. View at Publisher · View at Google Scholar
16. R. Caspary, Applied rare-earth spectroscopy for fiber laser optimization [Ph.D. thesis], Technische Universität Braunschweig, 2002.
17. A. A. Kaminskii, G. Boulon, M. Buonchristiani, B. di Bartolo, A. Kornienko, and V. Mironov, “Spectroscopy of a new laser garnet Lu[3]Sc[2]Ga[3]O[12]:Nd^3+,” Physica Status Solidi A, vol. 141, no.
2, pp. 471–494, 1994. View at Scopus
18. W. T. Carnall, P. R. Fields, and K. Rajnak, “Electronic energy levels of the trivalent lanthanide aquo ions. I. Pr^3+, Nd^3+, Pm^3+, Sm^3+, Dy^3+, Ho^3+, Er^3+, and Tm^3+,” Journal of Chemical
Physics, vol. 49, no. 10, pp. 4424–4442, 1968. View at Scopus
19. X. Qiao, X. Fan, J. Wang, and M. Wang, “Luminescence behavior of Er^3+ ions in glass-ceramics containing CaF[2] nanocrystals,” Journal of Non-Crystalline Solids, vol. 351, no. 5, pp. 357–363,
2005. View at Publisher · View at Google Scholar · View at Scopus
20. W. B. Fowler and D. L. Dexter, “Relation between absorption and emission probabilities in luminescent centers in ionic solids,” Physical Review, vol. 128, no. 5, pp. 2154–2165, 1962. View at
Publisher · View at Google Scholar · View at Scopus
21. C. F. Bohren and D. R. Huffmann, Absorption and Scattering of Light by Small Particles, John Wiley & Sons, New York, NY, USA, 1983.
22. H. H. Li, “Refractive index of alkaline earth halides and its wavelength and temperature derivatives,” Journal of Physical and Chemical Reference Data, vol. 9, no. 1, p. 161, 1980.
23. L. Wetenkamp, T. Westendorf, G. West, and A. Kober, “The effect of small composition changes on the refractive index and material dispersion in ZBLAN heavy-metal fluoride glass,” Materials
Science Forum, vol. 32-33, pp. 471–476, 1991.
Oracle Math Tutor
Find an Oracle Math Tutor
...My strengths as a tutor are that I am very patient and I am able to come up with many different ways of explaining things until they make sense to you. There is no better feeling than helping someone achieve their academic goals. I hope to hear from you soon! I have tutored students in this subject for the last seven years in a classroom setting.
23 Subjects: including trigonometry, algebra 1, algebra 2, biology
...Learning math is more than just learning formulas or repeating a procedure. To be successful at math students must understand what they are doing. This will enable them to think creatively,
reason effectively, approach challenges with confidence and solve problems with persistence.
26 Subjects: including algebra 1, algebra 2, calculus, grammar
...I have a Masters Degree in Math and Math Education. My specialties are Algebra and Geometry. I am also proficient in Calculus.
13 Subjects: including statistics, algebra 1, algebra 2, calculus
...Many of the students I have worked with are non-science majors and finding a way to explain difficult scientific concepts to individual students is something I find rewarding. I have helped
students who were anxious about scientific concepts and mathematics become comfortable with these concepts...
35 Subjects: including linear algebra, study skills, elementary math, chess
...Currently, I am an adjunct faculty member at Pima Community College, where I teach developmental math classes (Elementary Algebra, Intermediate Algebra, etc). As an active adjunct faculty
member, I have valuable experience teaching students with a variety of math backgrounds. I endeavor to help ...
10 Subjects: including statistics, algebra 1, algebra 2, grammar
Under what circumstances are monadic computations tail-recursive?
In Haskell Wiki's Recursion in a monad there is an example that is claimed to be tail-recursive:
f 0 acc = return (reverse acc)
f n acc = do
v <- getLine
f (n-1) (v : acc)
While the imperative notation leads us to believe that it is tail-recursive, it's not so obvious at all (at least to me). If we desugar the do block, we get
f 0 acc = return (reverse acc)
f n acc = getLine >>= \v -> f (n-1) (v : acc)
and rewriting the second line leads to
f n acc = (>>=) getLine (\v -> f (n-1) (v : acc))
So we see that f occurs inside the second argument of >>=, not in a tail-recursive position. We'd need to examine IO's >>= to get an answer. Clearly, having the recursive call as the last line in a do block isn't a sufficient condition for a function to be tail-recursive.
Let's say that a monad is tail-recursive iff every recursive function in this monad defined as
f = do
f ...
or equivalently
f ... = (...) >>= \x -> f ...
is tail-recursive. My question is:
1. What monads are tail-recursive?
2. Is there some general rule that we can use to immediately distinguish tail-recursive monads?
Update: Let me make a specific counter-example: The [] monad is not tail-recursive according to the above definition. If it were, then
f 0 acc = acc
f n acc = do
r <- acc
f (n - 1) (map (r +) acc)
would have to be tail-recursive. However, desugaring the second line leads to
f n acc = acc >>= \r -> f (n - 1) (map (r +) acc)
= (flip concatMap) acc (\r -> f (n - 1) (map (r +) acc))
Clearly, this isn't tail-recursive, and IMHO cannot be made so. The reason is that the recursive call isn't the end of the computation: it is performed several times and the results are combined to make the final result.
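Since (>>=) for the [] monad is concatMap, the same recursion can be transcribed into a strict language, which makes the fan-out explicit; in this Python sketch (a translation for illustration, not part of the question) each call recurses once per element of acc before the results are concatenated, so no recursive call is in tail position:

```python
def f(n, acc):
    # [] monad: bind is concatMap, so f(n-1, ...) is evaluated once per
    # element of acc and the resulting lists are concatenated afterwards --
    # the recursion cannot be a tail call.
    if n == 0:
        return acc
    return [y for r in acc for y in f(n - 1, [r + a for a in acc])]
```

The branching factor is len(acc) at every level, exactly the structure that rules out tail recursion.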
haskell monads tail-recursion
1 Just a quick note: that is tail recursive. Tail recursion simply means that the return value of the last function call is not used by the function. In your case, the value of the final f call is
not used. If you'd rather think of it pragmatically, a function is tail-recursive, if, once you do the last call in it, you can dispose of all the context associated with the function. Also, as
far as I know, there isn't anything inherently tail-recursive or not-tail-recursive about any monad. – scvalex Nov 14 '12 at 13:34
@scvalex While intuitively this makes sense, I'd like to have it formally justified. Could you show that f is tail-recursive according to the criteria stated in Tail recursion? – Petr Pudlák Nov
14 '12 at 13:42
@hammar The definition doesn't care if you can define a recursive function not of that form. It only cares that if an arbitrary function is of that form is tail-recursive or not. – Petr Pudlák Nov
14 '12 at 13:46
Can't you just inline the definition of >>= and see if the result is tail-recursive? – hammar Nov 14 '12 at 13:55
@PetrPudlák Looking at the definition on that page, I have to say that f is not tail-recursive. On the other hand, I don't agree with that definition as it seems to exclude calling any other
function as the first step of expanding the function. By that definition, f = f $ 1 is not tail-recursive. – scvalex Nov 14 '12 at 14:01
2 Answers
A monadic computation that refers to itself is never tail-recursive. However, in Haskell you have laziness and corecursion, and that is what counts. Let's use this simple example:
forever :: (Monad m) => m a -> m b
forever c' = let c = c' >> c in c
Such a computation runs in constant space if and only if (>>) is nonstrict in its second argument. This is really very similar to lists and repeat:
repeat :: a -> [a]
repeat x = let xs = x : xs in xs
Since the (:) constructor is nonstrict in its second argument this works and the list can be traversed, because you have a finite weak-head normal form (WHNF). As long as the consumer
(for example a list fold) only ever asks for the WHNF this works and runs in constant space.
The consumer in the case of forever is whatever interprets the monadic computation. If the monad is [], then (>>) is non-strict in its second argument, when its first argument is the
empty list. So forever [] will result in [], while forever [1] will diverge. In the case of the IO monad the interpreter is the very run-time system itself, and there you can think of
(>>) being always non-strict in its second argument.
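The repeat analogy can be made concrete with a small, self-contained sketch (the primed names are mine, to avoid clashing with the Prelude):

```haskell
-- Corecursion: productive because (:) is non-strict in its tail,
-- so a consumer only ever forces a finite WHNF at a time.
repeat' :: a -> [a]
repeat' x = let xs = x : xs in xs

-- forever for any monad; this runs in constant space exactly when
-- (>>) is non-strict in its second argument.
forever' :: Monad m => m a -> m b
forever' c' = let c = c' >> c in c

main :: IO ()
main = do
  print (take 3 (repeat' 'x'))            -- "xxx"
  print (forever' ([] :: [Int]) :: [Int]) -- [], since [] >> xs == []
```

forever' [1], by contrast, diverges: the interpreter keeps demanding the second argument of (>>) and never reaches a WHNF.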
What really matters is constant stack space. Your first example is tail recursive modulo cons, thanks to the laziness.
The (getLine >>=) will be executed and will evaporate, leaving us again with the call to f. What matters is, this happens in a constant number of steps - there's no thunk build-up.
Your second example,
f 0 acc = acc
f n acc = concat [ f (n - 1) $ map (r +) acc | r <- acc]
will be only linear (in n) in its thunk build-up, as the result list is accessed from the left (again due to the laziness, as concat is non-strict). If it is consumed at the head it can
run in O(1) space (not counting the linear space thunk, f(0), f(1), ..., f(n-1) at the left edge).
Much worse would be
f n acc = concat [ f (n-1) $ map (r +) $ f (n-1) acc | r <- acc]
or in do-notation,
f n acc = do
r <- acc
f (n-1) $ map (r+) $ f (n-1) acc
because there is extra forcing due to information dependency. Similarly, if the bind for a given monad were a strict operation.
Yes, but how can we tell in a general case when it evaporates and when not? That's the whole point of the question. – Petr Pudlák Nov 14 '12 at 14:49
I guess we have to inline the bind definition and analyze the result. I think laziness is more important here, as is usual in Haskell. – Will Ness Nov 14 '12 at 14:54
1 Will Ness - if the "tail recursion modulo cons" optimization is implemented only for (certain) Prolog's and not generally implemented for functional languages or by GHC, there isn't
much pedagogical value discussing whether a Haskell function is in the "tail recursive modulo cons" form. This muddies up the discussion of tail recursion / tail call optimization
rather than elucidates it. – stephen tetley Nov 14 '12 at 17:34
2 @stephentetley It doesn't have to be implemented specially in Haskell; laziness gives it to us for free. – Will Ness Nov 14 '12 at 21:33
@stephentetley also, I wasn't talking about optimization. Whether the optimization is employed or not, that code is still TRMC. And under lazy evaluation no optimization is necessary,
as long as the "cons" in question (i.e. the constant-space computation before the recursive call) does not force the recursive call prematurely. – Will Ness Nov 15 '12 at 13:26
| {"url":"http://stackoverflow.com/questions/13379060/under-what-circumstances-are-monadic-computations-tail-recursive","timestamp":"2014-04-17T21:44:11Z","content_type":null,"content_length":"85190","record_id":"<urn:uuid:7ae6d49b-4d15-4948-8b82-0274d8cf1158>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
beta function
In statistical field theory and in quantum field theory, there is a beta function related to the renormalization group flow. This entry will be dedicated to it. Please distinguish from Euler beta
function, which is related to the gamma function that generalises the factorial operation.
Created on October 10, 2011 19:58:19 by
Zoran Škoda | {"url":"http://ncatlab.org/nlab/show/beta+function","timestamp":"2014-04-17T07:22:57Z","content_type":null,"content_length":"10806","record_id":"<urn:uuid:b3c19e0e-119b-46de-83c8-4868f5301542>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00540-ip-10-147-4-33.ec2.internal.warc.gz"} |
Isopleth Maps
ISOPLETH MAPS
An isopleth map generalizes and simplifies data with a continuous distribution. It shows the data as a third dimension on a map; thus isopleth maps are commonly used for mapping surface elevations,
amounts of precipitation, atmospheric pressure, and numerous other measurements that can be viewed statistically as a third dimension.
The third dimension is shown by a series of lines called isopleths which connect points of equal value. The isopleth interval is the difference in value between two adjacent isopleths. Note, the
values of the isopleths drawn on the map are ALWAYS multiples of the interval. Isopleths never cross or divide and always form enclosed circles; however, this closure may not occur within the mapped area.
To read the value of a point on an isopleth map, answer the following questions.
What is the isopleth interval?
Where is the point?
Is the point on an isopleth? If so, what is the value of that isopleth?
If it is NOT on an isopleth, what are the values of the TWO isopleths it is between? (You must always give both values!)
NOTE: If the point is NOT on an isopleth, the map only gives a range for the value, but an approximate value can be derived by assuming an even rate of change between the two isopleths and
interpolating this APPROXIMATE VALUE. For simplicity in GY 214, Dr. Hill will NEVER examine you on the approximate values.
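The interpolation described above can be sketched numerically. This is my own minimal illustration (the function name and arguments are not from the handout), assuming an even rate of change between the two bounding isopleths:

```python
def interpolate_isopleth(lower, upper, fraction):
    """Approximate a point's value assuming an even rate of change
    between the two bounding isopleths.

    lower, upper -- values of the two adjacent isopleths
    fraction     -- how far the point lies from the lower isopleth
                    toward the upper one, between 0.0 and 1.0
    """
    return lower + (upper - lower) * fraction

# A point 60% of the way from the 40-isopleth to the 50-isopleth
# (compare point E below, "approximately 46"):
print(interpolate_isopleth(40, 50, 0.6))  # 46.0
```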
Answer the following questions using MAP A below.
1. What is the value of point A?
2. What is the value of point B?
3. What is the value of point C?
4. What is the value of point D?
5. May the exact value of point E be determined from the map?
6. What is the value of point E?
7. What is the approximate value of point E?
8. What is the value of point F?
9. What is the approximate value of point F?
10. What is the value of point G?
11. What is the value of point H?
12. What is the value of point I? (Note: What two isopleths would it be between?)
13. What is the value of point J?
14. What is the value of point K? (Be careful!)
15. How do the values of point H and point K compare?
Determine the value of points A through Z on MAP B. Be sure that your answers match exactly
Determine the value of points A through W on MAP C.
Answers: 1. 50; 2. 70; 3. 70; 4. 60; 5. no; 6. greater than 40 but less than 50; 7. approximately 46; 8. greater than 60 but less than 70; 9. approximately 65; 10. greater than 40 but less than 50;
11. greater than 50 but less than 60; 12. greater than 80 but less than 90; 13. greater than 30 but less than 40; 14. greater than 50 but less than 60; 15. the same. MAP B--A 70; B 60; C 50; D 50; E
90; F 80; G 80; H 90; I greater than 50 but less than 60; J 80; K greater than 80 but less than 90; L greater than 60 but less than 70; M greater than 50 but less than 60; N greater than 90 but less
than 100; O greater than 50 but less than 60; P greater than 70 but less than 80; Q greater than 70 but less than 80; R greater than 70 but less than 80; S greater than 70 but less than 80; T greater
than 70 but less than 80; U greater than 30 but less than 40; V greater than 90 but less than 100; W greater than 100 but less than 110; X greater than 70 but less than 80; Y greater than 70 but less
than 80; Z greater than 60 but less than 70. MAP C--A greater than 230 but less than 240; B 230; C 230; D greater than 230 but less than 240; E 220; F greater than 200 but less than 210; G greater
than 220 but less than 230; H greater than 230 but less than 240; I 240; J greater than 250 but less than 260; K greater than 230 but less than 240; L greater than 260 but less than 270; M 250; N
230; O 220; P greater than 210 but less than 220; Q greater than 220 but less than 230; R greater than 230 but less than 240; S greater than 240 but less than 250; T greater than 240 but less than
250; U 240; V greater than 240 but less than 250; W greater than 230 but less than 240.
Return to the Mapping Lesson | {"url":"http://www.jsu.edu/depart/geography/mhill/phygeogone/isoplth.html","timestamp":"2014-04-19T01:57:30Z","content_type":null,"content_length":"4779","record_id":"<urn:uuid:27c477c8-ba08-4b9a-b793-31bc5cfd31b0>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00096-ip-10-147-4-33.ec2.internal.warc.gz"} |
Philadelphia Trigonometry Tutor
I am a graduate student working in engineering and I want to tutor students in SAT Math, Algebra, and Calculus. I think I could do a good job. I studied Chemical Engineering for undergrad, and I
received a good score on the SAT Math, SAT II Math IIC, GRE Math, and general math classes in school.
8 Subjects: including trigonometry, calculus, geometry, algebra 1
I'm a certified middle school and high school math teacher with experience teaching in suburban and urban schools. I've been tutoring off and on for more than 8 years (before graduating high
school). I am a passionate educator and have excellent communication skills and experience with various levels of math.
20 Subjects: including trigonometry, English, calculus, geometry
...This means I had to endure quite a bit of challenging math and or science courses. From Calculus I up to differential equations each math course created a new set of challenges for me to
overcome in hopes of obtaining my degree. From the science side of things I completed core subjects like Bio...
20 Subjects: including trigonometry, physics, calculus, geometry
...I also will tell you up front if I do not think I'd make the best tutor for a specific subject (i.e. statistics - on a high school level I could help, but that is one math course in college I
had a tough time wrapping my head around!) Outside of being a teacher's assistant, I volunteer at Belmon...
15 Subjects: including trigonometry, chemistry, calculus, statistics
I received my MEd in Middle School Math this May from Lesley University in MA, and waiting for my certification for NJ teacher license. I was trained in adolescent and cognitive psychology and
have a very strong practical Mathematics background. I have served as an educator in various roles, both part time and full time, spanning across middle school and elementary school classroom
9 Subjects: including trigonometry, geometry, statistics, algebra 1 | {"url":"http://www.purplemath.com/Philadelphia_Trigonometry_tutors.php","timestamp":"2014-04-16T19:37:19Z","content_type":null,"content_length":"24620","record_id":"<urn:uuid:8ae3d668-5d9e-4e3b-a3b6-c0c3cb17e6cb>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00437-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Calculate Changes in Kinetic Energy Using Net Force
In physics, if you want to find the change in an object’s kinetic energy, you have to consider only the work done by the net force acting on the object. In other words, you convert only the work done
by the net force into kinetic energy.
For example, when you play tug-of-war against your equally strong friends, you pull against each other but nothing moves. Because there’s no movement, no work is done and you have no net increase in
kinetic energy from the two forces.
Take a look at the figure. You may want to determine the speed of the 100-kilogram refrigerator at the bottom of a 3.0-meter-long ramp, using the fact that the net work done on the refrigerator goes
into its kinetic energy. How do you do that? You start by determining the net force on the refrigerator and then find out how much work that force does. Converting that net-force work into kinetic
energy lets you calculate what the refrigerator’s speed will be at the bottom of the ramp.
You find the net force acting on an object to find its speed at the bottom of a ramp.
What’s the net force acting on the refrigerator? The component of the refrigerator’s weight acting along the ramp is
where m is the mass of the refrigerator and g is the acceleration due to gravity. The normal force is
which means that the kinetic force of friction is
where μ[k] is the kinetic coefficient of friction. The net force accelerating the refrigerator down the ramp, F[net], therefore, is
You’re most of the way there! If the 3.0-meter-long ramp is at a 30-degree angle to the ground and there’s a kinetic coefficient of friction of 0.57, plugging the numbers into this equation results
in the following:
The net force acting on the refrigerator is about 6.2 newtons. This net force acts over the entire 3.0-meter ramp, so the work done by this force is
You find that 19 joules of work goes into the refrigerator’s kinetic energy. That means you can find the refrigerator’s kinetic energy like this:
You want the speed here, so solving for v and plugging in the numbers gives you
The refrigerator will be going 0.61 meters/second at the bottom of the ramp. | {"url":"http://www.dummies.com/how-to/content/how-to-calculate-changes-in-kinetic-energy-using-n.html?cid=RSS_DUMMIES2_CONTENT","timestamp":"2014-04-19T14:57:53Z","content_type":null,"content_length":"56751","record_id":"<urn:uuid:1daf435e-5488-466b-9b36-2f43b4cae1f6>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00093-ip-10-147-4-33.ec2.internal.warc.gz"} |
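As a numerical cross-check of the chain of steps above, here is a short Python sketch (the variable names are mine; it assumes g = 9.8 m/s², consistent with the rounded values in the text):

```python
import math

m = 100.0                 # mass of the refrigerator, kg
g = 9.8                   # acceleration due to gravity, m/s^2
theta = math.radians(30)  # ramp angle
mu_k = 0.57               # kinetic coefficient of friction
s = 3.0                   # ramp length, m

# Net force along the ramp: gravity component minus kinetic friction.
f_net = m * g * (math.sin(theta) - mu_k * math.cos(theta))

# The work done by the net force goes entirely into kinetic energy.
work = f_net * s

# KE = (1/2) m v^2, solved for v.
v = math.sqrt(2 * work / m)

print(round(f_net, 1), round(work), round(v, 2))  # 6.2 19 0.61
```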
Geometry problem help!
March 31st 2010, 05:13 PM
Geometry problem help!
Three large water pipes run through the building. Engineers plan to enclose these water pipes within a ceramic-coated triangular casing to prevent radiation. Each of the water pipes has an
outside diameter of 60 cm. If the cross section of the casing is an equilateral triangle, what is the smallest length possible?
*I got this problem to do with a group but I have no idea how to do it. Smart math people, please help. It's due 1st period!
March 31st 2010, 06:42 PM
Problem could be worded this simply:
An equilateral triangle's inscribed circle has a diameter of 60 cm;
what are the triangle's side lengths?
Use google...
March 31st 2010, 09:15 PM
Hello, shawnrabe!
We need clarification . . .
Three large water pipes run through the building.
Engineers plan to enclose these water pipes within a ceramic-coated triangular casing.
Each of the water pipes has an outside diameter of 60 cm.
If the cross section of the casing is an equilateral triangle, what is the smallest length possible?
Why three pipes?
I'm guessing that they are "bundled" together.
Then the bundle is enclosed in a triangular casing.
. . $\begin{array}{c}<br /> * \\ [-1mm]<br /> /\! \bigcirc\! \backslash \\ [-1mm]<br /> /\! \bigcirc\!\bigcirc\! \backslash \\ [-1mm]<br /> *\!-\!-\!-\!* \\[-1mm]<br /> \end{array}$
Is that it? | {"url":"http://mathhelpforum.com/geometry/136771-geometry-problem-help-print.html","timestamp":"2014-04-20T05:47:55Z","content_type":null,"content_length":"5437","record_id":"<urn:uuid:bcc09498-f9e7-42fc-bf44-b379838ac5bb>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00567-ip-10-147-4-33.ec2.internal.warc.gz"} |
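Assuming the three pipes are bundled mutually tangent inside the casing (as Soroban's sketch suggests), each corner pipe is tangent to two sides meeting at 60°, which places its center a distance 2r from the vertex along the angle bisector and gives side length s = 2r(1 + √3). A quick numerical sketch of that working (mine, not from the thread):

```python
import math

r = 30.0  # pipe radius in cm (60 cm outside diameter)

# Each corner circle's center lies 2r from its vertex along the
# bisector; adjacent centers are 2r apart, so each side is
# r*sqrt(3) + 2r + r*sqrt(3) = 2r*(1 + sqrt(3)).
side = 2 * r * (1 + math.sqrt(3))

print(round(side, 1))  # 163.9 cm per side
```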
MathGroup Archive: June 2013 [00133]
[Date Index] [Thread Index] [Author Index]
Re: Using manipulate with user entered variables and functions
• To: mathgroup at smc.vnet.net
• Subject: [mg131118] Re: Using manipulate with user entered variables and functions
• From: Bob Hanlon <hanlonr357 at gmail.com>
• Date: Wed, 12 Jun 2013 05:40:12 -0400 (EDT)
• Delivered-to: l-mathgroup@mail-archive0.wolfram.com
• Delivered-to: l-mathgroup@wolfram.com
• Delivered-to: mathgroup-outx@smc.vnet.net
• Delivered-to: mathgroup-newsendx@smc.vnet.net
• References: <20130523080537.2A5356A0D@smc.vnet.net>
Manipulate[
 Module[{var},
  var = Union[
    Cases[f, _Symbol?(! NumericQ[#] &),
     Infinity]];
  Grid[{HoldForm[D["f", #]] & /@ var,
    Simplify[D[f, #] & /@ var]},
   Frame -> All,
   FrameStyle -> {LightGray, Thin}]],
 {f, {
   x (x^2 + y^3),
   x*Sqrt[x^2 + y^2 + z^2],
   Sin[(x^2 - t)]/(x (y^2 + z^2))},
  ControlType -> PopupMenu}]
Bob Hanlon
On Thu, May 23, 2013 at 4:05 AM, <nmueggen at gmail.com> wrote:
> I am trying to create a program through which a user can enter a number of
> variable names and a function of those variables. The program then
> performs calculations based on derivatives of the function with respect to
> the entered variables. Eventually this will become a tool for error
> propagation in an undergraduate physics lab.
> The following simple program allows a user to enter two variables and a
> function of those variables. It then calculates the partial derivatives of
> the function with respect to each of the variables.
> Manipulate[{D[f, var[1]], D[f, var[2]]}, {var[1], x}, {var[2], y}, {f,
> x^2 + y^3}]
> I would like to extend this program to allow for an adjustable number of
> variables. I'm imagining using nested manipulate commands to allow the
> number of variables to be user controlled. To do this I also need the
> program to be able to adjust how many derivatives are being calculated. I
> tried to
> replace the list of derivative commands with a table:
> Manipulate[
> Table[D[f, var[i]], {i, 1, 2}], {var[1], x}, {var[2], y}, {f,
> x^2 + y^3}]
> This does not work. I suspect that the Table command somehow causes it to
> calculate the derivatives before the function and/or variables are defined
> based on the manipulate controls. How do I control this order of
> evaluation?
> Thank you,
> Nathan | {"url":"http://forums.wolfram.com/mathgroup/archive/2013/Jun/msg00133.html","timestamp":"2014-04-17T18:36:53Z","content_type":null,"content_length":"27457","record_id":"<urn:uuid:19f8d372-86db-452b-a85a-13241ab5e623>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00478-ip-10-147-4-33.ec2.internal.warc.gz"} |
Items where Research Group is "Oxford Centre for Industrial and Applied Mathematics" and Year is 2007
Number of items: 21.
Bell, C. G. and Breward, C. J. W. and Howell, P. D. and Penfold, J. and Thomas, R. K. (2007) Macroscopic modelling of the surface tension of polymer-surfactant systems. Langmuir . (In Press)
Chapman, S. J. and Howls, C. J. and King, J. R. and Olde Daalhius, A. B. (2007) Why is a shock not a caustic? The higher-order Stokes phenomenon and smoothed shock formation. Nonlinearity, 20 (10).
pp. 2425-2452. ISSN 0951-7715
Cropp, R. A. and Norbury, John and Braddock, R. D. (2007) Process-dependence of biogenic feedback effects in models of plankton dynamics. MODSIM 2007: Proceedings of the International Congress on
Modelling and Simulation . (Submitted)
Cropp, Roger and Norbury, John and Braddock, Roger D. (2007) Dimethylsulphide, clouds, and phytoplankton: Insights from a simple plankton ecosystem feedback model. Global Biogeochemical Cycles, 21 .
ISSN 0886-6236
Edwards, C. and Howison, S. D. and Ockendon, H. and Ockendon, J. R. (2007) Nonclassical shallow water flows. IMA Journal of Applied Mathematics . (In Press)
Erban, R. and Chapman, S. J. (2007) Time scale of random sequential adsorption. Physical Review E, 75 (4). 041116-041120. ISSN 1539-3755
Erban, Radek and Chapman, S. J. (2007) Reactive boundary conditions for stochastic simulations of reaction-diffusion processes. Physical Biology, 4 (1). pp. 16-28. ISSN 1478-3975
Griffiths, I. M. and Howell, P. D. (2007) Mathematical modelling of non-axisymmetric capillary tube drawing. Journal of Fluid Mechanics . ISSN 0022-1120 (Submitted)
Griffiths, I. M. and Howell, P. D. (2007) The surface-tension-driven evolution of a two-dimensional annular viscous tube. Journal of Fluid Mechanics . ISSN 0022-1120 (In Press)
Johnston, M. D. and Edwards, C. M. and Bodmer, W. F. and Maini, P. K. and Chapman, S. J. (2007) Mathematical modeling of cell population dynamics in the colonic crypt and in colorectal cancer.
Proceedings of the National Academy of Sciences, 104 (10). pp. 4008-4013. ISSN 1091-6490
Jones, G. W. and Chapman, S. J. and Allwright, D. J. (2007) Axisymmetric buckling of a spherical shell embedded in an elastic medium under uniaxial stress at infinity. The Quarterly Journal of
Mechanics and Applied Mathematics . (Submitted)
Korobeinikov, Andrei and Norbury, John and Wake, Graeme (2007) Long-term coexistence for a competitive system of spatially varying gradient reaction-diffusion equations. Nonlinear Analysis Series B:
Real World Applications . ISSN 1468-1218 (Submitted)
Moroz, I. M. and Letellier, C. and Gilmore, R. (2007) When are projections also embeddings? Physical Review E, 75 (4).
Moroz, Irene M. (2007) The Hide,Skeldon,Acheson dynamo revisited. Proceedings of the Royal Society A, 463 (2077). pp. 113-130. ISSN 1364-5021 (Paper) 1471-2946 (Online)
Norbury, John and Wei, J. and Winter, M. (2007) Stability of patterns with arbitrary period for a Ginzburg-Landau equation with a mean field. European Journal of Applied Mathematics, 18 . pp.
129-151. ISSN 0956-7925
Ockendon, J. R. and Ockendon, H. and Novikovs, A. (2007) Numerical solutions of the unsteady Fanno model for compressible pipe flow. Journal of Fluid Mechanics, 579 . pp. 493-507.
Ockendon, J. R. and Voskoboinikov, Roman and Chapman, S. J. (2007) Continuum and discrete models of dislocation pile-ups. I Pile-up at a lock. Journal of the Mechanics and Physics of Solids, 55 (9).
pp. 2007-2025. ISSN 0022-5096
Rodríguez-González, J. G. and Santillán, M. and Fowler, A. C. and Mackey, M. C. (2007) The segmentation clock in mice: Interaction between the Wnt and Notch signalling pathways. J. Theor. Biol., 248
. pp. 37-47.
Roose, T. and Chapman, S. J. and Maini, P. K. (2007) Mathematical models of avascular cancer. SIAM Review, 49 (2). pp. 179-208. ISSN 0036-1445
Jones, G. W. (2007) Static Elastic Properties of Composite Materials Containing Microspheres. PhD thesis, University of Oxford.
Machete, R. L. (2007) Modelling a Moore-Spiegel Electronic Circuit: the imperfect model scenario. PhD thesis, University of Oxford. | {"url":"http://eprints.maths.ox.ac.uk/view/groups/ociam/2007.type.html","timestamp":"2014-04-20T08:34:43Z","content_type":null,"content_length":"15657","record_id":"<urn:uuid:101f9d81-8cba-4e04-bb28-32a3140bacf0>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00064-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematics and Statistics
For Starters | Sub-topics | Dictionaries and Encyclopedias | Associations and Organizations | Other Internet Guides
For Starters
• Penn State Mathematics Web Guide
A simple search engine enables the user to surf through this site's contents quickly and relatively easily. You can find links for Math department homepages, journals, preprints, commercial Web
sites, and much more.
• Mathematics : The Largest Known Primes
This site contains the following sections: Introduction (What are primes? Who cares?); The "Top Ten" Record Primes; The Complete List of the Largest Known Primes; Other Sources of Prime
Information; Notation and Definitions; and Euclid's Proof of the Infinitude of Primes.
• History of Mathematics Archives
The archive many useful resources, including biographies of more than 1300 mathematicians, articles on the development of mathematical ideas and famous curves, Mathematician of the Day,
birthplace maps, and much more. There is a search engine.
• MathSearch
This Web site searchs over 200,000 English-language documents on mathematics and statistics servers across the Web. The majority of documents are concerned with research-level and university
• Statistics Related Internet Sources
This Web site houses Dartmouth's collection of annotated statistics Internet sites. Areas include mailing lists, data, government resources, polls and surveys, professional societies and much
• Statistics - WWW Virtual Library
A huge site with links to mailing lists, newsgroups, software vendors, societies, academic departments on six continents and much more.
• Math in Daily Life
This site shows how math helps us in our daily lives. It demonstrates math concepts such as probability, compounding, growth, geometry, and relationships in situations such as gambling, savings
and investing, population growth, home decorating, and cooking. Links to other Web sites of related math interests.
• World of Mathematics
Also known as MathWorld, this site is a "comprehensive, and interactive mathematics encyclopedia intended for students, educators, math enthusiasts, and researchers." The subject index includes
algebra, applied mathematics, calculus and analysis, discrete mathematics, the foundations and history of mathematics, geometry, number theory, probability and statistics, recreational
mathematics, and topology. In addition, there is a complete alphabetical index and a good search engine. Developed in parallel with the CRC Concise Encyclopedia of Mathematics, in print and on
Preprints | E-journals | Math Software and Tools | Probability | Statistical Software and Tools
• Preprints
• E-journals
• Math Software and Tools
• Probability
□ Probability Tutorials
Offering definations, theorems, and solutions, these 20 tutorials are meant to be a complete online course on measure theory, lebesgue integration and probability. There is also a link to
chat with others using the tutorials.
□ Probability Web
The Probability Web is a collection of pages created by Phil Pollett and maintained by Bob Dodrow to serve people with interests in probability theory and its applications. Information is
presented under the following headings: Probability links, Abstracts, Listservers, Newsgroups, People, Jobs, Journals, Software, Books, Booksellers, Conferences, Publishers and Miscellaneous.
There is also a rudimentary search engine.
• Statistical Software and Tools
□ Statistical Reference Datasets
The purpose of this project is to improve the accuracy of statistical software by providing reference datasets with certified computational results that enable the objective evaluation of
statistical software. This project was developed by the Statistical Engineering Division and the Mathematical and Computational Sciences Division within the Information Technology Laboratory
of the National Institute of Standards and Technology.
□ Interactive Statistical Calculation Pages
The Interactive Statistical Pages project represents an ongoing effort to develop and disseminate statistical analysis software in the form of web pages. Utilizing html forms, CGI and Perl
scripts, Java, JavaScript and other browser-based technologies, each web page contains within it (or invokes) all the programming needed to perform a particular computation or analysis.
Dictionaries and Encyclopedias
Associations and Organizations
Other Internet Guides | {"url":"http://admission@evansville.edu/libraries/guideMath.cfm","timestamp":"2014-04-18T10:36:24Z","content_type":null,"content_length":"37087","record_id":"<urn:uuid:384d40d7-c82d-45f3-a413-da7e434d8075>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00088-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help with Accelerated C++ exercise
Hi all,
I'm working my way through this book and I'm having some trouble in an exercise. It asks to write a program to compute and print quartiles (the quarter of the numbers with largest values) of a
set of integers.
Here's my attempt:
#include <iostream>  //for cin cout endl
#include <vector>    //for vectors
#include <algorithm> //for sort

using std::cin;
using std::cout;
using std::endl;
using std::vector;

int main()
{
    int num, count;
    vector<int> vec;
    vector<int>::size_type i, remainder;

    cout << "Please enter integer numbers: " << endl;
    while (cin >> num) {
        vec.push_back(num);
    }

    if (vec.size()==0) {
        cout << "Please rerun the program and enter integers: ";
        return 1;
    }

    sort(vec.begin(), vec.end());

    cout << "Program output:" << endl;

    i = vec.size();
    remainder = vec.size() % 4;

    // print all the quartiles
    while (i>remainder) {
        cout << vec[i-1];
        ++count;
        if (count%4==0)
            cout << endl;
        --i;
    }
    cout << endl;

    // print any remaining numbers
    for (;i>0;--i)
        cout << vec[i-1] << " leftovers!";

    return 0;
}
My problem is in the for loop. If I run the program and enter five integers, then just before this loop, DDD shows:
- i=1, remainder=1 before the loop
- goes to the for loop
- i=0, remainder=1 at the cout line
- DDD goes to stl_vector.h, and to:
operator[](size_type __n)
{ return *(this->_M_impl._M_start + __n); }
- then back to my program at the line where the for is, with i=0, remainder=1
- then to the cout line, but i is now 429467295
- then again to stl_vector.h, and then the for loop iterates, but without writing anything.
Can someone please explain where my mistake is in this program?
Sorry if the above is a bit confusing; I'm trying to provide as much info as I can.
Many thanks for your help,
One thing I noticed is that you use "count" without initializing it, which you would see if you had "-Wall" enabled when you compile. Does it work when you initialize this first? I realize that
this doesn't seem to be the obvious main problem, but a problem nonetheless. I ran the code with your suggested input and cannot reproduce the same result.
yes, the problem was with "count" I didn't use it properly and was concentrating in the for loop to find my mistake rather than looking at other aspects of the program. I've made some
modifications around "count" and it works fine now.
I am also compiling with g++ -Wall but didn't show any warnings for not initialising this.
Many thanks!
I am also compiling with g++ -Wall but didn't show any warnings for not initialising this.
I find that very hard to believe. Maybe you built your own g++ or something, and somehow disabled this--highly unlikely. G++ does show you the warning. Unless maybe you have some archaic version
of it (which I would think stillwould show the warning).
jordan@jordan-laptop:~$ g++ -v
[output truncated]
gcc version 4.4.1 (Ubuntu 4.4.1-4ubuntu9)
jordan@jordan-laptop:~$ g++ test.cpp
jordan@jordan-laptop:~$ g++ -Wall test.cpp
test.cpp: In function ‘int main()’:
test.cpp:38: warning: ‘count’ may be used uninitialized in this function
I always like to use "-Wall" and "-pedantic" when compiling.
I think I may know the reason:
spiros@lenore:~/programming/cpp.accel.mine$ g++ -v
Using built-in specs.
Target: i486-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu 4.3.3-5ubuntu4' --with-bugurl=file:///usr/share/doc/gcc-4.3/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr
--enable-shared --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --enable-nls --with-gxx-include-dir=/usr/include/c++/4.3 --program-suffix=-4.3
--enable-clocale=gnu --enable-libstdcxx-debug --enable-objc-gc --enable-mpfr --enable-targets=all --with-tune=generic --enable-checking=release --build=i486-linux-gnu --host=i486-linux-gnu
Thread model: posix
gcc version 4.3.3 (Ubuntu 4.3.3-5ubuntu4)
I tried it again with -Wall and still no errors or warnings. This is the standard gcc version for Ubuntu 9.04. I haven't upgraded to 9.10 due to sound card problems...
thanks for your help!
I have a giant block of enable features as well; however, I didn't show them (so I wrote "output truncated"), because I just wanted to show the version and "-Wall" output. So I don't know what the problem is when you say "I think I may know the reason". From this we can see our two versions aren't the same, but I don't think such a warning was missing from your version either. But of course it's possible that is the reason (which I think would be extremely odd). However, again, something like using a variable without initializing it is certainly a big problem, so I don't think it was only introduced after 4.3.3.
Anyways, your welcome. Who knows what other warnings (which can lead to problems/runtime errors) you arent seeing with these flags! | {"url":"http://cboard.cprogramming.com/cplusplus-programming/123411-help-accelerated-cplusplus-exercise-printable-thread.html","timestamp":"2014-04-18T00:59:19Z","content_type":null,"content_length":"13906","record_id":"<urn:uuid:d2366acd-5e8f-4b5a-942c-bba0c2c333f5>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00633-ip-10-147-4-33.ec2.internal.warc.gz"} |
Shrewsbury, MA Trigonometry Tutor
Find a Shrewsbury, MA Trigonometry Tutor
...I have tutored several students academically in geometry. I am able to work with them on proofs, theorems, and both plane and coordinate geometry. I teach students about both the general
concepts and diligence to the details in a fun and supportive environment.
26 Subjects: including trigonometry, English, linear algebra, algebra 1
...Every science and math concept is built upon the foundations laid last week, month, or year. Trying to teach a concept or technique without the right supports beneath is merely postponing the
problem until later. So often, once the root difficulty is found (and fixed!), a plethora of other problems and fears diminish or disappear.
12 Subjects: including trigonometry, chemistry, physics, calculus
...Because my undergraduate degree is in physics and mathematics and my graduate degree is in physics with extensive course work in education, I am passionate about these subjects and trying to
help students improve his/her achievement in these areas. I am also able to tutor various examination pre...
6 Subjects: including trigonometry, physics, calculus, algebra 2
...I am available to tutor the following subjects in Mathematics (middle/high school or college level): statistics, basic math, algebra, word problems, trigonometry, precalculus, calculus 1-4,
geometry, business calculus, MCAS, SAT prep, linear algebra, probability. The sessions will usually be hel...
15 Subjects: including trigonometry, calculus, statistics, geometry
...I am licensed to teach math (8-12) and the topics on the SATs are covered in the licensure. I have tutored in SAT math since 2010. I have been using American Sign Language since 2009.
9 Subjects: including trigonometry, geometry, algebra 2, SAT math
Continuity and Differentiability
June 29th 2008, 01:53 AM
Continuity and Differentiability
I am confused about the definition of continuity and differentiability of n-variable functions. Also, what is the meaning of continuously differentiable?
Here are some examples that I don't understand fully. Please explain.
1. Let the function f be defined on the whole xy plane: f(x,y) = 1 if x = y ≠ 0, and f(x,y) = 0 otherwise. In this case, f is not continuous at (0,0), but both partial derivatives fx and fy exist at (0,0).
2. Let function f be defined as f(x,y)=(x^1/3 + y^1/3)^3. f(x,y) is continuous and has partial derivatives at origin (0,0) but is not differentiable there.
3. Let f be defined by f(x,y)=y^2 + x^3 sin (1/x) for x=/=0, and f(0,y)=y^2. f is differentiable at (0,0), but is not continuously differentiable there because fx(x,y) is not continuous at (0,0).
Thanks. > <
June 29th 2008, 08:29 AM
Let us start with the first one.
In single-variable calculus, if $f$ is differentiable at a point then it must be continuous there. This example shows that the existence of partial derivatives does not guarantee the continuity of the function. Look at $\partial_x f(0,0)$. By definition it is $\lim_{h\to 0} \tfrac{f(h,0)-f(0,0)}{h} = \lim_{h\to 0}\tfrac{0}{h} = 0$. Thus, $\partial_x f(0,0) = 0$. Similarly $\partial_y f(0,0)=0$. And so the partials exist.
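A similar computation with the same limit definition (my sketch, not part of the original thread) shows why the second example has partials at the origin yet fails to be differentiable there:

```latex
% Example 2: f(x,y) = (x^{1/3} + y^{1/3})^3. Both partials exist at the origin:
\partial_x f(0,0) = \lim_{h\to 0}\frac{f(h,0)-f(0,0)}{h}
                  = \lim_{h\to 0}\frac{(h^{1/3})^3}{h} = 1,
\qquad \partial_y f(0,0) = 1.
% If f were differentiable at (0,0), then along the line y = x we would need
f(h,h) = \partial_x f(0,0)\,h + \partial_y f(0,0)\,h + o(h) = 2h + o(h).
% But in fact
f(h,h) = \bigl(2h^{1/3}\bigr)^3 = 8h \neq 2h + o(h),
% so f is continuous with existing partials, yet not differentiable at (0,0).
```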
June 29th 2008, 11:12 PM
Also, what is the meaning of continuously differentiable?
- differentiable
- the derivative is continuous
Summary: Program Analysis Using Mixed Term and Set Constraints
Manuel Fähndrich and Alexander Aiken
EECS Department
University of California, Berkeley
Abstract. There is a tension in program analysis between precision and efficiency. In constraint-based
program analysis, at one extreme methods based on unification of equality constraints over terms are
very fast but often imprecise. At the other extreme, methods based on the resolution of inclusion
constraints over set expressions are quite precise, but are often inefficient in practice. We describe
a parameterized framework for constraintbased program analyses that allows the analysis designer
to embed terms and set expressions within each other. Constraints over these mixed expressions are
partially between equality and inclusion, which enables an entire spectrum of program analyses with
varying degrees of precision and efficiency to be expressed. We also show that there are interesting
analyses that take advantage of this mixture. In particular, we report on the design and implementation
of an uncaught exception analysis for core ML. Our results show that the analysis approaches the
efficiency of algorithm W.
1 Introduction
The Hindley-Milner polymorphic type inference system [Mil78] is the classical example of a constraint-based
program analysis. It uses equality constraints over a term algebra to infer types for functional programming
languages such as ML [MTH90]. This system has inspired many other analyses based on equality constraints
(e.g. [Hen92, Ste96]). Such systems are appealing because they yield concise results and because the equality
Functions That Do Not Have Unique Values
When you ask for the square root of a number a, you are effectively asking for a solution to the equation s^2 = a. This equation, however, in general has two different solutions: both s = 2 and s = -2, for example, are solutions to the equation s^2 = 4. When you evaluate the "function" sqrt(a), however, you usually want to get a single number, and so you have to choose one of these two solutions. A standard choice is that sqrt(x) should be positive for x > 0. This is what the Mathematica function Sqrt[x] does.
The need to make one choice from two solutions means that Sqrt[x] cannot be a true inverse function for squaring. Taking a number, squaring it, and then taking the square root can give you a different number than you started with.
When you take the square root of a negative or complex number, there are again two possible answers, differing by a sign. In this case, however, it is less clear which one to choose.
There is in fact no way to choose sqrt(z) so that it is continuous for all complex values of z. There has to be a "branch cut"—a line in the complex plane across which the function is discontinuous.
Mathematica adopts the usual convention of taking the branch cut for sqrt(z) to be along the negative real axis.
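Most numerical libraries follow the same convention; as a quick illustration (not part of the original tutorial), Python's cmath shows the jump across the cut:

```python
import cmath

# Evaluate the principal square root just above and just below
# the branch cut on the negative real axis.
above = cmath.sqrt(complex(-4.0, 1e-12))   # approaches +2i
below = cmath.sqrt(complex(-4.0, -1e-12))  # approaches -2i

print(above, below)
```

The two results differ by a sign even though the inputs are almost identical: that sign flip is exactly the discontinuity across the branch cut.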
The branch cut in Sqrt[z] along the negative real axis means that values of Sqrt[z] for z just above and just below the axis are very different.
The discontinuity along the negative real axis is quite clear in this three-dimensional picture of the imaginary part of the square root function.
When you find an n-th root, there are, in principle, n possible results. To get a single value, you have to choose a particular principal root. There is absolutely no guarantee that taking the n-th root of an n-th power will give back the number you started with.
There are 10 possible tenth roots; evaluating (x^10)^(1/10) chooses one of them, and in this case it is not the number whose tenth power you took.
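The same thing happens in ordinary floating-point arithmetic; a small illustrative check (not taken from the Mathematica documentation):

```python
# Start with -2, raise to the 10th power, then take the principal
# tenth root: the result is the positive real root, not -2.
x = -2
power = x ** 10          # 1024
root = power ** (1 / 10) # principal tenth root of 1024

print(root)
```

Ten different complex numbers have a tenth power equal to 1024; the principal root returned here is the positive real one, which is not the number we started with.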
There are many mathematical functions which, like roots, essentially give solutions to equations. The logarithm function and the inverse trigonometric functions are examples. In almost all cases,
there are many possible solutions to the equations. Unique "principal" values nevertheless have to be chosen for the functions. The choices cannot be made continuous over the whole complex plane.
Instead, lines of discontinuity, or branch cuts, must occur. The positions of these branch cuts are often quite arbitrary. Mathematica makes the most standard mathematical choices for them.
Sqrt[z] and z^s: (-∞, 0) for Re s ≤ 0; (-∞, 0] for Re s > 0 (s not an integer)
Exp[z]: none
trigonometric functions: none
ArcSin[z] and ArcCos[z]: (-∞, -1) and (+1, +∞)
ArcTan[z]: (-i∞, -i] and [+i, +i∞)
ArcCsc[z] and ArcSec[z]: (-1, +1)
hyperbolic functions: none
ArcSinh[z]: (-i∞, -i) and (+i, +i∞)
ArcTanh[z]: (-∞, -1] and [+1, +∞)
ArcSech[z]: (-∞, 0] and [+1, +∞)
Some branch-cut discontinuities in the complex plane.
ArcSin[z] is a multiple-valued function, so there is no guarantee that it always gives the "inverse" of Sin[z].
Values of ArcSin[z] on opposite sides of the branch cut can be very different.
Stanislav M. Mintchev
Visiting Assistant Professor of Mathematics
Department of Mathematics, Albert Nerken School of Engineering
The Cooper Union for the Advancement of Science and Art
Click here for a copy of my Curriculum Vitae in PDF format.
If you need to contact me...
Research Interests
I am an applied mathematician with specific training in dynamical systems theory and its applications. I hope to combine analytical techniques from geometry, analysis, and probability theory with
computer simulations in order to study problems from the physical and biological sciences. My current projects explore oscillation models from mathematical biology that can be cast in the setting of
applied dynamical systems. I am also interested in developing data mining algorithms with applications to the analysis of time-dependent data.
Click here for papers
In Progress (as of January 2010)
• Calculus II - Spring 2010 (
Cooper Union Moodle
• Vector Calculus - Spring 2010 (
Cooper Union Moodle
Some Links
Friends and Colleagues (non-trivial intersection)
Professional Organizations
Computing the Inductance of a Straight Wire
Computing the Inductance of a Straight Wire
A question that comes up occasionally is whether or not you can compute the inductance of a single straight wire. This seemingly simple question actually does not really have an answer, and gives us
the opportunity to talk about a very interesting point when solving Maxwell’s equations. Anybody working in the field of computational electromagnetics should have an understanding of this key
concept, as it will help you properly set up and interpret models involving magnetic fields.
What is the Inductance of a Single Straight Wire?
Before we get to this concept, let’s look at the question again: What is the inductance of a single straight wire? To begin to address this, we consider the energy-based definition of inductance:
The inductance, L, is defined in terms of the current, I, flowing through the system, and the total magnetic energy, W[m]: L = 2·W[m] / I². The magnetic energy density is proportional to B². The magnetic field, B, exists within and outside of
the wire. Within the wire, it increases linearly outwards from the centerline, and outside of the wire it falls off as 1/r, where r is the radius. So, to compute the magnetic field and the magnetic
energy distributions we build a model based on Maxwell’s equations, in this case Ampère’s Law. A current source is applied to one end of the wire while the other end is grounded. The tangential
magnetic flux density is constrained to zero on the exterior boundaries of the model.
We can build this model in COMSOL Multiphysics using the Magnetic and Electric Fields physics interface of the AC/DC Module, which will solve for the electric fields and currents in the wire, as well
as the magnetic fields in the wire and surrounding air. The Terminal boundary condition will excite the structure, and automatically compute the inductance using the above formula. The Ground
boundary condition at the other end of the wire provides a current sink, and the Magnetic Insulation boundary condition sets the tangential magnetic flux density to zero.
Now, the fields extend infinitely around the wire, but the magnetic energy density falls off as 1/r^2, so we might think that it is sufficient to study increasing radii of our modeling domain, and
that the inductance will converge. If we try this, however, the inductance as a function of radius will look like this:
As the radius increases, the inductance also increases. No matter how large we make the modeling domain, the inductance for this model of a straight wire does not converge!
Not Only Solving for the Inductance
Things are looking rather grim if we can’t even compute this simple case correctly, right? Well, actually, we aren’t just solving for the inductance of the wire in this model. You are always solving
for the inductance of the system, and that has to include the current return path. The governing Maxwell’s equations are formulated under the assumption that current (electrons) can be neither
created nor destroyed. In fact, the Magnetic Insulation boundary conditions provide this current return path. We can see this by plotting the current in the volumes as well as the surface currents:
The red arrows in the figure above show that the current is flowing along the wire, and also flowing along the Magnetic Insulation boundary conditions as a surface current. The Magnetic Insulation
boundary condition can be thought of as representing a material with infinite conductivity along which current can flow unimpeded. The current path in the above figure is solenoidal, meaning the
current is flowing in a closed path. There is a conductive path not only from the terminal to ground boundary conditions, through the wire, but also from the ground boundary condition back to the
terminals, along the boundaries.
Solenoidal Path
This concept of a solenoidal path for the current is the key point here. Whenever you are setting up any kind of magnetic fields model, you must ensure that current can flow in a closed loop. You
don’t have a choice: this is required by Maxwell’s equations. You also have to be aware that this current return path will affect the results. In the above model, we are not only computing the
inductance due to the current flowing along the wire, we are also considering the effect of the current flowing back along the magnetic insulation boundary conditions.
When setting up any kind of magnetic fields model, you have to model a complete solenoidal current path, and the inductance that you compute from such a model is of the entire current loop.
So that is why you can’t compute the inductance of a single straight wire!
Of course, as long as the system you are analyzing is solenoidal, with current flowing in a loop, you can correctly compute the inductance.
Further Reading
To see examples that compute the mutual inductance between circular loops of wire and compares against analytic solutions, please see:
Article Tags
1. Ivar Kjelberg May 14, 2013 at 2:35 am
Well, so lets consider this as a closed loop then: hence similar to a coax induction line (per meter assumed “long”):
L = mu_r*mu_0*length/(2*pi)*ln(Rext/Rint) (H)
since both Rint and Rext are known when we design our cylinder.
and then compare this also to the known inductance of an “INF” straight conductor
L = mu_r*mu_0*length/(2*pi)*(ln(2*length/R)-3/4) (H) (approximation, ignoring cap effect)
And by normalising Wm, by using Rext, one can easily see the effect of the two cylinder caps, or even attempt the exercise by surrounding the straight inductor with a sphere instead of a cylinder
Having fun Comsoling
2. Walter Frei May 14, 2013 at 9:17 am
Yes, thank you Ivar, that’s a good point. The issue is really independent of the shape of the surrounding geometry.
3. Lingling Tang July 17, 2013 at 10:01 am
Dear Frei,
I simulated your problem about inductance of a wire,and actually the surface current density on the boundary flows from the ground to the terminal to form a loop circuit.
My question is if you plot the surface current density in the tutorial model “integrated square-shaped spiral inductor”,there will be also the so called loop, because the spiral inductor model is
almost the same to your wire model,except a little geometry difference. So, does it also mean the spiral inductor’s inductance will also change if the dimensions of the outer air domain is
What’s more, I didn’t try to test the relationship between the radii and convergence.
4. Walter Frei July 17, 2013 at 10:30 am
Yes Lingling, that is correct: The same observation can be made for any geometry, such as the “integrated square-shaped spiral inductor”. The assumption with that model is that there is a square
metallic packaging of fixed size enclosing the device which provides the current return path.
Measurement Of No-Load Loss And Current
Introduction to test
The no-load losses are very much related to the operational performance of a transformer. As long as the transformer is operated, these losses occur. For this reason, no load losses are very
important for operational economy. No-load losses are also used in the heating test.
The no-load loss and current measurements of a transformer are made while one of the windings (usually the HV winding) is kept open and the other winding is supplied at the rated voltage and
During this test the no-load current (Io) and the no-load losses (Po) are measured.
The measured losses depend heavily on the applied voltage waveform and frequency. For this reason, the waveform of the voltage should be very sinusoidal and at rated frequency.
Normally, the measurements are made while the supply voltage is increased at equal intervals from 90% to 115% of the transformer rated voltage (Un) and this way the values at the rated voltage can
also be found.
No-load losses and currents
The no-load losses of a transformer are grouped in three main topics:
1. Iron losses at the core of the transformer,
2. Dielectric losses at the insulating material and
3. The copper losses due to no-load current.
The last two of them are very small in value and can be ignored.
So, only the iron losses are considered in determining the no-load losses.
Measuring circuit and performing the measurement
In general according to the standards, if there is less than 3% difference between the effective (U) value and the average (U’) value of the supply voltage, the shape of the wave is considered as
appropriate for measurements.
If the supply voltage differs from a sinusoid, the measured no-load losses have to be corrected by calculation. In this case, the effective (r.m.s.) value and the average (mean) value of the
voltage are different. If the readings of both voltmeters are equal, there is no need for correction.
During measurements, the supply voltage U' is set according to the average-value voltmeter. In this way, the intended induction is established and, as a result, the hysteresis losses
are measured correctly. The eddy-current losses should be corrected according to the equation below.
P[m] = P[0] · (P[1] + k · P[2])
P[m]: Measured loss
P[0]: No-load losses where the voltage is sinusoidal
Here: P[0] = P[h] + P[E] = k[1] · f + k[2] · f^2
k = [ U / U' ]^2
P[1]: The hysteresis loss ratio in total losses (P[h]) = k[1] · f
P[2]: The eddy-curent loss ratio in total losses (P[E]) = k[2] · f^2
At 50 Hz and 60 Hz, in cold oriented sheet steel, P[1] = P[2] = % 50. So, the P[0] no-load loss becomes:
P[0] = P[m] / (P[1] + k · P[2]) where P[1] = P[2] = 0.5
According to IEC 60076-1: P[0] = P[m] · (1 + d) where d = [ (U' - U) / U' ] (equivalent, to first order in d, to the formula above)
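As a worked example, the correction arithmetic can be sketched as follows (the readings below are hypothetical, chosen only to illustrate the calculation):

```python
# Hypothetical no-load test readings:
U  = 398.0   # r.m.s. voltmeter reading, V
Up = 400.0   # mean-value voltmeter reading scaled to r.m.s. (U'), V
Pm = 1250.0  # measured no-load loss, W

k = (U / Up) ** 2
P1 = P2 = 0.5              # hysteresis / eddy-current shares (cold oriented steel)

P0 = Pm / (P1 + k * P2)    # corrected sine-wave no-load loss
d = (Up - U) / Up
P0_iec = Pm * (1 + d)      # small-distortion IEC 60076-1 approximation

print(round(P0, 1), round(P0_iec, 1))
```

Since the two voltmeter readings are nearly equal, the correction is small and the two formulas agree closely; if the readings were identical (k = 1, d = 0), no correction would be needed.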
During no-load loss measurement, the effective value of the no-load current of the transformer is measured as well. In general, in three phase transformers, evaluation is made according to the
average of the three phase currents.
Before the no-load measurements, the transformer might have been magnetised by direct current and its components (resistance measurement or impulse tests).
For this reason, the core has to be demagnetised. To do this, it has to be supplied by a voltage value (increasing and decreasing between the maximum and minimum voltage values for a few minutes)
higher than the rated voltage for a certain time and then the measurements can be made.
The no-load currents are neither symmetrical nor of equal amplitude in three phase transformers. The phase angles between voltages and currents may be different for each of three phases.
For this reason, the wattmeter readings on each of the three phases may not be equal. Sometimes one of the wattmeter values can be 0 (zero) or negative (-).
Resource: Transformer Tests – BEST Transformers
2 Comments
Oscar Vasquez Romero
Dec 05, 2013
What level of harmonics is generated by the DC-AC inverters used in photovoltaic generation, and does this have an effect when that energy is injected into a conventional electrical distribution network? Does the harmonic level in the network increase?
2. [...] and Check of Phase Displacement (on photo: OSB laboratory of BEST Transformers) [...] The no-load voltage ratio between two windings of a transformer is called the turn ratio. The aim of measurement is: Confirming [...]
37 neural network downloads
...OLSOFT Neural Network Library is...such as other DLLs. OLSOFT Neural Network Library is the class...learn and use Back Propagation neural networks and SOFM (Self-Organizing Feature...The library
makes integration of neural networks functionality into your own...and training capacities of OLSOFT Neural Network library allow using it...in C++; supports multilayer artificial networks (Back
Propagation and SOFM); forward...
Windows 2000, XP, Vista
Nov 26, 2009
Downloads : 9
...Neural network classification results live view (like...Free software for playing with neural networks classification. Major features...Many network architectures....momentum. For better
understanding of neural networks....
Windows 7, XP, Vista
Feb 9, 2014
Downloads : 1
Predict price change in dollars from the current time to the end of the next trading session. We made scalping easy!
Most accurate downloads for neural network
Windows 2000, XP
Jul 6, 2007
Downloads : 61
...validation, and application of regression/approximation networks. Fast pruning algorithm creates...nested sequence of different size networks. Includes the multilayer perceptron (MLP),...functional link network, piecewise linear network, self organizing map (SOM) and...source code for applying trained networks is provided, so users can...use networks in their own applications. User-supplied...provided. Fast VB Graphics for training error and cluster formation...
Windows 7, XP, Vista
Aug 22, 2013
Downloads : 29
...The NeuroSolutions for MATLAB neural network toolbox is...Simulink. The toolbox features 15 neural network models,...“next to no knowledge” of neural networks to begin using the...enables users to build custom neural networks in NeuroSolutions and use it...to solve your problems using neural networks inside the MATLAB environment....
Windows 7, 98, XP, Vista
Jun 2, 2010
Downloads : 48
...of the best of the neural network stock prediction programs. It...The green trace is the neural net's attempt to match the...blue curve; this means the neural net can match the advanced...
Windows 98, 2000, XP, Vista
Nov 5, 2010
Downloads : 49
...Markov Automation, Finite impulse response neural network, Multivariate Stepwise Regression, Linear...
Windows 98, 2000, XP
Jul 2, 2007
Downloads : 49
...Opponent modeling uses an artificial neural network for betting pattern recognition....
Windows 7, XP, Vista
Apr 20, 2014
Downloads : 1
...neural network forecasting add-in. NeuroXL Clusterizer is...neural network add-in for data cluster analysis...
Windows 7, 98, 2000, XP, Vista, CE, Mobile
Feb 2, 2014
Downloads : 19
...neural network system for Microsoft Windows. It...makes the creation of neural networks easy. It allows the user...to produce multilayer neural networks from...user can see how the neural networks are working....
Windows 2000, XP
Jul 5, 2007
Downloads : 20
...highly sophisticated and advanced neural network that is able to look...
...and post them in real-time. Neural-Network 5-day forecast calculates...
Windows 2000, XP
Jul 5, 2007
Downloads : 21
...and application of classification type networks. Fast pruning algorithm creates...nested sequence of different size networks. Includes the multilayer perceptron (MLP),...functional link network, piecewise linear network, nearest neighbor classifier (NNC), self...source code for applying trained networks is provided, so users can...use networks in their own applications. User-supplied...provided. Fast VB Graphics for classification error and SOM cluster...
Windows 98, 2000, XP
Jul 6, 2007
Downloads : 20
...simulation and development of LTF-C neural networks. You can train and...network, save it in...
Windows 98, Mac OS, Linux
Jun 30, 2007
Downloads : 40
...the back-propagation rule used for neural networks and that new extended form...
Windows 98, 2000, XP
Jun 30, 2007
Downloads : 11
...neural network application that learns how the...health and fitness. All the neural network functions are managed automatically....need to know anything about neural networks. All you need to...
Windows 98, 2000, CE
Jun 26, 2007
Downloads : 19
Neural Networks are computational paradigms which implement...of their biological counterparts, biological neural networks. Neural Networks are...the brain. The implementation of Neural Networks for brain-like computations like...specific locations in memory. Artificial Neural Networks use highly distributed representations...
...neural network you must have...neural network by creating sample problems and...
Windows 7, 98, 2000, XP, Vista
Dec 1, 2013
Downloads : 27
...versions of three products for neural network design and development: NeuroSolutions,...highly graphical neural network development tool for Windows. This...modular, icon-based network design interface with an implementation...virtually unconstrained environment for designing neural networks for research or to...in and out of the network. The Custom Solution Wizard is...program that will take any neural network created with NeuroSolutions and...
Windows 98, 2000, XP
Jul 5, 2007
Downloads : 31
...indicators in conjunction with advanced neural networks and genetic algorithms to...Solution Service, which includes 10 neural network models and the end-of-day...future prices with exclusive time-based neural networks, 5) Implement your own...
Windows 98, 2000, XP
Jul 6, 2007
Downloads : 11
...package implements an Artifical Neuronal Network (ANN), which is based on...
Windows 98, 2000, XP, Vista
Dec 19, 2009
Downloads : 31
...combines technical analysis with artificial neural networks and modern adaptive signal...based on your criteria and neural network forecasting. *Neural Network Forecasting....Neural networks
Windows 98, 2000, XP, Vista, CE
Aug 14, 2008
Downloads : 5
...neural network. The input values are forced...values are forecasted by the neural network....new neural network is created using the new...inputs and forecasted outputs. The neural networks are
compared. The inputs...are adjusted and another new neural network is created. The process...new neural network agrees with the original about...
Windows 7, 98, 2000, XP, Vista
Jul 8, 2012
Downloads : 3
...neural network system for Microsoft Windows. It...makes the creation of neural networks easy. It allows the user...to produce multilayer neural networks from...
Windows 7, 98, 2000, XP, Vista
Dec 22, 2011
Downloads : 10
...neural network forecasting tool that quickly and...in solving real-world forecasting problems. Neural networks are...modeled after the human brain, neural networks are interconnected networks
of...not require advanced knowledge of neural networks, and is integrated seamlessly...possible to save the trained network and then load it when...advances in artificial intelligence and neural
network technology, it delivers accurate...
...neural network add-in for Microsoft Excel. NeuroXL...hides the underlying complexity of neural network processes while providing graphs...integrates seamlessly with Microsoft Excel. Neural
networks are...modeled after the human brain, neural networks are interconnected networks of...NeuroXL Classifier software implements self-organizing neural networks, which perform categorization
by...advances in artificial intelligence and neural network technology, it delivers accurate...
Windows 7, XP, Vista
Apr 20, 2014
Downloads : 1
...neural network toolkit for Microsoft Excel. It...neural network forecasting tool that quickly and...not require advanced knowledge of neural networks, and is integrated seamlessly...possible to
save the trained network and then load it when...hides the underlying complexity of neural network processes while providing graphs...advances in artificial intelligence and neural network
technology, it delivers accurate...
Windows XP
Mar 13, 2009
Downloads : 5
...Create and Manage My Professional Network Set and Plan Set and...
Windows 98, 2000, XP, Vista
Aug 13, 2009
Downloads : 10
...BrainCom is an artificial neural network. Utilizing backpropagation algorithm It can...
Windows 7, XP, Vista
Mar 2, 2014
Downloads : 1
...neural network add-in for Microsoft Excel. NeuroXL...hides the underlying complexity of neural network processes while providing graphs...integrates seamlessly with Microsoft Excel. Neural
networks are...modeled after the human brain, neural networks are interconnected networks of...NeuroXL Clusterizer software implements self-organizing neural networks, which perform categorization
by...advances in artificial intelligence and neural network technology, it delivers accurate...
Windows 7, 98, 2000, XP, Vista, Mobile
May 10, 2011
Downloads : 13
...core signals, they also offer Neural Network Signals that are continually...
Windows 7, XP, Vista
Nov 17, 2011
Downloads : 2
...Optional neural network engine, tells you when to...
Windows 7, 2000, XP, Vista
Oct 13, 2012
Downloads : 4
...components for Artificial Intelligence. Includes: Neural networks, Naive Bayesian, Radial Basis...Function Network, Self Organizing Map, K-Nearest Neighbor...
Windows 7, XP, Vista
Jun 18, 2013
Downloads : 4
...Neural Network...Feed forward Neural Network classifier....Radial Basis Function Network classifier....Back propagation Neural Network trainer....Resilient back propagation Neural Network
Windows 7, XP, Vista
Jun 21, 2013
Downloads : 5
...Neural Network...Feed forward Neural Network classifier....Radial Basis Function Network classifier....Back propagation Neural Network trainer....Resilient back propagation Neural Network | {"url":"http://www.fortedownloads.com/software/neural_network/","timestamp":"2014-04-21T15:22:06Z","content_type":null,"content_length":"38400","record_id":"<urn:uuid:ac500bad-01b8-4dc4-9ed7-853e12cecc63>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00220-ip-10-147-4-33.ec2.internal.warc.gz"} |
AdS/CFT goes cold
Last week Dam Son gave two nice talks about phenomenological applications of AdS/CFT:
one about heavy ions, and the other about non-relativistic conformal field theories (CFTs). While the former application is widely discussed in pubs and blogs, the latter is a relatively new
development. It seems that, after having entrenched itself in heavy ion territory, particle theory has launched another offensive on the unsuspecting condensed matter folk. Just yesterday I saw two new papers on the subject posted on ArXiv.
AdS/CFT as we know it relates strongly coupled gauge theories to gravity theories in one more dimension. The original tablets received at Mount Sinai by Maldacena speak of highly symmetric and all-but-realistic theories: N = 4 super Yang-Mills on the gauge theory side and 10D type IIB supergravity in the $AdS_5\times S^5$ background on the gravity side. Later, the correspondence was
vulgarized to allow for phenomenological applications. In particular, some success was reported in postdicting meson spectra of low-energy QCD and explaining the small viscosity-to-entropy ratio of the quark-gluon plasma. Heavy ion collisions are a total mess, however, and one would welcome an application in a field where the experimental conditions can be carefully tuned. Condensed matter physics enjoys that privilege and, moreover, laboratory systems near a critical point are often described by CFT. The point is that in most cases these are non-relativistic CFTs.
A commonly discussed example of a condensed matter system is the so-called fermions at unitarity (what's in the name?). This system can be experimentally realized as trapped cold atoms at the
Feshbach resonance. Theoretically, it is described using a fermion field with the non-relativistic free Lagrangian $i\psi^\dagger \partial_t \psi - |\partial_x\psi|^2/2m$ and short-range interactions provided by the four-fermion term $c_0 (\psi^\dagger \psi)^2$. The experimental conditions can be tuned such that $c_0$ is effectively infinite. In this limit the system has the same symmetry as the free theory and, in particular, it has scale invariance acting as $x \to \lambda x$, $t \to \lambda^2 t$. The full symmetry group also includes the non-relativistic Galilean transformations and special
conformal transformations, and it is called the Schrodinger group (because it is the symmetry group of the Schrodinger equation). Most of the intuition from relativistic CFT (scaling dimensions,
primary operators) carries over to the non-relativistic case.
The most important piece of evidence for the AdS/CFT correspondence is matching of the symmetries on both sides of the duality. For example, the relativistic conformal symmetry SO(2,4) of the SYM
gauge theory in 4D is the same as the symmetry group of the AdS spacetime. In the case at hand we have a different symmetry group so we need a different geometric background on the gravity side. The
Schrodinger group Sch(d) in d spatial dimensions can be embedded in the conformal group SO(d+2,2). For the interesting case d = 3 this shows that one should look for a deformation of the AdS
background in six space-time dimensions, one more than in the relativistic case. In a paper from April this year, Dam Son identified the background with the desired symmetry properties. It goes like this:
$ds^2 = \frac{-2 dx^+ dx^- + dx^i dx^i + dz^2}{z^2} - \frac{(dx^+)^2}{z^4}$.
The first term is the usual AdS metric, the last term is a deformation that reduces the symmetry down to the Schrodinger group Sch(d). The light-cone coordinate $x^-$ is compactified, and the
quantized momentum along that coordinate is identified with the mass operator in the Schrodinger algebra.
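As a quick consistency check (spelled out here, not in the original post), this metric is indeed invariant under the Schrodinger scaling, with the light-cone coordinate $x^+$ playing the role of non-relativistic time:

$x^i \to \lambda x^i$, $\quad x^+ \to \lambda^2 x^+$, $\quad z \to \lambda z$, $\quad x^- \to x^-$.

Each term then maps back to itself: $dx^+ dx^-/z^2 \to \lambda^2\, dx^+ dx^-/(\lambda^2 z^2)$, $dx^i dx^i/z^2 \to \lambda^2\, dx^i dx^i/(\lambda^2 z^2)$, and $(dx^+)^2/z^4 \to \lambda^4\, (dx^+)^2/(\lambda^4 z^4)$.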
So, the hypothesis is that fermions at unitarity have a dual description in terms of a gravity theory on that funny background. Many details of the correspondence are still unclear. One obstacle
seems to be that fermions at unitarity do not have an expansion parameter analogous to the number of colors of relativistic gauge theories. A more precise formulation of the duality is clearly | {"url":"http://resonaances.blogspot.co.uk/2008/09/adscft-goes-cold.html","timestamp":"2014-04-21T13:11:36Z","content_type":null,"content_length":"89396","record_id":"<urn:uuid:eb59dffd-077c-48bc-b464-e4c31d50e7f0>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00561-ip-10-147-4-33.ec2.internal.warc.gz"} |
Arcadia, CA SAT Math Tutor
Find an Arcadia, CA SAT Math Tutor
...She was voted "Favorite Teacher" two semesters in a row. Combining her passion in science, teaching and research, Sharon went on to obtain a master's in Genetic Counseling at Cal State
Northridge and a second master's in Public Health at UCLA. During her spare time, Sharon hikes with her dog, Pooch, all over Los Angeles County.
19 Subjects: including SAT math, reading, geometry, biology
...Subjects that I am experienced in tutoring include Algebra, Geometry and Calculus. Subjects that I am available for and well-studied in are Linear Algebra and Differential Equations. I have tutored mathematics at my community college for one year, and the subjects vary from pre-algebra to calculus....
11 Subjects: including SAT math, calculus, geometry, algebra 1
...I received the maximum score on both the quantitative section and the analytical writing section. Although I never took the ACT exam, I received a perfect score on the math sections of the PSAT
(80/80), SAT (800/800), GRE (170/170), and GMAT (51/51), all of which cover similar material. I took two years of calculus in high school as well as advanced probability theory in college.
38 Subjects: including SAT math, reading, chemistry, statistics
...I have completed college-level math courses up through calculus III, and can help clarify the most difficult topics covered in algebra 2. I have an extensive background in the biological
sciences, and can help you excel in your most difficult course. I have spent many hours helping high school and college students understand and succeed in biology.
31 Subjects: including SAT math, chemistry, reading, English
...If your test scores don’t go up, I’ll teach it again for free. So here's my story - I was a math whiz in school (800 Math SAT), studied engineering at UCSD (BSME with honors), worked on rockets
for a while (Delta IV launch vehicles), then became a youth pastor because teaching teens is awesome! ...
24 Subjects: including SAT math, Spanish, physics, writing | {"url":"http://www.purplemath.com/Arcadia_CA_SAT_Math_tutors.php","timestamp":"2014-04-20T06:46:30Z","content_type":null,"content_length":"24127","record_id":"<urn:uuid:047b0056-c2b6-4c94-ba2d-6be738a17c23>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00598-ip-10-147-4-33.ec2.internal.warc.gz"} |
Haddon Township, NJ Calculus Tutor
Find a Haddon Township, NJ Calculus Tutor
...Each teacher received special training on how to aid students with a variety of differences, including ADD and ADHD. There, and since then, I have worked with several students with ADD and ADHD, both
in their math content areas and with executive skills to help them succeed in all areas of their life. I have tutored test taking for many tests, including the Praxis many times.
58 Subjects: including calculus, reading, geometry, biology
...If you don't get that grade, I will refund your money, minus any commission I paid to this website. Please note that I only tutor college students, advanced high school students, returning
adult students, and those studying for standardized tests such as SAT, GRE, and professional licensure exam...
11 Subjects: including calculus, statistics, ACT Math, precalculus
...While I preferred to give my time to friends without charge, I began taking clients during my sophomore year for a senior level class named Theory of Probability. These clients were fellow
students that requested my help to better understand a very difficult topic. I was glad to help them and arranged a weekly meeting to review the course material.
20 Subjects: including calculus, chemistry, physics, geometry
...I have taken seven semesters of calculus courses, as well as two courses that required me to study the underlying foundations of calculus. I have been trained to teach Geometry according to the
Common Core Standards. I have planned and executed numerous lessons for classes of high school students, as well as tutored many independently.
11 Subjects: including calculus, geometry, algebra 1, algebra 2
...I have prepared high school students for the AP Calculus exams (both AB and BC), undergraduate students for the math portion of the GRE, and have helped many other students with math skills
ranging from basic arithmetic all the way up to Calculus 3 and basic linear algebra. In my free time, I en...
22 Subjects: including calculus, geometry, GRE, ASVAB | {"url":"http://www.purplemath.com/haddon_township_nj_calculus_tutors.php","timestamp":"2014-04-21T02:15:11Z","content_type":null,"content_length":"24689","record_id":"<urn:uuid:59b4dbca-ae20-4d59-800d-b1568278df06>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00339-ip-10-147-4-33.ec2.internal.warc.gz"} |
Crossing the Bridge
Copyright © University of Cambridge. All rights reserved.
This was a very popular problem - we received over 200 correct solutions including many from Oxgangs Primary School in Edinburgh, Red Hill Field Primary School, Culford Prep School, Jebel Ali Primary School in Dubai, Greenfields Junior School, Howell's School in Cardiff, Archbishop Temple High School, Stewart County Middle School, Hassall Grove P.S. in Australia, St. Paul's Catholic College, Cardiff High School, Casterton Preparatory School, Outwoods Edge Primary School, Thomas Deacon Academy...
Danielle from Howell's School in Cardiff explained the strategy for getting the four friends across in the shortest time:
The trick to this problem is to get the slowest people across together, because otherwise you are wasting too much time. But once you've got them across, how do you make one of them not walk back? The answer to that one is to get the fastest people across first, so that when the slow people are over, the fastest of the group can go quickly back with the flashlight. The two fastest can then run back across.
1 and 2 cross over first.
2 minutes
Then 1 goes back.
1 minute
Then 7 and 10 cross over.
10 minutes
Then 2 crosses back.
2 minutes
Then 2 and 1 cross together.
2 minutes
Total Time: 17 minutes!
Josh captured the different stages here
Patrick from Woodbridge School wrote:
The fastest solution I have found uses Strategy 1 (the one above):
Two shortest times cross together, either comes back.
Two longest times cross together, shortest (left there from last step) comes back.
Two shortest cross again.
The second fastest solution uses Strategy 2:
Shortest crosses with longest, shortest comes back.
Shortest crosses with second longest, shortest comes back.
Two cross together.
The only speeds I can find to make both strategies take the same time is if all the people go at the same speed.
Rhea and Macy from Mason Middle School found another set of timings when both Strategies would give identical results:
For the numbers 3, 4, 5, and 6, both Strategies give a shortest crossing of 21 minutes.
Harry from Dumpton School compared the times involved in each strategy:
If A is the quickest time to cross, followed by B, then C, then D.
Strategy 1:
A+B cross. B goes back.
C+D cross. A goes back.
A+B cross.
Strategy 2:
A+B cross. A goes back.
A+C cross. A goes back.
A+D cross.
Time Strategy 1 - Time Strategy 2 = 2B - (A+C)
If 2B > (A+C), then Time 1 > Time 2, so Strategy 2 is best.
If 2B < (A+C), then Time 1 < Time 2, so Strategy 1 is best.
If 2B = (A+C), then Time 1 = Time 2, and both strategies are equal.
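Harry's comparison is easy to check with a short script (a sketch in Python; it assumes the four crossing times are given fastest to slowest, so a ≤ b ≤ c ≤ d):

```python
def strategy1(a, b, c, d):
    # A+B cross (time b), one fast walker returns (a), C+D cross (d),
    # the other fast walker returns (b), A+B cross again (b).
    return b + a + d + b + b

def strategy2(a, b, c, d):
    # A escorts B, C and D across one at a time, walking back twice.
    return b + a + c + a + d

print(strategy1(1, 2, 7, 10))                          # 17, the original puzzle
print(strategy1(3, 4, 5, 6), strategy2(3, 4, 5, 6))    # 21 21, as Rhea and Macy found
print(strategy1(4, 5, 6, 7) == strategy2(4, 5, 6, 7))  # True, since 2B = A + C here
```

Trying other quadruples confirms the rule: Strategy 1 wins whenever A + C > 2B, and Strategy 2 wins whenever A + C < 2B.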
Someone calling themselves "a very old person" used the same reasoning to explain how to choose between the strategies:
If we line up the times, fastest to slowest A, B, C, D.
Strategy 1:
(A and B) -->
< -- (A or B, it doesn't matter)
(C and D) -->
< -- (A or B, whatever you didn't use last time)
(A and B) -->
Strategy 2:
(A and B) -->
< -- (A)
(A and C) -->
< -- (A)
(A and D) -->
TAKING ONLY THE LONGEST TIME OF EACH PAIR we find that
Strategy 1 has a total crossing time of B + B + D + A + B
Strategy 2 has a total crossing time of B + A + C + A + D
The time for the slowest to cross (D) is incidental in both strategies.
The difference in time between the two strategies is therefore A+C vs 2B.
So when 2B = A+C they will take the same time. Wherever A+C is greater than 2B, Strategy 1 will give us the fastest crossing. Wherever A+C is less than 2B, Strategy 2 will give us the fastest crossing.
Here are examples of each case:
2B = A+C (use either Strategy) when the times are 4, 5, 6, and 7
A+C is greater than 2B (use Strategy 1) when the times are 1, 2, 7 and 10 (as in the original problem)
A+C is less than 2B (use Strategy 2) when the times are 1, 8, 9 and 10 | {"url":"http://nrich.maths.org/5916/solution?nomenu=1","timestamp":"2014-04-17T12:53:48Z","content_type":null,"content_length":"8213","record_id":"<urn:uuid:7489febf-e1ae-486f-8c3f-da80b1302699>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00255-ip-10-147-4-33.ec2.internal.warc.gz"} |
Module 3 - Understanding Decimals - SpaceMath@NASA
Engage your students with a press release:
Cassini Delivers Holiday Treats from Saturn!
No team of reindeer, but radio signals flying clear across the solar system from NASA's Cassini spacecraft have delivered a holiday package of glorious images. The picture above shows Saturn's largest, most colorful ornament, Titan, and a smaller moon called Dione in orbit around this splendid planet, whose rings you can see at the bottom of the picture.
The pictures that were part of this Cassini Christmas card include images in which one moon passes in front of or behind another. Cassini scientists regularly make these observations
to study the ever-changing orbits of the planet's moons. But even in these routine images, the Saturnian system shines. A few of Saturn's stark, airless, icy moons appear to dangle next to the orange
orb of Titan, the only moon in the solar system with a substantial atmosphere. Titan's atmosphere is of great interest because of its similarities to the atmosphere believed to exist long ago on
Earth soon after it formed 4.5 billion years ago.
While it may be wintry in Earth's northern hemisphere, it is currently northern spring in the Saturnian system and it will remain so for several Earth years. Current plans to extend the Cassini
mission through 2017 will supply a continued bounty of scientifically rewarding and majestic views of Saturn and its moons and rings, as spectators are treated to the passage of northern spring and
the arrival of summer in May 2017.
"As another year traveling this magnificent sector of our solar system draws to a close, all of us on Cassini wish all of you a very happy and peaceful holiday season, " said Carolyn Porco, Cassini
imaging team lead at the Space Science Institute, Boulder, Colo.
Press release date line - December 22, 2011
Press release location: [ Click Here ] | {"url":"http://spacemath.gsfc.nasa.gov/Modules/6Module3.html","timestamp":"2014-04-25T01:26:42Z","content_type":null,"content_length":"18438","record_id":"<urn:uuid:397859cf-49da-4c7f-a4a3-5c94b709e2d7>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00526-ip-10-147-4-33.ec2.internal.warc.gz"} |
In part two Dr Jackie Stedall and Professor Ian Stewart tell us the story of Al-Khwarizmi, the mathematician who introduced the world to the radical system of Hindu numerals - the numbers zero to
nine - and how the word algebra comes from the Arabic title of one of his books.
In his book he revolutionised maths by focussing on the relationships between numbers rather than simply using maths to find the answer to particular problems. For mathematicians today, this was a
vital development in our understanding. Another legacy was his name which gives us the modern word algorithm, a process that lies at the heart of how all computers work.
Professor Nader el-Bizri tells also of the great Ibn al-Haytham, who first realised how it is that vision works.
His work with light and optics was so revolutionary that he could be seen as the father of physics, rivaling Isaac Newton for the title.
Perhaps more importantly, he was also the instigator of what we now call the scientific method. Some people have thought that such a precise approach to scientific study began in Europe, hundreds of
years later. | {"url":"http://huffduffer.com/mcmc/tags/science","timestamp":"2014-04-19T22:30:15Z","content_type":null,"content_length":"14983","record_id":"<urn:uuid:dc78df9f-d929-4b68-a9a0-64dbeb12ca7f>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00441-ip-10-147-4-33.ec2.internal.warc.gz"} |