Although the title is about Lie algebras, the question body mentions Lie groups, and my answer will deal more with these. As mentioned in other answers, Lie groups show up frequently in geometry as groups of symmetries of geometric objects. For example, given a manifold $M$ we can sometimes find a Lie group $G$ that acts on $M$ in some interesting fashion, and it is then not unreasonable to hope that this action might yield information about both $G$ and $M$.
Let's look at something a bit more specific. Suppose we have a compact connected Lie group $G$ acting 'in some nice fashion' on a manifold $M$. Typically what one does in this case is break up $M$ into $G$-orbits, and then study each piece individually. Each orbit will be a homogeneous space $G/H$ of $G$, where $H$ is the stabilizer of some point in the orbit. The space $G/H$ is very symmetric-looking, and one might try to exploit the symmetry to gain some structural information. What we have done -- roughly speaking -- is cast aside the manifold and are now working primarily with the group. Of course an interesting special case is when the action of $G$ on $M$ is transitive, i.e. when there is only one $G$-orbit in $M$ so that $M=G/H$ is itself a homogeneous space. There is so much to say about manifolds of the form $G/H$ that I will restrict myself only to two things.
1) The computation of the (real) cohomology of $G/H$ becomes a problem involving the Lie algebras $\mathfrak{g}$ and $\mathfrak{h}$ of $G$ and $H$, which are
linear algebraic objects! In particular, if $H$ is closed and connected in the compact and connected Lie group $G$ then the cohomology ring $H^\ast(G/H;\mathbb{R})$ is isomorphic to the relative Lie algebra cohomology ring $H^\ast(\mathfrak{g}, \mathfrak{h};\mathbb{R})$. For instance if $H$ is the trivial subgroup, we obtain the isomorphism $H^\ast(G;\mathbb{R}) \cong H^\ast(\mathfrak{g};\mathbb{R})$ mentioned in the OP; and indeed, computing $H^\ast(\mathfrak{g})$ is a much more tractable problem. Another interesting special case is when $H$ is a maximal torus in $G$, but I will not say more about this here...
2) Vector bundles over $G/H$ are related to the representation theory of $G$. Strictly speaking, this is only true of
equivariant vector bundles, i.e. vector bundles $\pi \colon E \to G/H$ where $G$ acts on the total space $E$ in a way that respects its action on the base $G/H$: that is, we ask that $\pi(ge) = g\pi(e)$ for all $g \in G$ and $e\in E$ and that translation between fibers $E_x \to E_{gx}$ be linear. The fiber lying over the trivial coset in $G/H$ is then seen to carry a representation of $H$. Is there an action of $G$ lurking around? Yes: $G$ acts on the sheaf cohomology $H^\ast(G/H, V)$! Thus we can relate the cohomology of $H^\ast(G/H,V)$ to the representation theory of $G$.
A very important special case is when $H$ is a maximal torus $T$ and $V$ is an equivariant (holomorphic) line bundle $L \to G/T$ (let's not fret about the "holomorphic" bit). (There is a miraculous fact that if $G$ is simply connected then
every holomorphic line bundle over $G/T$ is automatically equivariant. In particular, this means that even if $G$ isn't simply connected, we always get an action of the Lie algebra $\mathfrak{g}$ of $G$ on $H^\ast(G/T,L)$, even if there is no corresponding action of $G$. In other words, we can use the representation theory of $\mathfrak{g}$ to study $H^\ast(G/T,L)$.) There is a very explicit description of $H^\ast(G/T, L)$ in terms of the representation theory of $G$: it turns out that either $H^\ast$ vanishes completely, or else it is nonzero in a single degree $q_L$, in which case $H^{q_L}(G/T,L)$ is an irreducible representation of $G$. (This can be made much more precise; in particular, there is an explicit description of $q_L$ and of the resulting irreducible representation in terms of weights. The key phrase here is "Borel--Weil--Bott theorem.'')
Here is a concrete example. If $G = \operatorname{SU}(2)$ and $T$ is its diagonal subgroup, then $G/T = \mathbb{C}P^1$, and one can use the Borel--Weil--Bott theorem to describe the cohomology groups $H^\ast(\mathbb{C}P^1, \mathcal{O}(n))$. For instance, the fact that $H^0(\mathbb{C}P^1, \mathcal{O}(n)) = \text{Sym}^n(\mathbb{C}^2)$ (for $n \geq 0$) comes from the fact that $\text{Sym}^n(\mathbb{C}^2)$ is the irreducible representation of $\operatorname{SU}(2)$ of highest weight $n$.
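As a quick dimension check on this example: global sections of $\mathcal{O}(n)$ on $\mathbb{C}P^1$ are the degree-$n$ homogeneous polynomials in two variables, so
\[ \dim H^0(\mathbb{C}P^1, \mathcal{O}(n)) = n+1 = \dim \operatorname{Sym}^n(\mathbb{C}^2), \]
which matches the dimension of the irreducible $\operatorname{SU}(2)$-representation of highest weight $n$.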
There is another obvious reason why Lie groups are important in geometry: they are themselves geometric objects (namely, manifolds)! So you cannot expect to say something about general manifolds that cannot be said about them. Since Lie groups are a relatively well-behaved class of manifolds, one can use them as a test case for, or a launch pad to, more general results. The same can be said about homogeneous spaces $G/H$. For example, general results like the Atiyah--Bott fixed point formula and the Atiyah--Singer index formulas, when applied to $G/T$ (where $G$ is a compact and connected Lie group and $T$ is a maximal torus), are closely related to the Weyl character formula for $G$.
|
The climate of Earth has been roughed up quite a bit over the last century. But it has no idea what it's got coming with this portal of yours. Earth turns into Venus.
Update: As R.M. pointed out, the amount of energy is not 'maybe a long term thing', it's the Major Issue. This has been fixed now. How much water are we talking?
Let's say your portal is 10km below sea level. Dropping from that pressure to pressure at sea level gives a flow speed of somewhat over 400 meters per second: $\sqrt{2 \cdot 10\,\frac{m}{s^2} \cdot 10^4\,m}$ (water is incompressible so we can just use potential energy). This is well over the speed of sound in air, and comparable to the speed of a typical handgun bullet.
This 400 m/s flow is through the entire portal, $\pi \cdot 5000^2\,m^2 \approx 7.9 \cdot 10^7\ m^2$, for a total of about $3\cdot 10^{10}\ m^3$ of water per second; that's a cube of water about 3 km (2 miles) on a side every second.
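For readers who want to fiddle with these numbers, here is a rough back-of-the-envelope sketch in Python; the 10 km depth, the 10 km portal diameter and $g \approx 10\,\mathrm{m/s^2}$ are the same rounded assumptions used above.

```python
import math

g = 10.0          # m/s^2, rounded
depth = 10_000.0  # m: the portal sits 10 km below sea level
radius = 5_000.0  # m: the portal is 10 km across

# Torricelli-style estimate: all of the pressure head becomes kinetic energy
speed = math.sqrt(2 * g * depth)      # ~447 m/s
area = math.pi * radius ** 2          # ~7.9e7 m^2
flow = speed * area                   # ~3.5e10 m^3/s

print(f"flow speed : {speed:.0f} m/s")
print(f"portal area: {area:.2e} m^2")
print(f"volume flow: {flow:.2e} m^3/s")
print(f"equivalent cube side: {flow ** (1/3):.0f} m per second")
```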
Comparing that to other rivers: the discharge of your portal is about 100 000 cubic kilometers per hour, or about three times the amount of water discharged by all of Earth's rivers in a year.
Comparing it to a lahar, a very destructive mud flow: these can be 100 meters deep and run at 'several tens of meters per second'. If the stream from your portal turned into a lahar 50 meters deep, it would be 2000 km wide. If we were to slow it down to a mere 40 meters per second (as fast as a car going over the speed limit), it would have to be another 10 times as wide, so 20 000 km. Sailing around the entire African continent is only slightly more than that. So the entire African coast would turn into an extremely destructive mud flow almost 50 m deep. Given that there are mountains to the south of the Sahara, the mud flow will probably be much deeper and mostly to the North.
At this point most of my assumptions are starting to break down. I assumed the effect on the ocean surface would not be too great. It will likely be a giant maelstrom tens or maybe even hundreds of kilometers across. This means the amount of water flowing through it is going to be somewhat less. Let's cut it by a factor of 10, to $3\cdot 10^{9}\ \frac{m^3}{s}$.
The volume of Earth's oceans is about 1.3 billion cubic kilometers. There are about 30 million seconds in a year, so that's about 90 million cubic kilometers per year. So it takes about 15 years for the portal to cycle through the equivalent of all the water.
What happens to that water?
The specific heat of water is about 4 kJ/(kg·K), so it takes about 4 kilojoules to heat a liter of water by 1 degree Celsius. That's about 4 MJ to heat a cubic meter 1 degree. The potential energy in dropping a cubic meter of water from 10 km up is about 100 MJ, so you're going to heat up your water about 25 degrees by slamming it at high speed into the sand (or, quite quickly, other water).
This means that, if the Earth's energy loss from radiation would stay the same, in about 15 years all the water would have cycled through the portal once and the ocean would, on average, have heated up 25 degrees. Another 45 years and all the water will be boiling.
Water has a latent heat of about 2.3 MJ/kg, or 2300 MJ/m³. So it then takes a few centuries for all the water to boil off and turn into vapor.
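A small sketch of the same heating timeline, using the rounded figures above (4 MJ/(m³·K) of heat capacity, 100 MJ/m³ gained per pass through the portal, a 1.3 billion km³ ocean, and the reduced flow of $3\cdot10^9\ m^3/s$); the ~4 °C starting temperature is an assumption on my part.

```python
ocean_volume = 1.3e9 * 1e9   # m^3 (1.3 billion cubic kilometers)
flow = 3e9                   # m^3/s, the reduced estimate from above
year = 3e7                   # s, rounded

cycle_years = ocean_volume / (flow * year)   # ~14 years for one full pass
heat_per_pass = 100 / 4                      # K: 100 MJ/m^3 over 4 MJ/(m^3*K)
passes_to_boil = (100 - 4) / heat_per_pass   # from ~4 deg C to boiling
passes_to_vaporize = 2300 / 100              # latent heat / energy gained per pass

print(f"one full pass of the ocean: {cycle_years:.0f} years")
print(f"heating per pass          : {heat_per_pass:.0f} K")
print(f"time until boiling        : {passes_to_boil * cycle_years:.0f} years")
print(f"extra time to vaporize    : {passes_to_vaporize * cycle_years:.0f} years")
```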
What will happen to all that energy?
Normally, the Earth radiates away energy as long-wave, or infrared, radiation. Greenhouse gases like carbon dioxide absorb this radiation and radiate part of it back at us.
There is a lot of carbon dioxide stored in the oceans, around 60 times that of the pre-industrial atmosphere. However, warmer water can't store as much carbon dioxide: water at about 30 degrees Celsius can store only about a third of what water at 4 degrees can. (In reality the carbon chemistry is much more complicated than simply dividing by three.)
So, if all the oceans heat up 25 degrees, the carbon dioxide in the atmosphere is going to increase by a factor of 10 or so. For comparison, we've managed to increase it by about 50% over the last hundred years.
To add to this, warmer water evaporates more readily, and water vapor is also a very strong greenhouse gas.
This means that the Earth wouldn't cool nearly as fast as it normally would.
And, at some point, maybe after a few years, maybe after several decades, even if you were to turn off the portal, the increased greenhouse gases and incoming energy from the Sun will cause a runaway greenhouse effect and turn Earth into Venus.
Who dies first?
The first creature to die is probably a fish blasted into an unlucky scorpion at high speed. After that anything going through the portal will die. Next to go is anything within a few hundred kilometers of the Mariana Trench and anything in Northern Africa.
Europe and the rest of Africa will soon (within hours? days? weeks?) follow. The shortest path from the Saharan portal back to the Mariana Trench is through the Himalayas, so my guess is the water will mainly flow through Africa towards the Southern Atlantic, and through Southern Europe, the Eastern part of Africa and the Arabian Peninsula into the Indian Ocean. Everything in its way will die.
So the Americas, most of Asia and Australia will likely not flood. Australia is closest to the Mariana Trench, so the weather there will turn weird after a day. It will probably take a few days for the effects of the sudden change of energy distribution to be transported by the jet stream towards the Americas and the remaining parts of Asia, so they've got maybe another week before the freak weather begins (think hurricanes, extreme rainfall, etc.). Some animals and some humans in those parts of the world could possibly survive this for a few years.
Some speculation on what would happen if the water didn't heat up:
The remaining flow is still a good 20 times the amount of water the Gulf Stream transports at its peak, so I would wager all ocean currents stop doing what they're doing and start flowing towards the South-West Pacific.
This means no more warm water flowing to the North Atlantic so that's a new ice age for Europe. Similar goes for Japan.
Most of North Africa is going to get flooded, I have no clue what that will do with the climate. This water is going to be under 10 degrees Celsius, so it'll cool down equatorial areas by a lot.
The giant maelstrom around the Mariana Trench is going to cause a lot of mixing in the Pacific Ocean, so most of that ocean is going to be a lot colder than it was before.
All of this cold water near the surface everywhere will mean that the Earth will radiate away much less energy and that there will be much less energy to drive atmospheric processes. The water in the ocean interior, however, will heat up on average.
|
For the following exercises, draw a graph of the functions without using a calculator. Use the 9-step process for graphing from Class Notes and from the section 4.5 text.
The answers here are just the graph (step 9). Your solutions should have all steps with the information (intervals of incr/decr, local max/min, etc) as you see in the section 4.5 text examples.
294) \(y=3x^2+2x+4\)
295) \(y=x^3−3x^2+4\)
Answer: Note: should have a hole at the point (-3,2)
296) \(y=\frac{2x+1}{x^2+6x+5}\)
297) \(y=\frac{x^3+4x^2+3x}{3x+9}\)
Answer:
298) \(y=\frac{x^2+x−2}{x^2−3x−4}\)
299) \(y=\sqrt{x^2−5x+4}\)
Answer:
300) \(y=2x\sqrt{16−x^2}\)
301) \(y=\frac{\cos x}{x}\), on \(x=[−2π,2π]\)
Answer:
302) \(y=e^x−x^3\)
303) \(y=x \tan x,x=[−π,π]\)
Answer:
304) \(y=x\ln(x),x>0\)
305) \(y=x^2\sin(x),x=[−2π,2π]\)
Answer:
306) For \(f(x)=\frac{P(x)}{Q(x)}\) to have an asymptote at \(y=2\) then the polynomials \(P(x)\) and \(Q(x)\) must have what relation?
307) For \(f(x)=\frac{P(x)}{Q(x)}\) to have an asymptote at \(x=0\), the polynomials \(P(x)\) and \(Q(x)\) must have what relation?
Answer: \(Q(x)\) must have \(x^{k+1}\) as a factor, where \(P(x)\) has \(x^k\) as a factor.
308) If \(f′(x)\) has asymptotes at \(y=3\) and \(x=1\), then \(f(x)\) has what asymptotes?
309) Both \(f(x)=\frac{1}{(x−1)}\) and \(g(x)=\frac{1}{(x−1)^2}\) have asymptotes at \(x=1\) and \(y=0.\) What is the most obvious difference between these two functions?
Answer: \(\displaystyle \lim_{x\to 1^-}f(x)=-\infty\), whereas \(\displaystyle \lim_{x\to 1^-}g(x)=+\infty\).
310) True or false: Every ratio of polynomials has vertical asymptotes.
For the following exercises, draw a graph of the functions without using a calculator. Use the 9-step process for graphing from Class Notes and from the section 4.4 text. Your solutions should have all steps with the information (intervals of incr/decr, local max/min, etc) as you see in the section 4.4 text examples.
J4.4.1) \(y=\frac{x^2+2}{x^2-4}\)
J4.4.2) \(f(x)=x-3x^{\frac{1}{3}}\)
J4.4.3) \(f(x)=x\ln x\)
Answer: Domain (0, ∞); Intercept (1,0); Symmetry Not odd, Not even; VA none, HA none, as x → ∞ , f → ∞; increasing on \((\frac{1}{e}, ∞)\); decreasing on \((0, \frac{1}{e})\); min \((\frac{1}{e}, -\frac{1}{e})\); no max; concave up (0, ∞); never concave down; no inflection point
J4.4.4) \(f(x)=x^4-6x^2\)
J4.4.5) \(f(x)=\frac{x^2}{x-2}\)
Answer: Domain \(x≠2\); Intercept (0,0); Symmetry Not odd, Not even; VA \(x=2\), HA none, as \(x → ∞\), \(f → ∞\); as \(x →- ∞\), \(f → -∞\); increasing on \((-∞,0)\) and \((4, ∞)\), decreasing on \((0, 2)\) and \((2,4)\); min \((4,8)\); max \((0,0)\); concave up \((2, ∞)\), concave down \((-∞, 2)\); no inflection points (the second derivative \(\frac{8}{(x-2)^3}\) is never zero)
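As a quick check of the intervals and concavity listed above, here is a short verification sketch with sympy (not part of the assigned 9-step work):

```python
import sympy as sp

x = sp.symbols('x')
f = x**2 / (x - 2)

f1 = sp.simplify(sp.diff(f, x))     # x*(x - 4)/(x - 2)**2
f2 = sp.simplify(sp.diff(f, x, 2))  # 8/(x - 2)**3

print(sp.solve(sp.Eq(f1, 0), x))    # [0, 4] -> critical points
print(sp.solve(sp.Eq(f2, 0), x))    # []     -> no inflection points
```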
J4.4.6) \(f(x)=\frac{x^2-2}{x^4}\)
J4.4.7) \(f(x)=4x^{\frac{1}{3}}+x^{\frac{4}{3}}\)
Answer: Domain (-∞, ∞); Intercepts (-4,0) (0,0); Symmetry Not odd, Not even; VA none, HA none, as \( x→±∞ \), \(f → ∞\); increasing on \((-1,∞)\); decreasing on \((-∞,-1)\); min \((-1,-3)\); max none; concave up \((-∞,0)\) \((2, ∞)\); concave down (0, 2); inflection points \((2,6\sqrt[3]{2})\)
J4.4.8) \(f(x)=\frac{1}{(1+e^x)^2}\)
J4.4.9) \(f(x)=\frac{x+3}{\sqrt{x^2+1}}\)
Answer: Domain (-∞, ∞); Intercepts (-3,0), (0,3); Symmetry Not odd, Not even; VA none, HA \(y=-1\) (as \(x→-∞ \)) , HA \(y=1\) (as \(x→∞\)); increasing on \((-∞,\frac{1}{3})\); decreasing on \( (\frac{1}{3},∞)\); max \((\frac{1}{3},\sqrt{10})\); min none; concave up \( (-∞,-\frac{1}{2})\), \((1,∞)\);concave down \( (-\frac{1}{2},1)\); inflection points \( (-\frac{1}{2},\sqrt{5})\), \((1,2\sqrt{2})\)
|
This is the spin-off of the question I previously asked.
First, let me remind you some notation from that question:
$G_0$ - compact, simply connected Lie group giving rise (by complexification) to a semi-simple Lie Group $G$
$\mathfrak{g}$ - Lie algebra of $G$
$\delta=\frac{1}{2}\sum_{\alpha>0}\alpha$ (summation over all positive roots of $\mathfrak{g}$)
$V_{\lambda}$ - finite dimensional complex vector space on which $\mathfrak{g}$ is irreducibly represented (with the highest weight $\lambda$ and highest weight vector $v_\lambda$).
$\mathbb{P}V_\lambda$ - complex projective space of $V_\lambda$
$\pi: V_\lambda\rightarrow \mathbb{P}V_\lambda$ - mapping onto the projective space
$O_{v_{\lambda}}^0 $ - orbit of $G_0$ through $v_\lambda$
$\Omega:V_\lambda\otimes V_\lambda \rightarrow V_\lambda \otimes V_\lambda$ - second order Casimir represented on $V_\lambda\otimes V_\lambda$
Now, the true question starts:
In the article by Lichtenstein (check here) it is proven that the necessary and sufficient condition for $\pi(v)$ to be in $\pi(O_{v_{\lambda}}^0)$ is the following:
$\Omega (v\otimes v) =\langle 2\lambda+2\delta,2\lambda\rangle (v\otimes v)$
In the above $\langle\cdot ,\cdot \rangle $ is a standard inner product on Cartan algebra dual $\mathfrak{h}^*$ .
I came up with the way of reasoning that might simplify arguments for this statement - they rely on a rather heavy (from my perspective) machinery. $\Omega$ decomposes into a direct sum of multiples of identity acting on irreducible representations $V_\alpha$ of $\mathfrak{g}$ on which $V_\lambda\otimes V_\lambda$ decomposes:
$\Omega=\sum_{\alpha\in\mathcal{A}}\beta(\alpha)Id_{V_\alpha}$
$\beta(\alpha)$ is some function of the weight $\alpha$ (see above), and $\mathcal{A}$ denotes some collection of weights; there may be repetitions (in that case the corresponding spaces $V_\alpha$ and $V'_\alpha$ are different).
Now, it is easy to see that in the above sum there is only one $\alpha_0=2\lambda$ (it is the weight of the irreducible representation generated on $V_\lambda\otimes V_\lambda$ by $v_\lambda\otimes v_\lambda$ ). As $\Omega$ commutes with all unitary operators in the representation of $G_0$ (see notation above) in $V_\lambda \otimes V_\lambda$, clearly: $\Omega(v\otimes v)=\beta(2 \lambda) v\otimes v$ for $v\in O_{v_\lambda}^0$.
In order to have the complete characterization of $\pi(O_{v_\lambda}^0)$ I only have to know that $v \otimes v \in V_{2 \lambda}$ implies that $v\otimes v$ is proportional to some $\Phi(g) v_\lambda \otimes \Phi(g) v_\lambda$, where $\Phi$ - representation of $G_0$ in $V_\lambda$, $g\in G_0$.
One possibility would be for $G_0$ to act transitively on $\pi(V_{2\lambda})$, where $V_{2 \lambda}$ is treated as a subspace of $V_\lambda\otimes V_\lambda$. Yet this condition is not fulfilled for large dimensions of $V_\lambda$. Let me state the final question.
When does $v \otimes v \in V_{2 \lambda}$ (with $V_{2 \lambda}$ treated as a subspace of $V_{\lambda}\otimes V_\lambda$) imply that $v\otimes v$ is proportional to some $\Phi(g) v_\lambda \otimes \Phi(g) v_\lambda$?
|
Skills to Develop
Identify rational numbers and irrational numbers
Classify different types of real numbers
be prepared!
Before you get started, take this readiness quiz.
Identify Rational Numbers and Irrational Numbers
Congratulations! You have completed the first six chapters of this book! It's time to take stock of what you have done so far in this course and think about what is ahead. You have learned how to add, subtract, multiply, and divide whole numbers, fractions, integers, and decimals. You have become familiar with the language and symbols of algebra, and have simplified and evaluated algebraic expressions. You have solved many different types of applications. You have established a good solid foundation that you need so you can be successful in algebra.
In this chapter, we'll make sure your skills are firmly set. We'll take another look at the kinds of numbers we have worked with in all previous chapters. We'll work with properties of numbers that will help you improve your number sense. And we'll practice using them in ways that we'll use when we solve equations and complete other procedures in algebra.
We have already described numbers as counting numbers, whole numbers, and integers. Do you remember what the difference is among these types of numbers?
counting numbers: 1, 2, 3, 4…
whole numbers: 0, 1, 2, 3, 4…
integers: …−3, −2, −1, 0, 1, 2, 3, 4…

Rational Numbers
What type of numbers would you get if you started with all the integers and then included all the fractions? The numbers you would have form the set of rational numbers. A
rational number is a number that can be written as a ratio of two integers.
Definition: Rational Numbers
A rational number is a number that can be written in the form \(\frac{p}{q}\), where p and q are integers and q ≠ 0.
All fractions, both positive and negative, are rational numbers. A few examples are
$$\frac{4}{5}, - \frac{7}{8}, \frac{13}{4},\; and\; - \frac{20}{3}$$
Each numerator and each denominator is an integer.
We need to look at all the numbers we have used so far and verify that they are rational. The definition of rational numbers tells us that all fractions are rational. We will now look at the counting numbers, whole numbers, integers, and decimals to make sure they are rational.
Are integers rational numbers? To decide if an integer is a rational number, we try to write it as a ratio of two integers. An easy way to do this is to write it as a fraction with denominator one.
$$3 = \frac{3}{1} \quad -8 = \frac{-8}{1} \quad 0 = \frac{0}{1}$$
Since any integer can be written as the ratio of two integers, all integers are rational numbers. Remember that all the counting numbers and all the whole numbers are also integers, and so they, too, are rational.
What about decimals? Are they rational? Let's look at a few to see if we can write each of them as the ratio of two integers. We've already seen that integers are rational numbers. The integer −8 could be written as the decimal −8.0. So, clearly, some decimals are rational.
Think about the decimal 7.3. Can we write it as a ratio of two integers? Because 7.3 means \(7 \frac{3}{10}\), we can write it as the improper fraction \(\frac{73}{10}\). So 7.3 is the ratio of the integers 73 and 10. It is a rational number.
In general, any decimal that ends after a number of digits (such as 7.3 or −1.2684) is a rational number. We can use the place value of the last digit as the denominator when writing the decimal as a fraction.
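For instance, Python's `fractions` module carries out exactly this place-value conversion; a small illustrative sketch (not part of the text):

```python
from fractions import Fraction

# A terminating decimal is a ratio of integers: the place value of the
# last digit serves as the denominator (the fraction is then reduced).
print(Fraction('7.3'))      # 73/10
print(Fraction('6.81'))     # 681/100
print(Fraction('-1.2684'))  # -3171/2500, i.e. -12684/10000 reduced
```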
Example 7.1:
Write each as the ratio of two integers: (a) −15 (b) 6.81 (c) \(−3 \frac{6}{7}\).
Solution
(a) −15
Write the integer as a fraction with denominator 1. $$\frac{-15}{1}$$
(b) 6.81
Write the decimal as a mixed number. $$6 \frac{81}{100}$$ Then convert it to an improper fraction. $$\frac{681}{100}$$
(c) \(−3 \frac{6}{7}\)
Convert the mixed number to an improper fraction. $$- \frac{27}{7}$$
Exercise 7.1:
Write each as the ratio of two integers: (a) −24 (b) 3.57.
Exercise 7.2:
Write each as the ratio of two integers: (a) −19 (b) 8.41.
Let's look at the decimal form of the numbers we know are rational. We have seen that every integer is a rational number, since a = \(\frac{a}{1}\) for any integer, a. We can also change any integer to a decimal by adding a decimal point and a zero.
$$\begin{split} Integer \qquad &-2,\quad -1,\quad 0,\quad 1,\; \; 2,\; 3 \\ Decimal \qquad &-2.0, -1.0, 0.0, 1.0, 2.0, 3.0 \end{split}$$
These decimal numbers stop.
We have also seen that every fraction is a rational number. Look at the decimal form of the fractions we just considered.
$$\begin{split} Ratio\; of\; Integers \qquad &\frac{4}{5},\quad -\frac{7}{8},\quad \frac{13}{4},\quad - \frac{20}{3} \\ Decimal\; forms \qquad &0.8,\quad -0.875,\quad 3.25,\quad -6.666\ldots = -6.\overline{6} \end{split}$$
These decimals either stop or repeat.
What do these examples tell you? Every rational number can be written both as a ratio of integers and as a decimal that either stops or repeats. The table below shows the numbers we looked at expressed as a ratio of integers and as a decimal.
Rational Numbers

Number: Fractions \(\frac{4}{5}, - \frac{7}{8}, \frac{13}{4}, \frac{-20}{3}\); Integers \(-2, -1, 0, 1, 2, 3\)
Ratio of integers: Fractions \(\frac{4}{5}, \frac{-7}{8}, \frac{13}{4}, \frac{-20}{3}\); Integers \(\frac{-2}{1}, \frac{-1}{1}, \frac{0}{1}, \frac{1}{1}, \frac{2}{1}, \frac{3}{1}\)
Decimal number: Fractions \(0.8, -0.875, 3.25, -6.\overline{6}\); Integers \(-2.0, -1.0, 0.0, 1.0, 2.0, 3.0\)

Irrational Numbers
Are there any decimals that do not stop or repeat? Yes. The number \(\pi\) (the Greek letter pi, pronounced ‘pie’), which is very important in describing circles, has a decimal form that does not stop or repeat.
$$\pi = 3.141592654 \ldots \ldots$$
Similarly, the decimal representations of square roots of numbers that are not perfect squares never stop and never repeat. For example,
$$\sqrt{5} = 2.236067978 \ldots \ldots$$
A decimal that does not stop and does not repeat cannot be written as the ratio of integers. We call this kind of number an
irrational number.
Definition: Irrational Number
An irrational number is a number that cannot be written as the ratio of two integers. Its decimal form does not stop and does not repeat.
Let's summarize a method we can use to determine whether a number is rational or irrational.
If the decimal form of a number
stops or repeats, the number is rational. does not stop and does not repeat, the number is irrational.
Example 7.2:
Identify each of the following as rational or irrational: (a) 0.58\(\overline{3}\) (b) 0.475 (c) 3.605551275…
Solution
(a) 0.58\(\overline{3}\)
The bar above the 3 indicates that it repeats. So \(0.58\overline{3}\) is a repeating decimal, and therefore a rational number.
(b) 0.475
This decimal stops after the 5, so it is a rational number.
(c) 3.605551275…
The ellipsis (…) means that this number does not stop. There is no repeating pattern of digits. Since the number doesn't stop and doesn't repeat, it is irrational.
Exercise 7.3:
Identify each of the following as rational or irrational: (a) 0.29 (b) 0.81\(\overline{6}\) (c) 2.515115111…
Exercise 7.4:
Identify each of the following as rational or irrational: (a) 0.2\(\overline{3}\) (b) 0.125 (c) 0.418302…
Let's think about square roots now. Square roots of perfect squares are always whole numbers, so they are rational. But the decimal forms of square roots of numbers that are not perfect squares never stop and never repeat, so these square roots are irrational.
Example 7.3:
Identify each of the following as rational or irrational: (a) \(\sqrt{36}\) (b) \(\sqrt{44}\)
Solution
(a) The number 36 is a perfect square, since \(6^2 = 36\). So \(\sqrt{36} = 6\). Therefore \(\sqrt{36}\) is rational.
(b) Remember that \(6^2 = 36\) and \(7^2 = 49\), so 44 is not a perfect square. This means \(\sqrt{44}\) is irrational.
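If you want to check a square root quickly, testing for a perfect square is enough; here is a short sketch of that idea:

```python
import math

def sqrt_is_rational(n: int) -> bool:
    """For a whole number n, sqrt(n) is rational exactly when n is a perfect square."""
    r = math.isqrt(n)
    return r * r == n

for n in (36, 44, 81, 17, 116, 121):
    print(n, "rational" if sqrt_is_rational(n) else "irrational")
```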
Exercise 7.5:
Identify each of the following as rational or irrational: (a) \(\sqrt{81}\) (b) \(\sqrt{17}\)
Exercise 7.6:
Identify each of the following as rational or irrational: (a) \(\sqrt{116}\) (b) \(\sqrt{121}\)
Classify Real Numbers
We have seen that all counting numbers are whole numbers, all whole numbers are integers, and all integers are rational numbers. Irrational numbers are a separate category of their own. When we put together the rational numbers and the irrational numbers, we get the set of
real numbers. Figure 7.2 illustrates how the number sets are related.
Figure 7.2 - This diagram illustrates the relationships between the different types of real numbers.
Definition: Real Numbers
Real numbers are numbers that are either rational or irrational.
Does the term “real numbers” seem strange to you? Are there any numbers that are not “real”, and, if so, what could they be? For centuries, the only numbers people knew about were what we now call the real numbers. Then mathematicians discovered the set of
imaginary numbers. You won't encounter imaginary numbers in this course, but you will later on in your studies of algebra.
Example 7.4:
Determine whether each of the numbers in the following list is a (a) whole number, (b) integer, (c) rational number, (d) irrational number, and (e) real number.
$$−7, \frac{14}{5}, 8, \sqrt{5}, 5.9, − \sqrt{64}$$
Solution
The whole numbers are 0, 1, 2, 3,… The number 8 is the only whole number given. The integers are the whole numbers, their opposites, and 0. From the given numbers, −7 and 8 are integers. Also, notice that 64 is the square of 8, so \(− \sqrt{64}\) = −8. So the integers are −7, 8, \(− \sqrt{64}\). Since all integers are rational, the numbers −7, 8, and \(− \sqrt{64}\) are also rational. Rational numbers also include fractions and decimals that terminate or repeat, so \(\frac{14}{5}\) and 5.9 are rational. The number 5 is not a perfect square, so \(\sqrt{5}\) is irrational. All of the numbers listed are real.
We'll summarize the results in a table.
Number (classified as Whole, Integer, Rational, Irrational, Real):
−7: Integer, Rational, Real
\(\frac{14}{5}\): Rational, Real
8: Whole, Integer, Rational, Real
\(\sqrt{5}\): Irrational, Real
5.9: Rational, Real
\(- \sqrt{64}\): Integer, Rational, Real
Exercise 7.7:
Determine whether each number is a (a) whole number,(b) integer,(c) rational number,(d) irrational number, and (e) real number: −3, \(− \sqrt{2}, 0.\overline{3}, \frac{9}{5}\), 4, \(\sqrt{49}\).
Exercise 7.8:
Determine whether each number is a (a) whole number,(b) integer,(c) rational number,(d) irrational number, and (e) real number: \(− \sqrt{25}, − \frac{3}{8}\), −1, 6, \(\sqrt{121}\), 2.041975…
Practice Makes Perfect

Rational Numbers
In the following exercises, write as the ratio of two integers.
(a) 5 (b) 3.19
(a) 8 (b) −1.61
(a) −12 (b) 9.279
(a) −16 (b) 4.399
In the following exercises, determine which of the given numbers are rational and which are irrational.
0.75, 0.22\(\overline{3}\), 1.39174…
0.36, 0.94729…, 2.52\(\overline{8}\)
0.\(\overline{45}\), 1.919293…, 3.59
0.1\(\overline{3}\), 0.42982…, 1.875
In the following exercises, identify whether each number is rational or irrational.
(a) \(\sqrt{25}\) (b) \(\sqrt{30}\)
(a) \(\sqrt{44}\) (b) \(\sqrt{49}\)
(a) \(\sqrt{164}\) (b) \(\sqrt{169}\)
(a) \(\sqrt{225}\) (b) \(\sqrt{216}\)

Classifying Real Numbers
In the following exercises, determine whether each number is whole, integer, rational, irrational, and real.
−8, 0, 1.95286…, \(\frac{12}{5}\), \(\sqrt{36}\), 9
−9, \(−3 \frac{4}{9}\), \(− \sqrt{9}\), \(0.4\overline{09}\), \(\frac{11}{6}\), 7
\(− \sqrt{100}\), −7, \(− \frac{8}{3}\), −1, 0.77, \(3 \frac{1}{4}\)

Everyday Math

Field trip: All the 5th graders at Lincoln Elementary School will go on a field trip to the science museum. Counting all the children, teachers, and chaperones, there will be 147 people. Each bus holds 44 people. How many buses will be needed? Why must the answer be a whole number? Why shouldn't you round the answer the usual way?

Child care: Serena wants to open a licensed child care center. Her state requires that there be no more than 12 children for each teacher. She would like her child care center to serve 40 children. How many teachers will be needed? Why must the answer be a whole number? Why shouldn't you round the answer the usual way?

Writing Exercises

In your own words, explain the difference between a rational number and an irrational number.
Explain how the sets of numbers (counting, whole, integer, rational, irrationals, reals) are related to each other.

Self Check
(a) After completing the exercises, use this checklist to evaluate your mastery of the objectives of this section.
(b) If most of your checks were:
…confidently. Congratulations! You have achieved the objectives in this section. Reflect on the study skills you used so that you can continue to use them. What did you do to become confident of your ability to do these things? Be specific.
…with some help. This must be addressed quickly because topics you do not master become potholes in your road to success. In math, every topic builds upon previous work. It is important to make sure you have a strong foundation before you move on. Who can you ask for help? Your fellow classmates and instructor are good resources. Is there a place on campus where math tutors are available? Can your study skills be improved?
…no—I don’t get it! This is a warning sign and you must not ignore it. You should get help right away or you will quickly be overwhelmed. See your instructor as soon as you can to discuss your situation. Together you can come up with a plan to get you the help you need.
Contributors
Lynn Marecek (Santa Ana College) and MaryAnne Anthony-Smith (Formerly of Santa Ana College). This content is licensed under Creative Commons Attribution License v4.0 "Download for free at http://cnx.org/contents/fd53eae1-fa2...49835c3c@5.191."
|
[pstricks] [Fwd: Re: binom_distribution]
John Campbell jcdotcalm at shaw.ca
Mon Apr 17 17:45:54 CEST 2006

Thank you very much. I did not realize that there was a new version of
pst-func.tex. I will update; thanks again.
J. Campbell
----- Original Message -----
From: "Herbert Voss" <LaTeX at zedat.fu-berlin.de>
To: "Graphics with PSTricks" <pstricks at tug.org>
Sent: Monday, April 17, 2006 4:57 AM
Subject: Re: [pstricks] [Fwd: Re: binom_distribution]
> Poul Riis wrote:
>> To my knowledge a binomial distribution is defined for integral values
>> only.
>
> you mean integer values.
> that's correct, but the lines are connected to show that
> the binomial distribution goes into the normal one for n->\infty
>
>> I don't fully understand why the following seems to work...
>> - And furthermore, I don't understand why it doesn't work for all values
>> of n and p!?
>
> the starting value (k=0) is (1-p)^n, which is a problem for n>125,
> e.g. 0.5^125\approx 2.35e-38, which is nearly the smallest value
> PostScript can handle.
>
> The latest pst-func.tex (from http://perce.de/LaTeX/pst-func/)
> has two macros \psBinomial and \psBinomialN for the normalized
> distribution. Attached an example image of this code:
>
> \documentclass[12pt]{article}
> \usepackage{pst-func}
> \pagestyle{empty}
>
> \begin{document}
>
> \psset{xunit=1cm,yunit=10cm}%
> \begin{pspicture}(-1,0)(7,0.55)%
> \psaxes[Dy=0.2,dy=0.2\psyunit]{->}(0,0)(-1,0)(7,0.5)
> \uput[-90](7,0){$k$} \uput[90](0,0.5){$P(X=k)$}
> \psBinomial[linecolor=red,markZeros,printValue,fillstyle=vlines]{6}{0.4}
> \end{pspicture}
>
> \vspace{1cm}
> \begin{pspicture}(-3,0)(4,0.55)%
> \psaxes[Dy=0.2,dy=0.2\psyunit]{->}(0,0)(-3,0)(4,0.5)
> \uput[-90](4,0){$z$} \uput[90](0,0.5){$P(Z=z)$}
> \psBinomialN[linecolor=red,markZeros,fillstyle=vlines]{6}{0.4}
> \end{pspicture}
>
> \end{document}
>
>
>
> Herbert
>
--------------------------------------------------------------------------------
> _______________________________________________
> pstricks mailing list
> pstricks at tug.org
> http://tug.org/mailman/listinfo/pstricks
>
|
The groups of monotonically increasing and monotonically decreasing functions have some special properties. A
monotonically increasing function is one that increases as \(x\) does for all real \(x\). A monotonically decreasing function, on the other hand, is one that decreases as \(x\) increases for all real \(x\). In particular, these concepts are helpful when studying exponential and logarithmic functions.
The graphs of exponential and logarithmic functions will be crucial here. From them we can see a general rule:
If \(a > 1\), then both of these functions are monotonically increasing:
$$f(x) = a^x$$ $$g(x) = \log_{a}(x)$$
By looking at a sufficient number of graphs, we can understand this. Over an interval on which a function is monotonically increasing (or decreasing), an output for the function will not occur more than once.
Example 1: Consider these two graphs. The red one is \(f(x) = 3^x\) while the green one is \(g(x) = 3^{x + 1}\):
Notice that as \(x\) is increasing, \(f(x)\) is increasing. Now notice this algebraic manipulation:
$$g(x) = 3^{x + 1} = 3 \cdot 3^x = 3f(x)$$
Therefore, \(g(x)\) is also monotonically increasing. In fact, you can multiply \(f(x)\) by any positive constant and the new function will still increase as \(x\) increases; only the rate of growth changes.
Now we consider a logarithmic function.
Example 2: We will use an example involving the common logarithm. Let \(f(x) = \log_{10}(x)\). A graph is shown below:
The graph is curved upward where the function is defined (positive real numbers), in a way that it is always increasing. Any logarithm with a positive base will have a similar pattern.
Example 3: Explain why \(x = 0\) is the only solution to \(3^x = 1\). Solution: Over the real numbers, the function \(f(x) = 3^x\) is monotonically increasing, so there can be only one solution, because no value of \(y\) is achieved on the graph twice.
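Monotonicity is also what makes a simple bisection search reliable here: since \(3^x - 1\) only increases, it changes sign exactly once, so the root found below is the only one. (A small illustrative sketch; the interval \([-5, 5]\) is an arbitrary choice.)

```python
def bisect(f, lo, hi, tol=1e-12):
    """Find the unique root of a monotonically increasing function f on [lo, hi]."""
    assert f(lo) < 0 < f(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(bisect(lambda x: 3**x - 1, -5, 5))  # approximately 0.0
```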
Monotonically decreasing functions are basically the opposite of monotonically increasing functions. As a result:
If \(f(x)\) is a monotonically increasing function over some interval, then \(-f(x)\) is a monotonically decreasing function over that same interval, and vice-versa.
Here is an example of a monotonically decreasing function.
Example 4: Consider the graph of \(f(x) = -5^x\), shown below:
The function \(5^x\) is monotonically increasing, so \(f(x) = -5^x\) must be monotonically decreasing, because wherever \(5^x\) is increasing (everywhere), \(f(x)\) is decreasing.
There are some functions that are not monotonically increasing nor monotonically decreasing. There are an infinite number of these functions, and they belong to many different groups.
Main Group 1: Constant Functions
These are horizontal straight lines, so they are neither increasing nor decreasing.
Main Group 2: Absolute Value Functions
Functions wrapped in an absolute value sign are always nonnegative, and any non-constant function of this type whose inside changes sign attains a minimum of 0 there. Such a function therefore switches from decreasing to increasing as \(x\) passes that point, so it is not monotonic.
Main Group 3: Trigonometric Functions
Consider basic trigonometric functions such as \(\sin(x)\), which move up and down, and thus do not exclusively increase or decrease.
Main Group 4: Functions with Discontinuities
A function with a discontinuity need not be monotonic even when it is monotonic on each side of the discontinuity, especially when the discontinuity is caused by an undefined value. For example, \(f(x) = \frac{1}{x}\) is decreasing on each side of \(x = 0\), but it is not monotonically decreasing on its whole domain, since the values jump from negative to positive across \(x = 0\).
Monotonically increasing and decreasing functions can be identified from their graphs, but this is not a very rigorous method. Still, it is good for students who do not have any calculus background. There are more detailed and rigorous methods that involve calculus, based on the rate of change of the function as \(x\) changes.
|
Journal of Commutative Algebra
J. Commut. Algebra, Volume 9, Number 3 (2017), 387-412.

Lattice-ordered abelian groups finitely generated as semirings

Abstract
A lattice-ordered group (an $\ell $-group) $G(\oplus , \vee , \wedge )$ can naturally be viewed as a semiring $G(\vee ,\oplus )$. We give a full classification of (abelian) $\ell $-groups which are finitely generated as semirings by first showing that each such $\ell $-group has an order-unit so that we can use the results of Busaniche, Cabrer and Mundici~\cite {BCM}. Then, we carefully analyze their construction in our setting to obtain the classification in terms of certain $\ell $-groups associated to rooted trees (Theorem \ref {classify}).
This classification result has a number of interesting applications; for example, it implies a classification of finitely generated ideal-simple (commutative) semirings $S(+, \cdot )$ with idempotent addition and provides important information concerning the structure of general finitely generated ideal-simple (commutative) semirings, useful in obtaining further progress towards Conjecture~\ref {main-conj} discussed in \cite {BHJK, JKK}.
Article information
Source: J. Commut. Algebra, Volume 9, Number 3 (2017), 387-412.
Dates: First available in Project Euclid: 1 August 2017
Permanent link to this document: https://projecteuclid.org/euclid.jca/1501574428
Digital Object Identifier: doi:10.1216/JCA-2017-9-3-387
Mathematical Reviews number (MathSciNet): MR3685049
Zentralblatt MATH identifier: 1373.06022
Subjects: Primary: 06F20: Ordered abelian groups, Riesz groups, ordered linear spaces [See also 46A40]; 12K10: Semifields [See also 16Y60]. Secondary: 06D35: MV-algebras; 16Y60: Semirings [See also 12K10]; 52B20: Lattice polytopes (including relations with commutative algebra and algebraic geometry) [See also 06A11, 13F20, 13Hxx]
Citation
Kala, Vítězslav. Lattice-ordered abelian groups finitely generated as semirings. J. Commut. Algebra 9 (2017), no. 3, 387--412. doi:10.1216/JCA-2017-9-3-387. https://projecteuclid.org/euclid.jca/1501574428
|
This is the question i would like to discuss, properly stated.
Given a model $M$ for a collection of set theory axioms (ZFC, for example), list all basic modal formulas $\phi$ such that $M\Vdash \phi$ but $\nVdash \phi$ (that is, $\phi$ is valid on the basic modal frame $M$, but $\phi$ is not valid in the class of all basic modal frames).
I've been studying modal logic for a while now, albeit slowly. Recently, I've encountered the concept of frame definability, which is about relating modal formulas to the class of frames they are valid on. Now, I've never had any formal training in Set Theory, but thanks to the internet, I believe a large part of it consists of stating a collection of logical sentences such that any structure with signature $(W,\in)$ that models said collection behaves much like we'd expect sets to behave.
As it turns out, the structure signature $(W,\in)$ is precisely that of the
basic modal frames, with the membership relation providing the interpretation for the $\Diamond$ modality. I can't help but wonder whether there are ways to define sets using modal logic, or if not, how "close" we can get to them.
Alas, I'm not sure if these questions are appropriate on a StackExchange site (too vague). So for now I'd like to know something simpler. We know that there are many modal formulas valid on the class of all frames; that's the smallest normal modal logic. My question is then, what does the set of modal formulas valid on a model of, say, ZFC, or NF, look like? Just how much bigger than the smallest normal modal logic is that set?
As a follow-up, maybe I could ask (if it isn't asking too much) whether there are modal formulas that can distinguish between different models of the same set theory, that is, by being valid on one model but not on another.
EDIT: This question's previous wording was simpler; all I asked was whether there was a non-trivially valid modal formula or not (that is, one not belonging to the smallest normal modal logic). As it turns out, there's a pretty easy one, $\Diamond \top$, since every set must be a member of another set (and in fact the infinite set $\{ \Diamond \top, \Diamond \Diamond \top, \Diamond \Diamond \Diamond \top, \dots \}$ is valid).
I'm changing it to a harder statement (asking to define the entire logic generated by the set model), but perhaps even more interesting would be asking if there are non-trivially valid modal formulas for a "converse set theory" $M=(W,\in_c)$ model, in which $x\in_c y $ iff $y \in x$ (and, of course, your choice of axioms would need to be altered, in order to invert the operands in the $\in$ predicate). Dunno what to ask...
|
Let $A$ be a commutative ring, $B$ (resp. $C$) be a commutative $A$-algebra endowed with a valuation $v$ (resp. $w$), not necessarily of rank 1. Assume that $v$ and $w$ induce equivalent valuations on $A$. How to construct a valuation $u$ on $B\otimes_A C$ extending $v$ and $w$?
Without loss of generality, we may assume $A$, $B$, $C$ to be fields. If $B$ is an algebraic extension of $A$, the existence of $u$ follows from the fact that extensions of a valuation to a normal extension field are conjugate to each other [Bourbaki, AC VI 8 Prop. 7]. Thus the only case left to check is when both $B$ and $C$ are purely transcendental over $A$.
Huber lists the existence of $u$ as a "simple property" of valuations [Etale cohomology of Rigid Analytic Varieties and Adic Spaces, 1.1.14 f]. No proof is given there. Are there other references for this?
Added on Aug. 5: Let us denote the value groups of $A$, $B$, $C$ by $\Gamma_A$, $\Gamma_B$, $\Gamma_C$, respectively. The value group of $u$ is an extension of $\Gamma_B$ and $\Gamma_C$ over $\Gamma_A$. How to construct such an extension of linearly ordered Abelian groups? We could put the lexicographic order on $\Gamma_B\times \Gamma_C$, but then we cannot quotient out by the diagonal image of $\Gamma_A$ as the image is not convex.
|
Preprints (rote Reihe) of the Department of Mathematics
Year of publication: 1996 (2)
276
Let \(a_1,\dots,a_n\) be independent random points in \(\mathbb{R}^d\), spherically symmetrically but not necessarily identically distributed. Let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_n\) and for any \(k\)-dimensional subspace \(L\subseteq \mathbb{R}^d\) let \(Vol_L(X) :=\lambda_k(L\cap X)\) be the volume of \(X\cap L\) with respect to the \(k\)-dimensional Lebesgue measure \(\lambda_k\), \(k=1,\dots,d\). Furthermore, let \(F^{(i)}(t):= \mathbf{Pr}(\Vert a_i \Vert_2\leq t)\), \(t \in \mathbb{R}^+_0\), be the radial distribution function of \(a_i\). We prove that the expectation functional \(\Phi_L(F^{(1)}, F^{(2)},\dots, F^{(n)}) := E(Vol_L(X))\) is strictly decreasing in each argument, i.e. if \(F^{(i)}(t) \le G^{(i)}(t)\) for all \(t \in \mathbb{R}^+_0\), but \(F^{(i)} \not\equiv G^{(i)}\), we show \(\Phi(\dots, F^{(i)}, \dots) > \Phi(\dots,G^{(i)},\dots)\). The proof is done in the more general framework of continuous and \(f\)-additive polytope functionals.
282
Let \(a_1,\dots,a_m\) be random points in \(\mathbb{R}^n\) that are independent and identically distributed, spherically symmetrically in \(\mathbb{R}^n\). Moreover, let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_m\) and let \(L_k\) be an arbitrary \(k\)-dimensional subspace of \(\mathbb{R}^n\) with \(2\le k\le n-1\). Let \(X_k\) be the orthogonal projection image of \(X\) in \(L_k\). We call those vertices of \(X\) whose projection images in \(L_k\) are vertices of \(X_k\) as well the shadow vertices of \(X\) with respect to the subspace \(L_k\). We derive a distribution-independent sharp upper bound for the expected number of shadow vertices of \(X\) in \(L_k\).
|
Answer
Percent of Total is 10%. Degrees of a Circle is $36^\circ$.
Work Step by Step
$percent = \frac{part}{whole}\times100$
$percent = \frac{546}{5460}\times100=\frac{1}{10}\times100=10$
A complete circle is $360^\circ$. Find 10% of $360^\circ$.
$part = percent \times whole$
$0.10\times360=36$
|
Claim: No, there is no such $\mu$.
Proof: We give an infinite sequence of AVL trees of growing size whose weight-balance value tends to $0$, contradicting the claim.
Let $C_h$ be the complete tree of height $h$; it has $2^{h+1}-1$ nodes.
Let $S_h$ be the Fibonacci tree of height $h$; it has $F_{h+2} - 1$ nodes. [1,2,TAoCP 3]
Now let $(T_h)_{h\geq 1}$ with $T_h = N(S_h,C_h)$ (a root node whose two subtrees are $S_h$ and $C_h$) be the sequence of trees we claim to be a counter example.
Consider the weight-balancing value of the root of $T_h$ for some $h \in \mathbb{N}_+$:
$\qquad \displaystyle \begin{align*} \frac{F_{h+2}}{2^{h+1} + F_{h+2} - 1} &= \frac{1}{1 + \frac{2^{h+1}}{F_{h+2}} - \frac{1}{F_{h+2}}} \\&\sim \frac{F_{h+2}}{2^{h+1}} \\&= \frac{\frac{1}{\sqrt{5}}(\phi^{h+2} - \hat{\phi}^{h+2})}{2^{h+1}} \\&\sim \frac{\phi^{h+2}}{\sqrt{5}\cdot 2^{h+1}} \underset{h \to \infty}{\to} 0\end{align*}$
This concludes the proof.
Notation: $F_n$ is the $n$th Fibonacci number, $\phi \approx 1.6$ is the Golden Ratio, and $\hat{\phi} \approx -0.62$ is its conjugate. $f \sim g$ means that $f$ and $g$ are asymptotically equal, i.e. $\lim_{n \to \infty} \frac{f(n)}{g(n)} = 1$.
Nota bene: Fibonacci trees are exactly those AVL trees with the least nodes for a given height (or, equivalently, the maximum height for a given number of nodes).
Addendum: How can we come up with Fibonacci trees if we had not overheard a professor mentioning them? Well, what would an AVL tree of height $h$ with as few nodes as possible look like? Certainly, you need a root. One of the subtrees needs to have height $h-1$, and we have to choose it with as few nodes as possible. The other one can have height $h-2$ without violating the height balancing condition, and we choose it also with as few nodes as possible. In essence, we construct the trees we want recursively! These are the first four:
[source]
We set up a recurrence for the number of nodes $f(h)$ in the thusly constructed tree with height $h$:
$\qquad \displaystyle \begin{align*} f(1) &= 1 \\ f(2) &= 2 \\ f(h) &= f(h-1) + f(h-2) + 1 \qquad h \geq 3\end{align*}$
Solving it leads to $f(h) = F_{h+2} - 1$ which we used above.
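A short numerical sketch (using the node counts just derived and the root's balance fraction from the proof) shows the value tending to 0:

```python
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a  # F_n

for h in (5, 10, 20, 30, 40):
    left = fib(h + 2) - 1       # nodes of the Fibonacci tree S_h
    right = 2 ** (h + 1) - 1    # nodes of the complete tree C_h
    balance = (left + 1) / (left + right + 1)   # F_{h+2} / (F_{h+2} + 2^{h+1} - 1)
    print(h, round(balance, 6))
```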
The same proof is given (with less detail) in Binary search trees of bounded balance by Nievergelt and Reingold (1972).
|
We studied
differentials in Section 4.4, where Definition 18 states that if \(y=f(x)\) and \(f\) is differentiable, then \(dy=f'(x)dx\). One important use of this differential is in Integration by Substitution. Another important application is approximation. Let \(\Delta x = dx\) represent a change in \(x\). When \(dx\) is small, \(dy\approx \Delta y\), the change in \(y\) resulting from the change in \(x\). Fundamental in this understanding is this: as \(dx\) gets small, the difference between \(\Delta y\) and \(dy\) goes to 0. Another way of stating this: as \(dx\) goes to 0, the error in approximating \(\Delta y\) with \(dy\) goes to 0.
We extend this idea to functions of two variables. Let \(z=f(x,y)\), and let \(\Delta x = dx\) and \(\Delta y=dy\) represent changes in \(x\) and \(y\), respectively. Let \(\Delta z = f(x+dx,y+dy) - f(x,y)\) be the change in \(z\) over the change in \(x\) and \(y\). Recalling that \(f_x\) and \(f_y\) give the instantaneous rates of \(z\)-change in the \(x\)- and \(y\)-directions, respectively, we can approximate \(\Delta z\) with \(dz = f_xdx+f_ydy\); in words, the total change in \(z\) is approximately the change caused by changing \(x\) plus the change caused by changing \(y\). In a moment we give an indication of whether or not this approximation is any good. First we give a name to \(dz\).
Definition 86: Total Differential
Let \(z=f(x,y)\) be continuous on an open set \(S\). Let \(dx\) and \(dy\) represent changes in \(x\) and \(y\), respectively. Where the partial derivatives \(f_x\) and \(f_y\) exist, the
total differential of \(z\) is
\[dz = f_x(x,y)dx + f_y(x,y)dy.\]
Example \(\PageIndex{1}\): Finding the total differential
Let \(z = x^4e^{3y}\). Find \(dz\).
Solution
We compute the partial derivatives: \(f_x = 4x^3e^{3y}\) and \(f_y = 3x^4e^{3y}\). Following Definition 86, we have
\[dz = 4x^3e^{3y}dx+3x^4e^{3y}dy.\]
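As a quick check, a computer algebra system produces the same differential; a small sympy sketch (not part of the text):

```python
import sympy as sp

x, y, dx, dy = sp.symbols('x y dx dy')
z = x**4 * sp.exp(3*y)

dz = sp.diff(z, x) * dx + sp.diff(z, y) * dy
print(dz)  # 4*x**3*exp(3*y)*dx + 3*x**4*exp(3*y)*dy  (up to term ordering)
```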
We
can approximate \(\Delta z\) with \(dz\), but as with all approximations, there is error involved. A good approximation is one in which the error is small. At a given point \((x_0,y_0)\), let \(E_x\) and \(E_y\) be functions of \(dx\) and \(dy\) such that \(E_xdx+E_ydy\) describes this error. Then
\[\begin{align*}
\Delta z &= dz + E_xdx+ E_ydy \\ &= f_x(x_0,y_0)dx+f_y(x_0,y_0)dy + E_xdx+E_ydy. \end{align*}\]
If the approximation of \(\Delta z\) by \(dz\) is good, then as \(dx\) and \(dy\) get small, so does \(E_xdx+E_ydy\). The approximation of \(\Delta z\) by \(dz\) is even better if, as \(dx\) and \(dy\) go to 0, so do \(E_x\) and \(E_y\). This leads us to our definition of differentiability.
Definition 87: Multivariable Differentiability
Let \(z=f(x,y)\) be defined on an open set \(S\) containing \((x_0,y_0)\) where \(f_x(x_0,y_0)\) and \(f_y(x_0,y_0)\) exist. Let \(dz\) be the total differential of \(z\) at \((x_0,y_0)\), let \(\Delta z = f(x_0+dx,y_0+dy) - f(x_0,y_0)\), and let \(E_x\) and \(E_y\) be functions of \(dx\) and \(dy\) such that
\[\Delta z = dz + E_xdx + E_ydy.\] \(f\) is differentiable at \((x_0,y_0)\) if, given \(\epsilon >0\), there is a \(\delta >0\) such that if \(||\langle dx,dy\rangle|| < \delta\), then \(||\langle E_x,E_y\rangle|| < \epsilon\). That is, as \(dx\) and \(dy\) go to 0, so do \(E_x\) and \(E_y\). \(f\) is differentiable on \(S\) if \(f\) is differentiable at every point in \(S\). If \(f\) is differentiable on \(\mathbb{R}^2\), we say that \(f\) is differentiable everywhere.
Example \(\PageIndex{2}\): Showing a function is differentiable
Show \(f(x,y) = xy+3y^2\) is differentiable using Definition 87.
Solution
We begin by finding \(f(x+dx,y+dy)\), \(\Delta z\), \(f_x\) and \(f_y\).
\[\begin{align*}
f(x+dx,y+dy) &= (x+dx)(y+dy) + 3(y+dy)^2 \\ &= xy + xdy+ydx+dxdy + 3y^2+6ydy+3dy^2. \end{align*}\]
\(\Delta z = f(x+dx,y+dy) - f(x,y)\), so
\[\Delta z = xdy + ydx + dxdy + 6ydy+3dy^2.\]
It is straightforward to compute \(f_x = y\) and \(f_y = x+6y\). Consider once more \(\Delta z\):
\[\begin{align*}
\Delta z &= xdy + ydx + dxdy + 6ydy+3dy^2 \qquad \text{ (now reorder)}\\ &= ydx + xdy+6ydy+ dxdy + 3dy^2\\ &= \underbrace{(y)}_{f_x}dx + \underbrace{(x+6y)}_{f_y}dy + \underbrace{(dy)}_{E_x}dx+\underbrace{(3dy)}_{E_y}dy\\ &= f_xdx + f_ydy + E_xdx+E_ydy. \end{align*}\]
With \(E_x = dy\) and \(E_y = 3dy\), it is clear that as \(dx\) and \( dy\) go to 0, \(E_x\) and \(E_y\) also go to 0. Since this did not depend on a specific point \((x_0,y_0)\), we can say that \(f(x,y)\) is differentiable for all pairs \((x,y)\) in \(\mathbb{R}^2\), or, equivalently, that \(f\) is differentiable everywhere.
Our intuitive understanding of differentiability of functions \(y=f(x)\) of one variable was that the graph of \(f\) was "smooth.'' A similar intuitive understanding of functions \(z=f(x,y)\) of two variables is that the surface defined by \(f\) is also "smooth,'' not containing cusps, edges, breaks, etc. The following theorem states that differentiable functions are continuous, followed by another theorem that provides a more tangible way of determining whether a great number of functions are differentiable or not.
THEOREM 104: Continuity and Differentiability of Multivariable Functions
Let \(z=f(x,y)\) be defined on an open set \(S\) containing \((x_0,y_0)\).
If \(f\) is differentiable at \((x_0,y_0)\), then \(f\) is continuous at \((x_0,y_0)\).
THEOREM 105: Differentiability of Multivariable Functions
Let \(z=f(x,y)\) be defined on an open set \(S\) containing \((x_0,y_0)\).
If \(f_x\) and \(f_y\) are both continuous on \(S\), then \(f\) is differentiable on \(S\).
The theorems assure us that essentially all functions that we see in the course of our studies here are differentiable (and hence continuous) on their natural domains. There is a difference between Definition 87 and Theorem 105, though: it is possible for a function \(f\) to be differentiable yet \(f_x\) and/or \(f_y\) is
not continuous. Such strange behavior of functions is a source of delight for many mathematicians.
When \(f_x\) and \(f_y\) exist at a point but are not continuous at that point, we need to use other methods to determine whether or not \(f\) is differentiable at that point.
For instance, consider the function
\[f(x,y) = \left\{\begin{array}{cl} \frac{xy}{x^2+y^2} & (x,y)\neq (0,0) \\
0 & (x,y) = (0,0)\end{array}\right.\] We can find \(f_x(0,0)\) and \(f_y(0,0)\) using Definition 83: \[\begin{align*} f_x(0,0) &= \lim_{h\to 0} \frac{f(0+h,0) - f(0,0)}{h} \\ &= \lim_{h\to 0} \frac{0}{h} = 0;\\ f_y(0,0) &= \lim_{h\to 0} \frac{f(0,0+h) - f(0,0)}{h} \\ &= \lim_{h\to 0} \frac{0}{h} = 0. \end{align*}\]
Both \(f_x\) and \(f_y\)
exist at \((0,0)\), but they are not continuous at \((0,0)\), as
\[f_x(x,y) = \frac{y(y^2-x^2)}{(x^2+y^2)^2} \qquad \text{and}\qquad f_y(x,y) = \frac{x(x^2-y^2)}{(x^2+y^2)^2} \]
are not continuous at \((0,0)\). (Take the limit of \(f_x\) as \((x,y)\to(0,0)\) along the \(x\)- and \(y\)-axes; they give different results.) So even though \(f_x\) and \(f_y\)
exist at every point in the \(x\)-\(y\) plane, they are not continuous at \((0,0)\). Therefore the hypotheses of Theorem 105 are not met, and it remains possible for \(f\) to fail to be differentiable at \((0,0)\).
Indeed, it is not. One can show that \(f\) is not continuous at \((0,0)\) (see Example 12.2.4), and by Theorem 104, this means \(f\) is not differentiable at \((0,0)\).
Approximating with the Total Differential
By the definition, when \(f\) is differentiable \(dz\) is a good approximation for \(\Delta z\) when \(dx\) and \(dy\) are small. We give some simple examples of how this is used here.
Example \(\PageIndex{3}\): Approximating with the total differential
Let \(z = \sqrt{x}\sin y\). Approximate \(f(4.1,0.8)\).
Solution
Recognizing that \(\pi/4 \approx 0.785\approx 0.8\), we can approximate \(f(4.1,0.8)\) using \(f(4,\pi/4)\). We can easily compute \(f(4,\pi/4) = \sqrt{4}\sin(\pi/4) = 2\left(\frac{\sqrt{2}}2\right) = \sqrt{2}\approx 1.414.\) Without calculus, this is the best approximation we could reasonably come up with. The total differential gives us a way of adjusting this initial approximation to hopefully get a more accurate answer.
We let \(\Delta z = f(4.1,0.8) - f(4,\pi/4)\). The total differential \(dz\) is approximately equal to \(\Delta z\), so
\[f(4.1,0.8) - f(4,\pi/4) \approx dz \quad \Rightarrow \quad f(4.1,0.8) \approx dz + f(4,\pi/4).\label{eq:totaldiff2}\]
To find \(dz\), we need \(f_x\) and \(f_y\).
\[\begin{align*}
f_x(x,y) &= \frac{\sin y}{2\sqrt{x}} \quad\Rightarrow& f_x(4,\pi/4) &= \frac{\sin \pi/4}{2\sqrt{4}} \\ & &&= \frac{\sqrt{2}/2}{4} = \sqrt{2}/8.\\ f_y(x,y) &= \sqrt{x}\cos y \quad\Rightarrow& f_y(4,\pi/4) &= \sqrt{4}\frac{\sqrt{2}}2\\ & & &= \sqrt{2}. \end{align*}\]
Approximating \(4.1\) with 4 gives \(dx = 0.1\); approximating \(0.8\) with \(\pi/4\) gives \(dy \approx 0.015\). Thus
\[\begin{align*}
dz(4,\pi/4) &= f_x(4,\pi/4)(0.1) + f_y(4,\pi/4)(0.015)\\ &= \frac{\sqrt{2}}8(0.1) + \sqrt{2}(0.015)\\ &\approx 0.039. \end{align*}\]
Returning to Equation \ref{eq:totaldiff2}, we have
\[f(4.1,0.8) \approx 0.039 + 1.414 = 1.453.\]
We, of course, can compute the actual value of \(f(4.1,0.8)\) with a calculator; the actual value, accurate to 5 places after the decimal, is \(1.45254\). Obviously our approximation is quite good.
The point of the previous example was
not to develop an approximation method for known functions. After all, we can very easily compute \(f(4.1,0.8)\) using readily available technology. Rather, it serves to illustrate how well this method of approximation works, and to reinforce the following concept:
\[\text{"New position = old position \(+\) amount of change,'' so}\]
\[\text{ "New position \(\approx\) old position + approximate amount of change.''}\]
In the previous example, we could easily compute \(f(4,\pi/4)\) and could approximate the amount of \(z\)-change when computing \(f(4.1,0.8)\), letting us approximate the new \(z\)-value.
It may be surprising to learn that it is not uncommon to know the values of \(f\), \(f_x\) and \(f_y\) at a particular point without actually knowing the function \(f\). The total differential gives a good method of approximating \(f\) at nearby points.
Example \(\PageIndex{4}\): Approximating an unknown function
Given that \(f(2,-3) = 6\), \(f_x(2,-3) = 1.3\) and \(f_y(2,-3) = -0.6\), approximate \(f(2.1,-3.03)\).
Solution
The total differential approximates how much \(f\) changes from the point \((2,-3)\) to the point \((2.1,-3.03)\). With \(dx = 0.1\) and \(dy = -0.03\), we have
\[\begin{align*}
dz &= f_x(2,-3)dx + f_y(2,-3)dy\\ &= 1.3(0.1) + (-0.6)(-0.03) \\ &= 0.148. \end{align*}\]
The change in \(z\) is approximately \(0.148\), so we approximate \(f(2.1,-3.03)\approx 6.148.\)
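As a sketch of the same arithmetic in code (the helper name approx is made up for illustration), the total-differential approximation is just a tangent-plane evaluation:

def approx(f0, fx0, fy0, dx, dy):
    # f(x0+dx, y0+dy) ≈ f(x0,y0) + fx(x0,y0)*dx + fy(x0,y0)*dy
    return f0 + fx0 * dx + fy0 * dy

# values from the example: f(2,-3)=6, fx=1.3, fy=-0.6, dx=0.1, dy=-0.03
print(approx(6, 1.3, -0.6, 0.1, -0.03))   # 6.148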
Error/Sensitivity Analysis
The total differential gives an approximation of the change in \(z\) given small changes in \(x\) and \(y\). We can use this to approximate error propagation; that is, if the input is a little off from what it should be, how far from correct will the output be? We demonstrate this in an example.
Example \(\PageIndex{5}\): Sensitivity analysis
A cylindrical steel storage tank is to be built that is 10ft tall and 4ft across in diameter. It is known that the steel will expand/contract with temperature changes; is the overall volume of the tank more sensitive to changes in the diameter or in the height of the tank?
Solution
A cylindrical solid with height \(h\) and radius \(r\) has volume \(V = \pi r^2h\). We can view \(V\) as a function of two variables, \(r\) and \(h\). We can compute partial derivatives of \(V\):
\[\frac{\partial V}{\partial r} = V_r(r,h) = 2\pi rh \qquad \text{and}\qquad \frac{\partial V}{\partial h} = V_h(r,h) = \pi r^2.\]
The total differential is \(dV = (2\pi rh)dr + (\pi r^2)dh.\) When \(h = 10\) and \(r = 2\), we have \(dV = 40\pi dr + 4\pi dh\).
Note that the coefficient of \(dr\) is \(40\pi\approx 125.7\); the coefficient of \(dh\) is a tenth of that, approximately \(12.57\). A small change in radius will be multiplied by 125.7, whereas a small change in height will be multiplied by 12.57. Thus the volume of the tank is more sensitive to changes in radius than in height.
The previous example showed that the volume of a particular tank was more sensitive to changes in radius than in height. Keep in mind that this analysis only applies to a tank of those dimensions. A tank with a height of 1 ft and radius of 5 ft would be more sensitive to changes in height than in radius. One could make a chart of small changes in radius and height and find exact changes in volume given specific changes. While this provides exact numbers, it does not give as much insight as the error analysis using the total differential.
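For a rough check of both claims, a few lines of Python (the helper name sensitivity is made up) print the coefficients of \(dr\) and \(dh\) in the total differential for the two tanks:

from math import pi

def sensitivity(r, h):
    # coefficients of dr and dh in dV = (2*pi*r*h) dr + (pi*r**2) dh
    return 2 * pi * r * h, pi * r**2

print(sensitivity(2, 10))   # ~(125.7, 12.6): radius dominates
print(sensitivity(5, 1))    # ~(31.4, 78.5): height dominates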
Differentiability of Functions of Three Variables
The definition of differentiability for functions of three variables is very similar to that of functions of two variables. We again start with the total differential.
Definition 88: Total Differential
Let \(w=f(x,y,z)\) be continuous on an open set \(S\). Let \(dx\), \(dy\) and \(dz\) represent changes in \(x\), \(y\) and \(z\), respectively. Where the partial derivatives \(f_x\), \(f_y\) and \(f_z\) exist, the
total differential of \(w\) is
\[dw = f_x(x,y,z)dx + f_y(x,y,z)dy+f_z(x,y,z)dz.\]
This differential can be a good approximation of the change in \(w\) when \(w = f(x,y,z)\) is
differentiable.
Definition 89: Multivariable Differentiability
Let \(w=f(x,y,z)\) be defined on an open ball \(B\) containing \((x_0,y_0,z_0)\) where \(f_x(x_0,y_0,z_0)\), \(f_y(x_0,y_0,z_0)\) and \(f_z(x_0,y_0,z_0)\) exist. Let \(dw\) be the total differential of \(w\) at \((x_0,y_0,z_0)\), let \(\Delta w = f(x_0+dx,y_0+dy,z_0+dz) - f(x_0,y_0,z_0)\), and let \(E_x\), \(E_y\) and \(E_z\) be functions of \(dx\), \(dy\) and \(dz\) such that
\[\Delta w = dw + E_xdx + E_ydy + E_zdz\]
\(f\) is differentiable at \((x_0,y_0,z_0)\) if, given \(\epsilon >0\), there is a \(\delta >0\) such that if \(||\langle dx,dy,dz\rangle|| < \delta\), then \(||\langle E_x,E_y,E_z\rangle|| < \epsilon\). \(f\) is differentiable on \(B\) if \(f\) is differentiable at every point in \(B\). If \(f\) is differentiable on \(\mathbb{R}^3\), we say that \(f\) is differentiable everywhere.
Just as before, this definition gives a rigorous statement about what it means to be differentiable that is not very intuitive. We follow it with a theorem similar to Theorem 105.
THEOREM 106: Continuity and Differentiability of Functions of Three Variables
Let \(w=f(x,y,z)\) be defined on an open ball \(B\) containing \((x_0,y_0,z_0)\).
If \(f\) is differentiable at \((x_0,y_0,z_0)\), then \(f\) is continuous at \((x_0,y_0,z_0)\). If \(f_x\), \(f_y\) and \(f_z\) are continuous on \(B\), then \(f\) is differentiable on \(B\).
This set of definition and theorem extends to functions of any number of variables. The theorem again gives us a simple way of verifying that most functions that we encounter are differentiable on their natural domains.
This section has given us a formal definition of what it means for a function to be "differentiable,'' along with a theorem that gives a more accessible understanding. The following sections return to notions prompted by our study of partial derivatives that make use of the fact that most functions we encounter are differentiable.
|
Hey everyone, do you sense anything different? Well, the Project Euler counter breezed from 166 to 175 in these one or two days, and it feels like I can reach 200 or 225 without much difficulty. The main reason is that a new difficulty rating is out (it may not be new for you if you visit the site frequently though), and we can now get at the relatively easier later-numbered questions more easily.
In fact many of them rely on the same searching technique or a number theory trick... here let's look at one of the classic algorithms and its variations:
We start from a classical example: in how many ways can we partition a number $n$ into a sum of integers? (Don't cheat by looking at the OEIS!)
def f(n):
    count = 0
    queue = [(0, n+1)]  # 0: current sum = 0
                        # n+1: n is the highest integer to be added
    while queue:
        (x, y) = queue[0]
        queue = queue[1:]  # can be optimized using another heap structure
        for i in range(1, min(n-x+1, y)):
            if x+i == n:
                count += 1
            else:
                queue.append((x+i, i+1))
    return count
We add the parts from the biggest integer down to the smallest so that we won't count any combination twice. For example, let's take $n=4$:
Step 1: The first integer can range from 1 to 4 so the queue is now
$queue = [(3,4),(2,3),(1,2)]$
$count = 1$
$(0+4,5)$ already reaches the target sum, so we simply add one to the count and do not put it back into the queue.
Step 2: For $(3,4)$ the only possibility is to add 1, completing $3+1$, so we add one to the count. For $(2,3)$ we can add 2 (completing $2+2$, another count) or add 1 and re-queue $(3,2)$; likewise $(1,2)$ becomes $(2,2)$. Here we get
$count = 3$
$queue = [(3,2), (2,2)]$
Note that the only possibility to fill the last queuing instance $(2,2)$ is to fill the rest with ones (1+1+1+1), and we can only put one into the instance $(3,2)$, and therefore the total count is 5.
What about the number of partitions into distinct integers? (Yeah, the so-called strict partitions in OEIS.) We just need a bit of modification:
def f(n):
    count = 0
    queue = [(0, n+1)]  # 0: current sum = 0
                        # n+1: n is the highest integer to be added
    while queue:
        (x, y) = queue[0]
        queue = queue[1:]  # can be optimized using another heap structure
        for i in range(1, min(n-x+1, y)):
            if x+i == n:
                count += 1
            else:
                queue.append((x+i, i))
    return count
Each time we push an instance back into the queue we force the next integer added to be strictly smaller than the last one. The branch naturally vanishes when you don't have enough integers left to add up to $n$ (like 5+2 when you want the sum to be 10).
Define non-square numbers to be numbers that are not divisible by any square number (except 1). How many non-square numbers can we find under $n$?
Recall that we can always translate multiplication into addition using the log function. Non-square numbers are simply products of distinct primes, so we have the following:
from math import log

def pgen(n):  # prime generator
    xs = list(range(n+1))
    xs[1] = 0
    for i in range(2, int(n**0.5)+1):
        if xs[i] != 0:
            for j in range(i*i, n+1, i):
                xs[j] = 0
    plist = []
    for i in xs:
        if i != 0:
            plist = [log(i)] + plist
    return plist

def f(n):
    target = log(n)
    count = 1  # 1 is also a non-square number!
    ps = pgen(n)
    l = len(ps)
    queue = [(0, -1)]  # 0: current sum = 0
                       # -1: start from the beginning
    while queue:
        (x, y) = queue[0]
        queue = queue[1:]
        for i in range(y+1, l):
            if x + ps[i] <= target:
                count += 1
                if i < l-1:  # we can add another smaller prime
                    queue.append((x+ps[i], i))
    return count
This is a strict partition over certain real numbers (instead of integers), but once we translate the problem back into something we are familiar with, it can be solved in a split second.
Given all the above algorithms, we should be able to solve the following:
Given a set of real numbers, find the number of strict partitions that sum up to $n$ so that each pair among the numbers chosen has a difference $r$.
This is actually related to one of the post-400 problems in Project Euler but I'm not going to disclose which one it is :P
Speaking of the log function, it can also be used to locate terms in the Fibonacci sequence (or other recurrence sequences). Consider the closed form
$F_n = \frac{1}{\sqrt{5}}(\frac{1+\sqrt{5}}{2})^n - \frac{1}{\sqrt{5}}(\frac{1-\sqrt{5}}{2})^n$
The second term converges to zero, so taking logs we have
$n \approx \frac{\log (\sqrt{5}F_n)}{\log \phi}$
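For instance, a small Python sketch along these lines, rounding the estimate from the formula above and then verifying it against the exact recurrence, might look as follows (the function name is my own):

from math import log, sqrt

PHI = (1 + sqrt(5)) / 2

def fib_index(m):
    """Return n with F_n == m for a positive Fibonacci number m, else None."""
    if m < 1:
        return None
    n = int(round(log(m * sqrt(5)) / log(PHI)))   # estimate from the closed form
    a, b = 0, 1                                   # verify exactly with the recurrence
    for _ in range(n):
        a, b = b, a + b
    return n if a == m else None

print(fib_index(55))    # 10
print(fib_index(56))    # None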
So I'll simply put two little problems here for readers to think about:
1) Write an algorithm for the following:
Input: Natural number $n$.
Output: The number of integers under $n$ that are divisible by some square number (other than 1) but not by any cubed number (other than 1), e.g. $12 = 2^2\times 3$.
2) Write an algorithm for the following:
Input: Integer $n$.
Output: If $n$ is in the extended Fibonacci sequence ($\left\{ F_n\right\} _{n\in \mathbb{Z}}$), return an index at which it occurs (any one if it occurs at multiple indices); return something else if it is not in the sequence.
|
This question sparked from a long discussion in chat about the nature of $\ce{H2O2}$ and whether that molecule can be considered to rotate around the $\ce{O-O}$ axis (and hence display axial chirality) or not.
Considering two rather clear cases:
Ethane is considered to rotate freely around the $\ce{C-C}$ bond. The activation energy for rotation (equivalent to the energy difference between the staggered and eclipsed conformations) is given as $12.5\,\mathrm{\frac{kJ}{mol}}$.
Ethene on the other hand is considered to not rotate freely around the $\ce{C=C}$ bond. The energy difference between the planar and perpendicular conformations (the rotation barrier) is given at $250\,\mathrm{\frac{kJ}{mol}}$. (I am unsure of this value, which I found on Yahoo answers. If I understand correctly, this should correspond to the excitation from $\pi$ to $\pi^*$, which I found elsewhere as $743\,\mathrm{\frac{kJ}{mol}}$.)
Somewhere between these two extreme cases must lie some kind of barrier around which rotation is severely hindered to entirely inhibited.
Attempting to calculate this myself, I remembered the Boltzmann distribution: If two states differ in energy by $\Delta E$, their relative populations $F_1$ and $F_2$ can be calculated as follows:
$$\frac{F_2}{F_1} = \mathrm{e}^{-\frac{\Delta E}{k_{\mathrm{B}} T}}$$
($F_2$ being the higher energy state and $k_\mathrm{B}$ being the Boltzmann constant.)
If I calculate relative populations from the energy differences $12.5, 19, 25$ and $250\,\mathrm{\frac{kJ}{mol}}$ for ethane, butane, $\ce{H2O2}$ and ethene, respectively (which correspond to $0.13, 0.20, 0.26$ and $2.6\,\mathrm{\frac{eV}{particle}}$) I get the following results at $300\,\mathrm{K}$:
Ethane: $6.55 \times 10^{-3}$
Butane: $4.79 \times 10^{-4}$
$\ce{H2O2}$: $4.29 \times 10^{-5}$
Ethene: $2.09 \times 10^{-44}$
Or logarithmic (ln) values of:
Ethane: $5.03$
Butane: $7.64$
$\ce{H2O2}$: $10.1$
Ethene: $101$
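(These figures come straight from the Boltzmann expression above; a short Python sketch of the computation, using the quoted barrier heights, is:

from math import exp, log

R = 8.314        # J/(mol K), gas constant (k_B * N_A)
T = 300.0        # K

barriers = {"ethane": 12.5e3, "butane": 19e3, "H2O2": 25e3, "ethene": 250e3}  # J/mol

for name, dE in barriers.items():
    ratio = exp(-dE / (R * T))          # relative population at the top of the barrier
    print(name, ratio, -log(ratio))     # e.g. ethane: ~6.6e-3, ~5.0

Small differences from the values above come only from the constants used.)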
Again, somewhere between butane and ethene there must be a threshold-ish value, above which free rotation at $300\,\mathrm{K}$ cannot be assumed, but what range are we looking at, i.e. to what energy difference does this correspond?
Asked the other way round: if I have an energy barrier of $25\,\mathrm{\frac{kJ}{mol}}$, what would I need to put into the equation to calculate the temperature above which free rotation can be assumed?
|
We've all learned at school that the earth was a sphere. Actually, it is more nearly a slightly flattened sphere - an oblate ellipsoid of revolution, also called an oblate spheroid. This is an ellipse rotated about its shorter axis. What are the physical reasons for that phenomenon?
Normally, in the absence of rotation, the natural tendency of gravity is to pull the Earth together into the shape of a sphere.
However the Earth in fact bulges at the equator, and the diameter across the equatorial plane is 42.72 km more than the diameter from pole to pole.
This is due to the rotation of the Earth.
As we can see in the image above, the spinning disk appears to bulge at the points on the disk furthest from the axis of rotation.
This is because in order for the particles of the disk to remain in orbit, there must be an inward force, known as the centripetal force, given by:
$$F = \frac{mv^2}{r},$$
where $F$ is the force, $m$ is the mass of the rotating body, $v$ is the velocity and $r$ is the radius of the particle from the axis of rotation.
If the disk is rotating at a given angular velocity, say $\omega$, then the tangential velocity $v$, is given by $v = \omega r$.
Therefore,
$$F = m\omega^2r$$
Therefore the greater the radius of the particle, the more force is required to maintain such an orbit.
Therefore particles on the Earth near the equator, which are farthest from the axis of rotation, will bulge outwards because they require a greater inward force to maintain their orbit.
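To get a feel for the size of this effect, a back-of-the-envelope Python calculation (assuming a sidereal day of about 86164 s and an equatorial radius of about 6378 km) gives the centripetal acceleration required at the equator:

from math import pi

omega = 2 * pi / 86164          # rad/s, one sidereal day (assumed value)
r_eq = 6.378e6                  # m, equatorial radius (assumed value)
a_centripetal = omega**2 * r_eq

print(a_centripetal)            # ~0.034 m/s^2, a few tenths of a percent of g ~ 9.8 m/s^2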
Additional details for the more mathematically literate, now that MathJax is enabled:
The net force on an object rotating around the equator with a radius $r$ around a planet with a gravitational force of $\frac{Gm_1m_2}{r^2}$ is the centripetal force given by,
$$F_{net} = \frac{Gm_1m_2}{r^2} - N = m\omega^2r,$$ where $N$ is the normal force.
Re-arranging the above equation gives:
$$N = \frac{Gm_1m_2}{r^2} - m\omega^2r$$
The normal force here is the perceived downward force that a rotating body observes. The equation shows that the perceived downward force is lessened due to the centripetal motion. The typical example to illustrate this is the appearance of 0 gravity in a satellite orbiting the Earth, because in this situation the centripetal force is exactly balanced by the gravitational force. On Earth, however, the centripetal force is much less than the gravitational force, so we perceive almost the whole contribution of $mg$.
Now we will examine how the perceived gravitational force differs at different angles of latitude. Let $\theta$ represent the angle of latitude. Let $F_G$ be the force of gravity.
In vector notation we will take the $j$-direction to be parallel with the axis of rotation and the $i$-direction to be perpendicular with the axis of rotation.
In the absence of the Earth's rotation,
$$F_G = N = (-\frac{Gm_1m_2}{r^2}\cos\theta)\tilde{i} + (-\frac{Gm_1m_2}{r^2}\sin\theta)\tilde{j}$$
It is easily seen that the above equation represents the perceived force of gravity in the absence of rotation. Now the centripetal force acts only in the i-direction, since it acts perpendicular to the axis of rotation.
If we let $R_{rot}$ be the radius of rotation, then the centripetal force is $m_1\omega^2R_{rot}$, which for an angle of latitude of $\theta$ corresponds to $m_1\omega^2r\cos{\theta}$
$$N = (-\frac{Gm_1m_2}{r^2} + m_1\omega^2r)\cos{\theta}\tilde{i} + (-\frac{Gm_1m_2}{r^2})\sin{\theta}\tilde{j}$$
By comparing this equation to the case shown earlier in the absence of rotation, it is apparent that as $\theta$ (the angle of latitude) is increased, the effect of rotation on perceived gravity becomes negligible, since the only difference lies in the $x$-component and $\cos\theta$ approaches 0 as $\theta$ approaches 90 degrees latitude. However, it can also be seen that as $\theta$ approaches 0, near the equator, the $x$-component of gravity is reduced as a result of the Earth's rotation.
Therefore, we can see that the magnitude of $N$ is slightly less at the equator than at the poles. The reduced apparent gravitational pull here is what gives rise to the slight bulging of the Earth at the equator, given that the Earth was not originally as rigid as it is today (see other answer).
Actually, the reason why the Earth is not a sphere is twofold:
the Earth is rotating and has been rotating for a long time the Earth is not perfectly rigid, it can even be considered as a viscous fluid on long timescales
If the Earth were not rotating, it would be a sphere. If the Earth had started to rotate very recently, it wouldn't be at equilibrium, thus probably not the ellipsoid of revolution we are familiar with. Last but not least, if the Earth were perfectly rigid, it wouldn't be deformed by any process, including rotation, thus still have its initial shape.
We can consider that the Earth is a fluid in hydrostatic equilibrium (i.e. a fluid at rest) at each point, taking into account both the effect of gravity and centrifugal (pseudo) force due to rotation. Then, if we look for the shape of the Earth's surface under this condition, the solution is an ellipsoid of revolution. It is very close to the actual Earth's surface which is a good evidence that our initial assumption -- rotating fluid in hydrostatic equilibrium -- is reasonable for long timescale.
The study of this question is related to the famous Clairaut's equation, named after the French scientist who published the treatise
Théorie de la figure de la terre in 1743.
NB: if we just explain the bulge at the equator by referring to the effect of the centrifugal pseudo force and ignore the hydrostatic equilibrium issue, we should conclude that the polar radius is the same with or without rotation. However, it is smaller: about 6357 km vs 6371 km for a spherical Earth of equal volume.
That the Earth is approximately an oblate spheroid is best explained by energy.
Place a marble in a bowl. No matter where you place it, it will eventually come to rest at the bottom of the bowl. This is the position that minimizes the total energy of the marble subject to the constraint of being in the bowl. Suspend a chain between two posts. When the chain comes to rest it will take on a well-known shape, that of a catenary curve. This is the shape that minimizes the energy of the chain, subject to the constraint of being suspended between the two posts.
If you place the marble away from the bottom it will roll around for a while before coming to rest. If you pull the chain away from its catenary shape it will swing back and forth for a while before coming to rest in that stable shape. The off-center marble and out-of-plane chain have greater potential energies than they do in their stable configuration. If at all possible, nature will attempt to minimize total potential energy. It's a consequence of the second law of thermodynamics.
In the case of the Earth, that minimum energy configuration is a surface over which the sum of the gravitational and centrifugal potential energies are constant. Something that makes the Earth deviate from this equipotential surface will result in an increase in this potential energy. The Earth will eventually adjust itself back into that minimum energy configuration. This equipotential surface would be an oblate spheroid were it not for density variations such as thick and light continental crust in one place, thin and dense oceanic crust in another.
In terms of force, the quantity we call
g is the gradient of the gravitational and centrifugal potential energies (specifically, $\vec g = -\nabla \Phi$). Since the Earth's surface is very close to being an equipotential surface and since that surface in turn is very close to being an oblate spheroid, gravitation at the poles is necessarily slightly more than it is at the equator.
This gravitational force will not be normal to the surface at places where the surface deviates from the equipotential surface. The tangential component of the gravitational force results in places where water flows downhill and in stresses and strains in the Earth's surface. The eventual responses to these tangential forces are erosion, floods, and sometimes even earthquakes that eventually bring the Earth back to its equilibrium shape.
Update: Why is this the right picture?
Based on comments elsewhere, a number of people don't understand why energy rather than force is the right way to look at this problem, or how the second law of thermodynamics comes into play.
There are a number of different ways to state the second law of thermodynamics. One is that a system tends to a state that maximizes its entropy. For example, put two blocks at two different temperatures in contact with one another. The cooler block will get warmer and the warmer block will get cooler until both blocks are at the same temperature, thanks to the second law of thermodynamics. That uniform temperature is the state that maximizes the entropy of this two block system.
Those two blocks only have thermal energy. What about a system with non-zero mechanical energy? Friction is almost inevitably going to sap kinetic energy from the system. That friction means the system's mechanical energy will decrease until it reaches a global minimum, if any. For a rotating, dissipative, self-gravitating body, that global minimum does exist and it is a (more or less) oblate spheroid shape.
|
Property AP: A discrete group $\Gamma$ has property AP (Approximation Property) if there exists a net $(\phi_i)_{i \in I}$ of finitely supported functions on $\Gamma$ such that $\phi_i \to 1 $ weak$^*$ in $B_2(\Gamma)$ i.e $w(\phi_i) \to w(1)$ for every $w \in Q(\Gamma)$.
I wanted to know if anything is known in the literature about Thompson's group $V$ having property AP.
It would be great if I could be directed to some research papers regarding the same.
Thanks for the help!!
|
Diffraction-limited system
The resolution of an optical imaging system – a microscope, telescope, or camera – can be limited by factors such as imperfections in the lenses or misalignment. However, there is a fundamental maximum to the resolution of any optical system which is due to diffraction. An optical system with the ability to produce images with angular resolution as good as the instrument's theoretical limit is said to be
diffraction limited. [1]
The resolution of a given instrument is proportional to the size of its objective, and inversely proportional to the wavelength of the light being observed. For telescopes with circular apertures, the size of the smallest feature in an image that is diffraction limited is the size of the Airy disc. As one decreases the size of the aperture in a lens, diffraction increases. At small apertures, such as f/22, most modern lenses are limited only by diffraction.
In astronomy, a
diffraction-limited observation is one that is limited only by the optical power of the instrument used. However, most observations from Earth are seeing-limited due to atmospheric effects. Optical telescopes on the Earth work at a much lower resolution than the diffraction limit because of the distortion introduced by the passage of light through several kilometres of turbulent atmosphere. Some advanced observatories have recently started using adaptive optics technology, resulting in greater image resolution for faint targets, but it is still difficult to reach the diffraction limit using adaptive optics.
Radiotelescopes are frequently diffraction-limited, because the wavelengths they use (from millimeters to meters) are so long that the atmospheric distortion is negligible. Space-based telescopes (such as Hubble, or a number of non-optical telescopes) always work at their diffraction limit, if their design is free of optical aberration.
The Abbe diffraction limit for a microscope
The observation of sub-wavelength structures with microscopes is difficult because of the
Abbe diffraction limit. Ernst Abbe found in 1873 that light with wavelength λ, traveling in a medium with refractive index n and converging to a spot with angle θ, will make a spot with radius d = λ/(2 n sin θ). [2]
The denominator n sin θ is called the numerical aperture (NA) and can reach about 1.4–1.6 in modern optics, hence the Abbe limit is d = λ/2.8. Considering green light around 500 nm and a NA of 1, the Abbe limit is roughly d = λ/2 = 250 nm (0.25 μm), which is small compared to most biological cells (1 μm to 100 μm), but large compared to viruses (100 nm), proteins (10 nm) and less complex molecules (1 nm). To increase the resolution, shorter wavelengths can be used such as UV and X-ray microscopes. These techniques offer better resolution but are expensive, suffer from lack of contrast in biological samples and may damage the sample.

Implications for digital photography
In a digital camera, diffraction effects interact with the effects of the regular pixel grid. The combined effect of the different parts of an optical system is determined by the convolution of the point spread functions (PSF). The point spread function of a diffraction limited lens is simply the Airy disc. The point spread function of the camera, otherwise called the instrument response function (IRF), can be approximated by a rectangle function, with a width equivalent to the pixel pitch. A more complete derivation of the modulation transfer function (derived from the PSF) of image sensors is given by Fliegel. [3]
Whatever the exact instrument response function, we may note that it is largely independent of the f-number of the lens. Thus at different f-numbers a camera may operate in three different regimes, as follows:
1. the case where the spread of the IRF is small with respect to the spread of the diffraction PSF, in which case the system may be said to be essentially diffraction limited (so long as the lens itself is diffraction limited);
2. the case where the spread of the diffraction PSF is small with respect to the IRF, in which case the system is instrument limited;
3. the case where the spread of the PSF and IRF are of the same order of magnitude, in which case both impact the available resolution of the system.
The spread of the diffraction-limited PSF is approximated by the diameter of the first null of the Airy disk,
d/2 = 1.22 λN,
where λ is the wavelength of the light and
N is the f-number of the imaging optics. For f/8 and green (0.5 μm wavelength) light, d = 9.76 μm. This is of the same order of magnitude as the pixel size for the majority of commercially available 'full frame' (43mm sensor diagonal) cameras and so these will operate in regime 3 for f-numbers around 8 (few lenses are close to diffraction limited at f-numbers smaller than 8). Cameras with smaller sensors will tend to have smaller pixels, but their lenses will be designed for use at smaller f-numbers and it is likely that they will also operate in regime 3 for those f-numbers for which their lenses are diffraction limited.
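As a rough numerical illustration of this comparison, the following Python sketch contrasts the diffraction spread d = 2.44 λN with an assumed pixel pitch of 6 μm (an illustrative value only, not a figure from the text):

wavelength = 0.5e-6        # m, green light
pixel_pitch = 6e-6         # m, assumed typical full-frame pixel pitch (illustrative)

for N in [2.8, 8, 22]:
    d = 2.44 * wavelength * N          # diameter of the first null of the Airy disk
    regime = "diffraction dominates" if d > pixel_pitch else "pixels dominate"
    print(N, round(d * 1e6, 2), "um:", regime)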
Obtaining higher resolution

There are techniques for producing images that appear to have higher resolution than allowed by simple use of diffraction-limited optics. [4] Although these techniques improve some aspect of resolution, they generally come at an enormous increase in cost and complexity. Usually the technique is only appropriate for a small subset of imaging problems, with several general approaches outlined below.

Extending numerical aperture
For a given numerical aperture (NA), the resolution of microscopy for flat objects under coherent illumination can be improved using interferometric microscopy. Using the partial images from a holographic recording of the distribution of the complex optical field, the large aperture image can be reconstructed numerically. [5]
Another technique, 4 Pi microscopy, uses two opposing objectives to double the effective numerical aperture, effectively halving the diffraction limit.
Among sub-diffraction limited techniques, structured illumination holds the distinction of being one of the only methods that can work with simple reflectance without the need for special dyes or fluorescence and at very long working distances. In this method, multiple spatially modulated illumination patterns are used to double the effective numerical aperture. In principle, the technique can be used at any range and on any target provided that illumination can be controlled. Additionally, if exogenous contrast agents are used, the technique can also achieve more than a two-fold increase in resolution.
Near-field techniques
The diffraction limit is only valid in the far field. Various near-field techniques that operate less than 1 wavelength of light away from the image plane can obtain substantially higher resolution. These techniques exploit the fact that the evanescent field contains information beyond the diffraction limit which can be used to construct very high resolution images, in principle beating the diffraction limit by a factor proportional to how far into the near field an imaging system extends. Techniques such as total internal reflectance microscopy and metamaterials-based superlens can image with resolution better than the diffraction limit by locating the objective lens extremely close (typically hundreds of nanometers) to the object. However, because these techniques cannot image beyond 1 wavelength, they cannot be used to image into objects thicker than 1 wavelength which limits their applicability.
Far-field techniques
Far-field imaging techniques are most desirable for imaging objects that are large compared to the illumination wavelength but that contain fine structure. This includes nearly all biological applications in which cells span multiple wavelengths but contain structure down to molecular scales. In recent years several techniques have shown that sub-diffraction limited imaging is possible over macroscopic distances. These techniques usually exploit optical nonlinearity in a material's reflected light to generate resolution beyond the diffraction limit.
Among these techniques, the STED microscope has been one of the most successful. In STED, multiple laser beams are used to first excite, and then quench, fluorescent dyes. The nonlinear response to illumination caused by the quenching process, in which adding more light causes the image to become less bright, generates sub-diffraction-limited information about the location of dye molecules, allowing resolution far beyond the diffraction limit provided high illumination intensities are used.
Other waves
The same equations apply to other wave-based sensors, such as radar and the human ear.
As opposed to light waves (i.e., photons), massive particles have a different relationship between their quantum mechanical wavelength and their energy. This relationship indicates that the effective "de Broglie" wavelength is inversely proportional to the momentum of the particle. For example, an electron at an energy of 10 keV has a wavelength of 0.01 nm, allowing the electron microscope (SEM or TEM) to achieve high resolution images. Other massive particles such as helium, neon, and gallium ions have been used to produce images at resolutions beyond what can be attained with visible light. Such instruments provide nanometer scale imaging, analysis and fabrication capabilities at the expense of system complexity.
References
Born, Max; Wolf, Emil (1997). Principles of Optics. Cambridge University Press.
Lipson, Lipson and Tannhauser (1998). Optical Physics. Cambridge, United Kingdom. p. 340.
Fliegel, Karel (December 2004). "Modeling and Measurement of Image Sensor Characteristics" (PDF). Radioengineering 13(4).
Niek van Hulst (2009). "Many photons get more out of diffraction".
Y. Kuznetsova; A. Neumann; S. R. Brueck (2007). "Imaging interferometric microscopy–approaching the linear systems limits of optical resolution".
Puts, Erwin (September 2003). "Chapter 3: 180 mm and 280 mm lenses" (PDF). Leica R-Lenses. Describes the Leica APO-Telyt-R 280mm f/4, a diffraction-limited photographic lens.
|
Linear Momentum
Linear momentum is the product of the mass and velocity of an object, it is conserved in elastic and inelastic collisions.
Learning Objectives
Calculate the momentum of two colliding objects
Key Takeaways
Key Points
Like velocity, linear momentum is a vector quantity, possessing a direction as well as a magnitude.
Momentum, like energy, is important because it is a conserved quantity.
The momentum of a system of particles is the sum of their momenta. If two particles have masses m1 and m2, and velocities v1 and v2, the total momentum is [latex]\text{p} = \text{p}_1 + \text{p}_2 = \text{m}_1 \text{v}_1 + \text{m}_2 \text{v}_2[/latex].
Key Terms
inelastic: (As referring to an inelastic collision, in contrast to an elastic collision.) A collision in which kinetic energy is not conserved.
elastic collision: An encounter between two bodies in which the total kinetic energy of the two bodies after the encounter is equal to their total kinetic energy before the encounter. Elastic collisions occur only if there is no net conversion of kinetic energy into other forms.
conservation: A particular measurable property of an isolated physical system does not change as the system evolves.
In classical mechanics, linear momentum, or simply momentum (SI unit kg m/s, or equivalently N s), is the product of the mass and velocity of an object. Mathematically it is stated as:
[latex]\mathbf{\text{p}} = \text{m} \mathbf{\text{v}}[/latex].
(Note here that p and v are vectors. ) Like velocity, linear momentum is a vector quantity, possessing a direction as well as a magnitude. Linear momentum is particularly important because it is a conserved quantity, meaning that in a closed system (without any external forces ) its total linear momentum cannot change.
Because momentum has a direction, it can be used to predict the resulting direction of objects after they collide, as well as their speeds. Momentum is conserved in both inelastic and elastic collisions. (Kinetic energy is not conserved in inelastic collisions but is conserved in elastic collisions.) It is important to note that if the collision takes place on a surface with friction, or if there is air resistance, we would need to account for the momentum of the bodies that would be transferred to the surface and/or air.
Let’s take a look at a simple, one-dimensional example: The momentum of a system of two particles is the sum of their momenta. If two particles have masses
m1 and m2, and velocities v1 and v2, the total momentum is:
[latex]\text{p} = \text{p}_1 + \text{p}_2 = \text{m}_1 \text{v}_1 + \text{m}_2 \text{v}_2\,[/latex].
Keep in mind that momentum and velocity are vectors. Therefore, in 1D, if two particles are moving in the same direction, v1 and v2 have the same sign. If the particles are moving in opposite directions they will have opposite signs.
If two particles were moving on a plane we would choose our xy-plane to be on the plane of motion. We can then write the x and y component of the total momentum as:
[latex]\text{p}_\text{x} = \text{p}_{1\text{x}} + \text{p}_{2\text{x}} = \text{m}_1 \text{v}_{1\text{x}} + \text{m}_2 \text{v}_{2\text{x}}\, \\ \text{p}_\text{y} = \text{p}_{1\text{y}} + \text{p}_{2\text{y}} = \text{m}_1 \text{v}_{1\text{y}} + \text{m}_2 \text{v}_{2\text{y}}.[/latex]
If the 2D momentum vector is decomposed into two components, the equations for each component are reduced to its 1D equivalents.
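A tiny Python sketch of this componentwise bookkeeping, with made-up masses and velocities, illustrates the sum:

m1, v1 = 2.0, (3.0, 1.0)     # kg, m/s  (illustrative values)
m2, v2 = 1.5, (-2.0, 4.0)

px = m1 * v1[0] + m2 * v2[0]   # x-component of the total momentum
py = m1 * v1[1] + m2 * v2[1]   # y-component of the total momentum
print(px, py)                  # 3.0 8.0 (kg m/s)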
Momentum, like energy, is important because it is conserved. The “Newton’s cradle” toy is an example of conservation of momentum. As we will discuss in the next concept (on Momentum, Force, and Newton’s Second Law), in classical mechanics, conservation of linear momentum is implied by Newton’s laws. Only a few physical quantities are conserved in nature. Studying these quantities yields fundamental insight into how nature works.
Momentum, Force, and Newton’s Second Law
In the most general form, Newton’s 2nd law can be written as [latex]\text{F} = \frac{\text{dp}}{\text{dt}}[/latex].

Learning Objectives
Relate Newton’s Second Law to momentum and force
Key Takeaways
Key Points
In a closed system, without any external forces, the total momentum is constant.
The familiar equation [latex]\text{F}=\text{ma}[/latex] is a special case of the more general form of the 2nd law when the mass of the system is constant.
Momentum conservation holds (in the absence of external force) regardless of the nature of the interparticle (or internal) force, no matter how complicated the force is between particles.
Key Terms
closed system: A physical system that doesn’t exchange any matter with its surroundings and isn’t subject to any force whose source is external to the system.
In a closed system (one that does not exchange any matter with the outside and is not acted on by outside forces), the total momentum is constant. This fact, known as the law of conservation of momentum, is implied by Newton’s laws of motion. Suppose, for example, that two particles interact. Because of the third law, the forces between them are equal and opposite. If the particles are numbered 1 and 2, the second law states that
[latex]\frac{\text{d} \text{p}_1}{\text{d} \text{t}} = - \frac{\text{d} \text{p}_2}{\text{d} \text{t}}[/latex]
or
[latex]\frac{\text{d}}{\text{d} \text{t}} \left(\text{p}_1+ \text{p}_2\right)= 0[/latex]
Therefore, total momentum (p1 + p2) is constant. If the velocities of the particles are u1 and u2 before the interaction, and afterwards they are v1 and v2, then
[latex]\text{m}_1 \text{u}_{1} + \text{m}_2 \text{u}_{2} = \text{m}_1 \text{v}_{1} + \text{m}_2 \text{v}_{2}[/latex]
This law holds regardless of the nature of the interparticle (or internal) force, no matter how complicated the force is between particles. Similarly, if there are several particles, the momentum exchanged between each pair of particles adds up to zero, so the total change in momentum is zero.
Newton’s Second Law
Newton actually stated his second law of motion in terms of momentum: The net external force equals the change in momentum of a system divided by the time over which it changes. Using symbols, this law is
[latex]\text{F}_{\text{net}} = \frac{\Delta \text{p}}{\Delta \text{t}}[/latex],
where [latex]\text{F}_{\text{net}}[/latex] is the net external force, Δp is the change in momentum, and Δt is the change in time.
This statement of Newton’s second law of motion includes the more familiar [latex]\text{F}_{\text{net}} = \text{ma}[/latex] as a special case. We can derive this form as follows. First, note that the change in momentum Δp is given by Δp = Δ(mv). If the mass of the system is constant, then Δ(mv) = mΔv. So for constant mass, Newton’s second law of motion becomes
[latex]\text{F}_{\text{net}} = \frac{\Delta \text{p}}{\Delta \text{t}} = \frac{\text{m}\Delta \text{v}}{\Delta \text{t}}[/latex].
Because Δv/Δt = a, we get the familiar equation [latex]\text{F}_{\text{net}} = \text{ma}[/latex] when the mass of the system is constant. Newton’s second law of motion stated in terms of momentum is more generally applicable because it can be applied to systems where the mass is changing, such as rockets, as well as to systems of constant mass.

Impulse
Impulse, or change in momentum, equals the average net external force multiplied by the time this force acts.
Learning Objectives
Explain the relationship between change in momentum and the amount of time a force acts
Key Takeaways
Key Points
A small force applied for a long time can produce the same momentum change as a large force applied briefly, because it is the product of the force and the time for which it is applied that is important.
A force produces an acceleration, and the greater the force acting on an object, the greater its change in velocity and, hence, the greater its change in momentum. However, changing momentum is also related to how long a time the force acts.
In case of a time-varying force, impulse can be calculated by integrating the force over the time duration: [latex]\text{Impulse} = \int_{\text{t}_1}^{\text{t}_2} \text{F}(\text{t}) \text{dt}[/latex].
Key Terms
momentum: (of a body in motion) the product of its mass and velocity.
impulse: The integral of force over time.

Impulse
Forces produce either acceleration or deceleration on moving bodies, and the greater the force acting on an object, the greater its change in velocity and, hence, the greater its change in momentum. However, changing momentum is also related to how long a time the force acts. If a brief force is applied to a stalled automobile, a change in its momentum is produced. The same force applied over an extended period of time produces a greater change in the automobile’s momentum. The quantity of impulse is
force × time interval, or in shorthand notation:
[latex]\text{Impulse} = \text{F} \Delta \text{t}[/latex],
where F is the net force on the system, and Δt is the duration of the force.
From Newton’s 2nd law:
[latex]\text{F} = \frac {\Delta \text{p}}{\Delta \text{t}}[/latex] (Δp: change in momentum),
change in momentum equals the average net external force multiplied by the time this force acts.
[latex]\Delta \text{p} = \text{F} \Delta \text{t}[/latex].
Therefore, impulse as defined in the previous paragraph is simply equivalent to Δp.
A force sustained over a long time produces more change in momentum than does the same force applied briefly. A small force applied for a long time can produce the same momentum change as a large force applied briefly because it is the product of the force and the time for which it is applied that is important. Impulse is always equal to change in momentum and is measured in Ns (Newton seconds), as both force and the time interval are important in changing momentum.
Our definition of impulse includes an assumption that the force is constant over the time interval Δt. Forces are usually not constant. Forces vary considerably even during the brief time intervals considered. It is, however, possible to find an average effective force Feff that produces the same result as the corresponding time-varying force. Consider a graph of what an actual force looks like as a function of time for a ball bouncing off the floor. The area under the curve has units of momentum and is equal to the impulse or change in momentum between times t1 and t2. That area is equal to the area inside the rectangle bounded by Feff, t1, and t2. Thus, the impulses and their effects are the same for both the actual and effective forces. Equivalently, we can simply find the area under the curve F(t) between t1 and t2 to compute the impulse in mathematical form:
[latex]\text{Impulse} = \int_{\text{t}_1}^{\text{t}_2} \text{F}(\text{t}) \text{dt}[/latex].
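As an illustration of this last formula, here is a short Python sketch that integrates an assumed half-sine force pulse numerically (the peak force and duration are made-up example values) and recovers the equivalent average force Feff:

from math import sin, pi

F_peak, t1, t2 = 100.0, 0.0, 0.05      # N, s (assumed example values)
n = 100000
dt = (t2 - t1) / n

# Riemann sum for the impulse J = ∫ F(t) dt with F(t) = F_peak * sin(pi*t/(t2-t1))
impulse = sum(F_peak * sin(pi * (t1 + (i + 0.5) * dt) / (t2 - t1)) * dt for i in range(n))
print(impulse)                          # ~3.18 N s, i.e. 2*F_peak*(t2-t1)/pi
F_eff = impulse / (t2 - t1)
print(F_eff)                            # ~63.7 N, the equivalent constant force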
|
I'm trying to convert some English statements to first order logic statements and I'm trying to use Prolog to verify the translations. My question is: how do I convert a first order logic statement (having $\forall$ and $\exists$ quantifiers) into Prolog rules?
For example, there's this English statement:
Every voter votes for a candidate which some voter doesn't vote for.
and here's my translation of the English statement into first order logic:
$$\forall x, y[Voter(x) \land Candidate(y) \land Votes(x, y) \rightarrow \exists z[ Voter(z) \land \lnot Votes(z, y) ] ]$$
Now I'm not sure if this translation is correct. That's what I want to find out. My question is: how do I convert this first order logic statement to a Prolog rule?
So first I'm trying to fill a Prolog database with some facts.
human(p1). human(p2). human(p3). human(p4). human(p5).
human(p6). human(p7). human(p8). human(p9).
% humans who are neither voters nor candidates
human(p10). human(p11).
% humans who are only voters
voter(p1). voter(p2). voter(p3).
% humans who are candidates and voters
voter(p4). voter(p5). voter(p6).
candidate(p4). candidate(p5). candidate(p6).
% humans who are only candidates
candidate(p7). candidate(p8). candidate(p9).
% some random votes
votes(p1, p6). votes(p2, p6). votes(p3, p6).
votes(p4, p7). votes(p5, p8). votes(p6, p5).
I'm using human, voter, candidate, and votes. Here are some attempts to model the statement as a Prolog rule:
rule1 :- foreach((voter(X), candidate(Y), votes(X, Y)),
                 (voter(Z), \+votes(Z, Y))).

rule2 :- foreach((human(X), voter(X), candidate(Y), votes(X, Y)),
                 (human(Z), voter(Z), \+votes(Z, Y))).
|
The Annals of Probability, Volume 20, Number 2 (1992), 801-824.
Limit Theorems for Random Walks Conditioned to Stay Positive
Abstract
Let $\{S_n\}$ be a random walk on the integers with negative drift, and let $A_n = \{S_k \geq 0, 1 \leq k \leq n\}$ and $A = A_\infty$. Conditioning on $A$ is troublesome because $P(A) = 0$ and there is no natural sigma-field of events "like" $A$. A natural definition of $P(B\mid A)$ is $\lim_{n\rightarrow\infty}P(B\mid A_n)$. The main result here shows that this definition makes sense, at least for a large class of events $B$: The finite-dimensional conditional distributions for the process $\{S_k\}_{k\geq 0}$ given $A_n$ converge strongly to the finite-dimensional distributions for a measure $\mathbf{Q}$. This distribution $\mathbf{Q}$ is identified as the distribution for a stationary Markov chain on $\{0,1,\ldots\}$.
Article information
Source: Ann. Probab., Volume 20, Number 2 (1992), 801-824.
Dates: First available in Project Euclid: 19 April 2007
Permanent link to this document: https://projecteuclid.org/euclid.aop/1176989807
Digital Object Identifier: doi:10.1214/aop/1176989807
Mathematical Reviews number (MathSciNet): MR1159575
Zentralblatt MATH identifier: 0756.60062
JSTOR: links.jstor.org
Subjects: Primary: 60J15; Secondary: 60G50: Sums of independent random variables; random walks
Citation
Keener, Robert W. Limit Theorems for Random Walks Conditioned to Stay Positive. Ann. Probab. 20 (1992), no. 2, 801--824. doi:10.1214/aop/1176989807. https://projecteuclid.org/euclid.aop/1176989807
|
In this note, we produce a proof of Taylor’s Theorem. As in many proofs of Taylor’s Theorem, we begin with a curious start and then follow our noses forward.
Is this a new proof? I think so. But I wouldn’t bet a lot of money on it. It’s certainly new to me.
Is this a groundbreaking proof? No, not at all. But it’s cute, and I like it.
1
We begin with the following simple observation. Suppose that $f$ is two times continuously differentiable. Then for any $t \neq 0$, we see that \begin{equation} f'(t) - f'(0) = \frac{f'(t) - f'(0)}{t} t. \end{equation} Integrating each side from $0$ to $x$, we find that \begin{equation} f(x) - f(0) - f'(0) x = \int_0^x \frac{f'(t) - f'(0)}{t} t \, dt. \end{equation} To interpret the integral on the right in a different way, we will use the mean value theorem for integrals.
Mean Value Theorem for Integrals
Suppose that $g$ and $h$ are continuous functions, and that $h$ doesn’t change sign in $[0, x]$. Then there is a $c \in [0, x]$ such that \begin{equation} \int_0^x g(t) h(t) dt = g(c) \int_0^x h(t) dt. \end{equation}
Suppose without loss of generality that $h(t)$ is nonnegative. Since $g$ is continuous on $[0, x]$, it attains its minimum $m$ and maximum $M$ on this interval. Thus \begin{equation} m \int_0^x h(t) dt \leq \int_0^x g(t)h(t)dt \leq M \int_0^x h(t) dt. \end{equation} Let $I = \int_0^x h(t) dt$. If $I = 0$ (or equivalently, if $h(t) \equiv 0$), then the theorem is trivially true, so suppose instead that $I \neq 0$. Then \begin{equation} m \leq \frac{1}{I} \int_0^x g(t) h(t) dt \leq M. \end{equation} By the intermediate value theorem, $g(t)$ attains every value between $m$ and $M$, and thus there exists some $c$ such that \begin{equation} g(c) = \frac{1}{I} \int_0^x g(t) h(t) dt. \end{equation} Rearranging proves the theorem.
For this application, let $g(t) = (f'(t) - f'(0))/t$ for $t \neq 0$, and $g(0) = f'{}'(0)$. The continuity of $g$ at $0$ is exactly the condition that $f'{}'(0)$ exists. We also let $h(t) = t$.
For $x > 0$, it follows from the mean value theorem for integrals that there exists a $c \in [0, x]$ such that \begin{equation} \int_0^x \frac{f'(t) - f'(0)}{t} t \, dt = \frac{f'(c) - f'(0)}{c} \int_0^x t \, dt = \frac{f'(c) - f'(0)}{c} \frac{x^2}{2}. \end{equation} (Very similar reasoning applies for $x < 0$). Finally, by the mean value theorem (applied to $f'$), there exists a point $\xi \in (0, c)$ such that \begin{equation} f'{}'(\xi) = \frac{f'(c) - f'(0)}{c}. \end{equation} Putting this together, we have proved that there is a $\xi \in (0, x)$ such that \begin{equation} f(x) - f(0) - f'(0) x = f'{}'(\xi) \frac{x^2}{2}, \end{equation} which is one version of Taylor's Theorem with a linear approximating polynomial.
This approach generalizes. Suppose $f$ is a $(k+1)$ times continuously differentiable function, and begin with the trivial observation that \begin{equation} f^{(k)}(t) - f^{(k)}(0) = \frac{f^{(k)}(t) - f^{(k)}(0)}{t} t. \end{equation} Iteratively integrate $k$ times: first from $0$ to $t_1$, then from $0$ to $t_2$, and so on, with the $k$th interval being from $0$ to $t_k = x$.
Then the left hand side becomes \begin{equation} f(x) - \sum_{n = 0}^k f^{(n)}(0)\frac{x^n}{n!}, \end{equation} the difference between $f$ and its degree $k$ Taylor polynomial. The right hand side is
\begin{equation}\label{eq:only}\underbrace{\int_0^{t_k = x} \cdots \int_0^{t_1}}_{k \text{ times}} \frac{f^{(k)}(t) - f^{(k)}(0)}{t} t \, dt \, dt_1 \cdots dt_{k-1}.\end{equation}
To handle this, we note the following variant of the mean value theorem for integrals.
Mean value theorem for iterated integrals
Suppose that $g$ and $h$ are continuous functions, and that $h$ doesn’t change sign in $[0, x]$. Then there is a $c \in [0, x]$ such that \begin{equation} \underbrace{\int_0^{t _k=x} \cdots \int _0^{t _1}} _{k \; \text{times}} g(t) h(t) dt =g(c) \underbrace{\int _0^{t _k=x} \cdots \int _0^{t _1}} _{k \; \text{times}} h(t) dt. \end{equation}
In fact, this can be proved in almost exactly the same way as in the single-integral version, so we do not repeat the proof.
With this theorem, there is a $c \in [0, x]$ such that we see that \eqref{eq:only} can be written as \begin{equation} \frac{f^{(k)}(c) - f^{(k)}(0)}{c} \underbrace{\int_0^{t_k = x} \cdots \int_0^{t_1}}_{k \; \text{times}} t \, dt \, dt_1 \cdots dt_{k-1}. \end{equation} By the mean value theorem, the factor in front of the integrals can be written as $f^{(k+1)}(\xi)$ for some $\xi \in (0, x)$. The integrals can be directly evaluated to be $x^{k+1}/(k+1)!$.
Thus overall, we find that \begin{equation} f(x) = \sum_{n = 0}^{k} f^{(n)}(0) \frac{x^n}{n!} + f^{(k+1)}(\xi) \frac{x^{k+1}}{(k+1)!} \end{equation} for some $\xi \in (0, x)$. Thus we have proved Taylor's Theorem (with Lagrange's error bound).
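As a quick numerical sanity check of this statement (not part of the proof), one can verify for $f = \exp$, $k = 2$, $x = 1$ that the remainder really has the form $f^{(k+1)}(\xi) x^{k+1}/(k+1)!$ for some $\xi \in (0,x)$:

from math import exp, factorial, log

x, k = 1.0, 2
taylor = sum(x**n / factorial(n) for n in range(k + 1))   # 1 + x + x^2/2 for exp
remainder = exp(x) - taylor                               # ~0.21828

# solve f'''(xi) * x^3 / 3! = remainder for xi, using f''' = exp
xi = log(remainder * factorial(k + 1) / x**(k + 1))
print(remainder, xi, 0 < xi < x)                          # xi ~ 0.27, inside (0, 1)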
|
What works
If you nest the definition of the fixpoint on lists inside the definition of the fixpoint on trees, the result is well-typed. This is a general principle when you have nested recursion in an inductive type, i.e. when the recursion goes through a constructor like list.
Fixpoint size (t : LTree) : nat :=
let size_l := (fix size_l (l : list LTree) : nat :=
match l with
| nil => 0
| h::r => size h + size_l r
end) in
match t with Node l =>
1 + size_l l
end.
Or if you prefer to write this more tersely:
Fixpoint size (t : LTree) : nat :=
match t with Node l =>
1 + (fix size_l (l : list LTree) : nat :=
match l with
| nil => 0
| h::r => size h + size_l r
end) l
end.
(I have no idea who I heard it from first; this was certainly discovered independently many times.)
A general recursion predicate
More generally, you can define the “proper” induction principle on LTree manually. The automatically generated induction principle LTree_rect omits the hypothesis on the list, because the induction principle generator only understands non-nested strictly positive occurrences of the inductive type.
LTree_rect =
fun (P : LTree -> Type) (f : forall l : list LTree, P (Node l)) (l : LTree) =>
match l as l0 return (P l0) with
| Node x => f x
end
: forall P : LTree -> Type,
(forall l : list LTree, P (Node l)) -> forall l : LTree, P l
Let's add the induction hypothesis on lists. To fulfill it in the recursive call, we call the list induction principle and pass it the tree induction principle on the smaller tree inside the list.
Fixpoint LTree_rect_nest (P : LTree -> Type) (Q : list LTree -> Type)
(f : forall l, Q l -> P (Node l))
(g : Q nil) (h : forall t l, P t -> Q l -> Q (cons t l))
(t : LTree) :=
match t as t0 return (P t0) with
| Node l => f l (list_rect Q g (fun u r => h u r (LTree_rect_nest P Q f g h u)) l)
end.
Why
The answer to why lies in the precise rules for accepting recursive functions. These rules are perforce subtle, because there is a delicate balance between allowing complex cases (such as this one, with nested recursion in the datatype) and unsoundness. The Coq reference manual introduces the language (the calculus of inductive constructions, which is the proof language of Coq), mostly with formally precise definitions, but if you want the exact rules regarding induction and coinduction you'll need to go to the research papers, on this topic Eduardo Giménez's [1].
Starting with the Coq manual, in the notation of the Fix rule, we have the fixpoint definition of $\mathsf{Fix} f_i \, \{ f_1 : A_1 := t_1 \; ; \;\; f_2 : A_2 := t_2 \}$ where:
$$\begin{align*} \Gamma_1 &= (x : \mathtt{LTree}) & A_1 &= \mathrm{nat} & t_1 &= \mathsf{case}(x, \mathtt{LTree}, \lambda y. g_1 (f_2 y)) \\ \Gamma_2 &= (l : \mathtt{list}\;\mathtt{LTree}) & A_2 &= \mathrm{nat} & t_2 &= \mathsf{case}(l, \mathtt{list \: LTree}, \lambda h \: r. g_2 (f_1 h) (f_2 r)) \\\end{align*}$$
In order for the fixpoint definition to be accepted, “if $f_j$ occurs [in $t_i$] then … the argument should be syntactically recognized as structurally smaller than [the argument passed to $f_i$]” (simplifying because the functions have a single argument). Here, we need to check that:
$i=1$, $j=2$: l must be structurally smaller than t in size: ok.
$i=2$, $j=1$: h must be structurally smaller than l in size_l: looks ok but isn't!
$i=2$, $j=2$: r must be structurally smaller than l in size_l: ok.
The reason why h is not structurally smaller than l according to the Coq interpreter is not clear to me. As far as I understand from discussions on the Coq-club list [1] [2], this is a restriction in the interpreter, which could in principle be lifted, but very carefully to avoid introducing an inconsistency.
References
Cocorico, the nonterminating Coq wiki: Mutual Induction
Coq-Club mailing list:
The Coq Development Team. The Coq Proof Assistant: Reference Manual. Version 8.3 (2010). [web] ch. 4.
Eduardo Giménez. Codifying guarded definitions with recursive schemes. In Types'94: Types for Proofs and Programs, LNCS 996. Springer-Verlag, 1994. doi:10.1007/3-540-60579-7_3 [Springer]
Eduardo Giménez. Structural Recursive Definitions in Type Theory. In ICALP'98: Proceedings of the 25th International Colloquium on Automata, Languages and Programming. Springer-Verlag, 1998. [PDF]
|
1. Steiner symmetry in the minimization of the first eigenvalue in problems involving the p-Laplacian
Proceedings of the American Mathematical Society, ISSN 0002-9939, 08/2016, Volume 144, Issue 8, pp. 3431 - 3440
Let \Omega \subset \mathbb{R}^N be an open bounded connected set. We consider the eigenvalue problem -\Delta _p u =\lambda \rho \vert u\vert^{p-2}u in \Omega...
Minimization | Steiner symmetry | Eigenvalue problem | MATHEMATICS | MATHEMATICS, APPLIED | minimization | REGULARITY | QUASILINEAR ELLIPTIC-EQUATIONS | OPTIMIZATION
Journal Article
2. On a nonconvex MINLP formulation of the Euclidean Steiner tree problem in n-space: missing proofs
Optimization Letters, ISSN 1862-4472, 07/2018, pp. 1 - 7
Journal Article
3. The effect of attending Steiner schools during childhood on health in adulthood: A multicentre cross-sectional study
PLoS ONE, ISSN 1932-6203, 2013, Volume 8, Issue 9, Art. e73135, 14 pp.
Background. It is speculated that attending Steiner schools, whose pedagogical principles include an account for healthy psycho-physical development, may have...
Waldorfpädagogik | Bildungsforschung | Erwachsenenbildung | Gesundheit | Waldorfschule | WHEEZE | LIFE-STYLE | ECZEMA | MULTIDISCIPLINARY SCIENCES | ALLERGIC DISEASE | ATOPIC SENSITIZATION | CHILDREN | Young Adult | Schools - statistics & numerical data | Cross-Sectional Studies | Humans | Middle Aged | Aged, 80 and over | Adult | Female | Male | Surveys and Questionnaires | Aged | Germany | Heart failure | Hypertension | Heart | Lung diseases, Obstructive | Multiple sclerosis | Health | Coronary heart disease | Backache | Surveys | Hypercholesterolemia | Atopic dermatitis | Analysis | Health aspects | Osteoarthritis | Schools | Pediatrics | Arrhythmia | Rhinitis | Secondary education | Mental depression | Dermatitis | Epidemiology | Health economics | Education | Biocompatibility | Physiology | Chronic obstructive pulmonary disease | Gastrointestinal symptoms | Allergies | Asthma | Arteriosclerosis | Adults | Lifestyles | Sociodemographics | Headache | Back pain | Arthritis | Angina | Sleep disorders | Biomedical materials | Pain | Immunology | Allergic rhinitis | Measles | Children | Heart diseases | Pain perception | Statistical analysis | Diabetes mellitus | Lung diseases | Health risks | Curricula | Angina pectoris | Coronary artery disease | Medicine | Pedagogy | Antibiotics | Insomnia | Obstructive lung disease | Data collection | Cancer
Journal Article
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), ISSN 0302-9743, 2015, Volume 9125, pp. 122 - 133
Conference Proceeding
5. Assessing the predictability of ANB, 1-NB, P-NB and 1-NA measurements on Steiner cephalometric analysis
Dental Press Journal of Orthodontics, ISSN 2176-9451, 2013, Volume 18, Issue 2, pp. 125 - 132
To evaluate, in the initial and final stages of corrective orthodontic treatment, the predictability of the ANB, 1-NB, PNB and 1-NA during case...
Radiography | Tooth movement | Growth and development | Malocclusion, Angle Class I - diagnostic imaging | Cephalometry - methods | Humans | Male | Reference Values | Malocclusion, Angle Class I - therapy | Cephalometry - standards | Malocclusion, Angle Class II - therapy | Maxillofacial Development - physiology | Analysis of Variance | Orthodontics, Corrective - methods | Bicuspid - surgery | Malocclusion, Angle Class II - pathology | Sex Factors | Tooth Extraction | Female | Malocclusion, Angle Class I - pathology | Retrospective Studies | Malocclusion, Angle Class II - diagnostic imaging | Anatomic Landmarks | Crescimento e desenvolvimento | Radiografia | Movimentação dentária
Journal Article
6. Assessing the predictability of ANB, 1-NB, P-NB and 1-NA measurements on Steiner cephalometric analysis
Dental Press Journal of Orthodontics, ISSN 2177-6709, 04/2013, Volume 18, Issue 2, pp. 125 - 132
OBJECTIVE: To evaluate, in the initial and final stages of corrective orthodontic treatment, the predictability of the ANB, 1-NB, PNB and 1-NA during case...
DENTISTRY, ORAL SURGERY & MEDICINE
Journal Article
7. The Effect of Attending Steiner Schools during Childhood on Health in Adulthood: A Multicentre Cross-Sectional Study: e73135
PLoS ONE, 09/2013, Volume 8, Issue 9
Background It is speculated that attending Steiner schools, whose pedagogical principles include an account for healthy psycho-physical development, may have...
Asthma
Journal Article
8. Minimization and Steiner symmetry of the first eigenvalue for a fractional eigenvalue problem with indefinite weight
04/2019
Let $\Omega\subset\mathbb{R}^N$, $N\geq 2$, be an open bounded connected set. We consider the fractional weighted eigenvalue problem $(-\Delta)^s u =\lambda...
Mathematics - Analysis of PDEs
Journal Article
Electronic Journal of Differential Equations, ISSN 1072-6691, 04/2013, Volume 2013, Issue 108, pp. 1 - 10
We consider the functional f bar right arrow integral(Omega) ((q+1)(2) vertical bar Du(f)vertical bar(2) - u(f) vertical bar u(f)vertical bar(q) f)dx, where...
Regularity | Steiner symmetry | Laplacian | Optimization problem | Rearrangements | MATHEMATICS | optimization problem | MATHEMATICS, APPLIED | rearrangements | regularity
Journal Article
2000, 1. ed., Clío, ISBN 9586554260, xxix, 159
Book
11. Hans‐Uwe Lammel. Klio und Hippokrates: Eine Liaison littéraire des 18. Jahrhunderts und die Folgen für die Wissenschaftskultur bis 1850 in Deutschland. (Sudhoffs Archiv, 55.). 505 pp., bibl., index. Stuttgart: Franz Steiner Verlag, 2005. €64 (paper)
Isis, ISSN 0021-1753, 06/2008, Volume 99, Issue 2, pp. 428 - 429
Journal Article
12. New Insights into the Phylogeny and Worldwide Dispersion of Two Closely Related Nematode Species, Bursaphelenchus xylophilus and Bursaphelenchus mucronatus
PLoS ONE, ISSN 1932-6203, 02/2013, Volume 8, Issue 2, p. e56288
The pinewood nematode, Bursaphelenchus xylophilus, is one of the greatest threats to coniferous forests worldwide, causing severe ecological damage and...
PORTUGAL | APHELENCHOIDIDAE | B-MUCRONATUS | WOOD | DNA | MULTIDISCIPLINARY SCIENCES | ALGORITHM | STEINER | INFERENCE | PINE WILT DISEASE | EVOLUTIONARY DISTANCE | Animals | DNA, Mitochondrial - genetics | Internationality | Phylogeography | Tylenchida - physiology | Animal Distribution | Phylogeny | Biodiversity | Tylenchida - classification | Tylenchida - genetics | Coniferous forests | Genetic markers | Mitochondrial DNA | Food supply | Nematoda | Biological evolution | Food resources | Species diversity | Ecological effects | Demographics | Immunology | Phylogenetics | Genetics | Life sciences | Bioinformatics | Deoxyribonucleic acid--DNA | Pathogens | Introduced species | Reproductive isolation | Conifers | Genetic diversity | Vectors | Dispersion | Biological diversity | Pathogenicity | Studies | Pathology | Invasive species | Gene flow | Insects | Morphology | Nematodes | Deoxyribonucleic acid
Journal Article
13. Disability-adjusted life years (DALYs) for 291 diseases and injuries in 21 regions, 1990–2010: a systematic analysis for the Global Burden of Disease Study 2010
Lancet, The, ISSN 0140-6736, 12/2012, Volume 380, Issue 9859, pp. 2197 - 2223
Summary Background Measuring disease and injury burden in populations requires a composite metric that captures both premature mortality and the prevalence and...
Internal Medicine | MORTALITY | ECONOMIC COST | POPULATION | MEDICINE, GENERAL & INTERNAL | AUSTRALIA | RISK-FACTORS | COUNTRIES | LOST | PUBLIC-HEALTH UTILITY | EXPECTANCY | COPD | Quality-Adjusted Life Years | Prevalence | Age Factors | Humans | Middle Aged | Child, Preschool | Infant | Male | Wounds and Injuries - epidemiology | Young Adult | Adolescent | Global Health - statistics & numerical data | Sex Factors | Aged, 80 and over | Adult | Female | Aged | Health Status | Child | Infant, Newborn | Disability | Research | Health aspects | Analysis | Studies | Health care | Discount rates | Disease | Mortality | Public health | Life Sciences | Human health and pathology | Wounds and Injuries | Infectious diseases | World Health | Samhällsvetenskap | Sociology (excluding Social Work, Social Psychology and Social Anthropology) | Sociologi (exklusive socialt arbete, socialpsykologi och socialantropologi) | Social Sciences | Sociology | Sociologi
Journal Article
14. Years lived with disability (YLDs) for 1160 sequelae of 289 diseases and injuries 1990–2010: a systematic analysis for the Global Burden of Disease Study 2010
Lancet, The, ISSN 0140-6736, 12/2012, Volume 380, Issue 9859, pp. 2163 - 2196
Summary Background Non-fatal health outcomes from diseases and injuries are a crucial consideration in the promotion and monitoring of individual and...
Internal Medicine | POPULATION | MEDICINE, GENERAL & INTERNAL | HEARING IMPAIRMENT | MENTAL-DISORDERS | IRON-DEFICIENCY ANEMIA | PREVALENCE | HEALTH | 187 COUNTRIES | DEPRESSIVE DISORDER | COGNITIVE FUNCTION | LIFE EXPECTANCY | Quality-Adjusted Life Years | Prevalence | Age Factors | Humans | Middle Aged | Child, Preschool | Infant | Male | Wounds and Injuries - epidemiology | Incidence | Young Adult | Adolescent | Global Health - statistics & numerical data | Sex Factors | Aged, 80 and over | Adult | Female | Aged | Health Status | Child | Infant, Newborn | Disability | Patient outcomes | Research | Studies | Disease | Mortality | Public health | Life Sciences | Wounds and Injuries | World Health | Santé publique et épidémiologie | Samhällsvetenskap | Sociology (excluding Social Work, Social Psychology and Social Anthropology) | Sociologi (exklusive socialt arbete, socialpsykologi och socialantropologi) | Social Sciences | Sociology | Sociologi
Journal Article
|
1) If $$z = \frac{ \left\{ \sqrt{3} \right\}^2 - 2 \left\{ \sqrt{2} \right\}^2 }{ \left\{ \sqrt{3} \right\} - 2 \left\{ \sqrt{2} \right\} }$$ find $\lfloor z \rfloor$.
2) Find all $x$ for which $$\left| x - \left| x-1 \right| \right| = \lfloor x \rfloor.$$ Express your answer in interval notation.
3) Which positive real number $x$ has the property that $x$, $\lfloor x \rfloor$, and $x - \lfloor x\rfloor$ form a geometric progression (in that order)? (Recall that $\lfloor x\rfloor$ means the greatest integer less than or equal to $x$.)
4) Let $$N = \sum_{k = 1}^{1000}k(\lceil \log_{\sqrt {2}}k\rceil - \lfloor \log_{\sqrt {2}}k \rfloor). $$ Find $N$.
5) Let $f(x)$ be the function whose domain is all positive real numbers defined by the formula $$f(x) = \begin{cases} \dfrac{\sqrt{2x+5}-\sqrt{x+7}}{x-2} & x \neq 2\\ k & x = 2 \end{cases}$$If $f(x)$ is continuous, what is the value of $k$?
6) Prove that $\lfloor 2x \rfloor + \lfloor 2y \rfloor \geq \lfloor x \rfloor + \lfloor y \rfloor + \lfloor x+y \rfloor$ for all real $x$ and $y$.
Thanks for help in advance!
Hmm. If I use LaTeX here in the answer some of it displays properly in the question. If I delete the LaTeX here the question looks a mess again!!
So: Use the LaTeX button and copy and paste your questions into the resulting box, having removed the $ signs. e.g.
If \(z = \frac{ \left\{ \sqrt{3} \right\}^2 - 2 \left\{ \sqrt{2} \right\}^2 }{ \left\{ \sqrt{3} \right\} - 2 \left\{ \sqrt{2} \right\} }\) find \(\lfloor z \rfloor\)
1.) If \(z = \frac{ \left\{ \sqrt{3} \right\}^2 - 2 \left\{ \sqrt{2} \right\}^2 }{ \left\{ \sqrt{3} \right\} - 2 \left\{ \sqrt{2} \right\} } \), find \(\lfloor z \rfloor\).
The fact that \(\lfloor \sqrt{2} \rfloor = \lfloor \sqrt{3} \rfloor = 1\) should help. Rewrite the roots in braces as \(\{\sqrt{2}\} = \sqrt{2} - 1\) and \(\{\sqrt{3}\} = \sqrt{3} - 1\).
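(Not part of the original thread: carrying the hint above one step further, a sketch of the remaining algebra for problem 1.)
$$\left\{\sqrt{3}\right\}^2 - 2\left\{\sqrt{2}\right\}^2 = (\sqrt{3}-1)^2 - 2(\sqrt{2}-1)^2 = (4-2\sqrt{3}) - (6-4\sqrt{2}) = -2\left(1+\sqrt{3}-2\sqrt{2}\right),$$
$$\left\{\sqrt{3}\right\} - 2\left\{\sqrt{2}\right\} = (\sqrt{3}-1) - 2(\sqrt{2}-1) = 1+\sqrt{3}-2\sqrt{2},$$
so the numerator is exactly \(-2\) times the denominator, giving \(z = -2\) and \(\lfloor z \rfloor = -2\).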
|
1. Homework Statement
Suppose that the number of asbestos particles in a sample of 1 square centimeter of dust is a Poisson random variable with a mean of 1000. What is the probability that 10 square centimeters of dust contains more than 10,000 particles?
2. Homework Equations
[tex] E(aX+b) = aE(X) + b[/tex]
[tex]Var(aX) = a^2 Var(X) [/tex]
3. The Attempt at a Solution
Let X = number of asbestos particles in 1[itex]\mbox{cm}^2[/itex]. Define Y = number of asbestos particles in 10[itex]\mbox{ cm}^2[/itex]. So we have [itex]Y=10X[/itex]. Using the formula given above, we get [itex] E(Y)=10E(X)[/itex] and [itex]Var(Y) = 100 Var(X)[/itex]. But since X is a Poisson random variable, we have [itex]E(X) = \lambda = Var(X) = 1000[/itex]. So we get for Y variable, [itex]E(Y) = 10000[/itex] and [itex]Var(Y) = 100000[/itex]. Then the probability we need to find is [itex]P(Y > 10000)[/itex]. Now we use the Normal approximation here. [itex]E(Y) = 10000[/itex] and [itex]Var(Y) = 100000[/itex]. So [itex]P(Y \geq 10001.5)[/itex]. So we get the following expression
[tex]P\left(z \geq \frac{10001.5 - 10000}{\sqrt(100000)}\right)[/tex]
So now I use [itex]pnorm[/itex] function in [itex]R[/itex] , to calculate this probability. It is
[tex]\mbox{pnorm(10001.5, 10000, sqrt(100000), lower.tail=F)}[/tex]
which gives us [itex]0.4981077[/itex]. Is this right ? The solution manual for Montgomery and Runger says that [itex] E(Y) = \lambda = 10000 = Var(Y)[/itex]. Is that a mistake ?
thanks
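(Not part of the original post.) A small numerical comparison of the two normal approximations being debated, plus the exact Poisson tail for reference; this is a sketch assuming SciPy is available, using a continuity correction at 10000.5, and the variable names are mine:

from scipy import stats

mu = 10_000                      # mean particle count in 10 cm^2
candidates = {
    "Var = 10 000 (sum of 10 independent Poisson(1000) counts)": 10_000,
    "Var = 100 000 (treating Y as 10*X, so Var = 100*Var(X))": 100_000,
}

for label, var in candidates.items():
    z = (10_000.5 - mu) / var ** 0.5      # continuity-corrected z for P(Y > 10000)
    print(label, stats.norm.sf(z))

print("exact Poisson tail:", stats.poisson.sf(10_000, mu))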
|
What precisely is a scalar autonomous differential equation? I'm confused about what this precisely means, more so because we did not discuss this in any lectures nor is it, as far as I can tell, defined in the notes. I've tried looking it up as well, but I couldn't find anything, only specific examples. It occurs in the following question:
Show that every scalar autonomous differential equation with $f\in\mathcal{C}^1$ is of gradient-type. Furthermore, show that the matrix of a linear gradient equation is symmetric.
Here gradient-type means that for a (system of) autonomous differential equation(s) $\dot{y}=f(y)$, there exists $V:\mathbb{R}^n\supset\!\to\mathbb{R}$ such that $f=\operatorname{grad}V$ with $V\in\mathcal{C}^2$.
Furthermore, what is a (linear) gradient equation? Do they simply mean a (linear) differential equation of gradient-type? The notes are not clear about it at all.
Any help would be appreciated. Thank you!
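(Not part of the original question; a sketch of the scalar case, assuming "scalar" means a single equation $\dot{y}=f(y)$ with $y\in\mathbb{R}$, i.e. $n=1$.) In one dimension a potential can be obtained simply by integrating $f$:
$$V(y) := \int_{y_0}^{y} f(s)\,\mathrm{d}s \quad\text{for some fixed } y_0,$$
so $V\in\mathcal{C}^2$ because $f\in\mathcal{C}^1$, and $\operatorname{grad}V = V' = f$; hence every scalar autonomous equation $\dot{y}=f(y)$ is of gradient type.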
|
I'm very confused about calculating solubility.
Everything is at a temperature of $298\ \mathrm K$.
From Atkins's book, exercise 16.83:
Based on the solubility constant, calculate the solubility of $\ce{Al(OH)3}$ in an acidic solution of $\mathrm{pH}=4.5$, given $K_{\rm s}=1\cdot10^{-33}$.
Using stoichiometry and equilibrium constants I proceed as follows:
$\mathrm{pH}=4.5$ corresponds to $\mathrm{pOH}=9.5$ and
$$\ce{Al(OH)3 (s) \rightarrow Al^3+ (aq) + 3OH- (aq)}$$
$$K_{\rm s}=\ce{[Al^3+][OH- ]^3}$$
Let's call $\ce{[Al^3+]} = x$.
From stoichiometry and the assumption that $\ce{[OH- ]} >> 10^{-7}$ we know that $\ce{[OH- ]} \approx 3x+10^{-9.5}$.
We get an equation like
$$x(3x+10^{-9.5})^3=K_{\rm s}$$
which I solve as $x=2.39\cdot10^{-9}$, but Atkins's answer is $3.0\cdot10^{-5}$.
Atkins's solution looks like:
$$K_{\rm s}=\ce{[Al^3+][OH- ]^3}$$
$${K_{\rm s}\over[\ce{OH-}]^3}=[\ce{Al^3+}] = 3.0\cdot10^{-5}$$
It looks like Atkins assumes that $[\ce{OH-}]=10^{-9.5}$.
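(Not from the book; a quick numerical check of both routes, assuming SciPy. The variable names are mine.)

from scipy.optimize import brentq

Ks = 1e-33
oh = 10 ** -9.5                    # [OH-] corresponding to pOH = 9.5

# the equation x*(3x + [OH-])^3 = Ks, with x = [Al3+]
f = lambda x: x * (3 * x + oh) ** 3 - Ks
x_root = brentq(f, 1e-15, 1e-3)    # ~2.4e-9, matching the value found above

x_fixed_pH = Ks / oh ** 3          # Atkins's route: [OH-] held fixed at 10^-9.5
print(x_root, x_fixed_pH)          # x_fixed_pH ~ 3.2e-5 (Atkins quotes 3.0e-5)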
I have two questions:
Why does Atkins assume that $\ce{Al(OH)3}$ doesn't affect $\mathrm{pH}$?
Which one is correct?
|
Casel, Katrin; Fernau, Henning; Khosravian Ghadikolaei, Mehdi; Monnot, Jerome; Sikora, FlorianExtension of vertex cover and independent set in some classes of graphs and generalizations. International Conference on Algorithms and Complexity (CIAC) 2019
We study extension variants of the classical problems Vertex Cover and Independent Set. Given a graph \(G = (V, E)\) and a vertex set \(U \subseteq V\), it is asked if there exists a minimal vertex cover (resp. maximal independent set) \(S\) with \(U \subseteq S\) (resp. \(U \supseteq S\)). Possibly contradicting intuition, these problems tend to be NP-complete, even in graph classes where the classical problem can be solved efficiently. Yet, we exhibit some graph classes where the extension variant remains polynomial-time solvable. We also study the parameterized complexity of these problems, with parameter \(|U|\), as well as the optimality of simple exact algorithms under ETH. All these complexity considerations are also carried out in very restricted scenarios, be it degree or topological restrictions (bipartite, planar or chordal graphs). This also motivates presenting some explicit branching algorithms for degree-bounded instances. We further discuss the price of extension, measuring the distance of \(U\) to the closest set that can be extended, which results in natural optimization problems related to extension problems for which we discuss polynomial-time approximability.
Casel, Katrin; Fernau, Henning; Khosravian Ghadikolaei, Mehdi; Monnot, Jerome; Sikora, FlorianExtension of some edge graph problems: standard and parameterized complexity. Fundamentals of Computation Theory (FCT) 2019
We consider extension variants of some edge optimization problems in graphs containing the classical Edge Cover, Matching, and Edge Dominating Set problems. Given a graph \(G=(V,E)\) and an edge set \(U \subseteq E\), it is asked whether there exists an inclusion-wise minimal (resp., maximal) feasible solution \(E'\) which satisfies a given property, for instance, being an edge dominating set (resp., a matching) and containing the forced edge set \(U\) (resp., avoiding any edges from the forbidden edge set \(E \setminus U\)). We present hardness results for these problems, for restricted instances such as bipartite or planar graphs. We counter-balance these negative results with parameterized complexity results. We also consider the price of extension, a natural optimization problem variant of extension problems, leading to some approximation results.
Casel, Katrin; Day, Joel D.; Fleischmann, Pamela; Kociumaka, Tomasz; Manea, Florin; Schmid, Markus L.Graph and String Parameters: Connections Between Pathwidth, Cutwidth and the Locality Number. International Colloquium on Automata, Languages and Programming (ICALP) 2019
We investigate the locality number, a recently introduced structural parameter for strings (with applications in pattern matching with variables), and its connection to two important graph-parameters, cutwidth and pathwidth. These connections allow us to show that computing the locality number is NP-hard but fixed parameter tractable (when the locality number or the alphabet size is treated as a parameter), and can be approximated with ratio \(O(\sqrt{\log \mathit{opt} \cdot \log n})\). As a by-product, we also relate cutwidth via the locality number to pathwidth, which is of independent interest, since it improves the currently best known approximation algorithm for cutwidth. In addition to these main results, we also consider the possibility of greedy-based approximation algorithms for the locality number.
Classical clustering problems search for a partition of objects into a fixed number of clusters. In many scenarios, however, the number of clusters is not known or necessarily fixed. Further, clusters are sometimes only considered to be of significance if they have a certain size. We discuss clustering into sets of minimum cardinality \(k\) without a fixed number of sets and present a general model for these types of problems. This general framework allows the comparison of different measures to assess the quality of a clustering. We specifically consider nine quality-measures and classify the complexity of the resulting problems with respect to \(k\). Further, we derive some polynomial-time solvable cases for \(k=2\) with connections to matching-type problems which, among other graph problems, then are used to compute approximations for larger values of \(k\).
This paper studies Upper Domination, i.e., the problem of computing the maximum cardinality of a minimal dominating set in a graph with respect to classical and parameterised complexity as well as approximability.
Casel, KatrinResolving Conflicts for Lower-Bounded Clustering. International Symposium on Parameterized and Exact Computation (IPEC) 2018
This paper considers the effect of non-metric distances for lower-bounded clustering, i.e., the problem of computing a partition for a given set of objects with pairwise distance, such that each set has a certain minimum cardinality (as required for anonymisation or balanced facility location problems). We discuss lower-bounded clustering with the objective to minimise the maximum radius or diameter of the clusters. For these problems there exists a 2-approximation but only if the pairwise distance on the objects satisfies the triangle inequality, without this property no polynomial-time constant factor approximation is possible. We try to resolve or at least soften this effect of non-metric distances by devising particular strategies to deal with violations of the triangle inequality ("conflicts"). With parameterised algorithmics, we find that if the number of such conflicts is not too large, constant factor approximations can still be computed efficiently. In particular, we introduce parameterised approximations with respect to not just the number of conflicts but also for the vertex cover number of the "conflict graph" (graph induced by conflicts). Interestingly, we salvage the approximation ratio of 2 for diameter while for radius it is only possible to show a ratio of 3. For the parameter vertex cover number of the conflict graph this worsening in ratio is shown to be unavoidable, unless FPT=W[2]. We further discuss improvements for diameter by choosing the (induced) \(P_3\)-cover number of the conflict graph as parameter and complement these by showing that, unless FPT=W[1], there exists no constant factor parameterised approximation with respect to the parameter split vertex deletion set.
Casel, Katrin; Fernau, Henning; Grigoriev, Alexander; Schmid, Markus L.; Whitesides, SueCombinatorial Properties and Recognition of Unit Square Visibility Graphs. International Symposium on Mathematical Foundations of Computer Science (MFCS) 2017: 30:1-30:15
Unit square (grid) visibility graphs (USV and USGV, resp.) are described by axis-parallel visibility between unit squares placed (on integer grid coordinates) in the plane. We investigate combinatorial properties of these graph classes and the hardness of variants of the recognition problem, i.e., the problem of representing USGV with fixed visibilities within small area and, for USV, the general recognition problem.
A vertex \(v \in V(G)\) is said to distinguish two vertices \(x, y \in V(G)\) of a graph \(G\) if the distance from \(v\) to \(x\) is different from the distance from \(v\) to \(y\). A set \(W \subseteq V(G)\) is a total resolving set for a graph \(G\) if for every pair of vertices \(x, y \in V(G)\), there exists some vertex \(w \in W - \{x, y\}\) which distinguishes \(x\) and \(y\), while \(W\) is a weak total resolving set if for every \(x \in V(G) - W\) and \(y \in W\), there exists some \(w \in W - \{y\}\) which distinguishes \(x\) and \(y\). A weak total resolving set of minimum cardinality is called a weak total metric basis of \(G\) and its cardinality the weak total metric dimension of \(G\). Our main contributions are the following ones: (a) Graphs with small and large weak total metric bases are characterised. (b) We explore the (tight) relation to independent 2-domination. (c) We introduce a new graph parameter, called weak total adjacency dimension and present results that are analogous to those presented for weak total dimension. (d) For trees, we derive a characterisation of the weak total (adjacency) metric dimension. Also, exact figures for our parameters are presented for (generalised) fans and wheels. (e) We show that for Cartesian product graphs, the weak total (adjacency) metric dimension is usually pretty small. (f) The weak total (adjacency) dimension is studied for lexicographic products of graphs.
Bazgan, Cristina; Brankovic, Ljiljana; Casel, Katrin; Fernau, Henning; Jansen, Klaus; Klein, Kim-Manuel; Lampis, Michael; Liedloff, Mathieu; Monnot, Jérôme; Paschos, Vangelis Th.Algorithmic Aspects of Upper Domination: A Parameterised Perspective. Algorithmic Aspects in Information and Management (AAIM) 2016: 113-124
This paper studies Upper Domination, i.e., the problem of computing the maximum cardinality of a minimal dominating set in a graph, with a focus on parameterised complexity. Our main results include W[1]-hardness for Upper Domination, contrasting FPT membership for the parameterised dual Co-Upper Domination. The study of structural properties also yields some insight into Upper Total Domination. We further consider graphs of bounded degree and derive upper and lower bounds for kernelisation.
Bazgan, Cristina; Brankovic, Ljiljana; Casel, Katrin; Fernau, HenningOn the Complexity Landscape of the Domination Chain. Algorithms and Discrete Applied Mathematics (CALDAM) 2016: 61-72
In this paper, we survey and supplement the complexity landscape of the domination chain parameters as a whole, including classifications according to approximability and parameterised complexity. Moreover, we provide clear pointers to yet open questions. As this posed the majority of hitherto unsettled problems, we focus on Upper Irredundance and Lower Irredundance that correspond to finding the largest irredundant set and resp. the smallest maximal irredundant set. The problems are proved NP-hard even for planar cubic graphs. While Lower Irredundance is proved not \(c \log(n)\)-approximable in polynomial time unless \(NP \subseteq DTIME(n^{\log \log n})\), no such result is known for Upper Irredundance. Their complementary versions are constant-factor approximable in polynomial time. All these four versions are APX-hard even on cubic graphs.
Casel, Katrin; Fernau, Henning; Gaspers, Serge; Gras, Benjamin; Schmid, Markus L.On the Complexity of Grammar-Based Compression over Fixed Alphabets. International Colloquium on Automata, Languages, and Programming (ICALP) 2016: 122:1-122:14
It is shown that the shortest-grammar problem remains NP-complete if the alphabet is fixed and has a size of at least 24 (which settles an open question). On the other hand, this problem can be solved in polynomial-time, if the number of nonterminals is bounded, which is shown by encoding the problem as a problem on graphs with interval structure. Furthermore, we present an O(3n) exact exponential-time algorithm, based on dynamic programming. Similar results are also given for 1-level grammars, i.e., grammars for which only the start rule contains nonterminals on the right side (thus, investigating the impact of the "hierarchical depth" on the complexity of the shortest-grammar problem).
Abu-Khzam, Faisal N.; Bazgan, Cristina; Casel, Katrin; Fernau, HenningBuilding Clusters with Lower-Bounded Sizes. International Symposium on Algorithms and Computation (ISAAC) 2016: 4:1-4:13
Classical clustering problems search for a partition of objects into a fixed number of clusters. In many scenarios however the number of clusters is not known or necessarily fixed. Further, clusters are sometimes only considered to be of significance if they have a certain size. We discuss clustering into sets of minimum cardinality \(k\) without a fixed number of sets and present a general model for these types of problems. This general framework allows the comparison of different measures to assess the quality of a clustering. We specifically consider nine quality-measures and classify the complexity of the resulting problems with respect to \(k\). Further, we derive some polynomial-time solvable cases for \(k = 2\) with connections to matching-type problems which, among other graph problems, then are used to compute approximations for larger values of \(k\).
We consider Upper Domination, the problem of finding a maximum cardinality minimal dominating set in a graph. We show that this problem does not admit an \(n^{1-\epsilon }\) approximation for any \(\epsilon >0\), making it significantly harder than Dominating Set, while it remains hard even on severely restricted special cases, such as cubic graphs (APX-hard), and planar subcubic graphs (NP-hard). We complement our negative results by showing that the problem admits an \(O(\Delta )\) approximation on graphs of maximum degree \(\Delta\) , as well as an EPTAS on planar graphs. Along the way, we also derive essentially tight \(n^{1-\frac{1}{d}}\) upper and lower bounds on the approximability of the related problem Maximum Minimal Hitting Set on d-uniform hypergraphs, generalising known results for Maximum Minimal Vertex Cover.
This paper discusses a problem arising in the field of privacy protection in statistical databases: Given an \(n \times m\) \(\{0,1\}\)-matrix \(M\), is there a set of mergings which transforms \(M\) into a zero matrix and only affects a bounded number of rows/columns? “Merging” here refers to combining adjacent lines with a component-wise logical AND. This kind of transformation models a generalization of OLAP cubes, also called global recoding. Counting the number of affected lines presents a new measure of information loss for this method. Parameterized by the number of affected lines \(k\), we introduce reduction rules and an \(O^*(2.618^k)\)-algorithm for the new abstract combinatorial problem LMAL.
Algorithm Engineering
Our research focus is on theoretical computer science and algorithm engineering. We are equally interested in the mathematical foundations of algorithms and developing efficient algorithms in practice. A special focus is on random structures and methods.
|
Let $X_n$ be a sequence of independent random variables (but not necessarily identically distributed) taking values in $[-1,1]$ that have the following property:
1) The average $A_n := \frac{(X_1+ \ldots + X_n)}{n} $ converges in probability to $0$, i.e. for every $\epsilon >0 $, the probability that $|A_n| > \epsilon $ converges to $0$.
2) For some $p$, the distribution of $ n^p A_n$ converges to something non degenerate.
3) The distribution of $X_n$ converges to something non degenerate (ideally the uniform distribution, but not necessarily).
$\textbf{Question 1:}$ Is it possible for property 1) and 2) to hold and $p$ to be something bigger than $\frac{1}{2}$?
$\textbf{Question 2:}$ Is it possible for property 1), 2) and 3) to hold and $p$ to be something bigger than $\frac{1}{2}$?
$\textbf{Remark:}$ Note that if $X_n$ were identically distributed, with expectation value $0$, then 1), 2) and 3) hold with $p =\frac{1}{2}$. Property 1) is the Weak Law of Large Numbers and 2) is the Central Limit Theorem. In fact something stronger than 1) holds, namely almost sure convergence.
I am wondering if a better order of convergence is possible if we do not demand that the random variables are identically distributed (and we do not require almost sure convergence). Note that I am also not requiring that the expectation value of $X_n$ is actually $0$ for each $n$. I am simply requiring that the average $A_n$ converges in probability to $0$.
$\textbf{Remark $1$:}$ The answer to Question $1$ seems to be yes; take $X_n$ such that the series $\sum_n X_n$ is almost surely summable.
$\textbf{Remark $2$:}$ The answer to Question $2$ also seems to be yes. The example suggested is as follows; let $Y_n$ be i.i.d. random variables. Define $$ X_n := Y_n+ n^{-\alpha}$$ for some $\alpha \in (0, \frac{1}{2})$. Then it seems that the distribution of $n^{1-\alpha} A_n$ will converge to a Dirac Delta not centered around the origin. I am wondering if someone can explain the reason behind this (or point out some references).
|
Answer
$$\frac{\cos\theta+1}{\tan^2\theta}=\frac{\cos\theta}{\sec\theta-1}$$ The equation has been verified to be an identity.
Work Step by Step
$$\frac{\cos\theta+1}{\tan^2\theta}=\frac{\cos\theta}{\sec\theta-1}$$ We would deal with the right side first, taking $\sec\theta=\frac{1}{\cos\theta}$ $$\frac{\cos\theta}{\sec\theta-1}$$ $$=\frac{\cos\theta}{\frac{1}{\cos\theta}-1}$$ $$=\frac{\cos\theta}{\frac{1-\cos\theta}{\cos\theta}}$$ $$=\frac{\cos^2\theta}{1-\cos\theta}$$ Now we deal with the left side, but we already have something in mind. Since the right side can be transformed all to $\cos\theta$, we would try to do the same with the left side. For $\tan\theta$, we have $\tan\theta=\frac{\sin\theta}{\cos\theta}$ $$\frac{\cos\theta+1}{\tan^2\theta}$$ $$=\frac{\cos\theta+1}{\frac{\sin^2\theta}{\cos^2\theta}}$$ $$=\frac{\cos^2\theta(\cos\theta+1)}{\sin^2\theta}$$ With $\sin^2\theta$, we would apply $\sin^2\theta=1-\cos^2\theta$ $$=\frac{\cos^2\theta(\cos\theta+1)}{1-\cos^2\theta}$$ Now we simplify $$=\frac{\cos^2\theta(\cos\theta+1)}{(1-\cos\theta)(1+\cos\theta)}$$ $$=\frac{\cos^2\theta}{1-\cos\theta}$$ Both sides are then equal. $$\frac{\cos\theta+1}{\tan^2\theta}=\frac{\cos\theta}{\sec\theta-1}$$ The equation has been verified to be an identity.
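(Not part of the original solution.) As a sanity check, one can ask a computer algebra system to confirm the identity; a minimal sketch assuming SymPy is available:

from sympy import symbols, cos, tan, sec, simplify, pi

theta = symbols('theta')
lhs = (cos(theta) + 1) / tan(theta)**2
rhs = cos(theta) / (sec(theta) - 1)

print(simplify(lhs - rhs))                       # expected: 0
print((lhs - rhs).subs(theta, pi/5).evalf())     # numerical spot check, expected ~0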
|
We have not yet verified the power rule, $\ds{d\over dx}x^a=ax^{a-1}$, for non-integer $a$. There is a close relationship between $\ds x^2$ and $\ds x^{1/2}$—these functions are inverses of each other, each "undoing'' what the other has done. Not surprisingly, this means there is a relationship between their derivatives.
Let's rewrite $\ds y=x^{1/2}$ as $\ds y^2 = x$. We say that this equation defines the function $\ds y=x^{1/2}$ implicitly because while it is not an explicit expression $y=\ldots$, it is true that if $\ds x=y^2$ then $y$ is in fact the square root function. Now, for the time being, pretend that all we know of $y$ is that $\ds x=y^2$; what can we say about derivatives? We can take the derivative of both sides of the equation: $${d\over dx}x={d\over dx}y^2.$$ Then using the chain rule on the right hand side: $$1 = 2y\left({d\over dx}y\right) = 2yy'.$$ Then we can solve for $y'$: $$y'={1\over 2y} = {1\over 2x^{1/2}}={1\over2}x^{-1/2}.$$ This is the power rule for $\ds x^{1/2}$.
There is one little difficulty here. To use the chain rule to compute $\ds d/dx(y^2)=2yy'$ we need to know that the function $y$ has a derivative. All we have shown is that if it has a derivative then that derivative must be $\ds x^{-1/2}/2$. When using this method we will always have to assume that the desired derivative exists, but fortunately this is a safe assumption for most such problems.
Here's another interesting feature of this calculation. The equation $\ds x=y^2$ defines more than one function implicitly: $\ds y=-\sqrt{x}$ also makes the equation true. Following exactly the calculation above we arrive at $$y'={1\over 2y} = {1\over 2(-x^{1/2})}=-{1\over2}x^{-1/2}.$$ So the single calculation leading to $y'=1/(2y)$ simultaneously computes the derivatives of both functions.
We can use the same technique to verify the power rule for any rational power. Suppose $\ds y=x^{m/n}$. Write instead $\ds x^m=y^n$ and take the derivative of each side to get $\ds mx^{m-1}=ny^{n-1}y'$. Then $$y'={mx^{m-1}\over ny^{n-1}}={mx^{m-1}\over n(x^{m/n})^{n-1}}= {m\over n}x^{m-1}x^{-m(n-1)/n}={m\over n}x^{m/n-1}.$$
This example involves an inverse function defined implicitly, but other functions can be defined implicitly, and sometimes a single equation can be used to implicitly define more than one function. Here's a familiar example. The equation $\ds r^2=x^2+y^2$ describes a circle of radius $r$. The circle is not a function $y=f(x)$ because for some values of $x$ there are two corresponding values of $y$. If we want to work with a function, we can break the circle into two pieces, the upper and lower semicircles, each of which is a function. Let's call these $y=U(x)$ and $y=L(x)$; in fact this is a fairly simple example, and it's possible to give explicit expressions for these: $\ds U(x)=\sqrt{r^2-x^2 }$ and $\ds L(x)=-\sqrt{r^2-x^2 }$. But it's somewhat easier, and quite useful, to view both functions as given implicitly by $\ds r^2=x^2+y^2$: both $\ds r^2=x^2+U(x)^2$ and $\ds r^2=x^2+L(x)^2$ are true, and we can think of $\ds r^2=x^2+y^2$ as defining both $U(x)$ and $L(x)$.
Now we can take the derivative of both sides as before, remembering that $y$ is not simply a variable but a function—in this case, $y$ is either $U(x)$ or $L(x)$ but we're not yet specifying which one. When we take the derivative we just have to remember to apply the chain rule where $y$ appears. $$ \eqalign{ {d\over dx}r^2&={d\over dx}(x^2+y^2)\cr 0&=2x+2yy'\cr y'&={-2x\over 2y}=-{x\over y}\cr }$$ Now we have an expression for $y'$, but it contains $y$ as well as $x$. This means that if we want to compute $y'$ for some particular value of $x$ we'll have to know or compute $y$ at that value of $x$ as well. It is at this point that we will need to know whether $y$ is $U(x)$ or $L(x)$. Occasionally it will turn out that we can avoid explicit use of $U(x)$ or $L(x)$ by the nature of the problem.
Example 4.6.1 Find the slope of the circle $\ds 4=x^2+y^2$ at the point $\ds (1,-\sqrt{3})$. Since we know both the $x$ and $y$ coordinates of the point of interest, we do not need to explicitly recognize that this point is on $L(x)$, and we do not need to use $L(x)$ to compute $y$—but we could. Using the calculation of $y'$ from above, $$y'=-{x\over y}=-{1\over -\sqrt{3}}={1\over \sqrt{3}}.$$ It is instructive to compare this approach to others.
We might have recognized at the start that $\ds (1,-\sqrt{3})$ is on the function $\ds y=L(x)=-\sqrt{4-x^2}$. We could then take the derivative of $L(x)$, using the power rule and the chain rule, to get $$L'(x)=-{1\over 2}(4-x^2)^{-1/2}(-2x)={x\over\sqrt{4-x^2}}.$$ Then we could compute $\ds L'(1)=1/\sqrt{3}$ by substituting $x=1$.
Alternately, we could realize that the point is on $L(x)$, but use the fact that $y'=-x/y$. Since the point is on $L(x)$ we can replace $y$ by $L(x)$ to get $$y'=-{x\over L(x)}={x\over \sqrt{4-x^2}},$$ without computing the derivative of $L(x)$ explicitly. Then we substitute $x=1$ and get the same answer as before.
In the case of the circle it is possible to find the functions $U(x)$ and $L(x)$ explicitly, but there are potential advantages to using implicit differentiation anyway. In some cases it is more difficult or impossible to find an explicit formula for $y$ and implicit differentiation is the only way to find the derivative.
Example 4.6.2 Find the derivative of any function defined implicitly by $\ds yx^2+y^2=x$. We treat $y$ as an unspecified function and use the chain rule: $$\eqalign{ {d\over dx}(yx^2+y^2)&={d\over dx}x\cr (y\cdot 2x+y'\cdot x^2)+2yy'&=1\cr y'\cdot x^2+2yy'&=1-y\cdot 2x\cr y'&={1-2xy\over x^2+2y}\cr }$$
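(Not part of the original text.) A computer algebra system can carry out the same implicit differentiation; here is a minimal sketch assuming SymPy, treating $y$ as an unspecified function of $x$ exactly as in the example:

from sympy import symbols, Function, Eq, diff, solve, simplify

x = symbols('x')
y = Function('y')(x)                 # y is an unspecified function of x

eq = y*x**2 + y**2 - x               # the relation y*x^2 + y^2 = x, written as (...) = 0
yprime = solve(Eq(diff(eq, x), 0), diff(y, x))[0]
print(simplify(yprime))              # expected: (1 - 2*x*y(x))/(x**2 + 2*y(x))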
You might think that the step in which we solve for $y'$ could sometimes be difficult—after all, we're using implicit differentiation here because we can't easily solve the equation $\ds yx^2+y^2=x$ for $y$, so maybe after taking the derivative we get something that is hard to solve for $y'$. In fact, this never happens. All occurrences of $y'$ come from applying the chain rule, and whenever the chain rule is used it deposits a single $y'$ multiplied by some other expression. So it will always be possible to group the terms containing $y'$ together and factor out the $y'$, just as in the previous example. If you ever get anything more difficult you have made a mistake and should fix it before trying to continue.
It is sometimes the case that a situation leads naturally to an equation that defines a function implicitly.
Example 4.6.3 Consider all the points $(x,y)$ that have the property that the distance from $(x,y)$ to $\ds (x_1,y_1)$ plus the distance from $(x,y)$ to $\ds (x_2,y_2)$ is $2a$ ($a$ is some constant). These points form an ellipse, which like a circle is not a function but can be viewed as two functions pasted together. Because we know how to write down the distance between two points, we can write down an implicit equation for the ellipse: $$\sqrt{(x-x_1)^2+(y-y_1)^2}+\sqrt{(x-x_2)^2+(y-y_2)^2}=2a.$$ Then we can use implicit differentiation to find the slope of the ellipse at any point, though the computation is rather messy.
Exercises 4.6
In exercises 1–8, find a formula for the derivative $y'$ at the point $(x,y)$:
Ex 4.6.1 $\ds y^2=1+x^2$ (answer)
Ex 4.6.2 $\ds x^2+xy+y^2=7$ (answer)
Ex 4.6.3 $\ds x^3+xy^2=y^3+yx^2$ (answer)
Ex 4.6.4 $\ds 4\cos x \sin y = 1$ (answer)
Ex 4.6.5 $\ds\sqrt{x} + \sqrt{y} = 9$ (answer)
Ex 4.6.6 $\ds \tan(x/y) = x+ y$ (answer)
Ex 4.6.7 $\ds \sin (x+y ) =xy$ (answer)
Ex 4.6.8 $\ds{1\over x} + {1\over y} = 7$ (answer)
Ex 4.6.9 A hyperbola passing through $(8,6)$ consists of all points whose distance from the origin is a constant more than its distance from the point $(5,2)$. Find the slope of the tangent line to the hyperbola at $(8,6)$. (answer)
Ex 4.6.10 Compute $y'$ for the ellipse of example 4.6.3.
Ex 4.6.11 If $\ds y=\log_a x$ then $\ds a^y=x$. Use implicit differentiation to find $\ds y'$.
Ex 4.6.12 The graph of the equation $\ds x^2 - xy + y^2 = 9$ is an ellipse. Find the lines tangent to this curve at the two points where it intersects the $x$-axis. Show that these lines are parallel. (answer)
Ex 4.6.13 Repeat the previous problem for the points at which the ellipse intersects the $y$-axis. (answer)
Ex 4.6.14 Find the points on the ellipse from the previous two problems where the slope is horizontal and where it is vertical. (answer)
Ex 4.6.15 Find an equation for the tangent line to $\ds x^4 = y^2 + x^2$ at $\ds (2, \sqrt{12})$. (This curve is the kampyle of Eudoxus.) (answer)
Ex 4.6.16 Find an equation for the tangent line to $\ds x^{2/3} +y^{2/3} = a^{2/3}$ at a point $\ds (x_1 ,y_1)$ on the curve, with $\ds x_1 \neq 0$ and $\ds y_1 \neq 0$. (This curve is an astroid.) (answer)
Ex 4.6.17 Find an equation for the tangent line to $\ds (x^2 +y^2 )^2 =x^2-y^2$ at a point $\ds (x_1 , y_1)$ on the curve, when $\ds y_1 \neq 0$. (This curve is a lemniscate.) (answer)
Remark 4.6.4
Definition. Two curves are orthogonal if at each point of intersection, the angle between their tangent lines is $\pi/2$. Two families of curves, $\cal{A}$ and $\cal{B}$, are orthogonal trajectories of each other if given any curve $C$ in $\cal{A}$ and any curve $D$ in $\cal{B}$ the curves $C$ and $D$ are orthogonal. For example, the family of horizontal lines in the plane is orthogonal to the family of vertical lines in the plane.
Ex 4.6.18 Show that $\ds x^2 -y^2 =5$ is orthogonal to $\ds 4x^2 +9y^2 =72$. (Hint: You need to find the intersection points of the two curves and then show that the product of the derivatives at each intersection point is $-1$.)
Ex 4.6.19 Show that $\ds x^2 +y^2 = r^2$ is orthogonal to $y=mx$. Conclude that the family of circles centered at the origin is an orthogonal trajectory of the family of lines that pass through the origin.
Note that there is a technical issue when $m=0$. The circles fail to be differentiable when they cross the $x$-axis. However, the circles are orthogonal to the $x$-axis. Explain why. Likewise, the vertical line through the origin requires a separate argument.
Ex 4.6.20 For $k\not= 0$ and $c \neq 0$ show that $\ds y^2 -x^2 =k$ is orthogonal to $yx =c$. In the case where $k$ and $c$ are both zero, the curves intersect at the origin. Are the curves $\ds y^2 -x^2 =0$ and $yx=0$ orthogonal to each other?
Ex 4.6.21 Suppose that $m\neq 0$. Show that the family of curves $\ds \{y=mx+b \mid b\in \R \}$ is orthogonal to the family of curves $\ds \{y=-(x/m)+c \mid c \in \R\}$.
|
AP Statistics Curriculum 2007 Infer 2Means Dep From Socr
General Advance-Placement (AP) Statistics Curriculum - Inferences about Two Means: Dependent Samples
In the previous chapter we saw how to do significance testing in the case of a single random sample. Now, we show how to do hypothesis testing comparing two samples and we begin with the simple case of paired samples.
Inferences About Two Means: Dependent Samples
In all study designs, it is always critical to clearly identify whether the samples we compare come from dependent or independent populations. There is a general formulation for significance testing when the samples are independent. The fact that there may be uncountably many different types of dependencies prevents us from having a similar analysis protocol for all dependent-sample cases. However, in one specific case - paired samples - we have a theory to generalize the significance testing analysis protocol. Two populations (or samples) are dependent because of pairing (or paired) if they are linked in some way, usually by a direct relationship. For example, measure the weight of subjects before and after a six month diet.
Paired Designs
These are the most common paired designs, in which the idea of pairing is that members of a pair are similar to each other with respect to extraneous variables:
Randomized block experiments with two units per block
Observational studies with individually matched controls (e.g., clinical trials of drug efficacy - patient pre vs. post treatment results are compared)
Repeated (time or treatment affected) measurements on the same individual
Blocking by time – formed implicitly when replicate measurements are made at different times.
Recall that for a random sample \(\{X_1, X_2, \ldots, X_n\}\) of the process, the population mean may be estimated by the sample average, \(\overline{X}_n={1\over n}\sum_{i=1}^n X_i\). The standard error of \(\overline{X}_n\) is given by \({1\over \sqrt{n}}\sqrt{{1\over n-1}\sum_{i=1}^n (X_i-\overline{X}_n)^2}\).
Analysis Protocol for Paired Designs
To study paired data, we would like to examine the differences between each pair. Suppose \(\{X_1, X_2, \ldots, X_n\}\) and \(\{Y_1, Y_2, \ldots, Y_n\}\) represent the two paired samples. Then we want to study the difference sample \(\{d_1=X_1-Y_1, d_2=X_2-Y_2, \ldots, d_n=X_n-Y_n\}\). Notice the effect of the pairing of each \(X_i\) and \(Y_i\).
Now we can clearly see that the group effect (group differences) is directly represented in the \(\{d_i\}\) sequence. The one-sample T test is the proper strategy to analyze the difference sample \(\{d_i\}\), if the \(X_i\) and \(Y_i\) samples come from Normal distributions.
Since we are focusing on the differences, we can use the same reasoning as we did in the single-sample case to calculate the standard error (i.e., the standard deviation of the sampling distribution of \(\overline{d}\)) of \(\overline{d}\).
Thus, the standard error of \(\overline{d}\) is given by \(SE(\overline{d})={s_d \over \sqrt{n}}\), where \(s_d=\sqrt{{1\over n-1}\sum_{i=1}^n (d_i-\overline{d})^2}\).
Confidence Interval of the Difference of Means
The interval estimation of the difference of two means (or confidence interval) is constructed as follows. Choose a confidence level \((1-\alpha)100\%\), where \(\alpha\) is small (e.g., 0.1, 0.05, 0.025, 0.01, 0.001, etc.). Then a \((1-\alpha)100\%\) confidence interval for \(\mu_1 - \mu_2\) is defined in terms of the T-distribution as \(\overline{d} \pm t_{(df=n-1,\ \alpha/2)} \times SE(\overline{d})\).
Both the confidence intervals and the hypothesis testing methods in the paired design require Normality of both samples. If these parametric assumptions are invalid, we must use a non-parametric (distribution-free) test, even if the latter is less powerful.
Hypothesis Testing about the Difference of Means
Null Hypothesis: \(H_o: \mu_1-\mu_2=\mu_o\) (e.g., \(\mu_1-\mu_2=0\))
Alternative Research Hypotheses:
One-sided (uni-directional): \(H_1: \mu_1-\mu_2>\mu_o\), or \(H_1: \mu_1-\mu_2<\mu_o\)
Double-sided: \(H_1: \mu_1-\mu_2 \neq \mu_o\)
Test Statistics: If the two populations that the \(\{X_i\}\) and \(\{Y_i\}\) samples were drawn from are approximately Normal, then the test statistic is \(T_o={\overline{d}-\mu_o \over SE(\overline{d})} \sim T(df=n-1)\).
Effects of Ignoring the Pairing
The SE estimate will be smaller for correctly paired data. If we look within each sample at the data, we notice variation from one subject to the next. This information gets incorporated into the SE for the independent t-test via \(s_1\) and \(s_2\). The original reason we paired was to try to control for some of this inter-subject variation, which is not of interest in the paired design. Notice that the inter-subject variation has no influence on the SE for the paired test, because only the differences were used in the calculation. The price of pairing is smaller degrees of freedom of the T-test. However, this can be compensated with a smaller SE if we had paired correctly. Pairing is used to reduce bias and increase precision in our inference. By matching/blocking we can control variation due to extraneous variables.
For example, if two groups are matched on age, then a comparison between the groups is free of any bias due to a difference in age distribution.
Pairing is a strategy of design, not an analysis tool. Pairing needs to be carried out before the data are observed. It is not correct to use the observations to make pairs after the data has been collected.
Example
Suppose we measure the thickness of plaque (mm) in the carotid artery of 10 randomly selected patients with mild atherosclerotic disease. Two measurements are taken: thickness before treatment with Vitamin E (baseline) and after two years of taking Vitamin E daily. Formulate a testable hypothesis and make an inference about the effect of the treatment at α = 0.05.
What makes this paired data rather than independent data? Why would we want to use pairing in this example?
Data in row format
Before: 0.66, 0.72, 0.85, 0.62, 0.59, 0.63, 0.64, 0.7, 0.73, 0.68
After: 0.6, 0.65, 0.79, 0.63, 0.54, 0.55, 0.62, 0.67, 0.68, 0.64
Data in column format
Subject  Before  After  Difference
1        0.66    0.60    0.06
2        0.72    0.65    0.07
3        0.85    0.79    0.06
4        0.62    0.63   -0.01
5        0.59    0.54    0.05
6        0.63    0.55    0.08
7        0.64    0.62    0.02
8        0.70    0.67    0.03
9        0.73    0.68    0.05
10       0.68    0.64    0.04
Mean     0.682   0.637   0.045
SD       0.0742  0.0709  0.0264
We begin first by exploring the data visually using various SOCR EDA Tools.
Line Chart of the two samples; Box-And-Whisker Plot of the two samples; Index plot of the differences.
Inference
Null Hypothesis: \(H_o: \mu_{before}-\mu_{after}=0\)
(One-sided) Alternative Research Hypothesis: \(H_1: \mu_{before}-\mu_{after}>0\)
Test statistics: We can use the sample summary statistics to compute the T-statistic: \(T_o={\overline{d}-\mu_o \over SE(\overline{d})}=5.4022\).
\(p\mbox{-value}=P(T_{(df=9)}>T_o=5.4022)=0.000216\) for this (one-sided) test.
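(Not part of the original page.) The same computation can be reproduced in a few lines of Python, assuming NumPy and SciPy are available; this is a sketch that recovers the T-statistic and one-sided p-value quoted above:

import numpy as np
from scipy import stats

before = np.array([0.66, 0.72, 0.85, 0.62, 0.59, 0.63, 0.64, 0.70, 0.73, 0.68])
after  = np.array([0.60, 0.65, 0.79, 0.63, 0.54, 0.55, 0.62, 0.67, 0.68, 0.64])

d  = before - after                        # paired differences
se = d.std(ddof=1) / np.sqrt(len(d))       # standard error of the mean difference
t0 = d.mean() / se                         # ~5.40
p  = stats.t.sf(t0, df=len(d) - 1)         # one-sided p-value, ~0.0002

print(t0, p)
# Equivalently, in recent SciPy: stats.ttest_rel(before, after, alternative='greater')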
Therefore, we can reject the null hypothesis at α = 0.05! The white area at the tail of the T(df=9) distribution depicts graphically the probability of interest, which represents the strength of the evidence (in the data) against the Null hypothesis. In this case, this area is 0.000216, which is much smaller than the initially set Type I error α = 0.05, and we reject the null hypothesis. You can also use the SOCR Analyses (One-Sample T-Test) to carry out these calculations. The SOCR One Sample T-test Activity provides additional hands-on demonstrations of the one-sample hypothesis testing for the difference in paired experiments.
The 95% = (1 − 0.05)100% (α = 0.05) confidence interval for the difference (before − after) is \(CI(\mu_{before}-\mu_{after}): \overline{d} \pm t_{(df=9,\ \alpha/2)} \times SE(\overline{d})\).
Conclusion
These data show that the true mean thickness of plaque after two years of treatment with Vitamin E is statistically significantly different from that before the treatment (p = 0.000216). In other words, Vitamin E appears to be effective in changing carotid artery plaque after treatment. The practical effect does appear to be < 60 microns; however, this may be clinically sufficient and justify patient treatment.
Paired Test Validity
Both the confidence intervals and the hypothesis testing methods in the paired design require Normality of both samples. If these parametric assumptions are invalid, we must use a non-parametric (distribution-free) test, even if the latter is less powerful.
The plots below indicate that Normal assumptions are not unreasonable for these data, and hence we may be justified in using the one-sample T-test in this case.
Quantile-Quantile Data-Data plot of the two datasets; QQ-Normal plot of the before data.
Paired vs. Independent Testing
Suppose we accidentally analyzed the groups independently (using the independent T-test) rather than using this paired test (this would be an incorrect way of analyzing this
before-after data). How would this change our results and findings? \(T_o = {\overline{x}-\overline{y} - \mu_o \over SE(\overline{x}-\overline{y})} \sim T(df=17)\) \(T_o = {\overline{x}-\overline{y} - \mu_o \over SE(\overline{x}-\overline{y})} = {0.682 -0.637- 0 \over \sqrt{SE^2(\overline{x})+SE^2(\overline{y})}}= \) \({0.682 -0.637\over \sqrt{{0.0742^2\over 10}+ {0.0709^2\over 10}}}={0.682 -0.637\over 0.0325}=1.38\) \(p-value=P(T>1.38)= 0.100449\) and we would have failed to reject the null-hypothesis (incorrect!)
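For readers who want to reproduce both analyses outside of the SOCR tools, here is a minimal Python sketch (an illustration only; it assumes a recent SciPy that supports the `alternative` keyword):

```python
from scipy import stats

before = [0.66, 0.72, 0.85, 0.62, 0.59, 0.63, 0.64, 0.70, 0.73, 0.68]
after  = [0.60, 0.65, 0.79, 0.63, 0.54, 0.55, 0.62, 0.67, 0.68, 0.64]

# Correct, paired analysis: T is about 5.40 on df = 9
t_paired, p_paired = stats.ttest_rel(before, after, alternative='greater')

# Incorrect, independent analysis of the same data: T is about 1.38
t_indep, p_indep = stats.ttest_ind(before, after, alternative='greater')

print(t_paired, p_paired)   # ~5.40, p well below 0.05
print(t_indep, p_indep)     # ~1.38, p well above 0.05
```

The paired test reaches the same conclusion as the hand calculation above, while the (incorrect) independent analysis of the same data does not.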
Similarly, had we incorrectly used the independent design and constructed a corresponding Confidence interval, we would obtain an incorrect inference:
\(CI: {\overline{x}-\overline{y} \pm t_{(df=17, \alpha/2)} \times SE(\overline{x}-\overline{y})} = \) \(0.045 \pm 1.740\times 0.0325 = [-0.0116 ; 0.1016].\) SOCR Home page: http://www.socr.ucla.edu
|
Consider a standard setting for the development of the theory of distributions. Let $D(\Omega)$ be the space of test functions and $D'(\Omega)$ be the space of distributions ("generalized functions").
Can $\langle f,g\rangle$ be given a sensible definition for all $f,g\in D'(\Omega)$? Of course, $\langle f,\phi\rangle$ and $\langle \phi, f\rangle$ are well-defined and in $\mathbb{R}$ or $\mathbb{C}$ for any $\phi\in D(\Omega)$ and $f\in D'(\Omega)$. But more generally?
If a sensible definition can be made, it seems $\langle f,g\rangle$ may not always be a complex number. For example, for any $x\in\mathbb{R}$, there are test functions $\phi_n$ with $\phi_n\to\delta$ such that $\langle \delta, \phi_n\rangle=x$ for all $n$. Thus, $\langle \delta,\delta\rangle$ couldn't be a simple real number. But maybe a distribution?
My main motivation in asking this is to make sense of the following formula: $$ \langle \frac{1}{2\pi} e^{ikx}, e^{-ikx}\rangle = \delta. $$ This formula seems at first glance to be nonsense, but at the same time, we know its intended meaning is the Fourier integral formula: $$ f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x') e^{-ikx'} \ dx' \ e^{ik x} \ dk $$ for a wide class of functions $f:\mathbb{R}\to\mathbb{R}$.
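For a purely numerical illustration of that intended meaning, here is a small sketch (added here for concreteness; the Gaussian test function and the truncation limits are arbitrary choices): truncating the double integral and evaluating it on a grid reproduces $f$ pointwise, and the inner $k$-integral then plays the role of an approximation to $\delta$.

```python
import numpy as np

# Truncate the Fourier inversion double integral for f(x) = exp(-x^2).
# As the truncation limits L (in x') and K (in k) grow, the reconstruction
# approaches f(x), which is the rigorous content behind the formal identity above.
f = lambda x: np.exp(-x**2)

L, K, N = 10.0, 20.0, 2001
xp = np.linspace(-L, L, N)                     # x' grid
k = np.linspace(-K, K, N)                      # k grid
dxp, dk = xp[1] - xp[0], k[1] - k[0]

# \hat f(k) = \int f(x') exp(-i k x') dx', approximated by a Riemann sum
fhat = np.array([(f(xp) * np.exp(-1j * kk * xp)).sum() * dxp for kk in k])

def reconstruct(x):
    # (1/(2 pi)) \int \hat f(k) exp(i k x) dk
    return ((fhat * np.exp(1j * k * x)).sum() * dk / (2 * np.pi)).real

print(reconstruct(0.7), f(0.7))                # the two values agree closely
```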
I'm reading Strichartz'
A Guide to Distribution Theory and Fourier Transforms. Maybe he will cover this at some point?
|
Elastic and inelastic 19.8 GeV/c proton-proton collisions in nuclear emulsion are examined using an external proton beam of the CERN Proton Synchrotron. Multiple scattering, blob density, range and angle measurements give the momentum spectra and angular distributions of secondary protons and pions. The partial cross-sections corresponding to inelastic interactions having two, four, six, eight, ten and twelve charged secondaries are found to be, respectively, (16.3±8.4) mb, (11.5 ± 6.0) mb, (4.3 ± 2.5) mb, (1.9 ± 1.3) mb, (0.5 ± 0.5) mb and (0.5±0.5)mb. The elastic cross-section is estimated to be (4.3±2.5) mb. The mean charged meson multiplicity for inelastic events is 3.7±0.5 and the average degree of inelasticity is 0.35±0.09. Strong forward and backward peaking is observed in the center-of-mass system for both secondary charged pions and protons. Distributions of energy, momentum and transverse momentum for identified charged secondaries are presented and compared with the results of work at other energies and with the results of a statistical theory of proton-proton collisions.
The differential and total cross sections for kaon pair production in the pp->ppK+K- reaction have been measured at three beam energies of 2.65, 2.70, and 2.83 GeV using the ANKE magnetic spectrometer at the COSY-Juelich accelerator. These near-threshold data are separated into pairs arising from the decay of the phi-meson and the remainder. For the non-phi selection, the ratio of the differential cross sections in terms of the K-p and K+p invariant masses is strongly peaked towards low masses. This effect can be described quantitatively by using a simple ansatz for the K-p final state interaction, where it is seen that the data are sensitive to the magnitude of an effective K-p scattering length. When allowance is made for a small number of phi events where the K- rescatters from the proton, the phi region is equally well described at all three energies. A very similar phenomenon is discovered in the ratio of the cross sections as functions of the K-pp and K+pp invariant masses and the identical final state interaction model is also very successful here. The world data on the energy dependence of the non-phi total cross section is also reproduced, except possibly for the results closest to threshold.
The production of eta mesons has been measured in the proton-proton interaction close to the reaction threshold using the COSY-11 internal facility at the cooler synchrotron COSY. Total cross sections were determined for eight different excess energies in the range from 0.5 MeV to 5.4 MeV. The energy dependence of the total cross section is well described by the available phase-space volume weighted by FSI factors for the proton-proton and proton-eta pairs.
Sigma+ hyperon production was measured at the COSY-11 spectrometer via the p p --> n K+ Sigma+ reaction at excess energies of Q = 13 MeV and Q = 60 MeV. These measurements continue systematic hyperon production studies via the p p --> p K+ Lambda/Sigma0 reactions where a strong decrease of the cross section ratio close-to-threshold was observed. In order to verify models developed for the description of the Lambda and Sigma0 production we have performed the measurement on the Sigma+ hyperon and found unexpectedly that the total cross section is by more than one order of magnitude larger than predicted by all anticipated models. After the reconstruction of the kaon and neutron four momenta, the Sigma+ is identified via the missing mass technique. Details of the method and the measurement will be given and discussed in view of theoretical models.
K+ meson production in pA (A = C, Cu, Au) collisions has been studied using the ANKE spectrometer at an internal target position of the COSY-Juelich accelerator. The complete momentum spectrum of kaons emitted at forward angles, theta < 12 degrees, has been measured for a beam energy of T(p)=1.0 GeV, far below the free NN threshold of 1.58 GeV. The spectrum does not follow a thermal distribution at low kaon momenta and the larger momenta reflect a high degree of collectivity in the target nucleus.
We present the first observation of exclusive $e^+e^-$ production in hadron-hadron collisions, using $p\bar{p}$ collision data at \mbox{$\sqrt{s}=1.96$ TeV} taken by the Run II Collider Detector at Fermilab, and corresponding to an integrated luminosity of \mbox{532 pb$^{-1}$}. We require the absence of any particle signatures in the detector except for an electron and a positron candidate, each with transverse energy {$E_T>5$ GeV} and pseudorapidity {$|\eta|<2$}. With these criteria, 16 events are observed compared to a background expectation of {$1.9\pm0.3$} events. These events are consistent in cross section and properties with the QED process \mbox{$p\bar{p} \to p + e^+e^- + \bar{p}$} through two-photon exchange. The measured cross section is \mbox{$1.6^{+0.5}_{-0.3}\mathrm{(stat)}\pm0.3\mathrm{(syst)}$ pb}. This agrees with the theoretical prediction of {$1.71 \pm 0.01$ pb}.
Using a secondary pion beam from the Argonne Zero Gradient Synchrotron we have studied the process π−p→φn in the region of the cross-section enhancement near kinematic threshold. For incident momenta between 1.6 and 2 GeV/c, we have determined production and decay angular distributions and extrapolated total cross sections from a sample of about 160 φ's above background. The production and decay distributions are consistent with isotropy over this entire incident-momentum range. The extrapolated total cross section varies between 19 and 25 μb.
Measurements have been made on 753 four-prong events obtained by exposing the Brookhaven National Laboratory 20-in. liquid hydrogen bubble chamber to 2.85-Bev protons. The partial cross sections observed for multiple meson production reactions are: ppπ+π− (p+p→p+p+π++π−), 2.67±0.13; pnπ+π+π−, 1.15±0.09; ppπ+π−π0, 0.74±0.07; dπ+π+π−, 0.06±0.02; four or more meson production, 0.04±0.02, all in mb. Production of two mesons appears to occur mainly in peripheral collisions with relatively little momentum transfer. In cases of three-meson production, however, the protons are typically deflected at large angles and are more strongly degraded in energy. The (3/2, 3/2) pion-nucleon resonance dominates the interaction; there is some indication that one or both of the T=1/2 pion-nucleon resonances also play a part. The recently discovered resonance in a T=0, three-pion state appears to be present in the ppπ+π−π0 reaction. Results are compared with the predictions of the isobaric nucleon model of Sternheimer and Lindenbaum, and with the statistical model of Cerulus and Hagedorn. The cross section for the reaction π0+p→π++π−+p is derived using an expression from the one-pion exchange model of Drell.
The production of neutrons carrying at least 20% of the proton beam energy ($x_L > 0.2$) in $e^+p$ collisions has been studied with the ZEUS detector at HERA for a wide range of $Q^2$, the photon virtuality, from photoproduction to deep inelastic scattering. The neutron-tagged cross section, $e p\to e' X n$, is measured relative to the inclusive cross section, $e p\to e' X$, thereby reducing the systematic uncertainties. For $x_L > 0.3$, the rate of neutrons in photoproduction is about half of that measured in hadroproduction, which constitutes a clear breaking of factorisation. There is about a 20% rise in the neutron rate between photoproduction and deep inelastic scattering, which may be attributed to absorptive rescattering in the $\gamma p$ system. For $0.64 < x_L < 0.82$, the rate of neutrons is almost independent of the Bjorken scaling variable $x$ and $Q^2$. However, at lower and higher $x_L$ values, there is a clear but weak dependence on these variables, thus demonstrating the breaking of limiting fragmentation. The neutron-tagged structure function, ${F^{\rm LN(3)}_2}(x,Q^2,x_L)$, rises at low values of $x$ in a way similar to that of the inclusive $F_2$ of the proton. The total $\gamma \pi$ cross section and the structure function of the pion, $F^{\pi}_2(x_\pi,Q^2)$ where $x_\pi = x/(1-x_L)$, have been determined using a one-pion-exchange model, up to uncertainties in the normalisation due to the poorly understood pion flux. At fixed $Q^2$, $F^{\pi}_2$ has approximately the same $x$ dependence as $F_2$ of the proton.
Cross sections for the production of two isolated muons up to high di-muon masses are measured in ep collisions at HERA with the H1 detector in a data sample corresponding to an integrated luminosity of 71 pb^-1 at a centre of mass energy of sqrt{s} = 319 GeV. The results are in good agreement with Standard Model predictions, the dominant process being photon-photon interactions. Additional muons or electrons are searched for in events with two high transverse momentum muons using the full data sample corresponding to 114 pb^-1, where data at sqrt{s} = 301 GeV and sqrt{s} = 319 GeV are combined. Both the di-lepton sample and the tri-lepton sample agree well with the predictions.
The cross section for the production of $\omega$ mesons in proton-proton collisions has been measured in a previously unexplored region of incident energies. Cross sections were extracted at 92 MeV and 173 MeV excess energy, respectively. The angular distribution of the $\omega$ at $\epsilon$=173 MeV is strongly anisotropic, demonstrating the importance of partial waves beyond pure s-wave production at this energy.
Inclusive D^{*+-} production in two-photon collisions is studied with the L3 detector at LEP, using 683 pb^{-1} of data collected at centre-of-mass energies from 183 to 208 GeV. Differential cross sections are determined as functions of the transverse momentum and pseudorapidity of the D^{*+-} mesons in the kinematic region 1 GeV < P_T < 12 GeV and |eta| < 1.4. The cross section sigma(e^+e^- -> e^+e^- D^{*+-} X) in this kinematic region is measured and the sigma(e^+e^- -> e^+e^- ccbar X) cross section is derived. The measurements are compared with next-to-leading order perturbative QCD calculations.
We have searched for exclusive 2-photon production in proton-antiproton collisions at sqrt{s} = 1.96 TeV, using 532/pb of integrated luminosity taken by the Run II Collider Detector at Fermilab. The event signature requires two electromagnetic showers, each with transverse energy E_T > 5 GeV and pseudorapidity |eta|<1.0, with no other particles detected in the event. Three candidate events are observed. We discuss the consistency of the three events with gamma-gamma, pi0-pi0, or eta-eta production. The probability that other processes fluctuate to 3 events or more is 1.7x10^-4. An upper limit on the cross section of p+pbar --> p+gamma-gamma+pbar is set at 410 fb with 95% confidence level.
|
Question: Is there a condition on an object $x$ of an $(\infty,2)$-category $\mathcal C$ which is equivalent to $x = Z(pt_+)$ for a unique TFT $Z$ from the $(\infty,2)$-category of framed bordisms where we only allow $2$-cobordisms for which the incoming and outgoing boundary of every component is non-empty? Remark: if I only demanded that the outgoing boundary was non-empty, this is called the non-compact bordism category (see below, and Defn 4.2.10 in http://www.math.harvard.edu/~lurie/papers/cobordism.pdf for the oriented version). Motivation: The kind of examples I have in mind are things like string topology for a non-compact oriented manifold (this would be an oriented theory rather than framed, but I want to try to separate out the conditions imposed by giving rise to a framed theory, and the fixed point data for the action of $SO(2)$.) Background (from Jacob Lurie's paper linked above): The cobordism hypothesis in two dimensions states that fully extended 2d framed TFTs
$Bord_2 ^{fr} \to \mathcal C$
are equivalent to fully dualizable objects in $\mathcal C$ (where $\mathcal C$ is some symmetric monoidal $(\infty,2)$-category).
Explicitly, an object $x \in \mathcal C$ is fully dualizable if
It is dualizable (with dual $x^\vee$) The evaluation morphism $ev:x \otimes x^\vee \to 1_{\mathcal C}$ has both a right and a left adjoint.
By duality, the adjoints $ev^R$ and $ev^L$ give rise to endomorphisms $S$ and $T$ of $x$ which are inverses of each other ($S$ is called the Serre automorphism).
There is also a non-compact version (as far as I understand): Let $Bord_n ^{fr,nc}$ be the bordism category in which every connected component of a surface has a non-empty outgoing boundary. A non-compact 2d TFT
$Bord_n^{fr,nc} \to \mathcal C$
is equivalent to a $(1 + 1/2)$-dualizable object in $\mathcal C$. That is, an object $x$ which is
Dualizable, The evaluation morphism has a right adjoint, The corresponding endomorphism of $x$ is invertible. Thoughts: Both full and 1.5-dualizability are conditions that can be checked on the level of homotopy 2-categories. If an object $x$ is fully dualizable then the dualizing data (evaluation, unit and counit for the adjunction, etc.) are essentially uniquely determined.
The issue for me is that I don't see an obvious way to express the generators and relations for the non-empty incoming and outgoing boundary bordism category in terms of duals and adjoints. I could see a potential answer to my question along the lines of:
$x$ is dualizable, $x$ admits an automorphism $S$, giving rise to morphisms $coev^\ast = (S\otimes 1_{x^{\ast}}) \circ coev$ and $coev^! = (S^{-1} \otimes 1_{x^\ast}) \circ coev: 1_{\mathcal C} \to x \otimes x^\ast$, There are 2-morphisms $coev^! \circ ev \to 1_{x^\ast \otimes x}$ and $1_{x^\ast \otimes x} \to coev^\ast \circ ev$ (corresponding to "saddle" cobordisms). These satisfy some relations (I am picturing something like the identity that relates the comultiplication and multiplication in a Frobenius algebra...)
In any case, if there is an answer along these lines, my question is: if an object $x$ admits such a collection of data, is this collection unique?
I hope this makes some sense...
|
1. Use the inductive definition of an to prove that \((ab)^{n} = a^{n}b^{n}\) for all nonnegative integers \(n\).
2. Give an inductive definition of \(\bigcup\limits_{i=1}^{n}S_{i}\) and use it and the two set distributive law to prove the distributive law \(A \cap \bigcup\limits_{i=1}^{n}S_{i} = \bigcup\limits_{i=1}^{n} A \cap S_{i}\).
\(\rightarrow\) 3. A hydrocarbon molecule is a molecule whose only atoms are either
carbon atoms or hydrogen atoms. In a simple molecular model of a hydrocarbon, a carbon atom will bond to exactly four other atoms and a hydrogen atom will bond to exactly one other atom. Such a model is shown in Figure 2.8. We represent a hydrocarbon compound
Figure 2.8: A model of a butane molecule
with a graph whose vertices are labelled with C’s and H’s so that each C vertex has degree four and each H vertex has degree one. A hydrocarbon is called an “alkane” if the graph is a tree. Common examples are methane (natural gas), butane (one version of which is shown in Figure 2.8), propane, hexane (ordinary gasoline), octane (to make gasoline burn more slowly), etc.
How many vertices are labelled \(H\) in the graph of an alkane with exactly \(n\) vertices labelled \(C\)? An alkane is called butane if it has four carbon atoms. Why do we say one version of butane is shown in Figure 2.8?
4.
Give a recurrence for the number of ways to divide \(2n\) people into sets of two for tennis games. (Don’t worry about who serves first.) Give a recurrence for the number of ways to divide \(2n\) people into sets of two for tennis games and to determine who serves first.
\(\rightarrow\) 5. Give a recurrence for the number of ways to divide \(4n\) people into sets of four for games of bridge. (Don’t worry about how they sit around the bridge table or who is the first dealer.)
6. Use induction to prove your result in Supplementary Problem 2 at the end of Chapter 1.
7. Give an inductive definition of the product notation \(\prod\limits^{n}_{i=1}a_{i}\).
8. Using the fact that \((ab)^{k} = a^{k}b^{k}\), use your inductive definition of
product notation in Problem 7 to prove that \(\left(\prod\limits^{n}_{i=1}a_{i}\right)^{k} = \prod\limits^{n}_{i=1}a_{i}^{k}\).
\(\rightarrow *\) 9. How many labelled trees on \(n\) vertices have exactly four vertices of degree 1? (This problem also appears in the next chapter since some ideas in that chapter make it more straightforward.)
\(\rightarrow *\) 10. The degree sequence of a graph is a list of the degrees of the vertices in nonincreasing order. For example the degree sequence of the first graph in Figure 2.4 is \((4, 3, 2, 2, 1)\). For a graph with vertices labelled 1 through \(n\), the ordered degree sequence of the graph is the sequence \(d_{1}, d_{2}, . . . d_{n}\) in which \(d_{i}\) is the degree of vertex \(i\). For example the ordered degree sequence of the first graph in Figure 2.2 is \((1, 2, 3, 3, 1, 1, 2, 1)\).
How many labelled trees are there on \(n\) vertices with ordered degree sequence \(d_{1}, d_{2}, \ldots, d_{n}\)? (This problem appears again in the next chapter since some ideas in that chapter make it more straightforward.) * How many labelled trees are there on \(n\) vertices with the degree sequence in which the degree \(d\) appears \(i_{d}\) times?
|
Set in present-day New York City, an unknown spacecraft of alien origin expels millions of micro black holes, each with the mass of a grape, into the Earth's atmosphere. I would like to know what happens if these millions of micro black holes were to fall on building structures such as skyscrapers. Would it trigger an extinction level event?
would it trigger an extinction level event?
Since they'd evaporate more or less instantaneously (via Hawking radiation), releasing energy according to the famous equation beginning E=, the spaceship would last a few microseconds at best; Earth would be fine.
Yes, the aliens in the ship would become extinct.
Black holes evaporate by emitting Hawking radiation: a black hole with a 1-second lifetime has a mass of $2.28 \cdot 10^5 \ \mathrm{kg}$.
A grape has far less mass than that, thus the black hole would evaporate way faster than that.
An intelligent life form dropping micro black holes on Earth would thus quickly annihilate its own bombing squad in a shower of gamma rays, proving that they were not as intelligent as we thought.
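A back-of-the-envelope check of those numbers, using the standard Hawking evaporation-time formula $t \approx 5120\pi G^2 M^3 / (\hbar c^4)$ (a rough sketch; the 5-gram grape mass is an assumption):

```python
import math

G    = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34    # reduced Planck constant, J s
c    = 2.998e8      # speed of light, m/s

def evaporation_time(mass_kg):
    """Hawking evaporation time of a Schwarzschild black hole of the given mass."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

print(evaporation_time(2.28e5))   # about 1 second, matching the mass quoted above
print(evaporation_time(0.005))    # a 5-gram "grape": roughly 1e-23 seconds
```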
The electromagnetic force from one electron on another and the gravitational force of this micro-black hole both follow an inverse square law. A grape about 1.5 cm in radius would have a mass of about 0.015 kg.
When does the gravitational force of the grape exceed the electromagnetic force between electrons? It's when:
$$\frac r R < \sqrt{\frac {4\pi \epsilon_0Gm_em_h}{e^2}} = 6.3\times 10^{-8}$$
Meaning the black hole would have to pass within less than one ten-millionth of the distance between electrons to have a significant influence on one. Beyond that range the electron will happily go about its business, hardly disturbed at all.
Even if a black hole passes this close the effect is only temporary. You're still nowhere near the event horizon of that black hole and so the electron will, at worst, be pulled away from its normal motion and after some brief period when the black hole moves away it will simply recombine in some way with the bulk of atoms around it. It might cause a minute amount of damage on a molecular level (even allowing for millions of these micro black holes), but the net effect would be tiny, probably less than someone hitting a wall with their hand.
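A small sketch of the numbers behind that estimate (using the same assumed 0.015 kg grape mass; the constants are rounded standard values):

```python
import math

G       = 6.674e-11    # m^3 kg^-1 s^-2
m_e     = 9.109e-31    # electron mass, kg
e       = 1.602e-19    # elementary charge, C
eps0    = 8.854e-12    # vacuum permittivity, F/m
m_grape = 0.015        # assumed grape mass, kg

# Ratio r/R at which the hole's gravitational pull on an electron matches the
# electron-electron Coulomb repulsion at the reference separation R.
ratio = math.sqrt(4 * math.pi * eps0 * G * m_e * m_grape / e**2)
print(ratio)   # roughly 6.3e-8
```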
How about they expel them at a fraction of c, so we take length contraction into account?
You seem to mean that, to avoid Hawking evaporation destroying these black holes before they even reach their target, they could be ejected at a high fraction of the speed of light.
So how high a speed is needed to avoid them evaporating before they travel 100 meters, assuming your aliens like low-level flying?
The fraction of the speed of light needed is :
$$\frac v c > \frac 1 { \sqrt{ 1 + \left( \frac {Tc} L \right)^2 } }$$
Where $L$ is the distance they must travel and $T$ is the lifetime of the micro black hole before it evaporates.
This works out at $\frac v c \approx 1 - 2\times 10^{-19}$. That's insanely close to the speed of light.
A million grapes of mass 0.015 kg will have a mass of 15,000 kg. But the energy required to get them moving at this insane fraction of the speed of light would be enormous. It equates to a mass about $2\times 10^9$ times 15,000 kg. Or to put it another way, the ship firing these micro black holes would need to have a mass-energy of about $3\times 10^{13}$ kg. The asteroid Vesta is substantially larger than this.
So this is actually a small mass in terms of asteroids and you could probably destroy Earth a lot more easily simply by grabbing some handy largish asteroids and sending them on their merry way towards Earth at some modest speed that's easily imparted with your spaceship.
Conclusion :
No need at all to mess around with ultra-relativistic micro-black holes when the universe provides you with much simpler and easy to handle "ammunition" in the form of basic asteroids.
All the answers so far take as read the veracity of Hawking Radiation. If we assume, for a moment, that this is false and that some undiscovered process prevents black hole evaporation (perhaps there is a layer of physics underlying quantum mechanics in the same way that QM underlies classical physics...); then what happens?
The black holes would fall to Earth like grapes (I assume you've removed their orbital velocity so that they don't just stay in orbit). They would accelerate like any falling body but, because of their tiny size, would not experience any air resistance. So they would arrive at the surface going at a fair old clip. If dropped from orbit, say about 400km up, this would be about 3 km/s. At the surface, what would happen? Nothing much, I'd guess... They're still so small that "solid" matter is practically a vacuum to them so they go straight through, down past the crust, mantle, core and then up the other side, out through Western Australia, back up to about 400km where they stop - and then tumble back down again. Eventually, they'd settle into a highly elliptical orbit around the centre of the Earth. The Coriolis force would make it look like the stream was scanning round the Earth every 24 hours.
Occasionally, one of them would strike a nucleus head-on and capture some of its quarks, so it would grow slightly. This process would have some positive feedback (bigger the event horizon gets, more chance of interaction) but I'm not sure what the time constant would look like. Anyone fancy doing the calculation? I'm guessing it might be aeons before it eats the Earth...
If we take "size of a grape" to mean a gram, then a million of them would be 1 metric ton. When they evaporate, they release energy several orders of magnitudes higher than a nuclear bomb. So New York would be devastated, but the earth as whole would not have large effects, although it might produce radioactive elements that would increase cancer rates.
While this may or may not be an extinction event, it has the
potential to be very bad news for at least some of those on the surface of the Earth, even if not the whole surface. The trick is that it will depend crucially on the alien ship's location since, as mentioned in the other answers, the ship will be destroyed the instant the black holes are released.
A typical grape has a mass of about 5 grams (cite: https://www.reference.com/food/many-grams-grape-weigh-fcc1e34fdbbcf843). 5 grams times a million is 5 megagrams (about five typical-sized road vehicles). The explosion into Hawking radiation can be considered effectively as very much like the instant detonation of an antimatter or nuclear explosive which ends up with the conversion of that same total mass to energy. By
$$E = mc^2$$
that is about $4.5 \times 10^{20}\ \mathrm{J}$ released, or 450 EJ. For comparison, the TSAR (largest nuclear device ever exploded) was about 0.21 EJ, so this is roughly 2000 times more explosive. While not as large as the Chicxulub asteroid strike (about 400 000 EJ), this is still a considerable bang, more than enough that were it to occur
on or near Earth's surface (i.e. the ship is "hovering" just above the skyscraper in question), it would lead to the complete annihilation of not only all of New York City (so way worse than just "a skyscraper"), but probably also the whole surrounding states, if not the entire Northeastern US, due to the blast and thermal waves. So insofar as the "skyscraper" is concerned, the answer for it is: complete, instantaneous vaporization into a high-energy ball of plasma, similar to that from a very, very, very, very big nuclear explosive, along with a large amount of the surrounding city and probably also the ground it is sitting on. The expansion of this huge ball of plasma generates a very large blast wave that lays waste to the surrounding countryside, out probably to a radius larger than New York State itself, plus formation of a large crater similar to that from an asteroid impact and resultant release of ejecta.
The other plausible alternative is if we imagine the craft is in orbit. For Low Earth orbit, we are talking about a height of 400 km above the surface. From geometry, we can then figure the amount of energy deposited at a point by using the inverse-square law, since the source will be approximately pointlike at this distance:
$$I(r) = \frac{I_0}{4\pi r^2}$$
where $I_0$ is the initial intensity, $I(r)$ that at distance $r$. Taking $r = 400\ \mathrm{km} = 400 000\ \mathrm{m}$, we get that the point on Earth's surface directly below the craft gets struck with about 223 MJ of energy per square meter, delivered effectively instantaneously. The farthest point that will experience irradiation of energy by the explosion is that for which it is just on the horizon, something we can calculate by considering when the line from the exploding craft to the point on the ground is at a right angle (so the tangent) to the line from the Earth's center to the same ground point. Geometrically, this forms a right triangle with the right angle that at the observation point, the hypotenuse is the line from the Earth's center to the craft (thus equal to $R_E + 400\ \mathrm{km}$) and the adjacent side is the line from the Earth's center to the observation point itself (thus equal to $R_E$). The length of the opposite side is then $\sqrt{(R_E + 400\ \mathrm{km})^2 - R_E^2}$ which with $R_E = 6371\ \mathrm{km}$ gives the straight-line distance as ~2300 km. To get the precise ground distance we need to take into account the curvature of the Earth's surface, and we can do that by taking the angle at the Earth's centre: since we have the hypotenuse and adjacent, we get $\cos(\theta) = \frac{\mathrm{adj}}{\mathrm{hyp}} = \frac{R_E}{R_E + 400\ \mathrm{km}}$ which gives $\theta \approx 345\ \mathrm{mrad}$ and multiplying it by $R_E$ to get the circular arc length ($s = r\theta$), which gives the ground distance as still being pretty close: ~2200 km.
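A short sketch reproducing those geometric estimates (the 5,000 kg total converted mass is the same assumption as above; constants are rounded):

```python
import math

c   = 2.998e8    # speed of light, m/s
M   = 5000.0     # total mass converted to energy, kg (a million 5-gram grapes)
R_E = 6371e3     # Earth radius, m
h   = 400e3      # orbital altitude, m

E = M * c**2                               # ~4.5e20 J, i.e. ~450 EJ
fluence_below = E / (4 * math.pi * h**2)   # J/m^2 directly below the burst

line_of_sight = math.sqrt((R_E + h)**2 - R_E**2)   # distance to the horizon point
theta = math.acos(R_E / (R_E + h))                 # central angle to that point
ground_dist = R_E * theta                          # arc length along the surface

print(E, fluence_below)                   # ~4.5e20 J, ~2.2e8 J/m^2 (~220 MJ/m^2)
print(line_of_sight, theta, ground_dist)  # ~2.3e6 m, ~0.345 rad, ~2.2e6 m
```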
Thanks to the cosine law of the angle of incidence, of course, radiation at this point will be effectively zero, so we can estimate that a radius of 1000 km will be subjected to radiation levels exceeding 100 MJ/m^2, delivered virtually instantly, chiefly as hard X/gamma rays. This will, for the most part, be absorbed in the atmosphere, but may cause interesting shock heating and chemical effects that I imagine cannot be good for anyone who happens to be underneath them. At the very least, you get a
huge cloud of oxides of nitrogen ($\mathrm{N_2 O}$, $\mathrm{NO_2}$) produced immediately, like smog - poison. I'm less sure of how to calculate how much and moreover what the effects of that will be once dispersed globally, but I can't imagine they'd be too good, either. This effect can be considered similar to that of a nearby Gamma-Ray Burst (GRB) impinging on the planet; cited as a possible cause of the Ordovician mass extinction (though I might have also heard something more recently that this has been tracked to a different cause), though here affecting only a considerably smaller area - that would have affected an entire hemisphere. Nonetheless, it might give you some clue that this is probably not going to be too good a day (week, month, year) for anyone. It may not kill everybody, but it's also not going to be anywhere close to as innocent and harmless as so many other answers and comments here seem to be painting it to be.
And this also, I should tell you, varies with just what "millions" actually means. This was for one million. If it's 100 million, then we are getting to around 10% of Chicxulub, and talking a lot worse.
|
11:00 AM to 12:30 PM
CSB 480
Cindy Xiong, Northwestern University
Your visual system can crunch vast arrays of numbers at a glance, providing the rest of your brain with critical values, statistics, and patterns needed to make decisions about your data. But that process can be derailed by biases at both the perceptual and cognitive levels. I demonstrate 3 instances of these biases that obstruct effective data communication. First, in the most frequently used graphs – lines and bars – reproductions of average values are systematically underestimated for lines, and overestimated for bars. Second, when people see correlational data, they often mistakenly also see a causal relationship. I’ll show how this error can be even worse for some types of graphs. Third, we’ve all experienced being overwhelmed by a confusing visualization. This may happen because the designer – an expert in the topic – thinks that you’d see what they see. I’ll describe a replication of this real-world phenomenon in the lab, showing that, when communicating patterns in data to others, it is tough for people to see a visualization from a naive perspective. I discuss why these biases happen in our brains, and prescribe ways to design visualizations to mitigate these biases.
1:00 PM to 2:00 PM
CS conference room (CSB 453)
Elad Hazan, Princeton University
Linear dynamical systems are a continuous subclass of reinforcement learning models that are widely used in robotics, finance, engineering, and meteorology. Classical control, since the works of Kalman, has focused on dynamics with Gaussian i.i.d. noise, quadratic loss functions and, in terms of provably efficient algorithms, known systems and observed state. In this talk we'll discuss how to apply new machine learning methods to control which relax all of the above: efficient control with adversarial noise, general loss functions, unknown systems, and partial observation.
Based on joint work with Naman Agarwal, Nataly Brukhim, Brian Bullins, Karan Singh, Sham Kakade, Max Simchovitz, and Cyril Zhang.
2,3,…,k: From approximating the number of edges to approximating the number of k-cliques (with a sublinear number of queries)
12:00 PM to 1:00 PM
CS conference room (CSB 453)
Dana Ron, Tel Aviv University
This talk will present an algorithm for approximating the number of k-cliques in a graph when given query access to the graph. This problem was previously studied for the cases of k=2 (edges) and k=3 (triangles). We give an algorithm that works for any k >= 3, and is actually conceptually simpler than the k=3 algorithm.
We consider the standard query model for general graphs via (1) degree queries, (2) neighbor queries and (3) pair queries.
Let n denote the number of vertices in the graph, m the number of edges, and C_k the number of k-cliques.
We design an algorithm that outputs a (1+\epsilon)-approximation (with high probability) for C_k, whose expected query complexity and running time are
$O\left(\frac{n}{C_k^{1/k}}+\frac{m^{k/2}}{C_k}\right) \cdot \mathrm{poly}(\log n, 1/\epsilon, k)$.
Hence, the complexity of the algorithm is sublinear in the size of the graph for $C_k = \omega(m^{k/2-1})$.
Furthermore, we prove a lower bound showing that the query complexity of our algorithm is essentially optimal (in terms of the dependence on n, m and C_k, up to the dependence on \log n, 1/\epsilon and k).
If time permits, I will also talk shortly about follow-up work on approximate counting of $k$-cliques in bounded-arboricity graphs.
The talk is based on works with Talya Eden and C. Seshadhri.
11:30 AM to 12:30 PM
CS conference room (CSB 453)
Tulika Mitra, National University of Singapore
Internet of Things (IoT), a network of billion computing devices embedded within physical objects, is revolutionizing our lives. The IoT devices at the edge are primarily responsible only for collecting and communicating the data to the cloud, where the computationally intensive data analytics takes place. However, the data privacy and the connectivity issues - in
conjunction with the fast real-time response requirement of certain IoT applications - call for smart edge devices that should be able to support privacy-preserving, time-sensitive computation for machine intelligence on-site. We will present the computation challenges in edge computing and introduce hardware-software co-designed approaches to overcome these challenges. We will discuss the design of tiny accelerators that are completely software programmable and can speed up computation to realize the edge computing vision at ultra-low power budget. We will also demonstrate the promise of collaborative computation that engages heterogeneous processing elements in a synergistic fashion to achieve real-time edge computing.
12:00 PM to 1:00 PM
CSB 480 (Computer Science Department)
Swastik Kopparty, Rutgers University
12:00 PM to 1:00 PM
CS conference room (CSB 453)
Piotr Sankowski, University of Warsaw
12:00 PM to 1:00 PM
CSB 453
Tushar Krishna, Georgia Tech
Ever since modern computers were invented, the dream of creating artificial intelligence (AI) has captivated humanity. We are fortunate to live in an era when, thanks to deep learning (DL), computer programs have paralleled, and in many cases even surpassed human level accuracy in tasks like visual perception and speech synthesis. However, we are still far away from realizing general-purpose AI. The problem lies in the fact that the development of supervised learning based DL solutions today is mostly open loop. A typical DL model is created by hand-tuning the deep neural network (DNN) topology by a team of experts over multiple iterations, followed by training over petabytes of labeled data. Once trained, the DNN provides high accuracy for the task at hand; if the task changes, however, the DNN model needs to be re-designed and re-trained before it can be
deployed. A general-purpose AI system, in contrast, needs to have the ability to constantly interact with the environment and learn by adding and removing connections within the DNN autonomously, just like our brain does. This is known as synaptic plasticity.
In this talk we will present our research efforts towards enabling general-purpose AI leveraging plasticity in both the algorithm and hardware. First, we will present GeneSys (MICRO 2018), a HW-SW prototype of a closed loop learning system for continuously evolving the structure and weights of a DNN for the task at hand using genetic algorithms, providing 100-10000x higher performance and energy-efficiency over state-of-the-art embedded and desktop CPU and GPU systems. Next, we will present a DNN accelerator substrate called MAERI (ASPLOS 2018), built using light-weight, non-blocking, reconfigurable interconnects, that supports efficient mapping of regular and irregular DNNs with arbitrary dataflows, providing ~100% utilization of all compute units, resulting in 3X speedup and energy-efficiency over our prior work Eyeriss (ISSCC 2016). Finally, time permitting, we will describe our research in enabling rapid design-space exploration and prototyping of hardware accelerators using our dataflow DSL + cost-model called MAESTRO (MICRO 2019).
11:40 AM to 12:40 PM
CS Department 451
Shafi Goldwasser, UC Berkeley
Cryptography and Computational Learning have shared a curious history: a scientific success for one has often provided an example of an impossible task for the other. Today, the goals of the two fields are aligned. Cryptographic models and tools can and should play a role in ensuring the safe use of machine learning. We will discuss this development with its challenges and opportunities.
Host: Jeannette Wing
11:40 AM to 12:40 PM
CS Department 451
Srini Devadas, MIT
As the world becomes more connected, privacy is becoming harder to maintain. From social media services to Internet service providers to state-sponsored mass-surveillance programs, many outlets collect sensitive information about the users and the communication between them – often without the users ever knowing about it. In response, many Internet users have turned to end-to-end encryption, like Signal and TLS, to protect the content of the communication. Unfortunately, these works do little to hide the metadata of the communication, such as when and with whom a user is communicating. In scenarios where the metadata are sensitive, encryption alone is not sufficient to ensure users’ privacy.
Most prior systems that provide metadata private communication fall into one of two categories: systems that (1) provide formal privacy guarantees against global adversaries but do not scale to large numbers of users, or (2) scale easily to a large user base but do not provide strong guarantees against global adversaries. I will present two systems that aim to bridge the gap between the two categories to enable private communication with strong guarantees for many millions of users: Quark, a horizontally scalable anonymous broadcast system that defends against a global adversary who monitors the entire network and controls a significant fraction of the servers, and Crossroads, which provides metadata private communication between two honest users against the same adversary using a novel cryptographic primitive.
This talk is based on Albert Kwon's recently completed MIT PhD dissertation.
Host: Simha Sethumadhavan
11:40 AM to 12:40 PM
CSB 451
Barbara Grosz, Harvard University
Computing technologies have become pervasive in daily life, sometimes bringing unintended but harmful consequences. For students to learn to think not only about what technology they could create, but also whether they should create that technology and to recognize the ethical considerations that should constrain their design, computer science curricula must expand to include ethical reasoning about the societal value and impact of these technologies. This talk will describe Harvard's Embedded EthiCS initiative, a novel approach to integrating ethics into computer science education that incorporates ethical reasoning throughout courses in the standard computer science curriculum. It changes existing courses rather than requiring wholly new courses. The talk will begin with a short description of my experiences teaching the course "Intelligent Systems: Design and Ethical Challenges" that inspired the design of Embedded EthiCS. It will then describe the goals behind the design, the way the program works, lessons learned and challenges to sustainable implementations of such a program across different types of academic institutions.
Host: Augustin Chaintreau
11:40 AM to 12:40 PM
CS Department 451
Bill Freeman, MIT
|
This is joint work with Thomas Hulse, Chan Ieong Kuan, and Alex Walker, and is another sequel to our previous work. This is the third in a trio of papers, and completes an answer to a question posed by our advisor Jeff Hoffstein two years ago.
We have just uploaded a preprint to the arXiv giving conditions that guarantee that a sequence of numbers contains infinitely many sign changes. More generally, if the sequence consists of complex numbers, then we give conditions that guarantee sign changes in a
generalized sense.
Let $\mathcal{W}(\theta_1, \theta_2) := \{ re^{i\theta} : r \geq 0, \theta \in [\theta_1, \theta_2]\}$ denote a wedge of the complex plane.
Suppose $\{a(n)\}$ is a sequence of complex numbers satisfying the following conditions:
$a(n) \ll n^\alpha$, $\sum_{n \leq X} a(n) \ll X^\beta$, $\sum_{n \leq X} \lvert a(n) \rvert^2 = c_1 X^{\gamma_1} + O(X^{\eta_1})$,
where $\alpha, \beta, c_1, \gamma_1$, and $\eta_1$ are all real numbers $\geq 0$. Then for any $r$ satisfying $\max(\alpha+\beta, \eta_1) - (\gamma_1 - 1) < r < 1$, the sequence $\{a(n)\}$ has at least one term outside any wedge $\mathcal{W}(\theta_1, \theta_2)$ with $0 \leq \theta_2 - \theta_1 < \pi$ for some $n \in [X, X+X^r)$ for all sufficiently large $X$.
These wedges can be thought of as just slightly smaller than a half-plane. For a complex number to escape a half plane is analogous to a real number changing sign. So we should think of this result as guaranteeing a sort of sign change in intervals of width $X^r$ for all sufficiently large $X$.
The intuition behind this result is very straightforward. If the sum of coefficients is small while the sum of the squares of the coefficients are large, then the sum of coefficients must experience a lot of cancellation. The fact that we can get quantitative results on the number of sign changes is merely a task of bookkeeping.
Both the statement and proof are based on very similar criteria for sign changes when ${a(n)}$ is a sequence of real numbers first noticed by Ram Murty and Jaban Meher. However, if in addition it is known that
\begin{equation}
\sum_{n \leq X} (a(n))^2 = c_2 X^{\gamma_2} + O(X^{\eta_2}), \end{equation}
and that $\max(\alpha+\beta, \eta_1, \eta_2) - (\max(\gamma_1, \gamma_2) - 1) < r < 1$, then generically both sequences $\{\text{Re}(a(n))\}$ and $\{\text{Im}(a(n))\}$ contain at least one sign change for some $n$ in $[X , X + X^r)$ for all sufficiently large $X$. In other words, we can detect sign changes for both the real and imaginary parts in intervals, which is a bit more special.
It is natural to ask for even more specific detection of sign changes. For instance, knowing specific information about the distribution of the arguments of $a(n)$ would be interesting, and very closely related to the Sato-Tate Conjectures. But we do not yet know how to investigate this distribution.
In practice, we often understand the various criteria for the application of these two sign changes results by investigating the Dirichlet series
\begin{align} &\sum_{n \geq 1} \frac{a(n)}{n^s} \\ &\sum_{n \geq 1} \frac{S_f(n)}{n^s} \\ &\sum_{n \geq 1} \frac{\lvert S_f(n) \rvert^2}{n^s} \\ &\sum_{n \geq 1} \frac{S_f(n)^2}{n^s}, \end{align} where \begin{equation} S_f(n) = \sum_{m \leq n} a(m). \end{equation}
In the case of holomorphic cusp forms, the two previous joint projects with this group investigated exactly the Dirichlet series above. In the paper, we formulate some slightly more general criteria guaranteeing sign changes based directly on the analytic properties of the Dirichlet series involved.
In this paper, we apply our sign change results to our previous work to show that $S_f(n)$ changes sign in each interval $[X, X + X^{\frac{2}{3} + \epsilon})$ for sufficiently large $X$. Further, if there are coefficients with $\text{Im} a(n) \neq 0$, then the real and imaginary parts each change signs in those intervals.
We apply our sign change results to single coefficients of $\text{GL}(2)$ cusp forms (and specifically full integral weight holomorphic cusp forms, half-integral weight holomorphic cusp forms, and Maass forms). In large part these are minor improvements over folklore and what is known, except for the extension to complex coefficients.
We also apply our sign change results to single isolated coefficients $A(1,m)$ of $\text{GL}(3)$ Maass forms. This seems to be a novel result, and adds to the very sparse literature on sign changes of sequences associated to $\text{GL}(3)$ objects. Murty and Meher recently proved a general sign change result for $\text{GL}(n)$ objects which is similar in feel.
As a final application, we also consider sign changes of partial sums of $\nu$-normalized coefficients. Let
\begin{equation} S_f^\nu(X) := \sum_{n \leq X} \frac{a(n)}{n^{\nu}}. \end{equation} As $\nu$ gets larger, the individual coefficients $a(n)n^{-\nu}$ become smaller. So one should expect the sign changes in $\{S_f^\nu(n)\}$ to change based on $\nu$. And in particular, as $\nu$ gets very large, the number of sign changes of $S_f^\nu$ should decrease.
Interestingly, in the case of holomorphic cusp forms of weight $k$, we are able to show that there are sign changes of $S_f^\nu(n)$ in intervals even for normalizations $\nu$ a bit above $\nu = \frac{k-1}{2}$. This is particularly interesting as $a(n) \ll n^{\frac{k-1}{2} + \epsilon}$, so for $\nu > \frac{k-1}{2}$ the coefficients are \emph{decreasing} with $n$. We are able to show that when $\nu = \frac{k-1}{2} + \frac{1}{6} - \epsilon$, the sequence $\{S_f^\nu(n)\}$ has at least one sign change for $n$ in $[X, 2X)$ for all sufficiently large $X$.
It may help to consider a simpler example to understand why this is surprising. Consider the classic example of a sequence of $b(n)$, where $b(n) = 1$ or $b(n) = -1$, randomly, with equal probability. Then the expected size of the sums of $b(n)$ is about $\sqrt n$. This is an example of \emph{square-root cancellation}, and such behaviour is a common point of comparison. Similarly, the number of sign changes of the partial sums of $b(n)$ is also expected to be about $\sqrt n$.
Suppose now that $b(n) = \frac{\pm 1}{\sqrt n}$. If the first term is $1$, then it takes more than the second term being negative to make the overall sum negative. And if the first two terms are positive, then it would take more than the following three terms being negative to make the overall sum negative. So sign changes of the partial sums are much rarer. In fact, they’re exceedingly rare, and one might barely detect more than a dozen through computational experiment (although one should still expect infinitely many).
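A quick numerical illustration (a sketch added here, not taken from the paper) makes the contrast vivid: the partial sums of unnormalized random signs change sign roughly on the order of $\sqrt{N}$ times, while the damped version changes sign far more rarely.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10**6
signs = rng.choice([-1.0, 1.0], size=N)

def sign_changes(terms):
    """Count sign changes in the sequence of partial sums of `terms`."""
    s = np.sign(np.cumsum(terms))
    s = s[s != 0]                     # ignore the rare exact zeros
    return int(np.sum(s[1:] != s[:-1]))

print(sign_changes(signs))                                 # on the order of sqrt(N)
print(sign_changes(signs / np.sqrt(np.arange(1, N + 1))))  # far fewer sign changes
```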
This regularity, in spite of the decreasing size of the individual coefficients $a(n)n^{-\nu}$, suggests an interesting regularity in the sign changes of the individual $a(n)$. We do not know how to understand or measure this effect or its regularity, and for now it remains an entirely qualitative observation.
For more details and specific references, see the paper on the arXiv.
|
Solvers for ordinary differential equations (ODEs) are among the best-studied algorithms of numerical mathematics. An ODE is an implicit statement about the relationship of a curve $x:\mathbb{R}_{\geq 0}\to\mathbb{R}^N$ to its derivative, in the form $x'(t) = f(x(t),t)$, where $x'$ is the derivative of the curve, and $f$ is some function. To identify a unique solution of a particular ODE, it is typically also necessary to provide additional statements about the curve, such as its
initial value $x(t_0)=x_0$. An ODE solver is a mathematical rule that maps a function and an initial value $(f,x_0)$ to an estimate $x(t)$ for the solution curve. Good solvers have certain analytical guarantees about this estimate, such as the fact that its deviation from the true solution is of a high polynomial order in the step size used by the algorithm to discretize the ODE.
One of the main theoretical contributions of the group is the development of
probabilistic versions of these solvers. In several works, we established a class of solvers for initial value problems that generalize classic solvers by taking as inputs Gaussian distributions $\mathcal{N}(x(t_0);x_0,\Psi)$, $\mathcal{GP}(f;\hat{f},\Sigma)$ over the initial value and vector field, and return a Gaussian process posterior $\mathcal{GP}(x;m,k)$ over the solution. We were able to show that these methods
have the same (linear) computational complexity in the solver's step size $h$ as classic methods [ ] (they are Bayesian filters; a minimal sketch of this filtering formulation follows after this list)
can inherit the famous local and global polynomial convergence rates of classic solvers [ ] (i.e. $\|m-x\|\leq Ch^q$ for $q\geq 1$)
produce posterior variance estimates that are calibrated worst-case error estimates [ ] (i.e. $\|m-x\|^2\leq Ck$). In short, they produce meaningful uncertainty
are in fact a generalization of certain famous classic ODE solvers (namely, they reduce to explicit single-step Runge-Kutta methods and multi-step Nordsieck methods in the limit of an uninformative prior and steady-state operation, respectively; in practical operation, they offer a third, novel type of solver) [ ]
they can be generalized to produce non-Gaussian, nonparametric output while retaining many of the above properties [ ].
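To make the filtering formulation referenced above concrete, here is a deliberately minimal sketch of a Gaussian ODE filter of EK0 type with a once-integrated Wiener-process prior. It is written purely for illustration (it is not the group's reference implementation), and the prior scale `sigma2` is an arbitrary choice.

```python
import numpy as np

def ek0_solve(f, x0, t0, t1, h, sigma2=1.0):
    """Minimal EK0-type ODE filter for x'(t) = f(x(t), t), x(t0) = x0.

    The state is (x, x') under a once-integrated Wiener-process prior; each step
    does a Kalman predict followed by an update on the pseudo-observation
    x'(t) - f(x(t), t) = 0, with f linearized as a constant (the "EK0" choice).
    Returns the time grid, posterior means and posterior variances of x(t).
    """
    A = np.array([[1.0, h], [0.0, 1.0]])              # prior transition over one step
    Q = sigma2 * np.array([[h**3 / 3, h**2 / 2],
                           [h**2 / 2, h      ]])      # prior process noise
    H = np.array([[0.0, 1.0]])                        # observe the derivative coordinate

    m = np.array([x0, f(x0, t0)])
    P = np.zeros((2, 2))
    ts, means, variances = [t0], [m[0]], [P[0, 0]]

    t = t0
    while t < t1 - 1e-12:
        m, P = A @ m, A @ P @ A.T + Q                 # predict
        t += h
        z = m[1] - f(m[0], t)                         # residual of x' - f(x)
        S = (H @ P @ H.T).item()
        K = (P @ H.T / S).ravel()
        m, P = m - K * z, P - np.outer(K, K) * S      # update
        ts.append(t); means.append(m[0]); variances.append(P[0, 0])
    return np.array(ts), np.array(means), np.array(variances)

# Usage: x' = -x, x(0) = 1; the posterior mean tracks exp(-t), and the posterior
# variance provides the uncertainty estimate discussed above.
ts, mean, var = ek0_solve(lambda x, t: -x, x0=1.0, t0=0.0, t1=2.0, h=0.05)
```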
Together, these results provide a rich and reliable new theory for probabilistic simulation that current ongoing research projects are seeking to leverage to speed up structured simulation problems inside of machine learning algorithms.
|
[exer:4.2.1] A thermometer is moved from a room where the temperature is \(70^\circ\)F to a freezer where the temperature is \(12^\circ F\). After 30 seconds the thermometer reads \(40^\circ\)F. What does it read after 2 minutes?
[exer:4.2.2] A fluid initially at \(100^\circ\)C is placed outside on a day when the temperature is \(-10^\circ\)C, and the temperature of the fluid drops \(20^\circ\)C in one minute. Find the temperature \(T(t)\) of the fluid for \(t > 0\).
[exer:4.2.3] At 12:00 pm a thermometer reading \(10^\circ\)F is placed in a room where the temperature is \(70^\circ\)F. It reads \(56^\circ\) when it is placed outside, where the temperature is \(5^\circ\)F, at 12:03. What does it read at 12:05 pm?
[exer:4.2.4] A thermometer initially reading \(212^\circ\)F is placed in a room where the temperature is \(70^\circ\)F. After 2 minutes the thermometer reads \(125^\circ\)F.
What does the thermometer read after 4 minutes?
When will the thermometer read \(72^\circ\)F?
When will the thermometer read \(69^\circ\)F?
[exer:4.2.5] An object with initial temperature \(150^\circ\)C is placed outside, where the temperature is \(35^\circ\)C. Its temperatures at 12:15 and 12:20 are \(120^\circ\)C and \(90^\circ\)C, respectively.
At what time was the object placed outside?
When will its temperature be \(40^\circ\)C?
[exer:4.2.6] An object is placed in a room where the temperature is \(20^\circ\)C. The temperature of the object drops by \(5^\circ\)C in 4 minutes and by \(7^\circ\)C in 8 minutes. What was the temperature of the object when it was initially placed in the room?
[exer:4.2.7] A cup of boiling water is placed outside at 1:00 pm. One minute later the temperature of the water is \(152^\circ\)F. After another minute its temperature is \(112^\circ\)F. What is the outside temperature?
[exer:4.2.8] A tank initially contains 40 gallons of pure water. A solution with 1 gram of salt per gallon of water is added to the tank at 3 gal/min, and the resulting solution drains out at the same rate. Find the quantity \(Q(t)\) of salt in the tank at time \(t > 0\).
[exer:4.2.9] A tank initially contains a solution of 10 pounds of salt in 60 gallons of water. Water with 1/2 pound of salt per gallon is added to the tank at 6 gal/min, and the resulting solution leaves at the same rate. Find the quantity \(Q(t)\) of salt in the tank at time \(t > 0\).
[exer:4.2.10] A tank initially contains 100 liters of a salt solution with a concentration of 0.1 g/liter. A solution with a salt concentration of 0.3 g/liter is added to the tank at 5 liters/min, and the resulting mixture is drained out at the same rate. Find the concentration \(K(t)\) of salt in the tank as a function of \(t\).
[exer:4.2.11] A 200 gallon tank initially contains 100 gallons of water with 20 pounds of salt. A salt solution with 1/4 pound of salt per gallon is added to the tank at 4 gal/min, and the resulting mixture is drained out at 2 gal/min. Find the quantity of salt in the tank as it is about to overflow.
[exer:4.2.12] Suppose water is added to a tank at 10 gal/min, but leaks out at the rate of 1/5 gal/min for each gallon in the tank. What is the smallest capacity the tank can have if the process is to continue indefinitely?
[exer:4.2.13] A chemical reaction in a laboratory with volume \(V\) (in ft\(^3\)) produces \(q_1\) ft\(^3\)/min of a noxious gas as a byproduct. The gas is dangerous at concentrations greater than \(\overline c\), but harmless at concentrations \(\le \overline c\). Intake fans at one end of the laboratory pull in fresh air at the rate of \(q_2\) ft\(^3\)/min and exhaust fans at the other end exhaust the mixture of gas and air from the laboratory at the same rate. Assuming that the gas is always uniformly distributed in the room and its initial concentration \(c_0\) is at a safe level, find the smallest value of \(q_2\) required to maintain safe conditions in the laboratory for all time.
[exer:4.2.14] A 1200-gallon tank initially contains 40 pounds of salt dissolved in 600 gallons of water. Starting at \(t_0=0\), water that contains 1/2 pound of salt per gallon is added to the tank at the rate of 6 gal/min and the resulting mixture is drained from the tank at 4 gal/min. Find the quantity \(Q(t)\) of salt in the tank at any time \(t > 0\) prior to overflow.
[exer:4.2.15] Tank \(T_1\) initially contains 50 gallons of pure water. Starting at \(t_0=0\), water that contains 1 pound of salt per gallon is poured into \(T_1\) at the rate of 2 gal/min. The mixture is drained from \(T_1\) at the same rate into a second tank \(T_2\), which initially contains 50 gallons of pure water. Also starting at \(t_0=0\), a mixture from another source that contains 2 pounds of salt per gallon is poured into \(T_2\) at the rate of 2 gal/min. The mixture is drained from \(T_2\) at the rate of 4 gal/min.
Find a differential equation for the quantity \(Q(t)\) of salt in tank \(T_2\) at time \(t > 0\).
Solve the equation derived in (a) to determine \(Q(t)\).
Find \(\lim_{t\to\infty}Q(t)\).
[exer:4.2.16] Suppose an object with initial temperature \(T_0\) is placed in a sealed container, which is in turn placed in a medium with temperature \(T_m\). Let the initial temperature of the container be \(S_0\). Assume that the temperature of the object does not affect the temperature of the container, which in turn does not affect the temperature of the medium. (These assumptions are reasonable, for example, if the object is a cup of coffee, the container is a house, and the medium is the atmosphere.)
Assuming that the container and the medium have distinct temperature decay constants \(k\) and \(k_m\) respectively, use Newton’s law of cooling to find the temperatures \(S(t)\) and \(T(t)\) of the container and object at time \(t\).
Assuming that the container and the medium have the same temperature decay constant \(k\), use Newton’s law of cooling to find the temperatures \(S(t)\) and \(T(t)\) of the container and object at time \(t\).
Find \(\lim_{t\to\infty}S(t)\) and \(\lim_{t\to\infty}T(t)\).
[exer:4.2.17] In our previous examples and exercises concerning Newton’s law of cooling we assumed that the temperature of the medium remains constant. This model is adequate if the heat lost or gained by the object is insignificant compared to the heat required to cause an appreciable change in the temperature of the medium. If this isn’t so, we must use a model that accounts for the heat exchanged between the object and the medium. Let \(T=T(t)\) and \(T_m=T_m(t)\) be the temperatures of the object and the medium, respectively, and let \(T_0\) and \(T_{m0}\) be their initial values. Again, we assume that \(T\) and \(T_m\) are related by Newton’s law of cooling,
\[T'=-k(T-T_m). \eqno{\rm (A)}\]
We also assume that the change in heat of the object as its temperature changes from \(T_0\) to \(T\) is \(a(T-T_0)\) and that the change in heat of the medium as its temperature changes from \(T_{m0}\) to \(T_m\) is \(a_m(T_m-T_{m0})\), where \(a\) and \(a_m\) are positive constants depending upon the masses and thermal properties of the object and medium, respectively. If we assume that the total heat of the system consisting of the object and the medium remains constant (that is, energy is conserved), then
\[a(T-T_0)+a_m(T_m-T_{m0})=0. \eqno{\rm (B)}\]
Equation (A) involves two unknown functions \(T\) and \(T_m\). Use (A) and (B) to derive a differential equation involving only \(T\).
Find \(T(t)\) and \(T_m(t)\) for \(t>0\).
Find \(\lim_{t\to\infty}T(t)\) and \(\lim_{t\to\infty}T_m(t)\).
[exer:4.2.18] Control mechanisms allow fluid to flow into a tank at a rate proportional to the volume \(V\) of fluid in the tank, and to flow out at a rate proportional to \(V^2\). Suppose \(V(0)=V_0\) and the constants of proportionality are \(a\) and \(b\), respectively. Find \(V(t)\) for \(t>0\) and find \(\lim_{t\to\infty}V(t)\).
[exer:4.2.19] Identical tanks \(T_1\) and \(T_2\) initially contain \(W\) gallons each of pure water. Starting at \(t_0=0\), a salt solution with constant concentration \(c\) is pumped into \(T_1\) at \(r\) gal/min and drained from \(T_1\) into \(T_2\) at the same rate. The resulting mixture in \(T_2\) is also drained at the same rate. Find the concentrations \(c_1(t)\) and \(c_2(t)\) in tanks \(T_1\) and \(T_2\) for \(t>0\).
[exer:4.2.20] An infinite sequence of identical tanks \(T_1\), \(T_2\), …, \(T_n\), …, initially contain \(W\) gallons each of pure water. They are hooked together so that fluid drains from \(T_n\) into \(T_{n+1}\,(n=1,2,\cdots)\). A salt solution is circulated through the tanks so that it enters and leaves each tank at the constant rate of \(r\) gal/min. The solution has a concentration of \(c\) pounds of salt per gallon when it enters \(T_1\).
Find the concentration \(c_n(t)\) in tank \(T_n\) for \(t>0\).
Find \(\lim_{t\to\infty}c_n(t)\) for each \(n\).
[exer:4.2.21] Tanks \(T_1\) and \(T_2\) have capacities \(W_1\) and \(W_2\) liters, respectively. Initially they are both full of dye solutions with concentrations \(c_{1}\) and \(c_2\) grams per liter. Starting at \(t_0=0\), the solution from \(T_1\) is pumped into \(T_2\) at a rate of \(r\) liters per minute, and the solution from \(T_2\) is pumped into \(T_1\) at the same rate.
Find the concentrations \(c_1(t)\) and \(c_2(t)\) of the dye in \(T_1\) and \(T_2\) for \(t>0\).
Find \(\lim_{t\to\infty}c_1(t)\) and \(\lim_{t\to\infty}c_2(t)\).
[exer:4.2.22] Consider the mixing problem of Example 4.2.3, but without the assumption that the mixture is stirred instantly so that the salt is always uniformly distributed throughout the mixture. Assume instead that the distribution approaches uniformity as \(t\to\infty\). In this case the differential equation for \(Q\) is of the form
\[Q'+{a(t)\over150}Q=2\]where \(\lim_{t\to\infty}a(t)=1\).
Assuming that \(Q(0)=Q_0\), can you guess the value of \(\lim_{t\to\infty}Q(t)\)?
Use numerical methods to confirm your guess in these cases:
\[\text{(i)}\; a(t)=t/(1+t), \quad \text{(ii)}\; a(t)=1-e^{-t^2}, \quad \text{(iii)}\; a(t)=1-\sin(e^{-t}).\]
[exer:4.2.23] Consider the mixing problem of Example 4.2.4 in a tank with infinite capacity, but without the assumption that the mixture is stirred instantly so that the salt is always uniformly distributed throughout the mixture. Assume instead that the distribution approaches uniformity as \(t\to\infty\). In this case the differential equation for \(Q\) is of the form
\[Q'+{a(t)\over t+100}Q=1\]where \(\lim_{t\to\infty}a(t)=1\).
Let \(K(t)\) be the concentration of salt at time \(t\). Assuming that \(Q(0)=Q_0\), can you guess the value of \(\lim_{t\to\infty}K(t)\)?
Use numerical methods to confirm your guess in these cases:
\[\text{(i)}\; a(t)=t/(1+t), \quad \text{(ii)}\; a(t)=1-e^{-t^2}, \quad \text{(iii)}\; a(t)=1+\sin(e^{-t}).\]
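One way to carry out the numerical confirmation requested in [exer:4.2.22] and [exer:4.2.23] is sketched below. This particular script, with SciPy's `solve_ivp` and function names of our own choosing, is just one possibility; the printed "guesses" are the limits obtained by formally setting \(a(t)\equiv1\).

```python
# Integrate Q' + a(t) Q / 150 = 2 (Exercise 4.2.22) and Q' + a(t) Q / (t+100) = 1
# (Exercise 4.2.23) far out in time, then inspect Q(t) or K(t) = Q(t)/(t+100).
from scipy.integrate import solve_ivp

def a(t):
    return t / (1 + t)            # case (i); swap in the other choices of a(t)

rhs_22 = lambda t, Q: 2 - a(t) * Q / 150
rhs_23 = lambda t, Q: 1 - a(t) * Q / (t + 100)

Q0, T = 10.0, 5000.0              # the limits should not depend on Q0
sol22 = solve_ivp(rhs_22, (0, T), [Q0], rtol=1e-8, atol=1e-10)
sol23 = solve_ivp(rhs_23, (0, T), [Q0], rtol=1e-8, atol=1e-10)

print("4.2.22:  Q(T) =", sol22.y[0, -1])              # guess: 300, the limit when a(t) = 1
print("4.2.23:  K(T) =", sol23.y[0, -1] / (T + 100))  # guess: 1/2
```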
|
Some people make a habit of occasionally visiting Wikipedia’s front page and following hyperlinked phrases down a rabbit hole of articles. I do a similar thing with Andries Brouwer’s tables of strongly regular graphs, which leads to a cascade of papers, checking BCN and computing stuff in Sage. How does that happen exactly? As usual in math, we now need to start with some definitions.
Definitions
A strongly regular graph is a $k$-regular graph on $n$ vertices where every pair of adjacent vertices has $a$ common neighbours and every pair of non-adjacent vertices has $c$ common neighbours. The tuple $(n,k,a,c)$ is called the parameter set of the strongly regular graph (or SRG). For example, the Petersen graph is a SRG with parameter set $(10,3,0,1)$. We immediately see that these numbers $n,k,a$ and $c$ have to satisfy some conditions, some trivial ($n>k$ and $a,c \leq k$) and others less so.
Brouwer’s tables consist of all possible parameter sets for $n$ up to $1300$ which have passed some elementary checks. The parameters which are in green are classes where it has been confirmed that there does in fact exist at least one SRG with those parameters. The ones in red have been shown by other means (failing to satisfy the absolute or relative bound, for example) to not be parameter sets of any SRGs. The entries in yellow are parameter sets where the existence of a SRG is still open. This leads us to …
Non-existence of SRGs
Finding whether or not a putative SRG exists is a pretty interesting problem; the existence of the $57$-regular Moore graph, which is a SRG with parameters $(3250, 57,0,1)$, is a long-standing open problem in algebraic graph theory. Today, we’ll look at the parameter set $(49,16,3,6)$, which holds the distinction of being the first red entry in Brouwer’s tables which is not a conference graph and also not ruled out by the absolute bound. This means that it was ruled out by a somewhat ad-hoc method; in this case, it was ruled out by Bussemaker, Haemers, Mathon and Wilbrink in this 1989 paper. However, computers have come a long way since 1989, so instead of following their proof, I will give another proof, which uses computation (done in Sage).
That’s way too many 2’s, this SRG can’t exist
Suppose $X$ is a SRG with parameter set $(49,16,3,6)$. This graph has eigenvalues $$16, 2^{(32)}, \text{ and } -5^{(16)}$$ where the multiplicities are given as superscripts. We’ll proceed by trying to build $X$. First, we “hang a graph by a vertex” as follows:
The main thing to note right away is that the neighbourhood of a vertex $x$ in $X$, denoted $\Gamma_1(x)$ in the diagram, is a $3$-regular graph on $16$ vertices. This is excellent news because there are only 4060 such graphs and one can generate them using geng, or download them from Gordon Royle’s website.
Next, we apply interlacing. We denote by $\lambda_i(G)$ the $i$th largest eigenvalue of $G$. Since $\Gamma_1(x)$ is an induced subgraph of $X$, the interlacing theorem tells us $$ \lambda_i(X) \geq \lambda_i(\Gamma_1(x)) \geq \lambda_{33 + i}(X)$$ for $i=1,\ldots,16$. In particular, we see that the second largest eigenvalue of $\Gamma_1(x)$ is at most $2$. In general, this gives us all kinds of useful information (for example, that $\Gamma_1(x)$ has to be connected), but since we only care about cubic graphs on $16$ vertices, we can just go ahead and check which of them have this property. It turns out that only 35 cubic graphs on 16 vertices have this property.
If we have an induced subgraph $Y$ of $X$ on 18 vertices, the interlacing theorem gives the following: $$ \lambda_2(X) \geq \lambda_2(Y) \geq \lambda_{33}(X).$$ Recalling that the second and 33rd largest eigenvalues of $X$ are both $2$, we see that any induced subgraph of $X$ on 18 vertices must have its second eigenvalue equal to $2$. Since $\Gamma_1(x)$ already has 16 vertices, we’re not far off. Let $y$ be a vertex in the second neighbourhood of $x$, and consider the subgraph $Y$ induced by $x$, the neighbours of $x$, and $y$.
In terms of the computation, we try to add $x$ and $y$ to each of the remaining 35 cubic graphs. We can always choose $y$ to be adjacent to a fixed vertex in $\Gamma_1(x)$, and so there are ${15 \choose 5}$ ways of adding the other $5$ edges incident to $y$. Thus we have 35(3003) candidates for $Y$. A quick computation later, we see that none of those graphs have $2$ as the second largest eigenvalue, which contradicts the existence of $X$.
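In case it helps to see the filter spelled out, here is a small sketch of the two spectral checks (plain numpy rather than Sage; the helper names are mine, and generating the candidate graphs, e.g. with geng, is assumed to happen elsewhere):

```python
# Spectral checks used above: a candidate adjacency matrix survives only if
# interlacing permits it to sit inside X.
import numpy as np

def second_largest_eig(adj):
    """Second-largest eigenvalue of a symmetric 0/1 adjacency matrix."""
    return np.linalg.eigvalsh(adj)[-2]      # eigvalsh returns eigenvalues in ascending order

def ok_as_first_neighbourhood(adj, tol=1e-8):
    # 16-vertex cubic candidates for Gamma_1(x): interlacing forces lambda_2 <= 2
    return second_largest_eig(adj) <= 2 + tol

def ok_as_18_vertex_subgraph(adj, tol=1e-8):
    # 18-vertex graphs Y built from x, Gamma_1(x) and y: interlacing forces lambda_2 = 2
    return abs(second_largest_eig(adj) - 2) <= tol

# cubic_graphs = adjacency matrices of the 4060 cubic graphs on 16 vertices (from geng)
# survivors = [A for A in cubic_graphs if ok_as_first_neighbourhood(A)]   # 35 remain
```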
So what? And also, now what?
It’s fun proving the non-existence of a SRG that is already known not to exist, but clearly what we actually want to do is convert some yellow entries of Brouwer’s table to either red or green. The method here (as well as that of the original paper of Bussemaker, Haemers, Mathon and Wilbrink) works because the multiplicity of $2$ as an eigenvalue is really large. The degree of the graph is equal to the multiplicity of its smallest eigenvalue, which smells of some kind of duality. This happens, for example, in the complement of the Clebsch graph (which, by the way, is formally self-dual). The next open case where this occurs is $(100,33,8,12)$. In this case, the first neighbourhood of a vertex would be an $8$-regular graph on 33 vertices, with a fairly large spectral gap. We probably don’t want to be generating all such graphs, so some new idea is needed.
|
Trivial forgery: If you know the CRC polynomial, then you can add (‘xor’) any multiple of it to the message without changing the CRC.
Short answer.If the hardware can only evaluate CRCs with fixed polynomials determined by the hardware designer, it's of little use to you for making a MAC. If you can choose the polynomial and the hardware keeps it secret, well, you've already found some references on how to do this!
The standard way to make a MAC out of a CRC, which we might call a polynomial division MAC and is sometimes called Rabin fingerprinting after a similar idea by Rabin, is to choose a secret irreducible polynomial $g \in \operatorname{GF}(2)[x]$ and a secret polynomial $h \in \operatorname{GF}(2)[x]$, both of degree $d$, uniformly at random; then, to authenticate a single message polynomial $m \in \operatorname{GF}(2)[x]$, transmit the polynomial $h + (m \cdot x^d \bmod g)$ as an authentication tag.
(The Rabin fingerprint is simply $m \bmod g$; it preserves addition of any degree ${<}d$ polynomial to $m$, enabling trivial forgery
even if masked by $h$, so it is useless as an in-band authenticator, though it is more convenient for other purposes.)
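To make the linearity behind these trivial forgeries concrete, here is a toy sketch over $\operatorname{GF}(2)[x]$. The degree-4 polynomial, the bit-packing convention and the helper names are purely illustrative choices; a real CRC32 additionally uses bit reflection and init/xor-out constants, which do not change the underlying algebra.

```python
# Toy polynomial-division "CRC": crc(m) = m(x) * x^d mod g(x) over GF(2).
# Xoring any multiple of g into the message leaves the CRC unchanged.

def poly_mod(a, g):
    """Remainder of a(x) modulo g(x) over GF(2); polynomials packed as Python ints."""
    dg = g.bit_length() - 1
    while a and a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)   # cancel the leading term of a
    return a

def poly_mul(a, b):
    """Carry-less (GF(2)) multiplication."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

g, d = 0b10011, 4                    # example generator: x^4 + x + 1 (irreducible)

def crc(m):
    return poly_mod(m << d, g)

m = 0b1101011011                     # some message polynomial
delta = poly_mul(g, 0b1011)          # an arbitrary multiple of g
forged = m ^ delta

assert crc(m) == crc(forged)         # the CRC cannot tell the two messages apart
print(bin(crc(m)), bin(crc(forged)))
```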
By a standard result of number theory (alternative exposition), when $d$ is prime, there are $(2^d - 2)/d$ irreducible polynomials of degree $d$; for composite $d$ there are fewer, so prime $d$ gives the most efficient key space. But this does not cover
all $d$-bit strings, and to generate a key, choosing an irreducible polynomial uniformly at random, e.g. using the algorithm in Rabin's paper (pp. 3–4), is costly. Using a product of irreducible polynomials whose degrees sum to $d$ enables cheaper key generation, but key space and tag space is wasted even more than with an irreducible $g$—the forgery probability is exponential in the number of irreducible factors.
Worse, computing polynomial division in $\operatorname{GF}(2)[x]$ by an arbitrary divisor is costly. Some CRC hardware admits programmable generator polynomials, but typical hardware uses a fixed polynomial, such as the Intel CRC32 instruction, which is useless for this—not to mention that a 32-bit tag is too small to provide any security. And in software implementations one is tempted to use table lookups to get any semblance of performance—invariably leaking secrets through timing side channels. So it's dangerous to design this into any protocols, because you're designing the temptation of timing side channels into the protocols.
There are perfectly good efficient MACs already for hardware and software: polynomial evaluation MACs. Fix a field $k$; a one-time key is a pair of elements $r, s \in k$ chosen uniformly at random, a message is a polynomial $m$ over $k$ of degree at most $n$, and the authenticator of the message is $t = m(r) + s$. The adversary's probability of success at forging a tag $t'$ on a message $m' \ne m$ given $m$ and $t$ is no more than $n/\#k$, by a standard argument of counting roots of the polynomial $m'(r) - m(r) + t - t'$ in the variable $r$. The most common examples are Poly1305, where $k$ is the prime field $\mathbb Z/(2^{130} - 5)\mathbb Z$ convenient in software, and GHASH, where $k$ is the binary field $\operatorname{GF}(2^{128})$ convenient in hardware.
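For illustration, the algebraic core of a one-time polynomial evaluation MAC fits in a few lines. This is only a sketch over a prime field with made-up encoding choices, not Poly1305 itself (which fixes a particular block encoding and clamps the key):

```python
import secrets

P = 2**130 - 5                        # a convenient prime field, as used by Poly1305

def keygen():
    # one-time key: a pair (r, s) of field elements chosen uniformly at random
    return secrets.randbelow(P), secrets.randbelow(P)

def mac(blocks, r, s):
    """blocks: list of field elements m_1, ..., m_n.  The tag is m(r) + s, where
    m(z) = m_1 z^n + ... + m_n z (no constant term), evaluated by Horner's rule."""
    acc = 0
    for m_i in blocks:
        acc = (acc + m_i) * r % P
    return (acc + s) % P

r, s = keygen()
msg = [314159, 271828, 1618]
tag = mac(msg, r, s)
assert mac(msg, r, s) == tag
assert mac([314159, 271828, 1619], r, s) != tag   # fails only with probability <= n / P
```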
Generating keys—picking elements of a field uniformly at random—is fast: for a binary field, uniform random bit strings work; for a large prime field (${>}2^{128}$), rejection sampling on uniform random bit strings works, and the probability of rejection is so small you can ignore it safely. Derive $r$ and $s$ from a PRF of a message sequence number or a sufficiently large random token and you get a many-time MAC.
The only reason to reach for a polynomial division MAC instead of a polynomial evaluation MAC is the bizarre scenario where (a) you do not have fast binary polynomial multiplication or integer arithmetic available, yet (b) you do have hardware that can efficiently evaluate polynomial division by arbitrary degree-${\geq}128$ divisors, and these performance constraints override all other considerations in your protocol design.
|
I am currently working on the following Problem:
Imagine you are given a $d$-ary tree $T_d$, which means an infinite tree with one vertex $x_0$ on top and in which each vertex has $d$ children.
Next, I'm assigning iid random variables $\omega_e$ to all edges $e$ in the graph. Here $\omega_e$ should follow an exponential distribution with mean $d$.
Now define $$X_n:=\min\{\frac{1}{|t|}\sum_{e\in t}\omega_e: t\text{ is a subgraph of }T_d, |t|=n, x_0\in t\}.$$
where, in a slight abuse of notation, I wrote $|t|=n$ to say "$t$ has exactly $n$ edges" and $e\in t$ just means "the edge $e$ is contained in the edge set of the subgraph $t$"
So I'm basically looking for the "lightest" or "shortest" subgraphs of a given size $n$ and then computing their mean weight.
So much for the model, and here comes the actual question:
Does the sequence $X_n$ satisfy a large deviations principle for $n\to\infty$? If yes, what are speed and rate function?
I already tried applying the Gärtner-Ellis Theorem but couldn't show that $$\Lambda(\lambda)=\lim_{n\to\infty}\frac{1}{n}\log\mathbb{E}[e^{n\lambda X_n}]$$ is indeed finite for every $\lambda\in \mathbb{R}$ (which is one of the two prerequisites of the Theorem, the other being that the map $\lambda\mapsto\Lambda(\lambda)$ is differentiable.)
In case it helps: you can use that $X_n$ converges in probability to a constant $c$ for $n\to\infty$ (I already proved that).
I also proved that the sequence $X_n$ converges in $\mathcal{L}^1$, however almost sure convergence is still missing (and that's actually the reason why I'm interested in an LDP)...
Since this is my first question on MO, I'm not quite sure if it fits on this site. It is however an actual research problem for me and it also seemed to be too "hard" for MSE.
|
EditI realized that the key piece of information that I need is question 1, and so I'd like to rephrase this post:
What are the possible eigenvalues of nonnegative integer matrices?
Any answer to this question would be appreciated and checkmarked.
Original Question:
My question is related to this question about integer nonnegative matrices but goes in a slightly different direction. Like the previous poster, my question comes from solving linear recursions (specifically, computing the discrete modulus of a product finite subdivision rule acting on a grid).
Given $A$ a square matrix with nonnegative integer entries and $v$ a column vector of the same dimension, we can use the Jordan canonical form to get a closed expression for $A^n(v)$. If we sum the entries of $A^n(v)$, we get a function of the form $f(n)=a_1 P_1(n)\lambda_1^n+\dots+a_kP_k(n)\lambda_k^n$, where the $\lambda$'s are the distinct eigenvalues and each $P_i$ is a
monic polynomial in $n$.
My question is, what are the possible values for the $a_i$ and the $\lambda_i$? In particular:
1. Can the $\lambda_i$ be any algebraic integer?
2. Can the $a_i$ be non-integers?
3. (My real question): Given an algebraic integer $\lambda$, can we construct two matrices $A_1$ and $A_2$ such that the ratio of their associated growth functions $\frac{f_1(n)}{f_2(n)}$ (where, as above, $f_i(n)$ is the sum of the entries of $A_i^n v$) has limit $\lambda$?

The limit of such a fraction will be 0 unless the 'largest terms' of each function have the same magnitude; my worry is that it would be impossible to get some algebraic integers in this way because they have Galois conjugates of equal or larger size. But it seems that the column vector $v$ might allow one to 'cancel out' unwanted eigenvalues. Is this possible? Is it even possible to get $\sqrt{3}$ in this way?
Thank you for your help! The first two questions seem like they would be easily answerable by experts in matrix theory, but google searches have led to nothing. I appreciate in advance your help.
An example of $f_1(n)$
Let $A=\left[ \begin{array}{cc} 1 & 2 \\ 0 & 2 \end{array} \right]$. It can be diagonalized as $A=\left[ \begin{array}{cc} 1 & 2 \\ 0 & 1 \end{array} \right] \left[ \begin{array}{cc} 1 & 0 \\ 0 & 2 \end{array} \right] \left[ \begin{array}{cc} 1 & -2 \\ 0 & 1 \end{array} \right]$. Now, let $v = \left[ \begin{array}{c} 2 \\ 3 \end{array} \right] $.
Then $A^{n} v= \left[ \begin{array}{cc} 1 & 2 \\ 0 & 1 \end{array} \right] \left[ \begin{array}{cc} 1 & 0 \\ 0 & 2^{n} \end{array} \right] \left[ \begin{array}{cc} 1 & -2 \\ 0 & 1 \end{array} \right] \left[ \begin{array}{c} 2 \\ 3 \end{array} \right] = \left[ \begin{array}{cc} 1 & 2^{n+1} -2 \\ 0 & 2^{n} \end{array} \right] \left[ \begin{array}{c} 2 \\ 3 \end{array} \right] = \left[ \begin{array}{c} 3(2^{n+1})-4 \\ 3(2^{n}) \end{array} \right]$. Adding all the entries of this matrix together, we see that the growth function $f(n)$ is $3(2^{n+1} + 2^{n})-4=9(2^{n})-4$.
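As a quick sanity check of this example, the closed form can be verified numerically; the short script below (numpy, with names of my choosing) is just such a check:

```python
# Verify that the sum of the entries of A^n v equals 9 * 2^n - 4.
import numpy as np

A = np.array([[1, 2],
              [0, 2]])
v = np.array([2, 3])

def f(n):
    """Sum of the entries of A^n v."""
    return int((np.linalg.matrix_power(A, n) @ v).sum())

for n in range(10):
    assert f(n) == 9 * 2**n - 4
print([f(n) for n in range(5)])   # [5, 14, 32, 68, 140]
```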
|
In the rectangular coordinate system, the definite integral provides a way to calculate the area under a curve. In particular, if we have a function \(y=f(x)\) defined from \(x=a\) to \(x=b\) where \(f(x)>0\) on this interval, the area between the curve and the x-axis is given by
\[A=\int ^b_af(x)dx. \nonumber\]
This fact, along with the formula for evaluating this integral, is summarized in the Fundamental Theorem of Calculus. Similarly, the arc length of this curve is given by
\[L=\int ^b_a\sqrt{1+(f′(x))^2}dx. \nonumber\]
In this section, we study analogous formulas for area and arc length in the polar coordinate system.
Areas of Regions Bounded by Polar Curves
We have studied the formulas for area under a curve defined in rectangular coordinates and parametrically defined curves. Now we turn our attention to deriving a formula for the area of a region bounded by a polar curve. Recall that the proof of the Fundamental Theorem of Calculus used the concept of a Riemann sum to approximate the area under a curve by using rectangles. For polar curves we use the Riemann sum again, but the rectangles are replaced by sectors of a circle.
Consider a curve defined by the function \(r=f(θ),\) where \(α≤θ≤β.\) Our first step is to partition the interval \([α,β]\) into
n equal-width subintervals. The width of each subinterval is given by the formula \(Δθ=(β−α)/n\), and the ith partition point \(θ_i\) is given by the formula \(θ_i=α+iΔθ\). Each partition point \(θ=θ_i\) defines a line with slope \(\tan θ_i\) passing through the pole as shown in the following graph.
The line segments are connected by arcs of constant radius. This defines sectors whose areas can be calculated by using a geometric formula. The area of each sector is then used to approximate the area between successive line segments. We then sum the areas of the sectors to approximate the total area. This approach gives a Riemann sum approximation for the total area. The formula for the area of a sector of a circle is illustrated in the following figure.
Recall that the area of a circle is \(A=πr^2\). When measuring angles in radians, 360 degrees is equal to \(2π\) radians. Therefore a fraction of a circle can be measured by the central angle \(θ\). The fraction of the circle is given by \(\dfrac{θ}{2π}\), so the area of the sector is this fraction multiplied by the total area:
\[A=(\dfrac{θ}{2π})πr^2=\dfrac{1}{2}θr^2.\]
Since the radius of a typical sector in Figure is given by \(r_i=f(θ_i)\), the area of the
ith sector is given by
\[A_i=\dfrac{1}{2}(Δθ)(f(θ_i))^2.\]
Therefore a Riemann sum that approximates the area is given by
\[A_n=\sum_{i=1}^nA_i≈\sum_{i=1}^n\dfrac{1}{2}(Δθ)(f(θ_i))^2.\]
We take the limit as \(n→∞\) to get the exact area:
\[A=\lim_{n→∞}A_n=\dfrac{1}{2}\int ^β_α(f(θ))^2dθ.\]
This gives the following theorem.
Area of a Region Bounded by a Polar Curve
Suppose \(f\) is continuous and nonnegative on the interval \(α≤θ≤β\) with \(0<β−α≤2π\). The area of the region bounded by the graph of \(r=f(θ)\) between the radial lines \(θ=α\) and \(θ=β\) is
\[\begin{align} A&=\dfrac{1}{2}\int ^β_α[f(θ)]^2 dθ \\[4pt] &=\dfrac{1}{2}\int ^β_αr^2 dθ. \label{areapolar}\end{align}\]
Example \(\PageIndex{1}\): Finding an Area of a Polar Region
Find the area of one petal of the rose defined by the equation \(r=3\sin(2θ).\)
Solution
The graph of \(r=3\sin (2θ)\) follows.
When \(θ=0\) we have \(r=3\sin(2(0))=0\). The next value for which \(r=0\) is \(θ=π/2\). This can be seen by solving the equation \(3\sin(2θ)=0\) for \(θ\). Therefore the values \(θ=0\) to \(θ=π/2\) trace out the first petal of the rose. To find the area inside this petal, use Equation \ref{areapolar} with \(f(θ)=3\sin(2θ), α=0,\) and \(β=π/2\):
\[\begin{align*} A &=\dfrac{1}{2}\int ^β_α[f(θ)]^2dθ \\[4pt] &=\dfrac{1}{2}\int ^{π/2}_0[3\sin (2θ)]^2dθ \\[4pt] &=\dfrac{1}{2}\int ^{π/2}_09\sin^2(2θ)dθ. \end{align*}\]
To evaluate this integral, use the formula \(\sin^2α=(1−\cos(2α))/2\) with \(α=2θ:\)
\[\begin{align*} A&=\dfrac{1}{2}\int ^{π/2}_09\sin^2(2θ)dθ \\[4pt] &=\dfrac{9}{2}\int ^{π/2}_0\dfrac{(1−\cos(4θ))}{2}dθ \\[4pt] &=\dfrac{9}{4}(\int ^{π/2}_01−\cos(4θ)dθ) \\[4pt] &=\dfrac{9}{4}(θ−\dfrac{\sin(4θ)}{4}∣^{π/2}_0 \\[4pt] &=\dfrac{9}{4}(\dfrac{π}{2}−\dfrac{\sin 2π}{4})−\dfrac{9}{4}(0−\dfrac{\sin 4(0)}{4}) \\[4pt] &=\dfrac{9π}{8}\end{align*}\]
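As a quick check (not part of the text's derivation), the integral can also be evaluated numerically; the sketch below uses SciPy and should reproduce \(9π/8 \approx 3.534\):

```python
# Numerically verify the petal area: (1/2) * integral of (3 sin 2t)^2 over [0, pi/2].
import numpy as np
from scipy.integrate import quad

area, _ = quad(lambda t: 0.5 * (3 * np.sin(2 * t))**2, 0, np.pi / 2)
print(area, 9 * np.pi / 8)   # both are approximately 3.5343
```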
Exercise \(\PageIndex{1}\)
Find the area inside the cardioid defined by the equation \(r=1−\cos θ\).
Hint
Use Equation \ref{areapolar}. Be sure to determine the correct limits of integration before evaluating.
Answer
\(A=3π/2\)
Example \(\PageIndex{1}\) involved finding the area inside one curve. We can also use Equation \ref{areapolar} to find the area between two polar curves. However, we often need to find the points of intersection of the curves and determine which function defines the outer curve or the inner curve between these two points.
Example \(\PageIndex{2}\): Finding the Area between Two Polar Curves
Find the area outside the cardioid \(r=2+2\sin θ\) and inside the circle \(r=6\sin θ\).
Solution
First draw a graph containing both curves as shown.
To determine the limits of integration, first find the points of intersection by setting the two functions equal to each other and solving for \(θ\):
\[\begin{align*} 6 \sin θ &=2+2\sin θ \\[4pt] 4\sin θ &=2 \\[4pt] \sin θ &=\dfrac{1}{2}. \end{align*}\]
This gives the solutions \(θ=\dfrac{π}{6}\) and \(θ=\dfrac{5π}{6}\), which are the limits of integration. The circle \(r=6\sin θ\) is the red graph, which is the outer function, and the cardioid \(r=2+2\sin θ\) is the blue graph, which is the inner function. To calculate the area between the curves, start with the area inside the circle between \(θ=\dfrac{π}{6}\) and \(θ=\dfrac{5π}{6}\), then subtract the area inside the cardioid between \(θ=\dfrac{π}{6}\) and \(θ=\dfrac{5π}{6}\):
\(A=\text{circle}−\text{cardioid}\)
\(=\dfrac{1}{2}\int ^{5π/6}_{π/6}[6\sin θ]^2dθ−\dfrac{1}{2}\int ^{5π/6}_{π/6}[2+2\sin θ]^2dθ\)
\(=\dfrac{1}{2}\int ^{5π/6}_{π/6}36\sin^2θdθ−\dfrac{1}{2}\int ^{5π/6}_{π/6}4+8\sin θ+4\sin^2θdθ\)
\(=18\int ^{5π/6}_{π/6}\dfrac{1−\cos(2θ)}{2}dθ−2\int ^{5π/6}_{π/6}1+2\sin θ+\dfrac{1−\cos(2θ)}{2}dθ\)
\(=9[θ−\dfrac{\sin(2θ)}{2}]^{5π/6}_{π/6}−2[\dfrac{3θ}{2}−2\cos θ−\dfrac{\sin(2θ)}{4}]^{5π/6}_{π/6}\)
\(=9\left(\dfrac{5π}{6}−\dfrac{\sin(2(5π/6))}{2}\right)−9\left(\dfrac{π}{6}−\dfrac{\sin(2(π/6))}{2}\right)−\left(3\left(\dfrac{5π}{6}\right)−4\cos\dfrac{5π}{6}−\dfrac{\sin(2(5π/6))}{2}\right)+\left(3\left(\dfrac{π}{6}\right)−4\cos\dfrac{π}{6}−\dfrac{\sin(2(π/6))}{2}\right)\)
\(=4π\).
Exercise \(\PageIndex{2}\)
Find the area inside the circle \(r=4\cos θ\) and outside the circle \(r=2\).
Hint
Use Equation \ref{areapolar} and take advantage of symmetry.
Answer
\(A=\dfrac{4π}{3}+2\sqrt{3}\)
In Example we found the area inside the circle and outside the cardioid by first finding their intersection points. Notice that solving the equation directly for \(θ\) yielded two solutions: \(θ=\dfrac{π}{6}\) and \(θ=\dfrac{5π}{6}\). However, in the graph there are three intersection points. The third intersection point is the origin. The reason why this point did not show up as a solution is because the origin is on both graphs but for different values of \(θ\). For example, for the cardioid we get
\[\begin{align*} 2+2\sin θ&=0 \\[4pt] \sin θ &=−1, \end{align*}\]
so the values for \(θ\) that solve this equation are \(θ=\dfrac{3π}{2}+2nπ\), where \(n\) is any integer. For the circle we get
\[6\sin θ=0.\]
The solutions to this equation are of the form \(θ=nπ\) for any integer value of \(n\). These two solution sets have no points in common. Regardless of this fact, the curves intersect at the origin. This case must always be taken into consideration.
Arc Length in Polar Curves
Here we derive a formula for the arc length of a curve defined in polar coordinates. In rectangular coordinates, the arc length of a parameterized curve \((x(t),y(t))\) for \(a≤t≤b\) is given by
\[L=\int ^b_a\sqrt{\left(\dfrac{dx}{dt}\right)^2+\left(\dfrac{dy}{dt}\right)^2}dt.\]
In polar coordinates we define the curve by the equation \(r=f(θ)\), where \(α≤θ≤β.\) In order to adapt the arc length formula for a polar curve, we use the equations
\[x=r\cos θ=f(θ)\cos θ\]
and
\[y=r\sin θ=f(θ)\sin θ,\]
and we replace the parameter \(t\) by \(θ\). Then
\[\dfrac{dx}{dθ}=f′(θ)\cos θ−f(θ)\sin θ\]
\[\dfrac{dy}{dθ}=f′(θ)\sin θ+f(θ)\cos θ.\]
We replace \(dt\) by \(dθ\), and the lower and upper limits of integration are \(α\) and \(β\), respectively. Then the arc length formula becomes
\[ \begin{align*} L&=\int ^b_a\sqrt{\left(\dfrac{dx}{dt}\right)^2+\left(\dfrac{dy}{dt}\right)^2}dt \\[4pt] & =\int ^β_α\sqrt{\left(\dfrac{dx}{dθ}\right)^2+\left(\dfrac{dy}{dθ}\right)^2}dθ \\[4pt] &=\int ^β_α\sqrt{(f′(θ)\cos θ−f(θ)\sin θ)^2+(f′(θ)\sin θ+f(θ)\cos θ)^2}dθ \\[4pt] &=\int ^β_α\sqrt{(f′(θ))^2(\cos^2 θ+\sin^2 θ)+(f(θ))^2(\cos^2 θ+\sin^2 θ)}dθ \\[4pt] &=\int ^β_α\sqrt{(f′(θ))^2+(f(θ))^2}dθ \\[4pt] &=\int ^β_α\sqrt{r^2+\left(\dfrac{dr}{dθ}\right)^2}dθ \end{align*}\]
This gives us the following theorem.
Arc Length of a Curve Defined by a Polar Function
Let \(f\) be a function whose derivative is continuous on an interval \(α≤θ≤β\). The length of the graph of \(r=f(θ)\) from \(θ=α\) to \(θ=β\) is
\[ \begin{align} L&=\int ^β_α\sqrt{[f(θ)]^2+[f′(θ)]^2}dθ \label{arcpolar1} \\[4pt] &=\int ^β_α\sqrt{r^2+\left(\dfrac{dr}{dθ}\right)^2}dθ. \label{arcpolar2} \end{align}\]
Example \(\PageIndex{3}\): Finding the Arc Length of a cardioid
Find the arc length of the cardioid \(r=2+2\cos θ\).
Solution
When \(θ=0,r=2+2\cos 0 =4.\) Furthermore, as \(θ\) goes from \(0\) to \(2π\), the cardioid is traced out exactly once. Therefore these are the limits of integration. Using \(f(θ)=2+2\cos θ, α=0,\) and \(β=2π,\) Equation \ref{arcpolar1} becomes
\[\begin{align*} L&=\int ^β_α\sqrt{[f(θ)]^2+[f′(θ)]^2}dθ \\[4pt] &=\int ^{2π}_0\sqrt{[2+2\cos θ]^2+[−2\sin θ]^2}dθ \\[4pt] &=\int ^{2π}_0\sqrt{4+8\cos θ+4\cos^2θ+4\sin^2θ}dθ \\[4pt] &=\int ^{2π}_0\sqrt{4+8\cos θ+4(\cos^2θ+\sin^2θ)}dθ \\[4pt] &=\int ^{2π}_0\sqrt{8+8\cos θ}dθ \\[4pt] &=2\int ^{2π}_0\sqrt{2+2\cos θ}dθ. \end{align*}\]
Next, using the identity \(\cos(2α)=2\cos^2α−1,\) add 1 to both sides and multiply by 2. This gives \(2+2\cos(2α)=4\cos^2α.\) Substituting \(α=θ/2\) gives \(2+2\cos θ=4\cos^2(θ/2)\), so the integral becomes
\[\begin{align*} L&=2\int ^{2π}_0\sqrt{2+2\cos θ}dθ \\[4pt] &=2\int ^{2π}_0\sqrt{4\cos^2\left(\dfrac{θ}{2}\right)}dθ \\[4pt] &=2\int ^{2π}_0 2∣\cos\left(\dfrac{θ}{2}\right)∣dθ.\end{align*}\]
The absolute value is necessary because the cosine is negative for some values in its domain. To resolve this issue, change the limits of integration from \(0\) to \(π\) and double the answer. This works because \(\cos(θ/2)\) is nonnegative for \(θ\) between \(0\) and \(π\), and by symmetry the integral over \([π,2π]\) has the same value. Thus,
\[\begin{align*} L&=4\int ^{2π}_0∣\cos(\dfrac{θ}{2})∣dθ \\[4pt] &=8\int ^π_0 \cos(\dfrac{θ}{2})dθ \\[4pt] &=8(2\sin(\dfrac{θ}{2})∣^π_0 \\[4pt] &=16\end{align*}\]
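Again as an optional numerical check (a sketch using SciPy, not part of the derivation), Equation \ref{arcpolar2} can be integrated directly and should return \(16\):

```python
# Numerically verify the cardioid arc length via L = integral of sqrt(r^2 + (dr/dθ)^2).
import numpy as np
from scipy.integrate import quad

r  = lambda t: 2 + 2 * np.cos(t)
dr = lambda t: -2 * np.sin(t)

L, _ = quad(lambda t: np.sqrt(r(t)**2 + dr(t)**2), 0, 2 * np.pi)
print(L)   # approximately 16.0
```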
Exercise \(\PageIndex{3}\)
Find the total arc length of \(r=3\sin θ\).
Hint
Use Equation \ref{arcpolar1}. To determine the correct limits, make a table of values.
Answer
\(s=3π\)
Key Concepts
The area of a region in polar coordinates defined by the equation \(r=f(θ)\) with \(α≤θ≤β\) is given by the integral \(A=\dfrac{1}{2}\int ^β_α[f(θ)]^2dθ\).
To find the area between two curves in the polar coordinate system, first find the points of intersection, then subtract the corresponding areas.
The arc length of a polar curve defined by the equation \(r=f(θ)\) with \(α≤θ≤β\) is given by the integral \(L=\int ^β_α\sqrt{[f(θ)]^2+[f′(θ)]^2}dθ=\int ^β_α\sqrt{r^2+(\dfrac{dr}{dθ})^2}dθ\).
Key Equations
Area of a region bounded by a polar curve
\[A=\dfrac{1}{2}\int ^β_α[f(θ)]^2dθ=\dfrac{1}{2}\int ^β_αr^2dθ \nonumber\]
Arc length of a polar curve
\[L=\int ^β_α\sqrt{[f(θ)]^2+[f′(θ)]^2}dθ=\int ^β_α\sqrt{r^2+(\dfrac{dr}{dθ})^2}dθ \nonumber\]
Contributors
Gilbert Strang (MIT) and Edwin “Jed” Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. Download for free at http://cnx.org.
|
I'll give you the numbers. I'm breaking this up into 3 different terms. There's atmospheric drag, what I'll call the "hover" term, and the gravitational potential climb. I will more or less assume a flight directly up. You're welcome to use whatever term for velocity you want, as none of them will be representative. I'll take the Shuttle's speed at halfway to max Q. This is 1000 ft/s, or about 300 m/s.
You would think atmospheric drag would be very difficult. It's actually not. In any case, you would probably use the v^2 relationship for drag. But if you think about where that comes from, it basically assumes that all the air in front of you is accelerated to the speed of your craft (minus any departure from unity in the drag coefficient). So for a good approximation, just take the mass-thickness (I call mu) for the entire atmosphere, and multiply by the metric for velocity.
Also, I'll use the numbers for Falcon 9, which is a diameter of 3.66 meters and launch mass of 333,400 kg. Yes, a lot of these numbers change over the course of the flight, but in ways that are fairly obvious if you changed this to do numerical integration.
$$\Delta V (drag) = 1/2 \mu C_d A v / M $$$$= (0.5)(10 \text{ tonnes} / m^2)(0.5)\pi(3.66/2 m)^2(300 m/s)/(333.4 \text{ tonnes}) $$$$= 23.7 m/s$$
Wow. That is not much. Maybe velocity should be higher. But still, out of 10 km/s total, this is a tiny amount. Atmospheric drag complicates launches, but not much because of its Delta v value.
Next, the "hover" term. This represents the gravity drag. Again, I'm forced to assume a pretty much upward launch. I'll also compare sea-level to Mt. Everest, at a height of 8,848 m. Not that you'd set up a launchpad there, but we need this to answer the question.
$$\Delta V = g h / v= (9.8 m/s^2)(8,848 m)/(300 m/s) \approx 289 m/s$$
Now this is much more significant. This isn't all of the gravity drag either. It's still sucking your delta v budget after you're out of the atmosphere, until you get to full orbital velocity.
Let's move on to the gravitational potential itself.
$$\Delta V = \sqrt( g h )= \sqrt( (9.8 m/s^2) (8,848 m) ) = 294.5 m/s$$
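For convenience, the three back-of-the-envelope terms can be collected in a short script (same assumed numbers as above; the variable names are mine):

```python
import math

mu = 10_000.0                     # mass-thickness of the atmosphere, kg/m^2 (about 10 tonnes/m^2)
Cd = 0.5                          # drag coefficient
A  = math.pi * (3.66 / 2)**2      # Falcon 9 frontal area, m^2
M  = 333_400.0                    # launch mass, kg
v  = 300.0                        # representative ascent speed, m/s
g  = 9.8                          # m/s^2
h  = 8_848.0                      # height of Mt. Everest, m

dv_drag      = 0.5 * mu * Cd * A * v / M   # atmospheric drag term, about 24 m/s
dv_hover     = g * h / v                   # "hover" (gravity drag) term, about 289 m/s
dv_potential = math.sqrt(g * h)            # gravitational potential term, about 294 m/s

print(dv_drag, dv_hover, dv_potential, dv_drag + dv_hover + dv_potential)
```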
The sum of all these is a ballpark estimate of the benefit you would get from changing your launch location from sea-level to Mt. Everest. Honestly though, you save a comparable amount just by moving it down to the equator, where the Earth's rotation gives you a bigger boost.
Anyway, this is roughly 607 m/s out of a total budget of 10 km/s. So it would be less than 10%. By the rocket equation, this can still make a difference. But then again, actual costs are complicated.
|
My machine learning textbook states the following when discussing second-order Taylor series approximations in the context of Gradient descent:
The (directional) second derivative tells us how well we can expect a gradient descent step to perform. We can make a second-order Taylor series approximation to the function $f(\mathbf{x})$ around the current point $\mathbf{x}^{(0)}$:
$$f(\mathbf{x}) \approx f(\mathbf{x}^{(0)}) + (\mathbf{x} - \mathbf{x}^{(0)})^T \mathbf{g} + \dfrac{1}{2} (\mathbf{x} - \mathbf{x}^{(0)})^T\mathbf{H}(\mathbf{x} - \mathbf{x}^{(0)}),$$
where $\mathbf{g}$ is the gradient and $\mathbf{H}$ is the Hessian at $\mathbf{x}^{(0)}$. If we use a learning rate of $\epsilon$, then the new point $\mathbf{x}$ will be given by $\mathbf{x}^{(0)} - \epsilon \mathbf{g}$. Substituting this into our approximation, we obtain
$$f(\mathbf{x}^{(0)} - \epsilon \mathbf{g}) \approx f(\mathbf{x}^{(0)}) - \epsilon \mathbf{g}^T \mathbf{g} + \dfrac{1}{2} \epsilon^2 \mathbf{g}^T \mathbf{H} \mathbf{g}.$$
There are three terms here: the original value of the function, the expected improvement due to the slope of the function, and the correction we must apply to account for the curvature of the function. When this last term is too large, the gradient descent step can actually move uphill.
Goodfellow, Ian; Bengio, Yoshua; Courville, Aaron; Bach, Francis.
Deep Learning (Page 85).
I have a couple of points of confusion with regards to this explanation:
Which term is the expected improvement due to the slope of the function, and which is the correction we must apply to account for the curvature of the function?
Why do we need a "correction" to account for the curvature of the function?
Can someone please elaborate on, and be more specific with regards to, what is meant by "when this last term is too large, the gradient descent step can actually move uphill."?
I would greatly appreciate it if people could please take the time to clarify this.
EDIT:
With regards to my first question, I deduced from here that the term $- \epsilon \mathbf{g}^T \mathbf{g}$ is the improvement due to the slope of the function (the improvement in error afforded by the magnitude of the gradient), and the term $\dfrac{1}{2} \epsilon^2 \mathbf{g}^T \mathbf{H} \mathbf{g}$ is the correction to account for the curvature of the function (correction term that incorporates the curvature of the surface as represented by the Hessian matrix).
But, as far as I can tell, $- \epsilon \mathbf{g}^T \mathbf{g}$ isn't a magnitude, since, on an $n$-dimensional Euclidean space $\mathbb{R}^n$, the magnitude/norm/$2$-norm/$L^2$-norm of a vector $\mathbf{x} = (x_1, x_2, \dots, x_n)$ is $||\mathbf{x}||_2 = \sqrt{x_1^2 + x_2^2 + \dots + x_n^2}$. So how is it the "improvement in error afforded by the
magnitude of the gradient"?
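For what it's worth, here is a small toy sketch (my own example, not from the book) that makes the three terms concrete on a quadratic, where the second-order expansion is exact and the step visibly moves uphill once the curvature term dominates:

```python
# f(x) = 1/2 x^T H x.  For a quadratic the second-order Taylor expansion is exact, so
# f(x0 - eps*g) - f(x0) = -eps * g^T g + (eps^2 / 2) * g^T H g.
import numpy as np

H = np.diag([1.0, 100.0])          # strong curvature along the second coordinate
f = lambda x: 0.5 * x @ H @ x
grad = lambda x: H @ x

x0 = np.array([1.0, 1.0])
g = grad(x0)

for eps in (0.001, 0.01, 0.02, 0.05):
    x1 = x0 - eps * g
    slope_term = eps * (g @ g)                 # expected improvement due to the slope
    curvature_term = 0.5 * eps**2 * (g @ H @ g)  # correction due to the curvature
    print(f"eps={eps:<6} delta_f={f(x1) - f(x0):+10.4f}  "
          f"slope={slope_term:9.4f}  curvature={curvature_term:9.4f}")

# Once eps exceeds 2 g^T g / (g^T H g) (just over 0.02 here), the curvature term
# outweighs the slope term and f(x1) > f(x0): the "descent" step moves uphill.
```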
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
Journal of Cellular Physiology, ISSN 0021-9541, 05/2016, Volume 231, Issue 5, pp. 1163 - 1170
Unloading induces bone loss and causes disuse osteoporosis. However, the mechanism underlying disuse osteoporosis is still incompletely understood. Here, we...
DENSITY | ACTIVATION | PHYSIOLOGY | RESORPTION | MASS | MICE | INHIBITOR ODANACATIB | POSTMENOPAUSAL WOMEN | EXPRESSION | OSTEOPOROSIS | LEAD | CELL BIOLOGY | Bone and Bones - pathology | Muscular Disorders, Atrophic - complications | Alkaline Phosphatase - metabolism | RNA, Messenger - metabolism | X-Ray Microtomography | Bone and Bones - diagnostic imaging | Muscular Disorders, Atrophic - enzymology | Muscular Disorders, Atrophic - diagnostic imaging | Muscular Disorders, Atrophic - pathology | Imaging, Three-Dimensional | Real-Time Polymerase Chain Reaction | Osteogenesis - genetics | Calcification, Physiologic - genetics | Bone Resorption - etiology | Bone Resorption - physiopathology | Osteoclasts - pathology | Alkaline Phosphatase - genetics | Bone Density | Mice, Inbred C57BL | RNA, Messenger - genetics | Cells, Cultured | Organ Size | Bone and Bones - enzymology | Bone Resorption - enzymology | Bone Resorption - diagnostic imaging | Animals | Cathepsin K - metabolism | Cathepsin K - deficiency | Bone Marrow Cells - metabolism | Cathepsins | Osteoporosis | Bones | Gene expression | Density | Analysis
Journal Article
Physical Review D, ISSN 1550-7998, 04/2014, Volume 89, Issue 7, p. 074030
Using (106.41 +/- 0.86) x 10(6) Psi(3686) events collected with the BESIII detector at BEPCII, we study for the first time the decay chi(cJ) -> eta'K+K- (J =...
ASTRONOMY & ASTROPHYSICS | BESIII | PHYSICS, PARTICLES & FIELDS | Physics - High Energy Physics - Experiment | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences
Journal Article
Chemical Physics Letters, ISSN 0009-2614, 01/2018, Volume 691, pp. 258 - 261
Mobilities of Li -attached butanol isomers, ( -BuOH) Li , ( -BuOH) Li , ( -BuOH) Li , and ( -BuOH) Li , in helium gas cooled by liquid nitrogen were measured...
Butanol isomers | Ion mobility | GASEOUS-IONS | PHYSICS, ATOMIC, MOLECULAR & CHEMICAL | CHEMISTRY, PHYSICAL | STATE | RARE-GASES | POLYATOMIC IONS | MODEL | RANGE
Journal Article
Physical Review Letters, ISSN 0031-9007, 2015, Volume 115, Issue 9, pp. 091803-1 - 091803-7
Journal Article
Journal of Cellular Physiology, ISSN 0021-9541, 05/2016, Volume 231, Issue 5, pp. 1163 - 1170
Journal Article
6. Observation and Spin-Parity Determination of the $X(1835)$ in $J/\psi\rightarrow\gamma K^0_S K^0_S\eta$
06/2015
Phys. Rev. Lett. 115, 091803 (2015) We report an observation of the process $J/\psi\rightarrow\gamma X(1835)\rightarrow\gamma K^0_S K^0_S\eta$ at low $K^0_S...
Physics - High Energy Physics - Experiment
Physics - High Energy Physics - Experiment
Journal Article
7. Circumsporozoite Protein-Specific K d -Restricted CD8+ T Cells Mediate Protective Antimalaria Immunity in Sporozoite-Immunized MHC-I-K d Transgenic Mice
Mediators of Inflammation, ISSN 0962-9351, 2014, Volume 2014
Although the roles of CD8+ T cells and a major preerythrocytic antigen, the circumsporozoite (CS) protein, in contributing protective antimalaria immunity...
RESPONSES | YOELII | LIVER STAGES | VACCINE | HUMANS | MALARIA SPOROZOITES | IMMUNOLOGY | IRRADIATED SPOROZOITES | PLASMODIUM-BERGHEI | CELL BIOLOGY
Journal Article
Physics Letters B, ISSN 0370-2693, 10/2009, Volume 680, Issue 5, pp. 417 - 422
We report on the first measurement of the differential cross section of -meson photoproduction for the exclusive reaction channel. The experiment was performed...
Journal Article
Chemical Physics Letters, ISSN 0009-2614, 01/2018, Volume 691, p. 258
Journal Article
PHYSICAL REVIEW C, ISSN 0556-2813, 08/2007, Volume 76, Issue 2, p. 025208
Photoproduction of the cascade resonances has been investigated in the reactions gamma p -> K+K+(X) and gamma p -> K+K+pi(-)(X). The mass splitting of the...
SYSTEM | JLAB | ENERGY | CLAS | RESONANCE | INCLUSIVE PHOTOPRODUCTION | MASS | PHYSICS, NUCLEAR | BARYONS | RANGE | Physics - Nuclear Experiment | Nuclear Experiment | Physics
Journal Article
Physical Review Letters, ISSN 0031-9007, 08/2015, Volume 115, Issue 9
Journal Article
Physical Review D - Particles, Fields, Gravitation and Cosmology, ISSN 1550-7998, 05/2015, Volume 91, Issue 9
Journal Article
Physics Letters B, ISSN 0370-2693, 06/2014, Volume 734, pp. 227 - 233
We study pairs produced in collisions at using a data sample of 2.92 fb collected with the BESIII detector. We measured the asymmetry of the branching...
[formula omitted] oscillation | BESIII | Strong phase difference | experimental results | CP [asymmetry] | Nuclear and High Energy Physics | hadronic decay [psi] | anti-D0 --> K- pi | High Energy Physics - Experiment | 3.77 GeV-cms | D0–D¯0 oscillation | D0 --> K- pi | hadronic decay [D0] | Beijing Stor | annihilation [electron positron] | D0–D̅0 Oscillation | mixing [D0 anti-D0] | psi --> D0 anti-D0 | pair production [D0] | unitarity [CKM matrix]
Journal Article
Physical Review D, ISSN 1550-7998, 06/2015, Volume 91, Issue 11
Journal Article
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, ISSN 0370-2693, 2014, Volume 734, pp. 227 - 233
Journal Article
Physical Review D - Particles, Fields, Gravitation and Cosmology, ISSN 1550-7998, 06/2015, Volume 91, Issue 11
Journal Article
17. Observation of a Charged Charmoniumlike Structure in e(+)e(-) -> pi(+)pi(-) J/psi at root s=4.26 GeV
Physical Review Letters, ISSN 0031-9007, 06/2013, Volume 110, Issue 25
We study the process e(+)e(-) -> pi(+)pi(-) J/psi at a center-of-mass energy of 4.260 GeV using a 525 pb(-1) data sample collected with the BESIII detector...
PHYSICS, MULTIDISCIPLINARY
Journal Article
18. Measurements of psi(3686) -> K-Lambda(Xi)over-bar(+) + c.c. and psi(3686) -> gamma K-Lambda(Xi)over-bar(+) + c.c
Physical Review D, ISSN 1550-7998, 05/2015, Volume 91, Issue 9
Journal Article
Physical Review D - Particles, Fields, Gravitation and Cosmology, ISSN 1550-7998, 06/2014, Volume 89, Issue 11
Journal Article
|
Section 4.7 Fitting Exponential Models to Data
In the previous section, we saw number lines using logarithmic scales. It is also common to see two dimensional graphs with one or both axes using a logarithmic scale.
One common use of a logarithmic scale on the vertical axis is to graph quantities that are changing exponentially, since it helps reveal relative differences. This is commonly used in stock charts, since values historically have grown exponentially over time. Both stock charts below show the Dow Jones Industrial Average, from 1928 to 2010.
Both charts have a linear horizontal scale, but the first graph has a linear vertical scale, while the second has a logarithmic vertical scale. The first scale is the one we are more familiar with, and shows what appears to be a strong exponential trend, at least up until the year 2000.
Example 1
There were stock market drops in 1929 and 2008. Which was larger?
Solution
In the first graph, the stock market drop around 2008 looks very large, and in terms of dollar values, it was indeed a large drop. However, the second graph shows relative changes, and the drop in 2009 seems less major on this graph, and in fact the drop starting in 1929 was, percentage-wise, much more significant.
Specifically, in 2008, the Dow value dropped from about 14,000 to 8,000, a drop of 6,000. This is obviously a large value drop, and amounts to about a 43% drop. In 1929, the Dow value dropped from a high of around 380 to a low of 42 by July of 1932. While value-wise this drop of 338 is much smaller than the 2008 drop, it corresponds to an 89% drop, a much larger relative drop than in 2008. The logarithmic scale shows these relative changes.
The second graph above, in which one axis uses a linear scale and the other axis uses a logarithmic scale, is an example of a
semi-log graph.
Definition: Semi-log and log-log GRAPHS
A
semi-log graph is a graph with one axis using a linear scale and one axis using a logarithmic scale.
A
log-log graph is a graph with both axes using logarithmic scales.
Example 2
Plot 5 points on the graph of \(f(x)=3(2)^{x}\) on a semi-log graph with a logarithmic scale on the vertical axis.
Solution
To do this, we need to find 5 points on the graph, then calculate the logarithm of the output value. Arbitrarily choosing 5 input values,
\(x\)      \(f(x)\)                          log(\(f(x)\))
-3         \(3(2)^{-3} = \dfrac{3}{8}\)      -0.426
-1         \(3(2)^{-1} = \dfrac{3}{2}\)      0.176
0          \(3(2)^{0} = 3\)                  0.477
2          \(3(2)^{2} = 12\)                 1.079
5          \(3(2)^{5} = 96\)                 1.982
Plotting these values on a semi-log graph,
Notice that on this semi-log scale, values from the exponential function appear linear. We can show this behavior is expected by utilizing logarithmic properties. For the function \(f(x)=ab^{x}\), finding log(\(f(x)\)) gives
\(\log \left(f(x)\right)=\log \left(ab^{x} \right)\) Utilizing the sum property of logs,
\(\log \left(f(x)\right)=\log \left(a\right)+\log \left(b^{x} \right)\) Now utilizing the exponent property, \(\log \left(f(x)\right)=\log \left(a\right)+x\log \left(b\right)\)
This relationship is linear, with log(
a) as the vertical intercept, and log( b) as the slope. This relationship can also be utilized in reverse.
Example 3
An exponential graph is plotted on semi-log axes. Find a formula for the exponential function \(g(x)\) that generated this graph.
Solution
The graph is linear, with vertical intercept at (0, 1). Looking at the change between the points (0, 1) and (4, 4), we can determine the slope of the line is \(\frac{3}{4}\). Since the output is log(\(g(x)\)), this leads to the equation \(\log \left(g(x)\right)=1+\frac{3}{4} x\).
We can solve this formula for \(g(x)\) by rewriting in exponential form and simplifying:
\(\log \left(g(x)\right)=1+\frac{3}{4} x\) Rewriting as an exponential,
\(g(x)=10^{1+\frac{3}{4} x}\) Breaking this apart using exponent rules, \(g(x)=10^{1} \cdot 10^{\frac{3}{4} x}\) Using exponent rules to group the second factor, \(g(x)=10^{1} \cdot \left(10^{\frac{3}{4} } \right)^{x}\) Evaluating the powers of 10, \(g(x)=10\left(5.623\right)^{x}\)
Exercise
An exponential graph is plotted on a semi-log graph below. Find a formula for the exponential function \(g(x)\) that generated this graph.
Answer
\(g(x) = 10^{2 - 0.5x} = 10^2 (10^{-0.5})^{x}\), so \(g(x) = 100 (0.3162)^x\)
Fitting Exponential Functions to Data
Some technology options provide dedicated functions for finding exponential functions that fit data, but many only provide functions for fitting linear functions to data. The semi-log scale provides us with a method to fit an exponential function to data by building upon the techniques we have for fitting linear functions to data.
to fit an exponential function to a set of data using linearization
Find the log of the data output values
Find the linear equation that fits the (input, log(output)) pairs. This equation will be of the form log(\(f(x)\)) = \(b + mx\).
Solve this equation for the exponential function \(f(x)\)
Example 4
The table below shows the cost in dollars per megabyte of storage space on computer hard drives from 1980 to 2004, and the data is shown on a standard graph to the right, with the input changed to years after 1980.
This data appears to be decreasing exponentially. To find a function that models this decay, we would start by finding the log of the costs.
Solution
As hoped, the graph of the log of costs appears fairly linear, suggesting that an exponential function will fit the original data reasonably well. Using technology, we can find a linear equation to fit the log(Cost) values. Using \(t\) as years after 1980, linear regression gives the equation:
\(\log (C(t))=2.794-0.231t\)
Solving for \(C(t)\),
\(C(t)=10^{2.794-0.231t}\)
\(C(t)=10^{2.794} \cdot 10^{-0.231t}\) \(C(t)=10^{2.794} \cdot \left(10^{-0.231} \right)^{t}\) \(C(t)=622\cdot \left(0.5877\right)^{t}\)
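The same fit can be reproduced with a short script; this is only a sketch (numpy's `polyfit` standing in for whatever regression tool is at hand), and it should recover the coefficients above:

```python
# Linearize-and-fit: regress log10(cost) on t, then convert back to C(t) = a * b^t.
import numpy as np

t    = np.array([0, 4, 8, 12, 16, 20, 24])                      # years after 1980
cost = np.array([192.31, 87.86, 15.98, 4, 0.173, 0.006849, 0.001149])

slope, intercept = np.polyfit(t, np.log10(cost), 1)
a = 10**intercept          # about 622
b = 10**slope              # about 0.5877

print(f"log10 C(t) = {intercept:.3f} + ({slope:.3f}) t")
print(f"C(t) = {a:.0f} * {b:.4f}**t")
```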
This equation suggests that the cost per megabyte for storage on computer hard drives is decreasing by about 41% each year.
Using this function, we could predict the cost of storage in the future. Predicting the cost in the year 2020 (\(t = 40\)):
\(C(40) =622\left(0.5877\right)^{40} \approx 0.000000364\) dollars per megabyte, a really small number. That is equivalent to $0.36 per terabyte of hard drive storage.
Comparing the values predicted by this model to the actual data, we see the model matches the original data in order of magnitude, but the specific values appear quite different. This is, unfortunately, the best exponential model that can fit the data. It is possible that a non-exponential model would fit the data better, or there could just be wide enough variability in the data that no relatively simple model would fit the data any better.
Year    Actual Cost per MB    Cost predicted by model
1980    192.31                622.3
1984    87.86                 74.3
1988    15.98                 8.9
1992    4                     1.1
1996    0.173                 0.13
2000    0.006849              0.015
2004    0.001149              0.0018
Exercise
The table below shows the value \(V\), in billions of dollars, of US imports from China \(t\) years after 2000.
year    2000    2001    2002    2003    2004    2005
\(t\)   0       1       2       3       4       5
\(V\)   100     102.3   125.2   152.4   196
This data appears to be growing exponentially. Linearize this data and build a model to predict how many billions of dollars of imports were expected in 2011.
Answer
\(V(t) = 90.545 (1.2078)^t\). Predicting in 2011, \(V(11) = 722.45\) billion dollars.
Important Topics of this Section
Semi-log graph
Log-log graph
Linearizing exponential functions
Fitting an exponential equation to data
|
Problem: To prove the $\textsf{NP-Completeness}$ of the problem of "Packing Squares (with different side length) into A Rectangle", $\textsf{3-Partition}$ is reduced to it, as shown in the following figure.
In the $\textsf{3-Partition}$ instance, there are $n$ elements $(a_1, \cdots, a_i, \cdots, a_n)$. The target sum $t$ is $t = \frac{\sum a_i}{n/3}$.
In the reduction,
$B$ is a huge (constant) number and each $a_i$ is represented by a $(B + a_i) \times (B + a_i)$ square. The blank space in the rectangle will be filled by unit ($1 \times 1$) squares.

Questions: I don't quite understand the trick of "adding a huge number $B$" in the reduction. I guess it is used to force that any packing scheme will give a solution to $\textsf{3-Partition}$. But how?
Question 1: What is the trick of "adding a huge number" for in the reduction from $\textsf{3-Partition}$? Specifically, why does this reduction work? Why is this trick necessary, i.e., why wouldn't the reduction work if we left out $B$ (set $B=0$)?
I tried to identify the flaw of the proof of "any packing gives a 3-partition" but could not get the key point.
Actually I have also seen other reductions from $\textsf{3-Partition}$ that also use this trick. So,
Question 2: What is the general purpose of this trick of "adding a huge number" in reductions from $\textsf{3-Partition}$ (if there is one)?

Note: This problem is from the video lecture (from 01:15:15) by Prof. Erik Demaine. I should have first checked the original paper "Packing squares into a square". However, it is not accessible to me on the Internet. If you have a copy and would like to share, you can find my mailbox in my profile. Thanks in advance.
|
My favourite one: $[0, 1]$ is compact, i.e. every open cover of $[0, 1]$ has a finite subcover.
Proof: Suppose for a contradiction that there is an open cover $\mathcal{U}$ which does not admit any finite subcover. Then either $\left[ 0, \frac{1}{2} \right]$ or $\left[ \frac{1}{2}, 1 \right]$ cannot be covered with a finite number of sets from $\mathcal{U}$; call it $I_1$. Again, one of $I_1$'s two subintervals of length $\frac{1}{4}$ can't be covered with a finite number of sets from $\mathcal{U}$. Continuing, we get a descending sequence of intervals $I_n$, each of length $\frac{1}{2^n}$, none of which can be finitely covered.
By the Cantor Intersection Theorem,
$$\bigcap_{n=1}^{\infty} I_n = \{ x \}$$
for some $x \in [0, 1].$ But there is such $U \in \mathcal{U}$ that $x \in U$ and so $I_n \subseteq U$ for some sufficiently large $n$. That's a contradiction.
But given an arbitrary cover $\mathcal{U}$, I think finding a finite subcover may be a somewhat tedious task. :p
P.S. There actually comes a procedure from the proof above:
Step 1: see if the current interval (initially $[0, 1]$) is covered by a single set from $\mathcal{U}$. If so, we're done. If not, execute step 1 for its two halves, e.g. $\left[ 0, \frac{1}{2} \right]$ and $\left[ \frac{1}{2}, 1 \right]$, to get their finite subcovers, then unite them.
The proof guarantees you will eventually find a finite subcover (i.e. you'll never end up going downwards infinitely), but you cannot tell how long it will take. So it is not
as constructive as one would expect.
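If the cover happens to be given explicitly as a finite list of open intervals, the procedure can even be run mechanically. The following MATLAB sketch is only an illustration of the bisection idea (the interval representation and the strict-containment test are my own choices); as noted above, it terminates only when the input really does cover the interval.

function idx = subcover(a, b, I)
% I is an n-by-2 array whose rows are open intervals (I(k,1), I(k,2)).
% Returns row indices of a finite subcover of [a, b], found by bisection.
    hit = find(I(:,1) < a & b < I(:,2), 1);   % does one interval contain [a, b]?
    if ~isempty(hit)
        idx = hit;
    else
        m = (a + b)/2;                         % bisect and recurse on both halves
        idx = unique([subcover(a, m, I); subcover(m, b, I)]);
    end
end

For instance, subcover(0, 1, [-0.1 0.6; 0.5 1.1]) returns both indices, since neither interval alone contains $[0,1]$.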
|
Definition:Continuous Mapping (Topology)/Point/Open Sets
Jump to navigation Jump to search
Definition
Let $T_1 = \left({S_1, \tau_1}\right)$ and $T_2 = \left({S_2, \tau_2}\right)$ be topological spaces.
Let $f: S_1 \to S_2$ be a mapping from $S_1$ to $S_2$.
Let $x \in S_1$.
For every neighborhood $N$ of $f \left({x}\right)$ in $T_2$, there exists a neighborhood $M$ of $x$ in $T_1$ such that $f \left({M}\right) \subseteq N$. Also see
|
Soft QCD and Central Exclusive Production at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The LHCb detector, owing to its unique acceptance coverage $(2 < \eta < 5)$ and a precise track and vertex reconstruction, is a universal tool allowing the study of various aspects of electroweak and QCD processes, such as particle correlations or Central Exclusive Production. The recent results on the measurement of the inelastic cross section at $ \sqrt s = 13 \ \rm{TeV}$ as well as the Bose-Einstein correlations of same-sign pions and kinematic correlations for pairs of beauty hadrons performed using large samples of proton-proton collision data accumulated with the LHCb detector at $\sqrt s = 7\ \rm{and} \ 8 \ \rm{TeV}$, are summarized in the present proceedings, together with the studies of Central Exclusive Production at $ \sqrt s = 13 \ \rm{TeV}$ exploiting new forward shower counters installed upstream and downstream of the LHCb detector. [...] LHCb-PROC-2019-008; CERN-LHCb-PROC-2019-008.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019
LHCb Upgrades / Steinkamp, Olaf (Universitaet Zuerich (CH)) During the LHC long shutdown 2, in 2019/2020, the LHCb collaboration is going to perform a major upgrade of the experiment. The upgraded detector is designed to operate at a five times higher instantaneous luminosity than in Run II and can be read out at the full bunch-crossing frequency of the LHC, abolishing the need for a hardware trigger [...] LHCb-PROC-2019-007; CERN-LHCb-PROC-2019-007.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
Tests of Lepton Flavour Universality at LHCb / Mueller, Katharina (Universitaet Zuerich (CH)) In the Standard Model of particle physics the three charged leptons are identical copies of each other, apart from mass differences, and the electroweak coupling of the gauge bosons to leptons is independent of the lepton flavour. This prediction is called lepton flavour universality (LFU) and is well tested. [...] LHCb-PROC-2019-006; CERN-LHCb-PROC-2019-006.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
XYZ states at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The latest years have observed a resurrection of interest in searches for exotic states motivated by precision spectroscopy studies of beauty and charm hadrons providing the observation of several exotic states. The latest results on spectroscopy of exotic hadrons are reviewed, using the proton-proton collision data collected by the LHCb experiment. [...] LHCb-PROC-2019-004; CERN-LHCb-PROC-2019-004.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : 15th International Workshop on Meson Physics, Kraków, Poland, 7 - 12 Jun 2018
Mixing and indirect $CP$ violation in two-body Charm decays at LHCb / Pajero, Tommaso (Universita & INFN Pisa (IT)) The copious number of $D^0$ decays collected by the LHCb experiment during 2011--2016 allows the test of the violation of the $CP$ symmetry in the decay of charm quarks with unprecedented precision, approaching for the first time the expectations of the Standard Model. We present the latest measurements of LHCb of mixing and indirect $CP$ violation in the decay of $D^0$ mesons into two charged hadrons [...] LHCb-PROC-2019-003; CERN-LHCb-PROC-2019-003.- Geneva : CERN, 2019 - 10. Fulltext: PDF; In : 10th International Workshop on the CKM Unitarity Triangle, Heidelberg, Germany, 17 - 21 Sep 2018
Experimental status of LNU in B decays in LHCb / Benson, Sean (Nikhef National institute for subatomic physics (NL)) In the Standard Model, the three charged leptons are identical copies of each other, apart from mass differences. Experimental tests of this feature in semileptonic decays of b-hadrons are highly sensitive to New Physics particles which preferentially couple to the 2nd and 3rd generations of leptons. [...] LHCb-PROC-2019-002; CERN-LHCb-PROC-2019-002.- Geneva : CERN, 2019 - 7. Fulltext: PDF; In : The 15th International Workshop on Tau Lepton Physics, Amsterdam, Netherlands, 24 - 28 Sep 2018
Simultaneous usage of the LHCb HLT farm for Online and Offline processing workflows LHCb is one of the 4 LHC experiments and continues to revolutionise data acquisition and analysis techniques. Already two years ago the concepts of "online" and "offline" analysis were unified: the calibration and alignment processes take place automatically in real time and are used in the triggering process such that Online data are immediately available offline for physics analysis (Turbo analysis). In addition, the computing capacity of the HLT farm has been used simultaneously for different workflows: synchronous first level trigger, asynchronous second level trigger, and Monte-Carlo simulation. [...] LHCb-PROC-2018-031; CERN-LHCb-PROC-2018-031.- Geneva : CERN, 2018 - 7. Fulltext: PDF; In : 23rd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2018, Sofia, Bulgaria, 9 - 13 Jul 2018
The Timepix3 Telescope and Sensor R&D for the LHCb VELO Upgrade / Dall'Occo, Elena (Nikhef National institute for subatomic physics (NL)) The VErtex LOcator (VELO) of the LHCb detector is going to be replaced in the context of a major upgrade of the experiment planned for 2019-2020. The upgraded VELO is a silicon pixel detector, designed to withstand a radiation dose up to $8 \times 10^{15}$ 1 MeV $n_{eq}$ cm$^{-2}$, with the additional challenge of a highly non-uniform radiation exposure. [...] LHCb-PROC-2018-030; CERN-LHCb-PROC-2018-030.- Geneva : CERN, 2018 - 8.
|
There is also a really nice proof using what is called reductio ad absurdum (or infinite regress), which can also be framed as a simple contradiction using the minimality property of the natural numbers.
Suppose WLOG (without loss of generality) that $$\sqrt3=\frac{u}{v}$$ for $u,v\in\mathbb{N}$ relatively prime (any positive rational number in $\mathbb{Q}$ can be expressed as a fraction in
lowest terms). The reason we can assume $u,v\in\mathbb{N}$ rather than $\mathbb{Z}$ without losing the generality of our argument is because any case in the latter category furnishes one in the former by observing that $3>0$ so that $u$ and $v$ must have the same sign, and if they are negative, then $-u,-v\in\mathbb{N}$ also has the same ratio. So then $$u^2=3\,v^2.$$ But $3$ is prime and divides the RHS, hence it divides the LHS, and that means it must divide $u$ (it is a fact, known as Euclid's Lemma, that if $p$ is prime, then $p|ab \implies p|a$ or $p|b$). But then $3|u=u_0$ means that $u=3u_1$ for some $u_1\in\mathbb{N}$, and consequently,$$9u_1^2=(3u_1)^2=3v^2 \quad\implies\quad 3u_1^2=v^2.$$At this point, if we have not assumed that $u,v$ are relatively prime,we continue by noting that $v=v_0=3v_1$ for some $v_1\in\mathbb{N}$, whence $u_1^2=3v_1^2$ $\implies\cdots\implies$$$\forall n\in\mathbb{N}:u_n=u\cdot3^{-n},~v_n=v\cdot3^{-n}\in\mathbb{N}$$which is an impossible infinite regress, also called a reductio ad absurdum (reduction to absurdity). The absurdity, impossibility, or contradiction it leads to is that, from the hypothesis, it shows that the natural numbers $u$ and $v$ are infinitely divisible, while we know that for some $N\in\mathbb{N}$ (eventually, sufficiently large), $\frac{u}{3^n}$ and $\frac{v}{3^n}$ are obviously less than $1$ and thus not whole numbers for all $n\ge N$.
A more elegant variation (avoiding these "infinite gymnastics") is to use the stipulation, which we can make without loss of generality, that $u$ and $v$ are relatively prime. Then, we can stop as soon as we deduce that $3|v$, since at this point we already knew that $3|u$.
An even more elegant variation uses the well-orderedness of $\mathbb{N}$, where we also suppose to begin with that $u$ and $v$ are minimal (or if in doubt about whether we can simultaneously require this of both variables, suppose that either $u$ or $v$ is minimal). Then, as soon as we discover our first extra factor of $3$, we already reach a contradiction.
|
Tags: beer, homebrew, northernbrewer
I’ve been brewing my own beer for roughly a year. I have a lot of my own equipment, and I’m borrowing some from a friend while I acquire the rest. I just started my fifth batch tonight, a darkish ale with above average alcohol content called Winter Warmer.
I get my supplies from Northern Brewer, a company based in Minnesota. If you have your own recipes, they have all the supplies you could want, or at least all I can imagine. They also have recipe kits of their own. You just browse their recipes, pick one, and they send all the ingredients with instructions.
Don’t worry, I’m not kidding myself into thinking I’m becoming the next Jim Cook; it’s a fun hobby, and it’s really good beer that’s significantly less expensive than buying microbrews.
Anyway, this recipe calls for something I’ve never done before, a yeast starter. See, normally you boil the ingredients for a while, cool it down, add yeast, and let it sit. Boiling breaks down the complex starches into simple starches the yeast can eat and turn into alcohol and carbon dioxide. But this recipe gives the yeast lots more starch, resulting in the higher alcohol content.
Apparently, the normal amount of yeast for a typical 5 gallon batch doesn’t cut it for this recipe. They can’t handle all that starch and get stressed out or something. I don’t know the details, but I know that in general, if the yeast are happy (they have the right food, the right temperature, no other bacteria to compete with, etc.) the beer probably won’t suck. I also know that NB recommend a yeast starter, and they haven’t steered me wrong yet.
Long story short, I just gave my yeast an all you can eat buffet and put on some Barry White. So instead of about a dozen billion yeast cells, I’ll be adding a couple hundred billion on brewing day so they can handle the job. This batch should be ready in February or so. I’m anticipating it’ll be worth the wait. It has been for each batch so far.
I like my new hobby.
Tags: Euler, LaTeX, math, TeX, textheworld
Here we go.
[; e^i\pi + 1 = 0 ;]
Thanks, Euler.
Please comment if you can see this math nicely typeset, but not the stuff in the previous post.
Tags: Firefox, LaTeX, math, stokestheorem, TeX, textheworld
This is just a test, really, of my new Firefox add-on called TeX the World.
[; \int_{\partial M} \omega = \int_M d\omega ;]
You’re supposed to see some integrals above, if this add-on works the way it’s supposed to. I’ve tried it on Facebook, but the only people who can see the math, as opposed to the code that’s supposed to generate the math, are those who have the add-on.
If it works the way it should, the above math will appear as an image within the document, and anyone, even folks without the add-on, should be able to see it.
Here goes.
|
When modeling engineering systems, it can be difficult to identify the key parameters driving system behavior because they are often buried deep within the model. Analytical models can help because they describe systems using mathematical equations, showing exactly how different parameters affect system behavior.
In this article, we will derive analytical models of the loads and bending moments on the wing of a small passenger aircraft to determine whether the wing design meets strength requirements. We will derive the models in the notebook interface in Symbolic Math Toolbox™. We will then use the data management and analysis tools in MATLAB® to simulate the models for different scenarios to verify that anticipated bending moments are within design limits.
While this example is specific to aircraft design, analytical models are useful in all engineering and scientific disciplines –for example, they can be used to model drug interactions in biological systems, or to model pumps, compressors, and other mechanical and electrical systems.
Analytical Modeling of Wing Loads
We will evaluate the three primary loads that act on the aircraft wing: aerodynamic lift, load due to wing structure weight, and load due to the weight of the fuel contained in the wing. These loads act perpendicular to the wing surface, and their magnitude varies along the length of the wing (Figures 1a, 1b, and 1c).
We derive our analytical model of wing loads in the Symbolic Math Toolbox notebook interface, which offers an environment for managing and documenting symbolic calculations. The notebook interface provides direct support for the MuPAD language, which is optimized for handling and operating on symbolic math expressions.
We derive equations for each load component separately and then add the individual components to obtain total load.
Lift
We assume an elliptical distribution for lift across the length of the wing, resulting in the following expression for lift profile:
\[q_1(x) = ka\sqrt{L^2 - x^2}\]
where
\(L\) = length of wing
\(x\) = position along wing
\(ka\) = lift profile coefficient
We can determine the total lift by integrating across the length of the wing
\[\text{Lift} = \int_0^L ka \sqrt{L^2 - x^2} \mathrm{d}x\]
Within the notebook interface we define \(q_1(x)\) and calculate its integral (Figure 2). We incorporate math equations, descriptive text, and images into our calculations to clearly document our work. Completed notebooks can be published in PDF or HTML format.
Through integration, we find that
\[\text{Lift} = \frac{\pi L^2 ka}{4}\]
We determine \(ka\) by equating the lift expression that we just calculated with the lift expressed in terms of the aircraft’s load factor. In aircraft design, load factor is the ratio of lift to total aircraft weight:
\[n = \frac{\text{Lift}}{W_{to}}\]
Load factor equals 1 during straight and level flight, and is greater than 1 when an aircraft is climbing or during other maneuvers where lift exceeds aircraft weight.
We equate our two lift expressions,
\[\frac{W_{to} n}{2} = \frac{\pi L^2 ka}{4}\]
and solve for the unknown \(ka\) term. Our analysis assumes that lift forces are concentrated on the two wings of the aircraft, which is why the left-hand side of the equation is divided by 2. We do not consider lift on the fuselage or other surfaces.
Plugging \(ka\) into our original \(q_1(x)\) expression, we obtain an expression for lift:
\[q_1(x) = \frac{2 W_{to} n \sqrt{L^2 - x^2}}{L^2 \pi}\]
An analytical model like this helps us understand how various parameters affect lift. For example, we see that lift is directly proportional to load factor (\(n\)) and that for a load factor of 1, the maximum lift
\[\frac{2 W_{to}}{\pi L}\]
occurs at the wing root (\(x=0\)).
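As a rough illustration (not the article's MuPAD notebook), the same derivation can be reproduced with MATLAB Symbolic Math Toolbox commands; the variable names are my own.

syms x L ka Wto n positive
q1   = ka*sqrt(L^2 - x^2);            % assumed elliptical lift profile
Lift = int(q1, x, 0, L);              % total lift on one wing: pi*L^2*ka/4
kaSol = solve(Wto*n/2 == Lift, ka);   % equate to half the aircraft lift
q1    = subs(q1, ka, kaSol)           % 2*Wto*n*sqrt(L^2 - x^2)/(pi*L^2)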
Weight of Wing Structure
We assume that the load caused by the weight of the wing structure is proportional to
chord length (the width of the wing), which is highest at the wing base (\(C_o\)) and tapers off toward the wing tip (\(C_t\)). Therefore, the load profile can be expressed as
\[q_w(x) = kw\left(\frac{C_t - C_o}{L} x + C_o\right)\]
We define \(q_w(x)\) and integrate it across the length of the wing to calculate the total load from the wing structure:
We then equate this structural load equation with the structural load expressed in terms of load factor and weight of the wing structure (\(W_{ws}\))
\[\frac{W_{ws} n}{2} = \frac{kw L (C_o + C_t)}{2}\]
and solve for \(kw\).
Plugging
kw into our original \(q_w(x)\) expression, we obtain an analytical expression for load due to weight of the wing structure.
\[q_w(x) = - \frac{W_{ws} n \left(C_o - \frac{x(C_o - C_t)}{L}\right)}{L(C_o + C_t)}\]
Weight of Fuel Stored in Wing
We define the load from the weight of the fuel stored in the wing as a piecewise function where load is zero when \(x > L_f\). We assume that this load is proportional to the width of the fuel tank, which is at its maximum at the base of the wing (\(C_{of}\)) and tapers off as we approach the tip of the fuel storage tank (\(C_{tf}\)). We derive \(q_f(x)\) in the same way that we derived \(q_w(x)\), resulting in an equation of the same form:
\[q_f(x) = \begin{cases} 0 & \text{if } L_f < x\\ -\frac{W_f n \left(C_{of} - \frac{x(C_{of} - C_{tf})}{L_f}\right)}{L_f (C_{of} + C_{tf})} & \text{if } x \leq L_f\end{cases}\]
Total Load
We calculate total load by adding the three individual load components. This analytical model gives a clear view of how aircraft weight and geometry parameters affect total load.
\[q_t(x) = \begin{cases} \begin{split} &\frac{n}{L^2 \pi}\left(2 W_{to}\sqrt{L^2 - x^2}\right) \\ &\quad + \frac{n\left( -\pi C_o L W_{ws} + \pi C_o W_{ws} x - \pi C_t W_{ws} x\right)}{L^2 \pi (C_o + C_t)} \end{split} & \text{if } L_f < x\\ \begin{split} &\frac{2 W_{to} n \sqrt{L^2 - x^2}}{L^2 \pi} - \frac{W_{ws} n (C_t x - C_o x +C_o L)}{L^2 (C_o + C_t)} \\ &\quad - \frac{W_f n (C_{tf} x - C_{of} x + C_{of} L_f)}{L_f^2(C_{of} + C_{tf})} \end{split} & \text{if } x \leq L_f\end{cases}\]
Defining Model Parameters and Visualizing Wing Loads
We now have an analytical model for wing loads that we can use to evaluate aircrafts with various wing dimensions and weights. The small passenger aircraft that we are modeling has the following parameters:
\(W_{to}\) = 4800 kg (total aircraft weight)
\(W_{ws}\) = 630 kg (weight of wing structure)
\(W_f\) = 675 kg (weight of fuel stored in wing)
\(L\) = 7 m (length of wing)
\(L_f\) = 4.8 m (length of fuel tank within wing)
\(C_o\) = 1.8 m (chord length at wing root)
\(C_t\) = 1.4 m (chord length at wing tip)
\(C_{of}\) = 1.1 m (width of fuel tank at wing root)
\(C_{tf}\) = 0.85 m (width of fuel tank at \(L_f\))
To evaluate load during the climb phase, we assume a load factor of 1.5, then plot the individual load components and total load (Figure 3).
We see that lift is the largest contributor to total load and that the maximum load of 545 N*m occurs at the end of the fuel tank. Fuel load also contributes significantly to the total load, while the weight of the wing is the smallest contributor.
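To make this reproducible, a minimal numeric sketch of the three load components with the parameters above might look as follows (my own variable names; units follow the article's conventions, so absolute values may differ from those quoted for Figure 3).

Wto = 4800; Wws = 630; Wf = 675;            % kg
L = 7; Lf = 4.8;                            % m
Co = 1.8; Ct = 1.4; Cof = 1.1; Ctf = 0.85;  % m
n = 1.5;                                    % load factor during climb
xv = linspace(0, L, 200);
qlift = 2*Wto*n*sqrt(L^2 - xv.^2)/(pi*L^2);
qwing = -Wws*n*(Co - xv*(Co - Ct)/L)/(L*(Co + Ct));
qfuel = -(xv <= Lf).*Wf*n.*(Cof - xv*(Cof - Ctf)/Lf)/(Lf*(Cof + Ctf));
qtotal = qlift + qwing + qfuel;
plot(xv, qlift, xv, qwing, xv, qfuel, xv, qtotal)
legend('lift', 'wing weight', 'fuel weight', 'total')
xlabel('x (m)'), ylabel('load')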
While it is useful to visualize wing loads, what really concerns us are the shear force and bending moments resulting from these loads. We need to determine whether worst-case bending moments experienced by the wing are within design limits.
Deriving a Bending Moment Model
We can use the expression that we derived for load on the wing to calculate bending moment. We start by integrating total load to determine shear force:
\[V(x) = - \int q_t(x) \mathrm{d}x\]
Bending moment can then be calculated by integrating shear force:
\[M(x) = \int V(x) \mathrm{d}x\]
We write a custom function in the MuPAD language,
CalcMoment.mu. This function accepts load profile and returns the bending moment along the length of the wing (Figure 4). Symbolic Math Toolbox includes an editor, debugger, and other programming utilities for authoring custom symbolic functions in the MuPAD language.
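A minimal Symbolic Math Toolbox sketch of the same idea is shown below. This is not the article's CalcMoment.mu; it assumes the symbolic total load qt (and the symbolic wing length L) from the earlier derivation are available, and it imposes free-tip boundary conditions V(L) = M(L) = 0 by integrating from x to the tip, so the overall sign may differ from the article's indefinite-integral form.

syms x xi
Vx = int(subs(qt, x, xi), xi, x, L);   % shear carried at station x (load outboard of x)
Mx = int(subs(Vx, x, xi), xi, x, L);   % bending moment at station x

In practice MATLAB may need the numeric aircraft parameters substituted into qt before it will evaluate these piecewise integrals in closed form.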
We use this function with the aircraft parameters that we defined previously to obtain an expression for bending moment as a function of length along wing (\(x\)) and load factor (\(n\)):
\[\begin{cases} \begin{split} &0.27 n x^3 - 2085.0 n x - 25.31 n x^2 - 1056.56 n \\ &\quad + 10695.21 n \left(0.14 x \arcsin(0.14x) + \sqrt{1.0 - 0.02 x^2}\right) \\ &\quad - 10.39 n \left(49.0 - 1.0 x^2\right)^{\frac{3}{2}}\end{split} & \text{if } 2.4 < x \\ \begin{split} &2.77 n x^3 - 1747.5 n x - 104.64 n x^2 - 1444.25 n \\ &\quad + 10695.21 n \left(0.14 x \arcsin(0.14x) + \sqrt{1.0 - 0.02 x^2}\right) \\ &\quad - 10.39 n \left(49.0 - 1.0 x^2\right)^{\frac{3}{2}}\end{split} & \text{if } x \leq 2.4 \end{cases}\]
As with wing loads, we plot bending moment assuming a load factor of 1.5 (Figure 5).
As expected, the bending moment is highest at the wing root with a value of 8.5 kN*m. The wing is designed to handle bending moments up to 40 kN*m at the wing root, but since regulations require a safety factor of 1.5, bending moments exceeding 26.7 kN*m are unacceptable. We will simulate bending moments for various operating conditions, including conditions where the load factor is greater than 1.5, to ensure that we are not in danger of exceeding the 26.7 kN*m limit.
Simulating Bending Moment in MATLAB
The bending moment equation is saved in our notebook as moment. Using the
getVar command in Symbolic Math Toolbox, we import this variable into the MATLAB workspace as a
sym object, which can be operated on using MATLAB symbolic functions included in Symbolic Math Toolbox.
bendingMoment = getVar(nb,'moment');
Since we want to numerically evaluate bending moments for various conditions, we convert our
sym object to its equivalent numeric function using the
matlabFunction command.
h_MOMENT = matlabFunction(bendingMoment)
h_MOMENT is a MATLAB function that accepts load factor (\(n\)) and length along wing (\(x\)) as inputs. Because we're evaluating bending moments at the wing root (\(x=0\)), load factor becomes the only variable that affects bending moments. As mentioned earlier, load factor is equal to Lift / Wto. Using the standard lift equation, and assuming the aircraft is not banking, load factor can be expressed as
\[n = \frac{\rho A C_L V^2}{2 W_{to}}\]
Where
\(\rho\) = air density: 1.2 kg/m^3
\(A\) = planform area (approximately equal to total surface area of wings): 23 m^2
\(C_L\) = lift coefficient (varies with aircraft angle of attack, which ranges from 3 to 12 degrees): 0.75 to 1.5
\(V\) = net aircraft velocity (accounts for aircraft speed and external wind conditions): 40 m/s to 88 m/s
We define these variables in MATLAB and evaluate
h_MOMENT for the range of lift coefficients and aircraft velocities listed above. We store the results in a dataset array (Figure 6), available in Statistics and Machine Learning Toolbox™.
Dataset arrays provide a convenient way to manage data in MATLAB, enabling us to filter the data to view and analyze the subsets we are most interested in. Since we want to determine whether bending moments ever exceed the 26.7 kN*m threshold, we only need the bending moment data where the load factor is greater than 1.5. We filter the dataset and plot the data for these conditions (Figure 7).
moment_filt = moment(moment.loadFactor>1.5,:)
x = moment_filt.netVel;
y = moment_filt.CL;
z = moment_filt.maxMoment;
[X,Y] = meshgrid(x,y);
Z = griddata(x,y,z,X,Y);
surf(X,Y,Z)
The plot shows that the peak bending moment, 19.3 kN*m, occurs when net aircraft velocity and lift coefficient are at their maximum values of 88 m/s and 1.5, respectively. This result confirms that bending moments will be safely below the 26.7 kN*m limit, even for worst-case conditions.
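A quick back-of-the-envelope check of that worst-case point is sketched below. It assumes the argument order h_MOMENT(n, x) described above, and it treats Wto as a mass in kilograms so that the load-factor formula is divided by the weight Wto*g; the article does not spell out that unit convention, so this is an assumption.

Wto = 4800;                               % kg, from the parameter list above
rho = 1.2; A = 23; CL = 1.5; V = 88;      % worst-case values quoted in the text
g = 9.81;                                 % assumed conversion from mass to weight
n = rho*A*CL*V^2/(2*Wto*g);               % load factor, roughly 3.4
Mroot = h_MOMENT(n, 0);                   % root bending moment, roughly 19.3e3 N*m
fprintf('n = %.2f, root moment = %.1f kN*m\n', n, Mroot/1e3);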
The Value of Analytical Modeling
Our analytical models gave us a clear view into how different aircraft parameters and operating conditions affect loads and bending moments on the aircraft wing. They enabled us to verify that the wing is able to withstand worst-case loading conditions that it could encounter during the climb phase of flight.
The models discussed in this article were used only for high-level proof-of-concept analysis, but analytical models could also be used for more detailed system modeling tasks—for example, to model the airflow near the leading edge or tip of the wing.
|
I remember arriving at the following equality: $$\lim_{n\to\infty}\sum_{k=n}^{\infty}\left(\frac{1}{n\left\lfloor\frac kn\right\rfloor}-\frac1k\right)=\gamma$$ where $\gamma$ denotes the Euler-Mascheroni constant. However, I cannot find back where I wrote its proof, and I am now having a hard time reconstructing it. I don't remember any clues as to how I derived this. I am now not even able to prove it converges (which it does, but really slowly). How can I prove this? Thanks in advance.
Fix $n\geq 1$, and consider an arbitrary integer $L \geq 1$. Consider $$\begin{align} A_n(L) &= \sum_{k=n}^{Ln-1} \left( \frac{1}{n\left\lfloor \frac{k}{n}\right\rfloor }-\frac{1}{k} \right) = \sum_{\ell=1}^{L-1} \sum_{k=\ell n}^{(\ell+1)n-1} \left( \frac{1}{n\left\lfloor \frac{k}{n}\right\rfloor}-\frac{1}{k} \right) \\ &= \sum_{\ell=1}^{L-1} \sum_{k=\ell n}^{(\ell+1)n-1} \left( \frac{1}{\ell n }-\frac{1}{k} \right) = \sum_{\ell=1}^{L-1} \frac{1}{\ell}-\sum_{\ell=1}^{L-1} \sum_{k=\ell n}^{(\ell+1)n-1} \frac{1}{k} \\ &= H_{L-1}-\sum_{k=n}^{Ln-1} \frac{1}{k} = H_{L-1} - (H_{Ln-1} - H_{n-1}) \\ &= \ln L + \gamma + o_L(1) - (\ln L + \ln n + \gamma + o_L(1) - (\ln n + \gamma + o_n(1))) \\ &= \gamma + o_n(1) + o_L(1) \end{align}$$ where I use the notation $o_N(1)$ for terms that go to $0$ when $N \to \infty$, and used the asymptotic expansion of the harmonic series: $$H_N = \ln N + \gamma + o_N(1)$$ (as well as the fact that $\ln(N-1) = \ln N + o_N(1)$) . It follows that the sum of non-negative terms $$ \sum_{k=n}^{\infty} \left( \frac{1}{n\left\lfloor \frac{k}{n}\right\rfloor }-\frac{1}{k} \right) $$ exists and is equal to $$ \sum_{k=n}^{\infty} \left( \frac{1}{n\left\lfloor \frac{k}{n}\right\rfloor }-\frac{1}{k} \right) = \lim_{L\to\infty} A_n(L) = \gamma + o_n(1). $$ This in turn shows that $$ \sum_{k=n}^{\infty} \left( \frac{1}{n\left\lfloor \frac{k}{n}\right\rfloor }-\frac{1}{k} \right) \xrightarrow[n\to\infty]{} \gamma. $$
$n\lfloor\dfrac kn\rfloor$ runs through the multiples of $n$, repeating each multiple $n$ times in a row as $k$ increases.
So the partial sum from $k=n$ to $nm+n-1$ equals $n\sum_{k=1}^m\dfrac1{nk}-\sum_{k=n}^{nm+n-1}\dfrac1k\approx$ $n\dfrac{\ln(m)+\gamma}n-(\ln(nm+n-1)-\ln(n))=\ln(m)-\ln\left(m+1-\frac1n\right)+\gamma$, which tends to $\gamma$.
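A quick numerical check (not part of either answer, just an illustration in MATLAB) shows the truncated sums approaching $\gamma$; as noted in the question, convergence in $n$ is slow.

gammaEM = 0.57721566490153286;        % Euler-Mascheroni constant
for n = [10 100 1000]
    K = 2000*n - 1;                   % truncate the slowly converging tail at k = K
    k = n:K;
    S = sum(1./(n*floor(k/n)) - 1./k);
    fprintf('n = %4d: partial sum = %.6f (gamma = %.6f)\n', n, S, gammaEM);
end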
|
(sorry in advance for my English; I'm not sure my terminology is correct)
I'm trying to solve a problem: Let $e_1$, $e_2$, $e_3$ be a basis of $\mathbb{C}^3$ (3-D vectors with complex entries). Let $A(g)e_i=e_{g(i)}$ define a representation of the symmetric group $S_3$ ($g\in S_3$). Also let $V=\{(z_1,z_2,z_3)\in \mathbb{C}^3 \mid z_1+z_2+z_3=0\}$, a two-dimensional subspace of $\mathbb{C}^3$. The question here is to find the restriction ($P(g)$) of $A(g)$ to the subspace $V$, but I don't understand what this means.
The problem here is to pick a basis for $V$ and write the matrices for the action of $S_3$ on $V$. The standard basis to choose is $\{\alpha_1,\alpha_2\}$ where $\alpha_1=e_1-e_2$ and $\alpha_2=e_2-e_3$. If we let $s_1=(12)$ and $s_2=(23)$, then $S_3$ is generated by $s_1$ and $s_2$ as a group (e.g. $(123)=s_1s_2$, etc.). Since representations are group homomorphisms, we can determine everything by computing the matrices for the action of $s_1$ and $s_2$. We have $$ A(s_1)\alpha_1=s_1(e_1-e_2)=(e_2-e_1)=-(e_1-e_2)=-\alpha_1 $$ and $$ A(s_1)\alpha_2=s_1(e_2-e_3)=(e_1-e_3)=(e_1-e_2)+(e_2-e_3)=\alpha_1+\alpha_2. $$ Thus, $$P(s_1)=\begin{bmatrix}-1&1\\0&1\end{bmatrix}.$$ Similarly, you can compute that $$P(s_2)=\begin{bmatrix}1&0\\1&-1\end{bmatrix}.$$ You can now get the rest of the matrices by multiplying these together as appropriate. For example \begin{align} P((123))&=P(s_1s_2)\\&=P(s_1)P(s_2)\\ &=\begin{bmatrix}-1&1\\0&1\end{bmatrix}\begin{bmatrix}1&0\\1&-1\end{bmatrix}\\ &=\begin{bmatrix}0&-1\\1&-1\end{bmatrix} \end{align}
Since $V$ is an invariant subspace of $\mathbb C^3$, $\rho(g)(v) \in V$ for every $v \in V$. Thus you have to compute the $2 \times 2$ matrix associated to every element $ g \in S_3$.
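As a sanity check on the matrices computed above (a MATLAB sketch, not part of either answer), one can verify that the defining relations of $S_3$ hold, so $P$ really extends to a homomorphism on $V$:

P1 = [-1 1; 0 1];              % P(s1), with s1 = (12)
P2 = [ 1 0; 1 -1];             % P(s2), with s2 = (23)
isequal(P1^2, eye(2))          % s1^2 = e
isequal(P2^2, eye(2))          % s2^2 = e
isequal((P1*P2)^3, eye(2))     % (s1*s2)^3 = e, since s1*s2 = (123)
P1*P2                          % the matrix of (123) computed above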
|
Physicist and spy Klaus Fuchs has expressed the opinion that Born's rule (squared complex amplitudes are interpreted as probabilities or probability densities) could be derived from something deeper. I think that this wishful thinking is demonstrably impossible. Why? We just don't have any method or theorem in mathematics or physics that could allow us not to assume any statement of the sort
"the probability is \(f(\theta)\)" and deduce a conclusion of the form "the probability is \(f(\theta)\)". For example, think about an electron whose spin is prepared to be aligned "up" with respect to an axis, and then measure the projection of the spin \(j_z\) with respect to the \(z\)-axis. The angle between the two axes is \(\theta\), the amplitude is \(\cos(\theta/2)\), up to a phase, and the probability to get "up" again is therefore \(\cos^2(\theta/2)\).
How could you possibly derive that from something "deeper"? We don't have anything "deeper" than probabilities that probabilities could be constructed from. At most, we may define probabilities as \(N/N_{\rm total}\), the frequentist formula by which we measure it – which would give us rational numbers if \(N_{\rm total}\) were some "fundamentally real" options. And we may deduce that the probability is \(p=1/N\) if \(N\) options are related by a symmetry. Or we may say that each state on a "shell of the phase space" – quantum mechanically, a subspace of the Hilbert space – has the probability \(p=1/N\) to be realized during a random evolution as envisioned by the ergodic theorem.
None of those
Ansätze can produce the statement "the probability is \(\cos^2(\theta/2)\)" and there are no other candidates of the "methods" in mathematics and physics. So I find it rather clear that unless someone finds a totally new mathematics that finds completely new definitions or laws for probabilities, and e.g. calculates probabilities from Bessel's function of the number of Jesus' disciples (which seems like a quantity of a different type than probability, and that's the main reason why this example should sound ludicrous), it is clearly impossible to derive statements like "the probability of 'up' is \(\cos^2(\theta/2)\)" from something that says nothing about the values of probabilities.
The people saying "Born's rule smells like it's derived" never respond to the argument above – which I consider a proof of a sort. I think that if one carefully looks at the task, he will agree that the only way to deduce that
"the probability is a continuous function of some variables" is to make at least some assumptions that the probability is a continuous function of some variables. Quantum mechanics including Born's rule is making statements about Nature of the form "the probability is a continuous function of some variables". But if you have nothing like that as a fundamental law of physics, you just can't possibly derive any conclusion like that.
Quantum mechanics and its statistical character can't be "emergent". The statements about the values of probabilities have to appear
somewhere in our derivations for the first time. So the only way a physical theory can make predictions of probabilities at all is that it contains an axiom with the formula telling us what the probabilities are, namely (in the case of quantum mechanics) Born's rule. Such a rule can't be born out of nothing or out of something unrelated to probabilities; it's that simple.
Klaus Fuchs also said another thing that implicitly hides a misconception. He says that when a spherically symmetric composite particle decays, the decay products randomly choose some directions. This symmetry breaking is ugly and the ugliness suggests that something dirty is going on, so there must be some missing physics.
Except that the correct experimental analysis of the experiments shows that the random direction is being chosen, indeed, and a correct theoretical analysis unambiguously shows that the result is correctly predicted by quantum mechanics and couldn't be predicted by any fundamentally different theory. Moreover, the claim "the singlet composite state is spherically symmetric" is being implicitly misinterpreted – to say something that is demonstrably false – by the "realists". (Just to be sure, Fuchs hasn't explicitly made this mistake but others, more hardcore "realists", have.)
What do I mean?
To be specific, let's consider the initial state \(\ket\psi\) with a composite particle, a bound state of the electron and the positron known as positronium. The relative spins of the two leptons aren't determined. As a result, due to the simple rules for the addition of two spins \(1/2+1/2\), the composite particle has either \(S=0\) or \(S=1\) which (mostly) decay to two photons and three photons, respectively, and they are called para-positronium and ortho-positronium, respectively. In Greek, "ortho-" is straight or erect and is therefore used to denote "the same direction" (but not "erect by the same sex", which wouldn't be straight), while "para-" means against and is used all over science to denote the opposite directions – in this case the directions of the spin. (The exception is Paraguay which comes from the Guarani language, not Greek, and means "born from water" or "water-born", "Para-guay".)
The \(S=0\) para-positronium state has the standard "Bell's" singlet combination of the spins\[
\ket\psi = \frac{ \ket\uparrow \ket\downarrow - \ket\downarrow \ket\uparrow }{\sqrt{2}}
\] Needless to say, this "Bell's state" was invented and heavily used some 40 years before Bell wrote it down – when Pauli and others started to play with the spins in the mid 1920s – and the "Bell's terminology" is utterly idiotic both historically and physically.
OK, the \(S=0\) condition means that the wave function is spherically symmetric. If you had a problem with the components' being odd under rotations by 360 degrees (like all fermions), you could think about bound states of two bosons and/or their orbital angular momentum etc.
People who don't quite understand quantum mechanics can never get rid of the idea that the wave function describes "what the system really is", some objectively real i.e. observation-independent or observer-independent i.e. classical degrees of freedom. Because everything seems to be predictable from the wave function, they incorrectly think,
everything that can be measured is spherically symmetric as well.
However, this reasoning is completely wrong.
All that Penny gave to Sheldon was the
napkin. Analogously, it is only the wave function (of the para-positronium) that is invariant under the \(SU(2)\sim SO(3)\) rotations of the three-dimensional space. (You have to appreciate The Big Bang Theory for its ability to explain the word "only".) And just like the napkin doesn't contain Leonard Nimoy, the wave function doesn't contain any observables whatsoever. The wave function only contains the probability amplitudes – the square roots of the probability distributions, along with some quantum phases (which affect the probabilities of outcomes for other, non-commuting observables) – and probability amplitudes are completely different things than observables. In quantum mechanics, observables – everything whose value may be shown on a measurement apparatus when you do the measurement just once – must be represented by Hermitian linear operators, not by state vectors. The probabilities of various outcomes in directions related by the rotational symmetry are the same. But the observables in different directions themselves are not symmetric.
Let's be very explicit and slow to see what is true and what is not true.
Take the state \(\ket\psi\) for the para-positronium. Apply a rotation by the angle \(\alpha\) around some axis \(\hat n\). You will see that\[
R_{\alpha, \hat n} \ket \psi = \ket \psi.
\] The wave function i.e. state vector doesn't change under the rotation. You may write\[
R = \exp(i\alpha \vec J \cdot \hat n)
\] if you wish. You may also send \(\alpha\to 0\). For a small angle \(\alpha\), it's useful to Taylor-expand the exponential in \(R\) above. The term \(1\) will cancel against the left hand side and right hand side of \(R\ket\psi = \ket\psi\) while the following term proportional to \(\alpha\) (the others are negligible) will give us\[
\vec J \ket \psi = 0.
\] That's why the state \(\ket\psi\) carries no angular momentum. The angular momentum is the generator of the \(SU(2)\sim SO(3)\) rotations so the vanishing of the angular momentum is the same thing as the spherical symmetry – of the wave function.
However, does it mean that the measurements done in two directions away from the positronium will be the same? Not at all. If we measure the direction of the outgoing photons, for example, we will see that in some (mutually opposite) directions, there is one photon, and in other directions, there's none.
Why does the measurement of the direction of photons "break" the symmetry? The reason is always the same uncertainty principle of quantum mechanics. If you measure the angle \(\theta\) of the photons away from the \(z\)-axis, the angle \(\theta\) is an observable – a Hermitian linear operator on the Hilbert space – and it just doesn't commute with \(J\) i.e. \(\theta J \neq J \theta\) for completely analogous reasons why \(xp-px\neq 0\). Because they don't commute, they can't have certain sharp values at the same moment. And because \(J=0\) has a sharp and certain value, \(\theta\) cannot have one.
But let's study the state before it decays to the photons and use our theory of nearly everything (TONE, Lisa Randall's acronym), a quantum field theory, for that. Assuming that the quantum fields are in the positronium state, are observables spherically symmetric?
Let's pick a very particular example, the energy density \(\rho\) at some distance from the center-of-mass of the positronium. And let's pick two such points in different directions. We want to look at the difference\[
\rho(r_B, 0,0) - \rho (0,r_B,0)
\] where \(r_B\) is Bohr's radius, a length constant comparable to the radius of the positronium (average distance between the electron and the positron). OK, in a quantum field theory with the electromagnetic and electron/positron Dirac field, e.g. in QED, there is an operator such as the operator above, right?
The "realists" tend to imagine that the wave function is spherically symmetric (invariant under rotations), and because everything we can measure is a function of the wave function, everything we can measure is spherically symmetric, too. Except that this opinion is completely wrong.
Nothing that we can measure (except for mathematical constants whose values/outcomes are determined regardless of the state) is a function of the wave function. The assumption that the realists are making is not just slightly wrong, it is totally wrong.
In particular, \(\rho(r_B,0,0)\) is a field operator, some function of the operators \(\vec E,\vec B\), and others, that exist in QED. And you shouldn't doubt that because the field operators \(\vec E, \vec B\) etc. are completely independent at two different points of space, the difference\[
\rho(r_B, 0,0) - \rho (0,r_B,0)
\] isn't vanishing as an operator equation. OK, a patiently obnoxious realist could argue, maybe this difference doesn't vanish as an operator equation but it vanishes assuming our spherically symmetric state \(\ket\psi\). So he will say that we should have\[
[\rho(r_B, 0,0) - \rho (0,r_B,0)] \ket \psi = 0.\quad (???)
\] When the difference between the energy densities at two points – two points related to each other by a rotation – is acting on the positronium state, it has to vanish due to the spherical symmetry of the positronium. But does it vanish?
If it vanished, it would mean that \(\ket\psi\) is an eigenstate of the \(\rho_P-\rho_{P'}\) difference above corresponding to the eigenvalue \(0\), so if we measure the difference, we are 100% certain to get the result \(0\). But will we get zero?
Not at all. The energy density is fluctuating in the vacuum of QED. The operators \(\rho(r_B,0,0)\) and \(\rho(0,r_B,0)\) are commuting with each other (spacelike separation, local theory) but they express two uncertain, oscillating energy densities at two different points. (The simplest master example for the oscillations is the claim that you can't say that \(x=0\) for the ground state of the harmonic oscillator.) So the difference \(\rho-\rho'\) may be positive or negative – it's random.
If you wanted a true but similar statement, it would be\[
\bra \psi [\rho(r_B, 0,0) - \rho (0,r_B,0)] \ket \psi = 0.
\] You have to add the bra-vector \(\bra \psi\) on the left side of the product as well. And this whole matrix element vanishes. In other words, the expectation value of the difference between the energy densities at two different points related by a rotation vanishes. This identity is easily proven. You may write the second density as \(\rho' = R \rho R^{-1}\) and notice that the action of \(R^{-1}\) and \(R\) on the bra- and ket-vectors is just like the action of \(1\), so you may erase those \(R\)'s. And without \(R\)'s, the two \(\rho\)'s cancel. But that only works when it's sandwiched in between the state \(\ket\psi\) on both sides, bra and ket.
The vanishing expectation value is a much weaker statement than the previous displayed formula. The individual differences of energy densities are almost certainly nonzero, only their statistical average – when you repeat the pair-measurement on the "pure positronium" many times and average the result – converges to zero.
Again,\[
[\rho(r_B, 0,0) - \rho (0,r_B,0)] \ket \psi \neq 0
\] and we could describe this nonzero value of the difference acting on \(\ket\psi\) in some detail and quantitatively. For example, the expectation value of the squared difference, \([\rho(r_B, 0,0) - \rho (0,r_B,0)]^2\), is also nonzero and calculable. When you average the
squared differences between the energy densities at two points related by a rotation, you will get a positive value that is greater than a certain bound – one that is calculable in a similar way as in the usual proofs of the Heisenberg uncertainty inequalities.
Reiteration
It's important to realize that \(J=0\) only means that the wave function, i.e. a collection of probabilities or probability amplitudes, is spherically symmetric. The wave function is
not observable and it is not an observable. The observables must be expressed by linear operators acting on the Hilbert space and their measurements show that they (e.g. the energy densities) are not spherically symmetric. The densities at two points related by a rotation are not equal; \(\ket\psi\) is not an eigenstate of this difference operator.
The measurements that are sensitive to directions in any way are
guaranteed to break the symmetry of the wave function because they're measurements of operators that don't commute with \(\vec J\). This breaking of the symmetry by the direction-sensitive subsequent measurement isn't evidence for some trouble in quantum mechanics. On the contrary, it's a trivial consequence or confirmation of the uncertainty principle, the main and basically only principle that distinguishes quantum mechanics from its classical limit (or counterpart).
|
DISCLAIMER: Very rough notes from class. Some additional side notes, but otherwise barely edited.
These are notes for the UofT course PHY2403H, Quantum Field Theory, taught by Prof. Erich Poppitz.
Determinant of Lorentz transformations
We require that Lorentz transformations leave the dot product invariant, that is \( x \cdot y = x' \cdot y' \), or
\begin{equation}\label{eqn:qftLecture3:20} x^\mu g_{\mu\nu} y^\nu = {x'}^\mu g_{\mu\nu} {y'}^\nu. \end{equation} Explicitly, with coordinate transformations \begin{equation}\label{eqn:qftLecture3:40} \begin{aligned} {x'}^\mu &= {\Lambda^\mu}_\rho x^\rho \\ {y'}^\mu &= {\Lambda^\mu}_\rho y^\rho \end{aligned} \end{equation} such a requirement is equivalent to demanding that \begin{equation}\label{eqn:qftLecture3:500} \begin{aligned} x^\mu g_{\mu\nu} y^\nu &= {\Lambda^\mu}_\rho x^\rho g_{\mu\nu} {\Lambda^\nu}_\kappa y^\kappa \\ &= x^\mu {\Lambda^\alpha}_\mu g_{\alpha\beta} {\Lambda^\beta}_\nu y^\nu, \end{aligned} \end{equation} or \begin{equation}\label{eqn:qftLecture3:60} g_{\mu\nu} = {\Lambda^\alpha}_\mu g_{\alpha\beta} {\Lambda^\beta}_\nu \end{equation}
multiplying by the inverse we find
\begin{equation}\label{eqn:qftLecture3:200} \begin{aligned} g_{\mu\nu} {\lr{\Lambda^{-1}}^\nu}_\lambda &= {\Lambda^\alpha}_\mu g_{\alpha\beta} {\Lambda^\beta}_\nu {\lr{\Lambda^{-1}}^\nu}_\lambda \\ &= {\Lambda^\alpha}_\mu g_{\alpha\lambda} \\ &= g_{\lambda\alpha} {\Lambda^\alpha}_\mu. \end{aligned} \end{equation} This is now amenable to expressing in matrix form \begin{equation}\label{eqn:qftLecture3:220} \begin{aligned} (G \Lambda^{-1})_{\mu\lambda} &= (G \Lambda)_{\lambda\mu} \\ &= ((G \Lambda)^\T)_{\mu\lambda} \\ &= (\Lambda^\T G)_{\mu\lambda}, \end{aligned} \end{equation} or \begin{equation}\label{eqn:qftLecture3:80} G \Lambda^{-1} = (G \Lambda)^\T. \end{equation}
Taking determinants (using the normal identities for products of determinants, determinants of transposes and inverses), we find
\begin{equation}\label{eqn:qftLecture3:100} det(G) det(\Lambda^{-1}) = det(G) det(\Lambda), \end{equation} or \begin{equation}\label{eqn:qftLecture3:120} det(\Lambda)^2 = 1, \end{equation} or \( det(\Lambda) = \pm 1 \). We will generally ignore the case of reflections in spacetime that have a negative determinant.
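A quick numeric illustration of the invariance condition and the determinant result, using an explicit boost (a MATLAB sketch, not part of the notes):

G = diag([1 -1 -1 -1]);               % metric with signature (+,-,-,-)
phi = 0.3;                            % rapidity of a boost along x
Lambda = [cosh(phi) sinh(phi) 0 0;
          sinh(phi) cosh(phi) 0 0;
          0         0         1 0;
          0         0         0 1];
norm(Lambda.'*G*Lambda - G)           % ~1e-16: the invariance condition holds
det(Lambda)                           % 1, consistent with det(Lambda) = +/- 1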
Smart-alec Peeter pointed out after class last time that we can do the same thing easier in matrix notation
\begin{equation}\label{eqn:qftLecture3:140} \begin{aligned} x' &= \Lambda x \\ y' &= \Lambda y \end{aligned} \end{equation} where \begin{equation}\label{eqn:qftLecture3:160} \begin{aligned} x' \cdot y' &= (x')^\T G y' \\ &= x^\T \Lambda^\T G \Lambda y, \end{aligned} \end{equation} which we require to be \( x \cdot y = x^\T G y \) for all four vectors \( x, y \), that is \begin{equation}\label{eqn:qftLecture3:180} \Lambda^\T G \Lambda = G. \end{equation} We can find the result \ref{eqn:qftLecture3:120} immediately without having to first translate from index notation to matrices.
Field theory
The electrostatic potential is an example of a scalar field \( \phi(\Bx) \) unchanged by SO(3) rotations
\begin{equation}\label{eqn:qftLecture3:240} \Bx \rightarrow \Bx' = O \Bx, \end{equation} that is \begin{equation}\label{eqn:qftLecture3:260} \phi'(\Bx') = \phi(\Bx). \end{equation} Here \( \phi'(\Bx') \) is the value of the (electrostatic) scalar potential in a primed frame.
However, the electrostatic field is not invariant under Lorentz transformation.
We postulate that there is some scalar field \begin{equation}\label{eqn:qftLecture3:280} \phi'(x') = \phi(x), \end{equation} where \( x' = \Lambda x \) is an SO(1,3) transformation. There are actually no stable particles (fields that persist at long distances) described by Lorentz scalar fields, although there are some unstable scalar fields such as the Higgs, Pions, and Kaons. However, much of our homework and discussion will be focused on scalar fields, since they are the easiest to start with.
We need to first understand how derivatives \( \partial_\mu \phi(x) \) transform. Using the chain rule
\begin{equation}\label{eqn:qftLecture3:300} \begin{aligned} \PD{x^\mu}{\phi(x)} &= \PD{x^\mu}{\phi'(x')} \\ &= \PD{{x'}^\nu}{\phi'(x')} \PD{{x}^\mu}{{x'}^\nu} \\ &= \PD{{x'}^\nu}{\phi'(x')} \partial_\mu \lr{ {\Lambda^\nu}_\rho x^\rho } \\ &= \PD{{x'}^\nu}{\phi'(x')} {\Lambda^\nu}_\mu \\ &= \PD{{x'}^\nu}{\phi(x)} {\Lambda^\nu}_\mu. \end{aligned} \end{equation} Multiplying by the inverse \( {\lr{\Lambda^{-1}}^\mu}_\kappa \) we get \begin{equation}\label{eqn:qftLecture3:320} \PD{{x'}^\kappa}{} = {\lr{\Lambda^{-1}}^\mu}_\kappa \PD{x^\mu}{} \end{equation}
This should be familiar to you, and is an analogue of the transformation of the gradient:
\begin{equation}\label{eqn:qftLecture3:340} d\Br \cdot \spacegrad_\Br = d\Br' \cdot \spacegrad_{\Br'}. \end{equation}
Actions
We will start with a classical action, and quantize to determine a QFT. In mechanics we have the particle position \( q(t) \), which is a classical field in 1+0 time and space dimensions. Our action is
\begin{equation}\label{eqn:qftLecture3:360} S = \int dt \LL(t) = \int dt \lr{ \inv{2} \dot{q}^2 - V(q) }. \end{equation} This action depends on the position of the particle and is local in time. You could imagine that we have a more complex action where the action depends on future or past times \begin{equation}\label{eqn:qftLecture3:380} S = \int dt' q(t') K( t' - t ), \end{equation} but we don't seem to find such actions in classical mechanics.
Principles determining the form of the action:
- relativity (the action is invariant under Lorentz transformation),
- locality (the action depends on fields and their derivatives at a given \((t, \Bx)\)),
- the gauge principle (the action should be invariant under gauge transformation). We won't discuss this in detail right now since we will start with studying scalar fields. Recall that for Maxwell's equations a gauge transformation has the form \begin{equation}\label{eqn:qftLecture3:520} \phi \rightarrow \phi + \dot{\chi}, \BA \rightarrow \BA - \spacegrad \chi. \end{equation}
Suppose we have a real scalar field \( \phi(x) \) where \( x \in \mathbb{R}^{1,d-1} \). We will be integrating over space and time \( \int dt d^{d-1} x \) which we will write as \( \int d^d x \). Our action is
\begin{equation}\label{eqn:qftLecture3:400} S = \int d^d x \lr{ \text{Some action density to be determined } } \end{equation} The analogue of \( \dot{q}^2 \) is \begin{equation}\label{eqn:qftLecture3:420} \begin{aligned} \lr{ \PD{x^\mu}{\phi} } \lr{ \PD{x^\nu}{\phi} } g^{\mu\nu} &= (\partial_\mu \phi) (\partial_\nu \phi) g^{\mu\nu} \\ &= \partial^\mu \phi \partial_\mu \phi. \end{aligned} \end{equation} This has both time and spatial components, that is \begin{equation}\label{eqn:qftLecture3:440} \partial^\mu \phi \partial_\mu \phi = \dotphi^2 - (\spacegrad \phi)^2, \end{equation} so the desired simplest scalar action is \begin{equation}\label{eqn:qftLecture3:460} S = \int d^d x \lr{ \dotphi^2 - (\spacegrad \phi)^2 }. \end{equation} The measure transforms using a Jacobian, which we have seen is the Lorentz transform matrix, and has unit determinant \begin{equation}\label{eqn:qftLecture3:480} d^d x' = d^d x \Abs{ det( \Lambda^{-1} ) } = d^d x. \end{equation}
Problems. Question: Four vector form of the Maxwell gauge transformation.
Show that the transformation
\begin{equation}\label{eqn:qftLecture3:580} A^\mu \rightarrow A^\mu + \partial^\mu \chi \end{equation} is the desired four-vector form of the gauge transformation \ref{eqn:qftLecture3:520}, that is \begin{equation}\label{eqn:qftLecture3:540} \begin{aligned} j^\nu &= \partial_\mu {F'}^{\mu\nu} \\ &= \partial_\mu F^{\mu\nu}. \end{aligned} \end{equation} Also relate this four-vector gauge transformation to the spacetime split.
Answer
\begin{equation}\label{eqn:qftLecture3:560}
\begin{aligned} \partial_\mu {F'}^{\mu\nu} &= \partial_\mu \lr{ \partial^\mu {A'}^\nu - \partial^\nu {A'}^\mu } \\ &= \partial_\mu \lr{ \partial^\mu \lr{ A^\nu + \partial^\nu \chi } - \partial^\nu \lr{ A^\mu + \partial^\mu \chi } } \\ &= \partial_\mu {F}^{\mu\nu} + \partial_\mu \partial^\mu \partial^\nu \chi - \partial_\mu \partial^\nu \partial^\mu \chi \\ &= \partial_\mu {F}^{\mu\nu}, \end{aligned} \end{equation} by equality of mixed partials. Expanding \ref{eqn:qftLecture3:580} explicitly we find \begin{equation}\label{eqn:qftLecture3:600} {A'}^\mu = A^\mu + \partial^\mu \chi, \end{equation} which is \begin{equation}\label{eqn:qftLecture3:620} \begin{aligned} \phi' = {A'}^0 &= A^0 + \partial^0 \chi = \phi + \dot{\chi} \\ \BA' \cdot \Be_k = {A'}^k &= A^k + \partial^k \chi = \lr{ \BA - \spacegrad \chi } \cdot \Be_k. \end{aligned} \end{equation} The last of which can be written in vector notation as \( \BA' = \BA - \spacegrad \chi \).
|
This question already has an answer here:
I am confused why we can introduce differentials into an integral when performing an integration by substitution.
Consider the integral $$\int \frac{1}{ x \sqrt{1-x} } dx.$$ We can perform the substitutions $$x=\sin^2u,$$ $$dx=2\sin u \cos{u} du ,$$ on the integral to give $$\int \frac{1}{ \sin^2u \sqrt{1-\sin^2u} } 2\sin u \cos u du .$$
Why can you treat $dx$ as a differential?
From my understanding the integration sign $\int dx$ works as if it is an
operator, just like how $\frac {d}{dx}$ works as an operator and not as a fraction. Which means $\int$ and $dx$ should not be interpreted separately.
But at the same time treating $dx$ as a differential always works out fine, so there must be some validity in treating it as a differential.
|
There seems to be two different meanings of mathematical logic. One is "Classical Logic, Intuitionistic Logic, etc" and another "Set theory, Model Theory, Recursive Theory, etc (http://en.wikipedia.org/wiki/Mathematical_logic). Then, Theory is a somewhat mixture of one from the one class, and one from another class. Am I right?
Yes. "(Mathematical) Logic" refers both to a mathematical subject, and to specific
logics (propositional, first-order, modal etc.), much the same as the term "algebra".
"Theory" is often used to denote a system which is
not a system of logic. However, theories are based on (a specific) logic; they extend the base logic with additional external content (e.g. $(\forall n\in\mathbb{N})(n + 0 = n)$). Theories are often categorized according to which logic they use as their base; e.g. you can hear about "first-order theories". The external content can be anything one would like to talk about in a formalized manner. Probably the most famous example is the first-order Peano Arithmetic (PA).
Theories and logics are similiar, for example they both have axioms, rules of inference and we try to give appropriate semantics to both. And there are differences: if $F$ is a random well-formed-formula (wff) of logic $L$, we (often) don't care whether we can deduce either $F$ or $\neg F$ (the fact that we can't deduce $F$ from $L$ is written as $L \nvdash F$). If $F$ is a propositional variable and $L$ propositional logic, then we
want $F$ (and $\neg F$) not to be deducible from $L$. On the other hand, we try to avoid such phenomena in theories such as PA (where it's unavoidable due to incompleteness).
What does "P is a provable sentence in a theory T" mean? Does this mean "starting from the axioms of T, showing that the sentence P is true with respect to the Truth Table above"?
Roughly, yes.
When given a theory, you are given a set of axioms - which are looked at as just strings of symbols (we know they mean something, for example in $2 + 2 = 4$, we know that $+$ refers to the operations we know of; but we
pretend we don't know the "meaning"), and the set of rules of transformations, which basically say what you can do with the axioms. These two (and the language defining what is wff) are called the syntax.
When it comes to propositional logic (PL), one such system (there is more than one formalization of PL) might give you axioms like $\neg(P \wedge \neg P)$ and rules such as "When given $P$ you are allowed to deduce $P \vee S$". Things like commutation of disjunction have to be proven using such rules (even though you already know that "$\vee$" means "or", so it's obvious that commutation holds, you aren't allowed to use such outside knowledge).
On the other hand, there is semantics. Semantics doesn't use rules of transformation, but "the definition of truth". There are many tools helping you to see when is a given formula true according to such definition: truth tables are one of them. When a formula is "true" according to logic's definition of truth, we write $L \models F$. When a formula is deducible, $L \vdash F$. Property that $L \models F \Rightarrow L \vdash F$ is called completeness. $L \vdash F \Rightarrow L \models F$ is called soundness. Both hold for PL and First-order logic, so that you indeed (indirectly) know that F is deducible if it's truth table contain only $\top$'s.
|
Xypic offers many placement and formatting options for labels, including rotation.
However, I can't get this last feature to work correctly.
I have two problems with this:
with some inclinations the label won't be correctly shifted sideways (see the down-left arrow below);
with some inclinations the label won't be correctly placed along the length of the arrow (see the left arrow below).
The normal aspect of the arrow being :
MWE:
\documentclass[a4paper,12pt]{article}
\usepackage[all]{xy}
\newcommand{\bijar}[1][]{%
  \ar[#1]
  \ar@<0.7ex>@{}[#1]|-*[@]{\sim}} % Arrow for a
                                  % bijective mapping
\begin{document}
\[\xymatrix{
  &&&& \\
  &&&& \\
  && {\bullet} \bijar[rr] \bijar[uurr] \bijar[uu] \bijar[uull] \bijar[ll] \bijar[ddll] \bijar[dd] \bijar[ddrr] && \\
  &&&& \\
  &&&&
}\]
\end{document}
Some comments about the code: I draw this arrow in 2 arrows, one being only the arrow and the second one being an empty arrow, shifted by the desired dimension, and actually supporting the label. The reason I use that instead of regular labels is that it allows me to specify the distance between the label and the arrow. Also, I don't think it can be the problem. After all, sideways-shifted arrows are exactly in front of their normal position, they aren't shifted backward or forward.
I'm using xy-pic 3.8.6 with pdflatex.
Edit : After reading this question, I came up with the following definition of the macro:
\newcommand{\bijar}[1][]{%
  \ar[#1]
  \ar@<0.7ex>@{}[#1]|-*@{~}} % Arrow for a
                             % bijective mapping
It indeed solves both of the above problems, but as you can see on the picture below, there is another problem appearing on "standard" directions :
The other problem is that this works only because \sim can be imitated by a sloped arrow body; for other arrow labels, like those for open or closed immersions, this hack wouldn't work.
|
Can a planet, star, or other body have a magnetic field that is stronger or has a longer range than its gravity?
Let's look at the proper magnetic force (as opposed to the Lorentz force on a
moving, charged object described in @KenG's answer) on a specimen $S$ of magnetized material with mass $M_S$ as a way to try to compare. Let's arbitrarily assume it has a fixed, permanent magnetic moment $m_S$. We can't use iron because it will saturate too easily.
Then let's look at how the forces scale differently with distance
$$\mathbf{F_G} = -\frac{G M_S M}{r^2}\mathbf{\hat{r}} \tag{1}$$
$$\mathbf{F_B} =\nabla (\mathbf{m_S} \cdot \mathbf{B}(\mathbf{r})) \tag{2}$$
Let's reduce these to scalar equations (assuming $\mathbf{m_S}$ and $\mathbf{B}$ are parallel and that all forces are attractive) and evaluate the fields and their gradients on the equator of the body, at its physical radius $R$. Since the magnetic force on our dipole specimen drops off faster than the gravitational force, we have to evaluate the two at the closest physically possible distance:
$$F_G = \frac{G M_S M}{R^2} \tag{3}$$
$$F_B = \frac{3 m_S B_{r=R}}{R} \tag{4}$$
where our specimen is a distance $R$ from our field source, and its moment $m_S$ is a magnetization of 1 Tesla times the volume of a 1 kg rare earth magnet, about 0.000125 cubic meters.
All MKS units, all rough, ballpark numbers with emphasis on strongest magnetic fields
Body          R (m)     M (kg)    B(r=R) (T)  F_G (N)   F_B (N)   F_B/F_G
Earth         6.4E+06   6.0E+24   5.0E-05     9.8E+00   2.9E-15   3.0E-16
Jupiter       7.1E+07   1.9E+27   4.2E-04     2.5E+01   2.2E-15   8.8E-17
Neutron Star  1.0E+04   4.0E+30   5.0E+10     2.7E+12   1.9E+03   7.0E-10
Magnetar      1.0E+04   4.0E+30   2.0E+11     2.7E+12   7.6E+03   2.8E-09
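As a sanity check, here is a small Python snippet (mine, not the answerer's) that reproduces the table above from equations (3) and (4) with $M_S = 1$ kg and $m_S = 1\ \text{T} \times 0.000125\ \text{m}^3$:

G   = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_S = 1.0              # specimen mass, kg
m_S = 1.0 * 0.000125   # specimen moment, as defined above

bodies = {
    # name: (R in m, M in kg, B at r=R in T)
    "Earth":        (6.4e6, 6.0e24, 5.0e-5),
    "Jupiter":      (7.1e7, 1.9e27, 4.2e-4),
    "Neutron Star": (1.0e4, 4.0e30, 5.0e10),
    "Magnetar":     (1.0e4, 4.0e30, 2.0e11),
}

for name, (R, M, B) in bodies.items():
    F_G = G * M_S * M / R**2    # equation (3)
    F_B = 3 * m_S * B / R       # equation (4)
    print(f"{name:12s}  F_G = {F_G:.1e} N   F_B = {F_B:.1e} N   F_B/F_G = {F_B/F_G:.1e}")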
So even for a magnetar (see also 1, 2), a kind of neutron star with a very strong magnetic field, the magnetic force on our 1 kg specimen of permanent magnet is only about 3 parts per billion as strong as the gravitational force.
You might see a much more favorable ratio if you compared two subatomic particles at short ranges (e.g. 1E-15 meters) but for astronomical objects, gravity seems to win smartly.
It depends on what object it's acting on. There are many objects, including stars, that have magnetic fields where Lorentz forces on charged particles like electrons and protons are stronger than the gravitational force on them.
Also remember that the strength of the Lorentz force depends on the speed of the particle moving through it, so a fast enough moving electron even here on Earth will receive a larger magnetic force than a force of gravity. This is how the Earth's magnetic field is able to contain charged particles in the Van Allen belts that its gravity could not contain.
It isn't impossible, but the short answer is "no".
A gravitational field will accelerate all matter and energy equally while a magnetic field will only accelerate moving electric charges (other magnets).
The force due to gravity is proportional to the
inverse square of the distance, and the force due to magnetism asymptotically approaches the inverse cube of the distance. At some critical distance the gravitational force will become stronger than the magnetic force.
Unless most of the large body was magnetic, even over the magnetic poles the magnetic field would probably be too low to levitate a typical magnet in the large body's gravitational field.
|
Here's a very simple method: Find the largest number below $2^n$ that is a safe prime. Use standard primality tests for $p$ and $q = (p - 1)/2$. For example, $2^{2048} - 1942289$ is the largest safe prime below $2^{2048}$.
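A minimal sketch of that search in Python, using sympy's (probabilistic) isprime as the "standard primality test" (my own illustration; for the quoted $2^{2048}$ example you would want a provable primality test as a final check):

from sympy import isprime

def largest_safe_prime_below(bound):
    p = bound - 1 if bound % 2 == 0 else bound - 2   # largest odd number below bound
    while p > 5:
        if isprime(p) and isprime((p - 1) // 2):
            return p
        p -= 2
    return None

# Small demonstration; n = 2048 works too but takes a while:
n = 64
p = largest_safe_prime_below(2**n)
print(p, 2**n - p)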
But you didn't specify what you want this for. If you want to use this with Diffie–Hellman to resist discrete logarithms, then that won't be a good option. The bit lengths you describe are designed for DSA, which has different security requirements.
For an $n$-bit safe prime
to use with Diffie–Hellman, you want few small subgroups and you want to destroy the structure that the SNFS exploits, so you can pick the smallest $c$ such that $$p = 2^n - 2^{n - 64} - 1 + 2^{64} (\lfloor 2^{n - 130} \pi \rfloor + c)$$ is a safe prime and congruent to 7 modulo 8. The latter condition ensures that 2 generates the subgroup of quadratic residues in $(\mathbb Z/p\mathbb Z)^\times$, of prime order $(p - 1)/2$. This is the technique used by RFC 3526 to pick standard groups at sizes from 1536 to 8192; the technique is described in RFC 2412, Appendix E.
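A rough sketch of that construction in Python (my own, using mpmath and sympy; it is illustrative rather than fast, and the real RFC 3526 groups were of course generated and verified with their own tooling):

from mpmath import mp, mpf, floor, pi
from sympy import isprime

def nums_safe_prime(n, max_c=1_000_000):
    mp.prec = n + 64                               # enough bits to take the floor exactly
    mid = int(floor(pi * mpf(2) ** (n - 130)))     # "nothing up my sleeve" middle bits
    base = 2**n - 2**(n - 64) - 1
    for c in range(max_c):
        p = base + 2**64 * (mid + c)
        if p % 8 == 7 and isprime(p) and isprime((p - 1) // 2):
            return p, c
    return None

# If I have transcribed the formula correctly, nums_safe_prime(1536) should
# recover the 1536-bit MODP group of RFC 3526, but expect a very long run time.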
This technique is sometimes called NUMS, for nothing-up-my-sleeves, because it uses the conventional transcendental constant $\pi$ instead of some inexplicable string of 1920 bits. There's no security significance to $\pi$ except that it destroys some structure the SNFS could exploit—you could use $e$ instead, or $e^\pi$, or $\cos 1$, or all manner of other options to get a result you want if you knew of a small, say one in a million, fraction of primes that admitted a back door. For this reason, may I interest you in doing Diffie–Hellman over rigidly selected elliptic-curve groups free of magic constants instead? As a bonus, you get higher performance, smaller keys, easier defense against timing side channels, a number of high-quality implementations,
and cooler names like X25519.
I'm not sure offhand what all the security requirements for Elgamal encryption are: approximately nobody uses Elgamal encryption these days. Naively, if the recipient, given $(c, d)$, returns $d \cdot c^{-x}$ where $x$ is the secret exponent, then the adversary can apply the Lim–Lee active small-subgroup attack by supplying $d = 1$ and $c$ of small orders $n_0$, $n_1$, $n_2$,
etc., to learn points ${g_0}^x$, ${g_1}^x$, etc., to which they can apply discrete logarithms in small subgroups to recover $x \bmod n_0$, $x \bmod n_1$, etc., and reconstruct $x$ with the Chinese remainder theorem.
Could I interest you in replacing your use of Elgamal encryption by X25519 in a NaCl crypto_box or libsodium crypto_box_seal, which have none of these finicky considerations and run much faster with fewer side channels and have smaller ciphertext expansion and are widely implemented and understood?
|
It's been a long time since my last exam on QM, so now I'm struggling with some basic concept that clearly I didn't understand very well.
1) The Schrödinger equation for a free particle is ##-\frac {\hbar^2}{2m} \frac {\partial ^2 \psi}{\partial x^2} = E \psi## and the solutions are plane waves of the form ##\psi(x) = Ae^{ikx} + Be^{-ikx}##. These functions cannot be normalized, thus they do not represent a physical phenomenon, but if I superimpose all of them with an integral on ##k## I get the "true" solution (the wave packet). This implies that a free particle with definite energy does not exist (only a superposition of states with different energies can exist). This bugs me a lot. For example, think about an atom hit by ionizing radiation: at some point an electron will be kicked out of the shell and now, if I wait some time, I have a free electron (so a free particle) and what about its energy? It should be defined by the law of conservation of energy...

2) I'm reading some lecture notes about scattering. Why does everyone take the incoming particle to be described by the state ##\psi_i = e^{i \mathbf k \cdot \mathbf r}## if it is not normalizable? It seems to me they all assume the particle to be inside a box of length ##L## and forget about the normalization constant. But why?
|
The answer is no, of course.
Consider, for instance, the following languages:
$L_1 = \{ \langle M \rangle \# \langle M \rangle \mid \, \text{$M$ is a TM which does not halt on input $\langle M \rangle$} \}$
$L = \{ \langle M \rangle \# \langle M \rangle \mid \, \text{$M$ is a TM} \}$
$L_2 = L \cup \{ \langle M \rangle \# w \mid \, \text{$M$ is a TM which does not halt on input $w$} \}$
Then it is easy to see that neither $L_1$ nor $L_2$ is recursively enumerable (i.e., Turing-recognizable), despite $L$ being even context-sensitive. (In fact, you could even make $L$ context-free by replacing the second $\langle M \rangle$ with its reversed copy.)
The upshot is that set inclusion (on its own) tells us very little about the sets involved being recursively enumerable or not, unless the symmetric difference between them is finite. In the case of $L_1 \subseteq L \subseteq L_2$, we
do know, for instance, that $|L \setminus L_1|$ or $|L_2 \setminus L|$ being finite implies that $L_2$ is also not recursively enumerable.
|
Assume that a matrix $\mathbf{R}$ of size $N\times N$ is given such that $r_{i,j} \ge r_{i,j+1}$ for any $i$ and $j$, where $r_{i,j}$ is the $i$th row and $j$th column element of $\mathbf{R}$.
I will solve the following problem: $$ \begin{array}{cl} \displaystyle \max_{K_i, i=1,\ldots,N} & \displaystyle f(K_1,\ldots,K_N)=\sum_{i=1}^N \sum_{j=1}^{K_i}r_{i,j}\\ \text{subject to} & \displaystyle \sum_{i=1}^N K_i = L, \end{array} $$ where $L$ is not greater than $N$.
Literally speaking, the above problem is to select $L$ items out of $N$ items, allowing duplicate selection, where the score obtained by selecting an item decreases as the number of times selected increases.
My greedy algorithm is as follows:
Given r[i,j] for all i and j.
for i = 1:N
    K[i] = 1.
end
repeat
    i_star = argmax r[i, K_i] over i.
    K[i_star] = K[i_star] + 1.
until L = sum K[i] over all i
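For reference, a short Python sketch of the same greedy using a max-heap, so each of the $L$ picks costs $O(\log N)$. (Here I start the counts at 0 and treat r[i][K[i]] as the score of the next copy of item $i$, which I believe is the intent of the pseudocode above.)

import heapq

def greedy_select(r, L):
    N = len(r)
    K = [0] * N
    heap = [(-r[i][0], i) for i in range(N)]   # negate scores to get a max-heap
    heapq.heapify(heap)
    total = 0
    for _ in range(L):
        neg_score, i = heapq.heappop(heap)
        total += -neg_score
        K[i] += 1
        if K[i] < len(r[i]):
            heapq.heappush(heap, (-r[i][K[i]], i))
    return K, total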
Intuitively, the above greedy algorithm gives an optimal solution to the above problem. However, I want to prove that the solution derived by the algorithm is optimal.
I am trying to prove it using mathematical induction, but my proof is not as rigorous as I would hope, since I failed to prove the optimal substructure of the problem. The optimal substructure implies that an optimal solution when $L=k + 1$ contains an optimal solution when $L=k$.
How can I prove it mathematically rigorously?
|
We want to understand the integral
$\displaystyle \int_{-\infty}^\infty \frac{\mathrm{d}t}{(1 + t^2)^n}. \ \ \ \ \ (1)$
Although fattybake mentions the residue theorem, we won’t use that at all. Instead, we will be very clever.
We will do a technique that was once very common (up until the 1910s or so), but is much less common now: let's multiply by $\displaystyle \Gamma(n) = \int_0^\infty u^n e^{-u} \frac{\mathrm{d}u}{u}$. This yields
$\displaystyle \int_0^\infty \int_{-\infty}^\infty \left(\frac{u}{1 + t^2}\right)^n e^{-u}\mathrm{d}t \frac{\mathrm{d}u}{u} = \int_{-\infty}^\infty \int_0^\infty \left(\frac{u}{1 + t^2}\right)^n e^{-u} \frac{\mathrm{d}u}{u}\mathrm{d}t, \ \ \ \ \ (2)$
where I interchanged the order of integration because everything converges really really nicely. Do a change of variables, sending $u \mapsto u(1+t^2)$. Notice that my nicely behaving measure $\mathrm{d}u/u$ completely ignores this change of variables, which is why I write my $\Gamma$ function that way. Also be pleased that we are squaring $t$, so that this is positive and doesn't mess with where we are integrating. This leads us to
$\displaystyle \int_{-\infty}^\infty \int_0^\infty u^n e^{-u - ut^2} \frac{\mathrm{d}u}{u}\mathrm{d}t = \int_0^\infty \int_{-\infty}^\infty u^n e^{-u - ut^2} \mathrm{d}t\frac{\mathrm{d}u}{u},$
where I change the order of integration again. Now we have an inner $t$ integral that we can do, as it's just the standard Gaussian integral (google this if this doesn't make sense to you). The inner integral is
$\displaystyle \int_{-\infty}^\infty e^{-ut^2} \mathrm{d}t = \sqrt{\pi / u}. $
Putting this into the above yields
$\displaystyle \sqrt{\pi} \int_0^\infty u^{n-1/2} e^{-u} \frac{\mathrm{d}u}{u}, \ \ \ \ \ (4)$
which is exactly the definition for $\Gamma(n-\frac12) \cdot \sqrt \pi$.
But remember, we multiplied everything by $\Gamma(n)$ to start with. So we divide by that to get the result:
$\displaystyle \int_{-\infty}^\infty \frac{\mathrm{d}t}{(1 + t^2)^n} = \dfrac{\sqrt{\pi} \Gamma(n-\frac12)}{\Gamma(n)} \ \ \ \ \ (5)$
Finally, a copy of the latex file itself.
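If you want to convince yourself numerically that (5) is right, here are a few lines of Python (my addition, not part of the original post) comparing the integral with the closed form:

from math import gamma, pi, sqrt
from scipy.integrate import quad

for n in range(1, 6):
    numeric, _ = quad(lambda t: (1 + t**2) ** (-n), -float("inf"), float("inf"))
    closed = sqrt(pi) * gamma(n - 0.5) / gamma(n)
    print(n, numeric, closed)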
|
Let the binary relation computed by a nondeterministic transducer be the relation between input strings and the possible output strings the transducer can produce (and accept) for the given input string.
It is easy to see that the nondeterministic transducers using polynomial time can compute different binary relations than the nondeterministic transducers using logarithmic space. (Trivial examples like $\{(1^n,ww^R):n\in\mathbb N, |w|\leq n\}$ suffice to see this.
Edit: This assumes that the output is generated symbol by symbol; see András Salamon's comment. If random access to a write-only output memory is assumed, then $\{(1^n,w):n\in\mathbb N, w\in D_2, |w|\leq 2n\}$, where $D_2$ is the Dyck language of all strings of matching brackets over two pairs of brackets, say $\{ [,], (,) \}$, is an example. However, proving that this example really works is non-trivial.)
Can anything of interest be concluded from this basic observation? For example, concluding that there must exist an oracle $A$ such that $NL^A \neq NP^A$ would be of interest, but things don't work that way. Or concluding that directly programming in NP is easier than directly programming in NL, because more powerful subroutines are available. That sort of sounds true, even though subroutines generally can have more than a single input string and produce more than a single output string. (So one could try to prove that if all those possible subroutines were equal, then there could not exist an oracle for which the relativized decision problems are different. But even this will probably fail.)

Edit: WLOG, we can restrict ourselves to NL transducers that never repeat a configuration. These are exactly those NL transducers that only use polynomial time. The random access to write-only output memory is well outside NL. But it is still inside NP, so the composition of those NL transducers could be done inside NP. This should be good enough to conclude things about powerful subroutines or oracles, if this is possible at all. (Even the symbol-by-symbol output tape was outside of NL; it was just less obvious.) The oracles $A$ such that $NL^A \neq NP^A$ are white box oracles, where the problem $A$ is (expected to be) outside of the considered complexity classes. The conclusions about subroutines from the basic observation might tell us something about black box oracles from the same or weaker complexity classes. Or they might at least provide a motivation to prove black box separations between complexity classes relative to oracles from weaker complexity classes.
|
Answer
$29^\circ$
Work Step by Step
The measure of an inscribed angle of a circle is one-half the measure of its intercepted arc, thus $\angle B=\frac{1}{2}\overset{\frown}{AC}=\frac{1}{2}(58^\circ)=29^\circ$.
|
In physics, the wave function is a mathematical function $\psi: \mathbb{R}^3 \to \mathbb{C}$. In the discussion of fermions and bosons we can talk about how the wave function behaves under the interchange of two particles. There are two fundamental cases:
$$ \psi(x,y) = \pm \psi(y,x)$$
If there is a "+" we get bosons, in the case of "-" we get a fermion. Indeed we can construct functions in 3 variables which do the same thing:
$$ \psi(x,y,z) = \psi(y,z,x) = \psi(z,x,y) = - \psi(y,x,z) = - \psi(x,z,y)= - \psi(z,y,x)$$
Indeed, any function in two variables can be split into the symmetric and anti-symmetric part:
$$ \psi(x,y) = \frac{1}{2}\Big[ \underbrace{\psi(x,y) + \psi(y,x)}_S \Big] + \frac{1}{2}\Big[ \underbrace{\psi(x,y) - \psi(y,x)}_{A} \Big]$$
In terms of representation theory we are showing a very special case of Schur-Weyl duality (or plethysm? I forget the name): $V^{\otimes 2} = \wedge^2 \,V \oplus \mathrm{Sym}^2(V) $
We can write two different projection operators, one taking the symmetric part and one the antisymmetric part:
\begin{eqnarray*} S\psi(x,y) &=& \tfrac{1}{2} \big[ \psi(x,y) + \psi(y,x) \big] \\ A\psi(x,y) &=& \tfrac{1}{2} \big[ \psi(x,y) - \psi(y,x) \big]\end{eqnarray*}
These two projection operators are examples of Young symmetrizers. For 3 or more particles there are more examples, using Young tableaux.
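Here is a tiny sympy illustration (my own, not part of the original post) of $S$ and $A$ acting on an abstract two-particle wave function:

from sympy import symbols, Function, Rational, simplify

x, y = symbols('x y')
psi = Function('psi')

def swap(expr):
    return expr.subs([(x, y), (y, x)], simultaneous=True)

def S(expr):   # symmetric part
    return Rational(1, 2) * (expr + swap(expr))

def A(expr):   # antisymmetric part
    return Rational(1, 2) * (expr - swap(expr))

f = psi(x, y)
print(simplify(S(f) + A(f) - f))     # 0: the two parts add back up to psi
print(simplify(swap(S(f)) - S(f)))   # 0: S psi is symmetric
print(simplify(swap(A(f)) + A(f)))   # 0: A psi is antisymmetric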
I am stopping here to save my work in case my computer crashes (as it sometimes does).
|
I must say I find it odd (and a bit worrisome) that one can find textbooks whereby you learn the definition of a Banach algebra (and even a Banach *-algebra) without seeing examples that are not $C^*$-algebras. There is more to life than $B(H)$...
Anyway, some commutative examples which are naturally algebras of functions. In every case the involution is just conjugation of functions.
1) For $G$ a locally compact abelian group (think ${\mathbb Z}^k$ or ${\mathbb T}^k$ or ${\mathbb R}^k$) with dual group $\Gamma$, take$$A(G) = \{ f\in C_0(G) \mid \widehat{f} \in \ell^1(\Gamma) \} $$the so-called Fourier algebra of $G$. (One can define $A(G)$ for arbitrary locally compact groups but the definition is more technical.)
2) Algebras of Lipschitz/Hölder functions. Take your favourite compact metric space $(X,d)$, take some $0<\alpha<1$, and define$$ L_\alpha(f) = \sup_{x,y\in X; x\neq y} \frac{ \vert f(x)-f(y) \vert }{d(x,y)^\alpha} $$then take$$ {\rm Lip}_\alpha(X,d) = \{ f: X\to {\mathbb C} \mid L_\alpha(f)<\infty \} $$equipped with the norm $\Vert f \Vert_\alpha := \Vert f\Vert_{\infty} + L_\alpha(f)$.
3) The algebra $C^k[0,1]$ of $k$-times continuously differentiable functions on $[0,1]$ (for $k\geq 1$), equipped with the natural norm built out of the sup-norms of the derivatives.
If you are willing to consider Banach algebras without involution then there are ${\rm many}^{\rm many}$ more examples.
|
According to WolframAlpha, $i^i=e^{-\pi/2}$ but I don't know how I can prove it.
Write $i=e^{\frac{\pi}{2}i}$, then $i^i=(e^{\frac{\pi}{2}i})^i = e^{-\frac{\pi}{2}} \in \mathbb{R}$. Be careful though, taking complex powers is more... complex... than it may appear on first sight $-$ see here for more info.
In particular, it's not well-defined (until we make some choice that makes it well-defined); we could just as well have written $i=e^{\frac{5\pi}{2}i}$ and obtained $i^i=e^{-\frac{5\pi}{2}}$. But $i^i$ can't be equal to both $e^{-\frac{\pi}{2}}$ and $e^{-\frac{5\pi}{2}}$, can it?
Despite the lack-of-well-defined-ness, though, $i^i$ is always real, no matter which '$i^{\text{th}}$ power of $i$' we decide to take.
More depth: If $z,\alpha \in \mathbb{C}$ then we can define$$z^{\alpha} = \exp(\alpha \log z)$$where $\exp w$ is defined in some independent manner, e.g. by its power series. The complex logarithm is defined by$$\log z = \log \left| z \right| + i\arg z$$and therefore depends on our choice of range of argument. If we fix a range of argument, though, then $z^{\alpha}$ becomes well-defined.
Now, here, $z=i$ and so $\log i = i\arg i$, so $$i^i = \exp (i \cdot i\arg i) = \exp (-\arg i)$$ so no matter what we choose for our range of argument, we always have $i^i \in \mathbb{R}$.
Fun stuff, eh?
Here's a proof that I absolutely do not believe: take its complex conjugate, which is $\bigl({\bar i}\bigr)^{\bar i}=(1/i)^{-i}=i^i$. Since complex conjugation leaves it fixed, it’s real!
EDIT: In answer to @Isaac’s comment, I think that to justify the formula above, you have to go through exactly the same arguments that most of the other answerers did. For complex numbers $u$ and $v$, we define $u^v=\exp(v\log u)$. Now, the exponential and the logarithm are defined by series with all real coefficients; alternatively you can say that they are analytic, sending reals to reals. Thus $\overline{\exp u}=\exp(\bar u)$ and $\overline{\log(u)}=\log\bar u$. The result follows, always sweeping under the rug the fact that the logarithm is not well defined.
Using the representation that $i = e^{i \pi/2}$, we have $i^i = \left(e^{i\pi/2}\right)^i = e^{i^2\pi/2} = e^{-\pi/2}$.
$i = e^{i\pi/2}$ comes from the representation that $e^{i\theta} = \cos(\theta)+i\sin(\theta)$, which for $\theta = \pi/2$ gives us $e^{i\pi/2} = \cos \pi/2 + i \sin \pi/2 = 0+i\cdot 1 = i$.
Edit: To add to the other fantastic answers/comments, this is the result on the
principal branch. Others have commented that you can equivalently represent $i = e^{i(2k+1/2)\pi}$ and obtain other real-valued answers for $i^i$. Wolfram Alpha gives you $e^{-\pi/2}$ because its default setting is to return the principal value.
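For what it's worth, Python's built-in complex power also uses the principal branch, so it agrees with the value Wolfram Alpha returns:

import cmath
print((1j) ** 1j)                 # (0.20787957635076193+0j)
print(cmath.exp(-cmath.pi / 2))   # the same value, e^(-pi/2)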
Edit again:
It may seem weird that we resort to this "out of nowhere" polar representation of complex numbers, but it is a powerful tool.
Over the reals, the concept that "exponentiation = repeated multiplication" breaks down when you have non-integer exponents, so you have to start defining exponentiation using suprema of sets, which exploits the ordered field nature of the reals.
The complex field is not an ordered field, so the equivalent notion of a supremum doesn't exist. So how do we take
any number to the power $i$, let alone a complex number? The polar representation allows us to deal with this issue in a rather clever fashion.
$i^i$ takes infinitely many values:
$$i^i = e^{i \log i} = e^{i(i\pi/2 + 2 \pi i m)} = e^{-\pi/2}e^{-2 \pi m},$$
where $m$ is an integer.
This would come right from Euler's formula. Let's derive it first. There are many ways to derive it though, the Taylor series method being the most popular; here I’ll go through a different proof. Let the polar form of the complex number be equal to $z$ . $$\implies z = \cos x + i\sin x$$ Differentiating on both sides we get,
$$\implies \dfrac{dz}{dx} = -\sin x + i\cos x$$
$$\implies dz = (-\sin x + i\cos x)dx$$ Integrating on both sides,
$$\implies \displaystyle \int \frac{dz}{z} = i \int dx$$ $$\implies \log_e z = ix + K$$ Setting $x = 0$ in the equation gives $z = 1$ and hence $K = \log_e 1 = 0$, so we have $$\implies z = e^{ix}$$ $$\implies e^{ix} = \cos x + i\sin x$$ The most famous example of a completely real number being equal to a real raised to an imaginary power is $$e^{i\pi} = -1,$$ which is Euler's identity. To find $i$ to the power $i$ we would have to put $x = \frac{\pi}2$ in Euler's formula. We would get $$e^{i\frac{\pi}2} = \cos \frac{\pi}2 + i\sin \frac{\pi}2$$ $$e^{i\frac{\pi}2} = i$$ $${(e^{i\frac{\pi}2})}^{i} = i^{i}$$ $$i^{i} = {e^{i^{2}\frac{\pi}2}} = {e^{-\frac{\pi}2}}$$ $$i^{i} = {e^{-\frac{\pi}2}} = 0.20787957635$$ This value of $i$ to the power $i$ is derived from the principal values of $\sin$ and $\cos$ which satisfy this equation. There are infinitely many angles through which this can be evaluated, since $\sin$ and $\cos$ are periodic functions.
Let's say: $f(z) = z^\theta$
We know Euler's formula: $e^{i \theta} = \cos(\theta) + i \sin(\theta)$ Using it we will get: $$z^\theta = e^{\theta \ln(z)} = e^{\theta (\ln\|z\| + i\arg (z))} = e^{\theta \ln\|z\| }e^{ i \theta \arg (z))}= e^{\theta \ln\|z\| }{(\cos(\theta\arg (z)) + i \sin(\theta\arg (z)))} z \in C$$
So if $$z = i \wedge \theta = i \implies$$ $$ z^\theta = i^i = e^{i \ln(i)} = e^{i (\ln\|i\| + i\arg (i))} = e^{i \ln\|i\| }e^{ i i \arg (i))}= e^{i \ln(1) }e^{- \frac{\pi}{2}+2\pi k}= e^0 e^{- \frac{\pi}{2}+2\pi k}=e^{- \frac{\pi}{2}+2\pi k} $$ which is a bunch of Real numbers depending $k \in \mathbb Z$
So it is already proved that $i^i$ is a real number.
$i^i=x$ so $i^{(i^2)}=x^i$ So $\frac{1}{i} = x^i$ So $-i=x^i$ so $i = -(x^i)$ So $i^i = -x^{(-ix)^i}$
|
There is a certain pattern to learning mathematics that I got used to in primary and secondary school. It starts like this: first, there are only positive numbers. We have 3 apples, or 2 apples, or maybe 0 apples, and that’s that. Sometime after realizing that 100 apples is a lot of apples (I’m sure that’s how my 6 year old self would have thought of it), we learn that we might have a negative number. That’s how I learned that they don’t always tell us everything, and that sometimes the things that they do tell us have silly names.
We know how the story goes – for a while, there aren’t remainders in division. Imaginary numbers don’t exist. Under no circumstance can we divide or multiply by infinity, or divide by zero. And this doesn’t go away: in my calculus courses (and the ones I’ve helped instruct), almost every function is continuous (at least mostly) and continuity is equivalent to ‘being able to draw it without lifting a pencil.’ It would be absolutely impossible to conceive of a function that’s continuous precisely at the irrationals, for instance (and let’s not talk about $G_\delta$ or $F_\sigma$). And so the pattern goes on.
So when I hit my first class where I learned and used the pigeon-hole principle regularly (which I think was my combinatorics class? Michelle – if you’re reading this, perhaps you remember), I thought the name “pigeon-hole” was another one of those names that will get tossed. And I was wrong.
I was in a seminar today, listening to someone talk about improving results related to equidistribution theorems, approximating reals by rationals, and… the Dirichlet Box Principle. And there was much talking of pigeons and their holes (albeit a bit stranger, and far more ergodic-sounding than what I first learned on).
Not knowing much ergodic theory (or any at all, really), I began to think about a related problem. A standard application of pigeonholing is to show that any real number can be approximated to arbitrary accuracy by a rational $\frac{p}{q}$. What if we restricted our $p,q$ to be prime? I.e., are prime ratios dense in (say) $\mathbb{R}^+$?
More after the fold –
So I seek to answer that question in a few different ways. It’s nice to come across problems that can be done, but that I haven’t done before. We’ll do three (somewhat) independent proofs.
First Method: Brute Force
The Prime Number Theorem (wiki) asserts that $\pi(n) \sim \frac{n}{\log n}$, and correspondingly that the nth prime $p_n \approx n \log n$. So then we might hope that if $\frac{n \log n}{m \log m}$ is dense in $\mathbb{R}^+$, prime ratios would be dense too. Fortunately, showing that $\frac{n \log n}{m \log m}$ is dense is straightforward. For the rest, we use this proposition:
Proposition If $p_n \sim q_n$, then $\frac{p_n}{p_m}$ is dense iff $\frac{q_n}{q_m}$ is dense. Proof: Since $p_n \sim q_n$, for any $\epsilon > 0$ there is some $N$ s.t. for all $n,m > N$, we have that $|1 - \frac{p_n}{q_n}| < \epsilon$. Thus $1 - \epsilon < \frac{p_n}{q_n} < 1 + \epsilon$. Similarly, we can say that $1 - \epsilon < \frac{q_m}{p_m} < 1 + \epsilon$.
Putting these together, we see that
$(1 - \epsilon)^2 < \frac{p_n}{p_m} \frac{q_m}{q_n} < (1 + \epsilon)^2$
$\frac{p_m}{p_n}(1-\epsilon)^2 < \frac{q_m}{q_n} < \frac{p_m}{p_n} (1 + \epsilon)^2$
If $\frac{p_m}{p_n}$ is dense, then in particular for any real number $r$, we can choose some $n,m > N$ s.t. $r (1 - \epsilon) < \frac{p_m}{p_n} < r(1 + \epsilon)$. Putting this together again, we see that
$r(1-\epsilon)^3 < \frac{q_m}{q_n} < r(1 + \epsilon)^3$
And so $q_n/q_m$ is dense as well. The proof of the converse is identical. $\diamondsuit$
Now that we have that, we ask: is it true that $\frac{n \log n}{m \log m}$ is dense? In short, yes. Now that we’ve gotten rid of the prime number restriction, this is far simpler. So I leave it out – but leave a comment if it’s unclear.
Method 2: Closer to Proper Pigeonholing
In a paper by Dusart (link to arxiv), Estimates of Some Functions over Primes without R.H., it is proved that for $x > 400 000$, there is always a prime in the interval $[x, x(1 + \frac{1}{25 \log^2 x})]$. We can use this to show the density of prime ratios as well. In fact, let’s be a little slick. If prime ratios are dense in the rationals, then since the rationals are dense in the reals we’ll be set. So suppose we wanted to get really close to some rational $\frac{a}{b}$. Then consider pairs of intervals $[an, an(1 + \frac{1}{25 \log ^2 an})], [bn, bn(1 + \frac{1}{25 \log ^2 bn})]$ for $n$ large enough that $an, bn > 400 000$. We know there are primes $p_n, q_n$ in each pair of intervals.
Then our result follows from the fact that $\displaystyle \lim \frac{an}{bn} = \frac{a}{b} = \lim \frac{an(1 + \frac{1}{25 \log ^2 an})}{bn(1 + \frac{1}{25 \log ^2 bn})}$ and the sandwich theorem.
We pause for an aside: a friend of mine today, after the colloquium, was asking about why the pigeonhole principle was called the pigeonhole principle. Why not the balls in baskets lemma? Or the sock lemma or the box principle (which it is also called, but with far less regularity as far as I can tell)? So we considered calling it the sandwich theorem: if I have 4 different meats, but only enough bread for 3 sandwiches, then one sandwich will get at least 2 meats. What if we simply called every theorem the sandwich theorem, and came up with some equally unenlightening metaphorical explanation? Oof – deliberate obfuscation.
Method 3: First Principles
We do not actually need the extreme power of Dusart’s bound (which is not to say it’s not a great result – it is). In fact, we need nothing other than the prime number theorem itself.
Lemma For any $\epsilon > 0$, there exists some number $N$ s.t. for all $x > N$, there is a prime in the interval $[x, x(1+\epsilon)]$. Proof: Directly use the prime number theorem to show that $\lim \frac{\pi(n(1 + \epsilon))}{\pi(n)} = 1+\epsilon > 1$. Proposition Prime ratios are dense in the positive reals. Proof: For any real $r$ and $\epsilon > 0$, we want primes $p,q$ s.t. $|p/q - r| < \epsilon$, or equivalently $qr - q\epsilon < p < qr + q\epsilon$. Then let $\epsilon' = \epsilon/r$. Let $N = N(\epsilon')$ be the bound from the last lemma, and let $q$ be any prime with $qr > N$. Then since there's a prime in $[x, x(1 + \epsilon')]$, let $x = qr$ to complete the proof. $\diamondsuit$
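Just for fun, here is a little Python sketch (mine) of Method 3 in action: fix a prime $q$, look for a prime near $qr$, and stop once the ratio is within $\epsilon$ of the target.

from sympy import nextprime

def prime_ratio_approx(r, eps):
    q = 2
    while True:
        q = nextprime(q)
        p = nextprime(int(q * r))   # a prime close to q*r
        if abs(p / q - r) < eps:
            return p, q

print(prime_ratio_approx(3.14159265, 1e-4))   # some pair p, q with p/q close to pi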
To end, I wanted to note a related, cool result. If $P$ is the set of primes, then $\sin P$ is dense in $[-1,1]$. This is less trivial, but it follows from a result of Vinogradov saying that the sequence of prime multiples of a fixed irrational number is equidistributed modulo 1. And this is not at all immediately obvious to me.
|
Hi.
On Wednesday at 9:05 CET / 8:05 UCT you can participate in the GYM version of the finals of 2017-2018 Russian Olympiad in Informatics (ROI), Day 1. And on Thursday there will be day 2, same time.
5 hours, 4 problems, IOI-style scoring with subtasks. Statements will be available in three languages: English, Russian, Polish.
We wanted to use those problems in a camp in Warsaw so we had to import the problems to some system anyway. Then why not Polygon+Codeforces and thus allowing everybody to participate? Huge thanks to MikeMirzayanov for helping me with using GYM.
I will post a very short editorial in English here, after the contest.
Extraction of radium
For each column find the maximum value and for each row find the maximum value (and store their positions). This way we know the initial answer. When an update comes, check if it's greater than the previous maximum in this column/row. If yes and that previous maximum was previously suitable for extraction, mark it as non-suitable and decrease the answer by $$$1$$$. And increase by $$$1$$$ if the changed unit square is maximum in its row and column.
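A Python sketch of this (my reading of the editorial; I am assuming updates only increase values, so a row/column maximum can only be displaced by the updated cell itself):

def radium(a, updates):
    n, m = len(a), len(a[0])
    row_max = [max(range(m), key=lambda j: a[i][j]) for i in range(n)]   # column of each row maximum
    col_max = [max(range(n), key=lambda i: a[i][j]) for j in range(m)]   # row of each column maximum
    suitable = [[row_max[i] == j and col_max[j] == i for j in range(m)] for i in range(n)]
    answer = sum(map(sum, suitable))
    out = []
    for i, j, v in updates:
        a[i][j] = v
        if v >= a[i][row_max[i]]:                 # (i, j) becomes the row maximum
            pi, pj = i, row_max[i]
            if suitable[pi][pj]:
                suitable[pi][pj] = False
                answer -= 1
            row_max[i] = j
        if v >= a[col_max[j]][j]:                 # (i, j) becomes the column maximum
            pi, pj = col_max[j], j
            if suitable[pi][pj]:
                suitable[pi][pj] = False
                answer -= 1
            col_max[j] = i
        if row_max[i] == j and col_max[j] == i:   # suitable for extraction again
            suitable[i][j] = True
            answer += 1
        out.append(answer)
    return out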
Innophone
Sort buyers by $$$a_i$$$. For a fixed price $$$A$$$ of Innophone Plus, we know how many buyers buy this version, and we can sort the remaining buyers by $$$b_i$$$ and then choose such price $$$B$$$ that $$$countSmaller \cdot B$$$ is maximized, where $$$countSmaller$$$ is the number of remaining buyers with $$$b_i \le B$$$. If we sorted them by $$$b_i$$$, then $$$countSmaller$$$ is just the index in this sorted array.
We should consider $$$A$$$ increasingly and add new values $$$b_i$$$ to a structure that will tell us the maximum value of $$$index \cdot element$$$ in a sorted order. We can do that with square root decomposition. Before anything happens, let's gather all $$$b_i$$$, sort them, and split into buckets of size $$$\sqrt N$$$. When a new $$$b_i$$$ appears, we add it to the corresponding bucket and we rebuild that bucket in $$$O(\sqrt N \cdot \log N)$$$, or without the $$$\log$$$ if you avoid sorting every time. When we want to find the maximum value of $$$countSmaller \cdot element$$$ in some bucket ($$$countSmaller$$$ is the number of smaller elements in the whole structure), let $$$x$$$ denote the number of elements in previous buckets and let $$$i$$$ denote the index of some element in this bucket. Then we want to maximize $$$(i + x) \cdot value = x \cdot value + i \cdot value = x \cdot P + Q$$$ for some constants $$$P$$$ and $$$Q$$$. This means we need a convex hull of lines for each bucket, and then we can binary search the maximum for some $$$x$$$ (the number of elements in previous buckets) in $$$O(\log n)$$$.
The total complexity is $$$O(n \sqrt n \log n)$$$. You can improve it to $$$O(n \sqrt n)$$$ if you sort by $$$b_i$$$ only once and then use it instead of resorting a bucket each time something new appears, and if you store a pointer to optimal element in each bucket instead of binary searching (a pointer can only increase because $$$x$$$ can only increase).
Quantum teleportation
Let's say you are in the cell $$$(i, j)$$$. Find all processors $$$(i2, j2)$$$ such that $$$i < i2$$$ and $$$j < j2$$$. Find the minimum distance to one of them. Lemma: among those found processors, it's optimal to jump to one that has that minimum distance. Otherwise, it would be no worse to make two shorter jumps.
For a subtask with $$$x_i \neq x_j$$$ and $$$y_i \neq y_j$$$, this gives us $$$O(1)$$$ possible jumps from each cell. You can run Dijkstra in $$$O(K \cdot A \cdot \log K)$$$, where $$$A$$$ is the complexity of big integers.
To make it work for the last subtask, create a fake vertex in the graph for the L-shape to which you can jump. Once you get there in the priority queue of Dijkstra, find all not-yet-processed processors in that shape. For that, you can use sets of positions of not-yet-processed processors for each row and each column.
Big integers can be done in a standard way to get complexity $$$O(\frac{N}{32})$$$ per operation, or you can use the lemma that the optimal path will have at most $$$O(N^{2/3})$$$ bits set to $$$1$$$, so you can store a sorted list of those bits (powers of two), and you can do an operation in $$$O(N^{2/3})$$$
Viruses
$$$p = 1$$$ is easy, just check if the virus is completely safe in its initial cell. Let's focus on $$$p = 2$$$.
For a pair $$$(i, j)$$$, we want to check if the virus $$$i$$$ can get to the cell $$$j$$$ and be safe there, that is all viruses that would overpower him there can be killed.
Lemma: we can consider only cases where each cell is attacked at most once (except for the cell $$$j$$$ with virus $$$i$$$). Otherwise, if there are two attacks to the same cell, we can ignore the first one (if this virus should go somewhere else, he can do it from his initial cell).
For a pair $$$(i, j)$$$ (as defined earlier), the viruses are split into dangerous viruses that we want to get rid of, and the remaining viruses. We must check if "the remaining viruses" can take all the cells with the dangerous viruses. This can be done easily in $$$O(n^2)$$$. The total complexity would be $$$O(n^4)$$$.
To make it $$$O(n^3)$$$, let's fix the cell $$$j$$$ and then consider viruses in an order sorted decreasingly by strength in the cell $$$j$$$. Add safe viruses one by one. For each of the remaining $$$N-1$$$ cells, maintain the position of the strongest safe virus. When a new safe virus appears (because we moved to the left by $$$1$$$ in the permutation), update the "positions of strongest safe virus" for all cells. We want to check if the currently considered virus in the cell $$$j$$$ is safe, that is he can get there in the first move, and all stronger viruses (dangerous viruses) can be killed by other (safe) viruses. For each dangerous virus, we check if he is weaker than "the strongest safe virus" in his initial cell; otherwise, that cell will always have a virus that is dangerous, and thus the current pair $$$(i, j)$$$ is bad.
Or you can use bitsets. For a virus in its initial cell, store a bitset of viruses that can kill him. You can get $$$O(n^4/32)$$$ or even $$$O(n^3/32)$$$ this way.
Second day tomorrow, same time.
Thank you for participation.
Addition without carry
Let's describe a polynomial solution first.
Let's check if the answer fits in $$$k$$$ bits, that is, check if the answer is smaller than $$$2^k$$$. Go from bit $$$k-1$$$. If there are two numbers with 1 on that position, we finish and the answer is NO. If there are zero numbers with 1 on that position, choose the maximum of remaining numbers $$$a_i$$$ and increase it to $$$2^{k-1}$$$ (digit 1 here and 0's to the right).
This way we can go from higher bits, check if we can have 0 here in the answer (by running the check described above for remaining $$$k-1$$$ positions), and put 1 here if the check fails.
To make the above solution quite fast, use hashing or some string algorithm to be able to quickly compare suffixes (to be able to find the maximum $$$a_i$$$).
The idea for the intended solution is the following. Sort numbers decreasingly. Let $$$B$$$ be the maximum value of $$$L_i + i - 1$$$ in this array ($$$L_i$$$ is the length of $$$a_i$$$). The answer length is then $$$B$$$ or $$$B + 1$$$ (the latter one comes from changing each number to the next available power of two). I would say that this idea comes from understanding the slow approach described above.
To check if $$$B$$$ is possible, we only care about critical numbers: those that $$$L_i + i - 1 = B$$$. We must quickly go to the next (biggest) critical number, say that its first digit 1 must stay untouched, and put the remaining suffix (without this 1) back in the array. That might change the ordering and thus new critical numbers will appear. We need segment tree of sorted suffixes to quickly go to the next critical numbers. Segment tree allows us to find a number with $$$L_i + i - 1 = B$$$.
Decryption
One of possible solutions is greedy. Let $$$i$$$ be the position of the first element smaller than $$$a_0$$$. Find the position $$$j$$$ of the maximum element in $$$[0, i-1]$$$. Let the first segment be $$$[0, j]$$$. Repeat the process starting from $$$j+1$$$.
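A direct transcription of this greedy in Python (mine; as written it's $O(n^2)$ in the worst case, which can be improved with standard range-maximum machinery):

def split_segments(a):
    n = len(a)
    start, segments = 0, []
    while start < n:
        i = start + 1
        while i < n and a[i] >= a[start]:              # first element smaller than a[start]
            i += 1
        j = max(range(start, i), key=lambda k: a[k])   # maximum element on [start, i-1]
        segments.append((start, j))
        start = j + 1
    return segments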
Quick sort
To get 80 points, we can insert each element to its position in $$$log(n)$$$ operations. To move the number $$$n$$$ to the end of the array, let's apply $$$S(i, n)$$$ where $$$i$$$ is the current position of this number. This operation moves the number to the middle of segment $$$[i, n]$$$. This way, in one operation we can halve the distance from a number to its position at the end of the array. So, in $$$log(n)$$$ operations we move $$$n$$$ to the end, and then we forget about that last element and we move $$$n-1$$$, and so on.
The last subtask is hard and it's worth only 20 points, so I think it's optimal to skip it during a contest. But let's see a solution.
Let's simulate the process from the end (then print operations in the reversed order). We start from the sorted permutation $$$[1, 2, \ldots, n]$$$ and we want to get the permutation from the input. The reversed operation changes a segment $$$[a, b, c, d, e, f]$$$ into $$$[d, a, e, b, f, c]$$$. Let's focus on element $$$c$$$ here. In one move, we can double the index of $$$c$$$ in the whole array, if we are able to choose a long segment with $$$c$$$ being the last element in the left half of the segment. If the index of $$$c$$$ is at least $$$n/2$$$, then we can get $$$c$$$ to the end of the array in one move! It still takes $$$log(n)$$$ operations to move an element from the beginning to the end, but most positions require only $$$O(1)$$$ operations. The array of costs is something like $$$[4, 3, 3, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1]$$$, while previously it was $$$[4, 4, 4, 4, 4, 4, 4, 4, 3, 3, 3, 3, 2, 2, 1]$$$. The $$$i$$$-th element is the number of operations required to move an element from position $$$i$$$ to position $$$n$$$. The sum of costs is $$$2 \cdot n$$$.
The remaining issue is that maybe we first move one element from position $$$1$$$ to position $$$n$$$, then the next needed element is at the position $$$1$$$ and we must move it to $$$n-1$$$, and so on – we move each element from position $$$1$$$, so the number of operations is $$$n \cdot \log(n)$$$. To avoid that, we must shuffle the array first. Either apply several hundred random operations, or generate a random permutation $$$P$$$ and then first run the algorithm to change the initial permutation into $$$P$$$ and then $$$P$$$ into the sorted permutation. In fact, we must do the opposite because we simulate the process from the end. Anyway, this way organizers can't create a special malicious test against your solution.
Robomarathon
The case $$$p = 1$$$ is easier. To minimize the place of robot $$$i$$$, we should put one active device there and nowhere else. Any other scenario gives every other robot a starting time (relative to robot $$$i$$$) that is no worse than in this scenario. We need to count robots that will be faster than robot $$$i$$$. Let's focus on counting those with index $$$j < i$$$ and then we can reverse the sequence and repeat the algorithm.
Robot $$$i$$$ will lose against all robots $$$j < i$$$ that $$$a_j - j < a_i - i$$$. To count such indices $$$j$$$, we can change each value $$$a_x$$$ to $$$(a_x - x)$$$ and then we have a sequence in which: for each element, we want to compute the number of previous elements that are greater. This can be done by renumerating values to range $$$[1, n]$$$ and then doing a segment tree, or by sorting values and then considering them from small to big, marking their positions in the array and using segment tree to know how many marked positions there are on the left.
Now we move to the case $$$p = 2$$$. To maximize the place of robot $$$i$$$, the intuition is we should put devices far away from $$$i$$$. If we put a device somewhere, it's usually better to move it away from robot $$$i$$$, so maybe we should put it on track $$$1$$$ or on track $$$n$$$. If we put a device in one of those two tracks, it doesn't make sense to put devices anywhere else. In the following scheme, 'x' denotes devices. The second way is only worse for us, because it improves the starting time of some robots by $$$1$$$, including robot $$$i$$$, so the place of robot $$$i$$$ won't get worse.
x _ _ _ _ _ i _ _ _ <--- version A
x x _ _ _ _ i _ _ _
If we put one device on track $$$1$$$ and one on track $$$n$$$, the starting time of $$$i$$$ will be $$$min(i-1, n-i)$$$. We don't affect it by putting more devices near the further of two endpoints: $$$1$$$ or $$$n$$$. On the scheme below, the second way to put devices only helps other robots, while not improving the situation of robot $$$i$$$.
x _ _ i _ _ _ _ _ x
x _ _ i _ _ x x x x <--- version B
It doesn't make sense to put devices even closer to $$$i$$$ because it wouldn't improve the situation of any robot relatively to robot $$$i$$$ (the difference between starting times).
Versions A and B are the only strategies we need to consider, plus version A has two possibilities (putting the only device on track $$$1$$$ or track $$$n$$$). We need to implement each of them to compute a possible place of every $$$i$$$ and then, for every $$$i$$$, print the maximum of three computed possible places.
In version A with the device on track 1, robot $$$i$$$ will be beaten by all $$$j$$$ that $$$a_j + j < a_i + i$$$. This is even easier than the case $$$p = 1$$$.
In version B, for each $$$i$$$, we need to answer several queries like
"give me the number of indices $$$j$$$ in some interval with value $$$a_j - j$$$ smaller than $$$X$$$". Interval $$$[L, R]$$$ can be replaced with $$$[1, R]$$$ minus $$$[1, L-1]$$$. Now we need to answer only prefixes, so we go from left to right, extending the prefix one by one, and we need three data structures — one for $$$a_j - j$$$, one for $$$a_j + j$$$, one for $$$a_j$$$, to answer queries "count the number of values smaller than given $$$X$$$". You can actually use this for version A too.
If possible in your solution, use BIT/Fenwick instead of a segment tree to get the full score. You might get ~90 with slower implementation.
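For completeness, a generic Fenwick/BIT sketch (mine) for the "count how many inserted values are smaller than X" queries mentioned above, assuming the values have been renumbered into 1..n:

class Fenwick:
    def __init__(self, n):
        self.n = n
        self.t = [0] * (n + 1)

    def add(self, i):              # insert value i (1-based)
        while i <= self.n:
            self.t[i] += 1
            i += i & (-i)

    def count_smaller(self, x):    # how many inserted values are < x
        x -= 1
        s = 0
        while x > 0:
            s += self.t[x]
            x -= x & (-x)
        return s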
|
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2015-07-10)
The transverse momentum (pT) dependence of the nuclear modification factor RAA and the centrality dependence of the average transverse momentum 〈pT〉 for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
|
That was an excellent post and qualifies as a treasure to be found on this site!
wtf wrote:
When infinities arise in physics equations, it doesn't mean there's a physical infinity. It means that our physics has broken down. Our equations don't apply.

I totally get that. In fact even our friend Max gets that. http://blogs.discovermagazine.com/crux/ ... g-physics/
Thanks for the link and I would have showcased it all on its own had I seen it first
The point I am making is something different. I am pointing out that:
All of our modern theories of physics rely ultimately on highly abstract infinitary mathematics. That doesn't mean that they necessarily must; only that, so far, that's how the history has worked out.
I see what you mean, but as Max pointed out when describing air as seeming continuous while actually being discrete, it's easier to model a continuum than a bazillion molecules, each with functional probabilistic movements of their own. Essentially, it's taking an average and it turns out that it's pretty accurate.
But what I was saying previously is that we work with the presumed ramifications of infinity, "as if" this or that were infinite, without actually ever using infinity itself. For instance, for y = 1/x, as x approaches infinity, y approaches 0, but we don't actually USE infinity in any calculations; we extrapolate.
There is at the moment no credible alternative. There are attempts to build physics on constructive foundations (there are infinite objects but they can be constructed by algorithms). But not finitary principles, because to do physics you need the real numbers; and to construct the real numbers we need infinite sets.
Hilbert pointed out there is a difference between boundless and infinite. For instance space is boundless as far as we can tell, but it isn't infinite in size and never will be until eternity arrives. Why can't we use the boundless assumption instead of full-blown infinity?
1) The rigorization of Newton's calculus culminated with infinitary set theory.
Newton discovered his theory of gravity using calculus, which he invented for that purpose.
I didn't know he developed calculus specifically to investigate gravity. Cool! It does make sense now that you mention it.
However, it's well-known that Newton's formulation of calculus made no logical sense at all. If \(\Delta y\) and \(\Delta x\) are nonzero, then \(\frac{\Delta y}{\Delta x}\) isn't the derivative. And if they're both zero, then the expression makes no mathematical sense! But if we pretend that it does, then we can write down a simple law that explains apples falling to earth and the planets endlessly falling around the sun.
I'm going to need some help with this one. If dx = 0, then it contains no information about the change in x, so how can anything result from it? I've always taken dx to mean a differential that is smaller than can be discerned, but still able to convey information. It seems to me that calculus couldn't work if it were based on division by zero, and that if it works, it must not be. What is it I am failing to see? I mean, it's not an issue of 0/0 making no mathematical sense, it's a philosophical issue of the nonexistence of significance because there is nothing in zero to be significant.
2) Einstein's general relativity uses Riemann's differential geometry.
In the 1840's Bernhard Riemann developed a general theory of surfaces that could be Euclidean or very far from Euclidean. As long as they were "locally" Euclidean. Like spheres, and torii, and far weirder non-visualizable shapes. Riemann showed how to do calculus on those surfaces. 60 years later, Einstein had these crazy ideas about the nature of the universe, and the mathematician Minkowski saw that Einstein's ideas made the most mathematical sense in Riemann's framework. This is all abstract infinitary mathematics.
Isn't this the same problem as previous? dx=0?
3) Fourier series link the physics of heat to the physics of the Internet; via infinite trigonometric series.
In 1807 Joseph Fourier analyzed the mathematics of the distribution of heat through an iron bar. He discovered that any continuous function can be expressed as an infinite trigonometric series, which looks like this: $$f(x) = \sum_{n=0}^\infty a_n \cos(nx) + \sum_{n=1}^\infty b_n \sin(nx)$$ I only posted that because if you managed to survive high school trigonometry, it's not that hard to unpack. You're composing any motion into a sum of periodic sine and cosine waves, one wave for each whole number frequency. And this is an infinite series of real numbers, which we cannot make sense of without using infinitary math.
I can't make sense of it WITH infinitary math lol! What's the cosine of infinity? What's the infinite-th 'a'?
4) Quantum theory is functional analysis.

If you took linear algebra, then functional analysis can be thought of as infinite-dimensional linear algebra combined with calculus. Functional analysis studies spaces whose points are actually functions; so you can apply geometric ideas like length and angle to wild collections of functions. In that sense functional analysis actually generalizes Fourier series.

Quantum mechanics is expressed in the mathematical framework of functional analysis. QM takes place in an infinite-dimensional Hilbert space. To explain Hilbert space requires a deep dive into modern infinitary math. In particular, Hilbert space is complete, meaning that it has no holes in it. It's like the real numbers and not like the rational numbers.
QM rests on the mathematics of uncountable sets, in an essential way.
Well, thanks to Hilbert, I've already conceded that the boundless is not the same as the infinite and if it were true that QM required infinity, then no machine nor human mind could model it. It simply must be true that open-ended finites are actually employed and underpin QM rather than true infinite spaces.
Like Max said, "Not only do we lack evidence for the infinite but we don’t need the infinite to do physics. Our best computer simulations, accurately describing everything from the formation of galaxies to tomorrow’s weather to the masses of elementary particles, use only finite computer resources by treating everything as finite. So if we can do without infinity to figure out what happens next, surely nature can, too—in a way that’s more deep and elegant than the hacks we use for our computer simulations."
We can *claim* physics is based on infinity, but I think it's more accurate to say *pretend* or *fool ourselves* into thinking such.
Max continued with, "Our challenge as physicists is to discover this elegant way and the infinity-free equations describing it—the true laws of physics. To start this search in earnest, we need to question infinity. I’m betting that we also need to let go of it."
He said, "let go of it" like we're clinging to it for some reason external to what is true. I think the reason is to be rid of god, but that's my personal opinion. Because if we can't have infinite time, then there must be a creator and yada yada. So if we cling to infinity, then we don't need the creator. Hence why Craig quotes Hilbert because his first order of business is to dispel infinity and substitute god.
I applaud your effort, I really do, and I've learned a lot of history because of it, but I still cannot concede that infinity underpins anything and I'd be lying if I said I could see it. I'm not being stubborn and feel like I'm walking on eggshells being as amicable and conciliatory as possible in trying not to offend and I'm certainly ready to say "Ooooohhh... I see now", but I just don't see it.
ps -- There's our buddy Hilbert again. He did many great things. William Lane Craig misuses and abuses Hilbert's popularized example of the infinite hotel to make disingenuous points about theology and in particular to argue for the existence of God. That's what I've got against Craig.
Craig is no friend of mine and I was simply listening to a debate on youtube (I often let youtube autoplay like a radio) when I heard him quote Hilbert, so I dug into it and posted what I found. I'm not endorsing Craig lol
5) Cantor was led to set theory from Fourier series.
In every online overview of Georg Cantor's magnificent creation of set theory, nobody ever mentions how he came upon his ideas. It's as if he woke up one day and decided to revolutionize the foundations of math and piss off his teacher and mentor Kronecker. Nothing could be further from the truth. Cantor was in fact studying Fourier's trigonometric series! One of the questions of that era was whether a given function could have more than one distinct Fourier series. To investigate this problem, Cantor had to consider the various types of sets of points on which two series could agree; or equivalently, the various sets of points on which a trigonometric series could be zero. He was thereby led to the problem of classifying various infinite sets of real numbers; and that led him to the discovery of transfinite ordinal and cardinal numbers. (Ordinals are about order in the same way that cardinals are about quantity).
I still can't understand how one infinity can be bigger than another since, to be so, the smaller infinity would need to have limits which would then make it not infinity.
In other words, and this is a fact that you probably will not find stated as clearly as I'm stating it here:
If you begin by studying the flow of heat through an iron rod; you will inexorably discover transfinite set theory.
Right, because of what Max said about the continuum model vs the actual discrete. Heat flow is actually IR light flow which is radiation from one molecule to another: a charged particle vibrates and vibrations include accelerations which cause EM radiation that emanates out in all directions; then the EM wave encounters another charged particle which causes vibration and the cycle continues until all the energy is radiated out. It's a discrete process from molecule to molecule, but is modeled as continuous for simplicity's sake.
I've long taken issue with the 3 modes of heat transmission (conduction, convection, radiation) because there is only radiation. Atoms do not touch, so they can't conduct, but the van der Waals force simply transfers the vibrations more quickly when atoms are sufficiently close. Convection is simply vibrating atoms in linear motion that are radiating IR light. I have many issues with physics and have often described it as more of an art than a science (hence why it's so difficult). I mean, there are pages and pages on the internet devoted to simply trying to define heat: https://www.quora.com/What-is-heat-1 https://www.quora.com/What-is-meant-by-heat https://www.quora.com/What-is-heat-in-physics https://www.quora.com/What-is-the-definition-of-heat https://www.quora.com/What-distinguishes-work-and-heat
Physics is a mess. What gamma rays are, depends who you ask. They could be high-frequency light or any radiation of any frequency that originated from a nucleus. But I'm digressing....
I do not know what that means in the ultimate scheme of things. But I submit that even the most ardent finitist must at least give consideration to this historical reality.
It just means we're using averages rather than discrete actualities and it's close enough.
I hope I've been able to explain why I completely agree with your point that infinities in physical equations don't imply the actual existence of infinities. Yet at the same time, I am pointing out that our best THEORIES of physics are invariably founded on highly infinitary math. As to what that means ... for my own part, I can't help but feel that mathematical infinity is telling us something about the world. We just don't know yet what that is.
I think it means there are really no separate things and when an aspect of the universe attempts to inspect itself in order to find its fundamentals or universal truths, it will find infinity like a camera looking at its own monitor. Infinity is evidence of the continuity of the singular universe rather than an existing truly boundless thing. Infinity simply means you're looking at yourself.
Anyway, great post! Please don't be mad. Everyone here values your presence and is intimidated by your obvious mathematical prowess.
Don't take my pushback too seriously
I'd prefer if we could collaborate as colleagues rather than competing.
|
This is only a comment about classes of rings in which
no counter-examples can be found.
Since a one-dimensional GCD domain is a Bézout domain [1, Corollary 3.9], GCD domains will not provide any counter-example.
We will show that Dedekind domains will not provide any counter-example either. To do so, we will use the following well-known lemma (see e.g., Matsumura's "Commutative Ring Theory") and two subsequent claims.
Lemma. Let $R$ be a commutative ring with identity. Let $\mathfrak{a}, \mathfrak{b}, \mathfrak{c}$ be ideals of $R$. Assume that $\mathfrak{c}$ is co-maximal with both $\mathfrak{a}$ and $\mathfrak{b}$, i.e., $\mathfrak{a} + \mathfrak{c} = \mathfrak{b} + \mathfrak{c} = R$. Then $\mathfrak{c}$ is co-maximal with $\mathfrak{ab}$.
Proof. $R = (\mathfrak{a} + \mathfrak{c})(\mathfrak{b} + \mathfrak{c}) \subseteq \mathfrak{ab} + \mathfrak{c} \subseteq R$.
The following claim is a general fact about commutative domains.
Claim 1. Let $R$ be a commutative domain with identity. Let $M(R)$ be the submonoid of $R \setminus \{0\}$ generated by the units of $R$ together with the prime elements $p$ such that $Rp$ is a maximal ideal of $R$. Let $S(R)$ be the subset of $R \setminus \{0\}$ consisting of the non-zero elements $a$ such that $Ra + Rb$ is principal for every $b \in R$. Then $M(R) \subseteq S(R)$.
Proof. Let $a = up_1^{\alpha_1} \cdots p_n^{\alpha_n} \in M(R)$ where $u$ is a unit of $R$ and where the elements $p_i$ are distinct prime elements such that $Rp_i$ is a maximal ideal for every $i$. Let us show by induction on $s = \alpha_1 + \cdots + \alpha_n$ that $a$ belongs to $S(R)$. If $s = 0$, it is immediate. Let us suppose that $s > 0$ and let $b \in R$. We can certainly assume that $b \neq 0$. Let $\beta_i$ be the largest integer such that $p_i^{\beta_i}$ divides both $a$ and $b$ and set $d = p_1^{\beta_1} \cdots p_n^{\beta_n}$. If $d$ is not a unit, then the induction hypothesis applies to $a/d$ and yields a Bézout relation with $b/d$ so that $a \in S(R)$. Otherwise, the element $b$ cannot be divided by any of the $p_i$. Thus $Rb$ is co-maximal with $Rp_i$ for every $i$. By the above lemma, we deduce that $Rb$ is co-maximal with $Ra$, and hence $a \in S(R)$.
Our last claim establishes that no counter-example to OP's condition can be found in a Dedekind domain.
Claim 2. Let $R$ be a Dedekind domain and let $a \in R$ be such that $Ra \cap Rb$ is principal for every $b \in R$. Then $a \in M(R)$.
Proof. Let $a$ be a non-zero element in $R \setminus M(R)$. Since $a \notin M(R)$, there is a maximal ideal $\mathfrak{m}$ appearing in the decomposition of $Ra$ which is not principal. By the Chinese Remainder Theorem, we can find $b \in R$ such that $b \in \mathfrak{m} \setminus \mathfrak{m}^2$ and $b$ doesn't belong to any of the other maximal ideals containing $a$. In the group of fractional ideals of $R$, we have $Ra \cap Rb = (Rab)\mathfrak{m}^{-1}$, so that $Ra \cap Rb$ cannot be principal.
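For a concrete illustration of Claim 2 (this example is an addition, not part of the original argument): in $R = \mathbb{Z}[\sqrt{-5}]$ take $a = 2$, so that $Ra = \mathfrak{m}^2$ with $\mathfrak{m} = (2, 1+\sqrt{-5})$ not principal. Choosing $b = 1+\sqrt{-5} \in \mathfrak{m} \setminus \mathfrak{m}^2$, one has $Rb = \mathfrak{m}\mathfrak{q}$ with $\mathfrak{q} = (3, 1+\sqrt{-5})$, hence $Ra \cap Rb = \mathfrak{m}^2\mathfrak{q} = (Rab)\mathfrak{m}^{-1}$. Its ideal class is that of $\mathfrak{q}$, which is non-trivial, so $Ra \cap Rb$ is indeed not principal, as the proof predicts.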
[1] P. Sheldon, "Prime ideals in GCD domains".
|
The usual simple algorithm for finding the median element in an array $A$ of $n$ numbers is:
Sample $n^{3/4}$ elements from $A$ with replacement into $B$. Sort $B$ and find the elements $l$ and $r$ of $B$ whose ranks are $|B|/2 \pm \sqrt{n}$. Check that $l$ and $r$ are on opposite sides of the median of $A$ and that there are at most $C n^{3/4}$ elements of $A$ between $l$ and $r$, for some appropriate constant $C > 0$. Fail if this doesn't happen. Otherwise, find the median by sorting the elements of $A$ between $l$ and $r$.
It's not hard to see that this runs in linear time and that it succeeds with high probability. (All the bad events are large deviations away from the expectation of a binomial.)
An alternate algorithm for the same problem, which is more natural to teach to students who have seen quick sort is the one described here: Randomized Selection
It is also easy to see that this one has linear expected running time: say that a "round" is a sequence of recursive calls that ends when one gives a 1/4-3/4 split, and then observe that the expected length of a round is at most 2. (In the first draw of a round, the probability of getting a good split is 1/2, and it only increases afterwards with the way the algorithm is described, so the round length is dominated by a geometric random variable.)
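For reference, here is a minimal Python sketch of the randomized selection routine under discussion (my own code, using the standard three-way partition):

```python
import random

def quickselect(A, k):
    """Return the k-th smallest element of A (k is 0-indexed)."""
    A = list(A)
    while True:
        pivot = random.choice(A)
        lows = [x for x in A if x < pivot]
        highs = [x for x in A if x > pivot]
        pivots = [x for x in A if x == pivot]
        if k < len(lows):
            A = lows                       # answer lies among the smaller elements
        elif k < len(lows) + len(pivots):
            return pivot                   # answer equals the pivot
        else:
            k -= len(lows) + len(pivots)   # answer lies among the larger elements
            A = highs

data = [5, 1, 9, 3, 7]
print(quickselect(data, len(data) // 2))   # 5, the median
```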
So now the question:
Is it possible to show that randomized selection runs in linear time with high probability?
We have $O(\log n)$ rounds, and each round has length at least $k$ with probability at most $2^{-k+1}$, so a union bound gives that the running time is $O(n\log\log n)$ with probability $1-1/O(\log n)$.
This is kind of unsatisfying, but is it actually the truth?
|
Soft QCD and Central Exclusive Production at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The LHCb detector, owing to its unique acceptance coverage $(2 < \eta < 5)$ and a precise track and vertex reconstruction, is a universal tool allowing the study of various aspects of electroweak and QCD processes, such as particle correlations or Central Exclusive Production. The recent results on the measurement of the inelastic cross section at $ \sqrt s = 13 \ \rm{TeV}$ as well as the Bose-Einstein correlations of same-sign pions and kinematic correlations for pairs of beauty hadrons performed using large samples of proton-proton collision data accumulated with the LHCb detector at $\sqrt s = 7\ \rm{and} \ 8 \ \rm{TeV}$, are summarized in the present proceedings, together with the studies of Central Exclusive Production at $ \sqrt s = 13 \ \rm{TeV}$ exploiting new forward shower counters installed upstream and downstream of the LHCb detector. [...] LHCb-PROC-2019-008; CERN-LHCb-PROC-2019-008.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019
LHCb Upgrades / Steinkamp, Olaf (Universitaet Zuerich (CH)) During the LHC long shutdown 2, in 2019/2020, the LHCb collaboration is going to perform a major upgrade of the experiment. The upgraded detector is designed to operate at a five times higher instantaneous luminosity than in Run II and can be read out at the full bunch-crossing frequency of the LHC, abolishing the need for a hardware trigger [...] LHCb-PROC-2019-007; CERN-LHCb-PROC-2019-007.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
Tests of Lepton Flavour Universality at LHCb / Mueller, Katharina (Universitaet Zuerich (CH)) In the Standard Model of particle physics the three charged leptons are identical copies of each other, apart from mass differences, and the electroweak coupling of the gauge bosons to leptons is independent of the lepton flavour. This prediction is called lepton flavour universality (LFU) and is well tested. [...] LHCb-PROC-2019-006; CERN-LHCb-PROC-2019-006.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
XYZ states at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The latest years have observed a resurrection of interest in searches for exotic states motivated by precision spectroscopy studies of beauty and charm hadrons providing the observation of several exotic states. The latest results on spectroscopy of exotic hadrons are reviewed, using the proton-proton collision data collected by the LHCb experiment. [...] LHCb-PROC-2019-004; CERN-LHCb-PROC-2019-004.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : 15th International Workshop on Meson Physics, Kraków, Poland, 7 - 12 Jun 2018
Mixing and indirect $CP$ violation in two-body Charm decays at LHCb / Pajero, Tommaso (Universita & INFN Pisa (IT)) The copious number of $D^0$ decays collected by the LHCb experiment during 2011--2016 allows the test of the violation of the $CP$ symmetry in the decay of charm quarks with unprecedented precision, approaching for the first time the expectations of the Standard Model. We present the latest measurements of LHCb of mixing and indirect $CP$ violation in the decay of $D^0$ mesons into two charged hadrons [...] LHCb-PROC-2019-003; CERN-LHCb-PROC-2019-003.- Geneva : CERN, 2019 - 10. Fulltext: PDF; In : 10th International Workshop on the CKM Unitarity Triangle, Heidelberg, Germany, 17 - 21 Sep 2018
Experimental status of LNU in B decays in LHCb / Benson, Sean (Nikhef National institute for subatomic physics (NL)) In the Standard Model, the three charged leptons are identical copies of each other, apart from mass differences. Experimental tests of this feature in semileptonic decays of b-hadrons are highly sensitive to New Physics particles which preferentially couple to the 2nd and 3rd generations of leptons. [...] LHCb-PROC-2019-002; CERN-LHCb-PROC-2019-002.- Geneva : CERN, 2019 - 7. Fulltext: PDF; In : The 15th International Workshop on Tau Lepton Physics, Amsterdam, Netherlands, 24 - 28 Sep 2018
Simultaneous usage of the LHCb HLT farm for Online and Offline processing workflows LHCb is one of the 4 LHC experiments and continues to revolutionise data acquisition and analysis techniques. Already two years ago the concepts of “online” and “offline” analysis were unified: the calibration and alignment processes take place automatically in real time and are used in the triggering process such that Online data are immediately available offline for physics analysis (Turbo analysis), the computing capacity of the HLT farm has been used simultaneously for different workflows : synchronous first level trigger, asynchronous second level trigger, and Monte-Carlo simulation. [...] LHCb-PROC-2018-031; CERN-LHCb-PROC-2018-031.- Geneva : CERN, 2018 - 7. Fulltext: PDF; In : 23rd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2018, Sofia, Bulgaria, 9 - 13 Jul 2018
The Timepix3 Telescope and Sensor R&D for the LHCb VELO Upgrade / Dall'Occo, Elena (Nikhef National institute for subatomic physics (NL)) The VErtex LOcator (VELO) of the LHCb detector is going to be replaced in the context of a major upgrade of the experiment planned for 2019-2020. The upgraded VELO is a silicon pixel detector, designed to withstand a radiation dose up to $8 \times 10^{15}~1~\text{MeV}~n_{eq}~\text{cm}^{-2}$, with the additional challenge of a highly non-uniform radiation exposure. [...] LHCb-PROC-2018-030; CERN-LHCb-PROC-2018-030.- Geneva : CERN, 2018 - 8.
|
Convert from Decimal Notation to Scientific Notation
Remember working with place value for whole numbers and decimals? Our number system is based on powers of 10. We use tens, hundreds, thousands, and so on. Our decimal numbers are also based on powers of tens—tenths, hundredths, thousandths, and so on.
Consider the numbers 4000 and 0.004. We know that 4000 means 4 × 1000 and 0.004 means 4 × \(\frac{1}{1000}\). If we write the 1000 as a power of ten in exponential form, we can rewrite these numbers in this way:
$$\begin{split} &4000 \qquad \qquad 0.004 \\ &4 \times 1000 \qquad 4 \times \frac{1}{1000} \\ &4 \times 10^{3} \qquad \; \; 4 \times \frac{1}{10^{3}} \\ &\qquad \qquad \quad \; \; \; 4 \times 10^{-3} \end{split}$$
When a number is written as a product of two numbers, where the first factor is a number greater than or equal to one but less than 10, and the second factor is a power of 10 written in exponential form, it is said to be in
scientific notation.
Definition: Scientific Notation
A number is expressed in scientific notation when it is of the form a × 10^n, where 1 ≤ a < 10 and n is an integer.
It is customary in scientific notation to use × as the multiplication sign, even though we avoid using this sign elsewhere in algebra.
Scientific notation is a useful way of writing very large or very small numbers. It is used often in the sciences to make calculations easier.
If we look at what happened to the decimal point, we can see a method to easily convert from decimal notation to scientific notation.
In both cases, the decimal was moved 3 places to get the first factor, 4, by itself.
The power of 10 is positive when the number is larger than 1: 4000 = 4 × 10^3. The power of 10 is negative when the number is between 0 and 1: 0.004 = 4 × 10^−3.
Example 10.74:
Write 37,000 in scientific notation.
Solution
Step 1: Move the decimal point so that the first factor is greater than or equal to 1 but less than 10. 3.7000
Step 2: Count the number of decimal places, n, that the decimal point was moved. 4 places
Step 3: Write the number as a product with a power of 10. Since the original number is greater than 1, the power of 10 is positive: 3.7 × 10^4
Step 4: Check. 10^4 is 10,000 and 10,000 times 3.7 will be 37,000. 37,000 = 3.7 × 10^4
Exercise 10.147:
Write in scientific notation: 96,000.
Exercise 10.148:
Write in scientific notation: 48,300.
HOW TO: CONVERT FROM DECIMAL NOTATION TO SCIENTIFIC NOTATION
Step 1. Move the decimal point so that the first factor is greater than or equal to 1 but less than 10.
Step 2. Count the number of decimal places, n, that the decimal point was moved.
Step 3. Write the number as a product with a power of 10.
If the original number is greater than 1, the power of 10 will be 10^n. If the original number is between 0 and 1, the power of 10 will be 10^−n.
Step 4. Check.
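As a small illustration of the steps above (a sketch added here, not part of the text), the following Python function finds the first factor and the power of 10 for a positive number:

```python
import math

def to_scientific(value):
    """Return (a, n) with value = a * 10**n and 1 <= a < 10 (positive inputs only)."""
    if value <= 0:
        raise ValueError("this sketch handles positive numbers only")
    n = math.floor(math.log10(value))   # how many places the decimal point moves
    a = value / 10**n                   # the first factor
    return a, n

print(to_scientific(37000))    # (3.7, 4)
print(to_scientific(0.0052))   # approximately (5.2, -3)
```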
Example 10.75:
Write in scientific notation: 0.0052.
Solution
Move the decimal point to get 5.2, a number between 1 and 10. Count the number of decimal places the point was moved: 3 places. Write as a product with a power of 10: 5.2 × 10^−3. Check your answer: $$\begin{split} 5.2 &\times 10^{-3} \\ 5.2 &\times \frac{1}{10^{3}} \\ 5.2 &\times \frac{1}{1000} \\ 5.2 &\times 0.001 \\ 0.&0052 \end{split}$$ 0.0052 = 5.2 × 10^−3
Exercise 10.149:
Write in scientific notation: 0.0078.
Exercise 10.150:
Write in scientific notation: 0.0129.
Convert Scientific Notation to Decimal Form
How can we convert from scientific notation to decimal form? Let’s look at two numbers written in scientific notation and see.
$$\begin{split} &9.12 \times 10^{4} \qquad \qquad 9.12 \times 10^{-4} \\ &9.12 \times 10,000 \qquad 9.12 \times 0.0001 \\ &91,200 \qquad \qquad \quad 0.000912 \end{split}$$
If we look at the location of the decimal point, we can see an easy method to convert a number from scientific notation to decimal form.
In both cases the decimal point moved 4 places. When the exponent was positive, the decimal moved to the right. When the exponent was negative, the decimal point moved to the left.
Example 10.76:
Convert to decimal form: 6.2 × 10^3.
Solution
Step 1: Determine the exponent, n, on the factor 10. 6.2 × 10^3
Step 2: Move the decimal point n places, adding zeros if needed. 6,200
Step 3: Check to see if your answer makes sense. 10^3 is 1000 and 1000 times 6.2 will be 6,200. 6.2 × 10^3 = 6,200
Exercise 10.151:
Convert to decimal form: 1.3 × 10^3.
Exercise 10.152:
Convert to decimal form: 9.25 × 10^4.
HOW TO: CONVERT SCIENTIFIC NOTATION TO DECIMAL FORM
Step 1. Determine the exponent, n, on the factor 10.
Step 2. Move the decimal n places, adding zeros if needed.
If the exponent is positive, move the decimal point n places to the right. If the exponent is negative, move the decimal point |n| places to the left.
Step 3. Check.
Example 10.77:
Convert to decimal form: 8.9 × 10^−2.
Solution
Determine the exponent, n, on the factor 10: the exponent is −2. Move the decimal point 2 places to the left, adding zeros as needed for placeholders: 0.089. So 8.9 × 10^−2 = 0.089. The check is left to you.
Exercise 10.153:
Convert to decimal form: 1.2 × 10^−4.
Exercise 10.154:
Convert to decimal form: 7.5 × 10^−2.
Multiply and Divide Using Scientific Notation
We use the Properties of Exponents to multiply and divide numbers in scientific notation.
Example 10.78:
Multiply. Write answers in decimal form: (4 × 10^5)(2 × 10^−7).
Solution
Use the Commutative Property to rearrange the factors: 4 • 2 • 10^5 • 10^−7. Multiply 4 by 2 and use the Product Property to multiply 10^5 by 10^−7: 8 × 10^−2. Change to decimal form by moving the decimal two places left: 0.08.
Exercise 10.155:
Multiply. Write answers in decimal form: (3 × 10^6)(2 × 10^−8).
Exercise 10.156:
Multiply. Write answers in decimal form: (3 × 10^−2)(3 × 10^−1).
Example 10.79:
Divide. Write answers in decimal form: \(\frac{9 \times 10^{3}}{3 \times 10^{−2}}\).
Solution
Separate the factors. $$\frac{9}{3} \times \frac{10^{3}}{10^{-2}}$$ Divide 9 by 3 and use the Quotient Property to divide 10^3 by 10^−2: 3 × 10^5. Change to decimal form by moving the decimal five places right: 300,000.
Exercise 10.157:
Divide. Write answers in decimal form: \(\frac{8 \times 10^{4}}{2 \times 10^{-1}}\).
Exercise 10.158:
Divide. Write answers in decimal form: \(\frac{8 \times 10^{2}}{4 \times 10^{-2}}\).
Practice Makes Perfect
Use the Definition of a Negative Exponent
In the following exercises, simplify.
5^−3; 8^−2; 3^−4; 2^−5; 7^−1; 10^−1; 2^−3 + 2^−2; 3^−2 + 3^−1; 3^−1 + 4^−1; 10^−1 + 2^−1; 10^0 − 10^−1 + 10^−2; 2^0 − 2^−1 + 2^−2; (a) (−6)^−2 (b) −6^−2; (a) (−8)^−2 (b) −8^−2; (a) (−10)^−4 (b) −10^−4; (a) (−4)^−6 (b) −4^−6; (a) 5 • 2^−1 (b) (5 • 2)^−1; (a) 10 • 3^−1 (b) (10 • 3)^−1; (a) 4 • 10^−3 (b) (4 • 10)^−3; (a) 3 • 5^−2 (b) (3 • 5)^−2; n^−4; p^−3; c^−10; m^−5; (a) 4x^−1 (b) (4x)^−1 (c) (−4x)^−1; (a) 3q^−1 (b) (3q)^−1 (c) (−3q)^−1; (a) 6m^−1 (b) (6m)^−1 (c) (−6m)^−1; (a) 10k^−1 (b) (10k)^−1 (c) (−10k)^−1
Simplify Expressions with Integer Exponents
In the following exercises, simplify.
p^−4 • p^8; r^−2 • r^5; n^−10 • n^2; q^−8 • q^3; k^−3 • k^−2; z^−6 • z^−2; a • a^−4; m • m^−2; p^5 • p^−2 • p^−4; x^4 • x^−2 • x^−3; a^3 b^−3; u^2 v^−2; (x^5 y^−1)(x^−10 y^−3); (a^3 b^−3)(a^−5 b^−1); (u v^−2)(u^−5 v^−4); (p q^−4)(p^−6 q^−3); (−2r^−3 s^9)(6r^4 s^−5); (−3p^−5 q^8)(7p^2 q^−3); (−6m^−8 n^−5)(−9m^4 n^2); (−8a^−5 b^−4)(−4a^2 b^3); (a^3)^−3; (q^10)^−10; (n^2)^−1; (x^4)^−1; (y^−5)^4; (p^−3)^2; (q^−5)^−2; (m^−2)^−3; (4y^−3)^2; (3q^−5)^2; (10p^−2)^−5; (2n^−3)^−6; u^9 u^−2; b^5 b^−3; x^−6 x^4; m^5 m^−2; q^3 q^12; r^6 r^9; n^−4 n^−10; p^−3 p^−6
Convert from Decimal Notation to Scientific Notation
In the following exercises, write each number in scientific notation.
45,000; 280,000; 8,750,000; 1,290,000; 0.036; 0.041; 0.00000924; 0.0000103; The population of the United States on July 4, 2010 was almost 310,000,000. The population of the world on July 4, 2010 was more than 6,850,000,000. The average width of a human hair is 0.0018 centimeters. The probability of winning the 2010 Megamillions lottery is about 0.0000000057.
Convert Scientific Notation to Decimal Form
In the following exercises, convert each number to decimal form.
4.1 × 10^2; 8.3 × 10^2; 5.5 × 10^8; 1.6 × 10^10; 3.5 × 10^−2; 2.8 × 10^−2; 1.93 × 10^−5; 6.15 × 10^−8; In 2010, the number of Facebook users each day who changed their status to ‘engaged’ was 2 × 10^4. At the start of 2012, the US federal budget had a deficit of more than $1.5 × 10^13. The concentration of carbon dioxide in the atmosphere is 3.9 × 10^−4. The width of a proton is 1 × 10^−5 of the width of an atom.
In the following exercises, multiply or divide and write your answer in decimal form.
(2 × 10^5)(2 × 10^−9); (3 × 10^2)(1 × 10^−5); (1.6 × 10^−2)(5.2 × 10^−6); (2.1 × 10^−4)(3.5 × 10^−2); \(\frac{6 \times 10^{4}}{3 \times 10^{−2}}\); \(\frac{8 \times 10^{6}}{4 \times 10^{−1}}\); \(\frac{7 \times 10^{-2}}{1 \times 10^{−8}}\); \(\frac{5 \times 10^{-3}}{1 \times 10^{−10}}\)
Everyday Math
Calories: In May 2010 the Food and Beverage Manufacturers pledged to reduce their products by 1.5 trillion calories by the end of 2015. Write 1.5 trillion in decimal notation. Write 1.5 trillion in scientific notation.
Length of a year: The difference between the calendar year and the astronomical year is 0.000125 day. Write this number in scientific notation. How many years does it take for the difference to become 1 day?
Calculator display: Many calculators automatically show answers in scientific notation if there are more digits than can fit in the calculator’s display. To find the probability of getting a particular 5-card hand from a deck of cards, Mario divided 1 by 2,598,960 and saw the answer 3.848 × 10^−7. Write the number in decimal notation.
Calculator display: Many calculators automatically show answers in scientific notation if there are more digits than can fit in the calculator’s display. To find the number of ways Barbara could make a collage with 6 of her 50 favorite photographs, she multiplied 50 • 49 • 48 • 47 • 46 • 45. Her calculator gave the answer 1.1441304 × 10^10. Write the number in decimal notation.
Writing Exercises
(a) Explain the meaning of the exponent in the expression 2^3. (b) Explain the meaning of the exponent in the expression 2^−3. When you convert a number from decimal notation to scientific notation, how do you know if the exponent will be positive or negative?
Self Check
(a) After completing the exercises, use this checklist to evaluate your mastery of the objectives of this section.
(b) After looking at the checklist, do you think you are well prepared for the next section? Why or why not?
Contributors
Lynn Marecek (Santa Ana College) and MaryAnne Anthony-Smith (Formerly of Santa Ana College). This content is licensed under Creative Commons Attribution License v4.0 "Download for free at http://cnx.org/contents/fd53eae1-fa2...49835c3c@5.191."
|
Rocky Mountain Journal of Mathematics Rocky Mountain J. Math. Volume 46, Number 2 (2016), 559-570. An application of Cohn's rule to convolutions of univalent harmonic mappings Abstract
Dorff et al.~\cite {do and no} proved that the harmonic convolutions of the standard right half-plane mapping $F_0=H_0+\overline {G}_0$ (where $H_0+G_0=z/(1-z)$ and $G_0'=-zH_0'$) and mappings $f_\beta =h_\beta +\overline {g}_{\beta }$ (where $f_\beta $ are obtained by shearing of analytic vertical strip mappings with dilatation $e^{i\theta }z^n$, $n=1,2$, $\theta \in \mathbb {R}$) are in $S_H^0$ and are convex in the direction of the real axis. In this paper, by using Cohn's rule, we generalize this result by replacing the standard right half-plane mapping $F_0$ with a family of right half-plane mappings $F_a=H_a+\overline {G}_a$ (with $H_a+G_a=z/(1-z)$ and $G'_a/H'_a= {(a-z)}/{(1-az)}$, $a\in (-1,1)$) and including the cases $n=3$ and $n=4$ (in addition to $n=1$ and $n=2$) for dilatations of $f_\beta $.
Article information Source Rocky Mountain J. Math., Volume 46, Number 2 (2016), 559-570. Dates First available in Project Euclid: 26 July 2016 Permanent link to this document https://projecteuclid.org/euclid.rmjm/1469537477 Digital Object Identifier doi:10.1216/RMJ-2016-46-2-559 Mathematical Reviews number (MathSciNet) MR3529083 Zentralblatt MATH identifier 1359.31002 Citation
Kumar, Raj; Gupta, Sushma; Singh, Sukhjit; Dorff, Michael. An application of Cohn's rule to convolutions of univalent harmonic mappings. Rocky Mountain J. Math. 46 (2016), no. 2, 559--570. doi:10.1216/RMJ-2016-46-2-559. https://projecteuclid.org/euclid.rmjm/1469537477
|
1. Homework Statement
This isn't exactly a problem but rather a problem in understanding the derivation of the phenomenon, or more precisely, one step in the derivation.
In the following we will consider the EPR pair of two spin ##1/2## particles, where the state can be written as
$$ \vert \psi\rangle =\frac{1}{\sqrt{2}}(\vert 0,1\rangle - \vert 1,0\rangle).$$ Now let us assume that Alice and Bob have each one of the two particles of the EPR pair. Alice has another particle with spin ##1/2## in the state ##\vert \phi\rangle##. The state of the whole system, all three particles, is therefore given by
$$\begin{align*}\vert \phi\rangle \otimes \vert \psi\rangle &= \vert \phi\rangle \otimes \frac{1}{\sqrt{2}}(\vert 0,1\rangle - \vert 1,0\rangle)\\
&= \frac{1}{\sqrt{2}} (\vert\phi,0\rangle \otimes \vert 1\rangle - \vert\phi,1\rangle \otimes \vert 0\rangle). \end{align*}$$ Now Alice can measure her two particles, for example using ##P_i= \vert \chi_i\rangle\langle \chi_i\vert, i\in \{1,2,3,4\}## and
$$\begin{align*}
\vert\chi_1\rangle &= \frac{1}{\sqrt{2}}(\vert 0,1\rangle - \vert 1,0\rangle)\\
\vert\chi_2\rangle &= \frac{1}{\sqrt{2}}(\vert 0,1\rangle + \vert 1,0\rangle)\\
\vert\chi_3\rangle &= \frac{1}{\sqrt{2}}(\vert 0,0\rangle - \vert 1,1\rangle)\\
\vert\chi_4\rangle &= \frac{1}{\sqrt{2}}(\vert 0,0\rangle + \vert 1,1\rangle).
\end{align*}$$
Up until this point I understand the definitions and the idea. The problem arises when I try to calculate for example
$$P_1 \vert \phi\rangle\otimes\vert\psi\rangle = \frac{1}{2} \vert \chi_1\rangle \otimes (-\vert 1\rangle\langle 1\vert\phi\rangle - \vert 0\rangle\langle 0\vert\phi\rangle )$$
2. Homework Equations
All given above.
3. The Attempt at a Solution
We first need to figure out how ##P_i## acts on the tensor product of the states. Expanding the state gives
$$ P_1 \vert \phi\rangle\otimes\vert\psi\rangle = \frac{1}{\sqrt{2}} P_1(\vert\phi\rangle \otimes\vert 0\rangle \otimes \vert 1\rangle - \vert\phi\rangle \otimes\vert 1\rangle \otimes \vert 0\rangle). $$ From this we can conclude that ##P_i## is of the form ##P_i = A\otimes B \otimes C##, where ##A,B,C## can be any operator. I now tried to compute the following:
$$\begin{align*}\vert \chi_1\rangle\langle \chi_1\vert
&= \frac{1}{2}(\vert 0,1\rangle - \vert 1,0\rangle)(\langle 0,1 \vert -\langle 1,0\vert)\\
&= \frac{1}{2} (\vert 0\rangle \otimes\vert1\rangle - \vert 1\rangle \otimes\vert0\rangle)(\langle 0\vert\otimes\langle1 \vert -\langle 1\vert\otimes\langle0\vert),
\end{align*}$$
but can't really proceed from here since I don't really know how to calculate this... I suspect that after finishing this calculation I could define ##A\otimes B := \vert \chi_1\rangle\langle \chi_1\vert ##. Then I would only need to find ##C##, but I'm not really sure how to do that...
Am I doing something completely wrong here, or is this the right approach?
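As a sanity check of the projected state quoted in the problem statement, here is a small NumPy sketch (not part of the original post); the amplitudes chosen for ##\vert\phi\rangle## are arbitrary, and the qubit ordering is (Alice's extra particle, Alice's EPR particle, Bob's EPR particle):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

phi = 0.6 * ket0 + 0.8 * ket1                       # arbitrary normalized |phi> (assumed values)
psi = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)   # EPR pair (Alice, Bob)
state = np.kron(phi, psi)                           # |phi> tensor |psi>

chi1 = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)
P1 = np.kron(np.outer(chi1, chi1), np.eye(2))       # |chi_1><chi_1| on Alice's two particles

lhs = P1 @ state
rhs = 0.5 * np.kron(chi1, -phi)                     # (1/2)|chi_1> tensor (-<1|phi>|1> - <0|phi>|0>)
print(np.allclose(lhs, rhs))                        # True
```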
|
Equivalence of Definitions of Normal Extension
Theorem
Let $L / K$ be an algebraic field extension. The following definitions of a normal extension are equivalent:
Definition 1: $L / K$ is a normal extension if and only if, for every irreducible polynomial $f \in K \left[{x}\right]$ with at least one root in $L$, $f$ splits completely in $L$.
Definition 2: Let $\overline K$ be an algebraic closure of $K$. Then $L / K$ is a normal extension if and only if $\sigma \left({L}\right) = L$ for every embedding $\sigma$ of $L$ into $\overline K$ which fixes $K$.
Proof Definition $1$ implies Definition $2$
Let $\alpha \in L$ be an arbitrary element and let $m_\alpha \in K \left[{x}\right]$ be its minimal polynomial over $K$.
Let $\sigma: L \mapsto \overline K$ be an arbitrary embedding of $L$ fixing $K$.
We wish to show that $\sigma \left({\alpha}\right)\in L$.
Since $\sigma$ fixes $K$, $\sigma \left({\alpha}\right)$ must also be a root of $m_\alpha$.
By our assumption, $\alpha \in L$ implies that all roots of $m_\alpha$ are in $L$ and consequently $\sigma \left({\alpha}\right) \in L$.
$\Box$
Definition $2$ implies Definition $1$
Again, let $\alpha \in L$ and let $m_\alpha \in K \left[{x}\right]$ be its minimal polynomial over $K$.
We must show that for every root $\beta$ of $m_\alpha$, there exists an embedding $\sigma_\beta$, of $L$ in $\overline K$ such that $\sigma_\beta \left({\alpha}\right) = \beta$.
Consider the intermediate field $K \left[{\alpha}\right] \subset L$. Since $\beta$ is a root of $m_\alpha$, there is a $K$-embedding $\tau_\beta: K \left[{\alpha}\right] \to \overline K$ with $\tau_\beta \left({\alpha}\right) = \beta$. By the extension property of embeddings into an algebraically closed field, $\tau_\beta$ extends to an embedding $\sigma_\beta$ of $L$ into $\overline K$ with:
$\sigma_\beta \restriction_{K \left[{\alpha}\right]} = \tau_\beta$
In particular, $\sigma_\beta \left({\alpha}\right) = \beta$.
By our assumption, $\sigma_\beta \left({L}\right) = L$ for each $\beta$.
Consequently, every root of $m_\alpha$ is in $L$.
$\blacksquare$
|
Is there an algorithm for finding the shortest path in an undirected weighted graph?
All-Pairs-Shortest Path
Given a graph $G = (V, E)$ find the shortest path between
any two nodes $u,v \in V$. It can be solved by the Floyd–Warshall algorithm in time $O(|V|^3)$. Many believe the APSP problem requires $\Omega(n^3)$ time, but it remains open whether there exist algorithms taking $O(n^{3 - \delta} \cdot \text{poly}(\log M))$ time, where $\delta > 0$ and edge weights are in the range $[-M, M]$.
The reasoning for this is that, upon close examination, the APSP problem can be solved by matrix multiplication: if we replace the operators $\{ +, \cdot \}$ with $\{\text{min}, +\}$, we may use the framework for matrix multiplication to compute the solution. What is interesting is that if there exist sub-cubic algorithms for the APSP problem, then there exist sub-cubic algorithms for many related graph and matrix problems [1].
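To make the $\{\min,+\}$ connection concrete, here is a small Python sketch (mine, not from the answer) that computes all-pairs distances by repeatedly squaring the weight matrix under the $(\min,+)$ product; each product costs $O(n^3)$ as written, so this matches the cubic bound rather than beating it:

```python
def min_plus(A, B):
    """(min, +) matrix product: C[i][j] = min over k of A[i][k] + B[k][j]."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def apsp_by_repeated_squaring(W):
    """All-pairs shortest distances via O(log n) (min,+) products of the weight matrix."""
    n = len(W)
    D = [row[:] for row in W]
    steps = 1
    while steps < n - 1:
        D = min_plus(D, D)
        steps *= 2
    return D

INF = float('inf')
W = [[0, 3, INF],
     [3, 0, 1],
     [INF, 1, 0]]
print(apsp_by_repeated_squaring(W))   # [[0, 3, 4], [3, 0, 1], [4, 1, 0]]
```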
|
This question prompted a reformulation:
What is a really good example of a situation where keeping track of isomorphisms leads to tangible benefit?
I believe this to be a serious question because it actually is oftentimes a good idea casually to identify isomorphism classes. To bring up an intermediate-level example I've alluded to often, consider the classification of topological surfaces. When I explain it to students, I do somewhat consciously write equalities as I manipulate one shape into another homeomorphic one. I even do it rather quickly to encourage intuitive associations that are likely to be useful. In any case, for arguments of that sort, it would be really tedious, and probably pointless, to write down isomorphisms with any precision.
Meanwhile, at other times, I've also joined in the chorus of criticism that greets the conflation of equality and isomorphism.
The problem is it's quite challenging to come up with really striking examples where this care is rewarded. Let me start off with a somewhat specialized class of examples. These come from
descent theory. The setting is a map $$X\rightarrow Y,$$ which is usually submersive, in some sense suitable to the situation. You would like criteria for an object $V$ lying over $X$, say a fiber bundle, to arise as a pull-back of an object on $Y$. There is a range of formalism to deal with this problem, but I'll just mention two cases. One is when $Y=X/G$, the orbit space of a group action on $X$. For $V$ to be pulled-back from $Y$, we should have $g^*(V)\simeq V$ for each $g\in G$. But that's not enough. What is actually required is that there be a collection of isomorphisms $$f_g: g^*(V)\simeq V$$ that are compatible with the group structure. This means something like $$f_{gh}=f_g\circ f_h,$$ except you have to twist in an obvious way to take into account the correct domain. So you see, I have at least to introduce notation for the isomorphisms involved to formulate the right condition. In practice, when you want to construct something on $Y$ starting from something on $X$, you have to specify the $f_g$ rather precisely.
Another elementary case is when $X$ is an open covering $(U_i)$ of $Y$. Then an object on $Y$ is typically equivalent to a collection $V_i$ of objects, one on each $U_i$, but with additional data. Here as well, $V_i$ and $V_j$ obviously have to agree on the intersections. But that's again not enough. Rather there should be a collection of isomorphisms $$\phi_{ji}: V_i|U_i\cap U_j\simeq V_j|U_i\cap U_j$$that are compatible on the triple overlaps:$$\phi_{kj}\circ \phi_{ji}=\phi_{ki}.$$ Incidentally, for something like a vector bundle, since any two of the same rank are locally 'the same,' it's clear that keeping track of isomorphisms will be the key to the transition from collections of local objects to a global object. The formalism is concretely applied in situations where you can define some objects only locally, but would like to glue them together to get a global object. For a really definite example that comes immediately to mind, there is the determinant of cohomology for vector bundles on a family of varieties over a parameter space $Y$. Because a choice of resolution is involved in defining this determinant, which might exist only locally on $Y$, Knudsen and Mumford struggled quite a bit to show that the local constructions glue together. Then Grothendieck suggested the remedy of defining the determinant provisionally as a
signed line bundle, which then allowed them to nail down the correct $\phi_{ji}$. These days, this determinant is a very widely useful tool, for example, in generating line bundles on moduli spaces.
I apologize if this last paragraph is a bit too convoluted for non-specialists. Part of my reason for writing it down is to illustrate that my main examples for bolstering the 'keep track of isomorphisms' paradigm are a bit too advanced for most undergraduates.
So, to conclude, I'd be quite happy to hear of better examples. As already suggested above, it would be nice to have them be accessible but substantively illuminating. If you would like to discuss, say, different bases for vector spaces, it would be good if the language of isomorphism etc. clarifies matters in a really obvious way, as opposed to a sets-and-elements exposition.
Added: Oh, if you have advanced examples, I would certainly like to hear about them as well.
Added: I see now there are three levels at least to distinguish:
Regarding objects as equal vs. regarding them as isomorphic vs. paying attention to specific isomorphisms.
I somehow conflated the two transitions in the course of asking the question. Of course I'm happy to see good examples illustrating the nature of either, but I'm especially interested in the second refinement.
Added yet again: I'm grateful to everyone for contributing nice examples, and to Urs Schreiber who put in some effort to instruct me over at the n-category cafe. As I mentioned to Urs there, it would be especially nice to see examples of the following sort.
One usually thinks $X=Y$;
A careful analysis encourages the view $X\simeq Y$;
This perspective leads to genuinely new insight and benefit.
Even better would be if some specific knowledge of the isomorphism in 2. is important. Of course, more than two objects might be involved. I was initially hoping for some input from combinatorics, with the emphasis on 'bijective proofs' and all that. Anything?
Added, 14 May:
OK, I hope this will be the last addition. Because this question flowed over to the n-category cafe, I ended up having a small discussion there as well. I thought I'd copy here my last response, in case anyone else is interested.
n-cafe post:
I suppose it's obvious by now that I'm using a specific request to drive home the need for 'small but striking examples' in favor of category theory.
Last fall, Eugenia Cheng told me of a visit to some university to give a colloquium talk. The host greeted her with the observation that he doesn't regard category theory as a field of research. OK, he was probably a bit extreme, but milder versions of that view are quite common. Now, one possible response is to regard all such people as unreasonable and talk just to friends (who of course are the reasonable people!). This is not entirely bad, because that might be a way to buy time and gain enough stability to eventually prove the earth-shattering result that will show everyone! Another way is to take up the skepticism as a constructive everyday challenge. This I suppose is what everyone here is doing at some level, anyways.
Other than the derived loop space, which is not exactly small, Urs' examples are all of the simple subtle sort that can, over time, contribute to a really important change in scientific outlook and maybe even the infrastructure of a truly glorious theory. For example, I agree wholeheartedly about the horrors of the old tensor formalism. But it's not unreasonable to ask for more striking accessible evidence of utility when it comes to the current state of category theory.
The importance of small insights and language that gradually accumulate into the edifice of a coherent and powerful theory is the usual interpretation of Grothendieck's 'rising sea' philosophy. However, the process is hardly ever smooth along the way, especially the question of acceptance by the community. I'm not a historian, but I've studied arithmetic geometry long enough to have some sense of the changing climate surrounding etale cohomology theory, for example, over the last several decades. The full proof of the Weil conjectures took a while to come about, as you know. Acceptance came slowly with many bits and pieces sporadically giving people the sense that all those subtleties and abstractions are really worthwhile. Fortunately, the rationality of the zeta function was proved early on. However, there was a pretty concrete earlier proof of that as well using $p$-adic analysis, so I doubt it would have been the big theorem that convinced everyone. One real breakthrough came in the late sixties when Deligne used etale cohomology to show that Ramanujan's conjecture on his tau function could be reduced to the Weil conjectures. There was no way to do this without etale cohomology and the conjecture in question concerned something very precise, the growth rate of natural arithmetic functions. This could even be checked numerically, so impressed people in the same way that experimental verification of a theoretical prediction does in physics. Clearly something deep was going on. Of course there were many other indications. The construction of entirely new representations of the Galois group of $\mathbb{Q}$ with very rich properties, the unification of Galois cohomology and topological cohomology, a clean interpretation of arithmetic duality theorems that gave a re-interpretation of class field theory, and so on.
For myself, being a fan of you folks here, I believe this kind of process is going on in category theory. But I don't think you have to be too unreasonable to doubt it. In a similar vein, I don't agree with Andrew Wiles' view that physics will be irrelevant for number theory, but also think his pessimism is perfectly sensible.
I think I'm trying to make the obvious point that the presence of pessimists can be very helpful to the development of a theory, in so far as the optimists interact with them in constructive ways. I haven't been coming to this site much lately, because the bit of internet time I have tends to be absorbed by Math Overflow. But I did catch David's recent post on Frank Quinn's article, which ended up as a catalyst for my MO question.
At the Boston conference following the proof of Fermat's last theorem, I've been told Hendrik Lenstra said something like this: 'When I was young, I knew I wanted to solve Diophantine equations. I also knew I didn't want to represent functors. Now I have to represent functors to solve Diophantine equations!' So should we conclude that he was foolish to avoid representable functors for so long? I wouldn't.
This response to the MO question brings up the importance of knowing the specific isomorphism between some Hilbert spaces given by the Fourier transform. This is an excellent example, especially when we consider how it relates to the different realizations of the representations of the Heisenberg group and the attendant global issues, say as you vary over a family of polarizations. But I couldn't resist recalling Irving Segal's insistence that 'There's only
one Hilbert space!' Obviously, he knew, among many other things, the different realizations of the Stone-Von-Neumann representation as well as anyone, so you can take your own guess as to the reasoning behind that proclamation. He certainly may have lost something through that kind of philosophical intransigence. But I suspect that he, and many around him, gained something as well.
|
In this question we are interested in the number of limit cycles which appear in the following perturbational system:
\begin{equation}\cases{ x'=y -x^{2}+\epsilon P(x,y) \\ y'=-x+\epsilon Q(x,y) } \end{equation} where $P$ and $Q$ are polynomials of degree n. The unperturbed system (i.e, $\epsilon=0$) is not a Hamiltonian vector field. So we multiply the above vector field by the integrating factor "$e^{-2y}$". After this multiplication, the (new) unperturbed system is a Hamiltonian vector field with Hamiltonian \begin{equation} H(x,y)=e^{-2y}(y-x^{2}+1/2) \end{equation}
The unperturbed system has a unique singularity at the origin, which is of center type. The region of closed orbits surrounding the center is $\{(x,y)\mid 0\leq H(x,y) \leq 1/2\}$. According to the theory of Abelian integrals and its relation to the second part of Hilbert's 16th problem, the number of limit cycles of the perturbed system is closely related to (and essentially controlled by) the number of zeros of the following integral function $I:[0,\;1/2]\to \mathbb{R}$:
\begin{equation} I(c)=\int_{H^{-1}(c)} e^{-2y}(Pdy-Qdx) \end{equation}
Motivated by a famous result of A. Varchenko about the finiteness of the number of zeros of Abelian integrals for polynomial perturbations of polynomial Hamiltonians (which is proved in Arnold, Gusein-Zade and Varchenko, Singularities of Differentiable Maps, Vol. II), I have the following two questions:
Questions:
1) Assume that $P$ and $Q$ are fixed (given) polynomials. Is it true that either $I$ is identically zero or it has only a finite number of zeros? Is there an explicit formula for the function $I(c)$? Or at least can we compute $\lim_{c \to 0} I(c)$?
2) If the answer to the above question is affirmative, can we control the number of zeros of $I$ in terms of the degrees of the polynomials $P,Q$?
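If a numerical experiment helps with Question 1, the following rough Python sketch (my own, not part of the question) estimates $I(c)$ by integrating the unperturbed flow once around the level curve $H^{-1}(c)$ and accumulating the integrand along the way. The choice $P(x,y)=x$, $Q(x,y)=y$ is only a placeholder, and the curve is traversed in the direction of the flow:

```python
import numpy as np
from scipy.integrate import solve_ivp

P = lambda x, y: x      # placeholder perturbation, assumed only for illustration
Q = lambda x, y: y

def rhs(t, z):
    x, y, acc = z
    dx = y - x**2                                         # unperturbed system
    dy = -x
    dacc = np.exp(-2*y) * (P(x, y) * dy - Q(x, y) * dx)   # integrand of I(c) along the flow
    return [dx, dy, dacc]

def crossing(t, z):            # downward crossing of the x-axis marks a full period
    return z[1]
crossing.direction = -1

def I(c):
    x0 = np.sqrt(0.5 - c)      # H(x0, 0) = c picks a starting point on the level curve
    sol = solve_ivp(rhs, [0, 100], [x0, 0.0, 0.0], events=crossing,
                    dense_output=True, rtol=1e-10, atol=1e-12)
    T = next(t for t in sol.t_events[0] if t > 1e-6)      # first return time
    return sol.sol(T)[2]

for c in (0.05, 0.2, 0.4):
    print(c, I(c))
```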
Note 1: The proof of Varchenko is essentially based on algebraic geometry. In fact, they consider polynomial perturbations of polynomial Hamiltonian systems (please see page 5, section 3.5, of this paper). But in the situation of this question, the non-algebraic integrating factor $e^{-2y}$ destroys the algebro-geometric feature of the problem. So how can one remedy this problem? Note 2: As a possible resolution to avoid the non-algebraic term $e^{-2y}$, we consider the following approach, which can be applied to every algebraic perturbation of an algebraic vector field where the unperturbed system has a band of closed orbits (a center) but is not a Hamiltonian vector field and we have a non-algebraic integrating factor.
Consider the polynomial vector field $$\begin{cases} x'=P+\epsilon A\\ y'=Q+\epsilon B \end{cases}$$
where the unperturbed system has a band of closed orbits. We assume that there is a straight line $\ell$, parametrized by a real parameter $h\in \mathbb{R}$, with the following property: periodic orbits of the unperturbed system intersect $\ell$ transversally and $s'(h) \neq 0$ for all $h$, where $s(h)$ is the slope of the unperturbed vector field at the point $h$. This is just the opposite of the concept of an "isocline". For example, this is the case for the Liénard vector field
when $\ell$ is the $x$ axis. But it is not the case for the isocline $y$ axis.
We take a point $h \in \ell$. By $p(h)$ we mean the value of the Poincaré return map $p$ at $h$; hence $p(h) \in \ell$ is the first return point for the orbit starting at $h \in \ell$. WLOG we may assume that the orientation of the solution curve is anticlockwise. Consider the simple closed curve $\gamma$ consisting of the solution curve from $h$ to $p(h)$ and the part of the straight line from $p(h)$ to $h$. If $p(h) \neq h$ then the integral $\int_{\gamma} \kappa_{g} \neq 2\pi$, since two different points of the line $\ell$ have different slopes. (This is a consequence of the Gauss–Bonnet theorem.) So for $\epsilon$ sufficiently small, the above curvature integral is $2 \pi$
IF AND ONLY IF $p(h)=h$.
Now we compute the integral of the curvature:
$$T(h)=\int_{\tilde{\gamma}}(\frac{(P+\epsilon A)(Q_{x}+\epsilon B_{x})+(Q+\epsilon B)(Q_{y}+\epsilon B_{y})}{P^2+Q^2})dx -(\frac{(Q+\epsilon B)(P_{y}+\epsilon A_{y})+(P+\epsilon A)(P_{x}+\epsilon A_{x})}{P^2+Q^2})dy$$
where $\tilde{\gamma}$ is the same as $\gamma$ but with the straight-line segment from $p(h)$ to $h$ removed (similar to an orbit which starts from $r_{0}$ and ends at $p(r_{0})$).
The above integral is equal to $2\pi +\epsilon c(h) + O(\epsilon^2)$
where $c(h) $ is the following:
$$c(\alpha)= \int_{\alpha} (\frac{(AQ_{x}+PB_{x}+BQ_{y}+QB_{y})(P^2+Q^2)+2AP+2BQ}{{(P^2+Q^2)}^2})dx-\int_{\alpha}(\frac{(BP_{y}+QA_{y}+AP_{x}+PA_{x})(P^2+Q^2)+2AP+2BQ}{{(P^2+Q^2)}^2})dy$$
where $\alpha $ is the closed orbit of the unperturbed system starting at $h$.
Now in the integral $c(h)$ we do not have any non algebraic term.
So this would possibly imply that the zero set of $c(h)$ is equal to the zero set of $I(h)$, where $I(h)$ is the standard Abelian integral $I(h)= \int_{\alpha(h)} A\,dy-B\,dx$, because both zero sets consist of the points corresponding to the generated limit cycles.
Is the later statement true? Is the computation of $c(h) $ correct? Does this situation make facilities for computations of abelian integrals to count the number of generating limit cycles?
|
This is a post for my math 100 calculus class of fall 2013. In this post, I give the 4th week’s recitation worksheet (no solutions yet – I’m still writing them up). More pertinently, we will also go over the most recent quiz and common mistakes. Trig substitution, it turns out, is not so easy.
Before we hop into the details, I’d like to encourage you all to avail of each other, your professor, your TA, and the MRC in preparation for the first midterm (next week!).
1. The quiz
There were two versions of the quiz this week, but they were very similar. Both asked about a particular trig substitution. One was
$latex \displaystyle \int_3^6 \sqrt{36 – x^2} \mathrm{d} x $
And the other was
$latex \displaystyle \int_{2\sqrt 2}^4 \sqrt{16 – x^2} \mathrm{d}x. $
They are very similar, so I’m only going to go over one of them. I’ll go over the first one. We know we are to use trig substitution. I see two ways to proceed: either draw a reference triangle (which I recommend), or think through the Pythagorean trig identities until you find the one that works here (which I don’t recommend).
We see a $latex {\sqrt{36 – x^2}}$, and this is hard to deal with. Let’s draw a right triangle that has $latex {\sqrt{36 – x^2}}$ as a side. I’ve drawn one below. (Not fancy, but I need a better light).
In this picture, note that $latex {\sin \theta = \frac{x}{6}}$, or that $latex {x = 6 \sin \theta}$, and that $latex {\sqrt{36 – x^2} = 6 \cos \theta}$. If we substitute $latex {x = 6 \sin \theta}$ in our integral, this means that we can replace our $latex {\sqrt{36 – x^2}}$ with $latex {6 \cos \theta}$. But this is a substitution, so we need to think about $latex {\mathrm{d} x}$ too. Here, $latex {x = 6 \sin \theta}$ means that $latex {\mathrm{d}x = 6 \cos \theta \, \mathrm{d}\theta}$.
Some people used the wrong trig substitution, meaning they used $latex {x = \tan \theta}$ or $latex {x = \sec \theta}$, and got stuck. It’s okay to get stuck, but if you notice that something isn’t working, it’s better to try something else than to stare at the paper for 10 minutes. Other people used $latex {x = 6 \cos \theta}$, which is perfectly doable and parallel to what I write below. Another common error was people forgetting about the $latex {\mathrm{d}x}$ term entirely. But it’s important!
Substituting these into our integral gives
$latex \displaystyle \int_{?}^{??} 36 \cos^2 (\theta) \mathrm{d}\theta, $
where I have included question marks for the limits because, as after most substitutions, they are different. You have a choice: you might go on and put everything back in terms of $latex {x}$ before you give your numerical answer; or you might find the new limits now.
It’s not correct to continue writing down the old limits. The variable has changed, and we really don’t want $latex {\theta}$ to go from $latex {3}$ to $latex {6}$.
If you were to find the new limits, then you need to consider: if $latex {x=3}$ and $latex {\frac{x}{6} = \sin \theta}$, then we want a $latex {\theta}$ such that $latex {\sin \theta = \frac{3}{6}= \frac{1}{2}}$, so we might use $latex {\theta = \pi/6}$. Similarly, when $latex {x = 6}$, we want $latex {\theta}$ such that $latex {\sin \theta = 1}$, like $latex {\theta = \pi/2}$.
Note that these were two arcsine calculations, which we would have to do even if we waited until after we put everything back in terms of $latex {x}$ to evaluate. Some people left their answers in terms of these arcsines. As far as mistakes go, this isn’t a very serious one. But this is the sort of simplification that is expected of you on exams, quizzes, and homeworks. In particular, if something can be written in a much simpler way through the unit circle, then you should do it if you have the time.
So we could rewrite our integral as
$latex \displaystyle \int_{\pi/6}^{\pi/2} 36 \cos^2 (\theta) \mathrm{d}\theta. $
How do we integrate $latex {\cos^2 \theta}$? We need to make use of the identity $latex {\cos^2 \theta = \dfrac{1 + \cos 2\theta}{2}}$.
You should know this identity for this midterm. Now we have
$latex \displaystyle 36 \int_{\pi/6}^{\pi/2}\left(\frac{1}{2} + \frac{\cos 2 \theta}{2}\right) \mathrm{d}\theta = 18 \int_{\pi/6}^{\pi/2}\mathrm{d}\theta + 18 \int_{\pi/6}^{\pi/2}\cos 2\theta \mathrm{d}\theta. $
The first integral is extremely simple and yields $latex {6\pi}$. The second integral has antiderivative $latex {\dfrac{\sin 2 \theta}{2}}$ (
Don’t forget the $latex {2}$ on bottom!), and we have to evaluate $latex {\big[9 \sin 2 \theta \big]_{\pi/6}^{\pi/2}}$, which gives $latex {-\dfrac{9 \sqrt 3}{2}}$. You should know the unit circle sufficiently well to evaluate this for your midterm.
And so the final answer is $latex {6 \pi – \dfrac{9 \sqrt 3}{2} \approx 11.0553}$. (You don’t need to be able to do that approximation).
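(If you’d like to double-check the final number on a computer — not something you need for the quiz — here is a quick SymPy snippet; it may print the exact value in an equivalent but equal form.)

```python
import sympy as sp

x = sp.symbols('x')
value = sp.integrate(sp.sqrt(36 - x**2), (x, 3, 6))
print(sp.simplify(value))   # 6*pi - 9*sqrt(3)/2
print(float(value))         # about 11.0553
```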
Let’s go back a moment and suppose you didn’t re-evaluate the limits once you substituted in $latex {\theta}$. Then, following the same steps as above, you’d be left with
$latex \displaystyle 18 \int_{?}^{??}\mathrm{d}\theta + 18 \int_{?}^{??}\cos 2\theta \mathrm{d}\theta = \left[ 18 \theta \right]_?^{??} + \left[ 9 \sin 2 \theta \right]_?^{??}. $
Since $latex {\frac{x}{6} = \sin \theta}$, we know that $latex {\theta = \arcsin (x/6)}$. This is how we evaluate the left integral, and we are left with $latex {[18 \arcsin(x/6)]_3^6}$. This means we need to know the arcsine of $latex {1}$ and $latex {\frac 12}$. These are exactly the same two arcsine computations that I referenced above! Following them again, we get $latex {6\pi}$ as the answer.
We could do the same for the second part, since $latex {\sin ( 2 \arcsin (x/6))}$ when $latex {x = 3}$ is $latex {\sin (2 \arcsin \frac{1}{2} ) = \sin (2 \cdot \frac{\pi}{6} ) = \frac{\sqrt 3}{2}}$; and when $latex {x = 6}$ we get $latex {\sin (2 \arcsin 1) = \sin (2 \cdot \frac{\pi}{2}) = \sin (\pi) = 0}$.
Putting these together, we see that the answer is again $latex {6\pi – \frac{9\sqrt 3}{2}}$.
Or, throwing yet another option out there, we could do something else (a little bit wittier, maybe?). We have this $latex {\sin 2\theta}$ term to deal with. You might recall that $latex {\sin 2 \theta = 2 \sin \theta \cos \theta}$, the so-called double-angle identity.
Then $latex {9 \sin 2\theta = 18 \sin \theta \cos \theta}$. Going back to our reference triangle, we know that $latex {\cos \theta = \dfrac{\sqrt{36 – x^2}}{6}}$ and that $latex {\sin \theta = \dfrac{x}{6}}$. Putting these together,
$latex \displaystyle 9 \sin 2 \theta = \dfrac{ x\sqrt{36 – x^2} }{2}. $
When $latex {x=6}$, this is $latex {0}$. When $latex {x = 3}$, we have $latex {\dfrac{ 3\sqrt {27}}{2} = \dfrac{9\sqrt 3}{2}}$.
And fortunately, we get the same answer again at the end of the day. (phew).
2. The worksheet
Finally, here is the worksheet for the day. I’m working on their solutions, and I’ll have that up by late this evening (sorry for the delay).
Ending tidbits – when I was last a TA, I tried to see what were the good predictors of final grade. Some things weren’t very surprising – there is a large correlation between exam scores and final grade. Some things were a bit surprising – low homework scores correlated well with low final grade, but high homework scores didn’t really have a strong correlation with final grade at all; attendance also correlated weakly. But one thing that really stuck with me was the first midterm grade vs final grade in class: it was really strong. For a bit more on that, I refer you to my final post from my Math 90 posts.
|
[Click here for a PDF of this post with nicer formatting]
Many authors pull the definitions of the raising and lowering (or ladder) operators out of thin air, with no attempt at motivation. This is pointed out nicely in [1] by Eli, along with one justification based on factoring the Hamiltonian.
In [2] there is a small exception to the usual presentation. In that text, these operators are defined as usual with no motivation. However, after the utility of these operators has been shown, the raising and lowering operators show up in a context that does provide that missing motivation as a side effect.
It doesn’t look like the author was trying to provide a motivation, but it can be interpreted that way.
When seeking the time evolution of Heisenberg-picture position and momentum operators, we will see that those solutions can be trivially expressed using the raising and lowering operators. No special tools or black magic are required to find the structure of these operators. Unfortunately, we must first switch to both the Heisenberg picture representation of the position and momentum operators, and also employ the Heisenberg equations of motion. Neither of these fits into the standard narrative of most introductory quantum mechanics treatments. We will also see that these raising and lowering “operators” could also be introduced in classical mechanics, provided we were attempting to solve the SHO system using the Hamiltonian equations of motion.
I’ll outline this route to finding the structure of the ladder operators below. Because these are encountered trying to solve the time evolution problem, I’ll first show a simpler way to solve that problem. Because that simpler method depends a bit on lucky observation and is somewhat unstructured, I’ll then outline a more structured procedure that leads to the ladder operators directly, also providing the solution to the time evolution problem as a side effect.
The starting point is the Heisenberg equations of motion. For a time independent Hamiltonian \( H \), and a Heisenberg operator \( A^{(H)} \), those equations are
\begin{equation}\label{eqn:harmonicOscDiagonalize:20}
\ddt{A^{(H)}} = \inv{i \Hbar} \antisymmetric{A^{(H)}}{H}. \end{equation}
Here the Heisenberg operator \( A^{(H)} \) is related to the Schrodinger operator \( A^{(S)} \) by
\begin{equation}\label{eqn:harmonicOscDiagonalize:60}
A^{(H)} = U^\dagger A^{(S)} U, \end{equation}
where \( U \) is the time evolution operator. For this discussion, we need only know that \( U \) commutes with \( H \), and do not need to know the specific structure of that operator. In particular, the Heisenberg equations of motion take the form
\begin{equation}\label{eqn:harmonicOscDiagonalize:80}
\begin{aligned} \ddt{A^{(H)}} &= \inv{i \Hbar} \antisymmetric{A^{(H)}}{H} \\ &= \inv{i \Hbar} \antisymmetric{U^\dagger A^{(S)} U}{H} \\ &= \inv{i \Hbar} \lr{ U^\dagger A^{(S)} U H - H U^\dagger A^{(S)} U } \\ &= \inv{i \Hbar} \lr{ U^\dagger A^{(S)} H U - U^\dagger H A^{(S)} U } \\ &= \inv{i \Hbar} U^\dagger \antisymmetric{A^{(S)}}{H} U. \end{aligned} \end{equation}
The Hamiltonian for the harmonic oscillator, with Schrodinger-picture position and momentum operators \( x, p \) is
\begin{equation}\label{eqn:harmonicOscDiagonalize:40}
H = \frac{p^2}{2m} + \inv{2} m \omega^2 x^2, \end{equation}
so the equations of motions are
\begin{equation}\label{eqn:harmonicOscDiagonalize:100}
\begin{aligned} \ddt{x^{(H)}} &= \inv{i \Hbar} U^\dagger \antisymmetric{x}{H} U \\ &= \inv{i \Hbar} U^\dagger \antisymmetric{x}{\frac{p^2}{2m}} U \\ &= \inv{2 m i \Hbar} U^\dagger \lr{ i \Hbar \PD{p}{p^2} } U \\ &= \inv{m } U^\dagger p U \\ &= \inv{m } p^{(H)}, \end{aligned} \end{equation}
and
\begin{equation}\label{eqn:harmonicOscDiagonalize:120} \begin{aligned} \ddt{p^{(H)}} &= \inv{i \Hbar} U^\dagger \antisymmetric{p}{H} U \\ &= \inv{i \Hbar} U^\dagger \antisymmetric{p}{\inv{2} m \omega^2 x^2 } U \\ &= \frac{m \omega^2}{2 i \Hbar} U^\dagger \lr{ -i \Hbar \PD{x}{x^2} } U \\ &= -m \omega^2 U^\dagger x U \\ &= -m \omega^2 x^{(H)}. \end{aligned} \end{equation}
In the Heisenberg picture the equations of motion are precisely those of classical Hamiltonian mechanics, except that we are dealing with operators instead of scalars
\begin{equation}\label{eqn:harmonicOscDiagonalize:140}
\begin{aligned} \ddt{p^{(H)}} &= -m \omega^2 x^{(H)} \\ \ddt{x^{(H)}} &= \inv{m } p^{(H)}. \end{aligned} \end{equation}
In the text the ladder operators are used to simplify the solution of these coupled equations, since they can decouple them. That’s not really required since we can solve them directly in matrix form with little work
\begin{equation}\label{eqn:harmonicOscDiagonalize:160}
\ddt{} \begin{bmatrix} p^{(H)} \\ x^{(H)} \end{bmatrix} = \begin{bmatrix} 0 & -m \omega^2 \\ \inv{m} & 0 \end{bmatrix} \begin{bmatrix} p^{(H)} \\ x^{(H)} \end{bmatrix}, \end{equation}
or, with length scaled variables
\begin{equation}\label{eqn:harmonicOscDiagonalize:180}
\begin{aligned} \ddt{} \begin{bmatrix} \frac{p^{(H)}}{m \omega} \\ x^{(H)} \end{bmatrix} &= \begin{bmatrix} 0 & -\omega \\ \omega & 0 \end{bmatrix} \begin{bmatrix} \frac{p^{(H)}}{m \omega} \\ x^{(H)} \end{bmatrix} \\ &= -i \omega \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} \begin{bmatrix} \frac{p^{(H)}}{m \omega} \\ x^{(H)} \end{bmatrix} \\ &= -i \omega \sigma_y \begin{bmatrix} \frac{p^{(H)}}{m \omega} \\ x^{(H)} \end{bmatrix}. \end{aligned} \end{equation}
Writing \( y = \begin{bmatrix} \frac{p^{(H)}}{m \omega} \\ x^{(H)} \end{bmatrix} \), the solution can then be written immediately as
\begin{equation}\label{eqn:harmonicOscDiagonalize:200}
\begin{aligned} y(t) &= \exp\lr{ -i \omega \sigma_y t } y(0) \\ &= \lr{ \cos \lr{ \omega t } I - i \sigma_y \sin\lr{ \omega t } } y(0) \\ &= \begin{bmatrix} \cos\lr{ \omega t } & -\sin\lr{ \omega t } \\ \sin\lr{ \omega t } & \cos\lr{ \omega t } \end{bmatrix} y(0), \end{aligned} \end{equation}
or
\begin{equation}\label{eqn:harmonicOscDiagonalize:220}
\begin{aligned} \frac{p^{(H)}(t)}{m \omega} &= \cos\lr{ \omega t } \frac{p^{(H)}(0)}{m \omega} - \sin\lr{ \omega t } x^{(H)}(0) \\ x^{(H)}(t) &= \sin\lr{ \omega t } \frac{p^{(H)}(0)}{m \omega} + \cos\lr{ \omega t } x^{(H)}(0). \end{aligned} \end{equation}
This solution depends on being lucky enough to recognize that the matrix has a Pauli matrix as a factor (which squares to unity, allowing the exponential to be evaluated easily).
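(The following short check is my addition, not part of the original post: it verifies numerically that the matrix exponential above reproduces the rotation matrix quoted in the solution.)

# My addition: check exp(-i*omega*sigma_y*t) against the rotation matrix above.
import numpy as np
from scipy.linalg import expm

omega, t = 1.3, 0.7
sigma_y = np.array([[0, -1j], [1j, 0]])

lhs = expm(-1j * omega * t * sigma_y)
rhs = np.array([[np.cos(omega * t), -np.sin(omega * t)],
                [np.sin(omega * t),  np.cos(omega * t)]])
print(np.allclose(lhs, rhs))  # True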
If we hadn’t been that observant, then the first tool we’d have used instead would have been to diagonalize the matrix. For such diagonalization, it’s natural to work in completely dimensionless variables. Such a non-dimensionalisation can be had by defining
\begin{equation}\label{eqn:harmonicOscDiagonalize:240}
x_0 = \sqrt{\frac{\Hbar}{m \omega}}, \end{equation}
and dividing the working (operator) variables through by those values. Let \( z = \inv{x_0} y \), and \( \tau = \omega t \) so that the equations of motion are
\begin{equation}\label{eqn:harmonicOscDiagonalize:260}
\frac{dz}{d\tau} = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} z. \end{equation}
This matrix can be diagonalized as
\begin{equation}\label{eqn:harmonicOscDiagonalize:280}
A = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = V \begin{bmatrix} i & 0 \\ 0 & -i \end{bmatrix} V^{-1}, \end{equation}
where
\begin{equation}\label{eqn:harmonicOscDiagonalize:300}
V = \inv{\sqrt{2}} \begin{bmatrix} i & -i \\ 1 & 1 \end{bmatrix}. \end{equation}
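(Another small check, my addition and not part of the original post: numerically, this matrix has eigenvalues \( \pm i \), and the \( V \) above diagonalizes it as claimed.)

# My addition: verify the eigenvalues and the diagonalization A = V D V^{-1}.
import numpy as np

A = np.array([[0.0, -1.0], [1.0, 0.0]])
V = (1 / np.sqrt(2)) * np.array([[1j, -1j], [1.0, 1.0]])
D = np.diag([1j, -1j])

print(np.linalg.eigvals(A))                      # i and -i (up to ordering)
print(np.allclose(V @ D @ np.linalg.inv(V), A))  # True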
The equations of motion can now be written
\begin{equation}\label{eqn:harmonicOscDiagonalize:320}
\frac{d}{d\tau} \lr{ V^{-1} z } = \begin{bmatrix} i & 0 \\ 0 & -i \end{bmatrix} \lr{ V^{-1} z }. \end{equation}
This final change of variables \( V^{-1} z \) decouples the system as desired. Expanding that gives
\begin{equation}\label{eqn:harmonicOscDiagonalize:340}
\begin{aligned} V^{-1} z &= \inv{\sqrt{2}} \begin{bmatrix} -i & 1 \\ i & 1 \end{bmatrix} \begin{bmatrix} \frac{p^{(H)}}{x_0 m \omega} \\ \frac{x^{(H)}}{x_0} \end{bmatrix} \\ &= \inv{\sqrt{2} x_0} \begin{bmatrix} -i \frac{p^{(H)}}{m \omega} + x^{(H)} \\ i \frac{p^{(H)}}{m \omega} + x^{(H)} \end{bmatrix} \\ &= \begin{bmatrix} a^\dagger \\ a \end{bmatrix}, \end{aligned} \end{equation}
where
\begin{equation}\label{eqn:harmonicOscDiagonalize:n} \begin{aligned} a^\dagger &= \sqrt{\frac{m \omega}{2 \Hbar}} \lr{ -i \frac{p^{(H)}}{m \omega} + x^{(H)} } \\ a &= \sqrt{\frac{m \omega}{2 \Hbar}} \lr{ i \frac{p^{(H)}}{m \omega} + x^{(H)} }. \end{aligned} \end{equation}
Lo and behold, we have the standard form of the raising and lowering operators, and can write the system equations as
\begin{equation}\label{eqn:harmonicOscDiagonalize:360}
\begin{aligned} \ddt{a^\dagger} &= i \omega a^\dagger \\ \ddt{a} &= -i \omega a. \end{aligned} \end{equation}
It is actually a bit fluky that this matched exactly, since we could have chosen eigenvectors that differ by constant phase factors, like
\begin{equation}\label{eqn:harmonicOscDiagonalize:380}
V = \inv{\sqrt{2}} \begin{bmatrix} i e^{i\phi} & -i e^{i \psi} \\ e^{i\phi} & e^{i \psi} \end{bmatrix}, \end{equation}
so
\begin{equation}\label{eqn:harmonicOscDiagonalize:341}
\begin{aligned} V^{-1} z &= \frac{e^{-i(\phi + \psi)}}{\sqrt{2}} \begin{bmatrix} -i e^{i\psi} & e^{i \psi} \\ i e^{i\phi} & e^{i \phi} \end{bmatrix} \begin{bmatrix} \frac{p^{(H)}}{x_0 m \omega} \\ \frac{x^{(H)}}{x_0} \end{bmatrix} \\ &= \inv{\sqrt{2} x_0} \begin{bmatrix} -i e^{-i\phi} \frac{p^{(H)}}{m \omega} + e^{-i\phi} x^{(H)} \\ i e^{-i\psi} \frac{p^{(H)}}{m \omega} + e^{-i\psi} x^{(H)} \end{bmatrix} \\ &= \begin{bmatrix} e^{-i\phi} a^\dagger \\ e^{-i\psi} a \end{bmatrix}. \end{aligned} \end{equation}
To make the resulting pairs of operators Hermitian conjugates, we’d want to constrain those constant phase factors by setting \( \phi = -\psi \). If we were only interested in solving the time evolution problem no such additional constraints are required.
The raising and lowering operators are seen to naturally occur when seeking the solution of the Heisenberg equations of motion. This is found using the standard technique of non-dimensionalisation and then seeking a change of basis that diagonalizes the system matrix. Because the Heisenberg equations of motion are identical to the classical Hamiltonian equations of motion in this case, what we call the raising and lowering operators in quantum mechanics could also be utilized in the classical simple harmonic oscillator problem. However, in a classical context we wouldn’t have a justification to call this more than a change of basis.
References
[1] Eli Lansey.
The Quantum Harmonic Oscillator Ladder Operators, 2009. URL http://behindtheguesses.blogspot.ca/2009/03/quantum-harmonic-oscillator-ladder.html. [Online; accessed 18-August-2015].
[2] Jun John Sakurai and Jim J Napolitano.
Modern Quantum Mechanics, chapter "Time Development of the Oscillator." Pearson Higher Ed, 2014.
|
I have come across the following deceptively simple expression:
$$ H_n^s=\sum_{j=1}^n(-1)^{j-1}\left(\begin{array}{c}n\\j\end{array}\right)j^{-s} $$
We have (using e.g. Mathematica, though probably not difficult to prove): $H_n^0=1$, $H_n^1=H_n$ (the harmonic numbers), expressions involving hypergeometric series with unit argument for integer $s<0$, and expressions involving polygamma functions for integer $s>1$. For fixed $n$ the sum is of course finite. My (closely related) questions are:
Does this reduce to values of a known special function for arbitrary real (or complex) $s$?
What is its asymptotic expansion for large $n$?
Is there an efficient numerical method (avoiding cancellations) of evaluating it for large $n$?
Edit: Using Noam's approach I found two more terms that check numerically:
$$ H_n^s=\frac{(\ln n)^s}{\Gamma(s+1)}+\frac{\gamma(\ln n)^{s-1}}{\Gamma(s)}+\frac{6\gamma^2+\pi^2}{12\Gamma(s-1)}(\ln n)^{s-2}+\ldots $$
where $\gamma=0.577\ldots$ is the Euler constant. Further asymptotics very welcome.
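(Not part of the original question: a quick high-precision check of the special cases $H_n^0=1$ and $H_n^1=H_n$ quoted above, using mpmath; the extra working precision guards against the cancellations mentioned in the third question.)

# My addition: direct evaluation of H_n^s with mpmath, checking H_n^0 = 1 and H_n^1 = H_n.
from mpmath import mp, binomial, harmonic, mpf

mp.dps = 60  # generous precision; the alternating binomial terms nearly cancel

def H(n, s):
    return sum((-1) ** (j - 1) * binomial(n, j) * mpf(j) ** (-s) for j in range(1, n + 1))

n = 30
print(H(n, 0))               # 1.0
print(H(n, 1), harmonic(n))  # both equal the harmonic number H_30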
|
A vector is a quantity consisting of a non-negative magnitude and a direction. We could represent a vector in two dimensions as \((m,\theta)\), where \(m\) is the magnitude and \(\theta\) is the direction, measured as an angle from some agreed upon direction. For example, we might think of the vector \( (5,45^\circ)\) as representing "5 km toward the northeast''; that is, this vector might be a
displacement vector, indicating, say, that your grandfather walked 5 kilometers toward the northeast to school in the snow. On the other hand, the same vector could represent a velocity, indicating that your grandfather walked at 5 km/hr toward the northeast. What the vector does not indicate is where this walk occurred: a vector represents a magnitude and a direction, but not a location. Pictorially it is useful to represent a vector as an arrow; the direction of the vector, naturally, is the direction in which the arrow points; the magnitude of the vector is reflected in the length of the arrow.
It turns out that many, many quantities behave as vectors, e.g., displacement, velocity, acceleration, force. Already we can get some idea of their usefulness using displacement vectors. Suppose that your grandfather walked 5 km NE and then 2 km SSE; if the terrain allows, and perhaps armed with a compass, how could your grandfather have walked directly to his destination? We can use vectors (and a bit of geometry) to answer this question. We begin by noting that since vectors do not include a specification of position, we can "place'' them anywhere that is convenient. So we can picture your grandfather's journey as two displacement vectors drawn head to tail:
The displacement vector for the shortcut route is the vector drawn with a dashed line, from the tail of the first to the head of the second. With a little trigonometry, we can compute that the third vector has magnitude approximately 4.62 and direction \( 21.43^\circ\), so walking 4.62 km in the direction \( 21.43^\circ\) north of east (approximately ENE) would get your grandfather to school. This sort of calculation is so common, we dignify it with a name: we say that the third vector is the
sum of the other two vectors. There is another common way to picture the sum of two vectors. Put the vectors tail to tail and then complete the parallelogram they indicate; the sum of the two vectors is the diagonal of the parallelogram:
This is a more natural representation in some circumstances. For example, if the two original vectors represent forces acting on an object, the sum of the two vectors is the net or effective force on the object, and it is nice to draw all three with their tails at the location of the object.
We also define
scalar multiplication for vectors: if \(\bf A\) is a vector \((m,\theta)\) and \(a\ge 0\) is a real number, the vector \(a\bf A\) is \((am,\theta)\), namely, it points in the same direction but has \(a\) times the magnitude. If \(a < 0\), \(a\bf A\) is \((|a|m,\theta+\pi)\), with \(|a|\) times the magnitude and pointing in the opposite direction (unless we specify otherwise, angles are measured in radians).
Now we can understand subtraction of vectors: \({\bf A}-{\bf B}={\bf A}+(-1){\bf B}\):
Note that as you would expect, \({\bf B} + ({\bf A}-{\bf B}) = {\bf A}\).
We can represent a vector in ways other than \((m,\theta)\), and in fact \((m,\theta)\) is not generally used at all. How else could we describe a particular vector? Consider again the vector \( (5,45^\circ)\). Let's draw it again, but impose a coordinate system. If we put the tail of the arrow at the origin, the head of the arrow ends up at the point \( (5/\sqrt2,5/\sqrt2)\approx(3.54, 3.54)\).
In this picture the coordinates \((3.54,3.54)\) identify the head of the arrow, provided we know that the tail of the arrow has been placed at \((0,0)\). Then in fact the vector can always be identified as \((3.54,3.54)\), no matter where it is placed; we just have to remember that the numbers 3.54 must be interpreted as a
change from the position of the tail, not as the actual coordinates of the arrow head; to emphasize this we will write \(\langle 3.54,3.54\rangle\) to mean the vector and \((3.54,3.54)\) to mean the point. Then if the vector \(\langle 3.54,3.54\rangle\) is drawn with its tail at \((1,2)\) it looks like this:
Consider again the two part trip: 5 km NE and then 2 km SSE. The vector representing the first part of the trip is \( \langle 5/\sqrt2,5/\sqrt2\rangle\), and the second part of the trip is represented by \(\langle 2\cos(-3\pi/8),2\sin(-3\pi/8)\rangle \approx\langle 0.77,-1.85 \rangle\). We can represent the sum of these with the usual head to tail picture:
It is clear from the picture that the coordinates of the destination point are \( (5/\sqrt2+2\cos(-3\pi/8),5/\sqrt2+2\sin(-3\pi/8))\) or approximately \((4.3,1.69)\), so the sum of the two vectors is \( \langle 5/\sqrt2+2\cos(-3\pi/8),5/\sqrt2+2\sin(-3\pi/8)\rangle \approx \langle 4.3,1.69\rangle\). Adding the two vectors is easier in this form than in the \((m,\theta)\) form, provided that we're willing to have the answer in this form as well.
It is easy to see that scalar multiplication and vector subtraction are also easy to compute in this form: \(a\langle v,w\rangle=\langle av,aw\rangle\) and \( \langle v_1,w_1\rangle - \langle v_2,w_2\rangle =\langle v_1-v_2,w_1-w_2\rangle\). What about the magnitude? The magnitude of the vector \(\langle v,w\rangle\) is still the length of the corresponding arrow representation; this is the distance from the origin to the point \((v,w)\), namely, the distance from the tail to the head of the arrow. We know how to compute distances, so the magnitude of the vector is simply \( \sqrt{v^2+w^2}\), which we also denote with absolute value bars: \( |\langle v,w\rangle|=\sqrt{v^2+w^2}\).
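As a small illustration (this code is my addition, not part of the text), the two-part trip can be added componentwise, and the magnitude and direction recovered, in a few lines of Python:

# My addition: add the two displacement vectors in coordinates and recover (m, theta).
from math import cos, sin, pi, hypot, atan2, degrees

a = (5 / 2**0.5, 5 / 2**0.5)                      # 5 km NE
b = (2 * cos(-3 * pi / 8), 2 * sin(-3 * pi / 8))  # 2 km SSE
s = (a[0] + b[0], a[1] + b[1])

print(s)                           # approximately (4.30, 1.69)
print(hypot(s[0], s[1]))           # magnitude, approximately 4.62
print(degrees(atan2(s[1], s[0])))  # direction, approximately 21.4 degrees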
In three dimensions, vectors are still quantities consisting of a magnitude and a direction, but of course there are many more possible directions. It's not clear how we might represent the direction explicitly, but the coordinate version of vectors makes just as much sense in three dimensions as in two. By \(\langle 1,2,3\rangle\) we mean the vector whose head is at \((1,2,3)\) if its tail is at the origin. As before, we can place the vector anywhere we want; if it has its tail at \((4,5,6)\) then its head is at \((5,7,9)\). It remains true that arithmetic is easy to do with vectors in this form:
\[\eqalign{ &a\langle v_1,v_2,v_3\rangle=\langle av_1,av_2,av_3\rangle\cr &\langle v_1,v_2,v_3\rangle + \langle w_1,w_2,w_3\rangle =\langle v_1+w_1,v_2+w_2,v_3+w_3\rangle\cr &\langle v_1,v_2,v_3\rangle - \langle w_1,w_2,w_3\rangle =\langle v_1-w_1,v_2-w_2,v_3-w_3\rangle\cr} \]
The magnitude of the vector is again the distance from the origin to the head of the arrow, or \( |\langle v_1,v_2,v_3\rangle|=\sqrt{v_1^2+v_2^2+v_3^2}\).
Three particularly simple vectors turn out to be quite useful: \({\bf i}=\langle1,0,0\rangle\), \({\bf j}=\langle0,1,0\rangle\), and \({\bf k}=\langle0,0,1\rangle\). These play much the same role for vectors that the axes play for points. In particular, notice that
\[\eqalign{ \langle v_1,v_2,v_3\rangle &= \langle v_1,0,0\rangle + \langle 0,v_2,0\rangle + \langle 0,0,v_3\rangle\cr &=v_1\langle1,0,0\rangle + v_2\langle0,1,0\rangle + v_3\langle0,0,1\rangle\cr &= v_1{\bf i} + v_2{\bf j} + v_3{\bf k}\cr }\]
We will frequently want to produce a vector that points from one point to another. That is, if \(P\) and \(Q\) are points, we seek the vector \(\bf x\) such that when the tail of \(\bf x\) is placed at \(P\), its head is at \(Q\); we refer to this vector as \( \overrightarrow{\strut PQ}\). If we know the coordinates of \(P\) and \(Q\), the coordinates of the vector are easy to find.
Example 12.2.1
Suppose \(P=(1,-2,4)\) and \(Q=(-2,1,3)\). The vector \( \overrightarrow{\strut PQ}\) is \(\langle -2-1,\,1-(-2),\,3-4\rangle=\langle -3,3,-1\rangle\) and \( \overrightarrow{\strut QP}=\langle 3,-3,1\rangle\).
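(The following tiny function is my addition, not part of the text; it just restates the head-minus-tail rule from the example.)

# My addition: the vector from P to Q is obtained by subtracting coordinates.
def vector_between(P, Q):
    """Return the vector PQ whose tail is at P and head is at Q."""
    return tuple(q - p for p, q in zip(P, Q))

P, Q = (1, -2, 4), (-2, 1, 3)
print(vector_between(P, Q))  # (-3, 3, -1)
print(vector_between(Q, P))  # (3, -3, 1)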
|
The use of superscripts and subscripts is very common in mathematical expressions involving exponents, indexes, and in some special operators. This article explains how to write superscripts and subscripts in simple expressions, integrals, summations, et cetera.
Definite integrals are some of the most common mathematical expressions; let's see an example:
In LaTeX, subscripts and superscripts are written using the symbols ^ and _; in this case the x and y exponents were written using these codes. The codes can also be used with some types of mathematical symbols: in the integral included in the example, _ is used to set the lower bound and ^ the upper bound. The command \limits changes the way the limits are displayed in the integral; if it is not present, the limits appear next to the integral symbol instead of on top and bottom (see the reference guide).
The symbols _ and ^ can also be combined in the same expression, for example:
If the expression contains long superscripts or subscripts, these need to be collected in braces, as LaTeX normally applies the mathematical commands ^ and _ only to the following character:
Subscripts and superscripts can be nested and combined in various ways. When nesting subscripts/superscripts, however, remember that each command must refer to a single element; this can be a single letter or number, as in the examples above, or a more complex mathematical expression collected in braces or brackets. For example:
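Here is a small compilable illustration of these rules (my own example, not from the original article; the particular expressions are arbitrary):

\documentclass{article}
\begin{document}
% Single characters need no braces; longer scripts must be grouped.
\[ x^2, \qquad x^{2k+1}, \qquad a_{i,j}, \qquad e^{-x^2} \]
% Nested subscripts and superscripts: each ^ or _ applies to one grouped element.
\[ a_{n_i}, \qquad x^{y^z} \]
\end{document}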
Some mathematical operators may require subscripts and superscripts. The most frequent cases are those of the integral \int (check the introduction) and the summation \sum operators, whose bounds are typeset precisely with subscripts and superscripts.
For other frequently used operators that require subscripts/superscripts check the reference guide.
Additional examples and operators
LaTeX markup (each of the following renders as the corresponding expression):
a_{n_i}
\int_{i=1}^n
\sum_{i=1}^{\infty}
\prod_{i=1}^n
\cup_{i=1}^n
\cap_{i=1}^n
\oint_{i=1}^n
\coprod_{i=1}^n
There are also \bigcup and \bigcap commands, similar to \cup and \cap but larger, for use with larger expressions.
For more information see the reference guide.
|
Phosphoric acid has a basicity of 3, i.e., it can lose 3 $\ce{H+}$, while phosphorous acid has a basicity of 2. Why is phosphorous acid more acidic than phosphoric acid? Acidity refers to the ability to liberate protons. Phosphoric acid liberates more protons than phosphorous acid (as the basicity of phosphoric acid is 3 and that of phosphorous acid is 2). Why then is phosphorous acid more acidic than phosphoric acid?
I'm pretty sure it is because phosphorous acid is more polar than phosphoric acid. This is because in phosphoric acid there are more OH groups and an extra O, so it balances out more towards the center. In phosphorous acid, there is an H bonded directly to the central phosphorus atom. Hydrogen doesn't pull on electrons as much as oxygen does, and hence that part of the molecule becomes more positive, causing a stronger dipole moment than in phosphoric acid.
You’re using a wrong definition of
acidity. Acidity and its adjective acidic do not mean ‘number of protons liberated eventually.’ Instead, acidity is well defined as the equilibrium constant $K_\mathrm{a}$ of the corresponding first deprotonation reaction $(1)$:
$$\begin{align}\ce{H_nA &<=> H_{($n-1$)}A- + H+}\tag{1}\\[0.6em] K_\mathrm{a} &= \frac{[\ce{H_{($n-1$)}A-}][\ce{H+}]}{[\ce{H_nA}]}\tag{2}\end{align}$$
For phosphoric acid, the corresponding deprotonation is reaction $(3)$ and the equilibrium constant is as written; for phosphorous acid, see $(4)$.
$$\begin{align}\ce{H3PO4 &<=> H2PO4- + H+} &&K_\mathrm{a} = 7.11 \times 10^{-3}\tag{3}\\[0.8em] \ce{H3PO3 &<=> H2PO3- + H+} && K_\mathrm{a} = 5.01 \times 10^{-2}\tag{4}\end{align}$$
Phosphorous acid has the higher acidity constant corresponding to a lower $\mathrm{p}K_\mathrm{a}$ value and is thus more acidic than phosphoric acid.
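(For a quick numerical restatement, which is my addition rather than part of the original answer: converting the constants above to $\mathrm{p}K_\mathrm{a}$ values gives)

$$\mathrm{p}K_\mathrm{a}(\ce{H3PO4}) = -\log_{10}\left(7.11 \times 10^{-3}\right) \approx 2.15, \qquad \mathrm{p}K_\mathrm{a}(\ce{H3PO3}) = -\log_{10}\left(5.01 \times 10^{-2}\right) \approx 1.30,$$

so phosphorous acid, with the smaller $\mathrm{p}K_\mathrm{a}$, is indeed the stronger acid.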
|
Hello, I propose this problem to the Brilliant community. Hope you enjoy it! This problem was one of the questions in an Olympiad.
What is the remainder obtained from the long division \( \dfrac{x^{81}+x^{49}+x^{25}+x^{9}+x}{x^3-x} \)?
Note by Puneet Pinku 3 years, 1 month ago
Can you just point out the mistake in my solution:
Let the remainder be r(x)=(Ax^2+Bx+C).
let P(x) be the polynomial on the numerator.
P(x)=(x^3-x)g(x)+r(x)
setting x=0,
P(0)=r(0)=C
or,C=0...................(1)
setting x=1,
P(1)=r(1)=A+B
or,A+B=5................(2)
setting x=-1,
P(-1)=r(-1)=A-B
or,A-B=-5..................(3)
Solving (1),(2),and (3), we get A=C=0,and B=5.
So, remainder=5x (ans)
The problem is that x cannot be 0; that would make the denominator of the fraction zero (division by zero).
We see that after dividing by \(x\), we have the expression
\[ \frac{x^{80}+x^{48}+x^{24}+x^8+1}{(x+1)(x-1)} \]
Now consider
\[ \frac{x^{80}+x^{48}+x^{24}+x^8+1}{(x+1)(x-1)}-\frac{5}{(x+1)(x-1)} = \frac{x^{80}+x^{48}+x^{24}+x^8-4}{(x+1)(x-1)} \]
We see that \(x-1\) and \(x+1\) are factors of \(x^{80}+x^{48}+x^{24}+x^8-4\) by the factor theorem, as \(1\) and \(-1\) are roots of this polynomial. Hence we can write
\[ \frac{x^{80}+x^{48}+x^{24}+x^8-4}{(x+1)(x-1)} = \frac{x^{80}+x^{48}+x^{24}+x^8+1}{(x+1)(x-1)}-\frac{5}{(x+1)(x-1)} = p(x)+\frac{0}{(x+1)(x-1)} \]
for some polynomial \(p(x)\).
Therefore
\[ \frac{x^{80}+x^{48}+x^{24}+x^8+1}{(x+1)(x-1)} = p(x)+\frac{5}{(x+1)(x-1)}. \]
One way to solve this is to simplify by the common factor x first, and then to use algebraic long division in a faster way ( " +...+ " after recognising the repetitive parts):
\[ \frac {x^{80}+x^{48}+x^{24}+x^8+1}{x^2 - 1} = x^{78}+x^{76}+\dots+x^{48}+2x^{46}+2x^{44}+\dots+2x^{24}+3x^{22}+3x^{20}+\dots+3x^8+4x^6+4x^4+4x^2+4+ \frac { \boxed {5} }{x^2-1} \]
How do you find that 1 is the coefficient for that many terms, that 4 appears for only a few, and that 5 comes at the end? I mean, can you explain the pattern a bit more clearly?
For the algebraic (or polynomial) long division method in general, you can find many notes, videos etc. on the Internet (e.g. https://brilliant.org/wiki/polynomial-division/ or https://revisionmaths.com/advanced-level-maths-revision/pure-maths/algebra/algebraic-long-division ).
Just follow the method in the case of this division and you will see. (The coefficient increases at some points because the remainder from the previous step contributes the same power of \(x\) as one of the original terms, e.g. \(x^{48}+x^{48}=2x^{48}\), which then produces the quotient term \(2x^{46}\).)
@Zee Ell – Did you perform the whole long division, or did you somehow analyze and figure out the coefficients? I recently found a new method to solve it; I will be posting it as a question.
@Puneet Pinku – I started the whole division, but jumped to the key points (where the remainders "got company" from the original polynomial) after recognising the pattern. With further analysis, the process can be shortened even further.
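(Editorial addition, not part of the original thread: the claimed remainder can be checked symbolically in a couple of lines.)

# My addition: verify the remainder of the polynomial division with SymPy.
from sympy import symbols, div

x = symbols('x')
q, r = div(x**81 + x**49 + x**25 + x**9 + x, x**3 - x, x)
print(r)   # 5*x
q2, r2 = div(x**80 + x**48 + x**24 + x**8 + 1, x**2 - 1, x)
print(r2)  # 5, the remainder of the reduced fraction above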
|
We have seen how to create, or derive, a new function $f'(x)$ from a function $f(x)$, summarized in the paragraph containing equation 2.1.1. Now that we have the concept of limits, we can make this more precise.
Definition 2.4.1 The derivative of a function $f$, denoted $f'$, is $$f'(x)=\lim_{\Delta x\to 0} {f(x+\Delta x)-f(x)\over \Delta x}.$$
We know that $f'$ carries important information about the original function $f$. In one example we saw that $f'(x)$ tells us how steep the graph of $f(x)$ is; in another we saw that $f'(x)$ tells us the velocity of an object if $f(x)$ tells us the position of the object at time $x$. As we said earlier, this same mathematical idea is useful whenever $f(x)$ represents some changing quantity and we want to know something about how it changes, or roughly, the "rate'' at which it changes. Most functions encountered in practice are built up from a small collection of "primitive'' functions in a few simple ways, for example, by adding or multiplying functions together to get new, more complicated functions. To make good use of the information provided by $f'(x)$ we need to be able to compute it for a variety of such functions.
We will begin to use different notations for the derivative of a function. While initially confusing, each is often useful so it is worth maintaining multiple versions of the same thing.
Consider again the function $\ds f(x)=\sqrt{625-x^2}$. We have computed the derivative $\ds f'(x)=-x/\sqrt{625-x^2}$, and have already noted that if we use the alternate notation $\ds y=\sqrt{625-x^2}$ then we might write $\ds y'=-x/\sqrt{625-x^2}$. Another notation is quite different, and in time it will become clear why it is often a useful one. Recall that to compute the derivative of $f$ we computed $$\lim_{\Delta x\to0} {\sqrt{625-(7+\Delta x)^2} - 24\over \Delta x}.$$ The denominator here measures a distance in the $x$ direction, sometimes called the "run'', and the numerator measures a distance in the $y$ direction, sometimes called the "rise'', and "rise over run'' is the slope of a line. Recall that sometimes such a numerator is abbreviated $\Delta y$, exchanging brevity for a more detailed expression. So in general, a derivative is given by $$y'=\lim_{\Delta x\to0} {\Delta y\over \Delta x}.$$ To recall the form of the limit, we sometimes say instead that $${dy\over dx}=\lim_{\Delta x\to0} {\Delta y\over \Delta x}.$$ In other words, $dy/dx$ is another notation for the derivative, and it reminds us that it is related to an actual slope between two points. This notation is called
Leibniz notation, after Gottfried Leibniz, who developed the fundamentals of calculus independently, at about the same time that Isaac Newton did. Again, since we often use $f$ and $f(x)$ to mean the original function, we sometimes use $df/dx$ and $df(x)/dx$ to refer to the derivative. If the function $f(x)$ is written out in full we often write the last of these something like this $$f'(x)={d\over dx}\sqrt{625-x^2}$$ with the function written to the side, instead of trying to fit it into the numerator.
Example 2.4.2 Find the derivative of $\ds y=f(t)=t^2$.
We compute $$ \eqalign{ y' = \lim_{\Delta t\to0}{\Delta y\over\Delta t}&= \lim_{\Delta t\to0}{(t+\Delta t)^2-t^2\over\Delta t}\cr &=\lim_{\Delta t\to0}{t^2+2t\Delta t+\Delta t^2-t^2\over\Delta t}\cr &=\lim_{\Delta t\to0}{2t\Delta t+\Delta t^2\over\Delta t}\cr &=\lim_{\Delta t\to0} 2t+\Delta t=2t.\cr} $$ Remember that $\Delta t$ is a single quantity, not a "$\Delta$'' times a "$t$'', and so $\ds \Delta t^2$ is $\ds (\Delta t)^2$ not $\ds \Delta (t^2)$.
Example 2.4.3 Find the derivative of $y=f(x)=1/x$.
The computation: $$ \eqalign{ y' = \lim_{\Delta x\to0}{\Delta y\over\Delta x}&= \lim_{\Delta x\to0}{ {1\over x+\Delta x} - {1\over x}\over \Delta x}\cr &=\lim_{\Delta x\to0}{ {x\over x(x+\Delta x)} - {x+\Delta x\over x(x+\Delta x)}\over \Delta x}\cr &=\lim_{\Delta x\to0}{ {x-(x+\Delta x)\over x(x+\Delta x)}\over \Delta x}\cr &=\lim_{\Delta x\to0} {x-x-\Delta x\over x(x+\Delta x)\Delta x}\cr &=\lim_{\Delta x\to0} {-\Delta x\over x(x+\Delta x)\Delta x}\cr &=\lim_{\Delta x\to0} {-1\over x(x+\Delta x)}={-1\over x^2}\cr } $$
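(The following check is my addition, not part of the text: it approaches the limit in the definition numerically for $f(x)=1/x$ at $x=2$, where the derivative just computed predicts $-1/4$.)

# My addition: numerically approach the difference-quotient limit for f(x) = 1/x at x = 2.
f = lambda x: 1 / x
x0 = 2.0
for dx in (0.1, 0.01, 0.001, 0.0001):
    print(dx, (f(x0 + dx) - f(x0)) / dx)   # approaches -1/x0**2 = -0.25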
Note. If you happen to know some "derivative formulas'' from an earlier course, for the time being you should pretend that you do not know them. In examples like the ones above and the exercises below, you are required to know how to find the derivative formula starting from basic principles. We will later develop some formulas so that we do not always need to do such computations, but we will continue to need to know how to do the more involved computations.
Sometimes one encounters a point in the domain of a function $y=f(x)$ where there is
no derivative, because there is no tangent line. In order for the notion of the tangent line at a point to make sense, the curve must be "smooth'' at that point. This means that if you imagine a particle traveling at some steady speed along the curve, then the particle does not experience an abrupt change of direction. There are two types of situations you should be aware of—corners and cusps—where there's a sudden change of direction and hence no derivative.
Example 2.4.4 Discuss the derivative of the absolute value function $y=f(x)=|x|$.
If $x$ is positive, then this is the function $y=x$, whose derivative is the constant 1. (Recall that when $y=f(x)=mx+b$, the derivative is the slope $m$.) If $x$ is negative, then we're dealing with the function $y=-x$, whose derivative is the constant $-1$. If $x=0$, then the function has a corner, i.e., there is no tangent line. A tangent line would have to point in the direction of the curve—but there are
two directions of the curve that come together at the origin. We can summarize this as $$ y'=\cases{1&if $x>0$;\cr-1&if $x< 0$;\cr\hbox{undefined}&if $x=0$.\cr}$$
Example 2.4.5
Discuss the derivative of the function $\ds y=x^{2/3}$, shown in figure 2.4.1. We will later see how to compute this derivative; for now we use the fact that $\ds y'=(2/3)x^{-1/3}$. Visually this looks much like the absolute value function, but it technically has a cusp, not a corner. The absolute value function has no tangent line at 0 because there are (at least) two obvious contenders—the tangent line of the left side of the curve and the tangent line of the right side. The function $\ds y=x^{2/3}$ does not have a tangent line at 0, but unlike the absolute value function it can be said to have a single direction: as we approach 0 from either side the tangent line becomes closer and closer to a vertical line; the curve is vertical at 0. But as before, if you imagine traveling along the curve, an abrupt change in direction is required at 0: a full 180 degree turn.
In practice we won't worry much about the distinction between these examples; in both cases the function has a "sharp point'' where there is no tangent line and no derivative.
Exercises 2.4
Ex 2.4.1 Find the derivative of $\ds y=f(x)=\sqrt{169-x^2}$. (answer)
Ex 2.4.2 Find the derivative of $\ds y=f(t)=80-4.9t^2$. (answer)
Ex 2.4.3 Find the derivative of $\ds y=f(x)=x^2-(1/x)$. (answer)
Ex 2.4.4 Find the derivative of $\ds y=f(x)=ax^2+bx+c$ (where $a$, $b$, and $c$ are constants). (answer)
Ex 2.4.5 Find the derivative of $\ds y=f(x)=x^3$. (answer)
Ex 2.4.6 Shown is the graph of a function $f(x)$. Sketch the graph of $f'(x)$ by estimating the derivative at a number of points in the interval: estimate the derivative at regular intervals from one end of the interval to the other, and also at "special'' points, as when the derivative is zero. Make sure you indicate any places where the derivative does not exist.
Ex 2.4.7 Shown is the graph of a function $f(x)$. Sketch the graph of $f'(x)$ by estimating the derivative at a number of points in the interval: estimate the derivative at regular intervals from one end of the interval to the other, and also at "special'' points, as when the derivative is zero. Make sure you indicate any places where the derivative does not exist.
Ex 2.4.8 Find the derivative of $\ds y=f(x)=2/\sqrt{2x+1}$ (answer)
Ex 2.4.9 Find the derivative of $y=g(t)=(2t-1)/(t+2)$ (answer)
Ex 2.4.10 Find an equation for the tangent line to the graph of $\ds f(x)=5-x-3x^2$ at the point $x=2$ (answer)
Ex 2.4.11 Find a value for $a$ so that the graph of $\ds f(x)=x^2+ax-3$ has a horizontal tangent line at $x=4$. (answer)
|