While the first half of the answer (v1) by John Rennie provides correct timescales, its second half is completely wrong for the process we are discussing.
Objects falling into a black hole enter the horizon in finite time by the clocks of outside observers
Let me elaborate: at this Wikipedia page we can see the solution for a particle falling toward a black hole. We are interested in the asymptotic behavior of the radial coordinate as the particle approaches the horizon (or rather the
old horizon, before this infalling particle is incorporated in the black hole):$$r(t)\approx r_s\left(1+\exp\left(\frac{-c(t-t_0)}{r_s}\right)\right),\tag{*}$$here $t$ is the Schwarzschild time or time by the clock of outside observer, $r_s$ is Schwarzschild radius (without the mass of a falling object), and a constant $t_0$ is determined by when and how the object started falling into the black hole.
Now consider the outgoing null geodesics (trajectories of massless particles such as photons flying away from the black hole) near the horizon of this black hole. If we disregard the effect of the falling object they would satisfy the equation$$r(t)\approx r_s\left(1+\exp\left(\frac{c(t-t_1)}{r_s}\right)\right).$$But, if we consider the area near the horizon where our object is falling, we cannot disregard its influence on these trajectories. As radial null geodesics cross the worldline of the infalling object they are deflected by the gravitational field of this object (however small) and as a result they remain near the horizon for a longer time. And if this intersection occurs while that object is at a distance of $r_s+\delta r_s $ then this geodesic would no longer be able to escape the black hole, and be on the new horizon. The new value of horizon radius: $r_s+\delta r_s $ is the sum of old radius and addition from the mass/energy ($\delta m$) of falling object $\delta r_s \approx \frac{2 G \delta m}{c^2}$ (minus the losses of energy on radiation etc.) While the details depend on the geometry of the fall, the most important fact is that according to (*), the value of radial coordinate of $r=r_s+\delta r_s$ is achieved
in a finite time according to an external observer: $$ \Delta t\approx \frac{r_s}{c}\ln\left(\frac{ r_s}{\delta r_s}\right)\approx \frac{r_s}{c}\ln\left(\frac{M}{\delta m}\right).$$Even when a very light object falls into a very large black hole, the resulting time interval is quite small by human standards. For example, if we take a photon of the cosmic microwave background with energy $k_\text{B}\cdot 3\,\text{K}$ falling into Sagittarius A*, the logarithm would be about 175 and $\Delta t$ about two hours. So such a photon falling from a radius of $3 r_s$ into a supermassive black hole would cross the event horizon in a couple of hours by the clock of an external observer.
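As a sanity check on these numbers, here is a short calculation (my own, not part of the original answer) assuming a Sagittarius A* mass of roughly 4.15 million solar masses:

```python
import math

# Assumed constants (SI units); the Sgr A* mass is an approximate literature value.
G    = 6.674e-11        # gravitational constant
c    = 2.998e8          # speed of light
kB   = 1.381e-23        # Boltzmann constant
Msun = 1.989e30         # solar mass
M    = 4.15e6 * Msun    # assumed mass of Sagittarius A*

r_s = 2 * G * M / c**2          # Schwarzschild radius of Sgr A*
dm  = kB * 3.0 / c**2           # mass-equivalent of a ~3 K CMB photon
log_ratio = math.log(M / dm)    # the logarithm ln(M / delta m) from the formula
dt_hours  = (r_s / c) * log_ratio / 3600

assert 170 < log_ratio < 180    # "about 175"
assert 1.8 < dt_hours < 2.2     # "about two hours"
```

The logarithm comes out near 176 and the crossing time near two hours, matching the estimate above.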
To illustrate this process look at the spacetime diagram from the book by Andrew Hamilton p. 166:
This image uses Eddington–Finkelstein coordinates, which are better suited for studying near-horizon processes. The purple curve is the
apparent horizon before the falling particle ($r_s=0.5$), and the red curve is the worldline of the falling particle; we see that it crosses the true event horizon at $r=1$. Null geodesics (thin black lines) between the purple line and the horizon start as outgoing but fall back into the black hole after being deflected by the falling particle.
Another point to note is that the event horizon is a global construct: it depends not only on the past but also on the future of the black hole. So if something else falls into the black hole in the future, the event horizon right now is already expanding to accommodate the future increase in mass. (Of course, if the increase in mass is small, and/or far in the future, the horizon would be almost constant.) So there are no instantaneous expansions.
Also relevant is the question: At what moment will matter falling into a black hole affect its size?
In this answer I completely ignored the effects of Hawking radiation and the potential 'shrinking' of horizons due to it. This is well justified at the current epoch for both stellar-mass and supermassive black holes. At the very least, the temperature of the cosmic microwave background is much greater than the Hawking temperature of such black holes, so their horizons are always growing by absorbing CMB quanta.
|
McCarthy, D., Mikkola, K., Continuity and completeness of strongly independent preorders.
Mathematical Social Sciences, 93 (2018): 141–145.
Expected utility theory has three main axioms:
completeness, continuity, and independence. Completeness is dubious in many normative settings, so what happens when completeness is dropped? We show that there is a surprising difficulty.
Independence and continuity axioms
Let $X$ be a convex set, that is, $\alpha x + (1-\alpha) y \in X$ for any $x, y \in X$, $\alpha \in (0,1)$. An example is the set of probability measures on a finite set of outcomes. Let $\succsim$ be a preorder (a reflexive, transitive binary relation) on $X$.
Consider the strongest and most natural version of independence, strong independence.
(SI) For $x,y,z\in X$ and $\alpha\in(0,1)$, $x\succsim y \iff \alpha x+(1-\alpha)z\succsim \alpha y+(1-\alpha)z$.
Continuity axioms also come in different forms. The two most common are Archimedean and mixture continuity.
(Ar) For $x$, $y$, $z\in X$, if $x \succ y \succ z$, then $(1-\alpha)x+\alpha z \succ y$ for some $\alpha \in (0,1)$.
(MC) For $x$, $y$, $z \in X$, if $(1-\alpha) x+\alpha y \succ z$ for all $\alpha \in (0,1]$, then $x \succsim z$.
If $\succsim$ can be incomplete, it is natural to consider a slight strengthening of Ar.
(Ar$^{+}$) For $x,y,z\in X$, if $x \succ y$, then $(1-\alpha)x+\alpha z\succ y$ for some $\alpha \in (0,1)$.
Ar says that when $z$ is worse than $y$, it cannot be that any positive chance of $z$, no matter how small, would disturb the preference $x \succ y$. Ar$^+$ extends this by replacing "worse than" with "worse than or incomparable with".
Main result
1. Ar and MC essentially rule out incompleteness.
2. Ar$^+$ and MC formally rule out incompleteness.
Stated more precisely, the first claim is that Ar and MC imply that comparability is an equivalence relation. But this fits poorly with intuitions about comparability.
To use an example of Joseph Raz, suppose that a legal career and a musical career are incomparable. Then if comparability is an equivalence relation, it cannot be that both are better than a career cleaning toilets.
To gain some intuition about why the theorem is true, it is worth working through the following example, adapted from Aumann (1962). Let $X=\mathbb{R}^2$, and for $x, y \in X$ define
\begin{align*}
x \succsim_{\text{wp}} y &\iff x = y \text{ or } x_i > y_i \text{ for } i=1,2. \\
x \succsim_{\text{sp}} y &\iff x_i \geq y_i \text{ for } i=1,2.
\end{align*}
Then $\succsim_{\text{wp}}$ satisfies Ar and Ar$^+$, but not MC. Conversely, $\succsim_{\text{sp}}$ satisfies MC but not Ar or Ar$^+$.
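To make the failures concrete, here is a quick check of two counterexample instances on $X=\mathbb{R}^2$ (my own constructions in the spirit of the Aumann example, not taken from the paper):

```python
# Two preorders on X = R^2, as in the Aumann-style example above.
def wp(x, y):   # x >=_wp y : x = y, or strictly greater in both coordinates
    return x == y or (x[0] > y[0] and x[1] > y[1])

def sp(x, y):   # x >=_sp y : weakly greater in both coordinates
    return x[0] >= y[0] and x[1] >= y[1]

def strict(geq, x, y):          # the induced strict relation x > y
    return geq(x, y) and not geq(y, x)

def mix(a, x, z):               # the mixture (1-a)*x + a*z used in Ar and MC
    return ((1 - a) * x[0] + a * z[0], (1 - a) * x[1] + a * z[1])

# Ar fails for sp: x > y > z, yet no mixture of x toward z strictly beats y,
# because any positive weight on z drags the second coordinate below 0.
x, y, z = (1.0, 0.0), (0.0, 0.0), (0.0, -1.0)
assert strict(sp, x, y) and strict(sp, y, z)
assert not any(strict(sp, mix(a, x, z), y) for a in (1e-6, 0.01, 0.5, 0.999))

# MC fails for wp: every mixture of x toward y strictly beats z,
# yet x itself is not >=_wp z (its second coordinate is not strictly greater).
x, y, z = (1.0, 0.0), (1.0, 1.0), (0.0, 0.0)
assert all(strict(wp, mix(a, x, y), z) for a in (1e-6, 0.01, 0.5, 1.0))
assert not wp(x, z)
```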
Discussion
Here are a couple of reasons why the theorem is relevant for philosophers.
First, it raises a puzzle about what we should think about continuity axioms, and about the reliability of our intuitions about continuity. There are many discussions of incomplete strongly independent preorders in the economics literature. Both Mixture Continuity and variations on the Archimedean axioms feature heavily. So it seems that both styles of axioms are regarded as (a) plausible, and (b) comparably plausible. But in the presence of incompleteness, they say essentially opposite things. If one is right, the other is wrong. What should we make of this?
Second, moral philosophers often assume incompleteness, then present informal continuity arguments, whether they are explicitly stated that way or not. Since continuity behaves quite subtly and counterintuitively in the presence of incompleteness, I think they should be more careful.
|
Divergence and curl are two measurements of vector fields that are very useful in a variety of applications. Both are most easily understood by thinking of the vector field as representing a flow of a liquid or gas; that is, each vector in the vector field should be interpreted as a velocity vector. Roughly speaking, divergence measures the tendency of the fluid to collect or disperse at a point, and curl measures the tendency of the fluid to swirl around the point. Divergence is a scalar, that is, a single number, while curl is itself a vector. The magnitude of the curl measures how much the fluid is swirling, and the direction indicates the axis around which it tends to swirl. These ideas are somewhat subtle in practice, and are beyond the scope of this course.
Recall that if \(f\) is a function, the gradient of \(f\) is given by
$$\nabla f=\left\langle {\partial f\over\partial x},{\partial f\over\partial y},{\partial f\over\partial z}\right\rangle.$$
A useful mnemonic for this (and for the divergence and curl, as it turns out) is to let
$$\nabla = \left\langle{\partial \over\partial x},{\partial \over\partial y},{\partial \over\partial z}\right\rangle,$$
that is, we pretend that \(\nabla\) is a vector with rather odd looking entries. Recalling that \(\langle u,v,w\rangle a=\langle ua,va,wa\rangle\), we can then think of the gradient as
$$\nabla f=\left\langle{\partial \over\partial x},{\partial \over\partial y},{\partial \over\partial z}\right\rangle f = \left\langle {\partial f\over\partial x},{\partial f\over\partial y},{\partial f\over\partial z}\right\rangle,$$
that is, we simply multiply the \(f\) into the vector. The divergence and curl can now be defined in terms of this same odd vector \(\nabla\) by using the dot product and cross product.
The divergence of a vector field \({\bf F}=\langle f,g,h\rangle\) is
$$\nabla \cdot {\bf F} =\left\langle{\partial \over\partial x},{\partial \over\partial y},{\partial \over\partial z}\right\rangle\cdot \langle f,g,h\rangle = {\partial f\over\partial x}+{\partial
g\over\partial y}+{\partial h\over\partial z}.$$
The curl of \(\bf F\) is
$$\nabla\times{\bf F} = \left|\matrix{{\bf i}&{\bf j}&{\bf k}\cr {\partial \over\partial x}&{\partial \over\partial y}&{\partial \over\partial z}\cr f&g&h\cr}\right| =\left\langle {\partial h\over\partial y}-{\partial g\over\partial z}, {\partial f\over\partial z}-{\partial h\over\partial x}, {\partial g\over\partial x}-{\partial f\over\partial y}\right\rangle.$$
Here are two simple but useful facts about divergence and curl.
the divergence of the curl
\[\nabla\cdot(\nabla\times{\bf F})=0.\]
In words, this says that the divergence of the curl is zero.
the curl of a gradient
\[\nabla\times(\nabla f) = {\bf 0}.\]
That is, the curl of a gradient is the zero vector. Recalling that gradients are conservative vector fields, this says that the curl of a conservative vector field is the zero vector. Under suitable conditions, it is also true that if the curl of \(\bf F\) is \(\bf 0\) then \(\bf F\) is conservative. (Note that this is exactly the same test that we discussed in section 16.3.)
Example \(\PageIndex{3}\):
Let \({\bf F} = \langle e^z,1,xe^z\rangle\). Then \(\nabla\times{\bf F} = \langle 0,e^z-e^z,0\rangle = {\bf 0}\). Thus, \(\bf F\) is conservative, and we can exhibit this directly by finding the corresponding \(f\).
Since \(f_x=e^z\), \(f=xe^z+g(y,z)\). Since \(f_y=1\), it must be that \(g_y=1\), so \(g(y,z)=y+h(z)\). Thus \(f=xe^z+y+h(z)\) and
$$xe^z = f_z = xe^z + 0 + h'(z),$$
so \(h'(z)=0\), i.e., \(h(z)=C\), and \(f=xe^z+y+C\).
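As a numerical sanity check (not part of the original text), we can confirm that the potential we found really has gradient \(\bf F\), using central differences:

```python
import math

def f(x, y, z):                       # the potential found above (with C = 0)
    return x * math.exp(z) + y

def F(x, y, z):                       # the original field <e^z, 1, x e^z>
    return (math.exp(z), 1.0, x * math.exp(z))

def grad(g, p, h=1e-6):               # central-difference approximation of the gradient
    out = []
    for i in range(3):
        q_plus  = list(p); q_plus[i]  += h
        q_minus = list(p); q_minus[i] -= h
        out.append((g(*q_plus) - g(*q_minus)) / (2 * h))
    return tuple(out)

p = (0.7, -1.2, 0.3)                  # an arbitrary test point
assert all(abs(a - b) < 1e-5 for a, b in zip(grad(f, p), F(*p)))
```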
We can rewrite Green's Theorem using these new ideas; these rewritten versions in turn are closer to some later theorems we will see.
Suppose we write a two dimensional vector field in the form \({\bf F}=\langle P,Q,0\rangle\), where \(P\) and \(Q\) are functions of \(x\) and \(y\). Then
$$\nabla\times {\bf F} =\left|\matrix{{\bf i}&{\bf j}&{\bf k}\cr {\partial \over\partial x}&{\partial \over\partial y}&{\partial \over\partial z}\cr P&Q&0\cr}\right|= \langle 0,0,Q_x-P_y\rangle,$$
and so \((\nabla\times {\bf F})\cdot{\bf k}=\langle 0,0,Q_x-P_y\rangle\cdot\langle 0,0,1\rangle = Q_x-P_y\). So Green's Theorem says
$$ \begin{align} \int_{\partial D} {\bf F}\cdot d{\bf r} &=\int_{\partial D} P\,dx +Q\,dy \label{eq41} \\ &= \iint\limits_{D} Q_x-P_y \,dA \label{eq42} \\ &=\iint\limits_{D}(\nabla\times {\bf F})\cdot{\bf k}\,dA. \label{eq43} \end{align} \nonumber $$
Roughly speaking, Equation \ref{eq43} adds up the curl (tendency to swirl) at each point in the region; the left-most integral adds up the tangential components of the vector field around the entire boundary. Green's Theorem says these are equal, or roughly, that the sum of the "microscopic'' swirls over the region is the same as the "macroscopic'' swirl around the boundary.
Next, suppose that the boundary \(\partial D\) has a vector form \({\bf r}(t)\), so that \({\bf r}'(t)\) is tangent to the boundary, and \({\bf T}={\bf r}'(t)/|{\bf r}'(t)|\) is the usual unit tangent vector. Writing \({\bf r}=\langle x(t),y(t)\rangle\) we get
$${\bf T}={\langle x',y'\rangle\over|{\bf r}'(t)|}$$
and then
$${\bf N}={\langle y',-x'\rangle\over|{\bf r}'(t)|}$$
is a unit vector perpendicular to $\bf T$, that is, a unit normal to the boundary. Now
$$\eqalign{ \int_{\partial D} {\bf F}\cdot{\bf N}\,ds&= \int_{\partial D} \langle P,Q\rangle\cdot{\langle y',-x'\rangle\over|{\bf r}'(t)|} |{\bf r}'(t)|dt= \int_{\partial D} Py'\,dt - Qx'\,dt\cr &=\int_{\partial D} P\,dy - Q\,dx =\int_{\partial D} - Q\,dx+P\,dy.\cr
}$$
So far, we've just rewritten the original integral using alternate notation. The last integral looks just like the right side of Green's Theorem except that \(P\) and \(Q\) have traded places and \(Q\) has acquired a negative sign. Then applying
Green's Theorem we get
$$\int_{\partial D} - Q\,dx+P\,dy=\iint\limits_{D} P_x+Q_y\,dA=\iint\limits_{D} \nabla\cdot{\bf F}\,dA.$$
Summarizing the long string of equalities,
\[\int_{\partial D} {\bf F}\cdot{\bf N}\,ds =\iint\limits_{D} \nabla\cdot{\bf F}\,dA.\]
Roughly speaking, the first integral adds up the flow across the boundary of the region, from inside to out, and the second sums the divergence (tendency to spread) at each point in the interior. The theorem roughly says that the sum of the "microscopic'' spreads is the same as the total spread across the boundary and out of the region.
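This flux form of Green's Theorem can be verified numerically for a sample field of my own choosing, \({\bf F}=\langle x^2, y\rangle\) on the unit disk, so \(\nabla\cdot{\bf F}=2x+1\):

```python
import math

# Illustrative field (my own choice, not from the text): F = <P, Q> = <x^2, y>.
def P(x, y): return x * x
def Q(x, y): return y

# Flux across the unit circle: integral of P dy - Q dx, midpoint rule in t.
N = 4000
flux = 0.0
for k in range(N):
    t = 2 * math.pi * (k + 0.5) / N
    x, y = math.cos(t), math.sin(t)
    dx, dy = -math.sin(t), math.cos(t)          # derivatives of (x, y) wrt t
    flux += (P(x, y) * dy - Q(x, y) * dx) * (2 * math.pi / N)

# Double integral of div F = 2x + 1 over the unit disk, in polar coordinates.
M = 400
div_int = 0.0
for i in range(M):
    r = (i + 0.5) / M
    for j in range(M):
        t = 2 * math.pi * (j + 0.5) / M
        div_int += (2 * r * math.cos(t) + 1.0) * r * (1.0 / M) * (2 * math.pi / M)

# Both sides equal pi, the area of the disk (the 2x term integrates to zero).
assert abs(flux - math.pi) < 1e-6
assert abs(div_int - math.pi) < 1e-3
```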
|
Assuming a race car drives around in a circle of radius r, center (0,0), linear velocity v, and ignoring centrifugal forces and friction I can calculate the position at any time, t. Angular velocity
$$\omega = \frac{v}{r}$$
and the angle turned through, in radians (assume at time t = 0 the car is at position (r,0)), is
$$\theta = \omega t$$
So the positional (x,y) coordinates, p, at time t, are
$$p(t) =(r\cos(\theta) , r\sin(\theta ))$$
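The setup so far can be checked numerically; a short sketch with illustrative values for $r$ and $v$ (my own, just for testing):

```python
import math

def p(t, r=50.0, v=20.0):      # position on the circle; r in metres, v in m/s
    theta = (v / r) * t        # omega = v / r, so theta is in radians
    return (r * math.cos(theta), r * math.sin(theta))

# The car stays on the circle of radius r...
x, y = p(3.0)
assert abs(math.hypot(x, y) - 50.0) < 1e-9
# ...and its speed (numerical derivative of the position) equals v.
h = 1e-6
x2, y2 = p(3.0 + h)
assert abs(math.hypot(x2 - x, y2 - y) / h - 20.0) < 1e-3
```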
Please correct me if I am wrong up to this point. Now I want to include slipping, so that the car will be thrust outwards by the centrifugal (?) force, with the friction force countering it. Supposing the centrifugal force is stronger, there will be a resultant force in the outward radial direction, perpendicular to the linear velocity at time t.
Suppose this force is $F$, then I want to find the modified position of the car at time t, as it won't be travelling in a circle but, I think, will be travelling in an increasing spiral (this is theoretical, I haven't taken into account spinning around its axis or crashing into a barrier).
So this is where I am going to use very naive maths/physics to try and find the modified position.
Assume the car has mass $m$.
Then the acceleration outward from the centre of the circle will be
$$a = \frac{F}{m}$$
Now I'm going to assume acceleration is constant to make my life simpler, and I can fudge a value for outward velocity u,
$$u = \int_{0}^{t} a\,dt = at$$
and more dodgy physics, I can calculate the position vector by multiplying by t, assuming constant velocity outwards,
$$\text{position} = (u t \cos(\theta),u t \sin(\theta))$$
Remembering at time t, the angle is $\theta$
I can see that the new position, let's say $q(t)$, is given by
$$q(t) = p(t)+(u t\cos(\theta), u t\sin(\theta)) =p(t)+ (\frac{Ft}{m} t\cos(\theta),\frac{Ft}{m}t\sin(\theta))$$
One problem I have noted just from looking at this is that $q(t)\sim t^{2}$, since to get the position from the acceleration I multiply by t twice, which seems too much; the car will spiral outwards too fast.
I am very skeptical this is correct. Please correct me where I have gone wrong.
Edit:
Also I just pulled the value $F$ from thin air. Could you explain how I could calculate it based on the cars velocity, mass, radius of circle, coefficient of friction of surface or any other variables?
|
Learning Outcomes
Evaluate an algebraic expression given values for the variables. Recognize given values in a word problem and evaluate an expression using these values.
There are many formulas encountered in a statistics class, and the values of each variable will be given. It will be your task to carefully evaluate the expression after plugging each of the given values into the formula. To be successful, do not rush through the process; be aware of the order of operations and use parentheses when necessary.
Example \(\PageIndex{1}\)
Suppose that equation of the regression line for the number of days a week, \(x\), a person exercises and the number of days, \(\hat y\), a year a person is sick is:
\(\hat y=12.5\:-\:1.6x\)
We use \(\hat y\) instead of \(y\) since this is a prediction instead of an actual data value's y-coordinate. Use this regression line to predict the number of times a person who exercises 4 days a week will be sick this year.
Solution
The first step is always to identify the variable or variables that are given. In this case, we have 4 days of exercise a week, so:
\(x=4\)
Next, we plug in to get:
\(\hat y=12.5\:-\:1.6(4) = 6.1\)
Since we are predicting the number of days a year being sick, it is a good idea to round to the nearest whole number. We get that the best prediction for the number of sick days for a person who exercises 4 days per week is that they will be sick 6 days this year.
Example \(\PageIndex{2}\)
For a yes/no question, a sample size is considered large enough to use a Normal distribution if
\(np>5\) and \(nq\:>5\)
where \(n\) is the sample size, \(p\) is the proportion of Yes answers, and \(q\) is the proportion of No answers. A survey was given to 59 American adults asking them if they were food insecure today. 6.8% of them said they were food insecure today. Was the sample size large enough to use the Normal distribution?
Solution
Our first task is to list out each of the needed variables. Let's start with \(n\), the sample size. We are given that 59 Americans were surveyed. Thus
\(n=59\)
Next, we will find \(p\), the proportion of Yes answers. We are given that 6.8% said Yes. Since this is a percent and not a proportion, we must convert the percent to a proportion by moving the decimal point two places to the left. It helps to place a 0 to the left of the 6, so that the decimal point has a place to go. A common error is to rush through this and wrongly write down 0.68. Instead, the proportion is:
\(p=0.068\)
Our next task is to find \(q\), the proportion of No answers. For a Yes/No question, the proportion of Yes answers and the proportion of No answers must always add up to 1. Thus:
\(q=1-0.068\:=\:0.932\)
Now we are ready to plug into the two inequalities:
\(np=59\times0.068=4.012\)
and
\(nq=59\times0.932=54.988\)
Although \(nq\:=\:54.988>5\), we have \(np\:=\:4.012<5\), so the sample size was not large enough to use the Normal distribution.
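This check is easy to mechanize; a minimal sketch of the computation above:

```python
n, p = 59, 0.068          # sample size and proportion of Yes answers
q = 1 - p                 # proportion of No answers

np_, nq_ = n * p, n * q
assert abs(np_ - 4.012) < 1e-9    # np, as computed in the example
assert abs(nq_ - 54.988) < 1e-9   # nq, as computed in the example

large_enough = np_ > 5 and nq_ > 5
assert not large_enough   # np = 4.012 < 5, so the Normal distribution does not apply
```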
Example \(\PageIndex{3}\)
For a quantitative study, the sample size, \(n\), needed in order to produce a confidence interval with a margin of error no more than \(\pm E\), is
\(n=\left(\frac{z\sigma}{E}\right)^2\)
where \(z\) is a value that is determined from the confidence level and \(\sigma\) is the population standard deviation. You want to conduct a survey to estimate the population mean amount of years it takes psychologists to get through college and you require a margin of error of no more than \(\pm0.1\) years. Suppose that you know that the population standard deviation is 1.3 years. If you want a 95% confidence interval that comes with a \(z = 1.96\), at least how many psychologists must you survey? Round your answer up.
Solution
We start out by identifying the given values for each variable. Since we want a margin of error of no more than \(\pm0.1\), we have:
\(E\:=\:0.1\)
We are told that the population standard deviation is 1.3, so:
\(\sigma=1.3\)
We are also given the value of \(z\):
\(z=1.96\)
Now put this into the formula to get:
\(n=\left(\frac{1.96\times1.3}{0.1}\right)^2\)
We put this into a calculator or computer to get:
\(\left(1.96\times1.3\div0.1\right)^2=649.2304\)
We round up and can conclude that we need to survey 650 psychologists.
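The same computation in code, including the round-up step:

```python
import math

z, sigma, E = 1.96, 1.3, 0.1            # values given in the example
n_raw = (z * sigma / E) ** 2            # (1.96 * 1.3 / 0.1)^2
assert abs(n_raw - 649.2304) < 1e-6

n = math.ceil(n_raw)                    # sample sizes are always rounded up
assert n == 650
```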
Example \(\PageIndex{4}\)
Based on the Central Limit Theorem, the standard deviation of the sampling distribution when samples of size \(n\) are taken from a population with standard deviation, \(\sigma\), is given by:
\(\sigma_{\bar x}=\frac{\sigma}{\sqrt{n}}\)
If the population standard deviation for the number of customers who walk into a fast food restaurant is 12, what is the standard deviation of the sampling distribution for samples of size 35? Round your answer to two decimal places.
Solution
First we identify each of the given variables. Since the population standard deviation was 12, we have:
\(\sigma=12\)
We are told that the sample size is 35, so:
\(n=35\)
Now we put these numbers into the formula for the standard deviation of the sampling distribution to get:
\(\sigma_\bar x=\frac{12}{\sqrt{35}}\)
We are now ready to put this into our calculator or computer. We put in:
\(\sigma_{\bar x}=\frac{12}{\sqrt{35}}=12\div(35^\wedge 0.5) = 2.02837\)
Rounded to two decimal places, we can say that the standard deviation of the sampling distribution is 2.03.
Example \(\PageIndex{5}\)
The z-score for a given sample mean \(\bar x\) for a sampling distribution with population mean \(\mu \), population standard deviation \(\sigma \), and sample size \(n\) is given by:
\(z=\frac{\bar x-\mu}{\frac{\sigma}{\sqrt{n}}}\)
An environmental scientist collected data on the amount of glacier retreat. She measured 45 glaciers. The population mean retreat is 22 meters and the population standard deviation is 16 meters. The sample mean for her data was 27 meters and the sample standard deviation for her data was 18 meters. What was the z-score?
Solution
First we identify each of the given variables. Since the sample mean was 27, we have:
\(\bar x = 27\)
We are told that the population mean is 22 meters, so:
\(\mu=22\)
We are also given that the population standard deviation is 16 meters, hence:
\(\sigma=16\)
Finally, since she measured 45 glaciers, we have:
\(n=45\)
Now we put the numbers into the formula for the z-score to get:
\(z=\frac{27-22}{\frac{16}{\sqrt{45}}}\)
We are now ready to put this into our calculator or computer. We must pay attention to the order of operations and put parentheses around the numerator, since the subtraction happens for this expression before the division. We also must put parentheses around the denominator. We put in:
\(z=\left(27-22\right)\div\left(16\div\sqrt{45}\right)=2.0963\)
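The parenthesization advice above translates directly into code:

```python
import math

xbar, mu, sigma, n = 27, 22, 16, 45          # values given in the example
z = (xbar - mu) / (sigma / math.sqrt(n))     # parentheses around numerator and denominator
assert abs(z - 2.0963) < 1e-4
```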
Exercise
You want to come up with a 90% confidence interval for the proportion of people in your community who are obese and require a margin of error of no more than \(\pm3\%\). According to the Journal of the American Medical Association (JAMA) 34% of all Americans are obese. The equation to find the sample size, \(n\), needed in order to come up with a confidence interval is:
\(n=p\left(1-p\right)\left(\frac{z}{E}\right)^2\)
where \(p\) is the preliminary estimate for the population proportion. Based on calculations, \(z=1.645\). How many people in your community must you survey?
Evaluating Algebraic Expressions (L2.1)
https://youtu.be/HLjUT8Kvc5U
|
The line bundle $\mathcal O_{\mathbb CP^1}(-1) = \{(z,\ell)\mid z \in \ell \} $ is a submanifold of $\mathbb C^2 \times \mathbb CP^1$ with bundle map the projection. Thus we can restrict the coordinates of $\mathbb C^2 \times \mathbb CP^1$ to coordinates on $\mathcal O(-1)$.
What about its powers? Are they also such submanifolds? I was thinking that maybe $$\mathcal O_{\mathbb CP^1}(-n) = \{((z_0^n, z_1^n),[z_0,z_1])\}\subset \mathbb C^2 \times \mathbb CP^1.$$
This would give the right transition functions $(\frac {z_1}{z_0})^n$. But I don't think that it is well defined, because the function $[z_0: z_1] \mapsto (z_0^n, z_1^n)$ is not injective (look at the roots of unity).
So how would one define coordinates on $\mathcal O_{\mathbb CP^1}(-n)$? I am especially interested in the case $n=2$.
|
"Statistical significance is junk science, and its big piles of nonsense are spoiling the research of more than particle physicists."
Wow. It's remarkable because, with this deep misunderstanding of the very key part of any rational thinking, this gentleman can't possibly understand anything about the proper verification of theories in economics, his field, either. I would argue that because of this lethal flaw in the author's approach to rational reasoning, it is guaranteed at 5 sigma that your humble correspondent and many other physicists and scientists simply have to be better economists than Mr Ziliak, too. He just can't have a clue about the scientific approach to anything.
Statistical significance is absolutely paramount in the verification of hypotheses in all natural sciences as well as all social sciences that more or less successfully try to emulate the scientific character and success of the natural sciences.
Only in mathematics, we may construct rigorous proofs that don't need to mention any probabilities because in principle, the probability that a mathematical proof is right may be verified to be 100 percent. There's no noise and no uncertainty in a rigorous mathematical proof.
However, this "optimistic observation" has two major limitations. One of them is that mathematics doesn't directly apply to the real world. As long as mathematical concepts, theorems, and their proofs are considered rigorous, they can't be reliably and accurately identified with anything in the real world. So they tell us nothing about Nature, humans, or society. Claims about Nature, humans, or society simply don't belong to mathematics. They can't be absolutely certain. They can't be rigorous in the truly mathematical sense.
The second limitation is that people aren't infallible so for various reasons, even a mathematical proof has a nonzero probability to be wrong. Even if a proof is carefully verified etc., there's always a nonzero probability that the brain or the computer performed an invalid operation that led to the confirmation of a proof that is actually erroneous. The embedding of mathematicians' brains in Nature guarantees that these brains can't quite share the perfectly clean, infallible features of the idealized world of mathematics.
In natural sciences, the verification and falsification of hypotheses – and falsification in particular is the basic methodology that makes observations relevant (and observations have to be relevant for anything that we call science) – always involves measurements that have some uncertainty, a nonzero error margin, or a risk that a phenomenon is caused by different causes than those we want to search for. This is a fact: the world is simply messy and complicated. It is partly unpredictable. It is not a clean and transparent celestial sphere with perfectly spherical angels.
We may develop mathematical models and theories that are meant to match the observations and they may be free of any remarks about error margins, backgrounds, or false positives. But as soon as we do anything that remotely involves the theories' verification – and in sciences, the verification ultimately boils down to empirical verification – we simply have to acknowledge that each measured quantity has a nonzero error margin because it can't be measured quite accurately. We must acknowledge that an event that looks like a proof of some new phenomenon predicted by a theory was actually caused by a more mundane – while perhaps more rare and less likely – effect that combines the known mechanisms.
We must not only acknowledge it but we must also quantify all these things. We must know whether the error margin of a measurement is small enough so that the measurement is useful and trustworthy concerning the validity of a proposition. In the same way, we must know whether it's conceivable that the event apparently proving a new effect is actually caused by a combination of an older, less extraordinary theory combined with some reasonable amount of good luck.
For all these things, we have to quantify the probabilities.
The Higgs boson was officially discovered once the probability that the pairs of photons or Z-bosons with the right energies that really look like coming from a new, 125-126 GeV heavy particle, were so numerous that such a spike in the number of these events was very unlikely to appear without a new particle. By "very unlikely", particle physicists mean the chance "1 in 3 million", also known as "5 sigma", that the excess was a fluke that appeared in a world without a new particle.
Some disciplines of science try to be as hard and reliable as particle physics so they adopted the same 5-sigma (1 in 3 million) standard for discovery; most other disciplines, especially soft sciences such as medical research, climate science, psychology, and others, are often satisfied with 3-sigma (1 in 300) or even 2-sigma (1 in 20) evidence.
The number of sigmas determines the size of the deviation from the null hypothesis. A null hypothesis is some simple enough explanation "without new players" that admits some controllable noise according to some calculable statistical treatment. If it predicts that a quantity \(X\) has the value \(X_0\pm \Delta X\) where the distribution is normal (and it is very often almost exactly normal, and even if it is not normal, we usually know what it looks like and we can calculate the probabilities for other distributions as well), i.e. the probability density is \(C\exp[-(X-X_0)^2/(2\,\Delta X^2)]\) where \(C\) is chosen so that the "total probability of any possibility" equals one, then it is possible to calculate that the probability that \(X\) doesn't belong to the interval \((X_0-5\Delta X,X_0+5\Delta X)\) is approximately 1 in 3 million, which is so tiny that physicists are willing to take the risk and announce the discovery.
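The tail probabilities quoted here can be reproduced from the normal distribution in a few lines; note that the conventional "1 in 3 million or so" figure for 5 sigma corresponds to the one-sided tail:

```python
import math

def two_sided(nsig):       # P(|X - X0| > nsig * dX) for a normal distribution
    return math.erfc(nsig / math.sqrt(2))

def one_sided(nsig):       # P(X - X0 > nsig * dX)
    return 0.5 * math.erfc(nsig / math.sqrt(2))

assert 1/25 < two_sided(2) < 1/20        # 2 sigma: roughly 1 in 22
assert 1/800 < two_sided(3) < 1/300      # 3 sigma: roughly 1 in 370
assert 1/4e6 < one_sided(5) < 1/3e6      # 5 sigma: roughly 1 in 3.5 million
```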
The total significance of the deviation from the Higgs-less null hypothesis is now around 10 sigma or so which makes us really sure that the Higgs-like excess isn't just a fluke. The probability that the excess is just a fluke – a collection of coincidences – is much smaller than 1 in a quadrillion. These numbers are so large because \(\exp(-x^2)\) decreases really quickly with \(x\), more quickly than exponentially, in fact.
When the discrepancy between a theory and the observation becomes this high, we may eliminate the null hypothesis (in this case, a crippled Standard Model where the Higgs is amputated). This is the process of falsification and it's the key empirically rooted procedure by which any science makes some progress in its ability to distinguish viable hypotheses from the disproved ones. To disprove a (null) hypothesis is this straightforward. On the other hand, we can never "quite prove" any detailed theory because there's always a possibility (and, with an exception of the truly final theory, pretty much certainty) that more extensive and accurate experiments in the future will falsify the latest best theory, too. Equivalently, the absence of a statistically significant (e.g. 2-sigma or 5-sigma) deviation in the latest data doesn't mean that the null hypothesis is right and will be right forever. It just means that the deviations as displayed in the performed experiments are smaller than a certain bound which implies that the current theory is "practically" correct. In the future, a discrepancy may be found in more accurate, refined, or extensive experiments that may see tinier or subtler effects than what we can see today.
One simply can't ever deduce any conclusions from the empirical data with absolute certainty. It's always important to acknowledge that an uncertainty is there. And because such an uncertainty may compromise the conclusions, it's always important (sometimes more important, sometimes less important, but never quite forgettable) to quantify the uncertainty, i.e. to know how large it is. The most invariant way of quantification is ultimately one in terms of the probability that a conclusion is invalid because an anomalous observation or a "smoking gun" wasn't really caused by the new effect whose existence we wanted to prove but rather by some good luck (or bad luck) – an amount of luck that can't be quite small (because, as we assume, the observation doesn't look like the most typical prediction of the null hypothesis) but it can't be too large (because it may still realistically happen).
All this methodology is absolutely essential for any controlled, reliable enough empirical tests of any theory or any hypothesis in any natural or social science. We may only discuss how high our certainty should be for us to authoritatively claim that our experiments or observations have established something (the requirements may depend on the context a little bit). 5-sigma is the usual standard of the hardest sciences (led by particle physics) for a discovery. It wouldn't hurt if other sciences adopted the same standards. When a dataset produces a 2-sigma excess, which still carries a substantial, "1 in 20" risk of being a false positive, you only need a \(2.5^2 = 6.25\) times larger dataset to achieve a 5-sigma excess, where the risk of a false positive is just roughly "1 in 3 million". I am confident that science would be much clearer if surveys with mere 2-sigma excesses were summarized as inconclusive. Lots of bad and questionable results in soft sciences are caused by their low standards on how many sigmas are needed. These bad apples have far-reaching consequences because many other papers try to build on them, and so on.
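The quadratic scaling can be made concrete with a toy counting experiment (the numbers below are invented for illustration): a naive estimate of the significance of a signal excess \(s\) over an expected background \(b\) is \(s/\sqrt{b}\), and since both counts grow linearly with the size of the dataset, the significance grows like \(\sqrt{N}\):

```python
import math

# Naive counting significance of a signal excess s over expected background b.
def significance(s, b):
    return s / math.sqrt(b)

z_small = significance(20, 100)                 # 2.0 sigma
# Scaling both counts by (5/2)^2 = 6.25 lifts the same relative excess to 5 sigma:
z_large = significance(20 * 6.25, 100 * 6.25)   # 5.0 sigma
```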
But if someone wants to abandon the null hypothesis testing and the notion of statistical significance in general, he is surely throwing out the baby with the bath water. He can't possibly understand how proper science is done; he couldn't have possibly done any empirical research that could be uncontroversially considered scientific. In fact, as we have often emphasized on this blog, all predictions of fundamental theories of physics ultimately have to be probabilistic (even if you remove all the technological limitations of measurement devices etc.) because quantum mechanical postulates have to be universally valid in the whole Universe and every small or large corner of it.
Mr Ziliak tries to excuse his silly remarks by some confusing assertions about the nature of particle physicists' claims about the Higgs boson. The 5-sigma excess doesn't prove the Higgs boson, he says: it could be a Prometheus particle, too. But if he's serious, he misunderstands what terminology means in physics – and science. You are free to use the name "Prometheus" for the Higgs boson; after all, many of us use many other names at various points, such as the God particle or the BEH boson (only Peter Higgs really noticed the extra bosonic excitation named after him). But while the people are free to choose their language and terminology, physics isn't about terminology. Physics is about the observable phenomena. So even if the source of the bump were Prometheus according to your terminology and your belief system, it's still empirically demonstrated that this Prometheus behaves as the Higgs boson. If it looks like a God particle, walks like a God particle, and barks like a Dog particle, then it is a God particle (if you change one Dog to God). It doesn't matter whether someone says it's a Prometheus, too.
At the beginning, the new particle was given uncertain names and it was Higgs-like because there was clearly a new particle-like effect and its properties were compatible with the properties of a Higgs boson. Later, as we were more certain and knew more accurate values of the properties, we became able to falsify the theory that the bump is caused by something that differs too much from the Standard Model Higgs boson. At this point, we have everything we need to call it the Standard Model Higgs boson. By this claim, we don't mean that the Standard Model will forever be the right and complete theory for all observations. It almost certainly won't be. But the observed properties of the Higgs boson falsify so many competing hypotheses and are so nontrivially close to the predictions of the Standard Model Higgs boson that there's no reason not to use this name for the object. So the new particle may be a Prometheus but according to the physical definition of "being a Higgs boson", it is clearly a Higgs boson, too. Physics determines whether something is a Higgs boson by its decays, rates of production, mass, and other interactions, and if those things agree with the Higgs boson's property, then the particle – whether it is God or Prometheus or anyone else – simply
is a Higgs boson, and attempts to claim otherwise are just artifacts of a distorted terminology, mistakes, and demagogy.
When I talked about the certainty that the LHC has observed a new particle; a new Higgs-like particle; or a Standard-Model-like Higgs boson (these phrases are increasingly accurate and increasingly strong), I only took the (almost) purely experimental data into account. Aside from these nearly direct observations, we have nearly rock-solid theoretical arguments – that I won't offer to Mr Ziliak because he isn't smart enough to understand them as even the very rudimentary concept of statistical significance is already too hard and abstract for him – that there has to be a Higgs boson with the mass or other properties that can't differ from the observed ones by more than a relatively small amount. The Standard Model (or any theory with particles including the W- and Z-bosons and others we have known for 30 years) would simply produce inconsistent predictions (such as probabilities of some high-energy collisions exceeding 100 percent) if the Higgs boson weren't there. While an experimenter may view all these arguments as biases and he should perhaps only build on what he has seen with his own eyes, other physicists are more than free (in fact, nearly obliged) to use all the available evidence to decide about the existence of the Higgs boson (as well as any other scientific question). With this additional, mathematically sophisticated evidence added to the mix, there's really no doubt that Nature contains a Standard-Model-like Higgs boson. There's no sensible doubt about millions of other scientific claims, either. But the probability that these insights are right is never quite 100 percent although it has gotten insanely close to 100 percent in very many cases.
|
We present a search for new heavy particles, $X$, which decay via $X \rightarrow WZ \to e\nu + jj$ in $p{\bar p}$ collisions at $\sqrt{s} = 1.8$ TeV. No evidence is found for production of $X$ in 110 pb$^{-1}$ of data collected by the Collider Detector at Fermilab. Limits are set at the 95% C.L. on the mass and the production of new heavy charged vector bosons which decay via $W'\to WZ$ in extended gauge models as a function of the width, $\Gamma(W')$, and the mixing factor between the $W'$ and the Standard Model $W$ bosons.
Threshold measurements of the associated strangeness production reactions pp --> p K(+) Lambda and pp --> p K(+) Sigma(0) are presented. Although slight differences in the shapes of the excitation functions are observed, the most remarkable feature of the data is that at the same excess energy the total cross section for Sigma(0) production appears to be about a factor of 28 smaller than that for the Lambda particle. It is concluded that strong Sigma(0)-p final state interactions, and in particular the Sigma-N --> Lambda-p conversion reaction, are the likely cause of the depletion of the Sigma yield. This hypothesis is in line with other experimental evidence in the literature.
Exclusive rho rho production in two-photon collisions involving a single highly virtual photon is studied with data collected at LEP at centre-of-mass energies 89 GeV < \sqrt{s} < 209 GeV with a total integrated luminosity of 854.7 pb^-1. The cross section of the process gamma gamma^* -> rho rho is determined as a function of the photon virtuality, Q^2, and the two-photon centre-of-mass energy, Wgg, in the kinematic region: 1.2 GeV^2 < Q^2 < 30 GeV^2 and 1.1 GeV < Wgg < 3 GeV.
We present a measurement of $\sigma \cdot B(W \rightarrow e \nu)$ and $\sigma \cdot B(Z^0 \rightarrow e^+e^-)$ in proton - antiproton collisions at $\sqrt{s} =1.8$ TeV using a significantly improved understanding of the integrated luminosity. The data represent an integrated luminosity of 19.7 pb$^{-1}$ from the 1992-1993 run with the Collider Detector at Fermilab (CDF). We find $\sigma \cdot B(W \rightarrow e \nu) = 2.49 \pm 0.12$ nb and $\sigma \cdot B(Z^0 \rightarrow e^+e^-) = 0.231 \pm 0.012$ nb.
The reactions ee->ee+pi0+X and ee->ee+K0s+X are studied using data collected at LEP with the L3 detector at centre-of-mass energies between 189 and 202 GeV. Inclusive differential cross sections are measured as a function of the particle transverse momentum pt and the pseudo-rapidity. For pt < 1.5 GeV, the pi0 and K0s differential cross sections are described by an exponential, typical of soft hadronic processes. For pt > 1.5 GeV, the cross sections show the presence of perturbative QCD processes, described by a power-law. The data are compared to Monte Carlo predictions and to NLO QCD calculations.
We report measurements of the inclusive transverse momentum pT distribution of centrally produced kshort, kstar(892), and phi(1020) mesons up to pT = 10 GeV/c in minimum-bias events, and kshort and lambda particles up to pT = 20 GeV/c in jets with transverse energy between 25 GeV and 160 GeV in pbar p collisions. The data were taken with the CDF II detector at the Fermilab Tevatron at sqrt(s) = 1.96 TeV. We find that as pT increases, the pT slopes of the three mesons (kshort, kstar, and phi) are similar, and the ratio of lambda to kshort as a function of pT in minimum-bias events becomes similar to the fairly constant ratio in jets at pT ~ 5 GeV/c. This suggests that the particles with pT >~ 5 GeV/c in minimum-bias events are from soft jets, and that the pT slope of particles in jets is insensitive to light quark flavor (u, d, or s) and to the number of valence quarks. We also find that for pT <~ 4 GeV relatively more lambda baryons are produced in minimum-bias events than in jets.
The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV were measured in the rapidity range $-0.5< y_{\rm{CMS}}<0$ for event classes corresponding to different charged-particle multiplicity densities, $\langle{\rm d}N_{\rm{ch}}/{\rm d}\eta_{\rm{lab}}\rangle$. The mean transverse momentum values are presented as a function of $\langle{\rm d}N_{\rm{ch}}/{\rm d}\eta_{\rm{lab}}\rangle$, as well as a function of the particle masses and compared with previous results on hyperon production. The integrated yield ratios of excited to ground-state hyperons are constant as a function of $\langle{\rm d}N_{\rm{ch}}/{\rm d}\eta_{\rm{lab}}\rangle$. The equivalent ratios to pions exhibit an increase with $\langle{\rm d}N_{\rm{ch}}/{\rm d}\eta_{\rm{lab}}\rangle$, depending on their strangeness content.
The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in 2011 with the ALICE detector at the Large Hadron Collider (LHC). Transverse momentum ($p_{\mathrm{T}}$) spectra have been measured for K$^{*}(892)^{0}$ and $\phi(1020)$ mesons via their hadronic decay channels for $p_{\mathrm{T}}$ up to 20 GeV/$c$. The measurements in pp collisions have been compared to model calculations and used to determine the nuclear modification factor and particle ratios. The K$^{*}(892)^{0}$/K ratio exhibits significant reduction from pp to central Pb-Pb collisions, consistent with the suppression of the K$^{*}(892)^{0}$ yield at low $p_{\mathrm{T}}$ due to rescattering of its decay products in the hadronic phase. In central Pb-Pb collisions the $p_{\mathrm{T}}$ dependent $\phi(1020)/\pi$ and K$^{*}(892)^{0}$/$\pi$ ratios show an enhancement over pp collisions for $p_{\mathrm{T}} \sim 3$ GeV/$c$, consistent with previous observations of strong radial flow. At high $p_{\mathrm{T}}$, particle ratios in Pb-Pb collisions are similar to those measured in pp collisions. In central Pb-Pb collisions, the production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons is suppressed for $p_{\mathrm{T}}> 8$ GeV/$c$. This suppression is similar to that of charged pions, kaons and protons, indicating that the suppression does not depend on particle mass or flavor in the light quark sector.
The production of Z$^0$ bosons at large rapidities in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV is reported. Z$^0$ candidates are reconstructed in the dimuon decay channel (${\rm Z}^0 \rightarrow \mu^+\mu^-$), based on muons selected with pseudo-rapidity $-4.0<\eta<-2.5$ and $p_{\rm T}>20$ GeV/$c$. The invariant yield and the nuclear modification factor, $R_{\rm AA}$, are presented as a function of rapidity and collision centrality. The value of $R_{\rm AA}$ for the 0-20% central Pb-Pb collisions is $0.67 \pm 0.11 \, \mbox{(stat.)} \, \pm 0.03 \, \mbox{(syst.)} \, \pm 0.06 \, \mbox{(corr. syst.)}$, exhibiting a deviation of $2.6 \sigma$ from unity. The results are well-described by calculations that include nuclear modifications of the parton distribution functions, while the predictions using vacuum PDFs deviate from data by $2.3\sigma$ in the 0-90% centrality class and by $3\sigma$ in the 0-20% central collisions.
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The transverse momentum ($p_{\rm T}$) differential cross section multiplied by the branching ratio is presented in the interval 1 $<$ $p_{\rm T}$ $<$ 8 GeV/$c$ at mid-rapidity, $|y|$ $<$ 0.5. The transverse momentum dependence of the $\Xi_{\rm c}^0$ baryon production relative to the D$^0$ meson production is compared to predictions of event generators with various tunes of the hadronisation mechanism, which are found to underestimate the measured cross-section ratio.
Measurements of $\mathrm{B}^*_\mathrm{s2}(5840)^0$ and $\mathrm{B}_\mathrm{s1}(5830)^0$ mesons are performed using a data sample of proton-proton collisions corresponding to an integrated luminosity of 19.6 fb$^{-1}$, collected with the CMS detector at the LHC at a centre-of-mass energy of 8 TeV. The analysis studies $P$-wave $\mathrm{B}^0_\mathrm{S}$ meson decays into $\mathrm{B}^{(*)+}\mathrm{K}^-$ and $\mathrm{B}^{(*)0}\mathrm{K}^0_\mathrm{S}$, where the $\mathrm{B}^+$ and $\mathrm{B}^0$ mesons are identified using the decays $\mathrm{B}^+\to\mathrm{J}/\psi\,\mathrm{K}^+$ and $\mathrm{B}^0\to\mathrm{J}/\psi\,\mathrm{K}^*(892)^0$. The masses of the $P$-wave $\mathrm{B}^0_\mathrm{S}$ meson states are measured and the natural width of the $\mathrm{B}^*_\mathrm{s2}(5840)^0$ state is determined. The first measurement of the mass difference between the charged and neutral $\mathrm{B}^*$ mesons is also presented. The $\mathrm{B}^*_\mathrm{s2}(5840)^0$ decay to $\mathrm{B}^0\mathrm{K}^0_\mathrm{S}$ is observed, together with a measurement of its branching fraction relative to the $\mathrm{B}^*_\mathrm{s2}(5840)^0\to\mathrm{B}^+\mathrm{K}^-$ decay.
The production of the $\rho$(770)${^{0}}$ meson has been measured at mid-rapidity $(|y|<0.5)$ in pp and centrality differential Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the Large Hadron Collider. The particles have been reconstructed in the $\rho$(770)$\rightarrow\pi^{+}\pi^{-}$ decay channel in the transverse momentum ($p_{T}$) range $0.5-11$ GeV/$c$. A centrality dependent suppression of the ratio of the integrated yields $2\rho$(770)$^{0}/(\pi^{+}+\pi^{-})$ is observed. The ratio decreases by $\sim40\%$ from pp to central Pb-Pb collisions. A study of the $p_{T}$-differential $2\rho$(770)$^{0}/(\pi^{+}+\pi^{-})$ ratio reveals that the suppression occurs at low transverse momenta, $p_{T}<2$ GeV/$c$. At higher momentum, particle ratios measured in heavy-ion and pp collisions are consistent. The observed suppression is very similar to that previously measured for the $K^{*}$(892)$^{0}/K$ ratio and is consistent with EPOS3 predictions that may imply that rescattering in the hadronic phase is a dominant mechanism for the observed suppression.
|
I am interested in learning more about testing for the bivariate probit model with an endogenous treatment regressor. I have figured some stuff out -- summary below, since I don't see much on this topic -- but other questions remain.
Here's the setup. Suppose I have a binary outcome $y_1$, which depends on $x_1$, the error $\varepsilon_1$, and a binary treatment indicator $y_2$, which itself depends on $x_2$, $z$, and the error $\varepsilon_2$. The dependence is of the form $y_1=\mathbb{1}(\beta_1'x_1+\alpha y_2+\varepsilon_1>0)$ and $y_2=\mathbb{1}(\beta_2'x_2+\gamma z+\varepsilon_2>0).$ The errors in the two equations are correlated with correlation $\rho$.
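To make the setup concrete, the data-generating process can be simulated directly; here is a minimal sketch (scalar regressors and all parameter values are my own illustrative choices, not from any real application):

```python
import math
import random

def simulate(n, beta1, alpha, beta2, gamma, rho, seed=0):
    """Draw (y1, y2, x1, x2, z) from the bivariate probit treatment model."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x1, x2, z = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
        eps1 = rng.gauss(0, 1)
        # Construct eps2 so that corr(eps1, eps2) = rho:
        eps2 = rho * eps1 + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
        y2 = 1 if beta2 * x2 + gamma * z + eps2 > 0 else 0
        y1 = 1 if beta1 * x1 + alpha * y2 + eps1 > 0 else 0
        data.append((y1, y2, x1, x2, z))
    return data

sample = simulate(1000, beta1=0.5, alpha=1.0, beta2=0.5, gamma=1.0, rho=0.4)
```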
I am interested in testing the following assumptions:

- normality of the two error terms
- $\rho=0$
- homoskedasticity of the errors
- exogeneity and weak instruments for the treatment equation
Normality can be tested with the goodness-of-fit Rao/Murphy score test described by Chiburis et al. (2011) and Chiburis (2010), who provide Stata code to do so (scoregof). This test embeds the bivariate normal distribution within a larger family of distributions by adding more parameters to the model and checks whether the additional parameters are all zero, using the score for the additional parameters at the biprobit estimate. It rejects when there is excess kurtosis or skewness in the error distributions. Based on simulations, they do not recommend a variant of the Hosmer-Lemeshow test for this purpose.
The correlation between the errors can be tested using a likelihood ratio test based on the idea that if $\rho=0$, the log-likelihood for the bivariate probit will be equal to the sum of the log-likelihoods from the 2 univariate probits. If you calculate Huber/White sandwich errors, this becomes a Wald test.
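Concretely, the statistic is $LR = 2(\ell_{\text{biprobit}} - \ell_{\text{probit},1} - \ell_{\text{probit},2})$, compared to a $\chi^2_1$ distribution. A minimal sketch (the log-likelihood values below are placeholders, to be replaced by your own estimation output):

```python
import math

# chi-square(1) survival function: P(chi2_1 > x) = erfc(sqrt(x / 2))
def chi2_1_pvalue(x):
    return math.erfc(math.sqrt(x / 2))

def lr_test_rho(ll_biprobit, ll_probit1, ll_probit2):
    """LR test of rho = 0: twice the gap between the joint and separate fits."""
    lr = 2 * (ll_biprobit - (ll_probit1 + ll_probit2))
    return lr, chi2_1_pvalue(lr)

# Hypothetical fitted log-likelihoods:
lr, p = lr_test_rho(-612.4, -350.2, -264.9)  # lr = 5.4, p ~ 0.02, reject rho = 0
```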
Some questions remain.
Should I worry about heteroskedasticity if I use robust errors? How can I test for this?
Can I differentiate heteroskedasticity from heterogeneous treatments effects?
Can I use linear IV diagnostics (like weak instruments tests) to check the non-linear probit model? Is there anything better? This seems strange to me since they estimate different treatment effects.
Are there other things I should check that I am not aware of?
|
Mobius function
The Mobius function is a multiplicative number theoretic function defined as follows: $\mu(n) = 0$ if $n$ is divisible by the square of a prime, and $\mu(n) = (-1)^k$ if $n$ is a product of $k$ distinct primes. In addition, $\mu(1) = 1$.
The Mobius function is useful for a variety of reasons.
First, it conveniently encodes the Principle of Inclusion-Exclusion. For example, to count the number of positive integers less than or equal to $n$ and relatively prime to $n$, we have
\[\phi(n) = n - \sum_{i}\frac{n}{p_i} + \sum_{i<j}\frac{n}{p_ip_j} - \cdots,\]
where the $p_i$ are the distinct primes dividing $n$,
more succinctly expressed as
\[\phi(n) = \sum_{d|n}\mu(d)\frac{n}{d}.\]
One unique fact about the Mobius function, which leads to the Mobius inversion formula, is that
\[\sum_{d|n}\mu(d) = \begin{cases} 1 & n = 1, \\ 0 & n > 1. \end{cases}\]
Property 1: The function $\mu(n)$ is multiplicative: if $\gcd(m,n)=1$, then $\mu(mn)=\mu(m)\mu(n)$.
Proof: If $p^2 \mid m$ or $p^2 \mid n$ for a prime $p$, then $p^2 \mid mn$ and both sides are $0$, so we are done. Else let $m = p_1p_2\cdots p_s$ and $n = q_1q_2\cdots q_t$ where the $p_i$ and $q_j$ are distinct primes; then $\mu(mn) = (-1)^{s+t} = (-1)^s(-1)^t = \mu(m)\mu(n)$.
Property 2 (Mobius inversion): If $F(n) = \sum_{d|n}f(d)$ for every positive integer $n$, then $f(n) = \sum_{d|n}\mu(d)F\left(\frac{n}{d}\right)$.
Proof: We have
\[\sum_{d|n}\mu(d)F\left(\frac{n}{d}\right)=\sum_{d|n}\mu(d)\sum_{k|\frac{n}{d}}f(k)=\sum_{k|n}\sum_{d|\frac{n}{k}}\mu(d)f(k)=\sum_{k|n}f(k)\sum_{d|\frac{n}{k}}\mu(d)=f(n),\]
where the last equality holds because $\sum_{d|\frac{n}{k}}\mu(d)$ vanishes unless $\frac{n}{k}=1$, i.e. unless $k=n$.
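Both the divisor-sum identity and the inversion formula are easy to verify numerically; a short Python sketch (function names are mine):

```python
def mobius(n):
    """mu(n): 0 if n has a squared prime factor, else (-1)^(number of prime factors)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:      # squared prime factor
                return 0
            result = -result
        p += 1
    if n > 1:                   # one leftover prime factor
        result = -result
    return result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

# sum_{d|n} mu(d) is 1 for n = 1 and 0 otherwise:
checks = [sum(mobius(d) for d in divisors(n)) for n in range(1, 50)]

# Mobius inversion recovers f from F(n) = sum_{d|n} f(d); take f(n) = n,
# so F = sigma (sum of divisors), and sum_{d|n} mu(d) F(n/d) gives back n:
recovered = [sum(mobius(d) * sum(divisors(n // d)) for d in divisors(n))
             for n in range(1, 30)]
```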
The Mobius function is also closely related to the Riemann zeta function, as
\[\frac{1}{\zeta(s)} = \sum_{n=1}^{\infty}\frac{\mu(n)}{n^s}.\]
|
Suppose $V$ is an $n$-dimensional vector space over $\mathbb{R}$. Let $\mathcal{L}(V,V; \mathbb{R})$ denote the set of all bilinear maps $f: V\times V \rightarrow \mathbb{R}$, i.e., $f(v,w)$ is linear in each variable separately. (We can actually show that $\mathcal{L}(V,V; \mathbb{R})$ is a vector space over $\mathbb{R}$ under vector addition and scalar multiplication.)
I want to find a basis for $\mathcal{L}(V,V; \mathbb{R})$. I was given the following hint: If $e_1, \ldots, e_n$ is a basis for $V$ and $f \in \mathcal{L}(V,V; \mathbb{R})$ then the $f(e_i, e_j)$ play an important role.
I messed around with this hint:
Suppose $e_1, \ldots, e_n$ is a basis for $V$. Let $v,w \in V$ then we can express $v$ and $w$ as linear combinations of the $e_i$ vectors, $$ v=\sum_{i=1}^n a_i e_i$$ and $$ w=\sum_{k=1}^n b_k e_k.$$
By bilinearity of $f\in \mathcal{L}(V,V; \mathbb{R})$ we have $$f(v,w)= f\left( \sum_{i=1}^n a_i e_i, \sum_{k=1}^n b_k e_k \right)=\sum_{i=1}^n \sum_{k=1}^n a_ib_k f(e_i, e_k).$$
Here is where I get stuck: $f\in \mathcal{L}(V,V; \mathbb{R})$ is just one bilinear map in the space, but how can I find a set of bilinear maps in the space that spans this entire set of bilinear maps? Thank you!
|
On page 65 of his book, Spivak says that $\int_A \varphi \cdot |f|$ exists if
1. $\Phi$ is a partition of unity subordinate to an open cover $O$ of $A \subset \mathbf{R}^n$, and $\varphi \in \Phi$,
2. the set of discontinuities of $f:A \rightarrow \mathbf{R}$ has measure $0$, and
3. $f$ is bounded in some open set around each point of $A$.
But I can't understand why it exists.
I know that in his proof of the existence of partitions of unity, he actually proved that each $\varphi \in \Phi$ has compact support, so we can think of the above integral as integration over a subset of a rectangle $\prod^n_{i=1}[a_i,b_i]$. But I think $A$ should still be Jordan measurable, since otherwise the integral may not exist.
|
Let $\{p_i\}$ be the list of primes dividing $\gcd (a,b)$. Then we can write $$a=\prod p_i^{a_i}\times \prod q_j^{\alpha_j}\quad \&\quad b=\prod p_i^{b_i}\times \prod r_k^{\beta_k}$$
Where the $q_j,r_k$ are primes disjoint from each other and from the $p_i$.
We can now compute both sides of your desired inequality. We get $$\varphi(ab)=\prod p_i^{a_i+b_i-1}(p_i-1)\times \varphi\left(\prod q_j^{\alpha_j}\right)\times \varphi\left(\prod r_k^{\beta_k}\right)$$While $$\varphi(a)\varphi(b)=\prod p_i^{a_i+b_i-2}(p_i-1)^2\times \varphi\left(\prod q_j^{\alpha_j}\right)\times \varphi\left(\prod r_k^{\beta_k}\right)$$
From this we see that we can compute the ratio $$\boxed {\frac {\varphi(ab)}{\varphi(a)\varphi(b)}=\prod \frac {p_i}{p_i-1}}$$
The inequality you desire follows at once (as well as the claim that equality requires the gcd to be $1$).
Examples:
I. $a=12, b=16$. Then the only $p_i$ is $2$ and we remark that $$\varphi(192)=64=2\times \varphi(12)\times \varphi(16)$$
II. $a=18,b=60$. Then the $p_i$ are $2,3$ and we have $$\frac {\varphi(18\times 60)}{\varphi(18)\times \varphi (60)}=3=\frac 21\times \frac 32$$
III. $a=10,b=45$. In this case the ratio comes out $\frac 54$ as desired (I've included this example just to illustrate that, of course, the ratio need not always be an integer).
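The boxed ratio is easy to check numerically; a short Python sketch (function names are mine) that reproduces the three examples:

```python
from math import gcd

def phi(n):
    """Euler's totient via the product formula phi(n) = n * prod (1 - 1/p)."""
    result, p, m = n, 2, n
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p      # multiply result by (1 - 1/p)
        p += 1
    if m > 1:                          # leftover prime factor
        result -= result // m
    return result

def ratio(a, b):
    """phi(a*b) / (phi(a)*phi(b)) as an exact fraction (numerator, denominator)."""
    num, den = phi(a * b), phi(a) * phi(b)
    g = gcd(num, den)
    return num // g, den // g

examples = [ratio(12, 16), ratio(18, 60), ratio(10, 45)]  # (2,1), (3,1), (5,4)
```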
|
The short but ahistorical answer is that topological string theories turn out to be examples of $(\infty,1)$-categories. The mathematical formulation of this statement is in Lurie's classification of topological field theories http://www.math.harvard.edu/~lurie/papers/cobordism.pdf (building on work of Atiyah, Segal, Getzler, Costello, Baez-Dolan, Kontsevich and probably a bunch more I'm forgetting.)
The content of this statement is that when you write down the axioms for a topological string theory, the collection of "boundary conditions" or "D-branes" look like the collection of objects in an $(\infty,1)$ category.
Of course, you can ask why the derived category of coherent sheaves. Historically, the answer to that is that it is very easy to write down a boundary condition for a holomorphic vector bundle in the topological B-model. It's not a huge leap from there to coherent sheaves, and if you start mumbling words like tachyon condensation, you can get to the derived category with a fair bit of hand waving.
That's from the physics side of things. On the math side, Kontsevich got there first, possibly by noting that the space of closed string states in the B-model ($H^\bullet(\wedge^\bullet TX)$) is exactly the Hochschild cohomology of the derived category of coherent sheaves. He then followed up by associating the (still not yet defined?) Fukaya category with the A-model and conjecturing that mirror symmetry is an equivalence of the two (with some Hodge structure goodies thrown in). Subsequently, it looks like you have to add in some things called coisotropic branes to cover all your bases, but the basic idea is right.
Kontsevich formulated all this in terms of $A_\infty$ categories which in the Lurie language turn into $(\infty,1)$ categories which are just TQFTs in disguise. So, Kontsevich's homological mirror symmetry is then the statement that two TQFTs are the same, just like mirror symmetry in string theory.
From the physics side of things, this was all a bit of a mess, but we now understand that the derived category really arises via Block's construction of the derived category (I'm being intentionally vague as to which version of the derived category) as arising from integrable super-connections of graded smooth vector bundles http://www.math.upenn.edu/~blockj/papers/BottVolume.pdf. You can see this explicitly in the physics from a few sources, particularly Kapustin, Rozansky and Saulina, and Herbst, Hori and Page, but I'm rather fond of my own contribution http://arxiv.org/abs/0808.0168.
|
This study focuses on predicting the price of Airbnb rentals in Amsterdam. To model the price of Airbnb listings, we use characteristics of the rentals such as the location of the house, the room type, and the number of reviews. We use a random sample of 500 Airbnb listings from a larger dataset on Amsterdam Airbnb listings from the insideairbnb.com project led by Murray Cox (murray [at] murraycox.com). Every variable we use in our analysis is described in Table 1. To contextualize our analysis, we looked at past research on the factors that might influence the price of Airbnb rentals.
Perez-Sanchez et al. examined which factors determine the price of Airbnb rentals in four Spanish cities. They examined 26 variables within four broad categories: location specifics, accommodation characteristics, surroundings, and advertising strategies. Using data from Airdna (airdna.co), they found that price decreases the further the rental is from the shore, while price increases the further the rental is from tourist zones. The second finding is particularly interesting for our analysis, because in European cities the center tends to be a tourist area, so we can check whether this finding holds in Amsterdam.
Zhang et al. studied Metro Nashville, Tennessee, and the factors that determine the price of Airbnb listings weighted for their location. They looked at 6 variables: the price of the listing, the number of reviews, distance from a highway, distance from the local convention center, the number of months since the listing was published, and the rating of the listing. Their findings show that the number of reviews, distance to the convention center, and ratings are statistically significant predictors of the price of an Airbnb listing.
Both studies showed that the location of an Airbnb rental is a significant predictor of its price. The second also presented evidence that the number of reviews and the rating are significant predictors of the price of Airbnb rentals. Hence, we decided to examine whether the factors identified in the above studies are significant for predicting the price of Airbnb rentals in Amsterdam, and to explore what other factors in the data set of Amsterdam's Airbnb listings are significant for predicting price.
The data set comes from the website Inside Airbnb and includes summary information on the listings in Amsterdam. The following table shows the definitions of the important variables:
Variable Name       Description
room_type           Three categories of rooms: private room, shared room, entire home or apartment
price               The price of one night
minimum_nights      The minimum number of nights a customer must stay in the Airbnb house
number_of_reviews   The number of reviews for each Airbnb on the website
availability_365    The number of days per year the Airbnb house is available
longitude           Measurement of location, expressed in degrees
latitude            Measurement of location, expressed in degrees
To clean the data set of Amsterdam Airbnb listings, we eliminate all missing values and delete variables which are clearly unrelated to the price of an Airbnb rental, like the host's ID number. In the process of EDA, we find that it is hard to fit a linear regression between the variables in the data set as given, so some modifications of the dataset help us to explore the data and build models. We create several new indicator variables: 'expensive', 'entirehome', and 'trusted'. The 'expensive' variable indicates whether the price of the rental is greater than 100 euros. The 'entirehome' variable indicates whether the rental is an entire home or whether any rooms/spaces are shared. The 'trusted' variable indicates whether the Airbnb has more than 10 reviews. We also create a new variable named 'region' by using the mean values of longitude and latitude to divide the whole city into four districts: South-west, South-east, North-west, and North-east.
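A minimal sketch of this feature construction (the column names and the exact room-type label are assumptions based on the public Inside Airbnb schema, not the authors' code):

```python
def make_features(listing, mean_longitude, mean_latitude):
    """Derive the indicator variables described above from one listing record."""
    north_south = "North" if listing["latitude"] >= mean_latitude else "South"
    east_west = "east" if listing["longitude"] >= mean_longitude else "west"
    return {
        "expensive": listing["price"] > 100,                      # > 100 euros
        "entirehome": listing["room_type"] == "Entire home/apt",  # assumed label
        "trusted": listing["number_of_reviews"] > 10,
        "region": f"{north_south}-{east_west}",
    }

example = make_features(
    {"price": 120, "room_type": "Entire home/apt",
     "number_of_reviews": 25, "latitude": 52.35, "longitude": 4.95},
    mean_longitude=4.89, mean_latitude=52.37,
)
```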
After the EDA, we choose to use multiple linear and logistic regression as well as ANOVA to model the price of Airbnb listings and the odds that the price of Airbnb listings is higher than 100 euros.
In our analysis, we have examined how different factors correlate with the price of an Airbnb listing. First, we explore the relationship between price and the minimum number of nights a customer is required to pay, in order to be able to book the rental.
Figure 1 shows that the range of prices narrows as the log minimum number of nights increases. However, only a few Airbnb houses in Amsterdam require a large minimum number of nights. At lower values of the log minimum number of nights, the prices of Airbnb houses are evenly distributed between 0 and 200 euros. Customers who stay at an Airbnb for less time will have more choice, but no definitive relationship is observable from Figure 1.
Next, we explored the relationship between the price of an Airbnb rental in Amsterdam and the number of reviews it has received. Based on our experience, we speculate that most people think of rentals with more reviews as more trustworthy, which is in turn good for the host and could drive a price increase. We define a binary variable for whether an Airbnb is expensive, with a threshold of 100 euros (roughly equal to US dollars) on a student budget.
From Figure 2, there is no major difference in the number of reviews between the group of Airbnbs priced less than 100 euros and the group priced greater than 100 euros.
We also explored the relationship between price and the type of the rooms in the rental. To investigate this relationship, we decided to create a binary variable for whether the listing offers the entire housing unit like home or apartment, or whether there are some spaces shared with other Airbnb customers.
From Figure 3, we see that the price of an Airbnb is generally higher if the Airbnb's room type is entire home, in other words if no spaces are shared with other Airbnb customers.
Finally, we also explored the relationship between the price of an Airbnb listing and the region of Amsterdam where the listing is located. While the dataset contains data on the Airbnb listing’s neighborhood, the number of available neighborhoods is large. To simplify the exploration, we divided up the city into four quadrants based on cardinal directions.
We divided Amsterdam into four districts using the mean longitude and latitude as cut points. From the plot of whether a listing is expensive against the region in which it is located, we see that Airbnb prices in the southwest and southeast of Amsterdam tend to be higher. Thus, we infer that region may be a factor in Airbnb pricing in Amsterdam.
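The quadrant construction can be sketched as follows (a Python illustration with made-up coordinates and field names; the actual analysis was run on the dataset in R):

```python
# Toy illustration of the quadrant construction: each listing is assigned
# to N/S and E/W by comparing its coordinates to the mean latitude and
# longitude. Coordinates and field names here are assumptions.
listings = [
    {"lat": 52.37, "lon": 4.89},
    {"lat": 52.35, "lon": 4.85},
    {"lat": 52.39, "lon": 4.92},
    {"lat": 52.33, "lon": 4.95},
]
mean_lat = sum(x["lat"] for x in listings) / len(listings)
mean_lon = sum(x["lon"] for x in listings) / len(listings)

def region(x):
    """Quadrant of a listing relative to the mean latitude/longitude."""
    ns = "N" if x["lat"] >= mean_lat else "S"
    ew = "E" if x["lon"] >= mean_lon else "W"
    return ns + ew

for x in listings:
    x["region"] = region(x)
```

With the four toy points above, the assigned regions come out as NW, SW, NE, and SE respectively.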
To investigate the impact of the region of an Airbnb listing on its price, we conduct an Analysis of Variance. Our null hypothesis is that region does not have an impact on the price, in other words that any differences observed are due only to random variation. Our alternative hypothesis is that region does have an influence on the price.
\[ H_{0} : \mu_{SW} = \mu_{SE} = \mu_{NW} = \mu_{NE}\] \[ H_{A} : \mu_{i} \neq \mu_{j} \text{ for some } i,j \in \{ SW,SE,NW,NE\}\]
            Df  Sum Sq  Mean Sq  F value  Pr(>F)
region       3   0.410    0.138    0.533   0.665
Residuals  496 130.220    0.262
The results of our ANOVA test are shown in Table 2. The p-value for region is 0.665, which is not significant, so we fail to reject the null hypothesis: we find no evidence that the region of Amsterdam has an impact on the pricing of the Airbnb. To explore this question further, we perform linear and logistic regression and discuss their results.
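The reported F value and p-value can be reconstructed from the table's degrees of freedom and sums of squares (a Python sketch using the rounded table values, so the results only agree approximately):

```python
from scipy import stats

# Reconstructing Table 2's F statistic and p-value from its (rounded)
# degrees of freedom and sums of squares.
ss_region, df_region = 0.410, 3
ss_resid, df_resid = 130.220, 496
F = (ss_region / df_region) / (ss_resid / df_resid)   # close to the reported 0.533
p = stats.f.sf(F, df_region, df_resid)                # close to the reported 0.665
```

The small discrepancies come from rounding in the printed table.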
Statistic    N   Mean   St. Dev.    Min   Pctl(25)  Pctl(75)    Max
Coefficient  8   0.604     1.565  -0.065    -0.027     0.134  4.447
Lower-bound  8   0.520     1.547  -0.176    -0.134     0.108  4.319
Upper-bound  8   0.688     1.584  -0.001     0.028     0.238  4.575
Significant  8   0.500     0.535       0         0         1      1
Table 3 shows the results of our linear regression model, with log price in euros as the dependent variable and trust, entire home, minimum nights, days of availability in a year, and region as the independent variables. Because the p-values of entire home, minimum number of nights, and availability of days per year are less than 0.05, these three variables are significant predictors of log price per night. Additionally, the intercept is significant as well; in our model it represents the South-West region of Amsterdam, the reference group. Holding the other predictors (other regions, availability, minimum nights, trust, and entire home rental) at their baseline values, the expected price of a baseline listing in the South-West region of Amsterdam is about 85.36 euros ($e^{4.447}$).
Interpretation of the coefficients of significant variables, after accounting for other regions, availability, minimum nights, trust, and entire home rental:
On average, the price of an Airbnb in Amsterdam is multiplied by about 1.7 ($e^{0.531}$), i.e. is roughly 70% higher, if the room type of the Airbnb is the entire home.
On average, the price of an Airbnb in Amsterdam is multiplied by about 0.96 ($e^{-0.046}$), i.e. decreases by roughly 4%, if the number of reviews is larger than 10.
On average, the price of an Airbnb is multiplied by about 1.001 ($e^{0.001}$), i.e. increases by roughly 0.1%, for every additional day in a year the listing is available for rent on the Airbnb website.
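Because the regression is fit on log(price), a coefficient $\beta$ scales the price by a factor $e^{\beta}$ per one-unit change in the predictor. A short Python sketch with the coefficients reported in the regression table:

```python
import math

# The model is fit on log(price), so a coefficient beta corresponds to
# multiplying the price by exp(beta) per one-unit change in the predictor.
# Coefficients below are the ones reported in the regression table.
effects = {name: math.exp(beta)
           for name, beta in [("entirehome", 0.531),
                              ("minimum_nights", -0.010),
                              ("availability_365", 0.001)]}
# exp(0.531)  ~ 1.70 : an entire-home listing is about 70% more expensive
# exp(-0.010) ~ 0.99 : each extra required night lowers the price ~1%
# exp(0.001)  ~ 1.001: each extra available day raises the price ~0.1%
```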
We used logistic regression to determine how different factors affect whether or not an Airbnb is expensive, using the same student-budget threshold: anything above 100 euros we considered expensive. The only two statistically significant predictors of whether an Airbnb listing's price for one night is expensive were renting the entire home and the number of days in a year the listing is available, after accounting for trust, minimum nights, and region of Amsterdam. Below are the exponentiated coefficients and their 95 percent confidence intervals.
Statistic    N   Mean   St. Dev.    Min   Pctl(25)  Pctl(75)     Max
Odds         8   2.098     3.466   0.283     0.855     1.090  10.643
Lower-bound  8   1.334     2.045   0.143     0.450     0.948   6.341
Upper-bound  8   3.474     6.088   0.545     1.019     1.865  18.490
Significant  8   0.375     0.518       0         0         1       1
Interpretation of significant coefficient odds ratios:
Intercept: For an Airbnb listing in the South-West region that is not available for renting, requires zero nights minimum stay, has shared rooms/spaces, and has fewer than 10 reviews, the baseline odds of it being expensive are 0.28.
Odds ratio of entire home: Comparing Airbnb listings that rent the full home/unit to those with shared rooms/spaces, the odds of being expensive are about 10.6 times higher ($e^{2.365}$) for an entire-home listing, after controlling for trust, minimum nights, the number of available days, and region.
Odds ratio of the number of available days in a year: For every one-day increase in the number of available days, the odds of being expensive are multiplied by about 1.003, a small but statistically significant change.
Table 4 shows the confidence intervals. We are 95% confident that the true odds ratio for an Airbnb being expensive when the customer rents the entire home is between 6.34 and 18.49, after accounting for trust, minimum nights, and region of Amsterdam. Additionally, we are 95% confident that the true odds ratio for a 1-day increase in availability during the year is between 1.0008548 and 1.0064249, after accounting for the same variables.
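A sketch of how the entire-home odds ratio and an approximate 95% interval follow from the logistic coefficient and its standard error (2.365 and 0.272 in the regression table). The interval reported in Table 4 was presumably computed by profile likelihood, so the Wald interval below only comes close:

```python
import math

# Wald-style reconstruction of the entire-home odds ratio from the
# reported logistic coefficient (2.365) and standard error (0.272).
beta, se = 2.365, 0.272
odds = math.exp(beta)                 # ~10.6, matching the Odds table
lo = math.exp(beta - 1.96 * se)       # ~6.2  (Table 4 reports 6.34)
hi = math.exp(beta + 1.96 * se)       # ~18.1 (Table 4 reports 18.49)
```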
Table 5 shows the significance levels for all of the variables in our linear and logistic regressions. The p-values for entire home rental and the number of available days are both less than 0.05, so entire home and the number of available days are significant predictors of an Airbnb rental being expensive.
                          Dependent variable:
                    --------------------------------
                      log(price)        expensive
                         (OLS)          (logistic)
                          (1)              (2)
trusted1                -0.046            0.262
                        (0.042)          (0.218)
entirehome               0.531***         2.365***
                        (0.051)          (0.272)
minimum_nights          -0.010**         -0.027
                        (0.005)          (0.023)
availability_365         0.001***         0.003**
                        (0.0002)         (0.001)
regionSE                -0.007            0.019
                        (0.057)          (0.296)
regionNW                -0.065           -0.460
                        (0.057)          (0.287)
regionNE                -0.021           -0.073
                        (0.065)          (0.336)
Constant                 4.447***        -1.261***
                        (0.065)          (0.340)
Observations               500              500
R^2                      0.212
Adjusted R^2             0.201
Log Likelihood                         -276.008
Akaike Inf. Crit.                       568.016
Residual Std. Error      0.457 (df = 492)
F Statistic             18.901*** (df = 7; 492)
Note: *p<0.1; **p<0.05; ***p<0.01
There are many factors that Airbnb customers take into account when determining which listing to rent. Our analysis finds only the type of listing (entire home vs. room in a home), the number of days available, and location in South-West Amsterdam to be statistically significant, after accounting for trust, minimum nights, and the other regions of Amsterdam. From an economic perspective, the price of Airbnb listings is determined by supply and demand. Rentals with more available days may be booked more often and face more competition for dates, resulting in higher prices; this may be why the number of available days is a significant predictor. As for entire home vs. shared space, we think an entire home is perceived as a more comfortable and safe rental, a characteristic for which customers are willing to pay; thus, whether the Airbnb is an entire home significantly affects the price. The location of an Airbnb in the South-West of Amsterdam is also a significant predictor of its price. The South-West region includes part of the city center and the museum quarter, which might be a touristy area, contradicting the conclusion of Perez-Sanchez et al.'s study. We did not examine any interaction terms, because we did not suspect any interactions during the study.
Future studies might examine prices in Amsterdam using more contextualized information from other disciplines: our approach of dividing the city into four regions based on cardinal directions might work in cities in the US and elsewhere, but does not seem to map well onto Amsterdam.
Our dataset did not have enough information about which year the listings were from, so a longitudinal study would be more appropriate for accurately modelling the factors that influence prices in Amsterdam. We also rely on the authors of the dataset for the completeness and accuracy of the data.
Hadley Wickham (2017). tidyverse: Easily Install and Load the ‘Tidyverse’. R package version 1.2.1. https://CRAN.R-project.org/package=tidyverse
Hlavac, Marek (2018). stargazer: Well-Formatted Regression and Summary Statistics Tables. R package version 5.2.2. https://CRAN.R-project.org/package=stargazer
R Core Team (2019). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/.
V. Raul Perez-Sanchez, Leticia Serrano-Estrada, Pablo Marti, & Raul-Tomas Mora-Garcia. (2018). The What, Where, and Why of Airbnb Price Determinants. Sustainability, 10(12), 4596.
Zhihua Zhang, Rachel J. C. Chen, Lee D. Han, & Lu Yang. (2017). Key Factors Affecting the Price of Airbnb Listings: A Geographically Weighted Approach. Sustainability, 9(9), 1635.
|
I'm encountering a very unusual inconsistency when plotting the vector fields of a cylindrical waveguide mode. In a cylindrical waveguide operating in a given mode, the $\hat{e}_r$ and $\hat{e}_\phi$ components of the $\textbf{E}$-field vector (in a cylindrical coordinate system, with the z-axis aligned longitudinally with the direction of propagation) can be written in terms of the $\textbf{B}$-field vector components as follows: $$E_r(r,\phi,z,t) = -\frac{\omega}{k_z}B_\phi$$ $$E_\phi(r,\phi,z,t) = \frac{\omega}{k_z}B_r$$
With $B_r$ and $B_\phi$ given by, with the $'$ denoting differentiation with respect to r, and where n is a zero of the $l$th Bessel function: $$B_r(r,\phi,z=0,t=0) = \frac{inkc^2}{\omega^2-(kc)^2}J_l'(nr)e^{il\phi}$$ $$B_\phi(r,\phi,z=0,t=0) = \frac{-lkc^2}{\omega^2-(kc)^2}\frac{J_l(nr)}{r}e^{il\phi}$$
These expressions can be found in this link. For good measure, I also independently derived these expressions from scratch; they are indeed correct.
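For reference, here is a minimal numerical sketch of these expressions (arbitrary units; the values of $l$, $n$, $k_z$ and the dispersion relation $\omega^2 = c^2(k_z^2 + n^2)$ are my assumptions for the TE11 case):

```python
import numpy as np
from scipy.special import jv, jvp  # Bessel J_l and its derivative J_l'

# Assumed illustrative values: l = 1 (TE11), waveguide radius a = 1, and
# n taken as the first zero of J_1' (the TE11 cutoff condition); k_z is
# chosen freely and omega follows from omega^2 = c^2 (k_z^2 + n^2).
l = 1
n = 1.8412
kz = 1.0
c = 1.0
omega = c * np.sqrt(kz**2 + n**2)

def B_components(r, phi):
    # The B_r, B_phi expressions from the start of the post, at z = t = 0.
    pref = kz * c**2 / (omega**2 - (kz * c)**2)
    Br = 1j * n * pref * jvp(l, n * r) * np.exp(1j * l * phi)
    Bphi = -l * pref * jv(l, n * r) / r * np.exp(1j * l * phi)
    return Br, Bphi

def E_components(r, phi):
    # E_r = -(omega/k_z) B_phi,  E_phi = (omega/k_z) B_r
    Br, Bphi = B_components(r, phi)
    return -(omega / kz) * Bphi, (omega / kz) * Br

# Evaluate at a Cartesian point and keep the real (physical) parts:
x, y = 0.03, -0.02
r, phi = np.hypot(x, y), np.arctan2(y, x)
Er, Ephi = (f.real for f in E_components(r, phi))
```

The signs and relative magnitudes of `Er` and `Ephi` at sample points can then be compared against a hand calculation (up to an overall normalization and phase convention).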
Now, to illustrate the problem I am facing, I have written some Python code that plots the $\textbf{E}$ and $\textbf{B}$-field vectors for a given mode. In this post I shall focus my attention on the well known 'dominant' TE11 mode (this mode is also the easiest case to highlight the inconsistency I am encountering). For the TE11 mode, then, my code gives the following plot for the $\textbf{B}$-field (the waveguide region is the shaded circle; ignore the outside regions):
This plot is exactly correct; there are many images available in the literature for the fields of the TE11 mode. For example, see (here, the B-field is given by the horizontal vector field lines, and the E field the vertical 'bowed' lines):
It's clear that the $\textbf{B}$-fields match. Now comes the problem. If I plot the $\textbf{E}$-field components as given by the expressions at the start of this post, I obtain the following plot:
This clearly does not agree with the $\textbf{E}$-field plots found in the literature (yet arguably shares some similarities). However, the code is plotting the $\textbf{E}$-fields correctly, at least according to the equations given at the start of this post (note I've just used these plots to neatly illustrate my problem; this is a physics question and not a programming-related question). You get the same result if you manually hand-calculate a few values of the $\textbf{E}$-field and sketch it out.
For example, consider the $\textbf{E}$-field vector at the point (0.03, -0.02) on the above plot. Plugging these position values into the expressions given at the start of this post, one obtains (the units are totally arbitrary; all I care about is the direction of the vector): $$E_r=-1198, \quad E_\phi = -940$$
This corresponds to a vector pointing 'inwards and slightly upwards', which matches what is seen on the plot at this point. You can do this for many points; the plot always agrees with what is calculated.
$\textbf{Summary of problem}$: If the expressions at the start of this post are correct, then the (coloured) vector field plots above must be correct and all of the plots in the literature must be wrong (unlikely). Alternatively, the expressions at the start of this post are wrong, yet they calculate the $\textbf{B}$-field perfectly whilst getting the $\textbf{E}$-field completely wrong. The dilemma is that the equations for cylindrical waveguide modes are very well documented, and so it is unlikely that they too could be wrong.
|
Let's look at fixed time:
$$ E(x) = V(x)e^{-ik_0x} \equiv V(x)\phi_{k_0}(x) $$
I wrote $\phi_{k_0}(x)$ to highlight the fact that it is a fast phase modulation of a slowly varying function.
What does slowly varying mean? It means the Fourier transform:
$$ \tilde V(k) = \int{e^{ikx}V(x)dx} $$
has its power below some $k_V$ that satisfies:
$$ |k_V| \ll k_0$$
It also has some bandwidth $\Delta k \approx 2 k_V$ (there are positive and negative components for a real $V(x)$).
All this condition means is that the wavelength of $V(x)$ is much longer than the carrier wavelength $2\pi/k_0$. How much depends on your system. If it's microwaves in a circuit, then you have to look at the hardware. If you are solving a boundary value problem, you have to look at the conditions of the problem. A factor of "100" is certainly good, "10" is OK, and "2" is pushing it.
As pointed out in the comments, the FT of a product is the convolution of the factors' FTs, and:
$$ \tilde \phi_{k_0}(k) = 2\pi\,\delta(k - k_0) $$
Now, convolution with a delta function is a shift operation, hence $\tilde E(k)$ looks like $\tilde V(k)$ shifted from $0$ to the carrier wavenumber $k_0$. Thus, the signal has some bandwidth $\Delta k$ centered around $k_0$.
Now you can revisit the "slow" question by requiring $\Delta k/2 \ll k_0$, meaning all the power is in the positive wavenumbers; this is the same as the earlier condition, just viewed slightly differently. These are equivalent to the conditions on the derivative stated in the OP.
Adding the time-coordinate back in doesn't change anything, unless you have a difficult dispersion relation.
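A quick numerical check of this picture (the Gaussian envelope and all parameter values are assumptions for illustration):

```python
import numpy as np

# A slowly varying Gaussian envelope times the carrier e^{-i k0 x}:
# its spectrum should be the envelope's spectrum shifted to k0.
N, L = 4096, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k0 = 5.0
V = np.exp(-(x / 20.0) ** 2)          # envelope bandwidth k_V ~ 0.1 << k0
E = V * np.exp(-1j * k0 * x)

# FT in the e^{+ikx} convention used above. numpy's ifft carries the
# e^{+i...} kernel, so N*ifft(E) realizes that transform (up to a linear
# phase from the grid offset, which does not move the |spectrum| peak).
Ek = N * np.fft.ifft(E)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
k_peak = k[np.argmax(np.abs(Ek))]
# k_peak sits at the carrier: the spectrum of V has been shifted to k0.
```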
|
Logic and Reasoning Challenges
This page discusses issues to be resolved in the near future. These issues pertain to relation semantics as well as inference procedures.
Inferring in both directions on the taxonomy
It is desired that annotations to higher taxa in the taxonomy be propagated to the lower taxa that are subsumed by the higher taxon; i.e. classical top down inferences. Given that the reasoner already reasons bottom upward, associating phenotype annotations from the lower level taxa to the higher level taxa, adding top-down inferencing may cause widespread inconsistencies in the data if unchecked.
The OBD reasoner can reason from annotations at the lower levels of the taxonomy to the higher levels. Given that
Danio rerio exhibits a phenotype P, the OBD reasoner infers that Danio exhibits the same phenotype P. This is reasoning up the taxonomy, using the subsumption relationship between Danio rerio and Danio. This is possible because the annotations to each taxon are (implicitly) existentially quantified. The annotation " Danio rerio exhibits increased length of maxillary barbel towards orbit" is shown in (1). The semantics are in (2).
<javascript> TTO:1001979 PHENOSCAPE:exhibits PATO:0000573^OBO_REL:inheres_in(TAO:0001938)^OBO_REL:towards(TAO:0001967) -- (1) </javascript>
<math>\exists</math> X : instance_of(X, TTO:1001979) <math>\and</math> PHENOSCAPE:exhibits(X, PATO:0000573^OBO_REL:inheres_in(TAO:0001938)^OBO_REL:towards(TAO:0001967)) -- (2) Given that Danio rerio (TTO:1001979) is subsumed by the genus Danio (TTO:101040) in the Teleost Taxonomy as shown in (3), it is possible to infer that " Danio exhibits increased length of maxillary barbel towards orbit" (4).
<javascript> TTO:1001979 OBO_REL:is_a TTO:101040 -- (3) TTO:101040 PHENOSCAPE:exhibits PATO:0000573^OBO_REL:inheres_in(TAO:0001938)^OBO_REL:towards(TAO:0001967) -- (4) </javascript>
Inferring down the taxonomy, that is using assertions at higher levels to extract inferences at lower levels, requires universal quantification. For example, the assertion that all "Siluriformes exhibit decreased width of mesethmoid bone" can be captured using OBD semantics as shown in (5). The universal semantics of this assertion is shown in (6). Siluriformes directly subsumes Ictaluridae as shown in (7). From (5) and (7), it is straightforward to infer that "Ictaluridae exhibit decreased width of mesethmoid bone" as shown in (8).
<javascript> TTO:1380 PHENOSCAPE:exhibits PATO:0000599^OBO_REL:inheres_in(TAO:0000323) -- (5) </javascript>
<math>\forall</math> X : instance_of(X, TTO:1380) <math>\and</math> PHENOSCAPE:exhibits(X, PATO:0000599^OBO_REL:inheres_in(TAO:0000323)) -- (6) <javascript> TTO:10930 OBO_REL:is_a TTO:1380 -- (7) TTO:10930 PHENOSCAPE:exhibits PATO:0000599^OBO_REL:inheres_in(TAO:0000323) -- (8) </javascript>
The problem with using top-down inferences using universally quantified statements is that currently there is no way to distinguish these from existentially quantified statements. We use the
PHENOSCAPE:exhibits relation for existentially quantified statements. Using the same relation for universally quantified statements would make it possible to extract incorrect inferences given the current configuration. Consider the subsumption relationship between Danio and Danio choprai shown in (9). If there is no distinction between existentially and universally quantified statements, it is possible to infer from (9) and (4) the erroneous conclusion that " Danio choprai exhibits increased length of maxillary barbel towards orbit" (10). At present, there are no annotations to Danio choprai.
<javascript> TTO:1052801 OBO_REL:is_a TTO:101040 -- (9) TTO:1052801 PHENOSCAPE:exhibits PATO:0000573^OBO_REL:inheres_in(TAO:0001938)^OBO_REL:towards(TAO:0001967) -- (10) </javascript>
Recall that the reasoner works in sweeps. It extracts one set of inferences (Inf-1) from the assertions (A) in its first sweep. In the next sweep, the reasoner pulls out a different set of inferences (Inf-2) from the assertions A
AS WELL AS the inferences Inf-1 from the previous sweep. The reasoner repeats these sweeps until no new inferences are added. This is why the reasoner will likely infer that all taxa exhibit all phenotypes if it is used to reason both up and down the taxonomy without checking for universal and existential semantics.

Possible solutions
In this section, we discuss possible approaches to resolving this issue with reasoning both up and down the taxonomy.
Different relations for different purposes
In classical first-order logic (FOL), all relations and properties asserted upon concepts (or taxa in the case of Phenoscape) are inherited by the subsumed concepts. This is because by default, all assertions about the concepts are universally quantified, i.e. hold true for ALL instances of the concept. If all cars have four wheels, and if all SUVs are cars, then all SUVs have four wheels. This is the way of top-down, classical FOL inferencing.
In Phenoscape, we have adopted the OBD schema of modeling concepts, wherein all assertions to the concepts are existentially quantified, i.e. the assertion is true with at least one instance of the concept. This is very convenient for the life sciences, where exceptions are so prevalent. As a ready example, consider how the duck-billed platypus easily overrules the "all mammals are viviparous" rule. Further, existential quantification allows us to reason up the taxonomy. If some Teleostei exhibit round fins, and all Teleostei are Ostariophysi, then some Ostariophysi exhibit round fins.
By default, we use the
PHENOSCAPE:exhibits relation to link taxa to phenotypes using existential semantics. Using the same relation to model universally quantified relationships between taxa and phenotypes, would cause incorrect inferencing and loss of data integrity. The easiest way to address this issue is to use different relations; one for universally quantified relations and the other for existentially quantified relations. Let us call these relations PHENOSCAPE:all_exhibit and PHENOSCAPE:some_exhibit respectively.
Now the OBD reasoner uses the following rule to extract inferences up the taxonomy using the
PHENOSCAPE:exhibits relation.

Rule-1: <math>\forall</math>A, B, x: is_a(A, B) <math>\and </math> exhibits(A, x) <math>\Rightarrow</math> exhibits(B, x)
This can be replaced with the following two rules, which use the two new relations,
PHENOSCAPE:all_exhibit and PHENOSCAPE:some_exhibit. (Please suggest better names for these if you can think of them).

Rule-2: <math>\forall</math>A, B, x: is_a(A, B) <math>\and </math> some_exhibit(A, x) <math>\Rightarrow</math> some_exhibit(B, x)

Rule-3: <math>\forall</math>A, B, x: is_a(A, B) <math>\and </math> all_exhibit(B, x) <math>\Rightarrow</math> all_exhibit(A, x)
This will keep the inferences from getting mixed up. Let us consider the scenario where species Sp1 and Sp2 (from genus Gen1) are asserted to exhibit phenotype Phen1. These assertions are shown in (A-1) and (A-2). The subsumption relations are shown in (A-3) and (A-4)
<javascript> Sp1 PHENOSCAPE:some_exhibit Phen1 -- (A-1) Sp2 PHENOSCAPE:some_exhibit Phen1 -- (A-2) Sp1 OBO_REL:is_a Gen1 -- (A-3) Sp2 OBO_REL:is_a Gen1 -- (A-4) </javascript>
The reasoner makes the inference (I-1) from the assertions (A-1) ~ (A-4) and the inference rule Rule-2.
<javascript> Gen1 PHENOSCAPE:some_exhibit Phen1 -- (I-1) </javascript>
Now, given this new inference (I-1), the reasoner cannot infer that all the species Sp1, Sp2, and, let us say, 10 other species Sp3 ~ Sp12 also exhibit Phen1, because the inference rule for some_exhibit cannot be used to infer down the taxonomy. Again, consider the assertion that ALL instances of genus Gen1 exhibit a phenotype Phen2, as shown in (A-5)
<javascript> Gen1 PHENOSCAPE:all_exhibit Phen2 -- (A-5) </javascript>
Given (A-5) and all the subsumption relations between Gen1 and the hypothetical twelve species under Gen1 (including A-3 and A-4), the reasoner uses inference rule Rule-3 to infer (I-2) ~ (I-13)
<javascript> Sp1 PHENOSCAPE:all_exhibit Phen2 -- (I-2) Sp2 PHENOSCAPE:all_exhibit Phen2 -- (I-3) .. .. Sp12 PHENOSCAPE:all_exhibit Phen2 -- (I-13) </javascript>
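A minimal forward-chaining sketch of Rule-2 and Rule-3 (purely illustrative, not the OBD reasoner; only two of the hypothetical twelve species are included for brevity):

```python
# Illustrative forward-chaining sketch of Rule-2 and Rule-3: some_exhibit
# propagates only up the is_a hierarchy, all_exhibit only down, so the
# two relations never feed each other.
is_a = {("Sp1", "Gen1"), ("Sp2", "Gen1")}
some_exhibit = {("Sp1", "Phen1"), ("Sp2", "Phen1")}   # (A-1), (A-2)
all_exhibit = {("Gen1", "Phen2")}                     # (A-5)

changed = True
while changed:   # repeat sweeps until no new inferences are added
    changed = False
    for a, b in is_a:
        # Rule-2: is_a(A, B) & some_exhibit(A, x) => some_exhibit(B, x)
        for t, x in list(some_exhibit):
            if t == a and (b, x) not in some_exhibit:
                some_exhibit.add((b, x))
                changed = True
        # Rule-3: is_a(A, B) & all_exhibit(B, x) => all_exhibit(A, x)
        for t, x in list(all_exhibit):
            if t == b and (a, x) not in all_exhibit:
                all_exhibit.add((a, x))
                changed = True
```

At the fixpoint, (I-1) appears in some_exhibit and the species-level inferences (I-2), (I-3) appear in all_exhibit, while no erroneous downward some_exhibit-style conclusion is ever forced into all_exhibit.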
Again, cyclical inferences are ruled out because there are no inference rules to infer up the taxonomy using the
all_exhibit relation.

What has to change?
To implement this strategy, two new relations can be defined in the Phenoscape Vocab ontology, where the current definition of the
PHENOSCAPE:exhibits relation is found. At the curation level, curators have to qualify their assertions as being either existentially or universally quantified. Specifically, the Phenex UI could tap the curator's shoulder and ask, "Ahem, does this annotation hold true for all specimens belonging to this taxon or just some specimens?" This needs some changes (no less!) to the Phenex interface and also to the character matrix format in which the data is exported. The data loader module of Phenoscape has to know this information so that the appropriate relation is used in creating the taxon-phenotype statement to be loaded into the knowledgebase. The query module will have to be modified to retrieve both inferred and asserted taxon-phenotype statements using the two different relations. The JSON format in which the data is exported needs to be modified to accommodate the two different kinds of relation statements, and lastly the UI will have to explicitly distinguish between the two.

A possible simpler solution (Update: Feb 22, 2010)
It is possible to check the rank of the taxon to which the phenotype assertion is made. If the rank of the taxon is not "species", then the new relation
all_exhibit can be used in the top-down reasoning, as shown below.

Rule-4: <math>\forall</math>A, B, x: is_a(A, B) <math>\and </math> exhibit(B, x) <math>\and</math> ¬ rank(B, "species") <math>\Rightarrow</math> all_exhibit(A, x)
Assertion (A-6) is a phenotype assertion to a taxon of a higher rank, let us say a Genus Gen1. Now, let's assume Gen1 has 2 species Sp1 and Sp2. Inferences to Sp1 and Sp2 from the assertion (A-6) may use the new relation
all_exhibit as shown in (I-14) and (I-15).
<javascript> Gen1 PHENOSCAPE:exhibit Phen1 -- (A-6) Sp1 OBO_REL:is_a Gen1 -- (A-7) Sp2 OBO_REL:is_a Gen1 -- (A-8)
Sp1 PHENOSCAPE:all_exhibit Phen1 -- (I-14) Sp2 PHENOSCAPE:all_exhibit Phen1 -- (I-15) </javascript>
The inferences with the
all_exhibit relation cannot be associated with the higher taxa.
Similarly, for any taxon to which a phenotype assertion has been made, the phenotype can be inferred on the higher taxa using the
some_exhibit relation, as shown below.

Rule-5: <math>\forall</math>A, B, x: is_a(A, B) <math>\and </math> exhibit(A, x) <math>\Rightarrow</math> some_exhibit(B, x)

Given assertion (A-6) to a genus and assuming it belongs to a family Fam1, the inference (I-16) can be made about the family Fam1.

<javascript> Gen1 PHENOSCAPE:exhibit Phen1 -- (A-6) Gen1 OBO_REL:is_a Fam1 -- (A-9)
Fam1 PHENOSCAPE:some_exhibit Phen1 -- (I-16) </javascript>
The relation some_exhibit cannot be used to infer down the taxonomy. Therefore, we can see that this new methodology can use the existing assertions that use the exhibits relation, but can distinguish the inferences that are made in both top-down and bottom-up reasoning. The relations all_exhibit and some_exhibit are only inferred, never asserted. Moreover, they can never be used to create new inferences. This distinction averts the extraction of incorrect inferences.
In this solution, no changes are necessary to the NeXML format or to Phenex. The changes need to be made to the OBD reasoner to use these two new relations and the data query module needs to be changed to deal with the two new relations as well. The REST services would not need to be modified for this purpose.
Probabilistic assertions
Uncertainties are everywhere in the life sciences. The taxon-phenotype assertions can be augmented with uncertainty factors to address this issue. Inferences could use uncertainty calculi such as the Dempster-Schafer method or Bayes conditional probability rule to derive uncertainty factors of the inferences given the uncertainty factors of the assertions.
The advantage of this strategy will be that we can continue to use the
PHENOSCAPE:exhibits relation for taxon-phenotype statements, and at the same time display the uncertainty values associated with every assertion displayed at the UI; far more intuitive than "Taxon T exhibits increased size of E AND decreased size of E." What needs to change?
Curators will have to manually enter uncertainty factors (UFs) of the assertions in the Phenex UI, which needs modification to handle these. The character matrix format needs to be modified to accommodate UFs. The data loader module needs to use reified statements around assertions to store UFs. The OBD reasoner will have to be augmented with an implementation of uncertainty calculus. The query module needs to retrieve the UF associated with every assertion, and export this in a modified JSON format. Lastly, the UI will have to add a provision to display uncertainty factors.
The problem with absence of features
Descriptions of phenotypes as used in the Phenoscape project (and a plethora of phenomena in the real world) are replete with exceptions, or aberrations from what is considered to be "normal." While canonical ontologies like the FMA and the TAO contain ontological definitions of ideal specimens, observations in the life sciences are full of aberrations to these general rules.
Phenoscape has some typical issues dealing with absence of anatomical features in certain species of Ostariophysian fishes. For example, the basihyal cartilage is found in all species of Ostariophysian fishes, except the Siluriformes. At present, this information is captured in Phenoscape using the combination of the PATO term for "absent in organism" (PATO:0000462), the "inheres_in" relation from the OBO Relations Ontology, the TAO term for "basihyal cartilage" (TAO:0001510), the "exhibits" relation from the PHENOSCAPE ontology, and the TTO term for Siluriformes (TTO:1380). This is shown below.
<javascript> TTO:1380 PHENOSCAPE:exhibits PATO:0000462^OBO_REL:inheres_in(TAO:0001510) </javascript>
In plain English, this translates to "Siluriformes exhibit absence in organism which inheres in basihyal cartilage." The semantics of this sentence are vague to say the least. Going by this methodology, it is impossible to state that basihyal cartilage is absent in Siluriformes without referring to
at least one instance of basihyal cartilage. Combining a quality absent with a feature through the inheres_in property is very misleading in itself (ex: absence inheres in cartilage), contorting the intrinsic semantics of the inheres_in relation. These problems have been discussed in Ceusters et al. and Hoehndorf et al. Both of these publications propose solutions to integrate these aberrant observations with canonical definitions, without causing inconsistencies in reasoning procedures.
Another issue specific to the Phenoscape project was raised by Paula at the SICB workshop. Given that basihyal cartilage is absent in Siluriformes, basihyal bone should be absent in Siluriformes as well. This is because basihyal bone develops from basihyal cartilage. This may be inferred by adding a new relation chaining rule, shown below, to the OBD reasoner:
Rule:<math>\forall</math>F1, F2, S: absent_in(F1, S) <math>\and</math> develops_from(F2, F1) <math>\Rightarrow</math> absent_in(F2, S)
This relation chain corresponds to the observation GIVEN THAT Basihyal_Cartilage
absent_in Siluriformes AND Basihyal_Bone develops_from Basihyal_cartilage, THEN Basihyal_Bone absent_in Siluriformes. This and other similar relation chains (as per identified requirements) are to be implemented for the Phenoscape project in the future. Strategies to deal with absent features in general are also to be implemented in the near future.
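The chaining rule can likewise be sketched in a few lines (again purely illustrative, not the OBD reasoner):

```python
# Illustrative sketch of the relation chain:
# absent_in(F1, S) & develops_from(F2, F1) => absent_in(F2, S)
absent_in = {("Basihyal_Cartilage", "Siluriformes")}
develops_from = {("Basihyal_Bone", "Basihyal_Cartilage")}

changed = True
while changed:   # iterate to a fixpoint, as the reasoner's sweeps do
    changed = False
    for f2, f1 in develops_from:
        for f, s in list(absent_in):
            if f == f1 and (f2, s) not in absent_in:
                absent_in.add((f2, s))
                changed = True
# absent_in now also contains ("Basihyal_Bone", "Siluriformes")
```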
Differences between the existing semantics and desired semantics of the
exhibits relation need to be resolved to address this issue. Potential strategies to implement the absence of features problem are discussed here.
|
I have to solve a problem using bipartite graphs and matchings. The way I modeled it is to have a graph $G=(A \cup B, E)$, and have the vertices in $A$ represent persons and the vertices in $B$ representing clubs. Then, the edges represent club membership.
How can I find the smallest possible value of $K$ that guarantees there is an assignment that satisfies the following conditions?
1. A person can be a member of at most 50 clubs
2. Each club must have a president (who is a member of the club)
3. A person can be president of at most 5 clubs
4. Each club must have at least $K$ members
Here is a reformulation I made of the problem in mathematical terms: What value of $K$ ensures that $G$ has a $B$-covering matching $M$, knowing:
1. $\forall v \in A,\ \deg(v) \leq 50$
2. $\forall v \in A,\ v$ is incident to at most 5 edges in $M$
3. $\forall v \in B,\ \deg(v) \geq K$
I know we need to find a matching (to model the presidency relation) in $G$ that contains every vertex from $B$, but I am unsure how to find the value of $K$ that ensures that such a matching can in fact be obtained. Any vertex in $A$ can be incident to at most 5 edges of the matching. How should I approach the next step of the problem? Thanks
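As a sanity check on small instances (my own construction, not part of the problem statement), the president condition can be tested with max flow: source → club with capacity 1, club → each member with capacity 1, person → sink with capacity 5; a valid assignment exists iff the max flow equals the number of clubs.

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp on a nested dict cap[u][v] = residual capacity."""
    total = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:           # BFS for an augmenting path
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        path, v = [], t                        # recover the s-t path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path) # bottleneck capacity
        for u, v in path:
            cap[u][v] -= push
            cap[v][u] += push                  # residual (reverse) edge
        total += push

def has_president_assignment(membership, per_person=5):
    """membership: dict mapping each club to the list of its members."""
    cap = defaultdict(lambda: defaultdict(int))
    for club, members in membership.items():
        cap["s"][("club", club)] = 1           # each club needs 1 president
        for p in members:
            cap[("club", club)][("person", p)] = 1
            cap[("person", p)]["t"] = per_person  # at most 5 presidencies
    return max_flow(cap, "s", "t") == len(membership)
```

For instance, six clubs whose only member is one and the same person admit no valid assignment (the person can absorb only 5 presidencies), while five such clubs do.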
|
Recently, I have been reading a Schaum's book, General Topology, which introduced the concepts of cardinality and order, but it points out some things that I really don't understand.
If $A\preceq B$ and $B\preceq A$, then $A \sim B$.
If $X\supseteq Y\supseteq X_1$ and $X\sim X_1$, then $X\sim Y$
Prove that these two statements are equivalent; both are forms of the Schroeder-Bernstein Theorem. In fact, I know how to prove each of them separately, but is there an easy way to prove that they are equivalent?
2. In the book, it mentions the 'axiom of choice' a lot. So I searched for it on Wikipedia and found that it means: $$\forall X\left[\emptyset\notin X\Rightarrow \exists f:X\rightarrow\bigcup X\ \ \forall A\in X\,(f(A)\in A)\right]$$
It is quite easy to understand. But the book said this is equivalent to Zorn's Lemma:
Let $X$ be a non-empty partially ordered set in which every totally ordered subset has an upper bound. Then X contains at least one maximal element.
The wiki also mentions that $\aleph_0$ is the smallest cardinality of an infinite set, and that this is derived directly from the axiom of choice. I don't understand why either of these holds.
3. Prove the law of trichotomy: given any pair of sets, either $A\prec B$, $B\prec A$, or $A\sim B$. It is said that this can be proved by using Zorn's Lemma.
|
This is a pattern even school kids could discover (when gently pointed to it). I never did so consciously, and I cannot remember having been pointed to it explicitly, either at school or later:
$$\color{red}{\mathbf{2}}\cdot 9 = 1\color{red}{\mathbf{8}}$$ $$\color{red}{\mathbf{8}}\cdot 9 = 7\color{red}{\mathbf{2}}$$
$$\color{blue}{\mathbf{3}}\cdot 9 = 2\color{blue}{\mathbf{7}}$$ $$\color{blue}{\mathbf{7}}\cdot 9 = 6\color{blue}{\mathbf{3}}$$
$$\color{green}{\mathbf{4}}\cdot 9 = 3\color{green}{\mathbf{6}}$$ $$\color{green}{\mathbf{6}}\cdot 9 = 5\color{green}{\mathbf{4}}$$
which may come as kind of a miracle when first discovering it.
In mathematical terms
$$\boxed{a\cdot (10-1) \equiv b \mod 10\ \ \ \ \Leftrightarrow\ \ \ \ \ b\cdot (10-1) \equiv a \mod 10 \\ a\cdot (10-1) \equiv b \mod 10\ \ \ \ \Leftrightarrow\ \ \ \ \ a + b = 10 \equiv 0 \mod 10}$$
This holds not only for $10$ but for every $p \in \mathbb{N}$, i.e. in every "number system":
$$\boxed{a\cdot (p-1) \equiv b \mod p\ \ \ \ \Leftrightarrow\ \ \ \ \ b\cdot (p-1) \equiv a \mod p \\ a\cdot (p-1) \equiv b \mod p\ \ \ \ \Leftrightarrow\ \ \ \ \ a + b = p \equiv 0 \mod p}$$
and is responsible for the fact that the graphical multiplication table of $\mathbb{Z}/p\mathbb{Z}$ always looks the same for $p-1$:
I wonder if there are attempts (in educational research and literature) to make use of the simple observability of the pattern above to explain to (clever) school kids that the observed regularity is not by pure coincidence, why it is so, and what it does "mean".
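The boxed equivalences are easy to check exhaustively, which might itself be a nice classroom exercise:

```python
# Brute-force check of the boxed equivalences: for every modulus p and
# all residues a, b, the three conditions
#   a*(p-1) = b (mod p),  b*(p-1) = a (mod p),  a + b = 0 (mod p)
# are pairwise equivalent (since a*(p-1) = -a mod p).
for p in range(2, 60):
    for a in range(p):
        for b in range(p):
            c1 = (a * (p - 1)) % p == b % p
            c2 = (b * (p - 1)) % p == a % p
            c3 = (a + b) % p == 0
            assert c1 == c2 == c3
print("verified for p = 2..59")
```

The range of moduli tested is arbitrary; the underlying reason, $a\cdot(p-1)\equiv -a \pmod p$, makes the check succeed for every $p$.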
|
We denote by $\mathcal{U}_0$ the family of all convex and balanced subsets $U \subset \mathcal{D}(\Omega)$ such that $U \cap \mathcal{D}_K(\Omega) \in \mathcal{T}_K$, where $\mathcal{T}_K$ is the topology on $\mathcal{D}_K(\Omega):=\lbrace \varphi \in C^{\infty}(\Omega) : \mathrm{supp}(\varphi) \subset K \rbrace$ defined by the seminorms $p_{K,N}(\varphi):=\sup_{|\alpha| \leq N ;\, x \in K} |D^\alpha \varphi(x)|$. It can be shown that $\mathcal{U}:=\lbrace \varphi + U : \varphi \in \mathcal{D}(\Omega), U \in \mathcal{U}_0 \rbrace$ is a base for the vector topology on $\mathcal{D}(\Omega)$, as in the book Functional Analysis by Rudin, pages 152-153. In particular the topology $\mathcal{T}$ on $\mathcal{D}(\Omega)$ is a Hausdorff topology, and my question is about this step:
If $\varphi_1 \neq \varphi_2$ are test functions, we define:
$\displaystyle U:= \lbrace \varphi \in \mathcal{D}(\Omega) : \sup_{x \in \Omega} |\varphi(x)| < \sup_{x \in \Omega} |\varphi_1(x) - \varphi_2(x)| \rbrace$
we have $U \in \mathcal{U}_0$ and $\varphi_1 \notin \varphi_2 + U$. It follows that the singleton $\lbrace \varphi_1 \rbrace$ is closed in the topology $\mathcal{T}$ (why?). Then, since $\varphi_1 \neq \varphi_2$, there is $U' \in \mathcal{U}_0$ such that $(\varphi_1 + U') \cap (\varphi_2 + U') = \emptyset$, so $\mathcal{T}$ is a Hausdorff topology. (Is this conclusion correct?)
|
Folk theorem (game theory): a solution concept in game theory.
For an infinitely repeated game, any Nash equilibrium payoff must weakly dominate the minmax payoff profile of the constituent stage game. This is because a player achieving less than his minmax payoff always has an incentive to deviate by simply playing his minmax strategy at every history. The folk theorem is a partial converse of this: a payoff profile is said to be feasible if it lies in the convex hull of the set of possible payoff profiles of the stage game. The folk theorem states that any feasible payoff profile that strictly dominates the minmax profile can be realized as a Nash equilibrium payoff profile, with a sufficiently large discount factor.
For example, in the Prisoner's Dilemma, both players cooperating is not a Nash equilibrium. The only Nash equilibrium is given by both players defecting, which is also a mutual minmax profile. The folk theorem says that, in the infinitely repeated version of the game, provided players are sufficiently patient, there is a Nash equilibrium such that both players cooperate on the equilibrium path.
In mathematics, the term folk theorem refers generally to any theorem that is believed and discussed but has not been published. In order that the name of the theorem be more descriptive, Roger Myerson has recommended the phrase general feasibility theorem in place of folk theorem for describing theorems of this class. [1]

Sketch of proof
The proof of the non-perfect folk theorem employs what is called a grim trigger strategy (Rubinstein 1979). All players start by playing the prescribed action and continue to do so until someone deviates. If player $i$ deviates, all players switch to the strategy which minmaxes player $i$ forever after. For sufficiently patient players, the potential one-stage gain from deviation will not be enough to cover the loss from punishment. Thus all players stay on the intended path.
In more detail, assume that the payoff of a player in an infinitely repeated game is given by the average discounted criterion with discount factor $0<\delta<1$: if a strategy profile results in the path of histories $\{h_t\}$, player $i$'s payoff is
$$(1-\delta) \sum_{t \geq 0} \delta^t u_i(h_t),$$
where $u_i$ is player $i$'s utility in the constituent stage game $G$. The discount factor indicates how patient the players are.
Let $a$ be a pure strategy profile with payoff profile $v$ which strictly dominates the minmax payoff profile. One can define a Nash equilibrium with $v$ as the resulting payoff profile as follows:
1. All players start by playing $a$ and continue to play $a$ if no deviation occurs.
2. If any one player, say player $i$, deviates, play the strategy profile $m$ which minmaxes $i$ forever after.
3. Ignore multilateral deviations.
If player $i$ gets $\epsilon$ more than his minmax payoff each stage by following 1, then the potential loss from punishment is
$$\frac{1}{1-\delta} \epsilon.$$
If $\delta$ is close to 1, this outweighs any finite one-stage gain, making the strategy a Nash equilibrium.
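As a concrete check of this threshold logic, take the Prisoner's Dilemma with illustrative stage payoffs (my assumption, not from the article): mutual cooperation pays 2, defecting against a cooperator pays 3, and the mutual-minmax payoff is 1. Under the average discounted criterion, grim trigger is an equilibrium exactly when $(1-\delta)\cdot 3 + \delta \cdot 1 \le 2$, i.e. $\delta \ge 1/2$:

```python
# Grim trigger in a repeated Prisoner's Dilemma (payoffs 2/3/1 are
# illustrative assumptions): cooperating forever yields an average
# discounted payoff of 2; a one-shot deviation yields 3 now and the
# minmax payoff 1 in every later stage.
def cooperation_payoff(delta):
    return 2.0  # (1-delta) * sum(delta**t * 2) = 2

def deviation_payoff(delta):
    # (1-delta) * (3 + delta + delta**2 + ...) = (1-delta)*3 + delta
    return (1 - delta) * 3 + delta * 1

def grim_trigger_is_ne(delta):
    # Equilibrium iff deviating does not pay.
    return deviation_payoff(delta) <= cooperation_payoff(delta)

# Critical discount factor: (1-d)*3 + d = 2  =>  d = 1/2.
print(grim_trigger_is_ne(0.6))  # True
print(grim_trigger_is_ne(0.4))  # False
```

The same computation with any feasible payoff strictly above the minmax profile reproduces the "δ close to 1" condition of the theorem.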
The above Nash equilibrium need not be subgame perfect. The threat of punishment may not be credible. Under the additional assumption that the set of feasible payoff profiles is full dimensional and the minmax profile lies in its interior, the argument can be strengthened to achieve subgame perfection as follows.
1. All players start by playing $a$ and continue to play $a$ if no deviation occurs.
2. If any one player, say player $i$, deviates, play the strategy profile $m$ which minmaxes $i$ for $N$ periods. (Choose $N$ and $\delta$ large enough so that no player has an incentive to deviate from phase 1.)
3. If no player deviated from phase 2, every player $j \neq i$ gets rewarded $\epsilon$ above $j$'s minmax forever after, while player $i$ continues receiving his minmax. (Full-dimensionality and the interior assumption are needed here.)
4. If player $j$ deviated from phase 2, all players restart phase 2 with $j$ as target.
5. Ignore multilateral deviations.
Player $j \neq i$ now has no incentive to deviate from the punishment phase 2. This proves the subgame perfect folk theorem.

Applications
It is possible to apply this class of theorems to a diverse number of fields. An application in anthropology, for example, would be that in a community where all behavior is well known, and where members of the community know that they will continue to have to deal with each other, then any pattern of behavior (traditions, taboos, etc.) may be sustained by social norms so long as the individuals of the community are better off remaining in the community than they would be leaving the community (the minimax condition).
On the other hand, MIT economist Franklin Fisher has noted that the folk theorem is not a positive theory. [2] In considering, for instance, oligopoly behavior, the folk theorem does not tell the economist what firms will do, but rather that cost and demand functions are not sufficient for a general theory of oligopoly, and the economist must include the context within which oligopolies operate in the theory. [2]
In 2007, Borgs et al. proved that, despite the folk theorem, in the general case computing the Nash equilibria for repeated games is not easier than computing the Nash equilibria for one-shot finite games, a problem which lies in the PPAD complexity class.
[3]
Notes
1. Myerson, Roger B. Game Theory: Analysis of Conflict, Cambridge: Harvard University Press (1991).
2. Fisher, Franklin M. "Games Economists Play: A Noncooperative View", The RAND Journal of Economics, Vol. 20, No. 1 (Spring 1989), pp. 113–124; this particular discussion is on page 118.
3. Christian Borgs, Jennifer Chayes, Nicole Immorlica, Adam Tauman Kalai, Vahab Mirrokni, and Christos Papadimitriou (2007). "The Myth of the Folk Theorem" (PDF).
References
Friedman, J. (1971), "A non-cooperative equilibrium for supergames", Review of Economic Studies 38(1): 1–12, JSTOR 2296617, doi:10.2307/2296617.
Rubinstein, Ariel (1979), "Equilibrium in Supergames with the Overtaking Criterion", Journal of Economic Theory 21: 1–9, doi:10.1016/0022-0531(79)90002-4.
Mas-Colell, A., Whinston, M., and Green, J. (1995), Microeconomic Theory, Oxford University Press, New York. (Readable; suitable for advanced undergraduates.)
Tirole, J. (1988), The Theory of Industrial Organization, MIT Press, Cambridge, MA. (An organized introduction to industrial organization.)
Ratliff, J. (1996), A Folk Theorem Sampler. (A set of introductory notes to the folk theorem.)
|
Newspace parameters
Level: \( N = 3600 = 2^{4} \cdot 3^{2} \cdot 5^{2} \); Weight: \( k = 1 \); Character orbit: \([\chi]\) = 3600.e (of order \(2\) and degree \(1\))

Newform invariants
Self dual: Yes; Analytic conductor: \(1.79663404548\); Analytic rank: \(0\); Dimension: \(1\); Coefficient field: \(\mathbb{Q}\); Coefficient ring: \(\mathbb{Z}\); Coefficient ring index: \(1\); Projective image: \(D_{2}\); Projective field: Galois closure of \(\Q(\zeta_{12})\); Artin image size: \(8\); Artin image: $D_4$; Artin field: Galois closure of 4.0.10800.2

Character values
We give the values of \(\chi\) on generators for \(\left(\mathbb{Z}/3600\mathbb{Z}\right)^\times\).
\(\chi(577)=1\), \(\chi(901)=1\), \(\chi(2801)=1\), \(\chi(3151)=-1\).
For each embedding \(\iota_m\) of the coefficient field, the values \(\iota_m(a_n)\) are shown below.
For more information on an embedded modular form you can click on its label.
Label 3151.1: \(a_{2}=a_{3}=a_{4}=a_{5}=a_{6}=a_{7}=a_{8}=a_{9}=a_{10}=0\).
Inner twists (char. orbit, parity, mult., self twist, proved):
1.a, even, 1, trivial, yes
3.b, odd, 1, CM by \(\Q(\sqrt{-3})\), yes
4.b, odd, 1, CM by \(\Q(\sqrt{-1})\), yes
12.b, even, 1, RM by \(\Q(\sqrt{3})\), yes
This newform can be constructed as the intersection of the kernels of the following linear operators acting on \(S_{1}^{\mathrm{new}}(3600, [\chi])\):
\(T_{7}\), \(T_{13} - 2\)
|
Generally, we know that energy is conserved, and Hamiltonian mechanics describes particle motion via energy conservation and conversion. Quantum mechanics is built on the conservation of energy, and everything works out very well. But in general relativity, there seems to be no conservation of energy. Why is this? What does it mean that energy is not conserved? Where does the energy come from?
Energy in General Relativity
Energy conservation arises from invariance under translations in time, and in general this invariance will not hold. In general relativity, we do have the analogue,
$$\nabla_\mu T^{\mu\nu} = 0$$
however, this does not imply that energy is conserved, because one cannot bring the expression into integral form, as one normally can when applying Noether's theorem to field theories, where we can define a conserved current $\partial_\mu j^\mu = 0$ and a conserved charge,
$$Q = \int d^3x \, j^0.$$
Locality
It is possible to define a Landau-Lifshitz pseudo-tensor $\tau^{\mu\nu}$ which ascribes stress-energy to the gravitational field, such that,
$$\partial_\mu (T^{\mu\nu} + \tau^{\mu\nu}) = 0,$$
from which one can define a momentum $P^\mu$ and an angular momentum $J^{\mu\nu}$. However, the modified stress-energy, and $\tau^{\mu\nu}$ itself, has no geometric, coordinate-free significance: it may vanish in one coordinate system and not in another.
The trouble boils down to the fact that gravitational energy cannot be localised. For electromagnetism, one can speak of a region of space-time with some energy density due to the electromagnetic field, which is responsible for inducing curvature and changing the worldlines passing through it.
Due to the equivalence principle, locally we can always define a coordinate system wherein the gravitational field vanishes, which means it does not make sense to speak of a local gravitational energy density.
Alternate Definitions
Nevertheless, there are further analogues of energy and other quantities of the Hamiltonian formalism in general relativity which are sometimes useful, though there are issues with these as aforementioned, such as either coordinate dependence, or other ambiguities. One expression is the quasilocal energy, defined as,
$$E = -\int_B d^2 x \, \frac{\delta S_{\mathrm{cl}}}{\delta N} = \frac{1}{\kappa}\int_B d^2x\, \sqrt{\sigma}k \big\rvert_{\mathrm{cl}}$$
where $B$ is the boundary of a spatial hypersurface $\Sigma$, with $\sigma$ and $k$ the metric and extrinsic curvature respectively. Contributions from flat space must be subtracted off typically, and there is an ambiguity in this due to two possible signs of the normal.
If we parametrize a system by a coordinate $\lambda$ for a path in the state space, then the action of a system is, $$S = \int_{\lambda'}^{\lambda''}d\lambda \, \left( p\dot x - \dot t H(x,p,t)\right).$$
For a classical history,
$$H_{\mathrm{cl}}\big\rvert_{\lambda''} = -\frac{\delta S_{\mathrm{cl}}}{\delta t''}$$
which is to say the energy at the boundary $\lambda''$ is minus the change in the classical action due to an increase in the final boundary time, $t(\lambda'') = t''.$ The expression for the quasilocal energy in general relativity is precisely the closest analogue to this Hamilton-Jacobi equation.
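The boundary-variation origin of this Hamilton-Jacobi relation can be sketched in one line (a standard variational argument, stated here with the endpoint position held fixed):

```latex
\delta S_{\mathrm{cl}}
  = \big[\, p\,\delta x - H\,\delta t \,\big]_{\lambda'}^{\lambda''},
\qquad\text{so with } \delta x'' = 0:\quad
\delta S_{\mathrm{cl}} = -H\big\rvert_{\lambda''}\,\delta t''
\;\Longrightarrow\;
H_{\mathrm{cl}}\big\rvert_{\lambda''} = -\frac{\delta S_{\mathrm{cl}}}{\delta t''}.
```

The quasilocal energy above plays exactly the role of $H$ here, with the boundary time of the gravitational action in place of $t''$.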
For more details and an illuminating discussion of energy conservation in general relativity, see
Quasilocal Energy in General Relativity by D. Brown and J.W. York in the text, Mathematical Aspects of Classical Field Theory 132 by the AMS.
No, energy is not always conserved in general relativity. There is no way to define the energy of an isolated system as a function of its state such that total energy and momentum are conserved whenever two systems combine or undergo a hyperbolic orbit, while also reducing to the special-relativistic definition for low mass and density. An electromagnetic field cannot affect a gravitational field directly; it can only accelerate a particle, which in turn affects the gravitational field, so in the absence of matter an electromagnetic field cannot affect a gravitational field. According to the Wikipedia article on the no-hair theorem, the observable state of a black hole is completely described by its mass, charge, and angular momentum. An electric field cannot accelerate a charged black hole, because there is no matter outside its event horizon for it to accelerate and thereby affect the geometry outside the horizon, so energy is not always conserved in general relativity.
|
LaTeX:Symbols
LaTeX About - Getting Started - Diagrams - Symbols - Downloads - Basics - Math - Examples - Pictures - Layout - Commands - Packages - Help
This article will provide a short list of commonly used LaTeX symbols.
Contents Common Symbols Operators Finding Other Symbols
Here are some external resources for finding less commonly used symbols:
Detexify is an app which allows you to draw the symbol you'd like and shows you the code for it! MathJax (what allows us to use LaTeX on the web) maintains a list of supported commands. The Comprehensive LaTeX Symbol List. Operators Relations
Symbol Command Symbol Command Symbol Command \le \ge \neq \sim \ll \gg \doteq \simeq \subset \supset \approx \asymp \subseteq \supseteq \cong \smile \sqsubset \sqsupset \equiv \frown \sqsubseteq \sqsupseteq \propto \bowtie \in \ni \prec \succ \vdash \dashv \preceq \succeq \models \perp \parallel \mid \bumpeq
Negations of many of these relations can be formed by just putting \not before the symbol, or by slipping an n between the \ and the word. Here are a few examples, plus a few other negations; it works for many of the others as well.
Symbol Command Symbol Command Symbol Command \nmid \nleq \ngeq \nsim \ncong \nparallel \not< \not> \not= \not\le \not\ge \not\sim \not\approx \not\cong \not\equiv \not\parallel \nless \ngtr \lneq \gneq \lnsim \lneqq \gneqq
To use other relations not listed here, such as =, >, and <, in LaTeX, you may just use the symbols on your keyboard.
Greek Letters
Symbol Command Symbol Command Symbol Command Symbol Command \alpha \beta \gamma \delta \epsilon \varepsilon \zeta \eta \theta \vartheta \iota \kappa \lambda \mu \nu \xi \pi \varpi \rho \varrho \sigma \varsigma \tau \upsilon \phi \varphi \chi \psi \omega
Symbol Command Symbol Command Symbol Command Symbol Command \Gamma \Delta \Theta \Lambda \Xi \Pi \Sigma \Upsilon \Phi \Psi \Omega Arrows
Symbol Command Symbol Command \gets \to \leftarrow \Leftarrow \rightarrow \Rightarrow \leftrightarrow \Leftrightarrow \mapsto \hookleftarrow \leftharpoonup \leftharpoondown \rightleftharpoons \longleftarrow \Longleftarrow \longrightarrow \Longrightarrow \longleftrightarrow \Longleftrightarrow \longmapsto \hookrightarrow \rightharpoonup \rightharpoondown \leadsto \uparrow \Uparrow \downarrow \Downarrow \updownarrow \Updownarrow \nearrow \searrow \swarrow \nwarrow
(For those of you who hate typing long strings of letters, \iff and \implies can be used in place of \Longleftrightarrow and \Longrightarrow respectively.)
Dots
Symbol Command Symbol Command \cdot \vdots \dots \ddots \cdots \iddots Accents
Symbol Command Symbol Command Symbol Command \hat{x} \check{x} \dot{x} \breve{x} \acute{x} \ddot{x} \grave{x} \tilde{x} \mathring{x} \bar{x} \vec{x}
When applying accents to i and j, you can use \imath and \jmath to keep the dots from interfering with the accents:
Symbol Command Symbol Command \vec{\jmath} \tilde{\imath}
\tilde and \hat have wide versions that allow you to accent an expression:
Symbol Command Symbol Command \widehat{7+x} \widetilde{abc} Others Command Symbols
Some symbols are used in commands so they need to be treated in a special way.
Symbol Command Symbol Command Symbol Command Symbol Command \textdollar or $ \& \% \# \_ \{ \} \backslash
(Warning: Using $ for will result in . This is a bug as far as we know. Depending on the version of this is not always a problem.)
European Language Symbols
Symbol Command Symbol Command Symbol Command Symbol Command {\oe} {\ae} {\o} {\OE} {\AE} {\AA} {\O} {\l} {\ss} !` {\L} {\SS} Bracketing Symbols
In mathematics, sometimes we need to enclose expressions in brackets or braces or parentheses. Some of these work just as you'd imagine in LaTeX; type ( and ) for parentheses, [ and ] for brackets, and | and | for absolute value. However, other symbols have special commands:
Symbol Command Symbol Command Symbol Command \{ \} \| \backslash \lfloor \rfloor \lceil \rceil \langle \rangle
You might notice that if you use any of these to typeset an expression that is vertically large, like
(\frac{a}{x} )^2
the parentheses don't come out the right size:
If we put \left and \right before the relevant parentheses, we get a prettier expression:
\left(\frac{a}{x} \right)^2
gives
And with system of equations:
\left\{\begin{array}{l}x+y=3\\2x+y=5\end{array}\right.
Gives
Note that there is a dot after \right. You must include that dot or the code won't work.
And, if you type this
\underbrace{a_0+a_1+a_2+\cdots+a_n}_{x}
Gives
Or
\overbrace{a_0+a_1+a_2+\cdots+a_n}^{x}
Gives
\left and \right can also be used to resize the following symbols:
Symbol Command Symbol Command Symbol Command \uparrow \downarrow \updownarrow \Uparrow \Downarrow \Updownarrow Multi-Size Symbols
Some symbols render differently in inline math mode and in display mode. Display mode occurs when you use \[...\] or $$...$$, or environments like \begin{equation}...\end{equation}, \begin{align}...\end{align}. Read more in the commands section of the guide about how symbols which take arguments above and below the symbols, such as a summation symbol, behave in the two modes.
In each of the following, the two images show the symbol in display mode, then in inline mode.
Symbol Command Symbol Command Symbol Command \sum \int \oint \prod \coprod \bigcap \bigcup \bigsqcup \bigvee \bigwedge \bigodot \bigotimes \bigoplus \biguplus
|
With respect to the coordinates $(x^{0},x^{1},x^{2},x^{3})=(v,r,\theta,\phi)$, we have the following components of the metric tensor:
$\begin{bmatrix} g_{00} & g_{01} & g_{02} & g_{03} \\[0.3em] g_{10} & g_{11} & g_{12} & g_{13} \\[0.3em] g_{20} & g_{21} & g_{22} & g_{23} \\[0.3em] g_{30} & g_{31} & g_{32} & g_{33} \end{bmatrix}$ $=\begin{bmatrix} 1-\frac{2M}{r} & 1 & 0 & 0 \\[0.3em] 1 & 0 & 0 & 0 \\[0.3em] 0 & 0 & r^{2} & 0 \\[0.3em] 0 & 0 & 0 & r^{2}\sin^{2}\theta \end{bmatrix}$
Suppose we've got a family of hypersurfaces defined by $v=\mathrm{constant}$. I've been asked to characterize the normal vectors to these hypersurfaces (whether they are timelike, spacelike or null).
If I knew the form of the normal vectors, I would have no trouble determining whether they are TL, SL or null. However, I have no idea how to compute the normal vectors.
My logic so far has been to associate the family of hypersurfaces with a vector:
$h=\begin{bmatrix} d\\[0.3em] r\\[0.3em] \theta \\[0.3em] \phi \end{bmatrix}$
Where $r$, $\theta$, and $\phi$ are free, and $d\in \mathbb{R}$ is a constant (in the place of $v$).
I know that in regular flat Euclidean space $\mathbb{R}^{n}$, if you've got a function $\ f$ characterizing a hypersurface (a $\mathbb{R}^{n-1}$ object), you can find the normal vectors to the hypersurface by playing with the gradient of $\ f$.
However, this Lorentzian manifold is not flat, and I don't even know how to find a function like $\ f$ in this case - all I know is that $v$ is constant.
Can someone prod me in the right direction?
EDIT: I should mention that the above uses Eddington-Finkelstein coordinates in the Schwarzschild spacetime where $v=t+r+2m\mathrm{log}(\frac{r}{2m}-1)$.
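One way to check the causal character directly is to note that the normal covector to a level set $v=\mathrm{constant}$ is $n_\mu = (\mathrm{d}v)_\mu = (1,0,0,0)$, so its norm is $g^{\mu\nu}n_\mu n_\nu = g^{vv}$. A sketch of that computation, assuming sympy is available:

```python
# Compute g^{vv} for the ingoing Eddington-Finkelstein metric given
# above; the normal to v = const is the covector dv = (1, 0, 0, 0),
# so its norm squared is the (0,0) component of the inverse metric.
import sympy as sp

M, r, theta = sp.symbols('M r theta', positive=True)
g = sp.Matrix([
    [1 - 2*M/r, 1, 0,     0],
    [1,         0, 0,     0],
    [0,         0, r**2,  0],
    [0,         0, 0,     r**2 * sp.sin(theta)**2],
])
g_inv = g.inv()

norm = sp.simplify(g_inv[0, 0])
print(norm)  # 0 -> the normal to v = const is null everywhere
```

This is the gradient trick from flat space carried over: on a Lorentzian manifold the normal to $f=\mathrm{constant}$ is $\mathrm{d}f$, and one classifies it with the inverse metric rather than the Euclidean dot product.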
|
I am playing around with the mean expected lifetime formula using an exponential distribution. The following is the formula I derived:
$$\frac{e^{-\lambda t_a}(1+\lambda t_a)-e^{-\lambda t_b}(1+\lambda t_b)}{\lambda e^{-\lambda t_a}}$$
$t_a$ is the time up to which the device has lived; $t_b$ is the point up to which I want to determine the expected lifetime.
The above formula was derived from the following where the exponential distribution is used: $$\int_{t_a}^{t_b} \frac{x f(x)}{1-F(t_a)}dx$$ $$\int_{t_a}^{t_b} \frac{x\lambda e^{-\lambda x}}{1-(1-e^{-\lambda t_a})}dx$$
My question is that when $t_a$ and $t_b$ are very close to one another, my expected remaining lifetime value does not lie between $t_a$ and $t_b$. This does not make any sense to me. Is there a mistake in my derivation, or is my reasoning wrong that the expected remaining lifetime should lie between $t_a$ and $t_b$? The problem is accentuated when I calculate the residual life by subtracting $t_a$ from the mean expected lifetime, where I obtain negative values.
For example, using the mean expected lifetime formula:
$$\int_{59}^{60} \frac{x\lambda e^{-\lambda x}}{1-(1-e^{-59\lambda})}dx$$
I would expect this answer to lie between 59 and 60, as this is the expected lifetime given that the device has lived to 59 years, say. But for some reason, if $\lambda$ is 2 for example, it is below 59. But we already know that it has lived past 59, so shouldn't the expected value be greater than 59?
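A quick numerical check suggests where the discrepancy comes from: the integral as written is divided only by $P(X>t_a)$, not by the probability of the event $t_a < X < t_b$, so it is not a normalized conditional mean on $[t_a, t_b]$. Dividing additionally by $P(X<t_b \mid X>t_a)=1-e^{-\lambda(t_b-t_a)}$ brings the value back into $(t_a, t_b)$. The normalization step is my suggestion, not part of the original derivation:

```python
import math

def truncated_mean_raw(lam, ta, tb):
    # The integral exactly as set up above, in closed form:
    # int_{ta}^{tb} x*lam*exp(-lam*x) dx / exp(-lam*ta)
    return (ta + 1/lam) - (tb + 1/lam) * math.exp(-lam * (tb - ta))

def truncated_mean_normalized(lam, ta, tb):
    # Suggested fix: also divide by the conditional probability mass
    # P(ta < X < tb | X > ta) = 1 - exp(-lam*(tb - ta)).
    return truncated_mean_raw(lam, ta, tb) / (1 - math.exp(-lam * (tb - ta)))

print(truncated_mean_raw(2.0, 59.0, 60.0))         # below 59, as observed
print(truncated_mean_normalized(2.0, 59.0, 60.0))  # between 59 and 60
```

With $\lambda=2$, $t_a=59$, $t_b=60$ the raw value is about 51.3 while the normalized value is about 59.34, matching the intuition that the conditional mean should lie in the interval.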
|
We already examined exponential functions and logarithms in earlier chapters. However, we glossed over some key details in the previous discussions. For example, we did not study how to treat exponential functions with exponents that are irrational. The definition of the number e is another area where the previous development was somewhat incomplete. We now have the tools to deal with these concepts in a more mathematically rigorous way, and we do so in this section.
For purposes of this section, assume we have not yet defined the natural logarithm, the number \(e\), or any of the integration and differentiation formulas associated with these functions. By the end of the section, we will have studied these concepts in a mathematically rigorous way (and we will see they are consistent with the concepts we learned earlier). We begin the section by defining the natural logarithm in terms of an integral. This definition forms the foundation for the section. From this definition, we derive differentiation formulas, define the number \(e\), and expand these concepts to logarithms and exponential functions of any base.
The Natural Logarithm as an Integral
Recall the power rule for integrals:
\[ ∫ x^n dx = \dfrac{x^{n+1}}{n+1} + C , n≠−1. \]
Clearly, this does not work when \(n=−1,\) as it would force us to divide by zero. So, what do we do with \(∫\dfrac{1}{x}dx\)? Recall from the Fundamental Theorem of Calculus that \(∫^x_1\dfrac{1}{t}dt\) is an antiderivative of 1/x. Therefore, we can make the following definition.
Definition: The Natural Logarithm
For \(x>0\), define the natural logarithm function by
\[\ln x=∫^x_1\dfrac{1}{t}dt.\]
For \(x>1\), this is just the area under the curve \(y=1/t\) from \(1\) to \(x\). For \(x<1\), we have
\[ ∫^x_1\dfrac{1}{t}dt=−∫^1_x\dfrac{1}{t}dt,\]
so in this case it is the negative of the area under the curve from \(x\) to \(1\) (see the following figure).
Notice that \(\ln 1=0\). Furthermore, the function \(y=1/t\) satisfies \(1/t>0\) for \(t>0\). Therefore, by the properties of integrals, it is clear that \(\ln x\) is increasing for \(x>0\).
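The integral definition can be checked numerically against the familiar logarithm. The sketch below approximates \(∫^x_1 (1/t)\,dt\) with the composite Simpson's rule (the subinterval count is an arbitrary choice):

```python
import math

def ln_via_integral(x, n=10_000):
    # Approximate ln(x) = integral of 1/t from 1 to x using the
    # composite Simpson's rule with n (even) subintervals. For x < 1
    # the step h is negative, which correctly yields a negative area.
    a, b = 1.0, x
    h = (b - a) / n
    s = 1.0 / a + 1.0 / b
    for k in range(1, n):
        t = a + k * h
        s += (4.0 if k % 2 else 2.0) / t
    return s * h / 3.0

for x in (0.5, 2.0, 10.0):
    print(x, ln_via_integral(x), math.log(x))
```

The two columns agree to many decimal places, including for \(x<1\), where the integral is negative exactly as the definition requires.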
Properties of the Natural Logarithm
Because of the way we defined the natural logarithm, the following differentiation formula falls out immediately as a result of the Fundamental Theorem of Calculus.
Theorem: Derivative of the Natural Logarithm
For \(x>0\), the derivative of the natural logarithm is given by
\[ \dfrac{d}{dx} \ln x = \dfrac{1}{x}. \]
Corollary to the Derivative of the Natural Logarithm
The function \(\ln x\) is differentiable; therefore, it is continuous.
A graph of \(\ln x\) is shown in Figure. Notice that it is continuous throughout its domain of \((0,∞)\).
Example \(\PageIndex{1}\): Calculating Derivatives of Natural Logarithms
Calculate the following derivatives:
\(\dfrac{d}{dx}\ln (5x^3−2)\) \(\dfrac{d}{dx}(\ln (3x))^2\)
Solution
We need to apply the chain rule in both cases.
\(\dfrac{d}{dx}\ln (5x^3−2)=\dfrac{15x^2}{5x^3−2}\) \(\dfrac{d}{dx}(\ln (3x))^2=\dfrac{2(\ln (3x))⋅3}{3x}=\dfrac{2(\ln (3x))}{x}\)
Exercise \(\PageIndex{1}\)
Calculate the following derivatives:
\(\dfrac{d}{dx}\ln (2x^2+x)\) \(\dfrac{d}{dx}(\ln (x^3))^2\)
Hint
Apply the differentiation formula just provided and use the chain rule as necessary.
Answer
a. \(\dfrac{d}{dx}\ln (2x^2+x)=\dfrac{4x+1}{2x^2+x}\)
b. \(\dfrac{d}{dx}(\ln (x^3))^2=\dfrac{6\ln (x^3)}{x}\)
Note that if we use the absolute value function and create a new function \(\ln |x|\), we can extend the domain of the natural logarithm to include \(x<0\). Then \(\dfrac{d}{dx}\ln |x|=\dfrac{1}{x}\). This gives rise to the familiar integration formula.
Integral of \(\frac{1}{u} \, du\)
The natural logarithm is the antiderivative of the function \(f(u)=1/u\):
\[∫\dfrac{1}{u}du=\ln |u|+C.\]
Example \(\PageIndex{2}\): Calculating Integrals Involving Natural Logarithms
Calculate the integral \[ ∫\dfrac{x}{x^2+4}dx.\]
Solution
Using \(u\)-substitution, let \(u=x^2+4\). Then \(du=2xdx\) and we have
\[ ∫\dfrac{x}{x^2+4}dx=\dfrac{1}{2}∫\dfrac{1}{u}du=\dfrac{1}{2}\ln |u|+C=\dfrac{1}{2}\ln ∣x^2+4∣+C=\dfrac{1}{2}\ln (x^2+4)+C.\]
Exercise \(\PageIndex{2}\)
Calculate the integral \[ ∫\dfrac{x^2}{x^3+6}dx.\]
Hint
Apply the integration formula provided earlier and use u-substitution as necessary.
Answer
\[ ∫\dfrac{x^2}{x^3+6}dx=\dfrac{1}{3}\ln ∣x^3+6∣+C\]
Although we have called our function a “logarithm,” we have not actually proved that any of the properties of logarithms hold for this function. We do so here.
Properties of the Natural Logarithm
If \(a,b>0\) and \(r\) is a rational number, then
\(\ln 1=0\) \(\ln (ab)=\ln a+\ln b\) \(\ln (\dfrac{a}{b})=\ln a−\ln b\) \(\ln (a^r)=r\ln a\)
Proof
i. By definition, \(\ln 1=∫^1_1\dfrac{1}{t}dt=0.\)
ii. We have
\(\ln (ab)=∫^{ab}_1\dfrac{1}{t}dt=∫^a_1\dfrac{1}{t}dt+∫^{ab}_a\dfrac{1}{t}dt.\)
Use \(u\)-substitution on the last integral in this expression. Let \(u=t/a\). Then \(du=(1/a)\,dt.\) Furthermore, when \(t=a,\,u=1\), and when \(t=ab,\,u=b.\) So we get
\(\ln (ab)=∫^a_1\dfrac{1}{t}dt+∫^{ab}_a\dfrac{1}{t}dt=∫^a_1\dfrac{1}{t}dt+∫^{ab}_1\dfrac{a}{t}⋅\dfrac{1}{a}dt=∫^a_1\dfrac{1}{t}dt+∫^b_1\dfrac{1}{u}du=\ln a+\ln b.\)
iv. Note that
\(\dfrac{d}{dx}\ln (x^r)=\dfrac{rx^{r−1}}{x^r}=\dfrac{r}{x}\).
Furthermore,
\(\dfrac{d}{dx}(r\ln x)=\dfrac{r}{x}.\)
Since the derivatives of these two functions are the same, by the Fundamental Theorem of Calculus, they must differ by a constant. So we have
\(\ln (x^r)=r\ln x+C\)
for some constant \(C\). Taking \(x=1\), we get
\(\ln (1^r)=r\ln (1)+C\)
\(0=r(0)+C\)
\(C=0.\)
Thus \(\ln (x^r)=r\ln x\) and part iv. is proved. (We extend this property to irrational values of \(r\) later in this section.)
Part iii. follows from parts ii. and iv., and the proof is left to you.
□
Example \(\PageIndex{3}\): Using Properties of Logarithms
Use properties of logarithms to simplify the following expression into a single logarithm:
\[ \ln 9−2 \ln 3+\ln (\dfrac{1}{3}).\]
Solution
We have
\[ \ln 9−2 \ln 3+\ln (\dfrac{1}{3})=\ln (3^2)−2 \ln 3+\ln (3^{−1})=2\ln 3−2\ln 3−\ln 3=−\ln 3.\]
Exercise \(\PageIndex{3}\)
Use properties of logarithms to simplify the following expression into a single logarithm:
\[ \ln 8−\ln 2−\ln (\dfrac{1}{4})\]
Hint
Apply the properties of logarithms.
Answer
\(4\ln 2\)
Defining the Number e
Now that we have the natural logarithm defined, we can use that function to define the number \(e\).
Definition: \(e\)
The number \(e\) is defined to be the real number such that
\[\ln e=1\]
To put it another way, the area under the curve \(y=1/t\) between \(t=1\) and \(t=e\) is \(1\) (Figure). The proof that such a number exists and is unique is left to you. (Hint: Use the Intermediate Value Theorem to prove existence and the fact that \(\ln x\) is increasing to prove uniqueness.)
The number \(e\) can be shown to be
irrational, although we won’t do so here (see the Student Project in Taylor and Maclaurin Series). Its approximate value is given by
\[ e≈2.71828182846.\]
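The existence-and-uniqueness argument suggested above (Intermediate Value Theorem plus monotonicity) translates directly into a bisection search. A minimal Python sketch of that idea, using the built-in logarithm for brevity (my own illustration, not part of the text):

```python
import math

def find_e(tol=1e-12):
    """Locate e as the unique solution of ln(x) = 1 by bisection.

    ln is increasing and ln(2) < 1 < ln(3), so the Intermediate
    Value Theorem guarantees exactly one root in [2, 3].
    """
    lo, hi = 2.0, 3.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.log(mid) < 1.0:
            lo = mid  # ln(mid) < 1, so e lies to the right
        else:
            hi = mid  # ln(mid) >= 1, so e lies to the left
    return 0.5 * (lo + hi)

print(find_e())  # ≈ 2.718281828459045
```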
The Exponential Function
We now turn our attention to the function \(e^x\). Note that the natural logarithm is one-to-one and therefore has an inverse function. For now, we denote this inverse function by \(\exp x\). Then,
\[ \exp(\ln x)=x\]
for \(x>0\) and
\[ \ln (\exp x)=x\]
for all \(x\).
The following figure shows the graphs of \(\exp x\) and \(\ln x\).
We hypothesize that \(\exp x=e^x\). For rational values of \(x\), this is easy to show. If \(x\) is rational, then we have \(\ln (e^x)=x\ln e=x\). Thus, when \(x\) is rational, \(e^x=\exp x\). For irrational values of \(x\), we simply define \(e^x\) as the inverse function of \(\ln x\).
Definition
For any real number \(x\), define \(y=e^x\) to be the number for which
\[\ln y=\ln (e^x)=x.\]
Then we have \(e^x=\exp x\) for all \(x\), and thus
\(e^{\ln x}=x\) for \(x>0\) and \(\ln (e^x)=x\)
for all \(x\).
Properties of the Exponential Function
Since the exponential function was defined in terms of an inverse function, and not in terms of a power of \(e\), we must verify that the usual laws of exponents hold for the function \(e^x\).
Properties of the Exponential Function
If \(p\) and \(q\) are any real numbers and \(r\) is a rational number, then
\(e^pe^q=e^{p+q}\) \(\dfrac{e^p}{e^q}=e^{p−q}\) \((e^p)^r=e^{pr}\)
Proof
Note that if \(p\) and \(q\) are rational, the properties hold. However, if \(p\) or \(q\) are irrational, we must apply the inverse function definition of \(e^x\) and verify the properties. Only the first property is verified here; the other two are left to you. We have
\[ \ln (e^pe^q)=\ln (e^p)+\ln (e^q)=p+q=\ln (e^{p+q}).\]
Since \(\ln x\) is one-to-one, then
\[ e^pe^q=e^{p+q}.\]
□
As with part iv. of the logarithm properties, we can extend property iii. to irrational values of \(r\), and we do so by the end of the section.
We also want to verify the differentiation formula for the function \(y=e^x\). To do this, we need to use implicit differentiation. Let \(y=e^x\). Then
\[ \begin{align} \ln y &=x \\ \dfrac{d}{dx}\ln y &=\dfrac{d}{dx}x \\ \dfrac{1}{y}\dfrac{dy}{dx}&=1 \\ \dfrac{dy}{dx} &=y. \end{align}\]
Thus, we see
\[ \dfrac{d}{dx}e^x=e^x\]
as desired, which leads immediately to the integration formula
\[ ∫e^x \,dx=e^x+C.\]
We apply these formulas in the following examples.
Example \(\PageIndex{4}\): Using Properties of Exponential Functions
Evaluate the following derivatives:
\(\dfrac{d}{dt}e^{3t}e^{t^2}\) \(\dfrac{d}{dx}e^{3x^2}\)
Solution
We apply the chain rule as necessary.
\(\dfrac{d}{dt}e^{3t}e^{t^2}=\dfrac{d}{dt}e^{3t+t^2}=e^{3t+t^2}(3+2t)\) \(\dfrac{d}{dx}e^{3x^2}=e^{3x^2}6x\)
Exercise \(\PageIndex{4}\)
Evaluate the following derivatives:
\(\dfrac{d}{dx}\left(\dfrac{e^{x^2}}{e^{5x}}\right)\) \(\dfrac{d}{dt}(e^{2t})^3\)
Hint
Use the properties of exponential functions and the chain rule as necessary.
Answer
a. \(\dfrac{d}{dx}\left(\dfrac{e^{x^2}}{e^{5x}}\right)=e^{x^2−5x}(2x−5)\)
b. \(\dfrac{d}{dt}(e^{2t})^3=6e^{6t}\)
Example \(\PageIndex{5}\): Using Properties of Exponential Functions
Evaluate the following integral: \[ ∫2xe^{−x^2}\,dx.\]
Solution
Using \(u\)-substitution, let \(u=−x^2\). Then \(du=−2x\,dx,\) and we have
\(∫2xe^{−x^2}\,dx=−∫e^u\,du=−e^u+C=−e^{−x^2}+C.\)
Exercise \(\PageIndex{5}\)
Evaluate the following integral: \[ ∫\dfrac{4}{e^{3x}}dx.\]
Hint
Use the properties of exponential functions and \(u\)-substitution as necessary.
Answer
\[ ∫\dfrac{4}{e^{3x}}\,dx=−\dfrac{4}{3}e^{−3x}+C\]
General Logarithmic and Exponential Functions
We close this section by looking at exponential functions and logarithms with bases other than \(e\). Exponential functions are functions of the form \(f(x)=a^x\). Note that unless \(a=e\), we still do not have a mathematically rigorous definition of these functions for irrational exponents. Let’s rectify that here by defining the function \(f(x)=a^x\) in terms of the exponential function \(e^x\). We then examine logarithms with bases other than e as inverse functions of exponential functions.
Definition: Exponential Function
For any \(a>0,\) and for any real number \(x\), define \(y=a^x\) as follows:
\[y=a^x=e^{x \ln a}.\]
Now \(a^x\) is defined rigorously for all values of \(x\). This definition also allows us to generalize property iv. of logarithms and property iii. of exponential functions to apply to both rational and irrational values of \(r\). It is straightforward to show that properties of exponents hold for general exponential functions defined in this way.
Let’s now apply this definition to calculate a differentiation formula for \(a^x\). We have
\[ \dfrac{d}{dx}a^x=\dfrac{d}{dx}e^{x\ln a}=e^{x\ln a}\ln a=a^x\ln a.\]
The corresponding integration formula follows immediately.
Derivatives and Integrals Involving General Exponential Functions
Let \(a>0.\) Then,
\[\dfrac{d}{dx}a^x=a^x\ \ln a\]
and
\[∫a^x\,dx=\dfrac{1}{\ln a}a^x+C.\]
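A quick numerical sanity check of the differentiation formula \(\dfrac{d}{dx}a^x=a^x\ln a\), using a central finite difference (my own illustration; the base and sample points are arbitrary):

```python
import math

def numeric_derivative(f, x, h=1e-6):
    """Two-sided (central) finite-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

a = 3.0
for x in (-1.0, 0.5, 2.0):
    exact = a**x * math.log(a)                      # a^x ln a
    approx = numeric_derivative(lambda t: a**t, x)
    print(f"x={x}: exact={exact:.8f} approx={approx:.8f}")
```

The two columns agree to roughly six decimal places, the expected accuracy of a central difference with step \(h=10^{-6}\).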
If \(a≠1\), then the function \(a^x\) is one-to-one and has a well-defined inverse. Its inverse is denoted by \(\log_a x\). Then,
\( y=\log_a x\) if and only if \(x=a^y.\)
Note that general logarithm functions can be written in terms of the natural logarithm. Let \(y=\log_a x.\) Then \(x=a^y\). Taking the natural logarithm of both sides of this second equation, we get
\[\ln x=\ln (a^y)\]
\[\ln x=y\ln a\]
\[y=\dfrac{\ln x}{\ln a}\]
\[\log_a x=\dfrac{\ln x}{\ln a}.\]
Thus, we see that all logarithmic functions are constant multiples of one another. Next, we use this formula to find a differentiation formula for a logarithm with base \(a\). Again, let \(y=\log_a x\). Then,
\[\dfrac{dy}{dx}=\dfrac{d}{dx}(\log_a x)\]
\[=\dfrac{d}{dx}\left(\dfrac{\ln x}{\ln a}\right)\]
\[=\left(\dfrac{1}{\ln a}\right)\dfrac{d}{dx}(\ln x)\]
\[=\dfrac{1}{\ln a}⋅\dfrac{1}{x}=\dfrac{1}{x\ln a}.\]
Derivatives of General Logarithm Functions
Let \(a>0.\) Then,
\[\dfrac{d}{dx}\log_a x=\dfrac{1}{x\ln a}.\]
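The base-\(a\) derivative formula can be checked numerically the same way as the exponential one. A short Python sketch (my own illustration; `math.log(t, a)` is the two-argument base-\(a\) logarithm):

```python
import math

def central_diff(f, x, h=1e-6):
    """Two-sided finite-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

a = 10.0
for x in (0.5, 2.0, 7.0):
    exact = 1.0 / (x * math.log(a))                  # 1 / (x ln a)
    approx = central_diff(lambda t: math.log(t, a), x)
    print(f"x={x}: exact={exact:.8f} approx={approx:.8f}")
```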
Example \(\PageIndex{6}\): Calculating Derivatives of General Exponential and Logarithm Functions
Evaluate the following derivatives:
\(\dfrac{d}{dt}(4^t⋅2^{t^2})\) \(\dfrac{d}{dx}\log_8(7x^2+4)\)
Solution
We need to apply the chain rule as necessary.
\(\dfrac{d}{dt}(4^t⋅2^{t^2})=\dfrac{d}{dt}(2^{2t}⋅2^{t^2})=\dfrac{d}{dt}(2^{2t+t^2})=2^{2t+t^2}\ln (2)(2+2t)\)
\(\dfrac{d}{dx}\log_8(7x^2+4)=\dfrac{14x}{(7x^2+4)\ln 8}\)
Exercise \(\PageIndex{6}\)
Evaluate the following derivatives:
\(\dfrac{d}{dt}4^{t^4}\) \(\dfrac{d}{dx}\log_3(\sqrt{x^2+1})\)
Hint
Use the formulas and apply the chain rule as necessary.
Answer
a. \(\dfrac{d}{dt}4^{t^4}=4^{t^4}(\ln 4)(4t^3)\)
b. \(\dfrac{d}{dx}\log_3(\sqrt{x^2+1})=\dfrac{x}{(\ln 3)(x^2+1)}\)
Example \(\PageIndex{7}\): Integrating General Exponential Functions
Evaluate the following integral: \[∫\dfrac{3}{2^{3x}}\,dx.\]
Solution
Use \(u\)-substitution and let \(u=−3x\). Then \(du=−3\,dx\) and we have
\[ ∫\dfrac{3}{2^{3x}}\,dx=∫3⋅2^{−3x}\,dx=−∫2^u\,du=−\dfrac{1}{\ln 2}2^u+C=−\dfrac{1}{\ln 2}2^{−3x}+C.\]
Exercise \(\PageIndex{7}\)
Evaluate the following integral: \[∫x^22^{x^3}\,dx.\]
Hint
Use the properties of exponential functions and u-substitution
Answer
\[ ∫x^22^{x^3}\,dx=\dfrac{1}{3\ln 2}2^{x^3}+C\]
Key Concepts
The earlier treatment of logarithms and exponential functions did not define the functions precisely and formally. This section develops the concepts in a mathematically rigorous way.
The cornerstone of the development is the definition of the natural logarithm in terms of an integral. The function \(e^x\) is then defined as the inverse of the natural logarithm. General exponential functions are defined in terms of \(e^x\), and the corresponding inverse functions are general logarithms.
Familiar properties of logarithms and exponents still hold in this more rigorous context.
Key Equations
Natural logarithm function: \(\displaystyle \ln x=∫^x_1\dfrac{1}{t}\,dt\)
Exponential function: \(y=e^x\) with \(\ln y=\ln (e^x)=x\)
Contributors
Gilbert Strang (MIT) and Edwin “Jed” Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. Download for free at http://cnx.org.
|
The differential equation: $y''+\omega^2 y=0$ has as a general solution: $$y=A\cos{(\omega t)}+B\sin{(\omega t)}$$
By taking: $$A=R\cos{(\omega t_0)}$$ and $$B=R\sin{(\omega t_0)}$$ We can rewrite the general solution into: $$y=R\cos{(\omega(t-t_0))}$$
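A quick numerical check of this rewriting (my own addition; the values of \(R\), \(\omega\), and \(t_0\) are arbitrary illustrative constants):

```python
import math

# Verify A cos(ωt) + B sin(ωt) == R cos(ω(t - t0)) at sample times.
omega, R, t0 = 2.0, 1.5, 0.7   # arbitrary illustrative constants
A = R * math.cos(omega * t0)
B = R * math.sin(omega * t0)

for t in (0.0, 0.3, 1.1, 2.5):
    lhs = A * math.cos(omega * t) + B * math.sin(omega * t)
    rhs = R * math.cos(omega * (t - t0))
    assert abs(lhs - rhs) < 1e-9
print("identity holds at all sampled times")
```

This is just the angle-difference identity \(\cos(\omega(t-t_0))=\cos\omega t\cos\omega t_0+\sin\omega t\sin\omega t_0\) evaluated numerically.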
However, this is also a solution: $$y=\alpha\cos{(\omega(t-t_0))+\beta\sin{(\omega(t-t_0))}}$$
QUESTION:
I understand we can rewrite this last solution, $y=\alpha\cos{(\omega(t-t_0))+\beta\sin{(\omega(t-t_0))}}$ in the form of $y=A\cos{(\omega t)}+B\sin{(\omega t)}$ (using some trig identities)
But... Can we also rewrite $y=\alpha\cos{(\omega(t-t_0))+\beta\sin{(\omega(t-t_0))}}$ in the form of $y=R\cos{(\omega(t-t_0))}$? If so, how?
|
Topic: Computing an inverse z-transform
Question from a student
Take $ x[n] = a^n\left( u[n-2]+u[n]\right) $. We then have
$ \begin{align} X(z) &= \sum_{n=-\infty}^\infty x[n]z^{-n} \text{ (by definition of the z-transform)},\\ &= \sum_{n=-\infty}^\infty a^n(u[n-2]+u[n])z^{-n}, \\ &= \sum_{n=2}^\infty a^n(z^{-n}) + \sum_{n=0}^\infty a^n(z^{-n}). \\ \text{Now let }k=-n, \\ \Rightarrow X(z) &= \sum_{k=-2}^\infty (a/z)^n + \sum_{k=0}^\infty (a/z)^n ,\\ &=\sum_{k=0}^\infty \left( (a/z)^n + 2)\right) + \sum_{k=0}^\infty \left( \frac{a}{z}\right)^n \\ & = \left(\frac{1}{1-a/z}+2\right) + \left(\frac{1}{1-a/z}\right), \\ & = \frac{z}{z-a}+2 + \frac{z}{z-a}, \\ & = \frac{z}{z-a}+2\frac{z-a}{z-a} + \frac{z}{z-a} , \\ & = \frac{4z-2a}{z-a}, \\ & = \frac{4-2a/z}{1-a/z}, \text{ for } |z|<a ??? \end{align} $
So if I end up with something that says 1/(1-(1/z)), I am confused. Does it converge when |z|>a or when |z|<a?
~ksoong
Comments/corrections from Prof. Mimi
Take $ x[n] = a^n(u[n-2]+u[n]) $. We then have $ \begin{align} X(z) &= \sum_{n=-\infty}^\infty x[n]z^{-n} \text{ (by definition of the z-transform)}, {\color{OliveGreen}\surd}\\ &= \sum_{n=-\infty}^\infty a^n(u[n-2]+u[n])z^{-n}, {\color{OliveGreen}\surd} \\ &= \sum_{n=2}^\infty a^n(z^{-n}) + \sum_{n=0}^\infty a^n(z^{-n}). {\color{OliveGreen}\surd} \\ \text{Now let }k=-n,& {\color{red}\text{This change of variable is not useful, unfortunately.}} \\ \Rightarrow X(z) &= \sum_{k=-2}^{\color{red}-\infty} (a/z)^{\color{red}n} + \sum_{k=0}^\infty (a/z)^{\color{red}n} ,{\color{red}\text{The terms inside the summation contain n, but the summation is over k.}} \\ &=\sum_{k=0}^{\color{red}-\infty} \left( (a/z)^n {\color{red} -(a/z)^{-2}-(a/z)^{-1}} )\right) + \sum_{k=0}^\infty \left( \frac{a}{z}\right)^n \\ & = \left(\frac{1}{1-a/z}+{\color{red} -(a/z)^{-2}-(a/z)^{-1}} )\right) + \left(\frac{1}{1-a/z}\right), {\color{red}\text{For this last step, you need to assume } \left| \frac{a}{z}\right|<1, \text{ else both sums diverge.}} \end{align} $
The answer to your initial question ("if I end up with something that says 1/1-(1/z), I am confused. does it converge when |z|>a or when |z|<a?") is in the last step. As you can see from this step, X(z) only converges if |a|<|z|. Note that, since a could be a complex number, it is important not to say a< |z|
A good way to check your answer is to use the Z-transform table. You can use the time shift property on the first term (a^n *u[n-2]) and the second term (a^n *u[n]) can be directly converted using the table. Your final answer should match up with what you ended with above. -Sbiddand
Anybody see anything else? Do you have more questions? Comments? Please feel free to add below.
What if we find a mistake in our computation of the Z transform in the last homework? Do we work with the wrong Z transform and end up with something that isn't the x[n] we chose for that question on homework 2, or do we take the correct Z transform for that x[n] that we chose in homework 2 and then calculate the inverse Z transform?
VG
No, use the correct answer instead. (I recommend checking all your answers using a table of z-transforms). --Mboutin 19:26, 14 September 2010 (UTC)
|
Another method, not covered by the answers above, is
finite automaton transformation. As a simple example, let us show that the regular languages are closed under the shuffle operation, defined as follows:$$L_1 \mathop{S} L_2 = \{ x_1y_1 \ldots x_n y_n \in \Sigma^* : x_1 \ldots x_n \in L_1, y_1 \ldots y_n \in L_2 \}$$You can show closure under shuffle using closure properties, but you can also show it directly using DFAs. Suppose that $A_i = \langle \Sigma, Q_i, F_i, \delta_i, q_{0i} \rangle$ is a DFA that accepts $L_i$ (for $i=1,2$). We construct a new DFA $\langle \Sigma, Q, F, \delta, q_0 \rangle$ as follows:
- The set of states is $Q_1 \times Q_2 \times \{1,2\}$, where the third component remembers whether the next symbol is an $x_i$ (when 1) or a $y_i$ (when 2).
- The initial state is $q_0 = \langle q_{01}, q_{02}, 1 \rangle$.
- The accepting states are $F = F_1 \times F_2 \times \{1\}$.
- The transition function is defined by $\delta(\langle q_1, q_2, 1 \rangle, \sigma) = \langle \delta_1(q_1,\sigma), q_2, 2 \rangle$ and $\delta(\langle q_1, q_2, 2 \rangle, \sigma) = \langle q_1, \delta_2(q_2,\sigma), 1 \rangle$.
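The shuffle-product construction is easy to simulate directly. A Python sketch (the dictionary encoding of DFAs and the example languages are my own illustrative choices, not from the source):

```python
def make_shuffle_dfa(dfa1, dfa2):
    """Product DFA for the (strictly alternating) shuffle defined above.

    Each DFA is a dict {'start': q0, 'finals': set, 'delta': {(q, s): q'}}.
    States of the product are triples (q1, q2, turn), turn in {1, 2}.
    """
    def accepts(word):
        q1, q2, turn = dfa1['start'], dfa2['start'], 1
        for sym in word:
            if turn == 1:
                q1, turn = dfa1['delta'][(q1, sym)], 2
            else:
                q2, turn = dfa2['delta'][(q2, sym)], 1
        return q1 in dfa1['finals'] and q2 in dfa2['finals'] and turn == 1
    return accepts

# L1 = strings of a's of even length; L2 = strings of b's
# (state 2 resp. 1 is a dead state for the "wrong" letter).
even_a = {'start': 0, 'finals': {0},
          'delta': {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 2,
                    (1, 'b'): 2, (2, 'a'): 2, (2, 'b'): 2}}
only_b = {'start': 0, 'finals': {0},
          'delta': {(0, 'b'): 0, (0, 'a'): 1, (1, 'a'): 1, (1, 'b'): 1}}
shuffled = make_shuffle_dfa(even_a, only_b)
print(shuffled('abab'))  # True: x = aa in L1 interleaved with y = bb in L2
print(shuffled('aa'))    # False
```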
A more sophisticated version of this method involves
guessing. As an example, let us show that regular languages are closed under reversal, that is, that$$ L^R = \{ w^R : w \in L \} $$is regular. (Here $(w_1\ldots w_n)^R = w_n \ldots w_1$.) This is one of the standard closure operations, and closure under reversal easily follows from manipulation of regular expressions (which may be regarded as the counterpart of finite automaton transformation for regular expressions) – just reverse the regular expression. But you can also prove closure using NFAs. Suppose that $L$ is accepted by a DFA $\langle \Sigma, Q, F, \delta, q_0 \rangle$. We construct an NFA $\langle \Sigma, Q', F', \delta', q'_0 \rangle$, where
- The set of states is $Q' = Q \cup \{q'_0\}$.
- The initial state is $q'_0$.
- The unique accepting state is $q_0$.
- The transition function is defined as follows: $\delta'(q'_0,\epsilon) = F$, and for any state $q' \in Q$ and $\sigma \in \Sigma$, $\delta'(q', \sigma) = \{ q : \delta(q,\sigma) = q' \}$.
(We can get rid of $q'_0$ if we allow multiple initial states.) The guessing component here is the final state of the word after reversal.
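The reversal NFA can be simulated with the subset construction in a few lines. A Python sketch (the dictionary encoding and the example DFA are my own illustrative choices):

```python
def reversal_accepts(dfa, word):
    """Subset-simulate the reversal NFA constructed above.

    dfa is a dict {'states': set, 'start': q0, 'finals': set,
    'delta': {(q, sym): q'}}.
    Start from F (the epsilon move out of q'_0), follow reversed
    edges, and accept iff the old initial state q0 is reachable.
    """
    current = set(dfa['finals'])
    for sym in word:
        current = {q for q in dfa['states']
                   if dfa['delta'][(q, sym)] in current}
    return dfa['start'] in current

# L = words over {a,b} ending in "ab"; hence L^R = words starting "ba".
dfa = {'states': {0, 1, 2}, 'start': 0, 'finals': {2},
       'delta': {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1,
                 (1, 'b'): 2, (2, 'a'): 1, (2, 'b'): 0}}
print(reversal_accepts(dfa, 'baa'))  # True:  'aab' ends in "ab"
print(reversal_accepts(dfa, 'ab'))   # False: 'ba' does not
```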
Guessing often involves also verifying. One simple example is closure under
rotation:$$ R(L) = \{ yx \in \Sigma^* : xy \in L \}. $$Suppose that $L$ is accepted by the DFA $\langle \Sigma, Q, F, \delta, q_0 \rangle$. We construct an NFA $\langle \Sigma, Q', F', \delta', q'_0 \rangle$, which operates as follows. The NFA first guesses $q=\delta(q_0,x)$. It then verifies that $\delta(q,y) \in F$ and that $\delta(q_0,x) = q$, moving from $y$ to $x$ non-deterministically. This can be formalized as follows:
- The states are $Q' = \{q'_0\} \cup Q \times Q \times \{1,2\}$. Apart from the initial state $q'_0$, the states are $\langle q,q_{curr}, s \rangle$, where $q$ is the state that we guessed, $q_{curr}$ is the current state, and $s$ specifies whether we are at the $y$ part of the input (when 1) or at the $x$ part of the input (when 2).
- The final states are $F' = \{\langle q,q,2 \rangle : q \in Q\}$: we accept when $\delta(q_0,x)=q$.
- The transitions $\delta'(q'_0,\epsilon) = \{\langle q,q,1 \rangle : q \in Q\}$ implement guessing $q$.
- The transitions $\delta'(\langle q,q_{curr},s \rangle, \sigma) = \langle q,\delta(q_{curr},\sigma),s \rangle$ (for every $q,q_{curr} \in Q$ and $s \in \{1,2\}$) simulate the original DFA.
- The transitions $\delta'(\langle q,q_f,1 \rangle, \epsilon) = \langle q,q_0,2 \rangle$, for every $q \in Q$ and $q_f \in F$, implement moving from the $y$ part to the $x$ part. This is only allowed if we've reached a final state on the $y$ part.
Another variant of the technique incorporates bounded counters. As an example, let us consider
edit distance closure:$$ E_k(L) = \{ x \in \Sigma^* : \text{ there exists $y \in L$ whose edit distance from $x$ is at most $k$} \}. $$Given a DFA $\langle \Sigma, Q, F, \delta, q_0 \rangle$ for $L$, we construct an NFA $\langle \Sigma, Q', F', \delta', q'_0 \rangle$ for $E_k(L)$ as follows:
- The set of states is $Q' = Q \times \{0,\ldots,k\}$, where the second component counts the number of changes made so far.
- The initial state is $q'_0 = \langle q_0,0 \rangle$.
- The accepting states are $F' = F \times \{0,\ldots,k\}$.
- For every $q,\sigma,i$ we have transitions $\langle \delta(q,\sigma), i \rangle \in \delta'(\langle q,i \rangle, \sigma)$.
- Insertions are handled by transitions $\langle q,i+1 \rangle \in \delta'(\langle q,i \rangle, \sigma)$ for all $q,\sigma,i$ such that $i < k$.
- Deletions are handled by transitions $\langle \delta(q,\sigma), i+1 \rangle \in \delta'(\langle q,i \rangle, \epsilon)$ for all $q,\sigma,i$ such that $i < k$.
- Substitutions are similarly handled by transitions $\langle \delta(q,\sigma), i+1 \rangle \in \delta'(\langle q,i \rangle, \tau)$ for all $q,\sigma,\tau,i$ such that $i < k$.
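The bounded-counter NFA can also be simulated directly, tracking sets of $(q, i)$ pairs and taking an epsilon closure for the deletion moves. A Python sketch (the dictionary encoding and the toy language are my own illustrative choices, not from the source):

```python
def within_edit_distance(dfa, word, k):
    """Simulate the bounded-counter NFA sketched above.

    NFA states are pairs (q, i): q is a DFA state, i counts edits used.
    dfa is a dict {'start': q0, 'finals': set, 'delta': {(q, sym): q'}}.
    """
    alphabet = {sym for (_, sym) in dfa['delta']}

    def closure(states):
        """Epsilon moves = deletions: (q, i) -> (delta(q, s), i+1)."""
        states = set(states)
        frontier = list(states)
        while frontier:
            q, i = frontier.pop()
            if i < k:
                for s in alphabet:
                    nxt = (dfa['delta'][(q, s)], i + 1)
                    if nxt not in states:
                        states.add(nxt)
                        frontier.append(nxt)
        return states

    current = closure({(dfa['start'], 0)})
    for sym in word:
        nxt = set()
        for q, i in current:
            nxt.add((dfa['delta'][(q, sym)], i))            # exact match
            if i < k:
                nxt.add((q, i + 1))                         # insertion
                for s in alphabet:
                    nxt.add((dfa['delta'][(q, s)], i + 1))  # substitution
        current = closure(nxt)
    return any(q in dfa['finals'] for q, _ in current)

# DFA accepting exactly the word "ab" (state 3 is a dead state).
exact_ab = {'start': 0, 'finals': {2},
            'delta': {(0, 'a'): 1, (0, 'b'): 3, (1, 'a'): 3, (1, 'b'): 2,
                      (2, 'a'): 3, (2, 'b'): 3, (3, 'a'): 3, (3, 'b'): 3}}
print(within_edit_distance(exact_ab, 'ab', 0))  # True
print(within_edit_distance(exact_ab, 'a', 1))   # True  (one edit away)
print(within_edit_distance(exact_ab, 'ba', 1))  # False (distance 2)
```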
|
The epsilon–delta limit definition $1$:
A function $f(x)$ from $\mathbf{R}$ to $\mathbf{R}$ has limit $L$ at point $x_0 \in \mathbf{R}$ if: $\bbox[yellow] {\text{for every $\epsilon > 0$ there exists a $\delta > 0$ such that whenever $|x-x_0| < \delta$ then}\ |f(x)-L| < \epsilon}$
Intuitive definition $2$:
A function $f(x)$ from $\mathbf{R}$ to $\mathbf{R}$ has limit $L$ at point $x_0 \in \mathbf{R}$ if: $\bbox[yellow] {\text{I can get $f(x)$ as close as I want to $L$ by choosing $x$ close enough to $x_0$}}$
Can anybody explain in a step-by-step manner how the definition $1$ implies definition $2$?
|
ATLAS released an interesting preprint
Search for gluinos in events with an isolated lepton, jets and missing transverse momentum at \(\sqrt{s} = 13\TeV\) with the ATLAS detector
in which gluino pairs were searched for in final states with MET, many jets, and a single lepton. There were six signal regions. The last, sixth one showed a mild but interesting excess: \(2.5\pm 0.7\) events were expected with a muon (thanks, Bill), but eight events were observed.
The excess looks intriguing when it's visualized on Figure 6.
The plot has two parts:
The observed thick red exclusion line is generally similar to the dashed black expected exclusion line. But on the upper picture, there is a clear "downward tooth" of the red line around the gluino mass of \(1200-1300\GeV\) and the lightest neutralino mass around \(400-600\GeV\), potentially properties of particles that may exist according to this hint.
On the second diagram, the excess looks like an island with \(m_{\tilde g} \sim 1250\GeV\) and the ratio of two mass differences (lightest chargino minus first lightest neutralino OVER gluino minus lightest neutralino) equal to \(0.75\) or so. However, the plot isn't quite showing the neighborhood of the most interesting values indicated by the upper plot because in the lower plot, the lightest neutralino is assumed to weigh \(60\GeV\).
The island-like shape of the exclusion line on the lower picture is interesting, nevertheless. Note that this is what the exclusion lines look like when all the wrong values of the mass are excluded and the correct mass is discovered. In this sense, the lower picture could already be a sketch of a discovery paper.
At any rate, if you go through the LHC category or search for a gluino on this blog, I think you will agree that it's far from the first hint of a gluino close enough to \(1200\GeV\) and a lightest neutralino in the \(600\GeV\) category. I am extremely far from any form of certainty that the gluino has to be found near these masses but if you offer me 100-to-1 odds like Jester did, I will happily make the bet again (or increase the existing one, if you wish).
|
Mark Alford's wrong paper claiming that there's nonlocality has the first followup, Anthony Sudbery's physics.hist-ph paper
He presented evidence that her 1956 song is, in fact, incorrect and that he may be an even better physicist than Doris Day. ;-)
Doris Day was singing:
Que sera sera
Whatever will be will be
The future’s not ours to see
Que sera sera.
Sudbery sensibly argues:
I will argue that quantum mechanics casts doubt on the second line of the song, which suggests that even if we can’t know it, there is a definite future...
We may say that the future isn't definite and isn't yet decided – because of the probabilistic character of quantum mechanics and especially results such as the "free will theorem" which may also be phrased as an argument against "fatalism".
We don't even know what will be the right detailed questions in the future. The question "Who wins World War III" will only be relevant if there will be such a war, and so on. The observers will choose their own relevant questions – and those will depend on their previous observations.
Unfortunately, the argumentation by Sudbery doesn't quite make sense because if the second line ("whatever will be will be") is wrong, then also the first and fourth lines ("que sera sera") are wrong because they're just translations of the second line to Spanish! (Doris Mary Ann Kappelhoff picked the Spanish phrase despite her all-German ancestry.)
So if he were treating the whole song carefully, he would know that the truth value of all these three lines, and not just the second one, is the same. Moreover, he would also know that despite the "free will theorem", they are valid because they're really tautologies. Even if I believe in free will, it's still true that "whatever will be will be". ;-)
OK, a paper attacking a singer is funny. But there are more serious misunderstandings. Sudbery quoted Alford's sentence:
In ordinary life, and in science up until the advent of quantum mechanics, all the uncertainty that we encounter is presumed to be... uncertainty arising from ignorance.
Quantum mechanics introduces a new source of uncertainty, thanks to the uncertainty principle. But if one is careful, he will see that Alford's sentence above is illogical, too. The fact that he denies in between the lines is that according to quantum mechanics, all uncertainty arises from ignorance, too!
Ignorance and uncertainty are de facto synonyms. They are inseparable. At most, we may use them for slightly different quantities. "Ignorance" is largely used for our not knowing the truth value of Yes/No (or one/zero) propositions; while "uncertainty" is largely used for quantities with different, more complicated, especially continuous spectra – e.g. for \(\Delta x\).
But at the end, if we are ignorant about the truth value of a statement about a continuous quantity, e.g. \(x\lt 0\), then there is an uncertainty in \(x\), and if there is an uncertainty \(\Delta x\), then there exist binary propositions about \(x\), such as \(x\lt 0\), whose truth value we are ignorant about. So while the precise linguistic usage of the words "ignorance" and "uncertainty" may favor one word or the other in various contexts, the ideas that they convey are exactly the same. They follow from one another; and they may be viewed as special examples of one another, too.
One of the many points that Alford and many others don't understand is that
the uncertainty principle states that there exists a certain minimum amount of ignorance.
The uncertainty principle imposes the lower bound not only on continuous products such as \[
\Delta x \cdot \Delta p\geq \frac \hbar 2.
\] It also implies the unavoidable ignorance about the truth value of Yes/No propositions. For example, if \(\Delta p\lt \infty\), then the probability that \(x\) is greater than a specific number in the interval \(x\pm \Delta x\) is a number strictly in between 0% and 100%.
Similar facts may be easily derived from the commutators just like the original uncertainty principle. The point is that the Yes/No statement such as \(x\lt 0\) is represented by a linear Hermitian projection operator \(P_{x\lt 0}\). And this operator may be written as a function of the operator \(x\) – in this case,\[
P_{x\lt 0} = \theta(-x).
\] Because \([x,p]\neq 0\), we have \([P_{x\lt 0},p]\neq 0\), too. So unless \(p\) is completely unknown, the uncertainty of the operator \(P_{x \lt 0}\) is unavoidably positive. But that conclusion is exactly equivalent to the statement that the probability that \(x\lt 0\) holds is a number different from 0% as well as 100%.
Again, the uncertainty principle tells us that probabilities strictly in between 0% and 100% are unavoidable in physics.
But we may still say that "ignorance" and "uncertainty" refer to the same intrinsic thing. Alford and soulmates try to pretend that these two words are completely different but they never give any coherent explanation in which sense they could be different. Well, there can't be any coherent explanation because they're obviously not different.
At the end, the attempts to "segregate" the uncertainty to two completely different effects is nothing else than a sign of their anti-quantum zeal. They want to talk about the uncertainty that they already knew in classical physics (which is a good one that can tolerate); and the uncertainty that quantum mechanics introduced (and they want to erase it or misinterpret it as something completely different).
On page 3, Alford divides the uncertainty into:
1. Uncertainty arising from our ignorance. The outcome of the measurement could be predicted given accurate knowledge of the initial state of the object and the laws governing its evolution, but we don’t have sufficiently accurate information about these things to make an exact prediction.
2. Fundamental uncertainty: the outcome of the measurement has an essentially random component, either in the evolution of the system or its effect on the measuring device. In a sense the system gets to “decide on its own” how to behave.
But that's not how Nature works. No internal mechanisms or the implied nonlocality exist in the world around us. Even the uncertainty that follows from the uncertainty principle should be interpreted as an equivalent description of the ignorance of the observer, not as some extra pseudorandom generator that the objects contain. Instead, the uncertainty principle says that the ignorance about a question in a given situation can't decrease below a certain lower bound. The uncertainty implied by the uncertainty principle is new and fundamental; but it must still be considered a part of the uncertainty described in (1).
In quantum mechanics, the most general description of the state of a physical system is in terms of the density matrix \(\rho\). The probability that a Yes/No statement encoded in the projection operator \(P\) is right is simply \({\rm Tr}(P\rho)\); it's the expectation value of the operator \(P\) (whose eigenvalues are zero and one). The expectation value of a more general quantity \(x\) is \({\rm Tr}(x\rho)\). The squared uncertainty \((\Delta x)^2\) of an operator is \[
{\rm Tr}(x^2\rho) - ({\rm Tr}(x\rho))^2
\] Now, the density matrix \(\rho\) is the exact quantum counterpart of the probability distribution \(\rho(q_i,p_i)\) on a classical phase space in classical statistical physics. So whenever \(\rho\) has several nonzero eigenvalues, there is some uncertainty – of the same kind that existed in classical physics – about the state of the system. This is analogous to the classical function \(\rho(q_i,p_i)\) that is supported by more than one point in the phase space.
Can you get rid of this uncertainty? In classical physics, in principle, you can, and if you do so, the probability distribution \(\rho(q_i,p_i)\) becomes a delta-function localized at a particular point \((q_i,p_i)\) of the phase space. Can you do it in quantum mechanics?
In quantum mechanics, the closest thing that you can do is to guarantee that your density matrix \(\rho\) only has one nonzero eigenvalue (equal to one); all other eigenvalues are zero. This is equivalent to \[
\exists \ket\psi:\,\,\rho = \ket\psi\bra\psi
\] The density matrix becomes a simple density matrix calculated from a pure state \(\ket\psi\). If you look at the values of \(x,p\) that this pure density matrix represents, you may make them pretty well-defined but\[
\Delta x \cdot \Delta p \geq \frac\hbar 2
\] will always hold. So in the phase space, the maximally localized state may occupy a "fuzzy cell" of the area \(2\pi\hbar\) – but not a smaller area (or volume; for many position-momentum pairs, the volume of the cell is \((2\pi \hbar)^N\)).
One of my points is that the localization of \(\rho\) may be viewed as a completely analogous process. In classical physics, it may go all the way to the point where \(\rho(q_i,p_i)\) equals to a delta-function and the ignorance goes to zero. In quantum mechanics, that's not possible. The uncertainty principle guarantees that instead of a delta-function, the maximally localized distribution in the phase space occupies the area \(2\pi\hbar\). So there will always be some uncertainty in the values of \(x\) and \(p\) or most of their functions. Most pairs of operators refuse to commute with each other, so if the value of one is known, the other is uncertain etc.
But the space of allowed density matrices \(\rho\) is a compact, continuous, linear space. It is not divided into pieces; and it doesn't have any canonical subspaces. The
interpretation and consequences of the uncertainty in quantum mechanics are exactly the same as the interpretation and consequences of the uncertainty encoded in a "spread" function \(\rho(q_i,p_i)\) on the phase space. What's different is that quantum mechanics postulates or guarantees that the ignorance or uncertainty about all physically meaningful questions can't ever go to zero.
In classical physics, models had the property that \(\rho\) could have been a delta-function and the ignorance was zero. But you could have always viewed this feature as an accidental feature of the simple enough models we considered. There has never been any
important principle that would tell you that the statistical description of any theory of physics must allow the phase-space distribution to be equal to the delta-function.
Let me be more precise. You could have assumed and postulated this principle – it was true in all the models we call "classical" today – but this assumption has never been important for the agreement between the theory and experiments. It wasn't ever possible to use this philosophical assumption for an improved agreement between the theory and the data. It was only useful to make the theories "simple" in some way. Models of classical statistical physics were "simple" in the sense that they were always a "direct derivation" out of some deterministic theories where the uncertainty and ignorance was zero.
In quantum mechanics, it's no longer the case. Quantum mechanics involving a density matrix generalizes the descriptions in classical statistical physics with \(\rho(q_i,p_i)\) on the phase space. But the quantum mechanical models in terms of the density matrix \(\rho\) can no longer be derived from a simplified model where the uncertainty and ignorance completely disappear. The nonzero commutators redefine the realm of questions you can ask and quantities you can measure and their mutual relationships; and the omnipresent nonzero commutators guarantee that the ignorance and uncertainty cannot go away.
Instead, the description of a quantum mechanical theory that minimizes the uncertainty and ignorance is the description in terms of a pure state \(\ket\psi\). It's the "counterpart" of the delta-functions on the phase space except that the minimum blobs aren't quite delta-functions. They have the area \(2\pi\hbar\) and this nonzero area is connected with the fact that virtually all observables \(L\) have a nonzero uncertainty \(\Delta L\) and it's also true about the projection operators \(P\) whose nonzero uncertainty means that the probabilities are strictly in between 0% and 100%. If you need to know, if you compute \((\Delta P)^2\) according to the same formula used for \(\Delta x\) etc., you will get\[
(\Delta P)^2 = p_1-p_1^2 = p_1(1-p_1)
\] which only vanishes when \(p_1=0\) or \(p_1=1\), i.e. in the absence of any ignorance about the Yes/No proposition encoded by \(P\).
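As a quick sanity check of this formula, with an arbitrary pure qubit state and the projector onto \(\ket 0\) (both picked purely for illustration):

```python
import numpy as np

# For any projector, P^2 = P, so
# (Delta P)^2 = Tr(P^2 rho) - Tr(P rho)^2 = p1 - p1^2 = p1 (1 - p1).
theta = 0.3                                   # arbitrary angle
ket = np.array([np.cos(theta), np.sin(theta)])
rho = np.outer(ket, ket)                      # a pure state
P = np.diag([1.0, 0.0])                       # projector onto |0>

p1 = np.trace(P @ rho)
var_P = np.trace(P @ P @ rho) - p1 ** 2       # equals p1 * (1 - p1)
```

Here `var_P` is strictly positive because \(0 < p_1 < 1\): even a pure state leaves this Yes/No question uncertain.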
While the uncertainty or ignorance is bounded from below in quantum mechanics, it's completely misguided to try to divide the ignorance into "two pieces with a totally different explanation". The explanation of both is in terms of the same mathematical rules – and all the parts of the uncertainty and ignorance should always be attributed to the observer. The new feature of quantum mechanics is that it guarantees that there just can't be any "better observer" who could get rid of all the uncertainty; the commutators are nonzero for any observer, so a lower bound on the ignorance or uncertainty is a universal law that no one can circumvent, not even God or an Argentine left-wing pundit who abuses Him. The usual equations involving the density matrix \(\rho\) describe the uncertainty or ignorance of "both types" and they can't be quite separated from each other once you start to write the density matrix as a sum of many terms.
|
the integral from 3 to 5 of (d/dt(Sqrt(2+t^4))) dt using the fundamental theorem of calculus. Thanks for any help
Do you actually mean:
$\displaystyle \int\left(\frac{d}{dt}\sqrt{2+t^{4}}\right)\;dt$?
Wouldn't that be $\displaystyle \sqrt{2+t^{4}}\;+\;C$?
That should strike you as a definition of an antiderivative.
Originally Posted by waite3: the integral from 3 to 5 of (d/dt(Sqrt(2+t^4))) dt using the fundamental theorem of calculus. Thanks for any help

$\displaystyle \int_3^5 \frac{d}{dt} \sqrt{2 + t^4} ~dt = \sqrt{2 + 5^4} - \sqrt{2 + 3^4}$
-Dan
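For anyone who wants to double-check this numerically: the derivative is \(2t^3/\sqrt{2+t^4}\) by the chain rule, and a simple trapezoid rule reproduces the value above:

```python
from math import sqrt

def deriv(t):
    # d/dt sqrt(2 + t^4) = 2 t^3 / sqrt(2 + t^4)
    return 2 * t**3 / sqrt(2 + t**4)

# Trapezoid rule on [3, 5].
n = 20000
h = (5 - 3) / n
trapezoid = h * (deriv(3) / 2
                 + sum(deriv(3 + i * h) for i in range(1, n))
                 + deriv(5) / 2)

ftc_value = sqrt(2 + 5**4) - sqrt(2 + 3**4)   # sqrt(627) - sqrt(83)
```

Both come out to about 15.93, as the fundamental theorem says they must.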
|
Hey guys! I built the voltage multiplier with an alternating square wave from a 555 timer as a source (which my multimeter measures as 4.5V) but the voltage multiplier doesn't seem to work. I first tried making a voltage doubler and it showed 9V (which is correct I suppose), but when I try a quadrupler, for example, the voltage starts from around 6V and goes down by about 0.1V per second.
Oh! I found a mistake in my wiring and fixed it. Now it seems to show 12V and instantly starts to go down by 0.1V per sec.
But you really should ask the people in Electrical Engineering. I just had a quick peek, and there was a recent conversation about voltage multipliers. I assume there are people there who've made high voltage stuff, like rail guns, which need a lot of current, so a low current circuit like yours should be simple for them.
So what did the guys in the EE chat say...
The voltage multiplier should be ok on a capacitive load. It will drop the voltage on a resistive load, as mentioned in various Electrical Engineering links on the topic. I assume you have thoroughly explored the links I have been posting for you...
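To put rough numbers on that droop: a standard approximation for an n-stage half-wave Cockcroft-Walton multiplier is \(\Delta V \approx \frac{I}{fC}\left(\frac{2n^3}{3}+\frac{n^2}{2}-\frac{n}{6}\right)\). The load current and stage capacitance below are pure assumptions for illustration (the chat doesn't give them); only the 671 Hz is from the conversation:

```python
def cw_droop(i_load, f, c, n):
    # Textbook droop estimate for an n-stage half-wave Cockcroft-Walton
    # multiplier driven at frequency f with stage capacitance c.
    return (i_load / (f * c)) * (2 * n**3 / 3 + n**2 / 2 - n / 6)

f = 671.0       # Hz, astable frequency mentioned in the chat
c = 10e-6       # F, assumed stage capacitance
i_load = 1e-3   # A, assumed load current (e.g. the meter itself)

droop_2_stage = cw_droop(i_load, f, c, 2)   # doubler
droop_4_stage = cw_droop(i_load, f, c, 4)   # quadrupler
```

The \(n^3\) term is why the quadrupler sags so much worse than the doubler did.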
A multimeter is basically an ammeter. To measure voltage, it puts a stable resistor into the circuit and measures the current running through it.
Hi all! There is theorem that links the imaginary and the real part in a time dependent analytic function. I forgot its name. Its named after some dutch(?) scientist and is used in solid state physics, who can help?
The Kramers–Kronig relations are bidirectional mathematical relations, connecting the real and imaginary parts of any complex function that is analytic in the upper half-plane. These relations are often used to calculate the real part from the imaginary part (or vice versa) of response functions in physical systems, because for stable systems, causality implies the analyticity condition, and conversely, analyticity implies causality of the corresponding stable physical system. The relation is named in honor of Ralph Kronig and Hans Kramers. In mathematics these relations are known under the names...
I have a weird question: The output on an astable multivibrator will be shown on a multimeter as half the input voltage (for example we have 9V-0V-9V-0V...and the multimeter averages it out and displays 4.5V). But then if I put that output to a voltage doubler, the voltage should be 18V, not 9V right? Since the voltage doubler will output in DC.
I've tried hooking up a transformer (9V to 230V, 0.5A) to an astable multivibrator (which operates at 671Hz) but something starts to smell burnt and the components of the astable multivibrator get hot. How do I fix this? I checked afterwards and the astable multivibrator still works.
I searched the whole god damn internet, asked every god damn forum and I can't find a single schematic that converts 9V DC to 1500V DC without using giant transformers and power stage devices that weigh a billion tons....
something so "simple" turns out to be hard as duck
In Peskin's book on QFT the sum over zero-point energy modes is an infinite c-number; fortunately, its experimental evidence doesn't appear, since experimentalists measure the difference in energy from the ground state. According to my understanding the zero-point energy is the same as the ground state energy, isn't it?
If so, it is always possible to subtract a finite number (a higher excited state, for example) from this zero-point energy (which is infinite); it follows that, experimentally, we always obtain an infinite spectrum.
@AaronStevens Yeah, I had a good laugh to myself when he responded back with "Yeah, maybe they considered it and it was just too complicated". I can't even be mad at people like that. They are clearly fairly new to physics and don't quite grasp yet that most "novel" ideas have been thought of to death by someone; likely 100+ years ago if it's classical physics
I have recently come up with a design of a conceptual electromagnetic field propulsion system which should not violate any conservation laws, particularly the Law of Conservation of Momentum and the Law of Conservation of Energy. In fact, this system should work in conjunction with these two laws ...
I remember that Gordon Freeman's thesis was "Observation of Einstein-Podolsky-Rosen Entanglement on Supraquantum Structures by Induction Through Nonlinear Transuranic Crystal of Extremely Long Wavelength (ELW) Pulse from Mode-Locked Source Array"
@ACuriousMind What confuses me is the interpretation of Peskin to this infinite c-number and the experimental fact
He said the second term is the sum over zero-point energy modes, which is infinite as you mentioned. He added, "fortunately, this energy cannot be detected experimentally, since the experiments measure only the difference from the ground state of H".
@ACuriousMind Thank you, I understood your explanations clearly. However, regarding what Peskin mentioned in his book, there is a contradiction between what he said about the infinity of the zero-point/ground state energy, and the fact that this energy is not detectable experimentally because the measurable quantity is the difference in energy between the ground state (which is infinite, and this is the confusion) and a higher level.
It's just the first encounter with something that needs to be renormalized. Renormalizable theories are not "incomplete", even though you can take the Wilsonian standpoint that renormalized QFTs are effective theories cut off at a scale.
according to the author, the energy difference is always infinite according to two facts. The first is that the ground state energy is infinite; secondly, the energy difference is defined by subtracting a higher-level energy from the ground state one.
@enumaris That is an unfairly pithy way of putting it. There are finite, rigorous frameworks for renormalized perturbation theories following the work of Epstein and Glaser (buzzword: Causal perturbation theory). Just like in many other areas, the physicist's math sweeps a lot of subtlety under the rug, but that is far from unique to QFT or renormalization
The classical electrostatics formula $H = \int \frac{\mathbf{E}^2}{8 \pi} dV = \frac{1}{2} \sum_a e_a \phi(\mathbf{r}_a)$ with $\phi_a = \sum_b \frac{e_b}{R_{ab}}$ allows for $R_{aa} = 0$ terms i.e. dividing by zero to get infinities also, the problem stems from the fact that $R_{aa}$ can be zero due to using point particles, overall it's an infinite constant added to the particle that we throw away just as in QFT
@bolbteppa I understand the idea that we need to drop such terms to be consistent with experiments. But I cannot understand why experiments don't detect such infinities that arose in the theory?
These $e_a/R_{aa}$ terms in the big sum are called self-energy terms, and are infinite, which means a relativistic electron would also have to have infinite mass if taken seriously, and relativity forbids the notion of a rigid body so we have to model them as point particles and can't avoid these $R_{aa} = 0$ values.
|
Suppose I had a stochastic partial differential equation of the form:
$\nabla^2U=F(x,D)$, where $x\in\Omega\equiv [0,1]$ and $F(x,D)$ is a function which depends on position $x$ and a uniform random coefficient D.
An approximate stochastic solution can easily be obtained by Monte-Carlo simulation. That is, I can choose different values of D randomly, obtain the approximate solution of the PDE by some numerical scheme, then accumulate these solutions in histogram bins in the $X-U$ space.
Now then, let's suppose that the function $F$ depends not on a single random parameter, but on a vector of random parameters $\vec{D}$. For simplicity, let's assume that each component of $\vec{D}$ has the same probability distribution. Let's further assume that each component of F is a function of a single, unique random parameter only. Then,
Would a monte-carlo approach require substantially more samples than in the 1 random parameter case? If so, how can I quantify the rate of convergence in terms of the sample size and number of random parameters?
Is there a more efficient approach to approximate the solution of the stochastic pde which requires fewer random samples in the case of the vector of random parameters?
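To make the setup concrete, here is a minimal Monte-Carlo sketch for the 1-D model problem, under the simplifying assumptions $F(x,D)=D$ with $D\sim\text{Uniform}(0,1)$ and zero Dirichlet boundary conditions (so the exact solution is $U(x)=Dx(x-1)/2$):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 101                                  # grid points on [0, 1]
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

# Dense 1-D Laplacian on interior nodes with Dirichlet BCs
# (fine at this size; use a sparse solver for real problems).
A = (np.diag(-2.0 * np.ones(n - 2)) +
     np.diag(np.ones(n - 3), 1) +
     np.diag(np.ones(n - 3), -1)) / h**2

samples = []
for _ in range(2000):
    D = rng.uniform(0.0, 1.0)            # one random draw per solve
    rhs = D * np.ones(n - 2)             # F(x, D) = D on interior nodes
    u_interior = np.linalg.solve(A, rhs)
    samples.append(u_interior[(n - 2) // 2])   # record U near x = 0.5

mean_mid = np.mean(samples)              # should approach -E[D]/8 = -1/16
```

On question 1: the Monte-Carlo error bound $O(N^{-1/2})$ in the number of samples $N$ is independent of the number of random parameters, although the constant can grow with the variance of the quantity of interest; sparse-grid collocation and polynomial-chaos methods are the usual ways to beat plain sampling when the parameter vector is moderate-dimensional and the solution is smooth in $\vec{D}$.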
Update:
I found this presentation (slide #50), which proposes a similar problem to the one I proposed above. Slide #52 proposes a sparse grid approach to resolving this problem, but does not give too much detail about this approach. I'm really curious about the details of its implementation and whether it can be applied to the problem I proposed above.
|
Hi, my name is Augusta Saraiva and I'm 15 years old, but Brilliant has my age as 21. I sent three e-mails to them but I didn't receive any answer. Can you solve this, please?
Note by Augusta Saraiva 6 years, 2 months ago
Sorry about the delay. We have changed your age to 15. Feel free to email me at discussions@brilliant.org, if you ever need to get in contact. Happy problem solving.
Thank you so much for the attention!
Hello!! I'm having the same problem and I have mailed Brilliant for help but nothing happens. Please help! I had decided to go for a new account so that I can get my DOB correct, so please change my age from 25 to 18, i.e. 14/01/1997. Thank you in advance!
Greetings!I am Lance Gill Tolentino, 16 years of age, born on October 22, 1998. May I request you to please change my age from 22 to 16. Thank you very much in advance! :)
Sir please change my age from 24 to 17 years my D.O.B. is 10 January 1999
I have the same problem. I looked up how to change my age but I can't find anything helpful here or on the internet. I was born on 11/3/03 but it says that I'm born on 1/1/2000, I think. Could you help me change that?
Please change my name to Ethan Chan birthday 24th Nov 2005. Thanks
Please change my name to sudhamsh and age to 17yrs. As I logged in brilliant.org with my mother's Facebook account name and age were wrong. So please correct it sir.
My DOB is 21-09-1999.
please also change my age to 14
Please Change my age to 15.My DOB is 22/06/1999
Hello, my name is Jaideep Khare. Brilliant is showing my age 25. Please change my age to 15 . My Date of Birth is 16 Nov. 2000
|
Exercise \(\PageIndex{1}\)
1. If \(\displaystyle f(x)=\sum_{n=0}^∞\frac{x^n}{n!}\) and \(\displaystyle g(x)=\sum_{n=0}^∞(−1)^n\frac{x^n}{n!}\), find the power series of \(\displaystyle \frac{1}{2}(f(x)+g(x))\) and of \(\displaystyle \frac{1}{2}(f(x)−g(x))\).
Answer
\(\displaystyle \frac{1}{2}(f(x)+g(x))=\sum_{n=0}^∞\frac{x^{2n}}{(2n)!}\) and \(\displaystyle \frac{1}{2}(f(x)−g(x))=\sum_{n=0}^∞\frac{x^{2n+1}}{(2n+1)!}\).
2. If \(\displaystyle C(x)=\sum_{n=0}^∞\frac{x^{2n}}{(2n)!}\) and \(\displaystyle S(x)=\sum_{n=0}^∞\frac{x^{2n+1}}{(2n+1)!}\), find the power series of \(\displaystyle C(x)+S(x)\) and of \(\displaystyle C(x)−S(x)\).
Exercise \(\PageIndex{2}\)
In the following exercises, use partial fractions to find the power series of each function.
1. \(\displaystyle \frac{4}{(x−3)(x+1)}\)
Answer
\(\displaystyle \frac{4}{(x−3)(x+1)}=\frac{1}{x−3}−\frac{1}{x+1}=−\frac{1}{3(1−\frac{x}{3})}−\frac{1}{1−(−x)}=−\frac{1}{3}\sum_{n=0}^∞(\frac{x}{3})^n−\sum_{n=0}^∞(−1)^nx^n=\sum_{n=0}^∞((−1)^{n+1}−\frac{1}{3^{n+1}})x^n\)
2. \(\displaystyle \frac{3}{(x+2)(x−1)}\)
3. \(\displaystyle \frac{5}{(x^2+4)(x^2−1)}\)
Answer
\(\displaystyle \frac{5}{(x^2+4)(x^2−1)}=\frac{1}{x^2−1}−\frac{1}{4}\frac{1}{1+(\frac{x}{2})^2}=−\sum_{n=0}^∞x^{2n}−\frac{1}{4}\sum_{n=0}^∞(−1)^n(\frac{x}{2})^{2n}=\sum_{n=0}^∞((−1)+(−1)^{n+1}\frac{1}{2^{2n+2}})x^{2n}\)
4. \(\displaystyle \frac{30}{(x^2+1)(x^2−9)}\)
Exercise \(\PageIndex{3}\)
In the following exercises, express each series as a rational function.
1. \(\displaystyle \sum_{n=1}^∞\frac{1}{x^n}\)
Answer
\(\displaystyle \frac{1}{x}\sum_{n=0}^∞\frac{1}{x^n}=\frac{1}{x}\frac{1}{1−\frac{1}{x}}=\frac{1}{x−1}\)
2. \(\displaystyle \sum_{n=1}^∞\frac{1}{x^{2n}}\)
3. \(\displaystyle \sum_{n=1}^∞\frac{1}{(x−3)^{2n−1}}\)
Answer
\(\displaystyle \frac{1}{x−3}\frac{1}{1−\frac{1}{(x−3)^2}}=\frac{x−3}{(x−3)^2−1}\)
4. \(\displaystyle \sum_{n=1}^∞(\frac{1}{(x−3)^{2n−1}}−\frac{1}{(x−2)^{2n−1}})\)
Exercise \(\PageIndex{4}\)
The following exercises explore applications of annuities.
1. Calculate the present values P of an annuity in which $10,000 is to be paid out annually for a period of 20 years, assuming interest rates of \(\displaystyle r=0.03,r=0.05\), and \(\displaystyle r=0.07\).
Answer
\(\displaystyle P=P_1+⋯+P_{20}\) where \(\displaystyle P_k=10,000\frac{1}{(1+r)^k}\). Then \(\displaystyle P=10,000\sum_{k=1}^{20}\frac{1}{(1+r)^k}=10,000\frac{1−(1+r)^{−20}}{r}\). When \(\displaystyle r=0.03,P≈10,000×14.8775=148,775.\) When \(\displaystyle r=0.05,P≈10,000×12.4622=124,622.\) When \(\displaystyle r=0.07,P≈105,940\).
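The closed form used in this answer is easy to verify directly; a short check with the exercise's figures:

```python
def present_value(c, r, n_years):
    # P = C (1 - (1 + r)^(-N)) / r, the finite geometric sum from the answer
    return c * (1 - (1 + r) ** (-n_years)) / r

p03 = present_value(10_000, 0.03, 20)   # ~ 148,775
p05 = present_value(10_000, 0.05, 20)   # ~ 124,622
p07 = present_value(10_000, 0.07, 20)   # ~ 105,940
```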
2. Calculate the present values P of annuities in which $9,000 is to be paid out annually perpetually, assuming interest rates of \(\displaystyle r=0.03,r=0.05\) and \(\displaystyle r=0.07\).
3. Calculate the annual payouts C to be given for 20 years on annuities having present value $100,000 assuming respective interest rates of \(\displaystyle r=0.03,r=0.05,\) and \(\displaystyle r=0.07.\)
Answer
In general, \(\displaystyle P=\frac{C(1−(1+r)^{−N})}{r}\) for \(N\) years of payouts, or \(\displaystyle C=\frac{Pr}{1−(1+r)^{−N}}\). For \(\displaystyle N=20\) and \(\displaystyle P=100,000\), one has \(\displaystyle C=6721.57\) when \(\displaystyle r=0.03;C=8024.26\) when \(\displaystyle r=0.05\); and \(\displaystyle C≈9439.29\) when \(\displaystyle r=0.07\).
4. Calculate the annual payouts C to be given perpetually on annuities having present value $100,000 assuming respective interest rates of \(\displaystyle r=0.03,r=0.05,\) and \(\displaystyle r=0.07\).
5. Suppose that an annuity has a present value \(\displaystyle P=1\) million dollars. What interest rate r would allow for perpetual annual payouts of $50,000?
Answer
In general, \(\displaystyle P=\frac{C}{r}.\) Thus, \(\displaystyle r=\frac{C}{P}=5×\frac{10^4}{10^6}=0.05.\)
6. Suppose that an annuity has a present value \(\displaystyle P=10\) million dollars. What interest rate r would allow for perpetual annual payouts of $100,000?
Exercise \(\PageIndex{5}\)
In the following exercises, express the sum of each power series in terms of geometric series, and then express the sum as a rational function.
1. \(\displaystyle x+x^2−x^3+x^4+x^5−x^6+⋯\) (Hint: Group powers \(\displaystyle x^{3k}, x^{3k−1},\) and \(\displaystyle x^{3k−2}\).)
Answer
\(\displaystyle (x+x^2−x^3)(1+x^3+x^6+⋯)=\frac{x+x^2−x^3}{1−x^3}\)
2. \(\displaystyle x+x^2−x^3−x^4+x^5+x^6−x^7−x^8+⋯\) (Hint: Group powers \(\displaystyle x^{4k}, x^{4k−1},\) etc.)
3. \(\displaystyle x−x^2−x^3+x^4−x^5−x^6+x^7−⋯\) (Hint: Group powers \(\displaystyle x^{3k}, x^{3k−1}\), and \(\displaystyle x^{3k−2}\).)
Answer
\(\displaystyle (x−x^2−x^3)(1+x^3+x^6+⋯)=\frac{x−x^2−x^3}{1−x^3}\)
4. \(\displaystyle \frac{x}{2}+\frac{x^2}{4}−\frac{x^3}{8}+\frac{x^4}{16}+\frac{x^5}{32}−\frac{x^6}{64}+⋯\) (Hint: Group powers \(\displaystyle (\frac{x}{2})^{3k},(\frac{x}{2})^{3k−1},\) and \(\displaystyle (\frac{x}{2})^{3k−2}\).)
Exercise \(\PageIndex{6}\)
In the following exercises, find the power series of \(\displaystyle f(x)g(x)\) given f and g as defined.
1. \(\displaystyle f(x)=2\sum_{n=0}^∞x^n,g(x)=\sum_{n=0}^∞nx^n\)
Answer
\(\displaystyle a_n=2,b_n=n\) so \(\displaystyle c_n=\sum_{k=0}^nb_ka_{n−k}=2\sum_{k=0}^nk=(n)(n+1)\) and \(\displaystyle f(x)g(x)=\sum_{n=1}^∞n(n+1)x^n\)
2. \(\displaystyle f(x)=\sum_{n=1}^∞x^n,g(x)=\sum_{n=1}^∞\frac{1}{n}x^n\). Express the coefficients of \(\displaystyle f(x)g(x)\) in terms of \(\displaystyle H_n=\sum_{k=1}^n\frac{1}{k}\).
3. \(\displaystyle f(x)=g(x)=\sum_{n=1}^∞(\frac{x}{2})^n\)
Answer
\(\displaystyle a_n=b_n=2^{−n}\) so \(\displaystyle c_n=\sum_{k=1}^nb_ka_{n−k}=2^{−n}\sum_{k=1}^n1=\frac{n}{2^n}\) and \(\displaystyle f(x)g(x)=\sum_{n=1}^∞n(\frac{x}{2})^n\)
4. \(\displaystyle f(x)=g(x)=\sum_{n=1}^∞nx^n\)
Exercise \(\PageIndex{7}\)
In the following exercises, differentiate the given series expansion of f term-by-term to obtain the corresponding series expansion for the derivative of f.
1. \(\displaystyle f(x)=\frac{1}{1+x}=\sum_{n=0}^∞(−1)^nx^n\)
Answer
The derivative of \(\displaystyle f\) is \(\displaystyle −\frac{1}{(1+x)^2}=−\sum_{n=0}^∞(−1)^n(n+1)x^n\).
2. \(\displaystyle f(x)=\frac{1}{1−x^2}=\sum_{n=0}^∞x^{2n}\)
In the following exercises, integrate the given series expansion of \(\displaystyle f\) term-by-term from zero to x to obtain the corresponding series expansion for the indefinite integral of \(\displaystyle f\).
3. \(\displaystyle f(x)=\frac{2x}{(1+x^2)^2}=\sum_{n=1}^∞(−1)^n(2n)x^{2n−1}\)
Answer
The indefinite integral of \(\displaystyle f\) is \(\displaystyle \frac{1}{1+x^2}=\sum_{n=0}^∞(−1)^nx^{2n}\).
4. \(\displaystyle f(x)=\frac{2x}{1+x^2}=2\sum_{n=0}^∞(−1)^nx^{2n+1}\)
Exercise \(\PageIndex{8}\)
In the following exercises, evaluate each infinite series by identifying it as the value of a derivative or integral of geometric series.
1. Evaluate \(\displaystyle \sum_{n=1}^∞\frac{n}{2^n}\) as \(\displaystyle f′(\frac{1}{2})\) where \(\displaystyle f(x)=\sum_{n=0}^∞x^n\).
Answer
\(\displaystyle f(x)=\sum_{n=0}^∞x^n=\frac{1}{1−x};f′(\frac{1}{2})=\sum_{n=1}^∞\frac{n}{2^{n−1}}=\frac{d}{dx}(1−x)^{−1}∣_{x=1/2}=\frac{1}{(1−x)^2}∣_{x=1/2}=4\) so \(\displaystyle \sum_{n=1}^∞\frac{n}{2^n}=2.\)
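Both values in this answer can be confirmed by truncating the sums:

```python
# f'(1/2) = sum n / 2^(n-1) and the target series sum n / 2^n.
fprime_half = sum(n / 2 ** (n - 1) for n in range(1, 200))   # -> 4
series_value = sum(n / 2 ** n for n in range(1, 200))        # -> 2
```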
2. Evaluate \(\displaystyle \sum_{n=1}^∞\frac{n}{3^n}\) as \(\displaystyle f′(\frac{1}{3})\) where \(\displaystyle f(x)=\sum_{n=0}^∞x^n\).
3. Evaluate \(\displaystyle \sum_{n=2}^∞\frac{n(n−1)}{2^n}\) as \(\displaystyle f''(\frac{1}{2})\) where \(\displaystyle f(x)=\sum_{n=0}^∞x^n\).
Answer
\(\displaystyle f(x)=\sum_{n=0}^∞x^n=\frac{1}{1−x};f''(\frac{1}{2})=\sum_{n=2}^∞\frac{n(n−1)}{2^{n−2}}=\frac{d^2}{dx^2}(1−x)^{−1}∣_{x=1/2}=\frac{2}{(1−x)^3}∣_{x=1/2}=16\) so \(\displaystyle \sum_{n=2}^∞n\frac{(n−1)}{2^n}=4.\)
4. Evaluate \(\displaystyle \sum_{n=0}^∞\frac{(−1)^n}{n+1}\) as \(\displaystyle ∫^1_0f(t)dt\) where \(\displaystyle f(x)=\sum_{n=0}^∞(−1)^nx^{2n}=\frac{1}{1+x^2}\).
Exercise \(\PageIndex{9}\)
In the following exercises, given that \(\displaystyle \frac{1}{1−x}=\sum_{n=0}^∞x^n\), use term-by-term differentiation or integration to find power series for each function centered at the given point.
1. \(\displaystyle f(x)=lnx\) centered at \(\displaystyle x=1\) (Hint: \(\displaystyle x=1−(1−x)\))
Answer
\(\displaystyle ∫\sum(1−x)^ndx=∫\sum(−1)^n(x−1)^ndx=\sum \frac{(−1)^n(x−1)^{n+1}}{n+1}\)
2. \(\displaystyle ln(1−x)\) at \(\displaystyle x=0\)
3. \(\displaystyle ln(1−x^2)\) at \(\displaystyle x=0\)
Answer
\(\displaystyle −∫^{x^2}_{t=0}\frac{1}{1−t}dt=−\sum_{n=0}^∞∫^{x^2}_0t^ndt=−\sum_{n=0}^∞\frac{x^{2(n+1)}}{n+1}=−\sum_{n=1}^∞\frac{x^{2n}}{n}\)
4. \(\displaystyle f(x)=\frac{2x}{(1−x^2)^2}\) at \(\displaystyle x=0\)
5. \(\displaystyle f(x)=tan^{−1}(x^2)\) at \(\displaystyle x=0\)
Answer
\(\displaystyle ∫^{x^2}_0\frac{dt}{1+t^2}=\sum_{n=0}^∞(−1)^n∫^{x^2}_0t^{2n}dt=\sum_{n=0}^∞(−1)^n\frac{t^{2n+1}}{2n+1}∣^{x^2}_{t=0}=\sum_{n=0}^∞(−1)^n\frac{x^{4n+2}}{2n+1}\)
6. \(\displaystyle f(x)=ln(1+x^2)\) at \(\displaystyle x=0\)
7. \(\displaystyle f(x)=∫^x_0lntdt\) where \(\displaystyle ln(x)=\sum_{n=1}^∞(−1)^{n−1}\frac{(x−1)^n}{n}\)
Answer
Term-by-term integration gives \(\displaystyle ∫^x_0lntdt=\sum_{n=1}^∞(−1)^{n−1}\frac{(x−1)^{n+1}}{n(n+1)}=\sum_{n=1}^∞(−1)^{n−1}(\frac{1}{n}−\frac{1}{n+1})(x−1)^{n+1}=(x−1)lnx+\sum_{n=2}^∞(−1)^n\frac{(x−1)^n}{n}=xlnx−x.\)
Exercise \(\PageIndex{10}\)
In the following exercises, using a substitution if indicated, express each series in terms of elementary functions and find the radius of convergence of the sum.
1. \(\displaystyle \sum_{k=0}^∞(x^k−x^{2k+1})\)
2. \(\displaystyle \sum_{k=1}^∞\frac{x^{3k}}{6k}\)
Answer
\(\displaystyle \sum_{k=1}^∞\frac{x^k}{k}=−ln(1−x)\) so \(\displaystyle \sum_{k=1}^∞\frac{x^{3k}}{6k}=−\frac{1}{6}ln(1−x^3)\). The radius of convergence is equal to 1 by the ratio test.
3. \(\displaystyle \sum_{k=1}^∞(1+x^2)^{−k}\) using \(\displaystyle y=\frac{1}{1+x^2}\)
4. \(\displaystyle \sum_{k=1}^∞2^{−kx}\) using \(\displaystyle y=2^{−x}\)
Answer
If \(\displaystyle y=2^{−x}\), then \(\displaystyle \sum_{k=1}^∞y^k=\frac{y}{1−y}=\frac{2^{−x}}{1−2^{−x}}=\frac{1}{2^x−1}\). If \(\displaystyle a_k=2^{−kx}\), then \(\displaystyle \frac{a_{k+1}}{a_k}=2^{−x}<1\) when \(\displaystyle x>0\). So the series converges for all \(\displaystyle x>0\).
Exercise \(\PageIndex{11}\)
1. Show that, up to powers \(\displaystyle x^3\) and \(\displaystyle y^3\), \(\displaystyle E(x)=\sum_{n=0}^∞\frac{x^n}{n!}\) satisfies \(\displaystyle E(x+y)=E(x)E(y)\).
2. Differentiate the series \(\displaystyle E(x)=\sum_{n=0}^∞\frac{x^n}{n!}\) term-by-term to show that \(\displaystyle E(x)\) is equal to its derivative.
3. Show that if \(\displaystyle f(x)=\sum_{n=0}^∞a_nx^n\) is a sum of even powers, that is, \(\displaystyle a_n=0\) if \(\displaystyle n\) is odd, then \(\displaystyle F=∫^x_0f(t)dt\) is a sum of odd powers, while if \(\displaystyle f\) is a sum of odd powers, then \(\displaystyle F\) is a sum of even powers.
4. Suppose that the coefficients \(a_n\) of the series \(\displaystyle \sum_{n=0}^∞a_nx^n\) are defined by the recurrence relation \(\displaystyle a_n=\frac{a_{n−1}}{n}+\frac{a_{n−2}}{n(n−1)}\). For \(\displaystyle a_0=0\) and \(\displaystyle a_1=1\), compute and plot the sums \(\displaystyle S_N=\sum_{n=0}^Na_nx^n\) for \(\displaystyle N=2,3,4,5\) on \(\displaystyle [−1,1].\)
Answer
The solid curve is \(\displaystyle S_5\). The dashed curve is \(\displaystyle S_2\), dotted is \(\displaystyle S_3\), and dash-dotted is \(\displaystyle S_4\)
5. Suppose that the coefficients \(a_n\) of the series \(\displaystyle \sum_{n=0}^∞a_nx^n\) are defined by the recurrence relation \(\displaystyle a_n=\frac{a_{n−1}}{\sqrt{n}}−\frac{a_{n−2}}{\sqrt{n(n−1)}}\). For \(\displaystyle a_0=1\) and \(\displaystyle a_1=0\), compute and plot the sums \(\displaystyle S_N=\sum_{n=0}^Na_nx^n\) for \(\displaystyle N=2,3,4,5\) on \(\displaystyle [−1,1]\).
6. Given the power series expansion \(\displaystyle ln(1+x)=\sum_{n=1}^∞(−1)^{n−1}\frac{x^n}{n}\), determine how many terms N of the sum evaluated at \(\displaystyle x=−1/2\) are needed to approximate \(\displaystyle ln(2)\) accurate to within 1/1000. Evaluate the corresponding partial sum \(\displaystyle \sum_{n=1}^N(−1)^{n−1}\frac{x^n}{n}\).
Answer
When \(\displaystyle x=−\frac{1}{2},−ln(2)=ln(\frac{1}{2})=−\sum_{n=1}^∞\frac{1}{n2^n}\). Since \(\displaystyle \sum^∞_{n=11}\frac{1}{n2^n}<\sum_{n=11}^∞\frac{1}{2^n}=\frac{1}{2^{10}},\) one has \(\displaystyle \sum_{n=1}^{10}\frac{1}{n2^n}=0.69306…\) whereas \(\displaystyle ln(2)=0.69314…;\) therefore, \(\displaystyle N=10.\)
7. Given the power series expansion \(\displaystyle tan^{−1}(x)=\sum_{k=0}^∞(−1)^k\frac{x^{2k+1}}{2k+1}\), use the alternating series test to determine how many terms N of the sum evaluated at \(\displaystyle x=1\) are needed to approximate \(\displaystyle tan^{−1}(1)=\frac{π}{4}\) accurate to within 1/1000. Evaluate the corresponding partial sum \(\displaystyle \sum_{k=0}^N(−1)^k\frac{x^{2k+1}}{2k+1}\).
8. Recall that \(\displaystyle tan^{−1}(\frac{1}{\sqrt{3}})=\frac{π}{6}.\) Assuming an exact value of \(\displaystyle \frac{1}{\sqrt{3}}\), estimate \(\displaystyle \frac{π}{6}\) by evaluating partial sums \(\displaystyle S_N(\frac{1}{\sqrt{3}})\) of the power series expansion \(\displaystyle tan^{−1}(x)=\sum_{k=0}^∞(−1)^k\frac{x^{2k+1}}{2k+1}\) at \(\displaystyle x=\frac{1}{\sqrt{3}}\). What is the smallest number \(\displaystyle N\) such that \(\displaystyle 6S_N(\frac{1}{\sqrt{3}})\) approximates \(\displaystyle π\) accurately to within 0.001? How many terms are needed for accuracy to within 0.00001?
Answer
\(\displaystyle 6S_N(\frac{1}{\sqrt{3}})=2\sqrt{3}\sum_{n=0}^N(−1)^n\frac{1}{3^n(2n+1).}\) One has \(\displaystyle π−6S_4(\frac{1}{\sqrt{3}})=0.00101…\) and \(\displaystyle π−6S_5(\frac{1}{\sqrt{3}})=0.00028…\) so \(\displaystyle N=5\) is the smallest partial sum with accuracy to within 0.001. Also, \(\displaystyle π−6S_7(\frac{1}{\sqrt{3}})=0.00002…\) while \(\displaystyle π−6S_8(\frac{1}{\sqrt{3}})=−0.000007…\) so \(\displaystyle N=8\) is the smallest N to give accuracy to within 0.00001.
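The partial sums quoted in this answer can be reproduced directly:

```python
from math import pi, sqrt

def six_s(N):
    # 6 S_N(1/sqrt(3)) = 2 sqrt(3) sum_{n=0}^{N} (-1)^n / (3^n (2n+1))
    return 2 * sqrt(3) * sum((-1) ** n / (3 ** n * (2 * n + 1))
                             for n in range(N + 1))

err4 = abs(pi - six_s(4))   # ~ 0.00101
err5 = abs(pi - six_s(5))   # ~ 0.00028
err8 = abs(pi - six_s(8))   # ~ 0.000007
```

So N = 5 is the first partial sum within 0.001 of π and N = 8 the first within 0.00001, as stated.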
|
Defining parameters
Level: \( N \) = \( 21 = 3 \cdot 7 \)
Weight: \( k \) = \( 4 \)
Nonzero newspaces: \( 4 \)
Newforms: \( 8 \)
Sturm bound: \(128\)
Trace bound: \(1\)

Dimensions
The following table gives the dimensions of various subspaces of \(M_{4}(\Gamma_1(21))\).
|                   | Total | New | Old |
|-------------------|-------|-----|-----|
| Modular forms     | 60    | 42  | 18  |
| Cusp forms        | 36    | 30  | 6   |
| Eisenstein series | 24    | 12  | 12  |

Decomposition of \(S_{4}^{\mathrm{new}}(\Gamma_1(21))\)
We only show spaces with even parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension.
|
If there is a positive charge $q$ at the origin of a coordinate system, the electric potential $\phi$ at a distance $r$ from $q$ is (by definition, if we take the point of zero potential at infinity):
$$\phi=-\int_{\infty}^r \vec{E}\cdot d\vec{r}$$
The dot product of $\vec{E}$ and $d\vec{r}$ is $-E\text{ }dr$ because they point in opposite directions, so $$\phi=\int_{\infty}^r E\text{ } dr$$ For a positive point charge $q$, we have that: $$\phi=\frac{q}{4\pi\epsilon_0}\int_{\infty}^r \frac{dr}{r^2}$$ And evaluating the integral we arrive at: $$\phi=-\frac{q}{4\pi\epsilon_0 r}$$
However, the result should be positive according to Halliday-Resnick (fifth edition, page 608). They have the same derivation essentially, except that after evaluating the integral for some reason they get a positive potential. What's up?
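For what it's worth, the final integration step (not the disputed sign step) can be checked numerically — a sketch with $r = 2$:

```python
# Check the calculus step int_infty^r dr'/r'^2 = -1/r at r = 2,
# using the trapezoid rule on a geometric grid out to a large cutoff R.
r, R, growth = 2.0, 1e8, 1.001
pts = []
x = r
while x < R:
    pts.append(x)
    x *= growth
pts.append(R)
integral_r_to_inf = sum((b - a) * (a**-2 + b**-2) / 2 for a, b in zip(pts, pts[1:]))
integral_inf_to_r = -integral_r_to_inf   # swapping the limits flips the sign: about -0.5
```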
|
2019 seminar talk: Isometries of combinatorial Banach spaces
Talk held by Christina Brech (Universidade de São Paulo, Brazil) at the KGRC seminar on 2019-10-10.
Abstract
The Schreier family $\mathcal{S} = \{F \in [\omega]^{<\omega}: |F| \leq \min F+1\}$ induces a structure with no infinite sets of indiscernibles and this can be generalized to the context of Banach spaces. The fact that the canonical basis of the Banach space induced by the Schreier family doesn't have infinite indiscernibles gives a hint on the rigidity of this object, that is, on the fact that isometries of the space fix the basis, up to signs.
In this talk, we will give the background for the previous paragraph and will present a generalized version of it, obtained in a joint work with V. Ferenczi and A. Tcaciuc: given a regular family $\mathcal{F}$, it is possible to define its corresponding combinatorial space $X_\mathcal{F}$, which is a Banach space whose norm is defined in terms of the family $\mathcal{F}$. We prove that every isometry of a combinatorial Banach space $X_\mathcal{F}$ is induced by a signed permutation of its canonical basis.
|
I can't answer which test of a linear trend has the highest power, but this may answer some of the other questions you have regarding the special case of the ADF test.
It's illustrative to consider the regular Dickey Fuller equation (let's disregard autocorrelation for now) for $\Delta y_t:=y_t - y_{t-1}$, viz. $$y_t - y_{t-1}=\alpha + \beta t + \gamma y_{t-1} +\varepsilon _t .$$ Note that this is equivalent to (just add $y_{t-1}$ to both sides):$$y_t =\alpha + \beta t + (\gamma + 1)y_{t-1} +\varepsilon _t .$$
From the second equation it is also clear that including the intercept does not automatically make the inclusion of $y_{t-1}$ or $t$ superfluous. The confusion arises here because you can algebraically find $\Delta y_t$ in two ways. For the DF-test, it's done this way; just subtract $y_{t-1}$ from the second equation to arrive at the first. In other words, $\alpha$ in the DF equation is not capturing the trending behavior, it's the same intercept as in the levels equation.
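A small simulation illustrates the second form of the equation (a sketch assuming numpy; all parameter values are made up). With $\gamma=0$ and $\beta\neq0$ the data carry both a unit root and a deterministic trend, and the DF regression of $\Delta y_t$ on a constant, $t$, and $y_{t-1}$ recovers $\beta$ and $\gamma\approx 0$:

```python
import numpy as np

# Simulate y_t = alpha + beta*t + y_{t-1} + eps_t, i.e. gamma = 0 in the DF equation
rng = np.random.default_rng(0)
alpha, beta, n = 0.2, 0.5, 2000
y = np.zeros(n)
eps = 0.1 * rng.standard_normal(n)
for t in range(1, n):
    y[t] = alpha + beta * t + y[t - 1] + eps[t]

# DF regression: dy_t on [1, t, y_{t-1}]
dy = np.diff(y)
t_idx = np.arange(1, n)
X = np.column_stack([np.ones(n - 1), t_idx, y[:-1]])
(alpha_hat, beta_hat, gamma_hat), *_ = np.linalg.lstsq(X, dy, rcond=None)
```

(Inference on $\gamma$ in practice uses Dickey-Fuller critical values, not the usual t-tables; the sketch only shows the regression layout.)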
Suppose data is indeed generated according to this equation and that $\gamma = 0$ so we are looking at a process with both a deterministic and a stochastic trend. If you are only interested in whether the data is trending or not, disregarding the difference between stochastic and deterministic trend, you can more or less just plot the data and look for yourself, but this is not so interesting a finding.
If you want to convince anyone that your data is generated by a process including a linear time trend, you can run the ADF regression with $t$ included and show that the coefficient on $t$ is significantly (in statistical as well as general meaning) different from zero. You cannot, however, just regress $y_t$ on $t$ and read off the coefficient on $t$ because you are not, loosely speaking, controlling for the unit root structure (no matter whether you have autocorrelation or not). The key is that when estimating with OLS, the coefficients, again loosely speaking, measure the effects of their respective variables after controlling for the other variables included in the regression.
If you know a priori that the process includes either a deterministic trend or a stochastic trend, but not both, then to distinguish them just look at the coefficients in the ADF regression. If your estimate of $\beta$ is significant, there is a trend over time not explained by the unit root structure; if your estimate of $\gamma$ is not different from zero, then you conclude there is a unit root in the process and not a linear trend, per your assumption that it must be one or the other.
Note that this is all based on the ADF test. There are many other ways to examine the time series at hand. You should start by consulting a plot. In some cases it's relatively straightforward to see if the data is evolving around a time trend, or if it's just wandering upwards or downwards due to the unit root. You could also rely on information criteria, like AIC and BIC, to select the correct model as indicated in another answer here. A third possibility, especially attractive if you have a large sample, is to fit the competing models on some chunk of observations and then predict future values. E.g., fit the two competing models on observations $\{s, s+1,\dots, s+t\}$ and then predict the value at time $s+t+1$. Then move forward through the sample one period at a time and form these one-period-ahead out-of-sample forecasts. You can then decide which model forecasts best by using the Diebold & Mariano test and conclude that model to be the 'correct' one.
|
I believe this can be attributed to the central limit theorem, which states that a large number of samples from a population with a well-defined variance will follow a Gaussian distribution. The key idea is that because of quantum mechanics, we must treat both position and momentum as random variables; the uncertainty principle gives us a relation between the variances of the two quantities.
We cannot talk about the "formula for position" per se; however, we can derive a deterministic formula for the wavefunction, which represents the probability density for these random variables. The exact form of the wavefunction is dependent on the problem, but can (in principle) generally be obtained from the Schrödinger equation.
Wikipedia has a good writeup for the free particle. The Hamiltonian for a free particle with fixed momentum $\mathbf{p}$ is $\mathcal{H} = \mathbf{p}^2/2m$ (the potential is zero). Eigenstates of this Hamiltonian are plane waves in position space (that is, their wavefunctions oscillate throughout space and time):$$\psi(\mathbf{x}, t) = Ae^{i(\mathbf{x}\cdot\mathbf{p}-Et)/\hbar}$$That means that the probability distribution is simply:$$\left|\psi(\mathbf{x},t)\right|^2 = \left|A\right|^2$$which is a constant independent of position $\mathbf{x}$. Note that this wavefunction cannot be normalized to unity, but the takeaway is that the particle is equally likely to be anywhere. This is consistent with the uncertainty principle: since we specified $\mathbf{p}$ exactly ($\sigma_p=0$), the uncertainty in position is infinite.
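A trivial numerical illustration of this constancy (a sketch; $\hbar = m = 1$, and the values of $A$, $p$, $t$ are arbitrary):

```python
import numpy as np

# Plane-wave state psi(x, t) = A * exp(i*(x*p - E*t)) with E = p^2/(2m), hbar = m = 1
A, p, t = 0.3 + 0.4j, 2.0, 1.7
E = p**2 / 2
x = np.linspace(-10, 10, 1001)
prob = np.abs(A * np.exp(1j * (x * p - E * t)))**2   # |psi|^2, flat in x
```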
For more complex systems, the Hamiltonian is not always exactly known; this is often the case in multi-particle systems, such as atoms. In still other cases, the Hamiltonian is known but cannot be solved analytically.
|
In the Hartree-Fock self-consistent field method of solving the time-independent electronic Schroedinger equation, we seek to minimize the ground state energy, $E_{0}$, of a system of electrons in an external field with respect to the choice of spin orbitals, $\{\chi_{i}\}$.
We do this by iteratively solving the 1-electron Hartree-Fock equations,$$\hat{f}_{i}\chi(\mathbf{x}_{i})=\varepsilon\chi(\mathbf{x}_{i})$$where $\mathbf{x}_{i}$ is the spin/spatial coordinate of electron $i$, $\varepsilon$ is the orbital eigenvalue and $\hat{f}_{i}$ is the Fock operator (a 1-electron operator), with the form$$\hat{f}_{i} = -\frac{1}{2}\nabla^{2}_{i}-\sum_{A=1}^{M}\frac{Z_{A}}{r_{iA}}+V^{\mathrm{HF}}_{i}$$(the summation runs over nuclei, here, with $Z_{A}$ being the nuclear charge on nucleus $A$ and $r_{iA}$ being the distance between electron $i$ and nucleus $A$). $V^{\mathrm{HF}}_{i}$ is the average potential felt by electron $i$ due to all the other electrons in the system. Since $V_{i}^{\mathrm{HF}}$ is dependent on the spin orbitals, $\chi_{j}$, of the other electrons, we can say that the Fock operator is dependent on its own eigenfunctions. In "Modern Quantum Chemistry" by A. Szabo and N. Ostlund, p. 54 (first edition), they write that
"the Hartree-Fock equation (2.52) is nonlinear and must be solved iteratively". I have studied the details of this iterative solution as part of my research, but for this question I think they are unimportant, except to state the basic structure of the method, which is:

1. Make an initial guess of the spin orbitals, $\{\chi_{i}\}$, and calculate $V_{i}^{\mathrm{HF}}$.
2. Solve the eigenvalue equation above for these spin orbitals and obtain new spin orbitals.
3. Repeat the process with your new spin orbitals until self-consistency is reached.
In this case, self-consistency is achieved when the spin-orbitals which are used to make $V_{i}^{\mathrm{HF}}$ are the same as those obtained on solving the eigenvalue equation.
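The loop structure can be caricatured with a toy two-level nonlinear eigenproblem (a sketch assuming numpy; the matrix and coupling are invented and have nothing to do with a real Fock operator):

```python
import numpy as np

# Toy SCF: "Fock" matrix F(c) = H0 + g * diag(|c|^2) depends on the occupied orbital c
H0 = np.array([[0.0, -1.0], [-1.0, 1.0]])
g = 0.1                                   # weak coupling keeps the iteration stable

c = np.array([1.0, 0.0])                  # initial guess
for _ in range(200):
    F = H0 + g * np.diag(c**2)            # build the operator from the current density
    c_new = np.linalg.eigh(F)[1][:, 0]    # occupy the lowest eigenvector
    converged = np.max(np.abs(c_new**2 - c**2)) < 1e-12
    c = c_new
    if converged:
        break

# Self-consistency check: c is an eigenvector of the F that it itself generates
F = H0 + g * np.diag(c**2)
resid = F @ c - (c @ F @ c) * c
```

With a larger coupling the same loop can oscillate or diverge, which is exactly the worry raised in this question; practical SCF codes add damping or DIIS-style mixing for that reason.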
My question is this: how can we know that this convergence will occur? Why do the eigenfunctions of the successive iterative solutions in some sense "improve" towards the converged case? Is it not possible that the solution could diverge? I don't see how this is prevented.
As a further question, I would be interested to know why the converged eigenfunctions (spin orbitals) give the best (i.e. lowest) ground state energy. It seems to me that the iterative solution of the equation somehow has convergence and energy minimization "built-in". Perhaps there is some constraint built into the equations which ensures this convergence?
Cross-posted from the Physics Stack Exchange: https://physics.stackexchange.com/q/20703/why-does-iteratively-solving-the-hartree-fock-equations-result-in-convergence
|
It is said that:
Abel summation and Euler summation are not comparable.
We were able to find examples of divergent series which are Euler summable but not Abel summable, for instance $$ 1-2+4-8+16-\dots$$
However, we couldn't find any example of a divergent series which is Abel summable but not Euler summable.
Do you know such an example?
Thank you!
EDIT: Dear Peter, this is the definition of Euler summation:
Let $\sum_{n=0}^\infty a_n$ be any series. The Euler transformation of this series is defined as: \begin{equation*} \sum_{n=0}^\infty \frac{1}{2^{n+1}}b_n\quad\text{ with }\quad b_n:=\sum_{k=0}^n\binom{n}{k} a_k \end{equation*}
The series $\sum_{n=0}^\infty a_n$ is called Euler summable if the Euler transformation of this series $$\sum_{n=0}^\infty \frac{1}{2^{n+1}}b_n$$ converges in the usual sense.
The Euler sum is then given by $$\sum_{n=0}^\infty \frac{1}{2^{n+1}}b_n.$$
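With this definition, the example from the question can be checked numerically — a sketch in Python (the helper name is ours):

```python
from math import comb

def euler_partial_sums(a, N):
    """Partial sums of sum_{n=0}^N b_n / 2^(n+1), with b_n = sum_{k<=n} C(n,k) * a(k)."""
    total, out = 0, []
    for n in range(N + 1):
        b_n = sum(comb(n, k) * a(k) for k in range(n + 1))
        total += b_n / 2 ** (n + 1)
        out.append(total)
    return out

# 1 - 2 + 4 - 8 + ... : b_n = (1 - 2)^n = (-1)^n, so the transform converges to 1/3
geom = euler_partial_sums(lambda k: (-2) ** k, 30)
# 1 - 1 + 1 - 1 + ... : b_0 = 1 and b_n = 0 for n >= 1, giving 1/2
grandi = euler_partial_sums(lambda k: (-1) ** k, 10)
```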
|
I am now reading about the complex scaling method for solving resonance states. As far as I understand, the procedure goes like this:
Let us take the 1d potential $V(x) = A e^{-x^2} x^2 $ as an example. Here $A > 0 $ .
The full single-particle Hamiltonian is
$$ H = - \frac{\partial^2}{\partial x^2} + A e^{-x^2} x^2 . $$
The equation for the resonance state is
$$ \left(- \frac{\partial^2}{\partial x^2} + A e^{-x^2} x^2 \right ) \psi(x) = E_{complex} \psi(x) . $$
Suppose that the function $\psi(x)$ can be analytically continued into the complex plane. We can then consider the variable $x$ as a complex variable. Now consider the equation on the line $ x = \rho e^{i\theta }$, where $\rho, \theta \in \mathbb{R}$. The equation for the function $f(\rho) \equiv \psi(\rho e^{i \theta})$ is then
$$ \left(- \frac{1}{e^{2i\theta }} \frac{\partial^2}{\partial \rho^2} + A e^{-\rho^2 e^{2i \theta}} \rho^2 e^{2i \theta} \right ) f(\rho) = E_{complex} f(\rho) . $$
The point is that now $f(\rho)$ might be normalizable, and we can use a conventional method to diagonalize the new Hamiltonian on the left-hand side.
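A finite-difference sketch of this diagonalization (assuming numpy; the grid, $A$, and $\theta$ values are arbitrary, and the truncated-grid boundary is handled naively):

```python
import numpy as np

def rotated_hamiltonian(A=8.0, theta=0.3, L=12.0, n=400):
    """Discretize -e^{-2 i theta} d^2/drho^2 + A exp(-(rho e^{i theta})^2) (rho e^{i theta})^2."""
    rho = np.linspace(-L, L, n)
    h = rho[1] - rho[0]
    # 3-point stencil for the second derivative
    D2 = (np.diag(np.ones(n - 1), -1) - 2 * np.eye(n) + np.diag(np.ones(n - 1), 1)) / h**2
    z = rho * np.exp(1j * theta)
    return -np.exp(-2j * theta) * D2 + np.diag(A * np.exp(-z**2) * z**2)

# Complex eigenvalues: rotated continuum plus (possibly) isolated resonance poles
E = np.linalg.eigvals(rotated_hamiltonian())
```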
The problem is that this procedure relies essentially on the fact that the potential here is analytic. What if we have the simple square well potential? Apparently, many potentials can support resonance states but cannot be analytically continued into the complex plane.
|
I have been learning the Fourier transformation of a Gaussian wave packet and I don't know how to calculate this integral:
In the above integral we try to calculate $\varphi(\alpha)$, where $\alpha$ is the standard deviation, $\alpha^2$ is the variance, $x'$ is the average of $x$, $p'$ is the average of $p$, and:
$$\psi_\alpha = \frac{1}{\sqrt{\sqrt{\pi} \alpha}} \exp \left[ - \frac{(x-x')^2}{2 \alpha^2} \right] $$
For some reason the author of this derivation swaps $p$ with $(p - p')$ (red color), and from the $=$ sign (yellow color) forward I am completely lost. Could anyone please explain why the author did what he did? It is weird...
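Lacking the original image, here is a numerical sketch of the transform presumably being computed, assuming the packet also carries a momentum phase $e^{ip'x}$ (with $\hbar = 1$); numerically, $|\varphi(p)|$ comes out as a Gaussian centred at $p'$, i.e. it depends only on $p - p'$, which is consistent with the substitution in the derivation:

```python
import numpy as np

# Gaussian packet with mean position x', mean momentum p' (the phase factor is assumed)
alpha, x_avg, p_avg = 1.3, 0.7, 2.0
x = np.linspace(-40, 40, 20001)
dx = x[1] - x[0]
psi = np.exp(-(x - x_avg)**2 / (2 * alpha**2)) * np.exp(1j * p_avg * x)
psi /= np.sqrt(np.sqrt(np.pi) * alpha)

def phi(p):
    """phi(p) = (2 pi)^(-1/2) * integral of psi(x) exp(-i p x) dx, by Riemann sum."""
    return np.sum(psi * np.exp(-1j * p * x)) * dx / np.sqrt(2 * np.pi)
```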
|
ASO: Integral Transforms - Material for the year 2019-2020
This course is highly recommended for Differential Equations 2.
8 lectures
The Laplace and Fourier Transforms aim to take a differential equation in a function $f$ and turn it into an algebraic equation involving its transform $\bar{f}$ or $\hat{f}$. Such an equation can then be solved by algebraic manipulation, and the original solution determined by recognizing its transform or applying various inversion methods.
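As a one-line illustration of this strategy (not part of the synopsis): applying the Laplace transform to $y' + y = 0$ with $y(0) = 1$, and using $\overline{y'}(s) = s\bar{y}(s) - y(0)$, gives
$$s\bar{y}(s) - 1 + \bar{y}(s) = 0 \quad\Longrightarrow\quad \bar{y}(s) = \frac{1}{s+1} \quad\Longrightarrow\quad y(t) = e^{-t},$$
the last step recognizing $1/(s+1)$ as the transform of $e^{-t}$.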
The Dirac $\delta$-function, which is handled particularly well by transforms, is a means of rigorously dealing with ideas such as instantaneous impulses and point masses, which cannot be properly modelled using functions in the normal sense of the word. $\delta$ is an example of a distribution or generalized function, and the course provides something of an introduction to these generalized functions and their calculus.
Students will gain a range of techniques employing the Laplace and Fourier Transforms in the solution of ordinary and partial differential equations. They will also have an appreciation of generalized functions, their calculus and applications.
Motivation for a "function" with the properties of the Dirac $\delta$-function. Test functions. Continuous functions are determined by $ \phi \rightarrow \int f \phi$. Distributions and $\delta$ as a distribution. Differentiating distributions. (3 lectures)
Theory of Fourier and Laplace transforms, inversion, convolution. Inversion of some standard Fourier and Laplace transforms via contour integration.
Use of Fourier and Laplace transforms in solving ordinary differential equations, with some examples including $\delta$.
Use of Fourier and Laplace transforms in solving partial differential equations; in particular, use of Fourier transform in solving Laplace's equation and the Heat equation. (5 lectures)
S. Howison, Practical Applied Mathematics (CUP, 2005), Chapters 9 & 10 (for distributions).
P. J. Collins, Differential and Integral Equations (OUP, 2006), Chapter 14.
W. E. Boyce & R. C. DiPrima, Elementary Differential Equations and Boundary Value Problems; there are many editions, most recently in 2017; in all of them Chapter 6 covers Laplace Transforms.
K. F. Riley & M. P. Hobson, Essential Mathematical Methods for the Physical Sciences (CUP, 2011), Chapter 5.
H. A. Priestley, Introduction to Complex Analysis (2nd edition, OUP, 2003), Chapters 21 and 22.
L. Debnath & P. Mikusinski, Introduction to Hilbert Spaces with Applications (3rd edition, Academic Press, 2005), Chapter 6.
|
We owe Paul Dirac two excellent mathematical jokes. I have amended them with a few lesser known variations.
A.
Square root of the Laplacian: we want $\Delta$ to be $D^2$ for some first order differential operator (for example, because it is easier to solve first order partial differential equations than second order PDEs). Writing it out,
$$\sum_{k=1}^n \frac{\partial^2}{\partial x_k^2}=\left(\sum_{i=1}^n \gamma_i \frac{\partial}{\partial x_i}\right)\left(\sum_{j=1}^n \gamma_j \frac{\partial}{\partial x_j}\right) = \sum_{i,j}\gamma_i\gamma_j \frac{\partial^2}{\partial x_i \partial x_j},$$
and equating the coefficients, we get that this is indeed true if
$$D=\sum_{i=1}^n \gamma_i \frac{\partial}{\partial x_i}\quad\text{and}\quad \gamma_i\gamma_j+\gamma_j\gamma_i=2\delta_{ij}.$$
It remains to come up with the right $\gamma_i$'s. Dirac realized how to accomplish it with $4\times 4$ matrices when $n=4$; but a neat follow-up joke is to simply define them to be the elements $\gamma_1,\ldots,\gamma_n$ of
$$\mathbb{R}\langle\gamma_1,\ldots,\gamma_n\rangle/(\gamma_i\gamma_j+\gamma_j\gamma_i - 2\delta_{ij}).$$
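For $n = 3$ the Pauli matrices already realize these relations concretely (a quick numpy check, written with the standard normalization $\gamma_i\gamma_j+\gamma_j\gamma_i = 2\delta_{ij}$):

```python
import numpy as np

# Pauli matrices: sigma_i sigma_j + sigma_j sigma_i = 2 * delta_ij * I
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
ok = all(np.allclose(sigma[i] @ sigma[j] + sigma[j] @ sigma[i], 2 * (i == j) * np.eye(2))
         for i in range(3) for j in range(3))
```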
Using symmetry considerations, it is easy to conclude that the commutator of the $n$-dimensional Laplace operator $\Delta$ and the multiplication by $r^2=x_1^2+\cdots+x_n^2$ is equal to $aE+b$, where $$E=x_1\frac{\partial}{\partial x_1}+\cdots+x_n\frac{\partial}{\partial x_n}$$ is the Euler vector field. A boring way to confirm this and to determine the coefficients $a$ and $b$ is to expand $[\Delta,r^2]$ and simplify using the commutation relations between $x$'s and $\partial$'s. A more exciting way is to act on $x_1^\lambda$, where $\lambda$ is a formal variable:
$$[\Delta,r^2]x_1^{\lambda}=((\lambda+2)(\lambda+1)+2(n-1)-\lambda(\lambda-1))x_1^{\lambda}=(4\lambda+2n)x_1^{\lambda}.$$
Since $x_1^{\lambda}$ is an eigenvector of the Euler operator $E$ with eigenvalue $\lambda$, we conclude that
$$[\Delta,r^2]=4E+2n.$$
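The "boring way" is easily delegated to a computer algebra system — a sympy sketch for $n = 3$, acting on an arbitrary smooth function:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
xs = (x1, x2, x3)
f = sp.Function('f')(*xs)                 # arbitrary test function
r2 = sum(v**2 for v in xs)

commutator = (sum(sp.diff(r2 * f, v, 2) for v in xs)      # Delta(r^2 f)
              - r2 * sum(sp.diff(f, v, 2) for v in xs))   # minus r^2 Delta(f)
euler = sum(v * sp.diff(f, v) for v in xs)                # E f
difference = sp.simplify(commutator - (4 * euler + 2 * 3 * f))
```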
B.
Dirac delta function: if we can write
$$g(x)=\int g(y)\delta(x-y)dy$$
then instead of solving an inhomogeneous linear differential equation $Lf=g$ for each $g$, we can solve the equations $Lf=\delta(x-y)$ for each real $y$, where a linear differential operator $L$ acts on the variable $x,$ and combine the answers with different $y$ weighted by $g(y)$. Clearly, there are fewer real numbers than functions, and if $L$ has constant coefficients, using translation invariance the set of right hand sides is further reduced to just one, $\delta(x)$. In this form, the joke goes back to Laplace and Poisson.
What happens if instead of the ordinary geometric series we consider a doubly infinite one? Since
$$z(\cdots + z^{-n-1} + z^{-n} + \cdots + 1 + \cdots + z^n + \cdots)= \cdots + z^{-n} + z^{-n+1} + \cdots + z + \cdots + z^{n+1} + \cdots,$$
the expression in the parenthesis is annihilated by the multiplication by $z-1$, hence it is equal to $\delta(z-1)$. Homogenizing, we get
$$\sum_{n\in\mathbb{Z}}\left(\frac{z}{w}\right)^n=\delta(z-w)$$
This identity plays an important role in conformal field theory and the theory of vertex operator algebras.
Pushing infinite geometric series in a different direction,
$$\cdots + z^{-n-1} + z^{-n} + \cdots + 1=-\frac{z}{1-z} \quad\text{and}\quad 1 + z + \cdots + z^n + \cdots = \frac{1}{1-z},$$
which add up to $1$: counting the shared term $z^0$ only once, the sum of the doubly infinite geometric series is zero! Thus the point $0\in\mathbb{Z}$ is the sum of all lattice points on the non-positive half-line and all points on the non-negative half-line:
$$0=[\ldots,-2,-1,0] + [0,1,2,\ldots] $$
A vast generalization is given by Brion's formula for the generating function for the lattice points in a convex lattice polytope $\Delta\subset\mathbb{R}^N$ with vertices $v\in{\mathbb{Z}}^N$ and closed inner vertex cones $C_v\subset\mathbb{R}^N$:
$$\sum_{P\in \Delta\cap{\mathbb{Z}}^N} z^P = \sum_v\left(\sum_{Q\in C_v\cap{\mathbb{Z}}^N} z^Q\right),$$
where the inner sums in the right hand side need to be interpreted as rational functions in $z_1,\ldots,z_N$.
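In the simplest case $\Delta = [0, m] \subset \mathbb{R}$ (vertices $0$ and $m$, cones pointing right and left), the formula can be checked numerically at a sample point:

```python
# Brion's formula for the interval [0, m]: the two vertex-cone generating
# functions, summed as rational functions, reproduce the finite sum of z^p.
z, m = 0.37, 5
lhs = sum(z**p for p in range(m + 1))       # sum over lattice points of [0, m]
cone_at_0 = 1 / (1 - z)                     # sum_{q >= 0} z^q
cone_at_m = -z**(m + 1) / (1 - z)           # sum_{q <= m} z^q, continued rationally
```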
Another great joke based on infinite series is the Eilenberg swindle, but I am too exhausted by fighting the math preview to do it justice.
|
Inclusive differential cross sections $d\sigma_{pA}/dx_F$ and $d\sigma_{pA}/dp_t^2$ for the production of $K^0_s$, $\Lambda^0$, and $\bar{\Lambda}^0$ particles are measured at HERA in proton-induced reactions on C, Al, Ti, and W targets. The incident beam energy is 920 GeV, corresponding to $\sqrt {s} = 41.6$ GeV in the proton-nucleon system. The ratios of differential cross sections \rklpa and \rllpa are measured to be $6.2\pm 0.5$ and $0.66\pm 0.07$, respectively, for $x_F \approx-0.06$. No significant dependence upon the target material is observed. Within errors, the slopes of the transverse momentum distributions $d\sigma_{pA}/dp_t^2$ also show no significant dependence upon the target material. The dependence of the extrapolated total cross sections $\sigma_{pA}$ on the atomic mass $A$ of the target material is discussed, and the deduced cross sections per nucleon $\sigma_{pN}$ are compared with results obtained at other energies.
The proton-nucleon cross section ratio $R=Br(\Upsilon\to l^+l^-) d\sigma(\Upsilon)/dy|_{y=0} / {\sigma(J/\psi)}$ has been measured with the HERA-B spectrometer in fixed-target proton-nucleus collisions at 920 GeV proton beam energy corresponding to a proton-nucleon cms energy of sqrt{s}=41.6 GeV. The combined results for the Upsilon decay channels Upsilon $\to e^+e^-$ and Upsilon $\to\mu^+\mu^-$ yield a ratio $R=(9.0 \pm 2.1)\times 10^{-6}$. The corresponding Upsilon production cross section per nucleon at mid-rapidity (y=0) has been determined to be $Br(\Upsilon\to{}l^+l^-) {d\sigma(\Upsilon)/dy}|_{y=0}= 4.5 \pm 1.1 $ pb/nucleon.
Inclusive doubly differential cross sections d^2\sigma_{pA}/dx_Fdp_T^2 as a function of Feynman-x (x_F) and transverse momentum (p_T) for the production of K^0_s, Lambda^0 and anti-Lambda^0 in proton-nucleus interactions at 920 GeV are presented. The measurements were performed by HERA-B in the negative x_F range (-0.12<x_F<0.0) and for transverse momenta up to p_T= 1.6 GeV/c. Results for three target materials: carbon, titanium and tungsten are given. The ratios of production cross sections are presented and discussed. The Cronin effect is clearly observed for all three V^0 species. The atomic number dependence is parameterized as \sigma_{pA} = \sigma_{pN} \cdot A^\alpha where \sigma_{pN} is the proton-nucleon cross section. The measured values of \alpha are all near one. The results are compared with EPOS 1.67 and PYTHIA 6.3. EPOS reproduces the data to within \approx 20% except at very low transverse momentum.
Azimuthal decorrelations between the two central jets with the largest transverse momenta are sensitive to the dynamics of events with multiple jets. We present a measurement of the normalized differential cross section based on the full dataset (L=36/pb) acquired by the ATLAS detector during the 2010 sqrt(s)=7 TeV proton-proton run of the LHC. The measured distributions include jets with transverse momenta up to 1.3 TeV, probing perturbative QCD in a high energy regime.
This letter describes the observation of the light-by-light scattering process, $\gamma\gamma\rightarrow\gamma\gamma$, in Pb+Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV. The analysis is conducted using a data sample corresponding to an integrated luminosity of 1.73 nb$^{-1}$, collected in November 2018 by the ATLAS experiment at the LHC. Light-by-light scattering candidates are selected in events with two photons produced exclusively, each with transverse energy $E_{\textrm{T}}^{\gamma} > 3$ GeV and pseudorapidity $|\eta_{\gamma}| < 2.37$, diphoton invariant mass above 6 GeV, and small diphoton transverse momentum and acoplanarity. After applying all selection criteria, 59 candidate events are observed for a background expectation of 12 $\pm$ 3 events. The observed excess of events over the expected background has a significance of 8.2 standard deviations. The measured fiducial cross section is 78 $\pm$ 13 (stat.) $\pm$ 7 (syst.) $\pm$ 3 (lumi.) nb.
Using 1.8 fb-1 of pp collisions at a center-of-mass energy of 7 TeV recorded by the ATLAS detector at the Large Hadron Collider, we present measurements of the production cross sections of Upsilon(1S,2S,3S) mesons. Upsilon mesons are reconstructed using the di-muon decay mode. Total production cross sections for p_T<70 GeV and in the rapidity interval |Upsilon|<2.25 are measured to be 8.01+-0.02+-0.36+-0.31 nb, 2.05+-0.01+-0.12+-0.08 nb, 0.92+-0.01+-0.07+-0.04 nb respectively, with uncertainties separated into statistical, systematic, and luminosity measurement effects. In addition, differential cross section times di-muon branching fractions for Upsilon(1S), Upsilon(2S), and Upsilon(3S) as a function of Upsilon transverse momentum p_T and rapidity are presented. These cross sections are obtained assuming unpolarized production. If the production polarization is fully transverse or longitudinal with no azimuthal dependence in the helicity frame the cross section may vary by approximately +-20%. If a non-trivial azimuthal dependence is considered, integrated cross sections may be significantly enhanced by a factor of two or more. We compare our results to several theoretical models of Upsilon meson production, finding that none provide an accurate description of our data over the full range of Upsilon transverse momenta accessible with this dataset.
We measure the ttbar production cross section in ppbar collisions at sqrt{s}=1.96 TeV in the lepton+jets channel. Two complementary methods discriminate between signal and background, b-tagging and a kinematic likelihood discriminant. Based on 0.9 fb-1 of data collected by the D0 detector at the Fermilab Tevatron Collider, we measure sigma_ttbar=7.62+/-0.85 pb, assuming the current world average m_t=172.6 GeV. We compare our cross section measurement with theory predictions to determine a value for the top quark mass of 170+/-7 GeV.
Measurements of the kinematic distributions of $J/\psi$ mesons produced in $p-$C, $p-$Ti and $p-$W collisions at $\sqrt{s}=41.6 \mathrm{GeV}$ in the Feynman-$x$ region $-0.34 < x_{F} < 0.14$ and for transverse momentum up to $p_T = 5.4 \mathrm{GeV}/c$ are presented. The $x_F$ and $p_T$ dependencies of the nuclear suppression parameter, $\alpha$, are also given. The results are based on $2.4 \cdot 10^{5}$ $J/\psi$ mesons in both the $e^+ e^-$ and $\mu^{+}\mu^{-}$ decay channels. The data have been collected by the HERA-B experiment at the HERA proton ring of the DESY laboratory. The measurement explores the negative region of $x_{F}$ for the first time. The average value of $\alpha$ in the measured $x_{F}$ region is $0.981 \pm 0.015$. The data suggest that the strong nuclear suppression of $J/\psi$ production previously observed at high $x_F$ turns into an enhancement at negative $x_F$.
A study of WZ production in proton-proton collisions at sqrt(s) = 7 TeV is presented using data corresponding to an integrated luminosity of 4.6 fb^-1 collected with the ATLAS detector at the Large Hadron Collider in 2011. In total, 317 candidates, with a background expectation of 68+/-10 events, are observed in double-leptonic decay final states with electrons, muons and missing transverse momentum. The total cross-section is determined to be sigma_WZ(tot) = 19.0+1.4/-1.3(stat.)+/-0.9(syst.)+/-0.4(lumi.) pb, consistent with the Standard Model expectation of 17.6+1.1/-1.0 pb. Limits on anomalous triple gauge boson couplings are derived using the transverse momentum spectrum of Z bosons in the selected events. The cross section is also presented as a function of Z boson transverse momentum and diboson invariant mass.
A search for highly ionising, penetrating particles with electric charges from |q| = 2e to 6e is performed using the ATLAS detector at the CERN Large Hadron Collider. Proton-proton collision data taken at $\sqrt{s}$=7 TeV during the 2011 running period, corresponding to an integrated luminosity of 4.4 fb$^{-1}$, are analysed. No signal candidates are observed, and 95% confidence level cross-section upper limits are interpreted as mass-exclusion lower limits for a simplified Drell--Yan production model. In this model, masses are excluded from 50 GeV up to 430, 480, 490, 470 and 420 GeV for charges 2e, 3e, 4e, 5e and 6e, respectively.
The results of a search for pair production of the scalar partners of bottom quarks in 2.05 fb^-1 of pp collisions at sqrt{s} = 7 TeV using the ATLAS experiment are reported. Scalar bottoms are searched for in events with large missing transverse momentum and two jets in the final state, where both jets are identified as originating from a b-quark. In an R-parity conserving minimal supersymmetric scenario, assuming that the scalar bottom decays exclusively into a bottom quark and a neutralino, 95% confidence-level upper limits are obtained in the tilde{b}_1 - tilde{chi}^0_1 mass plane such that for neutralino masses below 60 GeV scalar bottom masses up to 390 GeV are excluded.
The results of a dedicated search for pair production of scalar partners of charm quarks are reported. The search is based on an integrated luminosity of 20.3 fb-1 of pp collisions at s=8 TeV recorded with the ATLAS detector at the LHC. The search is performed using events with large missing transverse momentum and at least two jets, where the two leading jets are each tagged as originating from c quarks. Events containing isolated electrons or muons are vetoed. In an R-parity-conserving minimal supersymmetric scenario in which a single scalar-charm state is kinematically accessible, and where it decays exclusively into a charm quark and a neutralino, 95% confidence-level upper limits are obtained in the scalar-charm–neutralino mass plane such that, for neutralino masses below 200 GeV, scalar-charm masses up to 490 GeV are excluded.
In order to study further the long-range correlations ("ridge") observed recently in p+Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, the second-order azimuthal anisotropy parameter of charged particles, $v_2$, has been measured with the cumulant method using the ATLAS detector at the LHC. In a data sample corresponding to an integrated luminosity of approximately 1 $\mu\mathrm{b}^{-1}$, the parameter $v_2$ has been obtained using two- and four-particle cumulants over the pseudorapidity range |$\eta$| < 2.5. The results are presented as a function of transverse momentum and the event activity, defined in terms of the transverse energy summed over 3.1 < $\eta$ < 4.9 in the direction of the Pb beam. They show features characteristic of collective anisotropic flow, similar to that observed in Pb+Pb collisions. A comparison is made to results obtained using two-particle correlation methods, and to predictions from hydrodynamic models of p+Pb collisions. Despite the small transverse spatial extent of the p+Pb collision system, the large magnitude of $v_2$ and its similarity to hydrodynamic predictions provide additional evidence for the importance of final-state effects in p+Pb reactions.
A $6.8 \ {\rm nb^{-1}}$ sample of $pp$ collision data collected under low-luminosity conditions at $\sqrt{s} = 7$ TeV by the ATLAS detector at the Large Hadron Collider is used to study diffractive dijet production. Events containing at least two jets with $p_\mathrm{T} > 20$ GeV are selected and analysed in terms of variables which discriminate between diffractive and non-diffractive processes. Cross sections are measured differentially in $\Delta\eta^F$, the size of the observable forward region of pseudorapidity which is devoid of hadronic activity, and in an estimator, $\tilde{\xi}$, of the fractional momentum loss of the proton assuming single diffractive dissociation ($pp \rightarrow pX$). Model comparisons indicate a dominant non-diffractive contribution up to moderately large $\Delta\eta^F$ and small $\tilde{\xi}$, with a diffractive contribution which is significant at the highest $\Delta\eta^F$ and the lowest $\tilde{\xi}$. The rapidity-gap survival probability is estimated from comparisons of the data in this latter region with predictions based on diffractive parton distribution functions.
Multi-particle cumulants and corresponding Fourier harmonics are measured for azimuthal angle distributions of charged particles in $pp$ collisions at $\sqrt{s}$ = 5.02 and 13 TeV and in $p$+Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, and compared to the results obtained for low-multiplicity Pb+Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV. These measurements aim to assess the collective nature of particle production. The measurements of multi-particle cumulants confirm the evidence for collective phenomena in $p$+Pb and low-multiplicity Pb+Pb collisions. On the other hand, the $pp$ results for four-particle cumulants do not demonstrate collective behaviour, indicating that they may be biased by contributions from non-flow correlations. A comparison of multi-particle cumulants and derived Fourier harmonics across different collision systems is presented as a function of the charged-particle multiplicity. For a given multiplicity, the measured Fourier harmonics are largest in Pb+Pb, smaller in $p$+Pb and smallest in $pp$ collisions. The $pp$ results show no dependence on the collision energy, nor on the multiplicity.
|
(Written by Jonathan Sheppard)
Intro
This summer the team took the previous year's team's RF probe design and both updated and upgraded it, in an attempt to optimize its functions. As with last year's design, we used the text
Experimental Pulse NMR: A Nuts and Bolts Approach by Eiichi Fukushima and Stephen B. W. Roeder as a guide. The main sections used will be noted throughout this post.
As noted in the previous post, below are some general guidelines on the inductor coil specifications (from the section “V.C.4. Probes” (p. 373-385)):

* A good coil for an RF probe has a large inductance, which is given by the equation: $$L=\frac{n^2a^2}{23a\ +\ 25b},$$ where $n$ is the number of loops, and $a$ and $b$ are the radius of the loop and the length of the coil, both measured in centimeters.
* The spacing between the turns of the coil should be approximately equal to the diameter of the wire.
* AWG14 wire is about the smallest that can be recommended for a free-standing coil.
To begin the process, we had to choose a desired resonance frequency for our coil. We brought it down from last summer's 21 MHz to 18.75 MHz. Then, we had to note the range of capacitances the capacitors in our lab have: 10-1000 pF. Using these values, we used the formula for the angular resonance frequency: $$\omega_0=\frac{1}{\sqrt{LC}},$$ solving for $L$ with the smallest available capacitance, to find the theoretical maximum inductance our coil could have: 7.2 µH.
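As a quick sanity check (a sketch, assuming the 7.2 µH figure corresponds to the smallest available capacitance, 10 pF, since a smaller $C$ permits a larger $L$ at fixed resonance):

```python
from math import pi

f0 = 18.75e6        # chosen resonance frequency, Hz
C_min = 10e-12      # smallest capacitance in the lab, F
w0 = 2 * pi * f0
L_max = 1 / (w0**2 * C_min)   # about 7.2e-6 H, i.e. 7.2 uH
```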
We wanted to fit the coil inside the bore of our magnet, which has an inner diameter of 1.7 cm, and we also wanted the length of the coil, $b$, to be at least 2.5 cm. The variable $a$ is the inner radius of the coil, so we subtracted the wire thickness on each side from the total 1.7 cm diameter. In theory, this would have given 1.37 cm. In practice, this became 1.1 cm, which is also the diameter of the tube we wrap our wire around. So our $a$, the inner radius, is 1.1 / 2, giving $a$ = 0.55 cm.
We then chose our $b$ to simply be the length of our bore, 5 cm. Following the advice of A Nuts and Bolts Approach, we experimented with both AWG14 and AWG12 copper wire, as our inductor coil would be free-standing. We settled on AWG14 (with nominal wire diameter 1.628 mm) due to the lowered requirements on our $b$ to achieve the highest possible inductance, $L$.
Using a Python script we created, we computed the rest of the values needed to build an inductor coil. These values included $L$ (estimated inductance), $n$ (number of turns), $l$ (length of copper wire needed), $R$ (resistance of our length of copper wire), $Q$ (quality factor of our inductor coil), $C_t$ (tuning capacitance), and $C_m$ (matching capacitance). We'll continue this post by walking through the equations used to get these values.
To begin, we used the following function: $$n=\frac{\sqrt{(23a\ +\ 25b)\ \ast\ L}}{a},$$ which you'll notice is the inductance equation from above, solved for the number of turns, $n$.
def findL_n(a, b, d):
    wireWidth = 1000
    L = 0
    while wireWidth > d:  # width in cm
        L += .01
        n = sqrt((23 * a + 25 * b) * L) / a
        wireWidth = b / (2 * n)
    return L, n  # L in μH
We started with a ridiculously high $wireWidth$ and an inductance, $L$, of zero. Then we increment $L$ in each loop, updating the $wireWidth$ each time, until the wireWidth drops to $d$ (1.628 mm, for our AWG14 copper wire). When the wireWidth is equal to (or less than) our actual wire width, the loop breaks, returning the $L$ given by the parameters we've chosen, along with the number of turns. Using this function, we got back $L$ = 0.52 µH and $n$ = 15.5.
The next step was to find out how much copper wire we’d need to create the inductor coil. To do that, we needed to use our inner radius $a$ and the number of turns $n$.
def findCoilLength(a, n):
    return 2 * pi * (a / 100) * n  # a in cm; output in m
This function multiplies the circumference of each loop (converting $a$ to meters so that our output will be given in meters) by the number of loops. The output here was $l$ = 0.53 m.
Here we needed to make a quick check on our calculation: the length of copper wire used must be shorter than an eighth of a wavelength at resonance.
def isCoilLengthOkay(a, f_0, n):
    eighthWavelength = c / (8 * f_0)  # c is the speed of light, m/s
    l = findCoilLength(a, n)
    return l < eighthWavelength
To do this, we used the formula for wavelength, multiplied by 1/8 to get, you guessed it, an eighth of a wavelength. Then we simply called the function we defined before, compared the two, and our $l$ passed the check.
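Numerically, the check passes with plenty of margin (a sketch with the values above):

```python
c = 3e8            # speed of light, m/s
f0 = 18.75e6       # resonance frequency, Hz
l = 0.53           # coil wire length from above, m

eighthWavelength = c / (8 * f0)   # 2.0 m
coil_ok = l < eighthWavelength    # our 0.53 m of wire passes easily
```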
Next, we needed to find the resistance of our length of wire in order to calculate a quality factor, $Q$.
def findCopperResistance(l, d):
    return 4 * l * 0.0171 / (pi * d**2)  # l in m, d in mm

def findQualityFactor(f_0, L, R):  # L input in μH, converted to H
    w_0 = 2 * pi * f_0
    return w_0 * L * 10**(-6) / R
We used the formula for copper resistance: $$R=\frac{4l\rho}{\pi d^2},$$ where $l$ is the length of wire used in meters (0.53 m in our case), $d$ is the nominal diameter of the wire in millimeters (1.628 mm in our case), and $\rho$ is the resistivity (0.0171 Ω·mm²/m for copper). The function estimated our coil to have $R$ = 0.0044 Ω. Then, plugging our $R$ into the second function alongside our angular resonance frequency, $\omega_0$, and inductance, $L$, using the equation $$Q=\frac{\omega_0L}{R},$$ we get $Q$ = 14,028. In practice, this is much lower, as will be shown later.
Finally, we use all the information collected so far to calculate how much capacitance our tuning and matching capacitors ($C_t$ and $C_m$ respectively) should have.
def findC_t(f_0, L):  # L input in μH, converted to H
    w_0 = 2 * pi * f_0
    return 1 / (L * 10**(-6) * w_0**2)

def findC_m(f_0, L, R):  # L input in μH, converted to H
    Q = findQualityFactor(f_0, L, R)
    C_t = findC_t(f_0, L)
    w_0 = 2 * pi * f_0
    return sqrt((Q * w_0 * L * 10**(-6) * C_t**2) / 50) - C_t
Our tuning capacitor simply required us to solve the angular resonance equation for $C$ this time, instead of $L$. Our matching capacitor required a bit more work, but it was simple enough. We used an equation given by A Nuts and Bolts Approach, solved for $C_m$: $$R\ =\ \frac{Q\omega_0L{C_t}^2}{{(C_m\ +\ C_t)}^2}.$$ This function gave us a $C_t$ of 138 pF and a $C_m$ of 180 nF.

Summary of Coil Design Process
Thus, we had all the specifications we needed to create an inductor coil to suit our needs. Below is a quick summary of all of the values above, so they're all in one place and to refresh your memory. Keep in mind these are theoretical values from our Python script. Not all of them stayed the same, as theory and practice are separate beasts.
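For reference, the whole chain of calculations can be reproduced in one short, self-contained script (a sketch combining the functions above; exact outputs depend on rounding at each step, so they can differ slightly from the quoted values):

```python
from math import pi, sqrt

def findL_n(a, b, d):
    # grow L until the turn spacing implied by n matches the wire width
    wireWidth, L = 1000.0, 0.0
    while wireWidth > d:            # all lengths here in cm
        L += 0.01
        n = sqrt((23 * a + 25 * b) * L) / a
        wireWidth = b / (2 * n)
    return L, n                     # L in uH

a, b = 0.55, 5.0                    # coil radius and length, cm
d_cm, d_mm = 0.1628, 1.628          # AWG14 wire diameter
f0 = 18.75e6                        # target resonance frequency, Hz

L, n = findL_n(a, b, d_cm)
l = 2 * pi * (a / 100) * n          # wire length, m
R = 4 * l * 0.0171 / (pi * d_mm**2) # wire resistance, ohms
w0 = 2 * pi * f0
Q = w0 * L * 1e-6 / R               # quality factor
C_t = 1 / (L * 1e-6 * w0**2)        # tuning capacitance, F
C_m = sqrt(Q * w0 * L * 1e-6 * C_t**2 / 50) - C_t  # matching into 50 ohms
```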
$f_0$ = 18.75 MHz
$\textrm{Max }L$ = 7.2 μH
$L$ = 0.52 μH
$n$ = 15.5
$b$ = 5 cm
$a$ = 0.55 cm
$d$ = 1.628 mm
$l$ = 0.53 m
$R$ = 0.0044 Ω
$Q$ = 14,028
$C_t$ = 138 pF
$C_m$ = 180 nF

RF Probe Circuit Design, Building, & Testing
In accordance with A Nuts and Bolts Approach, we used the parallel tank circuit suggested on page 381.
This design is a bit different than last summer's, due to the fact that we more closely followed the route suggested in A Nuts and Bolts Approach. Another difference, and one that is a bit more fundamental to the design, was future-proofing it. This came in the form of making the circuit plug-and-play. Before, everything, including the inductor coil and BNC cable, was hardwired into the circuit. This required the BNC cable to be spliced and soldered into the circuit. The new design made use of a BNC connector (no more destroying BNC cables) and banana clips for the inductor coil. This meant we could use varying lengths of BNC cable, depending on the situation, and that we could also try out different coils if necessary. In the second version of the design, we even added a clip for the static matching capacitors, so those could be clipped in and out at will with ease.
Another design change came in the form of the coil design. The previous year's design placed the inductor coil inside a test tube, which in turn would fit into the bore of the magnet. This updated design puts the inductor coil on the outside, with no tube to fit in other than the bore itself. This would have increased the room we had for samples, if not for the fact that we upgraded to AWG14 wire from AWG18; hence, any gains were offset by the thicker wire.
And finally, the last design change came from turning the coil perpendicular to the rest of the circuit. This was due to the fact that a final form of our machine would ideally have a flow system in place to let a live zebrafish be imaged. To do this, the flow system would have to run through the bore (and hence through our inductor coil) on both sides. So, turning the inductor coil perpendicular allowed us to craft a tube (or simply two holes in the second design variation) through which the flow system could snake, without hitting any circuitry or impeding our RF probe.
Pictured below is the first complete design using these new ideas. It was completely 3D-printed, with the exception of the white cap we salvaged and bolted our own variable capacitor to. We found a good tuning capacitance at 72 pF, and continued to test different matching capacitances.
The graph below shows one of the resonance curves, for a matching capacitance of 100 nF, with the accompanying quality factor.
When the testing began, the resonance frequency was quite a bit lower than we wanted. By changing the matching capacitance, we managed to get it around 18.75 MHz, our desired resonance frequency. In this design, the matching capacitors were hard-wired into the circuit, so each test required soldering to modify the capacitors. In the picture above, you'll notice the snap connector. After building an aluminum version (more on this in the next section), we learned that this was a much simpler method of changing our static matching capacitors.
As a first design test, things went surprisingly well. However, we were seeing some odd background noise. To combat this, we decided to try again, replacing our plastic, 3D-printed enclosure with a shielding aluminum one, pictured below. The circuit was essentially the same, and we used more variable capacitors for both the tuning and matching, in addition to the static capacitors in parallel with the matching. The main change came in the form of the matching capacitor: we added a small spring connector (in keeping with our plug-and-play mentality), so that if we wanted to further experiment with the matching capacitance, we wouldn't have to solder over and over again, improving the quality control of subsequent tests. An important thing to note is that, thanks to the banana clips, we were able to use the exact same inductor coil our last model was tested with.
We also noticed that our matching capacitance was much too high. The TeachSpin system had a static matching capacitor of approximately 900 pF, with variable capacitors equaling 330 pF. Knowing that our system was similar to the TeachSpin (and that it was the TeachSpin we were trying to emulate), we lowered our static matching capacitor from nanofarads to picofarads, and began to see better peak-to-peak voltages. This went against our python program, and against our working theory. One possible explanation is that the resistance in reality is larger than the calculated resistance of the copper wire. This is exemplified below in the resonance curves: the first has a large matching capacitance and the second a lower one. The third curve was taken with an Analog Discovery 2 System Analyzer using the SpinCore RF processor to generate a fake signal (rather than the TeachSpin synthesizer and an oscilloscope). This fake signal consisted of a 5 V peak-to-peak pulse that was 25 μs long and repeated every 50 μs. The last curve was generated using the same setup, except we used the TeachSpin magnet and RF tank circuit instead of our own.
NOTE: The final two curves are not resonance curves; they show the amplitude of the fake signal being detected at the tuned frequency. (To get a resonance curve, one would need to change the frequency of the fake signal and plot the amplitude of the response relative to the change in frequency.)
We tried using the TeachSpin receiver to measure the response to a fake signal generated by the SpinCore RF processor, but kept getting a very noisy signal with a large baseline and a barely visible pulse. We hypothesize that this bad data was due to the receiver on the TeachSpin `subtracting out' the RF frequency before output. The SpinCore RF processor is not synced with the TeachSpin, so this subtraction process would not be very successful. However, the fact that our tank circuit's sensitivity is comparable to the TeachSpin's is notable.
Before we began to use the Analog Discovery system, we were still seeing the same noise that we noticed with our 3D-printed version. In another attempt to lessen this noise, we crafted some rudimentary shielding to encompass our system. Importantly, we also had to ground the actual body of our enclosure, and, when connected to the shielding system, this allowed our shield to become grounded as well. Without grounding, our shield wouldn't give us much noise reduction. As in all things with this project, our goal was to match or surpass the TeachSpin system. Surprisingly, our shielding (shown below) surpasses the TeachSpin system ever so slightly in its ability to reduce noise.
The big end goal of this summer was to get some sort of signal from our system. Unfortunately, we were unable to achieve this goal. However, the sheer amount of progress we made in other areas more than makes up for this. We have several designs for new circuit enclosures. Said designs are consistent and reproducible, with features for ease of access, such as banana clips for the inductor coil and a snap connector for the static matching capacitors. We made real progress in figuring out how to use both the RF Processor and the Analog Discovery System Analyzer. Used in conjunction with each other, they put us well on our way to acquiring a signal not dissimilar from the TeachSpin System's.
One of the things we’re looking forward to working on next summer is obtaining a directional coupler to further our utilization of our RF Processor and Analog Discovery Analyzer. We will also be building a transmit/receive circuit so that we can use our magnet, RF probe, and the RF processor to be completely independent of the TeachSpin system.
|
@math: Inserting Mathematical Expressions
You can write a short mathematical expression with the @math command. Write the mathematical expression between braces, like this:
@math{(a + b) = (b + a)}
This produces the following in Info and HTML:
(a + b) = (b + a)
The @math command has no special effect on the Info and HTML output. makeinfo expands any @-commands as usual, but it does not try to produce good mathematical formatting in any way (no use of MathML, etc.). The HTML output is enclosed by <em>...</em>, but nothing more.
However, as far as the TeX output is concerned, plain TeX mathematical commands are allowed in @math, starting with ‘\’. In essence, @math switches into plain TeX math mode. (Exception: the plain TeX command \sup, which typesets the mathematical operator name ‘sup’, must be accessed as \mathopsup, due to the conflict with Texinfo’s @sup command.)
This allows you to use all the plain TeX math control sequences for symbols, functions, and so on, and thus get proper formatting in the TeX output, at least.
The @sub and @sup commands described in the previous section produce subscripts and superscripts in HTML output as well as TeX; the plain TeX characters _ and ^ for subscripts and superscripts are recognized by TeX inside @math, but do nothing special in HTML or other output formats.
It’s best to use ‘\’ instead of ‘@’ for any such mathematical commands; otherwise, makeinfo will complain. On the other hand, makeinfo does allow input with matching (but unescaped) braces, such as ‘k_{75}’; it complains about such bare braces in regular input.
Here’s an example:
@math{\sin 2\pi \equiv \cos 3\pi}
which looks like the input in Info and HTML:
\sin 2\pi \equiv \cos 3\pi
Since ‘\’ is an escape character inside @math, you can use @\ to get a literal backslash (\\ will work in TeX, but you’d get the literal two characters ‘\\’ in Info).
@\ is not defined outside of @math, since a ‘\’ ordinarily produces a literal (typewriter) ‘\’. You can also use @backslashchar{} in any mode to get a typewriter backslash. See Inserting a Backslash.
For displayed equations, you must at present use TeX directly (see Raw Formatter Commands).
|
The full proof can be found here. Basically, we compare the three areas that depend on $x$ in the circle of radius $1$ shown below.
Regardless of the value of $x$, we should have
$$\text{area of sector OAC} < \text{area of triangle OAP} < \text{area of sector OBP}$$ $$\frac{1}{2}x(\cos{x})^2<\frac{1}{2}(\cos{x})(\sin{x})<\frac{x}{2}$$
I have two questions about it:
1) Shouldn't there be a $\le$ instead of the $<$ sign in the inequality above, since the area of sector $OAC$ and the area of triangle $OAP$ both become zero when $x=\frac{\pi}{2}$?
2) If the value of $x$ is such that we end up in the fourth quadrant, the value of $\sin{x}$ becomes negative and the inequality no longer holds (since $\frac{1}{2}x(\cos{x})^2>0$ and $\frac{1}{2}(\cos{x})(\sin{x})<0$). How can we get around this?
Thanks in advance.
|
I've been practicing some Fourier series questions and then verifying my answers by generating an equivalent graph on MATLAB and comparing it with the graph generated by PSpice in simulating the same circuit.
This is my working:
The Fourier series of the source current:
\$i_s\left(t\right)=1+\frac{4}{\pi}\sum_{n=1}^{\infty}{\frac{1-\left(-1\right)^n}{n}\sin{n\pi t}}\$
Then do a source transformation to simplify the circuit: \$v_s\left(t\right)=1i_s\left(t\right)\$
Then working in the phasor domain and using voltage division:
\$\omega_n=\pi n\$
\$V_{out}=\frac{Z_L}{Z_L+3}V_s\$
\$V_{out}=\frac{j\omega_n}{j\omega_n+1}V_s\$
\$V_s=I_s=\frac{4}{\pi n}\left(1-\left(-1\right)^n\right)e^{j\left(-90\right)}\$
\$V_{out}=\left(\frac{j\omega_n}{j\omega_n+1}\right)\left(\frac{4\left(1-\left(-1\right)^n\right)}{\pi n}\right)e^{j\left(-90\right)}\$
\$V_{out}=\left(\frac{4\left(1-\left(-1\right)^n\right)}{\pi n}\right)\left(\frac{\omega_ne^{j\left(90\right)}}{\sqrt{1+\omega_n^2}e^{j\left(\tan^{-1}{\pi n}\right)}}\right)e^{j\left(-90\right)}\$
Taking only odd n terms since evens result in 0:
\$V_{out}=\left(\frac{8}{\sqrt{1+\pi^2n^2}}\right)e^{j\left({-\tan}^{-1}{\pi n}\right)}\$
and finally in the time domain:
\$v_{out}\left(t\right)=\sum_{k=1}^{\infty}{\frac{8}{\sqrt{1+\pi^2n^2}}\cos{\left(\pi nt-\tan^{-1}{\pi n}\right)}}\$
with n = 2k - 1 for odd terms
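For checking the algebra, the partial sum is easy to evaluate numerically as well (a Python sketch; `v_out` and `kmax` are my names, not part of the original working):

```python
import numpy as np

def v_out(t, kmax=200):
    """Partial sum of the derived series, with n = 2k - 1 (odd harmonics)."""
    t = np.asarray(t, dtype=float)
    total = np.zeros_like(t)
    for k in range(1, kmax + 1):
        n = 2 * k - 1
        wn = np.pi * n                  # omega_n = pi * n
        amp = 8 / np.sqrt(1 + wn**2)    # harmonic amplitude
        total += amp * np.cos(wn * t - np.arctan(wn))
    return total
```

Plotting `v_out` over a couple of periods and comparing its sign against the PSpice trace should show immediately whether an overall minus sign crept in.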
I plot my answer in MATLAB and it seems to be the negative of what the equivalent PSpice graph shows.
Can someone point out what is wrong please?
|
Fluvial geomorphologists describe stream power, $\Omega$, as the 'rate of energy dissipation against the bed and banks of a river per unit downstream length'.
It is expressed as a function of water density $\rho$, channel slope $S$, discharge $Q$, and the gravitational acceleration $g$:
$$\Omega = \rho g Q S$$
I'm aware geomorphologists are usually interested in a local measure of power for predicting things like sediment transport.
I'd like to estimate the total power dissipated by a river network above a certain elevation ( i.e., for a watershed, above a gauging station, etc...)
[Note: It's for fun. I'm curious about the energy dissipated for an entire mountain range.]
Here's what I've thought about so far.
Given the following:
$S = \frac{d h}{d x}$
assume $\rho$ is constant with $x$ (probably a poor assumption)
assume a power relationship between discharge and drainage area, $Q = \alpha A^{\beta}$
With that I can imagine writing
$$ \begin{align} \int \Omega \ dx & = \rho g \int Q \ \frac{d h}{d x} \ dx \\ & = \alpha \rho g \int A^{\beta} \ dh \end{align} $$
Computing the change in drainage area with height should be fairly easy with GIS, but I'm left with the messy empirically-determined constants, $\alpha$ and $\beta$. Overall, I expect I'm missing a simpler principle in my approach.
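As a sketch of what that estimate could look like in practice (all numbers hypothetical, including the $\alpha$ and $\beta$ of the discharge-area fit; the hypsometric data would come from a DEM in GIS):

```python
import numpy as np

rho, g = 1000.0, 9.81        # water density (kg/m^3), gravity (m/s^2)
alpha, beta = 0.5, 0.9       # hypothetical Q = alpha * A**beta fit, SI units

def total_power(h, A):
    """alpha * rho * g * integral of A(h)**beta dh, via the trapezoid rule."""
    y = alpha * rho * g * A**beta
    return np.sum((y[:-1] + y[1:]) / 2 * np.diff(h))

# synthetic hypsometry: drainage area shrinking with elevation above the gauge
h = np.linspace(0.0, 2000.0, 201)     # m
A = 1e8 * (1 - h / 2000.0)**2         # m^2
```

With real data, `h` and `A` would be replaced by the elevation bands and upstream areas extracted from the DEM.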
Is there a simpler or more elegant way to estimate the total power dissipated by a river?
|
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This makes it possible to select events with the same centrality ...
First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC
(Elsevier, 2018-01)
This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV
(Elsevier, 2018-06)
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV
(American Physical Society, 2018-03)
The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV
(Elsevier, 2018-05)
We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2018-01)
We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV
(Springer Berlin Heidelberg, 2018-07-16)
Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
|
If we want to calculate mean magnetisation of an equilibrium two-level-system, we know that we can resolve the identity $ \mathbf{1} = \sum_i | E_i \rangle \langle E_i |$ and giving us a uniform measure over the states of the system.
I then remember my classical stat mech, shove in some Boltzmann factors and obtain the Gibbs state $$\rho = Z^{-1}\sum_i \mathrm{e}^{- \beta E_i}| E_i \rangle \langle E_i | $$ and can calculate the mean magnetisation; for a two-level system with $E_0 = -B/2$ and $E_1=B/2$ we then get $\langle m \rangle = \mathrm{Tr}[\sigma_z \rho]/2 = \tanh \left(\beta B/2 \right)/2$. All seems very good.
However, if I was ignorant of this result, I might also say I can write down a state $$ | \psi \rangle = \cos \frac{\theta}{2} | 0 \rangle + \mathrm e^{\mathrm i \phi}\sin \frac{\theta}{2} | 1 \rangle$$ which has energy $\langle \psi | H | \psi \rangle = - B \cos \theta/2$.
So, since I can write down a measure over my states $$ \mathbf{1} = \frac{1}{2\pi}\iint \sin \theta \, \mathrm d \phi \, \mathrm d \theta \, | \psi \rangle \langle \psi |$$ I can also construct some sort of Gibbs state by weighting them all by a Boltzmann factor; this gives $$\rho = \frac{1}{2\pi Z}\iint \sin \theta \, \mathrm d \phi \, \mathrm d \theta \, \mathrm e^{\beta B \cos \theta/2}| \psi \rangle \langle \psi |$$ However, this yields $\langle m \rangle = \mathrm{Tr}[\sigma_z \rho]/2 = \coth (\beta B/2)/2-1/(\beta B)$, the classical result.
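The two constructions are easy to compare numerically (a sketch; the function names are mine). The first evaluates $\mathrm{Tr}[\sigma_z\rho]/2$ for the Gibbs state built from energy eigenstates, the second for the Boltzmann-weighted mixture of Bloch-sphere pure states:

```python
import numpy as np

def quantum_m(beta, B):
    # Gibbs state in the energy eigenbasis gives <m> = tanh(beta*B/2)/2
    return 0.5 * np.tanh(beta * B / 2)

def classical_m(beta, B, ntheta=20001):
    # Boltzmann-weighted mixture over pure states |psi(theta, phi)>;
    # the phi integral drops out since <sigma_z> is phi-independent
    theta = np.linspace(0, np.pi, ntheta)
    w = np.exp(beta * B * np.cos(theta) / 2) * np.sin(theta)
    mz = 0.5 * np.cos(theta)     # <psi|sigma_z|psi>/2 = cos(theta)/2
    # trapezoid rule; the uniform grid spacing cancels in the ratio
    num = np.sum((w * mz)[:-1] + (w * mz)[1:])
    den = np.sum(w[:-1] + w[1:])
    return num / den
```

For $\beta B = 2$ this gives $\tanh(1)/2 \approx 0.38$ versus $\coth(1)/2 - 1/2 \approx 0.16$: the two prescriptions really do disagree.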
My question is: Is there a simple physical principle to which I can point to determine the correct process (one which is more satisfactory than simply observing that one of these approaches works and the other obtains a different answer)?
Or is it a matter of accepting $\rho = \exp (-\beta H)/Z$ as the definition of thermal equilibrium? (and hence the von Neumann entropy as the correct entropy?)
|
Fractional calculus is a part of mathematics dealing with generalisations of the derivative to derivatives of arbitrary order (not necessarily an integer). The name "fractional calculus" is somewhat of a misnomer since the generalisations are by no means restricted to fractions, but the label persists for historical reasons.
The fractional derivative of a function to order $a$ is often defined implicitly by the Fourier transform. The fractional derivative at a point $x$ is a local property only when $a$ is an integer.
Applications of fractional calculus include partial differential equations, especially parabolic ones, where it is sometimes useful to split a time derivative into derivatives of fractional order.
There are many well-known fields of application where we can use fractional calculus. Just a few of them are:
Math-orientated
Chaos theory
Fractals
Control theory
Physics-orientated
Electricity
Mechanics
Heat conduction
Viscoelasticity
Hydrogeology
Nonlinear geophysics
History
(fill this in (it started about 300 years ago.))
Differintegrals
The combined differentiation/integration operator used in fractional calculus is called the differintegral, and it has a couple of different forms which are all equivalent (provided that they are initialized properly).
By far, the most common form is the Riemann–Liouville form:
<math>{}_a\mathbb{D}^q_tf(t)=\frac{1}{\Gamma(n-q)}\frac{d^n}{dt^n}\int_{a}^{t}(t-\tau)^{n-q-1}f(\tau)\,d\tau + \Psi(t)</math>
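Numerically, the differintegral is often approximated through the equivalent Grünwald–Letnikov limit; a minimal sketch (my implementation, with lower limit $a = 0$ and no complementary function):

```python
import math

def gl_fracderiv(f, t, q, h=1e-3):
    """Grunwald-Letnikov approximation of the order-q derivative at t:
    D^q f(t) ~ h**(-q) * sum_j (-1)**j * C(q, j) * f(t - j*h)."""
    N = int(t / h)
    total, c = 0.0, 1.0             # c tracks (-1)**j * C(q, j)
    for j in range(N + 1):
        total += c * f(t - j * h)
        c *= 1 - (q + 1) / (j + 1)  # binomial coefficient recurrence
    return total / h**q
```

For example, the half-derivative of $f(t) = t$ at $t = 1$ comes out close to the exact value $2\sqrt{t/\pi} \approx 1.128$.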
Forms of fractional calculus
Closely related topics
anomalous diffusion --
fractional brownian motion --
fractals and fractional calculus --
extraordinary differential equations --
partial fractional derivatives --
fractional reaction-diffusion equations --
fractional calculus in continuum mechanics
http://mathworld.wolfram.com/FractionalCalculus
http://www.diogenes.bg/fcaa/
http://www.nasatech.com/Briefs/Oct02/LEW17139
http://unr.edu/homepage/mcubed/FRG
"An Introduction to the Fractional Calculus and Fractional Differential Equations"
by Kenneth S. Miller, Bertram Ross (Editor)
Hardcover: 384 pages ; Dimensions (in inches): 1.00 x 9.75 x 6.50
Publisher: John Wiley & Sons; 1 edition (May 19, 1993)
ASIN: 0471588849
"The Fractional Calculus; Theory and Applications of Differentiation and Integration to Arbitrary Order (Mathematics in Science and Engineering, V)"
by Keith B. Oldham, Jerome Spanier
Hardcover
Publisher: Academic Press; (November 1974)
ASIN: 0125255500
"Fractals and Fractional Calculus in Continuum Mechanics"
by A. Carpinteri (Editor), F. Mainardi (Editor)
Paperback: 348 pages
Publisher: Springer-Verlag Telos; (January 1998)
ISBN: 321182913X
"Physics of Fractal Operators"
by Bruce J. West, Mauro Bologna, Paolo Grigolini
Hardcover: 368 pages
Publisher: Springer Verlag; (January 14, 2003)
ISBN: 0387955542
All Wikipedia text is available under the terms of the GNU Free Documentation License
|
The axiom of regularity basically says that a set must be disjoint from at least one element. I have heard this disproves self containing sets. I see how it could prevent $A=\{A\}$, but it would seem to do nothing about $B=\{B,\emptyset\}$. It is disjoint from $\emptyset$. What is a disproof of the existence of $B$, and how is it related to the axiom of regularity?
Let $A$ be any set. Then $\{A\}$ is a set, and by regularity $\{A\}$ must contain an element disjoint from $\{A\}$. The only element of $\{A\}$ is $A$, so $A\cap\{A\}=\varnothing$, and it follows immediately that $A\notin A$.
No. The axiom of regularity says this: every non-empty set contains an element disjoint from it. So suppose there were a set $B$ such that $B = \{ B, \emptyset \}$; then the set $\{ B \}$ contains no element disjoint from $\{ B \}$:$$B \cap \{ B \} = \{ B \} \ne \emptyset$$so there is no such set $B$.
The axiom of foundation or regularity alone can not show that there is no $x$ such that $x \in x$.
The axiom of regularity (also called the axiom of foundation) asserts that every non-empty set has an $\in$-minimal element. That is, for every non-empty $x$, there exists a $y \in x$ such that there is no $z \in x$ with $z \in y$.
The axiom of foundation is not exactly equivalent to the fact that there does not exist an $x$ such that $x \in x$. There is a two-element model of foundation (plus extensionality, union, and pairing) in which there exists an $x$ with $x \in x$. Let $M = \{x,y\}$ where $x$ and $y$ are two different objects. Define $\in^\mathcal{M} = \{(x,y), (y,y)\}$. You can verify that foundation holds in this model $\mathcal{M}$, but $y \in y$.
Note that the axiom of comprehension together with foundation can show that there is no $x$ such that $x \in x$.
Together with the axiom schema of separation:
Assume that $A\in A$, then there's a set $B = \{x\in A: x=A\}$ which is non-empty since $A\in B$. Now for any $x\in B$ it's true that $x=A$, so $\forall x\in B: x\cap B = A\cap B$, but since $A\in A$ and $A \in B$ it would be that $A\cap B \ne \emptyset$. But that's equivalent to $\neg\exists x\in B: x\cap B = \emptyset$ which contradicts the axiom of regularity.
Therefore we can conclude $A\notin A$.
|
Here are two recent review articles on the topic: one on dark matter itself, and one on the broader subject of the cosmological parameters.
The short story is that we have good evidence that the Universe is flat. The main evidence for flatness is anthropic: we are still here to observe the universe some ~14 Gyr after the Big Bang. A "closed" universe, with positive curvature, will eventually stop expanding and collapse back in on itself, while an "open" universe, with negative curvature, will expand eternally. Usually the curvature of the universe is related to its energy density $Ω$, with units chosen so that $Ω=1$ is the "critical density" for a flat universe. The thing is that flatness is an unstable equilibrium, like a pencil balanced on its tip: any deviation from flatness will be magnified as time goes on. The simple fact that the universe has not already collapsed and does not already appear empty means that it's
approximately flat; run the clock backwards and you find that during the era of nucleosynthesis, in the first minutes after the Big Bang, you had to have something like $|Ω-1| \lesssim 10^{-55}$. So it stands to reason that some symmetry which we don't understand makes the Universe exactly flat, and it started off that way and is still that way. We know the total energy density of the Universe: it's $Ω=1$.
(This argument gets a little more complicated if you allow for the fact that "dark energy" is actually causing the expansion of the Universe to accelerate over time. However dark energy was less important than matter until about 5 Gyr ago, and the Universe appeared to be flat then, too. There are other arguments; see the review articles at the top if you want more.)
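As a back-of-the-envelope illustration of the "pencil on its tip" instability (my own toy scaling, not a full Friedmann integration): during matter domination the deviation $|\Omega-1|$ grows roughly in proportion to the scale factor, so a tiny early deviation is enormously amplified.

```python
# Toy scaling of the flatness instability (illustrative numbers only).
# During matter domination, (Omega - 1) grows proportionally to the
# scale factor a; during radiation domination it grows even faster,
# like a**2.
def omega_deviation(early_deviation, scale_factor_growth):
    return early_deviation * scale_factor_growth

# A part-in-10^10 deviation, after the scale factor grows a million-fold,
# becomes a part-in-10^4 deviation:
late = omega_deviation(1e-10, 1e6)
```

Run the clock forward far enough and any nonzero early deviation would have long since dominated, which is the heart of the flatness argument above.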
Now we also know the average density of baryons in the Universe. This is because in the first few minutes after the Big Bang, all of the excited, unstable baryons decayed into protons and neutrons, and the protons and neutrons formed into a fairly small number of isotopes: about 90% hydrogen and 10% helium (by number; 25% helium by mass) with part-per-million traces of deuterium and helium-3 and part-per-billion traces of lithium-7. It turns out that the fractions of those isotopes you get from Big Bang nucleosynthesis depends pretty much exclusively on the ratio of baryons to photons, and to get the concentrations that we see in our Universe you had to have about 0.6 baryons per billion photons. Those are
the same population of photons that we still see in the coldest darkest parts of the sky as the cosmic microwave background, stretched to radio frequencies by the Hubble expansion. We can measure how bright this CMB is, and that tells us its energy density; that tells us how many protons and neutrons to expect, on average, per unit volume of space. It turns out to be $Ω_\text{baryons}\approx \frac1{20}$, which is pretty poor agreement with $Ω_\text{total}=1$ that we decided had to be the case since we are not all dead.
The third bit of evidence comes from the supernova-based measurements of the Hubble constant. That lets us measure what the rate of expansion used to be in the past. It turns out, as I mentioned already, that the rate of expansion is actually
faster now than it was in the past. This is something that's not allowed in a model of the universe that has only matter, radiation, and gravity: matter and radiation are always attracted to each other, and so even in an open universe you would expect the cosmic expansion to slow down as time moved on. The best fit to the data has $Ω_\text{matter}\approx \frac13$, a pretty poor fit to both our $Ω_\text{total}=1$ and also to our $Ω_\text{baryons}\approx \frac1{20}$.
These observations only show convincingly that there's a problem with the perfectly sensible hypothesis that the Universe is mostly made of matter and radiation in empty space. The fourth and strongest line of evidence comes from the small anisotropies in the cosmic microwave background. The CMB isn't
exactly the same temperature everywhere you look: some places it's a few hundred microkelvin warmer than the average, and some places a few hundred microkelvin cooler. (It's still quite uniform: the variations don't start until the fourth decimal place of the 2.73 K temperature.) It turns out that there's an enormous wealth of information in the shape of those anisotropies. The fact that the biggest variations tend to be about 1º across tells us about sound waves in the baryon-photon fluid, which is another probe for flatness. The more fine-grained fluctuations, the 0.5º "details" on the 1º "features", have variations only about half as big as the largest features; this ratio of amplitudes is an independent measure of the baryon density. The upshot is a robust and well-accepted claim that the energy density of the Universe today is approximately\begin{align}\Omega_\text{radiation} &= \text{very small} \\\Omega_\text{baryons} &= 0.05 \\\Omega_\text{other matter} &= 0.25 \\\Omega_\text{wtf} &= 0.70 \\\hline \Omega_\text{total} &= 1.00\end{align}
Note that none of this evidence gives any hint of what the dark matter and dark energy
are, just that they must be present for us to describe cosmic evolution. (There's actually lots of evidence of what it's not. For instance, the dark matter can't be relativistic neutrinos, and it also can't be massive, nonrelativistic neutrinos, for reasons which I think are buried in some of my other links.) The astronomical evidence that galaxies contain "much more" dark matter than visible matter came first, but the cosmological arguments for non-baryonic dark matter seem to be more quantitative at present.
|
Let $(x_n)$ and $(y_n)$ be sequences such that $\lim y_n = 0$. Suppose that for all $k \in \Bbb N$ and all $m ≥ k$ we have $|x_m − x_k| ≤ y_k$. Show that $(x_n)$ is Cauchy.
I need a little guidance on how to approach the problem. As I see this is the same definition of Cauchy sequences. But I do not see how to connect everything in a logic sequence in order to have a rigorous proof.
My attempt at reasoning
I started first defining the $\lim$ of $y_n$.
For every $\varepsilon>0$ there exists $N$ such that $|y_n|<\varepsilon$ for all $n>N$. Then I see that the terms of $y_n$ get smaller and smaller as $n$ gets larger. So the distance between $x_m$ and $x_k$ gets smaller as the indices get bigger. But one thing that puts me off is that $m ≥ k$ and $| x_m − x_k| ≤ y_k$: why is it $\leq$?
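To convince myself, I spot-checked the hypothesis numerically with a pair of sequences of my own choosing (they are not given in the problem; the alternating sums and the bound $2^{1-k}$ are just an illustrative guess):

```python
# Hypothetical example sequences: x_n = sum_{i=1}^n (-1)^i / 2^i and
# y_k = 2**(1 - k).  Then y_k -> 0, and |x_m - x_k| <= y_k for all
# m >= k, since the geometric tail is bounded by 2**(-k) < 2**(1 - k).
def x(n):
    return sum((-1) ** i / 2 ** i for i in range(1, n + 1))

def y(k):
    return 2.0 ** (1 - k)

# spot-check the hypothesis on a finite range of indices
hypothesis_ok = all(abs(x(m) - x(k)) <= y(k)
                    for k in range(1, 20)
                    for m in range(k, 40))
```

If this is on the right track, I suppose the missing step is the triangle inequality: for $m, n \ge k$, $|x_m - x_n| \le |x_m - x_k| + |x_k - x_n| \le 2y_k$.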
Thanks for help in advance.
|
What lattices are isomorphic to $\mathbb{R}^{N}$ for some $N\in \mathbb{N}$, equipped with the canonical order?
Remark:
When I say $\mathbb{R}^N$, I don’t mean it to be a vector space. Instead, I refer to the Cartesian product of $N$ totally ordered sets, $\{(A_i, \geq_i)\}_{i=1}^{N}$, each of which is isomorphic to $\mathbb{R}$ equipped with its canonical order. The canonical order on the Cartesian product $A_1 \times A_2 \times \cdots \times A_N$ is therefore defined as follows:
For $x=(x_1,x_2,...,x_N)$ and $ y=(y_1,...,y_N)$ both in $A_1 \times A_2 \times...A_N$:
$x \succeq y$ if for all $i$, $x_i \geq_i y_i$.
Also, $x\succ y$ if $x\succeq y$ and there is at least one $i$ such that $x_i >_i y_i$.
For example, we know that the lattice must be distributive, since $(A_1 \times A_2\times \cdots \times A_N, \succeq)$ is a distributive lattice. Also, we know that the partial order must be dense. Moreover, the lattice must be unbounded, Dedekind complete, and separable in its order topology. But I'm looking for the simplest necessary and sufficient conditions. Thanks!
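For concreteness, the meet and join in this product order are componentwise min and max; here is a quick numeric spot-check of the distributivity claim (my own sanity check, not a proof):

```python
# Spot-check (not a proof) that the componentwise order on R^3 gives a
# distributive lattice: meet and join are componentwise min and max.
import random

def meet(a, b):
    return tuple(min(p, q) for p, q in zip(a, b))

def join(a, b):
    return tuple(max(p, q) for p, q in zip(a, b))

random.seed(0)
pts = [tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(20)]
distributive = all(
    meet(a, join(b, c)) == join(meet(a, b), meet(a, c))
    for a in pts for b in pts for c in pts
)
```

Every sampled triple satisfies $x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z)$, as expected, since min and max distribute over each other coordinate by coordinate.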
|
I have trouble understanding how (and whether) Bose-Einstein condensation works in a 1-D harmonic oscillator. From my calculation it seems that in the limit of an infinite number of particles, almost all of them are in the ground state regardless of temperature.
I have found quite a few interesting articles about this system, both from the mathematical and the physical perspective (in the latter case people are even realizing it by constructing cigar-shaped optical traps), but nowhere have I found an answer that satisfies me.
Here is my reasoning:
In the canonical ensemble (i.e. the number of particles is fixed), the thermal state is $\rho=\frac{\exp(-H/T)}{\text{Tr}[\exp(-H/T)]}$, so that an eigenstate with energy $E$ has population proportional to $\exp(-E/T)$. The fact that the underlying system is of bosonic nature should not - in my opinion, I might be wrong here - affect the form (as a matrix exponent/Gibbs state) of the thermal state in any way.
The Hamiltonian of the harmonic oscillator is $H=\sum\limits_{k=0}^{\infty} k a_k^\dagger a_k $, where $a_k$ acts on the $k$-th excitation subspace, and naturally $H$ can be viewed as a sum of $(\text{energy of excitation})\times(\text{number of excitations})$. I have set the energy spacing $\hbar\omega$ to $1$, and shifted the ground state to $0$ for convenience. The eigenstates of the Hamiltonian are simple Fock states, e.g. $H|0,2,1,0,3,7,\ldots\rangle=(2\times 1+1\times 2+3\times4+7\times5)|0,2,1,0,3,7,\ldots\rangle=51|0,2,1,0,3,7,\ldots\rangle$.
From now on, the number of particles is fixed and denoted by $N$.
In the thermal state, the population of some energy level $E$ is its degeneracy times $\exp(-E/T)$ (times a normalization constant). In the case of the harmonic oscillator (with $\hbar\omega=1$ and null ground state energy), in order to determine the degeneracy of each energy a bijection can be drawn between Fock states of given energy $E$ and integer partitions of $E$: each integer partition with length at most $N$, e.g. $$51=1+1+2+4+4+4+5+5+5+5+5+5+5$$ corresponds to a Fock state: $k_i$ repetitions of the integer $i$ correspond to $k_i$ quanta of the $i$-th excitation. The partition above can thus be interpreted as the Fock state $|0,2,1,0,3,7,\ldots\rangle$. If the length of the integer partition is not equal to the number of particles, we just populate the ground state - or add zeroes to the partition. Therefore, the number of integer partitions of $E$ with length at most $N$ is the degeneracy of energy $E$ in the system of $N$ bosons in a harmonic trap. Also, the maximum length of all integer partitions of $E$ is of course $E$, as $E=1+1+\ldots+1$.
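Here is the counting I have in mind, sketched in code (my own sanity check): the number of partitions of $E$ into at most $N$ parts via the standard recurrence, plus the energy of the Fock state from the example.

```python
# The number of partitions of E into at most k parts equals, by
# conjugation, the number of partitions with largest part at most k,
# which satisfies p(E, k) = p(E, k-1) + p(E - k, k).
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions_at_most(E, k):
    """Partitions of E into at most k parts (equivalently, parts <= k)."""
    if E == 0:
        return 1
    if E < 0 or k == 0:
        return 0
    return partitions_at_most(E, k - 1) + partitions_at_most(E - k, k)

# energy of the Fock state |0,2,1,0,3,7,0,...> used above
occupations = [0, 2, 1, 0, 3, 7]
energy = sum(level * n for level, n in enumerate(occupations))
```

Then `partitions_at_most(E, N)` is the degeneracy of energy $E$ for $N$ trapped bosons; for $E \le N$ it reduces to the unrestricted partition number $p(E)$.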
Let us set the number of particles $N=10^6$ and temperature $T=60$ (which are, if I remember correctly, about the right parameters for real-life optical traps). A quick calculation shows that the most populated state is around energy $E=2000$, but all states in this energy range have almost all particles in the ground state (e.g. if 1% of particles are not in the ground state, the energy is at least $0.01\times 10^6=10^4$).
In the limit of infinite number of particles, this happens
regardless of temperature: almost all (i.e. except for finitely many) particles in the most probable states are in the ground state.
Is this result correct? I am confused, since Bose gas in a box has completely different behavior: depending on the dimensionality, there exists a well-defined condensation temperature ($d\ge 3$) or it is impossible ($d\le 2$).
|
So, I would love to make at least some use of my preexisting data, no matter how small, and I am just out of ideas. Maybe I am just a prisoner of a Kahneman-like theatre-ticket paradox, and don't know whether I should accept the losses and move on.
Consider a system of linear equations (in a very simplified form): $$ \begin{equation} \underbrace{(A_1+A_2)}_{M}x=b \label{eq1} \tag{1} \end{equation} $$ Here, $A_{1,2},M\in \mathbb C^{n\times n}$ are dense matrices, and $x,b\in \mathbb C^n$. All three, $A_1$, $A_2$, and $M=A_1+A_2$, are nicely invertible.
We
already have LU decompositions of $A_{1,2}$:$$A_1=L_1U_1\quad A_2=L_2U_2\label{eq2}\tag{2}$$
It is well known that computing the LU decomposition of $M$ cannot really benefit from those precomputed LU's $\eqref{eq2}$, as it is certainly not even close to a low-rank update. It's a full-blown full-rank update without any particularly nice structure to it. So, I do not have any hope of arriving at an LU decomposition of $M$ using $L_{1,2},U_{1,2}$.
Note, $n$ is large and none of the matrices $A_{1,2},L_{1,2},U_{1,2}$ are stored directly in a dense format. That does not really matter for the purpose of this question other than re-constructing $M$ from scratch might be more efficient than computing it via $A_1+A_2$.
Natural solutions with obvious downsides:
Compute $M$ (in whatever way you want), perform its LU decomposition, and solve directly.
Use an iterative method to solve, and perform matrix-vector products with $A_1$ and $A_2$ separately without ever constructing $M$.
Now, I wonder if there is
something I can do with the already computed $\{L_1, U_1\}$ and $\{L_2, U_2\}$ instead of just throwing them straight into the garbage bin. For example, can I use one or both of them in a preconditioner in some way, or find some unconventional use inside the iterative method itself? I would be happy with any possible usage of the factorizations that I already have.
I tried to use $\{L_1, U_1\}$ and $\{L_2, U_2\}$
separately as left preconditioners for GMRES; however, they both performed significantly worse (as expected) compared to a much simpler preconditioner (based on $M$) I usually use. The number of iterations is quite high, so there is a lot of room for preconditioner improvement.
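For reference, here is roughly how I wired one precomputed factorization into GMRES (a toy sketch with small random matrices standing in for my actual operators; note that in this sketch $A_2$ is deliberately small relative to $A_1$, so $A_1$ alone is a decent preconditioner, which is not the regime of my real problem):

```python
# Toy sketch: use stored LU factors of A1 as a preconditioner for GMRES
# on (A1 + A2) x = b.  The matrices are random stand-ins.
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n = 100
A1 = np.eye(n) + 0.05 * rng.standard_normal((n, n))
A2 = 0.01 * rng.standard_normal((n, n))   # small perturbation of A1
M_mat = A1 + A2
b = rng.standard_normal(n)

lu1 = lu_factor(A1)  # stands in for the precomputed {L1, U1}
prec = LinearOperator((n, n), matvec=lambda w: lu_solve(lu1, w))

x, info = gmres(M_mat, b, M=prec)  # info == 0 signals convergence
residual = np.linalg.norm(M_mat @ x - b) / np.linalg.norm(b)
```

When $A_2$ is comparable to $A_1$ in size, as in my case, the preconditioned operator is far from the identity and the iteration count suffers accordingly.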
Any other ideas regarding possible re-usage are certainly welcome. Even if it does not lead directly to the solution of the system $\eqref{eq1}$, anything that can reveal some information about $M$ and its properties cheaply would help.
|
Simple linear regression model
$$ y_i = \alpha + \beta x_i + \varepsilon_i $$
can be written in terms of probabilistic model behind it
$$\mu_i = \alpha + \beta x_i \\y_i \sim \mathcal{N}(\mu_i, \sigma)$$
i.e. the dependent variable $Y$ follows a normal distribution parametrized by mean $\mu_i$, which is a linear function of $X$ parametrized by $\alpha,\beta$, and by standard deviation $\sigma$. If you estimate such a model using ordinary least squares, you do not have to bother about the probabilistic formulation, because you are searching for the optimal values of the $\alpha,\beta$ parameters by minimizing the squared errors between the fitted and observed values. On the other hand, you could estimate such a model using maximum likelihood estimation, where you would be looking for the optimal values of the parameters by maximizing the likelihood function
$$ \DeclareMathOperator*{\argmax}{arg\,max} \argmax_{\alpha,\,\beta,\,\sigma} \prod_{i=1}^n \mathcal{N}(y_i; \alpha + \beta x_i, \sigma) $$
where $\mathcal{N}$ is a density function of normal distribution evaluated at $y_i$ points, parametrized by means $\alpha + \beta x_i$ and standard deviation $\sigma$.
In the Bayesian approach, instead of maximizing the likelihood function alone, we would assume
prior distributions for the parameters and use Bayes theorem
$$ \text{posterior} \propto \text{likelihood} \times \text{prior} $$
The likelihood function is the same as above, but what changes is that you assume some
prior distributions for the estimated parameters $\alpha,\beta,\sigma$ and include them into the equation
$$ \underbrace{f(\alpha,\beta,\sigma\mid Y,X)}_{\text{posterior}} \propto \underbrace{\prod_{i=1}^n \mathcal{N}(y_i\mid \alpha + \beta x_i, \sigma)}_{\text{likelihood}} \; \underbrace{f_{\alpha}(\alpha) \, f_{\beta}(\beta) \, f_{\sigma}(\sigma)}_{\text{priors}} $$
"What distributions?" is a different question, since there is an unlimited number of choices. For the $\alpha,\beta$ parameters you could, for example, assume normal distributions parametrized by some hyperparameters, or a $t$-distribution if you want to assume heavier tails, or a uniform distribution if you do not want to make many assumptions but want to assume that the parameters can be a priori "anything in the given range", etc. For $\sigma$ you need to assume some prior distribution that is bounded to be greater than zero, since the standard deviation needs to be positive. This may lead to the model formulation as illustrated below by John K. Kruschke.
(source: http://www.indiana.edu/~kruschke/BMLR/)
While in maximum likelihood you were looking for a single optimal value for each of the parameters, in Bayesian approach by applying Bayes theorem you obtain the
posterior distribution of the parameters. The final estimate will depend on the information that comes from your data and from your priors, but the more information is contained in your data, the less influential are priors.
Notice that when using uniform priors, they take the form $f(\theta) \propto 1$ after dropping the normalizing constants. This makes Bayes theorem proportional to the likelihood function alone, so the posterior distribution will reach its maximum at exactly the same point as the maximum likelihood estimate. It follows that the estimate under uniform priors will be the same as the one obtained by ordinary least squares, since minimizing the squared errors corresponds to maximizing the normal likelihood.
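This equivalence is easy to check numerically. The sketch below (with simulated data of my own, not from any real study) fits the same simple regression by least squares and by direct maximization of the normal log-likelihood:

```python
# Numeric check: the least-squares fit coincides with the
# maximum-likelihood estimate under normal errors.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, size=x.size)

# OLS via least squares on the design matrix [1, x]
X = np.column_stack([np.ones_like(x), x])
ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# MLE by minimizing the negative normal log-likelihood in
# (alpha, beta, log sigma); the constant 0.5*log(2*pi) terms are dropped
def neg_log_lik(theta):
    a, b, log_s = theta
    s = np.exp(log_s)
    r = y - (a + b * x)
    return 0.5 * np.sum(r ** 2) / s ** 2 + y.size * log_s

mle = minimize(neg_log_lik, x0=np.zeros(3)).x
```

The two intercept/slope estimates agree to numerical precision, which is the content of the claim above.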
To estimate a model in Bayesian approach in some cases you can use
conjugate priors, so the posterior distribution is directly available (see example here). However in vast majority of cases posterior distribution will not be directly available and you will have to use Markov Chain Monte Carlo methods for estimating the model (check this example of using Metropolis-Hastings algorithm to estimate parameters of linear regression). Finally, if you are only interested in point estimates of parameters, you could use maximum a posteriori estimation, i.e.
$$ \argmax_{\alpha,\,\beta,\,\sigma} f(\alpha,\beta,\sigma\mid Y,X) $$
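For intuition, a bare-bones random-walk Metropolis sampler for this posterior can be sketched as follows (toy data and tuning constants are my own; flat priors on $\alpha,\beta$ and a flat prior on $\sigma>0$ are assumed):

```python
# Minimal random-walk Metropolis sampler for the regression posterior.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 50)
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, size=x.size)

def log_posterior(alpha, beta, sigma):
    if sigma <= 0:
        return -np.inf                      # prior: sigma must be positive
    r = y - (alpha + beta * x)
    return -0.5 * np.sum(r ** 2) / sigma ** 2 - y.size * np.log(sigma)

theta = np.array([0.0, 0.0, 1.0])
lp = log_posterior(*theta)
chain = []
for _ in range(20000):
    proposal = theta + rng.normal(0.0, 0.05, size=3)   # random-walk step
    lp_prop = log_posterior(*proposal)
    if np.log(rng.uniform()) < lp_prop - lp:           # accept/reject
        theta, lp = proposal, lp_prop
    chain.append(theta.copy())
posterior = np.array(chain[5000:])                     # drop burn-in
```

`posterior.mean(axis=0)` then approximates the posterior means, which with flat priors sit near the least-squares fit, as discussed above.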
For a more detailed description of logistic regression you can check the Bayesian logit model - intuitive explanation? thread.
For learning more you could check the following books:
Kruschke, J. (2014). Doing Bayesian Data Analysis: A Tutorial with R,
JAGS, and Stan. Academic Press.
Gelman, A., Carlin, J. B., Stern, H. S., and Rubin, D. B. (2004).
Bayesian data analysis. Chapman & Hall/CRC.
|
For our first two papers, we essentially reused the same few examples as model problems to test our method with (sine-Gordon and Brusselator). For our next paper, my advisor wanted something different and pointed towards the Gray-Scott equations. It’s a simple reaction-diffusion system, as follows:
\begin{align*} \frac{\partial u}{\partial t} &= d_u\Delta u - uv^2 + F(1 - u) \\ \frac{\partial v}{\partial t} &= d_v\Delta v + uv^2 - (F+k)v \end{align*} where $F, k, d_u, d_v$ are constants.
There’s a short paper (“Complex Patterns in a Simple System,” by John E. Pearson) where he plots solutions for different values of $F, k$. The problem for me in replicating that paper is that Pearson employed a periodic boundary condition, which is easy to implement for finite difference and spectral methods, but a bit awkward for finite element methods (if you’re not using a very specific mesh).
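(For comparison, the periodic route really is trivial with finite differences. Here is a minimal sketch; the parameter values are illustrative picks in Pearson's general range, not the ones from our runs.)

```python
# Explicit finite-difference Gray-Scott step on a periodic grid;
# np.roll implements the wrap-around.  Parameters are illustrative.
import numpy as np

def laplacian(z):
    # 5-point stencil with periodic boundary conditions, unit spacing
    return (np.roll(z, 1, axis=0) + np.roll(z, -1, axis=0) +
            np.roll(z, 1, axis=1) + np.roll(z, -1, axis=1) - 4.0 * z)

def step(u, v, du=0.16, dv=0.08, F=0.035, k=0.065, dt=1.0):
    uvv = u * v * v
    u_new = u + dt * (du * laplacian(u) - uvv + F * (1.0 - u))
    v_new = v + dt * (dv * laplacian(v) + uvv - (F + k) * v)
    return u_new, v_new

# seed a small square perturbation in an otherwise trivial state
u = np.ones((64, 64))
v = np.zeros((64, 64))
u[28:36, 28:36], v[28:36, 28:36] = 0.50, 0.25
for _ in range(200):
    u, v = step(u, v)
```

Run long enough, the perturbation spreads into the kinds of patterns Pearson catalogued, with the pattern type controlled by the $(F, k)$ pair.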
The solution is quite nifty. Instead of a 2D plane, we simply wrap the domain onto a torus. It turns out FEM code on a surface is almost the same as on a plane, unless one uses curvilinear elements, which becomes a hassle. Furthermore, visualization gives pretty cool results… take a look at the two simulations below (their parameters differ just ever so slightly, but end up giving drastically different patterns, though both reach a steady state):
|
Klaus Warzecha's answer pretty much answers your question. But I know that this subject is easier to understand if supported by some pictures. That's why I will take the same route as Klaus at explaining the concept behind why the absorption in conjugated systems is shifted to higher wavelengths but I will provide some pictures on the way.
In a conjugated carbon chain or ring system you can think of the $\ce{C}$ atoms as $\text{sp}^{2}$-hybridized. So, each carbon has 3 $\text{sp}^{2}$ orbitals which it uses to form $\sigma$ bonds and 1 $\text{p}$ orbital which is used to form $\pi$ bonds. It is the $\text{p}$ orbitals that are responsible for the conjugation, and their combinations according to the LCAO model are the interesting part, since the HOMO and LUMO of the system will be among the molecular orbitals formed from the conjugated $\text{p}$ orbitals.
For a start take ethene, the simplest $\pi$-system, being comprised of only 2 carbon atoms. When you combine two atomic orbitals you get two molecular orbitals. These result from combining the $\text{p}$ orbitals either in-phase or out-of-phase. The in-phase combination is lower in energy than the original $\text{p}$ orbitals and the out-of-phase combination is higher in energy than the original $\text{p}$ orbitals. The in-phase combination accounts for the bonding molecular orbital ($\pi$), whilst the out-of-phase combination accounts for the antibonding molecular orbital ($\pi^{*}$).
Now, what happens when you lengthen the conjugated system by combining two ethene fragments? You get to butadiene. Butadiene has two $\pi$ bonds and so four electrons in the $\pi$ system. Which molecular orbitals are these electrons in? Since each molecular orbital can hold two electrons, only the two molecular orbitals lowest in energy are filled. Let's have a closer look at these orbitals. In $\Psi_1$, the lowest-energy bonding orbital, the electrons are spread out over all four carbon atoms (above and below the plane) in one continuous orbital. There is bonding between all the atoms. The other two electrons are in $\Psi_2$. This orbital has bonding interactions between carbon atoms 1 and 2, and also between 3 and 4, but an antibonding interaction between carbons 2 and 3. Overall, in both the occupied $\pi$ orbitals there are electrons between carbons 1 and 2 and between 3 and 4, but the antibonding interaction between carbons 2 and 3 in $\Psi_2$ partially cancels out the bonding interaction in $\Psi_1$. This explains why all the bonds in butadiene are not the same and why the middle bond is more like a single bond while the end bonds are double bonds. If we look closely at the coefficients on each atom in orbitals $\Psi_1$ and $\Psi_2$, it can be seen that the bonding interaction between the central carbon atoms in $\Psi_1$ is greater than the antibonding one in $\Psi_2$. Thus butadiene does have some double bond character between carbons 2 and 3, which explains why there is the slight barrier to rotation about this bond.
You can construct the molecular orbitals of butadiene by combining the molecular orbitals of the two ethene fragments in-phase and out-of-phase.
This method of construction also shows why the HOMO-LUMO gap of butadiene is smaller than that of ethene. The molecular orbital $\Psi_2$, which is the HOMO of butadiene, is the out-of-phase combination of two ethene $\pi$ orbitals, which are the HOMO of ethene. Thus, the HOMO of butadiene is higher in energy than the HOMO of ethene. Furthermore, the molecular orbital $\Psi_3$, which is the LUMO of butadiene, is the in-phase combination of two ethene $\pi^{*}$ orbitals, which are the LUMO of ethene. Thus, the LUMO of butadiene is lower in energy than the LUMO of ethene. It follows that the HOMO-LUMO energy gap is smaller in butadiene than in ethene, and thus butadiene absorbs light with longer wavelengths than ethene.
If you continue to lengthen the $\pi$ system by adding more ethene fragments you will see that the HOMO and LUMO are getting closer and closer together the longer the $\pi$ system becomes.
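This trend can be spot-checked with the standard Hückel model (a textbook approximation, not something derived above): for a linear polyene with $n$ conjugated p orbitals the orbital energies are $\alpha + 2\beta\cos\frac{k\pi}{n+1}$ with $\beta < 0$, and the HOMO-LUMO gap shrinks as $n$ grows.

```python
# Hückel-model HOMO-LUMO gaps for linear polyenes (alpha set to 0,
# energies in units of |beta|).
import numpy as np

def homo_lumo_gap(n, beta=-1.0):
    k = np.arange(1, n + 1)
    energies = np.sort(2.0 * beta * np.cos(np.pi * k / (n + 1)))
    # n pi electrons fill the lowest n/2 orbitals (n even)
    return energies[n // 2] - energies[n // 2 - 1]

gaps = {n: homo_lumo_gap(n) for n in (2, 4, 6, 8)}
```

For ethene ($n=2$) the gap is $2|\beta|$; for butadiene ($n=4$) it is already smaller, and it keeps shrinking as more ethene units are added, matching the qualitative MO picture above.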
|
Learning Objectives
Set up a linear equation to solve a real-world application.
Use a formula to solve a real-world application.
Josh is hoping to get an \(A\) in his college algebra class. He has scores of \(75\), \(82\), \(95\), \(91\), and \(94\) on his first five tests. Only the final exam remains, and the maximum number of points that can be earned is \(100\). Is it possible for Josh to end the course with an \(A\)? A simple linear equation will give Josh his answer.
Many real-world applications can be modeled by linear equations. For example, a cell phone package may include a monthly service fee plus a charge per minute of talk-time; it costs a widget manufacturer a certain amount to produce
x widgets per month plus monthly operating charges; a car rental company charges a daily fee plus an amount per mile driven. These are examples of applications we come across every day that are modeled by linear equations. In this section, we will set up and use linear equations to solve such problems.

Setting up a Linear Equation to Solve a Real-World Application
To set up or model a linear equation to fit a real-world application, we must first determine the known quantities and define the unknown quantity as a variable. Then, we begin to interpret the words as mathematical expressions using mathematical symbols. Let us use the car rental example above. In this case, a known cost, such as \($0.10/mi\), is multiplied by an unknown quantity, the number of miles driven. Therefore, we can write \(0.10x\). This expression represents a variable cost because it changes according to the number of miles driven.
If a quantity is independent of a variable, we usually just add or subtract it, according to the problem. As these amounts do not change, we call them fixed costs. Consider a car rental agency that charges \($0.10/mi\) plus a daily fee of \($50\). We can use these quantities to model an equation that can be used to find the daily car rental cost \(C\) .
\(C=0.10x+50 \tag{2.4.1}\)
When dealing with real-world applications, there are certain expressions that we can translate directly into math. Table \(\PageIndex{1}\) lists some common verbal expressions and their equivalent mathematical expressions.
Verbal | Translation to Math Operations
One number exceeds another by \(a\) | \(x,\; x+a\)
Twice a number | \(2x\)
One number is \(a\) more than another number | \(x,\; x+a\)
One number is \(a\) less than twice another number | \(x,\; 2x−a\)
The product of a number and \(a\), decreased by \(b\) | \(ax−b\)
The quotient of a number and the number plus \(a\) is three times the number | \(\dfrac{x}{x+a}=3x\)
The product of three times a number and the number decreased by \(b\) is \(c\) | \(3x(x−b)=c\)
How to: Given a real-world problem, model a linear equation to fit it
Identify known quantities.
Assign a variable to represent the unknown quantity. If there is more than one unknown quantity, find a way to write the second unknown in terms of the first.
Write an equation interpreting the words as mathematical operations.
Solve the equation. Be sure the solution can be explained in words, including the units of measure.
Find a linear equation to solve for the following unknown quantities: One number exceeds another number by \( 17\) and their sum is \( 31\). Find the two numbers.
Solution
Let \( x\) equal the first number. Then, as the second number exceeds the first by \(17\), we can write the second number as \( x +17\). The sum of the two numbers is \(31\). We usually interpret the word is as an equal sign.
\[\begin{align*} x+(x+17)&= 31\\ 2x+17&= 31\\ 2x&= 14\\ x&= 7 \end{align*}\]
\[\begin{align*} x+17&= 7 + 17\\ &= 24\\ \end{align*}\]
The two numbers are \(7\) and \(24\) .
Exercise \(\PageIndex{1}\)
Find a linear equation to solve for the following unknown quantities: One number is three more than twice another number. If the sum of the two numbers is \(36\), find the numbers.
Answer
\(11\) and \(25\)
There are two cell phone companies that offer different packages. Company A charges a monthly service fee of \($34\) plus \($.05/min\) talk-time. Company B charges a monthly service fee of \($40\) plus \($.04/min\) talk-time.
Write a linear equation that models the packages offered by both companies.
If the average number of minutes used each month is \(1,160\), which company offers the better plan?
If the average number of minutes used each month is \(420\), which company offers the better plan?
How many minutes of talk-time would yield equal monthly statements from both companies?

Solution
a.
The model for Company A can be written as \( A =0.05x+34\). This includes the variable cost of \( 0.05x\) plus the monthly service charge of \($34\). Company B’s package charges a higher monthly fee of \($40\), but a lower variable cost of \( 0.04x\). Company B’s model can be written as \( B =0.04x+40\).
b.
If the average number of minutes used each month is \(1,160\), we have the following:
\[\begin{align*} \text{Company A}&= 0.05(1,160)+34\\ &= 58+34\\ &= 92 \end{align*}\]
\[\begin{align*} \text{Company B}&= 0.04(1,160)+40\\ &= 46.4+40\\ &= 86.4 \end{align*}\]
So, Company B offers the lower monthly cost of \($86.40\) as compared with the \($92\) monthly cost offered by Company A when the average number of minutes used each month is \(1,160\).
c.
If the average number of minutes used each month is \(420\), we have the following:
\[\begin{align*} \text{Company A}&= 0.05(420)+34\\ &= 21+34\\ &= 55 \end{align*}\]
\[\begin{align*} \text{Company B}&= 0.04(420)+40\\ &= 16.8+40\\ &= 56.8 \end{align*}\]
If the average number of minutes used each month is \(420\), then Company A offers a lower monthly cost of \($55\) compared to Company B’s monthly cost of \($56.80\).
d.
To answer the question of how many talk-time minutes would yield the same bill from both companies, we should think about the problem in terms of \((x,y)\) coordinates: At what point are both the \(x\)-value and the \(y\)-value equal? We can find this point by setting the equations equal to each other and solving for \(x\).\[\begin{align*} 0.05x+34&= 0.04x+40\\ 0.01x&= 6\\ x&= 600 \end{align*}\]
Check the \(x\)-value in each equation.
\(0.05(600)+34=64\)
\(0.04(600)+40=64\)
Therefore, a monthly average of \(600\) talk-time minutes renders the plans equal. See Figure \(\PageIndex{2}\).
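As an optional aside, the comparison above can be verified with a short computation (the snippet is illustrative, not part of the standard solution method):

```python
# Monthly cost models for the two cell phone companies.
def cost_a(minutes):
    return 0.05 * minutes + 34  # $34 fee plus $0.05 per minute

def cost_b(minutes):
    return 0.04 * minutes + 40  # $40 fee plus $0.04 per minute
```

Evaluating both models at 600 minutes gives $64 for each plan, while 1,160 minutes favors Company B and 420 minutes favors Company A, matching parts b through d above.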
Exercise \(\PageIndex{2}\)
Find a linear equation to model this real-world application: It costs ABC electronics company \($2.50\) per unit to produce a part used in a popular brand of desktop computers. The company has monthly operating expenses of \($350\) for utilities and \($3,300\) for salaries. What are the company’s monthly expenses?
Answer
\(C=2.5x+3,650\)
Using a Formula to Solve a Real-World Application
Many applications are solved using known formulas. The problem is stated, a formula is identified, the known quantities are substituted into the formula, the equation is solved for the unknown, and the problem’s question is answered. Typically, these problems involve two equations representing two trips, two investments, two areas, and so on. Examples of formulas include the
area of a rectangular region,
\[A=LW \tag{2.4.2}\]
the
perimeter of a rectangle,
\[P=2L+2W \tag{2.4.3}\]
and the
volume of a rectangular solid,
\[V=LWH. \tag{2.4.4}\]
When there are two unknowns, we find a way to write one in terms of the other because we can solve for only one variable at a time.
It takes Andrew \(30\; min\) to drive to work in the morning. He drives home using the same route, but it takes \(10\; min\) longer, and he averages \(10\; mi/h\) less than in the morning. How far does Andrew drive to work?
Solution
This is a distance problem, so we can use the formula \(d =rt\), where distance equals rate multiplied by time. Note that when rate is given in \(mi/h\), time must be expressed in hours. Consistent units of measurement are key to obtaining a correct solution.
First, we identify the known and unknown quantities. Andrew’s morning drive to work takes \(30\; min\), or \(\frac{1}{2}\; h\) at rate \(r\). His drive home takes \(40\; min\), or \(\frac{2}{3}\; h\), and his speed averages \(10\; mi/h\) less than the morning drive. Both trips cover distance \(d\). A table, such as Table \(\PageIndex{2}\), is often helpful for keeping track of information in these types of problems.
 | \(d\) | \(r\) | \(t\)
To Work | \(d\) | \(r\) | \(\frac{1}{2}\)
To Home | \(d\) | \(r−10\) | \(\frac{2}{3}\)
Write two equations, one for each trip.
\[d=r\left(\dfrac{1}{2}\right) \qquad \text{To work} \nonumber\]
\[d=(r-10)\left(\dfrac{2}{3}\right) \qquad \text{To home} \nonumber\]
As both equations equal the same distance, we set them equal to each other and solve for \(r\).
\[\begin{align*} r\left (\dfrac{1}{2} \right )&= (r-10)\left (\dfrac{2}{3} \right )\\ \dfrac{1}{2}r&= \dfrac{2}{3}r-\dfrac{20}{3}\\ \dfrac{1}{2}r-\dfrac{2}{3}r&= -\dfrac{20}{3}\\ -\dfrac{1}{6}r&= -\dfrac{20}{3}\\ r&= -\dfrac{20}{3}(-6)\\ r&= 40 \end{align*}\]
We have solved for the rate of speed to work, \(40\; mi/h\). Substituting \(40\) into the rate on the return trip yields \(30\; mi/h\). Now we can answer the question. Substitute the rate back into either equation and solve for \(d\).
\[\begin{align*}d&= 40\left (\dfrac{1}{2} \right )\\ &= 20 \end{align*}\]
The distance between home and work is \(20\; mi\).
Analysis
Note that we could have cleared the fractions in the equation by multiplying both sides of the equation by the LCD to solve for \(r\) .
\[\begin{align*} r\left (\dfrac{1}{2} \right)&= (r-10)\left (\dfrac{2}{3} \right )\\ 6\times r\left (\dfrac{1}{2} \right)&= 6\times (r-10)\left (\dfrac{2}{3} \right )\\ 3r&= 4(r-10)\\ 3r&= 4r-40\\ r&= 40 \end{align*}\]
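The same linear equation can also be checked mechanically. This is a quick sketch in Python (not part of the original text) that uses exact fractions to solve \(r\left(\frac{1}{2}\right) = (r-10)\left(\frac{2}{3}\right)\) and then recover the distance:

```python
from fractions import Fraction

# Trip times as fractions of an hour
t_work, t_home = Fraction(1, 2), Fraction(2, 3)
speed_gap = 10  # mi/h slower on the way home

# Equal distances: r*t_work = (r - speed_gap)*t_home
# => r*(t_work - t_home) = -speed_gap*t_home
r = (-speed_gap * t_home) / (t_work - t_home)  # morning rate, mi/h
d = r * t_work                                 # distance, mi
print(r, d)  # 40, 20
```

Working in `Fraction` rather than floats keeps the arithmetic exact, which mirrors the by-hand solution.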
Exercise \(\PageIndex{3}\)
On Saturday morning, it took Jennifer \(3.6\; h\) to drive to her mother’s house for the weekend. On Sunday evening, due to heavy traffic, it took Jennifer \(4\; h\) to return home. Her speed was \(5\; mi/h\) slower on Sunday than on Saturday. What was her speed on Sunday?
Answer
\(45\; mi/h\)
The perimeter of a rectangular outdoor patio is \(54\; ft\). The length is \(3\; ft\) greater than the width. What are the dimensions of the patio?
Solution
The perimeter formula is standard: \(P=2L+2W\). We have two unknown quantities, length and width. However, we can write the length in terms of the width as \(L =W+3\). Substitute the perimeter value and the expression for length into the formula. It is often helpful to make a sketch and label the sides as in Figure \(\PageIndex{3}\).
Now we can solve for the width and then calculate the length.
\[\begin{align*} P&= 2L + 2W\\ 54&= 2(W+3)+2W\\ 54&= 2W+6+2W\\ 54&= 4W+6\\ 48&= 4W\\ W&= 12 \end{align*}\]
\[\begin{align*} L&= 12+3\\ L&= 15 \end{align*}\]
The dimensions are \(L = 15\; ft\) and \(W = 12\; ft\).
Exercise \(\PageIndex{4}\)
Find the dimensions of a rectangle given that the perimeter is \(110\; cm\) and the length is \(1\; cm\) more than twice the width.
Answer
\(L=37\; cm\), \(W=18\; cm\)
The perimeter of a tablet of graph paper is \(48\; in.\) The length is \(6\; in.\) more than the width. Find the area of the graph paper.
Solution
The standard formula for area is \(A =LW\); however, we will solve the problem using the perimeter formula. The reason we use the perimeter formula is because we know enough information about the perimeter that the formula will allow us to solve for one of the unknowns. As both perimeter and area use length and width as dimensions, they are often used together to solve a problem such as this one.
We know that the length is \(6\; in\). more than the width, so we can write length as \(L =W+6\). Substitute the value of the perimeter and the expression for length into the perimeter formula and find the length.
\[\begin{align*} P&= 2L + 2W\\ 48&= 2(W+6)+2W\\ 48&= 2W+12+2W\\ 48&= 4W+12\\ 36&= 4W\\ W&= 9 \end{align*}\]
\[\begin{align*}L&= 9+6\\ L&= 15 \end{align*}\]
Now, we find the area given the dimensions of \(L = 15\; in\). and \(W = 9\; in\).
\[\begin{align*} A&= LW\\ A&=15(9)\\ A&= 135\space{in.}^2 \end{align*}\]
The area is \(135\space{in.}^2\).
Exercise \(\PageIndex{5}\)
A game room has a perimeter of \(70\; ft\). The length is five more than twice the width. How many \(ft^2\) of new carpeting should be ordered?
Answer
\(250\space{ft}^2\)
Find the dimensions of a shipping box given that the length is twice the width, the height is \(8\; in.\), and the volume is \(1,600\space{in.}^3\).
Solution
The formula for the volume of a box is given as \(V =LWH\), the product of length, width, and height. We are given that \(L =2W\), and \(H =8\). The volume is \(1,600\; \text{cubic inches}\).

\[\begin{align*} V&= LWH\\ 1600&= (2W)W(8)\\ 1600&= 16W^2\\ 100&= W^2\\ 10&= W \end{align*}\]
The dimensions are \(L = 20\; in\), \(W= 10\; in\), and \(H = 8\; in\).
Analysis
Note that the square root of \(W^2\) would result in a positive and a negative value. However, because we are describing width, we can use only the positive result.
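The remark about discarding the negative root can be illustrated with a short Python sketch (my own, not from the text): compute both square roots of \(W^2\) and keep only the positive one.

```python
import math

V, H = 1600, 8                  # volume in cubic inches, height in inches
# V = (2W) * W * H = 16 W^2, so W^2 = V / (2H)
w_squared = V / (2 * H)          # = 100
roots = (math.sqrt(w_squared), -math.sqrt(w_squared))  # +10 and -10
W = max(roots)                   # width must be positive, so discard -10
L = 2 * W
print(W, L)
```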
Media
Access these online resources for additional instruction and practice with models and applications of linear equations.
Key Concepts

A linear equation can be used to solve for an unknown in a number problem. See Example.

Applications can be written as mathematical problems by identifying known quantities and assigning a variable to unknown quantities. See Example.

There are many known formulas that can be used to solve applications. Distance problems, for example, are solved using the \(d = rt\) formula. See Example.

Many geometry problems are solved using the perimeter formula \(P =2L+2W\), the area formula \(A =LW\), or the volume formula \(V =LWH\). See Example, Example, and Example.
|
Question: Let $f,g:\mathbb{R}^2 \to \mathbb{R}^2 $ be continously differentiable functions where $f\circ g$ is defined. Let $$f = (f_1,f_2)\quad\text{and}\quad g=(g_1,g_2)$$ where all $f_1,f_2,g_1,g_2:\mathbb{R}^2\to\mathbb{R}$ are functions. What is $$\frac{\partial}{\partial x_1} [f(g(x_1,x_2))]?$$
I think the derivative should be a function from $\mathbb{R}^2$ to $\mathbb{R}^2$ as well.
Let $x =(x_1,x_2).$ By multivariate chain rule, we have $$\frac{\partial}{\partial x_1} [f(g(x_1,x_2))] = \frac{\partial}{\partial x_1} [f(g_1(x),g_2(x))] = \frac{\partial f}{\partial g_1} \cdot \frac{\partial g_1}{\partial x_1} + \frac{\partial f}{\partial g_2} \cdot \frac{\partial g_2}{\partial x_1}.$$ I am not sure whether the equation above is correct.
I notice that the multiplication in the RHS of the equation above is scalar multiplication. This means that both $$\frac{\partial f}{\partial g_1}\quad\text{and}\quad \frac{\partial f}{\partial g_2}$$ are vectors in $\mathbb{R}^2.$ But I do not understand their meaning.
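One way to resolve the confusion is to check the formula numerically. The sketch below (plain Python; the particular $f$ and $g$ are my own toy choices, not from the question) compares the chain-rule expression, in which $\partial f/\partial g_1$ and $\partial f/\partial g_2$ are indeed vectors in $\mathbb{R}^2$, against a direct finite-difference derivative of $f\circ g$:

```python
# Toy functions (illustrative only): f, g map R^2 -> R^2
def f(u, v):  return (u * v, u + v)
def g(x1, x2): return (x1 ** 2, x1 + x2)

def d_dx1_by_chain_rule(x1, x2, h=1e-6):
    u, v = g(x1, x2)
    # partial f / partial g1 and partial f / partial g2: vectors in R^2
    df_dg1 = [(f(u + h, v)[k] - f(u - h, v)[k]) / (2 * h) for k in range(2)]
    df_dg2 = [(f(u, v + h)[k] - f(u, v - h)[k]) / (2 * h) for k in range(2)]
    # scalar factors dg1/dx1 and dg2/dx1
    dg1_dx1 = (g(x1 + h, x2)[0] - g(x1 - h, x2)[0]) / (2 * h)
    dg2_dx1 = (g(x1 + h, x2)[1] - g(x1 - h, x2)[1]) / (2 * h)
    return [df_dg1[k] * dg1_dx1 + df_dg2[k] * dg2_dx1 for k in range(2)]

def d_dx1_direct(x1, x2, h=1e-6):
    fp = f(*g(x1 + h, x2)); fm = f(*g(x1 - h, x2))
    return [(fp[k] - fm[k]) / (2 * h) for k in range(2)]

a = d_dx1_by_chain_rule(1.0, 2.0)  # approx (7, 3) for these f, g
b = d_dx1_direct(1.0, 2.0)
```

The two vectors agree, which supports the proposed formula: the result is $\mathbb{R}^2$-valued, built from vector-valued partials of $f$ scaled by scalar partials of $g$.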
|
Dark Energy Survey Year 1 Results: Cross-correlation between DES Y1 galaxy weak lensing and SPT+Planck CMB weak lensing Abstract
We cross-correlate galaxy weak lensing measurements from the Dark Energy Survey (DES) year-one (Y1) data with a cosmic microwave background (CMB) weak lensing map derived from South Pole Telescope (SPT) and Planck data, with an effective overlapping area of 1289 deg$$^{2}$$. With the combined measurements from four source galaxy redshift bins, we reject the hypothesis of no lensing with a significance of $$10.8\sigma$$. When employing angular scale cuts, this significance is reduced to $$6.8\sigma$$, which remains the highest signal-to-noise measurement of its kind to date. We fit the amplitude of the correlation functions while fixing the cosmological parameters to a fiducial $$\Lambda$$CDM model, finding $$A = 0.99 \pm 0.17$$. We additionally use the correlation function measurements to constrain shear calibration bias, obtaining constraints that are consistent with previous DES analyses. Finally, when performing a cosmological analysis under the $$\Lambda$$CDM model, we obtain the marginalized constraints of $$\Omega_{\rm m}=0.261^{+0.070}_{-0.051}$$ and $$S_{8}\equiv \sigma_{8}\sqrt{\Omega_{\rm m}/0.3} = 0.660^{+0.085}_{-0.100}$$. These measurements are used in a companion work that presents cosmological constraints from the joint analysis of two-point functions among galaxies, galaxy shears, and CMB lensing using DES, SPT and Planck data.
Authors: Publication Date: Research Org.: Argonne National Lab. (ANL), Argonne, IL (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Brookhaven National Lab. (BNL), Upton, NY (United States); SLAC National Accelerator Lab., Menlo Park, CA (United States); Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States) Sponsoring Org.: USDOE Office of Science (SC), High Energy Physics (HEP) (SC-25) Contributing Org.: DES; SPT OSTI Identifier: 1487048 Report Number(s): arXiv:1810.02441; FERMILAB-PUB-18-513-AE 1697154 DOE Contract Number: AC02-07CH11359 Resource Type: Journal Article Journal Name: TBD Additional Journal Information: Journal Name: TBD Country of Publication: United States Language: English Subject: 79 ASTRONOMY AND ASTROPHYSICS Citation Formats
Omori, Y., and et al.
Dark Energy Survey Year 1 Results: Cross-correlation between DES Y1 galaxy weak lensing and SPT+Planck CMB weak lensing. United States: N. p., 2018. Web.
Figure: $n^i_s(z)$ for the 4 tomographic bins for Metacalibration. The black line shows the CMB lensing kernel.
|
Example: Let $f(x,y)=x^2+\cos{y}$. The rate of change at $f$ at $(1,0)$ in the direction of $<1,1>$ is:
A. $1$ B. $\sqrt{2}$ C. $\frac{\sqrt{3}}{2}$ D. $\pi$ E. $0$
I'm confused on how to start this. Am I supposed to find the gradient, plug in $(1,0)$ and take the dot product of this with $<1,1>$?
Thanks!
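If the intended procedure is gradient dot unit vector, the one subtlety is that $\langle 1,1\rangle$ must be normalized first. A quick numeric check (my own Python sketch, not part of the question):

```python
import math

# f(x, y) = x^2 + cos y, so grad f = (2x, -sin y); at (1, 0) this is (2, 0)
grad = (2 * 1, -math.sin(0))
# Normalize the direction <1, 1> to a unit vector before the dot product
norm = math.sqrt(1 ** 2 + 1 ** 2)
u = (1 / norm, 1 / norm)
rate = grad[0] * u[0] + grad[1] * u[1]  # 2/sqrt(2) = sqrt(2), i.e. answer B
print(rate)
```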
|
Despite studying the general theory for quite some time, this still eludes me.
The geodesic equation can be cast in the form $$ m\frac{d^2x^\mu}{d\tau^2}=-m\Gamma^\mu_{\alpha\beta}\frac{dx^\alpha}{d\tau}\frac{dx^\beta}{d\tau}, $$ so the connection coefficients play the role of a 4-force. The nontensorial nature of this expression is due to the fact that this 4-force essentially contains all inertial "pseudo-forces", including gravity, so it is frame dependent.
It is clear that this equation relates "coordinate-acceleration" to forces as seen by some observer. The essential question is
what kind of observer sees "gravitational force" this way?
I mean, in special relativity, for a global Lorentz-frame $(t,x,y,z)$, this frame represents a freely falling observer, whose "space" is described by the cartesian coordinates $x,y,z$.
Switching to a curvilinear coordinate system (in the spatial variables only) makes this less palatable, but I guess we can then employ a local perspective: The coordinate vectors $\partial_i|_p$ are the "yardsticks" of an observer at $p$. But what if we also change the direction of the time coordinate? What does that mean?
Given a general coordinate system in GR, what sort of observer does that coordinate system represent at a certain point $p$? How does he or she experience "space" and "time" from his/her perspective?
What if we use an orthonormal frame instead of a coordinate frame?
Note: I ask this question somewhat more generally but my aim is to be able to tell how some observers experience gravitational force.
For example if I describe Earth with a Schwarzschild-metric (assuming nonrotation) and there is an observer at a fixed point on the surface of the Earth, and a particle is moving freely (for example a projectile fired with some initial conditions), I want to be able to describe mathematically how
this observer sees the particle move, and what force does he feel is affecting the particle.
EDIT:
Since my question is apparantly confusing, I interject that I believe my question would be answered incidentally, if someone told me how to handle the following problem:
Let $(M,g)$ be a spacetime where $g=-a(r)dt^2+a(r)^{-1}dr^2+r^2(d\vartheta^2+\sin^2\vartheta d\varphi^2)$ is the Schwarzschild metric. The Schwarzschild metric is caused by a planet of mass $M$, whose surface is located at $r=r_p$.
At some point $(r_p,\vartheta_0,\varphi_0)$ and time $t_0$ there is an observer. The observer is motionless with respect to the origin of the coordinate system, so its spatial position is described by $(r_p,\vartheta_0,\varphi_0)$ at all times.
The observer carries three rods of unit length, $\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3$ satisfying $g(\mathbf{e}_i,\mathbf{e}_j)=\delta_{ij}$ as a reference frame.
Assume a freely falling particle's worldline crosses the observer's worldline at a point (so that the observer can make local measurements), I assume that the 3-velocity the observer would measure is simply $v^i=e^\mu_i\frac{dx^\nu}{d\tau}g_{\mu\nu}$, right? Same for all 4-tensorial quantities.
But what about the gravitational force? To calculate the connection coefficients, one must know not only the frame at a point, but also in a region around the point. So what is the mathematical expression to describe how the observer detects the force acting on the particle? How is it related to $\Gamma^\mu_{\alpha\beta}$?
If I fired a cannonball from the surface of the earth, how could I use GR to find its (3-)trajectory? The 3-trajectory
I see?
|
Theory of Current Distribution
In electrochemical cell design, you need to consider three current distribution classes in the electrolyte and electrodes. These are called
primary, secondary, and tertiary, and refer to different approximations that apply depending on the relative significance of solution resistance, finite electrode kinetics, and mass transport. Here, we provide a general introduction to the concept of current distribution and discuss the topic from a theoretical stand-point.
General Introduction to Current Distribution
An electrochemical cell is characterized by the relation of the current it passes to the voltage across it. The current-voltage relation depends on diverse physical phenomena and is fundamental to performance. In a battery or fuel cell at zero current (equilibrium), a theoretical maximum voltage can be extracted, but we want to draw current in order to extract power.
When current is drawn, there are voltage losses; equally, the current density may not be uniformly distributed on the electrode surfaces. The performance and lifetime of electrochemical cells, such as electroplating cells or batteries, is often improved by a uniform current density distribution.
By contrast, bad design leads to poor performance, such as:
Substantial losses and shortened lifetime of electrode material at practical operating currents in a battery or fuel cell Uneven plating thickness in electroplating Unprotected surfaces in a cathodic protection system
Simulating current distribution enables better understanding to avoid such problems.
The current distribution depends on several factors:
Cell geometry Cell operating conditions Electrolyte conductivity Electrode kinetics (“activation overpotential”) Mass transport of the reactants (“concentration overpotential”) Mass transport of ions in the electrolyte
Because of this complexity, many applications benefit from suitable simplification when modeling. If one of these factors dominates the cell behavior, the others may not need to be taken into account. As a consequence, successive approximations are introduced by the classifications of primary, secondary, and tertiary current distribution.
Each of the three classes of current distribution is represented in COMSOL Multiphysics by its own interface:
Primary, Secondary, and Tertiary Current Distribution. These interfaces are provided in all four of the application-specific products available for modeling electrochemical cells: the Batteries & Fuel Cells Module, Electrodeposition Module, Corrosion Module, and Electrochemistry Module.

Essential Theory
When modeling an electrochemical cell, you have to solve for the potential and current density in the electrodes and the electrolyte, respectively. You may also have to consider the contributing species concentrations and the involved electrolysis (Faradaic) reactions.
The electrodes in an electrochemical cell are normally metallic conductors and so their current-voltage relation obeys Ohm’s law:
\textbf{i}_s = -\sigma_s\nabla\phi_s

with conservation of current

\nabla\cdot\textbf{i}_s = Q_s

where \textbf{i}_s denotes the current density vector (A/m^2) in the electrode, \sigma_s denotes the conductivity (S/m), \phi_s the electric potential in the metallic conductor (V), and Q_s denotes a general current source term (A/m^3, normally zero).
In the electrolyte, which is an ionic conductor, the net current density can be described using the sum of fluxes of all ions:

\textbf{i}_l = F\sum_i z_i \textbf{N}_i

where \textbf{i}_l denotes the current density vector (A/m^2) in the electrolyte, F denotes the Faraday constant (C/mol), and \textbf{N}_i the flux of species i (mol/(m^2·s)) with charge number z_i. The flux of an ion in an ideal electrolyte solution is described by the Nernst-Planck equation and accounts for the flux of solute species by diffusion, migration, and convection in the three respective additive terms:
(1)

\textbf{N}_i = -D_i\nabla c_i - z_i u_{m,i} F c_i \nabla\phi_l + c_i\textbf{u}

where c_i represents the concentration of the ion i (mol/m^3), D_i the diffusion coefficient (m^2/s), u_{m,i} its mobility (s·mol/kg), \phi_l the electrolyte potential, and \textbf{u} the velocity vector (m/s).
On substituting the Nernst-Planck equation into the expression for current density, we find:

(2)

\textbf{i}_l = F\sum_i z_i\left(-D_i\nabla c_i - z_i u_{m,i} F c_i \nabla\phi_l + c_i\textbf{u}\right)

with conservation of current including a general electrolyte current source term Q_l (A/m^3):

\nabla\cdot\textbf{i}_l = Q_l
As well as conservation of current in the electrodes and electrolyte, you also have to consider the interface between the electrode and the electrolyte. Here, the current must also be conserved. Current is transferred between the electrode and electrolyte domains either by an electrochemical reaction, also called electrolysis or Faradaic current, or by dynamic charging or discharging of the charged double layer of ions adjacent to the electrode, also called capacitive or non-Faradaic current.
This general treatment of electrochemical theory is usually too complicated to be practical. By assuming that one or more of the terms in Equation (2) are small, the equations can be simplified and linearized. The three different current distribution classes applied in electrochemical analysis are based on a range of assumptions made to these general equations, depending on the relative influence of the different factors affecting the current distribution as listed above. In the next blog post in the series we’ll discuss the detailed content of these assumptions: going from primary to secondary to tertiary, fewer assumptions are made. Therefore, the complexity increases, but so does the level of detail available from the simulation.
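To make the simplest of these approximations concrete, here is a minimal sketch in plain Python (not COMSOL, and all numbers are illustrative): for a primary current distribution, kinetics and mass transport are neglected, so the electrolyte potential satisfies Laplace's equation. The sketch solves it by Jacobi iteration on a small 2D grid with fixed electrode potentials on the left and right boundaries and insulating (zero-flux) top and bottom walls.

```python
# Primary current distribution sketch: solve Laplace's equation for phi_l
nx, ny = 21, 11
phi = [[0.0] * nx for _ in range(ny)]
V_anode, V_cathode = 1.0, 0.0          # fixed electrode potentials (V)
for row in phi:
    row[0], row[-1] = V_anode, V_cathode

for _ in range(2000):                   # Jacobi iterations until converged
    new = [row[:] for row in phi]
    for j in range(ny):
        for i in range(1, nx - 1):
            # mirror the interior value across top/bottom walls = insulation
            up = phi[j + 1][i] if j + 1 < ny else phi[j - 1][i]
            dn = phi[j - 1][i] if j - 1 >= 0 else phi[j + 1][i]
            new[j][i] = 0.25 * (phi[j][i - 1] + phi[j][i + 1] + up + dn)
    phi = new

# For this slab geometry the drop is linear; i = -sigma * dphi/dx
sigma = 5.0                             # electrolyte conductivity, S/m (illustrative)
dx = 1.0 / (nx - 1)
mid = ny // 2
i_mid = -sigma * (phi[mid][nx // 2 + 1] - phi[mid][nx // 2 - 1]) / (2 * dx)
print(i_mid)                            # approx sigma * (V_anode - V_cathode) / L
```

In this trivial geometry the current density is uniform; the point of the wire-electrode example below is precisely that a non-trivial geometry makes it non-uniform.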
Below you can see the geometry from a modeling example of a wire electrode. This example models the primary, secondary, and tertiary current distributions of an electrochemical cell. In the open volume between the wire and the flat surfaces, electrolyte is allowed to flow. You can think of the electrochemical cell as a unit cell of a larger wire-mesh electrode — a common electrochemical cell set-up in many large-scale industrial processes.
Geometry of the electrochemical cell. Wire electrode (anode) between two flat electrodes (cathodes). Flow inlet to the left, outlet to the right. The top and bottom flat surfaces are inert. Next Up: Choosing the Right Current Distribution Interface
Now, you might be wondering which of the three current distribution interfaces you should use for your particular electrochemical cell simulations. In an upcoming blog post, we will use the wire electrode example shown here for a comparison of the three current distributions. Stay tuned!
|
Sometimes it is good to use different text for a printed manual and its corresponding Info file. In this case, you can use the conditional commands to specify which text is for the printed manual and which is for the Info file.

@ifinfo begins segments of text that should be ignored by TeX when it typesets the printed manual. The segment of text appears only in the Info file. The @ifinfo command should appear on a line by itself; end the Info-only text with a line containing @end ifinfo by itself. At the beginning of a Texinfo file, the Info permissions are contained within a region marked by @ifinfo and @end ifinfo. (See section Summary and Copying Permissions for Info.)
The @iftex and @end iftex commands are similar to the @ifinfo and @end ifinfo commands, except that they specify text that will appear in the printed manual but not in the Info file. Likewise for @ifhtml and @end ifhtml, which specify text to appear only in HTML output.
For example,
@iftex This text will appear only in the printed manual. @end iftex @ifinfo However, this text will appear only in Info. @end ifinfo
The preceding example produces the following line:
This text will appear only in the printed manual.
Note how you only see one of the two lines, depending on whether you are reading the Info version or the printed version of this manual.
The @titlepage command is a special variant of @iftex that is used for making the title and copyright pages of the printed manual. (See section @titlepage.)
Inside a region delineated by @iftex and @end iftex, you can embed some plain TeX commands. Info will ignore these commands since they are only in that part of the file which is seen by TeX. You can write the TeX commands as you would write them in a normal TeX file, except that you must replace the `\' used by TeX with an `@'. For example, in the @titlepage section of a Texinfo file, you can use the TeX command @vskip to format the copyright page. (The @titlepage command causes Info to ignore the region automatically, as it does with the @iftex command.)
However, many features of plain TeX will not work, as they are overridden by features of Texinfo.
You can enter plain TeX completely, and use `\' in the TeX commands, by delineating a region with the @tex and @end tex commands. (The @tex command also causes Info to ignore the region, like the @iftex command.)
For example, here is a mathematical expression written in plain TeX:
@tex $$ \chi^2 = \sum_{i=1}^N \left (y_i - (a + b x_i) \over \sigma_i\right)^2 $$ @end tex
The output of this example will appear only in a printed manual. If you are reading this in Info, you will not see anything after this paragraph. In a printed manual, the above expression looks like this:
@set, @clear, and @value
You can direct the Texinfo formatting commands to format or ignore parts of a Texinfo file with the @set, @clear, @ifset, and @ifclear commands.
In addition, you can use the @set flag command to set the value of flag to a string of characters; and use @value{flag} to insert that string. You can use @set, for example, to set a date and use @value to insert the date in several places in the Texinfo file.
@ifset and @ifclear
When a flag is set, the Texinfo formatting commands format text between subsequent pairs of @ifset flag and @end ifset commands. When the flag is cleared, the Texinfo formatting commands do not format the text.
Use the @set flag command to turn on, or set, a flag; a flag can be any single word. The format for the command looks like this:

@set flag
Write the conditionally formatted text between @ifset flag and @end ifset commands, like this:

@ifset flag
conditional-text
@end ifset
For example, you can create one document that has two variants, such as a manual for a `large' and `small' model:
You can use this machine to dig up shrubs without hurting them. @set large @ifset large It can also dig up fully grown trees. @end ifset Remember to replant promptly ...
In the example, the formatting commands will format the text between @ifset large and @end ifset because the large flag is set.
Use the @clear flag command to turn off, or clear, a flag. Clearing a flag is the opposite of setting a flag. The command looks like this:

@clear flag
Write the command on a line of its own.
When flag is cleared, the Texinfo formatting commands do not format the text between @ifset flag and @end ifset; that text is ignored and does not appear in either printed or Info output.
For example, if you clear the flag of the preceding example by writing an @clear large command after the @set large command (but before the conditional text), then the Texinfo formatting commands ignore the text between the @ifset large and @end ifset commands. In the formatted output, that text does not appear; in both printed and Info output, you see only the lines that say, "You can use this machine to dig up shrubs without hurting them. Remember to replant promptly ...".
If a flag is cleared with an @clear flag command, then the formatting commands format text between subsequent pairs of @ifclear and @end ifclear commands. But if the flag is set with @set flag, then the formatting commands do not format text between an @ifclear and an @end ifclear command; rather, they ignore that text. An @ifclear command looks like this:

@ifclear flag
In brief, the commands are:
@set flag
Tell the formatters that flag is set.

@clear flag
Tell the formatters that flag is cleared.

@ifset flag
Begin text that is formatted only if flag is set; end it with an @end ifset command. If flag is cleared, the text is ignored.

@ifclear flag
Begin text that is formatted only if flag is cleared; end it with an @end ifclear command. If flag is set, the text is ignored.
@value
You can use the @set command to specify a value for a flag, which is expanded by the @value command. The value is a string of characters.
Write the @set command like this:
@set foo This is a string.
This sets the value of foo to "This is a string."
The Texinfo formatters replace an @value{flag} command with the string to which flag is set.
Thus, when foo is set as shown above, the Texinfo formatters convert

@value{foo}

to

This is a string.
You can write an @value command within a paragraph; but you must write an @set command on a line of its own.
If you write the @set command like this:
@set foo
without specifying a string, the value of foo is an empty string.
If you clear a previously set flag with an @clear flag command, a subsequent @value{flag} command is invalid and the string is replaced with an error message that says `{No value for "flag"}'.
For example, if you set how-much as follows:
@set how-much very, very, very
then the formatters transform
It is a @value{how-much} wet day.

into

It is a very, very, very wet day.
If you write
@clear how-much
then the formatters transform
It is a @value{how-much} wet day.

into

It is a {No value for "how-much"} wet day.
@value Example
You can use the @value command to limit the number of places you need to change when you record an update to a manual. Here is how it is done in The GNU Make Manual:
Set the flags:
@set EDITION 0.35 Beta @set VERSION 3.63 Beta @set UPDATED 14 August 1992 @set UPDATE-MONTH August 1992
Write text for the first @ifinfo section, for people reading the Texinfo file:
This is Edition @value{EDITION}, last updated @value{UPDATED}, of @cite{The GNU Make Manual}, for @code{make}, Version @value{VERSION}.
Write text for the title page, for people reading the printed manual:
@title GNU Make @subtitle A Program for Directing Recompilation @subtitle Edition @value{EDITION}, ... @subtitle @value{UPDATE-MONTH}
(On a printed cover, a date listing the month and the year looks less fussy than a date listing the day as well as the month and year.)
Write text for the Top node, for people reading the Info file:
This is Edition @value{EDITION} of the @cite{GNU Make Manual}, last updated @value{UPDATED} for @code{make} Version @value{VERSION}.
After you format the manual, the text in the first @ifinfo section looks like this:
This is Edition 0.35 Beta, last updated 14 August 1992, of `The GNU Make Manual', for `make', Version 3.63 Beta.
When you update the manual, change only the values of the flags; you do not need to rewrite the three sections.
Go to the first, previous, next, last section, table of contents.
|
Consider the space $X = \Bbb{Z}^3$, a $\Bbb{Z}$-module. Let $M = \{ \sum_{i=1}^n c_i(p_i, q_i, r_i) : \sum_{i=1}^n c_i q_i = 0,$ where $p_i, q_i, r_i$ are either prime numbers or $0 \}$. Then is $M \approx \Bbb{Z}^2$?
Clearly, $M \subset \Bbb{Z} \times 0 \times \Bbb{Z}$. If $(x, y, z) \in M$, then summing on the 2nd component gives $y = 0$, and so... I'm having trouble seeing that you can handle the other two components independently of the second once the second is zeroed, since the coefficients $c_i$ are tied up in its sums.
For any finite $n \geq 1$ we have a system of 3 linear equations in $n$ unknowns, collected in a vector $c$. Let $x, z \in \Bbb{Z}$ be arbitrary. We want there to always be a solution to:
$$ A c = (x, 0, z)^t $$
Where $A$ is composed of prime numbers or $0$. I think we just let $n \geq 3$ without loss of generality (the sums can be any finite number of terms). Also, $A$ is allowed to vary over any primes or $0$ for each result vector $(x, y, z)^t$. That should make things easier.
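A brute-force sanity check (my own sketch, not a proof) supports the guess: taking all $q_i = 0$ makes the constraint $\sum c_i q_i = 0$ vacuous, and integer differences of prime vectors already produce the standard generators of $\Bbb{Z} \times 0 \times \Bbb{Z}$:

```python
def combo(terms):
    """Integer combination of vectors; each term is (c, (p, q, r)) with p, q, r prime or 0."""
    s = [0, 0, 0]
    for c, v in terms:
        for k in range(3):
            s[k] += c * v[k]
    return tuple(s)

# All q_i = 0, so the constraint sum(c_i * q_i) = 0 holds automatically.
e1 = combo([(1, (3, 0, 0)), (-1, (2, 0, 0))])   # 1*(3,0,0) - 1*(2,0,0) = (1, 0, 0)
e3 = combo([(1, (0, 0, 3)), (-1, (0, 0, 2))])   # 1*(0,0,3) - 1*(0,0,2) = (0, 0, 1)
print(e1, e3)
```

Since $(1,0,0)$ and $(0,0,1)$ both lie in $M$, $M$ contains (and by the inclusion above equals) $\Bbb{Z} \times 0 \times \Bbb{Z} \cong \Bbb{Z}^2$.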
|
My question arises from a construction I gave in my recent answer to a question of Alexander Pruss concerning large families of independent non-measurable sets of reals. In that argument, using the continuum hypothesis and the existence of a thick Kurepa tree $T$, I produced a family of $2^{\frak c}$ many Vitali sets $\{\ A_s\mid s\in[T]\ \}$, which was almost disjoint in the sense that $A_s\cap A_t$ was countable whenever $s\neq t$. The only aspect of the Vitali relation that was used in the construction was that the Vitali equivalence classes (equivalence under rational translation) are countably infinite. Thus, the construction proves:
Theorem. If the CH holds and there is a thick Kurepa tree, then for every partition of $\mathbb{R}$ into countably infinite sets, there is an almost-disjoint family of selectors of size $2^{\frak c}$.
By almost-disjoint here, I mean that any two distinct elements of the family have countable intersection; by selector, I mean that each set in the family has exactly one element from each equivalence class; and by a partition into countably infinite sets, I mean that we have an equivalence relation on $\mathbb{R}$ with every equivalence class countably infinite. To prove the theorem, simply label the nodes on the $\alpha^{th}$ level of $T$ with distinct members of the $\alpha^{th}$ equivalence class in the partition. Being thick, the tree has $2^{\frak c}$ many branches, each of which provides a selector, and any two such selectors can agree only up to some countable height in the tree, where those branches separate.
My question is whether I really needed those set-theoretic assumptions in order to make the conclusion.
Question. How much can one weaken the hypotheses of the theorem and still prove the conclusion?
For example, can we drop the thick Kurepa tree assumption? Can we omit CH? Can we prove it in ZFC? Can one show the consistency with ZFC of a counterexample?
|
I notice there is a certain similarity between the definition of a valuation ring and the definition of an ultrafilter.
To begin, take a field $K$ and let $\mathcal{A}$ be the set of subrings of $K$. Let $\mathcal{B}'$ be the class of pairs $(\nu, \Lambda)$, where $\Lambda$ is a partially ordered abelian group, and $\nu : K^\times \rightarrow \Lambda$ is a surjective map of abelian groups such that $\nu(a),\nu (b) \geq 0 \implies \nu(a+b) \geq 0$. Note that $\nu$ does not necessarily respect the partial order of $\Lambda$. We form the set $\mathcal{B}$ of equivalence classes of elements in $\mathcal{B}'$, where $(\nu, \Lambda) \sim (\nu', \Lambda')$ when $\nu$ factors through $\nu'$ by an isomorphism of partially ordered abelian groups.
There is a $1$-to-$1$ correspondence between $\mathcal{A}$ and $\mathcal{B}$. We send a subring $R$ of $K$ to the abelian group $K^\times / R^\times$, with the smallest admissible partial order generated by declaring elements $r R^\times$ to be non-negative, paired with the natural map $K^\times \rightarrow K^\times /R^\times$. We send a pair $(\nu, \Lambda)$ in $\mathcal{B}$ to $\{ r \in K : \nu(r) \geq 0 \}$.
To see the similarity, take a boolean algebra $A$ with filter $F$ and a field $K$ with subring $R$ inducing a pair $(\nu, \Lambda)$. To make the similarity more clear, I want to change the notation a bit for the field $K$: for $a, b \in K$, write $a \leq b$ when $\nu(a) \leq \nu(b)$. Write $a \wedge b$ for $a + b$. Write $a^c$ for $a^{-1}$ ($c$ for complement). Then we have
1) $a, b \in R \implies a \wedge b \in R$ for all $a,b \in K^\times$, just as $a, b \in F \implies a \wedge b \in F$ for all $a, b \in A$.
2) $1 \in R$, just as $1 \in F$.
3) $a \in R, a \leq b \implies b \in R$, just as $a \in F, a \leq b \implies b \in F$.
4) $R$ is a valuation ring when $a \in R$ or $a^c \in R$ for all $a \in K^\times$, just as $F$ is an ultrafilter when $a \in F$ or $a^c \in F$ for all $a \in A$.
Can anyone illuminate the similarity going on here? How is $K^\times$ formally like a boolean algebra?
|
I am looking at the finite difference methods to solve simple $u_t=a(x,t)u_{xx}$.
There are explicit, implicit, Crank Nicolson.
The latter is said to be more accurate, since its local truncation error is of second order, provided all expansions are done around the point $t^{n+0.5}$. However, the local truncation error basically tells us how well the difference equation approximates the PDE. Thus, if I do the expansion of the CN scheme around any other point, then I don't have second order anymore. Question 1: How can I trust the results of the scheme if there is only a single point where second order occurs?
On the other hand, if I have the explicit scheme, then regardless of the point around which I do the Taylor expansion I keep getting first order, so I understand why it is first order; but what I don't understand is:
Question 2: Why is CN supposed to give a global error of second order at points on the grid when the second-order local error is estimated at a point which is not even on the mesh?
Thanks!
EDIT: Any scheme can be written as $\frac{u^{n+1}-u^n}{\tau}=L_hu^{\theta}$. So, regardless of the choice of $\theta$, we have RHS $=u_{xx}(x_i,t^{\theta})+O(h^2)+\dots$ So it really boils down to the approximation of the time derivative on the left-hand side. And here we have a choice. We can take $\theta=1$ and have the implicit scheme, with the derivative approximated at the point $t^{n+1}$, or we can take the midpoint and have the derivative approximated at that point with higher accuracy. However, that point is not on the grid! What I don't understand is that we expand around a point which is not on the grid, but measure the error at a point which is on the grid, where the expansion of the difference equation gives only first order. I am confused about it.
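A quick numerical sketch of the behavior in question (my own illustration, assuming the constant-coefficient case $a \equiv 1$ on $[0,1]$ with homogeneous Dirichlet boundary conditions, exact solution $u = e^{-\pi^2 t}\sin(\pi x)$, and the coupling $\tau = h$). Halving both steps roughly quarters the global error at the grid points, even though the second-order expansion point $t^{n+0.5}$ is not on the grid:

```python
import numpy as np

def cn_error(N, T=0.1):
    # Crank-Nicolson for u_t = u_xx on [0,1], u(0)=u(1)=0, u(x,0)=sin(pi x);
    # exact solution u(x,t) = exp(-pi^2 t) sin(pi x).
    h = 1.0 / N
    tau = h                      # refine the time step together with the mesh
    steps = int(round(T / tau))
    x = np.linspace(0.0, 1.0, N + 1)
    u = np.sin(np.pi * x)
    # second-difference operator on the interior nodes
    A = (np.diag(-2.0 * np.ones(N - 1))
         + np.diag(np.ones(N - 2), 1)
         + np.diag(np.ones(N - 2), -1)) / h**2
    I = np.eye(N - 1)
    for _ in range(steps):
        u[1:-1] = np.linalg.solve(I - 0.5 * tau * A, (I + 0.5 * tau * A) @ u[1:-1])
    exact = np.exp(-np.pi**2 * steps * tau) * np.sin(np.pi * x)
    return np.max(np.abs(u - exact))

e1, e2 = cn_error(20), cn_error(40)
print(e1 / e2)  # close to 4, i.e. second-order convergence of the grid error
```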
|
I am quite new to Mathematica and I cannot find what I'm looking for on the Stackexchange or on the Mathematica help pages.
I have the following function $$U=\int^\tau_0 \bar{u}e^{-\rho t}dt \,+ \int^\infty_\tau ve^{-\rho t}dt$$ I am trying to form a plot that shows how the value of $U$ changes as $\tau$ changes, where everything else is constant. I would like to plot from $\tau=[0,2]$.
I have set $\bar{u} = 0.5$, $\rho=1$, $v=1$ for $t>0.5$ and $v=0$ otherwise.
This is what I have tried so far:
u = 0.5
v = Piecewise[{{0, t < 0.5}, {1, t > 0.5}}]
ρ = 1
Plot[Integrate[u E^(-ρ t), {t, 0, τ}] + Integrate[v E^(-ρ t), {t, τ, infinity}], {τ, 0, 2}]
When I run this, I only get an empty set of axes from 0 to 2. I have also tried defining the function "U", and then plotting "U", but I get the same result.
How do I form a plot for $U$ against $\tau$?
|
Scenario: ${\mathcal A}$ and ${\mathcal B}$ are two observables. Mathematically we model them by two Hermitian operators $A\colon H \to H$ and $B\colon H \to H$ on a separable Hilbert space. Physically they correspond to experiments $E_A$ and $E_B$, whose results are values in $Spec(A)$ and $Spec(B)$; repetitions produce value distributions on these spectra, expectation values, variances and higher moments. The mathematical operator $A+B$ is also Hermitian. So let us look for an experiment which corresponds to this operator and let us study its expectation value in state $\varphi$. Naive approach: Let us try pair-experiments. Assume we have a black box producing samples of state $\varphi$. Take a sample of the state, do experiment $E_A$ and get result $a$. Sample the state again, do experiment $E_B$ and get result $b$. Call the sum $a+b$ the result of the pair-experiment.
If $Spec(A) = \{ a_1, a_2 \}$ and $Spec(B) = \{b_1, b_2\}$ then the pair experiment has spectrum $\{ a_1 + b_1, a_1 + b_2, a_2 + b_1, a_2 + b_2 \}$. Obviously the pair-experiment has to be described in $H \otimes H$ and with a completely different observable. Details are straightforward, but we have no experiment for $A + B$. :-(
Second attempt: Let us assume that ${\mathcal A}$ and ${\mathcal B}$ are compatible and $A$ and $B$ commute. Then we can do the following: Sample the state once, on that sample do experiments $E_A$ and $E_B$ in whatever sequence, receive sequence-independent values $a$ and $b$, and add them. Mathematically all is good. $A$ and $B$ share an eigenbasis, the spectrum of $A + B$ is the sum of the eigenvalues (belonging to the same shared eigenspace). Expectation values work out as expected. :-) Now my question: $A + B$ is still a Hermitian operator, even if $A$ and $B$ do not commute. So I am still curious which experiment this operator belongs to. Note: In case of the product, the operator $A\cdot B$ is no longer Hermitian if the operators do not commute, and this makes it impossible for me to ask that question for the product. My question would break the preconditions of the formalism. But for $A + B$ the formalism allows us to pose this question... Update: In consequence of some comments I will try to specify my question more clearly: What is the physical meaning of the sum of two observables?
Obviously the "sum of two observables" is not the "sum of the values of the two observables". Assume that observable $A$ may have the values $2$ or $3$ and assume that observable $B$ may have the values $100$ or $200$ then the observable $A+B$ does not have the values $102$, $103$, $202$ or $203$ as a simple, naive approach might suggest or as an understanding of "sum of the values of the two observables" might suggest.
With this intuition failing, I would like to get an understanding of the physical meaning of $A + B$ starting from an understanding of $A$ and $B$.
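To see the failure of the naive value-sum picture concretely, here is a small numerical sketch (my own illustration with hypothetical matrices): two noncommuting Hermitian operators with spectra $\{2,3\}$ and $\{100,200\}$ whose sum has eigenvalues that are not sums of individual eigenvalues.

```python
import numpy as np

# A has spectrum {2, 3}; B has spectrum {100, 200}; they do not commute.
A = np.array([[2.0, 0.0], [0.0, 3.0]])
B = np.array([[150.0, 50.0], [50.0, 150.0]])  # eigenvalues 100 and 200

assert not np.allclose(A @ B, B @ A)  # noncommuting

eig_sum = np.linalg.eigvalsh(A + B)
print(eig_sum)  # roughly [102.5, 202.5]; none of 102, 103, 202, 203
```

The spectrum of $A+B$ lands strictly between the naive candidate values, illustrating that the "sum of the values" interpretation breaks down once the operators fail to commute.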
Update 2: Adjusted the description of the pair experiment to a less misleading form.
|
Here is a solution by way of the Principle of Inclusion / Exclusion (PIE). Let $M = \{m_1, m_2, m_3, \dots ,m_{35}\}$.
We start by asking how many ways we can write $M$ as a union of four subsets $A_1, A_2, A_3, A_4$ if every element of $M$ must be in at least one subset, dropping condition (ii) for now. Consider one element, $m_i$, and let$$x_{ij} = \begin{cases}1 \qquad \text{if } m_i \in A_j\\0 \qquad \text{otherwise}\end{cases}$$ Then there are $15$ possible choices for $(x_{i1}, x_{i2}, x_{i3}, x_{i4})$: all of the $2^4$ possibilities except for $(0,0,0,0)$, since $m_i$ must be in at least one subset. So altogether there are $N = 15^{35}$ ways to write $M$ as a union of four subsets.
Let $P_k$ be the set of arrangements in which every element of $M$ is in at least one of the subsets and $A_{k-1} \cap A_k = \emptyset$, for $k=2,3,4$. We write $|P_k|$ for the size of $P_k$, and we define $S_1$ = $|P_2|+|P_3|+|P_4|$. To find $|P_2|$, fix $i$ and consider the possible choices for $(x_{i1},x_{i2},x_{i3},x_{i4})$ for an element $m_i$. All choices are possible except for those of the form $(1,1,x_{i3},{x_{i4}})$ or $(0,0,0,0)$. It turns out there are $11$ possibilities. We could count the possibilities through another application of PIE, but it's probably easier to simply make a list of all $16$ possibilities for $(x_{i1},x_{i2},x_{i3},x_{i4})$ and cross out those which are excluded. So $|P_2| = 11^{35}$. The same is true of $|P_3|$ and $|P_4|$, so $S_1 = 3 \cdot 11^{35}$.
Now define $S_2 = |P_2 \cap P_3| + |P_2 \cap P_4| + |P_3 \cap P_4|$. To find $|P_2 \cap P_3|$, we again count the number of possibilities for $(x_{i1},x_{i2},x_{i3},x_{i4})$. This time the excluded cases are those of the form $(1,1,x_{i3},x_{i4})$, $(x_{i1}, 1, 1, x_{i4})$, or $(0,0,0,0)$. There are $9$ possibilities, so $|P_2 \cap P_3| = 9^{35}$. The case $|P_3 \cap P_4|$ is similar, so $|P_3 \cap P_4| = 9^{35}$, but the case $|P_2 \cap P_4|$ is a bit different. Here the excluded cases for $(x_{i1},x_{i2},x_{i3},x_{i4})$ are those of the form $(1,1,x_{i3}, x_{i4})$, $(x_{i1},x_{i2},1,1)$, or $(0,0,0,0)$. There are $8$ possibilities, so $|P_2 \cap P_4| = 8^{35}$. Hence $S_2 = 2 \cdot 9^{35} + 8^{35}$.
Next, define $S_3 = |P_2 \cap P_3 \cap P_4|$. The excluded cases for $(x_{i1},x_{i2},x_{i3},x_{i4})$ are those of the form $(1,1,x_{i3},x_{i4})$, $(x_{i1},1,1,x_{i4})$, $(x_{i1},x_{i2},1,1)$, or $(0,0,0,0)$. There are $7$ possibilities, so $S_3 = 7^{35}$.
Finally, by PIE the number of arrangements which fall in none of the sets $P_2$, $P_3$, or $P_4$, i.e. those which satisfy all the requirements (i) and (ii), is$$\begin{align}N_0 &= N - S_1 + S_2 - S_3\\&= 15^{35} -3 \cdot 11^{35} + 2 \cdot 9^{35} + 8^{35} - 7^{35}\end{align}$$
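The inclusion/exclusion count can be checked by brute force for small $|M|$. A sketch of my own verification, encoding membership of each element in $A_1,\dots,A_4$ as a 4-bit mask and reading condition (ii) as $A_k \cap A_{k+1} \neq \emptyset$ for $k=1,2,3$:

```python
from itertools import product

def pie_formula(n):
    # the closed form above, with 35 replaced by n
    return 15**n - 3 * 11**n + 2 * 9**n + 8**n - 7**n

def brute_force(n):
    masks = range(1, 16)  # each element lies in at least one of A1..A4
    count = 0
    for assign in product(masks, repeat=n):
        # condition (ii): consecutive subsets A_k, A_{k+1} share an element
        ok = all(any((m >> (k - 1)) & 1 and (m >> k) & 1 for m in assign)
                 for k in range(1, 4))
        count += ok
    return count

for n in (1, 2, 3):
    assert brute_force(n) == pie_formula(n)
print("counts agree for n = 1, 2, 3")
```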
|
I see that the formula giving the potential (interaction) energy of a dipole and an induced dipole is $$V=-\frac{C}{r^6}$$ where $$C=\frac{\mu_1^2 \alpha'_2}{4 \pi \epsilon_0}$$ and that the formula giving the potential (interaction) energy of an induced dipole and another induced dipole is $$V=-\frac{C}{r^6}$$ where $$C=\frac{3}{2}\alpha'_1\alpha'_2\frac{I_1I_2}{I_1+I_2}$$. Subscripts 1 and 2 represent the dipole or induced dipole, $\mu$ represents the dipole moment of a permanent dipole, and $\alpha$ represents the polarizability of an induced dipole. Also, $r$ represents the separation between the two in each case, and $I$ represents the ionization energy of 1 or 2. I have been unable to figure out how people have derived the interaction energy in cases that involve an induced dipole (and couldn't find it in Atkins or on Google), and was wondering if someone might be able to show me how it is done. Thank you.
I don't think you need quantum mechanics to understand what's going on in dipole-induced dipole interaction. The basic mechanism is quite simple and just the details of the calculations change by switching to a quantum description.
Polarizable molecule in an external field
So first things first. Let us consider a simple model of a polarizable molecule: a charge of valence $z$ attached to a spring of strength $k$ and zero length at rest (a valid model in the harmonic approximation of a dipole).
Upon placing such a molecule in an electric field $\vec{E}$, the charge feels a force $ze \vec{E}$. This force pulls in one direction while the spring pulls in the opposite direction and eventually the system reaches a new mechanical equilibrium where
\begin{equation} -k \vec{d} + ze \vec{E} = \vec{0} \end{equation}
We can find then that the spring constant can be related to the size of the induced dipole $ d = || \vec{d}||$, the amount of charge displaced $ze$ and the electric field magnitude $E = ||\vec{E}||$ via:
\begin{equation} k = \frac{ze E}{d} \end{equation}
Now, to induce this dipole, of course the electric field had to work. The amount of work it provided is equal to the potential energy gained by the spring-like molecule i.e.
\begin{equation} W_{induced} = \frac{1}{2}k d^2 = \frac{1}{2} ze E d \end{equation}
Now, we can define the induced dipole $\vec{p}_i$ as being $ze \vec{d}$, it then comes that:
\begin{equation} W_{induced} = \frac{1}{2} \vec{p}_i\cdot \vec{E} \end{equation}
Now, the energy of dipole in an electric field is given by $U_d = -\vec{p}\cdot \vec{E}$. It then comes that the total energy of the dipole induced by an external field is then:
\begin{equation} U_{induced} = U_d + W_{induced} = -\frac{1}{2}\vec{p}_i \cdot \vec{E} \end{equation}
Potential energy of dipole-induced dipole system
Let us consider now that the external field is generated by permanent dipole $\vec{p}_p$ such that the interaction energy is now:
\begin{equation} U_{int} = -\frac{1}{2}\vec{p}_i \cdot \vec{E}_p \equiv -\frac{1}{2}\alpha_i ||\vec{E}_p||^2 \end{equation}
where I have introduced the polarizability $\alpha_i$ of the induced dipole such that $\vec{p}_i = \alpha_i \vec{E}$ in the general case.
Now, the electric field generated by a permanent dipole $\vec{p}_p$ at a point $M$ is
\begin{equation} \vec{E}_p(M) = \frac{1}{4 \pi \varepsilon_0 r^3}\left(3(\vec{p}_p \cdot \vec{u}) \: \vec{u}-\vec{p}_p \right) \end{equation}
upon taking the square of it, we get:
\begin{equation} ||\vec{E}_p(M)||^2 = \frac{1}{(4 \pi \varepsilon_0 r^3)^2}\left(3(\vec{p}_p \cdot \vec{u}) \: \vec{u}-\vec{p}_p \right) \cdot \left(3(\vec{p}_p \cdot \vec{u}) \: \vec{u}-\vec{p}_p \right) = \frac{p_p^2}{(4 \pi \varepsilon_0 r^3)^2}(3 \cos^2 \theta +1) \end{equation}
where the angle $\theta$ is defined such that $\vec{p}_p \cdot \vec{u} = p_p \cos \theta$.
Finally the total interaction energy reads:
\begin{equation} U_{int}(p_p, \theta, r) = -\frac{\alpha_i p_p^2 (3 \cos^2 \theta +1)}{2(4 \pi \varepsilon_0 r^3)^2} \end{equation}
Approximate free energy of the system
The free energy of the system, when in contact with bath at inverse temperature $\beta$ is defined as follows:
\begin{equation} e^{-\beta \mathcal{F}(p_p,r)} \equiv \int d\Omega_p \: e^{-\beta U_{int}(p_p,\theta,r)} \end{equation}
where $d\Omega_p \equiv \sin \theta d\theta d\phi$ is the integral element of the solid angle over the possible orientation of the permanent dipole $\vec{p}_p$.
At sufficiently high temperatures, we can expand the exponential inside the integral and this gives:
\begin{equation} e^{-\beta \mathcal{F}(p_p,r)} = \int d\Omega_p \: [1-\beta U_{int}(p_p,\theta,r) + \mathcal{O}(\beta^2 U_{int}(p_p,\theta,r)^2) ] \end{equation}
this integral gives then:
\begin{equation} e^{-\beta \mathcal{F}(p_p,r)} \approx 4\pi + \frac{4\pi \beta p_p^2 \alpha}{(4\pi \varepsilon_0 r^3)^2} \end{equation}
finally
\begin{equation} \mathcal{F}(p_p,r,\alpha) = -\beta^{-1} \ln \left[ 4\pi + \frac{4\pi \beta p_p^2 \alpha}{(4\pi \varepsilon_0 r^3)^2} + \mathcal{O}(\beta^2U_{int}^2) \right] \approx - \frac{ p_p^2 \: \alpha}{(4\pi \varepsilon_0)^2 r^6} \end{equation}
This interaction is called the Debye van der Waals interaction. So you can also look it up in other textbooks to get more details, especially on the quantum treatment of what I have done here.
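As a numerical sanity check of the angular integration step above (my own sketch, not from any textbook derivation): the solid-angle integral of $(3\cos^2\theta + 1)$ is $8\pi$, which is exactly the factor needed to turn $-\beta U_{int}$ into the $4\pi \beta p_p^2 \alpha / (4\pi\varepsilon_0 r^3)^2$ term of the expansion.

```python
import numpy as np

# Solid-angle integral of (3 cos^2 theta + 1): the phi integral gives 2*pi,
# leaving int_0^pi (3 cos^2 theta + 1) sin(theta) d(theta) = 4, total 8*pi.
theta = np.linspace(0.0, np.pi, 200001)
f = (3.0 * np.cos(theta) ** 2 + 1.0) * np.sin(theta)
integral = 2.0 * np.pi * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta))

print(integral / np.pi)  # 8.0 (up to quadrature error)
```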
|
Discretizing the Weak Form Equations
This post continues our blog series on the weak formulation. In the previous post, we implemented and solved an exemplary weak form equation in the COMSOL Multiphysics software. The result was validated with simple physical arguments. Today, we will start to take a behind-the-scenes look at how the equations are discretized and solved numerically.
Our Simple Example
Recall our simple example of 1D heat transfer at steady state with no heat source, where the temperature T is a function of the position x in the domain defined by the interval 1\le x\le 5. With the boundary conditions that the outgoing flux should be 2 at the left boundary (x=1) and the temperature should be 9 at the right boundary (x=5), the weak form equation reads:
(1)
We now attempt to find a way to solve this equation numerically.
Basis Functions
To solve Eq. (1) numerically, we first divide the domain 1\le x\le 5 into four evenly spaced sub-intervals, or
mesh elements, bound by five nodal points x = 1, 2, \cdots, 5. Then, we can define a set of basis functions, or shape functions, \psi_{1L}(x), \psi_{1R}(x), \psi_{2L}(x), \psi_{2R}(x), \cdots, \psi_{4R}(x), as shown in the graph below, where there are two shape functions in each mesh element, represented by a solid line and a dashed line.
For example, in the first element (1 \le x \le 2), we have
(2)
\begin{equation}
\psi_{1L}(x) = \left\{ \begin{array}{ll}
2-x & \mbox{for } 1 \le x \le 2,\\
0 & \mbox{elsewhere}
\end{array} \right. \mbox{ (solid red line)}
\end{equation}
\begin{equation}
\psi_{1R}(x) = \left\{ \begin{array}{ll}
x-1 & \mbox{for } 1 \le x \le 2, \\
0 & \mbox{elsewhere}
\end{array} \right. \mbox{ (dashed red line)}
\end{equation}
We observe that each shape function is a simple linear function ranging from 0 to 1 within a mesh element, and vanishes outside of that mesh element.
Note: Of course, COMSOL Multiphysics allows shape functions formed with higher-order polynomials, not just linear functions. The choice of linear shape functions here is for visual clarity.
With this set of shape functions, we can approximate any arbitrary function defined in the domain 1\le x\le 5 by a simple linear combination of them:
(3)
where a_{1L}, a_{1R}, a_{2L}, a_{2R}, \cdots are some constant coefficients, one for each shape function. In the graph below, the arbitrary function u(x) is represented by the black curve. The cyan curve represents the approximation by the superposition of the shape functions (3). Each term on the right-hand side of Eq. (3) is plotted using the same color and line style as the graph above.
We see that in general the approximation (represented by the cyan curve) can be discontinuous across the boundary between adjacent mesh elements. In practice, many physical systems, including our simple example of heat conduction, are expected to have continuous solutions. For this reason, the default shape functions for most physics interfaces are
Lagrange elements, in which the shape function coefficients are constrained so that the solution is continuous across boundaries between adjacent elements. In this case, the approximation is simplified, as shown in the figure below,
where the cyan curve has been made continuous by setting the coefficients on each side of a mesh boundary to be equal: a_{1R} = a_{2L}, a_{2R} = a_{3L}, a_{3R} = a_{4L}. We also renamed the coefficients for brevity:
\begin{align}
a_1 &\equiv a_{1L}\\
a_2 &\equiv a_{1R} = a_{2L}\\
a_3 &\equiv a_{2R} = a_{3L}\\
a_4 &\equiv a_{3R} = a_{4L}\\
a_5 &\equiv a_{4R}
\end{align}
We see that the continuity condition requires pairs of shape functions to share the same coefficient in making the approximation (3), which can now in turn be simplified by combining those pairs of shape functions into a new set of basis functions
\phi_1(x), \phi_2(x), \cdots, \phi_5(x), with each function localized around a nodal point:
(4)
\begin{align}
\phi_1(x) \equiv \psi_{1L}(x) &= \left\{ \begin{array}{ll}
2-x & \mbox{for } 1 \le x \le 2,\\
0 & \mbox{elsewhere}
\end{array} \right.
\\
\phi_2(x) \equiv \psi_{1R}(x) + \psi_{2L}(x) &= \left\{ \begin{array}{lll}
x-1 & \mbox{for } 1 \le x \le 2, \\
3-x & \mbox{for } 2 < x \le 3, \\
0 & \mbox{elsewhere}
\end{array} \right.
\\
\phi_3(x) \equiv \psi_{2R}(x) + \psi_{3L}(x) &= \left\{ \begin{array}{lll}
x-2 & \mbox{for } 2 \le x \le 3, \\
4-x & \mbox{for } 3 < x \le 4, \\
0 & \mbox{elsewhere}
\end{array} \right.
\\
&\;\;\vdots
\end{align}
As shown in the graph below, each new basis function is essentially a triangular-shaped, piecewise-linear function centered around a nodal point. Its value varies between 1 and 0 within the mesh element(s) adjacent to the nodal point, and vanishes everywhere else.
As discussed above, by choosing this new set of basis functions, we constrain the solution to be continuous across the boundary between adjacent mesh elements. Most physical systems satisfy this continuity constraint, including our simple heat transfer example here.
Now, with this new set of basis functions, the approximation (3) is simplified to
(5)
In the graph below, the arbitrary function u(x) is represented by the black curve. The cyan curve represents the approximation by the superposition of the new basis functions. Each term on the right-hand side of Eq. (5) is plotted using the same color scheme as the graph above.
As an aside, if the black curve represents the exact solution to some real modeling problem, then we see that the approximation is not very good, due to the coarseness of the mesh. Also, in general, the nodal point values a_1, a_2, \cdots are not required to lie on the exact solution, unless one is constrained to a known solution value (shown in a_5 as an example in the figure above). The discrepancy between the black and the cyan curves we see here represents the discretization error of the solution. In 2D and 3D models, there will also be a discretization error of the geometry. In my colleague Walter Frei’s blog post on meshing considerations, both types of errors are discussed in some detail. Due to these potential errors, a mesh refinement study is necessary to ensure the accuracy of modeling results.
We note that the approximation given by Eq. (5) (the cyan curve) is piecewise-linear. Thus, it’s impossible to evaluate its second derivatives numerically. As we have mentioned before, the weak formulation provides numerical benefits by reducing the order of differentiation in the equation system. In this case, only the first derivative is needed and it can be readily evaluated numerically. In a future blog entry, we will discuss an example of discontinuity in the material property that also benefits from the reduced order of differentiation.
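The hat basis and the approximation (5) are easy to reproduce in a few lines. The following sketch (my own, not COMSOL code) evaluates the five hat functions of Eq. (4) on the nodes x = 1, 2, ..., 5, checks that they sum to one everywhere (partition of unity), and interpolates an arbitrary function by its nodal values:

```python
import numpy as np

nodes = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

def phi(i, x):
    # Piecewise-linear hat centered at nodes[i], zero beyond its neighbors.
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    xi = nodes[i]
    if i > 0:                        # ascending branch on [x_{i-1}, x_i]
        left = nodes[i - 1]
        m = (x >= left) & (x <= xi)
        y[m] = (x[m] - left) / (xi - left)
    if i < len(nodes) - 1:           # descending branch on [x_i, x_{i+1}]
        right = nodes[i + 1]
        m = (x >= xi) & (x <= right)
        y[m] = (right - x[m]) / (right - xi)
    return y

x = np.linspace(1.0, 5.0, 401)
basis = np.array([phi(i, x) for i in range(5)])

# Partition of unity: the hats sum to 1 everywhere on [1, 5].
assert np.allclose(basis.sum(axis=0), 1.0)

# Nodal interpolation of an arbitrary function u(x), as in Eq. (5).
u = lambda t: np.sin(t) + 0.2 * t
a = u(nodes)                         # coefficients = nodal values
approx = a @ basis                   # sum_i a_i * phi_i(x)
assert np.allclose(approx[::100], u(nodes))  # exact at the nodal points
```

Between the nodes the cyan-curve-style approximation remains piecewise-linear, which is exactly why only first derivatives of it can be evaluated numerically.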
Discretizing the Weak Form Equation in Two Steps
With the new set of basis functions defined above, we proceed to discretize the weak form equation (1) in two steps. First, the temperature function, T(x), can be approximated by the set of basis functions in the same way as in Eq. (5):
(6)
where a_1, a_2, \cdots , a_5 are unknown coefficients to be determined.
(7)
\begin{equation*}
\begin{array}{l}
a_1 \int_1^5 \partial_x \phi_1(x) \partial_x \tilde{T}(x) \,dx + a_2 \int_1^5 \partial_x \phi_2(x) \partial_x \tilde{T}(x) \,dx + \cdots + a_5 \int_1^5 \partial_x \phi_5(x) \partial_x \tilde{T}(x) \,dx
\\
= -2 \tilde{T}_1 -\lambda_2 \tilde{T}_2 -\tilde{\lambda}_2 (a_5 -9)
\end{array}
\end{equation*}
where the temperature at the right boundary x=5, T_2, has been evaluated using the expression (6) and the fact that the basis functions are localized, leading to only one term, a_5 \phi_5(x=5) = a_5, contributing to T(x=5).
We see that there are six unknowns in the discretized version of the weak form equation (7): The five coefficients a_1, a_2, \cdots , a_5 and the one flux \lambda_2 at the right boundary. It is customary to call the unknowns
degrees of freedom. For example, here we say the (discretized) problem has “six degrees of freedom”.
To solve for the six unknowns, we need six equations. This leads to the second step of discretization. Recall from our first blog post that the role of the test functions is to sample the equation locally to clamp down the solution everywhere within the domain. Now we already have a set of localized functions, our basis functions \phi_1, \cdots, \phi_5, so we can just substitute them into the test function \tilde{T} in Eq. (7) to obtain the six equations we need.
Here is a table showing the six substitutions that will generate the six equations for us:
\tilde{T}(x)    \tilde{\lambda}_2
\phi_1(x)       0
\phi_2(x)       0
\phi_3(x)       0
\phi_4(x)       0
\phi_5(x)       0
0               1
Since each of the basis functions is localized, each substitution yields an equation with a small number of terms. For example, the first substitution gives
\begin{equation*}
\begin{array}{l}
a_1 \int_1^5 \partial_x \phi_1(x) \partial_x \phi_1(x) \,dx + a_2 \int_1^5 \partial_x \phi_2(x) \partial_x \phi_1(x) \,dx + \cdots + a_5 \int_1^5 \partial_x \phi_5(x) \partial_x \phi_1(x) \,dx
\\
= -2 \phi_1(x=1) -\lambda_2 \phi_1(x=5) - 0 \cdot (a_5 -9)
\end{array}
\end{equation*}
We note that \phi_1 has non-trivial overlap only with itself and \phi_2. Therefore, only the first two terms on the left-hand side are non-zero. Also, \phi_1 is localized near the left boundary (x=1), so only the first term on the right-hand side remains. The equation now becomes
(8)

a_1 - a_2 = -2
where we have evaluated the definite integrals on the left-hand side:
\begin{align}
\int_1^5 \partial_x \phi_1(x) \partial_x \phi_1(x) \,dx &= 1\\
\int_1^5 \partial_x \phi_2(x) \partial_x \phi_1(x) \,dx &= -1
\end{align}
and used the definition of \phi_1 on the right-hand side: \phi_1(x=1) = 1.
Similarly, the remaining five substitutions listed in the table above yield these equations:
(9)
\begin{align}
-a_1 + 2 a_2 -a_3 &= 0\\
-a_2 + 2 a_3 -a_4 &= 0\\
-a_3 + 2 a_4 -a_5 &= 0\\
-a_4 + a_5 &= -\lambda_2\\
0 &= -(a_5 -9)
\end{align}
We now have six equations for our six unknowns and it is straightforward to verify that the solution matches what we have obtained using COMSOL Multiphysics software in the previous post. For example, the last equation immediately gives us a_5 = 9, and using the expression (6) for the temperature, we obtain its value at the right boundary:
\begin{align}
T(x=5) &= a_1 \phi_1(x=5) + a_2 \phi_2(x=5) + \cdots + a_5 \phi_5(x=5)\\
&= a_1 \cdot 0 + a_2 \cdot 0 + \cdots + a_5 \cdot 1\\
&= 9
\end{align}
This agrees with the fixed boundary condition that the temperature should be 9 at the right boundary. It is also easy to see that it is the term associated with the test function \tilde{\lambda}_2 that gives rise to the equation (0 = a_5 -9), as we would expect.
Matrix Representation
(10)
\begin{equation*}
\left(
\begin{array}{cccccc}
1 & -1 & 0 & 0 & 0 & 0 \\
-1 & 2 & -1 & 0 & 0 & 0 \\
0 & -1 & 2 & -1 & 0 & 0 \\
0 & 0 & -1 & 2 & -1 & 0 \\
0 & 0 & 0 & -1 & 1 & 1 \\
0 & 0 & 0 & 0 & 1 & 0
\end{array}
\right)
\left(
\begin{array}{c} a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \\ \lambda_2 \end{array}
\right)
= \left(
\begin{array}{c} -2 \\ 0 \\ 0 \\ 0 \\ 0 \\ 9 \end{array}
\right)
\end{equation*}
The matrix on the left-hand side is customarily called the
stiffness matrix and the vector on the right is called the load vector, due to the application of this technique in structural mechanics.
We notice two interesting facts about this matrix equation. First, there are a lot of zeros in the matrix (a so-called
sparse matrix). In a practical model where there are many more mesh elements than our four elements here, we can envision that most of the elements in the matrix will be zero. This is a direct benefit of choosing localized shape functions, and it lends itself to very efficient numerical methods to solve the equation system.
Second, the Lagrange multiplier \lambda_2 appears only in one equation (the last column of the matrix has only one non-zero element). The remaining five equations involve only the five unknown coefficients a_1, a_2, \cdots , a_5. Therefore, we can choose to solve for a_1, a_2, \cdots , a_5 using the five equations, without ever needing to solve for \lambda_2. As we briefly mentioned in the previous entry, in general, it is possible to choose not to solve for the Lagrange multiplier(s) in order to gain computation speed.
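As a quick sanity check, the small system (10) can be solved directly. A sketch of my own (using NumPy rather than COMSOL) with the stiffness matrix and load vector exactly as printed above; the resulting nodal temperatures lie on a straight line, as one expects for steady 1D conduction with no heat source:

```python
import numpy as np

# Stiffness matrix and load vector from Eq. (10)
K = np.array([
    [ 1, -1,  0,  0,  0,  0],
    [-1,  2, -1,  0,  0,  0],
    [ 0, -1,  2, -1,  0,  0],
    [ 0,  0, -1,  2, -1,  0],
    [ 0,  0,  0, -1,  1,  1],
    [ 0,  0,  0,  0,  1,  0],
], dtype=float)
b = np.array([-2, 0, 0, 0, 0, 9], dtype=float)

sol = np.linalg.solve(K, b)
a, lam2 = sol[:5], sol[5]
print(a)     # [1. 3. 5. 7. 9.] -- nodal temperatures on a straight line
print(lam2)  # -2.0 -- the Lagrange multiplier (flux) at the right boundary
```

Note in particular that a_5 = 9, matching the fixed-temperature boundary condition, and that dropping the last row and column would let us solve for a_1, ..., a_5 without ever computing the Lagrange multiplier.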
Summary and Next Topic in the Weak Form Series
Today, we reviewed the basic procedure for discretizing the weak form equation using our simple example. We took advantage of a set of localized shape functions in two steps:
Using them as a basis to approximate the real solution Substituting them one by one into the weak form equation to obtain the discretized system of equations
The resulting matrix equation is sparse, which can be solved efficiently using a computer.
In the previous blog post, when we implemented the weak form equation using COMSOL Multiphysics, the discretization was done under the hood without needing the user’s help. Next, we will show you how to inspect the stiffness matrix and load vector, as well as how to choose to solve for — or not to solve for — the Lagrange multiplier by using the
Weak Form PDE interface in the software.
|
I have some questions regarding the first two sections of Einstein's paper. I'd really appreciate some guidance.
The paper can be found here: http://lorentz.phl.jhu.edu/AnnusMirabilis/AeReserveArticles/eins_lq.pdf
In section 1 of the paper, he considers a volume of gas surrounded by reflective walls. He goes on to derive what seems to me to be the Rayleigh-Jeans law using "Maxwell theory" and "kinetic gas theory".
[tex]\rho_\nu = \frac{R}{N} \frac{8\pi{\nu}^2}{L^3}T[/tex]
R/N = k (boltzmann constant) and L = c
Then in section 2... he writes: "We shall show in the following that the determination of elementary quanta given by Mr. Planck is, to a certain extent, independent of the theory of 'black-body radiation' constructed by him."
He writes Planck's formula: [tex]\rho_\nu = \frac{\alpha {\nu}^3}{e^{\beta\nu/T}-1}[/tex]
and then shows that at large [tex]T/\nu[/tex] this leads to the rayleigh jeans law.
It seems like what Einstein's showing is... first... "maxwell theory" and "kinetic gas theory" => Rayleigh Jeans Law then "planck's formula" => Rayleigh Jeans Law
Is this to demonstrate that Planck's formula is not just applicable to blackbodies? But I thought both of these were already known. It seems to me that there is something significant Einstein's getting at here, but I'm missing it.
I'd appreciate any help.
|
Introduction
Encoding numerical inputs for neural networks is difficult because the representation space is very large and there is no easy way to embed numbers into a smaller space without losing information. Some of the ways this is currently handled are:

- Scale inputs from minimum and maximum values to [-1, 1]
- One hot for each number
- One hot for different bins (e.g. [0-0], [1-2], [3-7], [8-19], [20, infty))
In small integer number ranges, these methods can work well, but they don’t scale well for wider ranges. In the input scaling approach, precision is lost making it difficult to distinguish between two numbers close in value. For the binning methods, information about the mathematical properties of the numbers such as adjacency and scaling is lost.
The desiderata for our embeddings of numbers as vectors are as follows:
- able to handle numbers of arbitrary length
- captures mathematical relationships between numbers (addition, multiplication, etc.)
- able to model sequences of numbers
In this blog post, we will explore a novel approach for embedding numbers as vectors that satisfies these desiderata.
Approach
My approach for this problem is inspired by word2vec, but unlike words, which follow the distributional hypothesis, numbers follow the rules of arithmetic. Instead of finding a “corpus” of numbers to train on, we can generate random arithmetic sequences and have our network “learn” the rules of arithmetic from the generated sequences and, as a side effect, be able to encode numbers and sequences as vectors.
Problem Statement
Given a sequence of n integers \(x_1, x_2, \ldots, x_n\), predict the next number in the sequence, \(x_{n+1}\).
Architecture
The architecture of the system consists of three parts: the encoder, the decoder and the nested RNN.
The encoder is an RNN that takes a number represented as a sequence of digits and encodes it into a vector that represents an embedded number.
The nested RNN takes the embedded numbers and previous state to output another embedded vector that represents the next number.
The decoder then takes the embedded number and unravels it through the decoder RNN to output the digits of the next predicted number.
Formally:
Let \(X\) represent a sequence of natural numbers where \(X_{i,j}\) represents the j-th digit of the i-th number of the sequence. We also append an <eos> “digit” to the end of each number to signal the end of the number. For the sequence X = 123, 456, 789, we have \(X_{1,2} = 2, X_{3,3} = 9, X_{3,4} = <eos>\).
Let \(l_i\) be the number of digits in the i-th number of the sequence (including the <eos> digit). Let \(E\) be an embedding matrix for each digit.
Let \(\vec{u}_i\) be an embedding of the i-th number in a sequence. It is computed as the final state of the encoder. Let \(\vec{v}_i\) be an embedding of the predicted (i+1)-th number in a sequence. It is computed from the output of the nested RNN and used as the initial state of the decoder.
Let \(R^e, R^d, R^n\) be the functions that give the next state for the encoder, decoder and nested RNN respectively. Let \(O^d, O^n\) be the functions that give the output of the current state for the decoder and nested RNN respectively.
Let \(\vec{s}^e_{i,j}\) be the state vector for \(R^e\) for the j-th timestep of the i-th number of the sequence. Let \(\vec{s}^d_{i,j}\) be the state vector for \(R^d\) for the j-th timestep of the i-th number of the sequence. Let \(\vec{s}^n_i\) represent the state vector of \(R^n\) for the i-th timestep.
Let \(z_{i,j}\) be the output of \(R^d\) at the j-th timestep of the i-th number of the sequence.
Let \(\hat{y}_{i,j}\) represent the distribution of digits for the prediction of the j-th digit of the (i+1)-th number of the sequence.\(\displaystyle{\begin{eqnarray}\vec{s}^e_{i,j} &=& R^e(E[X_{i,j}], \vec{s}^e_{i, j-1})\\\vec{u}_i &=& \vec{s}^e_{i,l_i}\\ \vec{s}^n_i &=& R^n(\vec{u}_i, \vec{s}^n_{i-1})\\\vec{v}_i &=& O^n(\vec{s}^n_i)\\ \vec{z}_{i,j} &=& O^d(\vec{s}^d_{i,j})\\ \vec{s}^d_{i, j} &=& R^d(\vec{z}_{i,j-1}, \vec{s}^d_{i, j-1})\\ \hat{y}_{i,j} &=& \text{softmax}(\text{MLP}(\vec{z}_{i,j}))\\ p(X_{i+1,j} = k \mid X_1, \ldots, X_i, X_{i+1, 1}, \ldots, X_{i+1, j-1}) &=& \hat{y}_{i,j}[k]\end{eqnarray}}\)
We use a cross-entropy loss function where \(y_{i,j}[t]\) represents the correct digit class for \(y_{i,j}\):\(\displaystyle{\begin{eqnarray}L(y, \hat{y}) &=& \sum_i \sum_j -\log \hat{y}_{i,j}[t]\end{eqnarray} }\)
Since I also find it difficult to intuitively understand what these sets of equations mean, here is a clearer diagram of the nested network:
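To make the data flow concrete, here is a minimal NumPy sketch of the forward pass. Everything in it is my own simplification: plain tanh cells stand in for the RNNs, the sizes and weights are arbitrary and untrained, \(O^n\) is taken as the identity, and the decoder is omitted (it would unroll from \(\vec{v}_i\) symmetrically to the encoder):

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 8, 16              # digit-embedding size and state size (arbitrary choices)
VOCAB = 11                # digits 0-9 plus <eos>
E = rng.normal(size=(VOCAB, D))          # digit embedding matrix E

def rnn_cell(W, U, b, x, s):
    # Plain tanh cell standing in for R^e and R^n
    return np.tanh(W @ x + U @ s + b)

def make_cell(in_dim):
    return (rng.normal(size=(H, in_dim)), rng.normal(size=(H, H)), np.zeros(H))

enc, nest = make_cell(D), make_cell(H)   # one weight set per RNN

def encode_number(digits):
    """u_i: run the encoder over the digits (plus <eos>) and keep the final state."""
    s = np.zeros(H)
    for d in digits + [10]:              # 10 = <eos>
        s = rnn_cell(*enc, E[d], s)
    return s

def step(sequence):
    """Run the nested RNN over a sequence of numbers; returns v_i at each step."""
    s_n, outputs = np.zeros(H), []
    for num in sequence:
        u = encode_number([int(c) for c in str(num)])
        s_n = rnn_cell(*nest, u, s_n)     # s^n_i = R^n(u_i, s^n_{i-1})
        outputs.append(s_n.copy())        # v_i = O^n(s^n_i), identity here
    return outputs

v = step([123, 456, 789])
print(len(v), v[-1].shape)
```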
Training
The whole network is trained end-to-end by generating random mathematical sequences and predicting the next number in each sequence. The generated sequences involve addition, subtraction, multiplication, division and exponents, and also include repeating series of numbers.
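A generator along these lines might look like the following sketch. This is a much-simplified guess at the post's setup: it covers only additive, geometric and repeating patterns, not the full operation set the post mentions:

```python
import random

def gen_sequence(length=8, max_start=100):
    """Pick a rule at random and generate one training sequence."""
    kind = random.choice(["add", "mul", "repeat"])
    a = random.randint(1, max_start)
    if kind == "add":                     # a, a+d, a+2d, ...
        d = random.randint(1, 10)
        return [a + i * d for i in range(length)]
    if kind == "mul":                     # a, a*r, a*r^2, ...
        r = random.randint(2, 3)
        return [a * r**i for i in range(length)]
    b = random.randint(1, max_start)      # repeating pair a, b, a, b, ...
    return [(a, b)[i % 2] for i in range(length)]

random.seed(0)
for _ in range(3):
    print(gen_sequence())
```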
After 10,000 epochs of 500 sequences each, the network converges and is reasonably able to predict the next number in a sequence. On my MacBook Pro with an Nvidia GT750M, the network, implemented in TensorFlow, took about an hour to train.
Results
Taking a look at some sample sequences, we can see that the network is reasonably able to predict the next number.
Seq [43424, 43425, 43426, 43427] Predicted [43423, 43426, 43427, 43428]
Seq [3, 4, 3, 4, 3, 4, 3, 4, 3, 4] Predicted [9, 5, 4, 3, 4, 3, 4, 3, 4, 3]
Seq [2, 4, 8, 16, 32, 64, 128] Predicted [4, 8, 16, 32, 64, 128, 256]
Seq [10, 9, 8, 7, 6, 5, 4, 3] Predicted [20, 10, 10, 60, 4, 4, 3, 2]
With the trained model, we can compute embeddings of individual numbers and visualize them with the t-SNE algorithm.
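The visualization step can be sketched as follows. To keep this runnable without the trained model or extra dependencies, a fixed random projection stands in for the encoder and a plain PCA projection stands in for t-SNE; swap in the trained encoder's \(\vec{u}_i\) and scikit-learn's `TSNE` to reproduce the actual plots:

```python
import numpy as np

rng = np.random.default_rng(0)
proj = rng.normal(size=(11, 32))          # 11 = digits 0-9 plus <eos>

def embed_number(n):
    # Stand-in encoder: average of projected digit vectors
    digits = [int(c) for c in str(n)] + [10]
    return proj[digits].mean(axis=0)

vecs = np.stack([embed_number(n) for n in range(1, 101)])
centered = vecs - vecs.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
xy = centered @ vt[:2].T                  # 2-D PCA coordinates, one row per number
print(xy.shape)
```

Coloring `xy` by `n % 10` is what produces the last-digit clusters described below.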
We can see an interesting pattern when we plot the first 100 numbers (color coded by last digit). Another interesting pattern is that, within clusters, the numbers rotate clockwise or counterclockwise.
We can also trace the path of the embeddings sequentially; we can see that there is some structure to the positioning of the numbers.
If we look at the visualizations of the embeddings for numbers 1-1000, we can see that the clusters still exist for the last digit (each color corresponds to numbers with the same last digit).
We can also see the same structural lines for the sequential path for numbers 1 to 1000:
The inner linear pattern is formed from the numbers 1-99 and the outer linear pattern is formed from the numbers 100-1000.
We can also look at the embeddings of each sequence by taking the vector \(\vec{s}^n_k\) after feeding in k=8 numbers of a sequence into the model. We can visualize the sequence embeddings with t-SNE using 300 sequences:
From the visualization, we can see that similar sequences are clustered together. For example, repeating patterns, quadratic sequences, linear sequences and large number sequences are grouped together. We can see that the network is able to extract some high level structure for different types of sequences.
Using this, if we encounter a sequence we can’t determine a pattern for, we can find the nearest sequence embedding to approximate the pattern type.
Code: Github
The model is written in Python using TensorFlow 1.1. The code is not very well written because I was forced to use an outdated version of TF, with underdeveloped RNN support, for OS X GPU compatibility reasons.
The code is a proof of concept and is the result of stitching together outdated tutorials.
Further improvements:
- bidirectional RNN
- stack more layers
- attention mechanism
- beam search
- negative numbers
- teacher forcing
|
The abelianization of a group $G$ is an abelian group $A$ together with a homomorphism $\varphi: G \to A$ such that if $B$ is any abelian group and $\phi: G \to B$ is any homomorphism, there is a unique homomorphism $\psi: A \to B$ (which might depend on $\phi$) such that $\psi\varphi = \phi$.
Now, I am reading some lecture notes, and the following is asserted.
If $G = \langle e_1, e_2, \ldots, e_n \mid w_1, w_2, \ldots, w_m\rangle$ is a finitely presented group, then$$A = \langle e_1, e_2, \ldots, e_n \mid w_1, \ldots, w_m, [e_1, e_2], \ldots, [e_i, e_j], \ldots, [e_{n - 1}, e_n]\rangle$$is a presentation of the abelianization of $G$, where the homomorphism $\varphi: G\to A$ sends the equivalence class of $w$ in $G$ to the equivalence class of $w$ in $A$ for each word $w \in G$.
To me, this is not a priori clear at all. Could anybody tell me why this is true?
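For a concrete instance of the assertion (my own example, not from the notes): take the Klein bottle group $G = \langle a, b \mid abab^{-1} \rangle$. Once the commutator $[a,b]$ is imposed, the relator $abab^{-1}$ can be rearranged to $a^2$, so
$$A = \langle a, b \mid abab^{-1},\ [a,b] \rangle \cong \langle a, b \mid a^2,\ [a,b] \rangle \cong \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z},$$
which is indeed $G/[G,G]$ for this $G$.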
|
Our new book (NAT)
Nonabelian algebraic topology: filtered spaces, crossed complexes, cubical homotopy groupoids, EMS Tracts in Mathematics vol 15
uses mainly cubical, rather than simplicial, sets. The reasons are explained in the Introduction: in strict cubical higher categories we can easily express
algebraic inverse to subdivision,
a simple intuition which I have found difficult to express in simplicial terms. Thus cubes are useful for local-to-global problems. This intuition is crucial for our Higher Homotopy Seifert-van Kampen Theorem, which enables new calculations of some homotopy types, and suggests a new foundation for algebraic topology at the border between homotopy and homology.
A further reason for the connections is that they enabled an equivalence between crossed modules and certain double groupoids, and later, crossed complexes and strict cubical $\omega$-groupoids.
Also cubes have a nice tensor product and this is
crucial in the book for obtaining some homotopy classification results. See Chapter 15.
I have found that with cubes I have been able to conjecture and in the end prove theorems which have enabled new nonabelian calculations in homotopy theory, e.g. of second relative homotopy groups. So I have been happy to use cubes until someone comes up with something better. ($n$-simplicial methods, in conjunction with cubical ideas, turned out, however, to be necessary for proofs in the work with J.-L. Loday.)
See also some beamer presentations available on my preprint page.
Here is a further emphasis on the above point on algebraic structures: consider the following diagram:
From left to right pictures subdivision; from right to left pictures composition. The composition idea is well formulated in terms of double categories, and that idea is easily generalised to $n$-fold categories, and is expressed well in a cubical context. In that context one can conjecture, and eventually prove, higher dimensional Seifert-van Kampen Theorems, which allow new calculations in algebraic topology. Such multiple compositions are difficult to handle in globular or simplicial terms.
The further advantage of cubes, as mentioned in above answers, is that the formula $$I^m \times I^n \cong I^{m+n}$$ makes cubes very helpful in considering monoidal and monoidal closed structures. Most of the major results of the EMS book required cubical methods for their conjecture and proof. The main results of Chapter 15 of NAT have not been done simplicially. See for example Theorem 15.6.1, on a convenient dense subcategory closed under tensor product.
Sept 5, 2015: The paper by Vezzani, arXiv:1405.4548, shows a use of cubical, rather than simplicial, methods in motivic theory; while the paper by I. Patchkoria, arXiv:1011.4870, Homology Homotopy Appl., Volume 14, Number 1 (2012), 133-158, gives a "Comparison of Cubical and Simplicial Derived Functors".
In all these cases the use of
connections in cubical methods is crucial. There is more discussion of this on MathOverflow. For us, connections arose in order to define commutative cubes in higher cubical categories: compare this paper.
See also this 2014 presentation The intuition for cubical methods in algebraic topology.
April 13, 2016. I should add some further information from Alberto Vezzani:
The cubical theory was better suited than the simplicial theory when dealing with (motives of) perfectoid spaces in characteristic 0. For example: degeneracy maps of the simplicial complex $\Delta$ in algebraic geometry are defined by sending one coordinate $x_i$ to the sum of two coordinates $y_j+y_{j+1}$. When one considers the perfectoid algebras obtained by taking all $p$-th roots of the coordinates, such maps are no longer defined, as $y_j+y_{j+1}$ doesn't have $p$-th roots in general. The cubical complex, on the contrary, is easily generalized to the perfectoid world.
November 29, 2016 There is more information in this paper on Modelling and Computing Homotopy Types: I which can serve as an introduction to the NAT book.
|
The Annals of Probability Ann. Probab. Volume 17, Number 1 (1989), 239-256. Hungarian Constructions from the Nonasymptotic Viewpoint Abstract
Let $x_1, \ldots, x_n$ be independent random variables with uniform distribution over $\lbrack 0, 1\rbrack$, defined on a rich enough probability space $\Omega$. Denoting by $\hat{\mathbb{F}}_n$ the empirical distribution function associated with these observations and by $\alpha_n$ the empirical Brownian bridge $\alpha_n(t) = \sqrt n(\hat{\mathbb{F}}_n(t) - t)$, Komlos, Major and Tusnady (KMT) showed in 1975 that a Brownian bridge $\mathbb{B}^0$ (depending on $n$) may be constructed on $\Omega$ in such a way that the uniform deviation $\|\alpha_n - \mathbb{B}^0\|_\infty$ between $\alpha_n$ and $\mathbb{B}^0$ is of order of $\log(n)/\sqrt n$ in probability. In this paper, we prove that a Poisson bridge $\mathbb{L}^0_n$ may be constructed on $\Omega$ (note that this construction is not the usual one) in such a way that the uniform deviations between any two of the three processes $\alpha_n, \mathbb{L}^0_n$ and $\mathbb{B}^0$ are of order of $\log(n)/\sqrt n$ in probability. Moreover, we give explicit exponential bounds for the error terms, intended for asymptotic as well as nonasymptotic use.
Article information Source Ann. Probab., Volume 17, Number 1 (1989), 239-256. Dates First available in Project Euclid: 19 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aop/1176991506 Digital Object Identifier doi:10.1214/aop/1176991506 Mathematical Reviews number (MathSciNet) MR972783 Zentralblatt MATH identifier 0667.60042 JSTOR links.jstor.org Citation
Bretagnolle, J.; Massart, P. Hungarian Constructions from the Nonasymptotic Viewpoint. Ann. Probab. 17 (1989), no. 1, 239--256. doi:10.1214/aop/1176991506. https://projecteuclid.org/euclid.aop/1176991506
|
I can't seem to find a good way to solve this.
I tried using L'Hopitals, but the derivative of $\log(n!)$ is really ugly. I know that the answer is 1, but I do not know why the answer is one.
Any simple way to go about this?
The numerator is
$$ \log(n!) = \log 1 + \log 2 + \log 3 + \cdots + \log n $$
The terms have an obvious upper bound: $\log n$. Thus,
$$ \log(n!) \leq \log n + \log n + \log n + \cdots + \log n = n \log n $$
Thus, $\log(n!) / (n \log n) \leq 1$, always.
Half of the terms have an obvious lower bound: $\log (n/2)$.
$$ \log(n!) \geq (n/2) \log(n/2) $$
Thus,
$$\lim \frac{\log n!}{n \log n} \geq \lim \frac{(n/2) \log(n/2)}{n \log n} = \frac{1}{2} $$
But we also know that three quarters of the terms have the lower bound $\log(n/4)$, so
$$\lim \frac{\log n!}{n \log n} \geq \lim \frac{(3n/4) \log(n/ 4)}{n \log n} = \frac{3}{4} $$
And so forth: we can show that the limit is bigger than every number less than 1.
And so we apply the ancient principle of exhaustion! If the limit is bigger than every number less than 1, then the limit can't be smaller than 1. But we know the limit can't be bigger than 1 either. Therefore, it must be 1!
First, use that $n^n > n!$ for all $n > 1$, thus $n \log(n) > \log(n!)$ and so $1 > \dfrac{\log(n!)}{n \log(n)}$. Now, from a basic theorem of Stirling's approximation, we have $n \log(n) - n < \log(n!)$, so we have $1 - \dfrac{1}{\log(n)} < \dfrac{\log(n!)}{n \log(n)}$. Combining these, we have $1 - \dfrac{1}{\log(n)} < \dfrac{\log(n!)}{n \log(n)} < 1$. It is easy to see that $\lim_{n \rightarrow \infty} 1 - \dfrac{1}{\log(n)} = 1$ and trivial that $\lim_{n \rightarrow \infty}1 = 1$, so by the squeeze theorem, $\lim_{n \rightarrow \infty} \dfrac{\log(n!)}{n \log(n)} = 1$.
To prove that $n^n > n!$, it suffices to compare the terms of their product expansions (i.e. $n^n = n \cdot n \cdot n \cdots n$ ($n$ times) and $n! = 1 \cdot 2 \cdot 3 \cdots n$.).
From the Taylor series of $e^x$, we have $$e^x = 1 + \sum_{k=1}^{\infty} \dfrac{x^n}{n!}$$ From this we get that, $e^x \geq \dfrac{x^n}{n!}$, for $x \in \mathbb{R}^+$ and $n \in \mathbb{Z}^+$.
Setting $x=n$ we get that $$e^n \geq \dfrac{n^n}{n!} \implies n! \geq \left(\dfrac{n}e \right)^n$$ Hence, we have $$\log(n!) \geq n \log n - n$$ Also, note that $$\log(n!) = \sum_{k=1}^n \log(k) \leq \sum_{k=1}^n \log(n) = n \log(n)$$ We hence have $$n \log(n) - n \leq \log(n!) \leq n \log(n)$$ Now you should be able to finish it off from here.
By Stolz–Cesàro,
$$ \lim_n \frac{\log(n!)}{n\log(n)} = \lim_n \frac{\log(n+1)}{(n+1)\log(n+1)-n\log(n)} = \lim_n \frac{\log(n+1)}{\log(n+1)+n\log(\frac{n+1}{n})}\\= \lim_n \frac{\log(n+1)}{\log(n+1)+\log(\frac{n+1}{n})^n}= \lim_n \frac{1}{1+\frac{\log(\frac{n+1}{n})^n}{\log(n+1)}}=1 $$ the last equality following from $\lim_n \left(\frac{n+1}{n}\right)^n=e$, so that $\log\left(\frac{n+1}{n}\right)^n\to\log e=1$ while $\log(n+1)\to\infty$.
There are $n!$ ways of showing that $\frac{\ln(n!)}{n \ln n} \to 1$; here is one of them.
We start with $\ln(n!) = \sum_{k=1}^n \ln k$ and estimate $\ln k$.
$(x+1)\ln(x+1)-x \ln x = x(\ln(x+1)-\ln(x))+\ln(x+1) =x\ln(1+1/x)+\ln(x+1) $ so $ (x+1)\ln(x+1)-x \ln x -\ln(x+1)=x\ln(1+1/x)$.
Using $0 < \ln(1+z) < z$ for $0 < z < 1$, $0 < (x+1)\ln(x+1)-x \ln x -\ln(x+1) < 1$. This is just an approximate form of $\int \ln x\,dx = x \ln x - x$ or $\ln x = (x \ln x)' - 1$.
Summing for $x$ from 1 to $n-1$, $0 < \sum_{x=1}^{n-1} \big((x+1)\ln(x+1)-x \ln x -\ln(x+1)\big) < n-1 $ or, since the left part of the sum is telescoping and the right part gives $\ln(n!)$, $0 < n \ln n -\ln(n!) < n-1 < n$.
Dividing by $n \ln n$, $0 < 1-\frac{\ln(n!)}{n \ln n} < \frac{1}{\ln n}$, and this gives it to us.
Based on the basic properties of logarithms and a simple integral approximation, we can rewrite $\log(n!)$ as follows:
\begin{eqnarray} \log(n!) = \log(1 \times 2 \times 3 \times \dots \times n) = \log(1) + \log(2) + \log(3) + \cdots+ \log(n) = \end{eqnarray} \begin{eqnarray} \sum_{i=1}^{n} \log(i) \approx \int_1^n \log(x)\,\mathrm{d}x = [x\log(x) -x]_{1}^{n} = n\log(n)-n+1 \approx n\log(n) - n \end{eqnarray}
Thus,
\begin{eqnarray} \lim_{n \to \infty} \frac{\log(n!)}{n\log(n)} \approx \lim_{n \to \infty} \frac{n\log(n) - n}{n\log(n)} = \lim_{n \to \infty} \left(1 - \frac{1}{\log(n)}\right) =1. \end{eqnarray}
Let us show your limit is $1$ in an elementary way without calculus. WLOG we replace $n$ by $2^n$.
$$\sum_{1\le k \le 2^n} \ln k >\sum_{1\le k\le n-1} 2^{k}\ln 2^{k}$$
Now we shall prove $$\frac{\sum_{1\le k\le n-1} k2^k}{n 2^n} \to 1.$$ Replace $k$ by $n-k$, $$\frac{\sum_{1\le k\le n-1} (n-k)2^{-k}}{n}\to 1,$$ or $$\sum_{1\le k\le n-1} \frac{k}{n}2^{-k}\to 0, $$ the rest is yours (using $k2^{-k}\to 0$).
A completely elementary way:
By the mean value theorem, we have that
$$\frac{1}{j} \le \log j - \log (j-1) \le \frac{1}{j-1}$$
Setting $j=2$ to $k$ and adding up yields
$$H_{k} - 1 \le \log k \le H_{k-1} \quad \quad (1)$$
where $H_{k}$ is the $k^{\text{th}}$ harmonic number.
Note that this shows that $\frac{H_n}{\log n} \to 1$ as $n \to \infty$.
We can easily show that (induction or otherwise)
$$ S_n = \sum_{k=1}^{n} H_k = (n+1)H_n - n \quad \quad (2)$$
Since $\frac{H_n}{\log n} \to 1$, we have that $\frac{S_n}{n \log n} \to 1$.
Setting $k=2$ to $n$ in $(1)$, adding up and using $(2)$ gives us
$$ S_n - n \le \log n! \le S_{n-1}$$
Now divide by $n \log n$, and use the result that $\frac{S_n}{n\log n} \to 1$.
We don't need Stirling. For a function such as the logarithm, with $f(x) > 0$ and $f'(x) > 0,$ we get $$ \int_{a-1}^b \; f(x) dx < \sum_{k=a}^b \; f(k) < \int_{a}^{b+1} \; f(x) dx $$
Here $f$ is log base e, take $a=2$ and $b=n$ $$ \int_{1}^n \; \log x \; dx < \sum_{k=2}^n \; \log k < \int_{2}^{n+1} \; \log x \; dx $$
An antiderivative of $\log x$ is $x \log x - x.$
$$ n \log n - n + 1 < \log n! < (n+1) \log (n+1) - n - 1 - 2 \log 2 + 2 $$
|
Exercise \(\PageIndex{1}\)
For the following functions, use the definition of the derivative to find \(f′(x)\). You may use derivative rules (which will be learned in the next section) to check if your answer is correct.
1) \(f(x)=6\)
2) \(f(x)=2−3x\)
3) \(f(x)=\frac{2x}{7}+1\)
4) \(f(x)=4x^2\)
5) \(f(x)=5x−x^2\)
6) \(f(x)=\sqrt{2x}\)
7) \(f(x)=\sqrt{x−6}\)
8) \(f(x)=\frac{9}{x}\)
9) \(f(x)=x+\frac{1}{x}\)
10) \(f(x)=\frac{1}{\sqrt{x}}\)
Answer
Under Construction
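In the meantime, answers to a couple of these exercises can be checked numerically against the limit definition with a small \(h\) (my own check, not part of the exercise set):

```python
# Numeric difference quotient approximating f'(x) = lim_{h->0} (f(x+h) - f(x))/h
def diff_quotient(f, x, h=1e-6):
    return (f(x + h) - f(x)) / h

# 4) f(x) = 4x^2, so f'(x) should be 8x; check at x = 3
assert abs(diff_quotient(lambda x: 4 * x**2, 3.0) - 24.0) < 1e-4

# 5) f(x) = 5x - x^2, so f'(x) should be 5 - 2x; check at x = 1
assert abs(diff_quotient(lambda x: 5 * x - x**2, 1.0) - 3.0) < 1e-4
print("both answers check out")
```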
Exercise \(\PageIndex{2}\)
For the following exercises, use the graph of \(y=f(x)\) to sketch the graph of its derivative \(f′(x)\).
1)
2)
3)
4)
Answers to even numbered questions
2.
4.
Exercise \(\PageIndex{3}\)
For the following exercises, the given limit represents the derivative of a function \(y=f(x)\) at \(x=a\). Find \(f(x)\) and \(a\).
1) \(lim_{h→0}\frac{(1+h)^{2/3}−1}{h}\)
2) \(lim_{h→0}\frac{[3(2+h)^2+2]−14}{h}\)
3) \(lim_{h→0}\frac{cos(π+h)+1}{h}\)
4) \(lim_{h→0}\frac{(2+h)^4−16}{h}\)
5) \(lim_{h→0}\frac{[2(3+h)^2−(3+h)]−15}{h}\)
6) \(lim_{h→0}\frac{e^h−1}{h}\)
Answers to even numbered questions
2. \(f(x)=3x^2+2, a=2\)
4. \(f(x)=x^4, a=2\)
6. \(f(x)=e^x, a=0\)
Exercise \(\PageIndex{4}\)
For the following functions,
a. sketch the graph and
b. use the definition of a derivative to show that the function is not differentiable at \(x=1\)
1) \(f(x)=\begin{cases}2\sqrt{x} & 0≤x≤1\\3x−1 & x>1\end{cases}\)
2) \(f(x)=\begin{cases}3 & x<1\\3x & x≥1\end{cases}\)
3) \(f(x)=\begin{cases}−x^2+2 & x≤1\\x & x>1\end{cases}\)
4) \(f(x)=\begin{cases}2x, & x≤1\\\frac{2}{x} & x>1\end{cases}\)
Answers to even numbered questions
2a.
b. \(lim_{h→0^−}\frac{3−3}{h}≠lim_{h→0^+}\frac{3h}{h}\)
4a.
b. \(lim_{h→0^−}\frac{2h}{h}≠lim_{h→0^+}\frac{\frac{2}{1+h}−2}{h}.\)
Exercise \(\PageIndex{5}\)
For the following graphs,
a. determine for which values of \(x=a\) the \(lim_{x→a}f(x)\) exists but \(f\) is not continuous at \(x=a\), and
b. determine for which values of \(x=a\) the function is continuous but not differentiable at \(x=a\).
1)
2)
Answer
2. \(a. x=1, b. x=2\)
Exercise \(\PageIndex{6}\)
Use the graph to evaluate \(a. f′(−0.5), b. f′(0), c. f′(1), d. f′(2),\) and e. \(f′(3),\) if it exists.
Answer
Under Construction
Exercise \(\PageIndex{7}\)
For the following exercises, describe what the two expressions represent in terms of each of the given situations. Be sure to include units.
a. \(\frac{f(x+h)−f(x)}{h}\)
b. \(f′(x)=lim_{h→0}\frac{f(x+h)−f(x)}{h}\)
1) \(P(x)\) denotes the population of a city at time \(x\) in years.
2) \(C(x)\) denotes the total amount of money (in thousands of dollars) spent on concessions by \(x\) customers at an amusement park.
3) \(R(x)\) denotes the total cost (in thousands of dollars) of manufacturing \(x\) clock radios
4) \(g(x)\) denotes the grade (in percentage points) received on a test, given \(x\) hours of studying.
5) \(B(x) \)denotes the cost (in dollars) of a sociology textbook at university bookstores in the United States in \(x\) years since \(1990\).
6) \(p(x)\) denotes atmospheric pressure at an altitude of \(x\) feet.
Answers to even numbered questions
2a. Average rate at which customers spent on concessions in thousands per customer.
b. Rate (in thousands per customer) at which \(x\) customers spent money on concessions in thousands per customer.
4a. Average grade received on the test with an average study time between two values.
b. Rate (in percentage points per hour) at which the grade on the test increased or decreased for a given average study time of \(x\) hours.
6a. Average change of atmospheric pressure between two different altitudes.
b. Rate (torr per foot) at which atmospheric pressure is increasing or decreasing at \(x\) feet.
Exercise \(\PageIndex{8}\)
Sketch the graph of a function \(y=f(x)\) with all of the following properties:
a. \(f′(x)>0\) for \(−2≤x<1\)
b. \(f′(2)=0\)
c. \(f′(x)>0\) for \(x>2\)
d. \(f(2)=2\) and \(f(0)=1\)
e. \(lim_{x→−∞}f(x)=0\) and \(lim_{x→∞}f(x)=∞\)
f. \(f′(1)\) does not exist.
Answer
Under Construction
Exercise \(\PageIndex{9}\)
Suppose temperature T in degrees Fahrenheit at a height \(x\) in feet above the ground is given by \(y=T(x).\)
a. Give a physical interpretation, with units, of \(T′(x).\)
b. If we know that \(T′(1000)=−0.1,\) explain the physical meaning.
Answer
a. The rate (in degrees per foot) at which temperature is increasing or decreasing for a given height \(x.\)
b. The rate of change of temperature as altitude changes at \(1000\) feet is \(−0.1\) degrees per foot.
Exercise \(\PageIndex{10}\)
Suppose the total profit of a company is \(y=P(x)\) thousand dollars when \(x\) units of an item are sold.
a. What does \(\frac{P(b)−P(a)}{b−a}\) for \(0<a<b\) measure, and what are the units?
b. What does \(P′(x)\) measure, and what are the units?
c. Suppose that \(P′(30)=5\). What is the approximate change in profit if the number of items sold increases from \(30\) to \(31\)?
Answer
Under Construction
Exercise \(\PageIndex{11}\)
The graph in the following figure models the number of people \(N(t)\) who have come down with the flu \(t\) weeks after its initial outbreak in a town with a population of \(50,000\) citizens.
a. Describe what \(N′(t)\) represents and how it behaves as \(t\) increases.
b. What does the derivative tell us about how this town is affected by the flu outbreak?
Answer
a. The rate at which the number of people who have come down with the flu is changing \(t\) weeks after the initial outbreak.
b. The rate is increasing sharply up to the third week, at which point it slows down and then becomes constant.
Exercise \(\PageIndex{12}\)
For the following exercises, use the following table, which shows the height \(h\) of the Saturn \(V\) rocket for the Apollo \(11\) mission \(t\) seconds after launch.
Time (seconds): 0, 1, 2, 3, 4, 5
Height (meters): 0, 2, 4, 13, 25, 32
1) What is the physical meaning of \(h′(t)\)? What are the units?
2) Construct a table of values for \(h′(t)\) and graph both \(h(t)\) and \(h′(t)\) on the same graph. (Hint: for interior points, estimate both the left limit and right limit and average them.)
3) The best linear fit to the data is given by \(H(t)=7.229t−4.905\), where \(H\) is the height of the rocket (in meters) and \(t\) is the time elapsed since takeoff. From this equation, determine \(H′(t)\). Graph \(H(t)\) with the given data and, on a separate coordinate plane, graph \(H′(t).\)
4) The best quadratic fit to the data is given by \(G(t)=1.429t^2+0.0857t−0.1429,\) where \(G\) is the height of the rocket (in meters) and \(t\) is the time elapsed since takeoff. From this equation, determine \(G′(t)\). Graph \(G(t)\) with the given data and, on a separate coordinate plane, graph \(G′(t).\)
5) The best cubic fit to the data is given by \(F(t)=0.2037t^3+2.956t^2-2.705t+0.4683\), where \(F\) is the height of the rocket (in meters) and \(t\) is the time elapsed since takeoff. From this equation, determine \(F′(t)\). Graph \(F(t)\) with the given data and, on a separate coordinate plane, graph \(F′(t)\). Does the linear, quadratic, or cubic function fit the data best?
6) Using the best linear, quadratic, and cubic fits to the data, determine what \(H''(t)\), \(G''(t)\), and \(F''(t)\) are. What are the physical meanings of \(H''(t)\), \(G''(t)\), and \(F''(t)\), and what are their units?
Answers to even numbered questions
2.
Time (seconds): 0, 1, 2, 3, 4, 5
\(h′(t)\) (m/s): 2, 2, 5.5, 10.5, 9.5, 7
4. \(G′(t)=2.858t+0.0857\)
6. \(H''(t)=0\), \(G''(t)=2.858\), and \(F''(t)=1.222t+5.912\) represent the acceleration of the rocket, with units of meters per second squared \((m/s^2)\).
Contributors
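For anyone checking question 2 numerically: `numpy.gradient` implements exactly the averaging rule from the hint (one-sided differences at the endpoints, the average of left and right difference quotients at interior points), and it reproduces the answer table. A quick sketch:

```python
import numpy as np

# Height data from the table: t = 0..5 s, h in meters.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
h = np.array([0.0, 2.0, 4.0, 13.0, 25.0, 32.0])

# One-sided differences at the endpoints, central differences inside --
# exactly the hint "estimate both the left limit and right limit and
# average them".
dh = np.gradient(h, t)
print(dh)  # -> 2, 2, 5.5, 10.5, 9.5, 7
```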
Gilbert Strang (MIT) and Edwin “Jed” Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC BY-NC-SA 4.0 license. Download for free at http://cnx.org.
|
This question already has an answer here:
Mathematica gives: $$S= -\frac{1}{12}\pi^2\log(2)+\frac{\log(2)^3}{6}+ \frac{7}{8}\zeta(3)$$ How can I prove it?
Hint:
Let $$f(x):=\sum_{n=1}^\infty\frac{x^n}{n^3}.$$
Then
$$f'(x):=\sum_{n=1}^\infty\frac{x^{n-1}}{n^2},$$
$$(xf'(x))'=\sum_{n=1}^\infty\frac{x^{n-1}}{n},$$
$$(x(xf'(x))')'=\sum_{n=1}^\infty x^{n-1}=\frac1{1-x}.$$
Now backward, integrating from $0$ to $x$,
$$(xf'(x))'=-\frac{\log(1-x)}x,$$
$$f'(x)=-\frac1x\int_0^x\frac{\log(1-t)}tdt,$$
$$f(x)=-\int_0^x\left[\frac1u\int_0^u\frac{\log(1-t)}tdt\right]du.$$
$$\sum_{n=1}^{\infty}\frac{x^n}{n^3}=\text{Li}_3(x),$$ the polylogarithm function. Have a look here for special values; in your case, $x=\frac 12$, so $S=\text{Li}_3\left(\frac 12\right)$.
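A quick numerical sanity check of the closed form (Apéry's constant $\zeta(3)$ is simply hardcoded below):

```python
from math import log, pi

# Direct evaluation of S = sum_{n>=1} 1/(n^3 * 2^n) = Li_3(1/2);
# the terms shrink like 2^-n, so 60 terms exceed double precision.
S_direct = sum(1.0 / (n**3 * 2**n) for n in range(1, 61))

# The claimed closed form, with zeta(3) hardcoded.
ZETA3 = 1.2020569031595942854
S_closed = -pi**2 * log(2) / 12 + log(2)**3 / 6 + 7 * ZETA3 / 8

print(S_direct, S_closed)  # both ~0.5372131936
```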
|
I have to find: $$\lim_{x\to0}{\frac{\ln(1+e^x)-\ln2}{x}}$$ and I want to calculate it without using L'Hospital's rule. With L'Hospital's I know that it gives $1/2$. Any ideas?
Simply differentiate $f(x)=\ln(e^x +1)$ at the point of abscissa $x=0$ and you'll get the answer. In fact, this limit is exactly the definition of the derivative of $f$ at $0$!
I thought it might be instructive to present an approach that does not rely on differential calculus, but rather uses the squeeze theorem and a set of inequalities that can be obtained with pre-calculus tools only. To that end we proceed.
First note that in THIS ANSWER, I showed using only the limit definition of the exponential function along with Bernoulli's Inequality that the logarithm and exponential functions satisfy the inequalities
$$\frac{x-1}{x}\le \log(x)\le x-1\tag 1$$
and for $x<1$
$$1+x\le e^x\le \frac{1}{1-x}\tag2$$
Next, note that $\log(1+e^x)-\log(2)=\log\left(\frac{e^x+1}2\right)$. Hence, applying $(1)$, we can assert that
$$\frac{e^x -1}{e^x +1}\le \log(1+e^x)-\log(2)\le \frac{e^x-1}2\tag3$$
Then, applying $(2)$ to $(3)$ reveals
$$\frac{x}{e^x +1}\le \log(1+e^x)-\log(2)\le \frac{x}{2(1-x)}\tag4$$
Dividing $(4)$ by $x$, letting $x\to 0$, and applying the squeeze theorem yields the coveted limit
$$\bbox[5px,border:2px solid #C0A000]{\lim_{x\to 0}\frac{\log(1+e^x)-\log(2)}{x}=\frac12}$$
Tools Used: The inequalities in $(1)$ and $(2)$ and the squeeze theorem.
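For the skeptical, a quick numerical check of the bounds in $(4)$ and of the limit itself (a sanity check, not a proof):

```python
from math import exp, log

f = lambda x: log(1 + exp(x)) - log(2)

# The squeeze bounds (4): x/(e^x + 1) <= f(x) <= x/(2(1 - x)) for small x > 0.
for x in [0.1, 0.01, 0.001]:
    lower = x / (exp(x) + 1)
    upper = x / (2 * (1 - x))
    assert lower <= f(x) <= upper
    print(x, f(x) / x)  # the ratio tends to 0.5
```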
Let $y = e^x$
$$L = \lim_{x\to0}{\frac{\ln(1+e^x)-\ln2}{x}} = \lim_{y\to1}{\frac{\ln(1+y)-\ln2}{\ln y}}$$
Let $z = y -1$
$$L = \lim_{z\to0}{\frac{\ln(z + 2)-\ln2}{\ln(z+1)}} = \dfrac12\lim_{z\to0}\frac{\ln(z/2 + 1)}{z/2}\dfrac{z}{\ln(z+1)}= \dfrac12$$
This is just the derivative of $\ln\left(\frac{e^x+1}2\right)$ at $0$, which is indeed $\frac12$.
That's just the derivative of $\ln\left(e^x+1\right)$ at $0$.
If you don't see it, by Taylor's series:
$${\frac{\ln(1+e^x)-\ln2}{x}}={\frac{\ln(2+x+o(x))-\ln2}{x}}=\frac{\ln(1+\frac{x}{2}+o(x))}{x}=\frac{\frac{x}{2}+o(x)}{x}=\frac12+o(1)\to\frac12$$
Solution by
standard limits:
set $$1+e^x=2+y \quad y\to 0\implies e^x=1+y \implies x= \ln (1+y)$$
thus
$${\frac{\ln(1+e^x)-\ln2}{x}}=\frac{\ln (2+y)-\ln 2}{\ln(1+y)}=\frac{\ln (1+\frac{y}{2})}{\ln(1+y)}=\frac{\ln (1+\frac{y}{2})}{\frac y2}\frac{y}{\ln (1+y)}\frac12\to 1\cdot 1\cdot \frac12=\frac12$$
There's an amusing way to prove that, if the limit exists, it must be $1/2$. (Obviously not a complete answer, since it doesn't prove the limit exists.)
Letting $f(x)=\log(1+e^x)$ then we have $f(x)=\log(1+e^{x})=\log(e^x)+\log(1+e^{-x})=x+f(-x)$ and thus $f(x)-f(-x)=x.$
So if $L=\lim_{x\to 0}\frac{f(x)-f(0)}{x}$ then $$1=\lim_{x\to 0} \frac{f(x)-f(-x)}{x}=\lim_{x\to 0}\left[\frac{f(x)-f(0)}{x}+\frac{f(-x)-f(0)}{-x}\right]=2L$$
And thus we get $L=\frac{1}{2}$.
A more complete answer uses that we know that:
$$\lim_{y\to 0}\frac{\log(1+y)}{y}=1.$$
Then we can write:
$$\frac{\log(1+e^x)-\log2}{x}=\frac{\log\left(1+\frac{e^x-1}{2}\right)}{\frac{e^x-1}{2}}\cdot \frac{\frac{e^x-1}{2}}{x}$$
Letting $y=\frac{e^x-1}{2}$, we get that $y\to 0$ as $x\to 0$ and $x=\log(1+2y)$ so we get:
$$\lim_{x\to 0}\frac{\log(1+e^x)-\log2}{x}=\lim_{y\to 0}\frac{\log(1+y)}{y}\cdot \frac{2y}{\log(2y+1)}\cdot \frac{1}{2}=1\cdot 1\cdot\frac{1}{2}$$
This is really just a long way of proving the chain rule for the derivative of $\log(e^x+1).$
[Complete answer updated to use the neat trick of gimusi at the end, rather than using $\frac{e^x-1}{x}\to 1$ as well.]
As pointed out by one of the replies on the question, you can notice that $$ \lim_{x\to 0} \frac {\ln(1+e^x)-\ln 2}{x} $$ is written in the form of the theoretical definition of the derivative at a point $x$: $$ \lim_{h\to 0} \frac {f(x+h) - f(x)}{h} $$ Here $x = 0$ and $f(x) = \ln(1+e^x)$. Differentiating this function gives $$ f'(x) = \frac {e^x}{1+e^x} $$ Then you plug in $x=0$ to get the answer $$ f'(0) = \frac{e^0}{1+e^0} = \frac{1}{2} $$
|
This problem reminds me of tension field theory and related problems in studying the shape of inflated inextensible membranes (like helium balloons). What follows is far from a solution, but some initial thoughts about the problem.
First, since you're allowing creasing and folding, by Nash-Kuiper it's enough to consider short immersions $$\phi:P\subset\mathbb{R}^2\to\mathbb{R}^3,\qquad \|d\phi^Td\phi\|_2 \leq 1$$of the piece of paper $P$ into $\mathbb{R}^3$, the intuition being that you can always "hide" area by adding wrinkling/corrugation, but cannot "create" area. It follows that we can assume, without loss of generality, that $\phi$ sends the paper boundary $\partial P$ to a curve $\gamma$ in the plane.
We can thus partition your problem into two pieces: (I) given a fixed curve $\gamma$, what is the volume of the volume-maximizing surface $M_{\gamma}$ with $\phi(\partial P) = \gamma$? (II) Can we characterize $\gamma$ for which $M_{\gamma}$ has maximum volume?
Let's consider the case where $\gamma$ is given. We can partition $M_{\gamma}$ into
1) regions of pure tension, where $d\phi^Td\phi = I$; in these regions $M_{\gamma}$ is, by definition, developable;
2) regions where one direction is in tension and one in compression, $\|d\phi^Td\phi\|_2 = 1$ but $\det d\phi^Td\phi < 1$.
We need not consider $\|d\phi^Td\phi\|_2 < 1$ as in such regions of pure compression, one could increase the volume while keeping $\phi$ a short map.
Let us look at the regions of type (2). We can trace on these regions a family of curves $\tau$ along which $\phi$ is an isometry. Since $M_{\gamma}$ maximizes volume, we can imagine the situation physically as follows: pressure inside $M_{\gamma}$ pushes against the surface, and is exactly balanced by stress along inextensible fibers $\tau$. In other words, for some stress $\sigma$ constant along each $\tau$, at all points $\tau(s)$ along $\tau$ we have$$\hat{n} = \sigma \tau''(s)$$where $\hat{n}$ the surface normal; it follows that (1) the $\tau$ follow geodesics on $M_{\gamma}$, (2) each $\tau$ has constant curvature.
The only thing I can say about problem (II) is that for the optimal $\gamma$, the surface $M_\gamma$ must meet the plane at a right angle. But there are many locally-optimal solutions that are not globally optimal (for example, consider a half-cylinder (type 1 region) with two quarter-spherical caps (type 2 region); it has volume $\approx 1.236$ liters, less than Joriki's solution).
I got curious so I implemented a quick-and-dirty tension field simulation that optimizes for $\gamma$ and $M_{\gamma}$. Source code is here (needs the header-only Eigen and Libigl libraries): https://github.com/evouga/DaurizioPaper
Here is a rendering of the numerical solution, from above and below (the volume is roughly 1.56 liters).
EDIT 2: A sketch of the orientation of $\tau$ on the surface:
|
This will be a purely mathematical treatment. It needs to be combined with some practical playing around to really "get" it.
Traveling wave
Let's start with the description of a harmonic traveling wave in one dimension. Here "harmonic" just means the mathematical form of the wave is sinusoidal in both time and space.
For concreteness we'll talk about some kind of transverse matter displacement wave. A wave on a string, perhaps.
The mathematical expression for the displacement of a bit of string away from its resting position is$$ y(x,t) = A \cos(k x - \omega t) \,.$$Here $k = 2 \pi/ \lambda$ is the wave-number, $\lambda$ is the wavelength, $\omega = 2\pi/T$ is the angular frequency and $T$ is the period. Most people find it easier to think about wavelength and period, so you might be more comfortable thinking of that as$$ y(x,t) = A \cos\left(\frac{2 \pi}{\lambda}x - \frac{2 \pi}{T} t\right) \,.$$In either case this represents a continuous sinusoidal wave-train of amplitude $A$ moving to the right as time passes. Replace the $-$ with a $+$ in the argument to the cosine and the wave moves left instead.
The wavelength can have any value you want.
Standing wave
Now we consider the situation with two such wave-trains one moving in each direction. We get\begin{align*}y(x,t) &= A \cos(k x - \omega t) + A \cos(k x + \omega t) \\&= A\left[ \cos(kx)\cos(\omega t) + \sin(kx)\sin(\omega t) \right] +A\left[ \cos(kx)\cos(\omega t) - \sin(kx)\sin(\omega t) \right] \\&= 2A \cos(kx)\cos(\omega t)\end{align*}The wavelength is the same and the period is the same, but the behavior is markedly different. The combined spatial and temporal dependence we saw before has been split apart into two separate dependencies. The bumps on the cosine curves no longer move as time passes, instead they stay right where they are and their amplitude rises and falls.
That's a standing wave.
The wavelength is still arbitrary.
To be completely general we have to work the math with an arbitrary phase shift or allow some sine terms as well, but that complexity doesn't teach us anything new.
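A quick numerical confirmation of the algebra above, for arbitrary (made-up) values of $A$, $\lambda$ and $T$:

```python
import numpy as np

# Check that the right- and left-moving trains sum to the separable
# standing-wave form 2A cos(kx) cos(wt).
A = 1.3
k = 2 * np.pi / 0.5    # wavelength 0.5 m
w = 2 * np.pi / 0.02   # period 0.02 s

x = np.linspace(0.0, 1.0, 201)
for t in [0.0, 0.003, 0.011]:
    traveling = A * np.cos(k * x - w * t) + A * np.cos(k * x + w * t)
    standing = 2 * A * np.cos(k * x) * np.cos(w * t)
    assert np.allclose(traveling, standing)
print("superposition matches the standing-wave form")
```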
Standing wave in a confined space
OK. Let's think about a guitar or other stringed musical instrument. Pitch is related to frequency $f = 1/T$, and when I strike a particular string I get a particular note instead of an arbitrary frequency. Moreover, when I fret up and strike the string I get a different (higher) note.
There has to be something about holding the ends of the string at rest that forces the string to pick some frequency (or rather a set of frequencies).
And that is related to the way waves reflect. When you send a single pulse down a taut string toward a point where it is rigidly attached, the wave pulse gets sent back to you
upside down. It's reflected and inverted. When you pluck a string, that sends pulses in both directions; they get reflected, cross the string, get reflected again, and so on. The system is not lossless, so the energy dissipates in time, but for a while there is a set of chaotic vibrations on the string. The ones that last are the ones where the spatial wave fits between the ends with a node (zero) at both ends.
The image shows the three lowest frequency modes. The red lines represent the state of the system at time zero and the blue line the state after one half of a period. The gray lines represent other times. (Image original to the author.)
After a short time the string is moving in a pattern that is composed of standing waves (those inverted reflections, right) whose wavelength fits neatly. The condition is $L = \frac{n}{2}\lambda$, where $L$ is the length between the fixed ends and $n$ a counting number (1, 2, 3...).
When you fret the guitar, $L$ gets smaller, so the associated wavelength must as well, but wavelength is related to frequency by the speed $c$ of wave propagation on the string $\lambda f = c$, so when the wavelength goes down the frequency goes up and you hear a higher pitch.
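Putting the last two paragraphs together: with a node at both fixed ends the allowed wavelengths satisfy $\lambda_n = 2L/n$, so the mode frequencies are $f_n = n c / (2L)$. A tiny sketch (the numbers for $c$ and $L$ are made up for illustration, not measured guitar values):

```python
# Mode frequencies for a string fixed at both ends: lambda_n = 2L/n,
# so f_n = n * c / (2 * L).
c = 143.0        # wave speed on the string, m/s (illustrative)
L_open = 0.65    # full vibrating length, m (illustrative)

def modes(L, n_max=3):
    return [n * c / (2 * L) for n in range(1, n_max + 1)]

print(modes(L_open))      # fundamental ~110 Hz plus harmonics
print(modes(L_open / 2))  # halving L doubles every frequency: one octave up
```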
Electron in an atom as standing wave
In the (very wrong) Bohr model, the electron is envisioned as following a circular orbit. In that case an integer number of wavelengths would have to fit around the circle for the electron to not interfere with itself.
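Combining that standing-wave condition $n\lambda = 2\pi r$ with the de Broglie wavelength $\lambda = h/(mv)$ and the classical circular-orbit force balance reproduces the Bohr radii $r_n = n^2 a_0$. A sketch using CODATA constants (this is the Bohr model's result, with the quantization step taken as given):

```python
from math import pi

# n * lambda = 2 * pi * r with lambda = h/(m v), i.e. m v r = n hbar,
# combined with m v^2 / r = e^2 / (4 pi eps0 r^2), gives r_n = n^2 a0.
hbar = 1.054571817e-34     # J s
m_e  = 9.1093837015e-31    # kg
e    = 1.602176634e-19     # C
eps0 = 8.8541878128e-12    # F/m

a0 = 4 * pi * eps0 * hbar**2 / (m_e * e**2)  # Bohr radius
radii = [n**2 * a0 for n in (1, 2, 3)]
print(radii)  # ~5.29e-11 m, then 4x and 9x that
```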
That's not a great model, but it is more or less the best you can do until you are ready to tackle three dimensional standing waves in spherical coordinates, so it is where I'm going to leave you.
In reality the electrons aren't little balls and they are not following paths (circular or otherwise), and we call the states they occupy "orbitals" rather than "orbits" in part to remind ourselves of those differences.
|
I have a very general question about Lorentz transformations of electric and magnetic fields vs. 4-vectors . It arised from my previous post. I will describe the difficulty I encountered.
Information and problem: Consider the electric and magnetic fields in a frame $S$, which can be boosted to a frame $\bar{S}$ moving at relative velocity $\mathbf{v}$. The electric field $\mathbf{E} =\mathbf{E}_\perp + \mathbf{E}_\parallel$ ($\perp$, $\parallel$ are with respect to $\mathbf{v}$) and the magnetic field $\mathbf{B} = \mathbf{B}_\perp + \mathbf{B}_\parallel$ transform as:
$$\mathbf{\bar{E}}_\parallel = \mathbf{E}_\parallel$$
$$\mathbf{\bar{B}}_\parallel = \mathbf{B}_\parallel$$
$$\mathbf{\bar{E}}_\perp = \gamma(\mathbf{E}_\perp + \mathbf{v} \times \mathbf{B})$$
$$\mathbf{\bar{B}}_\perp = \gamma(\mathbf{B}_\perp - \frac{\mathbf{v}\times \mathbf{E}}{c^2}),$$
where $\gamma \equiv \frac{1}{\sqrt{1-\mathbf{v}\cdot \mathbf{v}/c^2}}$ is used.
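For reference, these component rules can be checked against the tensor transformation $\bar{F}^{\mu\nu} = \Lambda^{\mu}{}_{\alpha}\Lambda^{\nu}{}_{\beta}F^{\alpha\beta}$. The sketch below uses units with $c=1$, signature $(+,-,-,-)$, and the contravariant convention $F^{i0} = E_i$; these conventions are assumptions of the sketch, not part of the question:

```python
import numpy as np

# Contravariant field tensor, c = 1: F^{i0} = E_i, F^{12} = -B_z, etc.
def F_tensor(E, B):
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    return np.array([[0.0, -Ex, -Ey, -Ez],
                     [Ex,  0.0, -Bz,  By],
                     [Ey,  Bz,  0.0, -Bx],
                     [Ez, -By,  Bx,  0.0]])

v = 0.6
g = 1.0 / np.sqrt(1.0 - v**2)
# Boost along x to the frame moving at +v.
Lam = np.array([[ g,   -g*v, 0.0, 0.0],
                [-g*v,  g,   0.0, 0.0],
                [ 0.0,  0.0, 1.0, 0.0],
                [ 0.0,  0.0, 0.0, 1.0]])

E = np.array([1.0, 2.0, 3.0])
B = np.array([-0.5, 0.4, 1.5])
Fp = Lam @ F_tensor(E, B) @ Lam.T   # F' = Lambda F Lambda^T

# Component formulas from the question, specialized to v = v x-hat:
E_formula = np.array([E[0], g * (E[1] - v * B[2]), g * (E[2] + v * B[1])])
B_formula = np.array([B[0], g * (B[1] + v * E[2]), g * (B[2] - v * E[1])])

# Read the transformed fields back out of the tensor.
E_tensor = np.array([Fp[1, 0], Fp[2, 0], Fp[3, 0]])
B_tensor = np.array([-Fp[2, 3], Fp[1, 3], -Fp[1, 2]])
assert np.allclose(E_tensor, E_formula) and np.allclose(B_tensor, B_formula)
```

Note that the agreement is purely about components: the tensor calculation is silent on which basis vectors the new components are attached to, which is exactly the conceptual point at issue.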
My problem with these equations is as follows. There is an equality of vectors at each line visible. This means that if we have for instance $\mathbf{E}_\parallel = E_0 \mathbf{\hat{x}}$, $\mathbf{v} = v\mathbf{\hat{x}}$ and $\mathbf{B} = \mathbf{0}$ then $\bar{\mathbf{E}}_\parallel = \mathbf{E}_\parallel = E_0 \mathbf{\hat{x}}$ according to the definition. My problem is that $\mathbf{\bar{E}}_\parallel$ is expressed in terms of basis vector $\mathbf{\hat{x}}$ from frame $S$ and not a new basis vector say $\mathbf{\bar{\hat{x}}}$ in frame $\bar{S}$.
In the case of a Lorentz transformation of a 4-vector, we have for example for the 4-potential $A \equiv \{A^{\mu}\} = (V/c,\mathbf{A})$ that $\bar{A}^{\mu} = \Lambda^{\mu}_{\nu} A^{\nu}$, and since the 4-vector itself does not change under a basis transformation we have $A = \bar{A} = \bar{A}^\mu \bar{e}_\mu = A^{\mu}e_\mu$, where $\{e_\mu\}_{\mu \in \{0,1,2,3\}}$ and $\{\bar{e}_\mu\}_{\mu \in \{0,1,2,3\}}$ are bases for $S$ and $\bar{S}$. The same Minkowski space is therefore described by different basis vectors in the 4-vector formalism when boosting to another frame. But the electric and magnetic fields (which I know do not form a 4-vector but are contained in the field tensor $\mathbf{F}$) change only in components and not in the basis of $\mathbf{R}^3$. I have the feeling that 4-vectors do not change under a Lorentz transformation, only the components and the basis in which they are expressed (so the basis representation), while 3-vectors change as a whole, meaning that the basis vectors can stay the same although the components change. But I am not sure about this.
Coming back to the 4-potential. The components are also called $\bar{A}_x$, $\bar{A}_y$, $\bar{A}_z$, just like one can write $\bar{E}_x$, $\bar{E}_y$, $\bar{E}_z$ after a boost. Now the problem is if we write $\mathbf{\bar{E}} = \bar{E}^j \mathbf{e}_j$: is this basis also just $\{\mathbf{\hat{x}},\mathbf{\hat{y}},\mathbf{\hat{z}}\}$? And for $\mathbf{\bar{A}} = \bar{A}_x \mathbf{\hat{x}} + \bar{A}_y \mathbf{\hat{y}} + \bar{A}_z \mathbf{\hat{z}}$: is this also a valid expression? To conclude I think the main problem lies in the question whether a Lorentz transformation changes the basis in which 3-vectors (e.g. magnetic or electric field) or the 3-vector inside a 4-vector (e.g. 4-potential) are defined. And is the basis of $\mathbf{R}^3$ different from the basis of Minkowski space?
I hope someone can explain this rigorously by using equations.
|
I have a series of data points $(x_i,y_i)$ which I expect to (approximately) follow a function $y(x)$ that asymptotes to a line at large $x$. Essentially, $f(x) \equiv y(x) - (ax + b)$ approaches zero as $x \to \infty$, and the same can probably be said of all the derivatives $f'(x)$, $f''(x)$, etc. But I don't know what the functional form for $f(x)$ is, if it even has one that can be described in terms of elementary functions.
My goal is to get the best possible estimate of the asymptotic slope $a$. The obvious crude method is to pick out the last few data points and do a linear regression, but of course this will be inaccurate if $f(x)$ does not become "flat enough" within the range of $x$ for which I have data. The obvious less-crude method is to assume that $f(x) \approx \exp(-x)$ (or some other particular functional form) and fit to that using all the data, but the simple functions I've tried like $\exp(-x)$ or $\dfrac1{x}$ don't quite match the data at lower $x$ where $f(x)$ is large. Is there a known algorithm for determining the asymptotic slope that would do better, or that could provide a value for the slope along with a confidence interval, given my lack of knowledge of exactly how the data approach the asymptote?
This sort of task tends to come up frequently in my work with various data sets, so I'm mostly interested in general solutions, but by request I'm linking to the particular data set that prompted this question. As described in comments, the Wynn $\epsilon$ algorithm gives a value that, as far as I can tell, is somewhat off. Here is a plot:
(It does look like there's a slight downward curve at high x values, but the theoretical model for this data predicts that it should be asymptotically linear.)
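One way to go beyond a fixed functional form is to fit $y = ax + b + c\,e^{-dx}$ with the decay rate $d$ left free: for fixed $d$ the model is linear in $(a,b,c)$, so one can scan over $d$ and solve a linear least-squares problem at each value (a crude form of separable least squares). A sketch on synthetic data with a known asymptote; the function name and grid are illustrative choices, not a standard algorithm:

```python
import numpy as np

# Fit y = a*x + b + c*exp(-d*x): scan the decay rate d, and for each d
# solve the linear least-squares problem in (a, b, c); keep the best fit.
def asymptotic_slope(x, y, d_grid=np.arange(0.1, 5.0, 0.1)):
    best = None
    for d in d_grid:
        A = np.column_stack([x, np.ones_like(x), np.exp(-d * x)])
        coef = np.linalg.lstsq(A, y, rcond=None)[0]
        r = np.sum((A @ coef - y) ** 2)
        if best is None or r < best[0]:
            best = (r, coef, d)
    return best[1][0]  # the asymptotic slope a

# Synthetic data whose asymptote y ~ 2x + 1 is known exactly:
x = np.linspace(0.0, 10.0, 50)
y = 2 * x + 1 + 3 * np.exp(-0.5 * x)
print(asymptotic_slope(x, y))  # ~2.0
```

A naive linear regression over only the last few points would be biased high or low whenever the exponential tail has not yet died out; letting the fit estimate the tail explicitly reduces that bias, though it still assumes a roughly exponential approach.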
|