As it is essentially drawing with replacement, we notice that any particular draw will be the card we are looking for with probability $\frac{1}{13}$ and will not be with probability $\frac{12}{13}$.
As the draws are independent of one another, the probability of a specific sequence of draws is the product of the probabilities of the individual draws. E.g. Ace-Ace-Non-ace in that order occurs with probability $\frac{1}{13}\cdot\frac{1}{13}\cdot\frac{12}{13}$.
There are two possible interpretations of the question I can think of.
What is the probability that the first ace is drawn on the $n$th draw?
This follows the geometric distribution with $p=\frac{1}{13}$: it corresponds to getting $n-1$ non-ace cards in a row followed by an ace, which occurs with probability $\frac{12}{13}\cdot\frac{12}{13}\cdots\frac{12}{13}\cdot\frac{1}{13}=\left(\frac{12}{13}\right)^{n-1}\cdot\frac{1}{13}$.
What is the probability that at least one ace is drawn within the first $n$ draws? (The ace does not need to occur on the $n$th draw itself.)
Getting at least one ace within the first $n$ draws means that the first ace we see occurs within the first $n$ draws, so we may simply sum the geometric distribution up to $n$: $\sum\limits_{k=1}^n\left(\frac{12}{13}\right)^{k-1}\frac{1}{13}$, which may be simplified using what you know about geometric series.
An alternate method to calculate this is by recognizing that getting at least one ace within the first $n$ draws is the opposite event of getting no aces within the first $n$ draws. Getting no aces corresponds to not getting an ace for the first draw followed by not getting an ace for the second draw followed by...etc...
Getting no aces in $n$ draws has probability then $\frac{12}{13}\cdot\frac{12}{13}\cdots\frac{12}{13}=(\frac{12}{13})^n$ so getting at least one ace in $n$ draws has probability $1-(\frac{12}{13})^n$. Had you simplified your geometric sum above, it would have arrived at this answer as well.
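Both formulas are easy to sanity-check numerically. A minimal sketch in Python (the function names are mine, purely illustrative):

```python
# Per-draw ace probability from the text (drawing with replacement).
p = 1 / 13

def prob_first_ace_on_draw(n):
    """Geometric pmf: the first ace appears exactly on draw n."""
    return (1 - p) ** (n - 1) * p

def prob_ace_within(n):
    """At least one ace somewhere in the first n draws (geometric sum)."""
    return sum(prob_first_ace_on_draw(k) for k in range(1, n + 1))

# The summed geometric series agrees with the complement formula.
for n in (1, 5, 20):
    assert abs(prob_ace_within(n) - (1 - (12 / 13) ** n)) < 1e-12
```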
A word of warning: I mentioned that $Pr(A\cap B)=Pr(A)\cdot Pr(B)$, that is, the probability that event $A$ and event $B$ both occur is the product of their respective probabilities. This is true only for independent events, and is in general not true for arbitrary events.
Simply put, the principle of least action is about finding the "cheapest" route between two points in space-time.
How do you measure "cheapest"? If all you measure is the distance, the "cheapest" is a straight line, but often, going in a straight line is not practical. When you try to get across a mountain, for instance, the straight line may mean years of tunnel-boring. The next best thing, a straight line on the map, would require climbing over the mountain, so you might conclude that the "cheapest", in this case, is going around it.
This is what the principle of least action is all about: the "cheapest" route as a function of some measure of "cost".
How can we compute the "cheapest" route? We first must observe that if a given route is the cheapest, then neighboring routes will necessarily be more expensive, no matter how we
vary the route. So we denote the cost by $S$ and the route by $\rho$. What happens when we vary the route, keeping the endpoints fixed? What we get is $S(\rho+\eta)=S+\epsilon_S$. Since $S$ is expected to be minimal at $\eta=0$, the change $\epsilon_S$ must vanish to first order; otherwise it would be proportional to the rate of change of some coordinate, and by varying that coordinate in the opposite direction we could make $\epsilon_S$ negative, contradicting the assumption that $S$ was minimal.
Sounds like a trivial problem in differential calculus, but there's a catch: $\eta$ is not a number, but an arbitrary function assigned to different paths between the endpoints.
As a practical example, I recently tried to derive the equation of motion for an electrically charged particle in an electrostatic field. The "cost", more commonly called the
action, of such a particle is known to be:
\[S=-m\int\sqrt{1-v^2}~dt-q\int[\phi(t,r,\theta)-\vec{v}\cdot\vec{A}(t,r,\theta)]~dt,\]
where $\phi$ and $\vec{A}$ are the electromagnetic scalar and vector potentials, respectively, $r$ and $\theta$ are polar coordinates, and $v$ is the particle's velocity. (How this relativistic action is derived goes beyond the scope of this article, but the bottom line is this: the first term really is the action associated with a "straight line", and the second term is known from experience to correctly represent the contributions of the electromagnetic field.)
First of all, I'm assuming that $\vec{A}=0$, i.e., there's no magnetic field. The action integral then reduces to:
\[S=-m\int\sqrt{1-v^2}~dt-q\int\phi(t,r,\theta)~dt.\]
The expression for $v$ of course is $v=\sqrt{(dr/dt)^2+r^2(d\theta/dt)^2}$. Varying $r$ by $\epsilon$ and dropping terms higher-order in $\epsilon$, I get:
\begin{align}\delta\sqrt{1-v^2}&=\delta\sqrt{1-\left(\frac{dr}{dt}\right)^2-r^2\left(\frac{d\theta}{dt}\right)^2}\\
&=\sqrt{1-\left(\frac{dr}{dt}+\frac{d\epsilon}{dt}\right)^2-(r+\epsilon)^2\left(\frac{d\theta}{dt}\right)^2}-\sqrt{1-v^2}\\ &=\sqrt{1-\left(\frac{dr}{dt}\right)^2-2\frac{dr}{dt}\frac{d\epsilon}{dt}-\left(\frac{d\epsilon}{dt}\right)^2-(r^2+2r\epsilon+\epsilon^2)\left(\frac{d\theta}{dt}\right)^2}-\sqrt{1-v^2}\\ &=\sqrt{1-\left(\frac{dr}{dt}\right)^2-2\frac{dr}{dt}\frac{d\epsilon}{dt}-(r^2+2r\epsilon)\left(\frac{d\theta}{dt}\right)^2}-\sqrt{1-v^2}\\ &=\sqrt{1-v^2-2\left[\frac{dr}{dt}\frac{d\epsilon}{dt}+r\epsilon\left(\frac{d\theta}{dt}\right)^2\right]}-\sqrt{1-v^2}\\ &=\sqrt{\left\{\sqrt{1-v^2}-\frac{1}{\sqrt{1-v^2}}\left[\frac{dr}{dt}\frac{d\epsilon}{dt}+r\epsilon\left(\frac{d\theta}{dt}\right)^2\right]\right\}^2}-\sqrt{1-v^2}\\ &=\sqrt{1-v^2}-\frac{1}{\sqrt{1-v^2}}\left[\frac{dr}{dt}\frac{d\epsilon}{dt}+r\epsilon\left(\frac{d\theta}{dt}\right)^2\right]-\sqrt{1-v^2}\\ &=-\frac{1}{\sqrt{1-v^2}}\left[\frac{dr}{dt}\frac{d\epsilon}{dt}+r\epsilon\left(\frac{d\theta}{dt}\right)^2\right].\end{align}
Now I can put this back into the equation for $S$:
\begin{align}S'&=-\int\left\{m\sqrt{1-v^2}-\frac{m}{\sqrt{1-v^2}}\left[\frac{dr}{dt}\frac{d\epsilon}{dt}+r\epsilon\left(\frac{d\theta}{dt}\right)^2\right]+q\phi+q\epsilon\frac{\partial\phi}{\partial r}\right\}dt\\
&=S+\int\left\{\frac{m}{\sqrt{1-v^2}}\left[\frac{dr}{dt}\frac{d\epsilon}{dt}+r\epsilon\left(\frac{d\theta}{dt}\right)^2\right]-q\epsilon\frac{\partial\phi}{\partial r}\right\}dt.\end{align}
Similarly, if I vary $\theta$ by $\epsilon$ and drop terms higher-order in $\epsilon$, I get:
\begin{align}S''&=-\int\left\{m\sqrt{1-v^2}-\frac{mr^2}{\sqrt{1-v^2}}\frac{d\theta}{dt}\frac{d\epsilon}{dt}+q\phi+q\epsilon\frac{\partial\phi}{\partial\theta}\right\}dt\\
&=S+\int\left\{\frac{mr^2}{\sqrt{1-v^2}}\frac{d\theta}{dt}\frac{d\epsilon}{dt}-q\epsilon\frac{\partial\phi}{\partial\theta}\right\}dt.\end{align}
Feynman's old trick is to eliminate the terms containing $d\epsilon/dt$ by integrating by parts:
\[\int f\frac{d\epsilon}{dt}dt=f\epsilon-\int\frac{df}{dt}\epsilon dt.\]
Now is the time to remember that we're really computing a definite integral between the two endpoints, and $\epsilon$ is null at the endpoints, so the $f\epsilon$ bit will vanish. Therefore, we're left with:
\[S'=S+\int\epsilon\left[\frac{m}{\sqrt{1-v^2}}r\left(\frac{d\theta}{dt}\right)^2-\frac{d}{dt}\left(\frac{m}{\sqrt{1-v^2}}\frac{dr}{dt}\right)-q\frac{\partial\phi}{\partial r}\right]dt,\]
and
\[S''=S-\int\epsilon\left[\frac{d}{dt}\left(\frac{mr^2}{\sqrt{1-v^2}}\frac{d\theta}{dt}\right)+q\frac{\partial\phi}{\partial\theta}\right]dt.\]
Assuming that $\phi$ represents a central force field, we can observe that $\partial\phi/\partial\theta$ is going to be identically zero:
\[S''=S-\int\epsilon\frac{d}{dt}\left(\frac{mr^2}{\sqrt{1-v^2}}\frac{d\theta}{dt}\right)dt.\]
Since $\epsilon$ can be an arbitrary function, in order for the integrals to be zero (which is what we want, as per the action principle) the factor of $\epsilon$ must be zero. First we deal with the equation for $S''$, which implies:
\begin{align}\frac{d}{dt}\left(\frac{mr^2}{\sqrt{1-v^2}}\frac{d\theta}{dt}\right)&=0,\\
\frac{mr^2}{\sqrt{1-v^2}}\frac{d\theta}{dt}&=C,\\ \frac{m}{\sqrt{1-v^2}}&=\frac{C}{r^2\frac{d\theta}{dt}}.\end{align}
Substituting this into the equation for $S'$, we get:
\[C^2\frac{\sqrt{1-v^2}}{mr^3}-\frac{d}{dt}\left(\frac{C}{r^2\frac{d\theta}{dt}}\frac{dr}{dt}\right)-q\frac{\partial\phi}{\partial r}=0.\]
which simplifies as:
\begin{align}\frac{C^2\sqrt{1-v^2}}{mr^3}-\frac{d}{dt}\left(\frac{C}{r^2}\frac{dr}{d\theta}\right)-q\frac{\partial\phi}{\partial r}&=0,\\
\frac{C^2\sqrt{1-v^2}}{mr^3}-\frac{d}{dt}\left(\frac{C}{r^2}\right)\frac{dr}{d\theta}-\frac{C}{r^2}\frac{d^2r}{d\theta\,dt}-q\frac{\partial\phi}{\partial r}&=0,\\ \frac{C^2\sqrt{1-v^2}}{mr^3}-\frac{d}{dr}\left(\frac{C}{r^2}\right)\frac{dr}{dt}\frac{dr}{d\theta}-\frac{C}{r^2}\frac{d^2r}{d\theta\,dt}-q\frac{\partial\phi}{\partial r}&=0,\\ \frac{C^2\sqrt{1-v^2}}{mr^3}+\frac{2C}{r^3}\frac{dr}{dt}\frac{dr}{d\theta}-\frac{C}{r^2}\frac{d^2r}{d\theta\,dt}-q\frac{\partial\phi}{\partial r}&=0,\end{align}
but
\[\frac{dr}{dt}=\frac{dr}{d\theta}\frac{d\theta}{dt}=\frac{dr}{d\theta}\frac{C\sqrt{1-v^2}}{mr^2},\]
so the equation reduces to
\[\frac{C^2\sqrt{1-v^2}}{mr^3}+\frac{2C}{r^3}\frac{C\sqrt{1-v^2}}{mr^2}\left(\frac{dr}{d\theta}\right)^2-\frac{C}{r^2}\frac{d}{d\theta}\left(\frac{dr}{d\theta}\frac{C\sqrt{1-v^2}}{mr^2}\right)-q\frac{\partial\phi}{\partial r}=0.\]
One nasty derivative remains:
\[\frac{d}{d\theta}\left(\frac{dr}{d\theta}\frac{C\sqrt{1-v^2}}{mr^2}\right)=\frac{d^2r}{d\theta^2}\frac{C\sqrt{1-v^2}}{mr^2}+\frac{dr}{d\theta}\frac{d}{d\theta}\frac{C\sqrt{1-v^2}}{mr^2},\]
however
\[\frac{C\sqrt{1-v^2}}{mr^2}=\frac{d\theta}{dt},\]
and thus the second term reduces to $d^2\theta/(d\theta\,dt)$, i.e., the angular acceleration. When $v\ll 1$, this will be a small value, so it can be ignored, which leaves us with the equation
\[\frac{C^2\sqrt{1-v^2}}{mr^3}+\frac{2C}{r^3}\frac{C\sqrt{1-v^2}}{mr^2}\left(\frac{dr}{d\theta}\right)^2-\frac{C}{r^2}\frac{d^2r}{d\theta^2}\frac{C\sqrt{1-v^2}}{mr^2}-q\frac{d\phi}{dr}=0.\]
Rearranging a little, we get:
\[\frac{C^2\sqrt{1-v^2}}{mr}+\frac{2C^2\sqrt{1-v^2}}{mr^3}\left(\frac{dr}{d\theta}\right)^2-\frac{C^2\sqrt{1-v^2}}{mr^2}\frac{d^2r}{d\theta^2}-qr^2\frac{d\phi}{dr}=0.\]
Or, after substituting $\phi=1/r$, the scalar potential of an electrical charge:
\begin{align}\frac{C^2\sqrt{1-v^2}}{mr^2}\frac{d^2r}{d\theta^2}-\frac{2C^2\sqrt{1-v^2}}{mr^3}\left(\frac{dr}{d\theta}\right)^2&=\frac{C^2\sqrt{1-v^2}}{mr}+q,\\
\frac{d^2r}{d\theta^2}-\frac{2}{r}\left(\frac{dr}{d\theta}\right)^2&=r+\frac{qmr^2}{C^2\sqrt{1-v^2}}.\end{align}
Once again making the assumption that $v\ll 1$, i.e., $\sqrt{1-v^2}=1$, this equation can be solved easily:
\[\frac{1}{r}=-\frac{qm}{C^2}+C_1\sin\theta+C_2\cos\theta.\]
This is the equation of a conic section, like an ellipse or a hyperbola, in the polar coordinates $r$ and $\theta$. The actual shape depends on the charge ($q$), mass ($m$), the coupling constant ($C$), and the two constants ($C_1$ and $C_2$) that represent initial conditions.
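As a numerical cross-check (my own, not from the article; all constants are illustrative), substituting $u=1/r$ turns the non-relativistic equation into $u''=-u-qm/C^2$, and a finite-difference test confirms that the conic solution satisfies it:

```python
import math

# Illustrative values for q, m, C and the integration constants C1, C2.
q, m, C = 1.0, 2.0, 3.0
C1, C2 = 0.4, -0.7
k = q * m / C ** 2

def u(theta):
    """The conic solution u = 1/r quoted in the article."""
    return -k + C1 * math.sin(theta) + C2 * math.cos(theta)

# Central second difference approximating u''(theta).
h = 1e-4
for theta in (0.3, 1.1, 2.5):
    u2 = (u(theta + h) - 2 * u(theta) + u(theta - h)) / h ** 2
    assert abs(u2 + u(theta) + k) < 1e-6   # u'' = -u - qm/C^2
```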
I don't know how to solve this problem without the non-relativistic simplifications. I played a little with Maple and obtained some solutions. The results are somewhat encouraging: although it is hard to see through the tangled mess of integration constants, it appears that what I got is indeed approximately periodic, with a precession constant of $\sqrt{1-q^2/C^2}$. This seems consistent with the standard result in Landau and Lifshitz's tome.
References
Feynman, Richard P., The Feynman Lectures on Physics, Vol. II, Chapter 19, Addison-Wesley, 1977.
I am reading Griffiths'
Introduction to Electrodynamics in which he shows that the retarded and advanced potentials, e.g. the retarded scalar potential
$$ V(\mathbf{r},t) = \frac{1}{4\pi\epsilon_0} \int \frac{\rho(\mathbf{r}',t_r)}{\lvert \mathbf{r} - \mathbf{r}' \rvert} \mathrm{d}\tau', \quad t_r \equiv t - \frac{\lvert \mathbf{r} - \mathbf{r}' \rvert}{c},$$
are solutions to the inhomogeneous wave equation and satisfy the Lorenz condition. He then rejects the advanced potentials by invoking the principle of causality, stating that it is not unreasonable to believe that electromagnetic influences propagate forward and not backward in time.
I can only imagine that the reasonableness of this belief is rooted in our naive experience of the world as time-asymmetric. However, the time asymmetries that we experience on a daily basis are (at least usually) not due to a fundamental asymmetry in the laws of physics, like Maxwell's equations (insofar as a classical theory can be fundamental, but to my knowledge QED is also time-symmetric), but to the emergent second law of thermodynamics. It is not clear to me that we can use our intuition about macroscopic phenomena to reason about microscopic phenomena, and even if we could, the second law is only probabilistic and would not allow us to reject backward propagation of electromagnetic influences outright and declare it impossible, only conclude that it is unlikely.
Furthermore, I assume that experiments have verified the retarded and not the advanced potentials? If so, then why invoke the principle of causation at all? And more importantly, would this not show a time asymmetry in the laws of classical electrodynamics? I suppose the principle could still be considered external to the theory (like the homogeneity and isotropy of space), but at least its application to this particular case would have been shown experimentally to be justified.
So what exactly is it that allows us to apply the principle of causality?
Suppose that $X_1, X_2, ..., X_n$ is a random sample from a distribution with mean $\mu$ and variance $\sigma^2$. Suppose also that $\nu:=E[(X_1 - \mu)^4] < \infty $.
(a) Find the mean and variance of $ V_n= \frac 1n \sum_{i=1}^n (X_i -\mu)^2 $.
(b) Now assume that the mean $\mu$ is known. Using the weak law of large numbers, explain why $V_n$ is a good estimator of the variance $\sigma^2$.
For part (a), can I let $V_i = (X_i -\mu)^2$ and then use the theorem for the expectation and variance of a sample mean to say that the expectation of $V_n$ is $\mu$ and the variance is $\sigma^2$? This seems like it cannot be right, but I am unsure.
For part (b) will I have to use Chebyshev's inequality at some point? Is that where the fact that $\nu:=E[(X_1 - \mu)^4] < \infty $ comes in?
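Not part of the exercise, but a quick Monte Carlo sketch can make the two claims concrete: with $\mu$ known, $E[V_n]=\sigma^2$ and $\mathrm{Var}(V_n)=(\nu-\sigma^4)/n$, which equals $2\sigma^4/n$ for normal data (where $\nu=3\sigma^4$). The parameters below are illustrative:

```python
import random

random.seed(0)
mu, sigma, n, sims = 0.0, 1.0, 50, 20000

def v_n():
    """One realization of V_n = (1/n) * sum (X_i - mu)^2."""
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    return sum((x - mu) ** 2 for x in xs) / n

samples = [v_n() for _ in range(sims)]
mean_est = sum(samples) / sims
var_est = sum((v - mean_est) ** 2 for v in samples) / sims

assert abs(mean_est - sigma ** 2) < 0.01          # E[V_n] ~ sigma^2 = 1
assert abs(var_est - 2 * sigma ** 4 / n) < 0.01   # Var(V_n) ~ 2/50 = 0.04
```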
Any help is appreciated!
If $\frac{{{4}^{x+3}}}{{{16}^{2x-3}}}=1$, find $x$.
Jamb Maths 2015
From the diagram above, find x
A card is picked at random from a well-shuffled pack of 52 playing cards. Find the probability of NOT picking a red card, given that there are 13 red cards in the pack.
Find the midpoint of S(–5,4) and T(–3, –2)
In how many ways can 7 directors sit round a table?
Given that M = {x : x is prime and $7\le x\le 13$} and R = {y : y is a multiple of 3 and $6<y\le 15$}, find $M\cup R$.
Two fair dice are thrown at once. What is the probability of getting the same face?
Simplify $\frac{0.00625\times {{10}^{-4}}}{0.0025\times {{10}^{-6}}}$
If ${{\log }_{10}}(x-3)+{{\log }_{10}}(x-2)={{\log }_{10}}(2{{x}^{2}}-8)$, find the value of x
A man bought a car for N500,000 and was able to sell it for N350,000. What was his percentage loss?
I am working on a problem that is similar to the standard radioactive decay rate experiment, but with a twist. In the normal experiment, one takes several different measurements of the decay rate, then computes the decay rate (or time) and the associated error. My background is in high energy physics, so while I am used to counting experiments, I am having issues with the following:
I have a twist on the same problem. I have $N$ objects which can be in state $a$ or $b$, and the number in each state is $N_a$ and $N_b$, with $N = N_a + N_b$. The system is prepared so that we know that every object is in state $a$ to begin with.
Then, I am interested in the number of objects in state $a$ as a function of time (I will be fitting it later). Naively, I would expect the error to be just $\sqrt{N_a}$, except that I know with certainty that the system starts in state $a$, so my intuition tells me the error on that first point should be 0.
Since there could be an error in $N_a$ as well as $N_b$ (except for the dependence), I was also looking at calculating the error (assuming the errors in $N_a$ and $N_b$ are independent) using a ratio:
$$ N = N_a + N_b, \qquad N_a(0) = N $$
$$ f(N_a,N_b) = \frac{N_a}{N_a+N_b} $$
$$ \delta f = \sqrt{\frac{f(1-f)}{N}} $$
Which makes more sense to me, since the error goes to zero at the two bounds when we know that we have reached either all objects in state $a$ or in state $b$.
Could somebody direct me to a paper, or a description of what I am looking for? I am sure that this particular problem has been solved and resolved, but I do not know enough of the proper terms to get it. As well, what would people recommend for a nice, thorough treatment of calculating errors at the graduate level?
EDIT (additional information): After thinking about it, this isn't quite what I am looking for. Let me describe what I am actually measuring, and then maybe I can get a start in the right direction.
I run N simulations, each of which is prepared in state a. Then, I record the time when it switches from a->b. This is irreversible, you can't go from b->a. In the end, I have my N simulations, each with a time t. Then I plot the fraction remaining in state a at time t, call it $f(a,t)$, binning time.
Now, this fraction cannot be the PDF, since it is not normalized to 1. What I want out of the distribution is the decay rate (or time) of a given object. I also want the statement where I can look at an object and ask: What is the probability that it will be in state a at time t? Isn't that just my original distribution? I also want the error on each time bin, as well as the error on the variables in the fit (I am using MATLAB and nonlinear regression for now, so that is easy).
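A sketch of the measurement described above (all parameters invented for illustration): simulate $N$ irreversible $a\to b$ switching times, then attach the binomial error $\sqrt{f(1-f)/N}$ to the surviving fraction at each time. As desired, the error is exactly zero at $t=0$, where the state is known:

```python
import math
import random

random.seed(1)
N, tau = 5000, 2.0   # tau: assumed decay time of the a -> b transition
times = [random.expovariate(1 / tau) for _ in range(N)]

def survival(t):
    """Fraction still in state a at time t, with its binomial error."""
    f = sum(1 for x in times if x > t) / N
    return f, math.sqrt(f * (1 - f) / N)

f0, e0 = survival(0.0)
assert f0 == 1.0 and e0 == 0.0       # known initial state: zero error
f1, e1 = survival(tau)               # should be near exp(-1) ~ 0.368
assert abs(f1 - math.exp(-1)) < 0.05
```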
I realize that I might have some sort of fundamental misunderstanding of some of this, so please, let me know what I can do to improve my questions/answers.
Measurements
Let us assume that you are faced with a specific problem. Then we can see how scientific thinking might help solve it. Suppose that you live near a large plant which manufactures cement. Smoke from the plant settles on your car and house, causing small pits in the paint. You would like to stop this air-pollution problem—but how?
As an individual you will probably have little influence, and even as part of a group of concerned citizens you may be ineffective, unless you can prove that there is a problem. Scientists have had a hand in writing most air-pollution regulations, and so you will have to employ some scientific techniques (or a scientist) to help solve your problem.
It will probably be necessary to determine how much air pollution the plant is producing. This might be done by comparing the smoke with a scale which ranges from white to gray to black, the assumption being that the darker the smoke, the more there is. For white cement dust, however, this is much less satisfactory than for black coal smoke. A better way to determine how much pollution there is would be to measure the mass of smoke particles which could be collected near your house or car. This could be done by using a pump (such as a vacuum cleaner) to suck the polluted gas through a filter. Weighing the filter before and after such an experiment would determine what mass of smoke had been collected.
Mass and Weight
Because weight is the force of gravity on an object, which varies from place to place by about 0.5% as shown in the table below, we must use mass for accurate measurements of how much matter we have. We still call the process of obtaining an accurate mass "weighing".
Latitude (°)    g at altitude 0 (m s⁻²)    g at altitude 10 km (m s⁻²)
0               9.78036                    9.74950
30              9.79324                    9.76238
60              9.81911                    9.78825
90              9.83208                    9.80122
The
weight of an object is actually the force of gravity, and is calculated as
\[F=\text{“W”}=m \times g \label{1}\]
Weight is measured in newtons (kg m s⁻²) or pounds (lb), where 1 lb = 4.44822162 newtons. The base unit for mass is the kilogram (kg), but the pound may also be used as a unit of mass (1 pound = 0.45359237 kilograms). The average value of g is usually taken to be 9.80665 m s⁻², so the weight of a 1 kg mass would be
\[F = \text{ “W” } = m \times g = 1 \text{ kg} \times 9.80665 \text{ m s}^{-2} = 9.80665 \text{ N or } 2.2046 \text{ lb}\]
If the weight of the same 1 kg mass were measured at an arctic exploration camp with a load cell balance (see below) still calibrated for the average g, its weight would be:
\[ F = \text{ “W” } = m \times g = 1 \text{ kg} \times 9.83208 \text{ m s}^{-2} = 9.83208 \text{ N or } 2.2103 \text{ lb}\]
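The sea-level values of $g$ from the table translate directly into weights via $F = mg$; a small illustrative sketch:

```python
# Sea-level g (m/s^2) by latitude, from the table above.
G_SEA_LEVEL = {0: 9.78036, 30: 9.79324, 60: 9.81911, 90: 9.83208}

def weight_newtons(mass_kg, latitude_deg):
    """Weight F = m * g at the given latitude, in newtons."""
    return mass_kg * G_SEA_LEVEL[latitude_deg]

w_equator = weight_newtons(1.0, 0)   # 9.78036 N
w_pole = weight_newtons(1.0, 90)     # 9.83208 N

# The roughly 0.5% place-to-place variation quoted in the text:
assert abs((w_pole - w_equator) / w_equator - 0.0053) < 0.001
```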
Accurate weighing is usually done with a single pan balance. The empty pan is balanced by a counterweight. When an object is placed on the pan, gravitational attraction forces the pan downward. To restore balance, a series of weights (objects of known mass) are removed from holders above the pan. The force of gravitational attraction is proportional to mass, and when the pan is balanced, the force on it must always be the same. Therefore the mass of the object being weighed must equal that of the weights that were removed. A balance gives the same mass readout regardless of the force of gravity.
Figure \(\PageIndex{1}\) Single Pan Balance (a) Actual appearance of a modern substitution balance. (b) X-ray view showing principle of operation. When an object is placed on the balance pan, ring-shaped weights whose total mass equals that of the object are removed from holders above the pan to restore balance.
Modern laboratory "balances" are based on load cells that convert the force exerted by an object on the balance pan to an electrical signal. The load cell generally is coupled with a dedicated converter and digital display. Because the force exerted by the object depends on gravity, these are really scales that measure weight, and they must be calibrated frequently (against standard masses) to read a mass.
Mass Measurements
If you kept a notebook or other record of your measurements, you would probably write down something like 0.0342 g to represent how much smoke had been collected. Such a result, which describes the magnitude of some parameter (in this case the parameter mass), is called a quantity. Notice that it consists of two parts, a number and a unit. It would be ambiguous to write just 0.0342: you might not remember later whether that was measured in units of grams, ounces, pounds, or something else. A quantity always behaves as though the number and the units are multiplied together. For example, we could write the quantity already obtained as 0.0342 × g. Using this simple rule of number × units, we can apply arithmetic and algebra to any quantity:
\[\begin{align} & 5 \text{g} + 2 \text{g} = (5 + 2) \text{g} = 7 \text{g} \\ & 5 \text{g} \div 2 \text{g} = \dfrac{5 \text{g}}{2 \text{g}} = 2.5 \text{ (the units cancel, and so we get a pure number)} \\ & 5 \text{ in} \times 2 \text{ in} = 10 \text{ in}^{2} \text{ (10 square inches)}\end{align}\]
This works perfectly well as long as we do not write equations with different parameters (i.e., those having units which measure different properties, like mass and length, temperature and energy, or volume and amount) on opposite sides of the equal sign. For example, applying algebra to the equation
\[ 5 \text{g} = 2 \text{ in}^{2}\]
can lead to trouble in much the same way that dividing by zero does, and should be avoided, because the gram (g) is a unit of the parameter mass, while the inch (in) is a unit of the parameter length.
Conversions with Unity Factors
Mass Unity Factors
Notice also that whether a quantity is large or small depends on the size of the units as well as the size of the number. For example, the mass of smoke has been measured as 0.0342 g, but the balance might have been set to read in milligrams (or grains in the English system). Then the reading would be 34.2 mg (or 0.527 grains). The results (0.0342 g, 34.2 mg, or 0.527 gr) are the
same quantity, the mass of smoke. One involves a smaller number and larger unit (0.0342 g), while the others have a larger number and smaller unit. So long as we are talking about the same quantity, it is a simple matter to adjust the number to go with any units we want.
We can convert among the different ways of expressing the mass with unity factors, as follows:
Since 1 mg and 0.001 g are the same quantity (a mass), we can write the equation
\[ 1 \text{ mg} = 0.001 \text{ g}\]
Dividing both sides by 1 mg, we have
\[1 = \dfrac{1\text{ mg}}{1\text{ mg}} = \dfrac{0.001\text{ g}}{1\text{ mg}} \]
Since the last term of this equation equals one, it is called a
unity factor. It can be multiplied by any quantity, leaving the quantity unchanged.
We can generate another unity factor by dividing both sides by 0.001 g:
\[1 = \dfrac{1\text{ mg}}{0.001\text{ g}} = \dfrac{0.001\text{ g}}{0.001\text{ g}} \]
Example \(\PageIndex{1}\) : Mass Units
What is the mass in grams of a 5.0 grain (5 gr) aspirin tablet, given that 1 gram = 15.4323584 grains?
Solution:
\[ 5.0 \text{ gr} = 5.0 \text{ gr} \times 1 = 5.0 \text{ gr}\times \dfrac{1.0\text{ g}}{\text{15.4323 gr}}\]
The units
gr cancel, yielding the result
\[ 5.0 \text{ gr} = \dfrac{5.0}{15.4323}\text{ g} = 0.324 \text{ g}\]
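The same unity-factor calculation, written as code (the conversion constant is the one given in the example):

```python
GRAINS_PER_GRAM = 15.4323584   # 1 g = 15.4323584 gr, as given

def grains_to_grams(gr):
    # Multiplying by the unity factor (1 g / 15.4323584 gr).
    return gr / GRAINS_PER_GRAM

mass_g = grains_to_grams(5.0)
assert abs(mass_g - 0.324) < 0.001   # matches the example's 0.324 g
```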
Length Unity Factors
The
parameter length may be measured in inches (in) in the English system, but scientific measurements (all measurements in the world exclusive of the U.S.) are reported in the metric units of meters (m) or some more convenient derived unit like centimeters (cm).
Figure \(\PageIndex{2}\) The length of a rod can be measured either in centimeters or inches. We can record the length either as 3.50 in or as 8.89 cm. In either case we are referring to the same quantity.
Example \(\PageIndex{2}\) : Unit Conversions
Express the length 8.89 cm in inches, given that 1 cm = 0.3937 in.
Solution
Since 1 cm and 0.3937 in are the same quantity, we can write the equation
\[1 \text{ cm} = 0.3937 \text{ in}\]
Dividing both sides by 1 cm, we have
\[1 = \dfrac{0.3937\text{ in}}{1\text{ cm}}\]
Since the right side of this equation equals one, it is called a
unity factor. It can be multiplied by any quantity, leaving the quantity unchanged.
\[ 8.89 \text{ cm} = 8.89 \text{ cm} \times 1 = 8.89 \text{ cm} \times \dfrac{0.3937\text{ in}}{\text{1 cm}}\]
The units
centimeter cancel, yielding the result 8.89 cm = 8.89 × 0.3937 in = 3.50 in
This agrees with the direct observation made in the figure.
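The two unity factors obtainable from 1 cm = 0.3937 in give one conversion function each; picking the factor whose unwanted unit cancels is what the sketch below mirrors:

```python
IN_PER_CM = 0.3937   # from the equality 1 cm = 0.3937 in

def cm_to_in(length_cm):
    return length_cm * IN_PER_CM   # times (0.3937 in / 1 cm)

def in_to_cm(length_in):
    return length_in / IN_PER_CM   # times (1 cm / 0.3937 in)

assert abs(cm_to_in(8.89) - 3.50) < 0.01   # the rod in the figure
assert abs(in_to_cm(3.50) - 8.89) < 0.01
```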
Let us look at our air-pollution problem. It has probably already occurred to you that simply measuring the mass of smoke collected is not enough. Some other variables may affect your experiment and should also be measured if the results are to be reproducible. For example, wind direction and speed would almost certainly be important. The time of day and date when a measurement was made should be noted too. In addition you should probably specify what kind of filter you are using. Some are not fine enough to catch all the smoke particles.
Temperature
Another variable which is almost always recorded is the temperature. A thermometer is easy to use, and temperature can vary a good deal outdoors, where your experiments would have to be done.
Figure \(\PageIndex{3}\): The Celsius and Fahrenheit scales compared. Temperatures in bold face are exact and easy to reproduce. Other temperatures are approximate and somewhat variable.
In scientific work, temperatures are usually reported in degrees Celsius (°C), a scale in which the freezing point of pure water is 0°C and the normal boiling point 100°C. In the United States, however, you would be more likely to have available a thermometer calibrated in degrees Fahrenheit (°F). The relationship between these two scales of temperature is
\[\dfrac{\text{T}_{(^{o}\text{F)}} - 32}{\text{T}_{(^{o}\text{C)}}}=\dfrac{9}{5}\]
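The relation above, rearranged into both directions as code; the additive offset of 32 is exactly why a single multiplicative unity factor cannot do this conversion:

```python
def c_to_f(t_c):
    """Celsius to Fahrenheit: T_F = (9/5) T_C + 32."""
    return t_c * 9 / 5 + 32

def f_to_c(t_f):
    """Fahrenheit to Celsius: T_C = (5/9)(T_F - 32)."""
    return (t_f - 32) * 5 / 9

assert c_to_f(0) == 32.0     # freezing point of water
assert c_to_f(100) == 212.0  # normal boiling point
```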
Note that the temperature scales cannot be interconverted with simple unity factors, because they do not have a common zero point (0°C = 32°F). Rather, the mathematical function above must be used. The equation is written in terms of the parameter temperature ($T$) with the units or dimensions subscripted in parentheses.
Volume Measurements
More important than any of the above variables is the fact that the more air you pump through the filter, the more smoke you will collect. Since air is a gas, it is easier to measure how much you collect in terms of volume than in terms of mass, and so you might decide to do it that way. Running your pump until it had filled a plastic weather balloon would provide a crude, inexpensive volume measurement. Assuming the balloon to be approximately spherical, you could measure its diameter and calculate its volume from the formula
\[V=\dfrac{4}{3}\pi r^{3}\]
Example \(\PageIndex{3}\) : Volume Calculation
Calculate the volume of gas in a sphere whose diameter is 106 in. Express your result in cubic centimeters (cm³).
Solution
Since the radius of a sphere is half its diameter,
\[ r = \dfrac{1}{2} \times 106 \text{ in} = 53 \text{ in}\]
We can use the same equality of quantities as in Example 2 to convert the radius to centimeters. When we cube the number and units, our result will be in cubic centimeters.
\[ 1 \text{ cm} = 0.3937 \text{ in}\]
\[\dfrac{1\text{ cm}}{0.3937\text{ in}} = 1\]
\[ r = 53 \text{ in} \times \dfrac{1\text{ cm}}{0.3937\text{ in}} = \dfrac{53}{0.3937} \text{ cm}\]
Using the formula
\begin{align} V &=\dfrac{4}{3}\pi r^{3}=\dfrac{4}{3}\times 3.14159\times \left( \dfrac{53}{0.3937}\text{ cm} \right)^{3} \\ &= 10\,219\,264\text{ cm}^{3} \end{align}
You can see from Examples 2 and 3 that two unity factors may be obtained from the equality
\[ 1 \text{ cm} = 0.3937 \text{ in}\]
We can use one of them to convert inches to centimeters and the other to convert centimeters to inches
. The correct factor is always the one which results in cancellation of the units we do not want.
The result in Example 3 also shows that cubic centimeters are rather small units for expressing the volume of the balloon. If we used larger units, as shown in the following example, we would not need more than 10 million of them to report our answer.
Example \(\PageIndex{4}\): Volume Unit Conversion
Express the result of Example 3 in cubic meters, given that 1 m = 100 cm.
Solution
Again we wish to use a unity factor, and since we are trying to get rid of cubic centimeters, centimeters must be in the denominator:
\[\text{1 = }\dfrac{\text{1 m}}{\text{100 cm}}\]
But this will not allow cancellation of cubic centimeters. However, note that $1^3 = 1$; that is, we can raise a unity factor to any power, and it remains unity. Thus
\[\begin{align} & 1 =\left( \dfrac{\text{1 m}}{\text{100 cm}} \right)^{3} = \dfrac{\text{1 m}^{3}}{\text{100}^{3}\text{ cm}^{3}} = \dfrac{\text{1 m}^{3}}{\text{1 000 000 cm}^{3}} \\ & \text{and} \\ & \text{10 219 264 cm}^{3} = \text{10 219 264 cm}^{3}\times \left( \dfrac{\text{1 m}}{\text{100 cm}} \right)^{3} \\ & = \text{10 219 264 cm}^{3}\times \dfrac{\text{1 m}^{3}}{\text{1 000 000 cm}^{3}} \\ & = \text{10.219 264 m}^{3} \end{align}\]
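Examples 3 and 4 condensed into code, including the cubed unity factor for the cm³ → m³ step (a sketch; `math.pi` is used instead of 3.14159, so the result differs slightly from the text's):

```python
import math

CM_PER_IN = 1 / 0.3937          # from 1 cm = 0.3937 in

r_cm = 53 * CM_PER_IN           # radius of the 106 in balloon, in cm
v_cm3 = 4 / 3 * math.pi * r_cm ** 3
v_m3 = v_cm3 / 100 ** 3         # multiply by (1 m / 100 cm)^3

assert abs(v_cm3 - 10_219_264) < 1000   # ~10 219 264 cm^3
assert abs(v_m3 - 10.219264) < 0.001    # ~10.219 264 m^3
```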
, and to attain this field in specific regions of the brain, the electric current should pass through different head layers: skin, fat, skull, meninges, and cortex. In order to model the brain, different layers should be considered, including the gray and white matter. The meninges, three layers of protective connective tissue, cover the outer surface of the central nervous system (brain and spinal cord) and comprise (from the innermost to the outermost layer) the pia mater, the arachnoid, and the dura mater.
Kathrin Badstübner, Marco Stubbe, Thomas Kröger, Eilhard Mix and Jan Gimsa
Education and Research (BMBF, FKZ 01EZ0911). The custom-made stimulator system was developed in cooperation with the Steinbeis company (STZ1050, Rostock, Germany) and Dr. R. Arndt (Rückmann & Arndt, Berlin, Germany). References: 1. Krack P, Hariz MI, Baunez C, Guridi J, Obeso JA. Deep brain stimulation: from neurology to psychiatry? Trends Neurosci. 2010;33:474-84. https://doi.org/10.1016/j.tins.2010.07.002
Lisa Röthlingshöfer, Mark Ulbrich, Sebastian Hahne and Steffen Leonhardt
magnetic field strength (h) are assigned to the edges. Hence, a system of equations, called the Maxwell-Grid Equations, has to be solved for the whole calculation domain, where each cell is described by: $$C\overrightarrow{e}=-\frac{\partial \overrightarrow{b}}{\partial t}\qquad\tilde{C}\overrightarrow{h}=\frac{\partial \overrightarrow{d}}{\partial t}+\overrightarrow{j}\qquad(1)$$ $$\tilde{S}\overrightarrow{d}=q\qquad S\overrightarrow{b}=0\qquad(2)$$
electrodes are near-to constant because of the high resistance to current of the stratum corneum in the considered frequency range [3]. This allows us to rewrite the boundary conditions, Eqs. 5-7, between the probe and the uppermost skin layer $n$, stratum corneum, as (we drop the subindex `eff' for notational convenience in the analysis) $$-\sigma_{n}\frac{\partial\Phi(r,\mathcal{H}_{n})}{\partial z}=\sum_{j=1}^{m}\frac{I_{j}}{A_{j}}\left[U(R_{2j-1}-r)-U(R_{2j-2}-r)\right],$$
The Annals of Statistics, Volume 46, Number 6A (2018), 2593-2622. Debiasing the lasso: Optimal sample size for Gaussian designs. Abstract
Performing statistical inference in high-dimensional models is challenging because of the lack of precise information on the distribution of high-dimensional regularized estimators.
Here, we consider linear regression in the high-dimensional regime $p\gg n$ and the Lasso estimator: we would like to perform inference on the parameter vector $\theta^{*}\in\mathbb{R}^{p}$. Important progress has been achieved in computing confidence intervals and $p$-values for single coordinates $\theta^{*}_{i}$, $i\in\{1,\dots,p\}$. A key role in these new inferential methods is played by a certain debiased estimator $\widehat{\theta}^{\mathrm{d}}$. Earlier work establishes that, under suitable assumptions on the design matrix, the coordinates of $\widehat{\theta}^{\mathrm{d}}$ are asymptotically Gaussian provided the true parameter vector $\theta^{*}$ is $s_{0}$-sparse with $s_{0}=o(\sqrt{n}/\log p)$.
The condition $s_{0}=o(\sqrt{n}/\log p)$ is considerably stronger than the one for consistent estimation, namely $s_{0}=o(n/\log p)$. In this paper, we consider Gaussian designs with known or unknown population covariance. When the covariance is known, we prove that the debiased estimator is asymptotically Gaussian under the nearly optimal condition $s_{0}=o(n/(\log p)^{2})$.
The same conclusion holds if the population covariance is unknown but can be estimated sufficiently well. For intermediate regimes, we describe the trade-off between sparsity in the coefficients $\theta^{*}$, and sparsity in the inverse covariance of the design. We further discuss several applications of our results beyond high-dimensional inference. In particular, we propose a thresholded Lasso estimator that is minimax optimal up to a factor $1+o_{n}(1)$ for i.i.d. Gaussian designs.
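To make the objects in the abstract concrete, here is a minimal numerical sketch of the debiasing step for the simplest known-covariance case $\Sigma = I$ (so the decorrelating matrix $M$ is the identity). The ISTA solver, regularization level, and problem sizes below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def ista_lasso(X, y, lam, n_iter=2000):
    """Solve the Lasso objective (1/(2n))||y - X theta||^2 + lam*||theta||_1
    by proximal gradient descent (ISTA)."""
    n, p = X.shape
    theta = np.zeros(p)
    step = n / np.linalg.norm(X, 2) ** 2   # 1/L for the smooth part
    for _ in range(n_iter):
        grad = -X.T @ (y - X @ theta) / n
        z = theta - step * grad
        theta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return theta

rng = np.random.default_rng(0)
n, p, s0 = 200, 400, 5
X = rng.standard_normal((n, p))                 # i.i.d. Gaussian design, Sigma = I
theta_star = np.zeros(p); theta_star[:s0] = 3.0  # s0-sparse truth
y = X @ theta_star + rng.standard_normal(n)

lam = 2 * np.sqrt(np.log(p) / n)
theta_hat = ista_lasso(X, y, lam)
# Debiasing step: theta_d = theta_hat + (1/n) M X^T (y - X theta_hat), with M = I here.
theta_d = theta_hat + X.T @ (y - X @ theta_hat) / n
```

Under the paper's conditions, each coordinate of `theta_d` is approximately Gaussian around the corresponding coordinate of $\theta^*$, which is what makes confidence intervals and $p$-values possible.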
Article information Source Ann. Statist., Volume 46, Number 6A (2018), 2593-2622. Dates Received: June 2016 Revised: August 2017 First available in Project Euclid: 7 September 2018 Permanent link to this document https://projecteuclid.org/euclid.aos/1536307227 Digital Object Identifier doi:10.1214/17-AOS1630 Mathematical Reviews number (MathSciNet) MR3851749 Zentralblatt MATH identifier 06968593 Citation
Javanmard, Adel; Montanari, Andrea. Debiasing the lasso: Optimal sample size for Gaussian designs. Ann. Statist. 46 (2018), no. 6A, 2593--2622. doi:10.1214/17-AOS1630. https://projecteuclid.org/euclid.aos/1536307227
Supplemental materials Supplement to “Debiasing the Lasso: Optimal Sample Size for Gaussian Designs”. Due to space constraints, proof of theorems and some of the technical details as well as additional numerical studies are provided in the Supplementary Material [40]. |
Weighted linear regression is one of those things that one needs from time to time, yet it is not a built-in function of many common packages, including spreadsheet programs. On the other hand, the problem is not sufficiently complicated to make it worth one's while to learn (or relearn!) more sophisticated statistical software packages, as with a modest effort, the formulae can be derived easily from first principles.
The problem can be stated as follows. Given a set of $N$ value pairs $(x_i,y_i)$ ($i=1,...,N$), and a set of weights $W_i$, we seek the values of $A$ and $B$ such that the following weighted sum $S$ is minimal:
\begin{align}
S=\sum_{i=1}^NW_i[(Ax_i+B)-y_i]^2=&A^2\sum W_ix_i^2+2AB\sum W_ix_i-2A\sum W_ix_iy_i+\nonumber\\ &B^2\sum W_i-2B\sum W_iy_i+\sum W_iy_i^2. \end{align}
The requisite values of $A$ and $B$ can be computed by taking the partial derivative of $S$ with respect to $A$ and $B$ and demanding that they both be zero:
\begin{eqnarray}
\frac{\partial S}{\partial A}&=&2A\sum W_ix_i^2+2B\sum W_ix_i-2\sum W_ix_iy_i=0,\\ \frac{\partial S}{\partial B}&=&2A\sum W_ix_i+2B\sum W_i-2\sum W_iy_i=0, \end{eqnarray}
or, in matrix form:
\begin{equation}
\begin{pmatrix} \sum W_ix_i^2&\sum W_ix_i\\ \sum W_ix_i&\sum W_i \end{pmatrix} \begin{pmatrix} A\\B \end{pmatrix}= \begin{pmatrix} \sum W_ix_iy_i\\ \sum W_iy_i \end{pmatrix}. \end{equation}
This equation is solved as follows:
\begin{equation}
\begin{pmatrix}A\\B \end{pmatrix}=\frac{1}{\sum W_i\sum W_ix_i^2-(\sum W_ix_i)^2} \begin{pmatrix} \sum W_i&-\sum W_ix_i\\ -\sum W_ix_i&\sum W_ix_i^2 \end{pmatrix} \begin{pmatrix} \sum W_ix_iy_i\\ \sum W_iy_i \end{pmatrix}, \end{equation}
which gives
\begin{eqnarray}
A&=&\frac{\sum W_i\sum W_ix_iy_i-\sum W_ix_i\sum W_iy_i}{\sum W_i\sum W_ix_i^2-(\sum W_ix_i)^2},\\ B&=&\frac{\sum W_ix_i^2\sum W_iy_i-\sum W_ix_i\sum W_ix_iy_i}{\sum W_i\sum W_ix_i^2-(\sum W_ix_i)^2}, \end{eqnarray}
which can be readily computed if the values of $\sum W_i$, $\sum W_ix_i$, $\sum W_ix_i^2$, $\sum W_iy_i$ and $\sum W_ix_iy_i$ are available. These, in turn, can be calculated in a cumulative fashion, allowing a weighted least squares calculation to take place even on a handheld calculator that lacks sufficient memory to store all individual $x_i$, $y_i$, and $W_i$ values.
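The running-sums approach described above can be sketched directly in code: five accumulators are updated one data point at a time, then the closed-form expressions for $A$ and $B$ are evaluated at the end.

```python
def weighted_linear_fit(points):
    """Weighted least-squares fit y = A*x + B from (x, y, W) triples,
    using only the five running sums from the derivation above."""
    Sw = Swx = Swx2 = Swy = Swxy = 0.0
    for x, y, w in points:       # cumulative accumulation, no storage needed
        Sw   += w
        Swx  += w * x
        Swx2 += w * x * x
        Swy  += w * y
        Swxy += w * x * y
    det = Sw * Swx2 - Swx ** 2   # determinant of the 2x2 normal matrix
    A = (Sw * Swxy - Swx * Swy) / det
    B = (Swx2 * Swy - Swx * Swxy) / det
    return A, B

# Points lying exactly on y = 2x + 1 are recovered whatever the weights:
print(weighted_linear_fit([(0, 1, 1.0), (1, 3, 0.5), (2, 5, 2.0)]))  # (2.0, 1.0)
```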
The method can be readily extended to polynomial regression of degree $n$. The function to be minimized in this case is
\begin{equation}
S=\sum_{i=1}^N W_i\left[\left(\sum_{j=1}^n A_jx_i^j+B\right)-y_i\right]^2. \end{equation}
The partial derivatives with respect to $A_j$ and $B$ are as follows:
\begin{align}
\frac{\partial S}{\partial A_j}&=2\sum_k\left(A_k\sum_iW_ix_i^{j+k}\right)+2B\sum_iW_ix_i^j-2\sum_iW_ix_i^jy_i=0,\\ \frac{\partial S}{\partial B}&=2\sum_k\left(A_k\sum_iW_ix_i^k\right)+2B\sum_iW_i-2\sum_iW_iy_i=0. \end{align}
In matrix form:
\begin{equation}
\begin{pmatrix} \Sigma W_i&\Sigma W_ix_i&\Sigma W_ix_i^2&...&\Sigma W_ix_i^n\\ \Sigma W_ix_i&\Sigma W_ix_i^2&\Sigma W_ix_i^3&...&\Sigma W_ix_i^{n+1}\\ \Sigma W_ix_i^2&\Sigma W_ix_i^3&\Sigma W_ix_i^4&...&\Sigma W_ix_i^{n+2}\\ .&.&.&...&.\\ \Sigma W_ix_i^n&\Sigma W_ix_i^{n+1}&\Sigma W_ix_i^{n+2}&...&\Sigma W_ix_i^{2n} \end{pmatrix} \begin{pmatrix} B^{~}_{~}\\ A^{~}_1\\ A^{~}_2\\ ...^{~}_{~}\\ A^{~}_n \end{pmatrix}= \begin{pmatrix} \Sigma W_iy_i\\ \Sigma W_ix_iy_i\\ \Sigma W_ix_i^2y_i\\ ...\\ \Sigma W_ix_i^ny_i \end{pmatrix}. \end{equation}
We could also write $B=A_0$ and then, in compact form:
\begin{equation}
\sum_{j=0}^n\left(A_j\sum_{i=1}^NW_ix_i^{j+k}\right)=\sum_{i=1}^NW_ix_i^ky_i,~~~(k=0,...,n). \end{equation} |
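The compact form above translates into a short routine: the $(n+1)\times(n+1)$ moment matrix is built from the same kind of running sums and the normal equations are solved with `numpy` (returning the coefficients in the order $A_0\,(=B), A_1, \dots, A_n$).

```python
import numpy as np

def weighted_poly_fit(xs, ys, ws, n):
    """Weighted polynomial fit of degree n, solving
    sum_j A_j sum_i W_i x_i^(j+k) = sum_i W_i x_i^k y_i  for k = 0..n."""
    moments = np.zeros(2 * n + 1)   # running sums of W_i x_i^m, m = 0..2n
    rhs = np.zeros(n + 1)           # running sums of W_i x_i^k y_i, k = 0..n
    for x, y, w in zip(xs, ys, ws):
        powers = x ** np.arange(2 * n + 1)
        moments += w * powers
        rhs += w * powers[: n + 1] * y
    M = np.array([[moments[j + k] for j in range(n + 1)] for k in range(n + 1)])
    return np.linalg.solve(M, rhs)

# Data on y = 1 + 2x + 3x^2 is recovered (up to round-off):
xs = np.arange(5.0)
coeffs = weighted_poly_fit(xs, 1 + 2 * xs + 3 * xs**2, np.ones(5), 2)
```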
I've really been thinking about and working on this problem for a while, and I would appreciate if someone could offer any help towards the solution.
Consider the following family of hash functions that map $w$-bit numbers to $l$-bit numbers (i.e. the range is $\{0,...,m-1\}$ where $m=2^l$):
$\mathcal{H} = \{h_{A,b}|A\in \{0,1\}^{l\times w}, b \in \{0,1\}^{l\times 1}\}$, where $h_{A,b}(x) = Ax+b\mod 2$
Show that $\mathcal{H}$ is $2$-wise independent but not $3$-wise independent ($\mathcal{H}$ is $k$-wise independent if for any $k$ inputs $x_1,...,x_k$ and hash values $v_1,...,v_k$, $\Pr\limits_{h\in\mathcal{H}} \left( \bigwedge\limits_{1\le i\le k} h(x_i)=v_i \right)=m^{-k} $).
I'm first trying to show that $\mathcal{H}$ is $2$-wise independent. So consider any two inputs $x_1, x_2$ and hash values $v_1,v_2$. Then it's required that $\Pr \left( h(x_1)=v_1 \wedge h(x_2)=v_2 \right)=\frac{1}{m^2}$.
$$\Pr \left( h(x_1)=v_1 \wedge h(x_2)=v_2\right)$$
$$= \Pr\left( Ax_1+b = v_1 \mod 2\wedge Ax_2+b=v_2 \mod 2\right)$$
What about this equation shows that the probability is $\frac{1}{m^2}$? If we were trying to show $3$-wise independence, we would have:
$$\Pr\left( Ax_1+b = v_1 \mod 2 \wedge Ax_2+b=v_2 \mod 2 \wedge Ax_3+b=v_3 \mod 2\right)$$
Why is this
not $\frac{1}{m^3}$?
I just don't know how to evaluate an expression like $\Pr\left( Ax_1+b = v_1 \mod 2 \wedge Ax_2+b=v_2 \mod 2 \wedge Ax_3+b=v_3 \mod 2\right)$. I've tried searching online for similar problems, but none seem to help with solving this one specifically. |
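One way to get a feel for these probabilities is to enumerate the whole family for tiny $w$ and $l$ and count directly. The script below brute-force checks the pairwise claim $\Pr[h(x_1)=v_1 \wedge h(x_2)=v_2] = 1/m^2$ for every pair of distinct inputs; the parameter choices are just for illustration, and the same loop can be extended to triples to probe the 3-wise case.

```python
from itertools import product

def check_pairwise(w, l):
    """Enumerate all h_{A,b}(x) = Ax + b mod 2 and verify that, for every pair
    of distinct inputs and every pair of target values, exactly |H| / m^2
    functions hit both targets."""
    m = 2 ** l
    matrices = list(product([0, 1], repeat=l * w))   # A stored as a flat tuple
    vectors = list(product([0, 1], repeat=l))        # b, and also the values v
    inputs = list(product([0, 1], repeat=w))

    def h(A, b, x):
        return tuple((sum(A[i * w + j] * x[j] for j in range(w)) + b[i]) % 2
                     for i in range(l))

    total = len(matrices) * len(vectors)             # |H|
    for x1, x2 in product(inputs, inputs):
        if x1 == x2:
            continue
        for v1, v2 in product(vectors, vectors):
            count = sum(1 for A in matrices for b in vectors
                        if h(A, b, x1) == v1 and h(A, b, x2) == v2)
            assert count * m * m == total            # probability exactly 1/m^2
    return True

check_pairwise(2, 1)   # tiny case: 2-bit inputs, 1-bit outputs
```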
I am trying to write a multiplication dot using \cdot, as in 4 = 2\cdot{2}, but I get the error message "Missing $ inserted". What have I forgotten?
TeX and its various derivatives (such as LaTeX) distinguish between normal prose (text mode) and mathematics (math mode). The simplest and original way to switch between the two is to use the math switch, which is $.
Here is a sample document to show you how it is typically used.
\documentclass{article}
\begin{document}
This is some normal text, written in text-mode. In between dollar signs,
such as $ x = \frac{1}{2} = \frac{10}{20} = \frac{2}{5} \cdot \frac{5}{4} $,
one may write mathematics (using macros that only work in math-mode, such
as \verb:\frac:). For displayed equations, which are centered and put on
their own line, you can put math in between \verb:\[: and \verb:\]:, like this:
\[ E = mc^2 \]
If you want to number equations or to align them horizontally in a particular way,
you should consider using environments such as \texttt{equation}, \texttt{align},
or \texttt{gather} from the \texttt{amsmath} package.
\end{document}
Let $G$ be a compact, connected Lie group. Let $x, y \in G$ be an arbitrary pair of commuting elements. Is there necessarily a torus $T \leq G$ containing $x$ and $y$?
Apparently not:
The issue arises due to discrete abelian subgroups. Consider the $SO_3(\mathbb{R})$ example from the link. Thinking geometrically, a maximal torus can be described as the circle subgroup of rotations fixing a particular direction in $\mathbb{R}^3$. Consider two orthogonal directions. Then rotation by $\pi$ radians about these two axes will commute, but they certainly do not live in a common maximal torus.
My concern comes from trying to compute the following character variety on a torus:
$\chi_G(\Sigma_1)=\{\rho: \pi_1(\Sigma_1) \to G \ \vert \rho \ \text{is a group homomorphism}\}/\text{conjugation by} \ G$
where $\Sigma_1 \cong S^1 \times S^1$ is a torus. Since $\pi_1(\Sigma_1)$ is free abelian of rank 2, it follows that such a homomorphism is determined by a choice of commuting elements $x, y \in G$. What should I make of the following argument, particularly the part characterizing a flat connection on $S^1 \times S^1$? [Edit: The link has been updated to reflect this issue]
It seems that there are oddball homomorphisms not fitting into this general setup for some compact groups. I would appreciate more examples (in other compact, connected gauge groups) of commuting elements that fail to live in a common maximal torus.
Of course, $SO_3(\mathbb{R})$ is not simply connected, but the gauge groups I am working with all are. I don't know enough about discrete abelian subgroups of compact Lie groups. Can this issue be avoided by further assuming that the gauge group is simply connected? |
It is often said that the classical charge $Q$ becomes the quantum generator $X$ after quantization. Indeed this is certainly the case for simple examples of energy and momentum. But why should this be the case mathematically?
For clarity let's assume we are doing canonical quantization, so that Poisson brackets become commutators. I assume that the reason has something to do with the relationship between classical Hamiltonian mechanics and Schrodinger's equation. Perhaps there's a simple formulation of Noether's theorem in the classical Hamiltonian setting which makes the quantum analogy precisely clear?
Any hints or references would be much appreciated!
Mathematical Background
In classical mechanics a continuous transformation of the Lagrangian which leaves the action invariant is called a symmetry. It yields a conserved charge $Q$ according to Noether's theorem. $Q$ remains unchanged throughout the motion of the system.
In quantum mechanics a continuous transformation is effected through a representation of a Lie group $G$ on a Hilbert space of states. We insist that this representation is unitary or antiunitary so that probabilities are conserved.
A continuous transformation which preserves solutions of Schrodinger's equation is called a symmetry. It is easy to prove that this is equivalent to $[U,H] = 0$ for all $U$ representing the transformation, where $H$ is the Hamiltonian operator.
We can equivalently view a continuous transformation as the conjugation action of a unitary operator on the space of Hermitian observables of the theory
$$A \mapsto UAU^\dagger = g.A$$
where $g \in G$. This immediately yields a representation of the Lie algebra on the space of observables
$$A \mapsto [X,A] = \delta A$$
$$\textrm{where}\ \ X \in \mathfrak{g}\ \ \textrm{and} \ \ e^{iX} = U \ \ \textrm{and} \ \ e^{i\delta A} = g.A$$
$X$ is typically called a generator. Clearly if $U$ describes a symmetry then $X$ will be a conserved quantity in the time evolution of the quantum system.
Edit
I've had a thought that maybe it's related to the 'Hamiltonian vector fields' for functions on a symplectic manifold. Presumably after quantization these can be associated to the Lie algebra generators, acting on wavefunctions on the manifold. Does this sound right to anyone? |
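As a concrete (if elementary) illustration of the charge-to-generator dictionary discussed above, momentum generates spatial translations after quantization: $e^{-i\hat{p}a/\hbar}\psi(x) = \psi(x-a)$. This is easy to verify numerically on a periodic grid (all choices below, units included, are illustrative):

```python
import numpy as np

# Periodic grid and a smooth test wavefunction
N, L = 256, 2 * np.pi
x = np.arange(N) * L / N
psi = np.exp(np.sin(x))                      # smooth and periodic on [0, L)

# Momentum in Fourier space: p = hbar*k, with hbar = 1 here
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

a = L / 8                                    # translate by 1/8 of the box
# Apply exp(-i p a) mode by mode, then transform back
shifted = np.fft.ifft(np.fft.fft(psi) * np.exp(-1j * k * a)).real

# exp(-i p a) psi(x) should equal psi(x - a) (periodically)
assert np.allclose(shifted, np.exp(np.sin(x - a)), atol=1e-8)
```

The same spectral trick illustrates the Hamiltonian generating time translations, which is the quantum shadow of the classical statement that $H$ generates the flow along trajectories.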
It is well known that $4$ general points in $\mathbb{P}^2$ are complete intersection of two conics. On the other hand, if $d \geq 3$, $d^2$ general points are
not a complete intersection of two curves of degree $d$. More precisely, if $d =3$ there is only one cubic passing through $9$ general points, whereas if $d \geq 4$ there is no curve of degree $d$ passing through $d^2$ general points.
While investigating some questions about factoriality of singular hypersurfaces of $\mathbb{P}^n$, I ran across the following problem, which seems quite natural to state.
Let $d \geq 3$ be a positive integer and let $Q \subset \mathbb{P}^3$ be a subset made of $d^2$ distinct points, with the following property: for a
general projection $\pi \colon \mathbb{P}^3 \to \mathbb{P}^2$, the subset $\pi(Q) \subset \mathbb{P}^2$ is the complete intersection of two plane curves of degree $d$.
Is it true that $Q$ itself is contained in a plane (and is the complete intersection of two curves of degree $d$)?
If not, what is a counterexample?
Any answer or reference to the existing literature will be appreciated. Thank you.
EDIT. Dimitri's answer below provides a counterexample given by $d^2$ points on a quadric surface. Are there other configurations of points with the same property? Is it possible to classify them up to projective transformations (at least for small values of $d$)?
How many independent degrees of freedom does a most general classical electromagnetic field have in presence of sources? What is the correct way to count them? In terms of the components of the electric field $(E_x,E_y,E_z)$ and magnetic field $(B_x,B_y,B_z)$, there are 6 degrees of freedom. But they are not all independent due to relations of the form $$\frac{\partial\vec{B}}{\partial t}+\nabla\times\vec{E}=0, \qquad\nabla\cdot\vec{B}=0, \qquad \frac{\partial\rho}{\partial t}+\nabla\cdot\vec{j}=0.$$ Please help! I also have another question. Does the number of independent degrees of freedom describing an EM field decrease when we consider EM field in the vacuum? Instead of counting in terms of the independent components of $A^\mu$, can we count in terms of the components of $\vec{E}$ and $\vec{B}$?
The EM field has infinitely many degrees of freedom, because it takes functions, not numbers, to describe its state. You are probably interested in the least possible number of scalar functions of position in space that would contain all the information about the state of the EM field at some time.
Obviously, 6 functions (cartesian components of electric and magnetic field) would describe the state fully, but 6 is probably not the least number possible, because those 6 functions have to obey some universal constraints.
One way to answer this is to consider an initial value problem for EM field given sources in all space and time starting from the time $t_0$.
The initial field $(\mathbf E_0, \mathbf B_0)$ has to obey these 2 restricting conditions:
$$ \nabla\cdot \mathbf E_0(\mathbf x) = \rho(\mathbf x,t_0)/\epsilon_0, ~(1) $$ $$ \nabla\cdot \mathbf B_0(\mathbf x) = 0.~(2) $$ The sources are usually assumed to obey the following condition: $$ \partial_t\rho + \nabla \cdot \mathbf j = 0.~(3) $$
There are 6 independent "equations of motion" that the fields as functions of time obey:
$$ \partial_t \mathbf E = \nabla\times\mathbf B - \mu_0\mathbf j $$ $$ \partial_t \mathbf B = -\nabla\times\mathbf E $$ These 6 equations with the above 3 conditions imply all of the Maxwell equations.
From these equations, only (1) and (2) restrict state of the field at single point of time. The continuity equation (3) does not (it only restricts evolution of the sources in time).
So, we have 2 equations for 6 functions of space. I think this means somehow we could replace those 6 functions with 4 functions, since 6-2=4.
For example, assuming $E_x, E_y, \rho$ are known functions of position, and assuming $E_z = 0$ at infinity, we can use the constraint (1) to express $E_z$:
$$ E_z(x,y,z) = \int_{-\infty}^z \left[\rho(x,y,z')/\epsilon_0 - \partial_xE_x(x,y,z') - \partial_yE_y(x,y,z')\right] dz'. $$ So only two functions out of three are independent; the third can be found from those two and the sources (which are assumed to be known).
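This reconstruction is easy to demonstrate numerically: given $E_x$, $E_y$ and $\rho$ on a grid, $E_z$ is rebuilt by integrating the Gauss constraint along $z$. The analytic test field below is just a convenient illustration:

```python
import numpy as np

# 1-D grid along z at a fixed (x0, y0); units with epsilon_0 = 1
nz = 200
z = np.linspace(0.0, 1.0, nz)
dz = z[1] - z[0]
eps0 = 1.0
x0, y0 = 0.3, 0.7

# Test field E = (x, y, z), for which div E = 3, i.e. rho/eps0 = 3
Ex = np.full(nz, x0)
Ey = np.full(nz, y0)
rho = np.full(nz, 3.0 * eps0)
dEx_dx = np.ones(nz)          # partial_x E_x = 1 analytically
dEy_dy = np.ones(nz)          # partial_y E_y = 1 analytically

# Integrate rho/eps0 - dEx/dx - dEy/dy upward from z = 0 (where E_z = 0)
integrand = rho / eps0 - dEx_dx - dEy_dy
Ez = np.concatenate(([0.0],
                     np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * dz)))

assert np.allclose(Ez, z)     # recovered E_z matches the analytic component
```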
Is it possible to formulate classical electrodynamics (in the sense of deriving Maxwell's equations) from a least-action principle, without the use of potentials? That is, is there a lagrangian which depends only on the electric and magnetic fields and which will have Maxwell's equations as its Euler-Lagrange equations?
1) Well, at the classical level, if we are allowed to introduce auxiliary variables, we can always trivially encode a set of equations of motion $$\tag{1} {\rm eom}_i = 0, \qquad i\in\{1, \ldots, n\},$$ in a brute force manner with the help of Lagrange multipliers $$\tag{2}\lambda^i, \qquad i\in\{1, \ldots, n\},$$ so that the Lagrangian density simply reads $$\tag{3}{\cal L}~=~\sum_{i=1}^n\lambda^i ~{\rm eom}_i.$$
This is for many reasons not a very satisfactory solution. (Especially if we start to think about quantum mechanical aspects. However, OP only asks about
classical physics.) Nevertheless, the above trivial rewriting (3) illustrates how hard it is to formulate and prove no-go theorems with air-tight arguments.
2) To proceed, we must impose additional conditions on the form of the action principle. Firstly, since we are forbidden to introduce gauge potentials $A_{\mu}$ as fundamental variables (that we can vary in the action principle), we will assume that the fundamental EM variables in vacuum should be given by the ${\bf E}$ and ${\bf B}$ field. Already in pure EM, it is impossible to get the $1+1+3+3=8$ Maxwell eqs. (in differential form) as Euler-Lagrange eqs. by varying only the $3+3=6$ field variables ${\bf E}$ and ${\bf B}$. So we would eventually have to introduce additional field variables, one way or another.
3a) It doesn't get any better if we try to couple EM to matter. In decoupling corners of the theory, we should be able to recover well-known special cases. E.g. in the case of EM coupled to charged point particles, say in a non-relativistic limit where there is no EM field, the Lagrangian of a single point charge should reduce to the well-known form
$$\tag{4}L~=~\frac{1}{2}mv^2$$
of a free particle. A discussion of eq. (4) can be found e.g. in this Phys.SE post. Here we will assume that eq. (4) is valid in what follows.
3b) Next question is what happens in electrostatics
$$\tag{5} m\dot{\bf v}~=~ q{\bf E}? $$
The answer is well-known
$$\tag{6} L~=~\frac{1}{2}mv^2 - V $$
with potential energy
$$\tag{7}V~=~ q\phi, $$
where $\phi$ is the scalar electric potential. However, since we are forbidden to introduce the potential $\phi$ as a fundamental variable, we must interpret it
$$\tag{8}\phi({\bf r})~:=~-\int^{\bf r} \!d{\bf r}^{\prime}\cdot{\bf E}({\bf r}^{\prime}) $$
as a functional of the electric field ${\bf E}$, which in turn is taken as the fundamental field. Note that eqs. (6)-(8) correspond to a non-local action.
3c) The straightforward generalization (from point mechanics to field theory) of eq. (7) is a potential density
$$\tag{9}{\cal V}~=~ \rho\phi, $$
where $\rho$ is an electric charge density. Readers familiar with the usual action principle for Maxwell's eqs. will recognize that we are very close to argue that the interaction term between pure EM and matter must be of the form
$$\tag{10} {\cal L}_{\rm int}~=~J^{\mu}A_{\mu},$$
even if we haven't yet discussed what should replace the standard Lagrangian
$$\tag{11} {\cal L}_{\rm EM} ~=~-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}$$
for pure EM.
3d) Staying in electrostatics, let us ponder our prospects of deriving Gauss' law in differential form
$$\tag{12} \nabla \cdot {\bf E} ~=~ \rho. $$
Obviously, the rhs. of the
single eq. (12) should appear by varying the potential density (9) wrt. one of the three $E$ fields, but which one? The counting is not right. And because eq. (9) is non-local, we will in any case get an integrated version of $\rho$ rather than $\rho$ itself, which appears on the rhs. of eq. (12), and which we set out to reproduce.
3e) In conclusion, it seems hopeless to couple a EM theory (with ${\bf E}$ and ${\bf B}$ as fundamental variables) to matter, and reproduce standard classical eqs. of motion.
4) The standard remedy is to introduce $4$ (globally defined) gauge potentials $A_{\mu}$ as fundamental variables. This makes $1+3=4$ source-less Maxwell eqs. trivial, and the remaining $1+3=4$ Maxwell eqs. with sources may be derived by varying wrt. the $4$ fundamental variables $A_{\mu}$.
For instance, the standard (special relativistic) action for EM coupled to $n$ massive point charges $q_1, \ldots, q_n$, at positions ${\bf r}_1, \ldots, {\bf r}_n$, is given as
$$\tag{13} S[A^{\mu}; {\bf r}_i] ~=~\int \! dt ~L, $$
where the Lagrangian is
$$ \tag{14} L ~=~ -\frac{1}{4}\int \! d^3r ~F_{\mu\nu}F^{\mu\nu} -\sum_{i=1}^n \left(\frac{m_{0i}c^2}{\gamma({\bf v}_i)} +q_i\{\phi({\bf r}_i) - {\bf v}_i\cdot {\bf A}({\bf r}_i)\} \right). $$
The corresponding Euler-Lagrange eqs. are $4$ Maxwell eqs. with sources (when varying $A_{\mu})$, and $n$ (special relativistic) Newton's 2nd laws with Lorentz forces (when varying ${\bf r}_i)$.
I don't know if another approach is possible, but this one does not work. We start with the tensor $F_{\mu\nu}$:
$$F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu} $$
but forget about the 4-potential and define it to be:
$$F^{\mu\nu} = \begin{bmatrix} 0 &-E_x/c & -E_y/c & -E_z/c \\ E_x/c & 0 & -B_z & B_y \\ E_y/c & B_z & 0 & -B_x \\ E_z/c & -B_y & B_x & 0 \end{bmatrix} $$
and write the Lagrangian density as a function of the Cartesian components of the fields, say:
$$\mathcal{L}=\mathcal{L}(E_{x},..,B_{z}) $$
and
$$\mathcal{L}=-\frac{1}{\mu_0}F^{\mu\nu}F_{\mu\nu} $$
Then the Euler-Lagrange equations give you (for example, applied to $E_{x}$) $\displaystyle \frac{\partial \mathcal{L}}{\partial{E_{x}}}=0$, so this is not consistent.
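The inconsistency is easy to check numerically: with the field-strength matrix above, $F^{\mu\nu}F_{\mu\nu} = 2(\mathbf{B}^2 - \mathbf{E}^2/c^2)$ contains no derivatives of the fields, so the Euler-Lagrange equation for $E_x$ degenerates to $\partial\mathcal{L}/\partial E_x = 0$, which would force $E_x = 0$. A small numpy sketch (the field values are arbitrary illustrations):

```python
import numpy as np

c, mu0 = 1.0, 1.0                         # illustrative units
eta = np.diag([1.0, -1.0, -1.0, -1.0])    # metric, signature (+,-,-,-)

def F_upper(E, B):
    """Contravariant field-strength matrix, as written above."""
    Ex, Ey, Ez = E; Bx, By, Bz = B
    return np.array([[0, -Ex/c, -Ey/c, -Ez/c],
                     [Ex/c, 0, -Bz, By],
                     [Ey/c, Bz, 0, -Bx],
                     [Ez/c, -By, Bx, 0]])

def FF(E, B):
    Fu = F_upper(E, B)
    Fl = eta @ Fu @ eta                   # lower both indices
    return np.einsum('mn,mn->', Fu, Fl)   # F^{mu nu} F_{mu nu}

E = np.array([0.4, -1.2, 0.7]); B = np.array([0.1, 0.9, -0.3])
assert np.isclose(FF(E, B), 2 * (B @ B - E @ E / c**2))

# dL/dE_x for L = -(1/mu0) F^{mn}F_{mn}, by central finite difference:
h = 1e-6
dE = np.array([h, 0.0, 0.0])
dL_dEx = (-(FF(E + dE, B) - FF(E - dE, B)) / mu0) / (2 * h)
assert np.isclose(dL_dEx, 4 * E[0] / (mu0 * c**2), atol=1e-5)  # nonzero unless E_x = 0
```

Since $\mathcal{L}$ has no dependence on derivatives of $E_x$, setting this nonzero derivative to zero is exactly the inconsistency described in the text.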
How we can solve the problem? Thinking in another Lagrangian density, defining a new $F_{\mu\nu}$ tensor, choosing more carefully the independent fields?
In classical electrodynamics, the physical quantities of interest are the fields. The theory is already formulated "without" potentials if you think of Maxwell's equations.
The potentials come into play later if you want to simplify the equations and find solutions using e.g. Green's functions etc. However, in quantum electrodynamics, the potentials acquire a real physical role; see e.g. the Aharonov-Bohm effect.
I really can't see the connection between deriving Maxwell's equations from the least action principle and avoiding potentials: we can include or remove potentials at any point, because in classical physics (not quantum) this is just a matter of definitions.
Regarding the least action derivation, to use it we need to have some Lagrangian to start with; if you suppose it given, then of course you can derive all of Maxwell's equations.
Anyway, an interesting thing I know: if you suppose that we have some force that behaves like gravity (central and inversely proportional to the square of the distance) but admits different signs of mass (which are actually charges), in other words if you suppose that Coulomb's law is given, then by applying the least action principle to the Lagrangian of special relativity, you can show that there must definitely be another coupling "force", which is magnetism.
Following the same strategy as Dr Xorile, I wrote a quick programming solution to this problem. The goal is to define a function $p_{n,k}$, the probability of winning with $n$ numbers left, if you choose the $k$-th number.
p[n_, k_] := p[n, k] = 1/n + (k - 1)/n (1 - p[k - 1]) + (n - k)/n (1 - p[n - k])
p[0] = 0;
p[1] = 1;
p[n_] := p[n] = Max[Table[p[n, k], {k, n}]]
I define the auxiliary function $p_n$ as the probability of winning with $n$ numbers if you choose ideally. I also assume that each number is equally likely to be correct. (This is not true in real life: for example, people rarely choose multiples of ten because they don't seem "random.")
With this function we can break $p_{n,k}$ into three mutually exclusive events:
You guess the correct number. This happens with probability $1/n$ (and in this case you win with probability $1$). You guess high. This happens with probability $(k-1)/n$. Your opponent goes next, and since there are $k-1$ numbers left he has a probability to win of $p_{k-1}$; your probability of winning in this scenario is therefore $1-p_{k-1}$. You guess low. The reasoning is the same as above, except there are $n-k$ numbers remaining, so the probability of this event is $(n-k)/n$ and your probability of winning is $1-p_{n-k}$.
Therefore the total probability is just:
$$p_{n,k} = \frac 1 n + \frac{k-1} n (1-p_{k-1}) + \frac{n-k} n (1-p_{n-k})$$
We can also write the winning probability for the ideal strategy:
$$p_n = \max_{k\in[1,n]}p_{n,k}$$
The base case is $p_1=1$ (If there is only one number left, you win). The case $p_0=0$ in the code is not strictly necessary, as that case only contributes with probability $0$.
Running this program, I found that there are three different cases:
$n$ is even: $p_{n,k}=1/2$ $n$ is odd, $k$ is even: $p_{n,k}=(n-1)/2n$ $n$ is odd, $k$ is odd: $p_{n,k}=(n+1)/2n$
If this is true, it follows that $p_n=1/2$ for even $n$ and $(n+1)/2n$ for odd $n$.
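A direct transcription of the recurrence into Python (memoized, mirroring the Mathematica code above) lets one check the claimed closed forms for small $n$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p(n):
    """Probability of winning with n numbers left under ideal play."""
    if n == 0:
        return 0.0
    if n == 1:
        return 1.0
    return max(pk(n, k) for k in range(1, n + 1))

def pk(n, k):
    """Probability of winning with n numbers left when guessing the k-th."""
    return 1/n + (k - 1)/n * (1 - p(k - 1)) + (n - k)/n * (1 - p(n - k))

# Check the closed form: p_n = 1/2 for even n, (n+1)/(2n) for odd n.
for n in range(1, 61):
    expected = 0.5 if n % 2 == 0 else (n + 1) / (2 * n)
    assert abs(p(n) - expected) < 1e-9
```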
We can prove this by induction:
If $n$ is even, then either:

$k$ is odd; $k-1$ is even and $n-k$ is odd, so:$$ \begin{align} p_{n,k} &= \frac 1 n + \frac{k-1} n \frac 1 2 + \frac{n-k} n \frac{n-k-1}{2(n-k)} \\ &= \frac 2{2n} + \frac{k-1}{2n} + \frac{n-k-1}{2n} \\ &= \frac{2+k-1+n-k-1}{2n} \\ &= \frac n {2n} = \frac 1 2 \end{align} $$

$k$ is even; $k-1$ is odd and $n-k$ is even, so:$$ \begin{align} p_{n,k} &= \frac 1 n + \frac{k-1} n \frac{k-1-1}{2(k-1)} + \frac{n-k} n \frac 1 2 \\ &= \frac 2{2n} + \frac{k-2}{2n} + \frac{n-k}{2n} \\ &= \frac{2+k-2+n-k}{2n} \\ &= \frac n {2n} = \frac 1 2 \end{align} $$

If $n$ is odd and $k$ is even, $k-1$ and $n-k$ are both odd, so:$$ \begin{align} p_{n,k} &= \frac 1 n + \frac{k-1} n \frac{k-1-1}{2(k-1)} + \frac{n-k} n \frac{n-k-1}{2(n-k)} \\ &= \frac 2{2n} + \frac{k-2}{2n} + \frac{n-k-1}{2n} \\ &= \frac{2+k-2+n-k-1}{2n} \\ &= \frac{n-1}{2n} \end{align} $$

If $n$ is odd and $k$ is odd, $k-1$ and $n-k$ are both even, so:$$ \begin{align} p_{n,k} &= \frac 1 n + \frac{k-1} n \frac 1 2 + \frac{n-k} n \frac 1 2 \\ &= \frac 2{2n} + \frac{k-1}{2n} + \frac{n-k}{2n} \\ &= \frac{2+k-1+n-k}{2n} \\ &= \frac{n+1}{2n} \end{align} $$
Q.E.D.
Note that the middle number is not always the best choice: if $n\equiv 3 \mod 4$, then $k=(n+1)/2$ will be even, and your probability of winning will be decreased by $1/n$ compared to the perfect strategy. The first strategy that Dr Xorile proposes (choosing one of the endpoints) is a perfect strategy. A perfect strategy that still roughly halves the possibilities on each turn could be: given that you know the number is between $a$ and $b$, inclusive, compute the number of numbers remaining, $n=b-a+1$. If $n$ is even, choose either of $(b+a\pm 1)/2$. Otherwise, if $n+1$ is divisible by four, choose either of $(b+a)/2\pm 1$. Otherwise choose $(b+a)/2$.
Note again the assumptions we made: that your opponent also follows a perfect strategy (although not necessarily an identical strategy), and that the answer is evenly distributed among the possible guesses. |
In the lambda calculus with no constants with the Hindley-Milner type system, you cannot get any such types where the result of a function is an unresolved type variable. All type variables have to have an “origin” somewhere. For example, there is no term of type $\forall \alpha,\beta.\; \alpha\mathbin\rightarrow\beta$, but there is a term of type $\forall \alpha.\; \alpha\mathbin\rightarrow\alpha$ (the identity function $\lambda x.x$).
Intuitively, a term of type $\forall \alpha,\beta.\; \alpha\mathbin\rightarrow\beta$ requires being able to build an expression of type $\beta$ from thin air. It is easy to see that there is no
value which has such a type. More precisely, if the type variable $\beta$ does not appear in the type of any term variable in the environment, then there is no term of type $\beta$ which is in head normal form. You can prove this by structural induction on the term: either the variable at the head would have to have the type $\beta$, or one of the arguments would have to have a principal type involving $\beta$, i.e. there would be a smaller suitable term.
Just because there is no value of a certain type doesn't mean that there is no term of that type: there could be a term with no value, i.e. a non-terminating term (precisely speaking, a term with no normal form). The reason why there is no lambda term with such types is that all well-typed HM terms are strongly normalizing. This is a generalization of the result that states that simply typed lambda calculus is strongly normalizing. It is a consequence of the fact that System F is strongly normalizing: System F is like HM, but allows type quantifiers everywhere in types, not just at the toplevel. For example, in System F, $\Delta = \lambda x. x \, x$ has the type $(\forall \alpha. \alpha) \rightarrow (\forall \alpha. \alpha)$ — but $\Delta\,\Delta$ is not well-typed.
HM and System F are examples of type systems that have a Curry-Howard correspondence: well-typed terms correspond to proofs in a certain logic, and types correspond to formulas. If a type system corresponds to a consistent theory, then that theory does not allow proving theorems such as $\forall A, \forall B, A \Rightarrow B$; therefore there is no term of the corresponding type $\forall \alpha,\beta.\; \alpha\mathbin\rightarrow\beta$. The type system allows one to deduce “theorems for free” about functions over data structures.
This result breaks down as soon as you add certain constants to the calculus. For example, if you allow a general fixpoint combinator such as $Y$, it is possible to build terms of arbitrary type: $Y (\lambda x.x)$ has the type $\forall \alpha. \alpha$. The equivalent of a general fixpoint combinator in the Curry-Howard correspondence is an axiom that states $\forall A, \forall B, A \Rightarrow B$, which makes the logic obviously unsound.
Finding the fine line between type systems that ensure strong normalization and type systems that don't is a difficult and interesting problem. It is an important problem because it determines which logics are sound, in other words which programs embody proofs of theorems. You can go a lot further than System F, but the rules become more complex. For example, the calculus of inductive constructions which is the basis of the Coq proof assistant, is strongly normalizing yet is capable of describing common inductive data structures and algorithms over them, and more.
As soon as you get to real programming languages, the correspondence breaks down. Real programming languages have features such as general recursive functions (which may not terminate), exceptions (an expression that always raises an exception never returns a value and hence can have any type in most type systems), recursive types (which allow non-termination to sneak in), etc. |
$n$ is an integer greater than $7$. How does one go about proving that $\lfloor \sqrt{n!}\rfloor\nmid n!$?
Added: there is a nice argument at Finding the Number of Positive integers such that $\lfloor{\sqrt{n}\rfloor} \mid n$, where it is shown that the numbers $m$ such that $\lfloor \sqrt m \rfloor \mid m$ are exactly those of the form $m = k^2$, $m = k^2 + k$, or $m = k^2 + 2k$; furthermore, this is an if and only if.
It is easy to show $n!$ is not a square by Bertrand's postulate. If we could prove the current conjecture, that would be a proof that, for $n \geq 8$, $$ n! \neq k^2 + k $$ and $$ n! \neq k^2 + 2k. $$ That is, we would have proved that $$ 4 n! + 1 \neq 4k^2 + 4k + 1, $$ $$ n! + 1 \neq k^2 + 2k + 1, $$ indeed $$ 4 n! + 1 \neq v^2, $$ $$ n! + 1 \neq w^2 $$ for $n \geq 8.$ A proof of the current conjecture would include a proof of Brocard's conjecture.
This conjecture is stronger than Brocard. Also stronger than dirt.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Well, let's see what can be proven out of this. Interestingly, for $n=8$ there is no new prime showing up in "sqrt"; rather, the exponent of 5 is too large.
n: sqrt = ⌊√(n!)⌋ factored; fact = n! factored
2: sqrt 1 = 1, fact 2 = 2
3: sqrt 2 = 2, fact 6 = 2 3
4: sqrt 4 = 2^2, fact 24 = 2^3 3
5: sqrt 10 = 2 5, fact 120 = 2^3 3 5
6: sqrt 26 = 2 13, fact 720 = 2^4 3^2 5
7: sqrt 70 = 2 5 7, fact 5040 = 2^4 3^2 5 7
8: sqrt 200 = 2^3 5^2, fact 40320 = 2^7 3^2 5 7
9: sqrt 602 = 2 7 43, fact 362880 = 2^7 3^4 5 7
10: sqrt 1904 = 2^4 7 17, fact 3628800 = 2^8 3^4 5^2 7
11: sqrt 6317 = 6317, fact 39916800 = 2^8 3^4 5^2 7 11
12: sqrt 21886 = 2 31 353, fact 479001600 = 2^10 3^5 5^2 7 11
13: sqrt 78911 = 7 11273, fact 6227020800 = 2^10 3^5 5^2 7 11 13
14: sqrt 295259 = 295259, fact 87178291200 = 2^11 3^5 5^2 7^2 11 13
15: sqrt 1143535 = 5 228707, fact 1307674368000 = 2^11 3^6 5^3 7^2 11 13
16: sqrt 4574143 = 7 31 107 197, fact 20922789888000 = 2^15 3^6 5^3 7^2 11 13
17: sqrt 18859677 = 3 37 131 1297, fact 355687428096000 = 2^15 3^6 5^3 7^2 11 13 17
18: sqrt 80014834 = 2 79 506423, fact 6402373705728000 = 2^16 3^8 5^3 7^2 11 13 17
19: sqrt 348776576 = 2^7 139 19603, fact 121645100408832000 = 2^16 3^8 5^3 7^2 11 13 17 19
20: sqrt 1559776268 = 2^2 139 2805353, fact 2432902008176640000 = 2^18 3^8 5^4 7^2 11 13 17 19
21: sqrt 7147792818 = 2 3^2 19 20899979, fact 51090942171709440000 = 2^18 3^9 5^4 7^3 11 13 17 19
22: sqrt 33526120082 = 2 7 13 184209451, fact 1124000727777607680000 = 2^19 3^9 5^4 7^3 11^2 13 17 19
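The conjecture is easy to probe numerically, since Python integers are exact and `math.isqrt` returns the exact integer square root (the helper name here is mine):

```python
import math

# Numerical probe of the conjecture: floor(sqrt(n!)) | n! should hold only
# for small n.  Exact integer arithmetic, so no floating-point danger.
def floor_sqrt_divides(n):
    f = math.factorial(n)
    return f % math.isqrt(f) == 0

print([n for n in range(2, 30) if floor_sqrt_divides(n)])  # → [2, 3, 4, 5, 7]
```

This matches the table: from $n=8$ on, $\lfloor\sqrt{n!}\rfloor$ always picks up a prime power that $n!$ lacks.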
Mahendra K Verma
Articles written in Pramana – Journal of Physics
Volume 61 Issue 3 September 2003 pp 577-594
Energy cascade rates and Kolmogorov’s constant for non-helical steady magnetohydrodynamic turbulence have been calculated by solving the flux equations to the first order in perturbation. For zero cross helicity and space dimension $d = 3$, magnetic energy cascades from large length-scales to small length-scales (forward cascade). In addition, there are energy fluxes from large-scale magnetic field to small-scale velocity field, large-scale velocity field to small-scale magnetic field, and large-scale velocity field to large-scale magnetic field. Kolmogorov’s constant for magnetohydrodynamics is approximately equal to that for fluid turbulence $(\approx 1.6)$ for Alfvén ratio $0.5\leq r_{A}\leq \infty$. For higher space-dimensions, the energy fluxes are qualitatively similar, and Kolmogorov’s constant varies as $d^{1/3}$. For the normalized cross helicity $\sigma_{c}\to 1$, the cascade rates are proportional to $(1-\sigma_{c})/(1+\sigma_{c})$, and the Kolmogorov’s constants vary significantly with $\sigma_{c}$.
Volume 61 Issue 4 October 2003 pp 707-724 Research Articles
Renormalized viscosity, renormalized resistivity, and various energy fluxes are calculated for helical magnetohydrodynamics using perturbative field theory. The calculation is of first order in perturbation. Kinetic and magnetic helicities do not affect the renormalized parameters, but they induce an inverse cascade of magnetic energy. The sources for the large-scale magnetic field have been shown to be (1) energy flux from large-scale velocity field to large-scale magnetic field arising due to non-helical interactions and (2) inverse energy flux of magnetic energy caused by helical interactions. Based on our flux results, a primitive model for galactic dynamo has been constructed. Our calculations yield a dynamo time-scale for a typical galaxy of the order of $10^{8}$ years. Our field-theoretic calculations also reveal that the flux of magnetic helicity is backward, consistent with the earlier observations based on absolute equilibrium theory.
Volume 62 Issue 6 June 2004 pp 1327-1328 Errata
Energy cascade rates and Kolmogorov’s constant for non-helical steady magnetohydrodynamic turbulence have been calculated by solving the flux equations to the first order in perturbation. For zero cross helicity and space dimension $d = 3$, magnetic energy cascades from large length-scales to small length-scales (forward cascade). In addition, there are energy fluxes from large-scale magnetic field to small-scale velocity field, large-scale velocity field to small-scale magnetic field, and large-scale velocity field to large-scale magnetic field. Kolmogorov’s constant for magnetohydrodynamics is approximately equal to that for fluid turbulence $(\approx 1.6)$ for Alfvén ratio $0.5\leq r_{A}\leq \infty$. For higher space-dimensions, the energy fluxes are qualitatively similar, and Kolmogorov’s constant varies as $d^{1/3}$. For the normalized cross helicity $\sigma_{c}\to 1$, the cascade rates are proportional to $(1-\sigma_{c})/(1+\sigma_{c})$, and the Kolmogorov’s constants vary significantly with $\sigma_{c}$.
Volume 63 Issue 3 September 2004 pp 553-561
In this paper a procedure for large-eddy simulation (LES) has been devised for fluid and magnetohydrodynamic turbulence in Fourier space using the renormalized parameters. The parameters calculated using field theory have been taken from recent papers by Verma [1,2]. We have carried out LES on a $64^{3}$ grid. These results match quite well with direct numerical simulations on a $128^{3}$ grid. We show that proper choice of parameters is necessary in LES.
Volume 64 Issue 3 March 2005 pp 333-341
It is well-known that incompressible turbulence is non-local in real space because sound speed is infinite in incompressible fluids. The equation in Fourier space indicates that it is non-local in Fourier space as well. However, the shell-to-shell energy transfer is local. Contrast this with Burgers equation which is local in real space. Note that the sound speed in Burgers equation is zero. In our presentation we will contrast these two equations using non-local field theory. Energy spectrum and renormalized parameters will be discussed.
Volume 65 Issue 2 August 2005 pp 297-310
In this paper we analytically compute the strength of nonlinear interactions in a triad, and the energy exchanges between wave-number shells in incompressible fluid turbulence. The computation has been done using first-order perturbative field theory. In three dimensions, magnitude of triad interactions is large for nonlocal triads, and small for local triads. However, the shell-to-shell energy transfer rate is found to be local and forward. This result is due to the fact that the nonlocal triads occupy much less Fourier space volume than the local ones. The analytical results on three-dimensional shell-to-shell energy transfer match with their numerical counterparts. In two-dimensional turbulence, the energy transfer rates to the nearby shells are forward, but to the distant shells are backward; the cumulative effect is an inverse cascade of energy.
Volume 66 Issue 2 February 2006 pp 447-453
In this paper we apply perturbative field-theoretic technique to helical turbulence. In the inertial range the kinetic helicity flux is found to be constant and forward. The universal constant $H$ appearing in the spectrum of kinetic helicity was found to be 2.47.
Volume 74 Issue 1 January 2010 pp 75-82 Research Articles
In this paper we investigate two-dimensional (2D) Rayleigh–B ́enard convection using direct numerical simulation in Boussinesq fluids with Prandtl number $P = 6.8$ confined between thermally conducting plates. We show through the simulation that in a small range of reduced Rayleigh number $r (770 < r < 890)$ the 2D rolls move chaotically in a direction normal to the roll axis. The lateral shift of the rolls may lead to a global flow reversal of the convective motion. The chaotic travelling rolls are observed in simulations with free-slip as well as no-slip boundary conditions on the velocity field. We show that the travelling rolls and the flow reversal are due to an interplay between the real and imaginary parts of the critical modes.
Volume 81 Issue 4 October 2013 pp 617-629 Research Articles
Tarang is a general-purpose pseudospectral parallel code for simulating flows involving fluids, magnetohydrodynamics, and Rayleigh–Bénard convection in turbulence and instability regimes. In this paper we present code validation and benchmarking results of Tarang. We performed our simulations on $1024^{3}$, $2048^{3}$, and $4096^{3}$ grids using the
Volume 81 Issue 6 December 2013 pp 1037-1043
In this paper, we estimate the magnetic Reynolds number of a typical protostar before and after deuterium burning, and claim for the existence of dynamo process in both the phases, because the magnetic Reynolds number of the protostar far exceeds the critical magnetic Reynolds number for dynamo action. Using the equipartition of kinetic and magnetic energies, we estimate the steady-state magnetic field of the protostar to be of the order of kilogauss, which is in good agreement with observations.
I have tried looking online, but I couldn't find any definitive statements. It would make sense to me that Union and Intersection of two NPC languages would produce a language not necessarily in NPC. Is it also true that NPC languages are not closed under the complement, concatenation, and kleene star operations?
For all of the examples in this answer, I'm taking the alphabet to be $\{0,1\}$. Note that the languages $\emptyset$ and $\{0,1\}^*$ are definitely not NP-complete.
The class of NP-complete languages is not closed under intersection. For any NP-complete language $L$, let $L_0 = \{0w\mid w\in L\}$ and $L_1 = \{1w\mid w\in L\}$. $L_0$ and $L_1$ are both NP-complete but $L_0\cap L_1 = \emptyset$.
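The prefix construction can be illustrated with a finite toy stand-in for $L$ (a genuine NP-complete language is infinite, of course; the point is only that the prefixes force disjointness):

```python
# Finite toy stand-in for the prefix construction; L here is a hypothetical
# placeholder language, not an actual NP-complete one.
L = {"0", "01", "110"}
L0 = {"0" + w for w in L}   # L0 = { 0w : w in L }
L1 = {"1" + w for w in L}   # L1 = { 1w : w in L }
print(L0 & L1)  # → set(): no string starts with both 0 and 1
```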
The class of NP-complete languages is not closed under union. Given the NP-complete languages $L_0$ and $L_1$ from the previous part, let $L'_0 = L_0 \cup \{1w\mid w\in \{0,1\}^*\}\cup\{\varepsilon\}$ and $L'_1 = L_1\cup \{0w\mid w\in \{0,1\}^*\}\cup\{\varepsilon\}$. $L'_0$ and $L'_1$ are both NP-complete but $L'_0\cup L'_1 = \{0,1\}^*\!$.
The class of NP-complete languages is not closed under concatenation. Consider the NP-complete languages $L'_0$ and $L'_1$ from the previous part. Since both languages contain $\varepsilon$, we have $L'_0L'_1 \supseteq L'_0\cup L'_1 = \{0,1\}^*\!$.
The class of NP-complete languages is not closed under Kleene star. For any NP-complete language $L$, $L\cup \{0,1\}$ is NP-complete but $\big(L\cup \{0,1\}\big)^* = \{0,1\}^*\!$.
If the class of NP-complete problems is closed under complementation, then NP = coNP. Whether this is true or not is one of the major open problems in complexity theory.
Take a look at the proofs for union, intersection, concatenation, and Kleene star of NP languages, here. It seems like a similar argument could be made for NP-Complete languages.
For notation, let $A$ be an oracle that decides a known NP-Complete problem like 3-SAT (see the definition of Turing reducible); $L_1$ and $L_2$ are NP-Complete languages; $M_1$ and $M_2$ are Turing machines that decide $L_1$ and $L_2$ using $A$; $L_3$ is $L_1 \cup L_2$; $M_3$ is a Turing machine that decides $L_3$.
In the case of union from 1, we can create a new machine $M_3$ that decides $L_3$ by calling $M_1$ and $M_2$ as subroutines. In turn, each time $M_1$ or $M_2$ is called, $A$ is also called. So $M_3$ decides $L_3$ using $A$. By the argument from 1, the running time of $M_3$ is polynomial, and since it uses $A$ as a subroutine, $L_3$ is NP-Complete. In other words, $L_3$ is NP-Complete for the same reason that $L_1$ and $L_2$ are NP-Complete.
The same argument can be made for intersection, and it looks like similar arguments could be made for concatenation and Kleene star.
In the case of complement, it seems likely to be difficult to prove, for the same reasons it is difficult to prove that NP is closed under complement.
Can someone show that the roots and the Cartan subalgebra are dual vector spaces?
I don't see how simple roots acting on non-corresponding indices of a Cartan basis produce 0 and a simple root evaluated on its corresponding Cartan basis element equals 1.
According to the definition of dual spaces: $\beta (t_\beta)=1$ and $\beta (t_\alpha)=0$, where $\alpha,\beta$ (simple roots) correspond to $t_\alpha, t_\beta$ (Cartan basis) respectively.
If this were true, then the inner product of two distinct simple roots would be zero, since the inner product of roots uses the restriction of the Killing form to the Cartan subalgebra, and so does the evaluation of a root on a Cartan element.
So $(\alpha,\beta)=k (t_\alpha, t_\beta)_{H\times H}= \alpha (t_\beta)=\beta (t_\alpha)$
Where $H$ is the max Cartan subalgebra.
Since $\beta$ is being evaluated on a non-corresponding basis element ($t_\alpha$), as above, $\beta (t_\alpha)$ would be 0, which is not true for most root systems. In most root systems the inner product of distinct simple roots is less than 0. Also $\beta (t_\beta)$ is not always 1.
So I am confused how the dual space axioms apply here.
In addition, I see how the Cartan basis is orthonormal, or at least orthogonal, under its own Killing form, but not under the Lie algebra's Killing form restricted to the Cartan subalgebra, which is non-degenerate.
I am probably confusing a bunch of stuff.
Help is really appreciated |
I have an aluminium profile that contains too many LEDs and heats up around 70°C, which is about +20°C above my expectation. Would an external black coating increase the efficiency of the heat dissipation by the aluminium?
Possibly. Implicit in your question is the assumption that radiative heat transfer is playing or could play an important role in your configuration (vs. convection). If so, then applying a "black" coating (and thus increasing the emissivity to essentially 1, with the caveat that we're talking about the maximum wavelengths of emission at 70°C) could benefit you. However, note that the coating itself may hinder heat transfer across various interfaces.
I would plug the relevant numbers into the various formulas for heat transfer ($hA_\mathrm{surface}(T-T_\infty)$ for convection, $kA_\mathrm{cross\,section}\Delta T/\Delta x$ for conduction, $\sigma\epsilon A_\mathrm{surface\,facing\,surroundings}(T^4-T^4_\infty)$ for radiation, as described in any introductory heat transfer textbook and at many locations online) to estimate the relative magnitude of convection vs. radiation before attempting to optimize a heat transfer mechanism that might be unimportant.
As an example, the natural convection coefficient $h$ can be very broadly estimated to be around $10\,\mathrm{W}\,\mathrm{m}^{-2}\,\mathrm{K}^{-1}$ by order of magnitude (I'm sure exceptions exist in certain geometries). From the numbers you've given, one can estimate that changing the emissivity from 0 to 1 (by anodizing the aluminum, for instance) could potentially boost the outgoing heat flux by a detectable amount, but probably less than 100%. Furthermore, as other posters have noted, you may have other options to increase the outgoing heat flux much more substantially—using fins or a fan, perhaps.
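To make the comparison concrete, here is the kind of back-of-the-envelope estimate meant above, with assumed values of a 70°C surface, 25°C ambient air, and $h = 10\,\mathrm{W\,m^{-2}\,K^{-1}}$ (all assumptions, not measurements):

```python
# Back-of-the-envelope comparison of convective vs. radiative heat flux.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def convective_flux(h, T_s, T_inf):
    """Convective heat flux per unit surface area, W/m^2."""
    return h * (T_s - T_inf)

def radiative_flux(eps, T_s, T_inf):
    """Net radiative heat flux to surroundings at T_inf, W/m^2."""
    return SIGMA * eps * (T_s**4 - T_inf**4)

T_s, T_inf = 343.0, 298.0  # kelvin
print(convective_flux(10.0, T_s, T_inf))  # ~450 W/m^2
print(radiative_flux(1.0, T_s, T_inf))    # ~340 W/m^2 at emissivity 1
```

With these assumed numbers the two mechanisms are comparable, which is why raising the emissivity can help but cannot multiply the total dissipation.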
Yes, a black body will radiate more heat away than a white body. That much is correct.
However, this is true
for each wavelength individually. A paint that reflects red but absorbs blue will radiate blue light effectively, but not red light. And the overall effect on heat radiation is the emission of a perfect black body times the absorption coefficient for each wavelength.
Now, what kind of wavelengths does a 70°C black body radiate most? Well, most definitely not visible light. It's somewhere in the infrared spectrum. And that is the problem:
Black color says something about visible wavelengths only, not about infrared absorption. You can judge the absorption in the visible band of wavelengths, but you need a high absorption in a totally different band.
Thus, when you buy a typical black paint, you do not know whether it will increase or decrease the heat radiation of a 70°C object. Of course, you might guess that the absorption coefficient in the relevant infrared band is correlated with that of visible light, but that need not be the case. I mean, you can buy blue paint, and you can buy red paint, and each will absorb the light that the other reflects. Your black paint will be "white" in some other band of wavelengths, but in which?
You seem to be wanting to make use of black body radiation. The formula for energy dissipated through black body radiation is $\sigma T^4$, where $\sigma=5.670367(13)\times10^{−8}\ \mathrm{W\ m^{-2}\ K^{-4}}$. Plugging in $T=343\ \mathrm K$, we get $785\ \mathrm{W\ m^{-2}}$. However, it would also be absorbing heat from the surroundings. If we model the surroundings as a black body emitting at temperature $300\ \mathrm K$, we get $459\ \mathrm{W\ m^{-2}}$, for a net of $326\ \mathrm{W\ m^{-2}}$. This means that for every square centimeter, you'll get getting around $33\ \mathrm{mW}$ of cooling. This is probably going to be only a small fraction of the cooling you need. Most of your cooling is coming from conduction to the air, and then convection within the air, and as others have pointed out, a black coating will likely decrease the heat conductivity of the aluminum. It will likely be more productive to increase the surface area and air flow, and decrease the surrounding temperature.
The modelling of the surrounding as being a black body at room temperature is, of course, questionable, but even without it, you'll be getting only $78\ \mathrm{mW\ cm^{-2}}$. (And if it's in an enclosed space, you may be getting less than $33\ \mathrm{mW}$, as it will be heating up the surroundings, and that heat will just be radiated back.) Furthermore, it's quite possible that the effective black body temperature of the surroundings is greater than the temperature of your profile. Unless it's in a dark room, it will be absorbing heat from whatever lighting there is in the room. Thus, its net black body heat exchange may be positive, in which case making it darker would make things worse even without taking into account conductivity. Since we're not actually dealing with perfect black body radiation, there is a possibility that you can decrease its albedo in $343\ \mathrm K$ range without significantly increasing it in the visible range, but that's rather advanced engineering.
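The figures quoted above are easy to reproduce with the same Stefan-Boltzmann constant (a quick sanity check, not new physics):

```python
# Reproducing the black-body figures quoted above.
sigma = 5.670367e-8           # Stefan-Boltzmann constant, W m^-2 K^-4

emit = sigma * 343**4         # emitted at 70 C (343 K)
absorb = sigma * 300**4       # absorbed from 300 K surroundings
net = emit - absorb           # net radiative loss per m^2

print(round(emit), round(absorb), round(net))  # → 785 459 326
```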
Your heat sink gains heat from its contact with the hot LEDs and loses it to the environment (air) around it. Since it uses convection to lose heat, any coating that has less thermal conductivity than aluminium will act as an insulator and will cause it to get hotter, which will work against you.
As one of the comments points out, you can increase the amount of heat lost to the environment by increasing the surface area of your heat sink.
You can also look for a material with higher thermal conductivity so heat is transported faster inside the material.
There are thermal dissipative coatings which can be used to improve heat removal performance. These have both good IR emissivity and low thermal resistance, and are very different from regular black paint (they are not necessarily black to begin with).
Typically dissipative coatings are only used at high temperatures (hundreds of degrees). At lower temperatures, even specialized coatings only harm heat dissipation by adding thermal resistance.
Temperatures around 70°C are completely normal for modern LEDs, so unless your LEDs came with a datasheet which specifies a lower temperature, or the profile in question should remain cool for other reasons (like contact with skin), I would advise to leave it as it is. If you have to make it cooler, either find a bigger profile, a profile optimized for convection cooling (bigger surface area) or increase the airflow. |
I'm trying to learn how to apply the WKB approximation. Given the following problem:
An electron, say, in the nuclear potential$$U(r)=\begin{cases} -U_{0} & \text{if } r < r_{0} \\ k/r & \text{if } r > r_{0} \end{cases}$$ 1. What is the radial Schrödinger equation for the $\ell=0$ state?
2. Assuming the energy of the barrier (i.e. $k/r_{0}$) to be high, how do you use the WKB approximation to estimate the bound state energies inside the well?
For the first question, I thought the radial part of the equation of motion was the following
$$\left \{ - {\hbar^2 \over 2m r^2} {d\over dr}\left(r^2{d\over dr}\right) +{\hbar^2 \ell(\ell+1)\over 2mr^2}+V(r) \right \} R(r)=ER(r)$$
Do I simply just let $\ell=0$ and obtain the following? Which potential do I use?
$$\left \{ - {\hbar^2 \over 2m r^2} {d\over dr}\left(r^2{d\over dr}\right) +V(r) \right \} R(r)=ER(r)$$
For the other question, do I use $\int_{r_{1}}^{r_{2}} \sqrt{2m(E-V(r))}\,dr=(n+1/2)\pi\hbar$, where $n=0,1,2,\ldots$? If so, what are the turning points? And again, which of the two potentials do I use?
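As a rough sanity check on the second part, one can push the quantization condition to its crudest limit: if the Coulomb barrier at $r_0$ is treated as effectively impenetrable, the allowed region is $0 < r < r_0$ with $V = -U_0$, and the condition solves in closed form. This is only an illustrative sketch (the function name and the units $\hbar = m = 1$ are my own choices, not part of the problem):

```python
import math

# Crude WKB estimate assuming the barrier at r0 is impenetrable, so
#   integral_0^{r0} sqrt(2 m (E + U0)) dr = (n + 1/2) pi hbar
# gives E_n = (n + 1/2)^2 pi^2 hbar^2 / (2 m r0^2) - U0.
def wkb_energies(U0, r0, m=1.0, hbar=1.0):
    """Bound-state (E < 0) energies in this crude approximation."""
    energies = []
    n = 0
    while True:
        E = (n + 0.5) ** 2 * math.pi ** 2 * hbar ** 2 / (2 * m * r0 ** 2) - U0
        if E >= 0:  # no longer bound
            return energies
        energies.append(E)
        n += 1

print(wkb_energies(U0=100.0, r0=1.0))  # five bound states in units hbar = m = 1
```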
The starting point and notations used here are presented in Two puzzles on the Projective Symmetry Group (PSG)?. As we know, the Invariant Gauge Group (IGG) is a normal subgroup of the Projective Symmetry Group (PSG), but it may not be a normal subgroup of $SU(2)$, e.g. $IGG=U(1)$. But this may result in trouble:
By definition, we can calculate the $IGG$ and $IGG'$ of the $SU(2)$ gauge equivalent mean-field Hamiltonians $H(\psi_i)$ and $H(\widetilde{\psi_i})$, respectively. And it's easy to see that for each site $i$, we have $U_i'=G_iU_iG_i^\dagger$, where $U_i'\in IGG'$ and $U_i\in IGG$, which means that $IGG'=G_i\text{ }IGG \text{ }G_i^\dagger$. Now the trouble is explicit: if $IGG$ (like $U(1)$) is not a normal subgroup of $SU(2)$, then $IGG'$ may not equal $IGG$. So does this mean that two $SU(2)$ gauge equivalent mean-field Hamiltonians $H(\psi_i)$ and $H(\widetilde{\psi_i})$ may have different IGGs? Or in other words, does the low-energy gauge structure depend on the choice of $SU(2)$ gauge freedom?
Thank you very much. This post imported from StackExchange Physics at 2014-03-09 08:42 (UCT), posted by SE-user K-boy
This is a list of problems recently discussed on ##algorithms which (probably) allow better solutions. The stated complexity usually comes with no guarantee.
𝟏. Assume you are given a sequence of $N$ prizes of positive values $A[1],\ldots A[N]$; additionally, every prize is designated as either "type A" or "type B". You are to choose at most $M$ prizes of maximal total value subject to the following condition: every chosen prize of type A precedes (has smaller index in $A$ than) every chosen prize of type B. ($M$ may be parametrically smaller than $N$.) 𝘊𝘶𝘳𝘳𝘦𝘯𝘵 𝘳𝘦𝘴𝘶𝘭𝘵: $O(N \log M)$.
𝟐. For a set $\{S_1,\ldots S_N\}$ of $N$ subsets of $\{1,2,\ldots N\}$, determine if it contains two disjoint subsets $S_i\cap S_j= \emptyset$. 𝘊𝘶𝘳𝘳𝘦𝘯𝘵 𝘳𝘦𝘴𝘶𝘭𝘵: $O(N^{2.373})$.
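For comparison with the matrix-multiplication bound in problem 𝟐, the naive approach is a few lines with bitsets: word-packed masks give roughly $O(N^3/w)$ for word size $w$ (the function name below is mine):

```python
# Naive bitset check: represent each subset of {1..N} as an integer mask;
# S_i and S_j are disjoint iff mask_i & mask_j == 0.
def has_disjoint_pair(sets):
    masks = [sum(1 << x for x in s) for s in sets]
    n = len(masks)
    return any(masks[i] & masks[j] == 0
               for i in range(n) for j in range(i + 1, n))

print(has_disjoint_pair([{1, 2}, {2, 3}, {3, 4}]))  # → True ({1,2} vs {3,4})
```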
𝟑. Let $A$ be an integer array of length $N$ and $K$ a given integer. What is the maximal length of a contiguous subarray $X=A[p:q]$ such that every element of $X$ occurs in $X$ at least $K$ times? 𝘊𝘶𝘳𝘳𝘦𝘯𝘵 𝘳𝘦𝘴𝘶𝘭𝘵: $O(N \log N)$, 𝘣𝘶𝘵 𝘴𝘪𝘮𝘱𝘭𝘦𝘳 𝘢𝘭𝘨𝘰𝘳𝘪𝘵𝘩𝘮𝘴 𝘰𝘧 𝘵𝘩𝘦 𝘴𝘢𝘮𝘦 𝘤𝘰𝘮𝘱𝘭𝘦𝘹𝘪𝘵𝘺 𝘸𝘦𝘭𝘤𝘰𝘮𝘦.
𝟒. Let $G=(V,E)$ be a sparse directed acyclic graph with $|V|=N$ vertices and $|E|=O(N)$ edges. Find a topological ordering of $G$ (if $(u,v)\in E$, then $u$ comes before $v$) such that $\max_{v\in V} f(v,i(v))$ is minimized, where $i(v)$ is the index of $v$ in the ordering, and $f(v,t)$ is a given function (cost to process $v$ at time $t$), monotonically increasing in $t$. 𝘊𝘶𝘳𝘳𝘦𝘯𝘵 𝘳𝘦𝘴𝘶𝘭𝘵: $O(N \log^2N)$.
𝟓. You are given a set of $N$ strings of length $M$ each from a given alphabet $A$ of $|A|=K$ symbols. Output a minimal cardinality set of strings with characters from $A\cup\{*\}$ such that expanding every $*$ as a single-character wildcard (say, $0{*}2{*}$ for $A=\{0,1,2\}$ turns into 9 words) you obtain the initial set. 𝘊𝘶𝘳𝘳𝘦𝘯𝘵 𝘳𝘦𝘴𝘶𝘭𝘵: $O(NM^2)$ 𝘸𝘰𝘳𝘴𝘵 𝘤𝘢𝘴𝘦, $O(NM)$ 𝘦𝘹𝘱𝘦𝘤𝘵𝘦𝘥.
𝟔. Let $A$ be a permutation of $\{1,2,\ldots N\}$, and $B$ an array of $N$ non-negative integers. Is there a polynomial algorithm that constructs a braid where the $k$-th thread ends in position $A[k]$ and participates in $B[k]$ intersections on the way (or outputs "impossible" if no such braid exists)? The picture at http://bit.ly/2cIldiC is a solution for $A=(2,1,3,4)$ and $B=(3,1,2,4)$. 𝘊𝘶𝘳𝘳𝘦𝘯𝘵 𝘳𝘦𝘴𝘶𝘭𝘵: ?
𝟕. Let $S$ be a multiset of $N$ integers with zero sum. Define $k(S)$ to be the maximal possible number of subsets in a partition of $S$ into subsets with zero sum each. What is the best $A$ such that we can find $k(S)$ in time $O^*(A^N)$? 𝘊𝘶𝘳𝘳𝘦𝘯𝘵 𝘳𝘦𝘴𝘶𝘭𝘵: $A=2$.
𝟖. Let $A$ be an integer array of length $N$ sorted in non-decreasing order, and $g>0$ a given integer value. What is the minimal number of elements of $A$ that need to be changed so that $A$ remains non-decreasing but all pairs of neighboring elements have difference at least $g$? 𝘊𝘶𝘳𝘳𝘦𝘯𝘵 𝘳𝘦𝘴𝘶𝘭𝘵: $O(N^2)$.
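For problem 𝟖, assuming changed entries may take arbitrary integer values, one standard $O(N^2)$ approach is to keep a maximum subset of entries unchanged: positions $j < i$ can both survive iff $A[i]-A[j] \ge g\,(i-j)$, since the $i-j-1$ entries between them can then be filled with gaps of at least $g$. A sketch (the function name is mine):

```python
# O(N^2) DP: dp[i] = most kept entries in a prefix whose last kept entry
# is A[i]; the answer is N minus the best dp value.
def min_changes(A, g):
    n = len(A)
    dp = [1] * n
    for i in range(n):
        for j in range(i):
            if A[i] - A[j] >= g * (i - j):
                dp[i] = max(dp[i], dp[j] + 1)
    return n - max(dp, default=0)

print(min_changes([1, 2, 3, 4], 2))  # → 3 (e.g. rewrite to 1, 3, 5, 7)
```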
Small subgroup confinement attack on the Diffie-Hellman algorithm
Let $\mathbb{Z}_p^*$ be a group where $p$ is a large prime and $\alpha$ a primitive root modulo $p$. Consider that Alice and Bob want to perform a key agreement over the whole cyclic group $\mathbb{Z}^*_p$ using the Diffie-Hellman algorithm. The following sequence diagram illustrates how Eve can perform a small subgroup confinement attack:
By doing this, if $k$ is well chosen, the secret $S$ can be found by exhaustive search.
How to choose the $k$-value
As $p$ is a prime number, the order $p-1$ of $\mathbb{Z}^*_p$ is composite, so there exist non-trivial subgroups. Say $\mathbb{G}_w$ is one small subgroup of prime order $w$. So by picking $k = \frac{p-1}{w}$, the secret value $S \in \mathbb{G}_w$, and as it is a small subgroup, $S$ can be found by exhaustive search efficiently.
Why does it work?
In this section I will try to prove that $S \in \mathbb{G}_w$.
We know that $w \mid (p-1)$, so there exists $k$ such that $p-1 = w \times k$. Moreover, we know that $ord(\alpha) = p - 1$ because $\alpha$ is a primitive root modulo $p$, and a standard result on orders in cyclic groups is that, for an element $x$, $ord(x^k) = \frac{ord(x)}{(ord(x) \wedge k)}$, where $\wedge$ denotes the gcd. So in our case we have:
$ord(\alpha^{ab(p-1)/w}) = ord(\alpha^{abk}) = \frac{ord(\alpha)}{(ord(\alpha) \wedge abk)} = \frac{(p-1)}{((p-1) \wedge abk)} = \frac{wk}{ (wk \wedge abk)}$, and we know that $(wk \wedge abk) = k$ because $w$ is prime, provided $w \nmid ab$ (if $w \mid ab$, then $S = 1$, which still lies in $\mathbb{G}_w$). So we obtain that $ord(\alpha^{ab(p-1)/w}) = \frac{wk}{k} = w$, so we can conclude that $S \in \mathbb{G}_w$.
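The attack can be demonstrated end-to-end with toy numbers (the values $p=23$, $\alpha=5$, and the exponents below are all illustrative; a real modulus is hundreds of digits):

```python
# Toy subgroup-confinement demo: p = 23, so p - 1 = 2 * 11, subgroup order
# w = 11, k = (p - 1) / w = 2, and alpha = 5 is a primitive root mod 23.
p, alpha, w = 23, 5, 11
k = (p - 1) // w

a, b = 7, 9                        # Alice's and Bob's secret exponents
S = pow(alpha, a * b * k, p)       # the confined shared secret

subgroup = {pow(alpha, k * i, p) for i in range(w)}
print(len(subgroup), S in subgroup)  # → 11 True: brute force over G_w finds S
```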
Could someone confirm or refute my proof?
I think you need the condition that the Markov Chain is reversible.
Def: Let $X$ be an irreducible Markov chain such that $X_n$ has the stationary distribution $\pi$ for all $n$. The chain is called reversible if the transition matrices of $X$ and its time-reversal $Y$ are the same, which is to say that $$\pi_i P_{ij} = \pi_j P_{ji} \quad \forall i,j$$
Thm: Let $P$ be the transition matrix of an irreducible chain $X$, and suppose that there exists a distribution $\pi$ such that
$$\pi_i P_{ij} = \pi_j P_{ji} \quad \forall i,j$$
Then $\pi$ is a stationary distribution of the chain. Furthermore, $X$ is reversible in equilibrium.
Suppose that $\pi$ satisfies these conditions; then
$$\sum_{i} \pi_i P_{ij} = \sum_{i} \pi_j P_{ji} = \pi_j \sum_{i} P_{ji} = \pi_j$$
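The theorem is easy to check numerically on a toy chain; a symmetric transition matrix satisfies detailed balance with the uniform distribution (the matrix below is just an example):

```python
# Toy 3-state chain: a distribution satisfying detailed balance
# (pi_i P_ij == pi_j P_ji) is stationary (pi P == pi).
P = [[0.5, 0.3, 0.2],
     [0.3, 0.4, 0.3],
     [0.2, 0.3, 0.5]]          # symmetric, rows sum to 1
pi = [1 / 3, 1 / 3, 1 / 3]
n = len(P)

# detailed balance
assert all(abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < 1e-12
           for i in range(n) for j in range(n))
# stationarity, mirroring the chain of equalities above
for j in range(n):
    assert abs(sum(pi[i] * P[i][j] for i in range(n)) - pi[j]) < 1e-12
print("detailed balance implies stationarity on this example")
```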
Random graphs with small world topology
In graphs with small world topology, nodes are highly clustered, yet the path length between them is small. A topology like this can make search problems very difficult, since local decisions quickly propagate globally. In other words, shortcuts can mislead heuristics. Further, it has been shown that many different search problems have a small world topology.
Watts and Strogatz [1] propose a model for small world graphs. First, we start with a regular graph. Disorder is introduced into the graph by randomly rewiring each edge with probability $p$. If $p=0$, the graph is completely regular and ordered. If $p=1$, the graph is completely random and disordered. Values of $0 < p < 1$ produce graphs that are neither completely regular nor completely disordered. Graphs don't have a small world topology for $p=0$ and $p=1$.
Watts and Strogatz start from a ring lattice with $n$ nodes and $k$ nearest neighbours. A node is chosen from the lattice uniformly at random, and a rewired edge is reconnected to it. If rewiring would create a duplicate edge, it is left untouched. For large, sparse graphs they demand $n \gg k \gg \ln(n) \gg 1$, where $k \gg \ln(n)$ ensures the graph remains connected.
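A sketch of the rewiring procedure under these rules (the function name and parameters are my own choices; for serious use, a library such as NetworkX provides `watts_strogatz_graph`):

```python
import random

# Ring lattice with n nodes and k nearest neighbours (k/2 on each side);
# each edge is rewired with probability p, and a rewiring that would
# duplicate an existing edge is skipped, as in the model.
def watts_strogatz(n, k, p, seed=0):
    rng = random.Random(seed)
    lattice = set()
    for u in range(n):
        for d in range(1, k // 2 + 1):
            lattice.add((u, (u + d) % n))
    edges = set()
    for (u, v) in sorted(lattice):
        if rng.random() < p:
            w = rng.randrange(n)
            if w != u and (u, w) not in edges and (w, u) not in edges:
                edges.add((u, w))   # rewired shortcut
                continue
        edges.add((u, v))           # kept (or rewiring skipped)
    return edges

print(len(watts_strogatz(20, 4, 0.0)))  # → 40, the pure ring lattice
```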
The model of Watts and Strogatz is somewhat popular, but does have certain drawbacks. Walsh [2] investigates the effects of randomization and restart strategies in graphs generated using the model. There's also a paper by Virtanen [3], which covers other models motivated by the need of realistic modeling of complex systems.
Random simple planar graphs
Generating random simple planar graphs on $n$ vertices uniformly at random can be done efficiently. The number of planar graphs with $n$ vertices, $g_n$, can be determined using generating functions. The value of $g_n$ for $1 \leq n \leq 9$ is $1,2,8,64,1023,32071,1823707,163947848$ and $20402420291$, respectively. The numbers grow in a complicated fashion, and no simple closed formula for them is expected. Giménez and Noy [4] give a precise asymptotic estimate for the growth of $g_n$:$$g_n \sim g \cdot n^{-7/2} \gamma^n n!,$$where $g$ and $\gamma$ are constants determined analytically with approximate values $g \approx 0.42609 \cdot 10^{-5}$ and $\gamma \approx 27.22687$.
The proof of this result leads to a very efficient algorithm by Fusy [5]. Fusy gives an approximate-size random generator and also an exact-size random generator of planar graphs. The approximate-size algorithm runs in linear time, while the exact-size algorithm runs in quadratic time. The algorithms are based on a decomposition according to successive levels of connectivity: planar graph $\rightarrow$ connected $\rightarrow$ 2-connected $\rightarrow$ 3-connected $\rightarrow$ binary tree.
The algorithms then operate by translating a decomposition of a planar graph into a random generator using the framework of Boltzmann samplers by Duchon, Flajolet, Louchard and Schaeffer [6]. Given a combinatorial class, a Boltzmann sampler draws an object of size $n$ with probability proportional to $x^n$, where $x$ is a real parameter tuned by the user. The probability distribution is spread over all the objects of the class, and objects of the same size have the same probability of occurring; that is, the distribution is uniform when restricted to any fixed size.
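To make the "probability proportional to $x^n$" idea concrete, here is a toy Boltzmann sampler, as a Python sketch, for a much simpler class than planar graphs: plane binary trees where every internal node has two children and size counts nodes. This is my own illustrative example, not the generator of [5] or [6]. The class satisfies $T(x) = x + x\,T(x)^2$, so $T(x) = (1 - \sqrt{1-4x^2})/(2x)$ for $x \le 1/2$.

```python
import random

def boltzmann_binary_tree(x, rng):
    """Return the size of a sampled plane binary tree, drawn with
    probability proportional to x^size."""
    t = (1 - (1 - 4 * x * x) ** 0.5) / (2 * x)  # T(x)
    # P(leaf) = x / T(x); P(internal) = x * T(x); these sum to 1
    # because x + x*T(x)^2 = T(x).
    if rng.random() < x / t:
        return 1  # a leaf: size 1
    return 1 + boltzmann_binary_tree(x, rng) + boltzmann_binary_tree(x, rng)
```

Tuning $x$ toward the singularity at $1/2$ pushes the expected size up, which is exactly how such samplers target an approximate size.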
For a lightweight introduction, see a presentation by Fusy.
[1] D.J. Watts and S.H. Strogatz. Collective dynamics of 'small-world' networks. Nature, 393:440-442, 1998.
[2] Toby Walsh. Search in a small world. Proceedings of the 16th International Joint Conference on Artificial Intelligence (IJCAI-99-Vol2), pages 1172-1177, 1999.
[3] Satu Virtanen. Properties of nonuniform random graph models. Research Report A77, Helsinki University of Technology, Laboratory for Theoretical Computer Science, 2003.
[4] O. Giménez and M. Noy. Asymptotic enumeration and limit laws of planar graphs, arXiv:math.CO/0501269. An extended abstract has appeared in Discrete Mathematics and Theoretical Computer Science AD (2005), 147-156.
[5] E. Fusy. Quadratic and linear time generation of planar graphs, Discrete Mathematics and Theoretical Computer Science AD (2005), 125-138.
[6] P. Duchon, P. Flajolet, G. Louchard, and G. Schaeffer. Boltzmann sampler for the random generation of combinatorial structures. Combinatorics, Probability and Computing, 13(4-5):577-625, 2004. |
Given two non-trivial (not $\emptyset$ or $\Sigma^*$) languages $A$, $B$ over an alphabet $\Sigma$, which of the following is correct:
a. There is a language $C$ such that $A\leq_pC$ and $B\leq_pC$.
[..]
c. There is a language $C$ such that $C\leq_pA$ and $C\leq_pB$.
According to two different sources I've seen the correct answer seems to be (a), which is exactly why I'm trying to understand two things:
Why is (a) a correct answer? Taking two languages in $EXP$, for example, I do not see why it's obvious they are polynomially reducible to each other.
Why is (c) not a correct answer? Take a language $C\in P$. Since $A$ and $B$ are non-trivial, there are some $a\in A, a'\notin A$ and $b\in B, b'\notin B$. Therefore, given a polynomial-time TM $M$ deciding $C$, I can define a reduction $f$ to $A$ by $f(x)=a$ if $M(x)$ accepts and $f(x)=a'$ otherwise, and similarly for $B$. So it seems to me that (c) is also a correct answer.
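The construction described in the question can be illustrated with a toy instance. Here the languages $C$ (even numbers) and $A$ (multiples of 3) and the witnesses are hypothetical choices of mine, purely for demonstration; this is a Python sketch, not part of the question.

```python
# Toy illustration of the reduction f: decide C in polynomial time,
# then map x to a fixed witness a in A or a' not in A accordingly.

def in_C(x):          # polynomial-time decider for C = even numbers
    return x % 2 == 0

def in_A(x):          # A = multiples of 3
    return x % 3 == 0

a, a_prime = 3, 1     # a in A, a' not in A

def f(x):
    """Reduction C <=_p A: x in C iff f(x) in A."""
    return a if in_C(x) else a_prime
```

The defining property of the reduction, $x \in C \iff f(x) \in A$, holds by construction.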
I'm currently taking a course in computational physics. I'm new to computational physics and programming in general. I'm using numerical recipes to try and integrate the radial Schrodinger equation with a Lennard-jones potential.
$$\left[ \frac{\hbar^2}{2m}\frac{d^2}{dr^2} + \left( E-V(r)-\frac{\hbar^2 l (l+1)}{2mr^2}\right) \right] u_l(r)=0$$
$$V(r)= \epsilon \left[ \left( \frac{\rho}{r} \right)^{12}-2\left(\frac{\rho}{r}\right)^6 \right]$$
Numerical recipes has a function called odeint which will use a fifth-order Runge-Kutta algorithm to integrate an ordinary differential equation for you. The function has an adjustable step-size which appears to be causing problems in my code. Namely, my step-size is going to zero, which causes numerical recipes to throw an error and exit prematurely.
I am doing the integration from $r_{min}=\frac{\rho}{2}$ to $r_{max}=5\rho$ numerically, and up to my minimum value analytically in order to take care of the singularity at zero. I have attached my code below, and more information on odeint and how it works can be found at: http://www.itp.uni-hannover.de/Lehre/cip/odeint_c.pdf. Can anyone help me understand where I'm going wrong?
#include <stdio.h>
#include <math.h>
#define NRANSI
#include "nr.h"
#include "nrutil.h"
#define N 2

float dxsav, *xp, **yp;   /* defining declarations */
int kmax, kount;
int nrhs;                 /* counts function evaluations */

/* Schrodinger equation and L-J parameters */
double alpha = 6.12;
double rho = 3.57;
double epsilon = 5.9;
double l = 1.0;
double energy = 3;
double rmin = 1.785;
double rmax = 17.85;

void derivs(float x, float y[], float dydx[])
{
    nrhs++;
    printf("xodeint check: x=%f\n", x);
    dydx[1] = y[2];
    dydx[2] = alpha*((l*(l+1)/(alpha*x*x))
              + epsilon*(pow(rho/x,12) - 2*pow(rho/x,6)) - energy)*y[1];
}

int main(void)
{
    int i, nbad, nok;
    float eps = 1.0e-4, h1 = 0.1, hmin = 0, x1 = rmin, x2 = rmax, *ystart;

    ystart = vector(1, N);
    xp = vector(1, 200);
    yp = matrix(1, 10, 1, 200);
    ystart[1] = 0.93583;
    ystart[2] = 0.17385;
    nrhs = 0;
    kmax = 100;
    dxsav = (x2 - x1)/20.0;
    /* printf("%f\n", h1); */
    odeint(ystart, N, x1, x2, eps, h1, hmin, &nok, &nbad, derivs, rkqs);
    printf("\n%s %13s %3d\n", "successful steps:", " ", nok);
    printf("%s %20s %3d\n", "bad steps:", " ", nbad);
    printf("%s %9s %3d\n", "function evaluations:", " ", nrhs);
    printf("\n%s %3d\n", "stored intermediate values: ", kount);
    printf("\n%8s %18s %15s\n", "r", "integral", "x^2");
    for (i = 1; i <= kount; i++)
        printf("%10.4f %16.6f %14.6f\n", xp[i], yp[1][i], xp[i]*xp[i]);
    free_matrix(yp, 1, 10, 1, 200);
    free_vector(xp, 1, 200);
    free_vector(ystart, 1, N);
    return 0;
}
#undef NRANSI
This outputs
Numerical Recipes run-time error...stepsize underflow in rkqs...now exiting to system... |
BERT
BERT is a self-supervised method, which uses just a large set of unlabeled textual data to learn representations broadly applicable for different language tasks.
At a high level, BERT’s pre-training objective, which is what’s used to get its parameters, is a language modelling (LM) problem. LM is an instance of parametric modeling applied to language.
Typical LM task: what’s the probability that the next word is “cat” given the sentence is “The dog chased the ????”
Let’s consider a natural language sentence \(x\). In some way, we’d like to construct a loss function \(L\) for a language modeling task. We’ll keep it abstract for now, but, if we set up the model \(M\) right, and have something that generally optimizes \(L(M(\theta), x)\), then we can interpret one of BERT’s theses as the claim that this representation transfers to new domains.
That is, for some very small auxiliary model \(N\) and a set of parameters \(\theta'\) close enough to \(\theta\), we can optimize a different task’s loss (say, \(L'\), the task that tries to classify sentiment \(y\)) by minimizing \(L'(N(\omega)\circ M(\theta'),(x, y))\).
One of the reasons we might imagine this to work is by viewing networks like \(M(\theta')\) as featurizers that create a representation ready for the final layer to do a simple linear classification on.
Indeed, the last layer of a neural network performing a classification task is just a logistic regression on the features generated by the layers before it. It makes sense that those features could be useful elsewhere.
Contribution
The motivation for this kind of approach (LM pre-training and then a final fine-tuning step) versus task-specific NLP is twofold:
Data volume is much larger for the LM pre-training task, and the approach can solve multiple problems at once.
Thus, the contributions of the paper are:
An extremely robust, generic approach to pretraining: 11 state-of-the-art results in one paper, with a simple algorithm. The effectiveness is profound because (1) the general principle of self-supervision can likely be applied elsewhere and (2) ablation studies in the paper show that representation is the bottleneck.
Technical Insights
The new training procedure and architecture that BERT provides is conceptually simple.
BERT provides deep, bidirectional, context-sensitive encodings.
Why do we need all three of these things? Let’s consider a training task, next sentence prediction (NSP), to demonstrate.
We can’t claim that this is exactly what’s going on in BERT, but as humans we certainly require bidirectional context to answer such a question. In particular, to extract some kind of logical relation between the entities in a sentence, we first need (bidirectional) context. I.e., to answer whether “buying milk” is something we do in a store, we need to look at the verb, object, and location.
What’s more, to answer complicated queries about the coherence of two sentences, we need to layer additional reasoning beyond the logical relations we can infer at the first level. We might be able to detect inconsistencies at L0, but for more complicated interactions we need to look at a relationship between logical relationships (a second level, L1).
So, it may make sense that to answer logical queries of a certain nesting depth, we’d need to recursively apply our bidirectional, contextualization representation up to a corresponding depth (namely, stacking Transformers). In the example, we might imagine this query to look like:
was-it-the-same-person(
    who-did-this("man", "went"),
    who-did-this("he", "bought"))
&& is-appropriate-for-location("store", "bought", "milk")
Related work
It’s important to describe existing related work that made strides in this direction. Various previous deep learning architectures have independently proposed using LM for transfer learning to other tasks and deep, bidirectional context (but not all at once).
Training
As input, BERT uses the BooksCorpus (800M words) and English Wikipedia (2,500M words), totaling 3.3B words, split into a vocabulary of 30K word pieces. There were a few standard NLP featurization techniques applied to this as well (lower casing, for instance), though I think the architecture could’ve handled richer English input.
But what’s the output? Given just the inputs, how can we create a loss that learns a good context-sensitive representation of each word? This needs to be richer than the context-free representation of each word (i.e., the embedding that each word piece starts as in the first layer of the input to the BERT network).
We might try to recover the original input embedding, but then the network would just learn the identity function. This is the correct answer if we’re just learning on the joint distribution of \((x, x)\) between a sentence and itself.
Instead, BERT trains on sequence recovery. That is, our input is a sentence \(x_{-i}\) missing its \(i\)-th word, and our output is the \(i\)-th word itself, \(x_i\). In practice this is implemented efficiently with masking: the input-output pair is \((\text{“We went [MASK] at the mall.”}, \text{“shopping”})\), where [MASK] is the placeholder for a missing word.
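A toy version of this data preparation step can be sketched in Python. This is a simplification of my own: the real BERT masks 15% of word pieces and applies some extra replacement tricks, whereas here we just hide one word and ask for it back.

```python
import random

def make_mlm_example(tokens, rng):
    """Hide one token behind [MASK] and return (masked input, target)."""
    i = rng.randrange(len(tokens))
    masked = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
    return masked, tokens[i]
```

For example, `make_mlm_example("we went shopping at the mall".split(), rng)` might hide "shopping" and ask the model to recover it.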
In addition, BERT adds an auxiliary task, NSP, where a special [CLS] classification token at the beginning of the input serves as a marker for “this token should represent the whole context of the input sentence(s),” which is then used as a single fixed-width input for classification. This improves performance slightly (see Table 15 in the original work).
That’s essentially it.
BERT = Transformer Encoder + MLM + NSP
There’s an important caveat due to training/test distribution mismatch. See the last section, Open Questions, below.
Fine-tuning
For fine tuning, we just add one more layer on top of the final encoded sequence that BERT generates.
In the case of class prediction, we apply a classifier to the fixed-width embedding of the [CLS] marker.
In the case of subsequence identification, like in SQuAD, we want to select a start and end by using a start classifier and end classifier applied to each token in the final output sequence.
For instance, a network is handed a paragraph like the following:
One of the most famous people born in Warsaw was Maria Skłodowska-Curie, who achieved international recognition for her research on radioactivity and was the first female recipient of the Nobel Prize. Famous musicians include Władysław Szpilman and Frédéric Chopin. Though Chopin was born in the village of Żelazowa Wola, about 60 km (37 mi) from Warsaw, he moved to the city with his family when he was seven months old. Casimir Pulaski, a Polish general and hero of the American Revolutionary War, was born here in 1745.
And then asked a reading comprehension question like “How old was Chopin when he moved to Warsaw with his family?” to which the answer is the subsequence “seven months old.” Hard stuff! And BERT performs at or above human level.
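The start/end span selection described above can be sketched as follows. This is a Python sketch with a hypothetical function name and toy logits; real implementations produce the logits with two learned linear layers over BERT's final token encodings and typically vectorize the search.

```python
def best_span(start_logits, end_logits):
    """Pick the (start, end) token pair with start <= end that maximizes
    the sum of the start score and the end score."""
    best, span = float("-inf"), (0, 0)
    for s, s_logit in enumerate(start_logits):
        for e in range(s, len(end_logits)):
            score = s_logit + end_logits[e]
            if score > best:
                best, span = score, (s, e)
    return span
```

Constraining the loop to `e >= s` sidesteps the end-before-start issue mentioned later, at the cost of no longer scoring the two positions fully independently.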
Conclusions
The BERT model is extremely simple, to the point where there’s a mismatch with intuition.
There are some seemingly spurious decisions that don’t have a big effect on training.
First, the segment embeddings indicate different sentences in inputs, but positional embeddings provide positional information anyway. This is seemingly redundant information the network needs to learn to combine.
Second, the start and end indicators for the span predicted for SQuAD are computed independently, where it might make sense to compute the end conditional on the start position. Indeed, it’s possible to get an end before the start (in which case the span is considered empty).
There are probably many such smaller modeling improvements we could make. But the point is that making them is largely a waste of time. If there is one table to take away from this paper, it’s Table 6.
Above any kind of task-specific tuning or model improvements, the longest pole in the tent is representation. Investing effort in finding the “right” representation (here, bidirectional, deep, contextual word piece embeddings) is what maximizes broad applicability and the potential for transfer learning.
Open Questions
Transfer Learning Distribution Mismatch
At the end of Section 3.1, we notice something weird. In the masked language modeling task, our job is to derive what the [MASK] token was. But in the evaluation tasks, [MASK] never appears. To combat this “mismatch” between the distribution of evaluation-task tokens and that of the MLM task, occasionally full sequences are shown without the [MASK] tokens, in which case the network is expected to reproduce the original tokens unchanged.
Appendix C.2 digs into the robustness of BERT with respect to messing around with the distribution. This is definitely something that deserves some attention.
During pre-training, we’re minimizing a loss with respect to a distribution that doesn’t match the test distribution (where we randomly remove the mask). How is this a well-posed learning problem?
How much should we smooth the distribution with the mask removals? It’s unclear how to properly set up the “mismatch amount”.
Richer Inputs
Based on the ability of BERT to perform well even with redundant encodings (segment encoding and positional encoding), and given its large representational capacity, why operate BERT on word pieces? Why not include punctuation or even HTML markup from Wikipedia?
This kind of input could surely offer more signal for fine tuning. |
The problem is as follows:
Consider a connected, undirected, and weighted graph $G = (V, E, w)$ and an integer $0 < k < |E| - |V| + 1$. Describe and analyze an efficient algorithm to remove at most $k$ edges from $E$ such that the resulting graph $G' = (V, E' \subseteq E, w)$ has a maximum-weight minimum spanning tree over all possible $G'$.
I initially thought a greedy algorithm would work with "just remove the $k$ smallest edges as long as the graph remains connected." However, this does not work, consider the following graph and $k = 1$:
The MST of $G$ has weight 9. If we remove the minimal edge $(B, C)$, the MST of the resulting graph has weight 12. However, if we remove the edge $(A, B)$, the MST of the resulting graph has weight 13. So this greedy strategy does not work.
As for other strategies: we can first note that it only helps to remove edges that are in the MST of $G$ initially. So we can first determine $T = MST(G)$. The next (inefficient) thing we could do is consider each edge $e \in T$ and do the following: remove $e$ from $T$, cutting $T$ into $T_1$ and $T_2$; determine the next smallest edge $e'$ in $G$ spanning $T_1$ and $T_2$; and keep track of the $e$ that maximizes $w(e') - w(e)$.
Repeat this $k$ times. This doesn't seem very efficient though: something like $O(k \cdot n \cdot (n + m))$, I think. We could optimize this a bit by keeping track of an ordered set of edges on the cuts.
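One round of this strategy can be sketched in Python. This is an illustrative, deliberately unoptimized sketch (names like `best_removal` are mine): it builds an MST with Kruskal's algorithm, then for each tree edge finds the cheapest edge crossing the cut left by its removal.

```python
def kruskal(n, edges):
    """Minimum spanning tree of a connected graph; edges are (u, v, w)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    tree = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

def best_removal(n, edges):
    """One round: the tree edge e maximizing w(e') - w(e), where e' is the
    cheapest replacement edge crossing the cut created by deleting e."""
    tree = kruskal(n, edges)
    best_gain, best_edge = 0, None
    for e in tree:
        # Find the component of e[0] in the tree with e deleted.
        adj = {}
        for t in tree:
            if t != e:
                adj.setdefault(t[0], []).append(t[1])
                adj.setdefault(t[1], []).append(t[0])
        side, frontier = {e[0]}, [e[0]]
        while frontier:
            x = frontier.pop()
            for y in adj.get(x, []):
                if y not in side:
                    side.add(y)
                    frontier.append(y)
        crossing = [w for (u, v, w) in edges
                    if (u in side) != (v in side) and (u, v, w) != e]
        if crossing and min(crossing) - e[2] > best_gain:
            best_gain, best_edge = min(crossing) - e[2], e
    return best_edge, best_gain
```

On a toy triangle with weights 1, 3, 4, removing the weight-1 MST edge forces the weight-4 replacement, a gain of 3, matching the hand analysis above.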
I am wondering if there exists an algorithm that is $O(km \log n)$ or better. Any approaches / advice would be appreciated. |
How to solve $n$ for $n^n = 2^c$?
What's the numerical method?
I don't get this.
For $c=1000$, $n$ should be approximately $140$, right?
Wolfram Alpha sometimes goes off into the complex plane when what you want is only the reals. I agree with you and get about $140.222$. If you ask it to solve $n\ln(n) = 1000\ln(2)$, you get what you want.
Hint: Consider this.
Hint 2: First take $\log$ on both sides.
And explicitly: the solution to your question is given by $$ n = e^{W(c\log 2)} = \frac{{c\log 2}}{{W(c\log 2)}}. $$ For $c=1000$, this gives $n \approx 140.2217$. The function $W$ is the Lambert $W$ function, which is standard (ProductLog in Wolfram Mathematica).
EDIT: For large $c$, a rough but very simple approximation to the solution $n$ of $n^n = 2^c$ can be obtained as follows (cf. this, also for improvement of the approximation): $$ n \approx (c\log 2)[\log (c\log 2)]^{1/\log (c\log 2) - 1} . $$ For example, for $c=1000$ this gives $n \approx 141.2083$, not far from the exact value of about $140.2217$.
Yes, taking $\log_2$ of both sides gives you:
$$n \log_2(n) = c$$
You can use Newton's method to solve this:
$$x_0 = c$$ $$x_{k+1} = x_k - \frac{x_k \log(x_k) - c \log(2)}{1 + \log(x_k)}$$
where now "log" is the natural logarithm.
This gives the solution $n \approx x_4 = 140.221667$.
Starting with a better $x_0$, like $x_0 = c/\log(c)$, gives even faster convergence. With $c=1000$ or $c=1000000$, the value $x_3$ is already correct to within an error of $10^{-8}$.
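The iteration above is easy to run. Here is a Python sketch (the function name, tolerance, and stopping rule are my own choices) using the improved starting point $x_0 = c/\log(c)$:

```python
import math

def solve_n(c, tol=1e-10):
    """Newton's method for x*log(x) = c*log(2), natural logarithms."""
    x = c / math.log(c)
    while True:
        step = (x * math.log(x) - c * math.log(2)) / (1 + math.log(x))
        x -= step
        if abs(step) < tol:
            return x
```

For $c = 1000$ this converges in a handful of iterations to roughly $140.2217$, matching the values quoted above.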
In this paper, by analogy with the process $e^+e^-\to e^+e^-$, Andrew Strominger proposes that soft photons and gravitons are created in black hole evaporation. This can be argued from infrared finiteness or from conservation of large gauge symmetry and BMS charges.
Now, as far as I know, at least for a massless scalar field, the black hole background behaves as a kind of potential barrier $V(r_\ast)$ in terms of the tortoise coordinate $r_\ast$, where $r_\ast = -\infty$ is the horizon and $r_\ast = \infty$ is null infinity.
Now, from this potential it seems that a soft mode, i.e., one with very low energy, simply cannot escape to future null infinity. In other words, it seems to be reflected back.
Now, if this is also the case for soft photons and gravitons, what happens to the soft photons and soft gravitons Strominger proposes to be produced in black hole evaporation? Do they escape to future null infinity, and if not, where do they end up?
Is this somewhat related to the "holographic plate" which Strominger, Hawking and Perry mention in this paper? |
Answer
$5.8 \times 10^3 \ K$
Work Step by Step
We rearrange equation 16.9 to find: $ T = \left(\frac{P}{4 \pi R^2 \sigma}\right)^{1/4}$ $ T = \left(\frac{3.9 \times 10^{26}}{4 \pi (7 \times 10^8)^2(5.7 \times 10^{-8})}\right)^{1/4}= 5.8 \times 10^3 \ K$
Azimuthal separation in nearly back-to-back jet topologies in inclusive 2- and 3-jet events in pp collisions at $$\sqrt{s}=$$ 13 TeV
Abstract
A measurement for inclusive 2- and 3-jet events of the azimuthal correlation between the two jets with the largest transverse momenta, $$\Delta\phi_{12}$$, is presented. The measurement considers events where the two leading jets are nearly collinear ("back-to-back") in the transverse plane and is performed for several ranges of the leading jet transverse momentum. Proton-proton collision data collected with the CMS experiment at a center-of-mass energy of 13 TeV and corresponding to an integrated luminosity of 35.9 fb$$^{-1}$$ are used. Predictions based on calculations using matrix elements at leading-order and next-to-leading-order accuracy in perturbative quantum chromodynamics supplemented with leading-log parton showers and hadronization are generally in agreement with the measurements. Discrepancies between the measurement and theoretical predictions are as large as 15%, mainly in the region 177$$^\circ < \Delta\phi_{12} <$$ 180$$^\circ$$. The 2- and 3-jet measurements are not simultaneously described by any of the models.
Research Org.: Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
Sponsoring Org.: USDOE Office of Science (SC), High Energy Physics (HEP) (SC-25)
Contributing Org.: CMS
OSTI Identifier: 1515063
Report Number(s): FERMILAB-PUB-19-234-CMS; arXiv:1902.04374; CMS-SMP-17-009; CERN-EP-2018-344; oai:inspirehep.net:1719955
Grant/Contract Number: AC02-07CH11359
Resource Type: Accepted Manuscript
Journal: Eur. Phys. J. C, Volume 79, Issue 9
Country of Publication: United States
Language: English
Subject: 72 Physics of Elementary Particles and Fields
Sirunyan, Albert M, and et al. Azimuthal separation in nearly back-to-back jet topologies in inclusive 2- and 3-jet events in pp collisions at $\sqrt{s}=$ 13 TeV. United States: N. p., 2019. Web. doi:10.1140/epjc/s10052-019-7276-4.
Intuitively, "balanced trees" should be trees where left and right sub-trees at each node must have "approximately the same" number of nodes.
Of course, when we talk about red-black trees* (see definition at the end) being balanced, we actually mean that they are height-balanced, and in that sense they are balanced.
Suppose we try to formalize the above intuition as follows:
Definition:A Binary Tree is called $\mu$-balanced, with $0 \le \mu \leq \frac{1}{2}$, if for every node $N$, the inequality
$$ \mu \le \frac{|N_L| + 1}{|N| + 1} \le 1 - \mu$$
holds and for every $\mu' \gt \mu$, there is some node for which the above statement fails. $|N_L|$ is the number of nodes in the left sub-tree of $N$ and $|N|$ is the number of nodes under the tree with $N$ as root (including the root).
I believe these are called weight-balanced trees in some of the literature on this topic.
One can show that if a binary tree with $n$ nodes is $\mu$-balanced (for a constant $\mu \gt 0$), then the height of the tree is $\mathcal{O}(\log n)$, thus maintaining the nice search properties.
So the question is:
Is there some $\mu \gt 0$ such that every big enough red-black tree is $\mu$-balanced?
The definition of Red-Black trees we use (from Introduction to Algorithms by Cormen et al):
A binary search tree, where each node is coloured either red or black and
1. The root is black.
2. All NULL nodes are black.
3. If a node is red, then both its children are black.
4. For each node, all paths from that node to descendant NULL nodes have the same number of black nodes.
Note: we don't count the NULL nodes in the definition of $\mu$-balanced above. (Though I believe it does not matter if we do). |
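The $\mu$-balance condition in the definition above is straightforward to compute for a concrete tree. A Python sketch (my own representation: a tree is `None` or a pair `(left, right)`, with colors omitted since they don't enter the definition):

```python
def size(t):
    """Number of nodes in the tree, not counting NULL leaves."""
    return 0 if t is None else 1 + size(t[0]) + size(t[1])

def mu(t):
    """Largest mu with mu <= (|N_L|+1)/(|N|+1) <= 1-mu at every node N."""
    if t is None:
        return 0.5
    r = (size(t[0]) + 1) / (size(t) + 1)
    return min(r, 1 - r, mu(t[0]), mu(t[1]))
```

For a perfectly balanced 3-node tree this returns $1/2$; for a 3-node right path it returns $1/4$, reflecting the skew at the root.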
Difference between revisions of "Quasirandomness"
Revision as of 09:21, 15 March 2009
Introduction
Quasirandomness is a central concept in extremal combinatorics, and is likely to play an important role in any combinatorial proof of the density Hales-Jewett theorem. This will be particularly true if that proof is based on the density increment method or on some kind of generalization of Szemerédi's regularity lemma.
In general, one has some kind of parameter associated with a set (in our case, the number of combinatorial lines it contains), and one would like a deterministic definition of the word "quasirandom" with the following key property: every quasirandom set [math]\mathcal{A}[/math] has roughly the same value of the given parameter as a random set of the same density.
Needless to say, this is not the only desirable property of the definition, since otherwise we could just define [math]\mathcal{A}[/math] to be quasirandom if it has roughly the same value of the given parameter as a random set of the same density. The second key property is this: every set [math]\mathcal{A}[/math] that fails to be quasirandom has some other property that we can exploit.
These two properties are already discussed in some detail in the article on the density increment method: this article concentrates more on examples of quasirandomness in other contexts, and possible definitions of quasirandomness connected with the density Hales-Jewett theorem.
A possible definition of quasirandom subsets of [math][3]^n[/math]
As with all the examples above, it is more convenient to give a definition for quasirandom functions. However, in this case it is not quite so obvious what should be meant by a balanced function.
Here, first, is a possible definition of a quasirandom function from [math][2]^n\times [2]^n[/math] to [math][-1,1].[/math] We say that f is c-quasirandom if [math]\mathbb{E}_{A,A',B,B'}f(A,B)f(A,B')f(A',B)f(A',B')\leq c.[/math] However, the expectation is not with respect to the uniform distribution over all quadruples (A,A',B,B') of subsets of [math][n].[/math] Rather, we choose them as follows. (Several variants of what we write here are possible: it is not clear in advance what precise definition will be the most convenient to use.) First we randomly permute [math][n][/math] using a permutation [math]\pi[/math]. Then we let A, A', B and B' be four random intervals in [math]\pi([n]),[/math] where we allow our intervals to wrap around mod n. (So, for example, a possible set A is [math]\{\pi(n-2),\pi(n-1),\pi(n),\pi(1),\pi(2)\}.[/math])
As ever, it is easy to prove positivity. To apply this definition to subsets [math]\mathcal{A}[/math] of [math][3]^n,[/math] define f(A,B) to be 0 if A and B intersect, [math]1-\delta[/math] if they are disjoint and the sequence x that is 1 on A, 2 on B and 3 elsewhere belongs to [math]\mathcal{A},[/math] and [math]-\delta[/math] otherwise. Here, [math]\delta[/math] is the probability that (A,B) belongs to [math]\mathcal{A}[/math] if we choose (A,B) randomly by taking two random intervals in a random permutation of [math][n][/math] (in other words, we take the marginal distribution of (A,B) from the distribution of the quadruple (A,A',B,B') above) and condition on their being disjoint. It follows from this definition that [math]\mathbb{E}f=0[/math] (since the expectation conditional on A and B being disjoint is 0 and f is zero whenever A and B intersect).
Nothing that one would really like to know about this definition has yet been fully established, though an argument that looks as though it might work has been proposed to show that if f is quasirandom in this sense then the expectation [math]\mathbb{E}f(A,B)f(A\cup D,B)f(A,B\cup D)[/math] is small (if the distribution on these "set-theoretic corners" is appropriately defined). |
ATP-dependent mechanics of red blood cells
ATP-dependent mechanics of red blood cells, Timo Betz, Martin Lenz, Jean-Francois Joanny, and Cecile Sykes., PNAS vol.106 no.36 (2009) [1]
Summary
The fluctuations of the membrane of red blood cells are studied using an optical deflection method similar to that used to detect the position of trapped colloids in an optical trap. Membrane fluctuations observed in the absence of ATP are regarded as occurring passively - that is, they are thermal in nature. The frequency dependence of these thermal fluctuations is compared with membrane fluctuations occurring in the presence of ATP, and a significant deviation from equilibrium statistics is observed.
Keywords: Soft Matter
The basic experimental setup for this study is shown in Figure 1. A tightly focused laser beam is incident on the edge of a red blood cell immobilized on a wall of a well formed by two cover slips. The scattered light is picked up (in "transmission mode") with a quadrant photodiode (QPD). This is the same manner in which the position of optically trapped microparticles is usually measured. The QPD consists of 4 separate photodiodes, but really only 2 are needed for this experiment. The relative amount of signal in one "side" of the QPD is proportional to the position of the scattering object, within some linear regime which is found from a simple calibration. Though the authors do not say explicitly that they take the differential signal of the two sides of the QPD, this is how it is generally done in an optical tweezer setup. It is possible that the QPD signal used is simply a sum over all four quadrants. Whatever the case, the important result is that the position of the membrane can be measured to sub-nanometer precision and with temporal resolution down to 10 <math>\mu s</math>.
The power spectral density (PSD = <math>|\tilde{x}|^2</math>, where <math>\tilde{x}</math> is the Fourier transform of a time series of membrane fluctuations) of the membrane fluctuations is the basic type of data obtained using this setup. An example of such a curve is shown in Figure 2. The data agree with the high-frequency power-law falloff predicted by theory, <math>PSD \propto f^{-5/3}</math>, as detailed in the appendix. The theory also relates the PSD to physical properties of the system: the membrane bending rigidity, surface tension and effective viscosity of the cell. This is a nice result, especially in the context of possibly extending this method to a diagnostic type of implementation. The authors tabulate these parameters from their fits, compare to values obtained by other well established studies, and find very good agreement.
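As a rough sketch of how such a PSD is computed from the QPD time series (a plain periodogram normalization is assumed here; the authors' exact estimator is not specified in this summary):

```python
import numpy as np

def power_spectral_density(x, dt):
    # One-sided periodogram of a real time series x sampled at interval dt.
    n = len(x)
    xf = np.fft.rfft(x - np.mean(x))     # subtract the mean so the DC bin vanishes
    freqs = np.fft.rfftfreq(n, d=dt)
    psd = (np.abs(xf) ** 2) * dt / n     # |x~(f)|^2 with a common normalization
    return freqs, psd
```

A pure 50 Hz oscillation sampled at 10 kHz, for example, produces a spectrum peaked at the 50 Hz bin.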
The nature of the membrane fluctuations is investigated next. The authors take a simple but insightful approach to quantifying the extent to which the membrane fluctuations are non-thermal. First, the PSD of ATP-depleted cells is measured; since ATP is the "energy carrier" in cells, these fluctuations are taken to be independent of any active process going on in the cell. In other words, the fluctuations in these cell membranes are assumed to be thermally driven so that their energy is <math>k_B T</math>, as can be seen from the equipartition theorem. The PSD of membrane fluctuations of cells with access to ATP is then compared to this thermal baseline by defining an effective energy <math>\frac{E_{eff}}{k_B T} = \frac{PSD^{ATP}}{PSD^{ATP-depl}} \times \frac{g^{ATP-depl}}{g^{ATP}}</math>, where the function g (defined in the supplementary information) takes into account the different physical parameters of each type of cell. Interestingly, there is a departure from <math>E_{eff} = 1</math> at low frequencies, meaning that active processes inside the cell become resolvable at these long time scales (>0.1 s, or <10 Hz). A plot of this effective energy as a function of fluctuation frequency is shown in Figure 3, for a PKC-activated cell (the significance of this plot lies in the difference between the PSD of cells with and without ATP and not in the specific nature of what PKC does to the cell). Qualitatively, we can interpret this as saying that the cell can interact with its surroundings actively through its membrane at a speed of up to 10 Hz or so. This corroborates an in-vitro study done on cytoskeletal networks (reference 21 [2]) which finds a deviation from equilibrium response at roughly the same frequency.
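The effective-energy comparison itself is just an elementwise ratio of the two spectra weighted by the g factors. A sketch, with g left as caller-supplied arrays since its definition lives in the supplementary information:

```python
import numpy as np

def effective_energy(psd_atp, psd_depleted, g_atp, g_depleted):
    # E_eff / k_B T = (PSD^ATP / PSD^ATP-depl) * (g^ATP-depl / g^ATP), per frequency bin
    return (np.asarray(psd_atp, float) / np.asarray(psd_depleted, float)) \
         * (np.asarray(g_depleted, float) / np.asarray(g_atp, float))
```

With identical g factors, any excess of the ATP spectrum over the depleted one shows up directly as E_eff > 1.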
This study is very interesting because it investigates deviations from thermal equilibrium in a living system - something which is quite difficult to do, since many methods for probing microscale dynamics rely on thermodynamic equilibrium (passive microrheology, optical tweezer calibration, etc.). This is a natural aspect of living systems to want to study and speaks to the fundamental nature of what it means for a collection of chemicals to be "living". Exactly how does a living entity such as a cell stave off the second law of thermodynamics, resisting the overwhelming accumulation of entropy over long time scales? Experiments such as this one (and reference 21 [3]) are beginning to shed light on this deep question.
Weighted elliptic estimates for a mixed boundary system related to the Dirichlet-Neumann operator on a corner domain
School of Mathematics, Sun Yat-sen University, No.135 Xingangxi Road, Haizhu District, Guangzhou 510275, China
Based on the $ H^2 $ existence of the solution, we investigate weighted estimates for a mixed boundary elliptic system in a two-dimensional corner domain, when the contact angle $ \omega\in(0,\pi/2) $. This system is closely related to the Dirichlet-Neumann operator in the water-waves problem, and the weight we choose is decided by singularities of the mixed boundary system. Meanwhile, we also prove similar weighted estimates with a different weight for the Dirichlet boundary problem as well as the Neumann boundary problem when $ \omega\in(0,\pi) $.
Keywords: Weighted elliptic estimates, corner domain, mixed boundary problem, Dirichlet-Neumann operator. Mathematics Subject Classification: Primary: 35Q31, 35J25; Secondary: 35Q35. Citation: Mei Ming. Weighted elliptic estimates for a mixed boundary system related to the Dirichlet-Neumann operator on a corner domain. Discrete & Continuous Dynamical Systems - A, 2019, 39 (10) : 6039-6067. doi: 10.3934/dcds.2019264
You could ask the same question about a region in the plane: can you integrate the divergence of a vector field on a region to obtain a flux through the boundary curve? The answer is yes. If $R$ is any region in the plane with counterclockwise boundary curve $\partial R$, then$$
\int_{R}\left(\frac{\partial P}{\partial x}+\frac{\partial Q}{\partial y}\right) dA \;=\; \oint_{\partial R} \left(-Q\,dx + P\, dy\right).
$$for any vector field $P\hspace{0.5pt}\textbf{i}+Q\textbf{j}$. However, this two-dimensional version of the Divergence Theorem isn't anything new: it's just Green's Theorem applied to the vector field $(-Q,P)$. Note that $(-Q,P)$ is obtained from $(P,Q)$ by rotating each vector counterclockwise by $90^\circ$, so the two-dimensional Divergence Theorem is just a "rotated" version of Green's Theorem.
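The identity is easy to sanity-check numerically. The sketch below integrates the boundary form around the unit circle for the hypothetical field $\textbf{F}=(x,y)$, whose divergence is $2$, so the flux should equal $2\cdot\text{area}=2\pi$:

```python
import math

def boundary_flux(P, Q, m=2000):
    # Midpoint-rule approximation of  oint (-Q dx + P dy)  around the unit circle.
    total = 0.0
    for k in range(m):
        th = (k + 0.5) * 2 * math.pi / m
        x, y = math.cos(th), math.sin(th)
        dxdth, dydth = -math.sin(th), math.cos(th)   # derivatives of (cos, sin)
        total += (-Q(x, y) * dxdth + P(x, y) * dydth) * (2 * math.pi / m)
    return total

# F = (x, y) has divergence 2 everywhere, so the flux is 2 * (area of unit disk)
flux = boundary_flux(lambda x, y: x, lambda x, y: y)
```

The midpoint rule converges very fast for smooth periodic integrands, so even a modest m gives the answer to machine precision here.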
The Divergence Theorem for a surface in $\mathbb{R}^3$ is similar, except that it's basically a "rotated" version of the Curl Theorem (sometimes called Stokes' Theorem). Specifically, if $S$ is a surface with unit normal $\textbf{n}$ and $\textbf{F}$ is a vector field tangent to $S$, then we can rotate $\textbf{F}$ counterclockwise $90^\circ$ by taking its cross product with $\textbf{n}$ (so the rotated field is $\textbf{n}\times\textbf{F}$). Then we can define the "divergence" of $\textbf{F}$ on $S$ by$$
\mathrm{div}_S(\textbf{F}) \;=\; \textbf{n}\cdot \mathrm{curl}(\textbf{n}\times\textbf{F}).
$$This formula makes sense even if $\textbf{F}$ isn't tangent to $S$, since it ignores any component of $\textbf{F}$ in the normal direction.
The curl theorem tells us that$$
\int_S \textbf{n}\cdot\mathrm{curl}(\textbf{G})\;dA \;=\; \oint_{\partial S} \textbf{t}\cdot \textbf{G}\; ds
$$for any vector field $\textbf{G}$, where $\textbf{t}$ is the unit tangent vector to the curve $\partial S$ (oriented using the right-hand rule). Substituting $\textbf{G} = \textbf{n}\times\textbf{F}$ gives$$
\int_S \mathrm{div}_S(\textbf{F})\,dA \;=\; \oint_{\partial S} \textbf{t}\cdot(\textbf{n}\times\textbf{F})\; ds.
$$This is the Divergence Theorem on a surface that you're looking for. The triple product $\textbf{t}\cdot(\textbf{n}\times\textbf{F})$ computes the flux of $\textbf{F}$ through the boundary curve. Perhaps a better way to write the same formula is$$
\int_S \mathrm{div}_S(\textbf{F})\,dA \;=\; \oint_{\partial S} (\textbf{t}\times\textbf{n})\cdot\textbf{F}\; ds,
where $\textbf{t}\times\textbf{n}$ is a unit vector tangent to the surface, perpendicular to the boundary curve, and pointing directly "out".
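A quick finite-difference check that this definition is consistent with the planar case: on the flat surface $z=0$ with $\textbf{n}=\textbf{k}$, the rotated field $\textbf{n}\times\textbf{F}$ is $(-Q,P,0)$, and $\textbf{n}\cdot\mathrm{curl}(\textbf{n}\times\textbf{F})$ should reduce to the ordinary divergence $P_x+Q_y$. (The test field below is arbitrary.)

```python
def ddx(f, x, y, h=1e-6):
    # central difference in the first variable
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def ddy(f, x, y, h=1e-6):
    # central difference in the second variable
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

# An arbitrary tangent field F = (P, Q, 0) on the plane z = 0.
P = lambda x, y: x ** 2 * y
Q = lambda x, y: x - y ** 3

def div_S(x, y):
    # n x F = (-Q, P, 0); the z-component of curl(n x F) is d(P)/dx - d(-Q)/dy.
    return ddx(P, x, y) - ddy(lambda a, b: -Q(a, b), x, y)

# This should agree with the ordinary 2D divergence P_x + Q_y = 2xy - 3y^2.
```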
I'm self-studying the book Introduction to Applied Linear Algebra – Vectors, Matrices, and Least Squares
Below is an excerpt from the book:
Independence-dimension inequality. If the $n$-vectors $\vec{a_1}, \cdots, \vec{a_k}$ are linearly independent, then $k\leq n$. In words:
A linearly independent collection of $n$-vectors can have at most $n$ elements.
Put another way:
Any collection of $n+1$ or more $n$-vectors is linearly dependent.
Proof of independence-dimension inequality. The proof is by induction on the dimension $n$. First consider a linearly independent collection $\vec{a_1}, \cdots, \vec{a_k}$ of $1$-vectors. We must have $\vec{a_1}\neq 0$. This means that every element $\vec{a_i}$ of the collection can be expressed as a multiple $\vec{a_i}=(\vec{a_i}/\vec{a_1})\vec{a_1}$ of the first element $\vec{a_1}$. This contradicts linear independence unless $k=1$.
Next suppose $n\geq 2$ and the independence-dimension inequality holds for dimension $n-1$. Let $\vec{a_1}, \cdots, \vec{a_k}$ be a linearly independent list of $n$-vectors. We need to show that $k\leq n$. We partition the vectors as
$$ \vec{a}_i= \begin{bmatrix} \vec{b_i} \\ \alpha_i \end{bmatrix}, \qquad i = 1, \ldots, k, $$ where $\vec{b_i}$ is an $(n-1)$-vector and $\alpha_i$ is a scalar.
First suppose that $\alpha_1 = \cdots = \alpha_k=0$. Then the vectors $\vec{b_1}, \cdots, \vec{b_k}$ are linearly independent: $\sum_{i=1}^k\beta_i\vec{b_i}=0$ holds if and only if $\sum_{i=1}^k\beta_i\vec{a_i}=0$, which is only possible for $\beta_1 = \ldots = \beta_k =0$ because the vectors $\vec{a_i}$ are linearly independent. The vectors $\vec{b_1}, \ldots, \vec{b_k}$ form a linearly independent collection of $(n-1)$-vectors. By the induction hypothesis we have $k\leq n-1$, so certainly $k \leq n$.
Next suppose that the scalars $\alpha_i$ are not all zero. Assume $\alpha_j\neq 0$. We define a collection of $k-1$ vectors $\vec{c_i}$ of length $n-1$ as follows:
$$ \vec{c_i} = \vec{b_i} - \frac{\alpha_i}{\alpha_j}\vec{b_j}, \qquad i = 1, \ldots, j-1, \qquad \vec{c_i}=\vec{b_{i+1}} - \frac{\alpha_{i+1}}{\alpha_j}\vec{b_j}, \qquad i = j, \ldots, k-1 $$
These $k-1$ vectors are linearly independent: If $\sum_{i=1}^{k-1} \beta_i \vec{c_i}=0$ then
\begin{equation} \tag{1}\label{eq:1} \sum_{i=1}^{j-1}\beta_i \begin{bmatrix} \vec{b_i} \\ \alpha_i \end{bmatrix} + \gamma \begin{bmatrix} \vec{b_j} \\ \alpha_j \end{bmatrix} + \sum_{i=j+1}^k \beta_{i-1} \begin{bmatrix} \vec{b_i} \\ \alpha_i \end{bmatrix} =0 \end{equation}
with $$ \gamma = -\frac{1}{\alpha_j}(\sum_{i=1}^{j-1}\beta_i\alpha_i + \sum_{i=j+1}^k \beta_{i-1}\alpha_i). $$
Since the vectors $\vec{a_i}=(\vec{b_i}, \alpha_i)$ are linearly independent, the equality $\eqref{eq:1}$ only holds when the coefficients $\beta_i$ and $\gamma$ are all zero. This in turn implies that the vectors $\vec{c_1}, \cdots, \vec{c_{k-1}}$ are linearly independent. By the induction hypothesis $k-1 \leq n-1$, so we have established that $k \leq n$.
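The elimination step in the excerpt can be tested numerically. The sketch below (names hypothetical, not from the book) takes independent $n$-vectors as the columns of a matrix, builds the $\vec{c_i}$ exactly as above, and a rank computation then confirms they stay independent:

```python
import numpy as np

def eliminate_last_coordinate(A):
    # Columns of A are independent n-vectors a_i = (b_i, alpha_i);
    # assumes at least one alpha_j is nonzero.
    n, k = A.shape
    alpha = A[-1, :]
    j = int(np.argmax(np.abs(alpha)))            # any index with alpha_j != 0 works
    B = A[:-1, :]                                # the (n-1)-vectors b_i
    return np.column_stack([B[:, i] - (alpha[i] / alpha[j]) * B[:, j]
                            for i in range(k) if i != j])

# Example: three independent 4-vectors yield two independent 3-vectors.
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [0., 0., 1.],
              [1., 2., 0.]])
C = eliminate_last_coordinate(A)
```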
My Question:
After reading some times, the ideas of the proof in my understanding:
First prove
Independence-dimension inequalityholds for $n$-vectors when $n=1$
Then prove that when $n\geq 2$, if
Independence-dimension inequalityholds for $n-1$-vectors, then it holds for $n$-vectors
I am stuck on part 2. Where does equation $\eqref{eq:1}$ come from? Especially the $\gamma$.
3/14/15 at 9:26:53.589793238... am Pilsner winter time which contains as many digits of \(\pi\) as you want – you may probably find such a moment once in a century, if you trust Stephen Wolfram. I am not cheating: I was really writing this blog post after 9 am although the precision indicated above is exaggerated LOL.
And as the screenshot above (click to zoom in) shows, the year 2015 has finally started with everything that defines it – the Run II of the Large Hadron Collider in particular. You may get the current version of the screen above at this CERN page; I clicked at "Luminosity" (the second option) to get to the energy and luminosity chart above.
You may get to the screen at any time if you find the LHC section of the right sidebar on this blog (the full dark green template) and find the hidden URL beneath the words "highest luminosity".
At any rate, the energy of each proton in the beam is now \(6.5\TeV\), as the graph shows. If two such protons collide from the opposite directions, the momentum cancels but the energy doubles: the center-of-mass energy of the proton pair at the LHC has increased to \(13\TeV\). It was just \(8\TeV\) in the 2012 Run – and it is only planned to increase to \(14\TeV\) later, probably in this year.
For a proton to increase its energy from the latent energy \(m_0c^2 = 0.94\GeV\) to \(6,500\GeV\), it must move very quickly. Back in December, I told you that the energy would reach \(6.5\TeV\) in March – it did so now, this resource is trustworthy – and calculated the Lorentz \(\gamma\) factor and other relativistic effects connected with these really huge speeds.
Moreover, if you look at the screenshot above, the luminosity chart on the left side seems to show a curve with a nonzero value. It is a blue curve which should mean that it's the luminosity measured by ATLAS – see the legend. And because the luminosity is nonzero, it should mean that there are actually collisions taking place! If that interpretation is wrong, and it probably is (because the red/green picture in the lower right corner says that zero luminosity was delivered, and the nonzero number is just "target") it's the graph that should be blamed, not me, because a nonzero luminosity should mean collisions of the relevant beams.
Anyway, let us
assumethat the collisions of proton pairs with the energy \(13\TeV\) are taking place. The Earth hasn't been devoured by a black hole yet. What a surprise! ;-)
It may be interesting to re-learn the units of luminosity again. The chart shows that the luminosity is restarted to the maximum value, \(0.37\) units, once an hour, and it gradually drops to \(0.11\) units or so. On average, the luminosity is close to \(0.25\) units, the graph tells us.
Even in the third world, in this case the land of the Apaches and Siouxes, they are interested in the European experiment. Dr Lincoln is affiliated with the Fermilab, an Illinois fan club of CERN. Continuation about detectors. It would be totally inappropriate for Lincoln to choose a favorite experiment, he stresses, especially if it were the best one among all, the CMS, that kicks ass.
The units are \((\mu {\rm b}\cdot s)^{-1}\), i.e. inverse microbarns per second. The right sidebar of this blog claims – and I have no reason not to trust it – that the highest luminosity achieved at the LHC so far was \(7.8/{\rm nb}/{\rm s}\). If my calculations are right, the numerical part is about \(20\) times greater than the recent one; and the unit was \(1,000\) times greater (a nanobarn is smaller than a microbarn, but the inverse nanobarn is larger than the inverse microbarn – the reversal is what the word "inverse" does for you).
So the luminosity right now is about \(20,000\) times smaller than the maximum one achieved in 2012. The LHC isn't running at full speed yet – but it is probably running already. Has the Run 2 of the LHC already produced some new particles? I think that even if the collisions were already allowed, and they are probably not, it is unlikely because the number of collisions was very small. But if it were some really special heavy particle that was out of reach in 2012 and became possible now, I think that it couldn't even be excluded that the tiny number of collisions has already produced something new.
The word "seven" in the theme song should become "thirteen" (teraelectronvolts) now. Among the objects enumerated at the end of this 2010 song, only the Higgs boson has been found so far.
Let me also convert the "average" luminosity in recent hours, \(0.25/{\rm \mu b}/{\rm s}\), to other units. If you multiply it by \(86400\times 365.25\), you get 7.9 million of the same units with the second replaced by the year, which is \(7.9/{\rm pb}/{\rm year}\). Over the year, we're used to expecting dozens of inverse femtobarns, and this inverse picobarn is 1,000 times smaller. So the average luminosity was something like \(0.008/{\rm fb}/{\rm year}\) so far. We want many inverse femtobarns so the luminosity will have to increase thousands of times to get there, as I have already pointed out.
You may also convert the luminosity to the number of collisions per second. I will use the same conversion coefficient that we had in 2012 although the energy does change it slightly. The total delivered luminosity to one detector was about \(27/{\rm fb}\) in 2012 (the recorded one was about 92% of this figure) which corresponded to 1.9 quadrillion collisions.
Now, we were getting \(0.25\) inverse microbarns per second. The inverse microbarn is one billion times smaller than the inverse femtobarn (the exponent is \(15-6=9\)). And we get two more orders of magnitude from \(27\) vs \(0.25\). In total, each second, the LHC was producing 11 orders of magnitude fewer collisions than the 2012 total of 1.9 quadrillion (which is \(1.9\times 10^{15}\)), i.e. \(1.9\times 10^{4}\). In other words, unless I made a mistake, we were getting about 20,000 collisions in ATLAS per second in recent hours.
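The unit gymnastics above can be replayed in a few lines (all numbers taken from the text):

```python
# Average luminosity quoted above: 0.25 inverse microbarns per second.
L_avg = 0.25
seconds_per_year = 86400 * 365.25

inv_mub_per_year = L_avg * seconds_per_year      # about 7.9 million / microbarn / year
inv_pb_per_year = inv_mub_per_year / 1e6         # 1 / pb = 1e6 / microbarn -> about 7.9
inv_fb_per_year = inv_pb_per_year / 1e3          # about 0.008 / fb / year

# In 2012, about 27 inverse femtobarns corresponded to 1.9 quadrillion collisions,
# which fixes the effective conversion from luminosity to a collision rate:
collisions_per_inv_mub = 1.9e15 / (27 * 1e9)     # 1 / fb = 1e9 / microbarn
rate = L_avg * collisions_per_inv_mub            # about 20,000 collisions per second
```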
Please feel free to verify it and correct me.
I guess that they will be very careful (perhaps unnecessarily careful) so the gradual increase of the luminosity will occupy the following month, and we will only get to reasonably competitive luminosities sometime in May 2015 when the "truly new physics research" will begin again. If Nature has prepared some beyond-the-Standard-Model objects and phenomena, the first trillions of collisions will have the highest chance to reveal their signs. Discoveries may take place quickly – but the LHC may also decide to tell us (Nature made the decision behind the scenes) that the Standard Model continues to be OK at these energies, too.
Using high-school algebra, you can do all sorts of despicable things with the cubic equation $x^3+px^2+qx+r=0$.
Except solve it. No matter how you manipulate that equation, there's no way you can arrive at a solution all by yourself; not unless you happen upon the idea that by substituting $y=x+p/3$, you can in fact eliminate the quadratic component of the sum, thereby reducing the equation to the form $y^3+uy+v=0$, which is more easily solvable.
It was Cardano$^\star$ who first had this spark and solved the cubic equation.
The second trick in Cardano's solution comes after the initial substitution that eliminates the second-power component from the equation:
\begin{align}x^3+px^2+qx+r&=0,\\
y&=x+\frac{p}{3},\\ y^3+\left(q-\frac{p^2}{3}\right)y+\left(r-\frac{pq}{3}+\frac{2p^3}{27}\right)&=0.\end{align}
Substituting
\begin{align}s&=q-\frac{p^2}{3},\\t&=r-\frac{pq}{3}+\frac{2p^3}{27},\end{align}
we can write down a cubic equation for $y$ with no quadratic term:
\[y^3+sy+t=0.\]
What one must notice is that a formally identical equation can be derived by expanding the third power of a sum:
\begin{align}(u+v)^3=u^3+3u^2v+3uv^2+v^3&=u^3+3uv(u+v)+v^3,\\
(u+v)^3-3uv(u+v)-(u^3+v^3)&=0.\end{align}
From this, a system of equations for $u^3$ and $v^3$ can be derived:
\begin{align}27u^3v^3&=-s^3,\\u^3+v^3&=-t.\end{align}
This can be reduced to a quadratic equation for either $u^3$ or $v^3$. What makes this solution peculiar is that whenever the original cubic equation has three real roots, this quadratic equation has no real roots at all. In fact, it was this issue that motivated the development of the theory of complex numbers, which led, much later, to the algebra of quaternions, which provide the mathematical foundation for Dirac spinors and a lot of modern quantum physics...
Here's a good example illustrating Cardano's problem:
\[x^3-6x+4=0.\]
This equation has three real roots: $2$, $-1+\sqrt{3}$, $-1-\sqrt{3}$.
Following Cardano's method, we'd have a system of equations for $U=u^3$ and $V=v^3$ as follows:
\begin{align}27UV=216\Rightarrow UV&=8,\\U+V&=-4.\end{align}
From this, a quadratic equation for $U$ can be derived:
\begin{align}U(-4-U)&=8,\\U^2+4U+8&=0.\end{align}
Unfortunately, this equation has no real roots. For Cardano and his contemporaries, this was a huge problem. Today, we can use complex numbers to find the roots:
\[U=-2\pm\sqrt{2^2-8}=-2\pm 2i.\]
Similarly,
\[V=-2\mp 2i.\]
Taking the cube root we get:
\begin{align}&u_1=1+i,~~~~v_1=1-i,\\&x_1=u_1+v_1=1+i+1-i=2.\end{align}
But of course in the complex plane, a number has three cube roots, two of which can be obtained from the third by multiplying by $-\frac{1}{2}\pm\frac{\sqrt{3}}{2}i$:
\begin{align}&u_2=(1+i)\left(-\frac{1}{2}+\frac{\sqrt{3}}{2}i\right),~~~~v_2=(1-i)\left(-\frac{1}{2}-\frac{\sqrt{3}}{2}i\right),\\
&x_2=u_2+v_2=-1-\sqrt{3},\\ &u_3=(1+i)\left(-\frac{1}{2}-\frac{\sqrt{3}}{2}i\right),~~~~v_3=(1-i)\left(-\frac{1}{2}+\frac{\sqrt{3}}{2}i\right),\\ &x_3=u_3+v_3=-1+\sqrt{3}.\end{align}
The bottom line: we've recovered the three real roots of the original cubic equation, but in the process, we were forced to make use of the square roots of negative numbers. This was enough to give old Cardano a fit: "thus far does arithmetical subtlety go, of which this, the extreme, is, as I have said, so subtle that it is useless".
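The whole procedure is easy to mechanize. Here is a small sketch (my own, not Cardano's notation) that follows the steps above (depressed cubic, quadratic in $U=u^3$, a complex cube root, and the constraint $3uv=-s$) and recovers the three real roots of $x^3-6x+4=0$:

```python
import cmath

def cardano(p, q, r):
    """Roots of x^3 + p x^2 + q x + r = 0 by Cardano's method."""
    s = q - p**2 / 3                       # depressed cubic y^3 + s y + t = 0,
    t = r - p*q/3 + 2*p**3/27              # obtained via the shift y = x + p/3
    # U = u^3 and V = v^3 satisfy U + V = -t and 27 U V = -s^3,
    # so U solves the quadratic z^2 + t z - s^3/27 = 0:
    U = (-t + cmath.sqrt(t*t + 4*s**3/27)) / 2
    u = U ** (1/3)                         # principal complex cube root
    v = -s / (3*u) if abs(U) > 1e-300 else 0.0   # enforce 3uv = -s
    w = cmath.exp(2j * cmath.pi / 3)       # primitive cube root of unity
    # the three roots y_k = u w^k + v w^{-k}, shifted back by x = y - p/3
    return [u * w**k + v * w**(-k) - p/3 for k in range(3)]

roots = cardano(0, -6, 4)                  # x^3 - 6x + 4 = 0
print(roots)
```

For the example in the text this returns, up to rounding, $2$, $-1-\sqrt{3}$ and $-1+\sqrt{3}$, with negligible imaginary parts left over from the complex arithmetic.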
$^\star$It was
Niccolò Fontana Tartaglia who solved equations in the form $y^3+sy+t=0$, the solution of which he communicated to Gerolamo Cardano. Cardano was able to reduce other cubics to this form, and his student Lodovico Ferrari used Tartaglia's method to solve the quartic equation. Cardano published all these results in his Ars Magna. Tartaglia's solution may also have been independently discovered by Scipione del Ferro. |
I'm teaching myself bits and pieces of forcing at the moment, for the purposes of translating them into sheaf-theoretic versions. I'm trying to write down what I feel is a cleaner description of the Easton product of forcing posets, by which I mean a global description rather than one in terms of elements like $$ \left|\{(\kappa,\alpha,\beta) \in \operatorname{dom}(p) : \kappa\leq \lambda\}\right| \lt \lambda \qquad (1) $$ for $p$ in the Easton product $\prod^E P(\kappa)$, which is not very useful at the level of the category of sets if it is not the category of ZF(C)-sets.
I'm confident I understand what this condition does when you do the forcing, namely I think it puts a bound on how many sets are added to any given $\lambda$, else you might get silly things like a proper class of sets added to some set.
However, at the risk of embarrassing myself, and in the interest of educating others, here is my best guess for what this condition translates to. Consider, as Jech does (Set theory, 3rd edition), first the Easton product over some set $A$ of regular cardinals. For $\kappa\in A$, let $P(\kappa)$ be the set of functions $p:D(p)\to 2$ where $D(p)\subset \kappa\times B(\kappa)$ has cardinality less than $\kappa$. Here $B(\kappa)$ is some cardinal, not necessarily given by an Easton function on regular cardinals, and forcing using the 'usual order' on this poset will add $B(\kappa)$ subsets to each $\kappa\in A$. There is a distinguished element $\top$ of each $P(\kappa)$, namely the unique function $\emptyset \to 2$.
Let $P=\prod P(\kappa)$ be the product over $\kappa\in A$. The Easton product is the subset consisting of those $p\in P$ such that the support condition (1) holds. So what does this mean? An element $p\in P$ is a collection of functions $p_\kappa$, one for each $\kappa\in A$. Then let $supp(p) \subset A$ be the set of those $\kappa\in A$ such that $p_\kappa\not=\top$. Analogously, define $supp_\lambda(p) \subset A$ to be the set of those $\kappa\leq \lambda$ such that $p_\kappa\not=\top$. This last definition is where I am the most unsure, as the definition in Jech really involves $supp(p)\cap \lambda$, which doesn't make sense from a structural set theory point of view unless $\lambda$ is viewed as a subset of $A$ (even though it makes perfect sense in a material set theory such as ZFC).
Now assuming the definition of $supp_\lambda(p)$ is correct, the support condition on $p$ as first stated in Jech is that $$ \forall \text{ regular } \lambda, \left|supp_\lambda(p)\right| \lt \lambda.\qquad (2) $$ Apparently it's enough to enforce (2) whenever $\lambda$ is weakly inaccessible (though Friedman just says 'inaccessible'), and that's fine by me. However, this doesn't look the same as the condition (1), which has me slightly worried. The condition (1) imposes a bound on the size of the domain of $p$ as well as (2), and in fact implies that the domain of $p_\kappa$ is smaller than $\kappa$ for all $\kappa$.
So where has my reasoning gone wrong? At this point I really just want to understand the set underlying the Easton product, rather than any intricacies of class forcing. |
So I have stumbled upon a simplification in a maths book about integrals, but the book does not provide any explanation of why or how this operation is possible.
$${5(\cos^2x-\sin^2x)\over \sin x+\cos x}=5(\cos x-\sin x)$$ I have tried to apply a number of trigonometric identities and transformations, but a solution eludes me. Obviously you can just divide by 5 to get ${\cos^2x-\sin^2x\over \sin x+\cos x}=\cos x-\sin x$. I tried to apply $\sin^2x+\cos^2x=1$ in different ways, but got nowhere. I would like to know how this simplification is possible.
Use the conjugate rule, $a^2 - b^2 = (a+b)(a-b)$.
Thus $$\frac{5(\cos^2 x - \sin^2 x)}{\sin x + \cos x} = \frac{5(\cos x + \sin x)(\cos x - \sin x)}{\sin x + \cos x} = 5(\cos x - \sin x)$$
$$\begin{align}{5(\cos^2x-\sin^2x)\over \sin x+\cos x} & = \frac{5(\cos x - \sin x)(\cos x + \sin x)}{\sin x + \cos x} \\ \\ & = \require{cancel}\frac{5(\cos x - \sin x)(\cancel{\cos x + \sin x})}{\cancel{\cos x+ \sin x}}\\ \\ & = 5(\cos x-\sin x)\end{align}$$
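A quick numerical spot-check of the factorization, at points where $\sin x+\cos x\neq 0$:

```python
import math

def lhs(x):
    return 5 * (math.cos(x)**2 - math.sin(x)**2) / (math.sin(x) + math.cos(x))

def rhs(x):
    return 5 * (math.cos(x) - math.sin(x))

# the two sides agree wherever the denominator sin x + cos x is nonzero
for x in (0.3, 1.0, 2.0, 4.0):
    print(x, lhs(x) - rhs(x))   # differences are ~0 up to rounding
```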
$a^2-b^2=(a+b)(a-b)$, hence $\cos^{2}x-\sin^{2}x = (\cos x+\sin x)(\cos x-\sin x)$. |
Adhikesavalu, D and Venkatesan, K (1983)
Push–Pull Ethylenes: The Structures of 3-(2- Imidazolidinylidene)-2,4-pentanedione (I), $C_8H_{12}N_2O_2$, and 3-(1,3-Dimethyl-2- imidazolidinylidene)-2,4-pentanedione Trihydrate (II), $C_{10}H_{16}N_2O_2.3H_2O$. In: Acta Crystallographica, Section C: Crystal Structure Communications, 39 (8). pp. 1044-1048.
Abstract
(I): $M_r = 168$, triclinic, $P\bar 1$, $Z=2$, $a=5.596(2)$, $b = 6.938(3)$, $c = 10.852(4)\,\AA$, $\alpha = 75.64(3)$, $\beta = 93.44(3)$, $\gamma = 95.47(3)^\circ$, $V=406.0\,\AA^3$, $D_m = 1.35$ (by flotation using carbon tetrachloride and n-hexane), $D_x = 1.374\,\mathrm{Mg\,m^{-3}}$, $\mu$(Mo $K\alpha$, $\lambda = 0.7107\,\AA$) $= 1.08\,\mathrm{cm^{-1}}$, $F(000) = 180$, $T = 293$ K. (II): $M_r = 250$, triclinic, $P\bar 1$, $Z = 2$, $a = 7.731(2)$, $b = 8.580(2)$, $c = 11.033(3)\,\AA$, $\alpha = 97.66(2)$, $\beta = 98.86(2)$, $\gamma = 101.78(2)^\circ$, $V = 697.5\,\AA^3$, $D_m = 1.18$ (by flotation using KI solution), $D_x = 1.190\,\mathrm{Mg\,m^{-3}}$, $\mu$(Mo $K\alpha$, $\lambda = 0.7107\,\AA$) $= 1.02\,\mathrm{cm^{-1}}$, $F(000) = 272$, $T = 293$ K. Both structures were solved by direct methods and refined to $R = 4.4\%$ for 901 reflexions for (I) and $5.7\%$ for 2001 reflexions for (II). The C=C bond distances are $1.451(3)\,\AA$ in (I) and $1.468(3)\,\AA$ in (II), quite significantly longer than the C=C bond in ethylene [$1.336(2)\,\AA$; Bartell, Roth, Hollowell, Kuchitsu & Young (1965). J. Chem. Phys. 42, 2683-2686]. The twist angle about the C=C bond in (II) is $72.9(5)^\circ$ but molecule (I) is essentially planar, the twist angle being only $4.9(5)^\circ$.
Item Type: Journal Article Additional Information: Copyright of this article belongs to International Union of Crystallography. Department/Centre: Division of Chemical Sciences > Organic Chemistry Depositing User: Shriram Pandey Date Deposited: 22 Jul 2008 Last Modified: 29 Feb 2012 06:48 URI: http://eprints.iisc.ac.in/id/eprint/15141 |
First of all, notice that the complex function $$\phi_a(z)=\frac{z-a}{1-\bar a z}$$ is a bijection from the closed unit disk in $\mathbb C$ to itself.
a) Prove now that if $f$ is holomorphic in a domain $D$ containing the closed unit disk, $|f(z)|<1$ for all $|z|<1$, and there exists $z_0$ with $|z_0|<1$ such that $f(z_0)=f(-z_0)=0$, then $|f(0)|\leq |z_0|^2$.
b) Determine then all functions $f$ as in (a) that satisfy $f(0)=|z_0|^2$.
First of all, $g=f\circ \phi_{z_0}$ and $h=f\circ \phi_{-z_0}$ satisfy the hypotheses of the Schwarz Lemma. We then know that, for example, $|g(z)|\leq |z|$, and hence $|f(0)|=|g(z_0)|\leq |z_0|$.
But we want $|z_0|^2$, so, since I do not know anything about the derivatives of $g$ and $h$, i.e. about the order of $0$ as a zero, I tried to modify the Schwarz Lemma in the following way. If $z_0=0$ we are done. If not, choose a ball around $z_0$ which does not include $z_0^2$ and which remains disjoint from a circle of radius $r$, $|z_0|<r<1$. Let $D_r$ be the domain whose boundary consists of the $r$-circle and the circle around $z_0$. The function $g=f\circ \sqrt{\phi_{-z_0^2}}$ (we can choose a branch) is holomorphic in any $D_r$, and satisfies $|g|<1$. Therefore, since $g(0)=f(\pm z_0)=0$ (in either choice of the two branches, which differ only by a sign), we have that $g(z)/z$ is holomorphic on $D_r$. Now I would like to apply the maximum modulus principle and say that $|g(z)/z|\leq |g(y)/y|<1/r$ for some $y$ on the $r$-circle, and therefore, taking $r$ to $1$, say that $|g(z)/z|\leq 1$ (this is how one proves the Schwarz Lemma). Then choosing $z=z_0^2$ we would be done. But the boundary of $D_r$ is not just the $r$-circle.
Is there some way to conclude this proof? Or is there a better and simpler proof?
Thank you in advance. |
If we use the composite trapezoidal rule, then what is the least number of divisions $N$ for which the error of the integral $\int^1_0{e^{-x}}dx$ doesn't exceed $\frac{1}{12}\times10^{-2}$.
My guess is 11 or 5. Kindly tell me which of these answers is correct.
I obtained 11 as the answer by applying the formula
$$\Big|\frac{(b-a)^3}{12N^2}\times{(e^{-x})^{\prime\prime}_{x=\varepsilon}}\Big| \text{ = } {\frac{10^{-2}}{12}}$$ where $\varepsilon \in [0,1]$ is chosen so that it maximizes the value of $e^{-\varepsilon}$ (which I believe occurs at $\varepsilon = 0$).
Solving this equation, I get $N = 10$ (i.e. I need at least 10 subintervals, which means 11 equidistant points, if I am to keep the error below $\frac{10^{-2}}{12}$).
As far as 5 is concerned, I just used 5 equidistant intervals, i.e. the points $0,\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5},1$, and applied the trapezoidal rule to them. Now here's the problem: when I calculated the answer at $N=5$, I got a value greater than the actual value of the integral. Is that possible? If yes, why?
Which of my answers is correct? I obtained 11 by a well-established formula, while 5 was just an option I hit upon, and I am not really sure of its correctness.
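For what it's worth, a quick numerical experiment (my own sketch, not part of the original question) bears on both points: $N=10$ subintervals already meet the $\frac{10^{-2}}{12}$ bound while $N=5$ does not, and the $N=5$ value overshoots the true integral because $e^{-x}$ is convex (the trapezoidal rule overestimates integrals of convex functions):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + k * h) for k in range(1, n)) + f(b) / 2)

g = lambda x: math.exp(-x)
exact = 1 - math.exp(-1)            # true value of the integral
tol = 1e-2 / 12

print(abs(trapezoid(g, 0, 1, 10) - exact) <= tol)   # True: N=10 is enough
print(abs(trapezoid(g, 0, 1, 5) - exact) <= tol)    # False: N=5 is not
print(trapezoid(g, 0, 1, 5) > exact)                # True: overestimate (convexity)
```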
Thanks
Note:- I am posting this question because everywhere else nobody is giving any reply at all. I don't know if this question belongs here. If it does, then kindly reply. If it doesn't,then feel free to erase or delete or whatever :) |
Answer
The incline makes an angle of $18.7^{\circ}$ with the horizontal.
Work Step by Step
The wheelbarrow's weight of $50.0~lb$ is directed straight down. The force of $16.0~lb$ is directed up the incline at an angle of $\theta$ above the horizontal. We can draw a triangle with the $50.0~lb$ force as the hypotenuse of the triangle and the force of $16.0~lb$ opposite the angle $\theta$. We can find $\theta$: $\sin\theta = \frac{16.0}{50.0}$, so $\theta = \arcsin\left(\frac{16.0}{50.0}\right) = 18.7^{\circ}$. The incline makes an angle of $18.7^{\circ}$ with the horizontal. |
There are two ways to think of the Hilbert space as the space of sections of a line bundle.
First, the exponentiated Chern-Simons action on a manifold $\Sigma\times[0,1]$ is a section of the determinant line bundle $\mathcal{L}_\Sigma$ on the space of flat connections on $\Sigma$. Moreover, Wilson loops (which can be thought of as a 1d TFT) contribute $R_i$ each. So, the Hilbert space (before remembering gauge invariance) is $\Gamma(\mathcal{L}_\Sigma^k)\otimes\bigotimes_i R_i$. Now, if $\Sigma = S^2$, which is simply-connected, the space of flat connections is a point, so $\Gamma(\mathcal{L}_\Sigma^k)=\mathbf{C}$. Finally, gauge invariance picks out the $G$-invariants in $\bigotimes_i R_i$.
Note that the Hilbert space for a non-simply-connected $\Sigma$ is nontrivial even without the punctures.
Another way to think of this Hilbert space is to recall the 2d CFT <-> 3d TFT correspondence. The idea here is the following. Correlation functions of a 2d CFT live in a certain bundle over the moduli space of complex curves $M_{g,n}$ called the bundle of conformal blocks. The Knizhnik-Zamolodchikov equations on correlation functions correspond to a (projectively) flat connection on this bundle. So, a 2d CFT associates global sections of this bundle to a topological surface $\Sigma$, this is the Hilbert space in a 3d TFT. In the case of the Chern-Simons theory, the associated 2d CFT is the Wess-Zumino-Witten model.
A down-to-earth description can be found in
S. Elitzur, G. Moore, A. Schwimmer, N. Seiberg, Remarks on the Canonical Quantization of the Chern-Simons-Witten Theory, Nucl Phys B326 (1989), 108.
Mathematically, this correspondence is an equivalence between modular functors (as defined by Segal in The definition of conformal field theory) and modular tensor categories which give rise to 3d TFTs (due to Reshetikhin and Turaev).
All of that is discussed in an excellent book Lectures on tensor categories and modular functors by Bakalov and Kirillov. |
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt {s_{NN}}$ = 5.02TeV with ALICE at the LHC
(Elsevier, 2017-11)
ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} =5.02$ TeV. The increased ...
ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV
(Elsevier, 2017-11)
(Elsevier, 2017-11)
ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_T$). At central rapidity ($|y|<0.8$), ALICE can reconstruct $J/\psi$ via their decay into two electrons down to zero ... |
Background: Classical Mechanics is based on the Poincare-Cartan two-form
$$\omega_2=dx\wedge dp$$
where $p=\dot{x}$. Quantum mechanics is secretly a subtle modification of this. On the other hand, the so-called Born-reciprocal relativity is based on the "phase-space"-like metric
$$ds^2=dx^2-c^2dt^2+Adp^2-BdE^2$$
and its full space-time+phase-space extension:
$$ds^2=dX^2+dP^2=dx^\mu dx_\mu+\dfrac{1}{\lambda^2}dp^\nu dp_\nu$$
where $$P=\dot{X}$$
Note: particle-wave duality is something like $ x^\mu=\dfrac{h}{p_\mu}$.
In Born's reciprocal relativity you have the invariance group which is the
intersection of SO(4+4) and the ordinary symplectic group Sp(4), related to the invariance under the symplectic transformations leaving the Poincaré-Cartan two-form invariant. The intersection of SO(8) and Sp(4) gives you, essentially, the unitary group U(4), or some "cousin" closely related to the metaplectic group.
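As a toy illustration of such an intersection (my own sketch, not from the references): in the simplest case of a single $x$ and a single $p$, the analogous intersection is $O(2)\cap Sp(2,\mathbb{R})=SO(2)\cong U(1)$, and one can check numerically that a rotation mixing $x$ and $p$ preserves both $dx^2+dp^2$ and $dx\wedge dp$:

```python
import math

th = 0.7
c, s = math.cos(th), math.sin(th)

def boost(v):
    """Rotation mixing x and p: the candidate U(1) transformation."""
    x, p = v
    return (c * x - s * p, s * x + c * p)

def metric(v):                 # quadratic form coming from ds^2 = dx^2 + dp^2
    x, p = v
    return x * x + p * p

def omega(u, v):               # the two-form dx ^ dp evaluated on two vectors
    return u[0] * v[1] - u[1] * v[0]

u, v = (1.3, -0.4), (0.2, 2.0)
assert abs(metric(boost(u)) - metric(u)) < 1e-12               # preserves the metric
assert abs(omega(boost(u), boost(v)) - omega(u, v)) < 1e-12    # preserves dx ^ dp
```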
We can try to guess an extension of Born's reciprocal relativity based on higher accelerations as an interesting academical exercise (at least it is for me). In order to do it, you have to find a symmetry which leaves spacetime+phasespace invariant, the force-momentum-space-time extended Born space-time+phase-space interval
$ds^2=dx^2+dp^2+df^2$
with $p=\dot{x}$, $f=\dot{p}$ in this set-up. Note that this is the simplest extension, but I am also interested in the problem of enlarging it to extra derivatives, like tug, yank, ... and $n$-th order derivatives of position. Let me continue. This last metric looks invariant under an orthogonal group SO(4+4+4) = SO(12) (you can forget about signatures at this moment).
One also needs to have an invariant triple wedge product three-form
$$\omega_3=d X\wedge dP \wedge d F$$
something that seems to be connected with a Nambu structure, where $P=\dot{X}$ and $F=\dot{P}$, with invariance under the (ternary) 3-ary "symplectic" transformations leaving the above 3-form invariant.
My Question(s): I am trying to discover some (likely nontrivial) Born-reciprocal like generalized transformations for the case of "higher-order" Born-reciprocal like relativities (I am interested in that topic for more than one reason I can not tell you here). I do know what the phase-space Born-reciprocal invariance group transformations ARE (you can see them,e.g., in this nice thesis BornRelthesis) in the case of reciprocal relativity (as I told you above). So, my question, which comes from the original author of the extended Born-phase space relativity, Carlos Castro Perelman in this paper, and references therein, is a natural question in the context of higher-order Finsler-like extensions of Special Relativity, and it eventually would include the important issue of curved (generalized) relativistic phase-space-time. After the above preliminary stuff, the issue is:
What is the intersection of the group SO(12) with the ternary group which leaves invariant the triple-wedge product
$$\omega_3=d X\wedge dP \wedge d F$$
More generally, I am in fact interested in the next problem. So the extra or bonus question is: what is the (n-ary?) group structure leaving invariant the (n+1)-form
$$ \omega_{n+1}=dx\wedge dp\wedge d\dot{p}\wedge\cdots \wedge dp^{(n-1)}$$
where there we include up to (n-1) derivatives of momentum in the exterior product or equivalently
$$ \omega_{n+1}=dx\wedge d\dot{x}\wedge d\ddot{x}\wedge\cdots \wedge dx^{(n)}$$
contains up to the n-th derivative of the position. In this case the higher-order metric would be:
$$ds^2=dX^2+dP^2+dF^2+\cdots+dP^{(n-1)2}=dX^2+d\dot{X}^2+d\ddot{X}^2+\cdots+dX^{(n)2}$$
This metric is invariant under SO(4(n+1)) symmetry (if we work in 4D spacetime), but what is the symmetry group or invariance of the above (n+1)-form and whose intersection with the SO(4(n+1)) group gives us the higher-order generalization of the U(4)/metaplectic invariance group of Born's reciprocal relativity in phase-space?
This knowledge should allow me (us) to find the analogue of the (nontrivial) Lorentz transformations which mix the
$X,\dot{X}=P,\ddot{X}=\dot{P}=F,\ldots$
coordinates in this enlarged Born relativity theory.
Remark: In the case we include no derivatives in the "generalized phase space" of position (or we don't include any momentum coordinate in the metric) we get the usual SR/GR metric. When n=1, we get phase-space relativity. When n=2, we would obtain the first of the higher-order space-time-momentum-force generalized Born relativities. I am interested in this because one of my main research topics is generalized/enlarged/enhanced/extended theories of relativity. I firmly believe we have not exhausted the power of the relativity principle in every possible direction.
I do know what the transformations are in the case where one only has X and P. I need help to find, and to work out myself, the nontrivial transformations mixing X, P and higher-order derivatives: the higher-order extension of the Lorentz-Born symmetry/transformation group of special/reciprocal relativity. |
Is there a difference between continuation value ($V_t$) and utility ($U_t$) except for a possible scaling / difference in units? My question refers to the consumption-based asset pricing literature.
In standard time-additive power utility settings, people seem to only talk about utility (e.g. $U_t=u(C_t)+\beta E_t[u(C_{t+1})]$). In recursive utility / Epstein-Zin-Weil settings, people often refer to a continuation value (e.g. $V_t=((1-\beta)C_t^{1-\rho}+\beta (\mathcal{R}_t(V_{t+1}))^{1-\rho})^{1/(1-\rho)}$).
It seems to me that both are fairly similar. The only reference I could find on the topic is the asset pricing book from Back (2010), in which (intuitive definitions) utility seems to be a utility measure in "utility units" while continuation value seems to be a utility measure in "consumption good units", and both are related via $U_t=u(V_t)=\frac{V_t^{1-\gamma}}{1-\gamma}$.
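One way to see the "consumption good units" point concretely: for a deterministic constant consumption stream (so $\mathcal{R}_t(V_{t+1})=V_{t+1}$), iterating the recursive-utility recursion converges to $V=C$, i.e. the continuation value of a constant stream is that consumption level itself. A quick sketch (the parameter values below are purely illustrative, my own choice):

```python
# Deterministic Epstein-Zin recursion V = ((1-b) C^{1-r} + b V^{1-r})^{1/(1-r)}
# with constant consumption C. Its fixed point is V = C, so V is measured
# in units of the consumption good.
b, r, C = 0.96, 1.5, 2.0     # illustrative discount factor, 1/EIS, consumption
V = 1.0                      # arbitrary starting guess
for _ in range(2000):
    V = ((1 - b) * C**(1 - r) + b * V**(1 - r))**(1 / (1 - r))
print(V)                     # converges to C = 2.0
```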
(Please note: Back confusingly talks about a continuation utility and uses a different notation, the one posted here is inherited from standard references. Also, you can find his book with this link.) |
Given languages $L_1,L_2$, define $X(L_1,L_2)$ by
$\qquad X(L_1,L_2) = \{w \mid w \not\in L_1 \cup L_2 \}$
If $L_1$ and $L_2$ are regular, how can we show that $X(L_1,L_2)$ is also regular?
There are several ways to show that a language is regular (check the question How to prove a language is regular?)
Specifically for the language in your question, start with DFAs for $L_1$ and $L_2$ and try to construct an NFA for $X(L_1,L_2)$ using them. More details below:
Note that $X(L_1,L_2) = \overline{L_1} \cap \overline{L_2}$.
From the DFA of $L_1$ construct the DFA of $\overline {L_1}$ (making every final state non-final, and vice versa). Do the same for $L_2$. The intersection of regular languages can be constructed via a product machine (see this question). [Of course, if you already know that the complement of a regular language is also regular, and so is the intersection of two regular languages – you are done without constructing those DFAs.]
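If it helps, the complement-and-product construction is short enough to sketch in code (a toy encoding of DFAs as tuples, my own and not any standard library API):

```python
from itertools import product

# A DFA is a tuple (states, alphabet, delta, start, accepting),
# with delta a dict mapping (state, symbol) -> state.

def complement(dfa):
    Q, S, d, q0, F = dfa
    return (Q, S, d, q0, Q - F)            # swap final and non-final states

def intersect(d1, d2):
    Q1, S, t1, s1, F1 = d1
    Q2, _, t2, s2, F2 = d2
    Q = set(product(Q1, Q2))               # product construction
    d = {((p, q), a): (t1[p, a], t2[q, a]) for (p, q) in Q for a in S}
    F = {(p, q) for (p, q) in Q if p in F1 and q in F2}
    return (Q, S, d, (s1, s2), F)

def accepts(dfa, w):
    Q, S, d, q, F = dfa
    for a in w:
        q = d[q, a]
    return q in F

# Example: L1 = words containing a '1'; L2 = words of even length (over {0,1})
L1 = ({'A', 'B'}, {'0', '1'},
      {('A', '0'): 'A', ('A', '1'): 'B', ('B', '0'): 'B', ('B', '1'): 'B'},
      'A', {'B'})
L2 = ({'E', 'O'}, {'0', '1'},
      {('E', '0'): 'O', ('E', '1'): 'O', ('O', '0'): 'E', ('O', '1'): 'E'},
      'E', {'E'})

# X(L1, L2) = complement(L1) ∩ complement(L2): odd-length words with no '1'
X = intersect(complement(L1), complement(L2))
print([w for w in ('0', '000', '00', '01', '') if accepts(X, w)])  # ['0', '000']
```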
Expanding on Zach's comment, you (should) know the following things:
Now you should be able to pick a few of these that combined make up your $X$ language function/operator/whatever you call something like that (well, it's just a language defined in terms of others).
Just for some background, proofs for these properties can be found (IN A REALLY LARGE FONT) here.
Give this a go, if you're really stuck, I'll put a bit more in the spoilers below (but with little explanation).
$X(A_{1},A_{2}) = \overline{A_{1}\cup A_{2}} = \bar{A_{1}}\cap\bar{A_{2}}$. |
Two hours ago, he asked a question at the Physics Stack Exchange that made me LOL. He has already made all the important general plans and only asks the physics users to help him with a technical detail. So his question is:
What is the easiest way to stop a star?
No kidding. You're going to learn something about the mainstream science at the mainstream scientific institutes that do research into Earth sciences. ;-)
What is the easiest way to stop a star?
I'm *concerned* that the stars are using up hydrogen nuclei at an *unsustainable* rate. If I was a sufficiently advanced civilisation I might want to do something about this, so that the hydrogen could be burnt in fusion reactors instead. That way, a much higher proportion of the available energy could be put to some use before it eventually becomes thermal radiation.
So my questions are:
What would be the most energy-efficient way (using known physics) to blow apart a star or otherwise prevent or greatly slow the rate at which it performs fusion? We're assuming this civilisation has access to vast amounts of energy but doesn't want to waste it unnecessarily, since the aim is to access energy from the hydrogen the star would have burned. In order for this to be worthwhile, the energy gained from doing this would have to be substantially more than the energy the process takes.
What would be the astronomical signature of such an activity? If it was happening in a distant galaxy, would we be able to detect it from Earth?
LM:
I emphasized the words "concerned" and "unsustainable" to make it clear that this Gentleman is firmly within the "mainstream" discourse of the Earth sciences that is currently fully controlled by the environmental whackos.
The idea that the Sun wastes too much energy so we should better extinguish it (almost all life on Earth would stop within weeks) is such a funny ramification of the environmentalist thinking (or, more precisely, the absence of it) that I didn't resist and had to repost the question here.
For the sake of completeness, here's my answer:
Burning (and fusion) is "unsustainable" by definition because it means to convert an increasing amount of fuel to "energy" plus "waste products" and at some moment, there is no fuel left.
I am not sure whether the word "unsustainable" was used as a joke, a parody of the same nonsensical adjective that is so popular with the low-brow media these days, but I have surely laughed (because it almost sounds like you are proposing to extinguish the Sun to be truly environment-friendly). The thermonuclear reaction in the Sun has been "sustained" for 4.7 billion years and about 7.5 billion years are left before the Sun goes red giant. That's over 10 billion years – many other processes are much less sustainable than that. More importantly, there is nothing wrong about processes' and activities' being "unsustainable". All the processes in the real world are unsustainable and the most pleasant ones are the least sustainable, too.
But back to your specific project.
When it comes to energy, it is possible to blow a star apart without spending energy that exceeds the actual thermonuclear energy stored in the star. Just make a simple calculation for the Sun. Try to divide it into 2 semisuns whose mass is \(10^{30}\) kilograms each. The current distance between the two semisuns is about \(700,000\) kilometers, the radius of the Sun. You want to separate them to a distance where the potential energy is small, comparable to that at infinity.
It means that you must "liberate" the semisuns from a potential well. The gravitational potential energy you need to spend is\[
E = \frac{G\cdot M\cdot M}{R} = \frac{6.67\times 10^{-11}\times 10^{60}}{7\times 10^{8}}
\approx 10^{41}\,{\rm Joules}
\] That's equivalent to the energy of \(10^{24}\) kilograms (the mass of the Moon or so) completely converted to energy via \(E=mc^2\), or thermonuclear energy from burning the whole Earth of hydrogen (approximately).
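If you want to check the arithmetic, a few lines suffice (standard constants; the mass equivalent uses \(E=mc^2\)):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 1e30             # kg, each "semisun"
R = 7e8              # m, roughly the solar radius
c = 3e8              # m/s

E = G * M * M / R    # energy needed to separate the two halves to infinity
m_equiv = E / c**2   # mass whose complete conversion would release E

print(E)             # ~1e41 J
print(m_equiv)       # ~1e24 kg
```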
You may force the Sun to do something like the "red giant" transition prematurely and save some hydrogen that is unburned. To do so, you will have to spend the amount of energy corresponding to the Earth completely burned via fusion.
But of course, the counting of the energy which was "favorable" isn't the only problem. To actually tear the Sun apart, you would have to send an object inside the Sun that would survive the rather extreme conditions over there, including 15 million Celsius degrees and 3 billion atmospheres of pressure. Needless to say, no solid can survive these conditions: any object based on atoms we know will inevitably become a plasma. A closely related fact is that ordinary matter based on nuclei and electrons doesn't allow for any "higher-pressure" explosion than the thermonuclear one, so there's nothing "stronger" that could be sent to the Sun as an explosive to counteract the huge pressure inside the star.
One must get used to the fact that plasma is what becomes out of anything that tries to "intervene" into the Sun – and any intruder would be quickly devoured and the Sun would restore its balance. The only possible loophole is that the amount of this stuff is large. So you may think about colliding two stars which could perhaps tear them apart and stop the fusion. This isn't easy. The energy needed to substantially change the trajectory of another star is very, very large, unless one is lucky that the stars are already going to "nearly collide" which is extremely unlikely.
Physics will not allow you to do such things. You would need a form of matter that is more extreme than the plasma in the Sun, e.g. the neutron matter, but this probably can't be much lighter (and easier to prepare, e.g. when it comes to energy) than the star itself. A black hole could only drill a hole (when fast enough) or consume the Sun (which you don't want).
However, if you allow the Sun to be eaten by a black hole, you will actually get a more efficient and more sustainable source of energy. Well, too sustainable. ;-) A black hole of the mass comparable to the solar mass would have a radius about 3 miles. It would only send roughly one photon of the 3-mile-long wavelength every nanosecond or so in the Hawking radiation and it would only evaporate after \(10^{60}\) years or so. It would be so sustainable that no one could possibly observe the energy it is emitting. However, the black hole would ultimately emit all the energy \(E=mc^2\) stored in the mass.
If there are powerful civilizations ready to do some "helioengineering", they surely don't suffer from naive and primitive misconceptions about the world such as the word "sustainable" and many other words that are so popular in the mentally retarded movement known as "environmentalism". These civilizations may do many things artificially but they surely realize that the thermonuclear reaction in the stars is a highly efficient and useful way to get the energy from the hydrogen fuel. Even some of us realize that almost all the useful energy that allowed the Earth to evolve and create life and other things came from the Sun.
The Sun may become unsustainable in 7.5 billion years but according to everything we know about Nature, it's the optimum device to provide large enough civilizations – whole planets – with energy.
Concerning ambitious but less crazy plans in the outer space, look at NASA's plans to produce a "warp drive": HTML, PDF. |
The aim of the talk is to define an invariant (« universal \( L^2 \)-torsion ») from which many others (usual \( L^2 \)-torsion, \( L^2 \)-Alexander invariant and Euler characteristic,…) can be derived, as well as the relations between them.
Let \( G \) be a group and \( \mathcal F \) a family of subgroups such that:
For all \( H \in \mathcal F \) and \( g \in G \) we also have \( H^g := gHg^{-1} \in \mathcal F \); For all \( H, H' \in \mathcal F \) we have \( H \cap H' \in \mathcal F \).
For example \( \mathcal F \) can be :
The trivial subgroup; Finite subgroups; Virtually cyclic subgroups; Free abelian, nilpotent, … subgroups
A model for the classifying space \( E_{\mathcal F}G \) is then a CW-complex \( X \) with a \( G \)-action such that:
If \( H \in \mathcal F \) then the subset \( X^H \) of points fixed by \( H \) is a contractible subcomplex of \( X \); Otherwise \( X^H \) is empty.
For example, if \( \mathcal F = \{\{e\}\} \) then \( E_{\mathcal F}G \) is the classifying space \( EG \) for \( G \). The classifying space can also be defined by the following universal property: for every \( G \)-CW-complex \( Y \) whose point stabilisers are in \( \mathcal F \), there exists a \( G \)-map \( Y \to E_{\mathcal F}G \), unique up to \( G \)-homotopy; this characterises \( E_{\mathcal F}G \) up to \( G \)-homotopy equivalence.
The objects of interest in this talk are the
unitary representations of a locally compact group \( G \). These are homomorphisms \( \pi: G \to \mathcal U(\mathcal H) \) where \( \mathcal U(\mathcal H) \) is the group of unitary operators on a Hilbert space \( \mathcal H \). It will be required that they be continuous in the following sense: for every \( v \in \mathcal H \) the map \( G \to \mathcal H, g \mapsto \pi(g)v \) is continuous. Basic examples are the following: The trivial representation; The left-regular representation \( \lambda: G \to \mathcal U(L^2(G, \mu_{Haar})) \) acting by \( \lambda(g)f(x) = f(g^{-1}x) \).
Let \( N \) be an irreducible, compact, orientable 3–manifold whose boundary is either empty or contains only tori as connected components. Call a triple \( (G, \gamma, \phi) \)
admissible if \( G \) is a discrete group, \( \gamma : \pi_1(N) \to G \) and \( \phi : \pi = \pi_1(N) \to {\mathbb Z} \) are homomorphisms such that there exists a commutative diagram: \[ \begin{array}{ccc} \pi_1(N) & \overset{\gamma}{\rightarrow} & G \\ & \underset{\phi}{\searrow} & \downarrow \\ & & \mathbb Z \end{array} \] Fix a cellulation of \( N \) and let \( C_*(\widetilde N) \) be the chain complex of the universal cover. Let \( t > 0 \) and define a representation: \[ \kappa(\gamma, \phi, t) :\left\{ \begin{array}{ll} {\mathbb Z} \pi \to {\mathbb R} G \\ g \mapsto t^{\phi(g)}\gamma(g) \end{array} \right. \] with which one forms the twisted \( L^2 \)-complex \( \ell^2(G) \otimes_{\kappa(\gamma, \phi, t)} C_*(\widetilde N) \). Let \( \tau^{(2)}(N; \gamma, \phi)(t) \) be the \( L^2 \)-torsion of this complex in the case where it is well-defined (when the complex is weakly acyclic and all its differentials are of determinant class) and 0 otherwise. Consider this construction as associating to the admissible triple \( (G, \gamma, \phi) \) a function \( \tau^{(2)}(N; \gamma, \phi): ]0, +\infty [ \to [0, +\infty[ \).
Rank gradient
For any finitely generated group \( H \) let \( d(H) \) be its rank, the minimal number of elements needed to generate \( H \). If \( H \) is a finite-index subgroup in a finitely generated group \( \Gamma \) then we have
\[ d(H) - 1 \le |\Gamma / H| (d(\Gamma) - 1) \] and it is thus natural to define: \[ r(\Gamma, H) = \frac{d(H) - 1}{|\Gamma / H|}. \] If \( \Gamma = \Gamma_0 \supset \Gamma_1 \supset \cdots \) is a chain of finite-index subgroups then the limit: \[ \mathrm{RG}(\Gamma, (\Gamma_n)) = \lim_{n\to+\infty} r(\Gamma, \Gamma_n) \] exists (the sequence is non-increasing along the chain), and is called the rank gradient of \( (\Gamma, (\Gamma_n)) \).
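To see why normalising by \( |\Gamma / H| \) is natural, consider the free group \( F_k \): by the Nielsen–Schreier formula, an index-\( n \) subgroup of \( F_k \) is free of rank \( n(k-1)+1 \), so \( r(F_k, H) = k - 1 \) for every finite-index subgroup and the rank gradient along any chain equals \( k - 1 \). A quick sketch (the helper names are mine):

```python
# Nielsen-Schreier: an index-n subgroup of the free group F_k is free of rank n*(k-1) + 1
def schreier_rank(k, n):
    return n*(k - 1) + 1

# The normalised quantity r(Gamma, H) = (d(H) - 1) / [Gamma : H]
def r(d_H, index):
    return (d_H - 1) / index

# For F_k the normalisation makes r constant along any chain: the rank gradient is k - 1
k = 3
assert all(r(schreier_rank(k, n), n) == k - 1 for n in (2, 5, 1000))
```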
If \( (\Gamma_n), (\Delta_n) \) are two residual chains in the same group \( \Gamma \) (chains with \( \Gamma_n, \Delta_n \) normal in \( \Gamma \) and \( \bigcap_n \Gamma_n = \{ 1 \} = \bigcap_n \Delta_n \)), are \( \mathrm{RG}(\Gamma, (\Gamma_n)) \) and \( \mathrm{RG}(\Gamma, (\Delta_n)) \) equal?
Let \( G \) be a discrete group and \( \widetilde X \) a simply-connected CW-complex with a free \( G \)-action, and \( X \) the quotient \( G \backslash \widetilde X \). A particular case is when \( \widetilde X \) is a classifying space for \( G \), i.e. contractible.
The aim is to study the homology groups \( H_n(\cdot; {\mathbb Z}) \) for finite covers of \( X \). For this, suppose that the \( (n + 1) \)-skeleton of \( X \) is finite, and take a residual chain \( G_0 = G \supset G_1 \supset \cdots \) of normal, finite-index subgroups of \( G \) such that \( \bigcap_i G_i = \{ 1 \} \). Denote \( X_i = G_i \backslash \widetilde X \). The Lück Approximation Theorem states that:
\[ \lim_{i \to +\infty} \frac{\mathrm{rank}_{\mathbb Z} H_n(X_i ; {\mathbb Z})} {|G / G_i|} = b_n^{(2)}(\widetilde X \to X). \] The question motivating the rest of the talk will be to estimate the growth of \( t(H_n(X_i ; {\mathbb Z})) \) (where \( t(A) \) is the size of the torsion subgroup of a finitely generated abelian group \( A \)). In full generality it is possible to say that \( \log(t(H_n(X_i ; {\mathbb Z}))) / |G/G_i| \) is bounded.
Theorem (Kar–Kropholler–Nikolov): Suppose that \( G \) is amenable and that \( H_n(\widetilde X; {\mathbb Z}) = 0 \) (for example if \( \widetilde X \) is contractible). Then \[ \lim_{i \to +\infty} \frac{\log t(H_n(X_i; {\mathbb Z}))}{|G/G_i|} = 0. \]
Hyperbolic manifolds
Let \( \Sigma \) be a surface and \( f \in \mathrm{Homeo}^+(\Sigma) \). Let \( M \) be the 3–manifold obtained from \( \Sigma \times [0, 1] \) by identifying \( \Sigma \times \{0\} \) with \( \Sigma \times\{1\} \) via \( f \). If \( f \) is a pseudo-Anosov diffeomorphism then \( M \) is hyperbolic. If a 3–manifold \( M \) is obtained from this construction say that it is
fibered. A theorem of Agol states that every hyperbolic manifold has a finite cover which is fibered.
If \( M \) is fibered with fiber \( \Sigma \) and monodromy \( f \) then its fundamental group has a splitting:
\[ 1 \to \pi_1(\Sigma) \to \pi_1(M) \to {\mathbb Z} \to 1 \] coming from the presentation \[ \pi_1(M) = \langle \pi_1(\Sigma), t \mid \forall x \in \pi_1(\Sigma),\ t^{-1}xt = f_*(x) \rangle. \] More generally, if \( H \) is a group and \( f : H \to H \) is an injective morphism then the group obtained by: \[ G = \langle H, t \mid \forall x \in H,\ t^{-1}xt = f(x) \rangle \] is called an ascending HNN-extension and denoted by \( H *_f \). Then: Any semi-direct product \( H \rtimes {\mathbb Z} \) is an ascending HNN-extension; If \( G = H *_f \), let \( \phi: G \to {\mathbb Z} \) be the morphism defined by \( \phi|_H \equiv 0 \) and \( \phi(t) = 1 \); it will be called the induced character of the extension \( H *_f \).
Definition: Let \( G \) be a group with a finite generating set \( S \). The Bieri–Neumann–Strebel invariant is the subset \( \Sigma(G) \subset H^1(G, {\mathbb R}) \setminus \{0\} \) containing all classes \( \phi \) such that the subgraph of the Cayley graph of \( G \) induced by the subset \( \{ g \in G: \phi(g) \ge 0 \} \) is connected.
Lück approximation theorem
Let \( K \) be a finite CW-complex with residually finite fundamental group \( \Gamma = \pi_1(K) \). Let \( \Gamma=\Gamma_0 \supset \Gamma_1 \supset \cdots \) be a residual chain in \( \Gamma \), meaning that the \( \Gamma_n \) are finite-index, normal subgroups and \( \bigcap_n \Gamma_n = \{ 1 \} \). Denote by \( \widetilde K \to K \) the universal cover and \( K_n = \Gamma_n \backslash \widetilde K \). Then the Lück approximation theorem states that the \( L^2 \)-Betti numbers of the covering \( \widetilde K \to K \) are given by:
\[ \beta_q^{(2)}(\widetilde K \to K) = \lim_{n\to+\infty} \frac {b_q(K_n; {\mathbb Q})} {|\Gamma / \Gamma_n|}. \] The following question is then very natural, and was apparently first asked by Farber around 1998:
Question:Can one prove a statement similar to Lück approximation for Betti numbers with coefficients in a field of positive characteristic?
The main theme of this talk is the interplay between the algebra of group rings and the analysis behind \( L^2 \)-Betti numbers.
Geometrisation
Let \( N \) be a compact, orientable 3–manifold. Say that \( N \) is
prime if it cannot be decomposed as the connected sum of two 3–manifolds, both not homeomorphic to \( \mathbb S^3 \). (By the Sphere Theorem, \( N \) is prime if and only if either it contains no embedded sphere which does not bound a 3-ball, or \( N = \mathbb S^2 \times \mathbb S^1 \).)
An embedded torus \( T \subset N \) is said to be
essential if the induced map \( \pi_1(T) \to \pi_1(N) \) is injective. By Papakyriakopoulos’ Loop Theorem, if \( N \) is prime this is equivalent to \( T \) not being the boundary of an embedded solid torus.
Geometrisation Theorem (Perelman, conjectured by W. Thurston): Let \( N \) be a prime 3–manifold. Then one of the following holds: \( N \) is Seifert fibered; \( N \) is hyperbolic; \( N \) contains an incompressible torus. |
As for the first question, the answer is no. For example, what real number should represent the $5$-adic number$$\sum_{n=0}^{\infty} 5^{n!}$$? Note that in the real numbers this series is obviously divergent.
As for the second question: you should know that $\Bbb{Z}_p$ is a local ring whose unique maximal ideal is $p\Bbb{Z}_p$. This means that every $p$-adic integer which is not divisible by $p$ is invertible. Moreover, it is a UFD, and every element can be factorized as $up^k$ for some unit $u$ and some $k \ge 0$.
So every element of $\Bbb{Q}_p$ has the form$$\frac{up^k}{vp^h} = (uv^{-1})p^{k-h}$$
EDIT: The confusion arises because you are thinking of these numbers as if they were real numbers: but they are not! Let's consider for example the sequence of integers (actual integers in $\Bbb{Z}$)$$1, \ \ 1+5, \ \ 1+5+5^2, \ \ 1+5+5^2+5^3, \dots$$In $\Bbb{R}$ this sequence diverges. However, if we consider it inside $\Bbb{Q}_5$, it converges to the $5$-adic number$$A=\sum_{n=0}^{\infty} 5^n$$Actually, this is the inverse of $-4$ in $\Bbb{Q}_5$, since$$-4A=A-5A = (1+5+5^2+5^3+5^4+ \dots)-(5+5^2+5^3+\dots) = 1$$(none of this is true in $\Bbb{Q}_p$ for $p \neq 5$, where the sequence diverges). So you have$$\sum_{n=0}^{\infty} 5^n = -\frac{1}{4} \ \ \ \ \mbox{ in } \Bbb{Q}_5$$This is possible because of the strange topological structure of the $p$-adic integers. |
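One can check this convergence with ordinary integer arithmetic: two integers are $5$-adically close exactly when their difference is divisible by a high power of $5$, so the partial sums $S_N = 1 + 5 + \cdots + 5^{N-1}$ should satisfy $-4S_N \equiv 1 \pmod{5^N}$. A small sketch (the function name `vp` is mine):

```python
def vp(x, p=5):
    """p-adic valuation of a nonzero integer: the exponent k in x = u * p^k."""
    k = 0
    while x % p == 0:
        x //= p
        k += 1
    return k

p = 5
for N in (1, 5, 20):
    S = sum(p**n for n in range(N))   # partial sum 1 + 5 + ... + 5^(N-1)
    # -4*S differs from 1 by a multiple of 5^N, i.e. S is 5-adically close to -1/4
    assert (-4*S - 1) % p**N == 0

# The factorization u * p^k from above: 250 = 2 * 5^3 with unit u = 2
assert vp(250) == 3 and 250 // 5**vp(250) == 2
```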
First, recall that
a spacetime is a pseudo-Riemannian manifold $(M,g)$ where $g$ has Lorentzian signature $(-,+,+,+)$.
In the paper "Gravitational Waves in General Relativity. VIII. Waves in Asymptotically Flat Space-Time" by R. Sachs, the author constructs a system of coordinates in section 2 for a
general spacetime. In other papers it is also said that such system can always be built in any spacetime. The question is about how to rigorously construct it.
For that matter, the author picks a function $u \in C^\infty(M)$ satisfying $$g^{ab} (\partial_a u) (\partial_b u)=0$$
Defining $k_a = \partial_a u$ then $k_a$ is a lightlike covector. Thus the level sets $u = \text{const}$ are null hypersurfaces. We furthermore have $k^a \nabla_a k_b=0$ meaning that the integral lines of $k$ are null geodesics - the generators of the null hypersurface. He further supposes $\nabla_a k^a \neq 0$.
The author says such $u$ can be used to build a coordinate system on its domain. To do it he says:
Let $\theta$ and $\phi$ be any pair of scalar functions that obey the equations $$k^a \nabla_a\theta = k^a\nabla_a \phi = 0 \Longrightarrow \nabla_a \theta \nabla^a\theta \nabla_b \phi\nabla^b \phi - (\nabla_a \phi \nabla^a \theta)^2=D\neq 0.$$
Here the implication sign follows from $\nabla_a k^a \neq 0$, as one can verify by a short calculation. $\theta$ and $\phi$ are constant along each ray; they should be visualized as optical angles.
So:
$u$ is an arbitrary function giving rise to lightlike level sets whose normals satisfy $\nabla_a k^a \neq 0$.
Then he says that there exist these angle functions that are constant on each ray. Well, that is reasonable, but why do such angle functions exist? What is their domain of definition, and
how are they constructed on an arbitrary spacetime? I really don't get how this is done. I mean, we cannot just say "these functions exist and are good coordinates"; we have to construct these coordinates like one constructs the Riemann normal coordinates, Fermi coordinates, etc.
What does he mean when he says that the implication sign follows from $\nabla_a k^a \neq 0$? If I'm not mistaken, considering the inner product among multivectors defined by $$\langle v_1\wedge\cdots \wedge v_k, w_1\wedge \cdots \wedge w_k\rangle = \det (\langle v_i,w_j\rangle)$$ the RHS of the implication sign is actually $\langle \nabla \theta\wedge \nabla \phi,\nabla\theta\wedge \nabla \phi\rangle$. That being nonzero means that $\nabla\phi$ and $\nabla \theta$ are not collinear and hence are linearly independent. But that is a condition on top of $\nabla\phi$ and $\nabla\theta$. I don't see how it follows from $\nabla_a k^a \neq 0$.
So in summary, how to make sense of these angular coordinates and of this short argument from the author of the paper? How do we construct these on
any spacetime? |
\begin{align} u'' &+ \Gamma^0_{00}(u')^2 + 2\Gamma^0_{01}u'v' + \Gamma^0_{11}(v')^2 = 0,\\ v'' &+ \Gamma^1_{00}(u')^2 + 2\Gamma^1_{01}u'v' + \Gamma^1_{11}(v')^2 = 0, \end{align} where $\Gamma^m_{ij}(u(s),v(s))$ is the Christoffel symbol of the second kind. The geodesic solution $u = u(s),v = v(s)$ is a curve defined for the interval $s\in[s_0,s_1]$. These equations can be rewritten as a system of first-order differential equations by setting $p = u'$ and $q = v'$: \begin{align} p' &+ \Gamma^0_{00}p^2 + 2\Gamma^0_{01}pq + \Gamma^0_{11}q^2 = 0\\ q' &+ \Gamma^1_{00}p^2 + 2\Gamma^1_{01}pq + \Gamma^1_{11}q^2 = 0 \end{align} (together with $u' = p$, $v' = q$), with the initial conditions $u(s_0) = u_0, u'(s_0) = du_0, v(s_0) = v_0, v'(s_0) = dv_0$.
Here is a code snippet of my implementation
def f(y, s, C, u, v):
    y0 = y[0]  # u
    y1 = y[1]  # u'
    y2 = y[2]  # v
    y3 = y[3]  # v'
    dy = np.zeros_like(y)
    dy[0] = y1
    dy[2] = y3
    C = C.subs({u: y0, v: y2})  # Evaluate C at (u, v) = (y0, y2)
    dy[1] = -C[0,0][0]*dy[0]**2 - \
            2*C[0,0][1]*dy[0]*dy[2] - \
            C[0,1][1]*dy[2]**2
    dy[3] = -C[1,0][0]*dy[0]**2 - \
            2*C[1,0][1]*dy[0]*dy[2] - \
            C[1,1][1]*dy[2]**2
    return dy

def solve(C, u0, s0, s1, ds):
    s = np.arange(s0, s1 + ds, ds)
    # The Christoffel symbol of 2nd kind, C, is a function of (u, v)
    from sympy.abc import u, v
    return sc.odeint(f, u0, s, args=(C, u, v))  # integration method: LSODA
I have implemented several generic test cases: torus, sphere, egg carton, and catenoid. However, there seems to be some issue with the solver. On a sphere, for example, the geodesic curve is a great circle (see reference). When I try to find the geodesic curve and plot it on a sphere (with the same parameters as the reference provided), the curve starts to veer off. There seems to be some sort of numerical instability altering the course of the geodesic curve over the interval $s\in[s_0,s_1]$. Is there any way I can make my solver more stable? I have tried to reduce the step size of the solver, but that has not made things any better (visually at least... I could probably try to estimate the convergence rate).
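For comparison, here is a minimal self-contained version of the same setup (the unit-sphere metric, the names `christoffel`, `Gfun` and `rhs`, and the equatorial initial data are my own choices, not the code above) in which the Christoffel symbols are lambdified once instead of calling `subs` inside the right-hand side at every solver step; with equatorial initial data, the integrated geodesic then stays on the great circle:

```python
import numpy as np
import sympy as sp
from scipy.integrate import odeint

u, v = sp.symbols('u v')
x = (u, v)

# Unit sphere in (u, v) = (polar, azimuthal) coordinates: ds^2 = du^2 + sin(u)^2 dv^2
g = sp.Matrix([[1, 0], [0, sp.sin(u)**2]])
ginv = g.inv()

# Christoffel symbols of the second kind, Gamma^m_{ij}
def christoffel(m, i, j):
    return sum(ginv[m, k]*(sp.diff(g[k, i], x[j]) + sp.diff(g[k, j], x[i])
                           - sp.diff(g[i, j], x[k]))/2 for k in range(2))

# Lambdify all components ONCE; calling subs() inside the RHS is very slow
# and makes it easy to evaluate the symbols inconsistently.
flat = [christoffel(m, i, j) for m in range(2) for i in range(2) for j in range(2)]
Gfun = sp.lambdify((u, v), flat, 'numpy')
idx = lambda m, i, j: 4*m + 2*i + j

def rhs(y, s):
    uu, du, vv, dv = y
    G = Gfun(uu, vv)
    return [du,
            -G[idx(0, 0, 0)]*du**2 - 2*G[idx(0, 0, 1)]*du*dv - G[idx(0, 1, 1)]*dv**2,
            dv,
            -G[idx(1, 0, 0)]*du**2 - 2*G[idx(1, 0, 1)]*du*dv - G[idx(1, 1, 1)]*dv**2]

# Start on the equator (u = pi/2) moving along it: the geodesic is a great
# circle, so u should stay at pi/2 while v grows linearly in s.
s = np.linspace(0, 2*np.pi, 201)
sol = odeint(rhs, [np.pi/2, 0.0, 0.0, 1.0], s)
```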
Edit 1: Edit 2:
I have pasted the test case on the following link : Geodesic on a sphere.
I forgot to mention this, but I am using the following versions
SymPy : 0.7.7.dev SciPy : 0.16.0
It should now be possible for anyone to reproduce the same results. |
with multiplicative stochastic volatility, where Y is some adapted stochastic process. We prove existence–uniqueness results for weak and strong solutions of this equation under various conditions on the process Y and the coefficients a, $\sigma _{1}$, and $\sigma _{2}$. Also, we study the strong consistency of the maximum likelihood estimator for the unknown parameter θ. We suppose that Y is in turn a solution of some diffusion SDE. Several examples of the main equation and of the process Y are provided supplying the strong consistency.
We consider a multivariable functional errors-in-variables model $AX\approx B$, where the data matrices A and B are observed with errors, and a matrix parameter X is to be estimated. A goodness-of-fit test is constructed based on the total least squares estimator. The proposed test is asymptotically chi-squared under null hypothesis. The power of the test under local alternatives is discussed.
Our aim in this paper is to establish some strong stability properties of a solution of a stochastic differential equation driven by a fractional Brownian motion for which the pathwise uniqueness holds. The results are obtained using Skorokhod’s selection theorem.
In this paper, we consider two time-inhomogeneous Markov chains ${X_{t}^{(l)}}$, $l\in \{1,2\}$, with discrete time on a general state space. We assume the existence of some renewal set C and investigate the time of simultaneous renewal, that is, the first positive time when the chains hit the set C simultaneously. The initial distributions for both chains may be arbitrary. Under the condition of stochastic domination and nonlattice condition for both renewal processes, we derive an upper bound for the expectation of the simultaneous renewal time. Such a bound was calculated for two time-inhomogeneous birth–death Markov chains.
In this paper, the 2-D random closed sets (RACS) are studied by means of the Feret diameter, also known as the caliper diameter. More specifically, it is shown that a 2-D symmetric convex RACS can be approximated as precisely as we want by some random zonotopes (polytopes formed by the Minkowski sum of line segments) in terms of the Hausdorff distance. Such an approximation is fully defined from the Feret diameter of the 2-D convex RACS. Particularly, the moments of the random vector representing the face lengths of the zonotope approximation are related to the moments of the Feret diameter random process of the RACS. |
By definition, the metric tensor $\eta_{ij}$ transforms trivially under the defining rep of $SO(n,m)$.$$
\eta_{ij}=[D(g^{-T})]_{i}^{\ k}[D(g^{-T})]_{j}^{\ l}\eta_{kl}
=[D(g^{-1})]^{k}_{\ i}[D(g^{-1})]^{l}_{\ j}\eta_{kl}
$$and this holds for all $g\in SO(n,m)$. Consider a one-parameter subgroup of the defining rep with matrices $D(g)=e^{tJ}$ where $J^{i}_{\ j}$ is an element of the Lie algebra and $t$ is a real parameter. Substitute into the above equation,$$
\eta_{ij}=[e^{tJ}]^{k}_{\ i}[e^{tJ}]^{l}_{\ j}\eta_{kl}
$$and differentiate wrt $t$ at the identity $t=0$.$$
0=J^{k}_{\ i}\delta^{l}_{\ j}\eta_{kl}+\delta^{k}_{\ i}J^{l}_{\ j}\eta_{kl}
=J^{k}_{\ i}\eta_{kj}+J^{k}_{\ j}\eta_{ik}
$$This is the condition that the elements of the Lie algebra must obey. The Lie algebra elements can be generated by an antisymmetrized pair of vectors $x^{i}$, $y^{j}$.$$
J^{i}_{\ j}=x^{i}y_{j}-y^{i}x_{j}
$$where lowering is performed by the metric tensor $x_{i}=\eta_{ij}x^{j}$. The Lie algebra condition is automatically satisfied by generating the Lie algebra elements in this way. The Lie algebra elements $J_{ab}$ in the question are just made by choosing the vectors $x$ and $y$ as the basis vectors $x^{i}=\delta^{i}_{\ a}$, $y_{i}=\eta_{ij}\delta^{j}_{\ b}=\eta_{ib}$. $$
[J_{ab}]^{i}_{\ j}=\delta^{i}_{a}\eta_{jb}-\delta^{i}_{b}\eta_{ja}
$$Now compute the commutator (hopefully two different uses of square brackets is not too confusing),$$
[J_{ab},J_{cd}]^{i}_{\ j}=[J_{ab}]^{i}_{\ k}[J_{cd}]^{k}_{\ j}-[J_{cd}]^{i}_{\ k}[J_{ab}]^{k}_{\ j}
$$and a few lines of straightforward calculation gives,$$
[J_{ab},J_{cd}]^{i}_{\ j}=\eta_{bc}[J_{ad}]^{i}_{\ j}-\eta_{ac}[J_{bd}]^{i}_{\ j}-\eta_{bd}[J_{ac}]^{i}_{\ j}+\eta_{ad}[J_{bc}]^{i}_{\ j}
$$as the commutator for the defining rep. The Lie algebra is the same for all the group reps. The question asks for the commutator for a unitary rep of the group. To do this, the one-parameter unitary subgroup is $D(g)=e^{itJ}$ and so the Lie algebra elements of the defining rep are redefined as belonging to a unitary rep by the replacement $J\rightarrow iJ$. The commutator now becomes,$$
[iJ_{ab},iJ_{cd}]=\eta_{bc}iJ_{ad}-\eta_{ac}iJ_{bd}-\eta_{bd}iJ_{ac}+\eta_{ad}iJ_{bc}\\
-[J_{ab},J_{cd}]=i\eta_{bc}J_{ad}-i\eta_{ac}J_{bd}-i\eta_{bd}J_{ac}+i\eta_{ad}J_{bc}
$$which is the commutator in the question apart from an overall sign change. This is easily fixed by changing the definition of the Lie algebra elements of the defining rep to,$$
[J_{ab}]^{i}_{\ j}=\delta^{i}_{b}\eta_{ja}-\delta^{i}_{a}\eta_{jb} \ .
$$ |
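The commutation relation can also be checked by brute force. The sketch below (the names `eta` and `J` are mine) builds the defining-rep generators $[J_{ab}]^{i}_{\ j}=\delta^{i}_{\ a}\eta_{jb}-\delta^{i}_{\ b}\eta_{ja}$ for the Lorentz case $SO(3,1)$ and verifies the commutator formula for all index choices:

```python
import numpy as np
from itertools import product

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # metric of SO(3,1)
n = 4

def J(a, b):
    # [J_ab]^i_j = delta^i_a eta_{jb} - delta^i_b eta_{ja}
    M = np.zeros((n, n))
    for i, j in product(range(n), repeat=2):
        M[i, j] = (i == a)*eta[j, b] - (i == b)*eta[j, a]
    return M

# [J_ab, J_cd] = eta_bc J_ad - eta_ac J_bd - eta_bd J_ac + eta_ad J_bc
for a, b, c, d in product(range(n), repeat=4):
    lhs = J(a, b) @ J(c, d) - J(c, d) @ J(a, b)
    rhs = (eta[b, c]*J(a, d) - eta[a, c]*J(b, d)
           - eta[b, d]*J(a, c) + eta[a, d]*J(b, c))
    assert np.allclose(lhs, rhs)
```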
Consider scattering some particles in a state collectively denoted by $i$ to a final state denoted by $f$. The scattering amplitude, or S-matrix, is then defined by: $S_{fi}\equiv \langle f|e^{-iHt}|i\rangle$. We then separate the S-matrix into the identity and another part as $S_{fi}=\delta_{fi}+iT_{fi}$. The statement of unitarity is that $S^\dagger S=1$, which implies that $2{\rm Im}T=T^\dagger T$; this leads to the optical theorem and all that.
In field theory, what we calculate is the amplitude where we stick just $T$ between two states. That is, we only usually calculate the amplitudes where something interesting is happening.
In the study of effective field theories, I often see statements about the violation of unitarity which confuse me. For example, if we took a simple scalar field theory with a derivative interaction $\mathcal L=\frac{1}{2}(\partial\phi)^2+\lambda(\partial\phi)^4/\Lambda^4$ then we could calculate $2\to 2$ scattering and we'd find something like $\mathcal{M}_{2\to 2}\sim \lambda k^4/\Lambda^4$.
I've read and heard people say that for $k\gg \Lambda$, this leads to a violation of unitarity. I assume this means a violation of $2{\rm Im}T=T^\dagger T$. Why is this the case? Certainly the perturbative expansion breaks down in this regime but why is this connected to unitarity?
If the above example does violate unitarity, then what's the difference between the above and a normal, non-derivative $\lambda\phi^4$ interaction with $\lambda\gg 1$? The key thing seems to be that $\mathcal{M}$ gets really big in the derivative example, but this would also occur in $\lambda\phi^4$, and I doubt that this latter theory has any violations of unitarity. |
The Annals of Probability, Volume 21, Number 2 (1993), 831-860. A Nonstandard Law of the Iterated Logarithm for Trimmed Sums. Abstract:
Let $X_i, i \geq 1$, be independent random variables with a common distribution in the domain of attraction of a strictly stable law, and for each $n \geq 1$ let $X_{1, n} \leq \cdots \leq X_{n, n}$ denote the order statistics of $X_1, \ldots, X_n$. In 1986, S. Csorgo, Horvath and Mason showed that for each sequence $k_n, n \geq 1$, of nonnegative integers with $k_n \rightarrow \infty$ and $k_n/n \rightarrow 0$ as $n \rightarrow \infty$, the trimmed sums $S_n(k_n) = X_{k_n + 1, n} + \cdots + X_{n - k_n, n}$ converge in distribution to the standard normal distribution, when properly centered and normalized, despite the fact that the entire sums $X_1 + \cdots + X_n$ have a strictly stable limit, when properly centered and normalized. The asymptotic almost sure behavior of $S_n(k_n)$ strongly depends on the rate at which $k_n$ converges to $\infty$. The sequences $k_n \sim c \log \log n$ as $n \rightarrow \infty$ for $0 < c < \infty$ constitute a borderline case between a classical law of the iterated logarithm and a radically different behavior. This borderline case is investigated in detail for nonnegative summands $X_i$.
Article information. Source: Ann. Probab., Volume 21, Number 2 (1993), 831-860. First available in Project Euclid: 19 April 2007. Permanent link: https://projecteuclid.org/euclid.aop/1176989270. Digital Object Identifier: doi:10.1214/aop/1176989270. Mathematical Reviews number (MathSciNet): MR1217568. Zentralblatt MATH identifier: 0776.60040. Citation:
Haeusler, Erich. A Nonstandard Law of the Iterated Logarithm for Trimmed Sums. Ann. Probab. 21 (1993), no. 2, 831--860. doi:10.1214/aop/1176989270. https://projecteuclid.org/euclid.aop/1176989270 |
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV
(Springer, 2015-07-10)
The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T} \rangle$ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ... |
I was doing a search on Google to verify something, and I found a bunch of Web sites describing Einstein's equation.
Except that they weren't.
If you really want to know, Einstein's equation is $R_{ab}-Rg_{ab}/2=8\pi T_{ab}$, where $R_{ab}$ is the Ricci tensor, $R$ is the scalar curvature (both quantities are derived from the Riemann curvature tensor, which in turn is a function of the metric tensor, $g_{ab}$, describing the "intrinsic curvature" of the spacetime manifold), and $T_{ab}$ is the stress-energy-momentum tensor characterizing any matter and energy fields that fill the spacetime. What this equation basically tells you is that the presence of matter and energy (the right side of the equation) determines the curvature of spacetime (the left side).
For the record, $E=mc^2$ is not, repeat, NOT Einstein's equation. Oh, it was derived by Einstein alright
*, it's just not what physicists normally call Einstein's equation. $E=mc^2$ is simply the residual "rest energy" of an object in special relativity. In Newtonian mechanics, this quantity is undetermined; what matters is the relative energy difference between two states, not the absolute energy of any given state. In contrast, in special relativity this rest energy is well defined.
The fundamental assumption of special relativity is that a transformation from one coordinate system to another leaves the relativistic interval, $ds^2=c^2dt^2-dx^2-dy^2-dz^2$, unchanged. Dividing both sides with $dt^2$, this equation can be rewritten as $ds^2/dt^2=c^2-v^2$, where $v$ is the ordinary velocity.
The motion of a particle is governed by the principle of least action. The "action" is the time integral of a function called the
Lagrangian, between the start and end position of the particle's path.
In special relativity, the action of a free particle of mass $m$ is simply $S=\int-mc~ds$. This is really just the relativistic version of the statement that a freely moving particle will always choose the "shortest" path between two points.
Taking the previous expression into account, the action can be rewritten in the form of a time integral as $\int-mc\sqrt{c^2-v^2}~dt$. From this, the Lagrangian of a free particle is $L=-mc\sqrt{c^2-v^2}$.
The momentum of a particle is defined as $\vec{p}=\partial L/\partial\vec{v}=m\vec{v}/\sqrt{1-v^2/c^2}$. The energy of a particle is defined as $E=\vec{p}\cdot\vec{v}-L$. These two quantities are derived from the Lagrangian using symmetry considerations: assuming that the Lagrangian remains invariant under a spatial or time translation, it can be observed that these two quantities remain conserved. (Briefly: The principle of least action, combined with the assumption that space is homogeneous, demands that under small variations of the spatial coordinates, $\vec{x}$, the variation of $L$ will be zero, or $\delta L=\delta\vec{x}\cdot\partial L/\partial\vec{x}=0$. Since $\delta\vec{x}$ can be arbitrary, $\partial L/\partial\vec{x}$ must be zero. But $\partial L/\partial\vec{x}=d(\partial L/\partial\vec{v})/dt$, so $\vec{p}=\partial L/\partial\vec{v}$ is a conserved quantity. Similarly, time homogeneity means that $L$ does not explicitly depend on $t$, so it is a function of coordinates and velocities only. Then, $dL/dt=(\partial L/\partial\vec{x})\cdot(d\vec{x}/dt)+(\partial L/\partial\vec{v})\cdot(d\vec{v}/dt)$. But $\partial L/\partial\vec{x}=d(\partial L/\partial\vec{v})/dt$, so $dL/dt=\vec{v}\cdot d(\partial L/\partial\vec{v})/dt+(\partial L/\partial\vec{v})\cdot(d\vec{v}/dt)$. Or, $d(\vec{p}\cdot\vec{v}-L)/dt=0$, so $E=\vec{p}\cdot\vec{v}-L$ is conserved under time translations.)
The rest is a straightforward calculation:
\[E=\frac{mv^2}{\sqrt{1-\frac{v^2}{c^2}}}+\frac{mc^2\left(1-\frac{v^2}{c^2}\right)}{\sqrt{1-\frac{v^2}{c^2}}}=\frac{mc^2}{\sqrt{1-\frac{v^2}{c^2}}}.\]
When the velocity is small, higher-order terms can be ignored and we're left with the expression $E\simeq mc^2+mv^2/2$. This is consistent with the Newtonian expression of energy, $E=mv^2/2+C$, where $C$ was an arbitrary integration constant; in the relativistic case, we're no longer free to choose any $C$, instead the "rest energy" of the particle is well defined: when $v=0$, $E=mc^2$.
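The last two steps can be verified mechanically; the following sympy sketch reproduces the energy from the Lagrangian and recovers the low-velocity expansion (the symbol names are mine):

```python
import sympy as sp

m, c, v = sp.symbols('m c v', positive=True)
L = -m*c*sp.sqrt(c**2 - v**2)     # free-particle Lagrangian
p = sp.diff(L, v)                 # momentum p = dL/dv
E = sp.simplify(p*v - L)          # energy E = p*v - L

# E is m*c^3/sqrt(c^2 - v^2), i.e. mc^2 / sqrt(1 - v^2/c^2)
assert sp.simplify(E*sp.sqrt(c**2 - v**2) - m*c**3) == 0

# Low-velocity expansion: the rest energy plus the Newtonian kinetic term
low_v = sp.series(E, v, 0, 4).removeO()
assert sp.simplify(low_v - (m*c**2 + m*v**2/2)) == 0
```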
To summarize, $E=mc^2$ is the consequence of four assumptions: that the relativistic interval, $ds$, is invariant under a change of coordinate systems, that space is homogeneous, that time is homogeneous, and that the motion of a free particle is governed by the requirement that the action, $S=\int-mc~ds$, remains minimal between two points.
That the rest energy is well defined does suggest that an actual, physical relationship exists between matter and energy. Interesting, to be sure, and may be of some significance if you were to build an antimatter bomb (not an atomic bomb, as some writers suggest) but it's really just a minor consequence of a simple equation, nothing more. And it is not, I repeat, NOT the one physicists call Einstein's equation.
I suspect one reason $E=mc^2$ became "Einstein's equation" is because it's easy to remember, and even a mediocre high school education is enough to make one understand what it means. Nothing wrong with that, just make sure you also know that this is
not a fundamental equation, this is not why physicists revere Einstein and his work, this is not what makes the theory of relativity perhaps the most powerful physical theory known to man. It is just a simple result of a simple calculation.
Keep this in mind the next time you come across a writing that describes how Einstein set out to "discover" $E=mc^2$, or worse yet, writings like the one I recently saw that "expose" Einstein as a "fraud" or a "plagiarist" because purportedly, his "derivation" of $E=mc^2$ was "flawed".
*I was horrified to learn recently that this article was seen by some as an attempt to "prove" that $E=mc^2$ was not derived by Einstein. That is not what I am saying here! Indeed, I even decided to change the title (originally, it said "E = mc² is not Einstein's Equation") because frankly, the last thing I had in mind when I wrote this was to provide fuel for the ramblings of anti-Semitic crackpots.
The point I am making here is not that $E=mc^2$ is not Einstein's work (it most certainly is) but that $E=mc^2$ is NOT the equation that pops into most physicists' minds when you mention "Einstein's equation". See the sidebar above. Of course the pedantic might point out that what I call Einstein's equation is really Einstein's field equation(s), and thus it's okay to call $E=mc^2$ Einstein's equation, but I never much subscribed to pedantry, and in any case, both my Wald and my Landau & Lifshitz call the field equation Einstein's equation, and you cannot get much more pedantic than Landau & Lifshitz! |
Let $(X_{k},\xi _{k})_{k\in \mathbb{N}}$ be a sequence of independent copies of a pair $(X,\xi )$ where X is a random process with paths in the Skorokhod space $D[0,\infty )$ and ξ is a positive random variable. The random process with immigration $(Y(u))_{u\in \mathbb{R}}$ is defined as the a.s. finite sum $Y(u)=\sum _{k\ge 0}X_{k+1}(u-\xi _{1}-\cdots -\xi _{k})\mathbb{1}_{\{\xi _{1}+\cdots +\xi _{k}\le u\}}$. We obtain a functional limit theorem for the process $(Y(ut))_{u\ge 0}$, as $t\to \infty $, when the law of ξ belongs to the domain of attraction of an α-stable law with $\alpha \in (0,1)$, and the process X oscillates moderately around its mean $\mathbb{E}[X(t)]$. In this situation the process $(Y(ut))_{u\ge 0}$, when scaled appropriately, converges weakly in the Skorokhod space $D(0,\infty )$ to a fractionally integrated inverse stable subordinator.
In this paper we develop a general framework for quantifying how binary risk factors jointly influence a binary outcome. Our key result is an additive expansion of odds ratios as a sum of marginal effects and interaction terms of varying order. These odds ratio expansions are used for estimating the excess odds ratio, attributable proportion and synergy index for a case-control dataset by means of maximum likelihood from a logistic regression model. The confidence intervals associated with these estimates of joint effects and interaction of risk factors rely on the delta method. Our methodology is illustrated with a large Nordic meta dataset for multiple sclerosis. It combines four studies, with a total of 6265 cases and 8401 controls. It has three risk factors (smoking and two genetic factors) and a number of other confounding variables.
In this paper we propose a multi-state model for the evaluation of the conversion option contract. The multi-state model is based on age-indexed semi-Markov chains that are able to reproduce many important aspects that influence the valuation of the option such as the duration problem, the time non-homogeneity and the ageing effect. The value of the conversion option is evaluated after the formal description of this contract.
The paper deals with bonus–malus systems with different claim types and varying deductibles. The premium relativities are softened for the policyholders who are in the malus zone and these policyholders are subject to per claim deductibles depending on their levels in the bonus–malus scale and the types of the reported claims. We introduce such bonus–malus systems and study their basic properties. In particular, we investigate when it is possible to introduce varying deductibles, what restrictions we have and how we can do this. Moreover, we deal with the special case where varying deductibles are applied to the claims reported by policyholders occupying the highest level in the bonus–malus scale and consider two allocation principles for the deductibles. Finally, numerical illustrations are presented.
In the paper we study the models of time-changed Poisson and Skellam-type processes, where the role of time is played by compound Poisson-Gamma subordinators and their inverse (or first passage time) processes. We obtain explicitly the probability distributions of the considered time-changed processes and discuss their properties.
We extend the Poincaré–Borel lemma to a weak approximation of a Brownian motion via simple functionals of uniform distributions on n-spheres in the Skorokhod space $D([0,1])$. This approach is used to simplify the proof of the self-normalized Donsker theorem in Csörgő et al. (2003). Some notes on spheres with respect to $\ell _{p}$-norms are given.
The asymptotic behavior, as $T\to \infty $, of some functionals of the form $I_{T}(t)=F_{T}(\xi _{T}(t))+{\int _{0}^{t}}g_{T}(\xi _{T}(s))\hspace{0.1667em}dW_{T}(s)$, $t\ge 0$ is studied. Here $\xi _{T}(t)$ is the solution to the time-inhomogeneous Itô stochastic differential equation
$T>0$ is a parameter, $a_{T}(t,x),x\in \mathbb{R}$ are measurable functions, $|a_{T}(t,x)|\le C_{T}$ for all $x\in \mathbb{R}$ and $t\ge 0$, $W_{T}(t)$ are standard Wiener processes, $F_{T}(x),x\in \mathbb{R}$ are continuous functions, $g_{T}(x),x\in \mathbb{R}$ are measurable locally bounded functions, and everything is real-valued. The explicit form of the limiting processes for $I_{T}(t)$ is established under nonregular dependence of $a_{T}(t,x)$ and $g_{T}(x)$ on the parameter T.
In various research areas related to decision making, problems and their solutions frequently rely on certain functions being monotonic. In the case of non-monotonic functions, one would then wish to quantify their lack of monotonicity. In this paper we develop a method designed specifically for this task, including quantification of the lack of positivity, negativity, or sign-constancy in signed measures. We note relevant applications in Insurance, Finance, and Economics, and discuss some of them in detail.
This paper represents an extended version of an earlier note [10]. The concept of weighted entropy takes into account values of different outcomes, i.e., makes entropy context-dependent, through the weight function. We analyse analogs of the Fisher information inequality and entropy power inequality for the weighted entropy and discuss connections with weighted Lieb’s splitting inequality. The concepts of rates of the weighted entropy and information are also discussed.
We study random independent and identically distributed iterations of functions from an iterated function system of homeomorphisms on the circle which is minimal. We show how such systems can be analyzed in terms of iterated function systems with probabilities which are non-expansive on average.
The paper is devoted to the restricted Oppenheim expansion of real numbers ($\mathit{ROE}$), which includes the already known Engel, Sylvester and Lüroth expansions as partial cases. We find conditions under which, for almost all (with respect to Lebesgue measure) real numbers from the unit interval, their $\mathit{ROE}$-expansion contains an arbitrary digit $i$ only finitely many times. The main results of the paper state the singularity (w.r.t. the Lebesgue measure) of the distribution of a random variable with i.i.d. increments of symbols of the restricted Oppenheim expansion. The general non-i.i.d. case is also studied and sufficient conditions for the singularity of the corresponding probability distributions are found.
In this paper we provide a systematic exposition of basic properties of integrated distribution and quantile functions. We define these transforms in such a way that they characterize any probability distribution on the real line and are Fenchel conjugates of each other. We show that uniform integrability, weak convergence and tightness admit a convenient characterization in terms of integrated quantile functions. As an application we demonstrate how some basic results of the theory of comparison of binary statistical experiments can be deduced using integrated quantile functions. Finally, we extend the area of application of the Chacon–Walsh construction in the Skorokhod embedding problem.
The paper deals with a generalization of the risk model with stochastic premiums where dependence structures between claim sizes and inter-claim times as well as premium sizes and inter-premium times are modeled by Farlie–Gumbel–Morgenstern copulas. In addition, dividends are paid to its shareholders according to a threshold dividend strategy. We derive integral and integro-differential equations for the Gerber–Shiu function and the expected discounted dividend payments until ruin. Next, we concentrate on the detailed investigation of the model in the case of exponentially distributed claim and premium sizes. In particular, we find explicit formulas for the ruin probability in the model without either dividend payments or dependence as well as for the expected discounted dividend payments in the model without dependence. Finally, numerical illustrations are presented.
This paper proves the existence and uniqueness of a solution to doubly reflected backward stochastic differential equations where the coefficient is stochastic Lipschitz, by means of the penalization method.
Stationary processes have been extensively studied in the literature. Their applications include modeling and forecasting numerous real life phenomena such as natural disasters, sales and market movements. When stationary processes are considered, modeling is traditionally based on fitting an autoregressive moving average (ARMA) process. However, we challenge this conventional approach. Instead of fitting an ARMA model, we apply an AR(1) characterization in modeling any strictly stationary processes. Moreover, we derive consistent and asymptotically normal estimators of the corresponding model parameter.
We study the frequency process $f_{1}$ of the block of 1 for a Ξ-coalescent Π with dust. If Π stays infinite, $f_{1}$ is a jump-hold process which can be expressed as a sum of broken parts from a stick-breaking procedure with uncorrelated, but in general non-independent, stick lengths with common mean. For Dirac-Λ-coalescents with $\varLambda =\delta _{p}$, $p\in [\frac{1}{2},1)$, $f_{1}$ is not Markovian, whereas its jump chain is Markovian. For simple Λ-coalescents the distribution of $f_{1}$ at its first jump, the asymptotic frequency of the minimal clade of 1, is expressed via conditionally independent shifted geometric distributions.
Let $X: \mathbb{N}^2 \to \mathbb{N}$ be defined as follows.
Let $X(a, b)$ be the number of unique ways we can write $a$ as the sum of $b$ numbers, where each of the $b$ numbers is coprime to $a$, and where $a \in \mathbb{N}$ and $b \in \mathbb{N}$.
Example:
$X(a, 2) = |\{(x, y): x + y = a,\ \gcd(a, x) = \gcd(a, y) = 1\}|$
I can easily show that $X(a, 2) = \frac{\phi(a)}{2}$, where $\phi$ is Euler's totient function, when $a > 2$.
Proof:
Let $\Phi_{a} = \{k \in \mathbb{N}: k \le a,\ \gcd(a, k) = 1\}$ (bounding $k$ by $a$ so that $|\Phi_{a}| = \phi(a)$).
Let $k \in \mathbb{N}$; then it is easy to see that $\forall k < a$, $\exists n \in \mathbb{N}$ such that $a = k + n$.
Specifically, this is $n = a - k$
Now if we only consider $k \in \Phi_{a}$ we can see that $n \in \Phi_{a}$. Why is this so? Here is why:
Let $k \in \Phi_{a}$, i.e. $\gcd(a, k) = 1$. Now if we assume $n \notin \Phi_{a}$, i.e. $\gcd(a, n) \neq 1$, it implies that $\exists m \in \mathbb{N}$, $m > 1$, such that $m \mid a$ and $m \mid n$. But this means that $m \mid k$ (because $n = a - k$), which contradicts $\gcd(a, k) = 1$. Thus our assumption was wrong and therefore $n \in \Phi_{a}$.
Hence it follows that $\forall x \in \Phi_{a}$ $\exists y \in \Phi_{a}$ such that $x + y = a$. Note that $x \neq y$: $x = y$ would force $a = 2x$, and then $\gcd(a, x) = x > 1$ unless $a = 2$, which is excluded since $a > 2$. We also know that $|\Phi_{a}| = \phi(a)$, thus once we pair our numbers together we have exactly $\frac{\phi(a)}{2}$ unique pairs $(x, y)$. Here unique means if we have counted the pair $(x, y)$ then we do not count the pair $(y, x)$, as we consider them to be the same pair.
End of proof
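The identity $X(a, 2) = \frac{\phi(a)}{2}$ can also be sanity-checked by brute force. Below is a short Python sketch; the helpers `phi` and `X2` are my own illustrative names, not part of the question, and `X2` uses the unordered-pair convention from the proof (count $x \le y$ only).

```python
from math import gcd

def phi(a):
    # Euler's totient, by direct count (fine for small a)
    return sum(1 for k in range(1, a + 1) if gcd(a, k) == 1)

def X2(a):
    # unordered pairs (x, y) with x + y = a and both parts coprime to a
    return sum(1 for x in range(1, a // 2 + 1)
               if gcd(a, x) == 1 and gcd(a, a - x) == 1)

print(X2(10))  # 2: the pairs (1, 9) and (3, 7)
```

Running the check for $3 \le a < 100$ confirms the formula in every case.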
Now to my actual question: Can we find the value of $X(a, b)$ in terms of $a$ and $b$ when $b > 2$? So far I have only defined the trivial cases:
$X(a, b) = \begin{cases} 0, & b \gt a \ \text{ or } \ (a \equiv 0 \text{ and } b \equiv 1 \pmod 2) \ \text{ or } \ (b = 1,\ a \gt 1)\\ 1, & b = a\\ \frac{\phi(a)}{2}, & b = 2,\ a \gt 2\\ ?, & \text{otherwise}\end{cases}$
Just so that my question is clear, for $b = 3$ we have:
$X(a, 3) = |\{(x, y, z): x + y + z = a,\ \gcd(a, x) = \gcd(a, y) = \gcd(a, z) = 1\}|$
This question is purely out of interest; thanks in advance for any answers.
When first discretizing equations, it's useful to draw a diagram showing where the data lives relative to physical boundaries. Here are some schematics for visual aid:
Applying finite difference to a differential equation takes a different form for cell-centered (CC) and node (N) data.
--------------------------------------------------------------------------------------------------------------------------------
Node data
Consider N data.
Consider the Laplace stencil operating on data $u$ with $f$ on the right-hand side (RHS),
\begin{equation} \left(\frac{\partial^2 u}{\partial x^2}\right)_{boundary} = \frac{u_g - 2 u_b + u_i}{\Delta x^2} = f_b\end{equation}
Where $u_g,u_b,u_i$ are the ghost, boundary and first interior node points. This Laplacian operator may be written in matrix form as
$A = \frac{1}{\Delta x^2} \left[\begin{array}{ccccccccc}0 & 0 & 0 & & & & & & 0 \\1 & -2 & 1 & & & & & & \\0 & 1 & -2 & 1 & & & & & \\ & & & \ddots & \ddots & \ddots & & & \\ & & & & & 1 & -2 & 1 & 0 \\ & & & & & & 1 & -2 & 1 \\0 & & & & & & 0 & 0 & 0 \\\end{array} \right]$
Note that this stencil reaches to the ghost points.
Dirichlet BCs
Consider Dirichlet BCs: $u_b$ is known, and the solve is only from $u_i$ onward. Therefore, the only equation we must consider is at location $i$:
\begin{equation} \left(\frac{\partial^2 u}{\partial x^2}\right)_{i} = \frac{u_b - 2 u_i + u_{i+1}}{\Delta x^2} = f_i\end{equation}
If we were to write an equation for the boundary point, we could write
\begin{equation} \left(\frac{\partial^2 u}{\partial x^2}\right)_{b} = \frac{u_g - 2 u_b + u_{i}}{\Delta x^2} = f_b \rightarrow u_g = 2 u_b - u_{i} - \Delta x^2 f_b\end{equation}
If we insist $f_b=0$ then we have a simplified version:
\begin{equation} u_g = 2 u_b - u_{i}\end{equation}
So the equation changes to
\begin{equation} \left(\frac{\partial^2 u}{\partial x^2}\right)_{b} = 0 = 0\end{equation}
And we may remove it from our system. Looking back at the first interior point, let's move the boundary value to the RHS:
\begin{equation} \left(\frac{\partial^2 u}{\partial x^2}\right)_{i} = \frac{u_b - 2 u_i + u_{i+1}}{\Delta x^2} = f_i \rightarrow \frac{- 2 u_i + u_{i+1}}{\Delta x^2} = f_i - \frac{u_b}{\Delta x^2}\end{equation}
This means that the ghost points
should not enter the computations at all. They are a means to an end to apply the desired BCs. Correspondingly, the matrix $A$ is:$A = \frac{1}{\Delta x^2} \left[\begin{array}{ccccccccc}0 & 0 & 0 & & & & & & 0 \\1 & -2 & 1 & & & & & & \\0 & 1 & -2 & 1 & & & & & \\ & & & \ddots & \ddots & \ddots & & & \\ & & & & & 1 & -2 & 1 & 0 \\ & & & & & & 1 & -2 & 1 \\0 & & & & & & 0 & 0 & 0 \\\end{array} \right]\\ \rightarrowA = \frac{1}{\Delta x^2} \left[\begin{array}{ccccccccc}0 & 0 & 0 & & & & & & 0 \\0 & 0 & 0 & & & & & & \\0 & \textbf{0} & -2 & 1 & & & & & \\0 & 0 & 1 & -2 & & & & & \\ & & & \ddots & \ddots & \ddots & & & \\ & & & & & -2 & 1 & 0 & 0 \\ & & & & & 1 & -2 & \textbf{0} & 0 \\0 & & & & & & 0 & 0 & 0 \\0 & & & & & & 0 & 0 & 0 \\\end{array} \right]$
This is nice because the equation has effectively become smaller. NOTE: the first two 0's in the above equation refer to the ghost and boundary equations.
Result
As you can see, this "truncation" of the first and last column in $A$ will effectively apply Dirichlet BCs to our system, so long as $u_b$ is in fact zero; so this only works for a special case.
--------------------------------------------------------------------------------------------------------------------------------
Cell-Centered data
Consider the Laplace stencil operating on data $u$ with $f$ on the right-hand side (RHS),
\begin{equation} \left(\frac{\partial^2 u}{\partial x^2}\right)_{1} = \frac{u_g - 2 u_1 + u_{2}}{\Delta x^2} = f_1\end{equation}Where $u_g,u_1,u_2$ are the ghost, first interior and second interior cells. This Laplacian operator may be written in matrix form as
$A = \frac{1}{\Delta x^2} \left[\begin{array}{ccccccccc}1 & 1 & & & & & & & 0 \\1 & -2 & 1 & & & & & & \\ & 1 & -2 & 1 & & & & & \\ & & & \ddots & \ddots & \ddots & & & \\ & & & & & 1 & -2 & 1 & \\ & & & & & & 1 & -2 & 1 \\0 & & & & & & & 1 & 1 \\\end{array} \right]$
Let's substitute $u_g$ and adjust the stencil accordingly.
Dirichlet
Consider Dirichlet BCs, $u_g$ may be computed observing that the boundary value is the average of the neighboring two cell center values.\begin{equation} u_b = \frac{u_g + u_1}{2} \rightarrow u_g = 2u_b - u_1\end{equation}So our Laplacian stencil becomes:\begin{equation} \left(\frac{\partial^2 u}{\partial x^2}\right)_{1} = \frac{(2 u_b - u_1) - 2 u_1 + u_{2}}{\Delta x^2} = f_1\end{equation}This boundary point must be moved to the RHS to maintain a consistent matrix-vector multiplication. Therefore our equation changes:\begin{equation} \frac{u_g - 2 u_1 + u_{2}}{\Delta x^2} = f_1 \rightarrow \frac{- 3 u_1 + u_{2}}{\Delta x^2} = f_1 - \frac{2 u_b}{\Delta x^2}\end{equation}Correspondingly, the matrix $A$ changes:$A = \frac{1}{\Delta x^2} \left[\begin{array}{ccccccccc}1 & 1 & & & & & & & 0 \\1 & -2 & 1 & & & & & & \\ & 1 & -2 & 1 & & & & & \\ & & & \ddots & \ddots & \ddots & & & \\ & & & & & 1 & -2 & 1 & \\ & & & & & & 1 & -2 & 1 \\0 & & & & & & & 1 & 1 \\\end{array} \right]\\ \rightarrowA = \frac{1}{\Delta x^2} \left[\begin{array}{ccccccccc}0 & 0 & 0 & & & & & & 0 \\0 & -3 & 1 & & & & & & \\0 & 1 & -2 & 1 & & & & & \\ & & & \ddots & \ddots & \ddots & & & \\ & & & & & 1 & -2 & 1 & 0 \\ & & & & & & 1 & -3 & 0 \\0 & & & & & & 0 & 0 & 0 \\\end{array} \right]$
Note that the first and last equations are identities ($0=0$) which are reserved for the ghost points.
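The cell-centered Dirichlet treatment can be sketched the same way (a minimal NumPy example under my own assumptions: unit interval, $n$ cells, and the exact solution $u = x$ with $f = 0$, which the linear ghost extrapolation $u_g = 2u_b - u_1$ reproduces exactly):

```python
import numpy as np

n = 50
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx       # cell-center coordinates
ub_left, ub_right = 0.0, 1.0        # Dirichlet values u(0), u(1)

A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
A[0, 0] = A[-1, -1] = -3.0          # modified boundary stencil from u_g = 2 u_b - u_1
A /= dx**2

rhs = np.zeros(n)                   # f = 0 for u = x
rhs[0] -= 2.0 * ub_left / dx**2     # the 2 u_b / dx^2 terms moved to the RHS
rhs[-1] -= 2.0 * ub_right / dx**2

u = np.linalg.solve(A, rhs)
print(np.max(np.abs(u - x)))        # tiny: exact for linear solutions
```

Note the $-3$ diagonal entries and the factor of 2 on the boundary terms, which is exactly what distinguishes the cell-centered case from the node case above.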
Result
So, I stand corrected with my comment (on the OP's question) about assuming cell-centered data. This "truncation" only works in a special case for Node data.
In clustering of high-dimensional data a variable selection is commonly applied to obtain an accurate grouping of the samples. For two-class problems this selection may be carried out by fitting a mixture distribution to each variable. We propose a hybrid method for estimating a parametric mixture of two symmetric densities. The estimator combines the method of moments with the minimum distance approach. An evaluation study including both extensive simulations and gene expression data from acute leukemia patients shows that the hybrid method outperforms a maximum-likelihood estimator in model-based clustering. The hybrid estimator is flexible and performs well also under imprecise model assumptions, suggesting that it is robust and suited for real problems.
Cox proportional hazards model with measurement errors is considered. In Kukush and Chernova (2017), we elaborated a simultaneous estimator of the baseline hazard rate $\lambda (\cdot )$ and the regression parameter β, with the unbounded parameter set $\varTheta =\varTheta _{\lambda }\times \varTheta _{\beta }$, where $\varTheta _{\lambda }$ is a closed convex subset of $C[0,\tau ]$ and $\varTheta _{\beta }$ is a compact set in ${\mathbb{R}}^{m}$. The estimator is consistent and asymptotically normal. In the present paper, we construct confidence intervals for integral functionals of $\lambda (\cdot )$ and a confidence region for β under restrictions on the error distribution. In particular, we handle the following cases: (a) the measurement error is bounded, (b) it is a normally distributed random vector, and (c) it has independent components which are shifted Poisson random variables.
Limit behaviour of temporal and contemporaneous aggregations of independent copies of a stationary multitype Galton–Watson branching process with immigration is studied in the so-called iterated and simultaneous cases, respectively. In both cases, the limit process is a zero mean Brownian motion with the same covariance function under third order moment conditions on the branching and immigration distributions. We specialize our results for generalized integer-valued autoregressive processes and single-type Galton–Watson processes with immigration as well.
We investigate the pricing of cliquet options in a geometric Meixner model. The considered option is of monthly sum cap style while the underlying stock price model is driven by a pure-jump Meixner–Lévy process yielding Meixner distributed log-returns. In this setting, we infer semi-analytic expressions for the cliquet option price by using the probability distribution function of the driving Meixner–Lévy process and by an application of Fourier transform techniques. In an introductory section, we compile various facts on the Meixner distribution and the related class of Meixner–Lévy processes. We also propose a customized measure change preserving the Meixner distribution of any Meixner process.
In this paper we define the fractional Cox–Ingersoll–Ross process as $X_{t}:={Y_{t}^{2}}\mathbf{1}_{\{t<\inf \{s>0:Y_{s}=0\}\}}$, where the process $Y=\{Y_{t},t\ge 0\}$ satisfies the SDE of the form $dY_{t}=\frac{1}{2}(\frac{k}{Y_{t}}-aY_{t})dt+\frac{\sigma }{2}d{B_{t}^{H}}$, $\{{B_{t}^{H}},t\ge 0\}$ is a fractional Brownian motion with an arbitrary Hurst parameter $H\in (0,1)$. We prove that $X_{t}$ satisfies the stochastic differential equation of the form $dX_{t}=(k-aX_{t})dt+\sigma \sqrt{X_{t}}\circ d{B_{t}^{H}}$, where the integral with respect to fractional Brownian motion is considered as the pathwise Stratonovich integral. We also show that for $k>0$, $H>1/2$ the process is strictly positive and never hits zero, so that actually $X_{t}={Y_{t}^{2}}$. Finally, we prove that in the case of $H<1/2$ the probability of not hitting zero on any fixed finite interval by the fractional Cox–Ingersoll–Ross process tends to 1 as $k\to \infty $.
that is, $Af(x)=\theta (\kappa -x){f^{\prime }}(x)+\frac{1}{2}{\sigma }^{2}x{f^{\prime\prime }}(x)$, $x\ge 0$ ($\theta ,\kappa ,\sigma >0$). Alfonsi [1] showed that the equation has a smooth solution with partial derivatives of polynomial growth, provided that the initial function f is smooth with derivatives of polynomial growth. His proof was mainly based on the analytical formula for the transition density of the CIR process in the form of a rather complicated function series. In this paper, for a CIR process satisfying the condition ${\sigma }^{2}\le 4\theta \kappa $, we present a direct proof based on the representation of a CIR process in terms of a squared Bessel process and its additivity property. |
Stokes flow past a periodic array of spheres
This is the 3D equivalent of flow past a periodic array of cylinders.
We compare the numerical results with the solution given by the multipole expansion of Zick and Homsy, 1982.
Note that we do not use an adaptive mesh since the 3D gaps are much wider than for the 2D case.
This is adapted from Table 2 of Zick and Homsy, 1982, where the first column is the volume fraction \Phi of the spheres and the second column is the drag coefficient K such that the force exerted on each sphere in the array is: \displaystyle F = 6\pi\mu a K U with a the sphere radius, \mu the dynamic viscosity and U the average fluid velocity.
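As a quick sanity check of the drag relation above, here is an illustrative Python sketch (the `drag` helper is my own, not part of the solver; the K values are copied from the table reproduced below):

```python
from math import pi

# volume fraction -> drag coefficient K, from Zick & Homsy's Table 2
K = {0.027: 2.008, 0.064: 2.810, 0.125: 4.292, 0.216: 7.442,
     0.343: 15.4, 0.45: 28.1, 0.5236: 42.1}

def drag(phi, mu, a, U):
    # F = 6 * pi * mu * a * K * U
    return 6.0 * pi * mu * a * K[phi] * U

print(drag(0.125, mu=1.0, a=0.1, U=1.0))  # ≈ 8.09
```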
static double zick[7][2] = { {0.027, 2.008}, {0.064, 2.810}, {0.125, 4.292}, {0.216, 7.442}, {0.343, 15.4}, {0.45, 28.1}, {0.5236, 42.1}};
We can vary the maximum level of refinement,
nc is the index of the case in the table above; the radius of the sphere will be computed using the volume fraction \Phi.
int maxlevel = 5, nc;
double radius;
This function defines the embedded volume and face fractions.
The domain is the periodic unit cube, centered on the origin.
We turn off the advection term. The choice of the maximum timestep and of the tolerance on the Poisson and viscous solves is not trivial. This was adjusted by trial and error to minimize (possibly) splitting errors and optimize convergence speed.
stokes = true;
DT = 2e-2;
TOLERANCE = HUGE;
NITERMIN = 10;
We do the 7 cases computed by Zick & Homsy. The radius is computed from the volume fraction.
for (nc = 0; nc < 7; nc++) {
    maxlevel = 5;
    N = 1 << maxlevel;
    radius = pow (3.*zick[nc][0]/(4.*pi), 1./3.);
    run();
  }
}
We need an extra field to track convergence.
We initialize the embedded geometry.
sphere (cs, fs);
And set acceleration and viscosity to unity.
const face vector g[] = {1.,0.,0.}; a = g; mu = fm;
The boundary condition is zero velocity on the embedded boundary.
u.n[embed] = dirichlet(0); u.t[embed] = dirichlet(0); u.r[embed] = dirichlet(0);
We initialize the reference velocity.
foreach()
    un[] = u.x[];
}
We check for a stationary solution.
event logfile (i++; i <= 500)
{
  double avg = normf(u.x).avg, du = change (u.x, un)/(avg + SEPS);
  fprintf (fout, "%d %d %d %d %d %d %d %d %.3g %.3g %.3g %.3g %.3g\n",
           maxlevel, i, mgp.i, mgp.nrelax, mgp.minlevel,
           mgu.i, mgu.nrelax, mgu.minlevel,
           du, mgp.resa*dt, mgu.resa, statsf(u.x).sum, normf(p).max);
  fflush (fout);
  if (i > 1 && du < 1e-3) {
K is computed using formula 4.2 of Zick and Homsy, although the 1 - \Phi factor is a bit mysterious.
We stop.
return 1; /* stop */ }}
The drag coefficient closely matches the results of Zick & Homsy.
set xlabel 'Volume fraction'
set ylabel 'K'
set logscale y
set grid
set key top left
plot 'log' u 3:6 ps 1 lw 2 t 'Zick and Homsy, 1982', \
     '' u 3:5 ps 1 pt 6 lw 2 t '5 levels', \
     'spheres.6' u 3:5 ps 1 pt 8 lw 2 t '6 levels'
This can be further quantified by plotting the relative error. Better than second-order convergence with spatial resolution is obtained.
set ylabel 'Relative error'
plot 'log' u 3:(abs($6-$5)/$5) w lp t '5 levels', \
     'spheres.6' u 3:(abs($6-$5)/$5) w lp t '6 levels'
References
[sangani1982]
A.S. Sangani and A. Acrivos. Slow flow through a periodic array of spheres.
[zick1982]
A.A. Zick and G.M. Homsy. Stokes flow through periodic arrays of spheres.
Let $\{{\xi _{1}},{\xi _{2}},\dots \}$ be a sequence of independent but not necessarily identically distributed random variables. In this paper, the sufficient conditions are found under which the tail probability $\mathbb{P}(\,{\sup _{n\geqslant 0}}\,{\sum _{i=1}^{n}}{\xi _{i}}>x)$ can be bounded above by ${\varrho _{1}}\exp \{-{\varrho _{2}}x\}$ with some positive constants ${\varrho _{1}}$ and ${\varrho _{2}}$. A way to calculate these two constants is presented. The application of the derived bound is discussed and a Lundberg-type inequality is obtained for the ultimate ruin probability in the inhomogeneous renewal risk model satisfying the net profit condition on average.
This study introduces computation of option sensitivities (Greeks) using the Malliavin calculus under the assumption that the underlying asset and interest rate both evolve from a stochastic volatility model and a stochastic interest rate model, respectively. Therefore, it integrates the recent developments in the Malliavin calculus for the computation of Greeks: Delta, Vega, and Rho and it extends the method slightly. The main results show that Malliavin calculus allows a running Monte Carlo (MC) algorithm to present numerical implementations and to illustrate its effectiveness. The main advantage of this method is that once the algorithms are constructed, they can be used for numerous types of option, even if their payoff functions are not differentiable.
In the paper we consider time-changed Poisson processes where the time is expressed by compound Poisson-Gamma subordinators $G(N(t))$ and derive the expressions for their hitting times. We also study the time-changed Poisson processes where the role of time is played by the processes of the form $G(N(t)+at)$ and by the iteration of such processes.
A continuous-time regression model with a jointly strictly sub-Gaussian random noise is considered in the paper. Upper exponential bounds for probabilities of large deviations of the least squares estimator for the regression parameter are obtained.
The effect that weighted summands have on each other in approximations of $S={w_{1}}{S_{1}}+{w_{2}}{S_{2}}+\cdots +{w_{N}}{S_{N}}$ is investigated. Here, ${S_{i}}$’s are sums of integer-valued random variables, and ${w_{i}}$ denote weights, $i=1,\dots ,N$. Two cases are considered: the general case of independent random variables when their closeness is ensured by the matching of factorial moments and the case when the ${S_{i}}$ has the Markov Binomial distribution. The Kolmogorov metric is used to estimate the accuracy of approximation.
Confidence ellipsoids for linear regression coefficients are constructed by observations from a mixture with varying concentrations. Two approaches are discussed. The first one is the nonparametric approach based on the weighted least squares technique. The second one is an approximate maximum likelihood estimation with application of the EM-algorithm for calculating the estimates.
I'm going through the MIT Online Course Videos on Intro. to Algorithms at here at around 38:00.
So we have a recursion formula
$\qquad T(n) = T(n/10) + T(9n/10) + O(n)$
If we build a recursion tree it looks like
                        T(n)                              -- Level 1 = c*n
                      /      \
                T(n/10)      T(9n/10)                     -- Level 2 = c*n
                /     \       /      \
        T(n/100) T(9n/100) T(9n/100) T(81n/100)           -- Level 3 = c*n
          /  \                                               <= c*n
         .    .
        .      .
      O(1)      O(1)
where $c$ is a constant larger than $0$.
The shortest path from the root to a leaf is $\log_{10}(n)$.
The longest path from the root to a leaf is $\log_{10/9}(n)$.
Therefore, the cost could be calculated as Cost = Cost of each level * number of levels.
With the shortest path cost, we get a lower bound of $cn\log_{10}(n)$, and with the longest path cost an upper bound of $cn\log_{10/9}(n)$.
And now I have to add the costs of leaf nodes, which leads to my problem. In the video it says the total number of leaves is in $\Theta(n)$. I have trouble figuring out how he got to $\Theta(n)$.
The video further says $T(n)$ is bounded by
$\qquad cn\log_{10}(n) + O(n) \leq T(n) \leq cn\log_{10/9}(n) + O(n)$
Wouldn't it make more sense to say it's
$\qquad cn\log_{10}(n) + O(n^{\log_{10}(2)}) \leq T(n) \leq cn\log_{10/9}(n) + O(n^{\log_{10/9}(2)})$
where $\Theta(n^{\log_{10}(2)})$ represents the leaves on the left and $\Theta(n^{\log_{10/9}(2)})$ represents the leaves on the right.
Or is there a way to simplify these terms to $\Theta(n)$?
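For intuition, the leaf count can be checked empirically with a quick Python sketch (my own helper, treating any subproblem of size at most 1 as a leaf). Since each split preserves the total size ($n/10 + 9n/10 = n$) and every leaf has size in $(0.1, 1]$, the number of leaves lies between $n$ and $10n$, which is consistent with $\Theta(n)$:

```python
def leaves(n):
    # count leaves of the recursion tree for T(n) = T(n/10) + T(9n/10)
    if n <= 1:
        return 1
    return leaves(n / 10.0) + leaves(9.0 * n / 10.0)

for n in (10**3, 10**4):
    c = leaves(n)
    print(n, c, c / n)   # the ratio c/n stays bounded between 1 and 10
```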
As of November, 2018, I have been working at Quansight. Quansight is a new startup founded by the same people who started Anaconda, which aims to connect companies and open source communities, and offers consulting, training, support and mentoring services. I work under the heading of Quansight Labs. Quansight Labs is a public-benefit division of Quansight. It provides a home for a "PyData Core Team" which consists of developers, community managers, designers, and documentation writers who build open-source technology and grow open-source communities around all aspects of the AI and Data Science workflow.
My work at Quansight is split between doing open source consulting for various companies, and working on SymPy. SymPy, for those who do not know, is a symbolic mathematics library written in pure Python. I am the lead maintainer of SymPy.
In this post, I will detail some of the open source work that I have done recently, both as part of my open source consulting, and as part of my work on SymPy for Quansight Labs.
Bounds Checking in Numba
As part of work on a client project, I have been working on contributing code to the numba project. Numba is a just-in-time compiler for Python. It lets you write native Python code and, with the use of a simple @jit decorator, the code will be automatically sped up using LLVM. This can result in code that is up to 1000x faster in some cases:
In [1]: import numba

In [2]: import numpy

In [3]: def test(x):
   ...:     A = 0
   ...:     for i in range(len(x)):
   ...:         A += i*x[i]
   ...:     return A
   ...:

In [4]: @numba.njit
   ...: def test_jit(x):
   ...:     A = 0
   ...:     for i in range(len(x)):
   ...:         A += i*x[i]
   ...:     return A
   ...:

In [5]: x = numpy.arange(1000)

In [6]: %timeit test(x)
249 µs ± 5.77 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In [7]: %timeit test_jit(x)
336 ns ± 0.638 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

In [8]: 249/.336
Out[8]: 741.0714285714286
Numba only works for a subset of Python code, and primarily targets code that uses NumPy arrays.
Numba, with the help of LLVM, achieves this level of performance through many optimizations. One thing that it does to improve performance is to remove all bounds checking from array indexing. This means that if an array index is out of bounds, instead of receiving an IndexError, you will get garbage, or possibly a segmentation fault.
>>> import numpy as np
>>> from numba import njit
>>> def outtabounds(x):
...     A = 0
...     for i in range(1000):
...         A += x[i]
...     return A
>>> x = np.arange(100)
>>> outtabounds(x) # pure Python/NumPy behavior
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 4, in outtabounds
IndexError: index 100 is out of bounds for axis 0 with size 100
>>> njit(outtabounds)(x) # the default numba behavior
-8557904790533229732
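To make the failure mode concrete, here is a pure-Python sketch of what the omitted bounds check amounts to (`checked_get` is a hypothetical helper of mine, not numba's actual implementation): validate the index before reading, so an out-of-range access raises instead of returning garbage.

```python
import numpy as np

def checked_get(arr, i):
    # emulate the bounds check that numba omits by default
    n = arr.shape[0]
    if not -n <= i < n:
        raise IndexError(f"index {i} is out of bounds for axis 0 with size {n}")
    return arr[i]

x = np.arange(100)
print(checked_get(x, 99))   # 99: in range, returned normally
```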
In numba pull request #4432, I am working on adding a flag to @njit that will enable bounds checks for array indexing. This will remain disabled by default for performance purposes, but you will be able to enable it by passing boundscheck=True to @njit, or by setting the NUMBA_BOUNDSCHECK=1 environment variable. This will make it easier to detect out of bounds issues like the one above. It will work like
>>> @njit(boundscheck=True)
... def outtabounds(x):
...     A = 0
...     for i in range(1000):
...         A += x[i]
...     return A
>>> x = np.arange(100)
>>> outtabounds(x) # numba behavior in my pull request #4432
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IndexError: index is out of bounds
The pull request is still in progress, and many things, such as the quality of the error message reporting, will need to be improved. Once it is merged, this should make debugging issues easier for people who write numba code.
removestar
removestar is a new tool I wrote to automatically replace
import * in Python modules with explicit imports.
For those who don't know, Python's
import statement supports so-called"wildcard" or "star" imports, like
from sympy import *
This will import every public name from the
sympy module into the current namespace. This is often useful because it saves on typing every name that is used in the import line. This is especially useful when working interactively, where you just want to import every name and minimize typing.
However, doing
from module import * is generally frowned upon in Python. It isconsidered acceptable when working interactively at a
python prompt, or in
__init__.py files (removestar skips
__init__.py files by default).
Some reasons why
import * is bad:
It hides which names are actually imported. It is difficult both for human readers and static analyzers such as pyflakes to tell where a given name comes from when
import * is used. For example, pyflakes cannot detect unused names (for instance, from typos) in the presence of
import *.
If there are multiple
import * statements, it may not be clear which names come from which module. In some cases, both modules may have a given name, but only the second import will end up being used. This can break people's intuition that the order of imports in a Python file generally does not matter.
import * often imports more names than you would expect. Unless the module you import defines
__all__ or carefully
dels unused names at the module level,
import * will import every public (doesn't start with an underscore) name defined in the module file. This can often include things like standard library imports or loop variables defined at the top-level of the file. For imports from modules (from
__init__.py),
from module import * will include every submodule defined in that module. Using
__all__ in modules and
__init__.py files is also good practice, as these things are also often confusing even for interactive use where
import * is acceptable.
In Python 3,
import * is syntactically not allowed inside of a function definition.
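As a small illustration of the point about __all__ above, the set of names a star import binds can be simulated with a synthetic module (the module source here is made up for the example):

```python
import types

# A synthetic module to illustrate what `from mod import *` binds.
src = (
    "import os\n"                 # incidental import becomes a public name
    "def useful():\n"
    "    return 42\n"
    "def _private():\n"
    "    return None\n"
    "leaked = 'oops'\n"
)

mod = types.ModuleType("mod")
exec(src, mod.__dict__)

# With no __all__, a star import binds every name not starting with '_':
no_all = sorted(n for n in vars(mod) if not n.startswith("_"))
print(no_all)        # ['leaked', 'os', 'useful']

# Defining __all__ restricts the star import to exactly the listed names:
mod.__all__ = ["useful"]
print(list(mod.__all__))   # ['useful']
```

Note how the incidental import of os and the stray top-level variable leaked both become part of the star-import surface unless __all__ is defined.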
Here are some official Python references stating not to use
import * in files:
In general, don’t use
from modulename import *. Doing so clutters the importer’s namespace, and makes it much harder for linters to detect undefined names.
PEP 8 (the official Python style guide):
Wildcard imports (
from <module> import *) should be avoided, as they make it unclear which names are present in the namespace, confusing both readers and many automated tools.
Unfortunately, if you come across a file in the wild that uses
import *, it can be hard to fix it, because you need to find every name in the file that is imported from the * and manually add an import for it. Removestar makes this easy by finding which names come from * imports and replacing the import lines in the file automatically.
As an example, suppose you have a module
mymod like
mymod/
| __init__.py
| a.py
| b.py
with
# mymod/a.py
from .b import *

def func(x):
    return x + y
and
# mymod/b.py
x = 1
y = 2
Then
removestar works like:
$ removestar -i mymod/

$ cat mymod/a.py
# mymod/a.py
from .b import y

def func(x):
    return x + y
The
-i flag causes it to edit
a.py in-place. Without it, it would just print a diff to the terminal.
For implicit star imports and explicit star imports from the same module,
removestar works statically, making use of pyflakes. This means none of the code is actually executed. For external imports, it is not possible to work statically, as external imports may include C extension modules, so in that case, it imports the names dynamically.
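The static idea can be sketched in a few lines with the standard ast module (a toy illustration only; removestar itself delegates name resolution to pyflakes, and the module sources here are illustrative):

```python
import ast

# Toy sketch: find names used in a module that are provided by the
# star-imported module, so `from b import *` can be rewritten explicitly.

b_src = "x = 1\ny = 2\n"
a_src = "from b import *\n\ndef func(x):\n    return x + y\n"

def public_names(src):
    """Top-level public names defined by a module's source."""
    names = set()
    for node in ast.parse(src).body:
        if isinstance(node, ast.Assign):
            for t in node.targets:
                if isinstance(t, ast.Name) and not t.id.startswith("_"):
                    names.add(t.id)
        elif isinstance(node, (ast.FunctionDef, ast.ClassDef)):
            if not node.name.startswith("_"):
                names.add(node.name)
    return names

def used_names(src):
    """All names read (Load context) anywhere in a module's source."""
    return {n.id for n in ast.walk(ast.parse(src))
            if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)}

needed = used_names(a_src) & public_names(b_src)
print(sorted(needed))   # ['x', 'y'] — a real tool (via pyflakes) would also
                        # notice that 'x' is shadowed by func's parameter
                        # and keep only 'y' in the rewritten import
```

This naive version over-reports shadowed names, which is exactly the kind of scope analysis pyflakes handles properly.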
removestar can be installed with pip or conda:
pip install removestar
or if you use conda
conda install -c conda-forge removestar

sphinx-math-dollar
In SymPy, we make heavy use of LaTeX math in our documentation. For example, in our special functions documentation, most special functions are defined using a LaTeX formula, like
However, the source for this math in the docstring of the function uses RST syntax:
class besselj(BesselBase):
    """
    Bessel function of the first kind.

    The Bessel `J` function of order `\nu` is defined to be the function
    satisfying Bessel's differential equation

    .. math ::
        z^2 \frac{\mathrm{d}^2 w}{\mathrm{d}z^2}
        + z \frac{\mathrm{d}w}{\mathrm{d}z} + (z^2 - \nu^2) w = 0,

    with Laurent expansion

    .. math ::
        J_\nu(z) = z^\nu \left(\frac{1}{\Gamma(\nu + 1) 2^\nu} + O(z^2) \right),

    if :math:`\nu` is not a negative integer. If :math:`\nu=-n \in
    \mathbb{Z}_{<0}` *is* a negative integer, then the definition is

    .. math ::
        J_{-n}(z) = (-1)^n J_n(z).
    """
Furthermore, in SymPy's documentation we have configured it so that text between `single backticks` is rendered as math. This was originally done for convenience, as the alternative way is to write :math:`\nu` every time you want to use inline math. But this has led to many people being confused, as they are used to Markdown, where `single backticks` produce code.
A better way to write this would be if we could delimit math with dollar signs, like $\nu$. This is how things are done in LaTeX documents, as well as in things like the Jupyter notebook.
With the new sphinx-math-dollar Sphinx extension, this is now possible. Writing $\nu$ produces $\nu$, and the above docstring can now be written as
class besselj(BesselBase):
    """
    Bessel function of the first kind.

    The Bessel $J$ function of order $\nu$ is defined to be the function
    satisfying Bessel's differential equation

    .. math ::
        z^2 \frac{\mathrm{d}^2 w}{\mathrm{d}z^2}
        + z \frac{\mathrm{d}w}{\mathrm{d}z} + (z^2 - \nu^2) w = 0,

    with Laurent expansion

    .. math ::
        J_\nu(z) = z^\nu \left(\frac{1}{\Gamma(\nu + 1) 2^\nu} + O(z^2) \right),

    if $\nu$ is not a negative integer. If $\nu=-n \in \mathbb{Z}_{<0}$ *is* a
    negative integer, then the definition is

    .. math ::
        J_{-n}(z) = (-1)^n J_n(z).
    """
We also plan to add support for $$double dollars$$ for display math so that .. math:: is no longer needed either.
For end users, the documentation on docs.sympy.org will continue to render exactly the same, but for developers, it is much easier to read and write.
This extension can be easily used in any Sphinx project. Simply install it with pip or conda:
pip install sphinx-math-dollar
or
conda install -c conda-forge sphinx-math-dollar
Then enable it in your
conf.py:
extensions = ['sphinx_math_dollar', 'sphinx.ext.mathjax']

Google Season of Docs
The above work on sphinx-math-dollar is part of work I have been doing to improve the tooling around SymPy's documentation. This has been to assist our technical writer Lauren Glattly, who is working with SymPy for the next three months as part of the new Google Season of Docs program. Lauren's project is to improve the consistency of our docstrings in SymPy. She has already identified many key ways our docstring documentation can be improved, and is currently working on a style guide for writing docstrings. Some of the issues that Lauren has identified require improved tooling around the way the HTML documentation is built to fix. So some other SymPy developers and I have been working on improving this, so that she can focus on the technical writing aspects of our documentation.
Lauren has created a draft style guide for documentation at https://github.com/sympy/sympy/wiki/SymPy-Documentation-Style-Guide. Please take a moment to look at it and if you have any feedback on it, email me or write to the SymPy mailing list.
Sometimes, the motion of a robot is
constrained by its task or inherent kinematic properties. For example, a robot might want to keep a cup of water level, remain in contact with a surface to write on it, or follow some curve in space that corresponds to a task. In each of these cases, the constraint on the robot's motion is defined by some function, \(f(q) : \mathcal{Q} \rightarrow \mathbb{R}^n\), which maps the robot's state space \(\mathcal{Q}\) onto a real vector value \(\mathbb{R}^n\). The constraint is considered satisfied whenever \(f(q) = \mathbf{0}\). For example, to keep the cup level, the angular distance of the cup's axis to the upright z-axis could be the constraint function. In addition, a constraint satisfying motion satisfies \(f(q) = \mathbf{0}\) at every point along the path. Constrained motion planning addresses the question of how to find a constraint satisfying motion, while still avoiding obstacles or achieving other objectives.
In general, motion planners are not aware of constraints, and will generate paths that do not satisfy the constraint. This is because the
submanifold of constraint satisfying configurations \(X = \{ q \in \mathcal{Q} \mid f(q) = \mathbf{0} \}\) is lower-dimensional compared to the state space of the robot, and thus near impossible to sample from. To plan constraint satisfying motion, OMPL provides a means to augment the state space of a robot with knowledge of the constraint, representing \(X\) as a state space. Any motion plan generated using this augmented state space will satisfy the constraint, as the primitive operations used by motions planners (e.g.,
ompl::base::StateSpace::interpolate) automatically generate constraint satisfying states. The constrained planning framework enables any sampling-based planner (including asymptotically optimal planners) to plan while respecting a constraint function.
You can represent a constraint function using
ompl::base::Constraint, where you must implement the function \(f\), and optionally the analytic Jacobian of the constraint function. If no analytic Jacobian is provided, a numerical finite central difference routine is used to approximate the Jacobian. However, this is very computationally intensive, and providing an analytic derivative is preferred. We provide a simple script
ConstraintGeneration.py that uses the SymPy Python library for symbolic differentiation of constraints, and can automatically generate constraint code that can be used in your programs. There is also
ompl::base::ConstraintIntersection, which allows for composition of multiple constraints together that must all be satisfied.
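To make the constraint/Jacobian pairing concrete, here is a pure-NumPy sketch (not the OMPL C++ API): a unit-sphere constraint f(q) = ||q|| - 1 with its analytic Jacobian, compared against the kind of central finite-difference approximation that is used when no analytic Jacobian is supplied:

```python
import numpy as np

def f(q):
    # Constraint function: satisfied (zero) when q lies on the unit sphere.
    return np.array([np.linalg.norm(q) - 1.0])

def jacobian(q):
    # Analytic 1x3 Jacobian of f: the unit vector along q.
    return (q / np.linalg.norm(q)).reshape(1, -1)

def numeric_jacobian(q, h=1e-6):
    # Central finite differences, column by column.
    J = np.zeros((1, q.size))
    for i in range(q.size):
        e = np.zeros_like(q)
        e[i] = h
        J[:, i] = (f(q + e) - f(q - e)) / (2 * h)
    return J

q = np.array([0.6, 0.0, 0.8])   # a point on the sphere, so f(q) ≈ [0.]
print(f(q))
# analytic and numeric Jacobians agree to roundoff/truncation error:
print(np.max(np.abs(jacobian(q) - numeric_jacobian(q))))
```

The finite-difference version needs 2n constraint evaluations per Jacobian, which is the computational cost the text warns about.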
There are currently three provided representations of constrained state spaces, each of which inherits from
ompl::base::ConstrainedStateSpace. Each of these methods implements a different way to sample and traverse the underlying submanifold. The three augmented state spaces are:
ompl::base::ProjectedStateSpace, a constrained state space that uses a projection operator to find constraint satisfying motion.
ompl::base::AtlasStateSpace, a constrained state space that, while planning, builds a piecewise linear approximation of \(X\) using tangent spaces. This approximation is called an
atlas, and is used to guide planning.
ompl::base::TangentBundleStateSpace, a constrained state space similar to
ompl::base::AtlasStateSpace, but lazily evaluates the atlas.
Each of these state spaces has its own documentation that can be viewed on its page. The augmented state space approach taken by OMPL was presented in a paper at ISRR 2017. More information about constrained motion planning is presented in this review paper.
You can view an example of how to use the constrained planning framework in this tutorial!
Limitations
In order for the constrained planning framework to work, your underlying state space and constraint function must satisfy a few assumptions.
Contiguous Memory
As an implementation detail, the memory allocated in a state by the underlying state space must be a contiguous array of
double values. For example,
ompl::base::RealVectorStateSpace, or the state space implemented in the kinematic chain benchmark both allocate a contiguous array of
double values. This detail is required by the various constrained state spaces and constraint function in order to view them as
Eigen::VectorXds, using
Eigen::Map<Eigen::VectorXd>. Note that
ompl::base::ConstrainedStateSpace::StateType derives from this class, for convenience.
However, this assumption prevents the
ompl::base::CompoundStateSpace from being used by the constrained planning framework, as state allocation does not guarantee contiguity.
Constraint Differentiability
In general, your constraint function should be a
continuous and differentiable function of the robot's state. Singularities in the constraint function can cause bad behavior by the underlying constraint satisfaction methods.
ompl::base::AtlasStateSpace and
ompl::base::TangentBundleStateSpace both will treat singularities as obstacles in the planning process.
Required Interpolation
If you want a path that satisfies constraints
and is potentially executable by a real system, you will want to interpolate whatever path (simplified or un-simplified) you find. This can be done by simply calling
ompl::geometric::PathGeometric::interpolate(). The interpolated path will be constraint satisfying, as the interpolate routine uses the primitives afforded by the constrained state space. However, there can potentially be issues at this step, as interpolation can fail.
Interpolation Failures
Currently, each of the constrained state spaces implements interpolation on the constraint submanifold via computing a
discrete geodesic between the two query points. The discrete geodesic is a sequence of close (to approximate continuity), constraint satisfying states between two query points. The distance between each point in the discrete geodesic is tuned by the "delta" parameter in
ompl::base::ConstrainedStateSpace::setDelta(). How this discrete geodesic is computed is key to how constrained state space operates, as it is used ubiquitously throughout the code (e.g., interpolation, collision checking, motion validation, and others).
Due to the nature of how these routines are implemented, it is possible for computation of the discrete geodesic to
fail, thus causing potentially unexpected results from whatever overlying routine requested a discrete geodesic. These failures can be the result of singularities in the constraint, high curvature of the submanifold, and various other issues. However, interpolation in "regular" state spaces does not generally fail as they are analytic, such as linear interpolation in
ompl::base::RealVectorStateSpace; hence,
ompl::base::StateSpace::interpolate() is assumed to always be successful. As a result, some unexpected behavior can be seen if interpolation fails during planning with a constrained state space. Increasing or decreasing the "delta" parameter in
ompl::base::ConstrainedStateSpace, increasing or decreasing the constraint satisfaction tolerance, and other hyperparameter tuning can fix these problems.
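The project-and-step idea behind a discrete geodesic can be illustrated with a toy example on the unit sphere, where projection has a closed form (this is a simplified sketch, not OMPL's actual traversal routine):

```python
import numpy as np

def project(q):
    # Closed-form projection onto the unit sphere (the constraint manifold).
    return q / np.linalg.norm(q)

def discrete_geodesic(a, b, delta=0.05, max_steps=1000):
    """Step from a toward b in increments of delta, projecting each
    intermediate state back onto the manifold."""
    path = [project(a)]
    for _ in range(max_steps):
        cur = path[-1]
        if np.linalg.norm(b - cur) <= delta:
            path.append(project(b))
            return path
        step = cur + delta * (b - cur) / np.linalg.norm(b - cur)
        nxt = project(step)
        if np.linalg.norm(nxt - cur) < 1e-12:
            break   # no progress: traversal failed (cf. the failures above)
        path.append(nxt)
    return path

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
path = discrete_geodesic(a, b)
# every waypoint satisfies the constraint to machine precision:
print(max(abs(np.linalg.norm(p) - 1.0) for p in path))
```

Shrinking delta gives a finer approximation of the continuous geodesic at the cost of more projection calls, which is the trade-off setDelta() controls.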
Hyperparameter Sensitivity
The constrained state spaces, in general, are sensitive to the tuning of their various hyperparameters. Some reasonable defaults are set at start, but many constrained planning problems will have different characteristics of the underlying submanifold, and as such different parameters may be needed. Some basic rules-of-thumb are provided below:
For high-dimensional ambient state spaces, many parameters can be increased in magnitude. A brief non-exhaustive list follows:

Generally, the step size for manifold traversal is related to the relative curvature of the underlying constraint submanifold. Less curvy submanifolds can permit larger step sizes (i.e., if the constraint defines a hyperplane, you can use a large step size). Play around with this value if speed is a concern, as the larger the step size is, the less time is spent traversing the manifold.

For the atlas- and tangent bundle-based spaces, planners that rely on uniform sampling of the space may require a high exploration parameter so the constraint submanifold becomes covered more quickly (e.g., BIT*).

Additionally, planners that rely on ompl::base::StateSampler::sampleNear() may also require a high exploration or \(\rho\) parameter in order to expand effectively.
And many others! In general, the projection-based space is less sensitive to poor parameter tuning than the atlas- or tangent bundle-based spaces, and as such is a good starting point to validate whether a constrained planning problem is feasible.

Additional Notes

Constraint Projection versus Projection Evaluator
Within
ompl::base::Constraint, there is a projection function
ompl::base::Constraint::project() which maps a potentially constraint unsatisfying configuration to the constraint manifold. By default,
ompl::base::Constraint::project() implements a Newton's method which performs well in most circumstances. Note that it is possible to override this method with your own projection routine, e.g., inverse kinematics.
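The Newton-style projection can be sketched in NumPy (a hypothetical simplification of what Constraint::project() does, reusing the unit-sphere constraint for illustration): solve J(q) dq = f(q) in the least-squares sense and step until the constraint is met.

```python
import numpy as np

def project(q, f, jacobian, tol=1e-9, max_iter=50):
    """Newton projection onto the manifold f(q) = 0."""
    q = q.astype(float).copy()
    for _ in range(max_iter):
        fq = f(q)
        if np.linalg.norm(fq) < tol:
            break
        # minimum-norm Newton step: J dq = f(q)
        dq = np.linalg.lstsq(jacobian(q), fq, rcond=None)[0]
        q -= dq
    return q

# Unit-sphere constraint f(q) = ||q|| - 1 and its Jacobian.
f = lambda q: np.array([np.linalg.norm(q) - 1.0])
J = lambda q: (q / np.linalg.norm(q)).reshape(1, -1)

q0 = np.array([2.0, -1.0, 0.5])   # an off-manifold start state
q = project(q0, f, J)
print(np.linalg.norm(q))          # ≈ 1.0: projected onto the sphere
```

Overriding this with a problem-specific routine (e.g., inverse kinematics for a manipulator) amounts to replacing the Newton loop with a direct solve.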
You might notice that there also exists
ompl::base::ProjectionEvaluator::project(). This method is used by a few planners to estimate the coverage of the free space (see this tutorial for more information), as it is a "projection" into a lower-dimensional space. Although similar sounding, this concept is orthogonal to the concept of projection as used in
ompl::base::Constraint, which projects states into the lower-dimensional constraint manifold. In fact, one can use planners that use a
ompl::base::ProjectionEvaluator (e.g.,
ompl::geometric::KPIECE1,
ompl::geometric::ProjEST) in tandem with the constrained planning framework.
By default, the constrained state spaces will use
ompl::base::WrapperProjectionEvaluator to access the underlying state space's default projection evaluator. However, if you know anything about the structure of your problem, you should implement your own projection evaluator for performance. Each of the constrained planning demos has an example of a projection evaluator that can be used with constrained planning (e.g.,
SphereProjection, in ConstrainedPlanningSphere).
Want to learn more?

Tutorials

Check out the tutorial, which shows how to set up a constrained planning problem in R3.

Demos
The examples (plus more) from the paper that presented the constrained planning framework are available as demo programs. All of the demos are in the
ompl/demos/constraint folder. Each of these demos supports planning for an individual planner as well as benchmarking, and complete configurability of the hyperparameters of the constrained space. Each of these demos comes with a way to visualize the results of planning. The sphere, torus, and implicit kinematic chain demos each come with a Blender file (
.blend) that visualize the results of planning in 3-D. These are all in
ompl/demos/constraint/visualization.
ConstrainedPlanningSphere [Python version]. Plan for a point in R3 constrained to the surface of a sphere, with narrow obstacles it must traverse. A similar scenario is explained in the tutorial. The Blender file ConstrainedPlanningSphere.blend contains a script that visualizes the motion graph, original and simplified paths, and the atlas generated while planning if ompl::base::AtlasStateSpace or ompl::base::TangentBundleStateSpace were used. Simply change the path in the script window to the directory where the output of the demo was placed and hit "Run Script". The Python module NetworkX is required to visualize the graph; make sure your Blender's Python path is correctly set so it can find the module. Note that you must run the demo with -o to dump output.
ConstrainedPlanningTorus [Python version]. Plan for a point in R3 constrained to the surface of a torus. A "maze" image is loaded (some examples are provided in ompl/demos/constraint/mazes) to use as the set of obstacles to traverse. The start and goal points are red and green pixels in the image, respectively. The Blender file ConstrainedPlanningTorus.blend contains a script to visualize the motion graph, simplified path, the atlas generated by planning, and the maze projected upon the torus. Similar to the sphere example, change the path in the script window and "Run Script" to see results.
ConstrainedPlanningImplicitChain [Python version]. Plan for a set of balls, each in R3, constrained to be a unit distance apart. This imposes spherical kinematics on the balls, implicitly defining a kinematic chain. The Blender file ConstrainedPlanningImplicitChain.blend contains a script to show an animation of the simplified path. Similar to the sphere example, change the path in the script window and "Run Script" to see results.
ConstrainedPlanningImplicitParallel. Plan for a parallel robot made of many implicit chains. The Blender file ConstrainedPlanningImplicitParallel.blend contains a script to show an animation of the simplified path. Similar to the sphere example, change the path in the script window and "Run Script" to see results.
ConstrainedPlanningKinematicChain. Similar to the kinematic chain benchmark demo, but with a constraint that only allows the tip of the manipulator to move along a line.
What's the state-of-the-art in the approximation of highly oscillatory integrals in both one dimension and higher dimensions to arbitrary precision?
I'm not entirely familiar with what's now done for cubatures (multidimensional integration), so I'll restrict myself to quadrature formulae.
There are a number of effective methods for the quadrature of oscillatory integrals. There are methods suited for finite oscillatory integrals, and there are methods for infinite oscillatory integrals.
For infinite oscillatory integrals, two of the more effective methods used are Longman's method and the modified double exponential quadrature due to Ooura and Mori. (But see also these two papers by Arieh Iserles.)
Longman's method relies on converting the oscillatory integral into an alternating series by splitting the integration interval, and then summing the alternating series with a sequence transformation method. For instance, when integrating an oscillatory integral of the form
$$\int_0^\infty f(t)\sin t\,\mathrm{d}t$$
one converts this into the alternating sum
$$\sum_{k=0}^\infty \int_{k\pi}^{(k+1)\pi} f(t)\sin t\,\mathrm{d}t$$
The terms of this alternating sum are computed with some quadrature method like Romberg's scheme or Gaussian quadrature. Longman's original method used the Euler transformation, but modern implementations replace Euler with more powerful convergence acceleration methods like the Shanks transformation or the Levin transformation.
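As a concrete sketch of Longman's approach (the integrand, the Simpson rule for the lobe integrals, and the iterated Shanks transformation are all illustrative choices), consider $\int_0^\infty \frac{\sin t}{t}\,\mathrm{d}t = \pi/2$:

```python
import numpy as np

def simpson(f, a, b, n=128):
    # Composite Simpson's rule with n (even) subintervals.
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3 * (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum())

def shanks(s):
    # One pass of the Shanks transformation applied to partial sums s.
    s = np.asarray(s, dtype=float)
    return (s[2:] * s[:-2] - s[1:-1] ** 2) / (s[2:] + s[:-2] - 2 * s[1:-1])

# Longman's method: split the range at the zeros of sin(t), producing an
# alternating series of lobe integrals.
f = lambda t: np.sinc(t / np.pi)   # sin(t)/t, well-defined at t = 0
terms = [simpson(f, k * np.pi, (k + 1) * np.pi) for k in range(12)]
partial = np.cumsum(terms)

acc = partial
for _ in range(4):                 # iterate Shanks to accelerate convergence
    acc = shanks(acc)

print(acc[-1])                     # close to pi/2 ≈ 1.5707963
```

The raw partial sums after 12 lobes are still off by a few percent; the iterated Shanks transformation recovers many more digits from the same terms, which is the whole point of the method.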
The double exponential quadrature method, on the other hand, makes a clever change of variables, and then uses the trapezoidal rule to numerically evaluate the transformed integral.
For finite oscillatory integrals, Piessens (one of the contributors to QUADPACK) and Branders, in two papers, detail a modification of Clenshaw-Curtis quadrature (that is, constructing a Chebyshev polynomial expansion of the nonoscillatory part of the integrand). Levin's method, on the other hand, uses a collocation method for the quadrature. (I am told there is now a more practical version of the old standby, Filon's method, but I've no experience with it.)
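To illustrate the Filon idea, here is a hedged piecewise-linear sketch (a simplification; classical Filon interpolates quadratically and needs stable moment recursions) for $\int_a^b f(x) e^{i\omega x}\,\mathrm{d}x$: only the smooth factor $f$ is interpolated, and the interpolant times the oscillator is integrated exactly, so accuracy does not collapse as $\omega$ grows. The example integrand is arbitrary.

```python
import numpy as np

def filon_linear(f, a, b, omega, n=200):
    x = np.linspace(a, b, n + 1)
    y = f(x)
    iw = 1j * omega
    E = np.exp(iw * x)
    # Exact moments over each panel [x_k, x_{k+1}]:
    #   m0 = ∫ e^{iωx} dx,   m1 = ∫ x e^{iωx} dx  (by parts)
    m0 = (E[1:] - E[:-1]) / iw
    m1 = (x[1:] * E[1:] - x[:-1] * E[:-1]) / iw - m0 / iw
    # Linear interpolant f ≈ c0 + c1 x on each panel:
    c1 = (y[1:] - y[:-1]) / (x[1:] - x[:-1])
    c0 = y[:-1] - c1 * x[:-1]
    return np.sum(c0 * m0 + c1 * m1)

f = lambda x: 1.0 / (1.0 + x**2)
I = filon_linear(f, 0.0, 1.0, omega=50.0)
print(I)   # complex value of the oscillatory integral
```

With 200 panels this matches a brute-force fine-grid quadrature to several digits, even though an ordinary 200-point rule would badly under-resolve the oscillations at $\omega = 50$.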
These are the methods I remember offhand; I'm sure I've forgotten other good methods for oscillatory integrals. I will edit this answer later if I remember them.
Besides "multidimensional vs. single-dimensional" and "finite range vs. infinite range", an important categorization for methods is "one specific type of oscillator (usually Fourier-type: $\sin(t)$, $\exp(it)$, etc, or Bessel-type: $J_0(t)$, etc.) vs. more general oscillator ($\exp(i g(t))$ or even more general oscillators $w(t)$)".
At first, oscillatory integration methods focused on specific oscillators. As
J. M. said, prominent ones include Filon's method and the Clenshaw-Curtis method (these two are closely related) for finite range integrals, and series extrapolation based methods and the double-exponential method of Ooura and Mori for infinite range integrals.
More recently, some general methods have been found. Two examples:
Levin's collocation-based method for any $\exp(i g(t))$ (Levin 1982), or later for any oscillator $w(t)$ defined by a linear ODE (Levin 1996 as linked by
J. M.). Mathematica uses Levin's method for integrals not covered by the more specialized rules.
Huybrechs and Vandewalle's method based on analytic continuation along a complex path where the integrand is non-oscillatory (Huybrechs and Vandewalle 2006).
No distinction is necessary between methods for finite and infinite range integrals for the more general methods, since a compactifying transformation can be applied to an infinite range integral, leading to a finite range oscillatory integral that can still be addressed with the general method, albeit with a different oscillator.
Levin's method can be extended to multiple dimensions by iterating over the dimensions and other ways, but as far as I know all the methods described in literature so far have sample points that are an outer product of the one-dimensional sample points or some other thing that grows exponentially with dimension, so it rapidly gets out of hand. I'm not aware of more efficient methods for high dimensions; if any could be found that sample on a sparse grid in high dimensions it would be useful in applications.
Creating automatic routines for the more general methods may be difficult in most programming languages (C, Python, Fortran, etc) in which you would normally expect to program your integrand as a function/routine and pass it to the integrator routine, because the more general methods need to know the structure of the integrand (which parts look oscillatory, what type of oscillator, etc) and can't treat it as a "black box".
(Figure: an interferogram)

In physics, interference is a phenomenon in which two waves superimpose to form a resultant wave of greater or lower amplitude. Interference usually refers to the interaction of waves that are correlated or coherent with each other, either because they come from the same source or because they have the same or nearly the same frequency. Interference effects can be observed with all types of waves, for example, light, radio, acoustic, and surface water waves.

Mechanism

The principle of superposition of waves states that when two or more propagating waves of like type are incident on the same point, the total displacement at that point is equal to the vector sum of the displacements of the individual waves. If a crest of a wave meets a crest of another wave of the same frequency at the same point, then the magnitude of the displacement is the sum of the individual magnitudes – this is constructive interference. If a crest of one wave meets a trough of another wave, then the magnitude of the displacement is equal to the difference in the individual magnitudes – this is known as destructive interference.
Constructive interference occurs when the phase difference between the waves is a multiple of 2π, whereas destructive interference occurs when the difference is an
odd multiple of π. If the difference between the phases is intermediate between these two extremes, then the magnitude of the displacement of the summed waves lies between the minimum and maximum values.
Consider, for example, what happens when two identical stones are dropped into a still pool of water at different locations. Each stone generates a circular wave propagating outwards from the point where the stone was dropped. When the two waves overlap, the net displacement at a particular point is the sum of the displacements of the individual waves. At some points, these will be in phase, and will produce a maximum displacement. In other places, the waves will be in anti-phase, and there will be no net displacement at these points. Thus, parts of the surface will be stationary—these are seen in the figure above and to the right as stationary blue-green lines radiating from the centre.
Between two plane waves
A simple form of interference pattern is obtained if two plane waves of the same frequency intersect at an angle. Interference is essentially an energy redistribution process: the energy which is lost at the destructive interference is regained at the constructive interference. One wave is travelling horizontally, and the other is travelling downwards at an angle θ to the first wave. Assuming that the two waves are in phase at the point B, then the relative phase changes along the x-axis. The phase difference at the point A is given by

\(\Delta\varphi = \frac{2\pi d}{\lambda} = \frac{2\pi x \sin\theta}{\lambda}\)

It can be seen that the two waves are in phase when

\(\frac{x \sin\theta}{\lambda} = 0, \pm 1, \pm 2, \ldots,\)

and are half a cycle out of phase when

\(\frac{x \sin\theta}{\lambda} = \pm\frac{1}{2}, \pm\frac{3}{2}, \ldots\)

Constructive interference occurs when the waves are in phase, and destructive interference when they are half a cycle out of phase. Thus, an interference fringe pattern is produced, where the separation of the maxima is

\(d_f = \frac{\lambda}{\sin\theta}\)

and \(d_f\) is known as the fringe spacing. The fringe spacing increases with increase in wavelength, and with decreasing angle θ.
The fringes are observed wherever the two waves overlap and the fringe spacing is uniform throughout.
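The fringe-spacing relation can be checked numerically by building the intensity pattern from the phase difference and measuring the distance between its maxima (the wavelength and crossing angle below are arbitrary example values):

```python
import numpy as np

lam = 633e-9                    # example wavelength (red HeNe laser light)
theta = np.deg2rad(1.0)         # example angle between the two plane waves
x = np.linspace(0, 200e-6, 400001)

# phase difference between the two waves along the x-axis
dphi = 2 * np.pi * x * np.sin(theta) / lam
intensity = np.abs(1 + np.exp(1j * dphi)) ** 2

# locate interior intensity maxima and measure their mean spacing
peaks = np.flatnonzero((intensity[1:-1] > intensity[:-2]) &
                       (intensity[1:-1] >= intensity[2:])) + 1
spacing = np.mean(np.diff(x[peaks]))
print(spacing, lam / np.sin(theta))   # both about 3.63e-5 m
```

The measured spacing agrees with \(\lambda/\sin\theta\) to within the grid resolution, and repeating the experiment with a larger wavelength or smaller angle widens the fringes as stated above.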
Between two spherical waves
A point source produces a spherical wave. If the light from two point sources overlaps, the interference pattern maps out the way in which the phase difference between the two waves varies in space. This depends on the wavelength and on the separation of the point sources. The figure to the right shows interference between two spherical waves. The wavelength increases from top to bottom, and the distance between the sources increases from left to right.
When the plane of observation is far enough away, the fringe pattern will be a series of almost straight lines, since the waves will then be almost planar.
Multiple beams
Interference occurs when several waves are added together provided that the phase differences between them remain constant over the observation time.
It is sometimes desirable for several waves of the same frequency and amplitude to sum to zero (that is, interfere destructively, cancel). This is the principle behind, for example, 3-phase power and the diffraction grating. In both of these cases, the result is achieved by uniform spacing of the phases.
It is easy to see that a set of waves will cancel if they have the same amplitude and their phases are spaced equally in angle. Using phasors, each wave can be represented as \(A e^{i\varphi_n}\) for waves from \(n = 0\) to \(n = N - 1\), where

\(\varphi_n - \varphi_{n-1} = \frac{2\pi}{N}.\)

To show that

\(\sum_{n=0}^{N-1} A e^{i\varphi_n} = 0\)

one merely assumes the converse, then multiplies both sides by \(e^{i\frac{2\pi}{N}}.\)
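The cancellation is easy to confirm numerically: N equal-amplitude phasors with phases spaced equally around the circle sum to zero (N = 7 is an arbitrary choice).

```python
import numpy as np

N = 7
phases = 2 * np.pi * np.arange(N) / N   # equally spaced phase angles
total = np.sum(np.exp(1j * phases))     # sum of the N unit phasors
print(abs(total))                       # essentially 0 (machine precision)
```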
The Fabry–Pérot interferometer uses interference between multiple reflections.
A diffraction grating can be considered to be a multiple-beam interferometer, since the peaks which it produces are generated by interference between the light transmitted by each of the elements in the grating. Feynman suggests that when there are only a few sources, say two, we call it "interference", as in Young's double slit experiment, but with a large number of sources, the process is labelled "diffraction".[1]

Optical interference

Because the frequency of light waves (~10^14 Hz) is too high to be detected by currently available detectors, it is possible to observe only the intensity of an optical interference pattern. The intensity of the light at a given point is proportional to the square of the average amplitude of the wave. This can be expressed mathematically as follows. The displacement of the two waves at a point r is:

\(U_1(\mathbf r, t) = A(\mathbf r) e^{i[\varphi_1(\mathbf r) - \omega t]}\)

\(U_2(\mathbf r, t) = A(\mathbf r) e^{i[\varphi_2(\mathbf r) - \omega t]}\)

The displacement of the summed waves is

\(U(\mathbf r, t) = 2 A(\mathbf r) \cos\!\left(\frac{\varphi_1(\mathbf r) - \varphi_2(\mathbf r)}{2}\right) e^{i\left[\frac{\varphi_1(\mathbf r) + \varphi_2(\mathbf r)}{2} - \omega t\right]}\)

The intensity of the light at r is given by

\(I(\mathbf r) \propto 4 A^2(\mathbf r) \cos^2\!\left(\frac{\varphi_1(\mathbf r) - \varphi_2(\mathbf r)}{2}\right)\)

This can be expressed in terms of the intensities of the individual waves as

\(I(\mathbf r) = 2 I_0(\mathbf r)\left[1 + \cos\left(\varphi_1(\mathbf r) - \varphi_2(\mathbf r)\right)\right]\)
Thus, the interference pattern maps out the difference in phase between the two waves, with maxima occurring when the phase difference is a multiple of 2π. If the two beams are of equal intensity, the maxima are four times as bright as the individual beams, and the minima have zero intensity.
The two waves must have the same polarization to give rise to interference fringes since it is not possible for waves of different polarizations to cancel one another out or add together. Instead, when waves of different polarization are added together, they give rise to a wave of a different polarization state.
Light source requirements
The discussion above assumes that the waves which interfere with one another are monochromatic, i.e. have a single frequency—this requires that they are infinite in time. This is not, however, either practical or necessary. Two identical waves of finite duration whose frequency is fixed over that period will give rise to an interference pattern while they overlap. Two identical waves which consist of a narrow spectrum of frequencies and are of finite duration will give a series of fringe patterns of slightly differing spacings, and provided the spread of spacings is significantly less than the average fringe spacing, a fringe pattern will again be observed during the time when the two waves overlap.
Conventional light sources emit waves of differing frequencies and at different times from different points in the source. If the light is split into two waves and then re-combined, each individual light wave may generate an interference pattern with its other half, but the individual fringe patterns generated will have different phases and spacings, and normally no overall fringe pattern will be observable. However, single-element light sources, such as sodium- or mercury-vapor lamps have emission lines with quite narrow frequency spectra. When these are spatially and colour filtered, and then split into two waves, they can be superimposed to generate interference fringes.
[2] All interferometry prior to the invention of the laser was done using such sources and had a wide range of successful applications.
A laser beam generally approximates much more closely to a monochromatic source, and it is much more straightforward to generate interference fringes using a laser. The ease with which interference fringes can be observed with a laser beam can sometimes cause problems in that stray reflections may give spurious interference fringes which can result in errors.
Normally, a single laser beam is used in interferometry, though interference has been observed using two independent lasers whose frequencies were sufficiently matched to satisfy the phase requirements.
[3]
It is also possible to observe interference fringes using white light. A white light fringe pattern can be considered to be made up of a 'spectrum' of fringe patterns each of slightly different spacing. If all the fringe patterns are in phase in the centre, then the fringes will increase in size as the wavelength decreases and the summed intensity will show three to four fringes of varying colour. Young describes this very elegantly in his discussion of two-slit interference.
Optical arrangements
To generate interference fringes, light from the source has to be divided into two waves which have then to be re-combined. Traditionally, interferometers have been classified as either amplitude-division or wavefront-division systems.
In an amplitude-division system, a beam splitter is used to divide the light into two beams travelling in different directions, which are then superimposed to produce the interference pattern. The Michelson interferometer and the Mach-Zehnder interferometer are examples of amplitude-division systems.
Interference can also be seen in everyday life. For example, the colours seen in a soap bubble arise from interference of light reflecting off the front and back surfaces of the thin soap film. Depending on the thickness of the film, different colours interfere constructively and destructively.
Applications of optical interferometry
Interferometry has played an important role in the advancement of physics, and also has a wide range of applications in physical and engineering measurement.
Thomas Young's double slit interferometer in 1803 demonstrated interference fringes when two small holes were illuminated by light from another small hole which was illuminated by sunlight. Young was able to estimate the wavelength of different colours in the spectrum from the spacing of the fringes. The experiment played a major role in the general acceptance of the wave theory of light.
[4] In quantum mechanics, this experiment is considered to demonstrate the inseparability of the wave and particle natures of light and other quantum particles (wave–particle duality). Richard Feynman was fond of saying that all of quantum mechanics can be gleaned from carefully thinking through the implications of this single experiment. [5]
Interferometry has been used in defining and calibrating length standards. When the metre was defined as the distance between two marks on a platinum-iridium bar, Michelson and Benoît used interferometry to measure the wavelength of the red cadmium line in the new standard, and also showed that it could be used as a length standard. Sixty years later, in 1960, the metre in the new SI system was defined to be equal to 1,650,763.73 wavelengths of the orange-red emission line in the electromagnetic spectrum of the krypton-86 atom in a vacuum. This definition was replaced in 1983 by defining the metre as the distance travelled by light in vacuum during a specific time interval. Interferometry is still fundamental in establishing the calibration chain in length measurement.
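As a quick arithmetic check, the 1960 definition pins down the wavelength of the krypton-86 line as the reciprocal of the stated count:

```python
# 1960 SI metre: 1,650,763.73 vacuum wavelengths of the Kr-86
# orange-red emission line, so one wavelength is 1/1,650,763.73 m.
wavelength_m = 1.0 / 1_650_763.73
wavelength_nm = wavelength_m * 1e9

# ~605.78 nm, in the orange-red part of the visible spectrum
assert abs(wavelength_nm - 605.78) < 0.01
```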
Radio interferometry

In 1946, a technique called astronomical interferometry was developed. Astronomical radio interferometers usually consist either of arrays of parabolic dishes or two-dimensional arrays of omni-directional antennas. All of the telescopes in the array are widely separated and are usually connected together using coaxial cable, waveguide, optical fiber, or another type of transmission line. Interferometry increases the total signal collected, but its primary purpose is to vastly increase the resolution through a process called aperture synthesis. This technique works by superposing (interfering) the signal waves from the different telescopes, on the principle that waves that coincide with the same phase will add to each other while two waves that have opposite phases will cancel each other out. This creates a combined telescope that is equivalent in resolution (though not in sensitivity) to a single antenna whose diameter is equal to the spacing of the antennas furthest apart in the array.

Acoustic interferometry
An acoustic interferometer is an instrument for measuring the physical characteristics of sound waves in a gas or liquid. It may be used to measure velocity, wavelength, absorption, or impedance. A vibrating crystal creates the ultrasonic waves that are radiated into the medium. The waves strike a reflector placed parallel to the crystal. The waves are then reflected back to the source and measured.
Quantum interference
If a system is in state $\psi$, its wavefunction is described in Dirac or bra–ket notation as:
$$|\psi\rangle = \sum_i |i\rangle \psi_i$$
where the states $|i\rangle$ form an orthonormal basis and the $\psi_i$ are the (complex) coefficients. The probability of observing a transition to a state $\varphi$ with coefficients $\varphi_i$ is
$$\operatorname{prob}(\psi \Rightarrow \varphi) = |\langle \psi | \varphi \rangle|^2 = \sum_{ij} \psi^*_i \psi_j \varphi^*_j \varphi_i = \sum_{i} |\psi_i|^2 |\varphi_i|^2 + \sum_{ij;\, i \ne j} \psi^*_i \psi_j \varphi^*_j \varphi_i$$
where (as defined above) the $\psi_i$ and similarly the $\varphi_i$ are the coefficients of the initial and final states of the system. $*$ denotes the complex conjugate, so that $\psi_i^* \psi_i = |\psi_i|^2$, etc.
Now let's consider the situation classically and imagine that the system transited from $\psi$ to $\varphi$ via an intermediate state $i$. Then we would classically expect the probability of the two-step transition to be the sum over all the possible intermediate steps. So we would have
$$\operatorname{prob}(\psi \Rightarrow \varphi) = \sum_i \operatorname{prob}(\psi \Rightarrow i \Rightarrow \varphi) = \sum_i |\langle \psi | i \rangle|^2 |\langle i | \varphi \rangle|^2 = \sum_i |\psi_i|^2 |\varphi_i|^2.$$
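The gap between the quantum and the classical expressions can be checked numerically; the states below are arbitrary random unit vectors in a 5-dimensional intermediate basis, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    return v / np.linalg.norm(v)

n = 5
psi = normalize(rng.normal(size=n) + 1j * rng.normal(size=n))
phi = normalize(rng.normal(size=n) + 1j * rng.normal(size=n))

# Quantum: amplitudes over intermediate states add, then square.
quantum = abs(np.vdot(psi, phi)) ** 2

# Classical: probabilities over intermediate states add.
classical = np.sum(np.abs(psi) ** 2 * np.abs(phi) ** 2)

# Cross (interference) terms: sum over i != j of psi_i* psi_j phi_j* phi_i
outer = np.outer(np.conj(psi) * phi, psi * np.conj(phi))
cross = (outer.sum() - np.trace(outer)).real

# quantum probability = classical probability + interference terms
assert abs(quantum - (classical + cross)) < 1e-12
```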
The classical and quantum derivations for the transition probability differ by the presence, in the quantum case, of the extra terms $\sum_{ij;\, i \ne j} \psi^*_i \psi_j \varphi^*_j \varphi_i$; these extra quantum terms represent interference between the different intermediate "alternatives". These are consequently known as the quantum interference terms, or cross terms. This is a purely quantum effect and is a consequence of the non-additivity of the probabilities of quantum alternatives.

See also

Active noise control · Beat (acoustics) · Coherence (physics) · Diffraction · Double-slit experiment · Young's double slit interferometer · Haidinger fringes · Hong–Ou–Mandel effect · Interference lithography · Interferometer · List of types of interferometers · Lloyd's mirror · Moiré pattern · Newton's rings · Thin-film interference · Optical feedback · Retroreflector · Upfade · Multipath interference · Inter-flow interference · Intra-flow interference · Bio-layer interferometry · N-slit interferometric equation
An approximation is anything that is similar but not exactly equal to something else. The term can be applied to various properties (e.g. value, quantity, image, description) that are nearly but not exactly correct, or similar but not exactly the same (e.g. "The approximate time was 10 o'clock").
In science, approximation can refer to using a simpler process or model when the correct model is difficult to use. An approximate model is used to make calculations easier. Approximations might also be used if incomplete information prevents use of exact representations.

The type of approximation used depends on the available information, the degree of accuracy required, the sensitivity of the problem to this data, and the savings (usually in time and effort) that can be achieved by approximation.
Approximation theory is a branch of mathematics, and a quantitative part of functional analysis. Diophantine approximation deals with approximations of real numbers by rational numbers.

Approximation usually occurs when an exact form or an exact numerical number is unknown or difficult to obtain. However, some known form may exist and may be able to represent the real form so that no significant deviation can be found. Approximation is also used when a number is not rational, such as the number π, which often is shortened to 3.14159, or √2, shortened to 1.414.
Numerical approximations sometimes result from using a small number of significant digits. Calculations are likely to involve rounding errors, leading to approximation. Log tables, slide rules and calculators produce approximate answers to all but the simplest calculations. The results of computer calculations are normally an approximation expressed in a limited number of significant digits, although they can be programmed to produce more precise results.
[1]
Approximation can occur when a decimal number cannot be expressed in a finite number of binary digits.
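A familiar instance in binary floating point: neither 0.1 nor 0.2 has a finite binary expansion, so their float sum is only an approximation of 0.3.

```python
# 0.1 and 0.2 have no finite binary representation, so their float
# sum is only an approximation of 0.3.
x = 0.1 + 0.2
assert x != 0.3              # exact equality fails...
assert abs(x - 0.3) < 1e-15  # ...but the error is tiny
```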
Related to approximation of functions is the asymptotic value of a function, i.e. the value as one or more of a function's parameters becomes arbitrarily large. For example, the sum $(k/2)+(k/4)+(k/8)+\cdots+(k/2^n)$ is asymptotically equal to $k$. Unfortunately, no consistent notation is used throughout mathematics: some texts use ≈ to mean approximately equal and ~ to mean asymptotically equal, whereas other texts use the symbols the other way around.
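The partial sums can be checked directly; the gap to $k$ is exactly $k/2^n$, which vanishes as $n$ grows.

```python
# Partial sums of k/2 + k/4 + ... + k/2**n approach k as n grows.
k = 10.0
partial = [sum(k / 2 ** i for i in range(1, n + 1)) for n in (1, 5, 20, 60)]

assert all(p < k for p in partial[:-1])   # every finite sum falls short of k
assert abs(partial[-1] - k) < 1e-12       # but the gap k/2**n shrinks to ~0
```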
As another example, in order to accelerate the convergence rate of evolutionary algorithms, fitness approximation, which builds a model of the fitness function in order to choose smart search steps, is a good solution.
Approximation arises naturally in scientific experiments. The predictions of a scientific theory can differ from actual measurements. This can be because there are factors in the real situation that are not included in the theory. For example, simple calculations may not include the effect of air resistance. Under these circumstances, the theory is an approximation to reality. Differences may also arise because of limitations in the measuring technique. In this case, the measurement is an approximation to the actual value.
The history of science shows that earlier theories and laws can be approximations to some deeper set of laws. Under the correspondence principle, a new scientific theory should reproduce the results of older, well-established theories in those domains where the old theories work. [2] The old theory becomes an approximation to the new theory.
Some problems in physics are too complex to solve by direct analysis, or progress could be limited by available analytical tools. Thus, even when the exact representation is known, an approximation may yield a sufficiently accurate solution while reducing the complexity of the problem significantly. Physicists often approximate the shape of the Earth as a sphere even though more accurate representations are possible, because many physical characteristics (e.g. gravity) are much easier to calculate for a sphere than for other shapes.
Approximation is also used to analyze the motion of several planets orbiting a star. This is extremely difficult due to the complex interactions of the planets' gravitational effects on each other.
[3]
An approximate solution is effected by performing iterations. In the first iteration, the planets' gravitational interactions are ignored, and the star is assumed to be fixed. If a more precise solution is desired, another iteration is then performed, using the positions and motions of the planets as identified in the first iteration, but adding a first-order gravity interaction from each planet on the others. This process may be repeated until a satisfactorily precise solution is obtained.
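The iterate-and-refine idea can be illustrated on a much simpler fixed-point problem, solving $x = \cos x$ by repeatedly feeding the current answer back in (the planetary problem is of course far more involved; this is only an analogy):

```python
import math

# Iterative refinement on a toy fixed-point problem: solve x = cos(x).
# Each pass plays the role of one refinement iteration.
x = 0.0
for _ in range(100):
    x = math.cos(x)

# The iterates settle at the fixed point (~0.739085, the Dottie number)
assert abs(x - math.cos(x)) < 1e-12
```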
The use of perturbations to correct for the errors can yield more accurate solutions. Simulations of the motions of the planets and the star also yield more accurate solutions.
The most common versions of philosophy of science accept that empirical measurements are always approximations—they do not perfectly represent what is being measured.
Symbols representing approximation:

≐ — general approximation
≈ — asymptotic analysis

Symbols used to denote items that are approximately equal are wavy or dotted equals signs. [4]

≈ (U+2248)
≃ (U+2243), a combination of "≈" and "=", also used to indicate asymptotically equal to
≅ (U+2245), another combination of "≈" and "=", which is used to indicate isomorphism or sometimes congruence
≊ (U+224A), also a combination of "≈" and "=", used to indicate equivalence or approximate equivalence
∼ (U+223C), which is also sometimes used to indicate proportionality
∽ (U+223D), which is also sometimes used to indicate proportionality
≐ (U+2250), which can also be used to represent the approach of a variable to a limit
≒ (U+2252), which is used like "≃" in both Japanese and Korean
≓ (U+2253), a reversed variation of "≒"
In LaTeX markup:

≈ (\approx), usually to indicate approximation between numbers, as in $\pi \approx 3.14$.
≃ (\simeq), usually to indicate asymptotic equivalence between functions, as in $f(n) \simeq 3n^{2}$; writing $\pi \simeq 3.14$ would therefore be wrong, despite being widely used.
∼ (\sim), usually to indicate proportionality between functions; the same $f$ of the line above would satisfy $f(n) \sim n^{2}$.
≅ (\cong), usually to indicate congruence between figures, as in $\triangle ABC \cong \triangle A'B'C'$.
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
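The two-dimensional case is easy to check numerically: two reflections whose mirror lines meet at angle $\theta/2$ compose to the rotation by $\theta$ (a Householder-style sketch, with an arbitrary angle chosen for illustration):

```python
import numpy as np

def reflection(u):
    """Householder reflection across the hyperplane orthogonal to u."""
    u = u / np.linalg.norm(u)
    return np.eye(len(u)) - 2.0 * np.outer(u, u)

theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])

# Mirror lines at angles 0 and theta/2 (given here by their normals)
R1 = reflection(np.array([0.0, 1.0]))                        # x-axis mirror
R2 = reflection(np.array([-np.sin(theta / 2), np.cos(theta / 2)]))

# Two reflections compose to the rotation by theta
assert np.allclose(R2 @ R1, rot)
```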
Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki:- The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants.Then $A$ makes a move, then $B$, and so on...The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior derivative: why would someone come up with something like that? I mean, it looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms.
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leads you at a place different from $p$. And upto second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$ and on the truncation edge it's $\omega([X, Y])$
Gently taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
Let's do bullshit generality. $E$ be a vector bundle on $M$ and $\nabla$ be a connection $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's a pointwise defined on the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$ not the full vector field. That makes sense, right? You can take directional derivative of a function at a point in the direction of a single vector at that point
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).

Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok first but basically think of it as currying. Making a billinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphsm $TM \to E$ but contracting $s$ in $\nabla s$ only gave as a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of space of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined on $X$ and not $s$)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking the derivative of a section of $E$ with respect to a vector field on $M$ means: applying the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ extends the connection to $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$; explicitly, $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\ldots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{\left(\frac{X_{1}+\cdots+X_{n}}{n}\right)^{2}}{2}\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ?
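Whether the claimed value is right depends on how $\Gamma(2, 2/\lambda)$ is parametrized. Assuming the shape–scale convention (shape $2$, scale $2/\lambda$), the mean is $4/\lambda$ and the variance $8/\lambda^2$, giving $\mathbb{E}[\bar X^2/2] = \frac{4}{n\lambda^2} + \frac{8}{\lambda^2}$; a Monte Carlo sketch can check that formula (the parametrization is an assumption, not given in the question):

```python
import numpy as np

# Monte Carlo check, ASSUMING the shape-scale convention
# Gamma(shape=2, scale=2/lam): mean = 4/lam, var = 8/lam**2, so
# E[(sample mean of n draws)**2 / 2] = 4/(n*lam**2) + 8/lam**2.
rng = np.random.default_rng(42)
lam, n, reps = 2.0, 5, 200_000

means = rng.gamma(shape=2.0, scale=2.0 / lam, size=(reps, n)).mean(axis=1)
mc = np.mean(means ** 2 / 2)
analytic = 4.0 / (n * lam ** 2) + 8.0 / lam ** 2

assert abs(mc - analytic) / analytic < 0.05
```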
Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
@Ultradark I don't know what you mean, but you seem down in the dumps champ. Remember, girls are not as significant as you might think, design an attachment for a cordless drill and a flesh light that oscillates perpendicular to the drill's rotation and your done. Even better than the natural method
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute-force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.

My only quibble with this solution is that it doesn't seem very elegant. Is there a better way?
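The order-6 case can be checked mechanically by closing the two generators under composition (a small permutation sketch; permutations are written 0-indexed as tuples mapping position $i$ to $p[i]$):

```python
def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def generated(gens):
    """Closure of the generators under composition, starting from the identity."""
    identity = tuple(range(4))
    group = {identity}
    frontier = [identity]
    while frontier:
        g = frontier.pop()
        for h in gens:
            x = compose(h, g)
            if x not in group:
                group.add(x)
                frontier.append(x)
    return group

swap = (1, 0, 2, 3)      # the transposition (1 2), 0-indexed
cycle3 = (1, 2, 0, 3)    # the 3-cycle (1 2 3), 0-indexed

# <(1 2), (1 2 3)> is a copy of S3 inside S4: the order-6 subgroup
assert len(generated([swap, cycle3])) == 6
```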
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say
I know that the main operations (Insert, Search, Delete) have a worst-case running time of $\mathcal{O} (h)$. But I wanted to dig into this deeper.
Basically I am having some difficulties understanding "Big-Omega" when it comes to the worst-case time complexity. I usually define them as follows:
Let $t(x)$ be the number of steps taken by an algorithm $\mathcal{A}$ on input $x$.
Let $T(n)$ be the worst-case running time complexity of $\mathcal{A}$:

$T(n) = \max(t(x))$, where the max is over all inputs $x$ of size $n$.

Then $T(n) \in \mathcal{O}(g(n))$ if for every input of size $n$, $\mathcal{A}$ takes at most $c \cdot g(n)$ steps. Moreover, $T(n) \in \Omega(g(n))$ if for some input (there exists an input) of size $n$, $\mathcal{A}$ takes at least $c \cdot g(n)$ steps.
Returning to BSTs....
We know that for all inputs of size $n$, in the worst case, the height of the tree is $n$, which means we need to visit all $n$ nodes in the worst case. This is the "ultimate" worst case (forgive my lack of rigour here!), meaning it cannot get any worse, and hence the $\mathcal{O}(n)$ running time. But we also know that a tree may be balanced, in which case we could argue that there exists an input (a balanced tree) for which we would take at least $\Omega(\log n)$ time. This is still a "worst case", but a lower bound to that worst case.
I don't feel that is quite right, nor does it make much sense. Perhaps I am just lacking an understanding of how to determine when $T(n) \in \Omega(g(n))$.
any help appreciated!
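The two regimes in the question can be seen concretely: inserting keys in sorted order produces a height-$n$ chain, while a balanced build gives height about $\log_2 n$ (a plain unbalanced-BST sketch, counting height in nodes):

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Standard unbalanced BST insertion."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def height(root):
    return 0 if root is None else 1 + max(height(root.left), height(root.right))

def balanced(keys):
    """Build a perfectly balanced BST from sorted keys."""
    if not keys:
        return None
    mid = len(keys) // 2
    node = Node(keys[mid])
    node.left = balanced(keys[:mid])
    node.right = balanced(keys[mid + 1:])
    return node

n = 127
chain = None
for k in range(n):              # worst-case input: keys in sorted order
    chain = insert(chain, k)

assert height(chain) == n                        # degenerate chain: height n
assert height(balanced(list(range(n)))) == 7     # log2(127 + 1) = 7
```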
Consider the data generating process (DGP):
$y_{t}= x_{t}^{\prime}\beta+\varepsilon_{t}$ (1)
$\varepsilon_{t}= \sigma_{t}z_{t},\quad z_{t}\sim i.i.d.\, N\left(0,\,1\right)$ (2)
$\sigma_{t}^{2}= \sigma^{2}+\alpha\varepsilon_{t-1}^{2}+\beta\sigma_{t-1}^{2}$ (3)
Equation (1) is a model for the conditional mean of the process; Eq. (2) and (3) define a model for the conditional variance of the process (in this case the residuals are Gaussian). You could estimate these equations step-by-step using OLS (same as when testing for an ARCH(k) effect) since OLS is consistent, but OLS will be inefficient, and non-linear estimators such as the Maximum Likelihood estimator (ML) will produce a lower variance. The likelihood function for the model above would look like:
$l_{T}\left(\theta\right)=-\frac{T}{2}\log\left(2\pi\right)-\frac{1}{2}\sum_{t=1}^{T}\left(\log\sigma_{t}^{2}\left(\theta\right)+\frac{\varepsilon_{t}^{2}}{\sigma_{t}^{2}\left(\theta\right)}\right)\Leftrightarrow$
$l_{T}\left(\theta\right)=-\frac{T}{2}\log\left(2\pi\right)-\frac{1}{2}\sum_{t=1}^{T}\left(\log\left(\sigma^{2}+\alpha\varepsilon_{t-1}^{2}+\beta\sigma_{t-1}^{2}\right)+\frac{\left(y_{t}-x_{t}^{\prime}\beta\right)^{2}}{\sigma^{2}+\alpha\varepsilon_{t-1}^{2}+\beta\sigma_{t-1}^{2}}\right)$
You could drop the first term when maximizing since it is constant. In practice you would start by estimating Eq. (1) and saving the residuals. If all misspecification tests are okay and you do have ARCH effects, then you would estimate Eq. (3) (or a similar ARCH/GARCH family model) on your saved residuals. Furthermore, you could calculate s.e.'s from the negative of the expected value of the Hessian matrix, although robust s.e.'s are recommended in case your distributional assumption does not hold (QMLE).
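A minimal sketch of evaluating this Gaussian GARCH(1,1) likelihood on saved residuals (initializing $\sigma_0^2$ at the sample variance is one common convention, not the only one; the sanity check below only verifies that the likelihood prefers the true parameters of a simulated series over a bad guess):

```python
import numpy as np

def garch11_loglik(params, eps):
    """Gaussian GARCH(1,1) log-likelihood for a residual series eps.

    params = (omega, alpha, beta); sigma2[0] is set to the sample
    variance, one common initialization convention.
    """
    omega, alpha, beta = params
    sigma2 = np.empty_like(eps)
    sigma2[0] = eps.var()
    for t in range(1, len(eps)):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return -0.5 * np.sum(np.log(2 * np.pi) + np.log(sigma2)
                         + eps ** 2 / sigma2)

# Simulate a GARCH(1,1) residual series with known parameters
rng = np.random.default_rng(0)
T, omega, alpha, beta = 2000, 0.04, 0.2, 0.5
eps = np.empty(T)
s2 = omega / (1 - alpha - beta)          # start at the unconditional variance
for t in range(T):
    eps[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega + alpha * eps[t] ** 2 + beta * s2

# The likelihood at the true parameters beats a clearly wrong guess
assert garch11_loglik((omega, alpha, beta), eps) > garch11_loglik((0.5, 0.05, 0.1), eps)
```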
To illustrate I have simulated an AR-GARCH process:
$y_{t}=\mu+\theta y_{t-1}+\varepsilon_{t}$ (4)
$\varepsilon_{t}=\sigma_{t}z_{t},\quad z_{t}\sim i.i.d.\, N\left(0,\,1\right)$ (5)
$\sigma_{t}^{2}=\sigma^{2}+\alpha\varepsilon_{t-1}^{2}+\beta\sigma_{t-1}^{2}$ (6)
with parameter values $\mu=0.01 $, $\theta=0.6 $, $\sigma^{2}=0.04 $, $\alpha=0.2 $ and $\beta=0.5 $. First I fit an AR(1) model to the simulated series:
I estimate the model using ML and get estimated values of: $y_{t}=0.0013538+0.59745y_{t-1}+\varepsilon_{t}$
Then I save the residuals and plot the residuals and squared residuals. We see there are ARCH effects as indicated by the squared residuals, although I perhaps should have simulated a more persistent series.
Then I estimate a GARCH(1,1) model, Eq. (6) on the series called Residual. This gives us estimated values of: $\sigma_{t}^{2}=0.03720+0.23753\varepsilon_{t-1}^{2}+0.50754\sigma_{t-1}^{2}$
Which we can use to make forecasts of the conditional variance. Note that when dealing with financial series you will often end up with a model for the conditional mean corresponding with $y_{t}=\mu+\varepsilon_{t}$, i.e. only a constant, which corresponds with the efficient market hypothesis, since if it were possible to forecast the conditional mean of some given share price it would be easy to make a profit.
Another way to estimate the model is to estimate it in one go. Doing that I get the model: $y_{t}=0.003444+0.605931\,y_{t-1}+\varepsilon_{t}$
$\varepsilon_{t}=\sigma_{t}z_{t}$
$\sigma_{t}^{2}=0.037121+0.237526\varepsilon_{t-1}^{2}+0.508122\sigma_{t-1}^{2}$
Which is very similar to the estimated model above which was estimated in two stages. Forecasting 10 periods into the future gives us the figure below.
We see that the first graph shows the simulated series, AR(1), and the fitted model while the second graph depicts the fitted values from the GARCH(1,1) model on the residuals from the conditional mean model.
Usually you would first estimate the mean of the model, and then, once you have a well-specified model for the conditional mean, you would proceed to test for ARCH effects. The reason for this is that if your DGP follows an AR(2) process but you estimate an AR(1) model, then your residuals will exhibit autocorrelation. If your residuals exhibit autocorrelation, this implies that your squared residuals will exhibit autocorrelation, but in general the converse is not true. Therefore the ARCH test will also have power against residual autocorrelation, and this is the reason to make sure that your model is well specified before testing for ARCH effects.
If this did not answer your question then let me know and I will amend my answer. |
We consider a family of mixed processes given as the sum of a fractional Brownian motion with Hurst parameter $H\in (3/4,1)$ and a multiple of an independent standard Brownian motion, the family being indexed by the scaling factor in front of the Brownian motion. We analyze the underlying markets with methods from large financial markets. More precisely, we show the existence of a strong asymptotic arbitrage (defined as in Kabanov and Kramkov [Finance Stoch. 2(2), 143–172 (1998)]) when the scaling factor converges to zero. We apply a result of Kabanov and Kramkov [Finance Stoch. 2(2), 143–172 (1998)] that characterizes the notion of strong asymptotic arbitrage in terms of the entire asymptotic separation of two sequences of probability measures. The main part of the paper consists of proving the entire separation and is based on a dichotomy result for sequences of Gaussian measures and the concept of relative entropy.
A one-dimensional stochastic wave equation driven by a general stochastic measure is studied in this paper. The Fourier series expansion of stochastic measures is considered. It is proved that by changing the integrator to the corresponding partial sums or to Fejér sums we obtain approximations of the mild solution of the equation.
Fractional equations governing the distribution of reflecting drifted Brownian motions are presented. The equations are expressed in terms of tempered Riemann–Liouville type derivatives. For these operators a Marchaud-type form is obtained and a Riesz tempered fractional derivative is examined, together with its Fourier transform.
The nonlocal porous medium equation considered in this paper is a degenerate nonlinear evolution equation involving a space pseudo-differential operator of fractional order. This space-fractional equation admits an explicit, nonnegative, compactly supported weak solution representing a probability density function. In this paper we analyze the link between isotropic transport processes, or random flights, and the nonlocal porous medium equation. In particular, we focus our attention on the interpretation of the weak solution of the nonlinear diffusion equation by means of random flights.
The generalized mean-square fractional integrals ${\mathcal{J}_{\rho ,\lambda ,u+;\omega }^{\sigma }}$ and ${\mathcal{J}_{\rho ,\lambda ,v-;\omega }^{\sigma }}$ of the stochastic process X are introduced. Then, for Jensen-convex and strongly convex stochastic processes, the generalized fractional Hermite–Hadamard inequality is established via generalized stochastic fractional integrals.
The problem of (pathwise) large deviations for conditionally continuous Gaussian processes is investigated. The theory of large deviations for Gaussian processes is extended to the wider class of random processes – the conditionally Gaussian processes. The estimates of level crossing probability for such processes are given as an application.
Martingale-like sequences in vector lattice and Banach lattice frameworks are defined in the same way as martingales are defined in [Positivity 9 (2005), 437–456]. In these frameworks, a collection of bounded X-martingales is shown to be a Banach space under the supremum norm, and under some conditions it is also a Banach lattice with coordinate-wise order. Moreover, a necessary and sufficient condition is presented for the collection of $\mathcal{E}$-martingales to be a vector lattice with coordinate-wise order. It is also shown that the collection of bounded $\mathcal{E}$-martingales is a normed lattice but not necessarily a Banach space under the supremum norm.
We consider the infinite divisibility of distributions of some well-known inverse subordinators. Using a tail probability bound, we establish that distributions of many of the inverse subordinators used in the literature are not infinitely divisible. We further show that the distribution of a renewal process time-changed by an inverse stable subordinator is not infinitely divisible, which in particular implies that the distribution of the fractional Poisson process is not infinitely divisible. |
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
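As a small numerical illustration of the two-reflections statement (my own sketch, not part of the quoted theorem): in the Euclidean plane, reflecting across lines through the origin at angles $a$ and $b$ composes to the rotation by $2(a-b)$.

```python
import math

def reflection(theta):
    """Matrix of the reflection across the line through 0 at angle theta."""
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return [[c, s], [s, -c]]

def rotation(phi):
    """Matrix of the rotation by angle phi."""
    c, s = math.cos(phi), math.sin(phi)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b = 0.7, 0.2                    # arbitrary mirror angles
R = matmul(reflection(a), reflection(b))
expected = rotation(2 * (a - b))   # the composition is a proper rotation
```

Each reflection has determinant $-1$, so the composition has determinant $+1$: a proper rotation, matching the Cartan–Dieudonné count of two reflections per plane rotation.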
How does the Evolute of an Involute of a curve $\Gamma$ is $\Gamma$ itself?Definition from wiki:-The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants.Then $A$ makes a move, then $B$, and so on...The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior product, why would someone come up with something like that. I mean it looks really similar to the formula of the covariant derivative along a vector field for a tensor but otherwise I don't see why would it be something natural to come up with. The only places I have used it is deriving the poisson bracket of two one forms
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leads you to a place different from $p$. And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$ and on the truncation edge it's $\omega([X, Y])$
Gently taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ be a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's pointwise defined in the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right? You can take the directional derivative of a function at a point in the direction of a single vector at that point
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).
Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok at first, but basically think of it as currying: making a bilinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphism $TM \to E$ but contracting $s$ in $\nabla s$ only gave us a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined in $X$ and not $s$)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking derivative of a section of $E$ wrt a vector field on $M$ means, taking the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading this thing called the Poisson bracket. With the poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\dots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{\left(\frac{X_{1}+\cdots+X_{n}}{n}\right)^{2}}{2}\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ?
Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
@Ultradark I don't know what you mean, but you seem down in the dumps champ. Remember, girls are not as significant as you might think, design an attachment for a cordless drill and a flesh light that oscillates perpendicular to the drill's rotation and your done. Even better than the natural method
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.
My only quibble with this solution is that it doesn't seem very elegant. Is there a better way?
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say |
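The subgroup-order claims in this exchange are easy to confirm by brute force. Here is a sketch (my own illustration, not part of the chat), with permutations of $\{0,1,2,3\}$ written as tuples and each subgroup built as the multiplicative closure of a few generators:

```python
def compose(p, q):
    """(p o q)(i) = p[q[i]] for permutations given as 4-tuples."""
    return tuple(p[q[i]] for i in range(4))

def generated(gens):
    """Closure of the generators under composition
    (inverses come for free in a finite group)."""
    elems = {tuple(range(4))} | set(gens)
    while True:
        new = {compose(p, q) for p in elems for q in elems} - elems
        if not new:
            return elems
        elems |= new

e = (0, 1, 2, 3)
t = (1, 0, 2, 3)        # transposition (1 2)
c3 = (1, 2, 0, 3)       # 3-cycle (1 2 3)
c4 = (1, 2, 3, 0)       # 4-cycle (1 2 3 4)
dbl = (1, 0, 3, 2)      # double transposition (1 2)(3 4)

orders = {
    1: {e}, 2: generated([t]), 3: generated([c3]), 4: generated([c4]),
    6: generated([t, c3]),             # a copy of S3
    8: generated([c4, (2, 1, 0, 3)]),  # a dihedral 2-Sylow
    12: generated([c3, dbl]),          # A4
    24: generated([t, c4]),            # all of S4
}
```

Checking `len(orders[d]) == d` for each divisor confirms every order $d \mid 24$ is realized.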
Essentially a similar question to Different boolean degrees polynomially related? (the change being the error condition $\epsilon\in(0,1)$).
Let $p$ be the minimum degree (of degree $d_f$) real polynomial that represents boolean function $f$ such that $f(x)=p(x)$.
Let $p_{0,\epsilon}$ be the minimum degree (of degree $d_{0,f,\epsilon}$) real polynomial that represents boolean function $f$ such that $$f(x)=0\implies p_{0,\epsilon}(x)=0$$$$f(x)=1\implies|p_{0,\epsilon}(x)-f(x)|\leq\epsilon.$$
Let $p_{1,\epsilon}$ be the minimum degree (of degree $d_{1,f,\epsilon}$) real polynomial that represents boolean function $f$ such that $$f(x)=1\implies p_{1,\epsilon}(x)=1$$$$f(x)=0\implies|p_{1,\epsilon}(x)-f(x)|\leq\epsilon.$$
Is $d_{f}\leq d_{0,f,\epsilon}^{c_0}$ and $d_{f}\leq d_{1,f,\epsilon}^{c_1}$ for some $c_0$ and $c_1$?
The above holds if $\epsilon\in(0,\frac{1}{2})$, as mentioned in the linked question Different boolean degrees polynomially related?.
However, what happens if $\epsilon\in(0,1)$ instead of $(0,\frac{1}{2})$ (does the polynomial relation still hold)?
That is, we consider $0<\epsilon<\frac{1}{2}\leq\delta<1$.
Note that defining $p_\delta$ makes little sense if $\delta\in[\frac{1}{2},1)$.
I am most interested in $\delta=1-\frac{1}{h(n)}$ with some function of $n$ (logarithmic/polynomial/exponential). |
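For context (my own illustration, not from the linked question): the exact degree $d_f$ is the degree of the unique multilinear real polynomial agreeing with $f$ on $\{0,1\}^n$, and it can be computed by Möbius inversion over subsets:

```python
from itertools import product

def multilinear_coeffs(f, n):
    """Coefficients c_S of the unique multilinear polynomial
    p(x) = sum_S c_S * prod_{i in S} x_i with p = f on {0,1}^n,
    via Moebius inversion: c_S = sum_{T <= S} (-1)^{|S|-|T|} f(T)."""
    coeffs = {}
    for S in product([0, 1], repeat=n):
        c = 0
        for T in product([0, 1], repeat=n):
            if all(t <= s for t, s in zip(T, S)):
                c += (-1) ** (sum(S) - sum(T)) * f(T)
        if c:
            coeffs[S] = c
    return coeffs

def degree(f, n):
    """Exact real degree d_f of the boolean function f."""
    return max((sum(S) for S in multilinear_coeffs(f, n)), default=0)

AND = lambda x: int(all(x))
OR = lambda x: int(any(x))
```

For example, OR on two bits is represented exactly by $x_1+x_2-x_1x_2$, so $d_{\mathrm{OR}_2}=2$; the one-sided degrees $d_{0,f,\epsilon}$ and $d_{1,f,\epsilon}$ relax this exact interpolation on one side only.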
Learning Objectives
Express the changes in the atomic number and mass number of a radioactive nucleus when a particle or ray is emitted. Write and balance nuclear reactions given in symbol-mass format. Describe a decay series for radioactive elements.
Many nuclei are radioactive; that is, they decompose by emitting particles or rays and in doing so, become a different nucleus. In our studies up to this point, atoms of one element were unable to change into different elements. That is because in all other types of changes we have talked about, only the electrons were changing. In these changes, the nucleus, which contains the protons that dictate which element an atom is, is changing. All nuclei with 84 or more protons are radioactive, and elements with fewer than 84 protons have both stable and unstable isotopes. All of these elements can go through nuclear changes and turn into different elements.
In natural radioactive decay, three common emissions occur. When these emissions were originally observed, scientists were unable to identify them as any already-known particles and so named them
alpha particles \(\left( \alpha \right)\), beta particles \(\left( \beta \right)\), and gamma rays \(\left( \gamma \right)\)
using the first three letters of the Greek alphabet. Some time later, alpha particles were identified as helium-4 nuclei, beta particles were identified as electrons, and gamma rays as a form of electromagnetic radiation like x-rays, except much higher in energy and even more dangerous to living systems.
Alpha Decay
The nuclear disintegration process that emits alpha particles is called alpha decay. An example of a nucleus that undergoes alpha decay is uranium-238. The alpha decay of \(\ce{U}\)-238 is
\[\ce{_{92}^{238}U} \rightarrow \ce{_2^4He} + \ce{_{90}^{234}Th} \label{alpha1}\]
In this nuclear change, the uranium atom \(\left( \ce{_{92}^{238}U} \right)\) transmuted into an atom of thorium \(\left( \ce{_{90}^{234}Th} \right)\) and, in the process, gave off an alpha particle. Look at the symbol for the alpha particle: \(\ce{_2^4He}\). Where does an alpha particle get this symbol? The bottom number in a nuclear symbol is the number of protons. That means that the alpha particle has two protons in it which were lost by the uranium atom. The two protons also have a charge of \(+2\). The top number, 4, is the mass number or the total of the protons and neutrons in the particle. Because it has 2 protons, and a total of 4 protons and neutrons, alpha particles must also have two neutrons. Alpha particles always have this same composition: two protons and two neutrons.
Another alpha particle producer is thorium-230.
\[\ce{_{90}^{230}Th} \rightarrow \ce{_2^4He} + \ce{_{88}^{226}Ra} \label{alpha2}\]
Alpha decays occur with radioactive isotopes of radium, radon, uranium, and thorium.
Beta Decay
Another common decay process is beta particle emission, or beta decay. A beta particle is simply a high energy electron that is emitted from the nucleus. It may occur to you that we have a logically difficult situation here. Nuclei do not contain electrons and yet during beta decay, an electron is emitted from a nucleus. At the same time that the electron is being ejected from the nucleus, a neutron is becoming a proton. It is tempting to picture this as a neutron breaking into two pieces with the pieces being a proton and an electron. That would be convenient for simplicity, but unfortunately that is not what happens; more about this at the end of this section. For convenience sake, though, we will treat beta decay as a neutron splitting into a proton and an electron. The proton stays in the nucleus, increasing the atomic number of the atom by one. The electron is ejected from the nucleus and is the particle of radiation called beta.
To insert an electron into a nuclear equation and have the numbers add up properly, an atomic number and a mass number had to be assigned to an electron. The mass number assigned to an electron is zero (0) which is reasonable since the mass number is the number of protons plus neutrons and an electron contains no protons and no neutrons. The atomic number assigned to an electron is negative one (-1), because that allows a nuclear equation containing an electron to balance atomic numbers. Therefore, the nuclear symbol representing an electron (beta particle) is
\(\ce{_{-1}^0e}\) or \(\ce{_{-1}^0\beta} \label{beta1}\)
Thorium-234 is a nucleus that undergoes beta decay. Here is the nuclear equation for this beta decay.
\[\ce{_{90}^{234}Th} \rightarrow \ce{_{-1}^0e} + \ce{_{91}^{234}Pa} \label{beta2}\]
Beta decays are common with Sr-90, C-14, H-3 and S-35.
Gamma Radiation
Frequently, gamma ray production accompanies nuclear reactions of all types. In the alpha decay of \(\ce{U}\)-238, two gamma rays of different energies are emitted in addition to the alpha particle.
\[\ce{_{92}^{238}U} \rightarrow \ce{_2^4He} + \ce{_{90}^{234}Th} + 2 \ce{_0^0\gamma}\]
Virtually all of the nuclear reactions in this chapter also emit gamma rays, but for simplicity the gamma rays are generally not shown. Nuclear reactions produce a great deal more energy than chemical reactions. Chemical reactions release the difference between the chemical bond energy of the reactants and products, and the energies released have an order of magnitude of \(1 \times 10^3 \: \text{kJ/mol}\). Nuclear reactions release some of the binding energy and may convert tiny amounts of matter into energy. The energy released in a nuclear reaction has an order of magnitude of \(1 \times 10^{18} \: \text{kJ/mol}\). That means that nuclear changes involve on the order of \(10^{15}\) times more energy per atom than chemical changes!
Common gamma emitters would include I-131, Cs-137, Co-60, and Tc-99.
The essential features of each reaction are shown in Figure \(\PageIndex{2}\).
Figure \(\PageIndex{2}\): Three most common modes of nuclear decay
Nuclear Accounting
When writing nuclear equations, there are some general rules that will help you:
The sum of the mass numbers (top numbers) on the reactant side equals the sum of the mass numbers on the product side.
The atomic numbers (bottom numbers) on the two sides of the reaction will also be equal.
In the alpha decay of \(\ce{^{238}U}\) (Equation \(\ref{alpha1}\)), both atomic and mass numbers are conserved:
mass number: \(238 = 4 + 234\) atomic number: \(92 = 2 + 90\)
Confirm that this equation is correctly balanced by adding up the reactants' and products' atomic and mass numbers. Also, note that because this was an alpha reaction, one of the products is the alpha particle, \(\ce{_2^4He}\).
Note that both the mass numbers and the atomic numbers add up properly for the beta decay of Thorium-234 (Equation \(\ref{beta2}\)):
mass number: \(234 = 0 + 234\) atomic number: \(90 = -1 + 91\)
The mass numbers of the original nucleus and the new nucleus are the same because a neutron has been lost but a proton has been gained, and so the sum of protons plus neutrons remains the same. The atomic number in the process has been increased by one, since the new nucleus has one more proton than the original nucleus. In this beta decay, a thorium-234 nucleus has become a protactinium-234 nucleus. Protactinium-234 is also a beta emitter and produces uranium-234.
\[\ce{_{91}^{234}Pa} \rightarrow \ce{_{-1}^0e} + \ce{_{92}^{234}U} \label{nuke1}\]
Once again, the atomic number increases by one and the mass number remains the same; confirm that the equation is correctly balanced.
Example \(\PageIndex{1}\)
Complete the following nuclear reaction by filling in the missing particle.
\[\ce{_{86}^{210}Rn} \rightarrow \ce{_2^4He} + ?\]
Solution:
This reaction is an alpha decay. We can solve this problem one of two ways:
Solution 1: When an atom gives off an alpha particle, its atomic number drops by 2 and its mass number drops by 4, leaving \(\ce{_{84}^{206}Po}\). We know the symbol is \(\ce{Po}\), for polonium, because this is the element with 84 protons on the periodic table.
Solution 2: Remember that the mass numbers on each side must total up to the same amount. The same is true of the atomic numbers.
Mass numbers: \(210 = 4 + ?\)
Atomic numbers: \(86 = 2 + ?\)
We are left with \(\ce{_{84}^{206}Po}\)
Example \(\PageIndex{2}\)
Write each of the following nuclear reactions.
a) Carbon-14, used in carbon dating, decays by beta emission.
b) Uranium-238 decays by alpha emission.
Solution:
a) Beta particles have the symbol \(\ce{_{-1}^0e}\). Emitting a beta particle causes the atomic number to increase by 1 and the mass number to not change. We get atomic numbers and symbols for elements using our periodic table. We are left with the following reaction:
\[\ce{_6^{14}C} \rightarrow \ce{_{-1}^0e} + \ce{_7^{14}N}\]
b) Alpha particles have the symbol \(\ce{_2^4He}\). Emitting an alpha particle causes the atomic number to decrease by 2 and the mass number to decrease by 4. We are left with:
\[\ce{_{92}^{238}U} \rightarrow \ce{_2^4He} + \ce{_{90}^{234}Th}\]
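The accounting in these examples reduces to two integer bookkeeping steps, which can be sketched in Python (my own illustration; the symbol table below covers only this section's examples):

```python
# Nuclear "accounting": conserve the mass number (top) and atomic number
# (bottom) across a decay. Symbol table limited to this section's examples.
SYMBOLS = {6: "C", 7: "N", 84: "Po", 86: "Rn", 88: "Ra", 90: "Th", 91: "Pa", 92: "U"}

def decay(Z, A, mode):
    """Return (Z, A, symbol) of the daughter nucleus for alpha or beta decay."""
    if mode == "alpha":      # emit He-4: Z drops by 2, A drops by 4
        Z, A = Z - 2, A - 4
    elif mode == "beta":     # emit an electron: Z rises by 1, A unchanged
        Z, A = Z + 1, A
    else:
        raise ValueError("unknown decay mode: " + mode)
    return Z, A, SYMBOLS.get(Z, "?")

# U-238 --alpha--> Th-234; Th-234 --beta--> Pa-234; Rn-210 --alpha--> Po-206
```

Running `decay(86, 210, "alpha")` reproduces the \(\ce{_{84}^{206}Po}\) answer of Example \(\PageIndex{1}\).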
Decay Series
The decay of a radioactive nucleus is a move toward becoming stable. Often, a radioactive nucleus cannot reach a stable state through a single decay. In such cases, a series of decays will occur until a stable nucleus is formed. The decay of \(\ce{U}\)-238 is an example of this. The \(\ce{U}\)-238 decay series starts with \(\ce{U}\)-238 and goes through fourteen separate decays to finally reach a stable nucleus, \(\ce{Pb}\)-206 (Figure 17.3.3). There are similar decay series for \(\ce{U}\)-235 and \(\ce{Th}\)-232. The \(\ce{U}\)-235 series ends with \(\ce{Pb}\)-207 and the \(\ce{Th}\)-232 series ends with \(\ce{Pb}\)-208.
Several of the radioactive nuclei that are found in nature are present there because they are produced in one of the radioactive decay series. For example, there may have been radon on the earth at the time of its formation, but that original radon would have all decayed by this time. The radon that is present now is present because it was formed in a decay series (mostly by U-238).
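The same bookkeeping pins down the composition of a whole decay series: if a chain from parent to stable daughter uses $a$ alpha and $b$ beta decays, then $4a$ must equal the drop in mass number and $2a-b$ the drop in atomic number. A small sketch (my own illustration):

```python
def series_composition(Z_parent, A_parent, Z_daughter, A_daughter):
    """Numbers (alpha, beta) of decays in a series, from the conservation laws."""
    n_alpha, rem = divmod(A_parent - A_daughter, 4)  # only alpha changes A
    assert rem == 0, "mass-number drop must be a multiple of 4"
    n_beta = 2 * n_alpha - (Z_parent - Z_daughter)   # each beta raises Z by 1
    return n_alpha, n_beta

# U-238 (Z=92) -> Pb-206 (Z=82): 8 alpha + 6 beta = 14 decays, as stated above.
```

This recovers the fourteen separate decays quoted for the \(\ce{U}\)-238 series.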
Need More Practice? Turn to Section 5.E of this OER and work problems #1 and #2.
Contributors
CK-12 Foundation by Sharon Bewick, Richard Parsons, Therese Forsythe, Shonna Robinson, and Jean Dupon.
Emma Gibney (Furman University) |
Question a):
Here is the output from python console.
>>> import math
>>> math.log(3,10) * (3**(3**3))
3638334640024.0996
>>> math.pow(10, 0.0996)
1.2577664324512539
The first 5 most significant digits are almost the same as those from Wolfram, 12577 vs 12580.
If we try the following at my favorite online arbitrary precision calculator,
pow(3.0000000000000000000000000000000, pow(3, pow(3, 3)))
we get
1.2580142906274913178603906982032e3638334640024
which is completely consistent with Wolfram. So Wolfram can be fully trusted on this particular result.
You might be wondering why "$3.000000$$000000$$000000$$000000$$0000000$" is entered instead of a simple "$3$". Had "$3$" been used, that calculator would have performed integer calculations without loss of precision, which would run out of its 30-second time limit in our case. "$3.0$" would be enough to instruct the calculator to use floating-point numbers, which, however, would have only 2 digits of precision. That long string of 0s is used to specify 32 digits of floating-point precision.
Question b):
The computation above showed there are 3638334640025 digits if we express $3^{3^{3^3}}$ as a decimal integer. If we store 2 digits in one 8-bit byte, we need about 1.82 terabytes of storage.
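The digit count and the leading digits can also be checked in pure Python with the standard decimal module, never materialising the 3.6-trillion-digit integer itself:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50                 # plenty of guard digits

exponent = 3 ** 27                     # 3^(3^3) = 7625597484987, exact
t = Decimal(exponent) * Decimal(3).log10()
digits = int(t) + 1                    # decimal digits of 3^3^3^3: 3638334640025
lead = Decimal(10) ** (t - int(t))     # leading digits: 1.25801429...

# Same trick for 4^(4^(4^4)): the inner tower 4^(4^4) = 4^256 is an exact integer.
t2 = Decimal(4 ** 4 ** 4) * Decimal(4).log10()
```

Here `digits` and `lead` agree with the arbitrary-precision calculator's result above, and `t2` comes out to about $8.07\times10^{153}$, matching the count quoted below for $4^{4^{4^4}}$.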
However, if we ask that online calculator to compute $4^{4^{4^4}}$,
pow(4.0000000000000000000000000000000, pow(4, pow(4, 4)))
it says:
Complete loss of accurate digits
That is not a surprise, since the exponent $4^{4^4}$ $=$ $1340780792994259$$7099574024998205$$8461274793658205$$9239337772356144$$3721764030073546$$9768018742981669$$0342769003185818$$6486050853753882$$8119465699464336$$49006084096$ $\approx$ $1.34\times10^{154}$. We can work around this out-of-limit error by computing directly the exponent of 10 in $4^{4^{4^4}}$, using $\log$ with base 10.
log(4.0000000000000000000000000000000,10) * pow(4, pow(4, 4))
The answer is
8.0723047260282253793826303970853e153
That number of digits, about $8\times10^{153}$, is much more than a vigintillion in long scale. To store $4^{4^{4^4}}$ as a decimal integer, it would not be enough even if all atoms in the known universe were used, assuming each of them could store more digits than all available computer storage on earth.
However, not a lot of memory is needed for the calculations above to obtain these results. It is possible that a few megabytes, or even just several kilobytes, of extra memory could support the calculations above.
Let $A$ and $B$ be connected subspaces of a topological space $(X,\tau)$. If $A\cap B\neq\emptyset$, prove that the subspace $A\cup B$ is connected.
If the subspace $A\cup B$ is not connected, then there exist $\mathscr{U},\mathscr{V}\subset X$ such that $\mathscr{U}\cup\mathscr{V}=A\cup B$ and $\mathscr{U}\cap\mathscr{V}=\emptyset$. $\mathscr{U},\mathscr{V}$ must belong either to $A$ or $B$, like $\mathscr{U}\in A$, which contradicts the fact that $A$ and $B$ are connected. Therefore $A\cup B$ is connected.
Questions:
Is my proof right? If not. How should I prove the statement?
Thanks in advance! |
But you may raise your eyebrows with me 4+3+0+1 times
First talks have been given at ICHEP 2016 in Chicago, the main annual particle physics conference, and one of the experimental collaborations at the LHC is using this opportunity to impress lots of physicists with their new results.
Very quickly, the CMS Collaboration has released 22 new papers. (A message from the future: A few hours later, 16 were added; I analyzed those later and added some new discrepancies. The same applies to 15=14+1 additional papers a few more hours later.) I actually had to go to the 2nd and 3rd page (with 10 papers per page) – which don't display the date as clearly as the first page – to get all the new papers. That's unusual.
The third term, 1, in the number 22+16+1+15 in the title, is the new CMS diphoton paper killing the \(750\GeV\) cernette. Nothing is there at all. Totally inconclusive excesses emerged at \(620\GeV\), \(900\GeV\), and \(1300\GeV\), where a slight excess was seen by CMS in 2015, too, but nothing was there in ATLAS. (LOL, they later removed the new paper except for pages 1,2 but too bad, I already saw it, and you can see the key graph, too.)

I won't pretend that I have read every letter in these 22 papers. My belief is that the number of people in the world outside ATLAS and CMS who have actually read all these papers is smaller than 10, and even within the LHC collaborations, the numbers could be very low. (I really recommend sending your almost finished pheno papers on the \(750\GeV\) bump to vixra.org – a blue submit button is at the top – so that those amateur scientists have some competition and motivation.) But I can offer you the following minimum of analysis:

I see at least 80% of the area of every page of every new ATLAS/CMS paper for at least 0.2 seconds.
I spend at least 3 seconds looking at every Brazilian chart in every paper.
Every paper is searched for the words "excess", "deviation[s]", and "*agree*", and I am looking for anomalous sentences with these words (to increase our combined sensitivity, you shouldn't use exactly the same algorithm).
Reading whole paragraphs or pages is optional.
What have I seen? Well, the first striking feature of this new bunch of papers is that we may finally read some of the papers that are based on the 2016 \(\sqrt{s}=13\TeV\) data – which were taken at most two months ago or so. The energy is the same as in 2015 so they probably know how to analyze the data and write papers quickly.
The relevant portion of the data taken by CMS in 2016 usually amounts to 12.9 inverse femtobarns, all of it taken in the first half of 2016. They never combine the 2015 and 2016 data, although they could and thereby add the roughly 2.7/fb used in many CMS papers based on the 2015 data.
Similarly, ATLAS began to show its results based on 13.3/fb of the 2016 data and e.g. according to this talk, it has seen no gluinos and squarks up to \(1.9\TeV\) – these bold exclusions only hold under many assumptions, however. If you want similar slides from dozens of fresh Chicago talks, perhaps before the participants see it, open this Beyond the Standard Model page and in the cyan boxes, click at the "document" icon in the upper right corners to get the files. (A similar Higgs physics URL and a joint Higgs/BSM one if you need.)
If the 2015 and 2016 data were combined and all the 20/fb of 2016 data were taken, the CMS could possibly publish papers with something like 22/fb of 2015+2016 data as of now. By the end of the year, the total luminosity should almost certainly exceed 50/fb and the 2015 contribution should become negligible.
All the 22 new CMS papers verbally confirm that everything they see is compatible with the Standard Model. Many of the graphs they show are already very fine and you may see that the agreement is both strikingly good and nontrivial.
The opening ceremony of the 2016 Olympics in Rio is tomorrow. Lots of top athletes will be absent (not only due to the Zika-carrying mosquitoes from the logo), the games may turn out to be a 2nd class event.
This agreement between the theory and the experiment is really impressive and stunning – especially if you compare it with the opinion of a majority of the people in the world who believe that physics can't possibly understand how Nature works – but we are used to this agreement and bored by it (even by the rediscovery of the Higgs whose mass seems to be \(124.5\pm 0.5\GeV\) now). Some people even experience nightmares because of it – I don't.
You surely expect me to inform you about the results that have raised my eyebrows. Here are the coordinates:
1. Higgses \(H\to b\bar b \tau^+\tau^-\), using 12.9/fb: Page 12, Figure 3, some 2.5-sigma local excess for \(m_H\sim 700\GeV\). Some 2-sigma excess near \(m_H\sim 300\GeV\) is probably worth overlooking.
2. SUSY in same-sign dileptons, 12.9/fb: Page 14, Figure 6a, some 2.3-sigma local excess for \(H_T\sim 1100\GeV\).
3. Resonances in \(Z\gamma\to\ell^+\ell^-\gamma\), 12.9/fb: Page 6, Figure 3b, some 2-sigma local excess for \(M\sim 375\GeV\). This excess is minor but CMS still seems to confirm the excess at \(375\GeV\) from the \(2\times 2\) ATLAS and CMS papers I discussed in March. Note that this new graph is constructed purely from new, 2016 data – the 2015 data behind my March blog post aren't included now, so this may be said to be a truly independent confirmation of an older hypothesis (although the weakening of the signal is surprising given the increased luminosity).
4. Excited quarks \(q^*\to \gamma+{\rm jet}\), using 2.7/fb from 2015: Page 5, Figure 2, locally a 4-sigma narrow excess for \(M_{\gamma,\rm jet}=2\TeV\); see also Figure 5 for how it translates to a Brazilian band.
5. From the later 16: a dijet resonance, 12.9/fb: Page 7, Figure 4, 2.6 sigma for \(M\sim 850\GeV\).
6. From the later 16: sbottom search in Higgs-to-diphotons using a razor, 2.3+12.9=15.2/fb (from the year 2015+2016=4031, testing whether you're asleep): Page 17, Figure 15, sbottom limits some 2 sigma or \(100\GeV\) lower than expected; see Smaria's tweet.
7. From the later 16: multilepton EWK SUSY, 12.9/fb: Page 12, Results, local excesses of 1.7 and 2.5 sigma.
8. From the package of 15: sbottom search in jets-MET, 12.9/fb: Page 22, Figure 9, locally a 3.1-sigma excess, bounds on the sbottom mass by \(150\GeV\) lower than expected; I discuss it below.
The by far largest excess among these anomalies is the fourth one, the \(2\TeV\) excited quarks from the 2015 data. On Page 7, we're told that for the best choice of the parameter \(f=0.1\), the significance of the bump is 3.7 sigma locally and 2.84 sigma globally. We should look for ATLAS' paper on that (is it already out? Update: ATLAS saw nothing near \(2.0\TeV\) in the photon-jet channel in 2015) and for the CMS, ATLAS papers using the 2016 data to see whether it may be confirmed.
Excesses close to 4 sigma haven't been common but we've encountered several of these. We're still waiting for solid 5-sigma or stronger discoveries of new physics.
I am perhaps even more intrigued by the apparent (#3) resonance in \(M_{Z\gamma\to \mu^+\mu^-\gamma}\sim 375\GeV\) (yes, it seems stronger in the muon channel now) that has basically appeared in the fifth paper. Will ATLAS see an excess over there as well, perhaps a greater one? Update, Saturday morning: Nothing, the ATLAS excess over there has weakened into 1 of 10 very small upticks, so the \(375\GeV\) story is probably dead at this point. ATLAS' new \(Z\gamma\) paper.
Bonus: slides
You may see a CMS 2016 3.1-sigma local excess in an off-Z, sbottom search – look at pages 7, 8 in this presentation or this lady's tweet. The sbottom limits are correspondingly by \(150\GeV\) lower than expected. It's independent of the (sbottom) anomaly #6 in the list above and I later added it as the entry (paper) #8 in my list.
Slight sbottom excesses are also seen in jets+MET but too small to make it to my list above. See also MT2, Page 11, Figure 7c for a slight excess in "one light squark" and three stop 1.9-sigma fully hadronic excesses.
More interestingly, ATLAS later revealed a 3.3-sigma excess in stops (page 9/36). See also the paper released on August 8th, on Monday.
Matt Buckley believes that some SUSY-like excess is seen in multijets. It must be a talk I haven't seen (?). Or maybe he's intrigued by Page 19/21 of this talk where the gluino limits are some \(100\GeV\) below the expected ones? But it's probably not too many sigmas, is it? The paper is this one, Figures 9,10, just 1 sigma over there.
Dear TRF readers may also ask questions to 3 ICHEP participants at Reddit.
I want to minimize some multivariable function $\Delta(\alpha, \beta)$. I
know that this function has a zero point, $\Delta(5, 5) = 0$.
Starting from some $(\alpha, \beta)$ close to $(5,5)$ (e.g. (4.8, 5.2)), I want to use a gradient descent method to recover the 'correct' values $(5,5)$.
I can plot the surface $\alpha, \beta, \Delta$ and it clearly has a minimum point, but my algorithm fails to converge:

1. Start with some initial $\alpha, \beta$. Compute $\Delta$.
2. Calculate $\frac{\partial \Delta}{\partial \alpha}$ and $\frac{\partial \Delta}{\partial \beta}$ by evaluating e.g. $$ \frac{\partial \Delta}{\partial \alpha} = \frac{\Delta(\alpha+\epsilon, \beta)-\Delta(\alpha-\epsilon, \beta)}{2 \epsilon} $$
3. Update $\alpha, \beta$ as $$\alpha = \alpha - \gamma \frac{\partial \Delta}{\partial \alpha} $$ and the equivalent for $\beta$.
4. Iterate.
Is there any reason why this general approach should not work?
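For what it's worth, a minimal sketch of exactly this procedure (with a hypothetical quadratic $\Delta$ whose zero sits at $(5,5)$ standing in for the real objective) does converge, which suggests the failure lies in the choice of $\gamma$, of $\epsilon$, or in the shape of the actual $\Delta$:

```python
def delta(alpha, beta):
    # hypothetical stand-in objective with its zero at (5, 5)
    return (alpha - 5.0) ** 2 + (beta - 5.0) ** 2

def gradient_descent(f, alpha, beta, gamma=0.1, eps=1e-6, iters=1000):
    for _ in range(iters):
        # central finite differences, as in step 2 above
        d_alpha = (f(alpha + eps, beta) - f(alpha - eps, beta)) / (2 * eps)
        d_beta = (f(alpha, beta + eps) - f(alpha, beta - eps)) / (2 * eps)
        # update rule from step 3
        alpha -= gamma * d_alpha
        beta -= gamma * d_beta
    return alpha, beta

a, b = gradient_descent(delta, 4.8, 5.2)
```

If the same loop diverges on the real $\Delta$, the usual culprit is a step size $\gamma$ that is too large relative to the local curvature; shrinking $\gamma$ (or normalizing the gradient) is the first thing to try.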
Problem 58: Signatures for Set Equality
Suggested by Rasmus Pagh Source Dortmund 2012 Short link https://sublinear.info/58
Given $S\subseteq\{1,..n\}$, we would like to construct a fingerprint so that later, given fingerprints of two sets, we can check the equality of the two sets. There are (at least) two possible solutions to the problem:
1. $h(S)=\left(\sum_{i\in S} x^i\right)\bmod p$ for random $x\in \mathbb{Z}_p$. Update time would be roughly $\log p=\Omega(\log n)$ (a modular exponentiation per update). One would like to obtain a better update time.
2. $h(S)=\left(\prod_{i\in S} (x-i)\right) \bmod p$ for random $x$. Insertion can be done in constant time, but the fingerprint is not linear.

Question: Can we construct a fingerprint that achieves constant update time and is linear, while using $O(\log n)$ random bits? Ideally updates would include both insertions and deletions. Linearity would imply, for example, that if $S_1 \subseteq S_2$ we can compute $h(S_2\backslash S_1)$ in constant time, as the difference of $h(S_2)$ and $h(S_1)$.
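The two candidate fingerprints can be sketched directly. This is only an illustration: the Mersenne-prime modulus below is a hypothetical choice, and in the actual scheme $p$ and $x$ are fixed by the analysis.

```python
import random

p = (1 << 61) - 1            # a Mersenne prime modulus (illustrative choice)
x = random.randrange(1, p)   # the shared random evaluation point

def h_sum(S):
    # fingerprint 1: linear in the set, but each update costs a
    # modular exponentiation, i.e. O(log n) time
    return sum(pow(x, i, p) for i in S) % p

def h_prod(S):
    # fingerprint 2: constant-time insert (one multiplication per element),
    # but the fingerprint is not linear
    out = 1
    for i in S:
        out = out * (x - i) % p
    return out
```

Linearity of the first scheme is what makes set differences cheap: for $S_1 \subseteq S_2$, `(h_sum(S2) - h_sum(S1)) % p` equals `h_sum(S2 - S1)`.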
On the $ L^p $ regularity of solutions to the generalized Hunter-Saxton system
1. Department of Mathematics, University of Maine, Orono, ME 04469, USA
2. Department of Mathematics, University of Chicago, Chicago, IL 60637, USA
3. Department of Mathematics, Cornell University, Ithaca, NY 14850, USA
4. Department of Mathematics, University of North Georgia, Dahlonega, GA 30533, USA
The generalized Hunter-Saxton system comprises several well-known models from fluid dynamics and serves as a tool for the study of fluid convection and stretching in one-dimensional evolution equations. In this work, we examine the global regularity of periodic smooth solutions of this system in $ L^p $, $ p \in [1,\infty) $, spaces for nonzero real parameters $ (\lambda,\kappa) $. Our results significantly improve and extend those by Wunsch et al.
Mathematics Subject Classification: 35B44, 35B10, 35B65, 35Q35, 35B40.

Citation: Jaeho Choi, Nitin Krishna, Nicole Magill, Alejandro Sarria. On the $ L^p $ regularity of solutions to the generalized Hunter-Saxton system. Discrete & Continuous Dynamical Systems - B, 2019, 24 (12) : 6349-6365. doi: 10.3934/dcdsb.2019142
References:

- S. Childress, G. R. Ierley, E. A. Spiegel and W. R. Young, Blow-up of unsteady two-dimensional Euler and Navier-Stokes solutions having stagnation-point form.
- A. Constantin and D. Lannes, The hydrodynamical relevance of the Camassa-Holm and Degasperis-Procesi equations.
- H. R. Dullin, G. A. Gottwald and D. D. Holm, Camassa-Holm, Korteweg-de Vries-5 and other asymptotically equivalent equations for shallow water waves.
- J. Escher, O. Lechtenfeld and Z. Yin, Well-posedness and blow-up phenomena for the 2-component Camassa-Holm equation.
- G. Gasper and M. Rahman.
- B. Moon and Y. Liu, Wave breaking and global existence for the generalized periodic two-component Hunter-Saxton system.
- A. Sarria and R. Saxton, The role of initial curvature in solutions to the generalized inviscid Proudman-Johnson equation.
I happen to have revised our calculus syllabus for first year biology majors about one year ago (in a French university, for that matter). I benefited a lot from my wife's experience as a math-friendly biologist.
The main point of the course is to get students able to deal with
quantitative models. For example, my wife studied the movement of cells under various circumstances.
A common model postulates that the average distance $d$ between two
positions of a cell at times $t_0$ and $t_0+T$ is given by
$$d = \alpha T^\beta$$ where $\alpha>0$ is a speed parameter and
$\beta\in[\frac12,1]$ is a parameter that measures how the movement
fits between a Brownian motion ($\beta=\frac12$)
and a purely ballistic motion ($\beta=1$).
This simple model is a great example to show how calculus can be relevant to biology.
My first point might be specific to recent French students: first-year students are often not even proficient enough with basic algebraic manipulations to do anything relevant with such a model. For example, even asking how $d$ changes when $T$ is multiplied by a constant requires knowing how to deal with exponents. In fact, we even had serious issues with the mere use of percentages.
One of the main points of our new calculus course is to be able to estimate uncertainties: in particular, given that $T=T_0\pm \delta T$, $\alpha=\alpha_0\pm\delta\alpha$ and $\beta=\beta_0\pm\delta\beta$, we ask them to estimate $d$ up to order one (i.e. using first-order Taylor series). This already involves derivatives of multivariable functions, and is an important computation when you want to draw conclusions from experiments.
Another important point of the course is the
use of logarithms and exponentials, in particular to interpret log or log-log graphs. For example, in the above model, it takes a (very) little habit to see that taking logs is a good thing to do: $\log d = \beta\log T+\log \alpha$ so that plotting your data in log-log chart should give you a line (if the models accurately represent your experiments).
This then interacts with
statistics: one can find the linear regression in log-log charts to find estimates for $\alpha$ and $\beta$. But then one really gets an estimate of $\beta$ and... $\log\alpha$, so one should have a sense of how badly this uncertainty propagates to $\alpha$ (one-variable first-order Taylor series: easy peasy).
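As an illustration of this recipe (with made-up ground-truth values of $\alpha$ and $\beta$), a linear fit in log-log coordinates recovers both parameters from noisy data:

```python
import numpy as np

# Hypothetical ground truth, used only to generate synthetic data
rng = np.random.default_rng(0)
alpha_true, beta_true = 2.0, 0.75
T = np.linspace(1.0, 100.0, 200)
d = alpha_true * T**beta_true * np.exp(rng.normal(0.0, 0.05, T.size))

# log d = beta * log T + log alpha: a straight line in log-log coordinates
beta_hat, log_alpha_hat = np.polyfit(np.log(T), np.log(d), 1)
alpha_hat = np.exp(log_alpha_hat)
```

The fit directly estimates $\log\alpha$, so the uncertainty on $\alpha$ is multiplicative: an error of $\pm\delta$ on $\log\alpha$ becomes a factor of $e^{\pm\delta}$ on $\alpha$, which is the propagation exercise mentioned above.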
The other main goal of the course is to get them able to deal with some (ordinary) differential equations. The motivating example I chose was offered to me by the chemist of our syllabus meeting.
A common model for the kinetics of a chemical reaction $$A + B \to C$$ is the second-order model: one assumes that the speed of the reaction is proportional to the product of the concentrations of the species A and B. This leads to a not-so-easy differential equation of the form $$ y'(t) = (a-y(t))(b-y(t)).$$ This is a
first-order ODE with separable variables. One can solve it explicitly (a luxury!) by dividing by the right-hand side, integrating in $t$, doing a change of variable $u=y(t)$ in the left-hand side, resolving the resulting rational fraction into partial fractions, and remembering that log is an antiderivative of the inverse function (while adjusting for the various constants that appeared in the process). Then one needs some algebraic manipulations to transform the resulting equation into the form $y(t) = \dots$. Unfortunately and of course, we are far from being able to properly cover all this material, but we try to get the students able to follow this road later on, with their chemistry teachers.
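For instance (a sketch with illustrative values of $a$ and $b$), one can check the closed-form solution obtained along that road against a crude Euler integration. With $y(0)=0$ and $a\neq b$, the partial-fraction computation gives $y(t) = ab\,(e^{(b-a)t}-1)/(b\,e^{(b-a)t}-a)$:

```python
import math

a, b = 1.0, 2.0          # illustrative rate constants, a != b
dt, t_end = 1e-4, 5.0

# forward Euler integration of y' = (a - y)(b - y), y(0) = 0
y = 0.0
for _ in range(int(round(t_end / dt))):
    y += dt * (a - y) * (b - y)

# closed-form solution from the separable-variables computation
k = math.exp((b - a) * t_end)
y_exact = a * b * (k - 1.0) / (b * k - a)
```

Note that $y(t)\to\min(a,b)$ as $t\to\infty$: the reaction stops when the limiting species is exhausted, a qualitative fact students can read off without solving anything.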
In fact, I would love to be able to do more quantitative analysis of differential equations, but it is difficult to teach since it quickly goes beyond a few recipes. For example, I would like them to become able to tell in a glimpse the
variations of solutions to $$y'(t)=a\cdot y(t)-b \sqrt{y(t)}$$ (a model of population growth for colonies of small living entities organized in circles, where deaths occur mostly on the edge – note how basic geometry makes an appearance here to explain the model) in terms of the initial value. Or to be able to realize that solutions to $$y'(t)=\sqrt{y(t)}$$ must be sub-exponential (and what that even means...). For this kind of goal, one must first aim at basic proficiency in calculus.
To sum up,
dealing with any quantitative model needs a fair bit of calculus, in order to have a sense of what the model says, to use it with actual data, to analyze experimental data, to interpret it, etc.
To finish with a controversial point, it seems to me that, at least in my environment, biologists tend to underestimate the usefulness of calculus (and statistics, and more generally mathematics) and that improving the basic understanding of mathematics among biologists-to-be can only be beneficial.
Let $C(x_1,\ldots,x_n)$ be a nonsingular cubic form with integral coefficients. In his proof that $C$ fulfills the Hasse principle if $n\geq 9$, Hooley used the following estimate, provided by Katz in
Perversity and Exponential Sums:
There exists a subset $\mathcal{P}$ of primes, having positive Dirichlet density and $A<1$ such that for all $w\in \mathcal{P}$ we have $$ \sum_{\mathbf{b}(\text{mod } w)} |\sum_{\substack{t (\text{mod } w)\\ (t,w)=1}} \sum_{\mathbf{x} (\text{mod } w)}e_w(tC(\mathbf{x})+\mathbf{b}.\mathbf{x})|<Aw^{(3n+1)/2} $$ where $\mathbf{b}$ is the vector $(b_1,\ldots b_n)$, $e_q(z)=e^{2 \pi \text{i} z/q}$ and $\mathbf{a}.\mathbf{b}=\sum_{i=1}^n a_i b_i$.
Question: Can we do the same when we twist the innermost sum by $\chi(t)$ for a multiplicative character $\chi$? (I need it only in the case where $\chi$ is the legendre symbol.)
Context: I try to prove the Hasse-Principle for $C(\mathbf{x})=y^2$ and $n=6$. Everything goes through nicely up to the aforementioned problem.
Looking at prime moduli and not averaging over $\mathbf{b}$, there were sufficient results (provided by Katz) showing that the additional character does not matter, but I was not able to find this in the averaged case. My own algebraic-geometry knowledge is insufficient to adapt the proof of Perversity and Exponential Sums.
0) Let us for simplicity assume that the Legendre transformation from Lagrangian to Hamiltonian formulation is regular.
1) The Lagrangian action $S_L[q]:=\int dt~L$ is invariant under the infinite-dimensional group of diffeomorphisms of the $n$-dimensional (generalized) position space $M$.
2) The Hamiltonian action $S_H[q,p]:=\int dt(p_i \dot{q}^i -H)$ is invariant (up to boundary terms) under the infinite-dimensional group of symplectomorphisms of the $2n$-dimensional phase space $T^*M$.
3) The group of diffeomorphisms of position space can be prolonged onto a subgroup inside the group of symplectomorphisms. (But the group of symplectomorphisms is much bigger.)

The above is phrased in the active picture. We can also rephrase it in the passive picture of coordinate transformations. Then we can prolong a coordinate transformation
$$q^i ~\longrightarrow~ q^{\prime j}~=~q^{\prime j}(q)$$
into the cotangent bundle $T^*M$ in the standard fashion
$$ p_i ~=~ p^{\prime}_j \frac{\partial q^{\prime j} }{\partial q^i} ~.$$
It is not hard to check that the symplectic two-form becomes invariant
$$dp^{\prime}_j \wedge dq^{\prime j}~=~ dp_i \wedge dq^i $$
(which corresponds to a symplectomorphism in the active picture).This post has been migrated from (A51.SE) |
$$ \newcommand{\bsth}{{\boldsymbol\theta}} \newcommand{\va}{\textbf{a}} \newcommand{\vb}{\textbf{b}} \newcommand{\vc}{\textbf{c}} \newcommand{\vd}{\textbf{d}} \newcommand{\ve}{\textbf{e}} \newcommand{\vf}{\textbf{f}} \newcommand{\vg}{\textbf{g}} \newcommand{\vh}{\textbf{h}} \newcommand{\vi}{\textbf{i}} \newcommand{\vj}{\textbf{j}} \newcommand{\vk}{\textbf{k}} \newcommand{\vl}{\textbf{l}} \newcommand{\vm}{\textbf{m}} \newcommand{\vn}{\textbf{n}} \newcommand{\vo}{\textbf{o}} \newcommand{\vp}{\textbf{p}} \newcommand{\vq}{\textbf{q}} \newcommand{\vr}{\textbf{r}} \newcommand{\vs}{\textbf{s}} \newcommand{\vt}{\textbf{t}} \newcommand{\vu}{\textbf{u}} \newcommand{\vv}{\textbf{v}} \newcommand{\vw}{\textbf{w}} \newcommand{\vx}{\textbf{x}} \newcommand{\vy}{\textbf{y}} \newcommand{\vz}{\textbf{z}} \DeclareMathOperator*{\argmin}{argmin} \DeclareMathOperator\mathProb{\mathbb{P}} \renewcommand{\P}{\mathProb} % need to overwrite stupid paragraph symbol \DeclareMathOperator\mathExp{\mathbb{E}} \newcommand{\E}{\mathExp} \DeclareMathOperator\Uniform{Uniform} \DeclareMathOperator\poly{poly} \DeclareMathOperator\diag{diag} \newcommand{\pa}[1]{ \left({#1}\right) } \newcommand{\ha}[1]{ \left[{#1}\right] } \newcommand{\ca}[1]{ \left\{{#1}\right\} } \newcommand{\norm}[1]{\left\| #1 \right\|} \newcommand{\nptime}{\textsf{NP}} \newcommand{\ptime}{\textsf{P}} \newcommand{\R}{\mathbb{R}} \newcommand{\card}[1]{\left\lvert{#1}\right\rvert} \newcommand{\abs}[1]{\card{#1}} \newcommand{\sg}{\mathop{\mathrm{SG}}} \newcommand{\se}{\mathop{\mathrm{SE}}} \newcommand{\mat}[1]{\begin{pmatrix} #1 \end{pmatrix}} \DeclareMathOperator{\var}{var} \DeclareMathOperator{\cov}{cov} \newcommand\independent{\perp\kern-5pt\perp} \newcommand{\CE}[2]{ \mathExp\left[ #1 \,\middle|\, #2 \right] } \newcommand{\disteq}{\overset{d}{=}} $$
Subgaussian Concentration
This is a quick write-up of a brief conversation I had with Nilesh Tripuraneni and Aditya Guntuboyina a while ago that I thought others might find interesting.
This post focuses on the interplay between two types of concentration inequalities. Concentration inequalities usually describe some random quantity \(X\) as being frequently near a constant \(c\) (henceforth, \(c\) will be our stand-in for some constant which possibly changes equation-to-equation). Basically, we can quantify how infrequent a divergence \(t\) of \(X\) from \(c\) is with some rate \(r(t)\) which vanishes as \(t\rightarrow\infty\).
\[ \P\pa{\abs{X-c}>t}\le r(t)\,. \]
In fact, going forward, if \(r(t)=c’\exp(-c’’ O(g(t)))\), we’ll say \(X\)
concentrates about \(c\) in rate \(g(t)\).
Subgaussian (sg) random variables (rvs) with parameter \(\sigma^2\) exhibit a strong form of this. They have zero mean and concentrate in rate \(t^2/\sigma^2\). Equivalently, we may write \(X\in\sg(\sigma^2)\). Subgaussian rvs decay quickly because of a property of their moment generating function. In particular, \(X\) is subgaussian if for all \(\lambda\), the following holds: \[ \E\exp\pa{\lambda X}\le \exp\pa{\frac{1}{2}\lambda^2\sigma^2}\,. \]
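As a concrete instance of this moment condition: a Rademacher variable (\(\pm 1\) with equal probability) has \(\E\exp(\lambda X)=\cosh(\lambda)\le\exp(\lambda^2/2)\), so it is \(\sg(1)\). A quick numeric spot-check:

```python
import math

# For Rademacher X (+-1 each with probability 1/2): E exp(lam*X) = cosh(lam).
# The inequality cosh(lam) <= exp(lam^2/2) holds term-by-term in the Taylor
# series (since (2n)! >= 2^n n!), making X subgaussian with sigma^2 = 1.
for lam in [0.01, 0.1, 0.5, 1.0, 2.0, 5.0]:
    mgf = 0.5 * (math.exp(lam) + math.exp(-lam))  # E exp(lam * X)
    assert mgf <= math.exp(lam**2 / 2)
```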
On the other hand, suppose we have \(n\) independent (indep) bounded (bdd) rvs \(X=\ca{X_i}_{i=1}^n\) and a function \(f\) that’s convex (cvx) in each one. Note being cvx in each variable isn’t so bad, for instance the low-rank matrix completion loss \(\norm{A-UV^\top}^2\) does this in \(U, V\). Then by BLM Thm. 6.10 (p. 180), \(f(X)\) concentrates about its mean quadratically.
This is pretty damn spiffy. You get a function that's nothing but a little monotonic in averages, and depends on a bunch of different knobs. Said knobs spin independently, and somehow this function behaves as basically constant. This one isn't a deep property of some distribution, like sg rvs, but rather a deep property of smooth functions on product measures.

A Little Motivation
Concentration lies at the heart of machine learning. For instance, take the well-known probably approximately correct (PAC) learning framework–it’s old, yes, and has been superseded by more generic techniques, but it still applies to simple classifiers we know and love. At its core, it seems to be making something analogous to a counting argument:
1. The set of all possible classifiers is small by assumption.
2. Since there aren't many classifiers overall, there can't be many crappy classifiers.
3. Crappy classifiers have a tendency of fucking up on random samples of data (like our training set).
4. Therefore any solution we find that nails our training set is likely not crap (i.e., probably approximately correct).
However, this argument can be viewed from a different lens, one which exposes machinery that underlies much more expressive theories about learning like M-estimation or empirical process analysis.
1. The generalization error of our well-trained classifier is no more than twice the worst generalization gap (difference between training and test errors) in our hypothesis class (symmetrization).
2. For large sample sizes, this gap vanishes because training errors concentrate around the test errors (concentration).
For this reason, being able to identify when a random variable (such as a classifier’s generalization gap, before we see its training dataset) concentrates is useful.
OK, Get to the Point
Now that we’ve established why concentration is interesting, I’d like to present the conversation points. Namely, we have a general phenomenon, the concentration of measure.
Recall the concentration of measure result from above: a convex, Lipschitz function \(f\) of bounded independent variables is basically constant. However, these are onerous conditions.
To some degree, these conditions can be weakened. For starters, convexity need only be quasi-convexity. The Wikipedia article is a bit nebulous, but the previously linked Talagrand's Inequality can be used to weaken this requirement (BLM Thm. 7.12, p. 230).
Still:
- One can imagine that for a function that's not necessarily globally Lipschitz, but instead just coordinate-wise Lipschitz, we can still give some guarantees.
- Why do we need bounded random variables? Perhaps variables that are effectively bounded most of the time are good enough.
Our goal here will be to see if there are smooth ways of relaxing the conditions above and framing the concentration rates \(r(t)\) in terms of these relaxations.
Coordinate Sensitivity and Bounded Differences
The concentration of measure bounds above rely on a global Lipschitz property: no matter which way you go, the function \(f\) must lie in a slope-bounded double cone, which can be centered at any of its points; this can be summarized by the property that our \(f:\R^n\rightarrow\R\) satisfies \(\abs{f(\vx)-f(\vy)}\le L\norm{\vx-\vy}\) for all \(\vx,\vy\).
Moreover, why does it matter that the preimage metric space of our \(f\) need to, effectively, be bounded? All that really matters is how the function \(f\) responds to changes in inputs, right?
Here’s where McDiarmid’s Inequality comes in, which says that so long as we satisfy the bounded difference property, where \[ \sup_{\vx, \vx^{(i)}}\abs{f(\vx)-f(\vx^{(i)})}\le c_i\,, \] holding wherever \(\vx, \vx^{(i)}\) only differ in position \(i\), then we concentrate with rate \(t^2/\sum_ic_i^2\). The proof basically works by computing the distance of \(f(X)\), our random observation, from \(\E f(X)\), the mean, through a series of successive approximations done by changing each coordinate, one at a time. Adding up these approximations happens to give us a martingale, and it turns out these bounded differences have a concentration (Hoeffding’s) of their own.
Notice how the rate worsens individually according to the constants \(c_i\) in each dimension.
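As a sanity check of the bounded-difference rate (a sketch with made-up parameters, not from any of the cited sources), take \(f\) to be the empirical mean of \(n\) coordinates in \([0,1]\), so each \(c_i = 1/n\) and \(\sum_i c_i^2 = 1/n\):

```python
import numpy as np

# Empirical check of McDiarmid's inequality for f(x) = mean of n coordinates
# in [0, 1]: changing one coordinate moves f by at most c_i = 1/n, so
# P(|f - E f| > t) <= 2 exp(-2 t^2 / sum c_i^2) = 2 exp(-2 t^2 n).
rng = np.random.default_rng(1)
n, trials, t = 100, 20000, 0.1

X = rng.random((trials, n))
f = X.mean(axis=1)                         # E f = 1/2
empirical = np.mean(np.abs(f - 0.5) > t)   # observed deviation frequency
bound = 2.0 * np.exp(-2.0 * t**2 * n)      # McDiarmid's upper bound
```

The empirical frequency comes out far below the bound, as expected: McDiarmid's inequality is worst-case over all bounded-difference functions, and the mean is about as tame as they come.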
What’s in the Middle?
We’ve seen how we can achieve concentration (that’s coordinate-wise sensitive in its bounds) by restricting ourselves to:
1. Well-behaved functions and bounded random inputs (Talagrand's).
2. Functions with bounded responses to coordinate change (McDiarmid's).
Can we get rid of boundedness altogether now, relaxing it to the probabilistic “boundedness” that is subgaussian concentration? Well, yes and no.
How’s this possible?
Kontorovich 2014 claims concentration for generic Lipschitz functions of subgaussian inputs. At first, this may sound too good to be true. Indeed, a famous counterexample (BLM Problem 6.4, p. 211, which itself refers to LT p. 25) finds a particular \(f\) where the following holds for sufficiently large \(n\). \[ \P\ca{f(X)> \E f(X)+cn^{1/4}}\ge 1/4\,. \] Technically, the result is shown for the median, not the mean value of \(f\), but by integrating the median concentration inequality for Lipschitz functions of subgaussian variables (LT p. 21), we can state the above, since the mean and median are within a constant of each other (bdd rvs with zero mean are sg). From the proof (LT, p. 25), \(f(X)\) has rate no better than \(t^2n^{-1/2}\).
Therein lies the resolution of the apparent contradiction: we're pathologically dependent on the dimension factor. On the other hand, the bound proven in the aforementioned Kontorovich 2014 paper is that for sg \(X\), we can achieve a concentration rate \(t^2/\sum_i\Delta_{\text{SG}, i}^2\), where \(\Delta_{\text{SG}, i}\) is a subgaussian diameter, which for our purposes is just a constant times \(\sigma_i^2\), the subgaussian parameter for the \(i\)-th position in the \(n\)-dimensional vector \(X\). For \(\sigma^2=\max_i\sigma_i^2\), note that the hidden dimensionality emerges, since the Kontorovich rate is then \(t^2/(n\sigma^2)\).
The Kontorovich paper is a nice generalization of McDiarmid’s inequality which replaces the boundedness condition with a subgaussian one. We still incur the dimensionality penalty, but we don’t care about this if we’re making a one-dimensional or fixed-\(n\) statement. In fact, the rest of the Kontorovich paper investigates scenarios where this dimensionality term is cancelled out by a shrinking \(\sigma^2\sim n^{-1}\) (in the paper, this is observed for some stable learning algorithms).
In fact, there’s even quite a bit of room between the Kontorovich bound \(t^2/n\) (fixing the sg diameter now) and the counterexample lower bound \(t^2/\sqrt{n}\). This next statement might be made out of my own ignorance, but it seems like there’s still a lot of open space to map out in terms of what rates are possible to achieve in the non-convex case, if we care about the dimension \(n\) (which we do).
References

- BLM: Boucheron, Lugosi, Massart (2013), Concentration Inequalities
- LT: Ledoux and Talagrand (1991), Probability in Banach Spaces