03-14-2015, 09:44 AM
03-14-2015, 09:48 AM
Thanks for the reminder, forgot it (being a DMY-er). Happy pi-day!
03-14-2015, 09:57 AM
(03-14-2015 09:48 AM)Thomas Radtke Wrote: [ -> ]Thanks for the reminder, forgot it (being a DMY-er). Happy pi-day!
As it's 20150314 I have a long time to wait.
03-14-2015, 10:16 AM
What are you talking about? The real thing is Tau, I don't go for half a job.
Talk to you on June 28.
03-14-2015, 10:30 AM
(03-14-2015 10:16 AM)Tugdual Wrote: [ -> ]What are you talking about? The real thing is Tau, I don't go for half a job. Talk to you on June 28.
What are you talking about? I only know of τ Ceti.
:D
03-14-2015, 03:17 PM
In the US we get to celebrate twice since we tell time predominantly with a 12 hour a.m. / p.m. clock. ;-)
Regards, John
03-14-2015, 03:33 PM
Happy Pi day, you MDY-ers!
Kate Bush has been my favourite female singer for a long time.
I'm glad that she is remembered on this very special day.
Happy Pi day to you all!
Live Long and Prosper!
03-14-2015, 04:30 PM
FWIW,
\[\ln \left [ \frac{16\ln 878}{\ln \left ( 16\ln 878 \right )} \right ]=3.141 592 653 77\]
\[1000\left ( \sqrt{\frac{e}{\sqrt{5}}}-1 \right )-4\pi = 89.999 994 77\]
\[\frac{\sqrt{2}\left ( \pi ^{17}-80e^{\pi } \right )}{4}= 99 999 999.949\]
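These near-identities are easy to check numerically; a quick sketch in base R (double precision, so only the leading digits are meaningful):
log(16 * log(878) / log(16 * log(878)))       # approximately 3.14159265...
1000 * (sqrt(exp(1) / sqrt(5)) - 1) - 4 * pi  # approximately 90
sqrt(2) * (pi^17 - 80 * exp(pi)) / 4          # approximately 1e8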
03-14-2015, 04:45 PM
(03-14-2015 03:33 PM)Jlouis Wrote: [ -> ] Kate Bush is my favourite female singer since long time. I'm glad that she is remembered this very especial day.
Same here - and Aerial is bound to be her most subtle and intimate album ever.
03-14-2015, 06:10 PM
This is the chap who invented pi:
http://www.theguardian.com/profile/gareth-ffowc-roberts
03-14-2015, 07:55 PM
In the US the Pi day is 2nd of March.
http://en.wikipedia.org/wiki/Indiana_Pi_Bill
03-14-2015, 11:41 PM
Someone in the YouTube comments on this Kate Bush song wrote that she made many mistakes singing the digits of Pi.
I tried to follow her singing but I can't understand many of the numbers she sings.
Could a native English speaker check this out? Just out of curiosity?
I know she is an extreme perfectionist (some albums took 2 or 3 years to complete), so I doubt she would have made a mistake like that.
TIA
Edit: was the 35s available to Kate in 2005? Just kidding...
03-15-2015, 12:01 AM
(03-14-2015 11:41 PM)Jlouis Wrote: [ -> ]Someone at YouTube coments of this Kate Bush song wrote that she made many mistakes singing the Pi number. I tried to follow her singing but I can't understand many numbers she sings. Could someone native english could check this out? Just for curiosity? I know she is extremily perfectionist, some albuns took 2 or 3 years to be completed, then I doubt she could have made a mistake like that. TIA Edit: was the 35s available to Kate in 2005? Just kidding...
You can check the lyrics (and the digits) here.
Gerson.
03-15-2015, 12:20 AM
(03-15-2015 12:01 AM)Gerson W. Barbosa Wrote: [ -> ](03-14-2015 11:41 PM)Jlouis Wrote: [ -> ]Someone at YouTube coments of this Kate Bush song wrote that she made many mistakes singing the Pi number. I tried to follow her singing but I can't understand many numbers she sings. Could someone native english could check this out? Just for curiosity? I know she is extremily perfectionist, some albuns took 2 or 3 years to be completed, then I doubt she could have made a mistake like that. TIA Edit: was the 35s available to Kate in 2005? Just kidding... You can check the lyrics (and the digits) here. Gerson.
Thanks Gerson. I'm gonna check this out.
Abraços
03-15-2015, 12:29 AM
There is also Liczba Pi (Number PI), a poem by Wislawa Szymborska (1996 Literature Nobel Prize Winner).
http://www.edulandia.pl/matura/1,117853,...skiej.html
I learned from my mother (a Polish Brazilian) how to count in Polish from 1 to 5 only, so I was able to check only the very first digits :-)
trzy koma jeden cztery jeden pięć
03-15-2015, 12:38 AM
(03-15-2015 12:20 AM)Jlouis Wrote: [ -> ](03-15-2015 12:01 AM)Gerson W. Barbosa Wrote: [ -> ]You can check the lyrics (and the digits) here. Gerson. Thanks Gerson. I'm gonna check this out. Abraços
Yes, there are many mistakes.
Now I am certain that she used a 35s.
Kate Bush does not make mistakes.
03-15-2015, 12:44 AM
(03-15-2015 12:38 AM)Jlouis Wrote: [ -> ](03-15-2015 12:20 AM)Jlouis Wrote: [ -> ]Thanks Gerson. I'm gonna check this out. Abraços Yes, there are many mistakes. Now I am certain that she used a 35s. Kate Bush does not make mistakes.
Neither does Katie. I haven't tried it yet, but I am quite sure Katie Wasserman's HP-32SII program gives only correct digits, even when run on the HP-35S :-)
http://www.hpmuseum.org/cgi-sys/cgiwrap/...i?read=899
[]s,
Gerson.
P.S.: I like Wuthering Heights better. I had just learned English then (or at least I thought I had :-) and I had also read the book (1977 I think).
03-15-2015, 12:49 AM
(03-15-2015 12:29 AM)Gerson W. Barbosa Wrote: [ -> ]There is also Liczba Pi (Number PI), a poem by Wislawa Szymborska (1996 Literature Nobel Prize Winner). http://www.edulandia.pl/matura/1,117853,...skiej.html I learned from my mother (a Polish Brazilian) how to count in Polish from 1 to 5 only, so I was able to check only the very first digits :-) trzy koma jeden cztery jeden pięć
Now we understand where your genius comes from:
You have Jan Lukasiewicz's DNA.
03-15-2015, 07:29 AM
Hi all,
Being a DMY-er I forgot this day, but you reminded me, thanks.
Did you notice that for MDY-ers, yesterday was 4 decimals? 3.1415
|
Automorphisms of the Upper Half Plane
Welcome back to our little series on automorphisms of four (though, for all practical purposes, it's really
three) different Riemann surfaces: the unit disc, the upper half plane, the complex plane, and the Riemann sphere. Last time, we proved that the automorphisms of the unit disc take on a certain form. Today, our goal is to prove a similar result about automorphisms of the upper half plane.
If you missed the introductory/motivational post for this series, be sure to check it out here!
Also in this series:
Automorphisms of the Upper Half Plane
Theorem: Every automorphism $f$ of the upper half plane $\mathcal{U}$ is of the form $f(z)=\frac{az+b}{cz+d}$ where $a,b,c,d\in\mathbb{R}$ and $ad-bc=1$.
Proof. First let's suppose $f\in\text{Aut}(\mathcal{U})$ and recall (or observe) that the function $g:\mathcal{U}\to\Delta$ given by$$g(z)=\frac{z-i}{z+i}$$is a conformal map. (Here, $\Delta$ denotes the unit disc.) Define $F=g\circ f\circ g^{-1}:\Delta\to\Delta$ so that $F\in\text{Aut}(\Delta)$, that is, $F$ is an automorphism of $\Delta$. By our previous post, there exist $a,b\in\mathbb{C}$ with $|a|^2-|b|^2=1$ such that$$F(t)=\frac{at+b}{\bar bt + \bar a}, \qquad t\in\Delta.$$
Now let $z=g^{-1}(t)\in\mathcal{U}$ so that $g(z)=t$. Then since $f=g^{-1}\circ F \circ g$ and $g^{-1}(w)=\frac{i+wi}{1-w}$ for any $w\in\Delta$ we have$$f(z)=g^{-1}(F(g(z)))=g^{-1}\left( \frac{ a\left(\frac{z-i}{z+i} \right) + b }{\bar b\left(\frac{z-i}{z+i} \right) + \bar a } \right)$$which reduces to\begin{align}\frac{z(ai+bi+\bar a i +\bar bi) -\bar a +\bar b - b +a }{z(\bar a +\bar b - a - b) +\bar ai-\bar bi-bi+ai }.\end{align}
Since $a=x+iy$ and $b=p+iq$ for real numbers $x,y,p,q$, we see that\begin{align*}ai=-y+ix \qquad&\text{and}\qquad \bar ai=y+ix,\\ bi=-q+ip \qquad&\text{and}\qquad \bar bi=q+ip,\\ \bar a - a = -2iy \qquad&\text{and}\qquad \bar b - b=-2iq.\end{align*}Thus (1) becomes$$\frac{z(x+p) + (y-q)}{z(-y-q)+(x-p)}$$which is of the form $\frac{Az+B}{Cz+D}$ where $A,B,C,D\in\mathbb{R}$ and $AD-BC=(x^2-p^2)+(y^2-q^2)=(x^2+y^2)-(p^2+q^2)=|a|^2-|b|^2=1$ as desired.
Conversely suppose $f(z)=\frac{az+b}{cz+d}$ for $a,b,c,d\in\mathbb{R}$ where $ad-bc=1$ for $z\in\mathcal{U}$. To show $f\in\text{Aut}(\mathcal{U})$, the only potentially tricky part is to show that $f(z)\in\mathcal{U}$, i.e. $\text{Im}f(z)>0$. (Note that $f$ is a linear fractional transformation, so it's holomorphic and invertible and its inverse is also holomorphic.) But in fact it's not hard at all! Simply notice that if $z=x+iy$ then \begin{align*} f(z)&=\frac{(az+b)(c\bar z+d)}{|cz+d|^2}\\ &=\frac{ac(x^2+y^2) +bc(x-iy) + ad(x+iy) + bd }{|cz+d|^2} \end{align*} and so Im$f(z)=\frac{ad-bc}{|cz+d|^2}y=\frac{\text{Im}(z)}{|cz+d|^2}$ and this is strictly greater than zero since $\text{Im}(z)>0$.
$\square$
Next time: We'll prove a similar result about the automorphisms of the complex plane.
|
I will take pretty much the same approach that 9-BBN does except I will do it a little bit differently. I have never heard of setting the reactants to zero and I'm not very good at making matrices, so I'll just jump right into the systems of equations.
First of all, define your coefficients $a,b,c,d,e$ for each of the chemicals involved, being the reactants that you start with and the products that you get from the reaction.
$$\ce{a FeS2 + b O2 + c H2O -> d Fe(OH)3 + e H2SO4}$$
Now we keep a tally of the number of each atom in each molecule. If I put a zero, it means that there are none in that compound; this is only for clarity. An equality sign replaces the reaction arrow separating reactants and products.
Iron (Fe): $\displaystyle 1a + 0b + 0c = 1d + 0e \Longrightarrow a = d$
Sulfur (S): $\displaystyle 2a + 0b + 0c = 0d + 1e \Longrightarrow 2a = e$
Oxygen (O): $\displaystyle 0a + 2b + 1c = 3d + 4e \Longrightarrow 2b + c = 3d + 4e$
Hydrogen (H): $\displaystyle 0a + 0b + 2c = 3d + 2e \Longrightarrow 2c = 3d + 2e$
Now then, we have four equations and five variables $a,b,c,d,e$, so the system is underdetermined: the coefficients are only fixed up to an overall scale. Since we only want the solution with the lowest whole-number coefficients for each molecule, we are free to fix one variable and solve for the rest. We choose the simplest one, $a$, and set $a = 1$. Note that $a \le 0$ is not allowed.
In iron (Fe), $a = d$, as $a = 1$, then $1 = d$
In sulfur (S), $2a = e$, as $a = 1$, then $2 = e$
What we know: $a = 1, b =\ ?, c =\ ?, d = 1, e = 2$
With what we know, we can only solve for $c$ in the hydrogen equation next.
$c = [(3d + 2e) / 2]$, as $d = 1, e = 2$, then: $c = 7 / 2$
Solving the last equation for b:
$b = [3d + 4e -c] / 2$, given known variables, then: $b = 15 / 4$
Solutions: $a = 1, b = (15 / 4), c = (7 / 2), d = 1, e = 2$
This will balance the equation according to conservation of mass. However, it makes more sense if you multiply $a$ through $e$ by $4$ so as to get the lowest whole number solutions. This is like scaling a recipe. We are synthesizing the same thing, though.
Therefore, $a = 4, b = 15, c = 14, d = 4, e = 8$
Balanced Chemical Equation:
$$\ce{4 FeS2 + 15 O2 + 14 H2O -> 4 Fe(OH)3 + 8 H2SO4}$$
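Equivalently, once $a$ is fixed the remaining system is linear, so any linear solver will reproduce the algebra above. A small sketch in R (purely illustrative; the matrix rows are just the four element-balance equations with $a = 1$):
A <- rbind(c(0, 0, 1, 0),    # Fe:  d = a
           c(0, 0, 0, 1),    # S:   e = 2a
           c(2, 1, -3, -4),  # O:   2b + c - 3d - 4e = 0
           c(0, 2, -3, -2))  # H:   2c - 3d - 2e = 0
rhs <- c(1, 2, 0, 0)         # right-hand sides after setting a = 1
x <- solve(A, rhs)           # returns b, c, d, e
x                            # 3.75 3.50 1.00 2.00
4 * c(a = 1, x)              # scale to whole numbers: 4 15 14 4 8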
Now that your equation is balanced I will show you a little something that I've derived. So the formula for the
complete combustion of every hydrocarbon alkane ($\ce{C_nH_{2n+2}}$) such as methane, ethane, propane, butane, pentane, etc … is this:
$$\ce{C_nH_{2n+2} + $(3n + 1) / 2$ O2 -> n CO2 + (n + 1) H2O}$$
So if we are completely combusting propane (full airflow of oxygen, no major flickering of the flame producing a mixture of $\ce{CO}$ and $\ce{CO2}$), which has the molecular formula $\ce{C3H8}$ so that $n = 3$, then its combustion is the following:
$$\ce{C3H8 (g) + 5 O2 (g) -> 3 CO2 (g) + 4 H2O (g)}$$
I know the general complete combustion equation is true for every alkane because all alkanes have the formula $\ce{C_nH_{2n+2}}$, $\ce{O2}$ is always involved in combustion, and $\ce{CO2}$ & $\ce{H2O}$ are always the products of the complete combustion of alkanes. I think this is really cool given that most popular general chemistry equations are combustion equations!
Additional information: I have a lot of ideas running through my head now. Once you've determined the balanced chemical equation, you may be able to calculate the spontaneity of the reaction from the change in Gibbs free energy of reaction, which is useful for knowing whether the reaction is a waste of energy, that is, whether it consumes more energy than it produces.
After experimentation you can determine the percent yield, i.e. how much of the stoichiometrically expected product was actually obtained, which tells you how useful your reaction is. But even with an ideal $100~\%$ yield, are all the atoms in your reactants going on to form your desired product, or are some of them being wasted producing something else?
The 2nd Principle of Green Chemistry: Atom Economy, states that
Synthetic methods should be designed to maximize incorporation of all materials used in the process into the final product. (The American Chemical Society)
So what's the point of your reaction? To make $\ce{Fe(OH)3}$? What percent of the atoms $\ce{Fe, O}$, and $\ce{H}$ are being incorporated as $\ce{Fe(OH)3}$ and how much is going off to form sulfuric acid instead? We can calculate that as the percent atom economy.
$$\%\ \text{Atom Economy} = \frac{\text{mass of atoms in desired product}}{\text{mass of atoms in all reactants}} \times 100$$
where units are in grams per mole, but they are eliminated in the ratio.
$$\begin{align}\%\ \text{Atom Economy} &= \frac{4 \times 106.866}{4 \times 119.965 + 15 \times 31.998 + 14 \times 18.015} \times 100\\&= \frac{427.464}{1212.04} \times 100\\&= 35.27~\%\end{align}$$
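(As a quick numerical cross-check of this figure, a sketch in R using the same approximate molar masses:)
mm_FeOH3 <- 106.866; mm_FeS2 <- 119.965; mm_O2 <- 31.998; mm_H2O <- 18.015   # g/mol
100 * (4 * mm_FeOH3) / (4 * mm_FeS2 + 15 * mm_O2 + 14 * mm_H2O)              # about 35.27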
So if there were multiple ways to make $\ce{Fe(OH)3}$, the $\%\ \text{Atom Economy}$ may be a useful factor to consider when choosing a synthesis. I think this is more important, though, when choosing a synthetic route where toxic byproducts are involved: you want to minimize the toxic byproducts produced, both for the health of the earth and of the customer or patient, and to reduce the money spent deactivating and/or filtering out toxic byproducts in an industrial synthesis. Therefore, you want the synthesis with the highest $\%\ \text{Atom Economy}$.
A link on Atom Economy: https://www.acs.org/content/acs/en/greenchemistry/what-is-green-chemistry/principles/gc-principle-of-the-month-2.html
|
Sorry, but my first answer was completely and utterly wrong. Since this question is one of the top search results for the query "braid group fundamental group configuration space" I think it's high time I updated it with a correct explanation! :-)
I am not sure why there is a non-trivial loop. My understanding of homotopy is that if there is no "hole" in the space, then we can continuously retract our loop back to our base point. Why can we not do this in this case?
Short answer. You're thinking of $B_n(\Bbb R)$ when you should be thinking of $B_n(\Bbb C)$.
Long answer. Let $X$ be a "nice" topological space (say, a manifold). Define $F_n(X)$ to be the subspace of $X^n$ comprised of tuples with distinct coordinates. The symmetric group $S_n$ acts on it freely, and we can form the $n$-configuration space as the quotient $SF_n(X):=F_n(X)/S_n$. Then we define the braid group as $B_n(X)=\pi_1(SF_n(X))$. (Of course, $SF_n(X)$ should be path-connected...)
If you take $X=\Bbb R$ then the connected components of $F_n(X)$ are blocks for the action of $S_n$. Given any two tuples $(x_1,\cdots,x_n)$ and $(y_1,\cdots,y_n)$ with $x_1<x_2<\cdots<x_n$ and $y_1<y_2<\cdots<y_n$, these two tuples will be path-connected: first shift all coordinates of $\vec{y}$ uniformly enough to the right so that $x_n<y_1$, then shift $y_1$ back until it's $x_1$, then shift $y_2$ back until it's $x_2$, and so on. The space of all tuples $(x_1,\cdots,x_n)$ with increasing coordinates is homeomorphic to $\Bbb R^n$ which is simply connected. Similarly for any other tuples whose coordinates are "ranked" in a given order.
However $(x_1,x_2,x_3,\cdots,x_n)$ will
not be path-connected to $(x_2,x_1,x_3,\cdots,x_n)$ within $F_n(X)$. The difference between the first two coordinates would need to change from positive to negative, and hence by IVT must be zero at some point. In general, a path in $F_n(X)$ cannot change the "rank order" of the coordinates of a tuple. So there are no paths between points in any $S_n$-orbit in $F_n(X)$. Therefore, any based loop in $SF_n(X)$ when lifted back to $F_n(X)$ must also be a loop, hence must be nullhomotopic since $F_n(X)$'s connected components are simply connected, so $SF_n(X)$ is simply connected, so the braid group $B_n(\Bbb R)=\pi_1(SF_n(\Bbb R))$ is trivial.
Now consider $X=\Bbb C$ with $n=2$. We must delete the subspace $\{(z,z):z\in\Bbb C\}$ from $\Bbb C^2$. (Keep in mind for now that $C_2$ acts on the carved-out space by transposing coordinates.) This deleted subspace is a plane inside Euclidean $4$-space, so the carved-out space is homeomorphic to $\Bbb R\times (\Bbb R^3- L)$ for a line $L\subset\Bbb R^3$. Better yet, consider the obvious Euclidean structure on the space and take the orthogonal complement $\{(z,-z):z\in\Bbb C\}$: there is an orthogonal projector given by $(z,w)\mapsto(z-w,w-z)/\sqrt{2}$ and then an isomorphism into the punctured plane given by $(u,-u)\mapsto u$. Thus, we have a deformation retract from $\Bbb C^2-{\rm diag}$ onto $\Bbb C^\times$, and we know $\pi_1(\Bbb C^\times)$ is infinite cyclic. (This is the pure braid group $P_2$.)
If one further deformation retracts $\Bbb C^\times\to S^1$ and has $C_2$ act by swapping antipodal points, then our deformation retract from $\Bbb C^2-{\rm diag}$ is
equivariant. Thus we have a commutative diagram:
$$\begin{array}{ccc}\pi_1(\Bbb C^2-{\rm diag}) & \longrightarrow & \pi_1((\Bbb C^2-{\rm diag})/C_2) \\ \downarrow & & \downarrow \\ \pi_1(S^1) & \longrightarrow & \pi_1(S^1/C_2) \end{array} $$
The vertical maps are isomorphisms since they are induced from deformation retracts. As a result, we know that the inclusion of the pure braid group $P_2\hookrightarrow B_2$ is akin to $2\Bbb Z\hookrightarrow\Bbb Z$.
I don't think this kind of argument will generalize though.
So what about $n>2$? In configuration space (which has $2n$ real dimensions, so is hard to visualize) a single point, a "configuration," represents $n$ distinct points in a plane (which is easy to visualize). And a path in configuration space represents each of the $n$ points in the plane having a path in and out of it.
Thus, imagine a continuum (indexed by $[0,1]$) of copies of $\Bbb C$ (resting flat) piled on top of each other. If one lets the altitude represent time, then the paths traced out between the points represent strings, and if one looks at this picture from the side one sees braid diagrams! Example:
[image: braid diagram]
Since we can choose our basepoint for $\pi_1$ to be anything, without loss of generality we may assume it is $\{1,2,\cdots,n\}\subset\Bbb C$ for the purpose of visualization. Tuples in $\Bbb C^n$ with nondistinct coordinates represent two strings intersecting at the same point, which is why we must delete this subspace from $\Bbb C^n$: to prevent collisions.
A path in $\Bbb C^n$ ending where it started means each colored string above would have to go back to its original point, and this defines a pure braid. If we quotient by the action of $S_n$, we essentially allow the path in configuration space to go to any of the permuted configurations, which means the strings in the braid diagram can connect
different dots.
There is another way to visualize braids that is also very interesting: mapping classes of the closed unit disk with $n$ points inside deleted. I recently asked a question about generalizing this idea to generalized braid groups. Mappings can warp the unit disk like a sheet of rubber, but the rubber is attached to the boundary (the unit circle) which must remain fixed pointwise. When you delete $n$ points, that essentially means your mappings of the disk must restrict to a permutation of those $n$ points.
To visualize what such mappings look like, for $B_2$ imagine putting two fingers on the two points in the disk, then using your two fingers to warp the rubber disk by turning it one way or the other with your fingers. Remember the border of the rubber sheet is stuck in place, so you'll be twisting the inside of the rubber relative to the outside rather than lamely rotating it. In general for $n$ points, you can do the same thing by twisting the rubber around two points with two fingers.
There are two ways to twist two marked points around each other with your fingers (clockwise or counterclockwise), corresponding to which string goes over/under which in the braid diagram. The paths that the marked points take throughout the twisting process essentially trace out a braid diagram. Intuitively, we should be able to "lift" any braid diagram into a composition of such twistings of the unit disk. More detail is given in the link.
|
When you fit a generalized linear model (GLM) in R and call
confint on the model object, you get confidence intervals for the model coefficients. But you also get an interesting message:
Waiting for profiling to be done...
What's that all about? What exactly is being profiled? Put simply, it's telling you that it's calculating a
profile likelihood ratio confidence interval.
The typical way to calculate a 95% confidence interval is to multiply the standard error of an estimate by some normal quantile such as 1.96 and add/subtract that product to/from the estimate to get an interval. In the context of GLMs, we sometimes call that a Wald confidence interval.
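As an aside, the Wald interval involves no profiling at all; it only needs the estimate and its standard error. A sketch in R, using the budworm.lg model that is fitted further below:
confint.default(budworm.lg, parm = "ldose")   # estimate +/- 1.96 * standard error
# or by hand, roughly 0.81 to 1.32:
coef(budworm.lg)["ldose"] + c(-1, 1) * qnorm(0.975) * sqrt(vcov(budworm.lg)["ldose", "ldose"])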
Another way to determine an upper and lower bound of plausible values for a model coefficient is to find the minimum and maximum value of the set of all coefficients that satisfy the following:
\[-2\log\left(\frac{L(\beta_{0}, \beta_{1}|y_{1},…,y_{n})}{L(\hat{\beta_{0}}, \hat{\beta_{1}}|y_{1},…,y_{n})}\right) < \chi_{1,1-\alpha}^{2}\]
Inside the parentheses is a ratio of
likelihoods. In the denominator is the likelihood of the model we fit. In the numerator is the likelihood of the same model but with different coefficients. (More on that in a moment.) We take the log of the ratio and multiply by -2. This gives us a likelihood ratio test (LRT) statistic. This statistic is typically used to test whether a coefficient is equal to some value, such as 0, with the null likelihood in the numerator (model without coefficient, that is, equal to 0) and the alternative or estimated likelihood in the denominator (model with coefficient). If the LRT statistic is less than \(\chi_{1,0.95}^{2} \approx 3.84\), we fail to reject the null. The coefficient is statistically not much different from 0. That means the likelihood ratio is close to 1. The likelihood of the model without the coefficient is almost as high as that of the model with it. On the other hand, if the ratio is small, that means the likelihood of the model without the coefficient is much smaller than the likelihood of the model with the coefficient. This leads to a larger LRT statistic since it's being log transformed, which leads to a value larger than 3.84 and thus rejection of the null.
Now in the formula above, we are seeking all such coefficients in the numerator that would make it a true statement. You might say we're “profiling” many different null values and their respective LRT test statistics.
Do they fit the profile of a plausible coefficient value in our model? The smallest value we can get without violating the condition becomes our lower bound, and likewise with the largest value. When we're done we'll have a range of plausible values for our model coefficient that gives us some indication of the uncertainty of our estimate.
Let's load some data and fit a binomial GLM to illustrate these concepts. The following R code comes from the help page for
confint.glm. This is an example from the classic Modern Applied Statistics with S.
ldose is a dosing level and
sex is self-explanatory.
SF is number of successes and failures, where success is number of dead worms. We're interested in learning about the effects of dosing level and sex on number of worms killed. Presumably this worm is a pest of some sort.
# example from Venables and Ripley (2002, pp. 190-2.)
ldose <- rep(0:5, 2)
numdead <- c(1, 4, 9, 13, 18, 20, 0, 2, 6, 10, 12, 16)
sex <- factor(rep(c("M", "F"), c(6, 6)))
SF <- cbind(numdead, numalive = 20-numdead)
budworm.lg <- glm(SF ~ sex + ldose, family = binomial)
summary(budworm.lg)
##
## Call:
## glm(formula = SF ~ sex + ldose, family = binomial)
##
## Deviance Residuals:
##      Min        1Q    Median        3Q       Max
## -1.10540  -0.65343  -0.02225   0.48471   1.42944
##
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)
## (Intercept)  -3.4732     0.4685  -7.413 1.23e-13 ***
## sexM          1.1007     0.3558   3.093  0.00198 **
## ldose         1.0642     0.1311   8.119 4.70e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
##     Null deviance: 124.8756  on 11  degrees of freedom
## Residual deviance:   6.7571  on  9  degrees of freedom
## AIC: 42.867
##
## Number of Fisher Scoring iterations: 4
The coefficient for
ldose looks significant. Let's determine a confidence interval for the coefficient using the
confint function. We call
confint on our model object,
budworm.lg and use the
parm argument to specify that we only want to do it for
ldose:
confint(budworm.lg, parm = "ldose")
## Waiting for profiling to be done...
##     2.5 %    97.5 %
## 0.8228708 1.3390581
We get our “waiting” message though there really was no wait. If we fit a larger model and request multiple confidence intervals, then there might actually be a waiting period of a few seconds. The lower bound is about 0.82 and the upper bound about 1.34. We might say every increase in dosing level increases the log odds of killing worms by at least 0.8. We could also exponentiate to get a CI for an odds ratio estimate:
exp(confint(budworm.lg, parm = "ldose"))
## Waiting for profiling to be done...
##    2.5 %   97.5 %
## 2.277027 3.815448
The odds of “success” (killing worms) is at least 2.3 times higher at one dosing level versus the next lower dosing level.
To better understand the profile likelihood ratio confidence interval, let's do it “manually”. Recall the denominator in the formula above was the likelihood of our fitted model. We can extract that with the
logLik function:
den <- logLik(budworm.lg)
den
## 'log Lik.' -18.43373 (df=3)
The numerator was the likelihood of a model with a
different coefficient. Here's the likelihood of a model with a coefficient of 1.05:
num <- logLik(glm(SF ~ sex + offset(1.05*ldose), family = binomial))
num
## 'log Lik.' -18.43965 (df=2)
Notice we used the
offset function. That allows us to fix the coefficient to 1.05 and not have it estimated.
Since we already extracted the
log likelihoods, we need to subtract them. Remember this rule from algebra?
\[\log\frac{M}{N} = \log M – \log N\]
So we subtract the denominator from the numerator, multiply by -2, and check if it's less than 3.84, which we calculate with
qchisq(p = 0.95, df = 1)
-2*(num - den)
## 'log Lik.' 0.01184421 (df=2)
-2*(num - den) < qchisq(p = 0.95, df = 1)
## [1] TRUE
It is. 1.05 seems like a plausible value for the
ldose coefficient. That makes sense since the estimated value was 1.0642. Let's try it with a larger value, like 1.5:
num <- logLik(glm(SF ~ sex + offset(1.5*ldose), family = binomial))
-2*(num - den) < qchisq(p = 0.95, df = 1)
## [1] FALSE
FALSE. 1.5 seems too big to be a plausible value for the
ldose coefficient.
Now that we have the general idea, we can program a
while loop to check different values until we exceed our threshold of 3.84.
cf <- budworm.lg$coefficients[3] # fitted coefficient 1.0642
cut <- qchisq(p = 0.95, df = 1) # about 3.84
e <- 0.001 # increment to add to coefficient
LR <- 0 # to kick start our while loop
while(LR < cut){
cf <- cf + e
num <- logLik(glm(SF ~ sex + offset(cf*ldose), family = binomial))
LR <- -2*(num - den)
}
(upper <- cf)
## ldose ## 1.339214
To begin we save the original coefficient to
cf, store the cutoff value to
cut, define our increment of 0.001 as
e, and set
LR to an initial value of 0. In the loop we increment our coefficient estimate which is used in the
offset function in the estimation step. There we extract the log likelihood and then calculate
LR. If
LR is less than
cut (3.84), the loop starts again with a new coefficient that is 0.001 higher. We see that our upper bound of 1.339214 is very close to what we got above using
confint (1.3390581). If we set
e to smaller values we'll get closer.
We can find the LR profile lower bound in a similar way. Instead of adding the increment we subtract it:
cf <- budworm.lg$coefficients[3] # reset cf
LR <- 0 # reset LR
while(LR < cut){
cf <- cf - e
num <- logLik(glm(SF ~ sex + offset(cf*ldose), family = binomial))
LR <- -2*(num - den)
}
(lower <- cf)
## ldose ## 0.822214
The result, 0.822214, is very close to the lower bound we got from
confint (0.8228708).
This is a
very basic implementation of calculating a likelihood ratio confidence interval. It is only meant to give a general sense of what's happening when you see that message
Waiting for profiling to be done.... I hope you found it helpful. To see how R does it, enter
getAnywhere(profile.glm) in the console and inspect the code. It's not for the faint of heart.
I have to mention the book Analysis of Categorical Data with R, from which I gained a better understanding of the material in this post. The authors have kindly shared their R code at the following web site if you want to have a look: http://www.chrisbilder.com/categorical/
To see how they “manually” calculate likelihood ratio confidence intervals, go to the following R script and see the section “Examples of how to find profile likelihood ratio intervals without confint()”: http://www.chrisbilder.com/categorical/Chapter2/Placekick.R
|
Ranklist, scores and difficulties
Your score
Your score is simply the sum of difficulties of your solved problems. Solving the same problem twice does not give any extra points. Note that Kattis' difficulty estimates vary over time, and that this can cause your score to go up or down without you doing anything.
Scores are only updated every few minutes – your score and rank will not increase instantaneously after you have solved a problem, you have to wait a short while.
If you have set your account to be anonymous, you will not be shown in ranklists, and your score will not contribute to the combined score of your country or university. Your user profile will show a tentative rank which is the rank you would get if you turned off anonymous mode (assuming no anonymous users with a higher score than you do the same).
Combined scores
The combined score for a group of people (e.g., all users from a given country or university) is computed as a weighted average of the scores of the individual users, with geometrically decreasing weights (higher weights given to the larger scores). Suppose the group contains $n$ people, and that their scores, ordered in non-increasing order, are $s_0 \ge s_1 \ge \ldots \ge s_{n-1}$ Then the combined score for this group of people is calculated as \[ S = \frac{1}{f} \sum_{i=0}^{n-1} \left(1-\frac{1}{f}\right)^i \cdot s_i, \] where the parameter $f$ gives a trade-off between the contribution from having a few high scores and the contribution from having many users. In Kattis, the value of this parameter is chosen to be $f = 5$.
For example, if the group consists of a single user, the score for the group is 20% of the score of that user. If the group consists of a very large number of users, about 90% of the score is contributed by the 10 highest scores.
Adding a new user with a non-zero score to a group always increases the combined score of the group.
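A small sketch of the combined-score formula in R (the scores below are made up for illustration):
combined_score <- function(s, f = 5) {
  s <- sort(s, decreasing = TRUE)              # s_0 >= s_1 >= ... >= s_(n-1)
  weights <- (1 - 1/f)^(seq_along(s) - 1) / f  # geometrically decreasing weights
  sum(weights * s)
}
combined_score(100)                  # 20: a single user contributes 20% of their score
combined_score(c(100, 80, 60))       # 40.48
combined_score(c(100, 80, 60, 10))   # 41.504: any new non-zero score increases the total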
Problem difficulty
Kattis has problems of varying difficulty. She estimates the difficulty for different problems by using a variant of the ELO rating system. Broadly speaking, problems which are solved by many people using few submissions get low difficulty scores, and problems which are often attempted but rarely solved get high difficulty scores. Problems with very few submissions tend to get medium difficulty scores, since Kattis does not have enough data about their difficulty.
The difficulty estimation process also assigns an ELO-style rating to you as a user. This rating increases when you solve problems, like your regular score, but is also affected by your submission accuracy. We use your rating to choose which problems to suggest for you to solve. If your rating is higher, the problems we suggest to you in each category (trivial, easy, medium, hard) will have higher difficulty values.
|
Both roughness length $z_0$ and zero plane displacement $d$ seem to be defined as the height above the ground at which wind speed theoretically becomes zero. But wind speed is also supposed to go to zero at $d+z_0$. What is the difference between the two, and how should they actually be defined?
First some definitions:
$z_0$:
Roughness length is defined as the height at which the mean velocity is zero due to substrate roughness. Real walls/ground are not smooth and often have varying degrees of roughness; this parameter (which is determined empirically) accounts for that effect.
$d$:
Zero Plane displacement is defined as the height at which the mean velocity is zero due to large obstacles such as buildings/canopy.
The two parameters are not the same because they describe the effects of two fundamentally different processes. $d$ can be anywhere from $6$ to $20$ times larger than $z_0$.
The basis for most turbulence modeling is the eddy viscosity model:
$$-\overline{u'w'} = \nu_t \frac{\partial U}{\partial z}$$
Near the surface, assuming an approximately constant stress $-\overline{u'w'}=u^{*2}$ and the mixing-length closure $\nu_t=\kappa u^* z$, integrating this gives the logarithmic profile
$${U} = \frac{u^*}{\kappa} \ln\, \frac{z}{z_0}$$
Your equation is
$${U} = \frac{u^*}{\kappa} \ln\, \frac{z-d}{z_0}$$ which is the law of the wall above with the zero plane displacement $d$ added; the flat-plate form corresponds to $d = 0$. It is easy to see then that by subtracting $d$ from $z$ the effect is to reduce $U$ at a given height, which makes sense because large obstacles remove energy from the mean flow and slow it down.
Note that if there are no large obstacles then $d \approx 0$, but $z_0$ is still larger than zero.
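A minimal numerical illustration of the two profiles (a sketch; the parameter values are assumed purely for plotting):
u_star <- 0.4; kappa <- 0.4; z0 <- 0.1; d_c <- 10   # m/s, -, m, m (illustrative values)
U <- function(z, d = 0) ifelse(z > d + z0, u_star / kappa * log((z - d) / z0), NA)
z <- seq(0.2, 50, by = 0.2)
plot(U(z), z, type = "l", xlab = "U (m/s)", ylab = "z (m)")  # flat-plate log law, d = 0
lines(U(z, d = d_c), z, lty = 2)                             # same law displaced by a canopy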
|
We model the expansion history of the Universe as a Gaussian process and find constraints on the dark energy density and its low-redshift evolution using distances inferred from the Luminous Red Galaxy (LRG) and Lyman-alpha (Ly$\alpha$) datasets of the Baryon Oscillation Spectroscopic Survey, supernova data from the Joint Light-curve Analysis (JLA) sample, Cosmic Microwave Background (CMB) data from the Planck satellite, and local measurement of the Hubble parameter from the Hubble Space Telescope ($\mathsf H0$). Our analysis shows that the CMB, LRG, Ly$\alpha$, and JLA data are consistent with each other and with a $\Lambda$CDM cosmology, but the ${\mathsf H0}$ data is inconsistent at moderate significance. Including the presence of dark radiation does not alleviate the ${\mathsf H0}$ tension in our analysis. While some of these results have been noted previously, the strength here lies in that we do not assume a particular cosmological model. We calculate the growth of the gravitational potential in General Relativity corresponding to these general expansion histories and show that they are well-approximated by $\Omega_{\rm m}^{0.55}$ given the current precision. We assess the prospects for upcoming surveys to measure deviations from $\Lambda$CDM using this model-independent approach.
We incorporate Milky Way dark matter halo profile uncertainties, as well as an accounting of diffuse gamma-ray emission uncertainties in dark matter annihilation models for the Galactic Center Extended gamma-ray excess (GCE) detected by the Fermi Gamma Ray Space Telescope. The range of particle annihilation rates and masses expands when these unknowns are included. However, empirical determinations of the Milky Way halo's local density and density profile leave the signal region in considerable tension with dark matter annihilation searches from combined dwarf galaxy analyses. The GCE and dwarf tension can be alleviated if: one, the halo is extremely concentrated or strongly contracted; two, the dark matter annihilation signal differentiates between dwarfs and the Galactic Center; or, three, local stellar density measures are found to be significantly lower, like that from recent stellar counts, pushing up the local dark matter density.
The Milky Way's Galactic Center harbors a gamma-ray excess that is a candidate signal of annihilating dark matter. Dwarf galaxies remain predominantly dark in their expected commensurate emission. We quantify the degree of consistency between these two observations through a joint likelihood analysis. In doing so we incorporate Milky Way dark matter halo profile uncertainties, as well as an accounting of diffuse gamma-ray emission uncertainties in dark matter annihilation models for the Galactic Center Extended gamma-ray excess (GCE) detected by the {\em Fermi Gamma-Ray Space Telescope}. The preferred range of annihilation rates and masses expands when including these unknowns. Even so, using two recent determinations of the Milky Way halo's local density leaves the GCE-preferred region of single-channel dark matter annihilation models in strong tension with annihilation searches in combined dwarf galaxy analyses. A third, higher Milky Way density determination alleviates this tension. This joint likelihood analysis allows us to quantify this inconsistency. As an example, we test a representative inverse Compton sourced self-interacting dark matter model, which is consistent with both the GCE and dwarfs.
Self-interacting dark matter (SIDM) models have been proposed to solve the small-scale issues with the collisionless cold dark matter (CDM) paradigm. We derive equilibrium solutions in these SIDM models for the dark matter halo density profile including the gravitational potential of both baryons and dark matter. Self-interactions drive dark matter to be isothermal and this ties the core sizes and shapes of dark matter halos to the spatial distribution of the stars, a radical departure from previous expectations and from CDM predictions. Compared to predictions of SIDM-only simulations, the core sizes are smaller and the core densities are higher, with the largest effects in baryon-dominated galaxies. As an example, we find a core size around 0.3 kpc for dark matter in the Milky Way, more than an order of magnitude smaller than the core size from SIDM-only simulations, which has important implications for indirect searches of SIDM candidates.
|
I am trying to understand some results and would appreciate some general comments on tackling nonlinear problems.
Fisher's equation (a nonlinear reaction-diffusion PDE),
$$ u_t = du_{xx} + \beta u (1 - u) = F(u) $$
in discretised form,
$$ u_j^{\prime} = \boldsymbol{L}\boldsymbol{u} + \beta u_j (1 - u_j) = F(\boldsymbol{u}) $$
where $\boldsymbol{L}$ is the differential operator and $\boldsymbol{u}=(u_{j-1}, u_j, u_{j+1}) $ is the discretisation stencil.
Method
I wish to apply an implicit scheme because I require stability and an unrestricted time step. For this purpose I am using the $\theta$-method (note that $\theta=1$ gives a fully implicit scheme and $\theta=0.5$ gives the trapezoidal or "Crank-Nicolson" scheme),
$$ u_{j}^{\prime} = \theta F(\boldsymbol{u}^{n+1}) + (1-\theta) F(\boldsymbol{u}^{n}) $$
However, for nonlinear problems this cannot be applied directly, because the reaction term is nonlinear in $\boldsymbol{u}^{n+1}$ and so the update cannot be written as a linear system.
To get around this problem I have been exploring two numerical approaches,
IMEX method
$$ u_j^{\prime} = \underbrace{\theta\boldsymbol{L}\boldsymbol{u^{n+1}} + (1-\theta)\boldsymbol{L}\boldsymbol{u^{n}}}_{\theta-\text{method diffusion term}} + \underbrace{\beta u_j^{n} (1 - u_j^{n})}_{\text{Fully explicit reaction term}} $$
The most obvious route is to ignore the nonlinear part of the reaction term and just update the reaction term with the best possible value, i.e. that from the previous time step. This results in the IMEX method.
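For concreteness, here is a minimal IMEX sketch in R (a simplified illustration with placeholder parameters; 1-D grid, Dirichlet boundaries held fixed, second-order central differences):
nx <- 101; dx <- 1/(nx - 1); dt <- 0.01; d <- 1e-3; beta <- 1; theta <- 0.5
L <- diag(-2, nx)                                  # build the tridiagonal Laplacian
L[cbind(2:nx, 1:(nx - 1))] <- 1
L[cbind(1:(nx - 1), 2:nx)] <- 1
L <- d * L / dx^2
L[1, ] <- 0; L[nx, ] <- 0                          # keep the boundary values fixed
I <- diag(nx)
u <- as.numeric(seq(0, 1, length.out = nx) < 0.2)  # initial step profile
A <- I - theta * dt * L                            # constant, so it could be factored once
for (n in 1:200) {
  rhs <- u + dt * ((1 - theta) * (L %*% u) + beta * u * (1 - u))  # reaction fully explicit
  u <- as.numeric(solve(A, rhs))                   # implicit diffusion solve
}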
Newton solver
$$ \nu^{k+1} = \nu^{k} - (I - \theta\tau A^{n})^{-1} \left( \nu^{k} - u^{n} - (1-\theta) \tau F(u^{n}) - \theta\tau F(\nu^{k}) \right) $$
The full $\theta$-method equation can be solved using a Newton-Raphson iteration to find the solution at the next time level, where $k$ is the iteration index ($k\geq0$) and $A^{n}$ is the Jacobian matrix of $F$ evaluated at $u^{n}$. Here I use the symbol $\nu^{k}$ for the iteration variable so that it is distinguished from the solution of the equation at a real time point, $u^n$. This is actually a modified Newton solver because the Jacobian is not updated with every iteration.
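For comparison, a sketch of one time step of this modified Newton iteration in R, reusing L, beta, theta and dt from the IMEX sketch above (again only an illustration; the Jacobian is assembled once per step, at $u^n$):
F  <- function(u) as.numeric(L %*% u) + beta * u * (1 - u)
Fu <- function(u) L + diag(beta * (1 - 2 * u))     # Jacobian of F
step_newton <- function(u_n, tol = 1e-10, kmax = 20) {
  A_n <- Fu(u_n)                                   # frozen (modified Newton) Jacobian
  M   <- diag(length(u_n)) - theta * dt * A_n
  v   <- u_n                                       # initial iterate
  for (k in 1:kmax) {
    G  <- v - u_n - dt * ((1 - theta) * F(u_n) + theta * F(v))  # theta-method residual
    dv <- as.numeric(solve(M, G))
    v  <- v - dv
    if (max(abs(dv)) < tol) break
  }
  v
}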
Results
The results above are calculated for a reasonably large time step and they show the difference between the time-stepping approach and a full Newton iteration solver.
Things I don't understand:
I am surprised that the time-stepping method does "OK" but it eventually lags behind the analytical solution as time goes by. (NB: if I had chosen a smaller time step then the time-stepping approach gives results closer to the analytical solution.) Why does the time-stepping approach give reasonable results for a nonlinear equation?
The Newton model does much better, but starts to lead the analytical model as time goes forward. Why does the accuracy of the Newton approach decrease with time? Can accuracy be improved?
Why is there a general feature that after many iterations the numerical model and the analytical model begin to diverge? Is this just because the time step is too large, or will this always happen?
|
The impulse response of the LTI system is
$$h(t)=e^{-4t} u(t)$$
The expression for the step response is
$$\frac14 \left(1-e^{-4t}\right)u(t)$$
My question is how $u(t)$ appears in the answer.
Since your question is not about how to obtain the answer, but specifically about the reason why the unit step $u(t)$ appears in it, I assume that you know that the response to a step input is computed by convolving the impulse response with a step:
$$y(t)=\int_{-\infty}^{\infty}h(\tau)u(t-\tau)d\tau=\int_{-\infty}^{t}h(\tau)d\tau\tag{1}$$
Since $h(t)$ is causal, i.e., $h(t)=0$ for $t<0$, the lower integration limit can be changed to $0$. This implies that if in $(1)$ the upper integration limit $t$ is less than $0$, the result must be zero. Consequently, the response to a step input can be written as
$$y(t)=u(t)\cdot\int_0^{t}h(\tau)d\tau\tag{2}$$
Evaluating $(2)$ gives you the final answer, which includes the step function $u(t)$.
EDIT: Triggered by Dilip Sarwate's comment I add some extra explanation. Note that the step function in $(2)$ is actually redundant because for $t<0$ the integral in $(2)$ is zero because for $t<0$ we have
$$\int_0^{t}h(\tau)d\tau=-\int_{-|t|}^0h(\tau)d\tau=0,\quad t<0$$
because $h(t)=0$ for $t<0$. However, if $h(t)=f(t)u(t)$ with some function $f(t)$ that is not zero for $t<0$ (in the given example we have $f(t)=e^{-4t}$), we cannot just replace the lower integration limit in $(1)$ by $0$ and leave out the step function:
$$\int_{-\infty}^th(\tau)d\tau=\int_{-\infty}^tf(\tau)u(\tau)d\tau=\int_{0}^tf(\tau)u(\tau)d\tau\neq \int_{0}^tf(\tau)d\tau$$
because now we generally have for $t<0$
$$\int_{0}^tf(\tau)d\tau=-\int_{-|t|}^0f(\tau)d\tau\neq 0$$
The correct way to evaluate the integral for $h(t)=f(t)u(t)$ is
$$\int_{-\infty}^th(\tau)d\tau=u(t)\cdot\int_0^tf(\tau)d\tau$$
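A quick numerical cross-check of $(2)$ for this particular $h(t)$ (a sketch; the accuracy is limited by the grid spacing):
dt <- 1e-3
t  <- seq(0, 2, by = dt)
h  <- exp(-4 * t)                  # h(t) for t >= 0; it is zero before that
y_num    <- cumsum(h) * dt         # running integral of h up to time t
y_closed <- (1 - exp(-4 * t)) / 4  # the closed-form step response for t >= 0
max(abs(y_num - y_closed))         # small (rectangle-rule error)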
|
I'm writing a basic gear train simulation, where it is possible for every gear to be attached to a source of torque/angular friction. All the online resources I've found only deal with systems where a
single gear is powered and all others simply accept torque from that gear, so I've kind of had to build the equations from scratch. This is what I've come up with so far:
I started by modeling gears as levers, and looking at the force they exerted on one another.
$$F_n=\frac{\tau_n}{r_n}\\ F_{net12}=F_1+F_2=\frac{\tau_1}{r_1}+\frac{\tau_2}{r_2}$$
Then I converted to torque and found the angular acceleration: $$\tau_{net_1}=F_{net12}*r_1,\ \tau_{net_2}=F_{net12} * r_2\\ \alpha_n=\frac{\tau_{net_n}}{m_n*r_n^2}\\ a_n=\alpha_n*r_n$$ (I'm considering gears as perfect disks for simplification here)
But if you substitute in, the $r_n$'s in $\tau_n$ and $a_n$ cancel out those in $\alpha_n$, leaving you just with $$a_n=\frac{F_{net12}}{m_n}$$
And therefore the equation for a system of many gears is $$F_{net}=\sum_n \frac{\tau_n}{r_n}\\ a_0=a_1=a_2=\ ...\ =\frac{F_{net}}{\sum_n m_n}$$
I have two questions:
First of all, is my reckoning correct? It seems strange that the evolution of a rotational system is expressed only in linear units. But since the radius of each gear could be different, there can't be some global sum torque acting on all of them equally, which means there
has to be a global sum force.
Secondly, if it is correct, how could I elegantly extend this model to a system that allows for multiple gears on an axle? And how could I (preferably numerically, rather than logically or analytically) check for impossible systems, like this one?
|
Consider the Hamiltonian: $$H=\sum_{\vec k} \varepsilon (\vec k)a_{\vec k}^\dagger a_{\vec k}$$ with $\varepsilon(\vec k) \rightarrow 0$ as $|\vec k|\rightarrow 0$. I know that this has gapless excitations and therefore Goldstone modes, but I am confused about the actual definition of what counts as a Goldstone mode. Do they refer to the states $\hat a^\dagger(k) \left|0\right>$ for small/infinitesimal $k$, which do still have some energy, or to the ground states $\hat a^\dagger(0) \left|0\right>$, which have no energy?
The massless field is called the Goldstone 'mode'. The term 'mode' does indeed make one think of a particular momentum mode and in that sense it is a misnomer. For simplicity let's take the $U(1)$ case with $\phi=|\phi| e^{i\theta}$. After symmetry breaking the Goldstone part of the Lagrangian is
$$ \mathcal L = \frac{1}{2} \phi_0^2 (\partial \theta)^2 $$
with $\theta \in [0, 2 \pi)$ and for some non-zero value of $\phi_0$ corresponding to the minimum of the potential. Symmetry breaking chooses one value of $\theta=\theta_0$ arbitrarily. Linearized excitations can be quantized and they change the value of $\theta$. However, strictly speaking $a^\dagger_0 |\theta_0 \rangle$ is not defined, as it is not normalizable by itself and you cannot put it in a wavepacket while keeping the energy zero. This is related to the fact that one cannot change the vacuum of this system if the volume is infinite, and it is really only in that limit that the symmetry is spontaneously broken; for finite volume one can take linear superpositions of different $\theta$, much like one can form a superposition of positions for a particle living on a circle.
|
Colloquium
In our colloquium on algorithmic mathematics and complexity theory, guests and members of the group present current topics from their research. Organized with the help of Alperen Ali Ergür.
Schedule
Speaker - Title - Date, Time
Kathlén Kohn - Coisotropic Varieties in Algebraic Vision - 21.09.2017, 14:15-15:45
James Mathews - Submanifold jets and envelopes - 28.09.2017, 14:15-15:45
Mario Kummer - Eigenvalues of Symmetric Matrices over Integral Domains - 26.10.2017, 14:15-15:45
Paul Breiding - Random Spectrahedra - 2.11.2017, 14:15-15:45
Christian Ikenmeyer - Width 2 algebraic branching programs and the continuant - 16.11.2017, 14:15-15:45
Guillaume Malod - Lower bounds for restricted arithmetic models - 7.12.2017, 14:15-15:45
Corey Harris - Moved to the 18th of January - 14.12.2017, 14:15-15:45
Boulos El Hilany - Real Hurwitz numbers and simple rational functions - 21.12.2017, 14:15-15:45
Cordian Riener - Homology of Symmetric Semi-Algebraic sets - 11.1.2017, 14:15-15:45
Corey Harris - Computations and applications of Segre classes - 18.1.2017, 14:15-15:45
Pierre Lairez - Quasi-optimal average complexity for finding one root of a polynomial system - 1.2.2017, 14:15-15:45
Rainer Sinn - Quadratic Persistence of Real Projective Varieties - 8.2.2017, 14:15-15:45
Kristin Shaw - Bounding Betti numbers of patchworked real hypersurfaces by Hodge numbers - 15.2.2017, 14:15-15:45
Michael Walter - Quantum marginal problem, tensor scaling, and invariant theory - 22.2.2017, 15:15-16:45
Submanifold jets and envelopes Speaker: James Mathews
I will introduce a notion of the envelopes of a family of submanifolds or subvarieties, in the generality of arbitrary dimensions and jet-order. The notion has its roots throughout classical geometry (optics, caustics, developable surfaces, confocal quadrics).
The case of plane curves is a surprisingly rich testing ground for the ideas, with applications to projective geometry and web geometry. The Koszul complex makes an unexpected appearance here, suggesting an elegant (though conjectural) computation of envelopes in the completely general case.
This presentation is derived from my PhD thesis at Stony Brook University.
Coisotropic Varieties in Algebraic Vision Speaker: Kathlén Kohn
This talk will have two parts.
In the first part, we will motivate and define coisotropic varieties as certain subvarieties of Grassmannians. We will discuss which properties these varieties should have and show first non-trivial examples.
In the second part, we will see how coisotropic varieties appear naturally in Algebraic Vision and how they can lead to new proof techniques in this field. This is based on joint work with Bernd Sturmfels and Matthew Trager.
Eigenvalues of Symmetric Matrices over Integral Domains Speaker: Mario Kummer (MPI Leipzig)
Given an integral domain A we consider algebraic integers over A that can appear as Eigenvalue of a symmetric matrix over A. We address the question of characterising those algebraic integers as well as the problem of finding the smallest possible size of the corresponding symmetric matrix. The focus will lie on the case where A is the polynomial ring over the real numbers or the ring of integers in an algebraic number field. This has implications on the size of semidefinite programs and on multiplicities of Eigenvalues of graphs.
Random Spectrahedra Speaker: Paul Breiding (MPI Leipzig)
Spectrahedra are a special class of convex sets, defined to be linear sections of the cone of positive definite matrices. In this talk we will consider random spectrahedra, which are defined for linear sections chosen uniform at random. In this talk we will consider the expected volume of a random spectrahedron and the expected volume of the boundary of a random spectrahedron. If the linear space of intersection is of dimension 4, it is known that the boundary has finitely many singularities (counted in projective space). We will also consider the expected number of those. This will be related to the volume of matrices with double eigenvalues, for which we present an explicit formula. This is joint work with Antonio Lerario.
Width 2 algebraic branching programs and the continuant Speaker: Christian Ikenmeyer (MPI Saarbrücken)
In 1979 Valiant introduced the complexity class VP_e of families with polynomially bounded formula size. In this talk we study the topological closure VP_e-bar, i.e. the class of polynomials that can be approximated arbitrarily closely by polynomials in VP_e. We describe VP_e-bar with a strikingly simple complete polynomial (in characteristic different from 2): The continuant polynomial, which has rich connections to the theory of continued fractions.
The methods are rooted in the study of algebraic branching programs (ABPs) of small constant width. In 1992 Ben-Or and Cleve showed that formula size is polynomially equivalent to width-3 ABP size. We extend their result (in characteristic different from 2) by showing that approximate formula size is polynomially equivalent to approximate width-2 ABP size. This is surprising because in 2011 Allender and Wang gave explicit polynomials that cannot be computed by width-2 ABPs at all! The details of our construction naturally lead to the continuant polynomial.
Lower bounds for restricted arithmetic models Speaker: Guillaume Malod (Université Paris Diderot)
In arithmetic complexity, the objects computed are polynomials and the models can be formulas or circuits or other variants. In this setting, the main open question, similar to P vs NP, is to determine whether the classes VP and VNP are equal. As in Boolean complexity an answer seems out of reach. One current research endeavor is to get lower bounds and separations for classes defined by restricted models, for instance monotone computations where cancellations are not allowed. In this talk I will start with a detailed presentation of lower bounds in the non-commutative setting, based on Nisan's 1991 paper, and then describe various recent results, which can be seen as implementing a similar strategy. I will end with a brief tour of the most important open questions.
Real Hurwitz numbers and simple rational functions Speaker: Boulos El Hilany (Universität Tübingen)
Joint work with Johannes Rau.
We study the problem of counting real simple rational functions f with prescribed ramification data. These correspond to a particular class of oriented real Hurwitz numbers of genus 0. Unlike in the complex situation, the number of such real covers depend on the position of the real branch points.
We introduce a signed count of such functions that is invariant under change of the branch locus, thus providing a lower bound for the actual count. The approach is based on the recent works of Itenberg and Zvonkine which treats the polynomial case.
Homology of Symmetric Semi-Algebraic sets Speaker: Cordian Riener (Universität Konstanz)
Let $R$ be a real closed field and $S\subset R^k$ be a semi-algebraic set defined by symmetric polynomials whose degree is $d$. We consider the rational cohomology groups of $S$. The action of the symmetric group on $R^k$ gives these groups the structure of an $S_k$-module. We study the isotypic decomposition of this $S_k$-module and show bounds on the multiplicities of the various irreducible $S_k$ representations that can appear. In particular, we study the irreducible representation which is naturally isomorphic to the equivariant homology groups, and give an algorithm with polynomially bounded (in $k$) complexity for computing these equivariant Betti numbers. At the end of the talk, we will sketch how this algorithm can be extended to computing the lower Betti numbers of $S$.
This is joint work with Saugata Basu.
Computations and applications of Segre classes Speaker: Corey Harris (MPI Leipzig)
A fundamental object in intersection theory is the Segre class s(X,Y) of a subscheme X of a variety Y. In the case that X is regularly embedded, this is just the inverse of the Chern class of the normal bundle. The Segre class is more general though, in that it always exists, even when the normal sheaf is not locally free. Segre classes facilitate the computation of many other important objects, such as Chern-Schwartz-MacPherson classes, Chern-Mather classes, and Milnor classes of hypersurfaces. In this talk, I'll give an explicit formula for the Segre class s(X,Y) when Y is a subvariety of a toric variety and give some applications.
Quasi-optimal average complexity for finding one root of a polynomial system Speaker: Pierre Lairez (Inria)
How many operations do we need on the average to compute an approximate root of a random Gaussian polynomial system? Beyond Smale's 17th problem that asked whether a polynomial bound is possible, we prove a quasi-optimal bound $\text{(input size)}^{1+o(1)}$. This improves upon the previously known $\text{(input size)}^{\frac32 +o(1)}$ bound.
The new algorithm relies on numerical continuation along rigid continuation paths. The central idea is to consider rigid motions of the equations rather than line segments in the linear space of all polynomial systems. This leads to a better average condition number and allows for bigger steps. We show that on the average, we can compute one approximate root of a random Gaussian polynomial system of $n$ equations of degree at most $D$ in $n+1$ homogeneous variables with $O(n^5 D^2)$ continuation steps. This is a decisive improvement over previous bounds that prove no better than $\sqrt{2}^{\min(n, D)}$ continuation steps on the average.
Quadratic Persistence of Real Projective Varieties Speaker: Rainer Sinn (FU Berlin)
The goal of the talk is to study the Pythagoras number of polynomials, which is the maximal sum-of-squares length. The sum-of-squares length of a polynomial f is the smallest k such that f is a sum of k squares of polynomials. This turns out to be a subtle problem and we discuss relations to an invariant of real projective varieties that we call quadratic persistence. This invariant can be used to provide lower bounds for the Pythagoras number.
Bounding Betti numbers of patchworked real hypersurfaces by Hodge numbers Speaker: Kristin Shaw (TU Berlin)
The Smith-Thom inequality bounds the sum of the Betti numbers of a real algebraic variety by the sum of the Betti numbers of its complexification. In this talk I will explain our proof of a conjecture of Itenberg which refines this bound for a particular class of real algebraic projective hypersurfaces in terms of the Hodge numbers of its complexification.
The real hypersurfaces we consider arise from Viro's patchworking construction, which is a powerful combinatorial method for constructing topological types of real algebraic varieties. To prove the bounds conjectured by Itenberg we develop a real analogue of tropical homology and use spectral sequences to compare it to the usual tropical homology of Itenberg, Katzarkov, Mikhalkin, Zharkov. Their homology theory gives the Hodge numbers of a complex projective variety from its tropicalisation. Lurking in the spectral sequences of the proof are the keys to being able to actually control the topology of the real hypersurface produced from a patchwork. This is joint work in progress with Arthur Renaudineau.
Quantum marginal problem, tensor scaling, and invariant theory Speaker: Michael Walter (University of Amsterdam and QuSoft)
Given a vector in a rational representation, can it be distinguished from zero by an invariant polynomial? This problem of deciding semi-stability is a fundamental problem in geometric invariant theory. I will explain why it can be usefully thought of as a generalization of linear programming and describe some situations where it is particularly interesting, including the quantum marginal problem in quantum information theory. I’ll then present a geometric algorithm, called tensor scaling, that solves this problem exactly.
|
I'll state the question from my book below:
If $A$, $B$ and $C$ are the angles of a triangle, then find the determinant value of $$\Delta = \begin{vmatrix}\sin^2A & \cot A & 1 \\ \sin^2B & \cot B & 1 \\ \sin^2C & \cot C & 1\end{vmatrix}.$$
Here's how I tried solving the problem:
$\Delta = \begin{vmatrix}\sin^2A & \cot A & 1 \\ \sin^2B & \cot B & 1 \\ \sin^2C & \cot C & 1\end{vmatrix}$
$R_2 \to R_2 - R_1$
$R_3 \to R_3 -R_1$
$= \begin{vmatrix}\sin^2A & \cot A & 1 \\ \sin^2B-\sin^2A & \cot B-\cot A & 0 \\ \sin^2C-\sin^2A & \cot C-\cot A & 0\end{vmatrix}$
Expanding the determinant along $C_3$
\begin{align} &= (\sin^2B-\sin^2A)(\cot C-\cot A)-(\cot B-\cot A)(\sin^2C-\sin^2A) \\ &= \sin(B+A) \sin(B-A) \left[\frac {\cos C}{\sin C} - \frac {\cos A}{\sin A}\right] - \left[\frac {\cos B}{\sin B} - \frac {\cos A}{\sin A}\right]\sin(C+A) \sin(C-A) \\ &= \frac {\sin(B+A) \sin(B-A) \sin(A-C)} {\cos A \cos C} - \frac {\sin(A-B) \sin(C+A) \sin(C-A)} {\cos A \cos C} \\ &= \frac {\sin(B-A) \sin (A-C)} {\cos A} \left[\frac {\sin(A+B)} {\cos C} - \frac {\sin(A+C)} {\cos B}\right] \\ &= \frac {\sin(B-A) \sin (A-C)} {\cos A} \left[\frac {\sin C} {\cos C} - \frac {\sin B} {\cos B}\right] \\ &= \frac {\sin(B-A) \sin (A-C) \sin (C-B)} {\cos A \cos B \cos C} \end{align}
I tried solving further but the expression just got complicated. I don't even know if the work I've done above is helpful. My textbook gives the answer as $0$. I don't have any clue about getting the answer. Any help would be appreciated.
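For what it's worth, a quick numerical sanity check (my own addition, in Python; not part of the derivation above) supports the textbook's answer of $0$ for an arbitrary triangle:

    # Evaluate the determinant directly for angles with A + B + C = pi.
    import numpy as np

    A, B, C = 0.7, 1.1, np.pi - 0.7 - 1.1   # an arbitrary triangle
    M = np.array([[np.sin(A)**2, 1/np.tan(A), 1],
                  [np.sin(B)**2, 1/np.tan(B), 1],
                  [np.sin(C)**2, 1/np.tan(C), 1]])
    print(np.linalg.det(M))   # something like 1e-17, i.e. zero up to rounding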
|
Let $U \subseteq \mathbb{R}^n$ be open and denote by $\mathcal{D}(U)$ the space of all compactly supported smooth functions $U \to \mathbb{R}$. Let $\mathcal{D}^\prime(U)$ be the space of all distributions $\mathcal{D}(U) \to \mathbb{R}$ with the standard topology.
Given a distribution $T$, I would like to prove that there exists a sequence $(\psi_n)$ in $\mathcal{D}(U)$ such that \begin{equation}\label{eq:1}\tag{$\ast$} \lim_{n \to \infty} \left\langle \psi_n, \varphi \right\rangle = \left\langle T , \varphi\right\rangle \end{equation} for all $\varphi \in \mathcal{D}(U)$. I became interested in this question while reading the following paragraph from this Wikipedia article:
The test functions are themselves locally integrable, and so define distributions. As such they are dense in $\mathcal{D}^\prime(U)$ with respect to the topology on $\mathcal{D}^\prime(U)$ in the sense that for any distribution $T \in \mathcal{D}^\prime(U)$, there is a sequence $\psi_n \in \mathcal{D}(U)$ such that $$ \left\langle \psi_n, \varphi \right\rangle \to \left\langle T, \varphi \right\rangle $$ for all $\varphi \in \mathcal{D}(U)$. This fact follows from the Hahn-Banach theorem, since the dual of $\mathcal{D}^\prime(U)$ with its weak*-topology is the space $\mathcal{D}(U)$.
My question is as follows:
how does this follow from the Hahn-Banach theorem? I understand why $(\mathcal{D}^\prime(U))^\ast \cong \mathcal{D}(U)$ when the former is given the weak*-topology, but I fail to see how \eqref{eq:1} follows from the Hahn-Banach theorem.
|
In spaceflight, an orbital maneuver is the use of propulsion systems to change the orbit of a spacecraft. For spacecraft far from Earth (for example those in orbits around the Sun) an orbital maneuver is called a deep-space maneuver (DSM).
The rest of the flight, especially in a transfer orbit, is called coasting.
General
Rocket equation
Figure: rocket mass ratios versus final velocity, calculated from the rocket equation
The Tsiolkovsky rocket equation, or ideal rocket equation, is an equation that is useful for considering vehicles that follow the basic principle of a rocket: a device that can apply acceleration to itself (a thrust) by expelling part of its mass with high speed, and which therefore moves due to the conservation of momentum. Specifically, it is a mathematical equation that relates the delta-v (the maximum change of speed of the rocket if no other external forces act) to the effective exhaust velocity and the initial and final mass of a rocket (or other reaction engine).
For any such maneuver (or journey involving a number of such maneuvers):
$$\Delta v = v_\text{e} \ln \frac{m_0}{m_1}$$
where:
$m_0$ is the initial total mass, including propellant,
$m_1$ is the final total mass,
$v_\text{e}$ is the effective exhaust velocity ($v_\text{e} = I_\text{sp} \cdot g_0$, where $I_\text{sp}$ is the specific impulse expressed as a time period and $g_0$ is standard gravity),
$\Delta v$ is delta-v, the maximum change of speed of the vehicle (with no external forces acting).
Delta-v
The applied change in speed of each maneuver is referred to as delta-v ($\Delta\mathbf{v}$).
Delta-v budget
The total delta-v for all the maneuvers of a mission is estimated and is called a delta-v budget. With a good approximation of the delta-v budget, designers can estimate the fuel-to-payload requirements of the spacecraft using the rocket equation.
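As a rough sketch of how the budget feeds into the rocket equation (in Python; the delta-v and specific impulse below are illustrative assumptions, not data from any mission):

    # Propellant fraction needed for a given delta-v budget, from m0/m1 = exp(dv / v_e).
    import math

    delta_v = 4000.0          # total delta-v budget, m/s (assumed)
    isp     = 320.0           # specific impulse, s (assumed)
    g0      = 9.80665         # standard gravity, m/s^2
    v_e     = isp * g0        # effective exhaust velocity

    mass_ratio = math.exp(delta_v / v_e)          # m0 / m1
    propellant_fraction = 1.0 - 1.0 / mass_ratio  # fraction of initial mass that is propellant
    print(mass_ratio, propellant_fraction)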
Impulsive maneuvers
Figure 1: Approximation of a finite thrust maneuver with an impulsive change in velocity
An "impulsive maneuver" is the mathematical model of a maneuver as an instantaneous change in the spacecraft's velocity (magnitude and/or direction), as illustrated in figure 1. In the physical world no truly instantaneous change in velocity is possible, as this would require an "infinite force" applied during an "infinitely short time", but as a mathematical model it in most cases describes the effect of a maneuver on the orbit very well. The offset of the velocity vector after the end of the real burn from the velocity vector at the same time resulting from the theoretical impulsive maneuver is caused only by the difference in gravitational force along the two paths (red and black in figure 1), which in general is small.
In the planning phase of space missions designers will first approximate their intended orbital changes using impulsive maneuvers, which greatly reduces the complexity of finding the correct orbital transitions.
Applying a low thrust over a longer period of time
Applying a low thrust over a longer period of time is referred to as a non-impulsive maneuver (where 'non-impulsive' refers to the maneuver not being of a short time period, rather than not involving impulse, i.e. a change in momentum, which clearly must take place).
Another term is finite burn, where the word "finite" is used to mean "non-zero", or practically, again: over a longer period.
For a few space missions, such as those including a space rendezvous, high fidelity models of the trajectories are required to meet the mission goals. Calculating a "finite" burn requires a detailed model of the spacecraft and its thrusters. The most important details include: mass, center of mass, moment of inertia, thruster positions, thrust vectors, thrust curves, specific impulse, thrust centroid offsets, and fuel consumption.
Assists
Oberth effect
In astronautics, the Oberth effect is where the use of a rocket engine when travelling at high speed generates much more useful energy than one at low speed. The Oberth effect occurs because the propellant has more usable energy (due to its kinetic energy on top of its chemical potential energy), and it turns out that the vehicle is able to employ this kinetic energy to generate more mechanical power. It is named after Hermann Oberth, the Austro-Hungarian-born German physicist and a founder of modern rocketry, who apparently first described the effect. [1]
The Oberth effect is used in a powered flyby or Oberth maneuver, where the application of an impulse, typically from the use of a rocket engine, close to a gravitational body (where the gravity potential is low, and the speed is high) can give much more change in kinetic energy and final speed (i.e. higher specific energy) than the same impulse applied further from the body for the same initial orbit. For the Oberth effect to be most effective, the vehicle must be able to generate as much impulse as possible at the lowest possible altitude; thus the Oberth effect is often far less useful for low-thrust reaction engines such as ion drives, which have a low propellant flow rate.
Oberth effect also can be used to understand the behaviour of multi-stage rockets; the upper stage can generate much more usable kinetic energy than might be expected from simply considering the chemical energy of the propellants it carries.
Historically, a lack of understanding of this effect led early investigators to conclude that interplanetary travel would require completely impractical amounts of propellant, as without it, enormous amounts of energy are needed. [1]
Gravitational assist
The trajectories that enabled NASA's twin Voyager spacecraft to tour the four gas giant planets and achieve velocity to escape our solar system
In orbital mechanics and aerospace engineering, a gravitational slingshot, gravity assist maneuver, or swing-by is the use of the relative movement and gravity of a planet or other celestial body to alter the path and speed of a spacecraft, typically in order to save propellant, time, and expense. Gravity assistance can be used to accelerate, decelerate and/or re-direct the path of a spacecraft.
The "assist" is provided by the motion (orbital angular momentum) of the gravitating body as it pulls on the spacecraft. [2] The technique was first proposed as a mid-course manoeuvre in 1961, and used by interplanetary probes from Mariner 10 onwards, including the two Voyager probes' notable fly-bys of Jupiter and Saturn.
Transfer orbits
Orbit insertion is a general term for a maneuver that is more than a small correction. It may be used for a maneuver to change a transfer orbit or an ascent orbit into a stable one, but also to change a stable orbit into a descent: descent orbit insertion. Also the term orbit injection is used, especially for changing a stable orbit into a transfer orbit, e.g. trans-lunar injection (TLI), trans-Mars injection (TMI) and trans-Earth injection (TEI).
Hohmann transfer
Hohmann Transfer Orbit
In orbital mechanics, the Hohmann transfer orbit is an elliptical orbit used to transfer between two circular orbits of different altitudes, in the same plane.
The orbital maneuver to perform the Hohmann transfer uses two engine impulses which move a spacecraft onto and off the transfer orbit. This maneuver was named after Walter Hohmann, the German scientist who published a description of it in his 1925 book Die Erreichbarkeit der Himmelskörper (The Accessibility of Celestial Bodies). [3] Hohmann was influenced in part by the German science fiction author Kurd Laßwitz and his 1897 book Two Planets.
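A minimal sketch of the two impulses (in Python), using the standard vis-viva-derived formulas; the gravitational parameter and the two radii below are assumed example values for a low-Earth-orbit to geostationary transfer:

    import math

    mu = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
    r1 = 6_678_000.0      # ~300 km altitude circular orbit (assumed)
    r2 = 42_164_000.0     # geostationary radius (assumed)

    dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)  # burn onto the transfer ellipse
    dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))  # circularize at r2
    print(dv1, dv2, dv1 + dv2)   # roughly 2.4 km/s + 1.5 km/s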
Bi-elliptic transfer
In astronautics and aerospace engineering, the bi-elliptic transfer is an orbital maneuver that moves a spacecraft from one orbit to another and may, in certain situations, require less delta-v than a Hohmann transfer maneuver.
The bi-elliptic transfer consists of two half elliptic orbits. From the initial orbit, a delta-v is applied boosting the spacecraft into the first transfer orbit with an apoapsis at some point r_b away from the central body. At this point, a second delta-v is applied sending the spacecraft into the second elliptical orbit with periapsis at the radius of the final desired orbit, where a third delta-v is performed, injecting the spacecraft into the desired orbit.
While they require one more engine burn than a Hohmann transfer and generally require a greater travel time, some bi-elliptic transfers require a lower amount of total delta-v than a Hohmann transfer when the ratio of final to initial semi-major axis is 11.94 or greater, depending on the intermediate semi-major axis chosen. [4]
The idea of the bi-elliptical transfer trajectory was first published by Ary Sternfeld in 1934. [5]
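A small sketch (in Python) comparing the two strategies with the vis-viva equation; the radii and the intermediate apoapsis below are assumed example values, chosen so that the final-to-initial ratio exceeds 11.94:

    import math

    mu = 3.986004418e14
    r1 = 7_000_000.0           # initial circular radius (assumed)
    r2 = 14 * r1               # final circular radius, ratio 14 > 11.94 (assumed)
    rb = 80 * r1               # intermediate apoapsis of the bi-elliptic transfer (assumed)

    def vis_viva(r, a):        # speed at radius r on an orbit of semi-major axis a
        return math.sqrt(mu * (2 / r - 1 / a))

    # Hohmann: burn at r1, burn at r2
    hohmann = (vis_viva(r1, (r1 + r2) / 2) - math.sqrt(mu / r1)) \
            + (math.sqrt(mu / r2) - vis_viva(r2, (r1 + r2) / 2))
    # Bi-elliptic: burn at r1, burn at the apoapsis rb, burn at r2
    bi = (vis_viva(r1, (r1 + rb) / 2) - math.sqrt(mu / r1)) \
       + (vis_viva(rb, (r2 + rb) / 2) - vis_viva(rb, (r1 + rb) / 2)) \
       + (vis_viva(r2, (r2 + rb) / 2) - math.sqrt(mu / r2))
    print(hohmann, bi)   # for these radii the bi-elliptic total comes out slightly smaller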
Low energy transfer
A low energy transfer, or low energy trajectory, is a route in space which allows spacecraft to change orbits using very little fuel. [6] [7] These routes work in the Earth-Moon system and also in other systems, such as traveling between the satellites of Jupiter. The drawback of such trajectories is that they take much longer to complete than higher energy (more fuel) transfers such as Hohmann transfer orbits.
Low energy transfers are also known as weak stability boundary trajectories, or ballistic capture trajectories.
Low energy transfers follow special pathways in space, sometimes referred to as the Interplanetary Transport Network. Following these pathways allows for long distances to be traversed for little expenditure of delta-v.
Orbital inclination change
Orbital inclination change is an orbital maneuver aimed at changing the inclination of an orbiting body's orbit. This maneuver is also known as an orbital plane change as the plane of the orbit is tipped. This maneuver requires a change in the orbital velocity vector (delta v) at the orbital nodes (i.e. the point where the initial and desired orbits intersect, the line of orbital nodes is defined by the intersection of the two orbital planes).
In general, inclination changes can require a great deal of delta-v to perform, and most mission planners try to avoid them whenever possible to conserve fuel. This is typically achieved by launching a spacecraft directly into the desired inclination, or as close to it as possible so as to minimize any inclination change required over the duration of the spacecraft life.
Maximum efficiency of inclination change is achieved at apoapsis (or apogee), where the orbital velocity $v$ is the lowest. In some cases, it may require less total delta-v to raise the satellite into a higher orbit, change the orbit plane at the higher apogee, and then lower the satellite to its original altitude. [8]
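A minimal sketch (in Python) of the textbook plane-change formula $\Delta v = 2v\sin(\Delta i/2)$, with assumed speeds, illustrating why the same inclination change is far cheaper where the orbital velocity is lower:

    import math

    di = math.radians(30.0)          # desired inclination change (assumed)
    v_leo = 7670.0                   # circular speed in low orbit, m/s (assumed)
    v_apogee = 1610.0                # speed at apogee of a high elliptical orbit, m/s (assumed)

    for v in (v_leo, v_apogee):
        print(2 * v * math.sin(di / 2))   # ~3970 m/s vs ~833 m/s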
Constant Thrust Trajectory
Constant-thrust and constant-acceleration trajectories involve the spacecraft firing its engine in a prolonged constant burn. In the limiting case where the vehicle acceleration is high compared to the local gravitational acceleration, the spacecraft points straight toward the target (accounting for target motion), and remains accelerating constantly under high thrust until it reaches its target. In this high-thrust case, the trajectory approaches a straight line. If it is required that the spacecraft rendezvous with the target, rather than performing a flyby, then the spacecraft must flip its orientation halfway through the journey, and decelerate the rest of the way.
In the constant-thrust trajectory, [9] the vehicle's acceleration increases during the thrusting period, since the fuel use means the vehicle mass decreases. If, instead of constant thrust, the vehicle has constant acceleration, the engine thrust must decrease during the trajectory.
This trajectory requires that the spacecraft maintain a high acceleration for long durations. For interplanetary transfers, days, weeks or months of constant thrusting may be required. As a result, there are no currently available spacecraft propulsion systems capable of using this trajectory. It has been suggested that some forms of nuclear (fission or fusion based) or antimatter powered rockets would be capable of this trajectory.
Rendezvous and docking
Orbit phasing
In astrodynamics, orbit phasing is the adjustment of the time-position of spacecraft along its orbit, usually described as adjusting the orbiting spacecraft's true anomaly.
Space rendezvous and docking
Gemini 7 photographed from Gemini 6 in 1965
A space rendezvous is an orbital maneuver during which two spacecraft, one of which is often a space station, arrive at the same orbit and approach to a very close distance (e.g. within visual contact). Rendezvous requires a precise match of the orbital velocities of the two spacecraft, allowing them to remain at a constant distance through orbital station-keeping. Rendezvous may or may not be followed by docking or berthing, procedures which bring the spacecraft into physical contact and create a link between them.
References
1. NASA-TT-F-622: Ways to spaceflight, p. 200, Hermann Oberth.
2. Basics of Space Flight, Sec. 1 Ch. 4, NASA Jet Propulsion Laboratory, http://www2.jpl.nasa.gov/basics/bsf4-1.php
3. Walter Hohmann, The Attainability of Heavenly Bodies (Washington: NASA Technical Translation F-44, 1960), Internet Archive.
4. Vallado, David Anthony (2001). Fundamentals of Astrodynamics and Applications. Springer. p. 317.
5. Sternfeld A., Sur les trajectoires permettant d'approcher d'un corps attractif central à partir d'une orbite keplérienne donnée. Comptes rendus de l'Académie des sciences (Paris), vol. 198, pp. 711-713.
6.
7.
8. Braeunig, Robert A. "Basics of Space Flight: Orbital Mechanics".
9. W. E. Moeckel, Trajectories with Constant Tangential Thrust in Central Gravitational Fields, Technical Report R-63, NASA Lewis Research Center, 1960 (accessed 26 March 2014).
External links
Handbook Automated Rendezvous and Docking of Spacecraft by Wigbert Fehse
|
As I mentioned in comments, professor Jack Simons offers a relatively simple physical interpretation of the idea of mixing in excited determinants for treating dynamical correlation in his book "Quantum Mechanics in Chemistry" (Chapter 8) freely available here, as well as in this video which is also authored by him.
An important mathematical finding is that a linear combination of a reference determinant and a doubly-excited one can be expressed as a linear combination of two other determinants. Namely,
\begin{multline} c_1 | \dotsc \ \phi_a \alpha \ \phi_a \beta \ \dotsc | - c_2 | \dotsc \ \phi_r \alpha \ \phi_r \beta \ \dotsc | = \\ \frac{c_1}{2} \Big( | \dotsc \ (\phi_a - c \phi_r) \alpha \ (\phi_a + c \phi_r) \beta \ \dotsc | - | \dotsc \ (\phi_a - c \phi_r) \beta \ (\phi_a + c \phi_r) \alpha \ \dotsc | \Big) \, ,\end{multline}where $c = \sqrt{c_2/c_1}$.
1
Here, the determinants to the left differ by a doubly occupied spatial orbital $\phi_a$ being replaced by a doubly occupied spatial orbital $\phi_r$, while the determinants to the right describe the singlet $(\phi_a - c \phi_r)^1 (\phi_a + c \phi_r)^1$ state. So, a state created by adding the doubly-excited $| \dotsc \ \phi_r \alpha \ \phi_r \beta \ \dotsc |$ determinant to the reference $| \dotsc \ \phi_a \alpha \ \phi_a \beta \ \dotsc |$ one is equivalent to a state in which one electron occupies $\phi_a - c \phi_r$ spatial orbital (being in any spin state) while another electron occupies $\phi_a + c \phi_r$ spatial orbital (also being in any spin state). And this is the way electrons "avoid" each other: by occupying these different spatial orbitals.
For example, $\pi^2 \rightarrow \pi^{*2}$ configuration mixing in alkenes or $\mathrm{2s^2} \rightarrow \mathrm{2p^2}$ configuration mixing in alkaline earth atoms produce left-right polarized and top-bottom polarized spatial orbital pairs shown below.
Here one electron stays closer to the left carbon atom by occupying the $\pi + c \pi^{*}$ orbital, while another avoids it staying closer to the right carbon atom by occupying the $\pi - c \pi^{*}$ orbital.
In this case one electron stays closer to the top of an atom by occupying $\mathrm{2s} + c \mathrm{2p}$ orbital, while another avoids it staying closer to the bottom of the atom by occupying $\mathrm{2s} - c \mathrm{2p}$ orbital.
1) Here is the proof for $2 \times 2$ determinants, where for brevity $\phi_i = \phi_i \alpha$ and $\phi_i^* = \phi_i \beta$.\begin{align} c_1 | \phi_a \ \phi_a^* | - c_2 | \phi_r \ \phi_r^* | &= \frac{c_1}{2} \Big( 2 | \phi_a \ \phi_a^* | - 2 c^2 | \phi_r \ \phi_r^* | \Big) \\ &= \frac{c_1}{2} \Big( 2 (\color{red}{\phi_a \phi_a^*} - \color{green}{\phi_a^* \phi_a}) - 2 (\color{blue}{c^2 \phi_r \phi_r^*} - \color{purple}{c^2 \phi_r^* \phi_r}) \Big) \\ &= \frac{c_1}{2} \Big( 2 (\color{red}{\phi_a \phi_a^*} - \color{blue}{c^2 \phi_r \phi_r^*}) - 2 (\color{green}{\phi_a^* \phi_a} - \color{purple}{c^2 \phi_r^* \phi_r}) \Big) \\ &= \frac{c_1}{2} \Big( (\color{red}{\phi_a \phi_a^*} - \color{blue}{c^2 \phi_r \phi_r^*}) - (\color{green}{\phi_a^* \phi_a} - \color{purple}{c^2 \phi_r^* \phi_r}) \\ &\phantom{=\frac{c_1}{2}}- (\color{green}{\phi_a^* \phi_a} - \color{purple}{c^2 \phi_r^* \phi_r}) + (\color{red}{\phi_a \phi_a^*} - \color{blue}{c^2 \phi_r \phi_r^*}) \Big) \\ &= \frac{c_1}{2} \Big( (\color{red}{\phi_a \phi_a^*} + c \phi_a \phi_r^* - c \phi_r \phi_a^* - \color{blue}{c^2 \phi_r \phi_r^*}) - (\color{green}{\phi_a^* \phi_a} - c \phi_a^* \phi_r + c \phi_r^* \phi_a - \color{purple}{c^2 \phi_r^* \phi_r}) \\ &\phantom{=\frac{c_1}{2}}- (\color{green}{\phi_a^* \phi_a} + c \phi_a^* \phi_r - c \phi_r^* \phi_a - \color{purple}{c^2 \phi_r^* \phi_r}) + (\color{red}{\phi_a \phi_a^*} - c \phi_a \phi_r^* + c \phi_r \phi_a^* - \color{blue}{c^2 \phi_r \phi_r^*}) \Big) \\ &= \frac{c_1}{2} \Big( (\phi_a - c \phi_r) (\phi_a^* + c \phi_r^*) - (\phi_a^* + c \phi_r^*) (\phi_a - c \phi_r) \\ &\phantom{=\frac{c_1}{2}}- (\phi_a^* - c \phi_r^*) (\phi_a + c \phi_r) + (\phi_a + c \phi_r) (\phi_a^* - c \phi_r^*) \Big) \\ &= \frac{c_1}{2} \Big( | (\phi_a - c \phi_r) \ (\phi_a^* + c \phi_r^*) | - | (\phi_a^* - c \phi_r^*) \ (\phi_a + c \phi_r) | \Big) \, .\end{align}
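If it helps, the $2 \times 2$ identity can also be checked symbolically; below is a small SymPy sketch (my own addition, not from the book), where each spin-orbital is represented by its values at the two electron coordinates and $| f \ g |$ is expanded as $f(1)g(2) - g(1)f(2)$:

    from sympy import symbols, sqrt, simplify, expand

    c1, c2 = symbols('c1 c2', positive=True)
    a1, a2, A1, A2 = symbols('a1 a2 A1 A2')   # phi_a*alpha and phi_a*beta at coordinates 1, 2
    r1, r2, R1, R2 = symbols('r1 r2 R1 R2')   # phi_r*alpha and phi_r*beta at coordinates 1, 2
    c = sqrt(c2 / c1)

    def det(f, g):                            # 2x2 Slater determinant: f(1)g(2) - g(1)f(2)
        return f[0] * g[1] - g[0] * f[1]

    lhs = c1 * det((a1, a2), (A1, A2)) - c2 * det((r1, r2), (R1, R2))
    minus_a = (a1 - c * r1, a2 - c * r2)      # (phi_a - c*phi_r)*alpha
    minus_b = (A1 - c * R1, A2 - c * R2)      # (phi_a - c*phi_r)*beta
    plus_a  = (a1 + c * r1, a2 + c * r2)      # (phi_a + c*phi_r)*alpha
    plus_b  = (A1 + c * R1, A2 + c * R2)      # (phi_a + c*phi_r)*beta
    rhs = c1 / 2 * (det(minus_a, plus_b) - det(minus_b, plus_a))
    print(simplify(expand(lhs - rhs)))        # prints 0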
|
De Bruijn-Newman constant
For each real number [math]t[/math], define the entire function [math]H_t: {\mathbf C} \to {\mathbf C}[/math] by the formula
[math]\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du[/math]
where [math]\Phi[/math] is the super-exponentially decaying function
[math]\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).[/math]
It is known that [math]\Phi[/math] is even, and that [math]H_t[/math] is even, real on the real axis, and obeys the functional equation [math]H_t(\overline{z}) = \overline{H_t(z)}[/math]. In particular, the zeroes of [math]H_t[/math] are symmetric about both the real and imaginary axes. One can also express [math]H_t[/math] in a number of different forms, such as
[math]\displaystyle H_t(z) = \frac{1}{2} \int_{\bf R} e^{tu^2} \Phi(u) e^{izu}\ du[/math]
or
[math]\displaystyle H_t(z) = \frac{1}{2} \int_0^\infty e^{t\log^2 x} \Phi(\log x) e^{iz \log x}\ \frac{dx}{x}.[/math]
In the notation of [KKL2009], one has
[math]\displaystyle H_t(z) = \frac{1}{8} \Xi_{t/4}(z/2).[/math]
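As an illustration only (not part of the rigorous toolkit used in the project), [math]H_t[/math] can be evaluated numerically by truncating the sum defining [math]\Phi[/math] and the range of integration, both of which converge extremely fast; a rough Python sketch:

    import numpy as np
    from scipy.integrate import quad

    def Phi(u, nmax=20):
        n = np.arange(1, nmax + 1)
        return np.sum((2 * np.pi**2 * n**4 * np.exp(9 * u)
                       - 3 * np.pi * n**2 * np.exp(5 * u)) * np.exp(-np.pi * n**2 * np.exp(4 * u)))

    def H(t, z, cutoff=2.0):
        # H_t(z) = int_0^infty exp(t u^2) Phi(u) cos(z u) du; real for real t and z
        return quad(lambda u: np.exp(t * u * u) * Phi(u) * np.cos(z * u), 0, cutoff)[0]

    # At t = 0 a sign change should appear near z = 28.27, twice the ordinate of the
    # first zero of the Riemann zeta function.
    for z in (28.0, 28.5):
        print(z, H(0.0, z))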
De Bruijn [B1950] and Newman [N1976] showed that there existed a constant, the de Bruijn-Newman constant [math]\Lambda[/math], such that [math]H_t[/math] has all zeroes real precisely when [math]t \geq \Lambda[/math]. The Riemann hypothesis is equivalent to the claim that [math]\Lambda \leq 0[/math]. Currently it is known that [math]0 \leq \Lambda \lt 1/2[/math] (lower bound in [RT2018], upper bound in [KKL2009]).
The Polymath15 project seeks to improve the upper bound on [math]\Lambda[/math]. The current strategy is to combine the following three ingredients:
Numerical zero-free regions for [math]H_t(x+iy)[/math] of the form [math]\{ x+iy: 0 \leq x \leq T; y \geq \varepsilon \}[/math] for explicit [math]T, \varepsilon, t \gt 0[/math].
Rigorous asymptotics that show that [math]H_t(x+iy)[/math] is non-zero whenever [math]y \geq \varepsilon[/math] and [math]x \geq T[/math] for a sufficiently large [math]T[/math].
Dynamics of zeroes results that control [math]\Lambda[/math] in terms of the maximum imaginary part of a zero of [math]H_t[/math].
[math]t=0[/math]
When [math]t=0[/math], one has
[math]\displaystyle H_0(z) = \frac{1}{8} \xi( \frac{1}{2} + \frac{iz}{2} ) [/math]
where
[math]\displaystyle \xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \zeta(s)[/math]
is the Riemann xi function. In particular, [math]z[/math] is a zero of [math]H_0[/math] if and only if [math]\frac{1}{2} + \frac{iz}{2}[/math] is a non-trivial zero of the Riemann zeta function. Thus, for instance, the Riemann hypothesis is equivalent to all the zeroes of [math]H_0[/math] being real, and Riemann-von Mangoldt formula (in the explicit form given by Backlund) gives
[math]\displaystyle \left|N_0(T) - (\frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} - \frac{7}{8})\right| \lt 0.137 \log (T/2) + 0.443 \log\log(T/2) + 4.350 [/math]
for any [math]T \gt 4[/math], where [math]N_0(T)[/math] denotes the number of zeroes of [math]H_0[/math] with real part between 0 and T.
The first [math]10^{13}[/math] zeroes of [math]H_0[/math] (to the right of the origin) are real [G2004]. This numerical computation uses the Odlyzko-Schonhage algorithm. In [P2017] it was independently verified that all zeroes of [math]H_0[/math] between 0 and 61,220,092,000 were real.
[math]t\gt0[/math]
For any [math]t\gt0[/math], it is known that all but finitely many of the zeroes of [math]H_t[/math] are real and simple [KKL2009, Theorem 1.3]. In fact, assuming the Riemann hypothesis, all of the zeroes of [math]H_t[/math] are real and simple [CSV1994, Corollary 2].
It is known that [math]\xi[/math] is an entire function of order one ([T1986, Theorem 2.12]). Hence by the fundamental solution for the heat equation, the [math]H_t[/math] are also entire functions of order one for any [math]t[/math].
Because [math]\Phi[/math] is positive, [math]H_t(iy)[/math] is positive for any [math]y[/math], and hence there are no zeroes on the imaginary axis.
Let [math]\sigma_{max}(t)[/math] denote the largest imaginary part of a zero of [math]H_t[/math], thus [math]\sigma_{max}(t)=0[/math] if and only if [math]t \geq \Lambda[/math]. It is known that the quantity [math]\frac{1}{2} \sigma_{max}(t)^2 + t[/math] is non-increasing in time whenever [math]\sigma_{max}(t)\gt0[/math] (see [KKL2009, Proposition A]). In particular we have
[math]\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2[/math]
for any [math]t[/math].
The zeroes [math]z_j(t)[/math] of [math]H_t[/math] obey the system of ODE
[math]\partial_t z_j(t) = - \sum_{k \neq j} \frac{2}{z_k(t) - z_j(t)}[/math]
where the sum is interpreted in a principal value sense, and excluding those times in which [math]z_j(t)[/math] is a repeated zero. See dynamics of zeros for more details. Writing [math]z_j(t) = x_j(t) + i y_j(t)[/math], we can write the dynamics as
[math] \partial_t x_j = - \sum_{k \neq j} \frac{2 (x_k - x_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math] [math] \partial_t y_j = \sum_{k \neq j} \frac{2 (y_k - y_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math]
where the dependence on [math]t[/math] has been omitted for brevity.
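A toy numerical sketch of these equations (forward Euler in Python, starting from a made-up configuration that is symmetric about the real axis and is not an actual zero set of any [math]H_t[/math]) illustrates how conjugate pairs are drawn toward the real line:

    import numpy as np

    z = np.array([-3.0, -1.0 + 0.4j, -1.0 - 0.4j, 1.0 + 0.4j, 1.0 - 0.4j, 3.0])  # toy zeroes
    dt, steps = 1e-3, 50
    for _ in range(steps):
        dz = np.zeros_like(z)
        for j in range(len(z)):
            diff = z[j] - np.delete(z, j)
            dz[j] = np.sum(2.0 / diff)      # dz_j/dt = sum_{k != j} 2 / (z_j - z_k)
        z = z + dt * dz
    print(np.round(z, 3))   # the imaginary parts of the conjugate pairs have shrunk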
In [KKL2009, Theorem 1.4], it is shown that for any fixed [math]t\gt0[/math], the number [math]N_t(T)[/math] of zeroes of [math]H_t[/math] with real part between 0 and T obeys the asymptotic
[math]N_t(T) = \frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + \frac{t}{16} \log T + O(1) [/math]
as [math]T \to \infty[/math] (caution: the error term here is not uniform in t). Also, the zeroes behave like an arithmetic progression in the sense that
[math] z_{k+1}(t) - z_k(t) = (1+o(1)) \frac{4\pi}{\log |z_k(t)|} = (1+o(1)) \frac{4\pi}{\log k} [/math]
as [math]k \to +\infty[/math].
Threads
Polymath proposal: upper bounding the de Bruijn-Newman constant, Terence Tao, Jan 24, 2018.
Polymath15, first thread: computing H_t, asymptotics, and dynamics of zeroes, Terence Tao, Jan 27, 2018.
Polymath15, second thread: generalising the Riemann-Siegel approximate functional equation, Terence Tao and Sujit Nair, Feb 2, 2018.
Polymath15, third thread: computing and approximating H_t, Terence Tao and Sujit Nair, Feb 12, 2018.
Polymath 15, fourth thread: closing in on the test problem, Terence Tao, Feb 24, 2018.
Polymath15, fifth thread: finishing off the test problem?, Terence Tao, Mar 2, 2018.
Polymath15, sixth thread: the test problem and beyond, Terence Tao, Mar 18, 2018.
Polymath15, seventh thread: going below 0.48, Terence Tao, Mar 28, 2018.
Polymath15, eighth thread: going below 0.28, Terence Tao, Apr 17, 2018.
Polymath15, ninth thread: going below 0.22?, Terence Tao, May 4, 2018.
Polymath15, tenth thread: numerics update, Rudolph Dwars and Kalpesh Muchhal, Sep 6, 2018.
Other blog posts and online discussion
Heat flow and zeroes of polynomials, Terence Tao, Oct 17, 2017.
The de Bruijn-Newman constant is non-negative, Terence Tao, Jan 19, 2018.
Lehmer pairs and GUE, Terence Tao, Jan 20, 2018.
A new polymath proposal (related to the Riemann hypothesis) over Tao's blog, Gil Kalai, Jan 26, 2018.
Code and data
Writeup
Test problem
Zero-free regions
See Zero-free regions.
Bibliography
[A2011] J. Arias de Reyna, High-precision computation of Riemann's zeta function by the Riemann-Siegel asymptotic formula, I, Mathematics of Computation, Volume 80, Number 274, April 2011, Pages 995-1009.
[B1994] W. G. C. Boyd, Gamma Function Asymptotics by an Extension of the Method of Steepest Descents, Proceedings: Mathematical and Physical Sciences, Vol. 447, No. 1931 (Dec. 8, 1994), pp. 609-630.
[B1950] N. C. de Bruijn, The roots of trigonometric integrals, Duke J. Math. 17 (1950), 197-226.
[CSV1994] G. Csordas, W. Smith, R. S. Varga, Lehmer pairs of zeros, the de Bruijn-Newman constant Λ, and the Riemann hypothesis, Constr. Approx. 10 (1994), no. 1, 107-129.
[G2004] Gourdon, Xavier (2004), The [math]10^{13}[/math] first zeros of the Riemann Zeta function, and zeros computation at very large height.
[KKL2009] H. Ki, Y. O. Kim, and J. Lee, On the de Bruijn-Newman constant, Advances in Mathematics, 22 (2009), 281-306.
[N1976] C. M. Newman, Fourier transforms with only real zeroes, Proc. Amer. Math. Soc. 61 (1976), 246-251.
[P2017] D. J. Platt, Isolating some non-trivial zeros of zeta, Math. Comp. 86 (2017), 2449-2467.
[P1992] G. Pugh, The Riemann-Siegel formula and large scale computations of the Riemann zeta function, M.Sc. Thesis, U. British Columbia, 1992.
[RT2018] B. Rodgers, T. Tao, The de Bruijn-Newman constant is non-negative, preprint, arXiv:1801.05914.
[T1986] E. C. Titchmarsh, The theory of the Riemann zeta-function. Second edition. Edited and with a preface by D. R. Heath-Brown. The Clarendon Press, Oxford University Press, New York, 1986.
|
One way to obtain the -2/3 is by singularity analysis.
The first step is to construct the generating function of your sequence. From the Taylor expansion of the Lambert $W$ function at 0, one gets that $-W(-x)$ is the generating function of the sequence $N^{N-1}/N!$ and therefore by differentiation$$\frac{-W(-x)}{1+W(-x)}=\sum_{N\ge1}{\frac{N^N}{N!}x^N}.$$Replacing $x$ by $x/e$ and multiplying by $1/(1-x)$ yields the desired generating function$$F(x):=\frac{-W(-x/e)}{(1-x)(1+W(-x/e))}=\sum_{N\ge1}{\left(\sum_{n=1}^N{\frac{n^ne^{-n}}{n!}}\right)x^N}.$$
From there, the result follows from an analysis at $x=1$. From the known expansion of $-W(-x)$ at $1/e$, one deduces$$F(x)=\frac{\sqrt{2}}{2(1-x)^{3/2}}-\frac{2}{3(1-x)}+O\left(\frac1{\sqrt{1-x}}\right),\quad x\rightarrow1.$$
Now singularity analysis (or Darboux's method) deduces the asymptotic expansion of your sequence as$$\frac{\sqrt{2 N}}{\sqrt{\pi}}-\frac23+O(1/\sqrt{N}).$$
With slightly more work along the same lines, one obtains a full asymptotic expansion beginning with$${\frac {\sqrt {2N}}{\sqrt {\pi}}}-\frac23+{\frac {\sqrt {2}}{3\sqrt {\pi N}}}-{\frac {37\sqrt {2}}{864\sqrt {\pi}N^{3/2}}}+{\frac {359\sqrt {2} }{64800\sqrt {\pi}N^{5/2}}}+O \left( {N}^{-7/2} \right) .$$
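A quick floating-point check (not needed for the argument) comparing the partial sums with the first three terms of the expansion:

    import math

    def a(N):                      # sum_{n <= N} n^n e^{-n} / n!
        s = 0.0
        for n in range(1, N + 1):
            s += math.exp(n * math.log(n) - n - math.lgamma(n + 1))  # lgamma avoids overflow
        return s

    for N in (10, 100, 1000):
        asym = (math.sqrt(2 * N / math.pi) - 2.0 / 3.0
                + math.sqrt(2.0) / (3.0 * math.sqrt(math.pi * N)))
        print(N, a(N), asym)       # the two columns agree to more digits as N grows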
|
Rank into rank
A nontrivial elementary embedding $j:V_\lambda\to V_\lambda$ for some infinite ordinal $\lambda$ is known as a rank into rank embedding and the axiom asserting that such an embedding exists is usually denoted by $\text{I3}$, $\text{I2}$, $\text{I1}$, $\mathcal{E}(V_\lambda)\neq \emptyset$ or some variant thereof. The term applies to a hierarchy of such embeddings increasing in large cardinal strength reaching toward the Kunen inconsistency. The axioms in this section are in some sense a technical restriction falling out of Kunen's proof that there can be no nontrivial elementary embedding $j:V\to V$ in $\text{ZFC}$. An analysis of the proof shows that there can be no nontrivial $j:V_{\lambda+2}\to V_{\lambda+2}$ and that if there is some ordinal $\delta$ and nontrivial rank to rank embedding $j:V_\delta\to V_\delta$ then necessarily $\delta$ must be a strong limit cardinal of cofinality $\omega$ or the successor of one. By standing convention, it is assumed that rank into rank embeddings are not the identity on their domains.
There are really two cardinals relevant to such embeddings: The large cardinal is the critical point of $j$, often denoted $\mathrm{crit}(j)$ or sometimes $\kappa_0$, and the other (not quite so large) cardinal is $\lambda$. In order to emphasize the two cardinals, the axiom is sometimes written as $E(\kappa,\lambda)$ (or $\text{I3}(\kappa,\lambda)$, etc.) as in [1]. The cardinal $\lambda$ is determined by defining the critical sequence of $j$. Set $\kappa_0 = \mathrm{crit}(j)$ and $\kappa_{n+1}=j(\kappa_n)$. Then $\lambda = \sup \langle \kappa_n : n <\omega\rangle$ and is the first fixed point of $j$ that occurs above $\kappa_0$. Note that, unlike many of the other large cardinals appearing in the literature, the ordinal $\lambda$ is not the target of the critical point; it is the $\omega^{th}$ $j$-iterate of the critical point.
As a result of the strong closure properties of rank into rank embeddings, their critical points are huge and in fact $n$-huge for every $n$. This aspect of the large cardinal property is often called $\omega$-hugeness and the term $\omega$-huge cardinal is sometimes used to refer to the critical point of some rank into rank embedding.
The $\text{I3}$ Axiom and Natural Strengthenings
The $\text{I3}$ axiom asserts, generally, that there is some embedding $j:V_\lambda\to V_\lambda$. $\text{I3}$ is also denoted as $\mathcal{E}(V_\lambda)\neq\emptyset$ where $\mathcal{E}(V_\lambda)$ is the set of all elementary embeddings from $V_\lambda$ to $V_\lambda$, or sometimes even $\text{I3}(\kappa,\lambda)$ when mention of the relevant cardinals is necessary. In its general form, the axiom asserts that the embedding preserves all first-order structure but fails to specify how much second-order structure is preserved by the embedding. The case that no second-order structure is preserved is also sometimes denoted by $\text{I3}$. In this specific case $\text{I3}$ denotes the weakest kind of rank into rank embedding and so the $\text{I3}$ notation for the axiom is somewhat ambiguous. To eliminate this ambiguity we say $j$ is $E_0(\lambda)$ when $j$ preserves only first-order structure.
The axiom can be strengthened and refined in a natural way by asserting that various degrees of second-order correctness are preserved by the embeddings. A rank into rank embedding $j$ is said to be $\Sigma^1_n$ or $\Sigma^1_n$ correct if, for every $\Sigma^1_n$ formula $\Phi$ and $A\subseteq V_\lambda$ the elementary schema holds for $j,\Phi$, and $A$: $$V_\lambda\models\Phi(A) \Leftrightarrow V_\lambda\models\Phi(j(A)).$$The more specific axiom $E_n(\lambda)$ asserts that some $j\in\mathcal{E}(V_\lambda)$ is $\Sigma^1_{2n}$.
The "$2n$" subscript in the axiom $E_n(\lambda)$ is incorporated so that the axioms $E_m(\lambda)$ and $E_n(\lambda)$ where $m<n$ are strictly increasing in strength. This is somewhat subtle. For $n$ odd, $j$ is $\Sigma^1_n$ if and only if $j$ is $\Sigma^1_{n+1}$. However, for $n$ even, $j$ being $\Sigma^1_{n+1}$ is significantly stronger than $j$ being $\Sigma^1_n$ [2].
The $\text{I2}$ Axiom
Any $j:V_\lambda\to V_\lambda$ can be extended to a $j^+:V_{\lambda+1}\to V_{\lambda+1}$ but in only one way: Define for each $A\subseteq V_\lambda$ $$j^+(A)=\bigcup_{\alpha < \lambda}(j(V_\alpha\cap A)).$$ $j^+$ is not necessarily elementary. The $\text{I2}$ axiom asserts the existence of some elementary embedding $j:V\to M$ with $V_\lambda\subseteq M$ where $\lambda$ is defined as the $\omega^{th}$ $j$-iterate of the critical point. Although this axiom asserts the existence of a class embedding with a very strong closure property, it is in fact equivalent to an embedding $j:V_\lambda\to V_\lambda$ with $j^+$ preserving well-founded relations on $V_\lambda$. So this axiom preserves some second-order structure of $V_\lambda$ and is in fact equivalent to $E_1(\lambda)$ in the hierarchy defined above. A specific property of $\text{I2}$ embeddings is that they are iterable (i.e. the direct limit of a directed system of embeddings is well-founded). In the literature, $IE(\lambda)$ asserts that $j:V_\lambda\to V_\lambda$ is iterable and $IE(\lambda)$ falls strictly between $E_0(\lambda)$ and $E_1(\lambda)$.
As a result of the strong closure property of $\text{I2}$, the equivalence mentioned above cannot be through an analysis of some ultrapower embedding. Instead, the equivalence is established by constructing a directed system of embeddings of various ultrapowers and using reflection properties of the critical points of the embeddings. The direct limit is well-founded since well-founded relations are preserved by $j^+$. The use of both direct and indirect limits, in conjunction with reflection arguments, is typical for establishing the properties of rank into rank embeddings.
The $\text{I1}$ Axiom
$\text{I1}$ asserts the existence of a nontrivial elementary embedding $j:V_{\lambda+1}\to V_{\lambda+1}$. This axiom is sometimes denoted $\mathcal{E}(V_{\lambda+1})\neq\emptyset$. Any such embedding preserves all second-order properties of $V_\lambda$ and so is $\Sigma^1_n$ for all $n$. To emphasize the preservation of second-order properties, the axiom is also sometimes written as $E_\omega(\lambda)$. In this case, restricting the embedding to $V_\lambda$ and forming $j^+$ as above yields the original embedding.
Strengthening this axiom in a natural way leads to the $\text{I0}$ axiom, i.e. asserting that embeddings of the form $j:L(V_{\lambda+1})\to L(V_{\lambda+1})$ exist. There are also other natural strengthenings of $\text{I1}$, though it is not entirely clear how they relate to the $\text{I0}$ axiom. For example, one can assume the existence of elementary embeddings satisfying $\text{I1}$ which extend to embeddings $j:M\to M$ where $M$ is a transitive class inner model and add various requirements to $M$. These requirements must not entail that $M$ satisfies the axiom of choice by the Kunen inconsistency. Requirements that have been considered include assuming $M$ contains $V_{\lambda+1}$, $M$ satisfies $DC_\lambda$, $M$ satisfies replacement for formulas containing $j$ as a parameter, $j(\mathrm{crit}(j))$ is arbitrarily large in $M$, etc.
Virtually rank-into-rank
(Information in this subsection from [4])
A cardinal $κ$ is virtually rank-into-rank iff in a set-forcing extension it is the critical point of an elementary embedding $j : V_λ → V_λ$ for some $λ > κ$.
This notion does not require stratification, because Kunen’s Inconsistency does not hold for virtual embeddings.
Every virtually rank-into-rank cardinal is a virtually $n$-huge* limit of virtually $n$-huge* cardinals for every $n < ω$. The least ω-Erdős cardinal $η_ω$ is a limit of virtually rank-into-rank cardinals. Every virtually rank-into-rank cardinal is an $ω$-iterable limit of $ω$-iterable cardinals. Every element of a club $C$ witnessing that $κ$ is a Silver cardinal is virtually rank-into-rank. Large Cardinal Properties of Critical Points
The critical points of rank into rank embeddings have many strong reflection properties. They are measurable, $n$-huge for all $n$ (hence the terminology $\omega$-huge mentioned in the introduction) and partially supercompact.
Using $\kappa_0$ as a seed, one can form the ultrafilter $$U=\{X\subseteq\mathcal{P}(\kappa_0): j``\kappa_0\in j(X)\}.$$ Thus, $\kappa_0$ is a measurable cardinal.
In fact, for any $n$, $\kappa_0$ is also $n$-huge as witnessed by the ultrafilter $$U=\{X\subseteq\mathcal{P}(\kappa_n): j``\kappa_n\in j(X)\}.$$ This motivates the term $\omega$-huge cardinal mentioned in the introduction.
Letting $j^n$ denote the $n^{th}$ iteration of $j$, then $$V_\lambda\models ``\kappa_0\text{ is supercompact"}.$$ To see this, suppose $\kappa_0\leq \theta <\kappa_n$, then $$U=\{X\subseteq\mathcal{P}_{\kappa_0}(\theta): j^n``\theta\in j^n(X)\}$$ witnesses the $\theta$-supercompactness of $\kappa_0$ (in $V_\lambda$). For this last claim, it is enough that $\kappa_0(j)$ is $<\lambda$-supercompact, i.e. not *fully* supercompact in $V$. In this case, however, $\kappa_0$ *could* be fully supercompact.
Critical points of rank-into-rank embeddings also exhibit some *upward* reflection properties. For example, if $\kappa$ is a critical point of some embedding witnessing $\text{I3}(\kappa,\lambda)$, then there must exist another embedding witnessing $\text{I3}(\kappa',\lambda)$ with critical point above $\kappa$! This upward type of reflection is not exhibited by large cardinals below extendible cardinals in the large cardinal hierarchy.
Algebras of elementary embeddings
If $j,k\in\mathcal{E}_{\lambda}$, then $j^+(k)\in\mathcal{E}_{\lambda}$ as well. We therefore define a binary operation $*$ on $\mathcal{E}_{\lambda}$ called application defined by $j*k=j^{+}(k)$. The binary operation $*$ together with composition $\circ$ satisfies the following identities:
1. $(j\circ k)\circ l=j\circ(k\circ l),\,j\circ k=(j*k)\circ j,\,j*(k*l)=(j\circ k)*l,\,j*(k\circ l)=(j*k)\circ(j*l)$
2. $j*(k*l)=(j*k)*(j*l)$ (self-distributivity).
Identity 2 is an algebraic consequence of the identities in 1.
If $j\in\mathcal{E}_{\lambda}$ is a nontrivial elementary embedding, then $j$ freely generates a subalgebra of $(\mathcal{E}_{\lambda},*,\circ)$ with respect to the identities in 1, and $j$ freely generates a subalgebra of $(\mathcal{E}_{\lambda},*)$ with respect to the identity 2.
If $j_{n}\in\mathcal{E}_{\lambda}$ for all $n\in\omega$, then $\sup\{\textrm{crit}(j_{0}*\dots*j_{n})\mid n\in\omega\}=\lambda$ where the implied parentheses are grouped on the left (for example, $j*k*l=(j*k)*l$).
Suppose now that $\gamma$ is a limit ordinal with $\gamma<\lambda$. Then define an equivalence relation $\equiv^{\gamma}$ on $\mathcal{E}_{\lambda}$ where $j\equiv^{\gamma}k$ if and only if $j(x)\cap V_{\gamma}=k(x)\cap V_{\gamma}$ for each $x\in V_{\gamma}$. Then the equivalence relation $\equiv^{\gamma}$ is a congruence on the algebra $(\mathcal{E}_{\lambda},*,\circ)$. In other words, if $j_{1},j_{2},k\in \mathcal{E}_{\lambda}$ and $j_{1}\equiv^{\gamma}j_{2}$ then $j_{1}\circ k\equiv^{\gamma} j_{2}\circ k$ and $j_{1}*k\equiv^{\gamma}j_{2}*k$, and if $j,k_{1},k_{2}\in\mathcal{E}_{\lambda}$ and $k_{1}\equiv^{\gamma}k_{2}$ then $j\circ k_{1}\equiv^{\gamma}j\circ k_{2}$ and $j*k_{1}\equiv^{j(\gamma)}j*k_{2}$.
If $\gamma<\lambda$, then every finitely generated subalgebra of $(\mathcal{E}_{\lambda}/\equiv^{\gamma},*,\circ)$ is finite.
$C^{(n)}$ variants
(section from [5])
$\mathrm{I3}$ and other $C^{(n)}$ variants:
Assuming $\mathrm{I3}(κ, δ)$, if $δ$ is a limit cardinal (instead of a successor of a limit cardinal – Kunen’s Theorem excludes other cases), it is equal to $sup\{j^m(κ) : m ∈ ω\}$ where $j$ is the elementary embedding. Then $κ$ and $j^m(κ)$ are $C^{(n)}$-superstrong, $C^{(n)}$-extendible and $C^{(n)}$-$m$-huge in $V_δ$, for all $n$ and $m$.
Definitions of $C^{(n)}$ variants of rank-into-rank cardinals:
$κ$ is called a $C^{(n)}$-$\mathrm{I3}$ cardinal if it is an $\mathrm{I3}$ cardinal, witnessed by some embedding $j : V_δ → V_δ$, with $j(κ) ∈ C^{(n)}$.
$κ$ is called a $C^{(n)+}$-$\mathrm{I3}$ cardinal if it is an $\mathrm{I3}$ cardinal, witnessed by some embedding $j : V_δ → V_δ$, with $δ ∈ C^{(n)}$.
$κ$ is called a $C^{(n)}$-$\mathrm{I1}$ cardinal if it is an $\mathrm{I1}$ cardinal, witnessed by some embedding $j : V_{δ+1} → V_{δ+1}$, with $j(κ) ∈ C^{(n)}$.
Results:
If $κ$ is $C^{(n)}$-$\mathrm{I3}$, then $κ ∈ C^{(n)}$.
Every $\mathrm{I3}$-cardinal is $C^{(1)}$-$\mathrm{I3}$ and $C^{(1)+}$-$\mathrm{I3}$.
By simple reflection arguments: The least $C^{(n)}$-$\mathrm{I3}$ cardinal is not $C^{(n+1)}$-$\mathrm{I1}$, for $n ≥ 1$. The least $C^{(n)}$-$\mathrm{I1}$ cardinal is not $C^{(n+1)}$-$\mathrm{I1}$, for $n ≥ 1$.
Every $C^{(n)}$-$\mathrm{I1}$ cardinal is also $C^{(n)}$-$\mathrm{I3}$.
For every $n ≥ 1$, if $δ$ is a limit ordinal and $j : V_δ → V_δ$ witnesses that $κ$ is $\mathrm{I3}$, then $(\forall_{m < ω}j^m (κ) ∈ C^{(n)}) \iff δ ∈ C^{(n)}$.
If $κ$ is $C^{(n)}$-$\mathrm{I3}$, then it is $C^{(n)}$-$m$-huge, for all $m$, and there is a normal ultrafilter $\mathcal{U}$ over $κ$ such that $\{α < κ : α$ is $C^{(n)}$-$m$-huge for every $m\} ∈ \mathcal{U}$.
If $κ$ is $C^{(n)}$-$\mathrm{I1}$, then the least $δ$ s.t. there is an elementary embedding $j : V_{δ+1} → V_{δ+1}$ with $crit(j) = κ$ and $j(κ) ∈ C^{(n)}$ is smaller than the first ordinal in $C^{(n+1)}$ greater than $κ$. Moreover, the least $C^{(n)}$-$\mathrm{I1}$ cardinal, if it exists, is smaller than the first ordinal in $C^{(n+1)}$, for all $n ≥ 1$.
References
1. Kanamori, Akihiro. The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings. Second edition, Springer-Verlag, Berlin, 2009 (paperback reprint of the 2003 edition).
2. Laver, Richard. Implications between strong large cardinal axioms. Ann. Math. Logic 90 (1997), no. 1-3, 79-90.
3. Corazza, Paul. The gap between ${\rm I}_3$ and the wholeness axiom. Fund. Math. 179 (2003), no. 1, 43-60.
4. Gitman, Victoria and Schindler, Ralf. Virtual large cardinals.
5. Bagaria, Joan. $C^{(n)}$-cardinals. Archive for Mathematical Logic 51 (2012), no. 3-4, 213-240.
|
Welcome back to our mini-series on quantum probability! Last time, we motivated the series by pondering over a thought from classical probability theory, namely that marginal probability doesn't have memory. That is, the process of summing over of a variable in a joint probability distribution causes information about that variable to be lost. But as we saw then, there is a quantum version of marginal probability that behaves much like "marginal probability with memory." It remembers what's destroyed when computing marginals in the usual way. In today's post, I'll unveil the details. Along the way, we'll take an introductory look at the mathematics of quantum probability theory.
Let's begin with a brief recap of the ideas covered in Part 1: We began with a joint probability distribution on a product of finite sets $p\colon X\times Y\to [0,1]$ and realized it as a matrix $M$ by setting $M_{ij} = \sqrt{p(x_i,y_j)}$. We called elements of our set $X=\{0,1\}$ prefixes and the elements of our set $Y=\{00,11,01,10\}$ suffixes so that $X\times Y$ is the set of all bitstrings of length 3.
We then observed that the matrix $M^\top M$ contains the marginal probability distribution of $Y$ along its diagonal. Moreover its eigenvectors define conditional probability distributions on $Y$. Likewise, $MM^\top$ contains marginals on $X$ along its diagonal, and its eigenvectors define conditional probability distributions on $X$.
The information in the eigenvectors of $M^\top M$ and $MM^\top$ is precisely the information that's destroyed when computing marginal probability in the usual way. The big reveal last time was that the matrices $M^\top M$ and $MM^\top$ are the quantum versions of marginal probability distributions.
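If you'd like to experiment, here is a small Python sketch of the construction; the joint distribution below is made up (it is not the one from Part 1), chosen only so that it sums to one:

    import numpy as np

    p = np.array([[0.25, 0.05, 0.10, 0.10],    # rows: prefixes X = {0, 1}
                  [0.05, 0.25, 0.10, 0.10]])   # columns: suffixes Y = {00, 11, 01, 10}
    M = np.sqrt(p)                             # M_ij = sqrt(p(x_i, y_j))

    rho_Y = M.T @ M                            # its diagonal is the marginal distribution on Y
    rho_X = M @ M.T                            # its diagonal is the marginal distribution on X
    print(np.diag(rho_Y), p.sum(axis=0))       # both print the marginal on Y
    print(np.diag(rho_X), p.sum(axis=1))       # both print the marginal on X

    w, v = np.linalg.eigh(rho_Y)               # unit eigenvectors of M^T M
    print(v**2)                                # squared entries of each column sum to 1,
                                               # i.e. each column defines a distribution on Y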
As we'll see today, the quantum version of a probability distribution is something called a density operator. The quantum version of marginalizing corresponds to "reducing" that operator to a subsystem. This reduction is a construction in linear algebra called the partial trace. I'll start off by explaining the partial trace. Then I'll introduce the basics of quantum probability theory. At the end, we'll tie it all back to our bitstring example.
In this article and the next, I'd like to share some ideas from the world of quantum probability.* The word "quantum" is pretty loaded, but don't let that scare you. We'll take a first—not second or third—look at the subject, and the only prerequisites will be linear algebra and basic probability. In fact, I like to think of quantum probability as another name for "linear algebra + probability," so this mini-series will explore the mathematics, rather than the physics, of the subject.**
In today's post, we'll motivate the discussion by saying a few words about (classical) probability. In particular, let's spend a few moments thinking about the following: marginal probability doesn't have memory.
What do I mean? We'll start with some basic definitions. Then I'll share an example that illustrates this idea.
A probability distribution (or simply, distribution) on a finite set $X$ is a function $p \colon X\to [0,1]$ satisfying $\sum_x p(x) = 1$. I'll use the term joint probability distribution to refer to a distribution on a Cartesian product of finite sets, i.e. a function $p\colon X\times Y\to [0,1]$ satisfying $\sum_{(x,y)}p(x,y)=1$. Every joint distribution defines a marginal probability distribution on one of the sets by summing probabilities over the other set. For instance, the marginal distribution $p_X\colon X\to [0,1]$ on $X$ is defined by $p_X(x)=\sum_yp(x,y)$, in which the variable $y$ is summed, or "integrated," out. It's this very process of summing or integrating out that causes information to be lost. In other words, marginalizing loses information. It doesn't remember what was summed away!
I'll illustrate this with a simple example. To do so, I need to give you some finite sets $X$ and $Y$ and a probability distribution on them.
|
Existence of solutions for a fractional hybrid boundary value problem via measures of noncompactness in Banach algebras Abstract
We study the existence of solutions for the following fractional hybrid boundary value problem
$$ \begin{cases} \displaystyle D_{0^+}^{\alpha}\bigg[\frac{x(t)}{f(t,x(t))}\bigg]+g(t,x(t))=0, &0< t< 1,\\ x(0)=x(1)=0, \end{cases} $$ where $1< \alpha\leq 2$ and $D_{0^+}^{\alpha}$ denotes the Riemann-Liouville fractional derivative. The main tool in our study is the technique of measures of noncompactness in Banach algebras. Some examples are presented to illustrate our results. Finally, we compare the results of the paper with the ones obtained by other authors.
Keywords
Banach algebras; Riemann-Liouville fractional derivative; measure of noncompactness; hybrid boundary value problem
|
While studying, I came upon this problem: "Find the maximum area of a rectangle circumscribed about a fixed rectangle of length 8 and width 4." I looked at the answer key, which showed that the maximum area possible was 72 square inches. Also, it stated to use cosine and sine functions to solve the problem. However, I do not get how to apply these two functions to this problem. Could someone explain how to achieve this answer?
Two vertices of the circumscribed rectangle are located on the semicircles centered at $O_1$ and $O_2$:
The area $S$ of a rectangle circumscribed around the rectangle $ABCD$ consists of the fixed part $S_{ABCD}=8\cdot 4=32$ plus the doubled sum of $S_{CED}$ and $S_{ADF}$, controlled by the angle $\phi$. Also, $\triangle CED$ is congruent to $\triangle ADF$. With the base of $\triangle CED$ fixed, the maximum of the area is reached when the height $|EG|$ is maximal, that is, $|EG|=r_1=\frac12|CD|=4$, when $\phi=\frac{\pi}{4}$.
$S_{\max}=8\cdot4+2\left(\tfrac12\cdot 8\cdot4+\tfrac12\cdot 4\cdot2\right)=72.$
Hint: Drawing a picture is usually helpful. So draw a rectangle centered at the origin, and draw a ray emanating from the origin at an angle $\theta$ from the positive $x$-axis (for simplicity, take $0<\theta<\pi/2$, and take $\theta$ just slightly less than $\pi/2$ so the ray is just a little less than vertical).
Now draw the circumscribed rectangle whose sides are parallel/perpendicular to this ray (two will be parallel, two perpendicular). The interior of the circumscribed rectangle contains the original rectangle and four right triangles. The angles in these right triangles will be $\pi/2$, $\theta$, and $\pi/2-\theta$. The hypotenuse of each of these right triangles is a side of the original rectangle. You can determine the area of each of the right triangles as $\frac12\times$base$\times$height, and the base and height of each of them can be determined using the hypotenuse (a side of the original rectangle) and the sine or cosine (as appropriate according to your drawing) of one of the angles ($\theta$ or $\pi/2 - \theta$ as appropriate according to your drawing).
Now find which $\theta$ produces the greatest total area.
In the figure below, the black rectangle is circumscribed around the blue rectangle:
The edges of the two rectangles make an angle $\theta$, as shown in the figure.
Now see if you can tell what the lengths of $p$, $q$, $r$, and $s$ are in this figure, and if you can see what that has to do with the cosine and sine functions.
(Note that if you draw any blue rectangle and circumscribe a rectangle around it, the sizes of the two rectangles do not change when you rotate the entire figure in the plane. I have chosen to rotate the figure so that the black rectangle is "straight", both because it was easier to draw this way and because I think it helps in understanding the hint above. But if you don't like the orientation of this figure, rotate it again so that the blue rectangle is back exactly where it was in the first place; the lengths of the labeled segments will still be the same as they are now.)
The circumscribed rectangle is the union of the original $a\times b$-rectangle and four similar right triangles with hypotenuses $a$, resp. $b$. The total area is maximal when one of these triangles, and therewith all of them, has maximal area, and this is the case when they are all isosceles.
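For completeness, here is a short worked check (my own, not part of the answers above) using the angle parametrization from the hints, for an original rectangle of dimensions $a\times b$:
$$S(\theta)=(a\cos\theta+b\sin\theta)(a\sin\theta+b\cos\theta)=ab+\frac{a^{2}+b^{2}}{2}\sin 2\theta,$$
which is maximized at $\theta=\pi/4$, where $S_{\max}=ab+\frac{a^2+b^2}{2}$; for $a=8$, $b=4$ this gives $32+40=72$, in agreement with the answer key.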
|
A linear transformation is one where
$T(\alpha x) = \alpha T(x)$
and
$T(x + y ) = T(x) + T(y)$
That's all there is to it.
$T((x, y)) = (2x + y, y, 0)$
Is linear because $T(c(x,y)) = T((cx,cy)) = (2cx + cy, cy, 0) = c(2x + y, y, 0) = cT((x,y))$
and $T((x,y) + (w,z)) = T((x+w, y + z)) = (2x + 2w + y + z, y + z, 0) = (2x +y,y,0) + (2w + z, z,0) = T((x,y)) + T((w,z))$
But $T((x,y)) = (2x + y, xy, 0)$ is not because
$T(c(x,y)) = (2cx + cy, c^2xy, 0) \ne (2cx + cy, cxy, 0) = cT((x,y))$.
Also $T((x,y) + (w,z)) \ne T((x,y)) + T((w,z))$
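As a quick numerical sanity check (my own addition, not part of the answer above), the two defining properties can be tested directly with NumPy for the two maps discussed here:

import numpy as np

def T_linear(v):
    x, y = v
    return np.array([2*x + y, y, 0.0])      # T((x, y)) = (2x + y, y, 0)

def T_nonlinear(v):
    x, y = v
    return np.array([2*x + y, x*y, 0.0])    # T((x, y)) = (2x + y, xy, 0)

rng = np.random.default_rng(0)
v, w, c = rng.normal(size=2), rng.normal(size=2), 3.0

for name, T in [("linear", T_linear), ("nonlinear", T_nonlinear)]:
    homogeneous = np.allclose(T(c * v), c * T(v))
    additive = np.allclose(T(v + w), T(v) + T(w))
    print(name, homogeneous, additive)       # linear: True True; nonlinear: False False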
====
A trick to notice is that if the transformation involves adding a nonzero scalar, it will not be linear. ($f(x) = x + c \implies f(2x) = 2x + c \ne 2x + 2c = 2f(x)$ when $c \ne 0$; likewise $f(x + y) = x + y + c \ne (x + c) + (y + c) = f(x) + f(y)$.)
If the transformation involves multiplying two terms together or taking a power of a term, it will not be linear. ($f(x) = x^2 \implies f(2x) = (2x)^2 \ne 2(x^2)$; $f(x + y) = (x + y)^2 \ne x^2 + y^2$.)
But if the transformation only multiplies terms by scalars and adds terms, it is linear.
$f((x,y)) = \sum_i (c_ix + d_iy) \implies f(a(x,y) + e(w,z)) = \sum_i \big(c_i(ax + ew) + d_i(ay + ez)\big) = a\sum_i(c_ix + d_iy) + e\sum_i(c_iw + d_iz) = af((x,y)) + ef((w,z))$
====
Intuitive. If you travel along a line, the distance you change vertically is proportional to the distance you change horizontally. Lines are consistent. If you double the input, you double the output. If you add two inputs together you get the two outputs added together.
If you travel along anything that isn't a line, that isn't true any more. That's why they call them "linear".
BUT there is a caveat. In high school algebra lines could have y-intercepts which throw these proportions off. It's only the "steepness" that is linear. In linear algebra they call those types of functions "affine". They are sort of "linear", but with a constant "slide" (displacement) added to them.
|
We can start by analyzing equation #15 from the Derivation of Ideal Gas Law below:
\(P=\frac{N}{V}\frac{m\overline{v^2}}{3}=\frac{N}{V}\frac{2E_{kinetic}}{3}=\frac{N}{V}kT=\frac{NkT}{V}\)
Because we want to derive an equation for the speed of gaseous molecules, the most important quantity that comes to mind is the speed $v$. Therefore, the equation will be
1. \(\frac{N}{V}\frac{m\overline{v^2}}{3}=\frac{N}{V}kT\)
Canceling out \(\frac{N}{V}\) from both side we get:
2. \(\frac{m\overline{v^2}}{3}=kT\Rightarrow{\overline{v^2}=\frac{3kT}{m}}\)
Since \(R = N_{a}k\), we can substitute \(k = R/N_{a}\):
3. \(\overline{v^2}=\frac{3(R/N_a)(T)}{m}=\frac{3RT}{N_{a}m}\)
Finally, to calculate the average speed we find \(v_{rms}\) (the root-mean-square speed). 4. \(v_{rms}=\sqrt{\overline{v^2}}=\sqrt{\frac{3RT}{N_{a}m}}=\sqrt{\frac{3RT}{M}}\)
\(M = N_{a}m\) in the equation above is the mass of one mole of molecules (the molar mass).
* The gas constant R must be expressed in the correct units for the situation in which it is being used. In the ideal gas equation \(PV=nRT\), it is logical to use units of (L)(atm)/(mol)(K).
* In regard to speed, however, energy units must be taken into account. Therefore, it is more appropriate to convert it to (J)/(mol)(K).
A. \(R=8.314\frac{J}{molK}\)
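As a small numeric illustration (my own; nitrogen is just an example choice), plugging values into the final formula gives roughly 515 m/s for N$_2$ near room temperature:

import math

R = 8.314      # gas constant, J/(mol K)
T = 298.0      # temperature, K
M = 0.028      # molar mass of N2 in kg/mol (approximate, example value)

v_rms = math.sqrt(3 * R * T / M)
print(round(v_rms))   # about 515 m/s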
|
Limits and Colimits, Part 2 (Definitions)
Welcome back to our mini-series on categorical limits and colimits! In Part 1 we gave an intuitive answer to the question, "What
are limits and colimits?" As we saw then, there are two main ways that mathematicians construct new objects from a collection of given objects: 1) take a "sub-collection," contingent on some condition or 2) "glue" things together. The first construction is usually a limit, the second is usually a colimit. Of course, this might've left the reader wondering, "Okay... but what are we taking the (co)limit of?" The answer? A diagram. And as we saw a couple of weeks ago, a diagram is really a functor.
We are now ready to give the formal definitions (along with more intuition). First, here's a bit of setup.
The Setup
In what follows, let $\mathsf{C}$ denote any category and let $\mathsf{I}$ be an indexing category. Note that given any object $X$ in $\mathsf{C}$, there is a
constant functor - let's also call it $X$ - from $\mathsf{I}$ to $\mathsf{C}$. This functor sends every object to $X$ and every morphism to the identity of $X$. Therefore, given any functor (ahem, diagram) $F:\mathsf{I}\to\mathsf{C}$, we can make sense of a natural transformation between $X$ and $F$. Such a natural transformation consists of a collection of morphisms between $X$ and the objects in the diagram $F$. Moreover, these morphisms must commute with all the morphisms that appear in the diagram. If the arrows point from $X$ to the diagram $F$, then the setup is called a cone over $F$, as we previously discussed here. If, on the other hand, the arrows point from the diagram $F$ to $X$, then it's called a cone under $F$ (or sometimes a cocone).
Notice that a cone is comprised of two things: an
object AND a collection of arrows to or from it. Now here's the punchline: The limit of a diagram $F$ is a special cone over $F$. The colimit of $F$ is a special cone under $F$.
Let's take a look at the formal definitions. I'll give a lite version first, followed by the full version.
Definitions (Lite Version)
Definition (limit): The limit of a diagram $F$ is the "shallowest" cone over $F$.
By "shallowest" (not a technical term) I mean in the sense of the picture to the right. There may be
many cones -- many objects with maps pointing down to the diagram $F$ (depicted as a blob) -- over $F$, but the limit is the cone that is as close as possible to the diagram $F$. Perhaps this is why "limit" is a good choice of terminology. You might imagine all the cones over $F$ as cascading down to the limit.
If we let gravity pull all the arrows down, then we obtain the dual notion: a colimit.
Definition (colimit): The colimit of a diagram $F$ is the "shallowest" cone under $F$.
Again, by "shallowest" (not a technical term) I mean in the sense of the picture on the left. There may be
many cones under $F$, but the colimit is the one that's closest to the diagram. It's the shallowest.
Okay, this is all very handwavy and not very informative. To capture the mathematics behind "shallowest," we'll use a universal property. I'll comment on intuition below.
Definition: Limit (Full Version)
The definitions themselves can be stated very succinctly. But like little onions, they have several layers, which we will peel away
slowly to minimize the shedding of tears.

Definition (1): The limit of a diagram $F:\mathsf{I}\to\mathsf{C}$ is the universal cone over $F$.

Let's unwind this a bit...

Definition (2): The limit of a diagram $F:\mathsf{I}\to\mathsf{C}$ is an object $\text{lim }F$ in $\mathsf{C}$ together with a natural transformation $\eta:\text{lim }F\Rightarrow F$ with the following property: for any object $X$ and for any natural transformation $\alpha\colon X\Rightarrow F$, there is a unique morphism $f\colon X\to \text{lim }F$ such that $\alpha=\eta\circ f$.

Let's unwind this a bit more...

Definition (3): The limit of a diagram $F:\mathsf{I}\to\mathsf{C}$ is an object $\text{lim }F$ in $\mathsf{C}$ together with morphisms $\eta_A:\text{lim }F\to A$, for each $A$ in the diagram, satisfying $\eta_B=\phi_{AB}\circ \eta_A$ for every morphism $\phi_{AB}\colon A\to B$ in the diagram. Moreover, these maps have the following property: for any object $X$ and for any collection of morphisms $\alpha_A:X\to A$ satisfying $\alpha_B=\phi_{AB}\circ \alpha_A$, there exists a unique morphism $f\colon X\to\text{lim }F$ such that $$\alpha_A=\eta_A\circ f \qquad\text{for all objects $A$ in the diagram.}$$
In summary, for all objects $X$ in $\mathsf{C}$:
Definition: Colimit (Full Version)

Definition (1): The colimit of a diagram $F:\mathsf{I}\to\mathsf{C}$ is the universal cone under $F$.

Let's unwind this a bit...

Definition (2): The colimit of a diagram $F:\mathsf{I}\to\mathsf{C}$ is an object $\text{colim }F$ in $\mathsf{C}$ together with a natural transformation $\epsilon:F\Rightarrow \text{colim }F$ with the following property: for any object $X$ and for any natural transformation $\beta\colon F\Rightarrow X$, there is a unique morphism $g\colon \text{colim }F\to X$ such that $\beta= g\circ \epsilon$.

Let's unwind this a bit more...

Definition (3): The colimit of a diagram $F:\mathsf{I}\to\mathsf{C}$ is an object $\text{colim }F$ in $\mathsf{C}$ together with morphisms $\epsilon_A:A\to\text{colim }F$, for each $A$ in the diagram, satisfying $\epsilon_A=\epsilon_B\circ \phi_{AB}$ for every morphism $\phi_{AB}\colon A\to B$ in the diagram. Moreover, these maps have the following property: for any object $X$ and for any collection of morphisms $\beta_A: A\to X$ satisfying $\beta_A=\beta_B\circ \phi_{AB}$, there exists a unique morphism $g\colon \text{colim }F\to X$ such that $$\beta_A= g \circ \epsilon_A\qquad\text{for all objects $A$ in the diagram.}$$
In summary, for all objects $X$ in $\mathsf{C}$:
a little intuition + a little exercise
I once heard (or read?) Eugenia Cheng refer to a universal property as a way to describe a
special role that an object—or in our case, a cone—plays. I like that analogy, and it's exactly what's going on with limits. (Similar sentiments hold for colimits.) Let me elaborate: Out of all the cones over a diagram $F$, there is exactly one that plays the role of limit, namely the pair $(\text{lim}F,\eta)$. Of course you might come across another cone $(X,\alpha)$ that plays a very similar role. Perhaps $\alpha$ behaves very similarly to $\eta.$ BUT -- and this is the punchline -- this behavior is no coincidence! The natural transformation $\alpha$ "behaves" like $\eta$ because it is built up from $\eta$! More precisely, it has $\eta$ as a factor: $\alpha=\eta\circ f$ for some unique morphism $f$!
By way of analogy, think of the role that the number 2 plays among the integers. Out of all the integers, we might say that 2 is the quintessential candidate for "an integer which possesses the quality of 'two-ness,'" that is, of being
even. Of course there are other integers $a$ that play a similar role. In particular, if $a$ is an even integer, then it also possesses the quality of "two-ness." But this is no coincidence! An even integer is even because it is built up from 2! More precisely, it has 2 as a factor: $a=2k$, for some unique integer $k$!
These two equations $a=2k$ and $\alpha=\eta\circ f$ are analogous. In fact, they're more than analogous....
EXERCISE
Let $\mathsf{C}$ be the category $2\mathbb{Z}$ of even integers. A morphism $n\to m$ in this category is an integer $k$ such that $n=mk$. For example, 3 defines an arrow $6\overset{3}{\longrightarrow} 2$ because $6=2\times 3$. On the other hand, there is no arrow $8\longrightarrow 6$.
Show that $2$ is the limit of a particular diagram in $2\mathbb{Z}$.
That is, come up with an indexing category $\mathsf{I}$ and a functor $\mathsf{I}\to2\mathbb{Z}$ whose limit is $2$.
(Next question: does this diagram have a colimit? If so, what is it?)
I'll close with one final thought. Once we get used to the ideas/definitions above, we discover that limits and colimits have
very familiar names, depending on the shape of the indexing category $\mathsf{I}$!
In the next two posts, I'll justify some of these claims by giving explicit examples of limits and colimits in the category $\mathsf{Set}$.
Until then!
In this series:
|
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
|
I want to trace electrostatic field lines emerging from 2D surfaces in 3D space. Eventually I want to find their intersection with an (uncharged) mesh.
The charge distribution $\sigma(x), x \in \Gamma \subset \mathbb{R}^3$ is defined by requiring a constant potential $\phi(x) = \phi_\Gamma\ \forall x\in\Gamma$.
For the potential field $\phi$ and the electrostatic field $E$ we have:
$$ \phi(x) = \int_\Gamma \sigma(y) \frac{1}{\|x-y\|} dy,\quad x\in\mathbb{R}^3 \\ E(x) = \nabla \phi(x) = \int_\Gamma \sigma(y) \frac{x-y}{\|x-y\|^3} dy,\quad x\in\mathbb{R}^3 $$
The potential function $\phi$ is harmonic ($\Delta\phi=0$), so the points on the charged surfaces are the maxima of $\phi$ and the electric field is $0$ there. To trace the field lines I place two particles with a small offset in/against normal direction. The particles have in general a non-zero $E$ field, which can be used to trace the field lines with a time stepping scheme
$$ x_{i+1} = x_i + E(x_i) $$
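For concreteness, here is a minimal sketch (mine, not from the question) of the tracing loop described by the stepping scheme above; E_approx stands in for the user's field evaluation $\tilde{E}$, and the explicit step size dt is an assumption added here for control over the step length:

import numpy as np

def trace_field_line(x0, E_approx, dt=1e-3, n_steps=10000):
    # explicit Euler steps: x_{i+1} = x_i + dt * E(x_i)
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        E = E_approx(xs[-1])
        if np.linalg.norm(E) == 0.0:   # stagnation point: stop tracing
            break
        xs.append(xs[-1] + dt * E)
    return np.array(xs)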
For a surface like a plane you get two field lines in opposite directions and the field vanishes at infinity. When the surface belongs to a solid object, e.g. a sphere, the potential field is constant and the electric field is $0$ on the inside, because $\phi$ is harmonic.
To calculate the field line, I use an approximation $\tilde{E}$ by replacing the integral over $\Gamma$ with a finite sum over triangles $T_i$:
$$ E(x) \approx \sum_j q_j \int_{T_j} \frac{x-y}{\|x-y\|^3} dy \approx \sum_j q_j \sum_k w_k \frac{x-y_k}{\|x-y_k\|^3} = \tilde{E}(x) $$
where $q_j = \int_{T_j} \sigma(y) dy$ and the last sum is a gauss quadrature of the integral over the triangle $T_j$. The potential $\tilde{\phi}$ can be approximated in the same way.
Now the problem is that as the functions are approximations, we have on the inside $\Omega \subset \mathbb{R}^3$ of the solid object
$$ \|\tilde{E}(x)\| > 0 \\ \tilde{\phi}(x) \not\equiv \text{const}, x \in \Omega $$
and the field $\tilde{E}$ often points outside the object.
While $\|\tilde{E}\|$ inside the object is typically small, the field lines still eventually cross the surface at a step $x_i \to x_{i+1}$.
The field has a discontinuity on the surface, so $\|E\|$ is zero on the inside and large on the outside near the surface. When tracing a particle in the approximated field, you have near the surface $\tilde{E}(x_i) \ll \tilde{E}(x_{i+1}), x_i \in \Omega, \ x_{i+1} \not\in \Omega$.
So the error between the position $x_k$ on an analytic field line and the position $\tilde{x}_k$ is small for $k \le i$ and huge for $k > i$.
The problem does not only affect solids, but creates local extrema in the space between non-closed charged surfaces as well, which leads to wrong field lines.
I thought of different ways to fix these numerical problems, but all of them seem to affect valid field lines as well:
- Rounding small values to $0$ can fix the field inside solids. But this will make local minima even worse.
- Disallowing field lines to cross surfaces. This fixes the solid problem but not the local extrema.
- Adding some inertia term could fix field lines getting stuck in local minima, but will worsen the problem of the non-zero field inside solids.
- Normalizing $E$ to $E/\|E\|$. This makes the problem much worse for errors inside solid objects and requires choosing a smaller timestep size for all steps.
The approximations which influence the accuracy of the solution:
- The resolution of the triangulation. Can be refined if needed, but makes the calculation much more costly as calculating the field at a single point is in $\mathcal{O}(n)$. So calculating one timestep of the field lines is in $\mathcal{O}(n^2)$.
- The approximation of the charges. I have a piecewise constant charge on each triangle and currently no way to approximate it with higher order.
- The approximation of the integrals in $\int_{T_j} q_j \frac{x-y}{\|x-y\|^3}dy$. I currently use 6th order gauss quadrature.
- The calculation of the charges $q_j$ by requiring $\phi(x)=\phi_\Gamma \ \forall x\in\Gamma$ is approximated by requiring $\phi(x_i)=\phi_\Gamma$, where $x_i \in T_i$ are the centers of the triangles.
|
Here, I give a quick review of the concept of a Martingale. A Martingale is a sequence of random variables satisfying a specific expectation conservation law. If one can identify a Martingale relating to some other sequence of random variables, its use can sometimes make quick work of certain expectation value evaluations.
This note is adapted from Chapter 2 of Stochastic Calculus and Financial Applications, by Steele.
Definition
Often in random processes, one is interested in characterizing a sequence of random variables $\{X_i\}$. The example we will keep in mind is a set of variables $X_i \in \{-1, 1\}$ corresponding to the steps of an unbiased random walk in one-dimension. A Martingale process $M_i = f(X_1, X_2, \ldots X_i)$ is a derived random variable on top of the $X_i$ variables satisfying the following conservation law
\begin{eqnarray} \tag{1} E(M_i | X_1, \ldots X_{i-1}) = M_{i-1}. \end{eqnarray}

For example, in the unbiased random walk example, if we take $S_n = \sum_{i=1}^n X_i$, then $E(S_n | X_1, \ldots X_{n-1}) = S_{n-1}$, so $S_n$ is a Martingale. If we can develop or identify a Martingale for a given $\{X_i\}$ process, it can often help us to quickly evaluate certain expectation values relating to the underlying process. Three useful Martingales follow.

- Again, the sum $S_n = \sum_{i=1}^n X_i$ is a Martingale, provided $E(X_i) = 0$ for all $i$.
- The expression $S_n^2 - n \sigma^2$ is a Martingale, provided $E(X_i) = 0$ and $E(X_i^2) = \sigma^2$ for all $i$. Proof: \begin{eqnarray} \tag{2} E(S_n^2 - n\sigma^2 | X_1, \ldots X_{n-1}) &=& \sigma^2 + 2 E(X_n) S_{n-1} + S_{n-1}^2 - n \sigma^2\\ & =& S_{n-1}^2 - (n-1) \sigma^2. \end{eqnarray}
- The product $P_n = \prod_{i=1}^n X_i$ is a Martingale, provided $E(X_i) = 1$ for all $i$. One example of interest is \begin{eqnarray} \tag{3} P_n = \frac{\exp \left ( \lambda \sum_{i=1}^n X_i\right)}{E(\exp \left ( \lambda X \right))^n}. \end{eqnarray} Here, $\lambda$ is a free tuning parameter. If we choose a $\lambda$ such that $E(\exp(\lambda X)) = 1$ for our process, we can get a particularly simple form.
In some games, we may want to setup rules that say we will stop the game at time $\tau$ if some condition is met at index $\tau$. For example, we may stop a random walk (initialized at zero) if the walker gets to either position $A$ or $-B$ (wins $A$ or loses $B$). This motivates defining the stopped Martingale as,
\begin{eqnarray} M_{n \wedge \tau} = \begin{cases} M_n & \text{if } \tau \geq n \\ M_{\tau} & \text{else}. \tag{4} \end{cases} \end{eqnarray} Here, we prove that if $M_n$ is a Martingale, then so is $M_{n \wedge \tau} $. This is useful because it will tell us that the stopped Martingale has the same conservation law as the unstopped version.
First, we note that if $A_i \equiv f_2(X_1, \ldots X_{i-1})$ is some function of the observations so far, then the transformed process
\begin{eqnarray} \tag{5} \tilde{M}_n \equiv M_0 + \sum_{i=1}^n A_i (M_i - M_{i-1}) \end{eqnarray} is also a Martingale. Proof: \begin{eqnarray} \tag{6} E(\tilde{M}_n | X_1, \ldots X_{n-1}) = A_n \left ( E(M_n | X_1, \ldots X_{n-1}) - M_{n-1} \right) + \tilde{M}_{n-1} = \tilde{M}_{n-1}. \end{eqnarray}
With this result we can prove the stopped Martingale is also a Martingale. We can do that by writing $A_i = 1(\tau \geq i)$ — where $1$ is the indicator function. Plugging this into the above, we get the transformed Martingale,
\begin{eqnarray} \nonumber \tag{7} \tilde{M}_n &=& M_0 + \sum_{i=1}^n 1(\tau \geq i) (M_i - M_{i-1}) \\ &=& \begin{cases} M_n & \text{if } \tau \geq n \\ M_{\tau} & \text{else}. \end{cases} \end{eqnarray} This is the stopped Martingale — indeed a Martingale, by the above.

Example applications

Problem 1
Consider an unbiased random walker that takes steps of size $1$. If we stop the walk as soon as he reaches either $A$ or $-B$, what is the probability that he is at $A$ when the game stops?
Solution: Let $\tau$ be the stopping time and let $S_n = \sum_{i=1}^n X_i$ be the walker’s position at time $n$. We know that $S_n$ is a Martingale. By the above, so then is $S_{n \wedge \tau}$, the stopped process Martingale. By the Martingale property
\begin{eqnarray} \tag{8} E(S_{n \wedge \tau}) = E(S_{i \wedge \tau}) \end{eqnarray} for all $i$. In particular, plugging in $i = 0$ gives $E(S_{n \wedge \tau}) = 0$. If we take $n \to \infty$, then \begin{eqnarray} \tag{9} \lim_{n \to \infty} E(S_{n \wedge \tau}) \to E(S_{\tau}) = 0. \end{eqnarray} But we also have \begin{eqnarray} \tag{10} E(S_{\tau}) = P(A)\cdot A - (1 - P(A))\, B. \end{eqnarray} Equating (9) and (10) gives \begin{equation} \tag{11} P(A) = \frac{B}{A + B}. \end{equation}
Problem 2

In the game above, what is the expected stopping time? Solution: Use the stopped version of the Martingale $S_n^2 - n \sigma^2$.
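For the record, here is my own sketch of that computation (not in the original note), using the stopped form of $S_n^2 - n\sigma^2$ with $\sigma^2 = 1$ for unit steps and the value of $P(A)$ from Problem 1:
\begin{eqnarray} 0 = E\left(S_{\tau}^2 - \tau\right) \implies E(\tau) = E(S_{\tau}^2) = A^2\, \frac{B}{A+B} + B^2\, \frac{A}{A+B} = AB. \end{eqnarray}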
Problem 3
In a biased version of the random walk game, what is the probability of stopping at $A$? Solution: Use the stopped Martingale of form $P_n = \frac{\exp \left ( \lambda \sum_{i=1}^n X_i\right)}{E(\exp \left ( \lambda X \right))^n}$, with $\exp[\lambda] = q/p$, where $p = 1-q$ is the probability of a step to the right.
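Sketching that out (my own working, not from the original post): with $\exp[\lambda] = q/p$ we have $E(\exp(\lambda X)) = p\,(q/p) + q\,(p/q) = 1$, so the Martingale reduces to $M_n = (q/p)^{S_n}$, and optional stopping gives
\begin{eqnarray} 1 = E\left((q/p)^{S_\tau}\right) = P(A)\,(q/p)^{A} + (1-P(A))\,(q/p)^{-B} \implies P(A) = \frac{1-(q/p)^{-B}}{(q/p)^{A}-(q/p)^{-B}}, \end{eqnarray}
which reduces to $B/(A+B)$ as $p \to 1/2$.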
|
Let the random variable $X$ has Rician distribution (unit power in direct and scattered paths), whose PDF is given by
$$f_X(x)=\frac{2x}{\alpha}\text{exp}\left(\frac{-(x^2+v^2)}{\alpha}\right)I_0\left(\frac{2xv}{\alpha}\right)$$ with $\frac{v^2}{\alpha}=1$ and $I_0(z)$ is the modified Bessel function of the first kind with order zero.
what is the PDF of $Y=X^2$?
And what is $\mathbb{E}[Y^{\delta}]$ when $0<\delta<1$?
Note: when $v^2=0$, $X$ has Rayleigh distribution.
************************************************************************
Some additional Info:
We know that the square of a Rayleigh random variable has exponential distribution, i.e.,
Let the random variable $X$ have Rayleigh distribution with PDF $$f_X(x)=\frac{2x}{\alpha}e^{-x^2/{\alpha}}.$$
Then the random variable $Y=X^2$ has the PDF given by $$f_Y(y)=\frac{1}{\alpha}e^{-y/{\alpha}}.$$
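For reference, that statement follows from the usual change-of-variables formula with $y=x^2$ (my own working, not part of the question):
$$f_Y(y) = f_X(\sqrt{y})\,\left|\frac{dx}{dy}\right| = \frac{2\sqrt{y}}{\alpha}\,e^{-y/\alpha}\cdot\frac{1}{2\sqrt{y}} = \frac{1}{\alpha}\,e^{-y/\alpha},\qquad y>0.$$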
For an exponentially distributed r.v. $Y$ with mean $\mathbb{E}[Y]=1$
$$\mathbb{E}[Y^{\delta}]=\Gamma[1+\delta].$$
|
In this answer we will only consider the leading semi-classical approximation of a $1$-dimensional problem with Hamiltonian
$$ H(x,p) ~=~ \frac{p^2}{2m}+ \Phi(x), $$
where $\Phi$ is a potential. Semi-classically, the number of states $N(E)$ below energy-level $E$ is given by the area of phase space that is classically accessible, divided by Planck's constant $h$,
$$ N(E) ~\approx~ \iint_{H(x,p)\leq E} \frac{dx~dp}{h}. \qquad (1)$$
[Here we ignore the Maslov index, also known as the metaplectic correction, which e.g. yields the zero-point energy in the simple harmonic oscillator (SHO) spectrum.] Let
$$ V_0~:=~ \inf_{x\in\mathbb{R}} ~\Phi(x) $$
be the infimum of the potential energy. Let
$$\ell(V)~:=~\lambda(\{x\in\mathbb{R} \mid \Phi(x) \leq V\}) $$
be the length of the classically accessible position region at potential energy-level $V$. [Technically, the length $\ell(V)$ is the Lebesgue measure $\lambda$ of the preimage
$$\Phi^{-1}(]-\infty,V])~:=~ \{x\in\mathbb{R} \mid \Phi(x) \leq V\},$$
which does not necessarily have to be a connected interval.]
Example 1: If the potential $\Phi(x)=\Phi(-x)$ is an even function and is strongly monotonically increasing for $x\geq 0$, then the accessible length $\ell(V)=2\Phi^{-1}(V)$ is twice the positive inverse branch of $\Phi$.
Example 2: If the potential has a hard wall $\Phi(x)=+\infty$ for $x<0$, and is strongly monotonically increasing for $x\geq 0$, then the accessible length $\ell(V)=\Phi^{-1}(V)$ is the positive inverse branch of $\Phi$.
Example 3: If the potential $\Phi(x)$ is strongly monotonically decreasing for $x\leq0$ and strongly monotonically increasing for $x\geq 0$, then the accessible length $\ell(V)=\Phi_{+}^{-1}(V)-\Phi_{-}^{-1}(V)$ is the difference of the two inverse branches of $\Phi$.
In Example 1 and 2, if we would be able to determine the accessible length function $\ell(V)$, then we would also be able to generate the corresponding potential $\Phi(x)$ as OP asks.
The main claim is that we can reconstruct the accessible length $\ell(V)$ from $N(E)$, and vice-versa.
$$N(E) ~\approx ~\frac{\sqrt{2m}}{h} \int_{V_0}^E \frac{\ell(V)~dV}{\sqrt{E-V}},\qquad (2) $$
$$ \ell(V) ~\approx ~\hbar\sqrt{\frac{2}{m}} \frac{d}{dV}\int_{V_{0}}^V \frac{N(E)~dE}{\sqrt{V-E}}.\qquad (3) $$
[The $\approx$ signs are to remind us of the semi-classical approximation (1) we made. The formulas can be written in terms of fractional derivatives as Jose Garcia points out in his answer.]
Proof of eq.(2):
$$ h ~N(E) ~\stackrel{(1)}{\approx}~ 2\int_0^{\sqrt{2m(E-V_0)}} \left. \ell(V) \right|_{V=E-\frac{p^2}{2m}}~dp$$ $$~\stackrel{V=E-\frac{p^2}{2m}}{=}~2\int_{V_0}^E \frac{\ell(V)~dV}{v}~=~\sqrt{2m}\int_{V_0}^E \frac{\ell(V)~dV}{\sqrt{E-V}}, $$
because $dV~=~ - v~dp$ with speed $v~:=~\frac{p}{m}~=~\sqrt{\frac{2(E-V)}{m}}$.
Proof of eq.(3): Notice that
$$ \int_{V^{\prime}}^V \frac{dE}{\sqrt{(V-E)(E-V^{\prime})}} ~\stackrel{E=V \sin^2\theta + V^{\prime} \cos^2\theta }{=}~ 2 \int_0^{\frac{\pi}{2}} d\theta ~=~ \pi.\qquad (4) $$
Then
$$\frac{h}{\sqrt{2m}}\int_{V_0}^V \frac{N(E)~dE}{\sqrt{V-E}} ~\stackrel{(2)}{\approx}~
\int_{V_0}^{V}\frac{dE}{\sqrt{V-E}}\int_{V_0}^{E} \frac{\ell(V^{\prime})~dV^{\prime}}{\sqrt{E-V^{\prime}}} $$$$~\stackrel{{\rm Fubini}}{=}~\int_{V_0}^V \ell(V^{\prime})~dV^{\prime}\int_{V^{\prime}}^V \frac{dE}{\sqrt{(V-E)(E-V^{\prime})}} ~\stackrel{(4)}{=}~ \pi \int_{V_0}^V \ell(V^{\prime})~dV^{\prime},\qquad (5)$$
where we rely on Fubini's Theorem to change the order of integrations. Finally, differentiation wrt. $V$ on both sides of eq. (5) yields eq. (3).
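As a quick consistency check of eqs. (2) and (3) (my addition, not part of the original answer), take the SHO potential $\Phi(x)=\frac{1}{2}m\omega^2x^2$, for which Example 1 gives $\ell(V)=2\sqrt{2V/(m\omega^2)}$. Then
$$ N(E) ~\approx~ \frac{\sqrt{2m}}{h}\int_0^E \frac{2\sqrt{2V/(m\omega^2)}}{\sqrt{E-V}}~dV ~=~ \frac{4}{h\omega}\int_0^E \sqrt{\frac{V}{E-V}}~dV ~=~ \frac{4}{h\omega}\cdot\frac{\pi E}{2} ~=~ \frac{E}{\hbar\omega}, $$
which reproduces the SHO level counting up to the ignored zero-point (Maslov) correction.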
|
This tutorial illustrates how impurity and information gain can be calculated in Python using the
NumPy and
pandas modules for information-based machine learning. The impurity calculation methods described in here are as follows:
- Entropy
- Gini index
We start off with a simple example, which is followed by the Vegetation example in Chapter 5 in the textbook.
Suppose you are going out for a picnic and you are preparing a basket of some delicious fruits.
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
import numpy as np
lst = ['apple']*3 + ['orange']*2 + ['banana']*2
fruits = pd.Series(lst)
print(fruits)
0 apple 1 apple 2 apple 3 orange 4 orange 5 banana 6 banana dtype: object
Here is the relative frequency of each fruit in the basket, which can be considered as the probability distribution of the fruits.
probs = fruits.value_counts(normalize=True)
probs
apple 0.428571 banana 0.285714 orange 0.285714 dtype: float64
If you like, you can define the probability distribution yourself as below.
probs_by_hand = [3/7, 2/7, 2/7]
print(probs_by_hand)
[0.42857142857142855, 0.2857142857142857, 0.2857142857142857]
Recall that Shannon's model defines entropy as $$H(x) := - \sum_{i=1}^{\ell} P(t=i) \times \log_{2}(P(t=i))$$ The idea with entropy is that the more heterogeneous and impure a feature is, the higher the entropy. Conversely, the more homogeneous and pure a feature is, the lower the entropy.
The following calculation shows how impurity of this fruit basket can be computed using the entropy criterion.
entropy = -1 * np.sum(np.log2(probs) * probs)
entropy
1.5566567074628228
The Gini impurity index is defined as follows: $$ \mbox{Gini}(x) := 1 - \sum_{i=1}^{\ell}P(t=i)^{2}$$ The idea with the Gini index is the same as in entropy in the sense that the more heterogeneous and impure a feature is, the higher the Gini index.
A nice property of the Gini index is that it is always between 0 and 1, and this may make it easier to compare Gini indices across different features.
The impurity of our fruit basket using Gini index is calculated as below.
gini_index = 1 - np.sum(np.square(probs))
gini_index
0.653061224489796
In comparison, let's compute impurity of another fruit basket with 7 different fruits with equal frequency.
lst2 = ['apple', 'orange', 'banana', 'mango', 'blueberry', 'watermelon', 'pear']
fruits2 = pd.Series(lst2)
print(fruits2)
probs2 = fruits2.value_counts(normalize=True)
probs2
0 apple 1 orange 2 banana 3 mango 4 blueberry 5 watermelon 6 pear dtype: object pear 0.142857 mango 0.142857 orange 0.142857 watermelon 0.142857 blueberry 0.142857 banana 0.142857 apple 0.142857 dtype: float64
entropy = -1 * np.sum(np.log2(probs2) * probs2)
entropy
2.807354922057604
gini_index = 1 - np.sum(np.square(probs2))
gini_index
0.8571428571428572
As expected, both the entropy and the Gini index of the second fruit basket are higher than those of the first fruit basket.
We now work out the details of the impurity calculations for the Vegetation dataset in Chapter 5 in the textbook.
Let's first import the dataset from the Cloud.
import pandas as pd
import io
import requests

# if you run into any SSL certification issues,
# you may need to run the following command for a Mac OS installation.
# $/Applications/Python\ 3.6/Install\ Certificates.command

# how to read a csv file from a github account
df_url = 'https://raw.githubusercontent.com/vaksakalli/ml_tutorials/master/FMLPDA_Table4_3.csv'
url_content = requests.get(df_url).content
# print(url_content)
df = pd.read_csv(io.StringIO(url_content.decode('utf-8')))
df
stream slope elevation vegetation 0 False steep high chapparal 1 True moderate low riparian 2 True steep medium riparian 3 False steep medium chapparal 4 False flat high conifer 5 True steep highest conifer 6 True steep high chapparal
For convenience, we define a function called
compute_impurity() that calculates impurity of a feature using either entropy or gini index.
def compute_impurity(feature, impurity_criterion):
    """
    This function calculates impurity of a feature.
    Supported impurity criteria: 'entropy', 'gini'
    input: feature (this needs to be a Pandas series)
    output: feature impurity
    """
    probs = feature.value_counts(normalize=True)

    if impurity_criterion == 'entropy':
        impurity = -1 * np.sum(np.log2(probs) * probs)
    elif impurity_criterion == 'gini':
        impurity = 1 - np.sum(np.square(probs))
    else:
        raise ValueError('Unknown impurity criterion')

    return(round(impurity, 3))

# let's do two quick examples.
print('impurity using entropy:', compute_impurity(fruits, 'entropy'))
print('impurity using gini index:', compute_impurity(fruits, 'gini'))

# how to test for an incorrect compute_impurity_criterion value:
# print('impurity using gini index:', compute_impurity(df['stream'], 'foo'))
impurity using entropy: 1.557 impurity using gini index: 0.653
Let's calculate entropy of the target feature "vegetation" using our new function.
target_entropy = compute_impurity(df['vegetation'], 'entropy')
target_entropy
1.557
Let's compute the information gain for splitting based on a descriptive feature to figure out the best feature to split on. For this task, we do the following:
1. Compute impurity of the target feature (using either entropy or gini index).
2. Partition the dataset based on unique values of the descriptive feature.
3. Compute impurity for each partition.
4. Compute the remaining impurity as the weighted sum of impurity of each partition.
5. Compute the information gain as the difference between the impurity of the target feature and the remaining impurity.
We will define another function to achieve this, called
comp_feature_information_gain().
As an example, let's have a look at the levels of the "elevation" feature.
df['elevation'].value_counts()
high 3 medium 2 highest 1 low 1 Name: elevation, dtype: int64
Let's see how the partitions look like for this feature and what the corresponding calculations are using the entropy split criterion.
for level in df['elevation'].unique():
    print('level name:', level)
    df_feature_level = df[df['elevation'] == level]
    print('corresponding data partition:')
    print(df_feature_level)
    print('partition target feature impurity:', compute_impurity(df_feature_level['vegetation'], 'entropy'))
    print('partition weight:', str(len(df_feature_level)) + '/' + str(len(df)))
    print('====================')
level name: high corresponding data partition: stream slope elevation vegetation 0 False steep high chapparal 4 False flat high conifer 6 True steep high chapparal partition target feature impurity: 0.918 partition weight: 3/7 ==================== level name: low corresponding data partition: stream slope elevation vegetation 1 True moderate low riparian partition target feature impurity: -0.0 partition weight: 1/7 ==================== level name: medium corresponding data partition: stream slope elevation vegetation 2 True steep medium riparian 3 False steep medium chapparal partition target feature impurity: 1.0 partition weight: 2/7 ==================== level name: highest corresponding data partition: stream slope elevation vegetation 5 True steep highest conifer partition target feature impurity: -0.0 partition weight: 1/7 ====================
The idea here is that, for each one of the 4 data partitions above,
- We compute their impurity with respect to the target feature as a stand-alone dataset.
- We weigh these impurities with the relative number of observations in each partition. The relative number of observations is calculated as the number of observations in the partition divided by the total number of observations in the entire dataset. For instance, the weight of the first partition is 3/7.
- We add up these weighted impurities and call it the remaining impurity for this feature.
For instance, remaining impurity as measured by entropy for the elevation feature is 0.918 x (3/7) + 1.0 x (2/7) = 0.679.
Information gain is then calculated as 1.557 - 0.679 = 0.878.
Now we are ready to define our function. There is a bit of coding in here, but we can assure you that trying to figure out how things work in here will be rewarding to improve your Python programming skills.
def comp_feature_information_gain(df, target, descriptive_feature, split_criterion):
    """
    This function calculates information gain for splitting on a particular
    descriptive feature for a given dataset and a given impurity criteria.
    Supported split criterion: 'entropy', 'gini'
    """
    print('target feature:', target)
    print('descriptive_feature:', descriptive_feature)
    print('split criterion:', split_criterion)

    target_entropy = compute_impurity(df[target], split_criterion)

    # we define two lists below:
    # entropy_list to store the entropy of each partition
    # weight_list to store the relative number of observations in each partition
    entropy_list = list()
    weight_list = list()

    # loop over each level of the descriptive feature
    # to partition the dataset with respect to that level
    # and compute the entropy and the weight of the level's partition
    for level in df[descriptive_feature].unique():
        df_feature_level = df[df[descriptive_feature] == level]
        entropy_level = compute_impurity(df_feature_level[target], split_criterion)
        entropy_list.append(round(entropy_level, 3))
        weight_level = len(df_feature_level) / len(df)
        weight_list.append(round(weight_level, 3))

    print('impurity of partitions:', entropy_list)
    print('weights of partitions:', weight_list)

    feature_remaining_impurity = np.sum(np.array(entropy_list) * np.array(weight_list))
    print('remaining impurity:', feature_remaining_impurity)

    information_gain = target_entropy - feature_remaining_impurity
    print('information gain:', information_gain)

    print('====================')

    return(information_gain)
Now that our function has been defined, we will call it for each descriptive feature in the dataset. First let's call it using the entropy split criteria.
split_criterion = 'entropy'
for feature in df.drop(columns='vegetation').columns:
    feature_info_gain = comp_feature_information_gain(df, 'vegetation', feature, split_criterion)
target feature: vegetation descriptive_feature: stream split criterion: entropy impurity of partitions: [0.918, 1.5] weights of partitions: [0.429, 0.571] remaining impurity: 1.250322 information gain: 0.306678 ==================== target feature: vegetation descriptive_feature: slope split criterion: entropy impurity of partitions: [1.371, -0.0, -0.0] weights of partitions: [0.714, 0.143, 0.143] remaining impurity: 0.9788939999999999 information gain: 0.578106 ==================== target feature: vegetation descriptive_feature: elevation split criterion: entropy impurity of partitions: [0.918, -0.0, 1.0, -0.0] weights of partitions: [0.429, 0.143, 0.286, 0.143] remaining impurity: 0.6798219999999999 information gain: 0.877178 ====================
Now let's call it using the gini index split criteria.
split_criteria = 'gini'
for feature in df.drop(columns='vegetation').columns:
    feature_info_gain = comp_feature_information_gain(df, 'vegetation', feature, split_criteria)
target feature: vegetation descriptive_feature: stream split criterion: gini impurity of partitions: [0.444, 0.625] weights of partitions: [0.429, 0.571] remaining impurity: 0.5473509999999999 information gain: 0.1056490000000001 ==================== target feature: vegetation descriptive_feature: slope split criterion: gini impurity of partitions: [0.56, 0.0, 0.0] weights of partitions: [0.714, 0.143, 0.143] remaining impurity: 0.39984000000000003 information gain: 0.25316 ==================== target feature: vegetation descriptive_feature: elevation split criterion: gini impurity of partitions: [0.444, 0.0, 0.5, 0.0] weights of partitions: [0.429, 0.143, 0.286, 0.143] remaining impurity: 0.333476 information gain: 0.31952400000000003 ====================
We observe that, with both the entropy and gini index split criteria, the highest information gain occurs with the "elevation" feature.
This is for the split at the root node of the corresponding decision tree. In subsequent splits, the above procedure is repeated with the subset of the entire dataset in the current branch until the termination condition is reached.
Please refer to Chapter 4: Information-Based Learning for more details.
www.featureranking.com
|
Vopěnka's principle is a large cardinal axiom at the upper end of the large cardinal hierarchy that is particularly notable for its applications to category theory. In a set theoretic setting, the most common definition is the following:
For any language $\mathcal{L}$ and any proper class $C$ of $\mathcal{L}$-structures, there are distinct structures $M, N\in C$ and an elementary embedding $j:M\to N$.
For example, taking $\mathcal{L}$ to be the language with one unary and one binary predicate, we can consider for any ordinal $\eta$ the class of structures $\langle V_{\alpha+\eta},\{\alpha\},\in\rangle$, and conclude from Vopěnka's principle that a cardinal that is at least $\eta$-extendible exists. In fact if Vopěnka's principle holds then there is a stationary proper class of extendible cardinals; bounding the strength of the axiom from above, we have that if $\kappa$ is almost huge, or even almost-high-jump, then $V_\kappa$ satisfies Vopěnka's principle.
Formalizations
As stated above and from the point of view of ZFC, this is actually an axiom schema, as we quantify over proper classes, which from a purely ZFC perspective means definable proper classes. A somewhat stronger alternative is to view Vopěnka's principle as an axiom in second-order set theory capable to dealing with proper classes, such as von Neumann-Gödel-Bernays set theory. This is a strictly stronger assertion. [1] Finally, one may relativize the principle to a particular cardinal, leading to the concept of a Vopěnka cardinal.
Vopěnka's principle can be formalized in first-order set theory as a schema, where for each natural number $n$ in the meta-theory there is a formula expressing that Vopěnka’s Principle holds for all $Σ_n$-definable (with parameters) classes.[1]
Vopěnka principle VP and the Vopěnka scheme VS are not equivalent, but they are equiconsistent and have the same first-order consequences (GBC+VP is conservative over GBC+VS and ZFC+VS, VP makes no sense in the context of ZFC):[2]
- If ZFC and the Vopěnka scheme hold, then there is a class forcing extension, adding classes but no sets, in which GBC and the Vopěnka scheme hold, but the Vopěnka principle fails.
- If ZFC and the Vopěnka scheme hold, then there is a class forcing extension, adding classes but no sets, in which GBC and the Vopěnka principle hold.
A Vopěnka cardinal is an inaccessible cardinal $δ$ such that $\langle V_δ , ∈, V_{δ+1} \rangle$ is a model of VP (and of Morse–Kelley set theory). A Vopěnka-scheme cardinal is a cardinal $δ$ such that $\langle V_δ , ∈ \rangle$ is a model of ZFC+VS.[2]
Vopěnka cardinals
An inaccessible cardinal $\kappa$ is a
Vopěnka cardinal if and only if $V_\kappa$ satisfies Vopěnka's principle, that is, where we interpret the proper classes of $V_\kappa$ as the subsets of $V_\kappa$ of cardinality $\kappa$. Because of a characterization of Vopěnka's principle in terms of graphs, a cardinal $\kappa$ is Vopěnka if and only if $\kappa$ is inaccessible and any $\kappa$-sized set $G$ of $<\kappa$-sized nonisomorphic graphs has some $g_0$ and $g_1$ with $g_0$ a proper subgraph of $g_1$. (Need to cite sources)
As we mentioned above, every almost huge cardinal is a Vopěnka cardinal.
Equivalent statements

Extendible cardinals
The schema form of Vopěnka's principle is equivalent to the existence of a proper class of $C^{(n)}$-extendible cardinals for every $n$; indeed there is a level-by-level stratification of Vopěnka's principle, with Vopěnka's principle for $\Sigma_{n+2}$-definable classes corresponding to the existence of a $C^{(n)}$-extendible cardinal greater than the ranks of the parameters (see section "Variants"). [4]
The Vopěnka principle is equivalent over GBC to both of the following statements:[2]
- For every class $A$, there is an $A$-extendible cardinal.
- For every class $A$, there is a stationary proper class of $A$-extendible cardinals.

Strong Compactness of Logics
Vopěnka's principle is equivalent to the following statement about logics as well:
For every logic $\mathcal{L}$, there is a cardinal $\mu_{\mathcal{L}}$ such that for any language $\tau$ and any $\mathcal{L}(\tau)$-theory $T$, $T$ is satisfiable if and only if every $t\subseteq T$ such that $|t|<\mu_{\mathcal{L}}$ is satisfiable. [5]
This $\mu_{\mathcal{L}}$ is called the strong compactness cardinal of $\mathcal{L}$. Vopěnka's principle therefore is equivalent to every logic having a strong compactness cardinal. This is very similar in definition to the Löwenheim–Skolem number of $\mathcal{L}$, although it is not guaranteed to exist.
Here are some examples of strong compactness cardinals of specific logics:
- If $\kappa\leq\lambda$ and $\lambda$ is strongly compact or $\aleph_0$, then the strong compactness cardinal of $\mathcal{L}_{\kappa,\kappa}$ is at most $\lambda$.
- Similarly, if $\kappa\leq\lambda$ and $\lambda$ is extendible, then for any natural number $n$, the strong compactness cardinal of $\mathcal{L}^n_{\kappa,\kappa}$ ($\mathcal{L}_{\kappa,\kappa}$ with $n+1$-th order logic) is at most $\lambda$. Therefore for any natural number $n$, the strong compactness cardinal of $n+1$-th order finitary logic is at most the least extendible cardinal.

Locally Presentable Categories
Vopěnka's principle is equivalent to the axiom stating "no large full subcategory $C$ of any locally presentable category is discrete." (Sources needed). Equivalently, no large full subcategory of Graph (the category of all graphs) is discrete; that is, for any proper class of simple directed graphs, there is at least one pair of nonequal graphs $G$ and $H$ in the class such that $G$ is a subgraph of $H$. This is a $\Pi^1_1$ statement, so the least Vopěnka cardinals are not even weakly compact (although the least weakly compact cardinal is much, much, much smaller than the least Vopěnka cardinal, if it exists).
Intuitively, a "category" is just a class of mathematical objects with some notion of "morphism", "homomorphism", "isomorphism", (etc.). For example, in Set, the category of all sets, homomorphisms are just injections, and isomorphisms are bijections. In categories of groups and models, homomorphisms and isomorphisms share their actual names.
A "locally small category" $C$ is one with only set-many morphisms between any two objects of $C$. This is one where the objects of $C$ behave "set-like" in the sense that, usually, the number of morphisms between two set-sized objects is at most the number of functions between their universes (like in groups and in graphs). A "locally presentable category" is a locally small category with a couple more really nice properties; you can "generate" all of the objects from set-many objects in the category.
Vopěnka's principle intuitively states that if you have a locally presentable category $C$, then any proper class of objects of $C$ has some nonisomorphic objects $c$ and $d$ where $c$ has a morphism into $d$.
Woodin cardinals
There is a strange connection between the Woodin cardinals and the Vopěnka cardinals. In particular, Vopěnkaness is equivalent to two strengthening variants of Woodinness, namely the Woodin for Supercompactness cardinals and the $2$-fold Woodin cardinals. As a result, every Vopěnka cardinal is Woodin.
Elementary Embeddings Between Ranks
An equivalent statement to Vopěnka's principle is that for any proper class $C\subseteq ORD$, there are $\alpha\in C$, $\beta\in C$, and a nontrivial elementary embedding $j:\langle V_\alpha;\in,P\rangle\rightarrow\langle V_\beta;\in,P\rangle$. Vopěnka's principle quite obviously implies this. The reason the converse holds is because every elementary embedding can be "encoded" (in a sense) into one of these. For more information, see [6].
Other points to note
Whilst Vopěnka cardinals are very strong in terms of consistency strength, a Vopěnka cardinal need not even be weakly compact. Indeed, the definition of a Vopěnka cardinal is a $\Pi^1_1$ statement over $V_\kappa$ (Vopěnka's principle itself is $\Pi^1_1$), and $\Pi^1_1$-indescribability is one of the equivalent definitions of weak compactness. Thus, the least weakly compact Vopěnka cardinal must have (many) other Vopěnka cardinals less than it.
Variants
(Boldface) $VP(\mathbf{Σ_n})$ denotes the fragment of Vopěnka’s Principle for $Σ_n$-definable classes and (lightface) $VP(Σ_n)$ is the weaker principle, where parameters are not allowed in the definition of the class (with analogous definitions for $Π_n$ and $∆_n$).
Vopěnka-like principles $VP(κ, \mathbf{Σ_n})$ for cardinal $κ$ state that for every proper class $\mathcal{C}$ of structures of the same type that is $Σ_n$-definable with parameters in $H_κ$ (the collection of all sets of hereditary size less than $κ$), $\mathcal{C}$ reflects below $κ$, namely for every $A ∈ C$ there is $B ∈ H_κ ∩ C$ that elementarily embeds into $A$.
Results:
- For every $Γ$, $VP(κ, Γ)$ for some $κ$ implies $VP(Γ)$.
- $VP(κ, \mathbf{Σ_1})$ holds for every uncountable cardinal $κ$.
- $VP(Π_1) \iff VP(κ, Σ_2)$ for some $κ \iff$ there is a supercompact cardinal.
- $VP(\mathbf{Π_1}) \iff VP(κ, \mathbf{Σ_2})$ for a proper class of cardinals $κ \iff$ there is a proper class of supercompact cardinals.
- For $n ≥ 1$, the following are equivalent:
  - $VP(Π_{n+1})$
  - $VP(κ, \mathbf{Σ_{n+2}})$ for some $κ$
  - There is a $C(n)$-extendible cardinal.
- The following are equivalent:
  - $VP(Π_n)$ for every $n$.
  - $VP(κ, \mathbf{Σ_n})$ for a proper class of cardinals $κ$ and for every $n$.
  - $VP$
  - For every $n$, there is a $C(n)$-extendible cardinal.

Generic
(Information in this section from [7] unless noted otherwise)
Definitions:
- The Generic Vopěnka's Principle states that for every proper class $\mathcal{C}$ of structures of the same type there are $B ≠ A$, both in $\mathcal{C}$, such that $B$ elementarily embeds into $A$ in some set-forcing extension.
- (Boldface) $gVP(\mathbf{Σ_n})$ and (lightface) $gVP(Σ_n)$ (with analogous definitions for $Π_n$ and $∆_n$) as well as $gVP(κ, \mathbf{Σ_n})$ are generic analogues of the corresponding weakenings of Vopěnka's principle.
- For transitive $∈$-structures $B$ and $A$ and an elementary embedding $j : B → A$, we say that $j$ is overspilling if it has a critical point and $j(crit(j)) > rank(B)$.
- The principle $gVP^∗(Σ_n)$ states that for every $Σ_n$-definable (without parameters) proper class $\mathcal{C}$ of transitive $∈$-structures, there are $B ≠ A$ in $\mathcal{C}$ such that there is an overspilling elementary embedding $j : B → A$ in some set-forcing extension. ($gVP^∗(Π_n)$, $gVP^∗(\mathbf{Π_n})$, and $gVP^∗(κ, \mathbf{Σ_n})$ are defined analogously.)
Results:
- The following are equiconsistent:
  - $gVP(Π_n)$
  - $gVP(κ, \mathbf{Σ_{n+1}})$ for some $κ$
  - There is an $n$-remarkable cardinal.
- The following are equiconsistent:
  - $gVP(\mathbf{Π_n})$
  - $gVP(κ, \mathbf{Σ_{n+1}})$ for a proper class of $κ$
  - There is a proper class of $n$-remarkable cardinals.
- $κ$ is the least cardinal for which $gVP^∗(κ, \mathbf{Σ_{n+1}})$ holds $\iff$ $κ$ is the least $n$-remarkable cardinal.
- If $gVP^∗(Π_n)$ holds, then there is an $n$-remarkable cardinal.
- If $gVP^∗(\mathbf{Π_n})$ holds, then there is a proper class of $n$-remarkable cardinals.
- If there is a proper class of $n$-remarkable cardinals, then $gVP(Σ_{n+1})$ holds.[8]
- If $gVP(Σ_{n+1})$ holds, then either there is a proper class of $n$-remarkable cardinals or there is a proper class of virtually rank-into-rank cardinals.[8]
- If $0^\#$ exists, then $L$, equipped with only its definable classes, is a model of $gVP$. (By elementary-embedding absoluteness results. The hypothesis can be weakened, because one can chop off the universe at any Silver indiscernible and use reflection.)[8]
- The generic Vopěnka scheme is equivalent over ZFC to the scheme asserting of every definable class $A$ that there is a proper class of weakly virtually $A$-extendible cardinals.[8]
Open problems:
- Must there be an $n$-remarkable cardinal if $gVP(κ, \mathbf{Σ_{n+1}})$ holds for some $κ$?
- Must there be an $n$-remarkable cardinal if $gVP(Π_n)$ holds?
References
1. Bagaria, Joan. $C^{(n)}$-cardinals. Archive for Mathematical Logic 51(3–4):213–240, 2012.
2. Hamkins, Joel David. The Vopěnka principle is inequivalent to but conservative over the Vopěnka scheme. 2016.
3. Perlmutter, Norman. The large cardinals between supercompact and almost-huge. 2010.
4. Bagaria, Joan; Casacuberta, Carles; Mathias, A. R. D.; Rosický, Jiří. Definable orthogonality classes in accessible categories are small. Journal of the European Mathematical Society 17(3):549–589.
5. Makowsky, Johann. Vopěnka's Principle and Compact Logics. Journal of Symbolic Logic.
6. Kanamori, Akihiro. The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings. Second edition, Springer-Verlag, Berlin, 2009.
7. Bagaria, Joan; Gitman, Victoria; Schindler, Ralf. Generic Vopěnka's Principle, remarkable cardinals, and the weak Proper Forcing Axiom. Archive for Mathematical Logic 56(1–2):1–20, 2017.
8. Gitman, Victoria; Hamkins, Joel David. A model of the generic Vopěnka principle in which the ordinals are not Mahlo. 2018.
|
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
How does the Evolute of an Involute of a curve $\Gamma$ is $\Gamma$ itself?Definition from wiki:-The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants.Then $A$ makes a move, then $B$, and so on...The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior derivative: why would someone come up with something like that? I mean it looks really similar to the formula of the covariant derivative along a vector field for a tensor, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, lands you at a place different from $p$. And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$ and on the truncation edge it's $\omega([X, Y])$
Gently taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitisimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
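To make the flow picture above concrete, here is a small numerical sketch (Python, with the illustrative vector fields $X = \partial_x$ and $Y = x\,\partial_y$ on $\Bbb R^2$, chosen only for this example, so $[X, Y] = \partial_y$): it runs the four-sided flow for time $\sqrt{t}$ and compares the endpoint with the flow along $[X, Y]$ for time $t$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative vector fields on R^2 (an assumption for this sketch):
# X = d/dx, Y = x d/dy, so [X, Y] = d/dy.
X = lambda t, p: np.array([1.0, 0.0])
Y = lambda t, p: np.array([0.0, p[0]])

def flow(field, p, time):
    """Flow the point p along `field` for the given time."""
    sol = solve_ivp(field, (0.0, time), p, rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

t = 1e-2
s = np.sqrt(t)
p = np.array([0.3, -0.2])

# forward X, forward Y, backward X, backward Y, each for time sqrt(t)
q = flow(X, p, s)
q = flow(Y, q, s)
q = flow(lambda u, r: -X(u, r), q, s)
q = flow(lambda u, r: -Y(u, r), q, s)

# flowing along [X, Y] = d/dy for time t
r = p + t * np.array([0.0, 1.0])

print(q - p)   # the square does not close up: displacement is roughly (0, t)
print(q - r)   # agreement with the [X, Y] flow, up to higher order in t
```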
Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ be a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's pointwise defined in the first factor. This means that to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right? You can take the directional derivative of a function at a point in the direction of a single vector at that point
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).
Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok first but basically think of it as currying. Making a billinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphsm $TM \to E$ but contracting $s$ in $\nabla s$ only gave as a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of space of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined on $X$ and not $s$)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking derivative of a section of $E$ wrt a vector field on $M$ means, taking the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection-like operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading this thing called the Poisson bracket. With the poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\dots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\dots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ?
Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
@Ultradark I don't know what you mean, but you seem down in the dumps champ. Remember, girls are not as significant as you might think, design an attachment for a cordless drill and a flesh light that oscillates perpendicular to the drill's rotation and your done. Even better than the natural method
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.
My only quibble with this solution is that it doesn't seem very elegant. Is there a better way?
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say
|
(Sorry was asleep at that time but forgot to log out, hence the apparent lack of response)
Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, divide by k, see this PSE for details http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
|
Preprints (rote Reihe) des Fachbereich Mathematik
320
Power-ordered sets are not always lattices. In the case of distributive lattices we give a description in terms of disjoint unions of chains. Finite power-ordered sets have a polarity. We introduce leveled lattices and show examples with trivial tolerance. Finally we give a list of Hasse diagrams of power-ordered sets.
326
Unde tales melancholici sunt, et optimi fiunt mathematici (H. v. Gent) Zur Interaktion von Mathematik und Melancholie (2001)
322
Integral equations on the half line are commonly approximated by the finite-section approximation, in which the infinite upper limit is replaced by a positive number called the finite-section parameter. In this paper we consider the finite-section approximation for first-kind integral equations, which are typically ill-posed and call for regularization. For some classes of such equations corresponding to inverse problems from optics and astronomy we indicate the finite-section parameters that allow standard regularization techniques to be applied. Two discretization schemes for the finite-section equations are also proposed and their efficiency is studied.
303
We show that the intersection local times \(\mu_p\) on the intersection of \(p\) independent planar Brownian paths have an average density of order three with respect to the gauge function \(r^2\pi\cdot (log(1/r)/\pi)^p\), more precisely, almost surely, \[ \lim\limits_{\varepsilon\downarrow 0} \frac{1}{log |log\ \varepsilon|} \int_\varepsilon^{1/e} \frac{\mu_p(B(x,r))}{r^2\pi\cdot (log(1/r)/\pi)^p} \frac{dr}{r\ log (1/r)} = 2^p \mbox{ at $\mu_p$-almost every $x$.} \] We also show that the lacunarity distributions of \(\mu_p\), at \(\mu_p\)-almost every point, is given as the distribution of the product of \(p\) independent gamma(2)-distributed random variables. The main tools of the proof are a Palm distribution associated with the intersection local time and an approximation theorem of Le Gall.
324
|
I'm very new to elementary number theory proofs and have been trying to figure out how to prove these seemingly straightforward identities with divisibility with no success.
For some integers $a,b,c \in \mathbb{Z}$
1) If ${a\mid bc}$ and ${a \not\mid b}$ then ${a\mid c}$
2) If ${a\mid c}$, ${b\mid c}$ and ${\gcd(a,b) = 1}$ then ${ab\mid c}$
For 1), example, if 2 divides 3a, then 2 clearly divides a because 2 does not divide 3... not sure how to formalize
For 2), I think it is somewhat related to the divisibility rule (https://en.wikipedia.org/wiki/Divisibility_rule). Let's say $6\mid 12$ and $3\mid 12$ are true but ${6\cdot 3\mid 12}$ is not, and that relates to the fact that 6 is a multiple of 3. However, suppose ${2\mid a}$ and ${3\mid a}$. 2 and 3 are not multiples of each other and so the smallest number that is divisible by 2 and 3 must be a multiple of 2 and 3 (6 being the smallest), hence ${a}$ is divisible by 6. Is there a better way to formalize this?
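Not a proof, but here is a minimal Python sketch (the search range is an arbitrary choice of mine, just for illustration) that brute-force checks statement 2) on small integers before attempting to formalize it.

```python
from math import gcd
from itertools import product

# A finite check, not a proof: test statement 2) for small positive integers.
holds = True
for a, b, c in product(range(1, 30), repeat=3):
    if c % a == 0 and c % b == 0 and gcd(a, b) == 1:
        holds = holds and (c % (a * b) == 0)
print(holds)   # True for every triple tested
```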
|
Let $G$ be a connected, finite graph. (For me a graph is undirected, and it possibly has multiple edges, although the latter is not really crucial for this question). The
complexity $c(G)$ (also known as the tree-number of $G$) is defined to be the number of spanning trees of $G$.
There is another number $k(G)$ that one can easily (and naturally?) associate to $G$. In all examples that I have studied, $k(G)$ equals $c(G)$, but I am not able to prove this in general.
The natural number $k(G)$ is defined to be the cardinality of a certain set of functions $$\mathcal{F}(G) \subset \left\{ f \colon V(G) \to \mathbb{N}: \sum_{v \in V(G)} f(v) = b_1(G)\right\}\subset \mathbb{N}^{V(G)}$$ where $V(G)$ is the vertex set of $G$ and $b_1(G)$ is the first Betti number of $G$.
Here we construct all elements of $\mathcal{F}(G)$. Start by considering the constant zero function on all vertices of $G$. Then fix a spanning tree $\Gamma$ of $G$. For all edges $e$ that are missing from $\Gamma$ in $G$, add a $1$ to the function at exactly one of the two endpoints of $e$, and do this in all possible ways to obtain a set of functions $\mathcal{F}_{\Gamma}(G)$. Then $\mathcal{F}(G)$ is the union of all $\mathcal{F}_{\Gamma}(G)$, for $\Gamma$ that ranges over all spanning trees of $G$.
Here are two simple examples where it is immediate to check that $k(G)=c(G)$.
1) Take for $G$ the graph with two vertices $v_1,v_2$ connected by $k$ edges. The set of functions $\mathcal{F}(G)$ constructed in the above paragraph consists of the assignments $$\{(k-1, 0), (k-2, 1), \dots, (0, k-1)\},$$ so $c(G)=k(G)=k$.
2) Take for $G$ the graph that is a planar $k$-gon (with $k$ vertices and $k$ edges). Here $\mathcal{F}(G)$ consists of functions that are constantly zero except for one vertex of $G$ where they equal $1$. Again we have that $c(G)=k(G)=k$.
Question: is it true that $k(G)=c(G)$ for all $G$?
Even if you do not know the answer, maybe you could point me to the relevant literature. Since I am not myself a graph theory expert nor a combinatorialist, this may very well be well-known or trivial in which case I do apologize.
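In case it helps anyone experiment, here is a small brute-force Python sketch of both quantities (the function names and the example graph are my own choices, not standard notation); it reproduces $c(G)=k(G)=4$ for the planar $4$-gon and also accepts small multigraphs given as edge lists.

```python
from itertools import combinations, product

def spanning_trees(vertices, edges):
    """Enumerate spanning trees as tuples of edge indices (brute force)."""
    n = len(vertices)
    trees = []
    for idx in combinations(range(len(edges)), n - 1):
        parent = {v: v for v in vertices}
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        acyclic = True
        for i in idx:
            u, w = edges[i]
            ru, rw = find(u), find(w)
            if ru == rw:
                acyclic = False
                break
            parent[ru] = rw
        if acyclic:
            trees.append(idx)
    return trees

def k_of_G(vertices, edges):
    """|F(G)| for the construction described in the question."""
    funcs = set()
    for tree in spanning_trees(vertices, edges):
        missing = [e for i, e in enumerate(edges) if i not in tree]
        # distribute one +1 per missing edge to one of its endpoints, in all ways
        for choice in product(*missing):
            f = {v: 0 for v in vertices}
            for v in choice:
                f[v] += 1
            funcs.add(tuple(sorted(f.items())))
    return len(funcs)

# Example: the planar 4-gon (cycle on 4 vertices) has c(G) = k(G) = 4
verts = [0, 1, 2, 3]
edgs = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(len(spanning_trees(verts, edgs)), k_of_G(verts, edgs))  # expect 4 4
```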
|
A common resistor circuit goes by the nickname
voltage divider.
In the previous article we developed an equation for voltage dividing,
$v_{out} = v_{in}\,\dfrac{\text R2}{\text R1 + \text R2}$
The derivation assumed the current flowing away from the center node is very small.
In this article we challenge that assumption and see how real-world voltage dividers behave when there is a load current.
Written by Willy McAllister.
Contents
- Operating the voltage divider near mid-range
- Operating the voltage divider near extremes
- Lessons from a loaded voltage divider
- Controlling error in a voltage divider
- Real-world resistor tolerance impacts accuracy
- What’s in a nickname?
Where we’re headed
When a voltage divider delivers current to a load, the output voltage is lower than the voltage divider equation. The percentage droop depends on where the divider is operated within its range, $0$ to $v_{in}$.
The accuracy of a divider is also impacted by the tolerance of the two resistors.
A voltage divider isn’t useful unless its output is connected to something. Remember back in the voltage divider article we made an assumption? We assumed the current flowing away from the node between the two resistors was $0$. That let us treat $\text{R1}$ and $\text{R2}$ as if they were in series, and we developed the voltage divider equation. Let’s check what happens if that assumption is not true.
We look at three important cases,
- Output voltage around the mid-range of the divider $($near $v_{in}/2)$
- Output voltage close to $v_{in}$
- Output voltage near $0$
Operating the voltage divider near mid-range
To start this discussion, we set up a voltage divider with $\text{R1} = \text{R2}$. When the resistors are the same value the expected $v_{out}$ of the voltage divider is half the input voltage,
$v_{out} = v_{in}\,\dfrac{\text R}{\text R + \text R} = v_{in}\,\dfrac{\cancel{\text R}}{2\,\cancel{\text R}}$
$v_{out} = \dfrac{v_{in}}{2}$
In ideal operation, current $i_1$ flows down through $\text R1$ and continues on through $\text R2$. If we connect a load to the divider, represented by resistor $\text R_{\text L}$, this will cause a small portion of $i_1$, we’ll call it $i_\text L$, to flow away from the center node, heading to the right through $\text R_\text L$.
Does the voltage divider still work with a load, or does our voltage divider story collapse?
Resistor $\text R_\text L$ acts as a
load on the output of the voltage divider, meaning it causes a current $i_\text L$ to flow. The presence of $\text R_\text L$ means $\text{R1}$ and $\text{R2}$ are no longer strictly in series.
We want $i_\text L$ to be small (we need it to be much smaller than $i_1$ or $i_2$), so let’s make $\text{R}_{\text L}$ fairly large. Let $\text R_\text L$ be ten times bigger than $\text{R2}$,
$\text{R}_{\text L} = 10\,\text{R2}$
With this high-resistance load, take a look at what happens to the output voltage.
$\text{R2}$ and $\text{R}_{\text L}$ are in parallel. Combine the two parallel resistors using the parallel resistor formula,
$\text{R2} \parallel \text{R}_{\text L} = \dfrac{\text{R2}\cdot\text{R}_{\text L}}{\text{R2}+\text{R}_{\text L}}$
$\parallel$
The vertical bars $\parallel$ are shorthand notation for “in parallel with.”
$\dfrac{\text{R2}\cdot10\,\text{R2}}{\text{R2}+10\,\text{R2}} = \dfrac{10}{11}\,\text{R2} = 0.91\,\text{R2}$
Here is a redrawn version of our loaded voltage divider, showing the equivalent resistance of $\text R2$ in parallel with $\text R_\text L$,
The $10{\times}$ load resistor reduces the resistance at the bottom of the divider by about $9\%$.
How does this additional load change the output voltage? Without the load, the expected output is $0.5\,v_{in}$. With the load, the output voltage becomes,
$v_{out} = v_{in}\, \dfrac{0.91\, \text R2}{\text R1 + 0.91\, \text R2} $
We designed our divider with $\text R1 = \text R2$, so all the $\text R$’s cancel out,
$v_{out} = v_{in}\, \dfrac{0.91}{1 + 0.91}$
$v_{out} = v_{in}\, \dfrac{0.91}{1.91} = 0.48\,v_{in}$
The output voltage drops from $50\%$ to $48\%$ of the input voltage. How big an error is this?
$\dfrac{0.48}{0.50} = 0.96 = 96\%$
The actual output of the voltage divider is low by $4\%$ compared to the target voltage. Notice the $4\%$ voltage error is significantly less than the $9\%$ change of resistance.
Does a few $\%$ error matter? That is for you alone to decide. It depends on how accurate the voltage divider needs to be for your application.
Simulation model
Simulation model of the loaded voltage divider. Open it in a new tab. Run a
DC operating point on the circuit as-is. Then connect a wire from the top of the load resistor to the center node of the voltage divider. Run the DC operating point again. How much does the output voltage change?
Explore: Change the value of $\text R_\text L$ and see how much the output voltage changes.
The nugget to tuck away from this analysis:
If the effective load resistance is $10{\times}$ greater than the bottom resistor in the voltage divider, you get roughly “one hand” $\%$ error $(4-5\%)$ in the output voltage. This result is valid when the output voltage is near the center of its range (in the neighborhood of $v_{\text{in}}/2)$.
Operating the voltage divider near extremes
If the voltage divider operates near its extremes, with the output voltage close to either $v_{\text{in}}$ or $0$, the percentage error will be different. To find out how much different, we repeat the analysis with the output voltage set to $90\%$ and $10\%$ of the divider range. We keep the load resistor ten times the bottom resistor, so the parallel combination of $\text R2$ and $\text R_{\text L}$ is still $0.91\,\text R2$.
Case 1: 90% of $v_{in}$
Let the design target for $v_{out}$ be $90\%$ of $v_{in}$, so $v_{out}$ is really high in its range.
First, we need to design a voltage divider to give us a $90\%$ output. We do this by figuring out $\text R2$ in terms of $\text R1$ for a $90\%$ voltage divider,
$\dfrac{v_{out}}{v_{in}} = 0.90 = \dfrac{\text R2}{\text R1 + \text R2}$
$0.90 \,(\text R1 + \text R2) = \text R2$
$0.90 \,\text R1 = \text R2 - 0.90\,\text R2$
$0.90 \,\text R1 = 0.10 \,\text R2$
$\text R2 = \dfrac{0.90\,\text R1}{0.10} = 9\,\text R1$
That means $\text R2$, the resistor on the bottom, is $9$ times bigger than $\text R1$.
Now we load the circuit with $\text R_\text L$ and see how the output voltage changes.
Here’s a repetition of the expression we derived above for the loaded voltage divider,
$\dfrac{v_{out}}{v_{in}} = \dfrac{0.91\, \text R2}{\text R1 + 0.91\, \text R2} $
Last time we had $\text R2 =\text R1$, but this time $\text R2 = 9\,\text R1$,
$\dfrac{v_{out}}{v_{in}} = \dfrac{0.91 \,(9\,\text R1)}{\text R1 + \text 0.91\,(9\,\text R1)}$
All the $\text R1$’s cancel out, leaving,
$\dfrac{v_{out}}{v_{in}} = \dfrac{0.91 \,(9)}{1 + 0.91\,(9)} = \dfrac{8.19}{9.19} = 0.89$
The actual output voltage is $89\%$ of $v_{in}$ instead of the design goal of $90\%$. So the actual voltage is lower than the expected by only $1\%$.
Simulation model
Simulation model of $v_\text{out} = 90\%$ of $v_\text{in}$. Perform a
DC operating point simulation of the circuit as-is. Then connect a wire from the load resistor to the voltage divider. Perform another DC operating point and see how the output voltage droops slightly.
Case 2: 10% of $v_{in}$
Let $v_{out} = 10\%$ of $v_{in}$, so $v_{out}$ is really low in its range.
Express $\text R1$ in terms of $\text R2$ for a $10\%$ voltage divider,
$\dfrac{v_{out}}{v_{in}} = 0.10 = \dfrac{\text R2}{\text R1 + \text R2}$
$0.10 \,(\text R1 + \text R2) = \text R2$
$0.10 \,\text R1 = \text R2 - 0.10\,\text R2$
$0.10 \,\text R1 = 0.90 \,\text R2$
$\text R1 = \dfrac{0.90\,\text R2}{0.10} = 9\,\text R2$
$\text R1$, the resistor on the top, is $9$ times bigger than $\text R2$.
Now we load the circuit with $\text R_\text L$ and see what happens to the output voltage.
The expression for the loaded voltage divider is still,
$\dfrac{v_{out}}{v_{in}} = \dfrac{0.91\, \text R2}{\text R1 + 0.91\, \text R2} $
Replace $\text R1$ with $9\,\text R2$,
$\dfrac{v_{out}}{v_{in}} = \dfrac{0.91 \,\text R2}{9\,\text R2 + \text 0.91\,\text R2}$
All the $\text R2$’s cancel out,
$\dfrac{v_{out}}{v_{in}} = \dfrac{0.91}{9 + 1} = \dfrac{0.91}{10} = 0.091$
The actual output voltage is $9.1\%$ of $v_{in}$ instead of the expected $10\%$.
So the actual voltage is lower than expected by about $\dfrac{10\% - 9.1\%}{10\%} = 9\%$.
This error is pretty big, twice as large as the error of the loaded mid-range divider.
Simulation model
Simulation model of the voltage divider at $10\%$ of $v_\text{in}$. Run a
DC operating point. Then add a wire to connect the load resistor to the divider. Run the DC operating point again. How much does the divider output change with the load?
Lessons from a loaded voltage divider
If you have a $(10\times$$\text R2)$ load resistor drawing current from a voltage divider,
Near mid-range, the output voltage is reduced by about $4\%$.
Near the top of its range, the error is much less, around $1\%$.
Near the bottom of its range, the error roughly doubles compared to mid-range. The output voltage is $9\%$ lower than expected.
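The three lessons above can be reproduced with a few lines of arithmetic; here is a minimal Python sketch (the resistor values and the unit scale are arbitrary illustrative choices) that computes the loaded output for each case and the resulting droop.

```python
# A quick numeric check of the loaded-divider results above.
def loaded_divider_ratio(r1, r2, r_load):
    """Vout/Vin for a divider R1/R2 whose output drives a load R_L."""
    r2_eff = r2 * r_load / (r2 + r_load)   # R2 in parallel with R_L
    return r2_eff / (r1 + r2_eff)

def unloaded_ratio(r1, r2):
    return r2 / (r1 + r2)

for name, (r1, r2) in {"mid-range": (1.0, 1.0),
                       "90% point": (1.0, 9.0),
                       "10% point": (9.0, 1.0)}.items():
    r_load = 10 * r2
    ideal = unloaded_ratio(r1, r2)
    actual = loaded_divider_ratio(r1, r2, r_load)
    droop = (ideal - actual) / ideal * 100
    print(f"{name}: ideal {ideal:.2f}, loaded {actual:.3f}, droop {droop:.1f}%")
```

Running it gives droops of roughly $4.8\%$, $1\%$, and $8.3\%$, in line with the rounded figures quoted above.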
Controlling error in a voltage divider
If your design requires the voltage error to be significantly smaller, the load needs to be significantly larger than $(10\times$$\text R2)$, like an additional $10{\times}$ or more. You can get an additional $10{\times}$ two ways. Increase the load resistance. Or redesign the voltage divider to have smaller $\text{R1}$ and $\text{R2}$, at the cost of more power dissipated by the voltage divider.
example
Suppose you have a fixed load resistor $\text R_\text L = 10 \,\text k\Omega$, and you can’t change it. You design a voltage divider to connect to the load. Your first design is to pick two resistors $10$ times smaller than the load resistor, or $\text R1 = \text R2 = 1\,\text k\Omega$.
You’ve paid attention to this article where you learned about the sources of error in a voltage divider when it has a load connected. You check how much the divider’s output voltage “sags” due to the load and you are not happy with the result. You want the voltage to sag less and be closer to $\text V_{\text{in}}/2$.
Since you can’t change the load resistor you have to change $\text R1$ and $\text R2$. For your second design you pick resistors $100$ times smaller than the load resistor, or $\text R1 = \text R2 = 100\,\Omega$. When you make $\text R1$ and $\text R2$ smaller, the load resistor has less effect on the divider voltage. We say the voltage divider is “stiffer”. The cost of doing this is a $10$ times increase in the power dissipated by the voltage divider.
Real-world resistor tolerance impacts accuracy
Real-world resistors always have a $\pm$ tolerance on their value. If accuracy is critical to your application, use resistors with tight tolerances. Check for acceptable performance by analyzing the voltage divider like we did here at the extremes of tolerance.
What’s in a nickname?
We mentioned the nickname of this circuit is a
voltage divider. In many situations, that is exactly what it does. However, we showed that under certain conditions when there is a load on the divider, the actual output voltage is slightly lower than the value predicted by the voltage divider equation. Real-world dividers are built with resistors with real-world tolerances. This also introduces errors to the output voltage. The lesson: Call the circuit by its nickname, but remember, it’s only a nickname.
Summary
When a voltage divider has some of its current diverted to drive a load, the output voltage will be a little lower than the target value predicted by the voltage divider equation.
The error is greatest when the output voltage is near $0$.
Design voltage dividers based on the accuracy demanded by your application.
|
I've been deriving Bass diffusion model and keep consistently finding a different result than Bass' original answer. To make things worse, every single link in the Google results page just copies the Bass original solution, while I am finding the different one. You don't have to know the model, just let me show you the math part, so you can check my solution.
The premise:
We have two formulations of the hazard rate --- as Bayesian conditional probability and as a linear function:
\begin{equation} \frac{f(t)}{1-F(t)} = p + q F(t) \end{equation}
where $f(t) = \frac{dF(t)}{dt}$. This gives the following differential equation:
\begin{equation} \frac{dF(t)}{dt} = (1 - F(t)) (p + qF(t)) \end{equation}
My solution
Using Chain Rule, we rewrite this as:
\begin{equation} \int \frac{dF(t)}{(1-F(t))(p+qF(t))} = \int dt = t \label{eq:diff} \end{equation}
Notice that:
\begin{equation} \frac{1}{(1-F(t))(p+qF(t))} = \left( \frac{1}{p+q} \right) \left( \frac{q}{p+qF(t)} + \frac{1}{1-F(t)} \right), \end{equation}
substituting this to the above equation, implies:
\begin{equation} \int \frac{q}{p+qF(t)} dF(t) - \int \frac{-1}{1-F(t)} dF(t) = (p + q) t \end{equation}
integrating yields:
\begin{equation} \log (p + qF(t)) - \log (1 - F(t)) = (p + q) t \end{equation}
using log properties:
\begin{equation} \log \left( \frac{p + qF(t)}{1 - F(t)} \right) = (p + q) t \end{equation}
or:
\begin{equation} \frac{p + qF(t)}{1 - F(t)} = e^{(p+q)t} \end{equation}
cross-multiplying yields:
\begin{equation} p + qF(t) = e^{(p+q)t} - e^{(p+q)t} F(t) \end{equation}
finally:
\begin{equation} (q + e^{(p+q)t}) F(t) = (e^{(p+q)t} - p) \end{equation}
or:
\begin{equation} F(t) = \frac{e^{(p+q)t} - p}{e^{(p+q)t} + q} \end{equation}
The problem:
no matter how I simplify, my answer does not reduce to Bass' answer. The unnecessary $q$ is ruining everything.
Can you please help me with this inconsistency?
Thank you.
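If it helps anyone checking this, here is a small numerical sketch (Python; the values of $p$, $q$ and the initial condition $F(0)=0$ are my assumptions, the latter being the usual convention for a CDF) that integrates the differential equation directly, so either candidate closed form can be compared against it.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sample parameters (assumed) and F(0) = 0 as initial condition.
p, q = 0.03, 0.38
rhs = lambda t, F: (1 - F) * (p + q * F)

t_eval = np.linspace(0, 15, 200)
sol = solve_ivp(rhs, (0, 15), [0.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)

# The closed form derived above:
F_claimed = (np.exp((p + q) * t_eval) - p) / (np.exp((p + q) * t_eval) + q)
# Any sizeable gap means the closed form and the F(0) = 0 solution differ.
print(np.max(np.abs(sol.y[0] - F_claimed)))
```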
|
I'm trying to clarify the notion of being contractible in homotopy theory vs. homotopy
type theory. Is the following right?
"
In homotopy theory the real interval $[0,1]$, considered as a subset of $\mathbb R$, is contractible. In contrast, for a set in homotopy type theory, isContr is only inhabited if the set (extensionally) is empty or a singleton."
Looking at the relevant definitions I reproduced below, the answer should be yes. Because as terms of identity types, those sort of paths the type theory speaks about are simply not there in a "setty" homotopy theory, where a path means moving through a set. This then leads me to ask:
Does isEquiv, which is defined in terms of isContr, directly capture homotopy equivalence in the traditional sense? How can this still be used to natively describe geometric kinds of paths, which are not relations signifying equality of the end points?
I've added text to the definitions to make clear where my dissonance comes from.
$isProp(A) := \prod_{x\colon A} \prod_{y\colon A}\ \ x=y$
For all terms x,y in A we can show that they are equal. (Roughly, A has one unique term or none, just like truth and falsehood.)
$isSet(A) := \prod_{x\colon A} \prod_{y\colon A} isProp(x=y)$
For all terms x in A, we can for all terms y in A show that the type x=y is a proposition and not a complicated path space. Either x and y are not equal or they are equal in a unique way.
$isContr(A) := \sum_{x\colon A} \prod_{y\colon A}\ \ y=x$
A collection of all x's in A together with a way to show, for all terms y in A, that y is equal to x in some way. This type is the same as the direct product of A with isProp(A).
$isEquiv(f:A\to B) := \prod_{b\colon B} isContr\left( \sum_{x\colon A}\ \ f(x) = b \right)$
For all b in the base we can show that the fibres (the collection of x's which make up the kernel of f over b) is contractible.
|
Now, let’s get some practice on calculating centre of mass of objects.
An object of mass $M$ is in the shape of a right-angle triangle whose dimensions are shown in the figure. Locate the coordinates of the centre of mass, assuming that the object has a uniform mass per unit area.
Recall that the equations for centre of mass:
$$\begin{aligned} x_{CM} &= \frac{1}{M} \int x \, dm \\ y_{CM} &= \frac{1}{M} \int y \, dm \end{aligned}$$
First, in order to find $x_{CM}$, we shall slice the triangle into thin slices with mass $dm$, height y and thickness $dx$, as shown in the figure below. This is such that every point in $dm$ has the same x value (same distance from y-axis).
Look at the figure above, from theory of similar triangles, we can obtain this relation:
$$\begin{aligned} \frac{y}{x} &= \frac{b}{a} \\ y &= \frac{b}{a} x \end{aligned}$$
We will use this relation in the calculation for $dm$ below.
The mass per unit area, $\rho$, is given by:
$$\begin{aligned} M &= \rho \left( \frac{1}{2} a b \right) \\ \rho &= \frac{2M}{ab} \end{aligned}$$
Hence, $dm$ will be given by:
$$\begin{aligned} dm &= \rho \left( y \, dx \right) \\ &= \rho \left( \frac{b}{a} x \right) \, dx \end{aligned}$$
Using $dm$ in the equation for centre of mass:
$$\begin{aligned} x_{CM} &= \frac{1}{M} \int x \, dm \\ &= \frac{\rho}{M} \int\limits_{0}^{a} x \left( \frac{b}{a} x \right) \, dx \\ &= \frac{1}{M} \frac{2M}{ab} \left[ \frac{b}{a} \frac{x^{3}}{3} \right]_{0}^{a} \\ &= \frac{2}{ab} \left( \frac{b}{a} \frac{a^{3}}{3} \right) \\ &= \frac{2}{3} a \end{aligned}$$
Now, to find $y_{CM}$, we will choose $dm$ such that every point in $dm$ has the same y value.
From the above figure, using the theory of similar triangles, we can arrive at a relation:
$$\begin{aligned} \frac{y}{x} &= \frac{b}{a} \\ x &= \frac{a}{b} y \end{aligned}$$
Hence, $dm$ is given by:
$$\begin{aligned} dm &= \rho \left( a-x \right) dy \\ &= \rho \left( a-\frac{a}{b} y \right) dy \end{aligned}$$
Using $dm$ in the equation for centre of mass:
$$\begin{aligned} y_{CM} &= \frac{1}{M} \int y \, dm \\ &= \frac{\rho}{M} \int\limits_{0}^{b} y \left( a-\frac{a}{b} y \right) \, dy \\ &= \frac{1}{M} \frac{2M}{ab} \left[ \frac{ay^{2}}{2}-\frac{ay^{3}}{3b} \right]_{0}^{b} \\ &= \frac{2}{ab} \left( \frac{ab^{2}}{2}-\frac{ab^{3}}{3b} \right) \\ &= \frac{1}{3} b \end{aligned}$$
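As a quick cross-check of the two integrals, here is a short sympy sketch (the symbol names are arbitrary choices of mine) that reproduces $x_{CM} = \frac{2}{3}a$ and $y_{CM} = \frac{1}{3}b$.

```python
import sympy as sp

x, y, a, b, M = sp.symbols('x y a b M', positive=True)
rho = 2*M/(a*b)                      # uniform mass per unit area

# vertical strips: dm = rho*y*dx with y = (b/a)*x
x_cm = sp.integrate(x * rho * (b/a)*x, (x, 0, a)) / M
# horizontal strips: dm = rho*(a - x)*dy with x = (a/b)*y
y_cm = sp.integrate(y * rho * (a - (a/b)*y), (y, 0, b)) / M

print(sp.simplify(x_cm), sp.simplify(y_cm))   # expect 2*a/3 and b/3
```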
|
Extendible
A cardinal $\kappa$ is
$\eta$-extendible for an ordinal $\eta$ if and only if there is an elementary embedding $j:V_{\kappa+\eta}\to V_\theta$, with critical point $\kappa$, for some ordinal $\theta$. The cardinal $\kappa$ is extendible if and only if it is $\eta$-extendible for every ordinal $\eta$. Equivalently, for every ordinal $\alpha$ there is a nontrivial elementary embedding $j:V_{\kappa+\alpha+1}\to V_{j(\kappa)+j(\alpha)+1}$ with critical point $\kappa$.
Alternative definition
Given cardinals $\lambda$ and $\theta$, a cardinal $\kappa\leq\lambda,\theta$ is
jointly $\lambda$-supercompact and $\theta$-superstrong if there exists a nontrivial elementary embedding $j:V\to M$ for some transitive class $M$ such that $\mathrm{crit}(j)=\kappa$, $\lambda<j(\kappa)$, $M^\lambda\subseteq M$ and $V_{j(\theta)}\subseteq M$. That is, a single embedding witnesses both $\lambda$-supercompactness and (a strengthening of) superstrongness of $\kappa$. The least supercompact is never jointly $\lambda$-supercompact and $\theta$-superstrong for any $\lambda$,$\theta\geq\kappa$.
A cardinal is extendible if and only if it is jointly supercompact and $\kappa$-superstrong, i.e. for every $\lambda\geq\kappa$ it is jointly $\lambda$-supercompact and $\kappa$-superstrong. [1] One can show that extendibility of $\kappa$ is in fact equivalent to "for all $\lambda$,$\theta\geq\kappa$, $\kappa$ is jointly $\lambda$-supercompact and $\theta$-superstrong". A similar characterization of $C^{(n)}$-extendible cardinals exists.
The ultrahuge cardinals are defined in a way very similar to this, and one can (very informally) say that "ultrahuge cardinals are to superhuges what extendibles are to supercompacts". These cardinals are superhuge (and stationary limits of superhuges) and strictly below almost 2-huges in consistency strength.
To be expanded: Extendibility Laver functions, $C^{(n)}$-extendibility.
Relation to Other Large Cardinals
Extendible cardinals are related to various kinds of measurable cardinals.
Supercompactness
Extendibility is connected in strength with supercompactness. Every extendible cardinal is supercompact, since from the embeddings $j:V_\lambda\to V_\theta$ we may extract the induced supercompactness measures $X\in\mu\iff j''\delta\in j(X)$ for $X\subset \mathcal{P}_\kappa(\delta)$, provided that $j(\kappa)\gt\delta$ and $\mathcal{P}_\kappa(\delta)\subset V_\lambda$, which one can arrange. On the other hand, if $\kappa$ is $\theta$-supercompact, witnessed by $j:V\to M$, then $\kappa$ is $\delta$-extendible inside $M$, provided $\beth_\delta\leq\theta$, since the restricted elementary embedding $j\upharpoonright V_\delta:V_\delta\to j(V_\delta)=M_{j(\delta)}$ has size at most $\theta$ and is therefore in $M$, witnessing $\delta$-extendibility there.
Although extendibility itself is stronger and larger than supercompactness, $\eta$-supercompactness is not necessarily too much weaker than $\eta$-extendibility. For example, if a cardinal $\kappa$ is $\beth_{\eta}(\kappa)$-supercompact (in this case, the same as $\beth_{\kappa+\eta}$-supercompact) for some $\eta<\kappa$, then there is a normal measure $U$ over $\kappa$ such that $\{\lambda<\kappa:\lambda\text{ is }\eta\text{-extendible}\}\in U$.
Strong Compactness
Interestingly, extendibility is also related to strong compactness. A cardinal $\kappa$ is strongly compact iff the infinitary language $\mathcal{L}_{\kappa,\kappa}$ has the $\kappa$-compactness property. A cardinal $\kappa$ is extendible iff the infinitary language $\mathcal{L}_{\kappa,\kappa}^n$ (the infinitary language but with $(n+1)$-th order logic) has the $\kappa$-compactness property for every natural number $n$. [2]
Given a logic $\mathcal{L}$, the minimum cardinal $\kappa$ such that $\mathcal{L}$ satisfies the $\kappa$-compactness theorem is called the
strong compactness cardinal of $\mathcal{L}$. The strong compactness cardinal of $\omega$-th order finitary logic (that is, the union of all $\mathcal{L}_{\omega,\omega}^n$ for natural $n$) is the least extendible cardinal.
Variants
$C^{(n)}$-extendible cardinals
(Information in this subsection from [3])
A cardinal $κ$ is called
$C^{(n)}$-extendible if for all $λ > κ$ it is $λ$-$C^{(n)}$-extendible, i.e. if there is an ordinal $µ$ and an elementary embedding $j : V_λ → V_µ$, with $\mathrm{crit(j)} = κ$, $j(κ) > λ$ and $j(κ) ∈ C^{(n)}$.
For $λ ∈ C^{(n)}$, a cardinal $κ$ is $λ$-$C^{(n)+}$-extendible iff it is $λ$-$C^{(n)}$-extendible, witnessed by some $j : V_λ → V_µ$ which (besides $j(κ) > λ$ and $j(κ) ∈ C(n)$) satisfies that $µ ∈ C^{(n)}$.
$κ$ is $C^{(n)+}$-extendible iff it is $λ$-$C^{(n)+}$-extendible for every $λ > κ$ such that $λ ∈ C^{(n)}$.
Properties:
- There exists a $C^{(n)}$-extendible cardinal if and only if there exists a $C^{(n)+}$-extendible cardinal.
- Every extendible cardinal is $C^{(1)}$-extendible and $C^{(1)+}$-extendible.
- If $κ$ is $C^{(n)}$-extendible, then $κ ∈ C^{(n+2)}$.
- For every $n ≥ 1$, if $κ$ is $C^{(n)}$-extendible and $κ+1$-$C^{(n+1)}$-extendible, then the set of $C^{(n)}$-extendible cardinals is unbounded below $κ$. Hence, the first $C^{(n)}$-extendible cardinal $κ$, if it exists, is not $κ+1$-$C^{(n+1)}$-extendible. In particular, the first extendible cardinal $κ$ is not $κ+1$-$C^{(2)}$-extendible.
- For every $n$, if there exists a $C^{(n+2)}$-extendible cardinal, then there exists a proper class of $C^{(n)}$-extendible cardinals.
- The existence of a $C^{(n+1)}$-extendible cardinal $κ$ (for $n ≥ 1$) does not imply the existence of a $C^{(n)}$-extendible cardinal greater than $κ$. For if $λ$ is such a cardinal, then $V_λ \models$ “$κ$ is $C^{(n+1)}$-extendible”.
- If $κ$ is $κ+1$-$C^{(n)}$-extendible and belongs to $C^{(n)}$, then $κ$ is $C^{(n)}$-superstrong and there is a $κ$-complete normal ultrafilter $U$ over $κ$ such that the set of $C^{(n)}$-superstrong cardinals smaller than $κ$ belongs to $U$.
- For $n ≥ 1$, the following are equivalent ($VP$: Vopěnka's principle):
  - $VP(Π_{n+1})$
  - $VP(κ, \mathbf{Σ_{n+2}})$ for some $κ$
  - There exists a $C^{(n)}$-extendible cardinal.
- “For every $n$ there exists a $C^{(n)}$-extendible cardinal.” is equivalent to the full Vopěnka's principle.
- Assuming $\mathrm{I3}(κ, δ)$, if $δ$ is a limit cardinal (instead of a successor of a limit cardinal – Kunen’s Theorem excludes other cases), it is equal to $sup\{j^m(κ) : m ∈ ω\}$ where $j$ is the elementary embedding. Then $κ$ and $j^m(κ)$ are $C^{(n)}$-extendible (inter alia) in $V_δ$, for all $n$ and $m$.

$(\Sigma_n,\eta)$-extendible cardinals
There are some variants of extendible cardinals because of the interesting jump in consistency strength from $0$-extendible cardinals to $1$-extendibles. These variants specify the elementarity of the embedding.
A cardinal $\kappa$ is $(\Sigma_n,\eta)$-extendible, if there is a $\Sigma_n$-elementary embedding $j:V_{\kappa+\eta}\to V_\theta$ with critical point $\kappa$, for some ordinal $\theta$. These cardinals were introduced by Bagaria, Hamkins, Tsaprounis and Usuba [4].
$\Sigma_n$-extendible cardinals
The special case of $\eta=0$ leads to a much weaker notion. Specifically, a cardinal $\kappa$ is
$\Sigma_n$-extendible if it is $(\Sigma_n,0)$-extendible, or more simply, if $V_\kappa\prec_{\Sigma_n} V_\theta$ for some ordinal $\theta$. Note that this does not necessarily imply that $\kappa$ is inaccessible, and indeed the existence of $\Sigma_n$-extendible cardinals is provable in ZFC via the reflection theorem. For example, every $\Sigma_n$ correct cardinal is $\Sigma_n$-extendible, since from $V_\kappa\prec_{\Sigma_n} V$ and $V_\lambda\prec_{\Sigma_n} V$, where $\kappa\lt\lambda$, it follows that $V_\kappa\prec_{\Sigma_n} V_\lambda$. So in fact there is a closed unbounded class of $\Sigma_n$-extendible cardinals.
Similarly, every Mahlo cardinal $\kappa$ has a stationary set of inaccessible $\Sigma_n$-extendible cardinals $\gamma<\kappa$.
$\Sigma_3$-extendible cardinals cannot be Laver indestructible. Therefore $\Sigma_3$-correct, $\Sigma_3$-reflecting, $0$-extendible, (pseudo-)uplifting, weakly superstrong, strongly uplifting, superstrong, extendible, (almost) huge or rank-into-rank cardinals also cannot.[4]
Virtually extendible cardinals
(Information in this subsection from [5])
A cardinal $κ$ is
virtually extendible iff for every $α > κ$, in a set-forcing extension there is an elementary embedding $j : V_α → V_β$ with $\mathrm{crit(j)} = κ$ and $j(κ) > α$. $C^{(n)}$-virtually extendible cardinals additionally require that $j(κ)$ has property $C^{(n)}$ (i.e. $\Sigma_n$-correctness).
If $κ$ is virtually Shelah for supercompactness or 2-iterable, then $V_κ$ is a model of proper class many virtually $C^{(n)}$-extendible cardinals for every $n < ω$.
If $κ$ is virtually huge*, then $V_κ$ is a model of proper class many virtually extendible cardinals.
In set-theoretic geology
This article is a stub. Please help us to improve Cantor's Attic by adding information.
References
1. Usuba, Toshimichi. Extendible cardinals and the mantle. Archive for Mathematical Logic 58(1–2):71–75, 2019.
2. Kanamori, Akihiro. The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings. Second edition, Springer-Verlag, Berlin, 2009.
3. Bagaria, Joan. $C^{(n)}$-cardinals. Archive for Mathematical Logic 51(3–4):213–240, 2012.
4. Bagaria, Joan; Hamkins, Joel David; Tsaprounis, Konstantinos; Usuba, Toshimichi. Superstrong and other large cardinals are never Laver indestructible. Archive for Mathematical Logic 55(1–2):19–35, 2013.
5. Gitman, Victoria; Schindler, Ralf. Virtual large cardinals.
|
Answer
$2cis240^\circ$ or $2(cos 240^\circ + isin240^\circ)$
Work Step by Step
$arctan(\frac{-\sqrt{3}}{-1}) = 60^\circ$, but since, $- 1 - i\sqrt{3}$ is in the 3rd quadrant, its angle is $240^\circ$. The absolute value of $- 1 - i\sqrt{3}$ is $\sqrt{(-1)^2 + (-\sqrt{3})^2} = 2$, the equivalent trigonometric form is $2cis240^\circ$ or $2(cos 240^\circ + isin240^\circ)$.
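For anyone who wants to double-check the modulus and angle numerically, a short Python sketch (standard library only) is below.

```python
import cmath, math

z = -1 - 1j * math.sqrt(3)
r = abs(z)
theta = math.degrees(cmath.phase(z)) % 360   # bring the angle into [0, 360)
print(r, theta)                              # expect 2.0 and 240.0
```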
|
I'm trying to apply a Fourier transform of a one dimensional list of a time history of some quantity using the
Fourier function. I'm interested in the frequency spectrum, but the problem is that the
Fourier function uses the fast Fourier transform algorithm which places the zero frequency at the beginning, complicating my analysis of the results.
So how can I shift the zero frequency to the center?
I tried to search for the solution and found two methods, which seem to give contradicting answers.
Method 1 (from course notes available here). It simply rotates the list before and after the Fourier transform:
DFT1[ls_?(EvenQ@Length[#] &), dt_] := Module[{N0, fft}, N0 = Length[ls]; fft = RotateRight[ dt*Fourier[RotateLeft[ls, N0/2 - 1], FourierParameters -> {1, 1}], N0/2 - 1]; fft ]
Method 2 (from some code of my professor). It rotates the list only after the transform, and adds a phase shift:
DFT2[ls_?(EvenQ@Length[#] &), dt_] := Module[{N0, dw, wls, fft}, N0 = Length[ls]; dw = (2 π)/(N0 dt); wls = dw Range[-(N0/2), N0/2 - 1]; fft = dt* Reverse[RotateRight[Fourier[ls, FourierParameters -> {1, -1}], N0/2 - 1]]*Exp[(I π)/dw wls]; fft ]
If we use these two methods on an example, we get different answers:
dt = 0.05;
els = Table[Sin[t] Sin[t/40]^2, {t, 0., 40 π, dt}];
ListPlot[els, Joined -> True]
Row[ListPlot[{#[DFT1[els, dt]], #[DFT2[els, dt]]}, Joined -> True, PlotRange -> {{1200, 1300}, All}, ImageSize -> 300] & /@ {Abs, Re, Im}]
Questions:
1. Which of the methods is correct (if neither is correct then what is the right way)?
2. What are the frequencies corresponding to the Fourier transform results? For example, should the frequencies range from $\{-\frac{N0\Delta \omega}{2},\frac{(N0-1)\Delta \omega}{2}\}$ or $\{-\frac{(N0-1)\Delta \omega}{2},\frac{N0\Delta \omega}{2}\}$? $N0$ is the length of the data, $\Delta t$ is the time interval of the data, $\Delta \omega=\frac{2\pi }{N0 \Delta t}$.
3. What difference does it make if N0 is even or odd?
Update:
This is the result using the method in Bill's answer here to a similar question. The results appear to differ from both of the approaches mentioned above.
n = Length[els];
sampInt = dt;
data = els;
ssf = RotateRight[Range[-n/2, n/2 - 1]/(n sampInt), n/2];
fft = dt Fourier[data, FourierParameters -> {1, 1}];
Row[ListPlot[#@Sort[Transpose[{ssf, fft}], #1[[1]] < #2[[1]] &][[All, 2]], PlotRange -> {{1200, 1300}, All}, Joined -> True, ImageSize -> 300] & /@ {Abs, Re, Im}]
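Not an answer about the Mathematica conventions, but for comparison here is how the same centring is usually done in Python with numpy (a minimal sketch; the remark about which FourierParameters convention the forward numpy transform matches is my reading and should be treated as an assumption).

```python
import numpy as np

dt = 0.05
t = np.arange(0.0, 40*np.pi + dt/2, dt)
els = np.sin(t) * np.sin(t/40)**2

fft = dt * np.fft.fft(els)               # forward transform; I believe this matches FourierParameters -> {1, -1}
freqs = np.fft.fftfreq(len(els), d=dt)   # cycles per unit time, zero frequency first
omega = 2*np.pi*freqs                    # angular frequencies, if those are preferred

fft_centered = np.fft.fftshift(fft)      # rotate so zero frequency sits in the middle
omega_centered = np.fft.fftshift(omega)

# for even length n this runs from -pi/dt up to (but not including) +pi/dt
print(omega_centered[0], omega_centered[-1])
```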
|
$$U(g_1)U(g_2)=e^{i \phi(g_1,g_2)}U(g_1g_2)\tag{1}$$
Using local coordinates $\{x^a \}$ near the identity element, $g_1 \cdot g_2=g(x_1)\cdot g(x_2)=g(x_3(x_1,x_2))$ $$x^a_3(x_1,x_2)=x^a_1+x^a_2+\gamma^{abc}{x^b_1}x_2^c+\cdots\tag{2}$$ $$\phi(g_1,g_2)\equiv\phi(x_1,x_2)=\gamma^{bc}x_1^bx_2^c+\cdots\tag{3}$$ $$U(g(x))=1+ix^aT^a+\frac{1}{2}x^a x^b T^{ab}+\cdots\tag{4}$$ with $T^a$ Hermitian and $T^{ab}=T^{ba}$.
Substituting $(2,3,4)$ into $(1)$, $$-T^cT^b= i \gamma^{cb}1+i\gamma^{acb}T^a+T^{cb}\tag{5}$$
By defining, $$f^{abc}\equiv \gamma^{acb}-\gamma^{abc} \quad f^{bc}\equiv \gamma^{cb}-\gamma^{bc}$$ $$[T^b,T^c]=i f^{abc}T^a+i f^{bc}1\tag{6}$$ They call $f^{bc}$ as central charge.
My questions:
1. This derivation heavily relies on the coordinates and representation, which is unfamiliar to me. From my knowledge, given a Lie group $G$, $T^a$ should be a tangent vector at the identity element, that is $T^a \in T_e G = \mathfrak{g}$.
The commutator of the Lie algebra should still be in the Lie algebra. Why can $i f^{bc}1$ occur in $(6)$, since $1$ is not an element of $\mathfrak{g}$?
2. It seems that they're talking about a specific representation $(1)$, because given an abstract Lie group $G$, $e^{i \phi(g_1,g_2)}$ can't occur in the group product. Only after you find a projective representation $(1)$, i.e. $\phi(g_1,g_2)$ nonzero, can you define the central charge this way. However the textbook also says that a simply-connected Lie group can have a projective representation when the central charge of the Lie algebra is nontrivial. Is there some circular argument here?
Or maybe the central charge is defined for a specific representation. Then the question is: what is the necessary and sufficient condition for a Lie algebra $\mathfrak{g}$ to have a representation with nontrivial central charge?
3. So what's the geometric definition of central charge? There should be some definition of central charge that doesn't depend on representation and coordinates.
|
Microwave rotational spectroscopy uses microwave radiation to measure the energies of rotational transitions for molecules in the gas phase. It accomplishes this through the interaction of the electric dipole moment of the molecules with the electromagnetic field of the exciting microwave photon.
Introduction
To probe the pure rotational transitions for molecules, scientists use microwave rotational spectroscopy. This spectroscopy utilizes photons in the microwave range to cause transitions between the quantum rotational energy levels of a gas molecule. The reason why the sample must be in the gas phase is due to intermolecular interactions hindering rotations in the liquid and solid phases of the molecule. For microwave spectroscopy, molecules can be broken down into 5 categories based on their shape and the inertia around their 3 orthogonal rotational axes. These 5 categories include diatomic molecules, linear molecules, spherical tops, symmetric tops and asymmetric tops.
Classical Mechanics
The
Hamiltonian solution to the rigid rotor is
\[H = T\]
since,
\[H = T + V\]
Where \(T\) is kinetic energy and \(V\) is potential energy. Potential energy, \(V\), is 0 because there is no resistance to the rotation (similar to a particle in a box model).
Since \(H = T\), we can also say that:
\[{T = }\dfrac{1}{2}\sum{m_{i}v_{i}^2}\]
However, we have to determine \(v_i\) in terms of rotation since we are dealing with rotation. Since,
\[\omega = \dfrac{v}{r}\]
where \(\omega\) = angular velocity, we can say that:
\[v_{i} = \omega \times r_{i}\]
Thus we can rewrite the T equation as:
\[T = \dfrac{1}{2}\sum{m_{i}\,v_{i}\cdot\left(\omega \times r_{i}\right)}\]
Since \(\omega\) is a scalar constant, we can rewrite the T equation as:
\[T = \dfrac{\omega}{2}\sum{m_{i}\left(r_{i}\times v_{i}\right)} = \dfrac{\omega}{2}\sum{l_{i}} = \omega\dfrac{L}{2}\]
where \(l_i\) is the
angular momentum of the ith particle, and L is the angular momentum of the entire system. Also, we know from physics that,
\[L = I\omega\]
where
I is the moment of inertia of the rigid body relative to the axis of rotation. We can rewrite the T equation as,
\[T = \omega\dfrac{{I}\omega}{2} = \dfrac{1}{2}{I}\omega^2\]
Quantum Mechanics
The internal Hamiltonian, H, is:
\[H = \dfrac{\hat{J}^{2}\hbar^{2}}{2I}\]
and the Schrödinger Equation for rigid rotor is:
\[\dfrac{\hat{J}^{2}\hbar^{2}}{2I}\psi = E\psi\]
Thus, we get:
\[E_n = \dfrac{J(J+1)h^2}{8\pi^2I} \]
where \(J\) is a rotational quantum number and \(\hbar\) is the reduced Planck's constant. However, if we let:
\[B = \dfrac {h}{8 \pi^2I} \]
where \(B\) is a rotational constant, then we can substitute it into the \(E_n\) equation and get:
\[E_{n} = J(J+1)Bh\]
Considering the transition energy between two adjacent energy levels, the difference is a multiple of $2Bh$. That is, from \(J = 0\) to \(J = 1\), the \(\Delta{E_{0 \rightarrow 1}}\) is 2Bh and from J = 1 to J = 2, the \(\Delta{E}_{1 \rightarrow 2}\) is 4Bh.
Theory
When a gas molecule is irradiated with microwave radiation, a photon can be absorbed through the interaction of the photon’s electric field with the electrons in the molecule. For the microwave region this energy absorption is in the range needed to cause transitions between rotational states of the molecule. However, only molecules with a permanent dipole that changes upon rotation can be investigated using microwave spectroscopy. This is due to the fact that there must be a charge difference across the molecule for the oscillating electric field of the photon to impart a torque upon the molecule around an axis that is perpendicular to this dipole and that passes through the molecule's center of mass.
This interaction can be expressed by the transition dipole moment for the transition between two rotational states
\[ \text{Probability of Transition}=\int \psi_{rot}(F)\hat\mu \psi_{rot}(I)d\tau\]
Where $\psi_{rot}(F)$ is the complex conjugate of the wave function for the final rotational state, $\psi_{rot}(I)$ is the wave function of the initial rotational state, and $\hat\mu$ is the dipole moment operator with Cartesian components $\mu_x$, $\mu_y$, $\mu_z$. For this integral to be nonzero the integrand must be an even function. This is due to the fact that any odd function integrated from negative infinity to positive infinity, or any other symmetric limits, is always zero.
In addition to the constraints imposed by the transition moment integral, transitions between rotational states are also limited by the nature of the photon itself. A photon contains one unit of angular momentum, so when it interacts with a molecule it can only impart one unit of angular momentum to the molecule. This leads to the selection rule that a transition can only occur between rotational energy levels that are only one quantum rotation level (J) away from another [1].
\[ \Delta\textrm{J}=\pm 1 \]
The transition moment integral and the selection rule for rotational transitions tell if a transition from one rotational state to another is allowed. However, what these do not take into account is whether or not the state being transitioned from is actually populated, meaning that the molecule is in that energy state. This leads to the concept of the Boltzmann distribution of states. The Boltzmann distribution is a statistical distribution of energy states for an ensemble of molecules based on the temperature of the sample [2].
\[ \dfrac{n_J}{n_0} = \dfrac{e^{(-E_{rot}(J)/RT)}}{\sum_{J=1}^{J=n} e^{(-E_{rot}(J)/RT)}}\]
where \(E_{rot}(J)\) is the molar energy of the \(J\)th rotational energy state of the molecule, \(R\) is the gas constant, \(T\) is the temperature of the sample, \(n_J\) is the number of molecules in the \(J\)th rotational level, and \(n_0\) is the total number of molecules in the sample.
This distribution of energy states is the main contributing factor for the observed absorption intensity pattern seen in the microwave spectrum: the absorption peak corresponding to the transition out of the most highly populated rotational state (as given by the Boltzmann equation) is the most intense, with the peaks on either side steadily decreasing.
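To make the link between the Boltzmann distribution and the observed intensity envelope concrete, here is a minimal Python sketch (my own illustration, not part of the original text); the rotational constant and temperature are assumed, roughly CO-like values, and the sketch includes the \(2J+1\) degeneracy factor that the simplified ratio above omits.

import numpy as np

B = 1.92      # assumed rotational constant in cm^-1 (illustrative only)
T = 298.0     # temperature in K
k = 0.695     # Boltzmann constant in cm^-1 per K

J = np.arange(0, 30)
E_rot = B * J * (J + 1)                          # rotational term values, cm^-1
pop = (2 * J + 1) * np.exp(-E_rot / (k * T))     # degeneracy-weighted Boltzmann factors
pop /= pop.sum()                                 # fraction of molecules in each level

print("most populated level: J =", J[np.argmax(pop)])
print("fractional populations for J = 0..7:", np.round(pop[:8], 3))

The rise and fall of these populations with \(J\) is what produces the characteristic intensity envelope of a rotational band.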
Degrees of Freedom
A molecule can have three types of degrees of freedom and a total of 3N degrees of freedom, where N equals the number of atoms in the molecule. These degrees of freedom can be broken down into 3 categories [3]:
Translational: These are the simplest of the degrees of freedom. These entail the movement of the entire molecule’s center of mass. This movement can be completely described by three orthogonal vectors and thus contains 3 degrees of freedom.
Rotational: These are rotations around the center of mass of the molecule and like the translational movement they can be completely described by three orthogonal vectors. This again means that this category contains only 3 degrees of freedom. However, in the case of a linear molecule only two degrees of freedom are present due to the rotation along the bonds in the molecule having a negligible inertia.
Vibrational: These are any other types of movement not assigned to rotational or translational movement and thus there are 3N – 6 degrees of vibrational freedom for a nonlinear molecule and 3N – 5 for a linear molecule. These vibrations include bending, stretching, wagging and many other aptly named internal movements of a molecule. These various vibrations arise due to the numerous combinations of different stretches, contractions, and bends that can occur between the bonds of atoms in the molecule.
Each of these degrees of freedom is able to store energy. However, in the case of rotational and vibrational degrees of freedom, energy can only be stored in discrete amounts. This is due to the quantization of energy levels in a molecule, as described by quantum mechanics. In the case of rotations the energy stored is dependent on the rotational inertia of the gas along with the corresponding quantum number describing the energy level.
Rotational Symmetries
To analyze molecules for rotational spectroscopy, we can break molecules down into 5 categories based on their shapes and their moments of inertia around their 3 orthogonal rotational axes [4]: diatomic molecules, linear molecules, spherical tops, symmetrical tops, and asymmetrical tops.
Diatomic Molecules
The rotations of a diatomic molecule can be modeled as a rigid rotor. This rigid rotor model has two masses attached to each other with a fixed distance between the two masses.
It has an inertia (I) that is equal to the square of the fixed distance between the two masses multiplied by the reduced mass of the rigid rotor.
\[ \large I_e= \mu r_e^2\]
\[ \large \mu=\dfrac{m_1 m_2} {m_1+m_2} \]
Using quantum mechanical calculations it can be shown that the energy levels of the rigid rotator depend on the inertia of the rigid rotator and the quantum rotational number J [2].
\[ E(J) = B_e J(J+1) \]
\[ B_e = \dfrac{h}{8 \pi^2 cI_e}\]
However, this rigid rotor model fails to take into account that bonds do not act like a rod with a fixed distance, but like a spring. This means that as the angular velocity of the molecule increases so does the distance between the atoms. This leads us to the
nonrigid rotor model in which a centrifugal distortion term (\(D_e\)) is added to the energy equation to account for this stretching during rotation.
\[ E(J)(cm^{-1}) = B_e J(J+1) - D_e J^2(J+1)^2\]
This means that for a diatomic molecule the transitional energy between two rotational states equals
\[ E=B_e[J'(J'+1)-J''(J''+1)]-D_e[J'^2(J'+1)^2-J''^2(J''+1)^2]\label{8} \]
Where J’ is the quantum number of the final rotational energy state and J’’ is the quantum number of the initial rotational energy state. Using the selection rule of \(\Delta{J}= \pm 1\), the energies of the allowed absorption lines of a diatomic molecule are
\[ E_R =2B_e(J''+1)-4D_e(J''+1)^3=(2B_e-4D_e)+(2B_e-12D_e){J}''-12D_e J''^2-4D_e J''^3\]
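As a quick illustration of how \(B_e\) and \(D_e\) shape the spectrum, the following Python sketch (parameter values are assumed, roughly HCl-like, and are not taken from the text) evaluates the first few \(J'' \rightarrow J''+1\) line positions with and without the centrifugal distortion correction.

import numpy as np

B_e = 10.59    # assumed rotational constant, cm^-1
D_e = 5.3e-4   # assumed centrifugal distortion constant, cm^-1

J_pp = np.arange(0, 8)                                  # initial level J''
rigid = 2 * B_e * (J_pp + 1)                            # rigid rotor line positions
nonrigid = 2 * B_e * (J_pp + 1) - 4 * D_e * (J_pp + 1) ** 3

for j, r, n in zip(J_pp, rigid, nonrigid):
    print(f"J'' = {j}:  rigid {r:8.3f} cm^-1   non-rigid {n:8.3f} cm^-1")

The nearly constant spacing of about \(2B_e\) between successive lines, slowly shrinking because of \(D_e\), is exactly the pattern described above.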
Linear Molecules
Linear molecules behave in the same way as diatomic molecules when it comes to rotations. For this reason they can be modeled as a non-rigid rotor just like diatomic molecules. This means that linear molecules have the same equation for their rotational energy levels. The only difference is that there are now more masses along the rotor. This means that the moment of inertia is now the sum, over all masses on the rotor, of each mass multiplied by the square of its distance from the center of mass of the rotor [2].
\[ \large I_e=\sum_{j=1}^{n} m_j r_{ej}^2 \]
where \(m_j\) is the mass of the jth mass on the rotor and \(r_{ej}\) is the equilibrium distance between the jth mass and the center of mass of the rotor.
Spherical Tops
Spherical tops are molecules in which all three orthogonal rotations have equal inertia and they are highly symmetrical. This means that the molecule has no dipole and for this reason spherical tops do not give a microwave rotational spectrum.
Symmetrical Tops
Symmetrical tops are molecules with two rotational axes that have the same inertia and one unique rotational axis with a different inertia. Symmetrical tops can be divided into two categories based on the relationship between the inertia of the unique axis and the inertia of the two axes with equivalent inertia. If the unique rotational axis has a greater inertia than the degenerate axes the molecule is called an oblate symmetrical top. If the unique rotational axis has a lower inertia than the degenerate axes the molecule is called a prolate symmetrical top. For simplification think of these two categories as either frisbees for oblate tops or footballs for prolate tops.
Figure \(\PageIndex{3}\): Symmetric Tops: (Left) Geometrical example of an oblate top and (right) a prolate top. Images used with permission from Wikipedia.com.
In the case of linear molecules there is one degenerate rotational axis which in turn has a single rotational constant. With symmetrical tops now there is one unique axis and two degenerate axes. This means an additional rotational constant is needed to describe the energy levels of a symmetrical top. In addition to the rotational constant an additional quantum number must be introduced to describe the rotational energy levels of the symmetric top. These two additions give us the following rotational energy levels of a prolate and oblate symmetric top
\[E_{(J,K)}(cm^{-1})=B_e J(J+1)+(A_e-B_e)K^2 \]
where \(B_e\) is the rotational constant of the two degenerate axes, \(A_e\) is the rotational constant of the unique axis, \(J\) is the total rotational angular momentum quantum number and K is the quantum number that represents the portion of the total angular momentum that lies along the unique rotational axis. This leads to the property that \(K\) is always equal to or less than \(J\). Thus we get the two selection rules for symmetric tops
\[\Delta J = 0, \pm1 \]
\[\Delta K=0\]
when \( K\neq 0 \)
\[\Delta J = \pm1 \]
\[\Delta K=0 \]
when \( K=0 \)
However, like the rigid rotor approximation for linear molecules, we must also take into account the elasticity of the bonds in symmetric tops. Therefore, in a similar manner to the rigid rotor we add a centrifugal coupling term, but this time we have one for each quantum number and one for the coupling between the two.
\[E_{(J,K)}(cm^{-1})=B_e J(J+1)-D_{eJ} J^2(J+1)^2+(A_e-B_e)K^2-D_{eK} K^4-D_{eJK} J(J+1)K^2 \label{13}\]
Asymmetrical Tops
Asymmetrical tops have three orthogonal rotational axes that all have different moments of inertia and most molecules fall into this category. Unlike linear molecules and symmetric tops these types of molecules do not have a simplified energy equation to determine the energy levels of the rotations. These types of molecules do not follow a specific pattern and usually have very complex microwave spectra.
Additional Rotationally Sensitive Spectroscopies
In addition to microwave spectroscopy, IR spectroscopy can also be used to probe rotational transitions in a molecule. However, in the case of IR spectroscopy the rotational transitions are coupled to the vibrational transitions of the molecule. One other spectroscopy that can probe the rotational transitions in a molecule is Raman spectroscopy, which uses UV-visible light scattering to determine energy levels in a molecule. However, a very high sensitivity detector must be used to analyze rotational energy levels of a molecule.
References
1. Harris, D. C.; Bertolucci, M. D. Symmetry and Spectroscopy. Oxford University Press: Oxford, 1978.
2. McQuarrie, D. A.; Simon, J. D. Physical Chemistry: A Molecular Approach. University Science Books, 1997.
3. Shoemaker, D. P.; Garland, C. W.; Nibler, J. W. Experiments in Physical Chemistry. 8th ed.; McGraw Hill: New York, 2009.
4. Hollas, M. J. Basic Atomic and Molecular Spectroscopy. Royal Society of Chemistry: Cambridge, 2002.
Contributors
Nicholas Houghton, UC Davis
|
Here is another perspective, which perhaps shifts where the intrigue should be directed. An important question to consider: How do you define $\pi$? If you take the definition of $\pi$ to be the usual one we first encounter involving circles, then it is fairly remarkable that this geometric constant should have anything to do with the complex function $e^z$, defined in terms of a power series.
To see how remarkable this is, let's go backwards. Let's define $\pi$ completely in the context of complex analysis. So for now let's forget what we know about $\pi$ as it relates to circles and trigonometric functions.
As others have noted, having appropriately defined radius of convergence of a power series, one can show that the series
$$\sum_{n=0}^{\infty}\frac{z^n}{n!}$$
converges everywhere, and thus defines an analytic function on $\mathbb{C}$ which we call $\exp$. Something helpful to notice right away is that $\overline{\exp(z)}=\exp(\overline{z})$. Also, power series manipulation will show that $\exp(y+z)=\exp(y)\exp(z)$, so that for $a,b\in\mathbb{R}$
$$|\exp(a+bi)|=\sqrt{\exp(a+bi)\exp(a-bi)}=\sqrt{\exp(2a)}=\exp(a)$$
This shows us that if $|\exp(z)|=1$, then $z$ must be purely imaginary. Now, make the following definitions:
$$\cos(z)=\frac{e^{iz}+e^{-iz}}{2}\qquad\sin(z)=\frac{e^{iz}-e^{-iz}}{2i}$$
Notice it follows that $\exp(iz)=\cos(z)+i\sin(z)$ and $\cos^2(z)+\sin^2(z)=1$. At this point, with such familiar formulas, you may be tempted to just "plug in $\pi$," but remember, we don't know what $\pi$ is yet. Even if we did, we certainly don't know the values of $\cos(\pi)$ or $\sin(\pi)$ because these functions are defined in terms of infinite series.
The next part is the trickiest. Here we show that $\exp(iz)$ is actually periodic, that is, there is a constant $c$ such that $\exp(i(z+c))=\exp(iz)$ for all $z\in\mathbb{C}$. Notice if such a $c$ existed, we may plug in $z=0$ to obtain $\exp(ic)=\exp(0)=1$, so that $c$ must be real (based on our computation of $|\exp(a+bi)|$ above). To find $c$, it can be shown (using the intermediate value theorem) that there is some smallest positive real number $d$ such that $\cos(d)=0$. From this, and the formulas $\exp(iz)=\cos(z)+i\sin(z)$ and $\cos^2(z)+\sin^2(z)=1$ it follows that $\sin(d)=\pm 1$, that $\exp(i\cdot d)=\pm i$, and that $\exp(i\cdot4d)=1$. Hence, our desired period is $c=4d$. Notice along the way we proved that $\exp(i\cdot2d)=-1$.
Now, make the following
definition:
$$\pi:=2d$$
Well, we're done. We've shown the formula $\exp(i\pi)=-1$, and you're right, from this perspective, it is rather unremarkable, because all we've done is call $\pi$ something we want to make the formula work.
Now, to see why this formula is indeed remarkable, spend a minute thinking about how
the same constant we just defined (using power series, complex analysis, and calculus) also satisfies the following:
The ratio of the length of the circumference of any circle to the length of its diameter is $\pi$.
Remarkable if you ask me.
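As a small numerical complement to this (my own sketch, not part of the answer), one can define $\cos$ purely through the truncated power series for $\exp$, locate the smallest positive zero $d$ by bisection, and check that $2d$ agrees with the circle constant:

import math

def exp_series(z, terms=60):
    # exp(z) via its power series, truncated after `terms` terms (complex z allowed)
    total, term = 0j, 1 + 0j
    for n in range(terms):
        total += term
        term *= z / (n + 1)
    return total

def cos_series(x):
    # cos defined from exp exactly as in the answer: (e^{ix} + e^{-ix}) / 2
    return (exp_series(1j * x) + exp_series(-1j * x)).real / 2

lo, hi = 1.0, 2.0                 # cos_series(1) > 0 and cos_series(2) < 0
for _ in range(80):               # bisection for the smallest positive zero d
    mid = (lo + hi) / 2
    if cos_series(mid) > 0:
        lo = mid
    else:
        hi = mid
d = (lo + hi) / 2

print("2d      =", 2 * d)
print("math.pi =", math.pi)

The agreement of the two printed values is precisely the coincidence between the analytic and the geometric definitions of $\pi$ that the answer calls remarkable.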
|
1. Observation of a peaking structure in the J/psi phi mass spectrum from B-+/- -> J/psi phi K-+/- decays
PHYSICS LETTERS B, ISSN 0370-2693, 06/2014, Volume 734, Issue 370-2693 0370-2693, pp. 261 - 281
A peaking structure in the J/psi phi mass spectrum near threshold is observed in B-+/- -> J/psi phi K-+/- decays, produced in pp collisions at root s = 7 TeV...
PHYSICS, NUCLEAR | ASTRONOMY & ASTROPHYSICS | PHYSICS, PARTICLES & FIELDS | Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment | scattering [p p] | J/psi --> muon+ muon | experimental results | Particle Physics - Experiment | Nuclear and High Energy Physics | Phi --> K+ K | vertex [track data analysis] | CERN LHC Coll | B+ --> J/psi Phi K | Peaking structure | hadronic decay [B] | Integrated luminosity | Data sample | final state [dimuon] | mass enhancement | width [resonance] | (J/psi Phi) [mass spectrum] | Breit-Wigner [resonance] | 7000 GeV-cms | leptonic decay [J/psi] | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
2. Measurement of the ratio of the production cross sections times branching fractions of B c ± → J/ψπ ± and B± → J/ψK ± and ℬ B c ± → J / ψ π ± π ± π ∓ / ℬ B c ± → J / ψ π ± $$ \mathrm{\mathcal{B}}\left({\mathrm{B}}_{\mathrm{c}}^{\pm}\to \mathrm{J}/\psi {\pi}^{\pm }{\pi}^{\pm }{\pi}^{\mp}\right)/\mathrm{\mathcal{B}}\left({\mathrm{B}}_{\mathrm{c}}^{\pm}\to \mathrm{J}/\psi {\pi}^{\pm}\right) $$ in pp collisions at s = 7 $$ \sqrt{s}=7 $$ TeV
Journal of High Energy Physics, ISSN 1029-8479, 1/2015, Volume 2015, Issue 1, pp. 1 - 30
The ratio of the production cross sections times branching fractions σ B c ± ℬ B c ± → J / ψ π ± / σ B ± ℬ B ± → J / ψ K ± $$ \left(\sigma...
B physics | Branching fraction | Hadron-Hadron Scattering | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory
Journal Article
3. Study of the \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$B_c^+ \rightarrow J/\psi D_s^+$$\end{document}Bc+→J/ψDs+ and \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$B_c^+ \rightarrow J/\psi D_s^{+}$$\end{document}Bc+→J/ψDs∗+ decays with the ATLAS detector
The European Physical Journal. C, Particles and Fields, ISSN 1434-6044, 2016, Volume 76
The decays \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy}...
Regular - Experimental Physics
Regular - Experimental Physics
Journal Article
Physics Letters B, ISSN 0370-2693, 05/2016, Volume 756, Issue C, pp. 84 - 102
A measurement of the ratio of the branching fractions of the meson to and to is presented. The , , and are observed through their decays to , , and ,...
scattering [p p] | pair production [pi] | statistical | Physics, Nuclear | 114 Physical sciences | Phi --> K+ K | Astronomy & Astrophysics | LHC, CMS, B physics, Nuclear and High Energy Physics | f0 --> pi+ pi | High Energy Physics - Experiment | Compact Muon Solenoid | pair production [K] | Science & Technology | mass spectrum [K+ K-] | Ratio B | Large Hadron Collider (LHC) | Nuclear & Particles Physics | 7000 GeV-cms | leptonic decay [J/psi] | J/psi --> muon+ muon | experimental results | Nuclear and High Energy Physics | Physics and Astronomy | branching ratio [B/s0] | CERN LHC Coll | B/s0 --> J/psi Phi | CMS collaboration ; proton-proton collisions ; CMS ; B physics | Physics | Física | Physical Sciences | hadronic decay [f0] | [PHYS.HEXP]Physics [physics]/High Energy Physics - Experiment [hep-ex] | Physics, Particles & Fields | 0202 Atomic, Molecular, Nuclear, Particle And Plasma Physics | colliding beams [p p] | hadronic decay [Phi] | mass spectrum [pi+ pi-] | B/s0 --> J/psi f0
Journal Article
Physics Letters B, ISSN 0370-2693, 12/2015, Volume 751, Issue C, pp. 63 - 80
An observation of the decay and a comparison of its branching fraction with that of the decay has been made with the ATLAS detector in proton–proton collisions...
PARTICLE ACCELERATORS
Journal Article
Physics Letters B, ISSN 0370-2693, 06/2014, Volume 734, pp. 261 - 281
A peaking structure in the mass spectrum near threshold is observed in decays, produced in pp collisions at collected with the CMS detector at the LHC. The...
Journal Article
7. Prompt and non-prompt $$J/\psi $$ J/ψ elliptic flow in Pb+Pb collisions at $$\sqrt{s_{_\text {NN}}} = 5.02$$ sNN=5.02 Tev with the ATLAS detector
The European Physical Journal C, ISSN 1434-6044, 9/2018, Volume 78, Issue 9, pp. 1 - 23
The elliptic flow of prompt and non-prompt $$J/\psi $$ J/ψ was measured in the dimuon decay channel in Pb+Pb collisions at $$\sqrt{s_{_\text {NN}}}=5.02$$...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
8. Measurement of the prompt J/ψ pair production cross-section in pp collisions at √s = 8 TeV with the ATLAS detector
The European Physical Journal C: Particles and Fields, ISSN 1434-6052, 2017, Volume 77, Issue 2, pp. 1 - 34
Journal Article
PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 03/2015, Volume 114, Issue 12, p. 121801
A search for the decays of the Higgs and Z bosons to J/psi gamma and Upsilon(nS)gamma (n = 1,2,3) is performed with pp collision data samples corresponding to...
PHYSICS, MULTIDISCIPLINARY | PSI | PHYSICS | Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences
Journal Article
10. Measurement of the differential cross-sections of inclusive, prompt and non-prompt J / ψ production in proton–proton collisions at s = 7 TeV
Nuclear Physics, Section B, ISSN 0550-3213, 2011, Volume 850, Issue 3, pp. 387 - 444
The inclusive production cross-section and fraction of mesons produced in -hadron decays are measured in proton–proton collisions at with the ATLAS detector at...
Journal Article
|
Continuous random variables have many applications. Baseball batting averages, IQ scores, the length of time a long distance telephone call lasts, the amount of money a person carries, the length of time a computer chip lasts, and SAT scores are just a few. The field of reliability depends on a variety of continuous random variables.
Note
The values of discrete and continuous random variables can be ambiguous. For example, if
X is equal to the number of miles (to the nearest mile) you drive to work, then X is a discrete random variable. You count the miles. If X is the distance you drive to work, then you measure values of X and X is a continuous random variable. For a second example, if X is equal to the number of books in a backpack, then X is a discrete random variable. If X is the weight of a book, then X is a continuous random variable because weights are measured. How the random variable is defined is very important.
Properties of Continuous Probability Distributions
The graph of a continuous probability distribution is a curve. Probability is represented by area under the curve.
The curve is called the probability density function (abbreviated as pdf). We use the symbol f(x) to represent the curve; f(x) is the function that corresponds to the graph, and we use the density function f(x) to draw the graph of the probability distribution. Area under the curve is given by a different function called the cumulative distribution function (abbreviated as cdf). The cumulative distribution function is used to evaluate probability as area.
- The outcomes are measured, not counted.
- The entire area under the curve and above the x-axis is equal to one.
- Probability is found for intervals of x values rather than for individual x values.
- [latex]P(c<x<d)[/latex] is the probability that the random variable X is in the interval between the values c and d. [latex]P(c<x<d)[/latex] is the area under the curve, above the x-axis, to the right of c and the left of d.
- [latex]P(x=c)=0[/latex]: the probability that x takes on any single individual value is zero. The area below the curve, above the x-axis, and between [latex]x=c[/latex] and [latex]x=c[/latex] has no width, and therefore no area (area [latex]=0[/latex]). Since the probability is equal to the area, the probability is also zero.
- [latex]P(c<x<d)[/latex] is the same as [latex]P(c{\leq}x{\leq}d)[/latex] because probability is equal to area.
We will find the area that represents probability by using geometry, formulas, technology, or probability tables. In general, calculus is needed to find the area under the curve for many probability density functions. When we use formulas to find the area in this textbook, the formulas were found by using the techniques of integral calculus. However, because most students taking this course have not studied calculus, we will not be using calculus in this textbook.
There are many continuous probability distributions. When using a continuous probability distribution to model probability, the distribution used is selected to model and fit the particular situation in the best way.
In this chapter and the next, we will study the uniform distribution, the exponential distribution, and the normal distribution. The following graphs illustrate these distributions.
Uniform Distribution: a continuous random variable (RV) that has equally likely outcomes over the domain, [latex]a<x<b[/latex]; it is often referred to as the rectangular distribution because the graph of the pdf has the form of a rectangle. Notation: [latex]X{\sim}U(a,b)[/latex]. The mean is [latex]\mu=\frac{{a+b}}{{2}}[/latex] and the standard deviation is [latex]\sigma=\sqrt{\frac{{(b-a)^2}}{{12}}}[/latex]. The probability density function is [latex]f(x)=\frac{{1}}{{b-a}}[/latex] for [latex]a<x<b[/latex] or [latex]a{\leq}x{\leq}b[/latex]. The cumulative distribution function is [latex]P(X{\leq}x)=\frac{{x-a}}{{b-a}}[/latex].
Exponential Distribution: a continuous random variable (RV) that appears when we are interested in the intervals of time between some random events, for example, the length of time between emergency arrivals at a hospital; the notation is [latex]X{\sim}\text{Exp}(m)[/latex]. The mean is [latex]\mu=\frac{1}{m}[/latex] and the standard deviation is [latex]\sigma=\frac{1}{m}[/latex]. The probability density function is [latex]f(x)=me^{−mx}[/latex], [latex]x{\geq}0[/latex] and the cumulative distribution function is [latex]P(X{\leq}x)=1−e^{−mx}[/latex].
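As a small illustration of these two definitions (my own sketch, not part of the text; the parameter values are arbitrary examples), the following Python code evaluates the pdf and cdf formulas given above.

import math

def uniform_pdf(x, a, b):
    return 1 / (b - a) if a <= x <= b else 0.0

def uniform_cdf(x, a, b):
    return min(max((x - a) / (b - a), 0.0), 1.0)

def exp_pdf(x, m):
    return m * math.exp(-m * x) if x >= 0 else 0.0

def exp_cdf(x, m):
    return 1 - math.exp(-m * x) if x >= 0 else 0.0

# X ~ U(0, 23): P(2 < X < 18) is the area of a rectangle of width 16 and height 1/23.
print("uniform:     P(2 < X < 18) =", uniform_cdf(18, 0, 23) - uniform_cdf(2, 0, 23))

# X ~ Exp(m = 0.25), so the mean is 1/m = 4: P(X <= 4) = 1 - e^{-1}.
print("exponential: P(X <= 4)     =", exp_cdf(4, 0.25))

Both probabilities come out as areas under the respective curves, matching the properties listed above.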
|
In school, I have learnt to plot simple graphs such as $y=x^2$ followed by $y=x^3$.
A grade or two later, I learnt to plot other interesting graphs such as $y=1/x$, $y=\ln x$, $y=e^x$. I have also recently learnt about trigonometric graphs and circle equations. In the internet, I have seen users posting graphs of different shapes like a heart-shaped graph and a Batman logo-shaped graph. I am sure there are numerous more graphs that I have yet to see. Seeing that the graphs can be shaped into shapes like the Batman logo and a heart brings me to my question: Is it possible to plot a graph of any shape regardless of its complexity? Perhaps, shaped into an outline of a person or a landmark? Why or why not?
Simple answer: Yes--simply draw your person or landmark and then superimpose the $xy$-plane on top and suddenly you have all of the points (i.e., coordinates) that need to be filled in to create a plot of your graph.
Now,
how you come up with a good (read: not complex) mathematical description of these coordinates (e.g., using a function) is another issue entirely. Depending on the complexity of what you are drawing, you most likely won't get something pretty. For example, consider drawing the fictional character Donkey Kong:
The picture above was generated by Wolfram|Alpha. How complicated is the curve? Well, here you go:
That's pretty horrible. So yes, you can certainly plot whatever you want, but describing your plot effectively using whatever kind of function, parametric equations, etc., may not be very easy or nice in the end.
Added: Given the unexpected popularity of this post (both question and answer(s)), I thought I might add something that some may find helpful or useful. In 2012, I wrote an article entitled Bézier Curves with a Romantic Twist that appeared in the Math Horizons periodical. This piece largely dealt with using lower order Bézier curves (linear and cubic) to construct letters for a person's name on a graphing calculator; in the context of this post, the problem was to plot a graph of letters in the alphabet (along with a heart and parametrically-defined sequence). If you read the article, you will see that the math behind constructing such letters is not all too complicated--my reason for providing the Donkey Kong example was largely to show just how complicated it can be to effectively sketch something with equations.
But sketching letters and the like (as opposed to much more complicated representations like Captain Falcon, Pikachu, Sonic, etc.) is quite manageable. In fact, the avatar for my username even uses a simple construction to spell the word MATH:
For those interested, I will provide the equations I used for the M, the sequence, and the heart (as entered on a TI-89 calculator):
$$ \mathrm{M}= \begin{cases} xt1 & = & (1-t)10+11.25t\\ yt1 & = & (1-t)5+12.75t\\ xt2 & = & (1-t)11.25+12.5t\\ yt2 & = & (1-t)12.75+8.875t\\ xt3 & = & (1-t)12.5+13.75t\\ yt3 & = & (1-t)8.875+12.75t\\ xt4 & = & (1-t)13.75+15t\\ yt4 & = & (1-t)12.75+5t\\ \end{cases} $$
$$ \mathrm{Heart} = \begin{cases} xt5 & = & 4\sin(t)^3\\ yt5 & = & \frac{1}{2}\bigl(13\cos(t)-5\cos(2t)-2\cos(3t)-\cos(4t)\bigr)+34.2 \end{cases} $$
$$ \mathrm{Sequence}= \begin{cases} xt6 & = & t\\ yt6 & = & (3^t+5^t)^{1/t} \end{cases} $$
Of course, the A, T, and H are all similar to the M in that they are drawn using linear Bézier curves. A more interesting letter is something like C or S or even D or B (these will all use at least cubic Bézier curves).
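For readers who would rather experiment on a computer than on a TI-89, here is a rough Python/matplotlib sketch (mine, not the author's) that traces the four linear Bézier segments of the M and the heart curve using the parametric equations above.

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 1, 100)

# The four linear Bezier segments of the "M", taken from the equations above.
m_segments = [((10, 5), (11.25, 12.75)),
              ((11.25, 12.75), (12.5, 8.875)),
              ((12.5, 8.875), (13.75, 12.75)),
              ((13.75, 12.75), (15, 5))]
for (x0, y0), (x1, y1) in m_segments:
    plt.plot((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1, "b")

# The heart, from the parametric equations above (shifted up by 34.2).
s = np.linspace(0, 2 * np.pi, 400)
xh = 4 * np.sin(s) ** 3
yh = 0.5 * (13 * np.cos(s) - 5 * np.cos(2 * s) - 2 * np.cos(3 * s) - np.cos(4 * s)) + 34.2
plt.plot(xh, yh, "r")

plt.axis("equal")
plt.show()

Each linear segment is just the convex combination $(1-t)P_0 + tP_1$ of its two control points, which is all a linear Bézier curve is.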
Is it possible to plot a graph of any shape regardless of its complexity?
Definitely not, simply because it is possible to define shapes that are so complex they cannot be computed. More precisely, there exist uncomputable functions $f$, such that no program can compute their graphs $\{(x,y)\mid y=f(x)\}.$
In fact,
most functions that map $N \to N$ are uncomputable, in the sense that uncountably many are uncomputable, whereas only countably many are computable (there being uncountably many such functions altogether).
An example would be $f(x)$ defined as the number of $x$-state Busy-Beaver-class Turing machines.
NB: This is contrary to all $5$ of the other answers -- perhaps because they assume the graph is supposed to be finite (and hence computable).
Yes. These graphs are often not functions, but can be written as several piecewise functions (which I imagine you have covered) with domain restrictions.
When I was a junior in high school, I graphed the words "Homecoming Date" and asked a girl to be my $f(x)$. I defined several functions as $f(x)$ to ensure that my wording was acceptable. She didn't seem to notice and accepted, haha.
But yes, your graphing possibilities are endless. Try some out yourself!
The answer to your question is yes, you can plot any graph like the one you're interested in. A way to do this (which I'm sure won't be completely satisfying) is to simply have the complete list of all co-ordinate points that you want to graph, and define your mapping m so that $m(x) = y$ whenever $(x,y)$ is a co-ordinate point. (Also note here that $x$ might map to more than one $y$ - for example in your heart shaped example).
What it seems like you'd like however is a 'nice looking' map like $y = e^x-4x+1$, or something like that. In order to do something like this, many methods exist. I would suggest looking up interpolation for a start.
It would be a good idea to make clear what a graph
is.
First, a
function is simply an “answering machine” where you put something in, and get something else out. It's not necessarily something you could write as a mathematical formula in the usual way. Only, it so happens that this is possible for many useful functions which simply map numbers to numbers – but that's just a very specific special case. For instance, a function could map manifolds to manifolds: it could map a sphere to a torus, a torus to a double torus, etc.. Or, perhaps an even better example would be the function that maps people to their parents. To define a function you only need to define, for each input argument, what the result should be.
In particular, given any shape, you can
define a function that takes an x-coordinate and yields the highest corresponding y-coordinate which is present in the shape †. The graph of this function is then, again by definition, the upper edge of your original shape.
It's relatively easy to generalise this to give not only the upper edge but a whole drawing:
if you can draw the shape with a pen, then this defines a function from time $t$ to position-of-the-pen-tip $p$. This can be used for a parametric plot, and by the implicit function theorem that is equivalent to a couple of graphs.
So, the answer to your question is trivially
yes, because drawing is nothing else but plotting a function.
What's a more interesting question is whether any shape can be the plot of a function that's reasonably simple to write down / store as a definition. Daniel W. Farlow gave an example for how the usual maths-formula way is not really well-suited for this in general; however there are more optimised ways to do it. In particular, computer graphics file formats are essentially nothing but clever conventions of how to efficiently write (in binary form) a mathematical description of a picture/shape. Some such formats, the vector graphics kind, can actually give an exact description of shapes, however for more complicated stuff like photographs an exact description is not feasible
‡; instead you approximate the image by something that looks almost the same. And finding such approximations is a pretty interesting topic mathematically, mainly in the branch of functional analysis.
† That would actually be a partial function, because not for all $x$ can any point in the shape at all be found.
‡ In fact it's not possible, physically: a picture is a collection of measured brightnesses. No physical measurement is exact, it always has some uncertainty.
|
K-function of a Three-Dimensional Point Pattern
Estimates the \(K\)-function from a three-dimensional point pattern.
Usage
K3est(X, …, rmax = NULL, nrval = 128, correction = c("translation", "isotropic"), ratio=FALSE)
Arguments X
Three-dimensional point pattern (object of class
"pp3").
…
Ignored.
rmax
Optional. Maximum value of argument \(r\) for which \(K_3(r)\) will be estimated.
nrval
Optional. Number of values of \(r\) for which \(K_3(r)\) will be estimated. A large value of
nrval is required to avoid discretisation effects.
correction
Optional. Character vector specifying the edge correction(s) to be applied. See Details.
ratio
Logical. If
TRUE, the numerator and denominator of each edge-corrected estimate will also be saved, for use in analysing replicated point patterns.
Details
For a stationary point process \(\Phi\) in three-dimensional space, the three-dimensional \(K\) function is $$ K_3(r) = \frac 1 \lambda E(N(\Phi, x, r) \mid x \in \Phi) $$ where \(\lambda\) is the intensity of the process (the expected number of points per unit volume) and \(N(\Phi,x,r)\) is the number of points of \(\Phi\), other than \(x\) itself, which fall within a distance \(r\) of \(x\). This is the three-dimensional generalisation of Ripley's \(K\) function for two-dimensional point processes (Ripley, 1977).
The three-dimensional point pattern
X is assumed to be a partial realisation of a stationary point process \(\Phi\). The distance between each pair of distinct points is computed. The empirical cumulative distribution function of these values, with appropriate edge corrections, is renormalised to give the estimate of \(K_3(r)\).
The available edge corrections are:
"translation":
the Ohser translation correction estimator (Ohser, 1983; Baddeley et al, 1993)
"isotropic":
the three-dimensional counterpart of Ripley's isotropic edge correction (Ripley, 1977; Baddeley et al, 1993).
Alternatively
correction="all" selects all options.
Value
A function value table (object of class
"fv") that can be plotted, printed or coerced to a data frame containing the function values.
References
Baddeley, A.J, Moyeed, R.A., Howard, C.V. and Boyde, A. (1993) Analysis of a three-dimensional point pattern with replication.
Applied Statistics 42, 641--668.
Ohser, J. (1983) On estimators for the reduced second moment measure of point processes.
Mathematische Operationsforschung und Statistik, series Statistics, 14, 63 -- 71.
Ripley, B.D. (1977) Modelling spatial patterns (with discussion).
Journal of the Royal Statistical Society, Series B, 39, 172 -- 212.
See Also
Aliases
K3est
Examples
# NOT RUN {
X <- rpoispp3(42)
Z <- K3est(X)
if(interactive()) plot(Z)
# }
Documentation reproduced from package spatstat, version 1.59-0, License: GPL (>= 2)
|
Suppose that $X$ and $Y$ are smooth varieties over a field $k$ (not necessarily algebraically closed), of dimension $m$ and $n$. Suppose we are given a morphism $\pi:X\rightarrow Y$.
We know that if $\pi$ is smooth, it is flat (by definition) and all the fibers are smooth of dimension $m-n$.
Under these assumptions:
- if all the (geometric) fibers are smooth of dimension $m-n$, does it follow that $\pi$ is flat (and hence smooth, e.g. by Vakil, Theorem 25.2.2)?
- more generally, can we say something about flatness of $\pi$ even if the fibers are not smooth, but still of constant dimension?
The first part is exercise 25.2.F from Vakil.
We can readily perform a number of reductions to simplify the question: the question is local on the target and on the source, so locally we can replace $Y$ by $\mathbb{A}^n_k$, by choosing an étale surjective map to $\mathbb{A}^n_k$. We can also replace $X$ by a smooth affine variety. It should also be sufficient to check this at closed points. Hence we have a morphism of rings $B \colon= k[y_1,\dots,y_n]\rightarrow A$, where A is a finitely generated $k$-algebra of dimension $m$ with $\Omega_{\mathrm{Spec }A/k}$ locally free of rank $m$, such that all the tensor products with maximal ideals of $B$ are (possibly smooth) of dimension $m-n$. Does it follow that $A$ is a flat $B$-algebra?
I must say that I don't really know how to approach this. I can't seem to relate the flatness and smoothness conditions, and the answer to this question is clearly false if $Y$ is not smooth, think of the normalization of the node, where all fibers are smooth of dimension $0$ (and probably also if $X$ isn't, although I don't have a counterexample in my head).
|
In the article Finding Intermediate Values in Arithmetic Mean a linear formula was obtained to solve intermediate values in an Arithmetic Mean sequence. The same problem of finding the intermediate values on the Geometric Mean sequence was posed; refer to the paper. During the simplification process the following sequence generator was observed:
\[\kappa(\lambda,\ n)\ =\ 2^\lambda\left(\frac{J_n}{2^n-1}\right)\]
Running the sequence for \(\lambda\) values ranging from 1 to 8, and keeping \(1 \le n \le \lambda\), the following sequence will be generated:
\[\begin{matrix} λ & & n = 1 & n = 2 & n = 3 & n = 4 & n = 5 & n = 6 & n = 7 & n = 8 \\ 01 & : & 2.0 \\ 02 & : & 4.0 & 2.0 \\ 03 & : & 8.0 & 4.0 & 6.0\\ 04 & : & 16.0 & 8.0 & 12.0 & 10.0\\ 05 & : & 32.0 & 16.0 & 24.0 & 20.0 & 22.0\\ 06 & : & 64.0 & 32.0 & 48.0 & 40.0 & 44.0 & 42.0\\ 07 & : & 128.0 & 64.0 & 96.0 & 80.0 & 88.0 & 84.0 & 86.0\\ 08 & : & 256.0 & 128.0 & 192.0 & 160.0 & 176.0 & 168.0 & 172.0 & 170.0\\ \end{matrix}\]
When observing the sequence generated the following properties can be observed:
- When \(n = 1\), \(\kappa = 2^\lambda\)
- When \(n = \lambda\), \(\kappa = 2J_\lambda\)
- For \(n > 1\), \(n\) can be found using the equation \(2^{\lambda-(n-1)}\)
- Given that the sequence for \(\lambda\) is known, it is easy to compute any value in \(\lambda + 1\) using a linear evaluation
For the finite range 1 \le n \le \lambda the properties of the generator are already interesting. When n is not within the range, the properties get more interesting.
- For \(n = 0\), \(\kappa\) will always return 0
- When \(n \to -\infty\), \(\kappa\) approaches negative infinity
- When \(n \to \infty\), \(\kappa = \frac{2^{\lambda+1}}{3}\)
The proofs for the listed properties can be found in the paper, Optimisation to Geometric Mean Sequence Special Case Solver
An implementation of the Kappa function is available in github.
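Independent of the linked implementation, here is a small Python sketch that reproduces the table printed above. It deliberately does not use the \(\kappa\) formula itself (which depends on how \(J_n\) is defined in the paper); instead it exploits a pattern visible in the table, namely that each row starts from \(\kappa(\lambda, 0) = 0\) and \(\kappa(\lambda, 1) = 2^\lambda\) and every later entry is the midpoint of the two entries before it. Treat it purely as a way to regenerate the numbers, not as the paper's derivation.

def kappa_row(lam):
    # Empirical reconstruction of row lambda: start from 0 and 2**lam,
    # then repeatedly take the midpoint of the two previous entries.
    values = [0.0, float(2 ** lam)]
    for _ in range(2, lam + 1):
        values.append((values[-1] + values[-2]) / 2)
    return values[1:]          # entries for n = 1 .. lambda

for lam in range(1, 9):
    print(f"{lam:02d} : " + "  ".join(f"{v:6.1f}" for v in kappa_row(lam)))

Running it prints exactly the eight rows shown above; the \(n = \lambda\) values and the \(n \to \infty\) limit \(2^{\lambda+1}/3\) agree with the listed properties.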
|
You are right that the Yoneda lemma plays the key role in the proof.
By the definition of convolution:$$(F \otimes G) \otimes H \approx \int^{C, D} \left(\int^{A, B} F(A) \times G(B) \times \hom(C, A \otimes B)\right) \times H(D) \times \hom(-, C \otimes D)$$Because products in $\mathbf{Set}$ preserve coends, the above is isomorphic to:$$\int^{A, B, C, D} F(A) \times G(B) \times \hom(C, A \otimes B) \times H(D) \times \hom(-, C \otimes D)$$
The Yoneda lemma says that any
covariant functor $K(A)$ is naturally isomorphic to $\hom(\hom(A, -), K)$. By definition, the last object is the end $\int_C K(C)^{\hom(A, C)}$. Therefore the Yoneda lemma says:$$K(A) \approx \int_C K(C)^{\hom(A, C)}$$By duality, we get the Yoneda lemma for contravariant functors $K$:$$K(A) \approx \int^C K(C) \times \hom(A, C)$$and from the perspective of the opposite category, for covariant functors $K$:$$K(A) \approx \int^C K(C) \times \hom(C, A)$$where the end turned into the coend and the exponent turned into the product.
In our case, we may reduce by Yoneda $\hom(C, A\otimes B)$ with $\hom(-, C \otimes D)$ under the coend:$$\int^C \hom(-, C \otimes D) \times \hom(C, A\otimes B) \approx \hom(-, (A \otimes B) \otimes D)$$obtaining:$$\int^{A, B, D} F(A) \times G(B) \times H(D) \times \hom(-, (A \otimes B) \otimes D)$$Due to associativity of $\otimes$ we know that $(A \otimes B) \otimes D \approx A \otimes (B \otimes D)$. Now it suffices to "unwind" the coend in the other direction.
|
The Weierstrass approximation theorem says any continuous function $f(x): [0,1] \to \mathbb{R}$ can be approximated uniformly by polynomials. Given any $\epsilon$, we can find $p(x) = x^n + \dots $ such that:
$$ |f(x) - p(x)| < \epsilon $$
is always true. In practice how to do we find $p$? Let's say $f(x)$ was the step function:
$$ f(x) = \left\{ \begin{array}{cc} 0 & x \leq 0 \\ 1 & x > 0\end{array} \right. $$
How do we find the polynomial with $\deg p = 100$ which minimizes the tolerance $\epsilon$ ?
$$ \epsilon = \min_{\deg p = 10^2}\left[\max_{x \in [0,1]} | f(x) - p(x) | \right]$$
Hopefully I have defined a tractable problem... For example, does Lagrange Interpolation necessarily minimize $\epsilon$ ?
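Not an answer, but a rough numerical illustration of "finding $p$ in practice" (my own sketch; the target function and degree are arbitrary choices): the true minimax polynomial requires something like the Remez exchange algorithm, but interpolating at Chebyshev nodes already gives a near-minimax fit for continuous $f$. For the discontinuous step function above, no polynomial can push the uniform error below $1/2$, since a continuous $p$ cannot be within $\epsilon$ of both $0$ at $x=0$ and $1$ just to the right of it unless $\epsilon \geq 1/2$.

import numpy as np

def cheb_nodes(n, a=0.0, b=1.0):
    # n+1 Chebyshev points mapped from (-1, 1) to [a, b]
    k = np.arange(n + 1)
    x = np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))
    return (a + b) / 2 + (b - a) / 2 * x

def near_minimax(f, deg, a=0.0, b=1.0):
    # Interpolate f at Chebyshev nodes: close to, but not equal to, the minimax polynomial.
    x = cheb_nodes(deg, a, b)
    coeffs = np.polyfit(x, f(x), deg)
    grid = np.linspace(a, b, 5000)
    err = np.max(np.abs(f(grid) - np.polyval(coeffs, grid)))
    return coeffs, err

coeffs, err = near_minimax(lambda x: np.sin(5 * x), 10)   # smooth example on [0, 1]
print("max |f - p| on [0, 1] is about", err)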
|
Alladi Ramakrishnan Hall
Tensor product decomposition for the general linear supergroup GL(m|n)
Thorsten Heidersdorf
Max Planck Institute, Bonn.
Let $\mathfrak{gl}(n)$ denote the Lie algebra of the general linear group $GL(n)$. Given two finite dimensional irreducible representations $L(\lambda), L(\mu)$ of $\mathfrak{gl}(n)$, their tensor product decomposition $L(\lambda) \otimes L(\mu)$ is given by the Littlewood-Richardson rule. The situation becomes much more complicated when one replaces $\mathfrak{gl}(n)$ by the general linear Lie superalgebra $\mathfrak{gl}(m|n)$. The analogous decomposition $L(\lambda) \otimes L(\mu)$ is largely unknown. Indeed many aspects of the representation theory of $\mathfrak{gl}(m|n)$ are more akin to the study of Lie algebras and their representations in prime characteristic or to the BGG category $\mathcal{O}$. I will give a survey talk about this problem and explain why some approaches don't work and what can be done about it. This will give me the chance to speak about a) the character formula for an irreducible representation $L(\lambda)$, b) Deligne's interpolating category $Rep(GL_t)$ for $t \in \mathbb{C}$ and c) the process of semisimplification of a tensor category.
|
Suppose you use a neural network for a classification problem and the neurons in the output layer should return a valid discrete probability distribution. If you set the number of output neurons \(n\) equal to the number of classes of your classification problem, you have the nice interpretation that the result for each neuron \(y_i\) gives you the probability that the corresponding input belongs to the class \(\omega_i\). If the network is confident in its classification, you will see a strong peak in the probability distribution. On the other hand, for a noisy input where the network has not really a clue what it means (or it hasn't learned yet), the resulting distribution will be more broadened.
But how do you transform the weighted input \(u_i = \left\langle \fvec{w}_i, \fvec{x} \right\rangle + b\) (weight \(\fvec{w}_i\), input \(\fvec{x}\) and bias \(b\)) into a valid probability distribution? A common approach is to apply the softmax function on the weighted input\begin{equation} \label{eq:Softmax} y_i = \frac{e^{c \cdot u_i}}{\sum_{j=1}^{n} e^{c \cdot u_j}} \end{equation}
with the parameter \(c \in \mathbb{R}\). Using this transformation we ensure that the resulting output vector \(\fvec{y} = (y_1, y_2, \ldots, y_n)\) satisfies\begin{equation*} \sum_{i=1}^{n} y_i = 1 \quad \text{and} \quad \forall i : y_i \geq 0 \end{equation*}
and is therefore indeed a valid probability distribution. A common choice is to set \(c=1\) but it is useful to analyse the result for different values for this parameter. You can do so in the following animation based on an arbitrary example vector\begin{equation*} \fvec{u} = (-0.2, 1, 2, 0, 0, 2.1, 1.4, 0.8). \end{equation*}
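A minimal NumPy implementation of the softmax transformation above, applied to this example vector (the values of \(c\) tried below are arbitrary illustrations):

import numpy as np

def softmax(u, c=1.0):
    # Softmax with scaling parameter c; subtracting max(c*u) does not change
    # the result but avoids overflow in the exponentials.
    z = c * np.asarray(u, dtype=float)
    z -= z.max()
    e = np.exp(z)
    return e / e.sum()

u = np.array([-0.2, 1, 2, 0, 0, 2.1, 1.4, 0.8])
for c in (0.5, 1.0, 5.0):
    y = softmax(u, c)
    print(f"c = {c}: sum = {y.sum():.3f}, y = {np.round(y, 3)}")

Larger values of \(c\) sharpen the peak around the largest weighted input, while smaller values flatten the distribution towards uniform, which is exactly the effect the parameter \(c\) controls in the equation above.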
|
Mensuration
Category : 7th Class
Mensuration
Standard Units of Area
The inter relationship among various units of measurement of area are listed below.
\[1\,{{m}^{2}}\] = \[(100\times 100)\,c{{m}^{2}}={{10}^{4}}\,c{{m}^{2}}\]
\[1\,{{m}^{2}}\] = \[(10\times 10)\,d{{m}^{2}}=100\,d{{m}^{2}}\]
\[1\,d{{m}^{2}}\] = \[(10\times 10)\,c{{m}^{2}}=100\,c{{m}^{2}}\]
\[1\,da{{m}^{2}}\] = \[(10\times 10)\,{{m}^{2}}=100\,{{m}^{2}}\]
\[1\,h{{m}^{2}}\] = \[(100\times 100)\,{{m}^{2}}={{10}^{4}}{{m}^{2}}\]
\[1\,k{{m}^{2}}\] = \[(1000\times 1000)\,{{m}^{2}}={{10}^{6}}\,{{m}^{2}}\]
\[1\,hectare\] = \[10000\,{{m}^{2}}\]
\[1\,k{{m}^{2}}\] = \[100\,hectare\]
Formulae Related to Perimeter and Area
Example:
Find the area of a right-angled triangle whose sides are 15 cm, 9 cm and 12 cm.
(a) \[48\,c{{m}^{2}}\] (b) \[80\,c{{m}^{2}}\]
(c)\[54\,c{{m}^{2}}\] (d) \[78\,c{{m}^{2}}\]
(e) None of these
Answer (c)
Explanation: Here, a = 15 cm, b = 9 cm and c = 12 cm
Also, \[{{a}^{2}}={{b}^{2}}+{{c}^{2}}\Rightarrow \]The given triangle is a right triangle.
\[\therefore \]Area of the right triangle \[=\frac{1}{2}\times 9\times 12=54\,c{{m}^{2}}\]
Example:
The dimensions of the floor of a room are 15 m and 20 m. How many square tiles each of length 20 cm are required to furnish the floor?
(a) 2,400 (b) 5,200
(c) 7,500 (d) 5,250
(e) None of these
Answer (c)
Explanation: Area of the room \[=15\,m\times 20\,m\]
\[=1500\,cm\times 2000\,cm=3\times {{10}^{6}}\,c{{m}^{2}}\]
Area of a tile \[=20\,cm\times 20\,cm=400\,c{{m}^{2}}\]
Total number of tiles required \[=\frac{3\times {{10}^{6}}}{400}=\frac{30000}{4}=7,500\]
|
De Bruijn-Newman constant
For each real number [math]t[/math], define the entire function [math]H_t: {\mathbf C} \to {\mathbf C}[/math] by the formula
[math]\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du[/math]
where [math]\Phi[/math] is the super-exponentially decaying function
[math]\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).[/math]
It is known that [math]\Phi[/math] is even, and that [math]H_t[/math] is even, real on the real axis, and obeys the functional equation [math]H_t(\overline{z}) = \overline{H_t(z)}[/math]. In particular, the zeroes of [math]H_t[/math] are symmetric about both the real and imaginary axes. One can also express [math]H_t[/math] in a number of different forms, such as
[math]\displaystyle H_t(z) = \frac{1}{2} \int_{\bf R} e^{tu^2} \Phi(u) e^{izu}\ du[/math]
or
[math]\displaystyle H_t(z) = \frac{1}{2} \int_0^\infty e^{t\log^2 x} \Phi(\log x) e^{iz \log x}\ \frac{dx}{x}.[/math]
In the notation of [KKL2009], one has
[math]\displaystyle H_t(z) = \frac{1}{8} \Xi_{t/4}(z/2).[/math]
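As a rough illustration of how [math]H_t[/math] can be evaluated numerically (this is only a sketch, not the Polymath15 code; the truncation of the sum and of the integration range is ad hoc and the quadrature is not rigorous), one can truncate [math]\Phi[/math] and integrate the first formula directly in Python:

import numpy as np
from scipy.integrate import quad

def Phi(u, nmax=30):
    # Truncation of the super-exponentially decaying sum defining Phi(u).
    n = np.arange(1, nmax + 1)
    return np.sum((2 * np.pi**2 * n**4 * np.exp(9 * u)
                   - 3 * np.pi * n**2 * np.exp(5 * u))
                  * np.exp(-np.pi * n**2 * np.exp(4 * u)))

def H(t, z, umax=5.0):
    # H_t(z) = int_0^inf e^{t u^2} Phi(u) cos(z u) du, with the upper limit truncated.
    f = lambda u: np.exp(t * u * u) * Phi(u) * np.cos(z * u)
    re = quad(lambda u: f(u).real, 0, umax, limit=200)[0]
    im = quad(lambda u: f(u).imag, 0, umax, limit=200)[0]
    return complex(re, im)

# At t = 0 the first real zero should sit near twice the ordinate of the first
# zeta zero, i.e. near z = 2 * 14.1347...; if the truncations are adequate, the
# two values below should straddle a sign change.
print(H(0.0, 28.0).real, H(0.0, 28.5).real)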
De Bruijn [B1950] and Newman [N1976] showed that there existed a constant, the
de Bruijn-Newman constant [math]\Lambda[/math], such that [math]H_t[/math] has all zeroes real precisely when [math]t \geq \Lambda[/math]. The Riemann hypothesis is equivalent to the claim that [math]\Lambda \leq 0[/math]. Currently it is known that [math]0 \leq \Lambda \lt 1/2[/math] (lower bound in [RT2018], upper bound in [KKL2009]).
The Polymath15 project seeks to improve the upper bound on [math]\Lambda[/math]. The current strategy is to combine the following three ingredients:
- Numerical zero-free regions for [math]H_t(x+iy)[/math] of the form [math]\{ x+iy: 0 \leq x \leq T; y \geq \varepsilon \}[/math] for explicit [math]T, \varepsilon, t \gt 0[/math].
- Rigorous asymptotics that show that [math]H_t(x+iy)[/math] is non-vanishing whenever [math]y \geq \varepsilon[/math] and [math]x \geq T[/math] for a sufficiently large [math]T[/math].
- Dynamics of zeroes results that control [math]\Lambda[/math] in terms of the maximum imaginary part of a zero of [math]H_t[/math].
[math]t=0[/math]
When [math]t=0[/math], one has
[math]\displaystyle H_0(z) = \frac{1}{8} \xi( \frac{1}{2} + \frac{iz}{2} ) [/math]
where
[math]\displaystyle \xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \zeta(s)[/math]
is the Riemann xi function. In particular, [math]z[/math] is a zero of [math]H_0[/math] if and only if [math]\frac{1}{2} + \frac{iz}{2}[/math] is a non-trivial zero of the Riemann zeta function. Thus, for instance, the Riemann hypothesis is equivalent to all the zeroes of [math]H_0[/math] being real, and Riemann-von Mangoldt formula (in the explicit form given by Backlund) gives
[math]\displaystyle |N_0(T) - (\frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} - \frac{7}{8})| \lt 0.137 \log (T/2) + 0.443 \log\log(T/2) + 4.350 [/math]
for any [math]T \gt 4[/math], where [math]N_0(T)[/math] denotes the number of zeroes of [math]H_0[/math] with real part between 0 and T.
The first [math]10^{13}[/math] zeroes of [math]H_0[/math] (to the right of the origin) are real [G2004]. This numerical computation uses the Odlyzko-Schonhage algorithm. In [P2017] it was independently verified that all zeroes of [math]H_0[/math] between 0 and 61,220,092,000 were real.
[math]t\gt0[/math]
For any [math]t\gt0[/math], it is known that all but finitely many of the zeroes of [math]H_t[/math] are real and simple [KKL2009, Theorem 1.3]. In fact, assuming the Riemann hypothesis,
all of the zeroes of [math]H_t[/math] are real and simple [CSV1994, Corollary 2].
Let [math]\sigma_{max}(t)[/math] denote the largest imaginary part of a zero of [math]H_t[/math], thus [math]\sigma_{max}(t)=0[/math] if and only if [math]t \geq \Lambda[/math]. It is known that the quantity [math]\frac{1}{2} \sigma_{max}(t)^2 + t[/math] is non-decreasing in time whenever [math]\sigma_{max}(t)\gt0[/math] (see [KKL2009, Proposition A]). In particular we have
[math]\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2[/math]
for any [math]t[/math].
The zeroes [math]z_j(t)[/math] of [math]H_t[/math] obey the system of ODE
[math]\partial_t z_j(t) = - \sum_{k \neq j} \frac{2}{z_k(t) - z_j(t)}[/math]
where the sum is interpreted in a principal value sense, and excluding those times in which [math]z_j(t)[/math] is a repeated zero. See dynamics of zeros for more details. Writing [math]z_j(t) = x_j(t) + i y_j(t)[/math], we can write the dynamics as
[math] \partial_t x_j = - \sum_{k \neq j} \frac{2 (x_k - x_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math] [math] \partial_t y_j = \sum_{k \neq j} \frac{2 (y_k - y_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math]
where the dependence on [math]t[/math] has been omitted for brevity.
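Purely as a toy illustration of these ODEs (not the actual dynamics of the zeroes of [math]H_t[/math], which involve infinitely many zeroes and a principal value), the following Python sketch follows a small artificial configuration with one complex-conjugate pair under explicit Euler steps and watches the pair move toward the real axis:

import numpy as np

def velocity(z):
    # dz_j/dt = -sum_{k != j} 2 / (z_k - z_j), finite toy version
    v = np.zeros_like(z)
    for j in range(len(z)):
        others = np.delete(z, j)
        v[j] = -np.sum(2.0 / (others - z[j]))
    return v

# Toy configuration: one conjugate pair at +/- 0.5i plus four real zeroes.
z = np.array([-4.0, -2.0, 2.0, 4.0, 0.5j, -0.5j], dtype=complex)
t, dt = 0.0, 1e-3
while z.imag.max() > 0.1:      # stop well before the pair collides (Euler breaks down there)
    z = z + dt * velocity(z)
    t += dt
print(f"max imaginary part fell from 0.50 to {z.imag.max():.3f} by t = {t:.3f}")

This mirrors, in miniature, the mechanism by which increasing [math]t[/math] pushes zeroes toward the real axis and hence bounds [math]\Lambda[/math].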
In [KKL2009, Theorem 1.4], it is shown that for any fixed [math]t\gt0[/math], the number [math]N_t(T)[/math] of zeroes of [math]H_t[/math] with real part between 0 and T obeys the asymptotic
[math]N_t(T) = \frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + \frac{t}{16} \log T + O(1) [/math]
as [math]T \to \infty[/math] (caution: the error term here is not uniform in t). Also, the zeroes behave like an arithmetic progression in the sense that
[math] z_{k+1}(t) - z_k(t) = (1+o(1)) \frac{4\pi}{\log |z_k|(t)} = (1+o(1)) \frac{4\pi}{\log k} [/math]
as [math]k \to +\infty[/math].
See asymptotics of H_t for asymptotics of the function [math]H_t[/math].
Threads
- Polymath proposal: upper bounding the de Bruijn-Newman constant, Terence Tao, Jan 24, 2018.
- Polymath15, first thread: computing H_t, asymptotics, and dynamics of zeroes, Terence Tao, Jan 27, 2018.
- Polymath15, second thread: generalising the Riemann-Siegel approximate functional equation, Terence Tao and Sujit Nair, Feb 2, 2018.
Other blog posts and online discussion
- Heat flow and zeroes of polynomials, Terence Tao, Oct 17, 2017.
- The de Bruijn-Newman constant is non-negative, Terence Tao, Jan 19, 2018.
- Lehmer pairs and GUE, Terence Tao, Jan 20, 2018.
- A new polymath proposal (related to the Riemann hypothesis) over Tao's blog, Gil Kalai, Jan 26, 2018.
Code and data
Wikipedia and other references
Bibliography
- [B1950] N. C. de Bruijn, The roots of trigonometric integrals, Duke J. Math. 17 (1950), 197–226.
- [CSV1994] G. Csordas, W. Smith, R. S. Varga, Lehmer pairs of zeros, the de Bruijn-Newman constant Λ, and the Riemann hypothesis, Constr. Approx. 10 (1994), no. 1, 107–129.
- [G2004] Gourdon, Xavier (2004), The [math]10^{13}[/math] first zeros of the Riemann Zeta function, and zeros computation at very large height.
- [KKL2009] H. Ki, Y. O. Kim, and J. Lee, On the de Bruijn-Newman constant, Advances in Mathematics, 22 (2009), 281–306.
- [N1976] C. M. Newman, Fourier transforms with only real zeroes, Proc. Amer. Math. Soc. 61 (1976), 246–251.
- [P2017] D. J. Platt, Isolating some non-trivial zeros of zeta, Math. Comp. 86 (2017), 2449–2467.
- [RT2018] B. Rodgers, T. Tao, The de Bruijn-Newman constant is non-negative, preprint. arXiv:1801.05914
- [T1986] E. C. Titchmarsh, The theory of the Riemann zeta-function. Second edition. Edited and with a preface by D. R. Heath-Brown. The Clarendon Press, Oxford University Press, New York, 1986.
|
If I understood correctly, you have a signal $x[n]$ and an unknown discrete-time LTI filter, so that you can look at the output of the filter $y[n]$. Now you are looking for the impulse response of the filter $h[n]$.
The output follows a convolution rule$$y[n]=x[n]*h[n]=\sum_{m=-\infty}^{\infty}x[m]h[n-m]$$The process to find $h[n]$, given $x[n]$ and $y[n]$ is called deconvolution. There are some approaches to do that.
For example, if the Fourier transforms of the signals exist, then by the convolution theorem the convolution becomes a product in the frequency domain: $$Y(\omega)=X(\omega)H(\omega)$$Therefore, $$H(\omega)=\frac{Y(\omega)}{X(\omega)}$$You should however be careful about the zeros of $X(\omega)$. If you are free to choose $x[n]$, then you should choose it such that it excites the system at all frequencies, so that you can probe the whole frequency range of $H(\omega)$ that you are interested in. Alternatively, you can excite the filter repeatedly at different frequencies, which can be interpreted as sampling the frequency response. This will give you a sampled spectrum, so if you know the filter does not have singularities between the sample frequencies, then you can model it by an LTI filter with your desired order (depending on your acceptable approximation error).
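A minimal NumPy sketch of this frequency-domain approach (my own illustration; the filter, the white-noise input, and the small eps guard against near-zero values of $X(\omega)$ are all arbitrary choices, not something from the question):

import numpy as np

rng = np.random.default_rng(0)

h_true = np.array([1.0, 0.5, 0.25, 0.125])   # "unknown" FIR filter, for the demo only
x = rng.standard_normal(256)                  # broadband input: excites all frequencies
y = np.convolve(x, h_true)                    # observed output (full linear convolution)

N = len(y)                                    # zero-pad so linear == circular convolution
X = np.fft.fft(x, N)
Y = np.fft.fft(y, N)

eps = 1e-8                                    # crude guard; a proper approach uses regularization
H = Y * np.conj(X) / (np.abs(X) ** 2 + eps)   # equivalent to Y / X, but damped near zeros of X

h_est = np.real(np.fft.ifft(H))[:len(h_true)]
print("estimated impulse response:", np.round(h_est, 3))

Because the input is broadband, $X(\omega)$ stays away from zero at almost every bin and the recovered impulse response matches the filter used to generate the output.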
In response to your second question, after you acquired an approximation such as $\hat{h}[n]$ for the impulse response, then you can apply it to any desired input by using convolution.
|
I'm new to Mathematica, and for most purposes the program has served me well and been straightforward. However, I'm hitting a snag while trying to create a contour plot for the distribution function
$\qquad f(x,y) = (x\,y)^{p-1}/(\alpha + \beta\,x + \gamma\,y + \delta\,x\,y)^{p + q}$
Notice $x,y$ are variables, and $\alpha,\beta,\gamma,\delta, p,$ and $q$ are constants. I need to set a list of assumptions for the constants in the function, but my attempts have been fruitless. Every command yields an empty plot (axes without an image).
I first tried assigning my function with its assumptions by:
Assuming[ {x > 0, y > 0, p > 0, α > 0, β > 0, γ > 0, δ > 0}, f[x_, y_] := (x*y)^(p - 1)/(α + β*x + γ*y + δ*x*y)^(p + q)]
After the assignment, I tried plotting with ContourPlot and DensityPlot. I'll provide just the ContourPlot expression below because not much changes across them:
ContourPlot[f[x, y], {x, 0, 200}, {y, 0, 200}]
In regards to the ContourPlot code, I've changed the domain to both larger and smaller numbers to no avail. Neither ContourPlot nor DensityPlot provides an image. I then try the code without assigning the function beforehand, while including ContourPlot within the Assuming command:
Assuming[ {α > 0, β > 0, γ > 0, δ > 0, p > 0}, ContourPlot[(x*y)^(p - 1)/(α + β*x + γ*y + δ*x*y)^(p + q), {x, 0, 3}, {y, 0, 3}]]
I know this equation should produce some sort of image since it's simply a type of truncated distribution function. I believe I've narrowed down the issue to one of the following: Mathematica does not allow assumptions to be used with
ContourPlot/DensityPlot, the distribution function is too complicated for Mathematica, or my user error is hindering me. My next step is to try creating different plots on the same graph for various pre-determined values of the parameters.
Any help is much appreciated. As previously mentioned, I'm not very experienced with Mathematica, so I'm more than willing to learn something new or help further explain my goals.
|
Let $\lambda_1,\ldots,\lambda_n$ be complex numbers such that for each integer $k\geq 1$, $$\sum_{i=1}^n \lambda_i^k=0.$$ Here I am supposed to show that $\lambda_i=0$ for each $i\in 1,\ldots,n$. One of my friends said it sounds like I should use the Vandermonde determinant. I tried to find an appropriate version of the Vandermonde determinant that I could apply here, but I could not. I would appreciate it if you let me know a suitable version.
Denote $\mu_1,...,\mu_k$ the non-zero distinct values of $\lambda_i$ (supposing they exist), with (non-zero) multiplicities $m_1,...,m_k$.
Then you have $$\sum_{i=1}^km_i\mu_i^p=0, \qquad p=1,2,\ldots,k $$
Consider this as a system with unknowns $m_i$. Then the determinant of this system is
$$ V=\left| \begin{matrix} \mu_1 & \mu_2 & \cdots & \mu_k \\ \mu_1^2 & \mu_2^2 & \cdots & \mu_k^2 \\ \vdots & \vdots & \ddots &\vdots \\ \mu_1^k & \mu_2^k & \cdots & \mu_k^k \end{matrix} \right| $$
Since the system has the non-trivial solution $(m_1,\ldots,m_k)$, this determinant must be zero. But $V$ is (up to the factor $\mu_1\mu_2\cdots\mu_k$) a Vandermonde determinant, so it is zero if and only if some $\mu_i$ is zero, or there exist $i \neq j$ such that $\mu_i=\mu_j$. Either case contradicts the way we have chosen the $\mu_i$. Therefore, the assumption that not all $\lambda_i$ are zero gives us a contradiction.
This proves that $\lambda_i=0,\ i=1,\ldots,n$.
An elegant way to solve this question would be to use the following lemma:
If $P \in \mathbb{C} [X_1,...,X_n]$ is symmetric, there exists $Q \in \mathbb{C}[X_1,...,X_n]$ such that $P = Q(\Sigma^1,...,\Sigma^n)$ where $\Sigma^i(X_1,...,X_n) = \sum_{k=1}^n{(X_k)^i}$.
Let $P = X_1...X_n$. By the previous lemma, $P(\lambda_1,...,\lambda_n) = 0$ because the hypothesis tells us that $\forall i$ $\Sigma^i(\lambda_1,...,\lambda_n) = 0$. So at least one of the $\lambda_i$ is $0$.
And the result follows by induction.
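For completeness (this remark is an addition, not part of the original answers), the link between the power sums and the coefficients can also be made explicit with Newton's identities:
\[ p_k - e_1 p_{k-1} + e_2 p_{k-2} - \cdots + (-1)^{k-1} e_{k-1} p_1 + (-1)^k k\, e_k = 0, \qquad 1 \le k \le n, \]
where $p_k=\sum_{i=1}^n \lambda_i^k$ and $e_k$ is the $k$th elementary symmetric polynomial in $\lambda_1,\ldots,\lambda_n$. If $p_1=\cdots=p_n=0$, these identities force $e_1=\cdots=e_n=0$, so $\prod_{i=1}^n(X-\lambda_i)=X^n$ and every $\lambda_i$ is $0$.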
It suffices to assume that each $\lambda_i$ is nonzero. (Just ignore the zeros.) Suppose the distinct values are $\lambda_1,\ldots,\lambda_s$ and that there are $r_i$ copies of $\lambda_i$. Assume that $r_i>0$ for all $i$ and $\sum_{i=1}^s r_i = n$.
For $k = 1, \cdots, s$, writing the equations $$ \sum_{i=1}^s r_i \lambda_i^k = 0 $$ gives a matrix similar to the one J.D. mentioned in the comments, except with $r_i$'s instead of 1's. Use the properties of the Vandermonde matrix to get a contradiction to the assumption that there is some non-zero $\lambda$.
|
I will outline how the problem can be approached and state what I think the end result will be for the special case when the shape parameters are integers, but not fill in the details.
First, note that $X-Y$ takes on values in $(-\infty,\infty)$and so $f_{X-Y}(z)$ has support $(-\infty,\infty)$.
Second, from the standard results that the density of the sum of two independent continuous random variables is the convolution of their densities, that is, $$f_{X+Y}(z) = \int_{-\infty}^\infty f_X(x)f_Y(z-x)\,\mathrm dx$$ and that the density of the random variable $-Y$ is $f_{-Y}(\alpha) = f_Y(-\alpha)$, deduce that $$f_{X-Y}(z) = f_{X+(-Y)}(z) = \int_{-\infty}^\infty f_X(x)f_{-Y}(z-x)\,\mathrm dx= \int_{-\infty}^\infty f_X(x)f_Y(x-z)\,\mathrm dx.$$
Third, for non-negative random variables $X$ and $Y$, note that the above expression simplifies to $$f_{X-Y}(z) = \begin{cases}\int_0^\infty f_X(x)f_Y(x-z)\,\mathrm dx, & z < 0,\\\int_{0}^\infty f_X(y+z)f_Y(y)\,\mathrm dy, & z > 0.\end{cases}$$
Finally, using parametrization $\Gamma(s,\lambda)$ to mean a random variable with density $\lambda\frac{(\lambda x)^{s-1}}{\Gamma(s)}\exp(-\lambda x)\mathbf 1_{x>0}(x)$, and with $X \sim \Gamma(s,\lambda)$ and $Y \sim \Gamma(t,\mu)$ random variables, we have for $z > 0$ that $$\begin{align*}f_{X-Y}(z) &= \int_{0}^\infty \lambda\frac{(\lambda (y+z))^{s-1}}{\Gamma(s)}\exp(-\lambda (y+z))\mu\frac{(\mu y)^{t-1}}{\Gamma(t)}\exp(-\mu y)\,\mathrm dy\\&= \exp(-\lambda z) \int_0^\infty p(y,z)\exp(-(\lambda+\mu)y)\,\mathrm dy.\tag{1}\end{align*}$$ Similarly, for $z < 0$, $$\begin{align*}f_{X-Y}(z) &= \int_{0}^\infty \lambda\frac{(\lambda x)^{s-1}}{\Gamma(s)}\exp(-\lambda x)\mu\frac{(\mu (x-z))^{t-1}}{\Gamma(t)}\exp(-\mu (x-z))\,\mathrm dx\\&= \exp(\mu z) \int_0^\infty q(x,z)\exp(-(\lambda+\mu)x)\,\mathrm dx.\tag{2}\end{align*}$$
These integrals are not easy to evaluate but for the special case $s = t$, Gradshteyn and Ryzhik,
Tables of Integrals, Series, and Products, Section 3.383, lists the value of $$\int_0^\infty x^{s-1}(x+\beta)^{s-1}\exp(-\nu x)\,\mathrm dx$$ in terms of polynomial, exponential and Bessel functions of $\beta$, and this can be used to write down explicit expressions for $f_{X-Y}(z)$.
From here on, we assume that $s$ and $t$ are integers so that $p(y,z)$ is a polynomial in $y$ and $z$ of degree $(s+t-2, s-1)$ and $q(x,z)$ is a polynomial in $x$ and $z$ of degree $(s+t-2,t-1)$.
For $z > 0$, the integral $(1)$ is the sum of $s$ Gamma integrals with respect to $y$ with coefficients $1, z, z^2, \ldots, z^{s-1}$. It follows that the density of $X-Y$ is proportional to a
mixture density of $\Gamma(1,\lambda), \Gamma(2,\lambda), \cdots, \Gamma(s,\lambda)$ random variables for $z > 0$. Note that this result will hold even if $t$ is not an integer.
Similarly, for $z < 0$, the density of $X-Y$ is proportional to a
mixture density of $\Gamma(1,\mu), \Gamma(2,\mu), \cdots, \Gamma(t,\mu)$ random variables flipped over, that is, it will have terms such as $(\mu|z|)^{k-1}\exp(\mu z)$ instead of the usual $(\mu z)^{k-1}\exp(-\mu z)$. Also, this result will hold even if $s$ is not an integer.
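For readers who want to sanity-check the outline numerically, here is a small Monte Carlo sketch (my addition, not part of the original answer; the shape and rate values are arbitrary illustrative choices). It compares a crude density estimate of $X-Y$ against a direct numerical evaluation of the convolution integral:

import numpy as np
from scipy import stats, integrate

s, lam = 3, 1.5   # shape and rate of X (illustrative values)
t, mu = 2, 0.8    # shape and rate of Y

rng = np.random.default_rng(0)
samples = rng.gamma(s, 1 / lam, 200_000) - rng.gamma(t, 1 / mu, 200_000)

def f_diff(z):
    # f_{X-Y}(z) = int_0^inf f_X(u + max(z,0)) * f_Y(u - min(z,0)) du
    fX = lambda x: stats.gamma.pdf(x, a=s, scale=1 / lam)
    fY = lambda y: stats.gamma.pdf(y, a=t, scale=1 / mu)
    val, _ = integrate.quad(lambda u: fX(u + max(z, 0.0)) * fY(u - min(z, 0.0)), 0, np.inf)
    return val

for z in (-2.0, -0.5, 0.5, 2.0):
    mc = np.mean(np.abs(samples - z) < 0.05) / 0.1   # crude density estimate near z
    print(z, round(f_diff(z), 4), round(mc, 4))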
|
What is an Operad? Part 1
If you browse through the research of your local algebraist, homotopy theorist, algebraic topologist or―well, anyone whose research involves an
operation of some type, you might come across the word "operad." But what are operads? And what are they good for? Loosely speaking, operads―which come in a wide variety of types―keep track of various "flavors" of operations. Historically, they arose from a quest to understand $k$-fold loop spaces in homotopy theory. I'll say a little more about this next time, when we'll look at some examples and applications. In today's post, I simply want to present the definition of an operad.
To start, what do I mean by "flavors" of operations? Let's think about multiplication on an algebra. An algebra is a vector space $V$ equipped with a bilinear map \begin{align*}m\colon V\times V&\longrightarrow V\\(a,b)&\longmapsto m(a,b)\end{align*}which we can think of as multiplication, so let's write $a\cdot b$ instead of $m(a,b)$. (And if we want a unital algebra, we may ask for an element $1\in V$ so that $1\cdot a=a=a\cdot 1$ for all $a\in V$.) Now, depending on what relations we ask $m$ to satisfy, our algebra $V$ is given different names. For example, if we ask that $$(a \cdot b)\cdot c = a \cdot (b\cdot c)$$ then $V$ is called an associative algebra. If we also ask that $$a\cdot b = b\cdot a$$ then $V$ is a commutative algebra. The real line $\mathbb{R}$ with the usual multiplication is an example of a commutative algebra. On the other hand, if―and let's now think of $m(a,b)$ as a bracket $[a,b]$―we ask that $$[a,b]=-[b,a] \qquad\text{and}\qquad [a,[b,c]]=[[a,b],c]+[b,[a,c]]$$ then $V$ is called a Lie algebra. An example of a Lie algebra is all vectors in $\mathbb{R}^3$ together with the cross product.
Behind each of these algebras is a particular operad that encodes the "flavor" of the operation $m\colon V\times V\to V$. How so? Let's think of the 2-to-1 map $m$ as a binary tree:
More abstractly, let’s call anything with $n$ inputs and 1 output an "$n$-to-1 operation" or an "$n$-ary operation."
There's a way to compose operations. For instance, if $f$ is a 3-ary operation and $g$ is a 4-ary operation, we can combine them to get a 6-ary operation: just use the output of $g$ as one of the inputs of $f$:
Notice I've grafted $g$ on the
second leaf of the tree depicting $f$, so I'm calling that composition "$\circ_2$." Also notice how the leaves are relabeled after the composition. And there are two other possibilities, $f\circ_1g$ and $f\circ_3 g.$
In general, there are $n$ ways to compose an $m$-ary operation with an $n$-ary operation and the result is always an $m+n-1$-ary operation. An
operad is the collection of all such operations, together with the compositions. These compositions should satisfy three very sensible axioms that are easy to understand pictorially, but a mess to write down. So let's look at some pictures! To start, suppose we're given operations $f,g$ and $h$.
Axiom 1: Composition behaves nicely.
Let's say we want to stick $h$ onto $f\circ_2 g$. How many ways can we do this? There are three options: (case 1) we can stick $h$ somewhere to the left of $g$, or (case 2) we can stick $h$ on $g$ itself, or (case 3) we can stick $h$ somewhere to the right of $g$. The picture below is an example of case 1. Grafting $h$ onto the first leaf of $f\circ_2 g$ gives us the first equals sign. But we could have obtained the tree in the middle (the one with 7 leaves) in a
different way: first graft $h$ to the first leaf of $f$, then graft $g$ onto the third leaf of the new composition, $f\circ_1 h$. And that gives us the second equals sign. Conclusion? $(f\circ_2 g)\circ_1h=(f\circ_1 h)\circ_3g$.
Now, what about case 2? Grafting $h$
on $g$ (while $g$ is already attached to $f$) should be the same as first grafting $h$ on $g$ (before $g$ is attached to $f$), and then inserting the new operation $g\circ h$ into $f$. For example, $(f\circ_2 g)\circ_3h = f\circ_2(g\circ_2h).$
Finally, in case 3, inserting $h$ to the
right of $g$ (while $g$ is already attached to $f$) should be the same as first inserting $h$ on $f$, and then attaching $g$. For example, $(f\circ_2 g)\circ_6 h= (f\circ_3h)\circ_2 g.$
Axiom 2: Permutations behave nicely.
If we like, we can permute the inputs of any $n$-ary operation using elements in the symmetric group $S_n$. For example, if $f$ is a 3-ary operation and $\sigma=(123)\in S_3$, then $\sigma f$ is a new 3-ary operation given by
And what if we want to compose this with a 2-ary operation $g$ whose inputs have also been permuted? This axiom requires that first twisting $f$ and $g$ (individually) and then composing is the
same as first composing and then twisting the new tree in an appropriate way. For example, if $\tau=(12)\in S_2$, then we must have $\sigma f \circ_2 \tau g=(\sigma\circ_2 \tau)(f\circ_{2}g)$,
where $\sigma\circ_2\tau$ is the permutation, "do $\sigma$, but replace 2 by $\tau$."
Axiom 3: There's a unit (that behaves nicely).
Lastly, we require the existence of a 1-ary operation, which I'll call 1, that serves as an identity. For instance, $f\circ_3 1=f=1\circ_1 f$.
The Definition
And there you have it! An operad is a collection of 1,2,3,$\ldots$-ary operations that can be composed and permuted à la the sensible axioms above. Writing this down is a horrible mess, but let's do it for fun. To start, let's think of all $n$-ary operations as forming a set denoted by $\mathcal{O}(n)$. The definition is as follows
Definition: An operad is a sequence of sets $\mathcal{O}=\{\mathcal{O}(1),\mathcal{O}(2),\mathcal{O}(3),\ldots\}$ together with composition functions $$\circ_i\colon \mathcal{O}(n)\times \mathcal{O}(m)\to\mathcal{O}(m+n-1)$$ satisfying the following:
1. For all $f\in\mathcal{O}(n)$, $g\in\mathcal{O}(m)$, and $h\in\mathcal{O}(p)$, $$ (f\circ _j g)\circ_i h= \begin{cases} (f\circ_i h)\circ_{j+p-1}g &\text{if $1\leq i \leq j-1$} \\[5pt] f\circ_j(g\circ_{i-j+1}h) &\text{if $j\leq i\leq m+j-1$}\\[5pt] (f\circ_{i-m+1}h)\circ_jg &\text{if $i\geq m+j$} \end{cases} $$
2. Each $\mathcal{O}(n)$ has an action of the symmetric group $S_n$, \begin{align*} S_n\times \mathcal{O}(n)&\longrightarrow \mathcal{O}(n)\\ (\sigma,f) &\longmapsto \sigma f \end{align*} such that for all $f\in\mathcal{O}(n)$ and $g\in \mathcal{O}(m)$, and for all $\sigma\in S_n$ and $\tau\in S_m,$ $$\sigma f\circ_i\tau g=(\sigma\circ_i\tau )(f\circ_{i} g) \qquad 1\leq i \leq n$$ where $\sigma\circ_i\tau$ is the block permutation, "replace the $i$th entry of $\sigma$ with $\tau$."
3. There exists $1\in\mathcal{O}(1)$ (think of it as the "identity operation") so that for every $n$ and for every $f\in\mathcal{O}(n)$, $$1\circ_1 f=f \circ_i 1=f \qquad 1\leq i \leq n.$$
In fact, the $\mathcal{O}(n)$ need not be sets. They can be vector spaces, or topological spaces, or chain complexes, or any kind of object where "$\mathcal{O}(n)\times\mathcal{O}(m)$" makes sense. (In general, an operad can be defined in any
symmetric monoidal category.) In those cases, we ask that the $\circ_i$ and $S_n$ action be the appropriate type of map: linear transformations, or continuous functions, or chain maps, for instance.
Example? Endomorphisms!
Here's the flagship example of an operad. Fix a vector space $V$ and for each $n=1,2,\ldots$, define$$\text{End}_V(n):=\text{hom}(V^n,V)$$to be the vector space of all multilinear maps $V^{n}\to V$. Then $\text{End}=\{\text{End}_V(1), \text{End}_V(2),\ldots \}$ forms an operad called the
endomorphism operad. The composition\begin{align*}\circ_i\colon \text{End}_V(n)\times \text{End}_V(m)&\longrightarrow \text{End}_V(n+m-1)\\(f,g)&\longmapsto f\circ_i g\end{align*}is the map $f\circ_i g\colon V^{n+m-1}\to V$ given by "use the output of $g$ as the $i$th input of $f$." For example, if $n=3$ and $m=2$, then$$(f\circ_2g)(a,b,c,d)=f(a,g(b,c),d).$$And the action of the symmetric group simply permutes the inputs. Example? If $\sigma=(123)\in S_3$, then $$\sigma f (a,b,c)=f(c,a,b).$$
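To make the $\circ_i$ composition of the endomorphism operad concrete (this snippet is my own addition, not part of the post), here is a minimal sketch that treats $n$-ary operations as ordinary Python functions:

def compose(f, n, g, m, i):
    """Return the (n + m - 1)-ary operation f o_i g (i is 1-based)."""
    def h(*args):
        assert len(args) == n + m - 1
        left = args[:i - 1]                   # inputs 1 .. i-1 go straight to f
        middle = g(*args[i - 1:i - 1 + m])    # inputs i .. i+m-1 feed g
        right = args[i - 1 + m:]              # remaining inputs go to f
        return f(*left, middle, *right)
    return h

# Example from the post: (f o_2 g)(a, b, c, d) = f(a, g(b, c), d)
f = lambda x, y, z: ("f", x, y, z)   # a 3-ary operation
g = lambda x, y: ("g", x, y)         # a 2-ary operation
print(compose(f, 3, g, 2, 2)("a", "b", "c", "d"))
# ('f', 'a', ('g', 'b', 'c'), 'd')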
But wait, there's more!
The reason that End$_V$ is so important is because it helps us regard abstract operations as
actual operations. You see, operads aren't very interesting on their own---just like groups aren't very interesting until you look at a representation of the group (or let them act on something). Operads become useful when each abstract operation is realized as a concrete operation, and we do this by assigning each $n$-ary operation $f\in \mathcal{O}(n)$ a map $V^n\to V$. Such an assignment assembles into a collection of maps$$\{\varphi_n\colon \mathcal{O}(n)\to\text{End}_V(n)\},$$and we ask that each $\varphi_n$ be compatible with the $\circ_i$ and $S_n$ action. This assembly is called an algebra over $\mathcal{O}$ or an $\mathcal{O}$-algebra. Next time, for example, we'll chat about an operad called Assoc. An algebra over that operad is precisely an associative algebra! We'll look at other examples, too: topological simplices, the associahedra, little $k$-cubes, and some Riemann surfaces all form operads. And the resulting algebras over them are quite interesting. Stay tuned!
In this series:
|
A method of integration that simplifies the function by rewriting it in terms of a different variable. The goal is to transform the integral into another that is easier to solve. It is the inverse of the chain rule in differentiation.
The new variable used is usually \(u\) by convention, hence this method is also known as 'U-Substitution'.
Examples
\[\int (\sin{({x}^{2})})x \, dx\]
1
Regroup terms.
\[\int x\sin{({x}^{2})} \, dx\]
2
Use Integration by Substitution.
Let \(u={x}^{2}\), \(du=2x \, dx\), then \(x \, dx=\frac{1}{2} \, du\)
3
Using \(u\) and \(du\) above, rewrite \(\int x\sin{({x}^{2})} \, dx\) as \(\int \frac{1}{2}\sin{u} \, du\), which integrates to \(-\frac{1}{2}\cos{u}+C=-\frac{1}{2}\cos{({x}^{2})}+C\).
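For readers who want to verify the antiderivative (a side note, not part of the original lesson), a computer algebra system can check it:

import sympy as sp

x = sp.symbols('x')
print(sp.integrate(x * sp.sin(x**2), x))   # -cos(x**2)/2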
|
What is an Operad? Part 2
Last week we introduced the definition of an
operad: it's a sequence $\mathcal{O}(1),\mathcal{O}(2), \mathcal{O}(3),\ldots$ of sets or vector spaces or topological spaces or most anything you like (whose elements we think of as abstract operations), together with composition maps $\circ_i\colon \mathcal{O}(n)\times\mathcal{O}(m)\to\mathcal{O}(n+m-1)$ and a way to permute the inputs using symmetric groups. We also defined an algebra over an operad, which is a way to realize each abstract operation as an actual operation. Now it's time for some examples!
The Associative and Commutative Operads
Suppose $V$ is a vector space over a field $\mathbb{k}$. For each $n\geq 1$, define Assoc$(n)$ to be the 1-dimensional vector space generated by the tree with $n$ leaves. And let's not worry about permuting the leaves---there's no action of the symmetric group $S_n$ here. (Such operads are called
non-symmetric operads.)
The $\circ_i$ composition is tree grafting, as introduced last time. For example, to describe the compositions $$\circ_i\colon \text{Assoc}(2)\times \text{Assoc}(2)\to \text{Assoc}(3), \qquad i=1,2$$ it's enough to say where 𝖸$\circ_1$𝖸 and 𝖸$\circ_2$𝖸 land. Here, I'm using 𝖸 to depict the 2-to-1 operation that generates $\text{Assoc}(2)$. But there's only one option! Up to a scalar multiple, there's only one 3-to-1 tree in $\text{Assoc}(3)$! In other words, the following trees must be equal
Now, what's an algebra over this operad? As we saw last time, it's a collection of maps $\varphi\colon \text{Assoc}(n)\to\text{End}_V(n)$ for each $n=1,2,\ldots$ that's compatible with the $\circ_i$. Let's define $m:=\varphi(𝖸)\colon V\times V\to V$ to be the image of the 2-to-1 operation 𝖸. Compatibility tells us that the first and third equalities hold:$$\varphi(𝖸)\circ_1\varphi(𝖸)=\varphi(𝖸\circ_1 𝖸)=\varphi(𝖸\circ_2 𝖸)=\varphi(𝖸)\circ_2\varphi(𝖸)$$while the second equality holds from the picture above. This amounts to the statement that$$m(m(v_1,v_2),v_3)=m(v_1,m(v_2,v_3))$$for all $(v_1,v_2,v_3)\in V^3$, or writing $v_1\cdot v_2$ instead of $m(v_1,v_2)$, $$(v_1\cdot v_2)\cdot v_3= v_1\cdot(v_2\cdot v_3).$$This shows that $m$ is an associative product on $V$! In other words, an algebra over the operad Assoc is an
associative algebra.
We didn't consider a symmetric group action, but if we
do, and if we define it to be trivial (i.e. $\sigma f=f$ for all $\sigma\in S_n$ and all $n$-ary operations $f$) then $v_1\cdot v_2= v_2\cdot v_1$ for all $v_1,v_2\in V$ since the two trees on the left must be equal. This operad is called the Comm operad, and an algebra over it is a commutative algebra.
The Associahedra Operad
The associahedra are a sequence of polytopes that encode operations that are associative
up to homotopy. Let's look at an example. Suppose $X$ is a topological space and let $a,b\colon I\to X$ be loops based at a point in $X$. (That is, $a$ and $b$ are continuous functions, both of which send $0,1\in I$ to the same point in $X$.) The product $a\cdot b$ gives us a new loop by "going around $a$ and $b$ each at twice the original speed." We can think of traversing $a$ in the first half-second, then traversing $b$ in the second half. This gives us a 2-to-1 operation $\Omega X\times \Omega X\to \Omega X$ where $\Omega X$ denotes the space of all based loops of $X$.
Is this operation associative? Well, if we have three loops $a,b$ and $c$, there are two options:
Here, $(a\cdot b)\cdot c$ means "do $a$ on the first quarter of the interval, $b$ on the second quarter, and $c$ on the second half," while $a\cdot(b\cdot c)$ means "do $a$ on the first half of the interval, $b$ on the third quarter, and $c$ on the last quarter." These two loops are not
equal, so this "multiplication" is not associative. But we can get from one loop to the other simply by adjusting the speed at which we traverse $a$ (and $c$)! In other words, we can go from $(a\cdot b)\cdot c$ to $a\cdot(b\cdot c)$ continuously by traveling around $a$ a little slower and traveling around $c$ a little faster. This defines a homotopy between the two loops, which we can represent as a line segment, called $K_3$, joining two points.
The vertices represent the two loops $(a\cdot b)\cdot c$ and $a \cdot(b\cdot c)$, and every point in between represents an intermediate loop. For example, the midpoint represents the loop $a\cdot b\cdot c$ in which $a, b$ and $c$ are all traversed in equal time.
Now what happens if you want to take the product of
four loops $a,b,c,d$?! There are five ways to parenthesize four letters, so we have five different vertices. Some of these can be connected by edges using a homotopy, which gives us the boundary of a pentagon. Now it turns out that you can get from $((ab)c)d$ to $a(b(cd))$ via one of two homotopies, depicted as the red and blue paths below. What's more, you can get from any point on the blue path to a point on the red path in a continuum of ways. In short, we get a continuum of paths between the red and blue paths, which sweeps out the face of the pentagon! So the gray region is really a homotopy between homotopies. All the ways you can multiply four loops is captured by this 2-dimensional polytope, which we call $K_4$.
Now the next polytope $K_5$ has one vertex for each of the 14 ways you can parenthesize five letters. There are 21 edges (corresponding to homotopies) and 9 faces (homotopies between homotopies) and 1 solid interior (a homotopy between the homotopies between the homotopies)!
And the list goes on. The polytopes $K_2,K_3,K_4,\ldots$ form a non-symmetric operad (where $K_1=\varnothing$) with composition being the inclusion of faces. Each $K_n$ is $(n-2)$-dimensional and the vertices represent the ways of putting parentheses around $n$ letters.
An algebra over this operad is called an
$A_\infty$ space, first introduced by Jim Stasheff in the early sixties. (Take note of the word "space"! Unlike our previous examples, the $n$-ary operations form a topological space* rather than a vector space!) The "A" stands for "associative" and the infinity reminds us of the infinite string of homotopies between homotopies between homotopies between homotopies between.... And the associahedra are of algebraic, geometric, and combinatorial interest, too! For instance, take a look at this survey by J. L. Loday.
The Little k-Cubes Operad
Closely related to the associahedra is the little $k$-cubes operad, where $k>0$ is a fixed integer. In this example, the set $\mathcal{O}(n)$ of $n$-ary operations forms a topological space---it's the space of all labeled configurations of $n$ $k$-dimensional rectangles within the unit $k$-cube. For example, when $k$=2, here's a picture of a point in $\mathcal{O}(5)$.
So $\mathcal{O}(5)$ is the topological space of all such configurations. That is, if we move the rectangle #4 just a little bit, the new picture we get is a new point in the space $\mathcal{O}(5)$. The $\circ_i$ composition is given by insertion of one picture into the $i$th rectangle of the other, then relabeling. For example
This operad appears** in a seminal paper by topologist Peter May called The Geometry of Iterated Loop Spaces. In short, May answered the question, "Does a topological space have a particular structure if and only if it is (weakly homotopy equivalent to) a $k$-fold loop space?" The answer is
Yes! When $k=1$, the structure is that of an algebra over the associahedra operad.*** Yes! When $k>1$, the structure is that of an algebra over the little $k$-cubes operad.
In other words, an algebra over the little $k$-cubes operad and a $k$-fold loop space are the same in the eyes of a homotopy theorist. So if you're interested in homotopy theory,**** you'll want to get acquainted with the little cubes operad!
The Simplex Operad
Did you know that topological simplices form an operad?
The standard $n-1$-simplex is defined as $$\Delta^{n-1}=\{(p_1,\ldots,p_n)\in \mathbb{R}^{n}:\sum_{i=1}^np_i=1 \text{ and } 0\leq p_i\leq 1\}.$$ And we can think of each point $p=(p_1\ldots,p_n)$ in $\Delta_n:=\Delta^{n-1}$ as a probability distribution on a discrete set $X=\{1,2,\ldots,n\}$. For example, the point $p=(\frac{1}{2},\frac{1}{2})$ in $\Delta_2$ can represent the distribution of a fair coin toss, while $q=(\frac{1}{6},\ldots,\frac{1}{6})$ in $\Delta_6$ might represent the distribution of rolling a six-sided die.
What's the composition $\circ_i\colon \Delta_n\times\Delta_m\to\Delta_{n+m-1}$? As an example, suppose $m=6$ and $n=2$ with $p$ and $q$ given as above. To compute $p\circ_2 q$, first multiply each of the entries of $q$ by $\frac{1}{2}$, then stick the result in the second entry of $p$.
Notice that the entries on the right-hand side add up to 1! So we get a bona fide point in $\Delta_{7}$. More generally, I like to think of $p$ as an $n$-ary tree whose leaves are labeled by the $p_i$. Then $p\circ_i q$ is obtained by "painting" the leaves of $q$ with "$p_i$" and then grafting the result onto the $i$th leaf of $p$. For example, the above composition can be pictured as
Convex subsets of $\mathbb{R}^n$ are one example of an algebra over this operad, and this plays a
very cool role in information theory. In a wonderful 2011 paper, John Baez, Tobias Fritz, and Tom Leinster used the simplex operad to provide a categorical/topological characterization of Shannon entropy. Baez has a nice summary of their work in this blog post, and Leinster outlined their use of the simplex operad in a recent talk at CIRM.
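As a tiny illustration of the composition rule $p\circ_i q$ described above (my own sketch, not from the post), here it is on plain Python lists of probabilities:

def compose(p, q, i):
    """Simplex-operad composition: scale q by p_i and splice it into position i of p (1-based)."""
    scaled = [p[i - 1] * qj for qj in q]
    return list(p[:i - 1]) + scaled + list(p[i:])

p = [0.5, 0.5]            # a fair coin toss, a point of Delta_2
q = [1 / 6] * 6           # a fair die, a point of Delta_6
r = compose(p, q, 2)      # a point of Delta_7
print(r, sum(r))          # the entries still sum to 1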
Other Examples
We've only looked at a few examples of operads, but there are
tons more! There are cyclic operads (think: Frobenius algebras), modular operads (think: moduli spaces), cacti operads (think: string topology), a phylogenetic operad (think: biology), and even a swiss cheese operad. And hey, why stop at operations with only one output? If we consider $n$-to-$m$ operations, we get something called a properad. For example, Riemann surfaces of genus $g$ with $n$ holes for inputs and $m$ holes for outputs form a properad. And an algebra over this properad is a conformal field theory. And we might even consider the disjoint union of such $n$-to-$m$ operations--called a PROP. An algebra over that gadget is a topological quantum field theory. The list goes on!
Interested in reading more? Here are a few places to start:
What is... An Operad? by Jim Stasheff (part of the AMS Notices' excellent "What is...?" series) Homotopy + Algebra = Operad by Bruno Vallette (p. 37ff contains a long list of examples/applications from algebra, deformation theory, quantum algebra, noncommutative geometry, algebraic topology, differential geometry, algebraic geometry, mathematical physics, and computer science.) Algebraic Operads by Bruno Vallette and Jean-Louis Loday Operads in Algebra, Topology, and Physics by Martin Markl, Steve Shnider, and Jim Stasheff Koszul Duality for Operads by Victor Ginzburg and Mikhail Kapranov
Enjoy!
*But there
is an algebraic analogue! We can view each $K_n$ as a CW complex and consider the cellular chain complex of each. These chain complexes assemble into a new operad which is algebraic in nature---each collection of $n$-ary operations forms a differential graded algebra. This operad is called the $A_\infty$ operad and an algebra over it is an $A_\infty$-algebra. For more on $A_\infty$-algebras, check out Homotopy + Algebra = Operad by Bruno Vallette and Introduction to $A$-infinity Algebras and Modules by Bernhard Keller.
**(Added 12/10/17) Here's some history: May
coined the word "operad" in his 1972 Iterated Loop Spaces paper, but the concept originated with Joachim Lambek in his 1969 paper Deductive Systems and Categories II. Lambek used the term 'multicategory' which is a generalization of an operad. Also, the little $n$-cubes first appeared in John Michael Boardman's and Rainer Vogt's 1968 Homotopy-everything $H$-spaces, which is cited by May. They also prove the recognition principle (which is the formal name for what I call "May's question" above), although their proof is a bit different than May's. Sincerest thanks to Prof. Donald Yau for pointing out these historical remarks!
***There's a sense in which the associahedra operad and the little $1$-cubes operad (a.k.a the little intervals operad) are the same.
****Already doing homotopy-things? Be sure to say hi to the folks over at MathOverflow's homotopy chat room!
In this series:
|
I have been given the following definition: $\rule{17cm}{0.4pt}$
Let $\{a_n\}$ be a sequence in $\mathbb{R}$. The series: $$\sum_{n=0}^\infty a_n$$ is $\textbf{convergent}$ if the sequence $\{s_m\}$ of the $\textbf{partial sums}$ $$s_m=\sum_{n=0}^m a_n$$ converges, that is for all $\varepsilon >0$ there exists $N=N(\varepsilon)\in \mathbb{N},$ such that $$\left| s_m-s_k\right| = \left| \sum_{n=k+1}^m a_n \right| <\varepsilon$$ for all $m>k\geq N$.
$\{a_n\}$ $\textbf{converges absolutely}$ if the series: $$\sum_{n=0}^\infty |a_n|$$ converges. $\rule{17cm}{0.4pt}$
I was a bit confused about what is meant by partial sums, and why, if the sequence of partial sums converges, we say that the series $\sum a_n$ converges.
I am thinking that for any finite $m$, $s_m$ would be a subsequence of $a_n$, and I think I am right in saying that if a subsequence is convergent then the sequence must also be convergent. This is a complete guess though. If someone could let me know whether I am on the right track in understanding this, that'd be great. Thanks.
|
I have the following question which has given rise to a doubt.
One end of a rod of uniform density is attached to the ceiling in such a way that the rod can swing about freely with no resistance. The other end of the rod is held still so that it touches the ceiling as well. Then the second end is released. If the length of the rod is $l$ metres and gravitational acceleration is $g$ metres per second squared, how fast is the unattached end of the rod moving when the rod is first vertical?
What I did first was equate gravitational potential of the center of mass to the rotational kinetic energy: $$mg\frac{l}{2}=\frac{1}{2}I\omega^2$$ $$\omega^2=\frac{mgl}{I}$$ Calculating the moment of inertia: $$I=\frac{ml^2}{3}$$ Utilising it in the $\omega^2$ equation: $$\omega=\frac{1}{l}\sqrt{3gl}$$ Thus the tangential velocity is: $$v_t=\sqrt{3gl}$$
However, upon completion I thought: if I consider the whole system to be a pendulum of mass $m$ and length $\frac{l}{2}$ (only considering the center of mass), couldn't I simply equate gravitational potential with kinetic energy and end up with the following: $$mg\frac{l}{2}=\frac{1}{2}mv^2$$ $$mg\frac{l}{2}=\frac{1}{2}m(\frac{\omega l}{2})^2$$ Simplifying to: $$\omega=\frac{1}{l}\sqrt{4gl}$$ And the new tangential velocity is: $$v_t=\sqrt{4gl}$$
These answers are different and I am not too sure which analysis is correct. Thanks for any help.
|
When you fit a generalized linear model (GLM) in R and call
confint on the model object, you get confidence intervals for the model coefficients. But you also get an interesting message:
Waiting for profiling to be done...
What's that all about? What exactly is being profiled? Put simply, it's telling you that it's calculating a
profile likelihood ratio confidence interval.
The typical way to calculate a 95% confidence interval is to multiply the standard error of an estimate by some normal quantile such as 1.96 and add/subtract that product to/from the estimate to get an interval. In the context of GLMs, we sometimes call that a Wald confidence interval.
Another way to determine an upper and lower bound of plausible values for a model coefficient is to find the minimum and maximum value of the set of all coefficients that satisfy the following:
\[-2\log\left(\frac{L(\beta_{0}, \beta_{1}|y_{1},…,y_{n})}{L(\hat{\beta_{0}}, \hat{\beta_{1}}|y_{1},…,y_{n})}\right) < \chi_{1,1-\alpha}^{2}\]
Inside the parentheses is a ratio of
likelihoods. In the denominator is the likelihood of the model we fit. In the numerator is the likelihood of the same model but with different coefficients. (More on that in a moment.) We take the log of the ratio and multiply by -2. This gives us a likelihood ratio test (LRT) statistic. This statistic is typically used to test whether a coefficient is equal to some value, such as 0, with the null likelihood in the numerator (model without coefficient, that is, equal to 0) and the alternative or estimated likelihood in the denominator (model with coefficient). If the LRT statistic is less than \(\chi_{1,0.95}^{2} \approx 3.84\), we fail to reject the null. The coefficient is statistically not much different from 0. That means the likelihood ratio is close to 1. The likelihood of the model without the coefficient is almost as high as that of the model with it. On the other hand, if the ratio is small, that means the likelihood of the model without the coefficient is much smaller than the likelihood of the model with the coefficient. This leads to a larger LRT statistic since it's being log transformed, which leads to a value larger than 3.84 and thus rejection of the null.
Now in the formula above, we are seeking all such coefficients in the numerator that would make it a true statement. You might say we're “profiling” many different null values and their respective LRT test statistics.
Do they fit the profile of a plausible coefficient value in our model? The smallest value we can get without violating the condition becomes our lower bound, and likewise with the largest value. When we're done we'll have a range of plausible values for our model coefficient that gives us some indication of the uncertainty of our estimate.
Let's load some data and fit a binomial GLM to illustrate these concepts. The following R code comes from the help page for
confint.glm. This is an example from the classic Modern Applied Statistics with S.
ldose is a dosing level and
sex is self-explanatory.
SF is number of successes and failures, where success is number of dead worms. We're interested in learning about the effects of dosing level and sex on number of worms killed. Presumably this worm is a pest of some sort.
# example from Venables and Ripley (2002, pp. 190-2.)
ldose <- rep(0:5, 2)
numdead <- c(1, 4, 9, 13, 18, 20, 0, 2, 6, 10, 12, 16)
sex <- factor(rep(c("M", "F"), c(6, 6)))
SF <- cbind(numdead, numalive = 20-numdead)
budworm.lg <- glm(SF ~ sex + ldose, family = binomial)
summary(budworm.lg)
## 
## Call:
## glm(formula = SF ~ sex + ldose, family = binomial)
## 
## Deviance Residuals: 
##      Min        1Q    Median        3Q       Max  
## -1.10540  -0.65343  -0.02225   0.48471   1.42944  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept)  -3.4732     0.4685  -7.413 1.23e-13 ***
## sexM          1.1007     0.3558   3.093  0.00198 ** 
## ldose         1.0642     0.1311   8.119 4.70e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 124.8756  on 11  degrees of freedom
## Residual deviance:   6.7571  on  9  degrees of freedom
## AIC: 42.867
## 
## Number of Fisher Scoring iterations: 4
The coefficient for
ldose looks significant. Let's determine a confidence interval for the coefficient using the
confint function. We call
confint on our model object,
budworm.lg and use the
parm argument to specify that we only want to do it for
ldose:
confint(budworm.lg, parm = "ldose")
## Waiting for profiling to be done...
##     2.5 %    97.5 % 
## 0.8228708 1.3390581
We get our “waiting” message though there really was no wait. If we fit a larger model and request multiple confidence intervals, then there might actually be a waiting period of a few seconds. The lower bound is about 0.82 and the upper bound about 1.34. We might say every increase in dosing level increases the log odds of killing worms by at least 0.8. We could also exponentiate to get a CI for an odds ratio estimate:
exp(confint(budworm.lg, parm = "ldose"))
## Waiting for profiling to be done...
##    2.5 %   97.5 % 
## 2.277027 3.815448
The odds of “success” (killing worms) are at least 2.3 times higher at one dosing level versus the next lower dosing level.
To better understand the profile likelihood ratio confidence interval, let's do it “manually”. Recall the denominator in the formula above was the likelihood of our fitted model. We can extract that with the
logLik function:
den <- logLik(budworm.lg)
den
## 'log Lik.' -18.43373 (df=3)
The numerator was the likelihood of a model with a
different coefficient. Here's the likelihood of a model with a coefficient of 1.05:
num <- logLik(glm(SF ~ sex + offset(1.05*ldose), family = binomial))
num
## 'log Lik.' -18.43965 (df=2)
Notice we used the
offset function. That allows us to fix the coefficient to 1.05 and not have it estimated.
Since we already extracted the
log likelihoods, we need to subtract them. Remember this rule from algebra?
\[\log\frac{M}{N} = \log M - \log N\]
So we subtract the denominator from the numerator, multiply by -2, and check if it's less than 3.84, which we calculate with
qchisq(p = 0.95, df = 1)
-2*(num - den)
## 'log Lik.' 0.01184421 (df=2)
-2*(num - den) < qchisq(p = 0.95, df = 1)
## [1] TRUE
It is. 1.05 seems like a plausible value for the
ldose coefficient. That makes sense since the estimated value was 1.0642. Let's try it with a larger value, like 1.5:
num <- logLik(glm(SF ~ sex + offset(1.5*ldose), family = binomial))
-2*(num - den) < qchisq(p = 0.95, df = 1)
## [1] FALSE
FALSE. 1.5 seems too big to be a plausible value for the
ldose coefficient.
Now that we have the general idea, we can program a
while loop to check different values until we exceed our threshold of 3.84.
cf <- budworm.lg$coefficients[3] # fitted coefficient 1.0642
cut <- qchisq(p = 0.95, df = 1) # about 3.84
e <- 0.001 # increment to add to coefficient
LR <- 0 # to kick start our while loop
while(LR < cut){
cf <- cf + e
num <- logLik(glm(SF ~ sex + offset(cf*ldose), family = binomial))
LR <- -2*(num - den)
}
(upper <- cf)
##    ldose 
## 1.339214
To begin we save the original coefficient to
cf, store the cutoff value to
cut, define our increment of 0.001 as
e, and set
LR to an initial value of 0. In the loop we increment our coefficient estimate which is used in the
offset function in the estimation step. There we extract the log likelihood and then calculate
LR. If
LR is less than
cut (3.84), the loop starts again with a new coefficient that is 0.001 higher. We see that our upper bound of 1.339214 is very close to what we got above using
confint (1.3390581). If we set
e to smaller values we'll get closer.
We can find the LR profile lower bound in a similar way. Instead of adding the increment we subtract it:
cf <- budworm.lg$coefficients[3] # reset cf
LR <- 0 # reset LR
while(LR < cut){
cf <- cf - e
num <- logLik(glm(SF ~ sex + offset(cf*ldose), family = binomial))
LR <- -2*(num - den)
}
(lower <- cf)
##    ldose 
## 0.822214
The result, 0.822214, is very close to the lower bound we got from
confint (0.8228708).
This is a
very basic implementation of calculating a likelihood ratio confidence interval. It is only meant to give a general sense of what's happening when you see that message
Waiting for profiling to be done.... I hope you found it helpful. To see how R does it, enter
getAnywhere(profile.glm) in the console and inspect the code. It's not for the faint of heart.
I have to mention the book Analysis of Categorical Data with R, from which I gained a better understanding of the material in this post. The authors have kindly shared their R code at the following web site if you want to have a look: http://www.chrisbilder.com/categorical/
To see how they “manually” calculate likelihood ratio confidence intervals, go to the following R script and see the section “Examples of how to find profile likelihood ratio intervals without confint()”: http://www.chrisbilder.com/categorical/Chapter2/Placekick.R
|
In calculus, students are often asked to find the “derivative” of a function. You can think of this graphically: the derivative of a function is the slope of the tangent line to the function at the given point. Still seems confusing? Let's take an example: You should know that the slope of any horizontal line is 0. Therefore, the slope of y=c where c is any constant (any real numbers such as 3, 7, e, 2.5, etc.) is 0. Correspondingly, the derivative of any constant is 0. Let's take another example, the slope of a line — such as
\(y=2x\)
— is the leading coefficient, such that
\(y=2x\)
has a slope of
\(2\)
. Correspondingly, the derivative of
\(2x\)
is
\(2\)
. Yet what happens when functions get increasingly complicated? How can we determine the derivative? Thankfully, specific rules have been derived that govern the relationship between functions and their derivatives. As a result, for example, we are able to conclude that the slope of the tangent line of
\({x}^{2}\)
is always
\(2x\)
, and for
\(\sin{x}\)
, it is always
\(\cos{x}\)
, and so on. Now, let us look at an important rule in differential calculus: the chain rule.
Applying the Chain Rule for Derivatives
The rule states that
\(\frac{d}{dx}f(g(x))=f'(g(x))g'(x)\)
This allows us to correctly compute the derivatives of functions such as
\(\frac{d}{dx} \sin{({x}^{2})}\)
. First, we need to break the equation into components and assign them to f and g correctly. Let
\(f(g(x))=\sin{(g(x))}\)
and
\(g(x)={x}^{2}\)
. The above was the most important step. Now, we just need to find
\(f'(g(x))g'(x)\)
. From other rules that we will not go into in detail here, we know that the derivative of \(\sin{u}\) is \(\cos{u}\) and the derivative of \({x}^{2}\) is \(2x\), so \(f'(g(x))g'(x)=\cos{({x}^{2})}\cdot 2x=2x\cos{({x}^{2})}\).
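If you would like to double-check this kind of result (a side note, not part of the original lesson), a computer algebra system such as SymPy confirms it:

import sympy as sp

x = sp.symbols('x')
print(sp.diff(sp.sin(x**2), x))   # 2*x*cos(x**2)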
Ready to give it a try? Tackle some of our practice calculus problems at the top of this page using the derivative chain rule, and see if you can find the answer. If you run into trouble, check out the step-by-step solution to see how the chain rule, power rule and constant factor rule can all be used together to find the derivative.
What's Next
Need more derivative chain rules examples? Try the Cymath homework helper app for iOS and Android and get access on the go! At Cymath, we know that calculus and derivative computations can seem difficult for some, and it’s our mandate to help students boost their understanding. If you find Cymath helpful, you can also subscribe to Cymath Plus, which offers ad-free and more in-depth help, from pre-algebra to calculus.
|
The condition is mostly referred to as the No-Ponzi (-scheme) [NP] condition. It is
one additional constraint that prevents Ponzi schemes: paying debt with new, higher debt, ad infinitum.
By the way: The NP condition is
one condition, hence the associated multiplier should be $\psi$ instead of $\psi_t$. While certainly nothing is lost by repeating the same condition over and over again (for any $t$), we don't need it more than once, and repeating it is imprecise.
Think about optimization for finite $T$ periods. Then, you have the condition that $B_T \geq 0$. The Lagrangian optimization gives you the local optimization between $0, 1, 2$... There are many solutions that are locally optimal, but you will only allow solutions that in the end lead to $B_T \geq 0$.
A simple example
Your example is much too messy to think about these core issues. Look instead at the problem
$$ \max_{\{c_t, a_{t+1}\}_t} \sum_t \left[\beta^t U(c_t) + \lambda_t (Ra_t - a_{t+1} - c_t)\right]$$
That is, a household that choses assets $a$ and consumption $c$ to maximize his utility. You can summarize the FOC as
$$ \beta^t U'(c_t) = \lambda_t \\\lambda_t = R\lambda_{t+1}\\\Leftrightarrow U'(c_t) = \beta R U'(c_{t+1})$$
Look for a moment at the special case where $\beta R = 1$ (what does that imply?). With most preferences, this necessarily leads to $c_t = c_{t+1}$. This is the local optimization that I was referring to, which is what the Lagrangian gives you. There are, however, infinitely many solutions that satisfy $c_t = c_{t+1}$. Next, we try to use the budget constraint:
$$ a_{t+1} + c_t = R a_t\\\Leftrightarrow R a_0 = \lim_{T\to\infty}\sum_{t=0}^T \frac{c_t}{R^t} + \frac{a_{T+1}}{R^T}$$
This is as far we get using the (infinite) set of local budget constraints, where I have used forward iteration (hopefully correctly), assuming any start date $t=0$.
Now,
if the household also has to satisfy the NP condition, this boils down to
$$R a_0 = \lim_{T\to\infty}\sum_{t=0}^T \frac{c_t}{R^t}$$
which, as we showed $c_t$ to be constant, we can solve easily and receive a single budget constraint. The
unique solution to the problem that satisfies the NP condition is the solution where $c_t$ is a constant and this last equation holds.
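To spell out that last step (an added remark, under the assumptions $\beta R = 1$ and $R>1$): with $c_t \equiv c$, the geometric series gives
$$R a_0 = \sum_{t=0}^{\infty}\frac{c}{R^t} = \frac{cR}{R-1} \quad\Longrightarrow\quad c = (R-1)\,a_0,$$
so the household consumes exactly the interest on its initial assets in every period, and its asset holdings stay constant at $a_0$.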
|
I need to program in an algorithm that recursively makes algebraic replacements which leads to an utterly complicated algebraic function of $x$, but whose final result is only needed at fixed order in the Laurent series in $x$. I wish to use this fact to make the algorithm go faster. Here is sample problem that mimics the realistic problem:
\begin{align}f_n(x) &= \sum_{j=1}^4\frac{1}{6+j x}n\left(\frac{1}{x}+j \,f_{n-1}(x)\right); \qquad f_0(x) = {\tt f[0]}\\ F(x) &= \sum_{n=1}^6\frac{1}{2+n x}\left(\frac{n}{x}+f(n)\right) \end{align}
The algorithm is to repeatedly insert the first equation into the second one until all instances of $f_n(x)$ turn into $f_0={\tt f[0]}$. All that is needed in the end is the Laurent expansion out to $\mathcal{O}(x^0)$ (just need the $1/x$ and constant terms).
nota bene: The sums are there only to lengthen the expression to make Mathematica work harder.
Here is a naive implementation (with the result on my machine):
(*Implementation 1*)
f[n_] /; n > 0 -> Sum[1/(6 + j x) n (1/x + j f[n - 1]), {j, 1, 4}];
Sum[1/(2 + n x) (n/x + f[n]), {n, 1, 6}] //. %;
Timing[Series[%, {x, 0, 0}]]
Performing the actual recursion in the second line is lightning fast. But the final
Series step is painfully slow (4.109 seconds), since the $x$ in the denominators forces
Mathematica to appeal to its analytical routines to expand things like $\frac{1}{1+x}\approx 1 -x$.
Since all that I need is the first two terms of the Laurent series, I went ahead and, by hand, expanded the fractions $\frac{1}{6+jx}\approx(\frac{1}{6}-\frac{j x}{36})$ and $\frac{1}{2+nx}\approx(\frac{1}{2}-\frac{n x}{4})$, with the intention of partially freeing
Mathematica of painful analytics.
(*Implementation 2*)
f[n_] /; n > 0 -> Sum[(1/6 - (j x)/36) n (1/x + j f[n - 1]), {j, 1, 4}];
Sum[(1/2 - (n x)/4) (n/x + f[n]), {n, 1, 6}] //. %;
Timing[Series[%, {x, 0, 0}]]
This is 8 times faster; the more I save
Mathematica from doing analytic manipulations, the faster it goes. But I need it to go faster yet. How can I carry out the algorithm so as to minimize the extent to which Mathematica has to do computationally costly analytic work?
Here's my failed attempt: ask
Mathematica to drop higher-order terms at each stage in the recursion by adding an
O[x] to the end of the line:
(*Implementation 3 [FAILURE]*)
f[n_] /; n > 0 -> Sum[(1/6 - (j x)/36) n (1/x + j f[n - 1]), {j, 1, 4}];
Timing[Sum[(1/2 - (n x)/4) (n/x + f[n]), {n, 1, 6}] + O[x] //. %]
Very fast, but the wrong answer. It gets the wrong answer because it drops terms like
x*f[n] as it thinks
f[n] is order
O[x]^0 when in reality there is a
1/x part. What can I do to speed this up?
|
Consider the following graph $G=(V,E)$ where $V=\mathbb{R}^2$ and $E = \{\{x,y\}: x,y \in \mathbb{R}^2 \text{ and } |x-y|\in \mathbb{Q}\}$.
What is $\chi(G)$?
(This is a variant of the Hadwiger-Nelson problem.)
By considering all the rational numbers on the $x$-axis we can see that we need at least countably many colors. This is also sufficient, that is the chromatic number of the rational-distances graph is countable. This is due to Erdos and Hajnal in the case of $\mathbb R^2$. They show that the rational-distances graph in the plane does not contain a copy of the complete bipartite graph $K(2,\omega_1)$, and that any such graph must have countable chromatic number.
P. Erdos and A. Hajnal, "On chromatic number of graphs and set systems", Acta Math. Hungar. 17(1966), 61-99.
The result was generalized to rational distances graphs in $\mathbb R^n$ by Peter Komjath. However, the previous method doesn't generalize since now the graph contains even a copy of $K(\omega, 2^\omega)$. Instead, Komjath uses a clever transfinite induction argument.
P. Komjath, "A decomposition theorem for $\mathbb R^n$" Proc. Amer. Math. Society (1994): 921-927.
|
In quantum mechanics, if the wavefunction is normalizable, then it would represent a particle. Why does it not represent a particle when it is not normalizable?
By definition, the probability to detect a particle with normalized wavefunction $\psi(x)$ in an interval $[x_1,x_2]$ is$$ P(x_1,x_2) = \int_{x_1}^{x_2}\lvert\psi(x)\rvert^2\mathrm{d}x.$$If the wavefunction is not normalized, but normalizable, i.e. the integral $C := \int_{-\infty}^\infty\lvert\psi(x)\rvert^2\mathrm{d}x$ is finite, then we can still define this probability as$$ P(x_1,x_2) = \frac{1}{C}\int_{x_1}^{x_2}\lvert\psi(x)\rvert^2\mathrm{d}x.$$This is essentially just one of the basic postulates of quantum mechanics - states are not
vectors in Hilbert space, but rays (or elements of the projective Hilbert space), and it does not matter whether you choose a normalized or an unnormalized representant of a ray to compute physical quantities.
However, an
unnormalizable wavefunction is not a member of any ray - it does not lie in the Hilbert space, usually $L^2(\mathbb{R})$, on which quantum mechanics takes place, because the elements of $L^2(\mathbb{R})$ by definition have finite integrals, i.e. are normalizable. In particular, there is no way to give a prescription how to compute $P(x_1,x_2)$ from it. Therefore, an unnormalizable wavefunction is not a state in the sense of quantum mechanics, it does not represent a physically meaningful or accessible state (although it may be an idealization of one, like the states $\lvert x\rangle$).
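As a small numerical illustration (my addition, with a hypothetical wavefunction, not part of the original answer): take the normalizable but unnormalized $\psi(x)=e^{-x^2}$, compute $C$, and then $P(x_1,x_2)$ by dividing by $C$:

import numpy as np
from scipy.integrate import quad

psi = lambda x: np.exp(-x**2)                              # unnormalized but normalizable
C, _ = quad(lambda x: abs(psi(x))**2, -np.inf, np.inf)     # equals sqrt(pi/2)
P, _ = quad(lambda x: abs(psi(x))**2, 0.0, 1.0)
print(P / C)   # probability of detecting the particle in [0, 1]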
|
Reinforcement Learning (DQN) Tutorial
Author: Adam Paszke
This tutorial shows how to use PyTorch to train a Deep Q Learning (DQN) agent on the CartPole-v0 task from the OpenAI Gym.
Task
The agent has to decide between two actions - moving the cart left or right - so that the pole attached to it stays upright. You can find an official leaderboard with various algorithms and visualizations at the Gym website.
As the agent observes the current state of the environment and chooses an action, the environment
transitions to a new state, and also returns a reward that indicates the consequences of the action. In this task, rewards are +1 for every incremental timestep and the environment terminates if the pole falls over too far or the cart moves more than 2.4 units away from center. This means better performing scenarios will run for a longer duration, accumulating larger return.
The CartPole task is designed so that the inputs to the agent are 4 real values representing the environment state (position, velocity, etc.). However, neural networks can solve the task purely by looking at the scene, so we’ll use a patch of the screen centered on the cart as an input. Because of this, our results aren’t directly comparable to the ones from the official leaderboard - our task is much harder. Unfortunately this does slow down the training, because we have to render all the frames.
Strictly speaking, we will present the state as the difference between the current screen patch and the previous one. This will allow the agent to take the velocity of the pole into account from one image.
Packages
First, let’s import needed packages. Firstly, we need gym for the environment (Install using pip install gym). We’ll also use the following from PyTorch:
neural networks (
torch.nn)
optimization (
torch.optim)
automatic differentiation (
torch.autograd)
utilities for vision tasks (
torchvision- a separate package).
import gym
import math
import random
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from collections import namedtuple
from itertools import count
from PIL import Image

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision.transforms as T

env = gym.make('CartPole-v0').unwrapped

# set up matplotlib
is_ipython = 'inline' in matplotlib.get_backend()
if is_ipython:
    from IPython import display

plt.ion()

# if gpu is to be used
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
Replay Memory
We’ll be using experience replay memory for training our DQN. It stores the transitions that the agent observes, allowing us to reuse this data later. By sampling from it randomly, the transitions that build up a batch are decorrelated. It has been shown that this greatly stabilizes and improves the DQN training procedure.
For this, we’re going to need two classes:
Transition - a named tuple representing a single transition in our environment. It essentially maps (state, action) pairs to their (next_state, reward) result, with the state being the screen difference image as described later on.
ReplayMemory - a cyclic buffer of bounded size that holds the transitions observed recently. It also implements a
.sample() method for selecting a random batch of transitions for training.
Transition = namedtuple('Transition',
                        ('state', 'action', 'next_state', 'reward'))


class ReplayMemory(object):

    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []
        self.position = 0

    def push(self, *args):
        """Saves a transition."""
        if len(self.memory) < self.capacity:
            self.memory.append(None)
        self.memory[self.position] = Transition(*args)
        self.position = (self.position + 1) % self.capacity

    def sample(self, batch_size):
        return random.sample(self.memory, batch_size)

    def __len__(self):
        return len(self.memory)
Now, let’s define our model. But first, let quickly recap what a DQN is.
DQN algorithm
Our environment is deterministic, so all equations presented here are also formulated deterministically for the sake of simplicity. In the reinforcement learning literature, they would also contain expectations over stochastic transitions in the environment.
Our aim will be to train a policy that tries to maximize the discounted, cumulative reward \(R_{t_0} = \sum_{t=t_0}^{\infty} \gamma^{t - t_0} r_t\), where \(R_{t_0}\) is also known as the
return. The discount, \(\gamma\), should be a constant between \(0\) and \(1\) that ensures the sum converges. It makes rewards from the uncertain far future less important for our agent than the ones in the near future that it can be fairly confident about.
The main idea behind Q-learning is that if we had a function \(Q^*: State \times Action \rightarrow \mathbb{R}\), that could tell us what our return would be, if we were to take an action in a given state, then we could easily construct a policy that maximizes our rewards: \[\pi^*(s) = \operatorname{arg\,max}_a \ Q^*(s, a)\]
However, we don’t know everything about the world, so we don’t have access to \(Q^*\). But, since neural networks are universal function approximators, we can simply create one and train it to resemble \(Q^*\).
For our training update rule, we’ll use the fact that every \(Q\) function for some policy obeys the Bellman equation: \[Q^{\pi}(s, a) = r + \gamma Q^{\pi}(s', \pi(s'))\]
The difference between the two sides of the equality is known as the temporal difference error, \(\delta\): \[\delta = Q(s, a) - (r + \gamma \max_a Q(s', a))\]
To minimise this error, we will use the Huber loss. The Huber loss acts like the mean squared error when the error is small, but like the mean absolute error when the error is large - this makes it more robust to outliers when the estimates of \(Q\) are very noisy. We calculate this over a batch of transitions, \(B\), sampled from the replay memory: \[\mathcal{L} = \frac{1}{|B|}\sum_{(s, a, s', r) \in B} \mathcal{L}(\delta), \qquad \text{where} \quad \mathcal{L}(\delta) = \begin{cases} \frac{1}{2}\delta^2 & \text{for } |\delta| \le 1, \\ |\delta| - \frac{1}{2} & \text{otherwise.} \end{cases}\]
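As a quick illustration (this snippet is an addition and not the tutorial's actual optimization step, which comes later), here is how the TD target and the Huber loss look for a single made-up transition:

import torch
import torch.nn.functional as F

gamma = 0.999
q_sa = torch.tensor([1.2])          # Q(s, a) from the policy network (made-up value)
reward = torch.tensor([1.0])
next_q_max = torch.tensor([1.1])    # max_a Q(s', a) from the target network (made-up value)

td_target = reward + gamma * next_q_max
loss = F.smooth_l1_loss(q_sa, td_target)   # Huber loss of delta = Q(s, a) - target
print(loss)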
Q-network
Our model will be a convolutional neural network that takes in the difference between the current and previous screen patches. It has two outputs, representing \(Q(s, \mathrm{left})\) and \(Q(s, \mathrm{right})\) (where \(s\) is the input to the network). In effect, the network is trying to predict the
expected return of taking each action given the current input.
class DQN(nn.Module):

    def __init__(self, h, w, outputs):
        super(DQN, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=5, stride=2)
        self.bn1 = nn.BatchNorm2d(16)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2)
        self.bn2 = nn.BatchNorm2d(32)
        self.conv3 = nn.Conv2d(32, 32, kernel_size=5, stride=2)
        self.bn3 = nn.BatchNorm2d(32)

        # Number of Linear input connections depends on output of conv2d layers
        # and therefore the input image size, so compute it.
        def conv2d_size_out(size, kernel_size=5, stride=2):
            return (size - (kernel_size - 1) - 1) // stride + 1
        convw = conv2d_size_out(conv2d_size_out(conv2d_size_out(w)))
        convh = conv2d_size_out(conv2d_size_out(conv2d_size_out(h)))
        linear_input_size = convw * convh * 32
        self.head = nn.Linear(linear_input_size, outputs)

    # Called with either one element to determine next action, or a batch
    # during optimization. Returns tensor([[left0exp,right0exp]...]).
    def forward(self, x):
        x = F.relu(self.bn1(self.conv1(x)))
        x = F.relu(self.bn2(self.conv2(x)))
        x = F.relu(self.bn3(self.conv3(x)))
        return self.head(x.view(x.size(0), -1))
Input extraction
The code below provides utilities for extracting and processing rendered images from the environment. It uses the
torchvision package, which makes it easy to compose image transforms. Once you run the cell it will display an example patch that it extracted.
resize = T.Compose([T.ToPILImage(),
                    T.Resize(40, interpolation=Image.CUBIC),
                    T.ToTensor()])


def get_cart_location(screen_width):
    world_width = env.x_threshold * 2
    scale = screen_width / world_width
    return int(env.state[0] * scale + screen_width / 2.0)  # MIDDLE OF CART


def get_screen():
    # Returned screen requested by gym is 400x600x3, but is sometimes larger
    # such as 800x1200x3. Transpose it into torch order (CHW).
    screen = env.render(mode='rgb_array').transpose((2, 0, 1))
    # Cart is in the lower half, so strip off the top and bottom of the screen
    _, screen_height, screen_width = screen.shape
    screen = screen[:, int(screen_height*0.4):int(screen_height * 0.8)]
    view_width = int(screen_width * 0.6)
    cart_location = get_cart_location(screen_width)
    if cart_location < view_width // 2:
        slice_range = slice(view_width)
    elif cart_location > (screen_width - view_width // 2):
        slice_range = slice(-view_width, None)
    else:
        slice_range = slice(cart_location - view_width // 2,
                            cart_location + view_width // 2)
    # Strip off the edges, so that we have a square image centered on a cart
    screen = screen[:, :, slice_range]
    # Convert to float, rescale, convert to torch tensor
    # (this doesn't require a copy)
    screen = np.ascontiguousarray(screen, dtype=np.float32) / 255
    screen = torch.from_numpy(screen)
    # Resize, and add a batch dimension (BCHW)
    return resize(screen).unsqueeze(0).to(device)


env.reset()
plt.figure()
plt.imshow(get_screen().cpu().squeeze(0).permute(1, 2, 0).numpy(),
           interpolation='none')
plt.title('Example extracted screen')
plt.show()
Training¶

Hyperparameters and utilities¶
This cell instantiates our model and its optimizer, and defines some utilities:
select_action - will select an action according to an epsilon greedy policy. Simply put, we'll sometimes use our model for choosing the action, and sometimes we'll just sample one uniformly. The probability of choosing a random action will start at EPS_START and will decay exponentially towards EPS_END. EPS_DECAY controls the rate of the decay.

plot_durations - a helper for plotting the durations of episodes, along with an average over the last 100 episodes (the measure used in the official evaluations). The plot will be underneath the cell containing the main training loop, and will update after every episode.
BATCH_SIZE = 128
GAMMA = 0.999
EPS_START = 0.9
EPS_END = 0.05
EPS_DECAY = 200
TARGET_UPDATE = 10

# Get screen size so that we can initialize layers correctly based on shape
# returned from AI gym. Typical dimensions at this point are close to 3x40x90
# which is the result of a clamped and down-scaled render buffer in get_screen()
init_screen = get_screen()
_, _, screen_height, screen_width = init_screen.shape

# Get number of actions from gym action space
n_actions = env.action_space.n

policy_net = DQN(screen_height, screen_width, n_actions).to(device)
target_net = DQN(screen_height, screen_width, n_actions).to(device)
target_net.load_state_dict(policy_net.state_dict())
target_net.eval()

optimizer = optim.RMSprop(policy_net.parameters())
memory = ReplayMemory(10000)

steps_done = 0


def select_action(state):
    global steps_done
    sample = random.random()
    eps_threshold = EPS_END + (EPS_START - EPS_END) * \
        math.exp(-1. * steps_done / EPS_DECAY)
    steps_done += 1
    if sample > eps_threshold:
        with torch.no_grad():
            # t.max(1) will return largest column value of each row.
            # second column on max result is index of where max element was
            # found, so we pick action with the larger expected reward.
            return policy_net(state).max(1)[1].view(1, 1)
    else:
        return torch.tensor([[random.randrange(n_actions)]], device=device, dtype=torch.long)


episode_durations = []


def plot_durations():
    plt.figure(2)
    plt.clf()
    durations_t = torch.tensor(episode_durations, dtype=torch.float)
    plt.title('Training...')
    plt.xlabel('Episode')
    plt.ylabel('Duration')
    plt.plot(durations_t.numpy())
    # Take 100 episode averages and plot them too
    if len(durations_t) >= 100:
        means = durations_t.unfold(0, 100, 1).mean(1).view(-1)
        means = torch.cat((torch.zeros(99), means))
        plt.plot(means.numpy())

    plt.pause(0.001)  # pause a bit so that plots are updated
    if is_ipython:
        display.clear_output(wait=True)
        display.display(plt.gcf())
Training loop¶
Finally, the code for training our model.
Here, you can find an optimize_model function that performs a single step of the optimization. It first samples a batch, concatenates all the tensors into a single one, computes \(Q(s_t, a_t)\) and \(V(s_{t+1}) = \max_a Q(s_{t+1}, a)\), and combines them into our loss. By definition we set \(V(s) = 0\) if \(s\) is a terminal state. We also use a target network to compute \(V(s_{t+1})\) for added stability. The target network has its weights kept frozen most of the time, but is updated with the policy network's weights every so often. This is usually a set number of steps but we shall use episodes for simplicity.
def optimize_model():
    if len(memory) < BATCH_SIZE:
        return
    transitions = memory.sample(BATCH_SIZE)
    # Transpose the batch (see https://stackoverflow.com/a/19343/3343043 for
    # detailed explanation). This converts batch-array of Transitions
    # to Transition of batch-arrays.
    batch = Transition(*zip(*transitions))

    # Compute a mask of non-final states and concatenate the batch elements
    # (a final state would've been the one after which simulation ended)
    non_final_mask = torch.tensor(tuple(map(lambda s: s is not None,
                                            batch.next_state)), device=device, dtype=torch.uint8)
    non_final_next_states = torch.cat([s for s in batch.next_state
                                       if s is not None])
    state_batch = torch.cat(batch.state)
    action_batch = torch.cat(batch.action)
    reward_batch = torch.cat(batch.reward)

    # Compute Q(s_t, a) - the model computes Q(s_t), then we select the
    # columns of actions taken. These are the actions which would've been taken
    # for each batch state according to policy_net
    state_action_values = policy_net(state_batch).gather(1, action_batch)

    # Compute V(s_{t+1}) for all next states.
    # Expected values of actions for non_final_next_states are computed based
    # on the "older" target_net; selecting their best reward with max(1)[0].
    # This is merged based on the mask, such that we'll have either the expected
    # state value or 0 in case the state was final.
    next_state_values = torch.zeros(BATCH_SIZE, device=device)
    next_state_values[non_final_mask] = target_net(non_final_next_states).max(1)[0].detach()
    # Compute the expected Q values
    expected_state_action_values = (next_state_values * GAMMA) + reward_batch

    # Compute Huber loss
    loss = F.smooth_l1_loss(state_action_values, expected_state_action_values.unsqueeze(1))

    # Optimize the model
    optimizer.zero_grad()
    loss.backward()
    for param in policy_net.parameters():
        param.grad.data.clamp_(-1, 1)
    optimizer.step()
Below, you can find the main training loop. At the beginning we reset the environment and initialize the state Tensor. Then, we sample an action, execute it, observe the next screen and the reward (always 1), and optimize our model once. When the episode ends (our model fails), we restart the loop.
Below, num_episodes is set small. You should download the notebook and run a lot more episodes, such as 300+, for meaningful duration improvements.
num_episodes = 50
for i_episode in range(num_episodes):
    # Initialize the environment and state
    env.reset()
    last_screen = get_screen()
    current_screen = get_screen()
    state = current_screen - last_screen
    for t in count():
        # Select and perform an action
        action = select_action(state)
        _, reward, done, _ = env.step(action.item())
        reward = torch.tensor([reward], device=device)

        # Observe new state
        last_screen = current_screen
        current_screen = get_screen()
        if not done:
            next_state = current_screen - last_screen
        else:
            next_state = None

        # Store the transition in memory
        memory.push(state, action, next_state, reward)

        # Move to the next state
        state = next_state

        # Perform one step of the optimization (on the policy network)
        optimize_model()
        if done:
            episode_durations.append(t + 1)
            plot_durations()
            break
    # Update the target network, copying all weights and biases in DQN
    if i_episode % TARGET_UPDATE == 0:
        target_net.load_state_dict(policy_net.state_dict())

print('Complete')
env.render()
env.close()
plt.ioff()
plt.show()
Here is the diagram that illustrates the overall resulting data flow.
Actions are chosen either randomly or based on a policy, getting the next step sample from the gym environment. We record the results in the replay memory and also run an optimization step on every iteration. Optimization picks a random batch from the replay memory to train the new policy. The "older" target_net is also used in optimization to compute the expected Q values; it is updated occasionally to keep it current.
|
Use Coulomb’s Law to find the force on a charge from two nearby charges.
Written by Willy McAllister.
Contents
Coulomb's Law
Multiple charges
Triangles
Strategy
Three point charges
Summary

Where we are headed
When you have more than two point charges pushing and pulling on each other, use Coulomb’s Law to find the force between pairs of charges. Then combine the forces with vector addition.
We use the Law of Cosines and the Law of Sines to solve force triangles.
Coulomb’s Law
Coulomb’s Law predicts the force between pairs of charges,
$\vec F = \dfrac{1}{4\pi\epsilon_0}\dfrac{q_0\,q_1}{r^2}\,\bold{\hat r_{01}}$
$q_0$ and $q_1$ are the two point charges involved.
$r$ is the distance between the charges.
$\bold{\hat r_{01}}$ is a unit vector (length $1$) pointing from one charge to the other. We include this to make the right side of the equation a vector. $\bold{\hat r}$ is pronounced “r hat.”
$\epsilon_0$ is a constant equal to $8.85 \times 10^{-12}$ coulomb$^2/$newton-meter$^2$.
$K = \dfrac{1}{4\pi\epsilon_0} = 9\times 10^9$ newton-meter$^2/$coulomb$^2$
Multiple charges
How do you find the force on one charge caused by several others?
If you have multiple point charges tugging on each other you might wonder if the forces somehow get tangled and warp each other. Nope, that is not what happens. It is simpler than that. The pair-wise forces are independent. Each pair-wise force obeys Coulomb's Law, and combines with the other forces by vector addition. If charges $1$, $2$, and $3$ are near charge $0$, there is no sense in which charge $3$ "saps" or "absorbs" the ability of charge $2$ to generate an electric force on charge $0$.
Suppose you have $N$ point charges surrounding one charge. You’ve picked $q_0$ to be your favorite. Find the force on $q_0$ by adding up the pair-wise force vectors from charges $q_1 \ldots q_N$ using vector addition. In algebraic notation we write this as,
$\displaystyle \vec F_0 = \sum_{n=1}^N \dfrac{1}{4\pi\epsilon_0}\dfrac{q_0\,q_n}{r_{0n}^2}\,\bold{\hat r_{0n}}$
We will work through an example with three charges, but before diving in let’s review a little triangle theory.
Triangles
Solving the force with three point charges is basically an exercise in solving triangles. There will be two triangles involved,
A physical triangle with three charges on the corners. A different triangle of force vectors pushing or pulling on the selected charge.
Sometimes a test question is designed to give you a simple force triangle you can solve in your head, but usually the force triangle isn’t that easy. We will cover a general method for solving any triangle based on the Law of Cosines and the Law of Sines.
Here is an arbitrary triangle with its sides and angles labeled,
Law of Cosines
The Law of Cosines has three forms. They all mean the same thing.
$c^2 = a^2 + b^2 - 2ab \cos \gamma$
$b^2 = a^2 + c^2 - 2ac \cos \beta$ $a^2 = b^2 + c^2 - 2bc \cos \alpha$
When you know two sides and the angle between them, the Law of Cosines gives you the third side. We use it to find the magnitude of the resultant force vector.
Law of Sines
The Law of Sines has one form,
$\dfrac{a}{\sin \alpha} = \dfrac{b}{\sin \beta} = \dfrac{c}{\sin \gamma}$
We use the Law of Sines to find the angle of the resultant force vector.
If you have an electrostatics test coming up consider memorizing these trig laws.
Strategy
A three-charge problem usually unfolds like this,
The charge triangle is given in the problem statement. You are asked to find the force on one of the charges. We'll call that one $q_0$. The force triangle appears when you apply Coulomb's Law two times to $q_0$. Find the two pair-wise force vectors using Coulomb's Law, giving you two sides of a force triangle. The hard part is finding the magnitude and angle of the third side.

Three point charges

Given three charges at the corners of a $\mathbf{30\degree – \,60\degree – \,90\degree}$ triangle, find the force on $q_0$.
Let $q_0 = +1$, $q_1 = +2$, and $q_2 = -3$, all in units of coulombs $(\text C)$. The distance between $q_0$ and $q_1$ is $1\,\text m$. All three charges are static, meaning they don't move. Think of them as glued to the page or pinned down with a thumbtack.
Predict
Before we do the math, use your intuition to predict the result.
Draw the charge triangle on a piece of paper. Sketch your estimate of the two force vectors pushing/pulling on $q_0$. Sketch a vector with your prediction of the total force on $q_0$.

Charge triangle
The first thing to do is complete the details of the charge triangle with all the angles and sides. (This should be a familiar triangle from geometry class.)
For Coulomb's Law problems we manage direction separately from magnitude.
Next, sketch the individual force vectors. There are two force vectors to think about, {$q_1$ to $q_0$}, and {$q_2$ to $q_0$}. We can sketch them on the triangle,
Apply Coulomb’s Law to find the magnitude of each force,
Use the absolute value of the charges in the numerator of Coulomb’s Law.
$|\vec F| = K \,\dfrac{q_0\, q_n}{r^2}$
$\blueD{|\vec F_{10}|} = K \,\dfrac{1 \cdot 2}{1^2} = 2K\qquad\quad$ (repels)
$\greenD{|\vec F_{20}|} = K \,\dfrac{1 \cdot 3}{2^2} = 0.75K\qquad$ (attracts)
We have the magnitude and direction of the pairwise forces,
Force triangle
The final step is to perform the vector sum of $\blueD{\vec F_{10}}$ and $\greenD{\vec F_{20}}$ to find the resultant force on $q_0$.
If you are new to vector addition, check here.
To set up the force triangle for vector addition, slide the green vector down so its tail touches the tip of the blue vector. We are looking for force $F_0$ shown in black,
We want to find the magnitude and angle of $\vec F_0$. This triangle is not a right triangle, so it’s not so simple to find $\vec F_0$. This is where we use the Laws of Cosines and Sines.
Label the force triangle with the notation we used for the general triangle up above,
We know two sides and the angle between them, $b$, $c$, and $\alpha$. We want to find the third side, $a$. This is a job for the Law of Cosines. Select the variation that solves for $a$,
$a^2 = b^2 + c^2 - 2bc \cos \alpha$
Plug in the known values and crank,
$a^2 = (0.75K)^2 + (2K)^2 - 2\cdot 0.75K \cdot 2K \cos 60\degree$
$a^2 = [\,0.75^2 + 2^2 - 2\cdot 0.75 \cdot 2 \cdot 0.5\,]K^2\qquad \cos 60\degree = 0.5$
$a^2 = [\,0.5625 + 4 - 1.5\,]K^2$
$a^2 = 3.0625\,K^2$
$a = \sqrt{3.0625\,K^2}$
$a = 1.75\,K$
That’s the magnitude of the $F_0$ vector. $(K = 9\times 10^9\,\text{N-m}^2/\text C^2)$
Now find the angle of $F_0$ using the Law of Sines. We know all three sides and angle $\alpha$. Two angles are missing, but we only need to find one of them, $\beta$. Pick the appropriate part of the Law of Sines that involves $\beta$ and three of our knowns,
$\dfrac{a}{\sin \alpha} = \dfrac{b}{\sin \beta}$
Fill in the known variables and isolate $\beta$,
$\dfrac{1.75K}{\sin 60\degree} = \dfrac{0.75K}{\sin \beta}$
$\sin \beta = \dfrac{0.75K}{1.75K}\sin 60\degree \qquad \sin 60\degree = \dfrac{\sqrt 3}{2}$
$\beta = \sin^{-1} \left (\dfrac{0.75}{1.75}\dfrac{\sqrt 3}{2} \right )$
$\beta = \sin^{-1} 0.37$
$\beta = 21.8\degree$
$\beta$ is the internal angle inside the triangle. The best answer is the angle down from horizontal, which is $-90\degree + 21.8\degree = -68.2\degree$
$F_0 = 1.75\,K \,\angle{-68.2\degree}$
$F_0 = 1.75\cdot 9 \times 10^9 \,\angle{-68.2\degree}$
$F_0 = 15.7 \times 10^9\,\angle{-68.2\degree}\,\text N$
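If you would like to double-check the arithmetic, the whole force triangle fits in a few lines of Python. This is only a sanity check of the numbers worked out above; the variable names are mine, and $K$ is factored out until the last line.

import math

K = 9e9  # Coulomb's constant, newton-meter^2 / coulomb^2

# Pairwise force magnitudes from Coulomb's Law, in units of K
F10 = (1 * 2) / 1**2   # from q1 (repels): 2
F20 = (1 * 3) / 2**2   # from q2 (attracts): 0.75

# Law of Cosines for the magnitude of the resultant (alpha = 60 degrees)
alpha = math.radians(60)
a = math.sqrt(F20**2 + F10**2 - 2 * F20 * F10 * math.cos(alpha))   # 1.75

# Law of Sines for the interior angle beta
beta = math.degrees(math.asin(F20 * math.sin(alpha) / a))          # about 21.8

print(a * K)        # magnitude, about 1.575e10 N
print(-90 + beta)   # angle below horizontal, about -68.2 degrees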
Take a moment now to go back to your prediction drawing to check your initial intuition.
Do calculations with Google
Compute arcsine in Google: Copy/paste this equation into Google search,
arcsin((0.75 * sqrt 3)/(1.75 * 2)) in degrees
Google understands the Law of Cosines and the Law of Sines. Copy/paste these instructions into Google search to call these special-purpose calculators,
law of cosines calc: find c a=2 b=0.75 gamma=60 degrees
law of sines calc: find beta, a=1.75, alpha=60, b=0.75
Summary
The force on a point charge from several neighboring point charges is the vector sum of the pair-wise forces,
$\displaystyle \vec F_0 = \sum_{n=1}^N \dfrac{1}{4\pi\epsilon_0}\dfrac{q_0\,q_n}{r_{0n}^2}\,\bold{\hat r_{0n}}$
|
Consider the following convex continuous optimization problem:
min \(f(x)=\sqrt{x'Qx}-c'x\)
s.t. \(e'x \leq 1\)
\(x \geq 0,\)
where Q is positive definite, \(c\geq 0\) and \(e=(1,...,1)\).
I'm interested in any feasible point \(y\) satisfying \(f(y) < 0\). Assuming that \(f(z) < 0\) for the optimal solution \(z\), is it true that there exists a vertex \(e_i\) such that \(f(e_i) < 0\)? Certainly, if this is not true, there is still a convex combination of the vertices with a negative objective function value, but how does one find it?
It is possible that \(f\) is strictly positive at every vertex but negative somewhere in the interior. For instance, try two dimensions with \(Q=I\) and \(c = (0.9, 0.9)\).
Since your objective function is differentiable (other than at the origin), you could apply a gradient projection method and just stop iterating as soon as you got an objective value less than zero (by more than rounding tolerance).
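To make that concrete, here is a rough sketch of what such a projected gradient loop could look like in Python/NumPy. The function names, fixed step size, and interior starting point are illustrative choices on my part, not part of the answer above; the projection uses the standard sorting-based simplex projection for the case where the budget constraint is active, and starting away from the origin sidesteps the non-differentiability there.

import numpy as np

def project(x):
    # Euclidean projection onto the feasible set {x >= 0, sum(x) <= 1}
    y = np.maximum(x, 0.0)
    if y.sum() <= 1.0:
        return y
    # Budget constraint active: project onto the simplex {x >= 0, sum(x) = 1}
    u = np.sort(y)[::-1]
    css = np.cumsum(u)
    j = np.arange(1, y.size + 1)
    rho = np.nonzero(u * j > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(y - theta, 0.0)

def find_negative_point(Q, c, step=0.05, iters=5000):
    # Projected gradient on f(x) = sqrt(x'Qx) - c'x, stopping once f < 0
    x = np.full(len(c), 1.0 / len(c))     # start in the interior, away from x = 0
    f = np.sqrt(x @ Q @ x) - c @ x
    for _ in range(iters):
        if f < -1e-10:
            return x, f
        s = np.sqrt(x @ Q @ x)
        grad = (Q @ x) / max(s, 1e-12) - c
        x = project(x - step * grad)
        f = np.sqrt(x @ Q @ x) - c @ x
    return x, f

# The two-dimensional instance from the comment above
print(find_negative_point(np.eye(2), np.array([0.9, 0.9])))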
answered
Paul Rubin ♦♦
Hi Paul, I was aware of this method, but it might actually be too costly. Is there a way to find 'any' descent direction at 0? I was reading about steepest descent, and computing it would result in the solution of a trust region subproblem. But actually any descent direction would be sufficient to decrease the objective function value below zero in one iteration, right? Unfortunately I'm not aware of any method to determine one.
answered
Long
|
De Bruijn-Newman constant
For each real number [math]t[/math], define the entire function [math]H_t: {\mathbf C} \to {\mathbf C}[/math] by the formula
[math]\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du[/math]
where [math]\Phi[/math] is the super-exponentially decaying function
[math]\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).[/math]
It is known that [math]\Phi[/math] is even, and that [math]H_t[/math] is even, real on the real axis, and obeys the functional equation [math]H_t(\overline{z}) = \overline{H_t(z)}[/math]. In particular, the zeroes of [math]H_t[/math] are symmetric about both the real and imaginary axes. One can also express [math]H_t[/math] in a number of different forms, such as
[math]\displaystyle H_t(z) = \frac{1}{2} \int_{\bf R} e^{tu^2} \Phi(u) e^{izu}\ du[/math]
or
[math]\displaystyle H_t(z) = \frac{1}{2} \int_0^\infty e^{t\log^2 x} \Phi(\log x) e^{iz \log x}\ \frac{dx}{x}.[/math]
In the notation of [KKL2009], one has
[math]\displaystyle H_t(z) = \frac{1}{8} \Xi_{t/4}(z/2).[/math]
De Bruijn [B1950] and Newman [N1976] showed that there existed a constant, the
de Bruijn-Newman constant [math]\Lambda[/math], such that [math]H_t[/math] has all zeroes real precisely when [math]t \geq \Lambda[/math]. The Riemann hypothesis is equivalent to the claim that [math]\Lambda \leq 0[/math]. Currently it is known that [math]0 \leq \Lambda \lt 1/2[/math] (lower bound in [RT2018], upper bound in [KKL2009]).
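As a concrete illustration, [math]H_t[/math] on the real axis can be approximated directly from the defining integral, since [math]\Phi[/math] decays so rapidly that truncating the sum and the integral costs essentially nothing. The Python/SciPy sketch below is only a naive quadrature for real [math]z[/math] of moderate size; the truncation levels are illustrative and nothing here is tied to the project's own code.

import numpy as np
from scipy.integrate import quad

def Phi(u, n_terms=30):
    # Truncation of the super-exponentially decaying sum defining Phi(u)
    n = np.arange(1, n_terms + 1)
    return np.sum((2 * np.pi**2 * n**4 * np.exp(9 * u)
                   - 3 * np.pi * n**2 * np.exp(5 * u))
                  * np.exp(-np.pi * n**2 * np.exp(4 * u)))

def H(t, z, upper=6.0):
    # H_t(z) = int_0^infinity exp(t u^2) Phi(u) cos(z u) du, for real z
    integrand = lambda u: np.exp(t * u * u) * Phi(u) * np.cos(z * u)
    value, _ = quad(integrand, 0.0, upper, limit=200)
    return value

# By the identity above, H_0(0) should equal xi(1/2)/8, a small positive number
print(H(0.0, 0.0))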
The Polymath15 project seeks to improve the upper bound on [math]\Lambda[/math]. The current strategy is to combine the following three ingredients:

Numerical zero-free regions for [math]H_t(x+iy)[/math] of the form [math]\{ x+iy: 0 \leq x \leq T; y \geq \varepsilon \}[/math] for explicit [math]T, \varepsilon, t \gt 0[/math].
Rigorous asymptotics that show that [math]H_t(x+iy)[/math] has no zeroes whenever [math]y \geq \varepsilon[/math] and [math]x \geq T[/math] for a sufficiently large [math]T[/math].
Dynamics of zeroes results that control [math]\Lambda[/math] in terms of the maximum imaginary part of a zero of [math]H_t[/math].

[math]t=0[/math]
When [math]t=0[/math], one has
[math]\displaystyle H_0(z) = \frac{1}{8} \xi( \frac{1}{2} + \frac{iz}{2} ) [/math]
where
[math]\displaystyle \xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \zeta(s)[/math]
is the Riemann xi function. In particular, [math]z[/math] is a zero of [math]H_0[/math] if and only if [math]\frac{1}{2} + \frac{iz}{2}[/math] is a non-trivial zero of the Riemann zeta function. Thus, for instance, the Riemann hypothesis is equivalent to all the zeroes of [math]H_0[/math] being real, and the Riemann-von Mangoldt formula (in the explicit form given by Backlund) gives
[math]\displaystyle \left|N_0(T) - (\frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} - \frac{7}{8})\right| \lt 0.137 \log (T/2) + 0.443 \log\log(T/2) + 4.350 [/math]
for any [math]T \gt 4[/math], where [math]N_0(T)[/math] denotes the number of zeroes of [math]H_0[/math] with real part between 0 and T.
The first [math]10^{13}[/math] zeroes of [math]H_0[/math] (to the right of the origin) are real [G2004]. This numerical computation uses the Odlyzko-Schonhage algorithm. In [P2017] it was independently verified that all zeroes of [math]H_0[/math] between 0 and 61,220,092,000 were real.
[math]t\gt0[/math]
For any [math]t\gt0[/math], it is known that all but finitely many of the zeroes of [math]H_t[/math] are real and simple [KKL2009, Theorem 1.3]. In fact, assuming the Riemann hypothesis,
all of the zeroes of [math]H_t[/math] are real and simple [CSV1994, Corollary 2].
It is known that [math]\xi[/math] is an entire function of order one ([T1986, Theorem 2.12]). Hence by the fundamental solution for the heat equation, the [math]H_t[/math] are also entire functions of order one for any [math]t[/math].
Because [math]\Phi[/math] is positive, [math]H_t(iy)[/math] is positive for any [math]y[/math], and hence there are no zeroes on the imaginary axis.
Let [math]\sigma_{max}(t)[/math] denote the largest imaginary part of a zero of [math]H_t[/math], thus [math]\sigma_{max}(t)=0[/math] if and only if [math]t \geq \Lambda[/math]. It is known that the quantity [math]\frac{1}{2} \sigma_{max}(t)^2 + t[/math] is non-increasing in time whenever [math]\sigma_{max}(t)\gt0[/math] (see [KKL2009, Proposition A]). In particular we have
[math]\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2[/math]
for any [math]t[/math].
The zeroes [math]z_j(t)[/math] of [math]H_t[/math] obey the system of ODE
[math]\partial_t z_j(t) = - \sum_{k \neq j} \frac{2}{z_k(t) - z_j(t)}[/math]
where the sum is interpreted in a principal value sense, and excluding those times in which [math]z_j(t)[/math] is a repeated zero. See dynamics of zeros for more details. Writing [math]z_j(t) = x_j(t) + i y_j(t)[/math], we can write the dynamics as
[math] \partial_t x_j = - \sum_{k \neq j} \frac{2 (x_k - x_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math] [math] \partial_t y_j = \sum_{k \neq j} \frac{2 (y_k - y_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math]
where the dependence on [math]t[/math] has been omitted for brevity.
In [KKL2009, Theorem 1.4], it is shown that for any fixed [math]t\gt0[/math], the number [math]N_t(T)[/math] of zeroes of [math]H_t[/math] with real part between 0 and T obeys the asymptotic
[math]N_t(T) = \frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + \frac{t}{16} \log T + O(1) [/math]
as [math]T \to \infty[/math] (caution: the error term here is not uniform in t). Also, the zeroes behave like an arithmetic progression in the sense that
[math] z_{k+1}(t) - z_k(t) = (1+o(1)) \frac{4\pi}{\log |z_k(t)|} = (1+o(1)) \frac{4\pi}{\log k} [/math]
as [math]k \to +\infty[/math].
Threads

Polymath proposal: upper bounding the de Bruijn-Newman constant, Terence Tao, Jan 24, 2018.
Polymath15, first thread: computing H_t, asymptotics, and dynamics of zeroes, Terence Tao, Jan 27, 2018.
Polymath15, second thread: generalising the Riemann-Siegel approximate functional equation, Terence Tao and Sujit Nair, Feb 2, 2018.
Polymath15, third thread: computing and approximating H_t, Terence Tao and Sujit Nair, Feb 12, 2018.
Polymath 15, fourth thread: closing in on the test problem, Terence Tao, Feb 24, 2018.
Polymath15, fifth thread: finishing off the test problem?, Terence Tao, Mar 2, 2018.
Polymath15, sixth thread: the test problem and beyond, Terence Tao, Mar 18, 2018.
Polymath15, seventh thread: going below 0.48, Terence Tao, Mar 28, 2018.
Polymath15, eighth thread: going below 0.28, Terence Tao, Apr 17, 2018.
Polymath15, ninth thread: going below 0.22?, Terence Tao, May 4, 2018.
Polymath15, tenth thread: numerics update, Rudolph Dwars and Kalpesh Muchhal, Sep 6, 2018.

Other blog posts and online discussion

Heat flow and zeroes of polynomials, Terence Tao, Oct 17, 2017.
The de Bruijn-Newman constant is non-negative, Terence Tao, Jan 19, 2018.
Lehmer pairs and GUE, Terence Tao, Jan 20, 2018.
A new polymath proposal (related to the Riemann hypothesis) over Tao's blog, Gil Kalai, Jan 26, 2018.

Code and data

Writeup
Here are the Polymath15 grant acknowledgments.
Test problem

Zero-free regions
See Zero-free regions.
Bibliography

[A2011] J. Arias de Reyna, High-precision computation of Riemann's zeta function by the Riemann-Siegel asymptotic formula, I, Mathematics of Computation, Volume 80, Number 274, April 2011, pp. 995–1009.
[B1994] W. G. C. Boyd, Gamma Function Asymptotics by an Extension of the Method of Steepest Descents, Proceedings: Mathematical and Physical Sciences, Vol. 447, No. 1931 (Dec. 8, 1994), pp. 609–630.
[B1950] N. C. de Bruijn, The roots of trigonometric integrals, Duke J. Math. 17 (1950), 197–226.
[CSV1994] G. Csordas, W. Smith, R. S. Varga, Lehmer pairs of zeros, the de Bruijn-Newman constant Λ, and the Riemann hypothesis, Constr. Approx. 10 (1994), no. 1, 107–129.
[G2004] X. Gourdon, The [math]10^{13}[/math] first zeros of the Riemann Zeta function, and zeros computation at very large height, 2004.
[KKL2009] H. Ki, Y. O. Kim, and J. Lee, On the de Bruijn-Newman constant, Advances in Mathematics, 22 (2009), 281–306.
[N1976] C. M. Newman, Fourier transforms with only real zeroes, Proc. Amer. Math. Soc. 61 (1976), 246–251.
[P2017] D. J. Platt, Isolating some non-trivial zeros of zeta, Math. Comp. 86 (2017), 2449–2467.
[P1992] G. Pugh, The Riemann-Siegel formula and large scale computations of the Riemann zeta function, M.Sc. Thesis, U. British Columbia, 1992.
[RT2018] B. Rodgers, T. Tao, The de Bruijn-Newman constant is non-negative, preprint, arXiv:1801.05914.
[T1986] E. C. Titchmarsh, The theory of the Riemann zeta-function. Second edition. Edited and with a preface by D. R. Heath-Brown. The Clarendon Press, Oxford University Press, New York, 1986.
|
F-singularities via alterations
Manuel Blickle, Karl Schwede, Kevin Tucker
Number 11
Author: Prof. Dr. Manuel Blickle
Year: 2011
For a normal F-finite variety X and a boundary divisor \Delta we give a uniform description of an ideal which in characteristic zero yields the multiplier ideal, and in positive characteristic the test ideal of the pair (X,\Delta). Our description is in terms of regular alterations over X, and one consequence of it is a common characterization of rational singularities (in characteristic zero) and F-rational singularities (in characteristic p) by the surjectivity of the trace map \pi_* \omega_Y \to \omega_X for every such alteration \pi: Y \to X.
Furthermore, building on work of Bhatt, we establish up-to-finite-map versions of the Grauert-Riemenschneider and Nadel/Kawamata-Viehweg vanishing theorems in the characteristic p setting without assuming W2 lifting, and show that these are strong enough in some applications to extend sections.
|
Noetherian Rings = Generalization of PIDs
When I was first introduced to Noetherian rings, I didn't understand why my professor made such a big
hoopla over these things. What makes Noetherian rings so special? Today's post is just a little intuition to stash in The Back Pocket, for anyone hearing the word "Noetherian" for the first time.
A ring is said to be
Noetherian if every ideal in the ring is finitely generated. Right away we see that every principal ideal domain is a Noetherian ring since every ideal is generated by one element. Well, in short, "Noetherian-ness" is a property which generalizes "PID-ness".
As K. Conrad so nicely puts it,
The property of all ideals being singly generated is often not preserved under common ring-theoretic constructions (e.g. $\mathbb{Z}$ is a PID but $\mathbb{Z}[x]$ is not), but the property of all ideals being finitely generated does remain valid under many constructions of new rings from old rings. For example... every quadratic ring $\mathbb{Z}[\sqrt{d}]$ is Noetherian, even though many of these rings are not PIDs." [italics added]
And there you have it! We like rings with finitely generated ideals because it keeps the math (relatively) nice. For instance, one might ask, "Given a Noetherian ring $R$, can I build a
new ring such that it, too, is Noetherian?"* Hilbert's Basis Theorem says the answer is yes! You can construct the polynomial ring $R[x]$ and it will be Noetherian whenever $R$ is (and in fact so will $R[x_1,\ldots,x_n]$).
As a final remark, there are three equivalent conditions associated with the Noetherian property. So when testing a ring for "Noetherian-ness," remember that one condition may be easier to invoke than another. They are as follows:
Proposition: Let $R$ be a commutative ring with 1. The following are equivalent:

1. Every ascending chain of ideals $I_1\subset I_2\subset\cdots$ in $R$ is stationary (i.e. there exists $N\in \mathbb{N}$ s.t. $I_n=I_N$ for all $n\geq N$).
2. Every nonempty set of ideals in $R$ has a maximal element.**
3. Every ideal $I\triangleleft R$ is finitely generated.

Sketch of proof:

(1 $\Rightarrow$ 2) Let $X$ be a nonempty set of ideals in $R$. Pick $I_1\in X$. If $I_1$ is maximal, then done. Otherwise there is $I_2\in X$ such that $I_1\subset I_2$. If $I_2$ is maximal, then done. Otherwise there is $I_3\in X$ such that $I_1\subset I_2\subset I_3$. Continuing in this fashion, we see that if $X$ does not contain a maximal element, then we can produce an ascending chain of ideals which never stabilizes, contradicting 1.

(2 $\Rightarrow$ 3) Let $I\triangleleft R$ be an ideal and $X=\{J\triangleleft R:J\subset I \text{ and $J$ is finitely generated}\}$ be the set of all ideals contained in $I$ which are finitely generated. Since $X\neq \emptyset$ (as $I$ contains the zero ideal), $X$ contains a maximal element, say $J$. We claim $J=I$. Suppose not. Then we can find an element $a\in I\smallsetminus J$ and can form the ideal $J+(a)$. This ideal is finitely generated and so is an element in $X$. But $J\subset J+(a)$ and this contradicts the maximality of $J$. Hence $J=I$ and so $I$ is finitely generated.

(3 $\Rightarrow$ 1) Let $I_1\subset I_2\subset \cdots$ be an ascending chain and define $I=\bigcup_{n=1}^{\infty}I_n$. This is an ideal and so by assumption, $I=(a_1,a_2,\ldots,a_m)$ for some $a_i\in R$. Thus we can find $N$ such that $a_1,a_2,\ldots,a_m\in I_N$. Hence $I\subset I_N$, which implies $I_n=I_N$ for all $n\geq N$, and so the chain stabilizes.
Footnotes:
*This is a standard question a mathematician might ask.
How can I build new things from existing ones?
**Don't forget, "maximal" is not the same thing as "a maximum"! - we've seen this before.
|
Today I learned about the Elliot Activation (or Sigmoid) Function. In fact, The MathWorks just included it in their most recent update to the Neural Network toolbox. The function was first introduced in 1993 by D.L. Elliot under the title A Better Activation Function for Artificial Neural Networks. The function closely approximates the Sigmoid or Hyperbolic Tangent functions for small values, however it takes longer to converge for large values (i.e. It doesn't go to 1 or 0 as fast), though this isn't particularly a problem if you're using it for classification.
In my testing in MATLAB "the function computes over
2x faster than the exponential sigmoid function", which, for certain types of ML problems can lead to a significant speed improvement. So what is this function?
Behold:
\begin{aligned} \sigma_e(x) = \frac{x}{1 + |x|} \end{aligned}
It's also differentiable, with the derivative
\begin{aligned} \frac{\partial\sigma_e(x)}{\partial x} = \frac{1}{(1 + |x|)^2} \end{aligned}
For an activation function in the range $ [0,1] $ it can be written as
\begin{aligned} \sigma_e(x) = \frac{0.5(x)}{1 + |x|} + 0.5 \end{aligned}
In MATLAB this can be simply written as
function p = elliotsig(x)
% From http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.46.7204
% "A better Activation Function for Artificial Neural Networks"
p = 0.5*(x)/(1 + abs(x)) + 0.5;
end
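For anyone who wants to play with it outside of MATLAB, here is my own NumPy translation (the function name is just kept for parity with the MATLAB snippet, and the logistic sigmoid is included only for comparison):

import numpy as np

def elliotsig(x):
    # Elliott activation rescaled to the range [0, 1]
    return 0.5 * x / (1 + np.abs(x)) + 0.5

def logsig(x):
    # Ordinary exponential (logistic) sigmoid, for comparison
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-6, 6, 7)
print(elliotsig(x))   # approaches 0 and 1 more slowly than logsig
print(logsig(x))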
Here's an image I ganked from Wikipedia that shows the Elliot, Exponential and Hyperbolic Tangent Sigmoid functions:
|
One can construct a finite measure on a compact metric space $(X,d)$ by the following procedure:
Fix a non-negative sequence $\{\epsilon_n\}$, $\epsilon_n \to 0$. Let $Y_{\epsilon_n}$ be the minimal covering net: $\bigcup_{y \in Y_{\epsilon_n}} \mathcal{B} (y, \epsilon_n ) = X$ and there is no lower-cardinality net with this property. Let $\#Y_{\epsilon_n}$ denote the respective minimum cardinality.
Construct the measure $\mu_n$ on $X$ as $\mu_n (A) = \frac{\#(A \cap Y_{\epsilon_n})}{\# Y_{\epsilon_n}}$. Now $\mu_n \to \mu$, weakly, independently of the choices of nets.
See the following post for the above construction: Does every compact metric space have a canonical probability measure?
This construction cannot be regarded as canonical, since it depends on the sequence $\{\epsilon_n\}$ that is employed. (An example is given in the comments on the linked post.)
My question: Is the choice of sequence irrelevant if $X$ is a compact subset of a Banach space? If yes, compact subsets of Banach spaces may carry a canonical uniform measure.
Many thanks!
|
From previous lessons, you know that the derivative is the instantaneous rate of change of a function. We said that the derivative of a function at a certain point is just the slope of the function at that point. And to calculate that slope of the function at a given point, we make $\Delta x$ value smaller until it approaches zero, and see what our $ \frac{\Delta f}{\Delta x} $ converges upon.
For example, we saw the following table:
| $ \Delta x $ | $ \frac{\Delta y}{\Delta x} $ |
| --- | --- |
| 1 | 5 |
| .1 | 4.1 |
| .01 | 4.01 |
| .001 | 4.001 |
This convergence around one number is called the *limit*. And we can describe what we see in the above table as the expression:
$$ f'(2) = \lim_{\Delta x\to0} \frac{\Delta f}{\Delta x} = 4 $$
We read this as the limit of $\frac{\Delta f}{\Delta x} $ as $ \Delta x $ approaches zero equals 4. So, in general our definition of the derivative is:
$$ f'(x) = \lim_{\Delta x\to0} \frac{\Delta f}{\Delta x} = \lim_{h\to0} \frac{f(x + h) - f(x)}{h} $$
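You can verify this kind of convergence with a couple lines of Python. The table above is consistent with the function $f(x) = x^2$ evaluated at $x = 2$, which is the assumption behind this quick check:

def f(x):
    return x ** 2

for delta_x in [1, 0.1, 0.01, 0.001]:
    # difference quotient (f(x + delta_x) - f(x)) / delta_x at x = 2
    print(delta_x, (f(2 + delta_x) - f(2)) / delta_x)

# prints 5.0, 4.1, 4.01, 4.001 (up to floating point), converging to 4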
In the previous lesson, we calculated the derivative by changing our delta to see the convergence around a number as reflected in the table above. However, mathematicians have derived shortcuts to calculate the derivative. And these shortcuts allow us not just to evaluate the derivative at a single point, as we have done previously, but across any value of $x$ of the function.
The first rule for us to learn is the power rule. The power rule is expressed as the following. Given the following:
$$f(x) = x^r $$
Then, the derivative is: $$ f'(x) = r*x^{r-1} $$
This says that if a variable, $x$, is raised to an exponent $r$, then the derivative of that function is the exponent $r$ multiplied by the variable, with the variable raised to the original exponent minus one.
Let's see this by way of example, with the function, $f(x) = 3*x $. Remember that we originally calculated the derivative with our formula:
$$ f'(x) = \lim_{h\to0} \frac{f(x + h) - f(x)}{h} $$
$$ f'(4) = \lim_{h\to0} \frac{f(4 + h) - f(4)}{h} = 3 $$
$$ f'(8) = \lim_{h\to0} \frac{f(8 + h) - f(8)}{h} = 3 $$
We saw that our rate of change of our linear function $f(x) = 3x $ was always 3. Since the rate of change is constant for linear functions, the derivative was the same across all values of $x$.
Now let's see how this works with our power rule:
$$f(x) = 3*x = 3*x^{1} $$
Now applying our rule that for a function with
$$f(x) = x^r $$
$$ f'(x) = r*x^{r-1} $$
we see that in this case $r = 1$. So applying our power rule we have:
$$f'(x) = r*3*x^{r-1} = 1*3*x^{1-1} = 3*x^{0} = 3 $$
Great! This aligns with what our graph shows, as well as our calculation using the original definition of the derivative, $\lim_{\Delta x\to0} \frac{\Delta y}{\Delta x}$.
Let's apply the power rule with another example to make sure that we understand it.
$$f(x) = x^2 $$
$$f'(x) = 2*x^{2-1} = 2*x^1 = 2*x $$
Think about what our calculation for $f'(x)$ is saying about our function. It says, for our function $f(x) = x^2$, a small change in $x$ produces an increase in $f(x) $ equal to 2 times the $ x $ value. Or, in other words: $$ f'(x) = 2*x $$
We won't prove the power rule here. But hopefully you can see that it does seem to fit our graph of the function $f(x) = x^2$. Let's take a look.
It seems reasonable that the slope of the line tangent to a curve is $2*x$. So our power rule for derivatives looks good.
After learning the power rule, the constant factor is a breeze. The constant factor addresses how to take the derivative of a function multiplied by a constant.
So in the above example, we had the function $f(x) = 3*x$. Now, the derivative of that function is

$$f'(x) = 3 * \frac{d}{dx}(x) $$

Applying the power rule, we know that $ \frac{d}{dx}x^1 = 1*x^{1-1} = 1 $, so we have:

$$f'(x) = 3 * \frac{d}{dx}(x) = 3*1 = 3$$
In the general case, we can say: consider the function $a*f(x)$, where $a$ is a constant (that is, a number and not a variable). Then

$$\frac{d}{dx}(a*f(x)) = a * \frac{d}{dx}f(x) $$
Now, don't let the fancy equations above confuse you. The rule simply says if a variable is multiplied by a constant (i.e. a number), then to take the derivative of that term, apply our familiar power rule to the variable and multiply the variable by that same constant.
So given the function:
$$f(x) = 2x^2 $$
$$f'(x) = 2*\frac{d}{dx} x^{2} = 2*2*x^{2-1} = 4x^1 = 4x $$
That's the constant factor rule in action.
So far, all of our functions consisted of only one term. Remember that a term is a constant or variable that is separated by a plus or minus sign. For example, the function $f(x)$ below has three terms:
$ f(x) = 4x^3 - x^2 + 3x $
To take a derivative of a function that has multiple terms, simply take the derivative of each of the terms individually. So for the function above,
$$ f(x) = 4x^3 - x^2 + 3x $$
$$ f'(x) = 12x^2 - 2x + 3 $$
Do you see what we did there? We simply applied our previous rules to each of the terms individually and continued to add or subtract the terms accordingly.
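If you want to check a derivative like this without doing it by hand, the sympy library can differentiate symbolically. This is just an optional sanity check, not part of the lesson's own tooling:

import sympy as sp

x = sp.symbols('x')
f = 4*x**3 - x**2 + 3*x
print(sp.diff(f, x))   # prints 12*x**2 - 2*x + 3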
Let's take the last few lines of this lesson to practice these derivative rules.
$$f(x) = 3x^5$$
$$g(x) = 10x$$
$$ z(x) = 10 $$
What are the derivatives of these respective functions?
Take some time to think through it.
Even a pen and paper could be in order.
Ok, maybe the pen is too far away...Time for the answers.
$$f(x) = 3x^5$$ $$f'(x) = 15x^4$$
$$g(x) = 10x$$ $$g'(x) = 10$$
$$ z(x) = 10 $$ $$ z(x) = 10 * (x^0) $$ $$ z'(x) = 0*10x^{0-1} = 0 $$
So as you can see, we are just applying our rule:
$$f(x) = x^r $$
$$ f'(x) = r*x^{r-1} $$
And note that whenever we take the derivative of a constant like the number 10, then the derivative of that constant is 0.
Let's evaluate $f'(x)$, $g'(x)$ and $z'(x)$, each at the value where $x = 3$.
Are you able to determine what the derivatives of each of these functions will equal when $x = 3$? We simply substitute $3$ for $x$ wherever we see $x$.
So:
$$f'(3) = 15x^4 = 15*3^4 = 15*81 = 1215 $$
$$g'(3) = 10 = 10 $$
$$z'(3) = 0 = 0 $$
Let's try a couple more derivatives.
$$f(x) = 3x^3 + 8x + 12$$
$$g(x) = 12x^2 + 4x^2 + 2$$
Ok, now for the derivatives.
Let's see it!
$$f(x) = 3x^3 + 8x + 12$$ $$f'(x) = 9x^2 + 8 $$
$$g(x) = 12x^2 + 4x^2 + 2$$ $$g'(x) = 24x + 8x = 32x$$
In this section, we learned a different way for calculating the derivative. The derivative of a function at a given point is still the instantaneous rate of change of that function at that point. Now we have three rules that allow us to calculate our derivative. The most tricky of these is the power rule, which says that if $f(x) = x^r$, then $ f'(x) = r * x^{r-1} $.
Using our derivative rules, we can now calculate the derivative across the entire function. So the derivative of $f(x) = 3x $ is always 3, and the derivative of $f(x) = x^2 $ is $f(x) = 2x $. To evaluate our derivative at a specific value of $x$, we simply plug that value of $x$ into our derivative. When $f'(x) = 2x$, then $f'(2) = 2*2$.
|
Difference between revisions of "Reflecting"
Revision as of 09:50, 10 October 2019
Reflection is a fundamental motivating concern in set theory. The theory of ZFC can be equivalently axiomatized over the very weak Kripke-Platek set theory by the addition of the reflection theorem scheme, below, since instances of the replacement axiom will follow from an instance of $\Delta_0$-separation after reflection down to a $V_\alpha$ containing the range of the defined function. Several philosophers have advanced philosophical justifications of large cardinals based on ideas arising from reflection.
Reflection theorem
The Reflection theorem is one of the most important theorems in Set Theory, being the basis for several large cardinals. The Reflection theorem is in fact a "meta-theorem," a theorem about proving theorems. The Reflection theorem intuitively encapsulates the idea that we can find sets resembling the class $V$ of all sets.
Theorem (Reflection): For every set $M$ and formula $\phi(x_0...x_n,p)$ ($p$ is a parameter) there exists some limit ordinal $\alpha$ with $V_\alpha\supseteq M$ such that $\phi^{V_\alpha}(x_0...x_n,p)\leftrightarrow \phi(x_0...x_n,p)$ (we say $V_\alpha$ reflects $\phi$). Assuming the Axiom of Choice, we can find some countable $M_0\supseteq M$ that reflects $\phi(x_0...x_n,p)$.
Note that, by taking conjunctions, the theorem extends to any finite family of formulas $\phi_0...\phi_n$, since $V_\alpha$ reflects $\phi_0...\phi_n$ if and only if $V_\alpha$ reflects $\phi_0\land...\land\phi_n$. Another important fact is that the truth predicate for $\Sigma_n$ formulas is $\Sigma_{n+1}$, and so we can find a (club class of) ordinals $\alpha$ such that $(V_\alpha,\in)\prec_{{T_{\Sigma_n}}\restriction{V_\alpha}} (V,\in)$, where $T_{\Sigma_n}$ is the truth predicate for $\Sigma_n$ formulas; and so $ZFC\vdash Con(ZFC(\Sigma_n))$ for every $n$, where $ZFC(\Sigma_n)$ is $ZFC$ with Replacement and Separation restricted to $\Sigma_n$ formulas.
Lemma: If $W_\alpha$ is a cumulative hierarchy, there are arbitrarily large limit ordinals $\alpha$ such that $\phi^{W_\alpha}(x_0...x_n,p)\leftrightarrow \phi^W(x_0...x_n,p)$.

Reflection and correctness
For any class $\Gamma$ of formulas, an inaccessible cardinal $\kappa$ is
$\Gamma$-reflecting if and only if $H_\kappa\prec_\Gamma V$, meaning that for any $\varphi\in\Gamma$ and $a\in H_\kappa$ we have $V\models\varphi[a]\iff H_\kappa\models\varphi[a]$. For example, an inaccessible cardinal is $\Sigma_n$-reflecting if and only if $H_\kappa\prec_{\Sigma_n} V$. In the case that $\kappa$ is not necessarily inaccessible, we say that $\kappa$ is $\Gamma$-correct if and only if $H_\kappa\prec_\Gamma V$ . A simple Löwenheim-Skolem argument shows that every infinite cardinal $\kappa$ is $\Sigma_1$-correct. For each natural number $n$, the $\Sigma_n$-correct cardinals form a closed unbounded proper class of cardinals, as a consequence of the reflection theorem. This class is sometimes denoted by $C^{(n)}$ and the $\Sigma_n$-correct cardinals are also sometimes referred to as the $C^{(n)}$-cardinals. Every $\Sigma_2$-correct cardinal is a $\beth$-fixed point and a limit of such $\beth$-fixed points, as well as an $\aleph$-fixed point and a limit of such. Consequently, we may equivalently define for $n\geq 2$ that $\kappa$ is $\Sigma_n$-correct if and only if $V_\kappa\prec_{\Sigma_n} V$.
A cardinal $\kappa$ is
correct, written $V_\kappa\prec V$, if it is $\Sigma_n$-correct for each $n$. This is not expressible by a single assertion in the language of set theory (since if it were, the least such $\kappa$ would have to have a smaller one inside $V_\kappa$ by elementarity). Nevertheless, $V_\kappa\prec V$ is expressible as a scheme in the language of set theory with a parameter (or constant symbol) for $\kappa$.
Although it may be surprising, the existence of a correct cardinal is equiconsistent with ZFC. This can be seen by a simple compactness argument, using the fact that the theory ZFC+"$\kappa$ is correct" is finitely consistent, if ZFC is consistent, precisely by the observation about $\Sigma_n$-correct cardinals above.
$C^{(n)}$ are the classes of $\Sigma_n$-correct ordinals. These classes are clubs (closed unbounded). $C^{(0)}$ is the class of all ordinals. $C^{(1)}$ is precisely the class of all uncountable cardinals $α$ such that $V_α = H(α)$. References to the $C^{(n)}$ classes (different from just the requirement that the cardinal belongs to $C^{(n)}$) can sometimes make large cardinal properties stronger (for example $C^{(n)}$-superstrong, $C^{(n)}$-extendible, $C^{(n)}$-huge and $C^{(n)}$-I3 and $C^{(n)}$-I1 cardinals). On the other hand, every measurable cardinal is $C^{(n)}$-measurable for all $n$ and every ($λ$-)strong cardinal is ($λ$-)$C^{(n)}$-strong for all $n$.[1]
A cardinal $\kappa$ is
reflecting if it is inaccessible and correct. Just as with the notion of correctness, this is not first-order expressible as a single assertion in the language of set theory, but it is expressible as a scheme (the Lévy scheme). The existence of such a cardinal is equiconsistent with the assertion ORD is Mahlo.
If there is a pseudo uplifting cardinal, or indeed, merely a pseudo $0$-uplifting cardinal $\kappa$, then there is a transitive set model of ZFC with a reflecting cardinal and consequently also a transitive model of ZFC plus Ord is Mahlo. You can get this by taking some $\lambda\gt\kappa$ such that $V_\kappa\prec V_\lambda$.
$\Sigma_2$-correct cardinals
The $\Sigma_2$-correct cardinals are a particularly useful and robust class of cardinals, because of the following characterization: $\kappa$ is $\Sigma_2$-correct if and only if for any $x\in V_\kappa$ and any formula $\varphi$ of any complexity, whenever there is an ordinal $\alpha$ such that $V_\alpha\models\varphi[x]$, then there is $\alpha\lt\kappa$ with $V_\alpha\models\varphi[x]$. The reason this is equivalent to $\Sigma_2$-correctness is that assertions of the form $\exists \alpha\ V_\alpha\models\varphi(x)$ have complexity $\Sigma_2(x)$, and conversely all $\Sigma_2(x)$ assertions can be made in that form.
It follows, for example, that if $\kappa$ is $\Sigma_2$-correct, then any feature of $\kappa$ or any larger cardinal than $\kappa$ that can be verified in a large $V_\alpha$ will reflect below $\kappa$. So if $\kappa$ is $\Sigma_2$-reflecting, for example, then there must be unboundedly many inaccessible cardinals below $\kappa$. Similarly, if $\kappa$ is $\Sigma_2$-reflecting and measurable, then there must be unboundedly many measurable cardinals below $\kappa$.
Other facts:
Remarkable cardinals are $Σ_2$-reflecting.[2]
It is relatively consistent that ZFC and the generic Vopěnka scheme hold, yet $Ord$ is not definably Mahlo and not even $∆_2$-Mahlo. In such a model, there can be no $Σ_2$-reflecting cardinals.[3]

The Feferman theory
This is the theory, expressed in the language of set theory augmented with a new unary class predicate symbol $C$, asserting that $C$ is a closed unbounded class of cardinals, and every $\gamma\in C$ has $V_\gamma\prec V$. In other words, the theory consists of the following scheme of assertions: $$\forall\gamma\in C\ \forall x\in V_\gamma\ \bigl[\varphi(x)\iff\varphi^{V_\gamma}(x)\bigr]$$as $\varphi$ ranges over all formulas. Thus, the Feferman theory asserts that the universe $V$ is the union of a chain of elementary substructures $$V_{\gamma_0}\prec V_{\gamma_1}\prec\cdots\prec V_{\gamma_\alpha}\prec\cdots \prec V$$Although this may appear at first to be a rather strong theory, since it seems to imply at the very least that each $V_\gamma$ for $\gamma\in C$ is a model of ZFC, this conclusion would be incorrect. In fact, the theory does
not imply that any $V_\gamma$ is a model of ZFC, and does not prove $\text{Con}(\text{ZFC})$; rather, the theory implies for each axiom of ZFC separately that each $V_\gamma$ for $\gamma\in C$ satisfies it. Since the theory is a scheme, there is no way to prove from that theory that any particular $\gamma\in C$ has $V_\gamma$ satisfying more than finitely many axioms of ZFC. In particular, a simple compactness argument shows that the Feferman theory is consistent provided only that ZFC itself is consistent, since any finite subtheory of the Feferman theory is true by the reflection theorem in any model of ZFC. It follows that the Feferman theory is actually conservative over ZFC: together with ZFC it proves no new facts about sets that are not already provable in ZFC alone.
The Feferman theory was proposed as a natural theory in which to undertake the category-theoretic uses of Grothendieck universes, but without the large cardinal penalty of a proper class of inaccessible cardinals. Indeed, the Feferman theory offers the advantage that the universes are each elementary substructures of one another, which is a feature not generally true under the universe axiom.
Maximality Principle
The existence of an inaccessible reflecting cardinal is equiconsistent with the boldface maximality principle $\text{MP}(\mathbb{R})$, which asserts of any statement $\varphi(r)$ with parameter $r\in\mathbb{R}$ that if $\varphi(r)$ is forceable in such a way that it remains true in all subsequent forcing extensions, then it is already true; in short, $\text{MP}(\mathbb{R})$ asserts that every possibly necessary statement with real parameters is already true. Hamkins showed that if $\kappa$ is an inaccessible reflecting cardinal, then there is a forcing extension with $\text{MP}(\mathbb{R})$, and conversely, whenever $\text{MP}(\mathbb{R})$ holds, then there is an inner model with an inaccessible reflecting cardinal.
References

1. Bagaria, Joan. $C^{(n)}$-cardinals. Archive for Mathematical Logic 51(3–4):213–240, 2012.
2. Wilson, Trevor M. Weakly remarkable cardinals, Erdős cardinals, and the generic Vopěnka principle, 2018. arXiv.
3. Gitman, Victoria and Hamkins, Joel David. A model of the generic Vopěnka principle in which the ordinals are not Mahlo, 2018. arXiv.
4. Bagaria, Joan; Hamkins, Joel David; Tsaprounis, Konstantinos; and Usuba, Toshimichi. Superstrong and other large cardinals are never Laver indestructible. Archive for Mathematical Logic 55(1-2):19–35, 2013. arXiv.
|
Difference between revisions of "ORD is Mahlo"
Revision as of 09:59, 10 October 2019
The assertion $\text{Ord}$ is Mahlo is the scheme expressing that the proper class REG consisting of all regular cardinals is a stationary proper class, meaning that it has elements from every definable (with parameters) closed unbounded proper class of ordinals. In other words, the scheme asserts for every formula $\varphi$, that if for some parameter $z$ the class $\{\alpha\mid \varphi(\alpha,z)\}$ is a closed unbounded class of ordinals, then it contains a regular cardinal.

If $\kappa$ is Mahlo, then $V_\kappa\models\text{Ord is Mahlo}$. Consequently, the existence of a Mahlo cardinal implies the consistency of $\text{Ord}$ is Mahlo, and the two notions are not equivalent. Moreover, since the ORD is Mahlo scheme is expressible as a first-order theory, it follows that whenever $V_\gamma\prec V_\kappa$, then also $V_\gamma$ satisfies the Levy scheme. Consequently, if there is a Mahlo cardinal, then there is a club of cardinals $\gamma\lt\kappa$ for which $V_\gamma\models\text{Ord is Mahlo}$.
A simple compactness argument establishes that $\text{Ord}$ is Mahlo is equiconsistent over $\text{ZFC}$ with the existence of an inaccessible reflecting cardinal. On the one hand, if $\kappa$ is an inaccessible reflecting cardinal, then since $V_\kappa\prec V$ it follows that any class club definable in $V$ with parameters below $\kappa$ will be unbounded in $\kappa$ and hence contain $\kappa$ as an element and consequently contain an inaccessible cardinal. On the other hand, if $\text{Ord}$ is Mahlo is consistent, then every finite fragment of the theory asserting that $\kappa$ is an inaccessible reflecting cardinal (which is after all asserted as a scheme) is consistent, and hence by compactness the whole theory is consistent.
If there is a pseudo uplifting cardinal (proof in that article), or indeed merely a pseudo $0$-uplifting cardinal, then there is a transitive set model of ZFC with a reflecting cardinal and consequently also a transitive model of ZFC plus $\text{Ord}$ is Mahlo.[1]
The Vopěnka principle implies that $\text{Ord}$ is Mahlo: every club class contains a regular cardinal and indeed, an extendible cardinal and more. It is relatively consistent that GBC and the generic Vopěnka principle hold, yet $\text{Ord}$ is not Mahlo. It is relatively consistent that ZFC and the generic Vopěnka scheme hold, yet $\text{Ord}$ is not definably Mahlo and not even $\Delta_2$-Mahlo. In such a model, there can be no $\Sigma_2$-reflecting cardinals and therefore also no remarkable cardinals.
References

1. Hamkins, Joel David and Johnstone, Thomas A. Resurrection axioms and uplifting cardinals. 2014.
2. Gitman, Victoria and Hamkins, Joel David. A model of the generic Vopěnka principle in which the ordinals are not Mahlo. 2018.
|
@Secret et al hows this for a video game? OE Cake! fluid dynamics simulator! have been looking for something like this for yrs! just discovered it wanna try it out! anyone heard of it? anyone else wanna do some serious research on it? think it could be used to experiment with solitons=D
OE-Cake, OE-CAKE! or OE Cake is a 2D fluid physics sandbox which was used to demonstrate the Octave Engine fluid physics simulator created by Prometech Software Inc.. It was one of the first engines with the ability to realistically process water and other materials in real-time. In the program, which acts as a physics-based paint program, users can insert objects and see them interact under the laws of physics. It has advanced fluid simulation, and support for gases, rigid objects, elastic reactions, friction, weight, pressure, textured particles, copy-and-paste, transparency, foreground a...
@NeuroFuzzy awesome what have you done with it? how long have you been using it?
it definitely could support solitons easily (because all you really need is to have some time dependence and discretized diffusion, right?) but I don't know if it's possible in either OE-cake or that dust game
As far as I recall, being a long-term Powder Game player myself, Powder Game does not really have a diffusion-like algorithm written into it. The liquids in Powder Game are sort of dots that move back and forth and are subjected to gravity
@Secret I mean more along the lines of the fluid dynamics in that kind of game
@Secret Like how in the dan-ball one air pressure looks continuous (I assume)
@Secret You really just need a timer for particle extinction, and something that affects adjacent cells. Like maybe a rule for a particle that says: particles of type A turn into type B after 10 steps, particles of type B turn into type A if they are adjacent to type A.
I would bet you get lots of cool reaction-diffusion-like patterns with that rule.
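Here is a rough grid-based Python sketch of that A/B rule; the grid encoding, sizes and step count are all made up for illustration (it's not OE-Cake or the dan-ball engine, just the stated rule):

import numpy as np

# 0 = empty, 1 = type A, 2 = type B (made-up encoding for illustration)
rng = np.random.default_rng(42)
grid = rng.choice([0, 1], size=(64, 64), p=[0.9, 0.1])
age = np.zeros_like(grid)            # steps each cell has spent as type A

for step in range(200):
    age = np.where(grid == 1, age + 1, 0)
    # count type-A neighbours (von Neumann neighbourhood, wrap-around edges)
    a = (grid == 1).astype(int)
    a_nb = (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1))
    new_grid = grid.copy()
    new_grid[(grid == 1) & (age >= 10)] = 2    # A turns into B after 10 steps
    new_grid[(grid == 2) & (a_nb > 0)] = 1     # B turns back into A next to an A
    grid = new_grid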
(Those that don't understand cricket, please ignore this context, I will get to the physics...)England are playing Pakistan at Lords and a decision has once again been overturned based on evidence from the 'snickometer'. (see over 1.4 ) It's always bothered me slightly that there seems to be a ...
Abstract: Analyzing the data from the last replace-the-homework-policy question was inconclusive. So back to the drawing board, or really back to this question: what do we really mean when we vote to close questions as homework-like?As some/many/most people are aware, we are in the midst of a...
Hi I am trying to understand the concept of dex and how to use it in calculations. The usual definition is that it is the order of magnitude, so $10^{0.1}$ is $0.1$ dex.I want to do a simple exercise of calculating the value of the RHS of Eqn 4 in this paper arxiv paper, the gammas are incompl...
@ACuriousMind Guten Tag! :-) Dark Sun has also a lot of frightening characters. For example, Borys, the 30th level dragon. Or different stages of the defiler/psionicist 20/20 -> dragon 30 transformation. It is only a tip, if you start to think on your next avatar :-)
What is the maximum distance for eavesdropping pure sound waves?And what kind of device i need to use for eavesdropping?Actually a microphone with a parabolic reflector or laser reflected listening devices available on the market but is there any other devices on the planet which should allow ...
and endless whiteboards get doodled with boxes, grids circled in red marker, and some scribbles
The documentary then showed a bird's-eye view of the farmlands
(which pardon my sketchy drawing skills...)
Most of the farmland is tiled into grids
Here there are two distinct columns and rows of tiled farmlands to the left and top of the main grid. They are the index arrays and they notate the range of indices of the tensor array
In some tiles, there's a swirling dirt mound; these represent components with nonzero curl
and in others grass grew
Two blue steel bars were visible laying across the grid, holding up a triangle pool of water
Next in an interview, they mentioned that experimentally the process is quite simple. The tall guy is seen using a large crowbar to pry away a screw that held a road sign under a skyway, i.e.
Occasionally, mishaps can happen, such as too much force being applied and the sign snapping in the middle. The boys are then forced to take the broken sign to the nearest roadworks workshop to mend it
At the end of the documentary, near a university lodge area
I walked towards the boys and expressed interest in joining their project. They then said that you will be spending quite a bit of time on the theoretical side and doodling on whiteboards. They also asked about my recent trip to London and Belgium. Dream ends
Reality check: I have been to London, but not Belgium
Idea extraction: The tensor array mentioned in the dream is a multiindex object where each component can be tensors of different order
Presumably one can formulate it (using an example of a 4th order tensor) as follows:
$$A^{\alpha}{}_{\beta\,\gamma\delta\epsilon}$$
and then allow the index $\alpha,\beta$ to run from 0 to the size of the matrix representation of the whole array
while the indices $\gamma,\delta,\epsilon$ are taken from a subset of the values that the $\alpha,\beta$ indices range over. For example, to encode a patch of nonzero-curl vector field in this object, one might set $\gamma$ to be from the set $\{4,9\}$ and $\delta$ to be $\{2,3\}$
However, even if the indices are taken to have certain values only, it is unclear whether this is of any use, since most tensor expressions have indices taken from a set of consecutive numbers rather than arbitrary integers
@DavidZ in the recent meta post about the homework policy there is the following statement:
> We want to make it sure because people want those questions closed. Evidence: people are closing them. If people are closing questions that have no valid reason for closure, we have bigger problems.
This is an interesting statement.
I wonder to what extent not having a homework close reason would simply force would-be close-voters to either edit the post, down-vote, or think more carefully whether there is another more specific reason for closure, e.g. "unclear what you're asking".
I'm not saying I think simply dropping the homework close reason and doing nothing else is a good idea.
I did suggest that previously in chat, and as I recall there were good objections (which are echoed in @ACuriousMind's meta answer's comments).
@DanielSank Mostly in a (probably vain) attempt to get @peterh to recognize that it's not a particularly helpful topic.
@peterh That said, he used to be fairly active on physicsoverflow, so if you really pine for the opportunity to communicate with him, you can go on ahead there. But seriously, bringing it up, particularly in that way, is not all that constructive.
@DanielSank No, the site mods could have caged him only in the PSE, and only for a year. That he got. After that his cage was extended to a 10 year long network-wide one, it couldn't be the result of the site mods. Only the CMs can do this, typically for network-wide bad deeds.
@EmilioPisanty Yes, but I had liked to talk to him here.
@DanielSank I am only curious what he did. Maybe he attacked the whole network? Or he took a site-level conflict into the IRL world? As far as I know, network-wide bans happen for such things.
@peterh That is pure fear-mongering. Unless you plan on going on extended campaigns to get yourself suspended, in which case I wish you speedy luck.
Seriously, suspensions are never handed out without warning, and you will not be ten-year-banned out of the blue. Ron had very clear choices and a very clear picture of the consequences of his choices, and he made his decision. There is nothing more to see here, and bringing it up again (and particularly in such a dewy-eyed manner) is far from helpful.
@EmilioPisanty Although it is already not about Ron Maimon, I can't see the meaning of "campaign" here as being well-defined enough. And yes, it is a bit of a source of fear for me that maybe my behavior could also be read as if "I were campaigning for my caging".
|
The Topic Model is a type of statistical model for finding the topics that occur in a collection of documents. It is a frequently used text-mining tool for the discovery of hidden semantic structures in a text body. For example, imagine we have some articles or a series of social media messages and we want to understand what is going on inside them. A common way to approach this problem is via an unsupervised machine learning model. From a high-level perspective, given a bunch of documents, and much like in K-Means clustering, we want to find the K topics that best describe our corpus of text.

From the figure above, we have three different topics: technology (yellow), business (orange), and arts (blue). In addition to those topics, we also have an association of documents to topics. In fact, a document can be entirely a technology topic (i.e. red light, green light, 2-tone led, simplify screen), but a document can also be a mixture of two or more topics (see the grey text in the figure below).

A Topic Model can be seen as a Matrix Factorization Problem, where K is the number of topics, M is the number of documents, and V is the size of the vocabulary. The K x V matrix corresponds to the distribution of the words for each of the topics, and to find this, Latent Semantic Analysis is widely used.
\[ P(\boldsymbol{p} | \alpha \boldsymbol{m})=\frac{\Gamma\left(\sum_{k} \alpha m_{k}\right)}{\prod_{k} \Gamma\left(\alpha m_{k}\right)} \prod_{k} p_{k}^{\alpha m_{k}-1} \]
Here above, we have the Dirichlet distribution equation, where alpha acts as a variance (concentration) parameter and m is the mean.

As described in the figure above, when we have a uniform mean (1/3 per topic) and a variance (alpha) of three, multiplying them together gives Dirichlet parameters equal to 1, and so each topic is equally likely. But as the variance gets larger and larger, the distribution concentrates around the mean (the dark circle in the middle of the triangle).

There are other ways to parametrize the Dirichlet distribution. For example, we can place the mean in a different location (see the left and center figures below). We can also make the variance parameter alpha smaller than 1. In this case we push the probability mass towards the edges of the simplex (see the right figure below). With alpha < 1 we have a preference for multinomial distributions that are far away from the center. Being far away from the center means we do not have one precise topic to assign to our word; it is similar to how people write documents, where many things sit inside a concept. In other words, the Dirichlet distribution gives us a distribution over all the places where the multinomial distribution can land.
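As a small illustration of the role of the parameters, one can sample the Dirichlet distribution directly; a NumPy sketch (the particular alpha values are arbitrary):

import numpy as np

m = np.array([1/3, 1/3, 1/3])                  # uniform mean over 3 topics
for alpha in (0.5, 3.0, 50.0):                 # "variance"/concentration parameter
    samples = np.random.dirichlet(alpha * m, size=5)
    print(alpha, samples.round(2))
# alpha < 1 pushes samples towards the corners/edges of the simplex,
# large alpha concentrates them around the mean.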
The Dirichlet distribution can be used to isolate which document is about a specific topic. For each topic $k$, we have a multinomial distribution $\beta_k$, called the topic distribution, drawn from a Dirichlet distribution with parameter $\lambda$. The next step is to draw a multinomial distribution over topics, represented by $\theta_d$. Once we have it, we can draw for each word the so-called topic assignment, represented in the figure below by $z_n$. Up to this point, we don't know what the word is. We have to look at the topic distribution $\beta_k$ in order to generate the word, which comes from that multinomial distribution.

The graph above is a representation of the probabilistic model, also called plate notation. As we can see, we have in the plate notation K topics in M documents, and N words in each document. Crucially, the only thing that we observe are the words, and our task is to figure out what topic to assign to each $z_n$.

Ideally, once we have the collection of words per topic, if the topic is interpretable, people will consistently pick the true intruder, i.e. identify the word that doesn't belong. To learn more about LDA please check out this link.
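As an illustration, here is a minimal scikit-learn sketch of fitting such a topic model; the toy corpus, number of topics and prior values are made up for demonstration:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "green light led screen simplify display",      # "technology"-flavoured
    "stocks market business profit trade",          # "business"-flavoured
    "gallery painting arts museum exhibition",      # "arts"-flavoured
    "led display stocks trade",                     # a mixture of two topics
]
counts = CountVectorizer().fit_transform(docs)      # M x V document-word matrix
lda = LatentDirichletAllocation(n_components=3,         # K topics
                                doc_topic_prior=0.1,    # the alpha above
                                topic_word_prior=0.01,  # the lambda above
                                random_state=0)
doc_topics = lda.fit_transform(counts)              # M x K document-topic mixtures
topic_words = lda.components_                       # K x V topic-word weights
print(doc_topics.round(2))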
|
Extendible
A cardinal $\kappa$ is $\eta$-extendible for an ordinal $\eta$ if and only if there is an elementary embedding $j:V_{\kappa+\eta}\to V_\theta$, with critical point $\kappa$, for some ordinal $\theta$. The cardinal $\kappa$ is extendible if and only if it is $\eta$-extendible for every ordinal $\eta$. Equivalently, for every ordinal $\alpha$ there is a nontrivial elementary embedding $j:V_{\kappa+\alpha+1}\to V_{j(\kappa)+j(\alpha)+1}$ with critical point $\kappa$.

Alternative definition
Given cardinals $\lambda$ and $\theta$, a cardinal $\kappa\leq\lambda,\theta$ is jointly $\lambda$-supercompact and $\theta$-superstrong if there exists a nontrivial elementary embedding $j:V\to M$ for some transitive class $M$ such that $\mathrm{crit}(j)=\kappa$, $\lambda<j(\kappa)$, $M^\lambda\subseteq M$ and $V_{j(\theta)}\subseteq M$. That is, a single embedding witnesses both $\lambda$-supercompactness and (a strengthening of) superstrongness of $\kappa$. The least supercompact is never jointly $\lambda$-supercompact and $\theta$-superstrong for any $\lambda,\theta\geq\kappa$.
A cardinal is extendible if and only if it is jointly supercompact and $\kappa$-superstrong, i.e. for every $\lambda\geq\kappa$ it is jointly $\lambda$-supercompact and $\kappa$-superstrong. [1] One can show that extendibility of $\kappa$ is in fact equivalent to "for all $\lambda$,$\theta\geq\kappa$, $\kappa$ is jointly $\lambda$-supercompact and $\theta$-superstrong". A similar characterization of $C^{(n)}$-extendible cardinals exists.
The ultrahuge cardinals are defined in a way very similar to this, and one can (very informally) say that "ultrahuge cardinals are to superhuges what extendibles are to supercompacts". These cardinals are superhuge (and stationary limits of superhuges) and strictly below almost 2-huges in consistency strength.
To be expanded: Extendibility Laver functions.

Relation to Other Large Cardinals
Extendible cardinals are related to various kinds of measurable cardinals.
Supercompactness
Extendibility is connected in strength with supercompactness. Every extendible cardinal is supercompact, since from the embeddings $j:V_\lambda\to V_\theta$ we may extract the induced supercompactness measures $X\in\mu\iff j''\delta\in j(X)$ for $X\subset \mathcal{P}_\kappa(\delta)$, provided that $j(\kappa)\gt\delta$ and $\mathcal{P}_\kappa(\delta)\subset V_\lambda$, which one can arrange. On the other hand, if $\kappa$ is $\theta$-supercompact, witnessed by $j:V\to M$, then $\kappa$ is $\delta$-extendible inside $M$, provided $\beth_\delta\leq\theta$, since the restricted elementary embedding $j\upharpoonright V_\delta:V_\delta\to j(V_\delta)=M_{j(\delta)}$ has size at most $\theta$ and is therefore in $M$, witnessing $\delta$-extendibility there.
Although extendibility itself is stronger and larger than supercompactness, $\eta$-supercompactness is not necessarily too much weaker than $\eta$-extendibility. For example, if a cardinal $\kappa$ is $\beth_{\eta}(\kappa)$-supercompact (in this case, the same as $\beth_{\kappa+\eta}$-supercompact) for some $\eta<\kappa$, then there is a normal measure $U$ over $\kappa$ such that $\{\lambda<\kappa:\lambda\text{ is }\eta\text{-extendible}\}\in U$.
Strong Compactness
Interestingly, extendibility is also related to strong compactness. A cardinal $\kappa$ is strongly compact iff the infinitary language $\mathcal{L}_{\kappa,\kappa}$ has the $\kappa$-compactness property. A cardinal $\kappa$ is extendible iff the infinitary language $\mathcal{L}_{\kappa,\kappa}^n$ (the infinitary language but with $(n+1)$-th order logic) has the $\kappa$-compactness property for every natural number $n$. [2]
Given a logic $\mathcal{L}$, the minimum cardinal $\kappa$ such that $\mathcal{L}$ satisfies the $\kappa$-compactness theorem is called the strong compactness cardinal of $\mathcal{L}$. The strong compactness cardinal of $\omega$-th order finitary logic (that is, the union of all $\mathcal{L}_{\omega,\omega}^n$ for natural $n$) is the least extendible cardinal.

Variants

$C^{(n)}$-extendible cardinals
(Information in this subsection from [3] unless noted otherwise)
A cardinal $κ$ is called $C^{(n)}$-extendible if for all $λ > κ$ it is $λ$-$C^{(n)}$-extendible, i.e. if there is an ordinal $µ$ and an elementary embedding $j : V_λ → V_µ$, with $\mathrm{crit(j)} = κ$, $j(κ) > λ$ and $j(κ) ∈ C^{(n)}$.
For $λ ∈ C^{(n)}$, a cardinal $κ$ is $λ$-$C^{(n)+}$-extendible iff it is $λ$-$C^{(n)}$-extendible, witnessed by some $j : V_λ → V_µ$ which (besides $j(κ) > λ$ and $j(κ) ∈ C(n)$) satisfies that $µ ∈ C^{(n)}$.
$κ$ is $C^{(n)+}$-extendible iff it is $λ$-$C^{(n)+}$-extendible for every $λ > κ$ such that $λ ∈ C^{(n)}$.
Properties:
* The notions of $C^{(n)}$-extendible cardinals and $C^{(n)+}$-extendible cardinals are equivalent.[4]
* Every extendible cardinal is $C^{(1)}$-extendible.
* If $κ$ is $C^{(n)}$-extendible, then $κ ∈ C^{(n+2)}$.
* For every $n ≥ 1$, if $κ$ is $C^{(n)}$-extendible and $κ+1$-$C^{(n+1)}$-extendible, then the set of $C^{(n)}$-extendible cardinals is unbounded below $κ$. Hence, the first $C^{(n)}$-extendible cardinal $κ$, if it exists, is not $κ+1$-$C^{(n+1)}$-extendible. In particular, the first extendible cardinal $κ$ is not $κ+1$-$C^{(2)}$-extendible.
* For every $n$, if there exists a $C^{(n+2)}$-extendible cardinal, then there exists a proper class of $C^{(n)}$-extendible cardinals.
* The existence of a $C^{(n+1)}$-extendible cardinal $κ$ (for $n ≥ 1$) does not imply the existence of a $C^{(n)}$-extendible cardinal greater than $κ$. For if $λ$ is such a cardinal, then $V_λ \models$ “$κ$ is $C^{(n+1)}$-extendible”.
* If $κ$ is $κ+1$-$C^{(n)}$-extendible and belongs to $C^{(n)}$, then $κ$ is $C^{(n)}$-superstrong and there is a $κ$-complete normal ultrafilter $U$ over $κ$ such that the set of $C^{(n)}$-superstrong cardinals smaller than $κ$ belongs to $U$.
* For $n ≥ 1$, the following are equivalent ($VP$ — Vopěnka's principle):
** $VP(Π_{n+1})$
** $VP(κ, \mathbf{Σ_{n+2}})$ for some $κ$
** There exists a $C^{(n)}$-extendible cardinal.
* “For every $n$ there exists a $C^{(n)}$-extendible cardinal” is equivalent to the full Vopěnka's principle.
* Assuming $\mathrm{I3}(κ, δ)$, if $δ$ is a limit cardinal (instead of a successor of a limit cardinal – Kunen's Theorem excludes other cases), it is equal to $\sup\{j^m(κ) : m ∈ ω\}$ where $j$ is the elementary embedding. Then $κ$ and $j^m(κ)$ are $C^{(n)}$-extendible (inter alia) in $V_δ$, for all $n$ and $m$.

$(\Sigma_n,\eta)$-extendible cardinals
There are some variants of extendible cardinals because of the interesting jump in consistency strength from $0$-extendible cardinals to $1$-extendibles. These variants specify the elementarity of the embedding.
A cardinal $\kappa$ is $(\Sigma_n,\eta)$-extendible, if there is a $\Sigma_n$-elementary embedding $j:V_{\kappa+\eta}\to V_\theta$ with critical point $\kappa$, for some ordinal $\theta$. These cardinals were introduced by Bagaria, Hamkins, Tsaprounis and Usuba [5].
$\Sigma_n$-extendible cardinals
The special case of $\eta=0$ leads to a much weaker notion. Specifically, a cardinal $\kappa$ is $\Sigma_n$-extendible if it is $(\Sigma_n,0)$-extendible, or more simply, if $V_\kappa\prec_{\Sigma_n} V_\theta$ for some ordinal $\theta$. Note that this does not necessarily imply that $\kappa$ is inaccessible, and indeed the existence of $\Sigma_n$-extendible cardinals is provable in ZFC via the reflection theorem. For example, every $\Sigma_n$-correct cardinal is $\Sigma_n$-extendible, since from $V_\kappa\prec_{\Sigma_n} V$ and $V_\lambda\prec_{\Sigma_n} V$, where $\kappa\lt\lambda$, it follows that $V_\kappa\prec_{\Sigma_n} V_\lambda$. So in fact there is a closed unbounded class of $\Sigma_n$-extendible cardinals.
Similarly, every Mahlo cardinal $\kappa$ has a stationary set of inaccessible $\Sigma_n$-extendible cardinals $\gamma<\kappa$.
$\Sigma_3$-extendible cardinals cannot be Laver indestructible. Therefore $\Sigma_3$-correct, $\Sigma_3$-reflecting, $0$-extendible, (pseudo-)uplifting, weakly superstrong, strongly uplifting, superstrong, extendible, (almost) huge or rank-into-rank cardinals also cannot.[5]
$A$-extendible cardinals
(this subsection from [6])
Definitions:
A cardinal $κ$ is $A$-extendible, for a class $A$, iff for every ordinal $λ > κ$ there is an ordinal $θ$ such that there is an elementary embedding $j : \langle V_λ , ∈, A ∩ V_λ \rangle → \langle V_θ , ∈, A ∩ V_θ \rangle$ with critical point $κ$ (such that $λ < j(κ)$ — removing this does not change, what cardinals are extendible). $λ$ is called the degree of $A$-extendibility of an embedding. A cardinal $κ$ is $(Σ_n)$-extendible, iff it is $A$-extendible, where $A$ is the $Σ_n$-truth predicate. (This is a different notion than $\Sigma_n$-extendible cardinals.)[4]
Results:
The Vopěnka principle is equivalent over GBC to both of the following statements:
* For every class $A$, there is an $A$-extendible cardinal.
* For every class $A$, there is a stationary proper class of $A$-extendible cardinals.
......

Virtually extendible cardinals
Definitions:
* A cardinal $κ$ is (weakly? strongly? ......) virtually extendible iff for every $α > κ$, in a set-forcing extension there is an elementary embedding $j : V_α → V_β$ with $\mathrm{crit(j)} = κ$ and $j(κ) > α$.
* A cardinal $κ$ is (weakly) virtually $A$-extendible, for a class $A$, iff for every ordinal $λ > κ$ there is an ordinal $θ$ such that in a set-forcing extension, there is an elementary embedding $j : \langle V_λ , ∈, A ∩ V_λ \rangle → \langle V_θ , ∈, A ∩ V_θ \rangle$ with critical point $κ$. For (strongly) virtually $A$-extendible $κ$, we require additionally $λ < j(κ)$.[4]
* A cardinal $κ$ is $n$-remarkable, for $n > 0$, iff for every $η > κ$ in $C^{(n)}$, there is $α<κ$ also in $C^{(n)}$ such that in $V^{Coll(ω, < κ)}$, there is an elementary embedding $j : V_α → V_η$ with $j(\mathrm{crit}(j)) = κ$. A cardinal is completely remarkable iff it is $n$-remarkable for all $n > 0$.[8]
* A cardinal $κ$ is weakly or strongly virtually $(Σ_n)$-extendible, iff it is respectively weakly or strongly virtually $A$-extendible, where $A$ is the $Σ_n$-truth predicate.[4]
Equivalence and hierarchy:
* $1$-remarkability is equivalent to remarkability.
* A cardinal is virtually $C^{(n)}$-extendible iff it is $n+1$-remarkable (virtually extendible cardinals are virtually $C^{(1)}$-extendible).[8]
* Weakly and strongly $A$-extendible cardinals are non-equivalent, although in the non-virtual context, the weak and strong forms of $A$-extendibility coincide.[4]
* It is relatively consistent with GBC that every class $A$ admits a (weakly) virtually $A$-extendible cardinal (and so the generic Vopěnka principle holds), but no class $A$ admits a (strongly) virtually $A$-extendible cardinal.[4]
* Every $n$-remarkable cardinal is in $C^{(n+1)}$.[8]
* Every $n+1$-remarkable cardinal is a limit of $n$-remarkable cardinals.[8]
Upper limits for strength:
* If $κ$ is virtually Shelah for supercompactness or 2-iterable, then $V_κ$ is a model of proper class many virtually $C^{(n)}$-extendible cardinals for every $n < ω$.[7]
* If $κ$ is virtually huge*, then $V_κ$ is a model of proper class many virtually extendible cardinals.[7]
* Completely remarkable cardinals can exist in $L$.[8]
* For a $2$-iterable cardinal $κ$, $V_κ$ is a model of proper class many completely remarkable cardinals.[8]
* If $0^\#$ exists, then every Silver indiscernible is in $L$ completely remarkable and virtually $A$-extendible for every definable class $A$.[4, 8]
Lower limit for strength:
* Virtually extendible cardinals are remarkable limits of remarkable cardinals and 1-iterable limits of 1-iterable cardinals.[7]
* The following are equiconsistent:
** $gVP(Π_n)$
** $gVP(κ, \mathbf{Σ_{n+1}})$ for some $κ$
** There is an $n$-remarkable cardinal.
* The following are equiconsistent:
** $gVP(\mathbf{Π_n})$
** $gVP(κ, \mathbf{Σ_{n+1}})$ for a proper class of $κ$
** There is a proper class of $n$-remarkable cardinals.
* $κ$ is the least cardinal for which $gVP^∗(κ, \mathbf{Σ_{n+1}})$ holds $\iff$ $κ$ is the least $n$-remarkable cardinal.
* If $gVP^∗(Π_n)$ holds, then there is an $n$-remarkable cardinal.
* If $gVP^∗(\mathbf{Π_n})$ holds, then there is a proper class of $n$-remarkable cardinals.
* If there is a proper class of $n$-remarkable cardinals, then $gVP(Σ_{n+1})$ holds.[4]
* If $gVP(Σ_{n+1})$ holds, then either there is a proper class of $n$-remarkable cardinals or there is a proper class of virtually rank-into-rank cardinals.[4]
* The generic Vopěnka scheme is equivalent over ZFC to the scheme asserting of every definable class $A$ that there is a proper class of weakly virtually $A$-extendible cardinals.[4]
* Open problems: Must there be an $n$-remarkable cardinal
** if $gVP(κ, \mathbf{Σ_{n+1}})$ holds for some $κ$?
** if $gVP(Π_n)$ holds?
......
In set-theoretic geology

This article is a stub. Please help us to improve Cantor's Attic by adding information.
References

1. Usuba, Toshimichi. Extendible cardinals and the mantle. Archive for Mathematical Logic 58(1-2):71-75, 2019.
2. Kanamori, Akihiro. The higher infinite. Large cardinals in set theory from their beginnings. Second edition, Springer-Verlag, Berlin, 2009 (paperback reprint of the 2003 edition).
3. Bagaria, Joan. $C^{(n)}$-cardinals. Archive for Mathematical Logic 51(3-4):213-240, 2012.
4. Gitman, Victoria and Hamkins, Joel David. A model of the generic Vopěnka principle in which the ordinals are not Mahlo. 2018.
5. Bagaria, Joan, Hamkins, Joel David, Tsaprounis, Konstantinos and Usuba, Toshimichi. Superstrong and other large cardinals are never Laver indestructible. Archive for Mathematical Logic 55(1-2):19-35, 2013.
6. Hamkins, Joel David. The Vopěnka principle is inequivalent to but conservative over the Vopěnka scheme. 2016.
7. Gitman, Victoria and Schindler, Ralf. Virtual large cardinals.
8. Bagaria, Joan, Gitman, Victoria and Schindler, Ralf. Generic Vopěnka's Principle, remarkable cardinals, and the weak Proper Forcing Axiom. Archive for Mathematical Logic 56(1-2):1-20, 2017.
|
Ľuboš Elexa, Ľubica Lesáková, Vladimíra Klementová and Ladislav Klement
growth of the region. In developed EU countries, current trends in the planning of regional development policy are based on supporting the creation and cluster building. Trends represent a significant shift from the traditional approach, such as the creation of regional development programmes aimed at promoting the development of individual enterprises, to regional policies based on cluster support. A cluster-based policy understands businesses and the industry as a system. It focuses on developing strategies designed to promote an efficient allocation of scarce
electrodes are near-to constant because of the high resistance to current of the stratum corneum in the considered frequency range [3]. This allows us to rewrite the boundary conditions, Eqs. 5-7, between the probe and the uppermost skin layer n, stratum corneum, as (we drop the subindex `eff' for notational convenience in the analysis)
$$-\sigma_{n}\frac{\partial\Phi(r,\mathcal{H}_{n})}{\partial z}=\sum_{j=1}^{m}\frac{I_{j}}{A_{j}}\left[U(R_{2j-1}-r)-U(R_{2j-2}-r)\right],$$
Warit Wipulanusat, Kriengsak Panuwatwanich, Rodney A. Stewart and Jirapon Sunkpho
government and society.Moore and Hartley (2008) contend that there are four interdependent attributes differentiating the characteristics of public sector innovations from the private sector. Public sector innovations go beyond organisational frontiers to generate network-based and financial decision-making and production systems; tap new pools of resources; exploit the government’s capacity to shape private rights and responsibilities; and redistribute the right to define and judge value. These aspects should be approached in terms of the degree to which they promote
Juraj Belan, Lenka Kuchariková, Magdalena Mazur, Eva Tillová and Mária Chalupová
(online 5.4.2018 https://marketrealist.com/2015/12/auto-industrys-aluminum-usage-increasing )Kracke A. 2010. Superalloys, the most successful alloy system of modern times-past, present and future . 7 th international symposium on Superalloy 718 and derivatives. 13-50.Nový F., Bokůvka O., Trško L., Chalupová M. 2012. Ultra-high cycle fatigue of materials . International Journal of engineering, Vol. X 2, 231 - 234.Palček P., Chalupová M., Nicoletto G., Bokůvka O. 2003. Prediction of machine element durability , Education Aid for multimedia
Business Intelligence. Proceedings of the 16th International Conference on Enterprise Information Systems , 456-463.Tsai, S.B., Huang, C.Y., Wang, C.K., Chen Q., et al. 2016. Using a Mixed Model to Evaluate Job Satisfaction in High-Tech Industries. Plos One, 11(5): e0154071.Tsai, S-B, Zhou, J, Gao, Y, Wan,g J, Li G, Zheng, Y, et al. (2017) Combining FMEA with DEMATEL models to solve production process problems. Plos One 12(8): e0183634. https://doi.org/10.1371/journal.pone.0183634Tureková, I. 2016. Evaluation of human reliability by methods of
, stress-reactivity, personality and coping style. Sleep Med Rev. 18(3) , 237-47.Kellmann, M. (2010). Preventing overtraining in athletes in high-intensity sports and stress/recovery monitoring. Scand J Med Sci Sports . 20, 95-102.Knutson, K. L. (2005). The association between pubertal status and sleep duration and quality among a nationally representative sample of US adolescents. American Journal of Human Biology, 17, 418-24.Lang, C., Kalak, N., & Brand, S. (2016). The relationship between physical activity and sleep from mid adolescence to
Zahra Kahvandi, Ehsan Saghatforoush, Mohammad Mahoud and Christopher Preece
Integrated Project Delivery (IPD) In the Construction Industry. IGCESH 2013 .Korkmaz, S., Swarup, L., and Riley, D. (2013). Delivering Sustainable, High-Performance Buildings: Influence of Project Delivery Methods on Integration and Project Outcomes. Journal of Management in Engineering , 29, 71-78.Li, Y. and Taylor, T. R. (2011). The Impact of Design Rework on Construction Project Performance. The 29th International Conference of the System Dynamics Society, Washington, D.C., 25-35.Manning, R. T. (2012). Challenges, Benefits, and Risks Associated
Simona Jursova, Stanislav Honus, Pavlina Pustejovska and Rafal Prusak
copyright office.Jha, G., Soren, S., 2017. Study on Applicability of Biomass in Iron Ore Sintering Process . Renewable and Sustainable Energy Reviews, 80 (2017), 399–407. DOI: 10.1016/j.rser.2017.05.246.Jursova, S., Pustejovska, P., Bilik, J., Honus, S., 2017. Evaluation of Reducibility of High and Low Basic Sinter in Economical Point of View . Ostrava, Tanger, 2176-2181.Kardas, E., 2016. The Assessment of Selected Elements of Quality Management System in the Metallurgical Company . Ostrava, Tanger, 1851-1856.Takazo, K., Hara, M., 2013
). Framing of project critical success factors by systems model. International journal of project management , 24 (1), 53-65.Flyvbjerg, B. (2013). Quality control and due diligence in project management: getting decisions right by taking the outside view. International journal of project management , 31 (5), 760-774.Guangshe, J., Li, C., Jiangou, C., Shuisen, Z., and Jin, W. (2008). Application of organizational project management maturity model (OPM3) to construction in China: an empirical study. International conference on information management
learning. Safety Science, 60, 196-202. https://doi.org/10.1016/j.ssci.2013.07.019Haggström, E., Mamhidir, A. G., & Kihlgren, A. (2010). Caregivers’ strong commitment to their relationship with older people. International Journal of Nursing Practice, 16(2), 99-105. https://doi.org/10.1111/j.1440-172X.2010.01818.xHaight, J. M., Yokiro, P., Rost, K. M., & Willmer, D. R. (2014). Safety Management Systems Comparing Contents and Impact. Safety Management, May, 44-51.Hamdan, M. (2013) Measuring safety culture in Palestinian
|
I am new to neural networks. I tried coding the backpropagation algorithm and tried running it on a test set, which gave wrong results. I have used the following knowledge to code it:
For the forward pass, $$z^l = w^la^{l-1} + b^l$$ $$a^l = g^l (z^l)$$
For the backward pass, (Here $\circ\text{ - Element wise Product}$)
For the last layer, $$\delta ^L = (a^L - y) \circ g'^L(z^L)$$ $$\frac{\partial L}{\partial w^L} = \delta^L (a^{L-1})^T$$ $$\frac{\partial L}{\partial b^L} = \delta^L$$
For the other layers, $$\delta ^l = (w^{l+1})^T(\delta^{l+1}) \circ g'^l(z^l)$$ $$\frac{\partial L}{\partial w^l} = \delta^l (a^{l-1})^T$$ $$\frac{\partial L}{\partial b^l} = \delta^l$$
For the Update, $$W : = W - \alpha \frac{\partial L}{\partial w^l} $$ $$b : = b - \alpha\frac{\partial L}{\partial b^l} $$
The following is the code which I have written,
X = load('iris.csv');
W1 = rand(3,4);
W2 = rand(1,3);
b1 = rand(3,1);
b2 = rand(1,1);
y = [ones(1,50) 2*ones(1,50) 3*ones(1,50)]';
for j = 1:100                                  % for each epoch
    for i = 1:150                              % for each sample in the dataset
        x1 = X(i,:)';
        z1 = W1*x1 + b1;
        a1 = tanh(z1);
        z2 = W2*a1 + b2;
        a2 = relu(z2);
        dz2 = (a2 - y(i)).*reluGradient(a2);   % this is delta L
        dw2 = dz2*transpose(a1);               % the derivative term of the Lth layer
        db2 = dz2;                             % for the bias
        g1 = 1 - a1.^2;                        % derivative of tanh
        dz1 = transpose(W2)*dz2 .* g1;         % for delta l
        dw1 = dz1*transpose(x1);               % derivative term for the lth layer
        db1 = dz1;                             % bias update
        W1 = W1 - 0.1*dw1;
        b1 = b1 - 0.1*db1;
        W2 = W2 - 0.1*dw2;
        b2 = b2 - 0.1*db2;
    end
end
I am trying to train the net for the iris data set (150 X 4 - dataset Size). I have considered 4 input units, 1 hidden layer with 3 hidden units and 1 output unit. Hence the dimension of the weight matrix for first layer is 3 X 4 and for the last layer is 1 X 3. When I try to test the network I always get the input classified to class 3. I tried changing the hyper parameters, but it seems there is something wrong with the code.
In the code , I first load the CSV file, and then initialize the weight matrices accordingly. I have run two for loops one for the epoch and other for the iteration. I do a forward pass first using the above equations then a backward pass. I have made functions for the RELU and RELU_GRADIENT.
For relu:
function g = relu(z)
    g = max(0,z);
end
For relu gradient:
function z = reluGradient(z)
    z(z>=0) = 1;
    z(z<0) = 0;
end
I would be obliged if someone could direct me to the solution to this problem.
About IRIS.csv:
Has four input features and 150 data samples. 50 of each class. Hence there are 3 classes.
Updates:
The cost function I have used here is the quadratic cost function:

$$C = \sum (a^L - y)^2 $$
Hence all the derivatives have been calculated with respect to that cost function. For example for the last layer,
$$\frac{\partial C}{\partial z} = (a^L - y) \circ g'^L(z^L)$$ which is $\delta^L$
Also,
I have tried using the sigmoidal transfer function instead of the relu transfer function, with all the proper changes to the derivatives, weights and output vectors. (The output vector was then of dimension (3,1).) As there are 3 classes, the targets were $[1,0,0]^T$, $[0,1,0]^T$ and $[0,0,1]^T$.
Thanks
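For comparison, here is a minimal NumPy sketch of the same update equations, with a sigmoid output layer and one-hot targets instead of the scalar labels above; the random stand-in data and layer sizes are illustrative only, not the actual iris.csv pipeline:

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((150, 4))              # stand-in for the 150 x 4 dataset
labels = np.repeat([0, 1, 2], 50)
Y = np.eye(3)[labels]                          # one-hot targets, shape (150, 3)

W1, b1 = 0.1 * rng.standard_normal((3, 4)), np.zeros((3, 1))
W2, b2 = 0.1 * rng.standard_normal((3, 3)), np.zeros((3, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.1

for epoch in range(100):
    for x, y in zip(X, Y):
        x, y = x.reshape(-1, 1), y.reshape(-1, 1)
        # forward pass: z^l = W^l a^(l-1) + b^l, a^l = g^l(z^l)
        z1 = W1 @ x + b1
        a1 = np.tanh(z1)
        z2 = W2 @ a1 + b2
        a2 = sigmoid(z2)
        # backward pass (quadratic cost): delta^L = (a^L - y) * g'^L(z^L)
        d2 = (a2 - y) * a2 * (1 - a2)
        d1 = (W2.T @ d2) * (1 - a1 ** 2)
        # gradient step: W := W - lr * delta * a^T, b := b - lr * delta
        W2 -= lr * d2 @ a1.T
        b2 -= lr * d2
        W1 -= lr * d1 @ x.T
        b1 -= lr * d1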
|
Let $M=\left (\omega\mathbb{I}-A\right )\left(\omega^{*}\mathbb{I}-A^{\dagger}\right)$ be a Hermitian matrix of size $n\times n$ where $A$ is a real non symmetric matrix and $\omega=a+\mathrm{i}b$. $A^{\dagger}$ represents the conjugate transpose of $A$.
I want to compute $\det[M]^{-\frac{1}{2}}$.
I know that for a real symmetric matrix $\Sigma$ we can represent its determinant as a gaussian integral with real variables $x_i$: $$ \frac{1}{|\Sigma|^{1 / 2}}=\int \frac{1}{(2 \pi)^{n / 2}} \exp \left(-\frac{1}{2}\mathbf{x}^{T} \Sigma\mathbf{x}\right)\mathrm{d}\mathbf{x}.$$
However in my case $M$ has complex values. I was wondering if we could extend this integral representation to Hermitian matrices. Among the feedback I got, these are the candidates: \begin{equation} \det[M]^{-\frac{1}{2}}=\int \left ( \prod_{i} \frac{\mathrm{d} x_i}{\sqrt{2 \pi / i}}\right ) \exp \left\{-\frac{\mathrm{i}}{2} \sum_{i j }x_i\left (\sum_k\left(\omega \delta_{i k}-A_{i k}\right)\left(\omega^* \delta_{k j}-A_{k j}^T\right)\right ) x_j\right\}. \end{equation} \begin{equation} \det[M]^{-\frac{1}{2}}=\int\left(\prod_i \frac{d^{2} z_{i}}{\pi}\right) \exp \left\{-\sum_{i, j, k} z_{i}^{*}\left(\omega^{*} \delta_{i k}-J_{i k}^{T}\right)\left(\omega \delta_{k j}-J_{k j}\right) z_{j}\right\} \end{equation} The second one involving complex variables seems intuitively the best suited. However I do not know whether this is correct, and if I could use a simpler integral I would very much prefer that.
Why would this not work: $$ \det[M]^{-\frac{1}{2}}=\int \left ( \prod_{i} \frac{\mathrm{d} x_i}{\sqrt{2 \pi }}\right ) \exp \left\{-\frac{1}{2} \sum_{i j }x_i\left (\sum_k\left(\omega \delta_{i k}-A_{i k}\right)\left(\omega^* \delta_{k j}-A_{k j}^T\right)\right ) x_j\right\}. $$
I am very curious on what the correct way would be. Any remark or advice would be greatly appreciated!
edit: I consider the case where $A$ is real, and does not have complex entries anymore.
Second edit: I was told that I had to integrate over complex $z_i$ rather than real $x_i$. If this is true I would like to know why I can't use real integration.
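As a side note, any candidate integral representation can at least be checked numerically against the exact value of $\det[M]^{-1/2}$; a small NumPy baseline, with an arbitrary choice of $A$ and $\omega$:

import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))                      # real, non-symmetric
omega = 0.7 + 1.3j
I = np.eye(n)
M = (omega * I - A) @ (np.conj(omega) * I - A.T)     # A real, so A^dagger = A^T
print(np.allclose(M, M.conj().T))                    # M is Hermitian by construction
print(np.linalg.det(M).real ** -0.5)                 # the exact target value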
|
For any arbitrary matrix,
$$\begin{bmatrix} a_{n+1} \\ b_{n+1} \end{bmatrix} = \begin{bmatrix} \sqrt{3}+1 & 1-\sqrt{3} \\ \sqrt{3}-1 & 1+\sqrt{3} \end{bmatrix} \begin{bmatrix} a_n \\ b_n \end{bmatrix}$$

where $n\in \mathbb{N}$. Also, $a_1=b_1=1$.

Find $a_{22}$.
P.S. Thanks to my friend Shantam, for the question!!
Note by Abhineet Nayyar 4 years, 6 months ago
Is the answer $(2\sqrt{2})^{21}$?

Yes, I'm also getting the same..... what's your method?

I did it in this way......
$$\begin{cases} \vec{V_n} = a_n i + b_n j \\ \vec{V_{n+1}} = a_{n+1} i + b_{n+1} j \end{cases}$$

Now using the matrix manipulation method: the axes are rotated by an angle of 15 degrees...

$$\vec{V_{n+1}} = (2\sqrt{2})\,\vec{V_n}, \qquad \vec{V_1} = i + j \quad (\because\ a_1 = b_1 = 1), \qquad \vec{V_{22}} = (2\sqrt{2})\,\vec{V_{21}} = (2\sqrt{2})^{21}\,\vec{V_1}$$

Now simply comparing the coefficients of $a_n$ and $b_n$ we get the answer.....
Is it correct ....? @Sudeep Salgia
Yep. There is a small typo in the first equation. It should be $\vec{V_{n+1}} = (2\sqrt{2})\, e^{i\pi/12}\, \vec{V_n}$. And the net rotation is $315^{\circ}$, with the initial angle being $45^{\circ}$, making the final angle $360^{\circ}$. Hence $a_n = (2\sqrt{2})^{21}\cos 360^{\circ}$ and $b_n = (2\sqrt{2})^{21}\sin 360^{\circ} = 0$.

@Sudeep Salgia – Yes, thanks... I missed that while typing... And Abhineet, I don't think there is another method... in fact this question is designed using the concept of the Matrix Transformation method, which I learnt here on Brilliant from a question in which Deepanshu Gupta posted a solution using this concept....!

So I think this question is specially designed for this concept... Some other method may be possible, but I think it will surely not be an elegant one (at least time-consuming)... But I don't have any ideas about them yet...
@Karan Shekhawat – Also I like your status very much ..... :)
@Karan Shekhawat – Haha...Thank You!!:D:D
@Karan Shekhawat – @Karan Shekhawat Thanks, actually i found it in an MTG magazine for engineering, well, my friend did...So, it is possible that the question may be from vectors, and not matrices. Anyways, Solutions are welcome. So, let's see:)
Thanks guys,but is there a method to solve it by Matrix algebra and not Vectors...?
@Abhineet Nayyar – I have a method which is not so intuitive and I will post it as soon as I get some time.
@Sudeep Salgia – Thanks... we are eagerly waiting ....
@Sudeep Salgia – Thanks @Sudeep Salgia and @Karan Shekhawat for your help:)
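For a quick numerical cross-check of the closed form, one can also just power the matrix directly; a small NumPy sketch:

import numpy as np

r3 = np.sqrt(3)
M = np.array([[r3 + 1, 1 - r3],
              [r3 - 1, 1 + r3]])
v1 = np.array([1.0, 1.0])                      # (a_1, b_1)
a22, b22 = np.linalg.matrix_power(M, 21) @ v1  # 21 applications give (a_22, b_22)
print(a22, b22)                                # compare with the closed form above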
|
Pseudoscore Diagnostic For Fitted Model against Saturation Alternative
Given a point process model fitted to a point pattern dataset, this function computes the pseudoscore diagnostic of goodness-of-fit for the model, against moderately clustered or moderately inhibited alternatives of saturation type.
Usage
psstG(object, r = NULL, breaks = NULL, …, model=NULL, trend = ~1, interaction = Poisson(), rbord = reach(interaction), truecoef = NULL, hi.res = NULL)
Arguments

object: Object to be analysed. Either a fitted point process model (object of class "ppm"), a point pattern (object of class "ppp"), or a quadrature scheme (object of class "quad").

r: Optional. Vector of values of the argument \(r\) at which the diagnostic should be computed. This argument is usually not specified. There is a sensible default.

breaks: Optional alternative to r for advanced use.

…: Ignored.

model: Optional. A fitted point process model (object of class "ppm") to be re-fitted to the data using update.ppm, if object is a point pattern. Overrides the arguments trend, interaction, rbord, ppmcorrection.

trend, interaction, rbord: Optional. Arguments specifying the model to be fitted to the data, if object is a point pattern.

truecoef: Optional. Numeric vector. If present, this will be treated as if it were the true coefficient vector of the point process model, in calculating the diagnostic. Incompatible with hi.res.

hi.res: Optional. List of parameters passed to quadscheme. If this argument is present, the model will be re-fitted at high resolution as specified by these parameters. The coefficients of the resulting fitted model will be taken as the true coefficients. Then the diagnostic will be computed for the default quadrature scheme, but using the high resolution coefficients.
Details
This function computes the pseudoscore test statistic which can be used as a diagnostic for goodness-of-fit of a fitted point process model.
Consider a point process model fitted to \(x\), with conditional intensity \(\lambda(u,x)\) at location \(u\). For the purpose of testing goodness-of-fit, we regard the fitted model as the null hypothesis. The alternative hypothesis is a family of hybrid models obtained by combining the fitted model with the Geyer saturation process (see
Geyer) with saturation parameter 1. The family of alternatives includes models that are more regular than the fitted model, and others that are more clustered than the fitted model.
For any point pattern \(x\), and any \(r > 0\), let \(S(x,r)\) be the number of points in \(x\) whose nearest neighbour (the nearest other point in \(x\)) is closer than \(r\) units. Then the pseudoscore for the null model is $$ V(r) = \sum_i \Delta S(x_i, x, r ) - \int_W \Delta S(u,x,r) \lambda(u,x) {\rm d} u $$ where the \(\Delta\) operator is $$ \Delta S(u,x,r) = S(x\cup\{u\}, r) - S(x\setminus u, r) $$ the difference between the values of \(S\) for the point pattern with and without the point \(u\).
According to the Georgii-Nguyen-Zessin formula, \(V(r)\) should have mean zero if the model is correct (ignoring the fact that the parameters of the model have been estimated). Hence \(V(r)\) can be used as a diagnostic for goodness-of-fit.
The diagnostic \(V(r)\) is also called the pseudoresidual of \(S\). On the right hand side of the equation for \(V(r)\) given above, the sum over points of \(x\) is called the pseudosum and the integral is called the pseudocompensator.

Value

A function value table (object of class "fv"), essentially a data frame of function values. Columns in this data frame include dat for the pseudosum, com for the compensator and res for the pseudoresidual. There is a plot method for this class. See fv.object.
References
Baddeley, A., Rubak, E. and Moller, J. (2011) Score, pseudo-score and residual diagnostics for spatial point process models. Statistical Science 26, 613--646.

See Also

Aliases

psstG

Examples
# NOT RUN {
X <- rStrauss(200, 0.1, 0.05)
plot(psstG(X))
plot(psstG(X, interaction=Strauss(0.05)))
# }
Documentation reproduced from package spatstat, version 1.59-0, License: GPL (>= 2)
|
I am reading Optics by Eugene Hecht, and I am confused about the author's derivation of the differential wave equation:
To relate the space and time dependencies of $\psi(x, t)$, take the partial derivative of $\psi(x, t) = f(x')$ with respect to $x$, holding $t$ constant. Using $x' = x \mp vt$, and inasmuch as
$$\dfrac{\partial{\psi}}{\partial{x}} = \dfrac{\partial{f}}{\partial{x}}$$
$$\dfrac{\partial{\psi}}{\partial{x}} = \dfrac{\partial{f}}{\partial{x'}} \dfrac{\partial{x'}}{\partial{x}} = \dfrac{\partial{f}}{\partial{x'}} \tag{2.8}$$
because $$\dfrac{\partial{x'}}{\partial{x}} = \dfrac{\partial{x \mp vt}}{\partial{x}} = 1$$
Holding $x$ constant, the partial derivative with respect to time is
$\dfrac{\partial{\psi}}{\partial{t}} = \dfrac{\partial{f}}{\partial{x'}} \dfrac{\partial{x'}}{\partial{t}} = \dfrac{\partial{f}}{\partial{x'}} (\mp v) = \mp v \dfrac{\partial{f}}{\partial{x'}} \tag{2.9}$
Combining Eqs. (2.8) and (2.9) yields
$$\dfrac{\partial{\psi}}{\partial{t}} = \mp v \dfrac{\partial{\psi}}{\partial{x}}$$
This says that the rate of change of $\psi$ with $t$ and with $x$ are equal, to within a multiplicative constant, as shown in Fig. 2.5. The second partial derivatives of Eqs. (2.8) and (2.9) are
$$\dfrac{\partial^2{\psi}}{\partial{x}^2} = \dfrac{\partial^2{f}}{\partial{x'}^2} \tag{2.10}$$
and $$\dfrac{\partial^2{\psi}}{\partial{t}^2} = \dfrac{\partial}{\partial{t}} \left( \mp v \dfrac{\partial{f}}{\partial{x'}} \right) = \mp v \dfrac{\partial}{\partial{x'}} \left( \dfrac{\partial{f}}{\partial{t}} \right)$$
Since $$\dfrac{\partial{\psi}}{\partial{t}} = \dfrac{\partial{f}}{\partial{t}}$$
$$\dfrac{\partial^2{\psi}}{\partial{t}^2} = \mp v \dfrac{\partial}{\partial{x'}} \left( \dfrac{\partial{\psi}}{\partial{t}} \right)$$
It follows, using Eqn. (2.9), that $$\dfrac{\partial^2{\psi}}{\partial{t}^2} = v^2 \dfrac{\partial^2{f}}{\partial{x'}^2}$$
I don't see how
$$\dfrac{\partial{\psi}}{\partial{t}} = \dfrac{\partial{f}}{\partial{t}} \tag{1}$$
We actually found that $\dfrac{\partial{\psi}}{\partial{t}} = \mp v \dfrac{\partial{\psi}}{\partial{x}}$, so I have no idea where the author got $\dfrac{\partial{\psi}}{\partial{t}} = \dfrac{\partial{f}}{\partial{t}}$ from?
I would greatly appreciate it if people could please take the time to clarify this.
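For what it's worth, the end result of the derivation can be checked symbolically; a small SymPy sketch (this only verifies the final wave equation, not the particular step in question):

import sympy as sp

x, t, v = sp.symbols('x t v')
f = sp.Function('f')
psi = f(x - v * t)                       # psi(x, t) = f(x'), with x' = x - v t
wave_eq = sp.diff(psi, t, 2) - v**2 * sp.diff(psi, x, 2)
print(sp.simplify(wave_eq))              # prints 0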
|
Let $P$ be a convex $n$-gon. Suppose that we have an infinite number of $P$s, and that each of them is colored either red or blue. Here, let us consider the following operations:

Operation 1: Place a red $P$ on a plane.
Operation 2 : Place $n$ blue $P$s around the red $P$ such that each of the blue $P$s and the red $P$ are laid symmetrically with each $E_i\ (i=1,2,\cdots,n)$ where $E_i$ is an edge of the red $P$.
Operation 3 : Place $n$ red $P$s around every blue $P$ in the same way as operation 2.
Operation 4 : Place $n$ blue $P$s around every red $P$ in the same way as operation 2. Operation 5 : Repeat operation 3 and 4.
Here, let us consider the following conditions :
Condition 1 : These $P$s are plane tessellation figures. Condition 2 : There exists no place where both red $P$ and blue $P$ are placed.
Note that a regular hexagon, for example, does not satisfy condition 2. See the figure below. The blue $P$ on which a letter $P$ is written, for example, will be colored red by the next operation.
Then, here is the first question.
Question 1: Is the following true?
$P$ satisfies these two conditions $\iff$ $P$ is either "a $45–45–90$ triangle", "a $30–60–90$ triangle", "an equilateral triangle" or "a rectangle".
I reached this conjecture by considering the inner angles of $P$. The following is what I've thought: every inner angle, say $\alpha$, of $P$ has to satisfy $2m\alpha=360^{\circ}$ for some $m\in\mathbb N$ with $m\ge 2$. Hence, $\alpha$ has to be one of the positive divisors of $180$ except $180$ itself. This leads to $n\le 4$, and so on.
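Under this necessary condition (each interior angle of the form $180^{\circ}/m$ for an integer $m\ge 2$, from $2m\alpha=360^{\circ}$), the possible angle multisets can be enumerated by a short script; this is only a sketch of the angle-counting step, not a proof of the equivalence:

from fractions import Fraction
from itertools import combinations_with_replacement

# Angles alpha_i = 180/m_i with m_i >= 2 must satisfy sum(alpha_i) = (n-2)*180,
# i.e. sum(1/m_i) = n - 2; the bound alpha <= 90 forces n <= 4.
for n in (3, 4):
    for ms in combinations_with_replacement(range(2, 13), n):
        if sum(Fraction(1, m) for m in ms) == n - 2:
            print(n, [180 / m for m in ms])
# prints the 90-60-30, 90-45-45 and 60-60-60 triangles and the 90-90-90-90 quadrilateral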
Then, here is the second question.
Question 2: Letting $P$ be a convex polyhedron, how about the three-dimensional version of this question? Suppose that we consider plane symmetry instead of line symmetry.
In the two dimensional version, I think we can consider the inner angles of $P$. However, I don't have any good idea for the higher dimensional version. Can anyone help?
|
Berliner Kolloquium Wahrscheinlichkeitstheorie und Seminar RTG 1845, Sommersemester 2015

Date | GK-Seminar speaker (17:15-18:00) | Colloquium speaker (18:00-19:00) | Location
22.4.2015 | (none; exhibition) | Herbert Spohn (München), 17:15-18:15 | U Potsdam, Haus 8, Raum 0.58
6.5.2015 | Maite Wilke Berenguer (TUB) | Patrick Cattiaux (Toulouse) | TU Berlin, MA 041
20.5.2015 | (no colloquium; Euler Lecture on 22.5.) | | U Potsdam
3.6.2015 | Alberto Chiarini (TUB) | Sabine Jansen (Bochum) | TU Berlin, MA 041
17.6.2015 | Matti Leimbach (TUB) | Jan Swart (Prag) | TU Berlin, MA 041
1.7.2015 | Alexandros Saplaouras (TUB) | Yuri Kifer (Jerusalem) | TU Berlin, MA 041
15.7.2015 | Jennifer Krüger / Eva Lang (TUB) | Dirk Blömker (Augsburg) | U Potsdam

The colloquium in the summer semester takes place at TU Berlin (MA 041) and at the University of Potsdam. Organisation: S. Roelly (UP) and N. Kurt (TUB).

Abstracts

22 April
Kolloquium: Herbert Spohn (TU München) The KPZ (Kardar-Parisi-Zhang) equation and its universality class Abstract: The one-dimensional KPZ equation is a stochastic PDE which describes the dynamics of surface growth. It is one representative of a much larger universality class. I will discuss a few models in this class and explain how they are connected. They all share the common feature to be stochastic integrable.
6 May:
RTG-Seminar: Maite Wilke Berenguer (TUB)
Lipschitz Percolation
Abstract:
Kolloquium: Patrick Cattiaux (Toulouse) Central Limit Theorem for additive functionals of some Markov processes: anomalous results Abstract: In this talk we will consider an ergodic Markov process $X_t$ ($t \in \mathbb N$ or $t \in \mathbb R^+$) with unique invariant probability $\mu$, and some additive functional $S_t=\sum_{k=1}^t \, f(X_k)$ or $S_t=\int_0^t \, f(X_s)ds$ for some $\mu$-centered $f$. If $f \in \mathbb{L}^2(\mu)$ the expected appropriate normalization is $\sqrt{\mathrm{Var}(S_t)}$ (expected to be of order $\sqrt t$), and the expected limit is then a standard Gaussian. If $f \in \mathbb{L}^p(\mu)$ ($1<p<2$) one expects in some cases some stable limit after appropriate normalization. It turns out that the mixing rate of the process (equivalently the rate of convergence to equilibrium) is of particular importance for these results to hold true. We shall recall some of the main recent (and less recent) results in this direction and explain how the mixing rate enters into the game. We shall also discuss a particular class of examples for which, depending on whether the convergence to equilibrium is quick enough or not, an anomalous limit (with some variance breaking) or an anomalous normalization appears. At the level of the invariance principle, instead of the simple CLT, the expected limiting process becomes a fractional Brownian motion instead of the usual one. These examples correspond to a special class of kinetic PDEs with heavy-tailed equilibria.
3 June:
RTG-Seminar: Alberto Chiarini (TUB) Extremes of the supercritical Gaussian Free Field. Abstract: We show that the rescaled maximum of the discrete Gaussian Free Field (DGFF) in dimension greater than or equal to 3 is in the maximal domain of attraction of the Gumbel distribution. A finer description of the maximum can also be obtained, that is, the associated extremal process converges to a Poisson point process. These results hold both for the infinite-volume field and for the field with zero boundary conditions. The proofs follow from an interesting application of the Stein-Chen method from Arratia et al. (1989). Joint work with Alessandra Cipriani (WIAS) and Rajat Subhra Hazra (Indian Statistical Institute)
Kolloquium: Sabine Jansen (Bochum)
Non-colliding Ornstein-Uhlenbeck bridges and symmetry breaking in a quantum 1D Coulomb system
Abstract:
Jellium is a model where negatively charged electrons move in a uniform neutralizing background of positive charge. Eugene Wigner conjectured that at low density, the electrons should crystallize, i.e., form a periodic lattice. We prove that in dimension 1, in a quantum mechanics setup, this actually happens for all temperatures and densities, thereby extending low-density results by Brascamp and Lieb (1975) and classical results by Aizenman and Martin (1980). The proof uses the Feynman-Kac formula to map the quantum model to a system of non-colliding Ornstein-Uhlenbeck bridges, and then applies the Krein-Rutman theorem (an infinite-dimensional version of Perron-Frobenius). The talk is based on joint work with Paul Jung (University of Alabama at Birmingham).
17 June
RTG-Seminar: Matti Leimbach (TUB)
Porous medium equation with proliferation. Abstract: Motivated by mathematical oncology, we present two PDEs, the viscous porous medium equation and the Fisher-Kolmogorov-Petrovskii-Piskunov model (FKPP). Formally, we derive the viscous porous medium equation as the limit of the empirical measure of a system of interacting particles with intermediate interaction-range and large amplitude. We illustrate that the FKPP model is also a limit of a particle system with a certain branching mechanism. At the end, we briefly discuss a combination of these two PDEs, the porous medium equation with proliferation. This is based on ongoing research with Franco Flandoli.
Kolloquium: Jan Swart (Prag)
Rank-based Markov chains, self-organized criticality, and order book dynamics. Abstract: In this talk, we will take a look at some systems of interacting particles on the real line, where the only spatial structure that is relevant for the dynamics is the relative order of the particles. Examples of such systems are the modified Bak-Sneppen model, introduced (as a variation of the original 1993 model) by Meester and Sarkar (2012), Barabási's (2005) queueing system and a variation on the latter due to Gabrielli and Caldarelli (2009), a model for the evolution of the state of an order book on a stock market, introduced by Stigler (1964) and independently by Luckock (2003), and two models for canyon formation introduced by the speaker (2014). All these systems employ a version of the rule "kill the lowest particle" and seem to exhibit self-organized criticality at a critical point that marks the boundary between an interval where all particles are eventually removed and an interval where particles stay in the system forever.
July 1
RTG-Seminar: Alexandros Saplaouras (TU Berlin)
Towards a robustness result for BSDEs with jumps
Abstract: Motivated by the robustness of BSDEs with respect to the Brownian motion, see \cite{BDM}, we want to prove that the same holds when the BSDE is taken with respect to a square-integrable, quasi-left-continuous martingale $M$. The robustness of a BSDE stands for the following property: given a suitable martingale approximation $M^n$ of $M$, the solutions of the BSDEs driven by $M^n$ converge to the solution of the BSDE driven by $M$. In order to obtain the result, we need to overcome two intermediate problems. The first is to guarantee the existence and uniqueness of solutions of BSDEs driven by $M^n$. In this case, the predictable quadratic covariation of $M^n$ may have jumps, hence the Lebesgue-Stieltjes integral is not necessarily a continuous process. In this work we improve a general result on existence and uniqueness for BSDEs, see \cite{EKH}, where the Lebesgue-Stieltjes integral is taken with respect to a continuous, predictable and increasing process. Our improvement consists in allowing the integrator of the Lebesgue-Stieltjes integral to have (suitably small) jumps, i.e. to be a càdlàg, predictable and increasing process. The second problem consists in proving that the corresponding stochastic and Lebesgue-Stieltjes integrals with respect to $M^n$ and the predictable quadratic covariations $\langle M^n \rangle$ converge to the stochastic and Lebesgue-Stieltjes integrals with respect to $M$ and the predictable quadratic covariation $\langle M \rangle$, respectively. Once this second obstacle is overcome, we can proceed to prove the desired result. As a byproduct of this result, the convergence of the Euler scheme for BSDEs is obtained, where $M^n$ is the time discretization of $M$.
Kolloquium: Yuri Kifer, (Hebrew University, Jerusalem)
Further advances in nonconventional limit theorems
July 15 (start 16:30/17:30, location: U Potsdam)
RTG-Seminar: Eva Lang (TUB)
A multiscale analysis of traveling waves in stochastic neural fields
Kolloquium: Dirk Blömker (Augsburg) Stochastic dynamics near a change of stability (Amplitude and Modulation Equations)
This is the third part of a three-part series introducing some general concepts and concrete algorithms in the field of neural network optimizers. As a reminder, here is the table of contents:
We covered two important concepts of optimizers in the previous parts of this series, namely the introduction of a momentum term and adaptive learning rates. However, other variations, combinations and even additional concepts have also been proposed.
Each optimizer has its own advantages and limitations, making it suitable for specific contexts. It is beyond the scope of this series to name or introduce them all. Instead, we briefly explain the well-established Adam optimizer as one example; it also re-uses some of the ideas discussed previously.
Before we proceed, we want to stress some thoughts regarding the combination of optimizers. One obvious choice might be to combine the momentum optimizer with the adaptive learning scheme. Even though this is theoretically possible, and even available as an option in implementations of the RMSProp algorithm, there might be a problem.
The main concept of the momentum optimizer is to accelerate when the direction of the gradient remains the same in subsequent iterations. As a result, the update vector increases in magnitude. This, however, contradicts one of the goals of adaptive learning rates, which try to keep the gradients in “reasonable ranges”. This may lead to issues when the momentum vector \(\fvec{m}\) increases but is then scaled down again by the scaling vector \(\fvec{s}\).
The authors of RMSProp also note that the direct combination of adaptive learning rates with a momentum term does not work so well. The theoretical argument just discussed might be one cause of these observations.
In the following, we first define the Adam algorithm and then look at the differences compared to previous approaches. The first is the use of first-order moments, which behave differently than a momentum vector; we use an example to see how this choice helps in skipping suboptimal local minima. The second difference is the use of bias-correction terms, which are necessary due to the zero-initialization of the moment vectors. Finally, we also take a look at different trajectories.
This optimizer was introduced by Diederik P. Kingma and Jimmy Ba in 2015. It mainly builds upon the ideas from AdaGrad and RMSProp, i.e. adaptive learning rates, and extends these approaches. The name is derived from
adaptive moment estimation.
Additionally to the variables used in classical gradient descent, let \(\fvec{m} = (m_1, m_2, \ldots, m_n) \in \mathbb{R}^n\) and \(\fvec{s} = (s_1, s_2, \ldots, s_n) \in \mathbb{R}^n\) be the vectors with the estimates of the first and second raw moments of the gradients (same lengths as the weight vector \(\fvec{w}\)). Both vectors are initialized to zero, i.e. \(\fvec{m}(0) = \fvec{0}\) and \(\fvec{s}(0) = \fvec{0}\). The hyperparameters \(\beta_1, \beta_2 \in [0;1[\) denote the decaying rates for the moment estimates and \(\varepsilon \in \mathbb{R}^+\) is a smoothing term. Then, the Adam optimizer defines the update rules\begin{align} \begin{split} \fvec{m}(t) &= \beta_1 \cdot \fvec{m}(t-1) + (1-\beta_1) \cdot \nabla E\left( \fvec{w}(t-1) \right) \\ \fvec{s}(t) &= \beta_2 \cdot \fvec{s}(t-1) + (1-\beta_2) \cdot \nabla E \left( \fvec{w}(t-1) \right) \odot \nabla E \left( \fvec{w}(t-1) \right) \\ \fvec{w}(t) &= \fvec{w}(t-1) - \eta \cdot \frac{\fvec{m}(t)}{1-\beta_1^t} \oslash \sqrt{\frac{\fvec{s}(t)}{1-\beta_2^t} + \varepsilon} \end{split} \label{eq:AdamOptimizer_Adam} \end{align}
to find a path from the initial position \(\fvec{w}(0)\) to a local minimum of the error function \(E\left(\fvec{w}\right)\). The symbol \(\odot\) denotes the point-wise multiplication and \(\oslash\) the point-wise division between vectors.
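To make the update rules in \eqref{eq:AdamOptimizer_Adam} a bit more tangible, here is a minimal NumPy sketch of a single Adam step. The function name `adam_step`, the gradient callback `grad_E` and the default hyperparameter values are illustrative assumptions and not taken from any particular library.

```python
import numpy as np

def adam_step(w, m, s, t, grad_E, eta=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """Performs one Adam update and returns the new (w, m, s).

    w, m, s: current weights and moment estimates (NumPy arrays of equal length)
    t:       current iteration number, starting at 1 (needed for the bias correction)
    grad_E:  function returning the gradient of the error at w
    """
    g = grad_E(w)
    m = beta1 * m + (1 - beta1) * g             # estimate of the first raw moment
    s = beta2 * s + (1 - beta2) * g * g         # estimate of the second raw moment
    m_hat = m / (1 - beta1 ** t)                # bias-corrected first moment
    s_hat = s / (1 - beta2 ** t)                # bias-corrected second moment
    w = w - eta * m_hat / np.sqrt(s_hat + eps)  # point-wise division as in the update rule
    return w, m, s
```

Starting from \(\fvec{m}(0) = \fvec{s}(0) = \fvec{0}\), the step would simply be applied repeatedly in a loop over \(t = 1, 2, \ldots\)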
There is a very close relationship to adaptive learning rates. In fact, the update rule of \(\fvec{s}(t)\) in \eqref{eq:AdamOptimizer_Adam} is identical to the one in the adaptive learning scheme. We also see that there is an \(\fvec{m}\) vector, although it is different from the one defined in momentum optimization. We will pick up this point shortly.
In the description of Adam, the arguments are more statistically driven: \(\fvec{m}\) and \(\fvec{s}\) are interpreted as exponentially moving averages of the first and second raw moment of the gradient. That is, \(\fvec{m}\) is a biased estimate of the means of the gradients and \(\fvec{s}\) is a biased estimate of the uncentred variances of the gradients. In total, we can say that the Adam update process uses information about where the gradients are located on average and how they tend to scatter.
In momentum optimization, we keep track of an exponentially decaying sum whereas in Adam we have an exponentially decaying average. The difference is that in Adam we do not add the full new gradient vector \(\nabla E\left( \fvec{w}(t-1) \right)\). Instead, only a fraction is used while at the same time a fraction of the old momentum is removed (the last part is identical to the momentum optimizer). For example, if we set \(\beta_1 = 0.9\), we keep 90 % of the old value and add 10 % of the new. The bottom line is that we build much less momentum, i.e. the momentum vector does not grow that much.
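A quick numeric sketch makes this difference concrete. Assuming a hypothetical constant gradient of 1 and a momentum update of the common form \(\fvec{m}(t) = \beta_1 \cdot \fvec{m}(t-1) + \nabla E\) (treat this form as an assumption here), the decaying sum approaches \(1/(1-\beta_1)\) times the gradient, while Adam's moving average stays bounded by the gradient magnitude itself.

```python
beta1, g = 0.9, 1.0                          # decay rate and an assumed constant gradient
momentum, ema = 0.0, 0.0
for t in range(1, 51):
    momentum = beta1 * momentum + g          # decaying sum (momentum-style accumulation)
    ema = beta1 * ema + (1 - beta1) * g      # decaying average (Adam's first moment)
print(momentum, ema)                         # ≈ 9.95 vs. ≈ 0.995 after 50 iterations
```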
In the analogy of a ball rolling down a valley, we may think of the moment updates in \eqref{eq:AdamOptimizer_Adam} as a very heavy ball with a lot of friction. It accelerates less and needs more time to take new gradient information into account. The ball rolls down the valley according to the running average of gradients along the track. Since it takes some time until the old gradient information is lost, it is less likely to stop at small plateaus and can hence overshoot small local minima.
We now want to test this argument on a small example function. For this, we leave out the second moments \(\fvec{s}\) for now so that \eqref{eq:AdamOptimizer_Adam} reduces to\begin{align} \begin{split} \fvec{m}(t) &= \beta_1 \cdot \fvec{m}(t-1) + (1-\beta_1) \cdot \nabla E\left( \fvec{w}(t-1) \right) \\ \fvec{w}(t) &= \fvec{w}(t-1) - \eta \cdot \frac{\fvec{m}(t)}{1-\beta_1^t}. \end{split} \label{eq:AdamOptimizer_AdamFirstMoment} \end{align}
We want to compare these first moment updates with classical gradient descent. The following figure shows the example function and allows you to play around with a trajectory which starts near the summit of the hill.
Directly after the first descent is a small local minimum and we see that classical gradient descent (\(\beta_1 = 0\)) gets stuck here. However, with first-order moments (e.g. \(\beta_1 = 0.95\)), we leverage the fact that the moving average does not decrease fast enough, so we can still roll over this small hole and make it down to the valley.
We can see from the error landscape that the first gradient component has the major impact on the updates as it is the direction of the steepest hill. It is insightful to visualize the first component \(m_1(t)\) of the first-order moments over iteration time \(t\):
With classical gradient descent (\(\beta_1 = 0\)), we move fast down the hill but then get stuck in the first local minimum. As only local gradient information is used in the update process, the chances of escaping the hole are very low.
In contrast, when using first-order moments, we pick up speed more slowly since only a fraction of the large initial gradients is used. However, \(m_1(t)\) also decreases more slowly when reaching the first hole. In this case, the behaviour of the moving average helps to step over the short increase and to move further down the valley.
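This behaviour can also be reproduced with a small script. The following sketch uses a hypothetical one-dimensional error function with a shallow dip at \(w = 1\), a small bump at \(w = 1.5\) and a deeper minimum at \(w = 3\); it is only a stand-in with the same qualitative shape as the example function, not the function used in the figures, and the learning rate is an assumed value.

```python
def grad_E(w):
    # gradient of an assumed 1D error function with critical points at w = 1, 1.5 and 3
    return (w - 1.0) * (w - 1.5) * (w - 3.0)

def run(beta1, eta=0.1, w0=0.0, iterations=300):
    w, m = w0, 0.0
    for t in range(1, iterations + 1):
        m = beta1 * m + (1 - beta1) * grad_E(w)   # first-moment estimate
        w -= eta * m / (1 - beta1 ** t)           # bias-corrected update as in the reduced rule
    return w

print(run(beta1=0.0))    # classical gradient descent: expected to get stuck near the dip at w ≈ 1
print(run(beta1=0.95))   # first-moment updates: expected to roll over the bump towards w ≈ 3
```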
Building momentum and accelerating when we move in the same direction in subsequent iterations is the main concept and advantage of momentum optimization. However, as we already saw in the toy example used in the momentum optimizer article, large momentum vectors may be problematic as they can overstep local minima and lead to oscillations. What is more, as stressed in the argument above, it is not entirely clear if momentum optimization works well together with adaptive learning rates. Hence, it might be reasonable that the momentum optimizer is not used directly in Adam.
The final change in the Adam optimizer compared to its predecessors is the bias-correction terms, where we divide the moment vectors by \((1-\beta_1^t)\) and \((1-\beta_2^t)\), respectively. This is because the moment vectors are initialized to zero, so the moving averages are biased towards the origin, especially in the beginning. The factors are a countermeasure to correct this bias.
Practically speaking, these terms boost both vectors in the beginning since they are divided by a number usually \(< 1\). This can speed up convergence when the true moving averages are not located at the origin but are larger instead. As the factors have the iteration number \(t\) in the exponent of the hyperparameters, they approach 1 over time and hence become less influential.
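A small worked example illustrates this. Assuming, hypothetically, that the gradient is the same vector \(\fvec{g}\) in every iteration, the moving average evaluates to\begin{align*} \fvec{m}(t) = (1-\beta_1) \sum_{k=1}^{t} \beta_1^{t-k} \, \fvec{g} = (1-\beta_1) \cdot \frac{1-\beta_1^t}{1-\beta_1} \cdot \fvec{g} = \left(1-\beta_1^t\right) \cdot \fvec{g}, \end{align*}
so dividing by \(1-\beta_1^t\) recovers \(\fvec{g}\) exactly from the very first iteration, whereas the uncorrected estimate starts at only \((1-\beta_1) \cdot \fvec{g}\). The same argument applies to \(\fvec{s}(t)\) with \(\beta_2\).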
We now consider, once again, a one-dimensional example and define measures to compare the update vectors of the second iteration using either classical gradient descent or the Adam optimizer. To visualize the effect of the bias-correction terms, we repeat the process with these terms left out.
Denoting the gradients of the first two iterations as \(g_t = \nabla E\left( w(t-1) \right)\), we build the moment estimates\begin{align*} m(1) &= \beta_1 \cdot m(0) + (1-\beta_1) \cdot g_1 = (1-\beta_1) \cdot g_1 \\ m(2) &= \beta_1 \cdot m(1) + (1-\beta_1) \cdot g_2 = \beta_1 \cdot (1-\beta_1) \cdot g_1 + (1-\beta_1) \cdot g_2 \\ s(1) &= \beta_2 \cdot s(0) + (1-\beta_2) \cdot g_1^2 = (1-\beta_2) \cdot g_1^2 \\ s(2) &= \beta_2 \cdot s(1) + (1-\beta_2) \cdot g_2^2 = \beta_2 \cdot (1-\beta_2) \cdot g_1^2 + (1-\beta_2) \cdot g_2^2 \end{align*}
so that we can define a comparison measure as\begin{equation} \label{eq:AdamOptimizer_AdamMeasureCorrection} C_A(g_1,g_2) = \left| \eta \cdot \frac{\frac{m(2)}{1-\beta_1^2}}{\sqrt{\frac{s(2)}{1-\beta_2^2} + \varepsilon}} \right| - |\eta \cdot g_2| = \left| \eta \cdot \frac{\sqrt{1-\beta_2^2}}{1-\beta_1^2} \cdot \frac{m(2)}{\sqrt{s(2) + (1-\beta_2^2) \cdot \varepsilon}} \right| - |\eta \cdot g_2|. \end{equation}
To make the effect of the bias correction terms more evident, we moved them out of the compound fraction and used them as prefactor. We define a similar measure without these terms\begin{equation} \label{eq:AdamOptimizer_AdamMeasureNoCorrection} \tilde{C}_A(g_1,g_2) = \left| \eta \cdot \frac{m(2)}{\sqrt{s(2) + \varepsilon}} \right| - |\eta \cdot g_2|. \end{equation}
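Both measures are straightforward to evaluate numerically. The following sketch plugs in the decay rates used in this section (\(\beta_1 = 0.9\), \(\beta_2 = 0.999\)); the learning rate \(\eta\) and the smoothing term \(\varepsilon\) are assumed values and not taken from the figures.

```python
import numpy as np

eta, beta1, beta2, eps = 0.001, 0.9, 0.999, 1e-8   # eta and eps are assumed values

def measures(g1, g2):
    m2 = beta1 * (1 - beta1) * g1 + (1 - beta1) * g2
    s2 = beta2 * (1 - beta2) * g1**2 + (1 - beta2) * g2**2
    # measure with bias correction (C_A) and without it (C~_A)
    c_corr = abs(eta * (m2 / (1 - beta1**2)) / np.sqrt(s2 / (1 - beta2**2) + eps)) - abs(eta * g2)
    c_nocorr = abs(eta * m2 / np.sqrt(s2 + eps)) - abs(eta * g2)
    return c_corr, c_nocorr

# positive values mean the Adam step of the second iteration is larger than the plain gradient step
print(measures(0.5, 0.1))
```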
The following figure compares the two measures by interpreting the gradients of the first two iterations as variables.
With correction terms (left image), we can observe that small gradients get amplified and larger ones attenuated. This is an inheritance from the adaptive learning scheme. Back then, however, this behaviour was more centred around the origin, whereas here smaller gradients get amplified less and more independently of \(g_1\). This is likely an effect of the \(m(2)\) term, which uses only a small fraction (10 % in this case) of the first gradient \(g_1\), leading to a smaller numerator.
When we compare this result with the one without any bias corrections (right image), we see a much brighter picture; that is, the amplification of small gradients and the attenuation of large gradients are both stronger. This is not surprising, as the prefactor\begin{equation*} \frac{\sqrt{1-\beta_2^2}}{1-\beta_1^2} = \frac{\sqrt{1-0.999^2}}{1-0.9^2} \approx 0.2353 \end{equation*}
is smaller than 1 and hence leads to an overall decrease (the term \((1-\beta_2^2) \cdot \varepsilon \) is too small to have a visible effect). Therefore, the bias-correction terms ensure that the update vectors also behave more moderately at the beginning of the learning process.
As in the previous articles, we now want to compare different trajectories when using the Adam optimizer. For this, we can use the following widget, which implements the Adam optimizer.
Basically, the parameters behave as expected: larger values for \(\beta_1\) make the accumulated gradients decrease more slowly so that we first overshoot the minimum. \(\beta_2\) again controls the preference for direction (small \(\beta_2\)) vs. magnitude (large \(\beta_2\)).
Note that even though the Adam optimizer is much more advanced than classical gradient descent, this does not mean it is immune to extreme settings. Strange effects like oscillations can still occur, and the overshooting mechanism can discard good minima (example settings). Hence, it may still be worthwhile to search for good hyperparameter values.
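For readers who want to experiment outside the widget, the full update rules from \eqref{eq:AdamOptimizer_Adam} can be run on any error surface. The following sketch uses an assumed elongated quadratic bowl as a stand-in for the widget's landscape and prints the final position for a few hyperparameter settings; the start point, learning rate and step count are assumptions chosen for illustration.

```python
import numpy as np

def grad_E(w):
    # gradient of the assumed bowl E(w) = 0.5 * (w1^2 + 10 * w2^2) with its minimum at the origin
    return np.array([w[0], 10.0 * w[1]])

def adam(beta1, beta2, eta=0.1, steps=100, w0=(-4.0, 2.0), eps=1e-8):
    w = np.array(w0, dtype=float)
    m = np.zeros_like(w)
    s = np.zeros_like(w)
    for t in range(1, steps + 1):
        g = grad_E(w)
        m = beta1 * m + (1 - beta1) * g
        s = beta2 * s + (1 - beta2) * g * g
        w = w - eta * (m / (1 - beta1**t)) / np.sqrt(s / (1 - beta2**t) + eps)
    return w

# Larger beta1 keeps old gradients around longer and tends to overshoot the minimum first,
# while beta2 shifts the balance between the direction and the magnitude of the steps.
for b1, b2 in [(0.5, 0.9), (0.95, 0.9), (0.9, 0.999)]:
    print(b1, b2, adam(b1, b2))
```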
This concludes the main concepts of the Adam optimizer. It is a popular optimization technique and its default settings are often a good starting point. Personally, I have had good experience with this optimizer and would definitely use it again. However, depending on the problem, it might not be the best choice or may require tuning of its hyperparameters. For this, it is good to know what the hyperparameters do and also how the other optimization techniques work.
Tedeev A. F. On the behavior of solutions of the Cauchy problem for a degenerate parabolic equation with source in the case where the initial function slowly vanishes
Ukr. Mat. Zh. - 2012. - 64, № 11. - pp. 1500-1515
We study the Cauchy problem for a degenerate parabolic equation with source and inhomogeneous density of the form $$u_t = \text{div}(\rho(x)u^{m-1}|Du|^{\lambda-1}Du) + u^p$$ in the case where the initial function decreases slowly to zero as $|x| \rightarrow \infty$. We establish conditions for the existence and nonexistence of a global-in-time solution, which substantially depend on the behavior of the initial data as $|x| \rightarrow \infty$. In the case of global solvability, we obtain an exact estimate of the solution for large times.
Bilateral estimates for the support of a solution of the Cauchy problem for an anisotropic quasilinear degenerate equation
Ukr. Mat. Zh. - 2006. - 58, № 11. - pp. 1477–1486
We establish exact-order bilateral estimates for the size of the support of a solution of the Cauchy problem for a doubly nonlinear parabolic equation with anisotropic degeneration in the case where the initial data are finite and have finite mass.
Initial-boundary-value problems for quasilinear degenerate hyperbolic equations with damping. Neumann problem
Ukr. Mat. Zh. - 2006. - 58, № 2. - pp. 272–282
We study the behavior of the total mass of the solution of the Neumann problem for a broad class of degenerate parabolic equations with damping in domains with noncompact boundary. New critical indices for the investigated problem are determined.
Theorems on the existence and nonexistence of solutions of the Cauchy problem for degenerate parabolic equations with nonlocal source
Ukr. Mat. Zh. - 2005. - 57, № 11. - pp. 1443–1464
We consider the Cauchy problem for a doubly nonlinear degenerate parabolic equation with nonlocal source under the assumption that the initial function is integrable. We establish conditions for the existence and nonexistence of time-global solutions of the problem.
Ukr. Mat. Zh. - 1998. - 50, № 7. - pp. 1007–1008
Two-sided estimates of a solution of the Neumann problem as $t \rightarrow \infty$ for a second-order quasilinear parabolic equation
Ukr. Mat. Zh. - 1996. - 48, № 7. - pp. 989-998
We establish exact upper and lower bounds as $t \rightarrow \infty$ for the norm $\|u(\cdot, t)\|_{L^{\infty}(\Omega)}$ of a solution of the Neumann problem for a second-order quasilinear parabolic equation in the region $D = \Omega \times \{t > 0\}$, where $\Omega$ is a region with noncompact boundary.
Method for symmetrization and estimation of solutions of the Neumann problem for the equation of a porous medium in domains with noncompact boundary for infinitely increasing time
Ukr. Mat. Zh. - 1995. - 47, № 2. - pp. 147–157
We consider the initial boundary-value Neumann problem for the equation of a porous medium in a domain with noncompact boundary. By using a symmetrization method, we obtain exact $L^p$-estimates, $1 \leq p \leq \infty$, for solutions as $t \rightarrow \infty$.
Qualitative properties of solutions of the Neumann problem for a higher-order quasilinear parabolic equation
Ukr. Mat. Zh. - 1993. - 45, № 11. - pp. 1571–1579
The property of localization of perturbations is proved for a solution of an initial boundary-value Neumann problem in a region $D = \Omega \times \{t > 0\}$, where $\Omega$ is a region in $\mathbb{R}^n$ with a noncompact boundary.
Symmetrization and initial boundary-value problems for certain classes of nonlinear second-order parabolic equations
Ukr. Mat. Zh. - 1993. - 45, № 7. - pp. 884–892
Ukr. Mat. Zh. - 1992. - 44, № 2. - pp. 260–268
Exact embedding theorems of the multiplicative type are established for functions of Sobolev spaces defined in a domain $\Omega \subset \mathbb{R}^n$, $n \geq 2$, whose boundary is not compact. The main condition on the domain is of the isoperimetric type.
Ukr. Mat. Zh. - 1992. - 44, № 2. - pp. 295-297