I have a sequence of $N$ real numbers: $\mathbf{x} = (x_0, x_1, \ldots, x_{N-1})$.
The Discrete Fourier Transform (DFT) is defined as $$ X_k = \sum_{n=0}^{N-1} x_n e^{-i 2\pi k \frac{n}{N}}, \quad (k=0,1,\ldots,N-1). $$
The coefficients $X_0,X_1,\ldots,X_{N-1}$ are in general complex-valued.
How can I change the starting sequence $\mathbf{x}$ so that all of its DFT coefficients $X_k$ are real-valued?
By "change" I mean multiplying each term $x_n$ by some $z_n\in \mathbb{C}$ with $|z_n|=1$, $n=0,1,\ldots,N-1$.
Thanks!
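For orientation: a standard fact is that the DFT output is real-valued exactly when the input is conjugate-even, $y_n = \overline{y_{(N-n)\bmod N}}$, so the question amounts to asking when unit-modulus multipliers $z_n$ can enforce this symmetry on $z_n x_n$ (which, in particular, requires $|x_n| = |x_{(N-n)\bmod N}|$). A minimal numerical check of the symmetry itself; the `dft` helper and the random test sequence are my own illustration, not from the question:

```python
import cmath
import random

def dft(x):
    """Plain O(N^2) DFT with the sign convention from the question."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# Build a random conjugate-even sequence y[n] = conj(y[(N - n) % N]).
random.seed(1)
N = 8
y = [0j] * N
y[0] = complex(random.random())          # y[0] must be real
for n in range(1, N // 2):
    c = complex(random.random(), random.random())
    y[n], y[N - n] = c, c.conjugate()
y[N // 2] = complex(random.random())     # for even N, y[N/2] must be real

Y = dft(y)
print(max(abs(v.imag) for v in Y))  # ≈ 0: all DFT coefficients are real
```
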
Does dimensional regularization (with counterterm renormalization) give rise to running coupling constants, as other regularization methods do?
Yes. In Dimensional Regularization (DR) schemes you always introduce a scale $\mu$, for consistency of (mass!) dimensional analysis. Renormalized couplings depend on this scale $\mu$ as dictated by the RG equations: $$\mu \frac{\text d}{\text d \mu} g^i = \beta ^i(g).$$
In practice, the scale $\mu$ is introduced by requiring the action to be dimensionless. Take for instance massless $\phi ^4$ theory: $$\mathscr L = \frac{1}{2}(\partial \phi_0)^2-\frac{g_0}{4!}\phi_0 ^4,$$ where the subscript $0$ denotes the bare couplings. In $d=4-\epsilon$ dimensions, you can easily see that $g_0$ has mass dimension $$[g_0]=\epsilon.$$ You can define a dimensionless renormalized coupling $g$ by $$g_0 (\epsilon)= Z_g(\mu ,\epsilon) g(\mu,\epsilon) \mu ^{\epsilon}.$$ By requiring $g_0$ to be independent of $\mu$, you can derive the RG equation satisfied by $g(\mu,\epsilon)$ (in arbitrary space-time dimension, and in particular in the limit $\epsilon \to 0$).
In the above example you are somehow forced to introduce a new parameter $\mu$, but the same procedure can be applied if your original four-dimensional theory already contains some mass scale at the classical level. For instance, if the scalar $\phi _0$ had physical mass $m$, you could just as well define $$g_0 = Z g m^{\epsilon},$$ without having to introduce a new scale $\mu$. This is perfectly consistent with dimensional analysis, but it is also less useful in practice, because it does not allow you to tame "large logs" by a clever choice of $\mu$.
And if this is so, does every renormalization scheme imply running coupling constants?
As I hope is clear from the above discussion, a running coupling is entirely a matter of definition: you can do without it in dimensional regularization by simply fixing the scale once and for all.
However, in modern particle physics the phenomena of interest range from the $\text {GeV}$ scale of hadronic physics to the $10^{19}\,\text {GeV}$ scale of quantum gravity (?). In this context, using a running coupling allows you to trust the results of leading-order computations without worrying about large logarithms.
So I would dare to say that every useful renormalization scheme implies running couplings.
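As an illustration of the running, here is a sketch integrating the one-loop RG equation of massless $\phi^4$ theory, $\mu\,\mathrm{d}g/\mathrm{d}\mu = 3g^2/(16\pi^2)$ (the standard one-loop result in the $g\phi^4/4!$ convention), and comparing against its closed-form solution. The initial value and step count are arbitrary choices of mine:

```python
import math

def beta(g):
    """One-loop beta function of massless phi^4: mu dg/dmu = 3 g^2 / (16 pi^2)."""
    return 3.0 * g**2 / (16.0 * math.pi**2)

def run_coupling(g0, t_final, steps=10_000):
    """Integrate dg/dt = beta(g), t = ln(mu/mu0), with classical RK4."""
    g, dt = g0, t_final / steps
    for _ in range(steps):
        k1 = beta(g)
        k2 = beta(g + 0.5 * dt * k1)
        k3 = beta(g + 0.5 * dt * k2)
        k4 = beta(g + dt * k3)
        g += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return g

g0, t = 0.5, 10.0              # g(mu0) = 0.5, run up in scale by a factor e^10
g_num = run_coupling(g0, t)

# Closed-form solution of the one-loop equation, for comparison:
# g(t) = g0 / (1 - 3 g0 t / (16 pi^2))
g_exact = g0 / (1.0 - 3.0 * g0 * t / (16.0 * math.pi**2))
print(g_num, g_exact)          # the coupling grows toward the UV
```
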
Edit
It seems that I haven't written my question clearly enough, so I will try to elaborate using the example of quantum tunnelling. As a disclaimer: my question is not about how to perform a Wick rotation in the path integral formulation!
Let's look at the probability of quantum tunnelling in the path integral formulation. The potential is $V[x(t)]=\frac{1}{2}\left(x(t)^2-1\right)^2$, which has two minima at $x=\pm x_m=\pm1$. Given that the particle starts at $x=-x_m$ at $t=-\infty$, what is the probability that it is at $x=x_m$ at $t=\infty$? The probability amplitude is given by
$$K(x_m,-x_m,t)=\langle x_m|e^{-i \hat H t}|-x_m \rangle$$
The usual trick is to Wick rotate $t\to-i\tau$, compute everything in imaginary time using a saddle point approximation and at the end of the calculation rotate back to real time. I understand how it works. No problem with that.
What I want to understand is
How can I do the calculation without using the Wick rotation, and how does this solution connect to the Euclidean formulation?
In principle, we should be able to do the calculation with the path integral formulation in real time
$$\int Dx(t) e^{i S[x(t)]/\hbar}$$
In the stationary phase approximation we look for a (possibly complex) path $x(t)$ that makes the action stationary, and expand about it.
Choose $m =1$ for simplicity. The equation of motion is
$$\ddot x-2 x+2x^3=0$$
which has no real solution, i.e. no Newtonian (classical) solution. But there is a complex function that solves it: $x_s(t)=i \,\tan(t)$. One problem is that it behaves pretty badly. If I nevertheless accept this as a correct solution, I should be able to compute the Gaussian fluctuations, add up all the kinks/antikinks, etc., and recover the correct result (usually obtained with the Euclidean action and $\tau\to -it$). Am I right?
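As a quick sanity check that $x_s(t)= i\tan(t)$ really does solve the real-time equation of motion, one can evaluate the residual of $\ddot x - 2x + 2x^3$ with complex arithmetic and a finite-difference second derivative (the step size and sample points below are arbitrary choices of mine):

```python
import cmath

def x(t):
    """Candidate complex solution from the question: x_s(t) = i tan(t)."""
    return 1j * cmath.tan(t)

# Residual of the equation of motion  x'' - 2x + 2x^3 = 0,
# with x'' approximated by a central finite difference.
h = 1e-5
residuals = []
for t in (0.3, 0.7, 1.2):   # sample points away from the poles of tan
    xpp = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
    residuals.append(abs(xpp - 2 * x(t) + 2 * x(t)**3))
print(residuals)  # all ≈ 0, up to finite-difference error
```
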
So my question is: is it possible to do the calculation that way, and if so, how is it related to the trick of going back and forth in imaginary time?
Original
I have a question on the mathematical meaning of the Wick rotation in path integrals, as it is used to compute, for instance, the probability of tunnelling through a barrier (using instantons).
I am aware that when computing an ordinary integral with the stationary phase approximation,
$\int dx\, e^{i S(x)/\hbar}$
with $x$ and $S$ real, one should look at the stationary points of $S(z)$ in the whole complex plane, which can lie, for instance, on the imaginary axis.
In the case of a path integral, one wants to compute
$\int Dx(t) e^{i S[x(t)]/\hbar}$
and there is a priori no reason that the "classical path" from $x_a(t_a)$ to $x_b(t_b)$ (i.e. the one that makes $S[x(t)]$ stationary) should lie on the real axis. I have no problem with that. What I don't really get is the meaning of the Wick rotation $t\to -i\tau$ from a (layman's) mathematical point of view: it is not as if the function $x(t)$ were taken to be imaginary (say, $x(t)\to i x(t)$); rather, it is its variable that we change!
In particular, if I discretize the path-integral (which is what one should do to make sense of it), I obtain
$\int \prod_n d x_n e^{i S(\{x_n\})/\hbar}$.
where $S(\{x_n\})=\Delta t\sum_n\Big\{ (\frac{x_{n+1}-x_n}{\Delta t})^2-V(x_n)\Big\}$
At this level, the Wick rotation acts on the time slice, $\Delta t\to -i\Delta \tau$, and does not seem to be a meaningful change of variables in the integral.
I understand that if I start with an evolution operator $e^{-\tau \hat H/\hbar}$ I will get the Euclidean path integral directly, but that seems like a roundabout argument.
The question is: is it mathematically meaningful to do the Wick rotation directly at the level of the path integral, especially when it is discretized?
Genus 2 curves in isogeny class 169.a
Label Equation 169.a.169.1 \(y^2 + (x^3 + x + 1)y = x^5 + x^4\) L-function data
Analytic rank: \(0\) Bad L-factors:
Good L-factors:
See L-function page for more information
\(\mathrm{ST} =\) $E_6$, \(\quad \mathrm{ST}^0 = \mathrm{SU}(2)\)
Of \(\GL_2\)-type over \(\Q\)
Smallest field over which all endomorphisms are defined:
Galois number field \(K = \Q (a) \simeq \) \(\Q(\zeta_{13})^+\) with defining polynomial \(x^{6} - x^{5} - 5 x^{4} + 4 x^{3} + 6 x^{2} - 3 x - 1\)
\(\End (J_{\overline{\Q}}) \otimes \Q \) \(\simeq\) \(\mathrm{M}_2(\)\(\Q\)\()\) \(\End (J_{\overline{\Q}}) \otimes \R\) \(\simeq\) \(\mathrm{M}_2 (\R)\)
More complete information on endomorphism algebras and rings can be found on the pages of the individual curves in the isogeny class.
I can't seem to figure this one out by thinking it through. Let's say the simple return $R_t=P_{t+1}/P_t -1$ is assumed to be $R_t \sim \mathrm{iid}\; N(0,\sigma^2)$. Then a two-period return is $(1+R_t)(1+R_{t+1})-1$. Would the variance of the two-period return be equal to $2\sigma^2 + \sigma^4$?
$$\mathrm{Var}((1+R_t)(1+R_{t+1})-1)=\mathrm{Var}(1+R_{t+1}+R_t+R_tR_{t+1})$$ $$ = 2\sigma^2 +\mathrm{Var}(R_tR_{t+1}) = 2\sigma^2 + \sigma^4,$$ since the variance of the product of two independent random variables (each with mean $\mu=0$) is the product of their variances, and the cross terms vanish because, e.g., $E[R_t \cdot R_tR_{t+1}] = E[R_t^2]E[R_{t+1}] = 0$.
Under log returns, returns become additive, and the two-period return is $\log(1+R_t)+\log(1+R_{t+1})$, with variance
$$\mathrm{Var}(\log(1+R_t)+\log(1+R_{t+1})) = \mathrm{Var}(\log(1+R_t))+\mathrm{Var}(\log(1+R_{t+1}))=\sigma^2 + \sigma^2.$$
Am I missing anything here?
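A quick Monte Carlo check of the two-period variance formula for simple returns (the sample size and the value of $\sigma$ are arbitrary choices of mine):

```python
import random

random.seed(42)
sigma = 0.1
n = 200_000

# Simulate two independent one-period returns and form the two-period return
samples = []
for _ in range(n):
    r1 = random.gauss(0.0, sigma)
    r2 = random.gauss(0.0, sigma)
    samples.append((1 + r1) * (1 + r2) - 1)

mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / (n - 1)

theory = 2 * sigma**2 + sigma**4   # = 0.0201 for sigma = 0.1
print(var, theory)                 # the two agree to sampling error
```
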
Hello, I need help determining whether the map I defined between two algebras is a well-defined homomorphism of $C^\ast$-algebras. I ran into this problem while trying to define a "rotation map" between algebras.
Here are the notations:
- $S=C_0(\mathbb{R})$, viewed as a $\mathbb{Z}_2$-graded algebra (odd and even functions corresponding exactly to odd and even grading degree).
- $V, W$ are finite-dimensional real Euclidean spaces.
- $Cliff(V)$ is the complexified Clifford algebra over $V$.
- $C_\tau(V)=C_0(V, Cliff(V))$.
- All the $\otimes$ you see here actually mean "graded tensor product" (can somebody tell me how to type that in LaTeX?), with which we have the well-known identities $Cliff(V\oplus W)=Cliff(V)\otimes Cliff(W)$ and $C_\tau(V\oplus W)=C_\tau(V)\otimes C_\tau(W)$.
- $\Psi \colon V \to Cliff(V)$ maps a vector to the corresponding degree-1 element of the Clifford algebra.
- $A(V)$ is the subalgebra of $S\otimes C_\tau(V)$ generated by elements of the form $f(s\otimes1+1\otimes \Psi(v-a))$ for some vector $a \in V$ and $f(s)\in S$ (in technical terms, this is a Bott-type element with center at $a$). Writing $f(s) = f_0(s^2) + sf_1(s^2)$ (even and odd parts), this means $f(s\otimes1+1\otimes \Psi(v-a))=f_0(s^2+||v-a||^2)+sf_1(s^2+||v-a||^2)+\Psi(v-a)f_1(s^2+||v-a||^2)$.

My question: Is the following map
$\beta_t \colon A(V) \to A(V \oplus W)$, $f(s \otimes 1 + 1 \otimes \Psi(x-a)) \mapsto f(s \otimes 1 \otimes 1 + 1 \otimes \Psi(x_1 - \cos t \cdot a) \otimes 1 + 1 \otimes 1 \otimes \Psi(x_2 - \sin t \cdot a))$
a homomorphism of $C^\ast$-algebras?
My first concern is: I have only defined the map on generators, so how do I know it actually extends to a well-defined map on the whole algebra (or at least on the dense subalgebra generated by the generators)? Thank you!
Gravitational waves are strictly transverse (at least in the linear regime), and their amplitudes are tiny even for cosmic-scale events like supernovae or binary black holes (at least far away; maybe we should ask some physicists located a bit closer to the center of the galaxy). But let's put all those facts aside for a second and consider a gravitational source big enough to generate gravitational waves with amplitudes on the order of the galaxy. For instance, consider a planar wave like in my mediocre drawing:
$$ h_{\alpha \beta} e^{i (k_{y} y - \omega t)} $$
where
$$ h_{\alpha \beta} \approx 1 $$
so the perturbation is in the nonlinear regime.
I drew two far-away objects in three different time slices (this is why they are repeated three times): the topmost shows the objects without the gravitational wave, the middle one shows the objects at the crest of the gravitational wave, and the bottom one shows the objects in the valley of the wave.
So my point is that people would only have to travel an arbitrarily small distance when the wave is in the valley (assuming circular polarization), even if the "normal" distance (i.e. with $h_{\mu \nu} = 0$) is several light-years.
Besides it being slightly impractical to set up such a mammoth gravitational source, is this kind of warp drive valid from a physical standpoint? Are there any physical limits to gravitational-wave amplitudes in such a nonlinear regime?
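To make the crest/valley statement quantitative in the simplest setting: for a "+"-polarized plane wave in TT gauge, $ds^2 = -dt^2 + (1+h)\,dx^2 + (1-h)\,dy^2 + dz^2$, the proper distance between two objects at fixed coordinates $x=0$ and $x=L$ is $\ell = L\sqrt{1+h}$. This metric is only a solution of the linearized equations, so plugging in $h \sim 1$ below is purely illustrative, in the spirit of the question:

```python
import math

def proper_length(L, h):
    """Proper distance along x between fixed-coordinate objects,
    for instantaneous '+' strain h (linearized TT-gauge metric)."""
    return L * math.sqrt(1.0 + h)

L = 10.0                        # "normal" coordinate separation, arbitrary units
for h in (0.0, 0.5, -0.5, -0.99):
    # in the "valley" (h close to -1) the proper distance shrinks drastically
    print(h, proper_length(L, h))
```
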
So I have been working (as an undergrad; by "working" I mean redoing a few things my professor does) on an SIRS model for epidemics. SIRS here stands for:
Susceptible -> Infected -> Recovered -> Susceptible.
So, the system of first-order differential equations that rules the model is: $$ S'(t) = - \beta I(t) S(t) + \mu R(t)$$ $$ I'(t) = \beta I(t) S(t) - \gamma I(t)$$ $$ R'(t) = \gamma I(t) - \mu R(t)$$ where $R'(t)$ stands for the derivative of $R$ with respect to time $t$ (and likewise for $I$ and $S$), and $\gamma, \beta$ and $\mu$ are constants that depend on the properties of the disease (whether it is very contagious, whether it has a high chance of killing you, and so on).
So I intended to solve these equations using Euler's method in Python 3; here is my code:

    while t < 140:
        Sold = S
        Iold = I
        S = S + dt*(u*R - B*I*S)
        I = I + dt*(B*I*S - G*I)
        R = R + dt*(G*I - u*R)
        t = t + dt
        Arq.write("{} {} {} {} {} \n".format(t, S, I, R, R + I + S))
Notice that there is a definition of old variables. I was using them to save the previous value of each function before using it in the other equations (i.e., instead of updating $R$ using the new value of $I$, use the old value of $I$ from before it was updated in the line above). I stopped using the old variables and nothing changed, graphically speaking. If you want the code that uses the old variables, here it is:

    while t < 140:
        Sold = S
        Iold = I
        S = S + dt*(u*R - B*I*S)
        I = I + dt*(B*I*Sold - G*I)
        R = R + dt*(G*Iold - u*R)
        t = t + dt
        Arq.write("{} {} {} {} {} \n".format(t, S, I, R, R + I + S))
Okay, so let's now talk about what is happening. I'm using a "normalized" population, i.e., $S + I + R = 1$ at all times. I chose arbitrary values between 0 and 1 for the constants $\mu, \gamma$ and $\beta$, and set the following initial values: $I(0) = 0.1$, $R(0) = 0.05$ and $S(0) = 0.85$. I got the following result, in a graph of population $\times$ time:
(I'm sorry, I don't know how to reduce the image size here.) Now I'll change the initial values to very different ones: $S(0) = 0.20$, $R(0) = 0.55$, and $I(0) = 0.25$. Here is the result:
And what we can see here is that, despite the initial values being very different, the two cases seem to converge to the same values of the three populations. I won't put other images here, otherwise this question would be a kilometer long, but none of the initial conditions I tried changed the result (or didn't seem to). Why is this? Is my model correct? If not, what is the problem with it? If it is correct, can this be explained mathematically? This is a model that tries to simulate diseases. Are there any examples of a non-killing disease that had (or has) this property?
Thanks in advance. (I would also appreciate an edit that adds syntax highlighting to the code and reduces the image size, especially if whoever did it explained how! :))
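The convergence can indeed be explained mathematically: setting $S'=I'=R'=0$ with $S+I+R=1$ gives a unique endemic equilibrium $S^*=\gamma/\beta$, $I^*=(1-\gamma/\beta)/(1+\gamma/\mu)$, $R^*=(\gamma/\mu)I^*$, which attracts trajectories whenever $\beta/\gamma>1$. A self-contained sketch of the Euler iteration confirming that two very different initial conditions land on this fixed point; the parameter values are my own arbitrary picks, not the ones from the question:

```python
# Parameters: arbitrary values in (0, 1), in the spirit of the question
beta, gamma, mu = 0.5, 0.2, 0.1
dt, T = 0.01, 400.0

def simulate(S, I, R):
    """Euler's method for the SIRS system, updating all three compartments
    simultaneously (so S + I + R is conserved exactly at every step)."""
    t = 0.0
    while t < T:
        S, I, R = (S + dt * (mu * R - beta * I * S),
                   I + dt * (beta * I * S - gamma * I),
                   R + dt * (gamma * I - mu * R))
        t += dt
    return S, I, R

# Two very different initial conditions (both with S + I + R = 1)
a = simulate(0.85, 0.10, 0.05)
b = simulate(0.20, 0.25, 0.55)

# Endemic equilibrium from S' = I' = R' = 0 and S + I + R = 1
S_star = gamma / beta                     # = 0.4
I_star = (1 - S_star) / (1 + gamma / mu)  # = 0.2
R_star = (gamma / mu) * I_star            # = 0.4
print(a, b, (S_star, I_star, R_star))     # both runs end at the equilibrium
```
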
I've been trying to convince myself that "coherent sheaf" is a natural definition. One way I might be satisfied is the following: for modules over a Noetherian ring $A$, coherent and finitely presented modules agree. For quasicoherent sheaves over a locally Noetherian scheme $X$, coherent and locally finitely presented sheaves agree. In general, coherent sheaves are locally finitely presented, and hence they pull back along morphisms $f : X \to Y$ where $X$ is locally Noetherian.
Is this already enough information to tell me what coherent sheaves must be?
More precisely, if $Y$ is a scheme, let $N_Y$ be the category whose objects are pairs $(X, f)$ of a locally Noetherian scheme and a morphism $f : X \to Y$ and whose morphisms are commuting triangles. The
category of coherent sheaves $\text{Coh}(N_Y)$ on $N_Y$ is the category whose objects are assignments, to each $(X, f) \in N_Y$, of a coherent sheaf $F_X \in \text{Coh}(X)$ and assignments, to each morphism $g : (X_1, f_1) \to (X_2, f_2)$ in $N_Y$, of an isomorphism
$$F_{X_1} \cong g^{\ast} F_{X_2}$$
satisfying the obvious compatibility condition, with the obvious notion of morphism. Since coherent sheaves are locally finitely presented, they pull back to locally finitely presented sheaves in a way satisfying the obvious compatibility conditions, so there is a natural functor
$$\text{Coh}(Y) \to \text{Coh}(N_Y)$$
given by taking pullbacks along all morphisms $f : X \to Y$ where $X$ is locally Noetherian. The more precise version of my question is:
Is this functor an equivalence?
I think there is a slicker way to ask this using descent and another slicker way to ask this using Kan extensions, but I'll refrain from both to be on the safe side. If the above is true, I'd also be interested in knowing to what extent I can restrict "locally Noetherian" to a smaller subcategory. Does it suffice to use Noetherian schemes? Affine Noetherian schemes? Over an affine Noetherian base $\text{Spec } S$, does it suffice to use $\text{Spec } R$ where $R$ is finitely generated over $S$?
According to my understanding, the derivation of the Black-Scholes PDE is based on the assumption that the price of the option changes in time in such a way that it is possible to construct a self-financing portfolio whose price replicates the price of the option (within a very small time interval). And my question is: why do we assume that the price of the option has this property?
I will explain myself in more detail. First, we assume that the price of a call option $C$ depends on the price of the underlying stock $S$ and on time $t$. Then we use Itô's lemma to get the following expression:
$d C = (\frac{\partial C}{\partial t} + S\mu\frac{\partial C}{\partial S} + \frac{1}{2}S^2 \sigma^2 \frac{\partial^2C}{\partial S^2}) dt + \sigma S \frac{\partial C}{\partial S} dW$ (1) ,
where $\mu$ and $\sigma$ are parameters which determine the time evolution of the stock price:
$dS = S(\mu dt + \sigma dW)$ (2)
Now we construct a self-financing portfolio which consists of $\omega_s$ shares of the underlying stock and $\omega_b$ units of a bond. Since the portfolio is self-financing, its price $P$ changes as:
$dP(t) = \omega_s dS(t) + \omega_b dB(t)$. (3)
Now we require that $P=C$ and $dP = dC$. That is, we want to find $\omega_s$ and $\omega_b$ such that the portfolio has the same price as the option and its change in price equals the change in price of the option. OK, why not? If we want such a portfolio, we can construct it. These special requirements on its price and on the change of its price fix its composition (i.e., they fix the portions of stock and bond in the portfolio, $\omega_s$ and $\omega_b$).
If we substitute (2) into (3), use the fact that $dB = rBdt$, and equate the result with (1), we get:
$\frac{\partial C}{\partial t} + S\mu\frac{\partial C}{\partial S} + \frac{1}{2}S^2 \sigma^2 \frac{\partial^2C}{\partial S^2} = \omega_s S \mu + \omega_b r B$ (4)
and
$\sigma S \frac{\partial C}{\partial S} = \omega_s S \sigma$ (5)
From last equation we can determine $\omega_s$:
$\omega_s = \frac{\partial C}{\partial S}$ (6)
So, we know the portion of the stock in the portfolio. Since we also know the price of the portfolio (it is equal to the price of the option), we can also determine the portion of the bond in the portfolio ($\omega_b$).
Now, if we substitute the found $\omega_s$ and $\omega_b$ into the (4) we will get an expression which binds $\frac{\partial C}{\partial t}$, $\frac{\partial C}{\partial S}$, and $\frac{\partial^2 C}{\partial S^2}$:
$\frac{\partial C}{\partial t} + rS \frac{\partial C}{\partial S} + \frac{1}{2} \sigma^2 S^2 \frac{\partial^2 C}{\partial S^2} = rC$
This is nothing but the Black-Scholes PDE.
What I do not understand is which requirement binds the derivatives of $C$ with respect to $S$ and $t$.
In other words, we apply certain requirements (restrictions) to our portfolio (it should follow the price of the option). As a consequence, we restrict the content of our portfolio (we fix $\omega_s$ and $\omega_b$). But we do not apply any requirement to the price of the option. Well, we say that it should be a function of $S$ and $t$; as a consequence, we get equation (1). But from that alone we get no relation between the derivatives of $C$. We also constructed a replicating portfolio, but why should its existence restrict the evolution of the price of the option?
It looks to me that the requirement that I am missing is the following:
The price of the option should depend on $S$ and $t$ in such a way that it should be possible to create a self-financing portfolio which replicates the price of the option.
Am I right? Do we have this requirement? And do we get the Black-Scholes PDE from it? If so, can anybody please explain where this requirement comes from?
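The requirement can also be seen operationally: if one holds $\omega_s=\partial C/\partial S$ shares at all times and keeps the rest in the bond, the self-financing portfolio tracks the option payoff, and demanding that this tracking works is exactly what forces $C(S,t)$ to satisfy the PDE. A minimal Monte Carlo sketch of this delta-hedging replication; all parameter values, and the `bs_call`/`bs_delta` helpers, are my own illustrative choices:

```python
import math
import random

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, tau):
    """Black-Scholes price of a European call with time to maturity tau."""
    if tau <= 0:
        return max(S - K, 0.0)
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return S * norm_cdf(d1) - K * math.exp(-r * tau) * norm_cdf(d2)

def bs_delta(S, K, r, sigma, tau):
    """omega_s = dC/dS = N(d1)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    return norm_cdf(d1)

random.seed(0)
S0, K, r, sigma, T, N = 100.0, 100.0, 0.05, 0.2, 1.0, 10_000
dt = T / N

S = S0
cash = bs_call(S, K, r, sigma, T)      # start with the option premium
shares = bs_delta(S, K, r, sigma, T)   # omega_s = dC/dS
cash -= shares * S                     # the rest sits in the bond (here: borrowed)

for n in range(N):
    # GBM step for the stock (risk-neutral drift, for convenience;
    # replication works path by path under any drift)
    S *= math.exp((r - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1))
    cash *= math.exp(r * dt)           # bond position grows at rate r
    tau = T - (n + 1) * dt
    if tau > 0:
        new_shares = bs_delta(S, K, r, sigma, tau)
        cash -= (new_shares - shares) * S   # self-financing rebalance
        shares = new_shares

portfolio = shares * S + cash
payoff = max(S - K, 0.0)
print(portfolio, payoff)  # the two agree up to discrete-hedging error
```
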
I am trying to draw a fairly simple scene with TikZ.
The issue I have is with defining the end points on the half circle. I tried to implement an algorithm for intersection detection; pseudocode can be found at Circle-Line intersection. However, it does not work as it should. In addition, it does not compile if I use the \ifthenelse clause.
Any suggestions on how to get this to work?
\documentclass[11pt]{article}
\usepackage{tikz}
\usepackage{ifthen}
\usepackage{graphics, tkz-berge, tkz-graph}
%%%<
\usepackage{verbatim}
\usepackage[active,tightpage]{preview}
\PreviewEnvironment{tikzpicture}
\setlength\PreviewBorder{5pt}
%%%%>
\tikzset{isometricXYZ/.style={x={(-0.866cm,-0.5cm)}, y={(0.866cm,-0.5cm)}, z={(0cm,1cm)}}}
%% document-wide tikz options and styles
\begin{document}
\begin{tikzpicture}[scale=4, line join=round, opacity=.75, fill opacity=.35, text opacity=1.0,%
    >=latex, inner sep=0pt,%
    outer sep=2pt,%
  ]
  % First argument is a ray angle, second argument is an offset along x-axis.
  \newcommand{\ray}[2]{
    \def\r{1}   % sphere radius
    \def\l{2}   % line length
    \def\xc{#2} % offset
    % Sphere center
    \def\Cx{0}
    \def\Cy{0}
    % Ray start
    \def\Ex{(\xc + (\l*cos(#1)))}
    \def\Ey{(\l*sin(#1))}
    % Ray end
    \def\Lx{\xc}
    \def\Ly{0}
    % Vector from ray start to end
    \def\dx{(\Lx -\Ex)}
    \def\dy{(\Ly -\Ey)}
    % Vector from ray start to sphere center
    \def\fx{(\Ex - \Cx)}
    \def\fy{(\Ey - \Cy)}
    % solve eq
    \def\a{(\dx * \dx + \dy * \dy)}
    \def\b{(2 * \fx * \dx + \fy * \dy)}
    \def\c{(\fx * \fx + \fy * \fy - \r * \r)}
    \def\discriminant{(\b*\b - 4*\a*\c)}
    \ifthenelse{{\discriminant} < 0}
    {
      \def\endc{(\xc, 0)}
    }
    {
      \def\sqdiscriminant{sqrt(\discriminant)}
      \def\t{(-\b +\sqdiscriminant)/(2*\a)}
      \def\endc{({\Ex + \t*\dx}, {\Ey + \t*\dy})}
    }
    \def\startc{({\Ex}, {\Ey})}
    \draw [->] \startc -- \endc;
  }
  \draw[fill=gray, fill opacity=0.2] (1, 0) arc (0:180:1);
  \draw [dotted] (0, 0) -- ({cos(30)}, {sin(30)}) node[above right] {$\alpha_1$};
  \draw [dotted] (0, 0) -- ({cos(150)}, {sin(150)}) node[above left] {$\alpha_2$};
  \draw [dotted] (0, 0) -- (0, 1.1);
  \node[below right] (halfpi) at (1, 0) {$\frac{\pi}{2}$};
  \node[below left] (minushalfpi) at (-1, 0) {$-\frac{\pi}{2}$};
  \foreach \x in {2} {
    \ray{55}{\x}
  };
\end{tikzpicture}
\end{document}
I tested both solutions below, but in both cases I get some intersections picked up incorrectly. I guess that is because I take the first intersection in all cases, which is unfortunately not always the right one. Is there a way to always choose the closest, not the first, intersection?
P.P.S. :) Never mind, I fixed it by reversing the path direction.
Search
Now showing items 1-10 of 18
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2014-06)
The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2014-01)
In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2014-01)
The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...
Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2014-03)
A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...
Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider
(American Physical Society, 2014-02-26)
Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...
Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV
(American Physical Society, 2014-12-05)
We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
Let $(\mathcal{M},E)$ be an internally non-well-founded model of set theory, i.e. of $ZFC^{\neg f}=ZFC\setminus \mathrm{foundation}+\neg \mathrm{foundation}$; then $\mathcal{M}$ includes an infinite decreasing $E$-sequence. I am interested in what the literature says about how pathological internally non-well-founded models can be. For example, we see there are many models of $ZF$ satisfying various versions of $AC$ that are really distinct. My question is:
$*$) What is the difference between internally non-well-founded models, in the sense of the axiom of foundation? I mean, how does the existence of different decreasing sequences in different models affect their universes? Does a particular sequence capture some interesting property of its model that is not necessarily satisfied by all internally non-well-founded models?
I am also interested in finding the answer to the following question.
$\bigstar$) Is any of the following statements true?
$(\rm{I})~~~~$ Working in $V$, for any infinite ordinal $\beta$, there exists a model $(\mathcal{M},E)$ of $ZFC^{\neg f}$ with $Ord(\mathcal{M})=\beta$ such that $\mathcal{M}$ contains a decreasing $E$-sequence of length $\beta$.
$(\rm{II})~~~$ Working in $V$, for any infinite ordinal $\beta$, there exists a model $(\mathcal{M},E)$ of $ZFC^{\neg f}$ with $Ord({\mathcal{M}})=\beta$ such that for any $\alpha<\beta$, $\mathcal{M}$ has a decreasing $E$-sequence of length $\alpha$.
Clearly $\rm{I}\longrightarrow\rm{II}$.
Edit: I had thought that the concepts of an ill-founded model and an internally ill-founded model were the same, but Prof. Enayat and William informed me of the difference in the comments below. The question $(\bigstar)$, answered by Prof. Enayat, stems from question $*$; I thought $\bigstar$ might show me some different pictures of non-well-founded models.
Now showing items 1-9 of 9
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV
(Springer, 2012-10)
The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...
Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV
(Springer, 2012-09)
Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...
Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ...
Measurement of prompt J/psi and beauty hadron production cross sections at mid-rapidity in pp collisions at root s=7 TeV
(Springer-verlag, 2012-11)
The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at s√=7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity Lint = 5.6 nb−1. The fraction ...
Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV
(Springer, 2012-09)
The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at √sNN = 2.76 TeV down to pt = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ...
Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ...
Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-03)
The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ...
Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at √sNN = 2.76 TeV
(American Physical Society, 2012-12)
The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
Now showing items 1-10 of 25
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV
(American Physical Society, 2017-06)
The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...
Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Springer, 2017-06)
The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...
Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC
(Springer, 2017-01)
The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...
Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC
(Springer, 2017-06)
We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ... |
In geometry,
Descartes' theorem states that for every four kissing, or mutually tangent, circles, the radii of the circles satisfy a certain quadratic equation. By solving this equation, one can construct a fourth circle tangent to three given, mutually tangent circles. The theorem is named after René Descartes, who stated it in 1643.
Contents
1 History
2 Definition of curvature
3 Special cases
4 Complex Descartes theorem
5 Generalizations
6 See also
7 Notes
8 External links

History
Geometrical problems involving tangent circles have been pondered for millennia. In ancient Greece of the third century BC, Apollonius of Perga devoted an entire book to the topic.
René Descartes discussed the problem briefly in 1643, in a letter to Princess Elisabeth of the Palatinate. He came up with essentially the same solution as given in
equation (1) below, and thus attached his name to the theorem.
Frederick Soddy rediscovered the equation in 1936. The kissing circles in this problem are sometimes known as Soddy circles, perhaps because Soddy chose to publish his version of the theorem in the form of a poem titled The Kiss Precise, which was printed in Nature (June 20, 1936). Soddy also extended the theorem to spheres; Thorold Gosset extended the theorem to arbitrary dimensions.

Definition of curvature
Kissing circles. Given three mutually tangent circles (black), what radius can a fourth tangent circle have? There are in general two possible answers (red).
Descartes' theorem is most easily stated in terms of the circles' curvatures. The curvature (or bend) of a circle is defined as k = ±1/r, where r is its radius. The larger a circle, the smaller is the magnitude of its curvature, and vice versa.

The plus sign in k = ±1/r applies to a circle that is externally tangent to the other circles, like the three black circles in the image. For an internally tangent circle like the big red circle, that circumscribes the other circles, the minus sign applies.
If a straight line is considered a degenerate circle with zero curvature (and thus infinite radius), Descartes' theorem also applies to a line and two circles that are all three mutually tangent, giving the radius of a third circle tangent to the other two circles and the line.
If four circles are tangent to each other at six distinct points, and the circles have curvatures k_i (for i = 1, ..., 4), Descartes' theorem says:
(k_1+k_2+k_3+k_4)^2=2\,(k_1^2+k_2^2+k_3^2+k_4^2).
(1)
When trying to find the radius of a fourth circle tangent to three given kissing circles, the equation is best rewritten as:
k_4 = k_1 + k_2 + k_3 \pm2 \sqrt{k_1 k_2 + k_2 k_3 + k_3 k_1}. \,
(2)
The ± sign reflects the fact that there are in general two solutions. Ignoring the degenerate case of a straight line, one solution is positive and the other is either positive or negative; if negative, it represents a circle that circumscribes the first three (as shown in the diagram above).
Other criteria may favor one solution over the other in any given problem.
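Equation (2) is easy to evaluate numerically. The following sketch (illustrative code, not part of the article) computes both solutions for three mutually tangent unit circles:

```python
from math import sqrt

def descartes_k4(k1, k2, k3):
    """Both solutions of equation (2) for the fourth curvature."""
    s = k1 + k2 + k3
    root = 2 * sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    return s + root, s - root

# Three mutually tangent unit circles (k = 1 each): the '+' root is the
# small circle nestled between them; the '-' root is negative here and
# therefore describes the large circumscribing circle.
k4_plus, k4_minus = descartes_k4(1, 1, 1)  # 3 ± 2*sqrt(3)
```

Both roots can be checked against equation (1) directly.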
Special cases
One of the circles is replaced by a straight line of zero curvature. Descartes' theorem still applies.
Here, as all three circles are tangent to each other at the same point, Descartes' theorem does not apply.
If one of the three circles is replaced by a straight line, then one k_i, say k_3, is zero and drops out of equation (1). Equation (2) then becomes much simpler:
k_4=k_1+k_2\pm2\sqrt{k_1k_2}.
(3)
If two circles are replaced by lines, the tangency between the two replaced circles becomes a parallelism between their two replacement lines. For all four curves to remain mutually tangent, the other two circles must be congruent. In this case, with k_2 = k_3 = 0, equation (2) is reduced to the trivial k_4 = k_1.
It is not possible to replace three circles by lines, as it is not possible for three lines and one circle to be mutually tangent. Descartes' theorem does not apply when all four circles are tangent to each other at the same point.
Another special case is when the k_i are squares,

(v^2+x^2+y^2+z^2)^2=2\,(v^4+x^4+y^4+z^4)
Euler showed that this is equivalent to the simultaneous triplet of Pythagorean triples,
(2vx)^2+(2yz)^2 = (v^2+x^2-y^2-z^2)^2
(2vy)^2+(2xz)^2 = (v^2-x^2+y^2-z^2)^2
(2vz)^2+(2xy)^2 = (v^2-x^2-y^2+z^2)^2
and can be given a parametric solution. When the minus sign of a curvature is chosen,
(-v^2+x^2+y^2+z^2)^2=2\,(v^4+x^4+y^4+z^4)
this can be solved[1] as

[v, x, y, z] = [2(ab-cd)(ab+cd), (a^2+b^2+c^2+d^2)(a^2-b^2+c^2-d^2), 2(ac-bd)(a^2+c^2), 2(ac-bd)(b^2+d^2)]
where,
a^4+b^4 =\, c^4+d^4
parametric solutions of which are well-known.
Complex Descartes theorem
To determine a circle completely, not only its radius (or curvature), but also its center must be known. The relevant equation is expressed most clearly if the coordinates (x, y) are interpreted as a complex number z = x + iy. The equation then looks similar to Descartes' theorem and is therefore called the complex Descartes theorem.

Given four circles with curvatures k_i and centers z_i (for i = 1, ..., 4), the following equality holds in addition to equation (1):
(k_1z_1+k_2z_2+k_3z_3+k_4z_4)^2=2\,(k_1^2z_1^2+k_2^2z_2^2+k_3^2z_3^2+k_4^2z_4^2).
(4)
Once k_4 has been found using equation (2), one may proceed to calculate z_4 by rewriting equation (4) to a form similar to equation (2):

z_4 = \frac{z_1 k_1 + z_2 k_2 + z_3 k_3 \pm 2 \sqrt{k_1 k_2 z_1 z_2 + k_2 k_3 z_2 z_3 + k_1 k_3 z_1 z_3} }{k_4}.

Again, in general, there are two solutions for z_4, corresponding to the two solutions for k_4.

Generalizations
The generalization to n dimensions is sometimes referred to as the Soddy–Gosset theorem, even though it was shown by R. Lachlan in 1886. In n-dimensional Euclidean space, the maximum number of mutually tangent (n − 1)-spheres is n + 2. For example, in 3-dimensional space, five spheres can be mutually tangent. The curvatures of the hyperspheres satisfy

\left(\sum_{i=1}^{n+2} k_i\right)^2 = n\,\sum_{i=1}^{n+2} k_i^2

with the case k_i = 0 corresponding to a flat hyperplane, in exact analogy to the 2-dimensional version of the theorem.
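Since the relation is quadratic in any one curvature, the last curvature can be solved for directly, mirroring equation (2). A small illustrative helper (my sketch, not from the article):

```python
from math import sqrt

def last_curvature(ks):
    """Given n+1 curvatures of mutually tangent (n-1)-spheres in n >= 2
    dimensions, solve (sum k)^2 = n * sum k^2 for the remaining one."""
    n = len(ks) - 1
    S = sum(ks)
    Q = sum(k * k for k in ks)
    disc = n * S * S - n * (n - 1) * Q  # discriminant of the quadratic in k
    return ((S + sqrt(disc)) / (n - 1), (S - sqrt(disc)) / (n - 1))

# n = 2 (three given circles) recovers Descartes' theorem: 3 ± 2*sqrt(3)
k_plus, k_minus = last_curvature([1, 1, 1])
```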
Although there is no 3-dimensional analogue of the complex numbers, the relationship between the positions of the centers can be re-expressed as a matrix equation, which also generalizes to
n dimensions.[2]

See also

Notes
1. ^ A Collection of Algebraic Identities: Sums of Three or More 4th Powers
2. ^ Jeffrey C. Lagarias, Colin L. Mallows, Allan R. Wilks (April 2002). "Beyond the Descartes Circle Theorem". The American Mathematical Monthly 109 (4): 338–361.

External links
Interactive applet demonstrating four mutually tangent circles at cut-the-knot
The Kiss Precise
XScreenSaver: Screenshots :: An XScreenSaver display hack visualizes Descartes' theorem, in hack "Apollonian".
Jeffrey C. Lagarias, Colin L. Mallows, Allan R. Wilks: Beyond The Descartes Circle Theorem
@egreg It does this "I just need to make use of the standard hyphenation function of LaTeX, except "behind the scenes", without actually typesetting anything." (if not typesetting includes typesetting in a hidden box) it doesn't address the use case that he said he wanted that for
@JosephWright ah yes, unlike the hyphenation near box question, I guess that makes sense, basically can't just rely on lccode anymore. I suppose you don't want the hyphenation code in my last answer by default?
@JosephWright anyway if we rip out all the auto-testing (since mac/windows/linux come out the same anyway) but leave in the .cfg possibility, there is no actual loss of functionality if someone is still using a vms tex or whatever
I want to change the tracking (space between the characters) for a sans serif font. I found that I can use the microtype package to change the tracking of the smallcaps font (\textsc{foo}), but I can't figure out how to make \textsc{} a sans serif font.
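(For what it's worth, one possible sketch; the exact microtype keys used here are my assumption to adapt, though \SetTracking and \lsstyle are microtype's letterspacing interface and require the tracking=true option:)

```latex
\usepackage[tracking=true]{microtype}
% apply 5% of an em of extra tracking to the sans-serif family
\SetTracking{encoding = *, family = \sfdefault}{50}
...
{\sffamily\lsstyle tracked sans serif text}
% or letterspace a single piece of text directly:
\textls[50]{\sffamily also tracked}
```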
@DavidCarlisle -- if you write it as "4 May 2016" you don't need a comma (or, in the u.s., want a comma).
@egreg (even if you're not here at the moment) -- tomorrow is international archaeology day: twitter.com/ArchaeologyDay , so there must be someplace near you that you could visit to demonstrate your firsthand knowledge.
@barbarabeeton I prefer May 4, 2016, for some reason (don't know why actually)
@barbarabeeton but I have another question maybe better suited for you please: If a member of a conference scientific committee writes a preface for the special issue, can the signature say John Doe \\ for the scientific committee or is there a better wording?
@barbarabeeton overrightarrow answer will have to wait, need time to debug \ialign :-) (it's not the \smash that did it) on the other hand if we mention \ialign enough it may interest @egreg enough to debug it for us.
@DavidCarlisle -- okay. are you sure the \smash isn't involved? i thought it might also be the reason that the arrow is too close to the "M". (\smash[t] might have been more appropriate.) i haven't yet had a chance to try it out at "normal" size; after all, \Huge is magnified from a larger base for the alphabet, but always from 10pt for symbols, and that's bound to have an effect, not necessarily positive. (and yes, that is the sort of thing that seems to fascinate @egreg.)
@barbarabeeton yes I edited the arrow macros not to have relbar (ie just omit the extender entirely and just have a single arrowhead but it still overprinted when in the \ialign construct but I'd already spent too long on it at work so stopped, may try to look this weekend (but it's uktug tomorrow)
if the expression is put into an \fbox, it is clear all around. even with the \smash. so something else is going on. put it into a text block, with \newline after the preceding text, and directly following before another text line. i think the intention is to treat the "M" as a large operator (like \sum or \prod, but the submitter wasn't very specific about the intent.)
@egreg -- okay. i'll double check that with plain tex. but that doesn't explain why there's also an overlap of the arrow with the "M", at least in the output i got. personally, i think that that arrow is horrendously too large in that context, which is why i'd like to know what is intended.
@barbarabeeton the overlap below is much smaller, see the righthand box with the arrow in egreg's image, it just extends below and catches the serifs on the M, but the overlap above is pretty bad really
@DavidCarlisle -- i think other possible/probable contexts for the \over*arrows have to be looked at also. this example is way outside the contexts i would expect. and any change should work without adverse effect in the "normal" contexts.
@DavidCarlisle -- maybe better take a look at the latin modern math arrowheads ...
@DavidCarlisle I see no real way out. The CM arrows extend above the x-height, but the advertised height is 1ex (actually a bit less). If you add the strut, you end up with too big a space when using other fonts.
MagSafe is a series of proprietary magnetically attached power connectors, originally introduced by Apple Inc. on January 10, 2006, in conjunction with the MacBook Pro at the Macworld Expo in San Francisco, California. The connector is held in place magnetically so that if it is tugged — for example, by someone tripping over the cord — it will pull out of the socket without damaging the connector or the computer power socket, and without pulling the computer off the surface on which it is located.The concept of MagSafe is copied from the magnetic power connectors that are part of many deep fryers...
has anyone converted from LaTeX -> Word before? I have seen questions on the site but I'm wondering what the result is like... and whether the document is still completely editable etc after the conversion? I mean, if the doc is written in LaTeX, then converted to Word, is the word editable?
I'm not familiar with word, so I'm not sure if there are things there that would just get goofed up or something.
@baxx never use word (have a copy just because but I don't use it;-) but have helped enough people with things over the years, these days I'd probably convert to html latexml or tex4ht then import the html into word and see what come out
You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately at present you have to make a small edit but any text editor will do for that. Given x=\frac{-b\pm\sqrt{b^2-4ac}}{2a} Make a small html file that looks like<!...
@baxx all the convertors that I mention can deal with document \newcommand to a certain extent. if it is just \newcommand\z{\mathbb{Z}} that is no problem in any of them, if it's half a million lines of tex commands implementing tikz then it gets trickier.
@baxx yes but they are extremes but the thing is you just never know, you may see a simple article class document that uses no hard looking packages then get half way through and find \makeatletter several hundred lines of trick tex macros copied from this site that are over-writing latex format internals. |
An operator is, by definition, a linear map from some given linear space $V$ to itself. And the definition of a linear map cannot, in general, be independent of the linear structure of the space it acts upon.
There are, however, algebraic objects that can be defined abstractly and that are always represented as bounded operators on some (actually many) Hilbert space(s). These algebraic objects are the ones forming involutive algebras, in particular C*-algebras. It is standard to consider the set of observables of a given physical system as forming an involutive algebra.
In non-relativistic quantum mechanics, the C*-algebra of interest is fixed (up to *-isomorphisms), and it is uniquely (up to unitary isomorphisms) irreducibly represented in a Hilbert space. This irreducible representation is the so-called Schrödinger representation on $L^2(\mathbb{R}^d)$, where position and momentum are respectively the self-adjoint unbounded operators of multiplication and $1/i$-times differentiation with respect to the variable $x\in\mathbb{R}^d$. It is in this representation that the pseudodifferential calculus, that associates to (almost) any function (actually distribution) on the phase space $\mathrm{T}^*\mathbb{R}^d$ an operator from $\mathscr{S}(\mathbb{R}^d)$ to $\mathscr{S}'(\mathbb{R}^d)$ (that can, in some cases, be restricted to a densely defined operator on $L^2(\mathbb{R}^d)$), is set.
Clarified all that, there are plenty of examples of physically relevant bounded self-adjoint operators in quantum mechanical systems. The foremost example is the spectral family $(P_\lambda)_{\lambda\in \mathbb{R}}$ of a given self-adjoint (possibly unbounded) observable $A$. Many people consider the spectral resolution much more physically relevant than the observable itself (there is a $1-1$ correspondence anyways), for $\lVert (P_{\lambda_1}-P_{\lambda_2})\psi\rVert^2$ is the probability that a measurement of the observable $A$ in the system in the state $\psi$ would yield a value in the interval $[\lambda_2,\lambda_1]\subseteq \mathbb{R}$ (assuming $\lambda_2 < \lambda_1$). The spectral family consists entirely of bounded (self-adjoint) operators (orthogonal projections, that each satisfy $P^2_{\lambda}=P_\lambda$). |
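A finite-dimensional toy version of the spectral family (my illustration; in finite dimensions every operator is bounded, but the algebra is the same): build the projections $P_\lambda$ for a Hermitian matrix and check they are bounded self-adjoint idempotents.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2          # a Hermitian "observable"

evals, V = np.linalg.eigh(A)      # eigenvalues ascending, V unitary

def P(lmbda):
    """Spectral projection onto eigenspaces with eigenvalue <= lmbda."""
    cols = V[:, evals <= lmbda]
    return cols @ cols.conj().T

Pl = P(evals[1])                  # projects onto the two lowest eigenvalues
```

Each $P_\lambda$ satisfies $P_\lambda^2 = P_\lambda = P_\lambda^\dagger$ and has operator norm 1, exactly the bounded self-adjoint operators described above.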
Here are two things that I have mistakenly believed at various points in my "adult mathematical life":
For a field $k$, we have an equality of formal Laurent series fields $k((x,y)) = k((x))((y))$.
Note that the first one is the fraction field of the formal power series ring $k[[x,y]]$. For instance, for a sequence $\{a_n\}$ of elements of $k$, $\sum_{n=1}^{\infty} a_n x^{-n} y^n$ lies in the second field but not necessarily in the first. [Originally I had $a_n = 1$ for all $n$; quite a while after my original post, AS pointed out that that this actually does lie in the smaller field!]
I think this is a plausible mistaken belief, since e.g. the analogous statements for polynomial rings, fields of rational functions and rings of formal power series are true and very frequently used. No one ever warned me that formal Laurent series behave differently!
[Added later: I just found the following passage on p. 149 of Lam's
Introduction to Quadratic Forms over Fields: "...bigger field $\mathbb{R}((x))((y))$. (This is an iterated Laurent series field, not to be confused with $\mathbb{R}((x,y))$, which is usually taken to mean the quotient field of the power series ring $\mathbb{R}[[x,y]]$.)" If only all math books were written by T.-Y. Lam...]
Note that, even more than KConrad's example of $\mathbb{Q}_p^{\operatorname{unr}}$ versus the fraction field of the Witt vector ring $W(\overline{\mathbb{F}_p})$, conflating these two fields is very likely to screw you up, since they are in fact very different (and, in particular,
not elementarily equivalent). For instance, the field $\mathbb{C}((x))((y))$ has absolute Galois group isomorphic to $\hat{\mathbb{Z}}^2$ -- hence every finite extension is abelian -- whereas the field $\mathbb{C}((x,y))$ is Hilbertian so has e.g. finite Galois extensions with Galois group $S_n$ for all $n$ (and conjecturally provably every finite group arises as a Galois group!). In my early work on the period-index problem I actually reached a contradiction via this mistake and remained there for several days until Cathy O'Neil set me straight.
Every finite index subgroup of a profinite group is open.
This I believed as a postdoc, even while explicitly contemplating what is probably the easiest counterexample, the "Bernoulli group" $\mathbb{B} = \prod_{i=1}^{\infty} \mathbb{Z}/2\mathbb{Z}$. Indeed, note that there are uncountably many index $2$ subgroups -- because they correspond to elements of the dual space of $\mathbb{B}$ viewed as a $\mathbb{F}_2$-vector space, whereas an open subgroup has to project surjectively onto all but finitely many factors, so there are certainly only countably many such (of any and all indices). Thanks to Hugo Chapdelaine for setting me straight, patiently and persistently. It took me a while to get it.
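The counting in the finite approximations can be made concrete (an illustrative sketch, not from the original answer): index-2 subgroups of $(\mathbb{Z}/2\mathbb{Z})^n$ are exactly the kernels of nonzero $\mathbb{F}_2$-linear functionals, so there are $2^n - 1$ of them, a count that grows without bound as $n$ does.

```python
from itertools import product

def index_two_subgroups(n):
    """Index-2 subgroups of (Z/2Z)^n, as kernels of nonzero F_2-functionals."""
    subgroups = set()
    for f in product([0, 1], repeat=n):
        if any(f):  # the zero functional does not give an index-2 kernel
            ker = frozenset(
                v for v in product([0, 1], repeat=n)
                if sum(a * b for a, b in zip(f, v)) % 2 == 0
            )
            subgroups.add(ker)
    return subgroups
```

Distinct nonzero functionals over $\mathbb{F}_2$ have distinct kernels, so the count is exactly $2^n - 1$; in the infinite product this becomes the uncountable family of index-2 subgroups mentioned above.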
Again, I blame the standard expositions for not being more explicit about this. If you are a serious student of profinite groups, you will know that the property that every finite index subgroup is open is a very important one: a profinite group with this property is called strongly complete, and it was recently proven that every topologically finitely generated profinite group is strongly complete. (This also comes up as a distinction between the two different kinds of "profinite completion": in the category of groups, or in the category of topological groups.)
Moreover, this point is usually sloughed over in discussions of local class field theory, in which they make a point of the theorem that every finite index
open subgroup of $K^{\times}$ is the image of the norm of a finite abelian extension, but the obvious question of whether this includes every finite index subgroup is typically not addressed. In fact the answer is "yes" in characteristic zero (indeed $p$-adic fields have topologically finitely generated absolute Galois groups) and "no" in positive characteristic (indeed Laurent series fields do not, not that they usually tell you that either). I want to single out J. Milne's class field theory notes for being very clear and informative on this point. It is certainly the exception here. |
I think the existing answers are only half right. We need to bring in the fact that the answer depends on the degree to which the charge can be regarded as small, in either amount of charge, or radius, or both.
First let's consider a charge $q$ moving at constant velocity. It is the source of a magnetic field ${\bf B}_q$ in loops around the line of motion. This field has strictly zero net force on the charge that is its source. So in this sense, the answer is "no, the charge does not interact with its own field"---but this is a special case (see rest of this answer).

If there is also a further magnetic field ${\bf B}_{\rm ext}$ produced by other currents, then the total field at some place is the vector sum ${\bf B}_q + {\bf B}_{\rm ext}$, but you can't apply this formula right at the location of the charge $q$. The charge $q$ in this case experiences a force $q {\bf v} \times {\bf B}_{\rm ext}$ where $\bf v$ is its velocity.
If the field ${\bf B}_q$ is large enough then it will disturb the motion of other charges and the net result can be that ${\bf B}_{\rm ext}$ is also changed owing to this interaction. However it is common to choose for discussion a 'test charge'. This is one whose charge is small enough that it will not significantly disturb, via its own fields, the motion of anything else.
Now let's come to the case of an accelerating charge. Things get considerably more complicated. Now we have to take into consideration the physical structure of the charged body. It cannot be strictly point-like in classical electromagnetism, because that would lead to infinite fields and infinite mass-energy associated with those fields. In consequence the field due to one part of the charged body can interact with another part of the charged body, and the integral of the resulting force over the whole body (called the
self-force) need not be zero. There are now two regimes to think about. If the acceleration is small enough then the self-force is negligible and you can forget about it. This is almost always true in practice, even for particle accelerators. It is only in some extremes of plasma physics and laser physics, or some kinds of particle collision, that this issue is important. Therefore unless one is in such a regime, the answer to the question is still "no" in that we can ignore this slight interaction between the charge and its own field.
However, if the acceleration is large enough that the velocity changes significantly during the time $r/c$ where $r$ is the radius of the body, then the self-force will be non-negligible. It is hard to calculate it exactly, but a good first order approximation for speeds small compared to $c$ is $${\bf f}_{\rm self} = \tau_q \frac{d {\bf f}}{d t}$$where $\tau_q = 2q^2 / 3m c^3$ and $\bf f$ is the force owing to all the other contributions from applied fields. The self-force is often called 'radiation reaction', but strictly that is a slight abuse of terminology in that one can identify a contribution to the self-force that is suitably called radiation reaction, but this is not necessarily the only contribution.
At speeds of any size, the above formula is easily generalized, but this is still a first-order approximation. The equation of motion is$$m \dot{v}^\mu = f^\mu + \tau_q \left[ \dot{f}^\mu - (\dot{v}_\nu f^\nu) v^\mu/c^2 \right]$$where $f^\mu$ is the applied four-force and the dot signifies $d/d\tau$ (differentiation with respect to proper time along the worldline). For more information, here is a reference to a couple of papers by myself at Am. J. Phys.: http://dx.doi.org/10.1119/1.4914421; http://dx.doi.org/10.1119/1.4897951 (I mention them since they bear directly on the question asked; I hope that is correct practice). |
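For scale (my numerical aside, using CODATA constant values): for an electron the characteristic time $\tau_q$ evaluates to a few times $10^{-24}\,$s, which is why the self-force is negligible outside the extreme regimes mentioned above.

```python
from math import pi

# tau_q = 2 q^2 / (3 m c^3) in Gaussian units, equivalently
# tau_q = q^2 / (6 pi eps0 m c^3) in SI units, for an electron
q = 1.602176634e-19      # elementary charge, C
m = 9.1093837015e-31     # electron mass, kg
c = 2.99792458e8         # speed of light, m/s
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

tau = q**2 / (6 * pi * eps0 * m * c**3)   # roughly 6.3e-24 s
```

The velocity would have to change appreciably within this tiny timescale before the self-force term becomes comparable to the applied force.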
Here's what I perceive to be a mathematically and logically precise presentation of the theorem, let me know if this helps.
Mathematical Preliminaries
First let me introduce some precise notation so that we don't encounter any issues with "infinitesimals" etc. Given a field $\phi$, let $\hat\phi(\alpha, x)$ denote a smooth one-parameter family of fields for which $\hat \phi(0, x) = \phi(x)$. We call this family a
flow of $\phi$. Then we can define the variation of $\phi$ under this flow as the first order approximation to the change in $\phi$ as follows:
Definition 1. (Variation of field)$$ \delta\phi(x) = \frac{\partial\hat\phi}{\partial\alpha}(0,x)$$
This definition then implies the following expansion$$ \hat\phi(\alpha, x) = \phi(x) + \alpha\delta\phi(x) + \mathcal O(\alpha^2)$$which makes contact with the notation in many physics books like Peskin and Schroeder.
Note: In my notation, $\delta\phi$ is NOT an "infinitesimal", it's the
coefficient of the parameter $\alpha$ in the first order change in the field under the flow. I prefer to write things this way because I find that it leads to a lot less confusion.
Next, we
define the variation of the Lagrangian under the flow as the coefficient of the change in $\mathcal L$ to first order in $\alpha$;
Definition 2. (Variation of Lagrangian density)$$ \delta\mathcal L(\phi(x), \partial_\mu\phi(x)) = \frac{\partial}{\partial\alpha}\mathcal L(\hat\phi(\alpha, x), \partial_\mu\hat\phi(\alpha, x))\Big|_{\alpha=0}$$
Given these definitions, I'll leave it to you to show
Lemma 1.For any variation of the fields $\phi$, the variation of the Lagrangian density satisfies\begin{align} \delta\mathcal L&= \left(\frac{\partial \mathcal L}{\partial\phi} - \partial_\mu\frac{\partial\mathcal L}{\partial(\partial_\mu\phi)}\right)\delta\phi + \partial_\mu K^\mu,\qquad K^\mu = \frac{\partial\mathcal L}{\partial(\partial_\mu\phi)}\delta\phi\end{align}You'll need to use (1) The chain rule for partial differentiation, (2) the fact $\delta(\partial_\mu\phi) = \partial_\mu\delta\phi$ which can be proven from the above definition of $\delta\phi$ and (3) the product rule for partial differentiation.
Noether's theorem in steps
Let a
particular flow $\hat\phi(\alpha, x)$ be given.
Assume that
for this particular flow, there exists some vector field $J^\mu\neq K^\mu$ such that$$ \delta\mathcal L = \partial_\mu J^\mu$$
Notice that for any field $\phi$
that satisfies the equation of motion, Lemma 1 tells us that$$ \delta \mathcal L = \partial_\mu K^\mu$$
Define a vector field $j^\mu$ by$$ j^\mu = K^\mu - J^\mu$$
Notice that
for any field $\phi$ satisfying the equations of motion, steps 2, 3, and 4 imply$$ \partial_\mu j^\mu = 0$$
Q.E.D.
Important Notes!!! If you follow the logic carefully, you'll see that $\delta \mathcal L = \partial_\mu K^\mu$ only along the equations of motion. Also, part of the hypothesis of the theorem was that we found a $J^\mu$ that is not equal to $K^\mu$ for which $\delta\mathcal L = \partial_\mu J^\mu$. This ensures that $j^\mu$ defined in the end is not identically zero! In order to find such a $J^\mu$, you should not be using the equations of motion. You should be applying the given flow to the field and seeing what happens to it to first order in the "flow parameter" $\alpha$.
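As a concrete illustration of the steps above (a standard example of mine, not from the original text): take a free complex scalar field with a global phase flow.

```latex
% Flow: a global U(1) phase rotation of a free complex scalar field
\mathcal L = \partial_\mu\phi^*\,\partial^\mu\phi - m^2\phi^*\phi,
\qquad
\hat\phi(\alpha, x) = e^{i\alpha}\phi(x)
\;\Longrightarrow\;
\delta\phi = i\phi,\quad \delta\phi^* = -i\phi^*.
% Step 2: \mathcal L is invariant under this flow, so \delta\mathcal L = 0
% and we may take J^\mu = 0 (which is indeed not equal to K^\mu).
% Steps 3-4: summing K^\mu over both fields gives the conserved current
j^\mu = K^\mu - J^\mu
      = i\left(\phi\,\partial^\mu\phi^* - \phi^*\,\partial^\mu\phi\right),
\qquad \partial_\mu j^\mu = 0 \ \text{on-shell}.
```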
Cheers! |
Level-set Based Segmentation in 2D
Before introducing the actual model used for segmenting 2D images, a few things related to explicit and implicit curve representations are discussed. Before going any further, I would like to point out that implicit curve representation techniques are not confined to the 2D case, as one might deduce from the heading. In fact, level-sets are indeed used in medical imaging for segmenting 3D data, such as MRI.
Two Segments
Explicit curve representation
The aim is to segment an input image into two different segments. The area bounded by a segment is 'similar' (homogeneous) as per the used similarity metric (e.g. similar color in the case of RGB/HSV images), while the boundary separating the segments is called the interface. Therefore, using an explicit curve representation, the segmentation model can be described as an energy minimization model as follows:
\[ E\Big( \Gamma (s),\, \alpha_1,\, \alpha_2 \Big) = \nu \underbrace{\int_{\Gamma} 1 \, ds}_{\text{boundary length}} - \int_{\Omega_1} log \, p(I|\alpha_1) \, d\vec{x} - \int_{\Omega_2} log \, p(I|\alpha_2) \, d\vec{x} \]
where the curve is parameterized by \( s \) as \( \Gamma (s) = \Big( x(s), y(s) \Big) \). The first term is the length of the boundary separating the segments. The second and the third terms are the 'likelihoods' indicating that a particular image pixel \( I(x,y) \) belongs to a corresponding segment, while the parameters \( \alpha_1 \) and \( \alpha_2 \) are the likelihood functions' parameters. Likelihood is defined as follows:
\[ p \Big ( I | \alpha_{ i } \Big) = p \Bigg ( \{ I(x,y) : (x,y) \in \Omega_i \} | \alpha_i \Bigg ) \]
where \( \alpha_i \) is a parameter vector describing the likelihood function. E.g. in the case of a Gaussian distribution \( \alpha_i = ( \mu_i, \, \sigma_i^2 ) \) (mean and variance).
Implicit curve representation
In implicit curve representation the isocontour defining the interface is one dimension lower than the dimensionality of the actual level-set function. Therefore, in \( \mathbb{R} ^n \) the isocontour has dimension \( n-1 \). Thus, it is natural to ask: what are the benefits of such an implicit curve representation?
Topological changes. Topological changes are handled 'implicitly' in the implicit curve representation. In explicit representation topological changes (e.g. a curve breaking into two) can cause problems, especially in higher dimensions.

Discretization. Grid size in implicit representation stays the same (Eulerian formulation), whereas in the explicit curve representation case 'regridding' might be needed.

Inside/outside regions. In implicit representation it is extremely easy to see whether a point belongs to the outside or inside region (as is shown below).

Using an implicit curve representation (e.g. a level-set function), the boundary and the segments can be defined as follows:
\[\begin{align} \Gamma &:= \Big\{ (x,y) :\, \Phi(x,y) = 0 \Big\} \\ \text{inside}(\Gamma) &:= \Omega_1 = \Big\{ (x,y) :\, \Phi(x,y) \ge 0 \Big\} \\ \text{outside}(\Gamma) &:= \Omega_2 = \Big\{ (x,y) :\, \Phi(x,y) < 0 \Big\} \end{align}\]
where \( \Gamma \) is the interface, \( \Omega_1 \) is the first segment and \( \Omega_2 \) is the second segment. In other words, the boundary is defined by the zero isocurve (in the image the intersection of the surface and the plane), while those positions (x,y) where the level-set function has a zero or positive value belong to the first segment (the part of the surface above the plane), and those positions (x,y) where the level-set function has a negative value belong to the second segment (the part of the surface below the plane). Using the implicit curve representation equation 1 is defined as:
\[ E( \Phi,\, \alpha_1,\, \alpha_2 ) = \int_{\Omega} \Big( \alpha | \nabla H( \Phi ) | - H( \Phi ) \log \, p (I|\alpha_1) - \big( 1 - H( \Phi ) \big) \log \, p(I|\alpha_2) \Big) d\vec{x} \]
where \( \nabla \) is the gradient operator \( \Big[ \dfrac{\partial}{ \partial x} \, \dfrac{\partial}{\partial y} \Big] \), \( H(\Phi) \) is the Heaviside function and \( \delta (\Phi) \) is the one dimensional Dirac measure as follows:
\[ H(\Phi) := \left\{ \begin{align} 1, \,&\text{if } \Phi \ge 0 \\0, \, &\text{if } \Phi <0 \end{align} \right. \]
\[ H'(\Phi) := \delta (\Phi) = \dfrac{d}{d \Phi} H(\Phi) \]
Keeping \( \alpha_1 \) and \( \alpha_2 \) fixed, the corresponding Euler-Lagrange equation can be obtained, and the energy can then be minimized by gradient descent as follows:
\[ \dfrac{\partial \Phi}{\partial t} = H'(\Phi) \Bigg( \alpha \, DIV \left( \dfrac{\nabla \Phi}{|\nabla \Phi|} \right) + \log\, p(I|\alpha_1) - \log\, p (I|\alpha_2) \Bigg) \]
where DIV is the divergence operator. The first term minimizes the local curvature (e.g. boundary length) while the second and the third terms are the 'likelihoods' of pixels belonging to the segments 1 and 2. Those pixels where \( \Phi(x,y) \ge 0 \) belong to segment 1 as indicated by the Heaviside function. This leads to a two-stage algorithm:
Stage 1: approximate/resolve likelihood functions
Stage 2: solve the level-set function
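The two-stage loop can be illustrated with a heavily simplified 1-D toy (my own invented data, not from any real image): the curvature term is dropped, and the likelihoods are Gaussians with unit variance, so that \( \log p(I|\alpha_i) = -(I-\mu_i)^2/2 + \text{const} \).

```python
# 1-D toy of the two-stage algorithm: no curvature term, Gaussian
# likelihoods with fixed unit variance. Data and step size are invented.
I = [0.1, 0.2, 0.1, 0.9, 1.0, 0.8, 0.9]       # a tiny 1-D "image"
phi = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0]  # arbitrary initial level-set

for _ in range(50):
    # Stage 1: re-estimate the likelihood parameters (here: region means)
    r1 = [v for v, p in zip(I, phi) if p >= 0]
    r2 = [v for v, p in zip(I, phi) if p < 0]
    mu1 = sum(r1) / len(r1) if r1 else 0.0
    mu2 = sum(r2) / len(r2) if r2 else 0.0
    # Stage 2: gradient step  d(phi)/dt = log p(I|a1) - log p(I|a2)
    phi = [p + 0.5 * ((v - mu2) ** 2 - (v - mu1) ** 2)
           for v, p in zip(I, phi)]

segments = [1 if p >= 0 else 2 for p in phi]
print(segments)  # -> [1, 1, 1, 2, 2, 2, 2]
```

The sign test on \( \Phi \) plays the role of the Heaviside function; in 2-D the curvature term \( DIV(\nabla \Phi / |\nabla \Phi|) \) would additionally regularize the boundary.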
Several Segments
So far we have seen how to segment an input image into two segments. There are, at least, two different possibilities for segmenting an input image into several meaningful segments: (1) successively keep on segmenting the formed segments until the energy cost of the boundary outweighs the benefit of splitting a segment into two, or (2) directly search for meaningful segments until the whole image has been segmented. The latter approach is used in my article Hypothesis-Forming-Validation-Loops.
Example(s)
In the following there is an example of segmentation based on the disparity map, using the algorithm explained in the paper Hypothesis-Forming-Validation-Loops. Test images (i.e. left and right stereo images) have been provided by prof. Mårten Björkman from KTH.
Efficiency of the Reverse Carnot Cycle
An air conditioning device is working on a reverse Carnot cycle between the inside of a room at temperature \(T_2\) and the outside at temperature \(T_1 > T_2\), with a monatomic ideal gas as the working medium. The air conditioner consumes the electrical power P. Heat leaks into the house according to the law \(\dot{Q} = A(T_1 - T_2)\). Show that the efficiency of the air conditioner is \(η_{cool} = {T_2 \over T_1-T_2}\). Express the inside temperature \(T_2\) in terms of \(T_1\), A, and P.

Solution

Efficiency
In the reverse Carnot cycle, work is done to extract heat from one system and expel it into another via four processes, two isothermal and two isentropic.
In process \(1 \rightarrow 2\), the gas is isentropically compressed, and there is no heat flow into or out of the refrigerator.
In process \(2 \rightarrow 3\), heat is expelled into the sink (e.g. outside air) isothermally, at the hot temperature \(T_1\). The amount of heat ejected per unit mass of gas is \(Q_H=T_1(S_2-S_3)\).

In process \(3 \rightarrow 4\), the gas is isentropically expanded. The pressure and temperature decrease to \(P_4\), \(T_4\). Heat transfer at this stage is zero.

In process \(4 \rightarrow 1\), the gas expands isothermally at the cold temperature \(T_2\), extracting heat from the source (e.g. the room). This is where the cooling takes place. The heat extracted from the source per unit mass of gas is \(Q_C=T_2(S_1-S_4)=T_2(S_2-S_3)\).
The work done during the process is simply \(W=Q_H-Q_C=(T_1-T_2)(S_2-S_3)\).
The efficiency of the reverse Carnot cycle is the heat removed from the cold reservoir divided by the work input: \(\eta_{cool}=\frac{Q_C}{W}\), so \[\eta_{cool}={T_2 \over T_1-T_2}\;\blacksquare \]
Express \(T_2\) in terms of \(T_1\), A, and P
Re-arranging \(\dot{Q} = A(T_1 - T_2)\) we obtain \[T_2=T_1-{\dot{Q} \over A}\]
In equilibrium, the heat leak rate \(\dot{Q}\) must equal the rate of heat removal, which is the power consumed \(P\) times the efficiency \(\eta_{cool}\):
\begin{eqnarray}
T_2&=&T_1 - {P \eta_{cool} \over A} \nonumber \\ &=&T_1 - {P T_2 \over A(T_1-T_2)} \nonumber \end{eqnarray}
This leads to a quadratic equation, \(A(T_1-T_2)^2 = P\,T_2\), whose physically meaningful root (the one with \(T_2 < T_1\)) is:
\[T_2 = T_1+{P \over 2A}\bigg[1 - \sqrt{{4 T_1 A \over P} + 1} \bigg]\;\blacksquare\]
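As a sanity check, the steady-state balance \(\dot{Q} = P\,\eta_{cool}\) can also be solved for \(T_2\) numerically (the values of \(T_1\), \(A\) and \(P\) below are invented for illustration):

```python
# Solve A*(T1 - T2) = P * T2/(T1 - T2) for T2 by bisection on 0 < T2 < T1.
T1, A, P = 300.0, 2.0, 50.0        # invented illustration values

def imbalance(T2):
    # heat leaking in minus heat being pumped out
    return A * (T1 - T2) - P * T2 / (T1 - T2)

lo, hi = 1.0, T1 - 1e-9            # imbalance is positive at lo, negative at hi
for _ in range(200):               # bisection: root is unique and bracketed
    mid = 0.5 * (lo + hi)
    if imbalance(mid) > 0:
        lo = mid
    else:
        hi = mid
T2 = 0.5 * (lo + hi)
print(round(T2, 6))  # -> 225.0
```

With these particular numbers the quadratic happens to have the exact root \(T_2 = 225\ \mathrm{K}\), which the bisection reproduces.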
For example, consider an \(11 \times 11\) grid, and choose from the following building blocks:
Five different polyominos
Fail to cover complete grid
\[x_{i,j,k} = \begin{cases} 1 & \text{if we place polyomino $k$ at location $(i,j)$}\\ 0 & \text{otherwise} \end{cases}\]
I used as a rule that the upper-left corner of each polyomino is its anchor; i.e., in the picture above we have \(x_{1,1,4} = 1\), \(x_{2,1,2}=1\), \(x_{1,3,5}=1\), etc.
To formulate a non-overlap constraint I populated a set \(cover_{i,j,k,i',j'}\), with elements that exist if cell \((i',j')\) is covered when we place polyomino \(k\) in cell \((i,j)\). To require that each cell \((i',j')\) is covered exactly once we can say:
\[ \forall i',j': \sum_{i,j,k|cover_{i,j,k,i',j'}} x_{i,j,k} = 1\]
This constraint is infeasible if we cannot cover each cell \((i',j')\) exactly once. In order to make sure we can show a meaningful solution when we cannot cover each cell, we formulate the following optimization model:
\[\begin{align} \max\>&\sum_{i,j} y_{i,j}\\&y_{i',j'} = \sum_{i,j,k|cover_{i,j,k,i',j'}} x_{i,j,k}\\&x_{i,j,k}\in \{0,1\}\\&y_{i,j} \in \{0,1\}\end{align}\]
Here \(y_{i,j}=1\) indicates cell \((i,j)\) is covered exactly once, and \(y_{i,j}=0\) says the cell is not covered.
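The set \(cover_{i,j,k,i',j'}\) is easy to populate programmatically. A minimal Python sketch (the shape encoding and grid size here are my own choices, not from the article):

```python
# Build all placements of a polyomino on an n x n grid, using the
# upper-left-corner anchor rule. A shape is a set of (row, col) offsets.
def placements(shape, n):
    """Yield ((i, j), covered_cells) for every placement that fits."""
    for i in range(n):
        for j in range(n):
            cells = frozenset((i + di, j + dj) for di, dj in shape)
            if all(0 <= a < n and 0 <= b < n for a, b in cells):
                yield (i, j), cells

square = {(0, 0), (0, 1), (1, 0), (1, 1)}   # the 2x2 square polyomino
opts = list(placements(square, 3))
print(len(opts))  # -> 4 anchors on a 3x3 grid: (0,0), (0,1), (1,0), (1,1)
```

Each yielded pair corresponds to one binary variable \(x_{i,j,k}\), and `cells` is exactly the set of cells \((i',j')\) that placement covers.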
With a little bit of effort we can produce the following:
61 x 61 board with 9 different polyominos

References

Polyomino, https://en.wikipedia.org/wiki/Polyomino
2D bin packing on a grid, https://stackoverflow.com/questions/47918792/2d-bin-packing-on-a-grid
So here is my question,
Question:A water sample contains 9.5% $\mathrm{MgCl_2}$ and 11.7% $\mathrm{NaCl}$ (by weight). Assuming 80% ionization of each salt. Boiling point of water will be ________. ($\mathrm{K_b}$ for water $\mathrm{=0.52}$)
You know, this question isn't tough enough to bother anyone (definitely not!). What's confusing here is the molality of the two solutes.
I know that,
$$\Delta \mathrm{T_b=K_b\cdot m}$$
Degree of dissociation $(\mathrm{\alpha}) = 0.8$
So I know very well that molality of a solution having only one solute is,
$$\mathrm{m=\frac{n\ (no.\ of\ moles\ of\ solute)}{y\ (mass\ of\ solvent\ in\ kg)}}$$
But what about a solution having more than one solute? Does it have a molality at all (I am sure it must, because the question requires one)? If yes, then what is the method of calculating it?
Also if you are solving this, here is the answer to the question (as per my answer key),
377 K |
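For what it's worth, the solutes' contributions to the elevation simply add: each solute \(k\) contributes \(i_k m_k\), with van 't Hoff factor \(i_k = 1 + \alpha(n_k - 1)\). A quick numerical check against the answer key (per 100 g of sample; molar masses taken as 95 and 58.5 g/mol):

```python
# Boiling-point elevation with two solutes: Delta_Tb = Kb * (i1*m1 + i2*m2).
Kb, alpha = 0.52, 0.8
mass_water = (100 - 9.5 - 11.7) / 1000       # kg of solvent per 100 g sample
mol_mgcl2 = 9.5 / 95.0                       # M(MgCl2) ~ 95 g/mol
mol_nacl = 11.7 / 58.5                       # M(NaCl) ~ 58.5 g/mol
i_mgcl2 = 1 + alpha * (3 - 1)                # MgCl2 -> 3 ions
i_nacl = 1 + alpha * (2 - 1)                 # NaCl  -> 2 ions
dTb = Kb * (i_mgcl2 * mol_mgcl2 + i_nacl * mol_nacl) / mass_water
print(round(373 + dTb))                      # -> 377
```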
Yes, there is an infinite class of 2-connected cubic graphs on which Hamilton Cycle has a polynomial-time algorithm. Further, there is a such a class that contains infinitely many Hamiltonian graphs and infinitely many non-Hamiltonian graphs, which I think is a decent definition of "non-trivial".
First, let $H_n$ be the union of a $2n$-cycle on vertices $\{1, \dots, 2n\}$ and the edges $\{(i, i+n)\mid 1\leq i\leq n\}$. $H_n$ is 3-regular, 2-connected and has an obvious Hamiltonian cycle.
Now, let $G$ be a 2-connected cubic graph that has no Hamiltonian cycle. Such a graph must exist, since the Hamiltonian Cycle problem is NP-complete on 2-connected cubic graphs, so must have both "yes" and "no" instances. Fix an edge $xx'\in G$. For any graph $H_n$ pick any edge $yy'\in H_n$ and let $H'_n$ be the graph made by taking $G\cup H_n$, deleting the edges $xx'$ and $yy'$ and adding edges $xy$ and $x'y'\!$.
$H'_n$ is 3-regular and 2-connected but I claim that it has no Hamiltonian cycle. Any Hamiltonian cycle $C$ in $H'_n$ must enter the copy of $G-xx'$ at $x$ and leave at $x'$ (or vice-versa). But then $C$ must contain a Hamiltonian path $P$ of $G-xx'$ that begins at $x$ and ends at $x'\!$. However, no such Hamiltonian path can exist, since $P\cup\{xx'\}$ would be a Hamiltonian cycle of $G$, but $G$ was chosen to have no Hamiltonian cycles.
So the desired class of graphs is $\mathcal{H} = \{H_n\mid n>1\}\cup\{H'_n\mid n>1\}$. It remains to show that the Hamiltonian Cycle problem can be solved in polynomial time for graphs in $\mathcal{H}$. Observe that, for every $n$, every edge of $H_n$ is on a 4-cycle (there are 4-cycles of the form $i$, $i+1$, $i+n+1$, $i+n$, $i$). However, the edges $xy$ and $x'y'$ are not on any 4-cycle of $H'_n$. We can test that every edge of a graph is in a 4-cycle in time $\mathcal{O}(n^4)$.
I edited to change the construction slightly, with two benefits. First, the algorithm runs in time $\mathcal{O}(n^4)$ instead of $\mathcal{O}(n^{|V(G)|})$. Second, there's an actual correctness proof; the old algorithm relied on the assumption that $G-xx'$ is not a subgraph of any $H_n$, which was probably true but really needed to be proven.
I am reading the paper "Rank-Finiteness for Modular Categories" by Bruillard, Ng, Rowell, and Wang.
Let $C$ be a modular category and let $K_0(C)$ be the Grothendieck ring generated by simple objects of $C$ with multiplication induced from the tensor product of $C$: $$ V_i \otimes V_j \cong \oplus_{k \in \Pi_C} N_{i,j}^k V_k.$$ Here $\Pi_C$ is the set of all simple objects of $C$.
Let $S=(S_{ij})$ be the S-matrix. Let $N_i$ be the fusion matrix defined by $(N_i)_{k,j}=N_{i,j}^k$. Let $D_i$ be a matrix given by $(D_i)_{ab}=\delta_{ab}\frac{S_{i a}}{S_{0 a}}$. Then the Verlinde formula can be written as $$ SN_iS^{-1}=D_i.$$
I understood so far. Then they say that
In particular, the assignments $\phi_a : i\mapsto \frac{S_{i a}}{S_{0 a}}$ for $ i \in \Pi_C$ determine (complex) linear characters of $K_0(C)$. Since $S$ is non-singular, $\{\phi_a\}_{a\in \Pi_C}$ is the set of all the linear characters of $K_0(C)$.
(This is stated in page 9 in the paper.)
I am not sure what the linear characters are here. Is $\phi_a$ a (group or ring) homomorphism from $K_0(C)$ to $\mathbb{C}$? But I don't know how to extend the definition of $\phi_a$ to $K_0(C)$. Also I did not understand the second claim.
I appreciate any help. |
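For concreteness, here is a numerical sanity check I ran in the simplest example, the Fibonacci category (simples $\mathbf{1}, \tau$ with $\tau \otimes \tau = \mathbf{1} \oplus \tau$), assuming $\phi_a$ is extended $\mathbb{Z}$-linearly from its values on the basis $\Pi_C$:

```python
import math

# Fibonacci fusion ring: simples {1, tau}, tau (x) tau = 1 (+) tau.
phi = (1 + math.sqrt(5)) / 2                 # golden ratio
c = 1 / math.sqrt(phi + 2)
S = [[c, c * phi], [c * phi, -c]]            # the S-matrix; here S^2 = id
N_tau = [[0, 1], [1, 1]]                     # fusion matrix of tau

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Verlinde: S N_tau S^{-1} = diag(S_{tau,a}/S_{0,a}) = diag(phi, -1/phi)
D = matmul(matmul(S, N_tau), S)              # S^{-1} = S for this S
assert abs(D[0][1]) < 1e-9 and abs(D[1][0]) < 1e-9
assert abs(D[0][0] - phi) < 1e-9 and abs(D[1][1] + 1 / phi) < 1e-9

# Character property: x = phi_a(tau) satisfies x^2 = phi_a(1) + phi_a(tau),
# i.e. x^2 = 1 + x -- the image of the fusion rule under a ring map.
for x in (phi, -1 / phi):
    assert abs(x * x - (1 + x)) < 1e-12
```

So in this example each $\phi_a$ does respect the fusion rules, which is what I would expect a "linear character" (ring homomorphism to $\mathbb{C}$) to mean.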
First of all, let's see what Noether's Theorem says about your specific case (Klein-Gordon under global rescaling of the fields). Noether's theorem states that
To every differentiable symmetry of the Action of a system, there corresponds a conserved current.
The current in object is given by
$$J^{\mu}=-T_{\nu}^{\mu}\ \delta x^{\nu}+\frac{\partial \mathcal{L}}{\partial \phi^{a}_{,\mu}}\ \delta \phi^{a}$$where the $\phi^{a}$ are the fields whose dynamics is described by the action, $T^{\mu}_{\nu}$ is the canonical energy-momentum tensor of the theory and $\delta x^{\nu}$ and $\delta \phi^{a}$ are the infinitesimal generators of the symmetry. In your case,
$$\phi\to e^{\epsilon}\phi\approx (1+\epsilon)\phi$$so that $\delta \phi=\epsilon \phi$. (If your $\alpha$ is negative, then the symmetry is not differentiable in the Noetherian sense, as there is no infinitesimal generator. In fact, rescaling by a factor of $-1$ is part of the discrete, non differentiable, multiplicative group $\{+1,-1\}$). Then$$J^{\mu}=\epsilon\ \phi\partial^{\mu}\phi$$But there is no reason why this Noetherian current should be conserved. In fact, removing the $\epsilon$ from the above expression, we see that the divergence is proportional to the Lagrangian,$$\partial_{\mu}(J^{\mu}/\epsilon)=\partial_{\mu}(\phi\partial^{\mu}\phi)=\phi\ \partial_{\mu}\partial^{\mu}\phi+\partial_{\mu}\phi\partial^{\mu}\phi=\partial_{\mu}\phi\partial^{\mu}\phi-m^{2}\phi^{2}=2\mathcal{L}$$where in the third identity I used the equations of motion $\partial^{2}\phi=-m^{2}\phi$. The most general solution to Klein-Gordon's equation is given by$$\phi(x)=\int\frac{d^{3}k}{(2\pi)^{3}}\ \Big\{e^{-ik_{\mu}x^{\mu}}\ A(\vec{k})+e^{+ik_{\mu}x^{\mu}}\ B(\vec{k})\Big\}$$with $k_{\mu}k^{\mu}=m^{2}$, but not even in the plane-wave case (say, $\phi(x)=e^{-ik_{\mu}x^{\mu}}$) the Lagrangian is zero:$$2\mathcal{L}[e^{-ik_{\mu}x^{\mu}}]=-k_{\mu}k^{\mu}\ e^{-2ik_{\mu}x^{\mu}}-m^{2}\ e^{-2ik_{\mu}x^{\mu}}=-2m^{2}\ e^{-2ik_{\mu}x^{\mu}}\neq0$$for $m\neq 0$. The fact that it is the mass that determines whether $J^{\mu}$ is conserved or not is not accidental, as we will see later on.
Now, you ask whether there is any theorem, analogous to Noether's theorem, that allows you to derive conserved currents from a generalized concept of "symmetry under some transformation". Specifically, you ask for a theorem that does so with symmetries of the Euler-Lagrange equations, rather than of the action (as a matter of fact, your transformation doesn't leave the action invariant, it multiplies it by a factor of $\alpha$). I can't really say that such a theorem does not exist, but I can safely say that I don't know of any, and that I doubt that such a theorem can, in general, be true. Here is why. The intuition behind Noether's theorem is that the values of the fields that solve the minimization problem can, by definition, be shifted by an infinitesimal amount without changing the value of action (meaning that the functional derivative of the action with respect to the "variation field" is zero). Then you can ask what happens to the action if such a transformation is made on extremal fields, and you find that to the shift of the fields there corresponds a shift of the action given by
$$\delta S=\int_{\Omega} d^{d}x\ \partial_{\mu}J^{\mu}\qquad\qquad(\star)$$(the equations of motion being satisfied by hypothesis) where $J^{\mu}$ is again Noether's current and $\Omega$ is an arbitrary domain of integration. Then you conclude that if the action is invariant with respect to the transformation, you should have $\delta S=0$, so that $\partial_{\mu}J^{\mu}=0$. Note that the hypothesis of the invariance of the action is brought up only at the end: it is an additional hypothesis, independent of the result $(\star)$. The latter is an identity that comes about under the only hypothesis that the fields (1) solve Euler-Lagrange's equations (2) are shifted by an infinitesimal amount. No assumption about the nature of the transformations or their being part of a group that leaves invariant the action is made. So $(\star)$ holds even when the transformation multiplies the action by a factor of $\alpha>0$, and we can use it to derive the following result. As$$S\to \alpha S= e^{\epsilon} S\approx(1+\epsilon) S\qquad\Longrightarrow\qquad \delta S=\epsilon S$$we have that, under such a transformation$$\int_{\Omega} d^{d}x\ \partial_{\mu}J^{\mu}=\epsilon S$$so we can conclude that, in general,
Noether's current is not divergence-free under such an action-rescaling transformation (as we have seen through the example of the Klein-Gordon action). I emphasized "Noether's" because of course there may be some other current that is divergence-free instead of the standard Noether's one. But if one removes the hypothesis of the invariance of the action under some symmetry, there is little left to say about the symmetries of the solutions of Euler-Lagrange's equations: the connection between symmetries and conservation lies in the very fact that the solutions themselves (before even thinking about symmetries) are such that an infinitesimal variation on the solving values leaves the action invariant. Then the hypothesis that under the given transformation the action gets rescaled is somewhat incompatible with the essential property of the solutions, i.e. with the Euler-Lagrange's equations themselves. This is why I find it difficult to believe that such a theorem would, in general, be true.
To end this answer, I want to mention two more things. First of all, the fact that in Klein-Gordon's case the Euler-Lagrange's equation are invariant under a constant rescaling of the fields follows from the fact that the Lagrangian is quadratic in the fields. Any such Lagrangian always gives rise to linear Euler-Lagrange's equation, which in turn are always symmetric under constant rescalings. The same holds for Dirac's Lagrangian and for Yang-Mills's Lagrangian (for free gauge bosons). Second of all, there is indeed a scaling transformation that leaves the Action invariant in the sense of Noether's. Consider making the transformation$$x^{\mu}\to e^{\epsilon}\ x^{\mu}\qquad\qquad \phi\to e^{\epsilon}\ \phi$$then$$\partial_{\mu}\to e^{-\epsilon}\ \partial_{\mu}$$and we see that, given $m=0$, Klein-Gordon's Action is invariant under such a transformation. The latter is called a "(constant) conformal transformation", and the corresponding Noether's current$$j^{\mu}=J^{\mu}/\epsilon=-T^{\mu}_{\nu}\ x^{\nu}+\phi\ \partial^{\mu}\phi$$is, as the theorem proves, divergence-free. An analogous statement can be made for the Dirac and Yang-Mills massless Lagrangians. Now, we have
$$\partial_{\mu}j^{\mu}=-\partial_{\mu}(T^{\mu}_{\nu}\ x^{\nu})+\partial_{\mu}(\phi\ \partial^{\mu}\phi)$$As we know from translational invariance that $\partial_{\mu} T^{\mu}_{\nu}=0$, and given the calculation we made before,$$\partial_{\mu}j^{\mu}=-T^{\mu}_{\nu}\ \delta^{\nu}_{\mu}+2\mathcal{L}$$Let's calculate $T^{\mu}_{\nu}\ \delta^{\nu}_{\mu}=T^{\mu}_{\mu}$ for the massless Klein-Gordon Lagrangian. We have$$T^{\mu}_{\mu}=\frac{\partial\mathcal{L}}{\partial \phi_{,\mu}}\ \partial_{\mu}\phi-\mathcal{L}\ \delta^{\mu}_{\mu}=\partial^{\mu}\phi\partial_{\mu}\phi-d\mathcal{L}=(2-d)\ \mathcal{L}$$where $d$ is the dimension of spacetime ($d=4$ in the standard theory). Then$$\partial_{\mu}j^{\mu}=d\ \mathcal{L}$$so this divergence ($\partial_{\mu}j^{\mu}$) is $d/2$ times the divergence (let's call it $\partial_{\mu}j'^{\,\mu}$) you get from the transformation that you proposed in your question. This explains why for the latter we found$$\partial_{\mu}j'^{\,\mu}\propto m^{2}$$If $m=0$, then your transformation can be extended to a conformal transformation which is a true symmetry of Klein-Gordon's action, such that the Noether's current associated to it is conserved. |
I've found that if I reduce the radial domain to $8 \leq r \leq 20$, the condition number drops to ~10,000. This makes me think I need to scale my problem.
I'm not sure how to do this, however, and I need to do it right.
Nondimensionalization is partly repeated application of the chain rule, and partly art. The goal is to make as many quantities in your equations as close to 1 as possible. In most cases, it involves scaling both independent variables and dependent variables by "physically relevant" scale factors. Sometimes, these scale factors are obvious (e.g., there is only one length scale that matters, and that length scale is the length/half-length/etc. of the domain), sometimes, they are not (e.g., I have several reference voltages that matter, and I need to pick one).
For someone inexperienced, I'd say, focus on the mechanics of nondimensionalization. Pick reference parameters that you think have some meaning (trust your intuition here, or check the literature if you think it's useful), and then nondimensionalize your equations, solve them, and see what happens. See what physical insights you get out of the equations, and think about limiting cases.
Wikipedia is a good reference here. I also like the discussion in Deen's Analysis of Transport Phenomena.
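As a toy illustration of the mechanics (the ODE and all parameter values here are invented): for a falling particle with drag, \(m\,dv/dt = -mg - cv\), choosing the scales \(t_0 = m/c\) and \(v_0 = mg/c\) turns every coefficient into \(\pm 1\):

```python
# Dimensional form:    dv/dt = -g - (c/m) v    (coefficients span decades)
# Dimensionless form:  du/dtau = -1 - u        (all coefficients O(1))
m, g, c = 2.0e-3, 9.81, 4.0e-5        # invented, wildly different magnitudes
t0, v0 = m / c, m * g / c             # derived reference scales

def rhs_dimensional(v):
    return -g - (c / m) * v

def rhs_dimensionless(u):
    return -1.0 - u

# Chain rule: dv/dt = (v0/t0) * du/dtau, so the two forms must agree:
v = 3.7
assert abs(rhs_dimensional(v) - (v0 / t0) * rhs_dimensionless(v / v0)) < 1e-9
```

A solver then sees a well-scaled system; the same idea applied to the radial domain in the question would mean working with a scaled radius \(R = r/L\) for some reference length \(L\).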
Would I have to apply the scale factors in both dimensions, or to physical parameters as well? That is, if I scaled my radii by a factor $L$ ( e.g. $R = r/L$), would I also have to scale other physical quantities ( e.g. current density $J = j L^3$, or acceleration $A = \frac{\Delta v / L}{\Delta t}$)?
You don't have to. It's valid to nondimensionalize only some quantities, particularly if certain variables are difficult to nondimensionalize, or are already nondimensional. Frequently, this situation occurs in combustion applications, where species mass fractions are already nondimensional (though usually vary over several orders of magnitude), and are difficult to scale in such a way that they vary at similar rates (to within a couple orders of magnitude). The best practice is to nondimensionalize your equations as much as you possibly can, since these sorts of scalings act as a natural preconditioner.
The only counterargument that I can think of to nondimensionalization is that it does require extra work and thought, and some care to make sure that you implement the scaling factors correctly in your code. Any ill-conditioning due to poor choices of units can sometimes be overcome with judicious use of preconditioners, but usually, nondimensionalization is preferable because of the insight it provides (via the Buckingham pi theorem, nondimensionalization yields the smallest group of parameters that influence your equations). |
A recent paper written by Avi Loeb, Rafael Batista and myself, available to read at the preprint arXiv and published in the Journal of Cosmological and Astroparticle Physics, has garnered quite a bit of attention from various media outlets, with a press release from Harvard. The interpretations in the press range from the somewhat speculative to the outright ridiculous.
What we wanted to find is when is it most likely that any given civilization will be around? Of course, in order to do this we have to take into account a lot of unknown parameters - we only have one example of an inhabited planet to work from! Fortunately, as we will see, if you ask questions about proportions rather than overall numbers a lot of these factors cancel out.
First let's deal with the truly unknown: What is the probability that life forms on an Earth-like planet - $p(Life|ELP)$? We do not know how life started on Earth - it may have required some very special conditions, it may happen all the time without us noticing it. However, we do assume that all Earth-like planets are made equal, and so have an equal chance of life starting. This may seem unreasonable at first, but if life is really a function of planetary conditions, given two identical planets there should be no preference for one forming life over the other. Therefore we assume this number to be a constant. It may be high - the galaxy teeming with life, or low - a lonely place to be, but when we talk only about proportions this doesn't matter. As an example, consider people who are dominantly left-handed. If we didn't know the probability of being born left-handed, it may occur in 1/10 of the population, or 1 in 100, or even 1 in a million. Lefties may be all around us, or very rare. However, if the chance of being born left-handed is the same in all countries, we know there will be roughly 4 times as many lefties in China (pop. 1.3 billion) as the USA (pop. 310 million).
So, how can we know how many habitable planets there are at a given time? Like with all difficult questions, it helps to break things down into smaller chunks that are more manageable. We keep splitting these questions into pieces until we get down to those that we can handle:
Number of inhabited planets = Sum over all combinations of attributes: Number of planets with attributes x,y,z * probability life forms on a planet with these attributes.
Immediately we run into an unknown - we do not know how habitability changes with type of planet. However, if we make the conservative assumption that we require an 'Earth-like' planet, for Earth-like life, we can make progress. For life to exist on a planet, we assume that there must be liquid water on the planet's surface. There may indeed be other types of life out there, living deep in the methane seas of Titan, for example, but for now we restrict to 'life as we know it (Jim)'. This requirement, the presence of liquid water, gives us an inner and outer radius around each star, the Habitable Zone, or 'Goldilocks region' (not too hot, not too cold). Thus we shift the question from being one about planets, to being about stars:
Number of earth-like planets = sum over mass of stars: number of stars of mass M * fraction of stars of mass M that have an earth-like planet in the habitable zone.
Remarkably the last factor, known as $\eta_{Earth} $ in the astrophysics literature, can be estimated from observations of exoplanets. In some surveys it's estimated to be as high as 25% for lower mass stars, and around 10% for stars like our sun. However, since we're trying to find an order of magnitude here, we can assume from this that it is a constant.
How do we deal with the number of stars of a given mass? We know the star formation rate from observations. This is denoted $\dot{\rho_*}(m,t)$ - the rate of formation of stars of mass m at time t. We also know the star lifetime as a function of its mass. Therefore to find out how many stars of a given mass there are at any time, we integrate the star formation rate before this time, and multiply by a 'window function' that checks if the star is still alive.
Star lifetime is a function of the mass of a star, with low mass stars living a long time, and high mass stars burning out quickly. Our own sun has a lifetime of about 10 billion years, but a star of mass just 3.5 times the sun's will die within 200 million years. This is significant, because that is about the minimum length of time for an earth-like planet to accrete, form and cool to the point where water can exist on its surface. So it seems highly unlikely that life will exist around such heavy stars. Lower mass stars, such as red dwarfs, last much, much longer, as long as 10 trillion years for the lowest mass stars, about 8% of the sun's mass (below this mass, stars cannot hold together).
So we arrive at the 'master equation' - 2.1 in the paper.
$$ \frac{dP}{dt} = \frac{1}{N} \int_0^T dt' \int_{mass} dm' \dot{\rho_*}(t',m') \eta_{Earth} p(Life|ELP) Window(t-t',m') $$
Since we are looking at proportions, this is normalized - the $\frac{1}{N}$ in front of the equation. This allows us to say that the total fraction of civilizations must become 1 across all time, and so find out what proportion exist at a given time, $\frac{dP}{dt}$. We can calculate the proportion of inhabited planets at a given time from our master equation because these constants, such as the probability that life forms at all, fall out into the normalization. As $p(Life|ELP)$ and $\eta_{Earth}$ are constant, we pull them out of the front of the integral, without needing to know their exact values. The results are shown in figure 4. Those long lived dwarf stars push the most likely time for existence far into the future, partly because they live so long, and partly because some of them are yet to form.
Figure 4 from the paper: The proportion of all civilizations existing at a given cosmic time, with different lowest mass stars allowed. If we keep masses close to our sun (red line), today is highly likely, but allowing low mass stars (green and black lines) pushes the majority of life well into the future.
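A deliberately crude discretization shows the mechanics of the master equation (the star formation rate and lifetime law below are invented toy models, not the ones used in the paper; the constants $\eta_{Earth}$ and $p(Life|ELP)$ are dropped since they cancel in the normalization):

```python
def sfr(t, m):                    # toy star formation rate (invented)
    return 2.718 ** (-t / 3.0) * m ** -2.35

def lifetime(m):                  # toy lifetime law: ~10 Gyr at 1 solar mass
    return 10.0 * m ** -2.5

def dP_dt(t, masses, dt=0.1, dm=0.05):
    """Unnormalized dP/dt: integrate births at t' < t, keeping only stars
    whose lifetime exceeds t - t' (the window function)."""
    total, tp = 0.0, 0.0
    while tp < t:
        for m in masses:
            if t - tp < lifetime(m):
                total += sfr(tp, m) * dt * dm
        tp += dt
    return total

masses = [0.1, 0.5, 1.0, 3.0]     # coarse mass grid, in solar masses
# long-lived low-mass stars keep accumulating, so dP/dt grows at early times
assert dP_dt(5.0, masses) > dP_dt(1.0, masses) > 0
```

Dividing by the sum of `dP_dt` over all times would give the normalized proportion, which is all the argument in the text needs.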
We can't tell you how many alien civilizations are out there today. We have a sample size of 1. This may become clearer once we can do some spectroscopy of exoplanets, but for now we cannot claim that we are alone or that aliens will certainly exist in the future. It could be that low mass stars stop life forming (tidally locking planets, too much radiation etc) but we simply don't know.
What we have found is a new puzzle: Why are we now? |
Question: Do quasi-characters or some other semi-group properly generalize the Laplace transform or decompose functions in some setting in a way similar to how characters generalize the classical Fourier transform and decompose $L^1$ functions for locally compact abelian groups?
For all locally compact commutative groups $G$ with Haar measure $\mu$ and character group $\widehat{G}$, the Fourier transform
\begin{equation} \mathcal{F}(f)(\chi)= \int_G f(x)\overline{\chi(x)} d\mu(x) \end{equation}
takes $L^1(G,\mu) \to \text{C}_\infty(\widehat{G})$ (the continuous functions on $\widehat{G}$ vanishing at infinity). Under convolution, $L^1(G,\mu)$ is a Banach algebra, so it makes sense to talk about the Gelfand transform. If $\phi \in \mathcal{M}_B$ (the maximal ideal space, or space of nonzero characters on the algebra $L^1(G,\mu)$) then $\phi$ is also a member of the unit ball in the dual space $B^*$ (which is where we get the topology for $\mathcal{M}_B$), so we can say that there is some $\alpha_\phi \in L^\infty(G,\mu)$ such that
\begin{equation} \widehat{f}(\phi) = \phi(f) = \int_G f(x) \overline{\alpha_\phi(x)}d\mu(x) \end{equation}
Using the convolution structure of $L^1(G,\mu)$, it can be proven that $\alpha_\phi = \chi$ for some character $\chi \in \widehat{G}$, and so the Gelfand transform and Fourier transform actually coincide here (we essentially have $\mathcal{M}_B \cong \widehat{G}$). Also when talking about $L^1(G,\mu)$, the Gelfand transform is injective.
We started with characters $\chi\in \text{Hom}(G,\mathbb{T})$, but we could also consider $\text{Hom}(G,\mathbb{C}^\times)\cong \widehat{G} \times \text{Hom}(G,\mathbb{R})$. These are sometimes called generalized or quasi-characters, and the $\text{Hom}(G,\mathbb{R})$ are called real characters. Quasi-characters for $\mathbb{R}$ look like $\chi(x) = e^{(\sigma + it) x}$, so a Fourier transform for quasi-characters is the Laplace transform
\begin{equation} \widetilde{f}(\chi) = \int_{-\infty}^\infty f(x) e^{(\sigma - it)x}dx \end{equation}
Unlike the regular Fourier transform, this splits the quasi-characters into semigroups $\sigma \le 0$ and $\sigma \ge 0$, which are defined on the separate components of $L^1(\mathbb{R}) = L^1(\mathbb{R}_+)\oplus L^1(\mathbb{R}_-)$. Similarly, the quasi-characters on $\mathbb{Z}$ look like $\chi(n) = z^n$ for $z \in \mathbb{C}^\times$, so that the Fourier transform for quasi-characters is a Laurent series
\begin{equation} \widetilde{f}(\chi) = \sum_{n=-\infty}^\infty f(n)z^n \end{equation}
which splits the quasi-characters again into semigroups $|z|\le 1$ and $|z|\ge 1$ for the separate components $\ell^1(\mathbb{Z}) = \ell^1(\mathbb{N}_0)\oplus \ell^1(\mathbb{N}_0^-)$.
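A small numeric sketch (my own illustration, with an assumed sequence $f(n) = 2^{-|n|}$) of this splitting: the $n \ge 0$ part of the transform converges on the closed unit disc and the $n < 0$ part outside it, and truncating each reproduces the expected geometric series:

```python
import numpy as np

# My own illustration (not from the text): split an l^1(Z) sequence into its
# n >= 0 and n < 0 parts and evaluate the quasi-character transform
# sum_n f(n) z^n on each.  The causal part converges for |z| <= 1, the
# anticausal part for |z| >= 1 -- the two Hardy-type components.
def transform(f, z):
    """f is a dict n -> f(n); returns sum_n f(n) z^n."""
    return sum(c * z ** n for n, c in f.items())

f = {n: 2.0 ** (-abs(n)) for n in range(-20, 21)}   # an l^1 sequence
f_plus = {n: c for n, c in f.items() if n >= 0}      # supported on N_0
f_minus = {n: c for n, c in f.items() if n < 0}      # supported on -N

z_in = 0.5 * np.exp(0.3j)            # inside the unit disc
# geometric series: sum_{n>=0} (z/2)^n = 1 / (1 - z/2)
assert np.isclose(transform(f_plus, z_in), 1 / (1 - z_in / 2))

z_out = 2.0                          # outside the unit disc
# sum_{n<0} 2^n * 2^n = sum_{m>=1} 4^(-m) = 1/3 (up to tiny truncation)
assert np.isclose(transform(f_minus, z_out), 1 / 3)
```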
Both of the scenarios lead to Hardy spaces, but this construction is specific to $\mathbb{R}$ and $\mathbb{Z}$. Is there a general Hardy space construction for semigroups of this kind? Compact commutative groups, for instance, have no real characters since it would map them into a compact subgroup of $\mathbb{R}$ (of which there is only $\{0\}$), and so a generalization would necessarily require a different kind of semi-group constructed from $G$.
Pontryagin duality relies on the fact that the irreducible unitary representations of commutative groups are 1-dimensional, so the characters (which act trivially on such representations) have a natural group structure. Similar to how the Gelfand transform "decomposes" $\text{C}_\infty(X)$ functions in terms of their values at points of $X$, a commutative group structure creates a natural way to decompose $L^1$ functions (for which point evaluations don't make sense) by their points in a "frequency" space. I think of this as a phenomenon similar to the Gelfand transform on an $L^\infty$ space, which creates a new topology for the highly discontinuous $L^\infty$ functions to act continuously on, or the transform on $\text{C}_b(X)$ compactifying the space.
In this vein, I've heard people say that the Laplace transform decomposes functions in terms of an "energy" space, can this be made precise at all? |
Computing the excitation spectrum for a nucleus — that is, its energy levels and their quantum numbers — is
hard. Consider that the Schrödinger equation has no known exact solutions for atoms other than hydrogen: a helium atom, with three charges instead of two, is too complex to be treated except in approximation. The nuclear many-body problem is much thornier. Not only are there more participants in the system (ninety-five, for $^{95}$Nb), but in addition to the electrical interaction among the protons you have the pion-mediated attractive strong force, the rho- and omega-mediated hard-core repulsion, three-body forces, etc. (I'm impressed that you got the correct spins and parities for the ground states from the shell model; nice work!)
So when normal people want to know the spin and parity and energy of a nuclear state, we look it up. The best source is the National Nuclear Data Center hosted by Brookhaven National Lab, which maintains several different databases of nuclear data (each with its own steep learning curve). Searching the Evaluated Nuclear Structure Data File by decay for $^{95}$Nb brings up level schemes, with references and lots of ancillary data, for both niobium and molybdenum. These confirm that you've gotten the $J^P$ for the ground states correct. Two excited states are listed for $^{95}$Mo: one at $200\rm\, keV$ with $J^P = 3/2^+$, and the one you mention at $766\rm\,keV$ with $J^P=7/2^+$. You can follow the references to see the experimental arguments for assigning those spins and parities.
You can make some general, hand-waving predictions about spins by thinking about angular momentum conservation in the transitions. The matrix element for a particular transition is generally proportional to the overlap between the initial wavefunction and the final wavefunction. In nuclear decays the initial state is the nucleus, which is tiny and more-or-less spherical with uniform density, while the final state includes the daughter nucleus and the wavefunctions for the decay products. If the decay products carry orbital angular momentum $\ell$, the radial part of the wavefunction goes like $r^\ell$ near the origin. Dimensional analysis then says that the overlap between the nucleus and the decay wavefunction is proportional to $(kR)^\ell$, where $R$ is the nuclear radius and $k = p/\hbar = 2\pi/\lambda$ is the wavenumber of the decay product. (Note that nuclear decay products typically have $\lambda \gg R$, so you can treat the decay product wavefunction as roughly uniform averaged over the nucleus.)
If the decay amplitude is suppressed by $(kR)^\ell$ (and so the decay probability by $(kR)^{2\ell}$), that means that:

- decays where the product's momentum $p=\hbar k$ is large are preferred over decays where the product's momentum is small, and
- decays where the orbital angular momentum $\ell$ is small are preferred over decays where $\ell$ is large.
For your $\rm Nb\to Mo$ transition, the decay to the excited state is preferred over the $\frac92\to\frac52$ ground-state-to-ground-state transition, which suggests that the excited state probably has spin $\frac72, \frac92, \frac{11}2$. The most probable of these is $\frac72$, since angular momentum tends to relax during decay processes — and indeed, that's the spin of the $766\rm\,keV$ excited state. |
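The suppression argument above can be put to rough numbers (my estimate; the ~1 MeV electron energy and the $R \approx 1.2\,A^{1/3}\,$fm radius formula are assumed, not taken from the text):

```python
import math

# Rough numbers (my assumptions, not from the text): a ~1 MeV beta electron
# leaving a nucleus of mass number A = 95 with radius R ~ 1.2 A^(1/3) fm.
hbar_c = 197.327          # MeV*fm
m_e = 0.511               # electron mass, MeV
T = 1.0                   # electron kinetic energy, MeV (assumed)
p_c = math.sqrt((T + m_e) ** 2 - m_e ** 2)   # relativistic p*c, MeV
k = p_c / hbar_c                             # wavenumber, fm^-1
R = 1.2 * 95 ** (1 / 3)                      # nuclear radius, fm
kR = k * R
for ell in range(4):
    print(f"l = {ell}: (kR)^l ~ {kR ** ell:.2e}")
# kR ~ 0.04, so each extra unit of l costs roughly a factor of 25 in amplitude
```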
Consider a two-period binomial model for a risk asset with each period equal to a year and take $S_0 = 1$, $u = 1.5$, and $l = 0.6$. The interest rate for both periods is $R = .1$.
a.) Price an American put option with $K = .8$
b.) Price an American call option with $K = .8$
c.) Price an American option with path dependent payoff which pays the running maximum of the path.
Note: The running maximum at time $t$ is the maximum of the price until or at time $t$
To clarify when it says "pays the running maximum", it means that the payoff is the running maximum, i.e. max(S_0,S_1,S_2).
Attempted solution a.) We have $S_0 = 1$, $S_0u = 1.5$, $S_0l = 0.6$, $S_0u^2 = 2.25$, $S_0ul = .9$, and $S_0l^2 = .36$. The risk neutral probabilities are $$\hat{\pi}_u = \frac{1+R-l}{u-l} = .5556 \ \ \ \ \ \ \ \hat{\pi}_l = \frac{u-R-1}{u-l} = .4444$$ Now we need to calculate the continuation values at nodes $t = 0, t = 1$ price-down, and $t = 1$ price-up, which we denote $C_0,C_{1,1},C_{1,2}$ respectively. We will then compare the continuation value and exercise value at each node in a backward manner. At time $t = 1$, the continuation values are $$C_{1,2} = \frac{1}{1+R}\hat{\mathbb{E}}\left[(K-S_T)_{+}|S_1 = 1.5\right] = \frac{1}{1.1}\left(.5556(.8 - 2.25)_{+} + .4444(.8-.9)_{+}\right) = 0$$ $$C_{1,1} = \frac{1}{1+R}\hat{\mathbb{E}}\left[(K-S_T)_{+}|S_1 = 0.6\right] = \frac{1}{1.1}\left(.5556(.8 - .9)_{+} + .4444(.8-.36)_{+}\right) = 0.1778$$ $$C_{0} = \frac{1}{1+R}\hat{\mathbb{E}}\left[(K-S_T)_{+}|S_0 = 1\right] = \frac{1}{1.1}\left(.5556(.8 - 1.5)_{+} + .4444(.8-.6)_{+}\right) = 0.0808$$ Therefore, the prices of the option at these nodes are $$V_{1,2} = \max\{\frac{1}{1.1}\left(.5556(.8 - 2.25)_{+} + .4444(.8 - .9)_{+}\right),(.8-1.5)_{+}\} = 0$$ $$V_{1,1} = \max\{\frac{1}{1.1}\left(.5556(.8 - .9)_{+} + .4444(.8 - .36)_{+}\right),(.8-.6)_{+}\} = 0.2$$ $$V_{0} = \max\{\frac{1}{1.1}\left(.5556(.8 - 1.5)_{+} + .4444(.8 - .6)_{+}\right),(.8-1)_{+}\} = 0.0808$$ At time $t = 0$, we need to see if it is optimal to exercise or optimal to continue. Since the exercise value $E := (K-S_0)_{+} = (.8-1)_{+} = 0$, it is optimal to continue.
If this solution is correct then part b.) will be pretty straightforward, although I am not totally sure it is right and I may have made some mistakes. I also have no idea how to do part c.). Any suggestions are greatly appreciated.
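For what it's worth, here is a quick numerical cross-check of part a.) by backward induction (a sketch of the standard recipe, not an authoritative solution):

```python
# Backward induction for the two-period American put described above:
# u = 1.5, l = 0.6, R = 0.1, S0 = 1, strike K = 0.8.
def american_put_2period(S0=1.0, u=1.5, l=0.6, R=0.1, K=0.8):
    pu = (1 + R - l) / (u - l)      # risk-neutral up probability
    pl = 1 - pu
    disc = 1 / (1 + R)

    def payoff(S):
        return max(K - S, 0.0)

    # terminal payoffs at t = 2
    V_uu, V_ul, V_ll = payoff(S0 * u * u), payoff(S0 * u * l), payoff(S0 * l * l)
    # t = 1: compare continuation value with immediate exercise
    V_u = max(disc * (pu * V_uu + pl * V_ul), payoff(S0 * u))
    V_d = max(disc * (pu * V_ul + pl * V_ll), payoff(S0 * l))
    # t = 0
    return max(disc * (pu * V_u + pl * V_d), payoff(S0))

price = american_put_2period()
print(round(price, 4))   # ~0.0808; early exercise is optimal at the down node
```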
General case
There is indeed a mathematical theorem that deals with the number of nodes an eigenfunction corresponding to a certain eigenvalue can possess. It was laid down by Courant$^{[1, 2]}$ and it states the following:
Given the self-adjoint second order (partial) differential equation
\begin{equation} \left(\hat{L} + \lambda \rho(\mathbf{x}) \right)
u(\mathbf{x}) = 0 \end{equation}
(where $\hat{L} = L(\mathbf{\Delta}, \mathbf{x})$ is a linear,
hermitian differential operator, $\rho(\mathbf{x})$ is positive and
bounded, and $\lambda$ is the eigenvalue) for a domain $G$ with
homogeneous boundary conditions, that is $u(\mathbf{x}) = 0$ on the
boundary of the region $G$; if its eigenfunctions are ordered
according to increasing eigenvalues, then the nodes of the $n^{\text{th}}$
eigenfunction divide the domain into no more than $n$ subdomains. The
nodal set of $u(\mathbf{x})$ is defined as the set of points
$\mathbf{x}$ such that $u(\mathbf{x}) = 0$. No assumptions are made
about the number of independent variables.
The proof is rather involved and so I won't show it here. But if you want you can look it up in [1] or here.
So, Courant's nodal line theorem tells us, that if we order the possible energy eigenvalues of the time-independent Schroedinger equation as $\lambda_1 \leq \lambda_2 \leq \lambda_3 \leq \dots$, then (depending on precisely how you set up the numbering) the $n^{\text{th}}$ eigenfunction, $\Psi_{n}$ (the one with energy eigenvalue $\lambda_n$) has
at most $n$ nodes (including the trivial one at the boundary $\mathbf{x} \to \infty$). Unfortunately, this gives you only an upper bound for the number of nodes a wave function with a certain energy eigenvalue may possess. So, all we know is that the ground state wave function $\Psi_{1}$ cannot have any nodes within the region $G$ (in total it has one node, namely the one at $\mathbf{x} \to \infty$). Wave functions for higher $n$ may possess up to $n-1$ nodes within $G$ but may as well have fewer. Thus, we cannot in general say that if a wave function has more nodes than another one it will automatically correspond to a state with higher energy.

Special case: Schroedinger equation in one dimension
There is, however, a special case: For the Sturm-Liouville eigenvalue problem (and thus for
ordinary second order differential equations with homogeneous boundary conditions) we can strengthen Courant's nodal line theorem such that if we order the possible eigenvalues as $\lambda_1 \leq \lambda_2 \leq \lambda_3 \leq \dots$, then the $n^{\text{th}}$ eigenfunction (the one with energy eigenvalue $\lambda_n$) has exactly $n$ nodes (including the trivial one at the boundary $\mathbf{x} \to \infty$).
This is useful since the one-dimensional time-independent Schrödinger equation is a special case of a Sturm-Liouville equation. So, in the case of an ordinary second-order Schrödinger equation with a local potential, such as the radial Schrödinger equation for the hydrogenic atom
\begin{equation}\bigg( \frac{ - \hbar^{2} }{ 2 m_{\mathrm{e}} } \frac{ \mathrm{d}^{2} }{ \mathrm{d} r^{2} } + \frac{ \hbar^{2} }{ 2 m_{\mathrm{e}} } \frac{ \ell (\ell + 1) }{ r^{2} } - \frac{ Z e^{2} }{ 4 \pi \varepsilon_{0} r } - E \bigg) r R(r) = 0\end{equation}
it is generally true that a wavefunction with more (radial) nodes must always correspond to a state of higher energy than a wavefunction with fewer radial nodes. Also, it is clear that the wavefunctions of the one-dimensional particle-in-a-box must follow this rule. But for the three-dimensional particle-in-a-box this is not true anymore, since in that case the Schroedinger equation of the system is not an ordinary second-order differential equation but a partial differential equation, for which only the general version of Courant's nodal line theorem holds.
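A numerical illustration (my own sketch) of the Sturm-Liouville statement: diagonalizing a finite-difference Hamiltonian for the 1D particle in a box, the $n^{\text{th}}$ eigenfunction has exactly $n-1$ interior nodes:

```python
import numpy as np

# Finite-difference -(1/2) d^2/dx^2 on (0, 1) with Dirichlet boundaries
# (units hbar = m = 1).  Eigenvectors of this tridiagonal matrix approximate
# sin(n pi x); counting sign changes counts interior nodes.
N = 400                                # interior grid points
h = 1.0 / (N + 1)
H = (np.diag(np.full(N, 1.0))
     - 0.5 * np.diag(np.ones(N - 1), 1)
     - 0.5 * np.diag(np.ones(N - 1), -1)) / h**2
E, psi = np.linalg.eigh(H)             # eigenvalues in ascending order

def interior_nodes(v):
    # a node sits between grid points where the wavefunction changes sign
    return int(np.sum(np.sign(v[:-1]) * np.sign(v[1:]) < 0))

for n in range(1, 6):
    print(n, interior_nodes(psi[:, n - 1]))   # n-th state: n-1 interior nodes
```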
Some concluding remarks
For real-world systems like molecules or crystals the Schroedinger equation is a partial differential equation for which the special case outlined above doesn't apply, so that only Courant's nodal line theorem in its general form holds, which doesn't give a strict justification for the statement that more nodes mean higher energy. Yet it is very often observed that the number of nodes indeed increases with increasing energy. The reason for this can be motivated in the following way: The kinetic energy $E_{\mathrm{kin}}$ of a state is proportional to $-\int \Psi^{*} \Delta \Psi \, d^{3} r$. Via Gauss's theorem it can be shown that $-\int \Psi^{*} \Delta \Psi \, d^{3} r = \int |\nabla \Psi |^{2} \, d^{3} r$ (for wavefunctions vanishing at infinity), and so $E_{\mathrm{kin}} \propto \int | \nabla \Psi |^{2} d^{3} r$. Now, nodes force a wavefunction to change its sign. This often means that the value of $\Psi$ has to increase/decrease rather rapidly, thus leading to areas with high absolute values of the gradient and thus to high kinetic energy. Since the potential energies shouldn't differ too much between the different states, the higher kinetic energy usually also entails a higher total energy. As an example consider the bonding and antibonding wavefunctions of a homonuclear diatomic molecule whose atoms are placed at the positions $r_{\mathrm{A}}$ and $r_{\mathrm{B}}$.
The bonding wavefunction has no nodes. Its value between the atoms doesn't have to undergo a rapid change and thus the slope is rather low. The antibonding wavefunction has one node between the atoms. Its value between the atoms must change rapidly from its positive to its negative maximum, thus entailing a very high slope. The slopes of the tail regions are comparable for the bonding and antibonding wavefunctions, since there the wavefunction can smoothly fall off to zero at infinity and is not required to go from a maximum value to zero within a very confined region of space; thus even if one wavefunction has to start off at a higher maximum value the gradient will not be much higher. It follows that the antibonding wavefunction has a higher kinetic energy than the bonding wavefunction.
References
[1] R. Courant, D. Hilbert, Methods of Mathematical Physics, Vol. 1, Interscience, New York, 1953, pp. 451-455.
[2] R. Courant, "Ein allgemeiner Satz zur Theorie der Eigenfunktionen selbstadjungierter Differentialausdrücke", Nachr. v. d. Ges. d. Wiss. zu Göttingen 1923, p. 81.
Patterns, Symmetries, And Mathematical Structures In The Arts, 2020 Georgia Southern University
Patterns, Symmetries, And Mathematical Structures In The Arts, Sarah C. Deloach University Honors Program Theses
Mathematics is a discipline of academia that can be found everywhere in the world around us. Mathematicians and scientists are not the only people who need to be proficient in numbers. Those involved in social sciences and even the arts can benefit from a background in math. In fact, connections between mathematics and various forms of art have been discovered since as early as the fourth century BC. In this thesis we will study such connections and related concepts in mathematics, dances, and music.
Gehring Inequalities On Time Scales, 2020 Missouri University of Science and Technology
Gehring Inequalities On Time Scales, Martin Bohner, S. H. Saker Mathematics and Statistics Faculty Research & Creative Works
In this paper, we first prove a new dynamic inequality based on an application of the time scales version of a Hardy-type inequality. Second, by employing the obtained inequality, we prove several Gehring-type inequalities on time scales. As an application of our Gehring-type inequalities, we present some interpolation and higher integrability theorems on time scales. The results as special cases, when the time scale is equal to the set of all real numbers, contain some known results, and when the time scale is equal to the set of all integers, the results are essentially new.
Individual Based Modeling And Analysis Of Pathogen Levels In Poultry Chilling Process, 2019 Cleveland State University
Individual Based Modeling And Analysis Of Pathogen Levels In Poultry Chilling Process, Zachary Mccarthy, Ben Smith, Aamir Fazil, Jianhong Wu, Shawn D. Ryan, Daniel Munther Mathematics Faculty Publications
Pathogen control during poultry processing critically depends on more enhanced insight into contamination dynamics. In this study we build an individual based model (IBM) of the chilling process. Quantifying the relationships between typical Canadian processing specifications, water chemistry dynamics and pathogen levels both in the chiller water and on individual carcasses, the IBM is shown to provide a useful tool for risk management as it can inform risk assessment models. We apply the IBM to Campylobacter spp. contamination on broiler carcasses, illustrating how free chlorine (FC) sanitization, organic load in the water, and pre-chill carcass pathogen levels affect pathogen levels ...
Laplacian Spectral Characterization Of Signed Sun Graphs, 2019 Shiraz University
Laplacian Spectral Characterization Of Signed Sun Graphs, Fatemeh Motialah, Mohammad Hassan Shirdareh Haghighi Theory and Applications of Graphs
A sun $SG_{n}$ is a graph of order $2n$ consisting of a cycle $C_{n}$, $n\geq 3$, with a pendant edge attached to each of its vertices. In this paper, we prove that unbalanced signed sun graphs are determined by their Laplacian spectra. Also we show that a balanced signed sun graph is determined by its Laplacian spectrum if and only if $n$ is odd.
Inequalities For Sector Matrices And Positive Linear Maps, 2019 Shanghai University
Inequalities For Sector Matrices And Positive Linear Maps, Fuping Tan, Huimin Chen Electronic Journal of Linear Algebra
Ando proved that if $A, B$ are positive definite, then for any positive linear map $\Phi$, it holds \begin{eqnarray*} \Phi(A\sharp_\lambda B)\le \Phi(A)\sharp_\lambda \Phi(B), \end{eqnarray*} where $A\sharp_\lambda B$, $0\le\lambda\le 1$, means the weighted geometric mean of $A, B$. Using the recently defined geometric mean for accretive matrices, Ando's result is extended to sector matrices. Some norm inequalities are considered as well.
The Grass Grows Green In Virginia: A Grassroots Effort Leading To Comprehensive Change In Removing Mathematics Barriers For Students., 2019 Virginia Community College System
The Grass Grows Green In Virginia: A Grassroots Effort Leading To Comprehensive Change In Removing Mathematics Barriers For Students., Patricia Parker Inquiry: The Journal of the Virginia Community Colleges
The Virginia Community College System (VCCS) embarked on a comprehensive mathematics pathways project in October 2015 with a move from design to implementation in spring 2017. The VCCS Mathematics Pathways Project (VMPP) aimed not only to develop strategies to improve retention and completion, but also to address foundational barriers to students’ success. This grassroots effort involved collaboration among all 23 community colleges, over 200 mathematics faculty, and staff from career and technical support departments. Collaboration extended to the K–12 and university sectors, professional organizations, publishers, and foundations. VMPP goals focused on creating structured mathematics pathway courses for all program ...
Reinforcing The Number Of Disjoint Spanning Trees, 2019 Butler University
Reinforcing The Number Of Disjoint Spanning Trees, Damin Liu, Hong-Jian Lai, Zhi-Hong Chen Zhi-Hong Chen
The spanning tree packing number of a connected graph G, denoted by T(G), is the maximum number of edge-disjoint spanning trees of G. In this paper, we determine the minimum number of edges that must be added to G so that the resulting graph has spanning tree packing number at least k, for a given value of k.
Making Kr+1-Free Graphs R-Partite, 2019 University of Illinois at Urbana-Champaign
Making Kr+1-Free Graphs R-Partite, József Balogh, Felix Christian Clemen, Mikhail Lavrov, Bernard Lidický, Florian Pfender Bernard Lidický
The Erdős-Simonovits stability theorem states that for all $\varepsilon > 0$ there exists $\alpha > 0$ such that if $G$ is a $K_{r+1}$-free graph on $n$ vertices with $e(G) > ex(n, K_{r+1}) - \alpha n^2$, then one can remove $\varepsilon n^2$ edges from $G$ to obtain an $r$-partite graph. Füredi gave a short proof that one can choose $\alpha = \varepsilon$. We give a bound for the relationship of $\alpha$ and $\varepsilon$ which is asymptotically sharp as $\varepsilon \to 0$.
Sdrap: An Annotation Pipeline For Highly Scrambled Genomes, 2019 Illinois State University
Sdrap: An Annotation Pipeline For Highly Scrambled Genomes, Jasper Braun Annual Symposium on Biomathematics and Ecology: Education and Research
No abstract provided.
Mathematical Models For Describing Molecular Self-Assembly, 2019 University of South Florida
Mathematical Models For Describing Molecular Self-Assembly, Margherita Maria Ferrari Annual Symposium on Biomathematics and Ecology: Education and Research
No abstract provided.
Network Structure And Dynamics Of Biological Systems, 2019 University of Nevada, Reno
Network Structure And Dynamics Of Biological Systems, Deena R. Schmidt Annual Symposium on Biomathematics and Ecology: Education and Research
No abstract provided.
Efficient Control Methods For Stochastic Boolean Networks, 2019 University of Kentucky
Efficient Control Methods For Stochastic Boolean Networks, David Murrugarra Annual Symposium on Biomathematics and Ecology: Education and Research
No abstract provided.
Loop Homology Of Bi-Secondary Structures, 2019 Illinois State University
Loop Homology Of Bi-Secondary Structures, Andrei Bura Annual Symposium on Biomathematics and Ecology: Education and Research
No abstract provided.
Design Of Experiments For Unique Wiring Diagram Identification, 2019 California Polytechnic State University, San Luis Obispo
Design Of Experiments For Unique Wiring Diagram Identification, Elena Dimitrova Annual Symposium on Biomathematics and Ecology: Education and Research
No abstract provided.
The Energy-Spectrum Of Bicompatible Sequences, 2019 Illinois State University
The Energy-Spectrum Of Bicompatible Sequences, Wenda Huang Annual Symposium on Biomathematics and Ecology: Education and Research
No abstract provided.
On An Enhancement Of Rna Probing Data Using Information Theory, 2019 University of Virginia
On An Enhancement Of Rna Probing Data Using Information Theory, Thomas J.X. Li, Christian M. Reidys Annual Symposium on Biomathematics and Ecology: Education and Research
No abstract provided.
Topology And Dynamics Of Gene Regulatory Networks: A Meta-Analysis, 2019 Illinois State University
Topology And Dynamics Of Gene Regulatory Networks: A Meta-Analysis, Claus Kadelka Annual Symposium on Biomathematics and Ecology: Education and Research
No abstract provided.
Modeling Control Methods To Manage The Sylvatic Plague In Black-Tailed Prairie Dog Towns, 2019 University of Tennessee, Knoxville
Modeling Control Methods To Manage The Sylvatic Plague In Black-Tailed Prairie Dog Towns, David C. Elzinga, Shelby R. Stowe, Leland Russell Annual Symposium on Biomathematics and Ecology: Education and Research
No abstract provided.
Using Agent-Based Modeling To Investigate The Existence Of Herd Immunity Thresholds For Infectious Diseases On Randomly Generated Contact Networks, Hannah Callender Highlander, Owen Price Annual Symposium on Biomathematics and Ecology: Education and Research
No abstract provided.
Oscillation In Mathematical Epidemiology, 2019 Bates College
Oscillation In Mathematical Epidemiology, Meredith Greer Annual Symposium on Biomathematics and Ecology: Education and Research
No abstract provided. |
Show that the Hilbert transform of $h(t) = m(t) \cos(2 \pi \nu_c t)$ is
$$\hat{h} (t) = m(t) \sin(2 \pi \nu_c t),$$
where $m(t)$ is a real valued, band-limited function (i.e. we have Fourier transform $M(\nu) = 0$ for $|\nu| > \nu_m$) and $\nu_c > \nu_m.$
Attempt
I did some research and I found out that this result follows directly from 'Bedrosian's theorem'. But I am required to compute this by first finding its analytic signal $h_a$ whose imaginary part would then be the Hilbert transform. Here is my expression for $h_a (t):$
$$h_a (t) = 2 \int^\infty_0 \Big[ H(\nu) \Big] e^{j 2 \pi \nu t} \ d\nu = 2 \int^\infty_0 \Big[ \frac{1}{2} (M(\nu + \nu_c) + M(\nu - \nu_c)) \Big] e^{j 2 \pi \nu t} \ d\nu.$$
I have used the 'modulation property' of the Fourier transform to get to the RHS. So how can I proceed with the integration when we do not have an explicit expression for $m(t)$? |
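One way to convince yourself numerically (my own sketch with a concrete band-limited $m(t)$, not the requested analytic derivation): build the Hilbert transform in the frequency domain and compare with $m(t)\sin(2\pi\nu_c t)$:

```python
import numpy as np

# Numerical check of Bedrosian-type behavior (assumed example, not the
# analytic proof): m(t) band-limited to nu_m = 2 Hz, carrier nu_c = 20 Hz.
fs, T = 1000, 1.0
t = np.arange(0, T, 1 / fs)
m = np.cos(2 * np.pi * 2 * t)              # band-limited "message"
h = m * np.cos(2 * np.pi * 20 * t)         # h(t) = m(t) cos(2 pi nu_c t)

# Hilbert transform: multiply positive frequencies by -j, negative by +j
freqs = np.fft.fftfreq(t.size, 1 / fs)
h_hat = np.fft.ifft(-1j * np.sign(freqs) * np.fft.fft(h)).real

assert np.allclose(h_hat, m * np.sin(2 * np.pi * 20 * t), atol=1e-9)
```

Because both tones fit an integer number of cycles in the window, the FFT-based Hilbert transform is exact here up to floating-point error.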
A fuel cell is a reactor that converts chemical energy from a fuel (hydrogen or a hydrogen-rich gas) directly into electricity through a chemical reaction between the fuel and an oxidizing agent.
In a fuel cell, the fuel is supplied to the anode, while air (or oxygen) is supplied to the cathode.
The anode and cathode contain catalysts that cause the fuel to undergo oxidation reactions that generate positive hydrogen ions and electrons. The hydrogen ions are drawn through the electrolyte after the reaction. At the same time, electrons are drawn from the anode to the cathode through an external circuit, producing direct-current electricity. At the cathode, hydrogen ions, electrons, and oxygen react to form water (H₂O).
A hydrogen-powered fuel cell will not emit CO₂ or NOₓ.
Fuel cells that are fuelled by natural gas (e.g. methane) will have CO₂ emissions. Since fuel cells provide a higher energy yield than diesel engines and gas turbines, their CO₂ emission per unit of energy is lower.
Both batteries and fuel cells generate direct-current electricity through chemical reactions.
Fuel cells differ from batteries in that they require a continuous source of fuel and oxygen (or air) to sustain the chemical reaction, whereas in a battery the chemicals present in the battery react with each other to generate electricity. Batteries either have to be replaced or recharged when discharged. Recharging reverses the chemical reactions so that the battery reverts to its original chemical composition, which can generate electricity again when connected to an external circuit.
As the main difference among fuel cell types is the electrolyte, fuel cells are classified by the type of electrolyte they use and by their startup time, which ranges from about 1 second for proton exchange membrane fuel cells (PEM fuel cells, or PEMFC) to about 10 minutes for solid oxide fuel cells (SOFC).
Individual fuel cells produce relatively small electrical potentials, about 0.7 volts, so cells are normally “stacked”, or placed in series, to create sufficient voltage to meet the requirements. In addition to electricity, fuel cells produce water, heat and, depending on the fuel source, small amounts of nitrogen dioxide and other emissions.
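The series-stacking arithmetic above can be sketched directly (the 48 V target is an assumed example, not from the text):

```python
import math

# How many ~0.7 V cells must be stacked in series to reach an assumed
# 48 V bus voltage?  Series voltages simply add.
v_cell = 0.7       # typical single-cell voltage from the text, V
v_target = 48.0    # assumed target bus voltage, V
n_cells = math.ceil(v_target / v_cell)
print(n_cells)     # 69
```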
When hydrogen (anode) and oxygen (cathode) pass over the respective electrodes, electricity and heat are produced by means of a chemical reaction.
The chemical reaction splits each hydrogen atom into one electron and one proton.
The membrane prevents the electrons from flowing through the electrolyte to the cathode, so they flow through an external circuit instead. This flow of electrons produces the usable electric current.
The by-product of the fuel cell is water (H₂O).

Types of fuel cells

AFC: Alkaline Fuel Cell
Alkaline fuel cells differ from other types of fuel cells in the chemical reaction and the operating temperature.
AFC uses an aqueous potassium hydroxide solution as the electrolyte (held in a matrix material in static-electrolyte designs) and operates at temperatures of 90-100 °C. The alkaline electrolyte provides fast cathode reactions, which enables high performance. The AFC requires pure hydrogen.
The chemical reaction that occurs at the anode is: 2H₂ + 4OH⁻ → 4H₂O + 4e⁻. The reaction at the cathode occurs when the electrons pass around an external circuit and react to form hydroxide ions, OH⁻, as shown:

O₂ + 4e⁻ + 2H₂O → 4OH⁻
This is a technology that has been used by NASA for many years.
PAFC: Phosphoric Acid Fuel Cell
PAFC uses a matrix soaked with a liquid phosphoric acid electrolyte and operates at temperatures of 175-200 °C. The PAFC offers high fuel efficiency since the generation of electrical energy is often combined with a system for recovering the heat energy. The PAFC can use impure hydrogen as fuel.
This technology is used for electrical utility generation and for transportation.
MCFC: Molten Carbonate Fuel Cell
MCFC uses a liquid solution of lithium, sodium, and/or potassium carbonates soaked in a matrix. It operates at temperatures of 600-1000 °C. The MCFC offers high fuel efficiency.
Note that two moles of electrons and one mole of CO₂ are transferred from the cathode to the anode, as given by: H₂ + ½O₂ + CO₂ (cathode) → H₂O + CO₂ (anode)
The Nernst reversible potential for a molten carbonate fuel cell can be described as:
$$E = E^0 + \frac{RT}{2F}\ln\left(\frac{P_{H_2}\,P_{O_2}^{1/2}}{P_{H_2O}}\right) + \frac{RT}{2F}\ln\left(\frac{P_{CO_2,c}}{P_{CO_2,a}}\right)$$
Where “a” and “c” correspond to the anode and cathode gas supplies respectively.
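For illustration, a Nernst expression of this form (with the cathode/anode CO₂ ratio following from the cell reaction) can be evaluated directly; all partial pressures, the temperature, and $E^0$ below are assumed numbers, not values from the text:

```python
import math

# Illustrative evaluation of the MCFC Nernst potential (assumed inputs).
R, F = 8.314, 96485.0          # J/(mol K), C/mol
T = 923.0                      # ~650 C, a typical MCFC temperature (assumed)
E0 = 1.02                      # standard potential, V (assumed value)
p = dict(H2=0.6, O2=0.15, H2O=0.25, CO2_c=0.25, CO2_a=0.2)  # atm (assumed)

E = (E0
     + R * T / (2 * F) * math.log(p["H2"] * p["O2"] ** 0.5 / p["H2O"])
     + R * T / (2 * F) * math.log(p["CO2_c"] / p["CO2_a"]))
print(f"{E:.3f} V")
```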
The CO₂ produced at the anode is commonly recycled and used by the cathode.
This allows the reactant air to be preheated while burning unused fuel and the waste heat can be used for alternate purposes as necessary.
This configuration also allows the CO₂ to be supplied externally from a pure CO₂ source.

SPFC: Solid Polymeric Fuel Cell
SPFC fuel cells often go under the name of PEMFC or PEM, where PEM stands for Proton Exchange Membrane. PEM cells supplied directly with methanol are termed DMFC (Direct Methanol Fuel Cell). The PEMFC uses a solid organic polymer (polyperfluorosulfonic acid) electrolyte membrane. It operates at temperatures of 60-100 °C. The PEMFC provides quick start-up.
The chemical reaction between hydrogen and oxygen that powers the fuel cell is the same as when hydrogen is burned:
2H₂ + O₂ → 2H₂O
The main difference between burning hydrogen and fuelling a fuel cell with hydrogen is the efficiency. When hydrogen is burned in an internal combustion engine the reaction is hard to control and only converts around 20% of the available energy into usable kinetic energy.
The rest of the energy is wasted as heat and spent overcoming the friction between the moving parts of the engine. In a fuel cell the reaction between hydrogen and oxygen is very controlled and happens at a slower rate.
The main parts of a fuel cell are: a membrane (the electrolyte, e.g. a Nafion membrane), an anode catalyst, and a cathode catalyst. In a hydrogen fuel cell, hydrogen is fed into the cell and flows over the anode catalyst. When hydrogen molecules hit the anode catalyst, each H₂ molecule separates into two hydrogen ions (two protons) and two electrons by the following chemical reaction: H₂ → 2H⁺ + 2e⁻
The electrons flow from the anode to the cathode through the external circuit. The hydrogen ions flow from the anode through the membrane to the cathode, where oxygen is split into oxygen atoms; these take up the electrons arriving at the cathode and then bind the positively charged hydrogen ions:
O₂ (g) → 2O (adsorbed)
O (adsorbed) + e⁻ + H⁺ → OH (adsorbed)
When hydrogen ions reach the cathode catalyst, they react with the adsorbed OH to form water molecules:
OH (adsorbed) + e⁻ + H⁺ → H₂O
The electrons flowing through the external circuit constitute the generated electric current.
1 – The platinum catalyst at the anode makes the hydrogen split into protons (positive hydrogen ions) and electrons.
2 – Only positively charged ions can pass through the polymer electrolyte membrane to the cathode. The negatively charged electrons travel through the external circuit, creating an electrical current.
3 – Electrons and protons (positively charged hydrogen ions) are combined with oxygen at the cathode to form water, which is the by-product of the fuel cell.

Fuel cell efficiency
The energy efficiency of a fuel cell is measured by the amount of energy generated by the system compared to the energy stored in the fuel. We distinguish between the total energy efficiency, which includes both electrical and thermal (heat) energy, and the electrical energy efficiency. The total efficiency ratio is used if the fuel cell system can utilise the heat energy generated.
Factors to take into consideration when comparing the efficiency of different technologies, fuels and configuration:
What is measured: theoretical or actual generation of useful (work) energy.
Energy needed to produce, store and transport the fuel.
Energy needed to manufacture the system.
Fuel cells are currently assumed to provide an efficiency of 35-60% for simple systems and close to 80% for systems with heat recovery.
Thermal efficiency of a fuel cell
The thermal efficiency is basically defined as the amount of useful energy produced compared to the chemical energy of the fuel consumed, i.e. the energy released when the fuel (hydrogen) reacts with an oxidant (the oxygen of air):

$$\eta_e = \frac{\text{useful output energy}}{\Delta H}$$
This may be expressed as the Gibbs function change (measures the electrical work) to the Enthalpy change (measures the heating value of the fuel) in the cell reaction.\eta_e= \dfrac{\Delta G}{\Delta H}
The efficiency of a fuel cell is expressed based on the change in the free energy for the cell reaction:

$$H_2 + \dfrac{1}{2}O_2 \rightarrow H_2O$$
The product water is in liquid form.
The chemical energy released in the hydrogen–oxygen reaction ($\Delta H$) is 285.8 kJ/mole (68,317 cal/g mole of H2), and the free energy available for useful work ($\Delta G$) is 237.1 kJ/mole (56,690 cal/g mole of H2).
The thermal efficiency of an ideal (theoretical) fuel cell operating reversibly on pure hydrogen and oxygen at standard conditions would be:

$$\eta_e = \dfrac{237.1}{285.8} \approx 0.83$$
The efficiency of an actual fuel cell can be expressed as the “voltage efficiency” which is the ratio of the operating cell voltage to the ideal (theoretical) cell voltage.
The actual cell voltage would be less than the ideal cell voltage due to internal losses.
The ideal voltage of a fuel cell operating reversibly with pure hydrogen and oxygen in standard conditions is 1.229 V.
The thermal efficiency based on the higher heating value for hydrogen is then:

$$\eta_e = \dfrac{0.83\,V_{cell}}{V_{ideal}} = \dfrac{0.83\,V_{cell}}{1.229} = 0.675\,V_{cell}$$
The table below gives more details on efficiency and applications by fuel cell type.
Fuel Cell Type, Efficiency, Applications:

• Alkaline (AFC): 60–70% electric. Applications: military, space.
• Phosphoric Acid (PAFC): 80–85% overall with combined heat and power (CHP), 36–42% electric. Applications: distributed generation.
• Polymer Electrolyte Membrane or Proton Exchange Membrane (PEM): 50–60% electric. Applications: back-up power, portable power, small distributed generation, transportation.
• Molten Carbonate (MCFC): 85% overall with CHP, 60% electric. Applications: electric utility, large distributed generation.
• Solid Oxide (SOFC): 85% overall with CHP, 60% electric. Applications: auxiliary power, electric utility, large distributed generation. |
In a U-shaped tube, water and oil are separated by a movable membrane. What is the ratio of the heights $\frac{h_1}{h_2}$ (density of the oil $\rho_{oil} = 0.92\ \frac{g}{cm^3}$)?
I tried solving by saying that the pressure at the membrane should be the same so
$$P_{H_2O} = P_{oil}$$ $$=> P_0 + \rho_{H_2O} \cdot g \cdot h_1 = P_0 + \rho_{oil} \cdot g \cdot h_2$$ where $P_0$ is the atmospheric pressure.
Then I got
$$\frac{h_1}{h_2} = \frac{\rho_{oil}}{\rho_{H_2O}} = 0.92$$
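As a quick numeric check of this ratio (a sketch; the oil column height h2 is an arbitrary made-up value, since g and the absolute heights cancel):

```python
# Pressure balance at the membrane: rho_w * g * h1 = rho_oil * g * h2,
# so h1/h2 = rho_oil/rho_w regardless of g or the actual heights.
rho_w = 1.00    # g/cm^3, density of water
rho_oil = 0.92  # g/cm^3, density of the oil

h2 = 10.0                    # cm, arbitrary oil column height
h1 = rho_oil * h2 / rho_w    # water height giving equal pressure at the membrane
print(round(h1 / h2, 2))  # 0.92
```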
But my friend argues that the net force on the membrane should be equal on both sides, and he got the answer as follows:
$$A_1 \cdot P_1 = A_2 \cdot P_2$$
After substituting the formula and values, he got $$\frac{h_1}{h_2} = 0.92 \frac{D^2}{d^2}$$
So all I want to ask is: which method is correct? Thanks. |
1. I looked up forward LU error bounds in Higham's Accuracy and Stability of Numerical Algorithms, Theorem 9.15 (citing Barrlund and Sun), for the LU decomposition $$A=LU,\quad A+\Delta A=(L+\Delta L)(U+\Delta U)$$ it gives the norm-wise error bound$$\begin{gather}\frac{\|\Delta U\|_F}{\|U\|_2} \leq \frac{\|L^{-1}\|_2\|U^{-1}\|_2\|A\|_2}{1-\|L^{-1}\|_2\|U^{-1}\|_2\|\Delta A\|_2}\frac{\|\Delta A\|_F}{\|A\|_F} =: M, \\ \|\Delta A\|_\infty \leq n^2\gamma_{3n}\rho_n \|A\|_\infty,\end{gather}$$where $\gamma_k = k\epsilon/(1-k\epsilon)$ ($\epsilon$ is the unit roundoff), and $\rho_n$ is the growth factor, defined as$$\max_{i,j,k} |a^{(k)}_{i,j}|/\max_{i,j} |a_{i,j}|$$with $a^{(k)}_{i,j}$ being the matrix elements at $k$-th stage of LU factorization by Gaussian elimination.
So, in principle, so long as you can estimate all of the above numbers, the procedure should produce the correct determinant sign so long as the smallest value on the diagonal of $U$ is not so close to zero that its sign could have been influenced by numerical errors:$$ \min_k |u_{kk}| \geq M\|U\|_2 \geq \|\Delta U\|_F. $$Of course, this is not very elegant. Higham also gives a component-wise error bound, which should be stricter.
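As a rough illustration of this criterion (a sketch, not a rigorous implementation: pure NumPy, an assumed growth-factor estimate rho, and the simple backward-error bound $n^2\gamma_{3n}\rho_n\|A\|_\infty$ used directly as the threshold instead of the sharper $M\|U\|_2$):

```python
import numpy as np

def det_sign_lu(A, rho=10.0):
    """Sign of det(A) from LU with partial pivoting, plus a flag saying
    whether min |u_kk| clears a crude perturbation threshold (rho is an
    assumed bound on the growth factor)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    U = A.copy()
    sign = 1.0
    for k in range(n - 1):
        p = k + np.argmax(np.abs(U[k:, k]))
        if p != k:                      # row swap flips the determinant sign
            U[[k, p]] = U[[p, k]]
            sign = -sign
        for i in range(k + 1, n):       # eliminate entries below the pivot
            m = U[i, k] / U[k, k]
            U[i, k:] -= m * U[k, k:]
    eps = np.finfo(float).eps
    gamma = 3 * n * eps / (1 - 3 * n * eps)
    bound = n**2 * gamma * rho * np.linalg.norm(A, np.inf)
    reliable = np.min(np.abs(np.diag(U))) > bound
    return sign * np.prod(np.sign(np.diag(U))), reliable

s, ok = det_sign_lu([[2.0, 1.0], [1.0, 2.0]])
print(s, ok)  # sign +1 (det = 3), comfortably above the threshold
```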
2. Not likely, but if you know something about how the numbers $a,b$ appear in off-diagonal elements, it is possible there might be a lower bound $m = \min_A |\det A|$. If it so happens that it is easy to calculate, then the determinant's sign can be accurately determined so long as the numerical errors are at most $m$.
3. Eigendecomposition, although more expensive, might also be helpful. For example, GSL (https://www.gnu.org/software/gsl/manual/html_node/Real-Symmetric-Matrices.html#Real-Symmetric-Matrices) promises that
The computed eigenvalues are accurate to an absolute accuracy of $\epsilon \|A\|_2$, where $\epsilon$ is the machine precision.
This would directly address the issue of whether the determinant's sign is correct - that would require all eigenvalues to satisfy $|\lambda| > \epsilon \|A\|_2$.
4. In general, computation of a sign of some quantity is ill-conditioned when that quantity is small - the determinant's sign would be very sensitive to small changes in the input matrix. So the common solution to this problem is just to look for a way to avoid computing the determinant's sign at all, but I don't know how feasible that is.
5. Arbitrary-precision floating-point arithmetic might also help. Although not with GSL, there are libraries (e.g., Eigen) that implement linear algebra in a way that can work with, for example, mpfr.
6. ( Edit.) The $LDL^\top$ decomposition is even easier I think: from Barrlund (Eq. 2.1b), ($D$ is the diagonal from $U$, so I don't think it matters if you actually compute $LU$ instead because this is a perturbation analysis)$$ \|\Delta D\|_F \leq \frac{\kappa_2(A)}{1-\kappa_2(A)\frac{\|\Delta A\|_2}{\|A\|_2}} \|\Delta A\|_F = \beta, $$which now only needs estimates of $A$'s 2-norm condition number $\kappa_2(A)$ and the normwise error backward error estimates $\|\Delta A\|_2$, $\|\Delta A\|_F$ from above. The correct sign is now guaranteed by something like $|d_{kk}|\geq \beta$. |
1. Feature Selection
We could delete some existing columns by using feature importance.
2. Feature Generation/Extraction
We could add some new columns. For example, generating new attributes using existing ones for each row or generating new attributes across multiple rows.
We have a lot of possible interactions, and we need to reduce number of features (group existing features before generation or feature selection after generation).
2.1 High-Order Linear Interactions
We can use principal component analysis (PCA) to find uncorrelated components or canonical correlation analysis (CCA) to find correlated components across multiple sources.
2.1.1 PCA
We have seen PCA in data quality section and here is an example of PCA in Python.
2.1.2 CCA
In statistics, CCA is a way of inferring information from cross-covariance matrices. If we have two vectors $X=(x_1, ..., x_n)$ and $Y=(y_1, ..., y_n)$ of random variables, and there are correlations among the variables, then CCA will find linear combinations of $X$ and $Y$ which have maximum correlation with each other. In other words, CCA seeks linear transforms such that correlation is maximized in the common subspace: $$(a', b')=\arg\max\mathrm{corr}(a^TX, b^TY).$$
2.1.2.1 Computation
PCA is implemented by eigendecomposition $$\Sigma w=\lambda w,$$ while CCA is implemented by generalized eigendecomposition $$\left(\begin{matrix}0 & \Sigma_{XY} \\ \Sigma_{YX} & 0\end{matrix}\right)\left(\begin{matrix}w_X \\ w_Y\end{matrix}\right)=\lambda\left(\begin{matrix}\Sigma_{XX} & 0 \\ 0 & \Sigma_{YY}\end{matrix}\right)\left(\begin{matrix}w_X \\ w_Y\end{matrix}\right).$$
2.2 Fisher's Linear Discriminant Analysis (FLDA)
Suppose we have a global mean $\mu$ and class means $\mu_k$, $k=1, \ldots, K.$
Let within-class covariance be $$\Sigma_w=\frac{1}{K}\sum_k\frac{1}{n_k}\sum_{i=1}^{n_k}(x_i-\mu_k)(x_i-\mu_k)^T,$$ and between-class covariance be $$\Sigma_b=\frac{1}{K}\sum_k(\mu_k-\mu)(\mu_k-\mu)^T.$$ We need to find a direction $w$ that maximizes $\frac{w^T\Sigma_bw}{w^T\Sigma_ww}.$ For multiple $w$'s, FLDA can be solved by generalized eigendecomposition.
In Python, using scikit-learn:

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
clf = LinearDiscriminantAnalysis()
clf.fit(...)
clf.predict(...)
It would be easier to answer your question clearly with a drawing.
In the following, the angle coordinate of the pendulum is the angle it makes with the vertical line. When the pendulum swings right (left), the angle will be positive (negative).

With this setting, I get the exact same answer as you by working out the equations of motion. However, there seems to be some confusion about the way to decide the sign of your result.
“How can an arc length divided by a radius be negative and yet have a physical meaning? It probably has to do with the way I've drawn my edit, because right now it doesn't make sense.”
You might feel better about the idea of negative angles once you realise that there are infinitely many equivalent representations of a given angle. For instance, the $0$ angle is the same angle as all the $2n\pi$ angles with $n\in \mathbb{Z}$. More technically, all these angles are said to be part of the same equivalence class under the equivalence relation
$$x\sim y,~\text{iff}~\exists n\in\mathbb{Z}~\text{so that}~y=x+2n\pi$$
(See e.g. M. Nakahara, Geometry, Topology and Physics (2003), section 2.1.2 of the second edition.)
It is perfectly all right to give an orientation while labelling your arc length so that you would switch from going to increasing numbers to going to decreasing numbers when you change the direction along its edge. It is also fine if you do not want to do that and instead work with absolute values. Regardless, once you have made the full circle, you would identify $2\pi=0$.
The practical implication of that for your case is that you would always measure the angles by going counterclockwise as in the picture above and that, after the pendulum has reached $\theta=0$ while moving to the left, its angle would then assume values that decrease from $2\pi$. Say it goes on to reach the symmetric angle $2\pi-\theta$. Owing to the periodicity of the sine function, you would then have
$$\alpha=-\frac{g}{l}\sin(2\pi-\theta)=-\frac{g}{l}\sin(-\theta)=\frac{g}{l}\sin(\theta)~,$$
which is the exact opposite of the value it has on the right side.
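A one-line numerical check of this identity (a sketch with made-up values for g, l and theta):

```python
import math

g, l, theta = 9.81, 1.0, 0.3  # made-up values for illustration
left = -g / l * math.sin(2 * math.pi - theta)   # acceleration at 2*pi - theta
right = g / l * math.sin(theta)                 # opposite of the value at theta
print(abs(left - right) < 1e-12)  # True: the two expressions agree
```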
Hope it helps!
Answer
amplitude = $1$ period = $120^o$ or $\frac{2}{3}\pi$
Work Step by Step
The given graph shows around two periods of the sine function. Since the highest point is two squares from the x-axis, and each square represents 0.5, the amplitude is: $2 \times 0.5 = 1$.

Horizontally, one period of the graph runs from the y-axis (or $x=0$) to the edge of the fourth square to the right of the y-axis. Since each square represents $30^o$ or $\frac{\pi}{6}$, the period of the given function is: $30^o \times 4 = 120^o$, or $\frac{\pi}{6} \times 4 = \frac{2}{3}\pi$.
2.5. Automatic Differentiation¶
In machine learning, we train models, updating them successively so that they get better and better as they see more and more data. Usually, getting better means minimizing a loss function, a score that answers the question “how bad is our model?” This question is more subtle than it appears. Ultimately, what we really care about is producing a model that performs well on data that we have never seen before. But we can only fit the model to data that we can actually see. Thus we can decompose the task of fitting models into two key concerns: optimization, the process of fitting our models to observed data, and generalization, the mathematical principles and practitioners' wisdom that guide us as to how to produce models whose validity extends beyond the exact set of data points used to train them.
This section addresses the calculation of derivatives, a crucial step in nearly all deep learning optimization algorithms. With neural networks, we typically choose loss functions that are differentiable with respect to our model's parameters. Put simply, this means that for each parameter, we can determine how rapidly the loss would increase or decrease, were we to increase or decrease that parameter by an infinitesimally small amount. While the calculations for taking these derivatives are straightforward, requiring only some basic calculus, for complex models, working out the updates by hand can be a pain (and often error-prone).
The autograd package expedites this work by automatically calculating derivatives. And while many other libraries require that we compile a symbolic graph to take automatic derivatives, autograd allows us to take derivatives while writing ordinary imperative code. Every time we pass data through our model, autograd builds a graph on the fly, tracking which data combined through which operations to produce the output. This graph enables autograd to subsequently backpropagate gradients on command. Here, backpropagate simply means to trace through the compute graph, filling in the partial derivatives with respect to each parameter. If you are unfamiliar with some of the math, e.g., gradients, please refer to Section 16.2.
from mxnet import autograd, np, npx
npx.set_np()
2.5.1. A Simple Example¶
As a toy example, say that we are interested in differentiating the mapping \(y = 2\mathbf{x}^{\top}\mathbf{x}\) with respect to the column vector \(\mathbf{x}\). To start, let's create the variable x and assign it an initial value.

x = np.arange(4)
x
array([0., 1., 2., 3.])
Note that before we even calculate the gradient of y with respect to x, we will need a place to store it. It's important that we do not allocate new memory every time we take a derivative with respect to a parameter, because we will often update the same parameters thousands or millions of times and could quickly run out of memory.
Note also that a gradient with respect to a vector \(x\) is itself vector-valued and has the same shape as \(x\). Thus it is intuitive that in code, we will access a gradient taken with respect to x as an attribute of the ndarray x itself. We allocate memory for an ndarray's gradient by invoking its attach_grad() method.
x.attach_grad()
After we calculate a gradient taken with respect to x, we will be able to access it via the .grad attribute. As a safe default, x.grad initializes as an array containing all zeros. That's sensible because our most common use case for taking gradients in deep learning is to subsequently update parameters by adding (or subtracting) the gradient to maximize (or minimize) the differentiated function. By initializing the gradient to \(\mathbf{0}\), we ensure that any update accidentally executed before a gradient has actually been calculated will not alter the variable's value.
x.grad
array([0., 0., 0., 0.])
Now let's calculate y. Because we wish to subsequently calculate gradients, we want MXNet to generate a computation graph on the fly. We could imagine that MXNet would be turning on a recording device to capture the exact path by which each variable is generated.
Note that building the computation graph requires a nontrivial amount of computation. So MXNet will only build the graph when explicitly told to do so. We can invoke this behavior by placing our code inside a with autograd.record(): block.
with autograd.record():
    y = 2.0 * np.dot(x, x)
y
array(28.)
Since x is an ndarray of length 4, np.dot will perform an inner product of x and x, yielding the scalar output that we assign to y. Next, we can automatically calculate the gradient of y with respect to each component of x by calling y's backward function.
y.backward()
If we recheck the value of x.grad, we will find its contents overwritten by the newly calculated gradient.
x.grad
array([ 0., 4., 8., 12.])
The gradient of the function \(y = 2\mathbf{x}^{\top}\mathbf{x}\) with respect to \(\mathbf{x}\) should be \(4\mathbf{x}\). Let's quickly verify that our desired gradient was calculated correctly. If the two ndarrays are indeed the same, then their difference should consist of all zeros.
x.grad - 4 * x
array([0., 0., 0., 0.])
If we subsequently compute the gradient of another variable whose value was calculated as a function of x, the contents of x.grad will be overwritten.
with autograd.record():
    y = x.sum()
y.backward()
x.grad
array([1., 1., 1., 1.])
2.5.2. Backward for Non-scalar Variable¶
Technically, when y is not a scalar, the most natural interpretation of the gradient of y (a vector of length \(m\)) with respect to x (a vector of length \(n\)) is the Jacobian (an \(m\times n\) matrix). For higher-order and higher-dimensional \(y\) and \(x\), the Jacobian could be a gnarly high-order tensor and complex to compute (refer to Section 16.2).
However, while these more exotic objects do show up in advanced machine learning (including in deep learning), more often when we are calling backward on a vector, we are trying to calculate the derivatives of the loss functions for each constituent of a batch of training examples. Here, our intent is not to calculate the Jacobian but rather the sum of the partial derivatives computed individually for each example in the batch.

Thus when we invoke backward on a vector-valued variable, MXNet assumes that we want the sum of the gradients. In short, MXNet will create a new scalar variable by summing the elements in y, and compute the gradient of that variable with respect to x.
with autograd.record():  # y is a vector
    y = x * x
y.backward()

u = x.copy()
u.attach_grad()
with autograd.record():  # v is scalar
    v = (u * u).sum()
v.backward()

x.grad - u.grad
array([0., 0., 0., 0.])
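Independent of MXNet, we can sanity-check this "sum, then differentiate" convention with plain NumPy: the gradient of \(\sum_i x_i^2\) obtained by central finite differences should equal \(2x\) (a sketch; all names here are made up for illustration):

```python
import numpy as np

x = np.arange(4, dtype=float)
f = lambda v: (v * v).sum()   # scalar obtained by summing the vector output

# Central finite differences, one coordinate at a time.
eps = 1e-6
grad = np.array([
    (f(x + eps * np.eye(4)[i]) - f(x - eps * np.eye(4)[i])) / (2 * eps)
    for i in range(4)
])
print(np.allclose(grad, 2 * x))  # True
```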
2.5.3. Advanced Autograd¶
Already you know enough to employ autograd and ndarray successfully to develop many practical models. While the rest of this section is not necessary just yet, we touch on a few advanced topics for completeness.
2.5.3.1. Detach Computations¶
Sometimes, we wish to move some calculations outside of the recorded computation graph. For example, say that y was calculated as a function of x, and that subsequently z was calculated as a function of both y and x. Now, imagine that we wanted to calculate the gradient of z with respect to x, but wanted for some reason to treat y as a constant, and only take into account the role that x played after y was calculated.
Here, we can call u = y.detach() to return a new variable that has the same values as y but discards any information about how u was computed. In other words, the gradient will not flow backwards through u to x. This will provide the same functionality as if we had calculated u as a function of x outside of the autograd.record scope, yielding a u that will be treated as a constant in any call to backward. The following backward computes \(\partial (u \odot x)/\partial x\) instead of \(\partial (x \odot x \odot x) /\partial x\), where \(\odot\) stands for elementwise multiplication.
with autograd.record():
    y = x * x
    u = y.detach()
    z = u * x
z.backward()
x.grad - u
array([0., 0., 0., 0.])
Since the computation of \(y\) was recorded, we can subsequently call y.backward() to get \(\partial y/\partial x = 2x\).
y.backward()
x.grad - 2 * x
array([0., 0., 0., 0.])
2.5.4. Attach Gradients to Internal Variables¶
Attaching gradients to a variable x implicitly calls x = x.detach(). If x is computed based on other variables, this part of the computation will not be used in the backward function.
y = np.ones(4) * 2
y.attach_grad()
with autograd.record():
    u = x * y
    u.attach_grad()  # implicitly run u = u.detach()
    z = u + x
z.backward()
print(x.grad, '\n', u.grad, '\n', y.grad)
[1. 1. 1. 1.] [1. 1. 1. 1.] [0. 0. 0. 0.]
2.5.5. Head gradients¶
Detaching allows us to break the computation into several parts. We could use the chain rule (Section 16.2) to compute the gradient for the whole computation. Assume \(u = f(x)\) and \(z = g(u)\); by the chain rule we have \(\frac{dz}{dx} = \frac{dz}{du} \frac{du}{dx}\). To compute \(\frac{dz}{du}\), we can first detach \(u\) from the computation and then call z.backward() to compute the first term.
y = np.ones(4) * 2
y.attach_grad()
with autograd.record():
    u = x * y
    v = u.detach()  # u still keeps the computation graph
    v.attach_grad()
    z = v + x
z.backward()
print(x.grad, '\n', y.grad)
[1. 1. 1. 1.] [0. 0. 0. 0.]
Subsequently, we can call u.backward() to compute the second term, but pass the first term as the head gradient to multiply both terms, so that x.grad will contain \(\frac{dz}{dx}\) instead of \(\frac{du}{dx}\).
u.backward(v.grad)
print(x.grad, '\n', y.grad)
[2. 2. 2. 2.] [0. 1. 2. 3.]
2.5.6. Computing the Gradient of Python Control Flow¶
One benefit of using automatic differentiation is that even if building the computational graph of a function required passing through a maze of Python control flow (e.g., conditionals, loops, and arbitrary function calls), we can still calculate the gradient of the resulting variable. In the following snippet, note that the number of iterations of the while loop and the evaluation of the if statement both depend on the value of the input a.
def f(a):
    b = a * 2
    while np.abs(b).sum() < 1000:
        b = b * 2
    if b.sum() > 0:
        c = b
    else:
        c = 100 * b
    return c
Again, to compute gradients, we just need to record the calculation and then call the backward function.
a = np.random.normal()
a.attach_grad()
with autograd.record():
    d = f(a)
d.backward()
We can now analyze the f function defined above. Note that it is piecewise linear in its input a. In other words, for any a there exists some constant such that for a given range f(a) = g * a. Consequently d / a allows us to verify that the gradient is correct:
print(a.grad == (d / a))
1.0
2.5.7. Training Mode and Prediction Mode¶
As we have seen, after we call autograd.record, MXNet logs the operations in the following block. There is one more subtle detail to be aware of: autograd.record will also change the running mode from prediction mode to training mode. We can verify this behavior by calling the is_training function.
print(autograd.is_training())
with autograd.record():
    print(autograd.is_training())
False
True
When we get to complicated deep learning models, we will encounter some algorithms where the model behaves differently during training and when we subsequently use it to make predictions. The popular neural network techniques dropout (Section 4.6) and batch normalization (Section 7.5) both exhibit this characteristic. In other cases, our models may store auxiliary variables in training mode, for the purpose of making gradients easier to compute, that are not necessary at prediction time. We will cover these differences in detail in later chapters.

2.5.8. Summary¶

MXNet provides an autograd package to automate the calculation of derivatives. To use it, we first attach gradients to those variables with respect to which we desire partial derivatives. We then record the computation of our target value, execute its backward function, and access the resulting gradient via our variable's grad attribute.

We can detach gradients and pass head gradients to the backward function to control which part of the computation will be used in the backward function. The running modes of MXNet include training mode and prediction mode. We can determine the running mode by calling autograd.is_training().
2.5.9. Exercises¶

1. Try to run y.backward() twice.
2. In the control flow example where we calculate the derivative of d with respect to a, what would happen if we changed the variable a to a random vector or matrix? At this point, the result of the calculation f(a) is no longer a scalar. What happens to the result? How do we analyze this?
3. Redesign an example of finding the gradient of the control flow. Run and analyze the result.
4. In a second-price auction (such as in eBay or in computational advertising), the winning bidder pays the second-highest price. Compute the gradient of the final price with respect to the winning bidder's bid using autograd. What does the result tell you about the mechanism? If you are curious to learn more about second-price auctions, check out the paper by Edelman, Ostrovski and Schwartz, 2005.
5. Why is the second derivative much more expensive to compute than the first derivative?
6. Derive the head gradient relationship for the chain rule. If you get stuck, use the "Chain Rule" article on Wikipedia.
7. Assume \(f(x) = \sin(x)\). Plot \(f(x)\) and \(\frac{df(x)}{dx}\) on a graph, where you compute the latter without any symbolic calculations, i.e., without exploiting that \(f'(x) = \cos(x)\). |
Andronov-Hopf bifurcation
Yuri A. Kuznetsov (2006), Scholarpedia, 1(10):1858. doi:10.4249/scholarpedia.1858 revision #90964 [link to/cite this article] Andronov-Hopf bifurcation is the birth of a limit cycle from an equilibrium in dynamical systems generated by ODEs, when the equilibrium changes stability via a pair of purely imaginary eigenvalues. The bifurcation can be supercritical or subcritical, resulting in stable or unstable (within an invariant two-dimensional manifold) limit cycle, respectively.
Definition
Consider an autonomous system of ordinary differential equations (ODEs) \[ \dot{x}=f(x,\alpha),\ \ \ x \in {\mathbb R}^n \] depending on a parameter \(\alpha \in {\mathbb R}\ ,\) where \(f\) is smooth.
Suppose that for all sufficiently small \(|\alpha|\) the system has a family of equilibria \(x^0(\alpha)\ .\) Further assume that its Jacobian matrix \(A(\alpha)=f_x(x^0(\alpha),\alpha)\) has one pair of complex eigenvalues
\[\lambda_{1,2}(\alpha)=\mu(\alpha) \pm i\omega(\alpha) \]that becomes purely imaginary when \(\alpha=0\ ,\) i.e., \(\mu(0)=0\) and \(\omega(0)=\omega_0>0\ .\) Then, generically, as \(\alpha\) passes through \(\alpha=0\ ,\) the equilibrium changes stability and a unique
limit cycle bifurcates from it. This bifurcation is characterized by a single bifurcation condition \({\rm Re}\ \lambda_{1,2}=0\) (has codimension one) and appears generically in one-parameter families of smooth ODEs.

Two-dimensional Case
To describe the bifurcation analytically, consider the system above with \(n=2\ ,\)\[\dot{x}_1 = f_1(x_1,x_2,\alpha)\ ,\]\[\dot{x}_2 = f_2(x_1,x_2,\alpha)\ .\]If the following
nondegeneracy conditions hold: (AH.1) \(l_1(0) \neq 0\ ,\) where \(l_1(\alpha)\) is the first Lyapunov coefficient (see below); (AH.2) \(\mu'(0) \neq 0\ ,\)
then this system is locally topologically equivalent near the equilibrium to the normal form \[ \dot{y}_1 = \beta y_1 - y_2 + \sigma y_1(y_1^2+y_2^2) \ ,\] \[ \dot{y}_2 = y_1 + \beta y_2 + \sigma y_2(y_1^2+y_2^2) \ ,\] where \(y=(y_1,y_2)^T \in {\mathbb R}^2,\ \beta \in {\mathbb R}\ ,\) and \(\sigma= {\rm sign}\ l_1(0) = \pm 1\ .\)
If \(\sigma=-1\ ,\) the normal form has an equilibrium at the origin, which is asymptotically stable for \(\beta \leq 0\) (weakly at \(\beta=0\)) and unstable for \(\beta>0\ .\) Moreover, there is a unique and stable circular limit cycle that exists for \(\beta>0\) and has radius \(\sqrt{\beta}\ .\) This is a supercritical Andronov-Hopf bifurcation (see Figure 1). If \(\sigma=+1\ ,\) the origin in the normal form is asymptotically stable for \(\beta<0\) and unstable for \(\beta \geq 0\) (weakly at \(\beta=0\)), while a unique and unstable limit cycle exists for \(\beta <0\ .\) This is a subcritical Andronov-Hopf bifurcation (see Figure 2).

Multi-dimensional Case
In the \(n\)-dimensional case with \(n \geq 3\ ,\) the Jacobian matrix \(A_0=A(0)\) has
a simple pair of purely imaginary eigenvalues \(\lambda_{1,2}=\pm i \omega_0, \ \omega_0>0\ ,\) as well as \(n_s\) eigenvalues with \({\rm Re}\ \lambda_j < 0\ ,\) and \(n_u\) eigenvalues with \({\rm Re}\ \lambda_j > 0\ ,\)
with \(n_s+n_u+2=n\ .\) According to the Center Manifold Theorem, there is a family of smooth two-dimensional invariant manifolds \(W^c_{\alpha}\) near the origin. The \(n\)-dimensional system restricted on \(W^c_{\alpha}\) is two-dimensional, hence has the normal form above.
Moreover, under the non-degeneracy conditions (AH.1) and (AH.2), the \(n\)-dimensional system is locally topologically equivalent near the origin to the suspension of the normal form by the
standard saddle, i.e.\[\dot{y}_1 = \beta y_1 - y_2 + \sigma y_1(y_1^2+y_2^2)\ ,\]\[\dot{y}_2 = y_1 + \beta y_2 + \sigma y_2(y_1^2+y_2^2)\ ,\]\[\dot{y}^s = -y^s\ ,\]\[\dot{y}^u = +y^u\ ,\]where \(y=(y_1,y_2)^T \in {\mathbb R}^2\ ,\) \(y^s \in {\mathbb R}^{n_s}, \ y^u \in {\mathbb R}^{n_u}\ .\) Figure 3 shows the phase portraits of the normal form suspension when \(n=3\ ,\) \(n_s=1\ ,\) \(n_u=0\ ,\) and \(\sigma=-1\ .\)

First Lyapunov Coefficient
Whether Andronov-Hopf bifurcation is subcritical or supercritical is determined by \(\sigma\ ,\) which is the sign of the
first Lyapunov coefficient \(l_1(0)\) of the dynamical system near the equilibrium. This coefficient can be computed at \(\alpha=0\) as follows. Write the Taylor expansion of \(f(x,0)\) at \(x=0\) as\[f(x,0)=A_0x + \frac{1}{2}B(x,x) + \frac{1}{6}C(x,x,x) + O(\|x\|^4),\]where \(B(x,y)\) and \(C(x,y,z)\) are the multilinear functions with components\[\ \ B_j(x,y) =\sum_{k,l=1}^n \left. \frac{\partial^2 f_j(\xi,0)}{\partial \xi_k \partial\xi_l}\right|_{\xi=0} x_k y_l\ ,\]\[ C_j(x,y,z) =\sum_{k,l,m=1}^n \left. \frac{\partial^3 f_j(\xi,0)}{\partial \xi_k \partial\xi_l \partial \xi_m}\right|_{\xi=0} x_k y_l z_m\ ,\]where \(j=1,2,\ldots,n\ .\) Let \(q\in {\mathbb C}^n\) be a complex eigenvector of \(A_0\) corresponding to the eigenvalue \(i\omega_0\ :\) \(A_0q=i\omega_0 q\ .\) Introduce also the adjoint eigenvector \(p \in {\mathbb C}^n\ :\) \(A_0^T p = - i\omega_0 p\ ,\) \( \langle p, q \rangle =1\ .\) Here \(\langle p, q \rangle = \bar{p}^Tq\) is the inner product in \({\mathbb C}^n\ .\) Then (see, for example, Kuznetsov (2004)) \[l_1(0)= \frac{1}{2\omega_0} {\rm Re}\left[\langle p,C(q,q,\bar{q}) \rangle - 2 \langle p, B(q,A_0^{-1}B(q,\bar{q}))\rangle + \langle p, B(\bar{q},(2i\omega_0 I_n-A_0)^{-1}B(q,q))\rangle \right],\]where \(I_n\) is the unit \(n \times n\) matrix. Note that the value (but not the sign) of \(l_1(0)\) depends on the scaling of the eigenvector \(q\ .\) The normalization \( \langle q, q \rangle =1\) is one of the options to remove this ambiguity. Standard bifurcation software (e.g. MATCONT) computes \(l_1(0)\) automatically.
For planar smooth ODEs with \[ x=\left(\begin{matrix} u \\ v \end{matrix}\right),\ \ f(x,0)=\left(\begin{matrix} 0 & -\omega_0 \\ \omega_0 & 0\end{matrix}\right)\left(\begin{matrix} u \\ v \end{matrix}\right) + \left(\begin{matrix} P(u,v)\\ Q(u,v)\end{matrix}\right), \] the setting \( q=p=\frac{1}{\sqrt{2}}\left(\begin{matrix} 1 \\ -i\end{matrix}\right) \) leads to the formula \[ l_1(0)=\frac{1}{8\omega_0}(P_{uuu}+P_{uvv}+Q_{uuv}+Q_{vvv}) \] \[\ \ \ \ +\frac{1}{8\omega_0^2}\left[P_{uv}(P_{uu}+P_{vv}) -Q_{uv}(Q_{uu}+Q_{vv})-P_{uu}Q_{uu}+P_{vv}Q_{vv}\right], \] where the lower indices mean partial derivatives evaluated at \(x=0\) (cf. Guckenheimer and Holmes, 1983).
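As a sanity check of this planar formula, we can evaluate it symbolically on the normal form itself, where \({\rm sign}\ l_1(0)\) must reproduce \(\sigma\) (a sketch using sympy; \(\sigma=-1\), the supercritical case, is chosen as the test):

```python
import sympy as sp

u, v = sp.symbols('u v')
omega0 = 1
sigma = -1  # supercritical normal form as the test case
P = sigma * u * (u**2 + v**2)   # nonlinear part of the first equation
Q = sigma * v * (u**2 + v**2)   # nonlinear part of the second equation

def d(expr, *ws):
    """Partial derivative with respect to ws, evaluated at the origin."""
    for w in ws:
        expr = sp.diff(expr, w)
    return expr.subs({u: 0, v: 0})

l1 = (d(P, u, u, u) + d(P, u, v, v) + d(Q, u, u, v) + d(Q, v, v, v)) / (8 * omega0) \
   + (d(P, u, v) * (d(P, u, u) + d(P, v, v)) - d(Q, u, v) * (d(Q, u, u) + d(Q, v, v))
      - d(P, u, u) * d(Q, u, u) + d(P, v, v) * d(Q, v, v)) / (8 * omega0**2)
print(l1)  # -2, so sign l1(0) = -1 = sigma, as expected
```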
Some Important Examples
The first Lyapunov coefficient can be found easily in some simple but important examples (Izhikevich 2007). Here \(a,b>0\) are positive parameters and all derivatives should be evaluated at the critical equilibrium.
System Condition \({\rm sign\ }l_1(0)\)
\[ \dot{x}_1 = F(x_1)-x_2 \] \[ \dot{x}_2 = a(x_1-b) \]
\[F'=0\]
\[{\rm sign\ }F'''\]
\[ \dot{x}_1 = F(x_1)-x_2 \] \[ \dot{x}_2 = a(bx_1-x_2) \]
\[F'=a\] and \(b>a\)
\[{\rm sign}\left[F'''+(F'')^2/(b-a)\right]\]
\[ \dot{x}_1 = F(x_1)-x_2 \] \[ \dot{x}_2 = a(G(x_1)-x_2) \]
\[F'=a\] and \(G'>a\)
\[{\rm sign}\left[F'''+F''(F''-G'')/(G'-a)\right]\]
Other Cases
Andronov-Hopf bifurcation occurs also in infinite-dimensional ODEs generated by PDEs and DDEs, to which the Center Manifold Theorem applies. An analogue of the Andronov-Hopf bifurcation - called Neimark-Sacker bifurcation - occurs in generic dynamical systems generated by iterated maps when the critical fixed point has a pair of simple eigenvalues \( \mu_{1,2}=e^{\pm i \theta} \ .\)

References

A.A. Andronov, E.A. Leontovich, I.I. Gordon, and A.G. Maier (1971) Theory of Bifurcations of Dynamical Systems on a Plane. Israel Program Sci. Transl.
V.I. Arnold (1983) Geometrical Methods in the Theory of Ordinary Differential Equations. Grundlehren Math. Wiss., 250, Springer.
J. Guckenheimer and P. Holmes (1983) Nonlinear Oscillations, Dynamical Systems and Bifurcations of Vector Fields. Springer.
E.M. Izhikevich (2007) Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. The MIT Press.
Yu.A. Kuznetsov (2004) Elements of Applied Bifurcation Theory, Springer, 3rd edition.
J. Marsden and M. McCracken (1976) Hopf Bifurcation and its Applications. Springer.

Internal references

Willy Govaerts, Yuri A. Kuznetsov, Bart Sautois (2006) MATCONT. Scholarpedia, 1(9):1375.
James Murdock (2006) Normal forms. Scholarpedia, 1(10):1902.
Jeff Moehlis, Kresimir Josic, Eric T. Shea-Brown (2006) Periodic orbit. Scholarpedia, 1(7):1358.
Philip Holmes and Eric T. Shea-Brown (2006) Stability. Scholarpedia, 1(10):1838. |
Maass forms of levels 100 to 250 with $0 \leq R\leq 2$
The horizontal axis is the spectral parameter $R$, with the Laplace eigenvalue satisfying $\lambda=1/4+R^2$. The vertical axis is the level $N$. Each point corresponds to a Maass form of weight 0 and trivial character on $\Gamma_0(N)$, with the color showing whether the symmetry is even or odd. For $N>100$ there are only results for prime level.
Examples of some ranges with complete data:
$1\leq N\leq10, \, 0\leq R\leq 10$
$N$ prime and $100\leq N \leq 250, \, 0\leq R\leq 2$
$N$ prime and $100\leq N \leq 1000, \, 0\leq R\leq 1$

Clicking on a dot takes you to the homepage of the Maass form. |
One disadvantage of the fact that you have posted 5 identical answers (1, 2, 3, 4, 5) is that if other users have some comments about the website you created, they will post them in all these places. If you have some place online where you would like to receive feedback, you should probably also add a link to that. — Martin Sleziak 1 min ago
BTW your program looks very interesting, in particular the way to enter mathematics.
One thing that seems to be missing is documentation (at least I did not find it).
This means that it is not explained anywhere: 1) How a search query is entered. 2) What the search engine actually looks for.
For example, upon entering $\frac xy$, will it also find $\frac{\alpha}{\beta}$? Or even $\alpha/\beta$? What about $\frac{x_1}{x_2}$?
*******
Is it possible to save a link to a particular search query? For example in Google I am able to use a link such as: google.com/search?q=approach0+xyz A feature like that would be useful for posting bug reports.
When I try to click on "raw query", I get curl -v https://approach0.xyz/search/search-relay.php?q='%24%5Cfrac%7Bx%7D%7By%7D%24' But pasting the link into the browser does not do what I expected it to.
*******
If I copy-paste a search query into your search engine, it does not work. For example, if I copy $\frac xy$ and paste it, I do not get what I would expect. This means I have to type every query; the possibility to paste would be useful for long formulas. Here is what I get after pasting this particular string:
I was not able to enter integrals with bounds, such as $\int_0^1$. This is what I get instead:
One thing which we should keep in mind is that duplicates might be useful. They improve the chance that another user will find the question, since with each duplicate another copy with somewhat different phrasing of the title is added. So if you spent reasonable time by searching and did not find...
In comments and other answers it was mentioned that there are some other search engines which could be better when searching for mathematical expressions. But I think that as nowadays several pages use LaTeX syntax (Wikipedia, this site, to mention just two important examples). Additionally, som...
@MartinSleziak Thank you so much for your comments and suggestions here. I have taken a brief look at your feedback; I really love it and will seriously look into those points and improve approach0. Give me just some minutes, I will reply to your feedback in our chat. — Wei Zhong 1 min ago
I still think that it would be useful if you added to your post where you want to receive feedback from math.SE users. (I suppose I was not the only person to try it.) Especially since you wrote: "I am hoping someone interested can join and form a community to push this project forward, "
BTW those animations with examples of searching look really cool.
@MartinSleziak Thanks to your advice, I have appended more information on my posted answers. Will reply to you shortly in chat. — Wei Zhong 29 secs ago
We are an open-source project hosted on GitHub: http://github.com/approach0
Welcome to send any feedback on our GitHub issue page!
@MartinSleziak Currently it has only documentation for developers (approach0.xyz/docs); hopefully this project will accelerate its release process when people get involved. But I will list this as an important TODO before publishing approach0.xyz. At that time I hope there will be a helpful guide page for new users.
@MartinSleziak Yes, $x+y$ will find $a+b$ too; IMHO this is the most basic requirement for a math-aware search engine. Actually, approach0 looks into expression structure and symbolic alpha-equivalence too. But for now, $x_1$ will not match $x$ because approach0 considers them not structurally identical; however, you can use a wildcard to match $x_1$ just by entering a question mark "?" or \qvar{x} in a math formula. As for your example, entering $\frac \qvar{x} \qvar{y} $ is enough to match it.
@MartinSleziak As for the query link, it needs more explanation. Technically, the mechanism you mentioned that Google is using is an HTTP GET request, but for mathematics a GET request may not be appropriate, since a math query has internal structure; usually a developer would instead use an HTTP POST request with a JSON-encoded body. This makes development much easier because JSON is richly structured and makes it easy to separate math keywords.
@MartinSleziak Right now there are two workarounds for the "query link" problem you raised. The first is to use the browser back/forward buttons to navigate among the query history.
@MartinSleziak The second is to use the command-line tool 'curl' to fetch search results for a particular query link (you can actually see this in the browser too, in developer tools such as the network inspection tab of Chrome). I agree it would be helpful to add a GET query link for users to refer to a query; I will add this point to the project TODO and improve it later (it just needs some extra effort).
@MartinSleziak Yes, if you search for \alpha, you will get all \alpha documents ranked top; different symbols such as "a", "b" are ranked after the exact match.
@MartinSleziak Approach0 plans to add a "Symbol Pad" just like the ones www.symbolab.com and searchonmath.com are using. This will help users input Greek symbols even if they do not remember how to spell them.
@MartinSleziak Yes, you will get them; Greek letters are tokenized to the same thing as normal alphabetic letters.
@MartinSleziak As for integral upper bounds, I think it is a problem in a JavaScript plugin approach0 is using; I observe this issue too. The only thing you can do for now is use the arrow keys to move the cursor to the rightmost position and hit '^' so it goes to the upper-bound edit.
@MartinSleziak Yes, it has a threshold now, but this is easy to adjust in the source code. Most importantly, I have ONLY 1000 pages indexed, which means only 30,000 posts from Math Stack Exchange. This is a very small number, but I will index more posts/pages when search-engine efficiency and relevance are tuned.
@MartinSleziak As I mentioned, the index is too small currently. You probably will get what you want when this project develops to the next stage, which is to enlarge the index and publish.
@MartinSleziak Thank you for all your suggestions. Currently I just hope more developers get to know this project. Indeed, this is my side project, and development progress can be slow due to my time constraints. But I believe in its usefulness and will spend my spare time developing it until it is published.
So, we would not have polls like: "What is your favorite calculus textbook?" — GEdgar 2 hours ago
@GEdgar I'd say this goes under "tools." But perhaps it could be made explicit. — quid 1 hour ago
@quid I think that the type of question mentioned in GEdgar's comment is closer to book-recommendations which are valid questions on the main. (Although not formulated like that.) I also think that his comment was tongue-in-cheek. (Although it is a bit more difficult for me to detect sarcasm, as I am not a native speaker.) — Martin Sleziak 57 mins ago
"What is your favorite calculus textbook?" is opinion-based and/or too broad for main. If at all, it is a "poll." On tex.se they have polls ("favorite editor/distro/fonts" etc.) while actual questions on these topics are still on-topic on main. Beyond that, it is not clear why a question about which software one uses should be a valid poll while a question about which book one uses is not. — quid 7 mins ago
@quid I will reply here, since I do not want to digress in the comments too much from the topic of that question.
Certainly I agree that "What is your favorite calculus textbook?" would not be suitable for the main. Which is why I wrote in my comment: "Although not formulated like that".
Book recommendations are certainly accepted on the main site, if they are formulated in the proper way.
If there is a community poll and somebody suggests the question from GEdgar's comment, I will be perfectly OK with it. But I thought that his comment was simply a playful remark pointing out that there are plenty of "polls" of this type on the main site (although there should not be). I guess some examples can be found here or here.
Perhaps it is better to link search results directly on MSE here and here, since in the Google search results it is not immediately visible that many of those questions are closed.
Of course, I might be wrong - it is possible that GEdgar's comment was meant seriously.
I have seen such a poll for the first time on TeX.SE. The poll there concentrated on the TeXnical side of things. If you look at the questions there, they are asking about TeX distributions, packages, tools used for graphs and diagrams, etc.
Academia.SE has some questions which could be classified as "demographic" (including gender).
@quid From what I heard, it stands for Kašpar, Melichar and Baltazár, as the answer there says. In Slovakia you would see G+M+B, where G stand for Gašpar.
But that is only anecdotal.
And if I am to believe Slovak Wikipedia it should be Christus mansionem benedicat.
From the Wikipedia article: "Nad dvere kňaz píše C+M+B (Christus mansionem benedicat - Kristus nech žehná tento dom). Toto sa však často chybne vysvetľuje ako 20-G+M+B-16 podľa začiatočných písmen údajných mien troch kráľov."
My attempt at an English translation: The priest writes C+M+B on the door (Christus mansionem benedicat - let Christ bless this house). A mistaken explanation often given is that it is G+M+B, after the initial letters of the supposed names of the three wise men.
As you can see there, Christus mansionem benedicat is translated to Slovak as "Kristus nech žehná tento dom". In Czech it would be "Kristus ať žehná tomuto domu" (I believe). So K+M+B cannot come from initial letters of the translation.
It seems that they have also other interpretations in Poland.
"A tradition in Poland and German-speaking Catholic areas is the writing of the three kings' initials (C+M+B or C M B, or K+M+B in those areas where Caspar is spelled Kaspar) above the main door of Catholic homes in chalk. This is a new year's blessing for the occupants, and the initials are also believed to stand for "Christus mansionem benedicat" ("May/Let Christ Bless This House").
Depending on the city or town, this will happen sometime between Christmas and the Epiphany, with most municipalities celebrating closer to the Epiphany."
BTW in the village where I come from the priest writes those letters on houses every year during Christmas. I do not remember seeing them on a church, as in Najib's question.
In Germany, the Czech Republic and Austria the Epiphany singing is performed at or close to Epiphany (January 6) and has developed into a nationwide custom, where the children of both sexes call on every door and are given sweets and money for charity projects of Caritas, Kindermissionswerk or Dreikönigsaktion[2] - mostly in aid of poorer children in other countries.[3]
A tradition in most of Central Europe involves writing a blessing above the main door of the home. For instance if the year is 2014, it would be "20 * C + M + B + 14". The initials refer to the Latin phrase "Christus mansionem benedicat" (= May Christ bless this house); folkloristically they are often interpreted as the names of the Three Wise Men (Caspar, Melchior, Balthasar).
In Catholic parts of Germany and in Austria, this is done by the Sternsinger (literally "Star singers"). After having sung their songs, recited a poem, and collected donations for children in poorer parts of the world, they will chalk the blessing on the top of the door frame or place a sticker with the blessing.
On Slovakia specifically it says there:
The biggest carol singing campaign in Slovakia is Dobrá Novina (English: "Good News"). It is also one of the biggest charity campaigns by young people in the country. Dobrá Novina is organized by the youth organization eRko. |
Your understanding of the requirements is correct. To elaborate, the first requirement you specify can be explained as taking the parity of $x_i$. Also, the two primes are called Blum primes and the modulus is called a Blum integer, which means $p,q \in \mathbb P$, $p \equiv q \equiv 3 \pmod 4$, and $N = p \cdot q$.
There are a few other requirements, such as choosing a random $p$ and $q$ of approximately equal length (essentially the same requirements for generating RSA primes, modulo congruence relations). The initial seed, $x_0$, must also be sufficiently large and must be kept secret along with the primes. Finally, $p$ and $q$ are typically chosen such that $\gcd(\varphi(p),\varphi(q))$ is small in order to maximize cycle length. They must be
strong primes as the cycle length divides $\lambda(\lambda(N))$, which leads to short cycles if it is smooth.
If you wish to calculate any $x_i$ value directly from $x_0$ without first calculating $x_1 \cdots x_{i-1}$, you can use Euler's theorem to do $x_i = (x_0^{2^i \bmod \lambda(N)}) \bmod N$. Because of $\lambda$, you need to keep $p$ and $q$.
As usual, $\varphi$ refers to the Euler totient function and $\lambda$ refers to the Carmichael totient function.
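For concreteness, here is a toy Python sketch of the generator and the direct-jump formula. Everything here (the primes 499 and 547, the seed) is made up for illustration; real parameters would be enormous, and the caveat below about BBS's practicality applies all the more.

```python
from math import gcd

# Toy Blum primes: both congruent to 3 mod 4 (far too small for real use).
p, q = 499, 547
N = p * q

# Carmichael lambda(N) = lcm(p - 1, q - 1); needed for the direct jump,
# which is why p and q must be kept to use it.
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)

x0 = 159201                      # seed; must be coprime to N and kept secret
assert gcd(x0, N) == 1

def bbs_bits(x, n_bits):
    """Iterate x_{i+1} = x_i^2 mod N, outputting the parity of each x_i."""
    out = []
    for _ in range(n_bits):
        x = x * x % N
        out.append(x & 1)        # parity bit, as described above
    return out, x

def bbs_jump(x0, i):
    """x_i computed directly from x_0: x_i = x0^(2^i mod lambda(N)) mod N."""
    return pow(x0, pow(2, i, lam), N)

bits, x100 = bbs_bits(x0, 100)
assert x100 == bbs_jump(x0, 100)   # direct jump matches 100 squarings
```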
Note that BBS is not a good CSPRNG. It is interesting from an academic perspective, but it does not provide a practical level of security, especially with realistic modulus sizes. It is also very slow. |
I am trying to prove the theoretical "37-percent rule" for dating. The setup, if I remember correctly, is this. Suppose that you will meet exactly $N$ potential mates in your life, and you will meet them one at a time, in a perfectly random order. The potential mates rank from best to worst (in a total ordering), and you want to maximize the probability that you end up with the best one. However, you can only tell how good the mates are
relative to each other, so while you can fully rank the people you've already met, you can't say anything about the ones you have yet to meet. Also, for each potential mate, you can either stay with them forever or leave forever, i.e. there is no divorce or post-breakup dating.
The result I have heard, and which I am trying to prove, is that your best strategy is to wait and reject the first 37% of them ($1/e$, to be precise), and then marry the next one
that is better than all you have previously met. The $1/e$ number presumably arises as the limit as $N \to \infty$.
Obviously, you should never marry someone who isn't strictly better than all the previous ones, because then your chances of picking the right one are $0$. Also, given a strategy that you wait through the first $K$ partners and then marry the next one that is the best so far, I calculate your chances of succeeding as \begin{equation} \frac{\displaystyle \sum_{M = K}^{N-1} \frac{\binom{M-1}{K-1}}{N - M}}{\binom{N}{K}} \end{equation}
(Let $M$ be the maximum value among the first $K$ people you meet, where $1$ is the value of the worst person, $2$ is the next, and so on, with your desired mate having value $N$. Given $M$, your chances of winning are $\frac{1}{N - M}$, because the value $N$ must be the first to appear out of the highest $N - M$ values. The probability that the maximum is exactly $M$ is $\binom{M-1}{K-1}/\binom{N}{K}$.)
(The above formula doesn't technically work for $K = 0$, but the reasonable convention $\binom{-1}{-1} = 1$ gives the desired value $\frac1N$.)
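As a quick sanity check (an illustrative script, not part of the original question), the formula can be evaluated numerically; for $N = 100$ the maximizing $K$ indeed lands at $N/e$ rounded:

```python
from math import comb, e

def success_probability(N, K):
    """P(win) for 'skip the first K, then take the first record', using the
    formula above: sum_{M=K}^{N-1} C(M-1, K-1)/(N-M), divided by C(N, K).
    The K = 0 convention gives 1/N."""
    if K == 0:
        return 1 / N
    return sum(comb(M - 1, K - 1) / (N - M) for M in range(K, N)) / comb(N, K)

N = 100
best_K = max(range(N), key=lambda K: success_probability(N, K))
print(best_K, round(N / e))   # both give 37 for N = 100
print(round(success_probability(N, best_K), 4))
```

The maximizing $K$ hugs $N/e$, and the optimal success probability itself approaches $1/e \approx 0.368$ as $N \to \infty$.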
The two things I am unable to prove, and which I would like to see ideas for, are:
Given that you never pick someone unless they are the best so far, how do you prove further that the best strategy must involve waiting for some $K$ people and then going for anyone else after that $K$?
Why is the formula above optimized near $K = N / e$, and how could one show this? |
Yes. Both universal covers and central extensions incurred during quantization come from the same fundamental concept:
Projective representations
If $\mathcal{H}$ is our Hilbert space of states, then distinct physical states are not
vectors $\psi\in\mathcal{H}$, but rays, since multiplication by a complex number does not change the expectation values given by the rule$$ \langle A\rangle_\psi = \frac{\langle \psi \vert A \vert \psi \rangle}{\langle \psi \vert \psi \rangle}$$nor the transition probabilities$$ P(\lvert \psi \rangle \to \lvert \phi \rangle) = \frac{\lvert \langle \psi \vert \phi \rangle\rvert^2}{\langle \phi \vert \phi \rangle\langle \psi \vert \psi \rangle}$$The proper space to consider, where every element of the space is indeed a distinct physical state, is the projective Hilbert space$$ \mathrm{P}\mathcal{H} := \mathcal{H} /\sim$$$$ \lvert \psi \rangle \sim \lvert \phi \rangle :\Leftrightarrow \exists c\in\mathbb{C}: \lvert \psi \rangle = c\lvert\phi\rangle$$which is just a fancy way to write that every complex ray has been shrunk to a point. By Wigner's theorem, every symmetry should have some, not necessarily unique, unitary representation $\rho : G \to \mathrm{U}(\mathcal{H})$. Since it has to descend to a well-defined ray transformation, the action of the symmetry is given by a group homomorphism into the projective unitary group $G \to \mathrm{PU}(\mathcal{H})$, which sits in an exact sequence$$ 1 \to \mathrm{U}(1) \to \mathrm{U}(\mathcal{H}) \to \mathrm{PU}(\mathcal{H}) \to 1$$where $\mathrm{U}(1)$ represents the "group of phases" that is divided out when passing to the projective space. It is already important to notice that this means $\mathrm{U}(\mathcal{H})$ is a central extension of $\mathrm{PU}(\mathcal{H})$ by $\mathrm{U}(1)$.
To classify all possible quantumly allowed representations of a symmetry group $G$, we need to understand the allowed Lie group homomorphisms $\sigma : G\to\mathrm{PU}(\mathcal{H})$. Since linear representations are nicer to work with than these weird projective things, we will look at
Classifying projective representations by unitary linear representations
For any $g\in G$, choose a representative $\Sigma(g)\in\mathrm{U}(\mathcal{H})$ for every $\sigma(g)\in\mathrm{PU}(\mathcal{H})$. This choice is
highly non-unique, and is essentially responsible for how the central extension appears. Now, since for any $g,h\in G$ we have $\sigma(g)\sigma(h) = \sigma(gh)$, the choices of representatives must fulfill$$ \Sigma(g)\Sigma(h) = C(g,h)\Sigma(gh)$$for some $C : G\times G\to\mathrm{U}(1)$. Applying associativity to $\Sigma(g)\Sigma(h)\Sigma(k)$ gives the consistency requirement$$ C(g,hk)C(h,k) = C(g,h)C(gh,k)\tag{1}$$which is also called the cocycle identity. For any other choice $\Sigma'$, we must have$$ \Sigma'(g) = f(g)\Sigma(g) $$for some $f : G \to \mathrm{U}(1)$. $\Sigma'$ has an associated $C'$, and so we get$$ C'(g,h)\Sigma'(gh) = \Sigma'(g)\Sigma'(h) = f(g)f(h)C(g,h)f(gh)^{-1}\Sigma'(gh)$$which yields the consistency requirement$$ C'(g,h)f(gh) = f(g)f(h)C(g,h)\tag{2}$$Therefore, projective representations are classified giving the choice of unitary representatives $\Sigma$, but those that are related by $(2)$ give the same projective representation. Formally, the set$$ H^2(G,\mathrm{U}(1)) := \{C : G\times G \to \mathrm{U}(1)\mid C \text{ fulfills } (1)\} / \sim$$$$ C \sim C' :\Leftrightarrow \exists f : (2) \text{ holds }$$classifies the projective representations of $G$. We want to use it to construct a unitary representation of something that classifies the projective representation:
Define the semi-direct product $G_C := G \ltimes_C \mathrm{U}(1)$ for any representative $C$ of an element in $H^2(G,\mathrm{U}(1))$ by endowing the Cartesian product $G \times \mathrm{U}(1)$ with the multiplication$$ (g,\alpha)\cdot(h,\beta) := (gh,\alpha\beta C(g,h))$$One may check that it is a central extension, i.e. the image of $\mathrm{U}(1)\to G \ltimes_C\mathrm{U}(1)$ is in the center of $G_C$, and$$ 1 \to \mathrm{U}(1) \to G_C \to G \to 1$$is exact. For any projective representation $\sigma$, fix $\Sigma,C$ and define the linear representation$$ \sigma_C : G_C \to \mathrm{U}(\mathcal{H}), (g,\alpha) \mapsto \alpha\Sigma(g)$$Conversely, every unitary representation $\rho$ of some $G_C$ gives a pair $\Sigma,C$ by $\Sigma(g) = \alpha^{-1}\rho(g,\alpha)$.
Therefore, projective representations are in bijection to linear representations of central extensions.
On the level of the Lie algebras, we have $\mathfrak{u}(\mathcal{H}) = \mathfrak{pu}(\mathcal{H})\oplus\mathbb{R}$, where the basis element $\mathrm{i}$ of $\mathbb{R}$ generates multiples of the identity $\mathrm{e}^{\mathrm{i}\phi}\mathrm{Id}$. We omit the $\mathrm{Id}$ in the following, whenever a real number is added to an element of the Lie algebra, it is implied to be multiplied by it.
Repeating the arguments above for the Lie algebras, we get that the projective representation $\sigma : G \to \mathrm{PU}(\mathcal{H})$ induces a representation of the Lie algebra $\phi : \mathfrak{g}\to\mathfrak{pu}(\mathcal{H})$. A choice of representatives $\Phi$ in $\mathfrak{u}(\mathcal{H})$ classifies such a projective representation together with an element $\theta$ in$$ H^2(\mathfrak{g},\mathbb{R}) := \{\theta : \mathfrak{g}\times\mathfrak{g} \to \mathbb{R}\mid \theta \text{ fulfills } (1') \text{ and } \theta(u,v) = -\theta(v,u)\} / \sim$$$$ \theta \sim \theta' :\Leftrightarrow \exists (b : \mathfrak{g}\to\mathbb{R}) :\theta'(u,v) = \theta(u,v) + b([u,v])$$with consistency condition$$ \theta([u,v],w) + \theta ([w,u],v) + \theta([v,w],u) = 0 \tag{1'}$$expressing, essentially, that $\theta$ respects the Jacobi identity.
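As a concrete check of the cocycle condition $(1')$: for the Witt algebra, with bracket $[L_m,L_n]=(m-n)L_{m+n}$, the standard cocycle $\theta(L_m,L_n)=\frac{c}{12}(m^3-m)\delta_{m,-n}$ (which reappears at the end of this answer) can be verified on basis triples with a short Python sketch (illustrative only, not from the original answer):

```python
def theta(m, n, c=1.0):
    """Virasoro 2-cocycle evaluated on Witt basis elements L_m, L_n."""
    return (c / 12.0) * (m**3 - m) if m + n == 0 else 0.0

def cocycle_identity(m, n, k):
    """Left-hand side of (1') on (L_m, L_n, L_k), using the Witt bracket
    [L_m, L_n] = (m - n) L_{m+n}:
    theta([u,v],w) + theta([w,u],v) + theta([v,w],u)."""
    return ((m - n) * theta(m + n, k)
            + (k - m) * theta(k + m, n)
            + (n - k) * theta(n + k, m))

# Antisymmetry and the cocycle identity hold on all triples in a test range.
rng = range(-6, 7)
assert all(theta(m, n) == -theta(n, m) for m in rng for n in rng)
assert all(abs(cocycle_identity(m, n, k)) < 1e-9
           for m in rng for n in rng for k in rng)
```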
Thus, a projective representation of $\mathfrak{g}$ is classified by $\Phi$ together with a $\theta\in H^2(\mathfrak{g},\mathbb{R})$. Here, the central extension is defined by $\mathfrak{g}_\theta := \mathfrak{g}\oplus\mathbb{R}$ with Lie bracket$$ [u\oplus y,v\oplus z] = [u,v]\oplus\theta(u,v)$$and we get a linear representation of it into $\mathfrak{u}(\mathcal{H})$ by$$ \phi_\theta(u\oplus z) := \Phi(u) + z$$
Again, we obtain a bijection between projective representations of $\mathfrak{g}$ and linear representations of its central extensions $\mathfrak{g}_\theta$.
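A standard concrete example (not from the original answer) of a non-trivial class in $H^2(\mathfrak{g},\mathbb{R})$: take the abelian algebra $\mathfrak{g}=\mathbb{R}^2$ of phase-space translations, spanned by $X,P$ with $[X,P]=0$, and set
$$ \theta(u_1 X + u_2 P,\; v_1 X + v_2 P) := u_1 v_2 - u_2 v_1 $$
This $\theta$ is antisymmetric and satisfies $(1')$ trivially (all brackets vanish), yet it is not equivalent to zero, because every coboundary $b([u,v])$ vanishes identically on an abelian algebra. The central extension $\mathfrak{g}_\theta$ has
$$ [X\oplus 0,\; P\oplus 0] = 0\oplus 1 $$
which is the Heisenberg algebra; in its unitary representations the central generator acts as a multiple of $\mathrm{i}\,\mathrm{Id}$, recovering the canonical commutation relation $[\hat x,\hat p]=\mathrm{i}\hbar$.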
Universal covers, central charges
We are finally in the position to decide which representations of $G$ we must allow quantumly. We distinguish three cases:
There are no non-trivial central extensions of either $\mathfrak{g}$ or $G$. In this case, all projective representations of $G$ are already given by the linear representations of $G$. This is the case for e.g. $\mathrm{SU}(n)$.
There are no non-trivial central extensions of $\mathfrak{g}$, but there are discrete central extensions of $G$ by $\mathbb{Z}_n$ instead of $\mathrm{U}(1)$. Those evidently also descend to projective representations of $G$. Central extensions of Lie groups by discrete groups are just covering groups of them, because the universal cover $\overline{G}$ gives the group $G$ as the quotient $\overline{G}/\Gamma$ by a discrete central subgroup $\Gamma$ isomorphic to the fundamental group of the covered group. Thus we get that all projective representations of $G$ are given by linear representations of the universal cover. No central charges occur. This is the case for e.g. $\mathrm{SO}(n)$.
There are non-trivial central extensions of $\mathfrak{g}$, and consequently also of $G$. If the element $\theta\in H^2(\mathfrak{g},\mathbb{R})$ is not zero, there is a central charge - the generator of the $\oplus\mathbb{R}$ in $\mathfrak{g}_\theta$, or equivalently the conserved charge belonging to the central subgroup $\mathrm{U}(1)\subset G_C$. This happens for the Witt algebra, where inequivalent $\theta(L_m,L_n) = \frac{c}{12}(m^3 - m)\delta_{m,-n}$ are classified by real numbers $c\in \mathbb{R}$. |
... and Applications. As the name implies, the system detects overtaking vehicles based on optical-flow. Robustness of the system has been tested using more than 15,000 frames under realistic illumination ...
... (TDMA). The tutorial shows all the steps how the diffusion equation is discretized and how the solver is constructed. What is interesting is that this same solver can be used for solving variational optical-flow ...
... background segmentation based upon the disparity, and/or optical-flow become easier for higher level vision stages. Furthermore, we propose a more general mechanism in which the constraints can be provided ...
Here it is, finally! Interested in segmenting disparity maps, or perhaps about robust image representation spaces for disparity calculation, or about variational disparity or optical-flow calculation? ...
... my papers...in my papers, on the other hand, I reference those papers upon which my work is based on. Thanx!!The optical flow codes are as follows:Late linerisation optical-flow for large displacements. ...
... based on. Thanx!!The optical flow codes are as follows:Late linerisation optical-flow for large displacements.Early linearisation optical-flow method for small displacements (basically Horn&Schunc ...
... Sensor to Detect Overtaking Based on Optical-FlowA Method for Sparse Disparity Densification using Voting Mask PropagationDisparity Disambiguation by Fusion of Signal-and Symbolic-level informatio ...
... ing Based on Optical-flow", Machine Vision and Applications, DOI: 10.1007/s00138-011-0392-2, bibtex, link to articleJ.Ralli's doctoral thesis, "Fusion and Regularisation of Image Information in Varia ...
... GPU Based Parallel Platforms), MAEB 2012, download pdf2011P. Guzmán, J. Díaz, J. Ralli, R. Agís, and E. Ros, "Low-cost Sensor to Detect Overtaking Based on Optical-flow", Machine Vision and Applica ...
... l optical-flow calculation the energy functional describing the system is as follows:\[ E(u,v) = \min_ \int_ \Psi \Big( Edata(u,v)^2 \Big) dx + \alpha \int_ \Psi \Big( Esmooth(u,v)^2 \Big) dx\]whe ...
... of the scene. Typically the images are rectified (using epipolar geometry) meaning that point correspondences are on horizontal lines.Optical-FlowOptical-flow refers to apparent movement of pixel ... |
Posted: March 11, 2013
If you are not familiar with Fourier Analysis, the purpose of the analysis is to represent a function as the weighted sum of a collection of trigonometric functions (sines and cosines). This, in turn, allows us to extract the spectral properties of the function.
In this case, we are going to explicitly construct the Fourier series of an input signal. This will not only give us the coefficients for each sine and cosine term in the range of frequencies we are interested in, but it will also give us the phase and magnitude of those frequencies as well.
Before diving into the Modelica code, let's be clear about what is really going on here mathematically. Assume we have some input signal, \(u\), that we are interested in analyzing. Further assume that the frequencies we are interested in are the first \(n\) harmonics of a base frequency, \(F_0\) (i.e., \(F_0\), \(2 F_0\), \(3 F_0\), ..., \(n F_0\)).
What we wish to solve for are the coefficients \(a_i\) and \(b_i\) such that:
\[u = \frac{a_0}{2} + \sum_{i=1}^{n} \left[ a_i \cos(2 \pi i F_0 t) + b_i \sin(2 \pi i F_0 t) \right] \]
where \(F_0\) is in Hertz. So the question is, how do we compute \(a_i\) and \(b_i\)?
Fortunately, Jean-Baptiste Joseph Fourier already worked that out for us. It turns out to be relatively simple. We integrate over the period of our base frequency as follows:
\[a_i = 2 F_0 \int_{t=0}^{\frac{1}{F_0}} u\ \cos(2 \pi i F_0 t)\,dt \]

and

\[b_i = 2 F_0 \int_{t=0}^{\frac{1}{F_0}} u\ \sin(2 \pi i F_0 t)\,dt \]

(The factor \(2 F_0 = 2/T\) is the usual normalization by the period \(T = 1/F_0\).)
If we manage to work out \(a_i\) and \(b_i\), then we can compute the phase, \(\phi_i\), and magnitude, \(m_i\) using the following relationships:
\[\phi_i = \tan^{-1}(b_i/a_i)\]
\[m_i = \sqrt{a_i^2+b_i^2} \]
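These relationships are easy to sanity-check numerically. The short Python script below (a standalone illustration with a made-up test signal, not part of the Modelica model) approximates the integrals with a Riemann sum over one period and recovers the bias, coefficients, magnitude, and phase:

```python
from math import sin, cos, pi, atan2, sqrt, isclose

F0 = 2.0                 # base frequency in Hz (arbitrary test choice)
steps = 20_000           # integration steps over one period T = 1/F0
dt = (1.0 / F0) / steps

def u(t):
    # Test signal: bias a0/2 = 1.5, cosine at 2*F0 (amplitude 0.7),
    # sine at 3*F0 (amplitude 2.0).
    return 1.5 + 0.7 * cos(2 * pi * 2 * F0 * t) + 2.0 * sin(2 * pi * 3 * F0 * t)

def coeff(i, trig):
    """a_i (trig=cos) or b_i (trig=sin): 2/T times the integral of
    u(t)*trig(2 pi i F0 t) over one period, via a Riemann sum."""
    return 2 * F0 * sum(u(k * dt) * trig(2 * pi * i * F0 * k * dt)
                        for k in range(steps)) * dt

a = [coeff(i, cos) for i in range(4)]
b = [coeff(i, sin) for i in range(4)]

assert isclose(a[0] / 2, 1.5, abs_tol=1e-6)                  # recovered bias
assert isclose(a[2], 0.7, abs_tol=1e-6)                      # cosine amplitude
assert isclose(b[3], 2.0, abs_tol=1e-6)                      # sine amplitude
assert isclose(sqrt(a[3]**2 + b[3]**2), 2.0, abs_tol=1e-6)   # magnitude m_3
assert isclose(atan2(b[3], a[3]), pi / 2, abs_tol=1e-4)      # phase phi_3
```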
When we want to apply all this math to the behavior of a model, all we need to do is build a simple signal processing block that performs these integrals and applies these relationships.
Let's take this bit by bit. First, what is the "public interface" of the component (the parts the user has to know about)? It should include a parameter for the fundamental frequency, a way to feed a signal in for analysis and a way to extract the various analysis results as output. In other words:
parameter Modelica.SIunits.Frequency F0 "Base frequency for analysis";
parameter Integer n "Number of harmonics";
Modelica.Blocks.Interfaces.RealInput u "Input signal";
Modelica.Blocks.Interfaces.RealOutput a0 "Signal bias";
Modelica.Blocks.Interfaces.RealOutput a[n] "Fourier coefficients for cosine terms";
Modelica.Blocks.Interfaces.RealOutput b[n] "Fourier coefficients for sine terms";
Modelica.Blocks.Interfaces.RealOutput mag[n] "Magnitude for each frequency";
Modelica.Blocks.Interfaces.RealOutput phase[n] "Phase for each frequency";
Internal to the model, there are a number of things we would like to compute. So we'll create a protected section for those:
protected
  parameter Modelica.SIunits.Time dt = 1.0/F0 "Period at base frequency";
  Real s[n] = {sin(2*pi*F0*i*time) for i in 1:n} "Sine waves at various frequencies";
  Real c[n] = {cos(2*pi*F0*i*time) for i in 1:n} "Cosine waves at various frequencies";
  Real a0i "Integral of bias term";
  Real ai[n] "Integral of cosine terms";
  Real bi[n] "Integral of sine terms";
  Real f "Reconstructed function";
where s is a vector of the sine functions at each of the various frequencies we are interested in, and c is a vector of cosine functions at those same frequencies. We will use a0i, ai and bi to hold the values of the various integrals. Finally, f will compute the value of our input signal approximated as a Fourier series (delayed by one period of our fundamental frequency).
At the start of our simulation, we initialize all our integral terms to zero:
initial equation
  a0i = 0;
  ai = zeros(n);
  bi = zeros(n);
We also have the following continuous equations in our models to compute the various integrals, the reconstructed function, phase and magnitude:
equation
  der(a0i) = 2*F0*u;
  der(ai) = 2*F0*u*c;
  der(bi) = 2*F0*u*s;
  f = a0/2 + a*c + b*s;
  mag = {sqrt(a[i]^2 + b[i]^2) for i in 1:n};
  phase = {atan2(b[i], a[i]) for i in 1:n};
These equations demonstrate some of the interesting vector-related features in Modelica. For example, * is used both as scalar-times-vector multiplication (in the equations for der(ai) and der(bi)) and as an inner product between vectors (in the equation for f). We also use the array comprehension feature to compute the magnitude and phase, which allows us to write down an expression for the \(i^{th}\) element of a vector.
The only complicated part of the model is what we do at the end of each period of our fundamental frequency. This is represented in the following when clause:
equation
  // ...
  when sample(0, dt) then
    a0 = pre(a0i);
    a = pre(ai);
    b = pre(bi);
    reinit(a0i, 0);
    for i in 1:n loop
      reinit(ai[i], 0);
      reinit(bi[i], 0);
    end for;
  end when;
At the end of each period of our fundamental frequency, we extract the current values of our integral variables (a0i, ai and bi) and assign them to the coefficients in the series. Then we use the reinit function to reinitialize those integrals so they start from zero again.
Putting this all together, our model looks like this:
within Sensors.SignalProcessing;
model FourierAnalysis "Compute Fourier coefficients of an input signal"
  parameter Modelica.SIunits.Frequency F0 "Base frequency for analysis";
  parameter Integer n "Number of harmonics";
  Modelica.Blocks.Interfaces.RealInput u "Input signal";
  Modelica.Blocks.Interfaces.RealOutput a0 "Signal bias";
  Modelica.Blocks.Interfaces.RealOutput a[n] "Fourier coefficients for cosine terms";
  Modelica.Blocks.Interfaces.RealOutput b[n] "Fourier coefficients for sine terms";
  Modelica.Blocks.Interfaces.RealOutput mag[n] "Magnitude for each frequency";
  Modelica.Blocks.Interfaces.RealOutput phase[n] "Phase for each frequency";
protected
  import Modelica.Constants.pi;
  import Modelica.Math.atan2;
  parameter Modelica.SIunits.Time dt = 1.0/F0 "Period at base frequency";
  Real s[n] = {sin(2*pi*F0*i*time) for i in 1:n} "Sine waves at various frequencies";
  Real c[n] = {cos(2*pi*F0*i*time) for i in 1:n} "Cosine waves at various frequencies";
  Real a0i "Integral of bias term";
  Real ai[n] "Integral of cosine terms";
  Real bi[n] "Integral of sine terms";
  Real f "Reconstructed function";
initial equation
  a0i = 0;
  ai = zeros(n);
  bi = zeros(n);
equation
  der(a0i) = 2*F0*u;
  der(ai) = 2*F0*u*c;
  der(bi) = 2*F0*u*s;
  f = a0/2 + a*c + b*s;
  mag = {sqrt(a[i]^2 + b[i]^2) for i in 1:n};
  phase = {atan2(b[i], a[i]) for i in 1:n};
  when sample(0, dt) then
    a0 = pre(a0i);
    a = pre(ai);
    b = pre(bi);
    reinit(a0i, 0);
    for i in 1:n loop
      reinit(ai[i], 0);
      reinit(bi[i], 0);
    end for;
  end when;
end FourierAnalysis;
I've included this FourierAnalysis block in my open source Sensors package on GitHub.
Now, I would never really undertake building a model like this without testing it. Let's first consider a case where we have only a single sine wave as an input. In this case, let's make the frequency of that sine wave three times that of our fundamental frequency. In diagram form, our test looks like this:
A simple way to determine whether the model is correct is to do a visual comparison between the input signal and the reconstructed function, f, contained within the FourierAnalysis block. This works particularly well if each period of the input signal is identical. In the following figure, we can see a comparison between the input and reconstructed functions. We see clearly that they are identical.
In the figure above, we also see the values for b[3] and mag[3], which should both match the magnitude of the input signal, u, as expected.
Another test we can do is to add together many different sine and cosine waves of different magnitudes into a single waveform and then see if our FourierAnalysis block can extract them back out. That is the approach behind our second test:
Again, since this function repeats itself with every period, we can compare the input function to the reconstructed function for a quick visual confirmation:
This is an example of how a useful analytical technique like Fourier analysis can be implemented in Modelica. Such a block can then be used to perform on-the-fly analysis while simulating a Modelica model. |
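As a quick cross-check of the block's logic, here is a numerical sketch (in Python, not Modelica; the function names are my own). It evaluates the same integrals the block accumulates, $a_k = \frac{2}{T}\int_0^T u(t)\cos(2\pi k F_0 t)\,dt$ and $b_k = \frac{2}{T}\int_0^T u(t)\sin(2\pi k F_0 t)\,dt$, with the division by the period written out explicitly (the Modelica block above handles it implicitly when F0 = 1):

```python
import math

def fourier_coeffs(u, F0, n, steps=20000):
    """Approximate a0 and the first n Fourier coefficient pairs of u over
    one period T = 1/F0 by the midpoint rule, mirroring the integrals the
    FourierAnalysis block accumulates."""
    T = 1.0 / F0
    dt = T / steps
    a0 = 0.0
    a = [0.0] * n
    b = [0.0] * n
    for i in range(steps):
        t = (i + 0.5) * dt  # midpoint of each subinterval
        ut = u(t)
        a0 += 2.0 * ut * dt / T
        for k in range(1, n + 1):
            a[k - 1] += 2.0 * ut * math.cos(2 * math.pi * k * F0 * t) * dt / T
            b[k - 1] += 2.0 * ut * math.sin(2 * math.pi * k * F0 * t) * dt / T
    return a0, a, b

# Test signal in the spirit of the article's tests: bias 0.5, a first-harmonic
# cosine of amplitude 0.7, and a third-harmonic sine of amplitude 2.5.
u = lambda t: (0.5 + 0.7 * math.cos(2 * math.pi * t)
               + 2.5 * math.sin(2 * math.pi * 3 * t))
a0, a, b = fourier_coeffs(u, F0=1.0, n=3)

assert abs(a0 - 1.0) < 1e-6    # bias is a0/2 in the reconstruction
assert abs(a[0] - 0.7) < 1e-6  # first-harmonic cosine recovered
assert abs(b[2] - 2.5) < 1e-6  # third-harmonic sine recovered
```

Because the trapezoid/midpoint rule is spectrally accurate for periodic integrands, the coefficients are recovered essentially to machine precision here.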
toroidalet wrote: I Undertale hate it when people Emoji movie insert keywords so people will see their berylium page.
berylium? really? okay then...
A forum where anything goes. Introduce yourselves to other members of the forums, discuss how your name evolves when written out in the Game of Life, or just tell us how you found it. This is the forum for "non-academic" content.
When xq is in the middle of a different object's apgcode. "That's no ship!"
Airy Clave White It Nay
When you post something and someone else posts something unrelated and it goes to the next page.
Also when people say that things that haven't happened to them trigger them.
"Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life."
-Terry Pratchett
-Terry Pratchett
drc wrote: "The speed is actually" posts
Huh. I've never seen a c/posts spaceship before.
Bored of using the Moore neighbourhood for everything? Introducing the Range-2 von Neumann isotropic non-totalistic rulespace!
drc wrote: "The speed is actually" posts
Gamedziner wrote: What's wrong with them?
It could be solved with a simple PM rather than an entire post.
An exception is if it's contained within a significantly large post.
I hate it when people post rule tables for non-totalistic rules. (Yes, I know some people are on mobile, but they can just generate them themselves. [citation needed])
"Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life."
-Terry Pratchett
-Terry Pratchett
OK this is a very niche one that I hadn't remembered until a few hours ago.
You know in some arcades they give you this string of cardboard tickets you can redeem for stuff, usually meant for kids. The tickets fold beautifully perfectly packed if you order them one right, one left - zigzagging. When people fold them randomly in any direction giving a clearly low density packing with loads of strain, I just think
omg why on Earth would you do that?!Surely they'd have realised by now? It's not that crazy to realise? Surely there is a clear preference for having them well packed; nobody would prefer an unwieldy mess?!
Also when I'm typing anything and I finish writing it and it just goes to the next line or just goes to the next page. Especially when the punctuation mark at the end brings the last word down one line. This also applies to writing in a notebook: I finish writing something but the very last thing goes to a new page.
"Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life."
-Terry Pratchett
-Terry Pratchett
A for awesome wrote: When people put non-spectacularly-interesting patterns, questions, etc. in their signature.
... you were referencing me before i changed it, weren't you? because I had fit both of those.
ON A DIFFERENT NOTE.
When i want to rotate a hexagonal file but golly refuses because for some reason it calculates hexagonal patterns on a square grid and that really bugs me because if you want to show that something has six sides you don't show it with four and it makes more sense to have the grid be changed to hexagonal but I understand Von Neumann because no shape exists (that I know of) that has 4 corners and no edges but COME ON WHY?! WHY DO YOU REPRESENT HEXAGONS WITH SQUARES?!
In all seriousness this bothers me and must be fixed or I will SINGLEHANDEDLY eat a universe.
EDIT: possibly this one.
EDIT 2:
IT HAS BEGUN.
HAS
BEGUN.
83bismuth38 wrote: ... you were referencing me before i changed it, weren't you? because I had fit both of those.
Actually, I don't remember who I was referencing, but I don't think it was you, and if it was, it wasn't personal.
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
A for awesome wrote: Actually, I don't remember who I was referencing, but I don't think it was you, and if it was, it wasn't personal.
oh okay yeah of course sure
but really though, i wouldn't have cared.
When someone gives a presentation to a bunch of people and you know that they're getting the facts wrong. Especially if this is during the Q&A section.
"Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life."
-Terry Pratchett
-Terry Pratchett
When you watch a boring video in class but you understand it perfectly, and then at the end your classmates don't get it, so the teacher plays the boring video again.
when scientists decide to send a random guy into a black hole hovering directly above Earth for no reason at all.
hit; that random guy was me.
When I see a "one-step" organic reaction that occurs in an exercise book for senior high school and simply takes place under "certain circumstance" like the one marked "?" here but fail to figure out how it works even if I have prepared for our provincial chemistry olympiadEDIT: In fact it's not that hard.Just do a Darzens reaction then hydrolysis and decarboxylate.
Current status: outside the continent of cellular automata. Specifically, not on the plain of life.
An awesome gun firing cool spaceships:
x = 3, y = 5, rule = B2kn3-ekq4i/S23ijkqr4eikry2bo$2o$o$obo$b2o!
When there's a rule with a decently common puffer but it can't interact with itself
"Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life."
-Terry Pratchett
-Terry Pratchett
When that oscillator is just not sparky enough.
When you're sooooooo close to a thing you consider amazing but miss...
People posting tons of "new" discoveries that have been known for decades, showing that they've not observed standard netiquette by reading the forums a while before posting, nor done the most minimal research about whether things have been already known, despit repeated posts about where to find such resources (e.g. jslife, wiki, Life lexicon, etc.).
People posting tons of useless "new" discoveries that take longer to post than to find (e.g. "look what happens when I put this blinker next to this beehive").
Newbies with attitudes, who think they know more than people who have been part of the community for years or even decades.
Posts where the quoted text is substantially longer than added text. Especially "me too" posts.
People whose signatures are longer than the actual text of their posts.
People whose signatures include graphics or pattern files, especially ones that are just human-readable text.
Improper grammar, spelling, and punctuation (although I've gotten used to that; long-term use of the internet has made me rather fluent in typo, both reading and writing). Imperfect English is not unreasonable from people for whom English is not a primary language, but from English speakers, it is a symptom of sloppiness that can also manifest in other areas.
mniemiec wrote: People posting tons of "new" discoveries that have been known for decades [...]
That's G U S T A V O right there
Also, when you walk into a wall slowly and carefully but you hit your teeth on the wall and it hurts so bad.
Given a differentiable function $f:\mathbb{R} \rightarrow \mathbb{R}$, is the function $$\sum_{n=1}^{\infty}\frac{f(x)^n}{n^n}$$ measurable?
Here my attempt:
Let $$S_n(x)=\sum_{k=1}^{n} \frac{f(x)^k}{k^k},$$ since it is the finite sum of measurable functions, $S_n(x)$ is measurable $\forall n \in \mathbb{N}$.
And for every $x_0 \in \mathbb{R}$ the limit $\lim_{n\rightarrow\infty} S_n(x_0)=\sum_{n=1}^{\infty}\frac{f(x_0)^n}{n^n}$ exists, because the series converges by the root test: the $n$-th root of $\left|f(x_0)^n/n^n\right|$ is $|f(x_0)|/n \to 0$. So the limit function is well-defined for every $x_0 \in \mathbb{R}$.
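The root-test step can also be checked numerically: whatever real value $y = f(x_0)$ takes, the terms $y^n/n^n$ die off super-exponentially once $n$ exceeds $|y|$, so the partial sums $S_n(x_0)$ stabilize. A small sketch (plain Python; names are mine):

```python
import math

def partial_sum(y, N):
    """N-th partial sum of sum_{n>=1} y**n / n**n at a fixed value y = f(x0)."""
    return sum((y / n) ** n for n in range(1, N + 1))

# The n-th root of |y^n / n^n| is |y|/n -> 0, so the series converges for
# every real y, however large: the tail beyond n >> |y| is negligible.
for y in (0.5, -3.0, 10.0, -50.0):
    s100, s150 = partial_sum(y, 100), partial_sum(y, 150)
    assert abs(s150 - s100) < 1e-12
```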
My professor has proved the theorem that establishes:
Let $\{f_n\}_{n\in \mathbb{N}}$ be a sequence of measurable functions and $D = \{x\in \mathbb{R}; \exists \lim_{n\rightarrow\infty} f_n(x)\}$, then the function $$f:D\rightarrow\mathbb{R}$$ defined as $f(x)=\lim_{n\rightarrow\infty}f_n(x)$ is measurable too.
With this theorem it is easy to conclude the exercise.
Any correction or alternative solution to the problem would be helpful.
Thanks, everyone!
Potentially useful background info
For standard vector-valued diffusion processes the following result is well-known: Suppose we have a diffusion $X_{t}$ on $\mathbb{R}^{m}$ given by \begin{align*} dX_{t} = A(X_{t})dt + B(X_{t}) dW_{t} \end{align*} where $A: \mathbb{R}^{m} \rightarrow \mathbb{R}^{m}$ and $B: \mathbb{R}^{m} \rightarrow \mathbb{R}^{m}\times \mathbb{R}^{n}$ are smooth and $W_{t}$ is an $n$-dimensional standard Brownian motion. Equip $\mathbb{R}^{m}$ with the metric $g = (BB^{T})^{-1}$ and the Levi-Civita connection and consider $X_{t}$ to be a diffusion on the Riemannian manifold $M = (\mathbb{R}^{m},g)$ with generator $\frac{1}{2} \Delta_{M} + f$, where $\Delta_{M}$ is the Laplace–Beltrami operator and $f:\mathbb{R}^{m} \rightarrow M$ is given by a complicated expression involving $A$, $BB^{T}$ and $(BB^{T})^{-1}$. Then for any smooth curve $u:[0,T] \rightarrow M$ it holds that \begin{align*} P\bigl( \rho( X_{t} , u(t) ) < \epsilon \text{ for all } t \in [0,T] \bigr)\underset{\epsilon \rightarrow 0^{+}}{\sim} e^{-\frac{1}{2} \int_{0}^{T} \mathcal{L}(u(t),u'(t)) dt } \qquad (1) \end{align*} where $\rho$ is the Riemannian distance and $\mathcal{L}$ is a function on the tangent bundle $TM$ given by \begin{align*} \mathcal{L}(u,u') = \lVert f(u) - u' \rVert_{u}^{2} + \text{div } f(u) - \frac{1}{6} R(u) \end{align*} Here $\lVert \cdot \rVert_{u}$ is the Riemannian norm on the tangent space $T_{u}(M)$ and $R(u)$ is the scalar curvature. $\mathcal{L}$ is called the Onsager-Machlup function.
For the simple case where $m = n$ and $B = I$ (the identity), the Riemannian structure induced by the diffusion is just the Euclidean one and the theorem reduces to: for any smooth curve $u:[0,T] \rightarrow \mathbb{R}^{m}$ it holds that \begin{align*} P\bigl( \lvert X_{t} - u(t) \rvert < \epsilon \text{ for all } t \in [0,T] \bigr)\underset{\epsilon \rightarrow 0^{+}}{\sim} e^{-\frac{1}{2} \int_{0}^{T} \mathcal{L}(u(t),u'(t)) dt } \end{align*} where $\lvert \cdot \rvert$ is the Euclidean distance and \begin{align*} \mathcal{L}(u,u') = \sum_{i = 1}^{m} \bigl( A_{i}(u) - u_{i}' \bigr)^{2} + \sum_{i=1}^{m} \frac{\partial A_{i}}{\partial x_{i}}(u) \end{align*}
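To make the simple case concrete, consider the scalar Ornstein–Uhlenbeck process $dX_t = -\theta X_t\,dt + dW_t$ (so $m=n=1$, $B=I$, $A(u) = -\theta u$), for which the Onsager–Machlup Lagrangian above becomes $\mathcal L(u,u') = (\theta u + u')^2 - \theta$. Its Euler–Lagrange equation is $u'' = \theta^2 u$, so the most probable path between fixed endpoints is a $\cosh/\sinh$ combination. The sketch below (my own illustration, not taken from the references) checks numerically that this path has smaller OM action than a straight line or a wiggly path with the same endpoints; the path-independent constant $-\theta$ is dropped:

```python
import math

theta, T = 1.0, 1.0
x0, xT = 0.0, 1.0

def action(u, N=20000):
    """Discretized OM action  int_0^T (theta*u + u')^2 dt  (the constant
    -theta term is the same for every path and is omitted)."""
    dt = T / N
    s = 0.0
    for i in range(N):
        t = i * dt
        du = (u(t + dt) - u(t)) / dt          # forward difference for u'
        um = 0.5 * (u(t) + u(t + dt))         # midpoint value of u
        s += (theta * um + du) ** 2 * dt
    return s

# Euler-Lagrange solution of u'' = theta^2 u matching the endpoints:
c = (xT - x0 * math.cosh(theta * T)) / math.sinh(theta * T)
euler_lagrange = lambda t: x0 * math.cosh(theta * t) + c * math.sinh(theta * t)
straight_line  = lambda t: x0 + (xT - x0) * t / T
wiggly         = lambda t: straight_line(t) + 0.3 * math.sin(2 * math.pi * t / T)

# The EL path is the minimizer among paths with these endpoints.
assert action(euler_lagrange) < action(straight_line) < action(wiggly)
```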
References
Ikeda, N. and Watanabe, S.: Stochastic Differential Equations and Diffusion Processes, Second Edition, North-Holland, pp. 532-539, 1989.
Fujita, T. and Kotani, S.: The Onsager–Machlup function for diffusion processes, J. Math. Kyoto Univ. 22: 115–130, 1982.
Capitaine, M.: On the Onsager Machlup functional for elliptic diffusion processes. In Seminaire de Probabilites 34, Lecture Notes in Math., Springer, 2000, Vol. 1729.
Question 1
Suppose that $X_{t}$ is a complex, matrix-valued diffusion given by
\begin{align*}dX_{t} = A(X_{t})dt + B(X_{t}) dW_{t} \qquad (2)\end{align*}where $A, B: \mathbb{C}^{n}\times \mathbb{C}^{n} \rightarrow \mathbb{C}^{n}\times \mathbb{C}^{n}$ and $W_{t}$ is a real,
one-dimensional standard Brownian motion. What is the equivalent of (1) and what is the Onsager-Machlup function for $X_{t}$?

Question 2
Suppose that $X_{t}$ is a complex, vector-valued diffusion given by\begin{align*}dX_{t} = A(X_{t})dt + B(X_{t}) dW_{t} \qquad (3)\end{align*}where $A, B: \mathbb{C}^{n} \rightarrow \mathbb{C}^{n}$ and $W_{t}$ is a real,
one-dimensional standard Brownian motion. What is the equivalent of (1) and what is the Onsager-Machlup function for $X_{t}$?
Any information would be much appreciated.
Reasons for asking
Recently physicists have been trying to describe the most probable time evolution ("path") of quantum systems subject to continuous-in-time (homodyne) measurements. The state of such a system is governed by either (2) (for impure states and imperfect detection) or (3) (for pure states and perfect detection). These physicists use non-rigorous path integral methods to obtain the most likely path. And it has been known for a long time that path integral methods sometimes yield results different from the rigorous Onsager-Machlup theory described above. See Dürr, D. and Bach, A.: The Onsager–Machlup function as Lagrangian for the most probable path of a diffusion process, Commun. Math. Phys. 60: 153–170, 1978. |
Question
So recently I was thinking about this: how many scalars are available in $4$ dimensions in General Relativity (without being redundant)? For example, with the metric we can construct the following scalar:
$$ g^{\mu \nu} g_{\mu \nu} = 4 $$ is the same as: $$ (g^{\mu \nu} \otimes g^{\rho \kappa}) \cdot (g_{\mu \nu} \otimes g_{\rho \kappa} ) = 16 $$
We also have scalars like curvature, torsion, inner product of the riemann tensor with itself, etc.
Motivation
My motivation for doing so is as follows: GR is currently formulated through (rank-$2$ symmetric) tensors as: $$ R_{\mu \nu} - \frac{1}{2} R g_{\mu \nu} = \frac{8 \pi G}{c^4} T_{\mu \nu} $$ Hence any solution of the above automatically satisfies:
$$ (R_{\mu \nu} - \frac{1}{2} R g_{\mu \nu}) (R^{\mu \nu} - \frac{1}{2} R g^{\mu \nu}) = \bigg(\frac{8 \pi G}{c^4}\bigg)^2 T_{\mu \nu} T^{\mu \nu} $$
But note the latter equation is written purely in invariant observables. I was wondering if General Relativity could also be written purely in terms of observables? If not, how far short are we? Can the remaining variables be expressed as something invariant that is not a scalar (I am not sure it would be a tensor either)?
I've heard complex analysis can be useful in solving electrostatics problems, but despite doing some research I was unable to find any concrete examples. Would anyone be able to provide a simple example of where complex analysis is useful in electrostatics?
closed as too broad by Kyle Kanos, HDE 226868, Neuneck, Martin, Ryan Unger Aug 3 '15 at 20:57
Complex analysis is very useful in potential theory, the study of harmonic functions, which (by definition) satisfy Laplace's equation. One way to see this connection is to note that any harmonic function of two variables can be taken to be the real part of a complex analytic function, to which a conjugate harmonic function representing the imaginary part of the same analytic function can also be associated (using the Cauchy-Riemann equations).
The name of this field of mathematics actually comes from physics, because it originated from the study of potentials such as the electrostatic potential, which satisfies Laplace's equation with appropriate boundary conditions to account for charge distributions. As Wikipedia puts it:
one sees that the subject of two-dimensional potential theory is substantially the same as that of complex analysis
where I've added italics for emphasis.
There is a very concrete class of applications of complex analysis to two-dimensional electrostatics. Given a setup which specifies some boundary conditions for a potential and asks to find the potential in the empty space between or around the charges, one uses (a chain of) conformal mappings, which preserve harmonicity and have a whole host of other nice properties, to bring the domain where the potential satisfies Laplace's equation to one of a small set of "standard domains" where one has, a priori, solved Laplace's equation. Then, one only needs to perform the inverse conformal transformations to obtain the solution one is looking for. As far as I can recall, a large class of two-dimensional problems can be reduced to one of the following three:
1. The potential between coaxial cylinders
2. The potential between parallel plates
3. The potential in an "angular region", i.e. $\left\{x\in \mathbb R^2 \cong \mathbb C \,\middle|\, |\operatorname{Arg}(x)|\leq \theta \right\}$ for some $\theta<\pi$
For more details and examples, see e.g. this very practical book by Kreyszig (chapter 18). I will reproduce one example here:
Find the potential between the non-coaxial cylinders $C_1: |z|=1$ and $C_2: |z-\frac{2}{5}|=\frac 2 5$.
I leave it as an exercise to the reader to guess which of the three above standard situations applies here!
Essentially identical approaches also yield problems in other areas of physical interest, such as time-independent heat flow and fluid flow problems (where the relevant differential equations reduce to Laplace's equations as well). |
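For the quoted exercise, one concrete route (my own sketch; the particular Möbius map is a constant I derived myself, so treat it as an assumption rather than Kreyszig's) is the map $w(z) = \frac{2z-1}{z-2}$, which sends $|z|=1$ onto $|w|=1$ and $|z-\tfrac25|=\tfrac25$ onto $|w|=\tfrac12$. That reduces the problem to the first standard situation, coaxial cylinders, where the potential is $\Phi = \alpha\ln|w| + \beta$:

```python
import cmath, math

def w(z):
    """Mobius map taking |z| = 1 -> |w| = 1 and |z - 2/5| = 2/5 -> |w| = 1/2."""
    return (2 * z - 1) / (z - 2)

# Boundary potentials (arbitrary choice): U1 on the outer cylinder |z| = 1,
# U2 on the inner cylinder |z - 2/5| = 2/5.
U1, U2 = 0.0, 100.0
# In the w-plane the potential between concentric circles |w| = 1/2 and
# |w| = 1 is Phi = alpha*ln|w| + beta; fit the two boundary values.
alpha = (U2 - U1) / math.log(0.5)
beta = U1

def potential(z):
    """Potential at a point z between the two (non-coaxial) cylinders."""
    return alpha * math.log(abs(w(z))) + beta

# Check the map and the boundary conditions at sample boundary points.
for k in range(8):
    zo = cmath.exp(2j * math.pi * k / 8)              # on |z| = 1
    zi = 0.4 + 0.4 * cmath.exp(2j * math.pi * k / 8)  # on |z - 2/5| = 2/5
    assert abs(abs(w(zo)) - 1.0) < 1e-12
    assert abs(abs(w(zi)) - 0.5) < 1e-12
    assert abs(potential(zo) - U1) < 1e-9
    assert abs(potential(zi) - U2) < 1e-9
```

Because $w$ is conformal (analytic with nonzero derivative between the cylinders), $\Phi$ pulled back to the $z$-plane is still harmonic and matches the boundary data, which is the whole point of the method.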
Let $ p_m \nearrow \frac{N+2}{N-2}$ and consider the family of elliptic problems $$-\Delta u_m(x)=u_m(x)^{p_m} \ \text{ in } B, \qquad u_m =0 \ \text{ on } \partial B,$$ where $B$ is the unit ball centered at the origin in $\mathbb{R}^N$.
My interest is in obtaining positive solutions of perturbations of the equation $-\Delta u(x) = u(x)^p$ in $B$ where $ p $ is slightly smaller than $ \frac{N+2}{N-2}$. So in particular I will need to have a good understanding of the linearized operator $L_m(\phi)= \Delta \phi + p_m (u_m)^{p_m-1} \phi$ for large $m$.
(I believe people generally take a slightly different approach where they take a bubble and project it into $H_0^1$ to obtain the correct boundary condition.)
So here is my question. Are there any suitable $L^\infty$-type spaces, maybe with parameters involving $m$, such that $L_m$ is nicely behaved on the spaces (uniformly in $m$)? I assume one has a kernel to deal with and .... (I realize this question is not at all well posed. Sorry.) Any comments would be greatly appreciated.
I recently found myself confusing concepts from measure theory and probability theory, so I'd like to get an idea for what I'm misunderstanding. This definition is what started it all:
A sequence $\{X_{n}\}$ of random variables converges in distribution to $X$ if $$\lim_{n \to \infty} F_{n}(x) = F(x)$$
for every number $x \in \mathbb{R}$ at which $F$ is continuous.
Concerns: 1) Recalling that random variables are really just measurable functions, am I to understand that each distinct measurable function is associated with a unique Distribution Function by which its probability content is evaluated?
I was always under the impression that we use the Lebesgue measure (and its corresponding Distribution Function) to calculate the probability of random variables we encounter in general (except in abstract spaces). Is this just flat out wrong?
2) I also know that for any increasing, right-continuous function $F: \mathbb{R} \to \mathbb{R}$, there is a unique Borel measure $\mu_{F}$ such that $\mu_{F}((a,b]) = F(b) - F(a)$ for all $a,b$. Conversely, given a Borel measure on $\mathbb{R}$ that is finite and bounded on all Borel sets, we can uniquely associate it with a real-valued, right-continuous and increasing function.
Okay, so by Littlewood's principles, we know that measurable functions are nearly continuous. So this could justify associating each random variable $X_{n}$ with a unique Distribution Function $F_{n}$. But random variables (i.e., measurable functions) don't have to be increasing, so that adds to my confusion.
Short Summary: 1) To calculate the probability of a generic real-valued random variable, do we just use the CDF associated with the Lebesgue measure, or does the random variable have its own CDF? 2) If we can associate a CDF to a general random variable, how is this done if the function is not increasing?
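Regarding the first concern, a concrete model may help: take the sample space $\Omega=[0,1]$ with $P$ being Lebesgue measure. A random variable is then just a measurable map $X:\Omega\to\mathbb R$, and its CDF $F_X(x) = P(\{\omega : X(\omega)\le x\})$ is the pushforward of $P$ under $X$. Each $X$ gets its own CDF; Lebesgue measure lives upstairs on $\Omega$, and $X$ itself need not be increasing for $F_X$ to make sense. A quick numerical sketch (names are mine):

```python
import math

def measure_of_sublevel(X, x, N=100000):
    """Approximate the Lebesgue measure of {omega in [0,1] : X(omega) <= x}
    by counting midpoints of a uniform grid on [0,1]."""
    return sum(1 for i in range(N) if X((i + 0.5) / N) <= x) / N

# Two different measurable functions on the SAME probability space:
X_exp = lambda w: -math.log(1 - w)   # pushforward measure: Exp(1)
X_unif = lambda w: 2 * w             # pushforward measure: Uniform(0, 2)

for x in (0.3, 1.0, 1.7):
    F_exp = measure_of_sublevel(X_exp, x)
    F_unif = measure_of_sublevel(X_unif, x)
    assert abs(F_exp - (1 - math.exp(-x))) < 1e-3   # CDF of Exp(1)
    assert abs(F_unif - x / 2) < 1e-3               # CDF of Uniform(0, 2)
```

The same underlying measure produces two different CDFs because the CDF is attached to the random variable (via the pushforward), not to the measure on $\Omega$.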
This question already has an answer here:
Assume that $c_t$ is the UNDISCOUNTED price process for a European call option in the Bachelier model. The pricing formula is discussed in Bachelier model call option pricing formula. The undiscounted value process is $c_t = (S_t-K)\Phi\!\left( \frac{S_t-K}{\sigma\sqrt{T-t}}\right)+\sigma\sqrt{T-t}\,\phi\!\left( \frac{S_t-K}{\sigma\sqrt{T-t}}\right)$.
Is $c_t$ a martingale process?
My personal guess is YES, because of the first fundamental theorem of asset pricing. Am I correct? |
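A quick Monte Carlo sanity check of that guess (my own sketch; parameter values are arbitrary): with $S_t = S_0 + \sigma W_t$ and zero rates, $c_t = \mathbb E[(S_T-K)^+ \mid \mathcal F_t]$, so the tower property forces $\mathbb E[c_{t_2} \mid S_{t_1}] = c_{t_1}$:

```python
import math, random

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_price(S, t, K, sigma, T):
    """Undiscounted Bachelier call value at time t, spot S."""
    s = sigma * math.sqrt(T - t)
    d = (S - K) / s
    return (S - K) * norm_cdf(d) + s * norm_pdf(d)

random.seed(0)
K, sigma, T = 100.0, 2.0, 1.0
t1, t2, S1 = 0.2, 0.6, 100.5

target = call_price(S1, t1, K, sigma, T)
n = 200000
acc = 0.0
for _ in range(n):
    # One Bachelier step from t1 to t2: S2 = S1 + sigma * sqrt(t2-t1) * Z
    S2 = S1 + sigma * math.sqrt(t2 - t1) * random.gauss(0.0, 1.0)
    acc += call_price(S2, t2, K, sigma, T)
mc = acc / n

assert abs(mc - target) < 0.02   # E[c_{t2} | S_{t1}] matches c_{t1} within MC error
```

This only checks the martingale property under the pricing measure at one pair of dates, but it is consistent with the first-fundamental-theorem argument in the question.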
I am currently reading the book "Set theory on the real line" by Bartoszynski and Judah, and I am having trouble proving the following statement: Suppose $\mathcal{F}$ is a filter on $\omega$ containing the Fréchet filter $\mathcal{G}$ ($\mathcal{G}\subset\mathcal{F}$). Then the following are equivalent:
(i) For every partition of $\omega$ into finite sets $\{I_n:n\in\omega\}$, there exists $X\in\mathcal{F}$ such that $X\cap I_n=\emptyset$ for infinitely many $n\in\omega$.
(ii) For every function $f\in\omega^\omega$ which is finite to one, $f(\mathcal{F})=\{X\subset\omega:f^{-1}(X)\in\mathcal{F}\}$ is not the Fréchet filter.
[A function is finite to one if each point in its range space is the image of only finitely many points in the domain]
The book says $(i)\Leftrightarrow (ii)$ is obvious, but I can't prove it.
In addition to a positive Lyapunov exponent (for sensitivity to ICs), why do continuous chaotic dynamical systems also require a zero Lyapunov exponent?
Every continuous-time dynamical system with a bounded, non fixed-point dynamics has at least one zero Lyapunov exponent. This does not only apply to chaotic dynamics but also to periodic or quasiperiodic ones.
To see why this is the case, let $x$ and $y$ be the two trajectory segments, whose separation ($x-y$) you consider for defining or calculating the Lyapunov exponents. At every point of the attractor (or invariant manifold), we can represent this separation in a basis of Lyapunov vectors, each of which corresponds to one Lyapunov exponent. In this representation, each component of the separation grows or shrinks independently according to the respective Lyapunov exponent (on average). For example, in chaos with one positive Lyapunov exponent, the separation will quickly point in the corresponding direction because this Lyapunov exponent dominates the other ones.
Now, suppose that the trajectory segment $y$ is such that $y(t) = x(t+ε)$ for some time $t$, i.e., it is a temporally slightly advanced version of $x$. The separation of these segments may grow and shrink with time, depending on the speed of the phase-space flow, but on average it should stay constant due to the following: Since the dynamics is bounded, the trajectory $x$ will need to get close to $x(t)$ again, i.e., there needs to be some $τ$ such that $x(t+τ) \approx x(t)$. Due to the phase-space flow being continuous, we also have $y(t+τ) = x(t+τ+ε) \approx x(t+ε) = y(t)$ and thus:
$$ |x(t+τ) - y(t+τ)| \approx |x(t)-y(t)|$$
Therefore, separations in the direction of time neither shrink nor grow (on average) and in this direction we get a zero Lyapunov exponent: If we consider only such separations to compute a Lyapunov exponent, we obtain:
$$ \begin{align} λ &= \lim_{τ→∞} \; \lim_{|x(t)-y(t)|→0}\; \frac{1}{τ} \ln\left(\frac{|x(t+τ)-y(t+τ)|}{|x(t)-y(t)|}\right)\\ &= \lim_{τ→∞} \; \lim_{|x(t)-y(t)|→0}\; \frac{1}{τ} \ln\left(\frac{|x(t)-y(t)|}{|x(t)-y(t)|}\right)\\ &=0 \end{align} $$
(We now have $=$ instead of $\approx$ due to the limits averaging everything and allowing us to consider arbitrarily close $x(t)$ and $x(t+τ)$.)
Finally, it’s intuitive that separations along the time direction do not mingle with separations in other directions and thus correspond to one distinct Lyapunov vector at every point on the attractor.
Therefore, all such dynamical systems must have at least one zero Lyapunov exponent.
For a more rigorous and detailed discussion, see H. Haken – At least one Lyapunov exponent vanishes if the trajectory of an attractor does not contain a fixed point, Phys. Lett. A (1983). |
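The two claims — a vanishing exponent along the flow and a positive one transverse to it — can be illustrated numerically on the Lorenz system (standard parameters $\sigma=10$, $\rho=28$, $\beta=8/3$; this is a minimal sketch of my own, using a plain RK4 integrator and Benettin-style renormalization, not code from the cited paper):

```python
import math

def f(s):
    """Lorenz vector field, standard parameters sigma=10, rho=28, beta=8/3."""
    x, y, z = s
    return (10.0 * (y - x), x * (28.0 - z) - y, x * y - 8.0 / 3.0 * z)

def rk4(s, dt):
    """One classical Runge-Kutta step for the autonomous system s' = f(s)."""
    add = lambda a, b, c: tuple(ai + c * bi for ai, bi in zip(a, b))
    k1 = f(s)
    k2 = f(add(s, k1, dt / 2))
    k3 = f(add(s, k2, dt / 2))
    k4 = f(add(s, k3, dt))
    return tuple(si + dt / 6 * (p + 2 * q + 2 * r + v)
                 for si, p, q, r, v in zip(s, k1, k2, k3, k4))

norm = lambda v: math.sqrt(sum(vi * vi for vi in v))

dt, steps = 0.01, 20000          # 200 time units of observation
s = (1.0, 1.0, 1.0)
for _ in range(1000):            # discard a transient, land on the attractor
    s = rk4(s, dt)

# (a) Zero exponent along the flow: a time-shifted copy y(t) = x(t + eps) has
# separation eps*f(x(t)) to first order, and |f(x(t))| merely wanders over a
# bounded range, so (1/T) * log(|f(x(T))| / |f(x(0))|) -> 0.
s0, start = s, norm(f(s))
for _ in range(steps):
    s0 = rk4(s0, dt)
lam_flow = math.log(norm(f(s0)) / start) / (steps * dt)

# (b) Largest exponent via Benettin renormalization of a generic separation.
d0 = 1e-8
a, b = s, (s[0] + d0, s[1], s[2])
lam_sum = 0.0
for _ in range(steps):
    a, b = rk4(a, dt), rk4(b, dt)
    d = norm(tuple(bi - ai for ai, bi in zip(a, b)))
    lam_sum += math.log(d / d0)
    b = tuple(ai + (bi - ai) * d0 / d for ai, bi in zip(a, b))
lam_max = lam_sum / (steps * dt)

assert abs(lam_flow) < 0.1       # the exponent in the time direction vanishes
assert lam_max > 0.5             # chaos: the largest exponent is about 0.9
```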
The discussion about fractional calculus, which led to this series of articles, was started by a couple of sixth form students on the askNRICH webboard. Before reading this article you may like to read the first two articles, Fractional Calculus I and Fractional Calculus II, and see the conversation on the webboard.
Derivatives and integrals
Given a function $f(x)$ we can differentiate it once, twice, and so on in the usual way. We can also integrate the function once, twice, and so on to get $(If)(x), (I^2f)(x),\ldots$, and now we can even get $(I^af)(x)$ for any positive $a$.
We are used to the idea that differentiation 'reverses' the process of integration, and this is just the formula $$\frac{d}{dx}(If)(x)= \frac{d}{dx}\int_0^x f(t)\,dt = f(x).$$ Note, however, that integrating from $0$ to $x$ (that is, calculating $(If)(x)$) does NOT reverse differentiation. For example, if $f(x)=e^x$ then $df/dx = f'(x) = f(x)$ and $$(If')(x) = \int_0^x e^t\,dt = e^x-1 \neq f(x).$$ In short, differentiating and then integrating the derivative from $0$ to $x$ does not (in general) return us to the same function; this reversal only holds if we integrate first.
Derivatives of integrals
Suppose that $p$ and $q$ are integers, and that $p > q$. If we integrate a function $p$ times (from $0$ to $x$), and then differentiate the resulting function $q$ times, we obtain the same result as integrating the function $p-q$ times; this is because $$\frac{d}{dx}(I^pf)(x) = \frac{d}{dx}\Big( I(I^{p-1}f)\Big)(x)=(I^{p-1}f)(x),$$
and repeating this process $q-1$ more times gives the result. As $$I(I^af)(x) = (I^{1+a}f)(x)$$ for every positive $a$ (we commented on this at the end of the last article), the same argument holds in general, so that if $a > 0$, $k$ is a positive integer and $a > k$, then $$\frac{d^k}{dx^k}(I^af)(x) = (I^{a-k}f)(x). \quad (3.1)$$
Fractional derivatives
We have seen that (3.1) holds when $a > k$, but what happens if $k > a$? Our intuition tells us that integrating $-2$ times should be the same as differentiating twice so, based entirely on our intuition, we shall now DEFINE the $a$-th derivative of $f$ to be $(I^{-a}f)(x)$, where this is given by the formula (3.1). To be more explicit, given any positive number $a$, we choose any integer $k$ such that $k > a$, and then define $$\frac{d^a}{dx^a}f(x) = \frac{d^k}{dx^k}\big(I^{k-a}f\big)(x).$$ Let us consider an example.
Example 1
What is the $1/2$-derivative of $x$? According to our definition (with $k=1$ and $a=1/2$) we have
\begin{eqnarray} \frac{d^{\frac{1}{2}}}{dx^{\frac{1}{2}}}x &=& \frac{d}{dx}\left( \frac{1}{\Gamma (\frac{1}{2})}\int_0^x (x-t)^{-\frac{1}{2}}\,t\,dt \right) \\ &=& \frac{1}{\sqrt{\pi}} \frac{d}{dx} \left(\int_0^x u^{-\frac{1}{2}}(x-u)\,du\right) \qquad (u=x-t) \\ &=& \frac{1}{\sqrt{\pi}}\frac{d}{dx}\left( x \int_0^x u^{-\frac{1}{2}}\,du - \int_0^x u^{\frac{1}{2}}\,du\right) \\ &=& \frac{1}{\sqrt{\pi}} \left(\int_0^x u^{-\frac{1}{2}}\,du + x\cdot x^{-\frac{1}{2}} - x^{\frac{1}{2}} \right) \\ &=& \frac{2\sqrt{x}}{\sqrt{\pi}} \end{eqnarray}
This seems great: we now know how to differentiate functions a fractional number of times. However, there are some problems you should be aware of.
Example 2
What is the$1/2$-derivative of $x^{-1/2}$? According to our definition (with$k=1$ and $a=1/2$) we have
\begin{eqnarray} \frac{d^{\frac{1}{2}}}{dx^{\frac{1}{2}}}x^{-\frac{1}{2}} &=& \frac{d}{dx}\left( \frac{1}{\Gamma (\frac{1}{2})} \int_0^x (x-t)^{-\frac{1}{2}}t^{-\frac{1}{2}} dt \right) \\ &=& \frac{1}{\sqrt{\pi}}\frac{d}{dx} \left( \int_0^1 (1-u)^{-\frac{1}{2}}u^{-\frac{1}{2}}du \right) \quad (t=xu) \\ &=& 0 \end{eqnarray}
because the integral here does not depend on $x$. It is clear that if $g(x)=0$ for all $x$, then any integral of $g$ is zero, hence so is any derivative of $g$. It follows from this that if $f(x)=x^{-1/2}$, then $$\frac{d^{1/2}}{dx^{1/2}}\left(\frac{d^{1/2}}{dx^{1/2}}\right)f(x) = \frac{d^{1/2}}{dx^{1/2}}0 = 0 \neq \frac{d}{dx}f(x).$$
Thus it is NOT always true that$$\frac{d^{a}}{dx^{a}}\left(\frac{d^{b}}{dx^{b}}\right) =\frac{d^{a+b}}{dx^{a+b}}.$$
Example 3 Suppose that $f(x)=1$ for every $x$. What is the $1/2$-derivative of $f(x)$? You should be prepared for a surprise here. Using the same argument as above, we see that
\begin{eqnarray} \frac{d}{dx}(I^{\frac{1}{2}}f)(x) &=& \frac{d}{dx} \left( \frac{1}{\Gamma (\frac{1}{2})} \int_0^x (x-t)^{-\frac{1}{2}}\cdot 1\, dt \right) \\ &=& \frac{1}{\sqrt{\pi}}\frac{d}{dx} \left( \int_0^x (x-t)^{-\frac{1}{2}} dt \right) \\ &=& \frac{1}{\sqrt{\pi}}\frac{d}{dx} \left( \left[-2(x-t)^{\frac{1}{2}}\right]^x_0 \right) \\ &=& \frac{1}{\sqrt{\pi}}\frac{d}{dx} (2\sqrt{x}) \\ &=& \frac{1}{\sqrt{\pi}\sqrt{x}} \end{eqnarray}
We have reached the rather surprising result that the $1/2$-derivative of the constant function $f(x)=1$ is NOT zero. But is this really surprising? We might expect that if we start with $f(x) = x^r$ and take the $p$-th derivative of this, then we obtain a constant times $x^{r-p}$. In fact, this is all that has happened here, because we started with $x^0$ and arrived at $cx^{-1/2}$; it just happens that in this case $c\neq 0$. It is clear that we should now try and really understand why the derivative of a constant function is zero. What once seemed obvious now seems a problem to be overcome, and this is a common experience in higher mathematics! One way to see this is as follows. We know that $$\frac{d}{dx} x^k=kx^{k-1} = \frac{k!}{(k-1)!}x^{k-1}=\frac{\Gamma (k+1)}{\Gamma(k)}x^{k-1}.$$
If we now put $k=0$ we get the answer $0$, because the only sensible definition of $\Gamma(0)$ is $\infty$, and $1/\infty$ should be $0$. All this can be justified, but not here.
More generally, working in the way we have indicated above, we get $$\frac{d^a}{dx^a}\big(x^k\big) =\frac{\Gamma(k+1)}{\Gamma(k+1-a)} x^{k-a},$$ and if $a=k+1$ we get the answer $0$. In short, for any $k$, $$\frac{d^{k+1}}{dx^{k+1}}x^k = 0.$$
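This power rule is easy to evaluate with the Gamma function from any standard math library. The snippet below (a sketch; the function name is ours) reproduces the examples above: the half-derivative of $x$ is $2\sqrt{x}/\sqrt{\pi}$, and the half-derivative of $x^0=1$ is $x^{-1/2}/\sqrt{\pi}$.

```python
import math

def frac_deriv_power(k, a, x):
    """d^a/dx^a of x^k via Gamma(k+1)/Gamma(k+1-a) * x^(k-a).
    Note: at a = k+1 the coefficient is 1/Gamma(0) = 1/infinity = 0, so the
    result vanishes there, but math.gamma raises at 0, so that case would
    need special handling."""
    return math.gamma(k + 1) / math.gamma(k + 1 - a) * x ** (k - a)

check_half_of_x = frac_deriv_power(1, 0.5, 4.0)
target_half_of_x = 2 * math.sqrt(4.0) / math.sqrt(math.pi)   # 2*sqrt(x)/sqrt(pi)
check_half_of_one = frac_deriv_power(0, 0.5, 9.0)
target_half_of_one = 9.0 ** -0.5 / math.sqrt(math.pi)        # x^{-1/2}/sqrt(pi)
```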
Difference quotients
Naturally, we want to try to think of the first derivative of $f$ as $$\lim_{h\to 0}\ \frac{f(x+h)-f(x)}{h}$$ and try to generalise this too. This can be done, and we end with a brief description of this process.
First we introduce an operator $E^t$ on functions by saying that this takes $f(x)$ to $f(x+t)$. The operator $E^0$ has no effect on $f(x)$ and we prefer to write this as ${\bf I}$ (the identity operator). We can now see that as $h\to0$, $$\frac{({\bf I}-E^{-h})f(x)}{h}=\frac{{\bf I}(f(x))-E^{-h}(f(x))}{h}=\frac{f(x)-f(x-h)}{h} \to \frac{d}{dx}f(x).$$
Similarly, one can show that if $n$ is a positive integer then, as $h\to0$, $$\frac{({\bf I}-E^{-h})^nf(x)}{h^n}\to\frac{d^n}{dx^n}f(x),$$
where $({\bf I} - E^{-h})^n$ is obtained by using the Binomial Theorem and noting that $(E^{-h})^m=E^{-mh}$. For example, $$\frac{f(x) - 2f(x-h) + f(x-2h)}{h^2}\to f''(x) \quad(3.2)$$
as $h\to0$. You can easily check this by expressing $f$ as a Taylor series $$f(x+t) = \sum_{k=0}^\infty\frac{f^{(k)}(x)}{k!}t^k$$
and substituting this into the left-hand side of (3.2).
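As a quick numerical sanity check of (3.2) — a sketch with an arbitrarily chosen test function:

```python
import math

# Backward second difference (3.2) applied to f = sin at x = 1;
# it should approach f''(x) = -sin(x) as h -> 0 (the error is of order h).
f, x, h = math.sin, 1.0, 1e-4
approx = (f(x) - 2 * f(x - h) + f(x - 2 * h)) / h ** 2
exact = -math.sin(x)
```

With $h = 10^{-4}$ the two values agree to roughly four digits, consistent with the first-order error of this one-sided difference.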
Finally, one can prove that if $a> 0$ then, as $h\to 0$,$$\frac{({\bf I}-E^{-h})^af(x)}{h^a}\to\frac{d^a}{dx^a}f(x).$$
What exactly does this mean? Recalling that for $|x|<1$ we have the general Binomial Theorem $$(1+x)^a =\sum_{k=0}^\infty{a\choose k}x^k,$$
we now take $$({\bf I}-E^{-h})^af(x)= \sum_{k=0}^\infty{a\choose k}(-1)^kE^{-kh} f(x). $$
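This limit is exactly the Grünwald–Letnikov construction, and truncating the series gives a practical approximation. The sketch below (our own helper, using the weight recurrence $w_k = w_{k-1}(k-1-a)/k$, which equals $(-1)^k\binom{a}{k}$) approximates the half-derivative of $f(x)=x$ and compares it with $2\sqrt{x}/\sqrt{\pi}$ from Example 1.

```python
import math

def gl_frac_deriv(f, a, x, h):
    """Truncated Grunwald-Letnikov fractional derivative:
    h^{-a} * sum_k (-1)^k C(a,k) f(x - k h), summed while x - k h >= 0
    (f is taken to contribute nothing beyond that, consistent with I^a
    integrating from 0)."""
    n = int(x / h)
    w, acc = 1.0, f(x)
    for k in range(1, n + 1):
        w *= (k - 1 - a) / k        # recurrence for (-1)^k * C(a, k)
        acc += w * f(x - k * h)
    return acc / h ** a

approx = gl_frac_deriv(lambda t: t, 0.5, 1.0, 1e-3)
exact = 2.0 / math.sqrt(math.pi)    # half-derivative of x at x = 1
```

Setting $a=1$ collapses the weights to $1, -1, 0, 0, \ldots$, recovering the ordinary backward difference quotient.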
Final comments
Perhaps the most important observation we can make now is that the familiar topics of factorials, binomial coefficients, and the Binomial Theorem, which are usually regarded as discrete mathematics, all generalise (through the Gamma function) to continuous situations.
Where do we go next? The Gamma function itself can be extended to be defined on the whole complex plane (taking the value $\infty$ at $0,-1,-2,\ldots$), so eventually, but not here, one can even define $z!$ for complex numbers $z$. |
ABCD is a square. M is the midpoint of the side AB.
By constructing the lines AC, MC, BD and MD, the blue shaded quadrilateral is formed. What fraction of the total area is shaded? Below are three different methods for finding the shaded area. Unfortunately, the statements have been muddled up. Can you put them in the correct order?
Coordinates
(A) The shaded area is made up of two congruent triangles,
one of which has vertices $(\frac{1}{3},\frac{2}{3}), (\frac{1}{2},\frac{1}{2}), (\frac{1}{2},1)$.
(B) The line joining $(0,0)$ to $(\frac{1}{2},1)$ has equation $y=2x$
(C) Area of the triangle $= \frac{1}{2}(\frac{1}{2} \times \frac{1}{6}) = \frac{1}{24}$
(D) The line joining $(0,1)$ to $(1,0)$ has equation $y=1-x$.
(E) Therefore the shaded area is $2 \times \frac{1}{24} = \frac{1}{12}$
(F) The point $(a,b)$ is at the intersection of the lines
$y=2x$ and $y=1-x$.
(G) Consider a unit square drawn on a coordinate grid.
(H) The perpendicular height of the triangle is $\frac{1}{2}-\frac{1}{3}=\frac{1}{6}$.
(I) So $a = \frac{1}{3}, b=\frac{2}{3}$.
(J) The line joining $(0,0)$ to $(1,1)$ has equation $y=x$.
To help you reorder the statements above, here is a set of printable cards for you to cut out.
Similar Figures
(A) As line $AC$ intersects line $MD$ at point $E$,
the two opposite angles $\angle MEF$ and $\angle AED$ are equal.
(B) The line $MF$ is half the length of $AD$.
(C) Line $AD$ is parallel to line $MF$, so $\angle EDA$ and $\angle EMF$ are equal, and $\angle EAD$ and $\angle EFM$ are equal (alternate angles).
(D) Therefore, $\triangle AED$ and $\triangle FEM$ are similar.
(E) Therefore, the line $EH$ is half the length of $PE$.
(F) Let ABCD be a unit square.
(G) Therefore, the shaded area $MEFG = \frac{1}{24} \times 2 = \frac{1}{12}$ sq units.
(H) $PH$ has length $\frac{1}{2}$ units, so $PE$ has length $\frac{1}{3}$ units and $EH$ has length $\frac{1}{6}$ units.
(I) $\triangle MEF$ has area $\frac{1}{2}(\frac{1}{2}\times\frac{1}{6}) = \frac{1}{24}$ sq units.
To help you reorder the statements above, here is a set of printable cards for you to cut out.
Pythagoras
(A) The area of $\triangle DMC = 2$ sq units.
The area of $\triangle DFC = 1$ sq unit.
Thus the combined area of $\triangle DFE$, $\triangle CFG$ and
shaded area $MEFG$ is $1$ sq unit.
(B) $(EH)^2+(HF)^2=(EF)^2 $
$EH = HF $
$(EH)^2 = \frac{1}{2}(EF)^2$
$EH = \frac{EF}{\sqrt 2}$
(C) Areas of $\triangle DFE$, $\triangle CFG$ and shaded area $MEFG$ are equal
so each must have an area of $\frac{1}{3}$ sq units.
(D) Area of $\triangle MEF = \frac{1}{2}(1 \times EH) = \frac{1}{2}(\frac{EF}{\sqrt 2})$
(E) By Pythagoras, $DF$ has length $\sqrt 2$.
(F) The total area of the square is $4$ sq units, so the shaded area is
$\frac{1}{12}$ the area of the whole square.
(G) Area of $\triangle DFE = \frac{DF \times EF}{2}$
$= \frac{\sqrt 2 \times EF}{2} = \frac{EF}{\sqrt 2}$
(H) So the shaded area $MEFG$ is equal to the area of $\triangle DFE$.
(I) Assume that the sides of the square are each $2$ units long.
Thus, $DJ$ and $FJ$ are each $1$ unit long.
To help you reorder the statements above, here is a set of printable cards for you to cut out. Thanks to Jerome Foley for drawing our attention to this problem. |
In the following you can find code for segmentation based on geometric/geodesic active contours. Implementations are based on the following papers:
A Geometric Model for Active Contours, Caselles et al., 1993; Geodesic Active Contours, Caselles et al., 1997.
Formulations for both the models are as follows:
\[ \Phi_t = g(I)\, |\nabla \Phi|\, DIV \Big( \dfrac{\nabla \Phi}{|\nabla \Phi|} \Big) + g(I) |\nabla \Phi| c \]
\[ \Phi_t = g(I)\, |\nabla \Phi|\, DIV \Big( \dfrac{\nabla \Phi}{|\nabla \Phi|} \Big) + \nabla g(I) \cdot \nabla \Phi \]
where \( \Phi \) is a level-set function defining the segment, \( g(I) \) is a `stopping' function, \( I \) is the image and \( c \) is a `balloon' force constant.
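To make the roles of $g(I)$ and $\Phi$ concrete, here is a deliberately minimal sketch (plain Python on a toy image of our own; the curvature term is omitted and only the balloon term $g(I)|\nabla\Phi|c$ is stepped) showing one explicit update of the level-set function:

```python
import math

N = 32
# Toy image (an assumption for illustration): a bright disk on a dark background.
I = [[1.0 if (i - 16) ** 2 + (j - 16) ** 2 < 64 else 0.0 for j in range(N)]
     for i in range(N)]

def grad(F, i, j):
    """Central differences with clamped borders."""
    gx = (F[min(i + 1, N - 1)][j] - F[max(i - 1, 0)][j]) / 2.0
    gy = (F[i][min(j + 1, N - 1)] - F[i][max(j - 1, 0)]) / 2.0
    return gx, gy

# Edge-stopping function g(I) = 1 / (1 + |grad I|^2):
# close to 1 in flat regions, small near edges, so the front slows down there.
g = []
for i in range(N):
    row = []
    for j in range(N):
        gx, gy = grad(I, i, j)
        row.append(1.0 / (1.0 + gx * gx + gy * gy))
    g.append(row)

# Level-set function: signed distance to a small seed circle (negative inside).
phi = [[math.hypot(i - 16, j - 16) - 3.0 for j in range(N)] for i in range(N)]

# One explicit Euler step of the balloon term alone, phi_t = c * g(I) * |grad phi|
# (sign conventions for inward/outward motion vary between implementations).
c, dt = 1.0, 0.5
phi_new = [[phi[i][j] + dt * c * g[i][j] * math.hypot(*grad(phi, i, j))
            for j in range(N)] for i in range(N)]
```

A real implementation would add the curvature term, reinitialise $\Phi$ periodically, and use upwind differencing for stability; this sketch only shows how $g(I)$ modulates the speed of the moving front.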
Segmentation results for an image from the DRIVSO project.
Model 1 is the geometric model, while model 2 is the geodesic model. In this case, partly due to the diffusion, the obtained results are similar. Typically, however, the geodesic model can be considered more stable. The reason for applying the diffusion before segmenting is related to scale-space theory: in this case we only want to find the `big meaningful objects'. |
The language of the grammar $G$ presented here is regular and its corresponding regular expression is $$r = (a \mid b)^*aa(a \mid b)^*.$$ Now, convert the regular expression to a minimal finite automaton and use the standard construction to convert the automaton to an unambiguous regular grammar, which is always possible. In the end you can get something like this:
$$\begin{align}S &\rightarrow aA \mid bS \\A &\rightarrow aB \mid bS \mid a \\B &\rightarrow aB \mid bB \mid \epsilon\end{align}$$
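This grammar mirrors the three-state minimal DFA for $r$ (state $S$: no progress, $A$: one trailing $a$ seen, $B$: $aa$ already found). As a sanity check, a small sketch (our own code) simulates that DFA and compares it against the direct test "contains the substring $aa$" on all short strings:

```python
from itertools import product

# Transition table of the minimal DFA for (a|b)*aa(a|b)*;
# states 0, 1, 2 correspond to the non-terminals S, A, B.
delta = {0: {'a': 1, 'b': 0},
         1: {'a': 2, 'b': 0},
         2: {'a': 2, 'b': 2}}

def accepts(s):
    state = 0
    for ch in s:
        state = delta[state][ch]
    return state == 2          # B is accepting (B -> eps; A -> a reaches it)

# Exhaustively compare with the obvious membership test on strings of length <= 8.
mismatches = [''.join(w)
              for n in range(9)
              for w in product('ab', repeat=n)
              if accepts(''.join(w)) != ('aa' in ''.join(w))]
```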
Proof that $\mathscr{L}(r) = \mathscr{L}(G)$: (1) Let $s \in \mathscr{L}(r)$, hence $s = x aa y$, where strings $x,y$ consist of characters $x_i, y_j \in \Sigma, i = [1..m], j = [1..n], m = |x|, n = |y|$.There is a derivation of $s$:$$S \rightarrow x_1S \rightarrow x_1x_2S \rightarrow ... \rightarrow x_1...x_mS = xS \rightarrow xSy_n \rightarrow xSy_{n-1}y_n\rightarrow ... \rightarrow xSy_1...y_n = xSy \rightarrow xaay.$$Thus $s = xaay \in \mathscr{L}(G)$ and $\mathscr{L}(r) \subseteq \mathscr{L}(G)$.
(2) Let $s \in \mathscr{L}(G)$. This means there exists some derivation$$S \rightarrow \alpha_1 \rightarrow \alpha_2 \rightarrow ... \rightarrow \alpha_p = s.$$Observe that the only way to make the non-terminal $S$ disappear is to use the rule $S \rightarrow aa$ and it must be the last step of the derivation:$$S \rightarrow \alpha_1 \rightarrow ... \rightarrow \alpha_{p-1} = xSy \rightarrow \alpha_p = s = xaay,$$where $x,y$ are some strings of terminals.Thus $s = xaay \in \mathscr{L}(r)$ and $\mathscr{L}(G) \subseteq \mathscr{L}(r)$, resulting in $\mathscr{L}(G) = \mathscr{L}(r)$. Qed. |
I am trying to find any described formalism which introduces free variables into word grammars (I emphasize word here in order not to confuse it with a very similar notion in tree grammars). What I mean is something like this:
$AX_1B\rightarrow AX_1CX_1B$
where $X_1$ can be substituted by any sequence $\alpha$ of terminals and non-terminals ($\alpha\in (T\cup N)^+$).
Say, if our derivation is $AbEdB$, then this rule will give next derivation as: $AbEdB\Rightarrow AbEdCbEdB$.
More generally, rules of this form can be described as: $\alpha\rightarrow\beta$, where $\alpha,\beta\in (T\cup N\cup X)+$, where $X=\{X_1,\dots,X_n\}$. For more details, please see the updates below.
It is clear that variables themselves do not add anything new to grammars per se. But what I am looking for is existing work that describes how such free variables fit into known classes of grammars. For instance, will such a grammar be weakly equivalent to a context-sensitive grammar under some circumstances or not? UPD: Answering a commentary: $X_i$ is a kind of pattern, so that all $X_i$ can be replaced by regular expression groups. Like in the foregoing example, it is "$A(.+)B$", where $(.+)$ is a named group "$X_1$". In other words, a rule containing $X_i$ variables can be applied if its entire left-hand side, considered in the mentioned way, matches as a pattern any substring within the current derivation. UPD-2: Answering commentary #2:
1) Each $X_i$ from the left-hand side should necessarily be present in the right-hand side.
2) Each $X_i$ always represents $(.+)$, i.e. it cannot have an empty value.
3) For each rule, set of variables $\{X_i\}$ is unique. So they should not be considered as something spread across several rules.
4) Yes, each rule can contain multiple free variables. Furthermore, each $X_i$ can be present in the left-hand side only once, while in the right-hand side more than once (an arbitrary number of times, but not zero).
5) There are no restrictions on the order of the variables. They can appear in arbitrary order in both the left-hand side and the right-hand side (i.e. the rule $AX_1BX_2C\rightarrow DX_2EX_1F$ is valid). |
So I'm trying to integrate $\int_{-\infty}^\infty x^2e^{-ax^2}\,dx$ by parts with the formula
$$\int{udv} = uv - \int{vdu} $$
I'm selecting
$$u = x^2$$ $$du = 2x\,dx$$ $$ v = \sqrt{\pi/a} $$ $$ dv = e^{-ax^2} dx$$
This gives me
$$\Biggr|_{-\infty}^{\infty}x^2 \sqrt{\pi/a} - \int_{-\infty}^\infty \sqrt{\pi/a} \cdot 2x \,dx $$
Which equates to $0$.
This particular integral has been asked about before and I know how to solve it through integration by parts the "right way", but my question is why isn't the above a "legal move"? I solved another integral by evaluating the Gaussian integral during an integration by parts set-up just like this and it gave me the correct answer. I know I'm wrong, just not why.
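For reference, a quick numerical check (a sketch in plain Python) confirms the integral is certainly not zero: $\int_{-\infty}^{\infty} x^2 e^{-ax^2}\,dx = \frac{\sqrt{\pi}}{2a^{3/2}}$, which is what the "right way" — parts with $u = x$, $dv = x e^{-ax^2}\,dx$, so $v = -e^{-ax^2}/(2a)$ — produces.

```python
import math

def gauss_second_moment(a, L=10.0, n=200_000):
    """Midpoint-rule estimate of int_{-L}^{L} x^2 exp(-a x^2) dx;
    for a >= 1 the tails beyond |x| = 10 are negligible."""
    h = 2 * L / n
    acc = 0.0
    for i in range(n):
        x = -L + (i + 0.5) * h
        acc += x * x * math.exp(-a * x * x)
    return acc * h

a = 2.0
approx = gauss_second_moment(a)
exact = math.sqrt(math.pi) / (2 * a ** 1.5)
```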
Edit: My apologies, made an error in the type-up that made the whole thing nonsense, had $x^2$ as a factor of $dv$ by mistake. |
Why exactly is $X(0)$ the DC component of a signal?
How is it equal to $N$ times the average value of $x(n)$, and why does it appear at $X(0)$?
Follows from the DFT definition. It's defined as
\begin{equation} X(k) = \sum_{n=0}^{N-1} x(n) e^{-j2\pi \frac{kn}{N}} \end{equation}
So $X(0)$ is
\begin{equation} X(0) = \sum_{n=0}^{N-1} x(n) e^{-j2\pi \frac{0 \cdot n}{N}} \end{equation}
Setting $k=0$ gives $e^0=1$ for every term, so that
\begin{equation} X(0) = \sum_{n=0}^{N-1} x(n) \cdot 1 \end{equation}
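This is easy to verify numerically; a minimal sketch (plain Python, naive $O(N^2)$ DFT straight from the definition) shows that $X(0)$ is the plain sum of the samples, i.e. $N$ times their average, and is purely real:

```python
import cmath
import math

def dft(x):
    """Naive DFT straight from the definition."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [1.0, 2.0, 3.0, 4.0]
X = dft(x)
mean = sum(x) / len(x)     # 2.5
# X[0] = 10 = 4 * 2.5: the DC component is N times the average.
```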
Comparing this to the average
\begin{equation} \overline{x} = \frac{1}{N} \sum_{n=0}^{N-1} x(n) \end{equation}
shows that $X(0) = N \overline{x}$ |
I assume $G$ is affine. The quick answer is that in the simply connected case it says $1 = 1/1$ by various hard ingredients, and then it is a kind of (not easy) game with Galois cohomology and structure theory of semisimple groups to check both sides have the same behavior as we build up a general $G$ from the simply connected case (with the help of class field theory to deal with tori).
Let's address number fields $K$ in more detail (the case of global function fields has a variety of serious complications). Since $K$ is perfect, the geometric unipotent radical descends to a smooth connected unipotent normal $K$-subgroup $U$ in $G$, with $G/U$ reductive. Now $U$ is $K$-split (composition series over $K$ with successive quotients $\mathbf{G}_a$), so its underlying variety is an affine space over $K$. Because we're in characteristic 0, the quotient map $G \rightarrow G/U$ admits a homomorphic section, which is to say that $G = U \rtimes (G/U)$ as $K$-groups (for a suitable semi-direct product structure). Thus, ${\rm{Pic}}(G) = {\rm{Pic}}(G/U)$. Likewise, the Tate-Shafarevich sets for $G$ and $G/U$ match because the semidirect product structure ensures that the pullback on ${\rm{H}}^1$'s is bijective over $K$ and its completions. The Tamagawa numbers also match, by behavior of Tamagawa numbers in exact sequences (see Oesterle's masterpiece paper) and the fact that Tamagawa number of $\mathbf{G}_a$ is rigged to be 1 by definition. OK, so we can focus on the case with content, which is reductive $G$.
For tori, one uses work of Ono and its refinements (building on Tate-Nakayama duality for tori, etc.) This is all in Oesterle's paper too. In general there's an etale (central) isogeny $Z \times G' \rightarrow G$ where $G'$ is semisimple and simply connected. By arguments with Galois cohomology and class field theory, one has to show that the validity of the desired formula can be pulled down to $G$ from $Z \times G'$ (the key case being isogenies between connected semisimple groups); this sort of thing is addressed a bit in Voskresenskii's survey paper "Adele groups and Siegel-Tamagawa formulas".
So then finally we're brought to the case when $G$ is semisimple and simply connected. Thus, $G = \prod G_i$ for $K$-simple factors, and then $G_i = {\rm{Res}}_{K_i/K}(H_i)$ for finite (separable) extensions $K_i/K$ and
absolutely simple and simply connected $H_i$. Tamagawa numbers are invariant under Weil restrictions (once again, see Oesterle's paper) and commute with products, so the assertion $\tau_G = 1$ reduces to the absolutely simple case, which was a conjecture of Weil solved by Langlands, Lai, and Kottwitz. By Shapiro's Lemma, the triviality of Tate-Shafarevich also reduces to the absolutely simple case, where it is the famous "Hasse principle" due to many people over many years. Finally, the triviality of Pic is handled by relating line bundles on connected semisimple groups to central extensions by $\mathbf{G}_m$ (this requires some input from the structure theory of semisimple groups, with help of Galois descent to pass to the case of split groups, for which the structure of the open cell allows us to copy some arguments used to study Pic of abelian varieties). We exploit simple connectedness by the following elementary observation: if $E$ is a central extension of simply connected $G$ by $\mathbf{G}_m$ then it is reductive and hence $D(E) \rightarrow G$ is a central isogeny, thus an isomorphism because $G$ is simply connected. So voila, the central extension splits and thus Pic($G$) = 1 in the simply connected case. (That's actually quite remarkable: the coordinate ring of a simply connected semisimple group is a UFD. Not obvious!) |
The standard error of the mean (SEM) is the standard deviation of the sampling distribution of the sample mean. If a population has standard deviation $\sigma$ and we draw samples of size $n$, then $SEM = \sigma/\sqrt{n}$; in practice $\sigma$ is usually replaced by the sample standard deviation $s$, giving the estimate $s/\sqrt{n}$. The standard deviation describes the spread of individual observations, whereas the standard error describes how far a sample mean is likely to lie from the population mean; as the sample size grows, the sampling distribution of the mean becomes narrower and the standard error decreases. For example, for a population of 9,732 runners with mean age 33.88 years and standard deviation 9.27 years, a sample of 16 runners with mean age 37.25 years has an estimated standard error of about $9.27/\sqrt{16} \approx 2.32$ years. Confidence intervals for the mean are built from the standard error in the same way that intervals for individual observations are built from the standard deviation (about 95% of observations of a roughly normal distribution fall within two standard deviations of the mean). |
I know there have been a lot of questions asked on this forum relating to order statistics, so, hopefully, this is not going to be a duplicate. I am trying to understand how I should go about conducting a hypothesis test with order statistics.
Say, we have a set $\{x_{(i)}\}_{i=1}^n$ of order statistics for a sample drawn from a distribution believed to be uniform on $(0,1)$, i.e. $H_0:X \sim U(0,1)$. I want to test this hypothesis by examining the first $k$ order statistics. Now, it is not quite clear to me which is the best way to proceed, as there seem to be several options for how to test this (i.e. how to calculate p-values):
$(1) \space p = P(X_{(k)} \leq x_{(k)})$
$(2) \space p = P(X_{(1)} \geq x_{(1)} , X_{(k)} \leq x_{(k)} )$
$(3)\space p = P(X_{(1)} \leq x_{(1)} , X_{(2)} \leq x_{(2)} , \ldots , X_{(k)} \leq x_{(k)})$
My thoughts on the above options so far: $(1)$ ignores the information contained in the first $k-1$ observations; $(3)$ captures all the information but is too complex to calculate; $(2)$ is a happy (?) middle ground. Am I right in my thinking? Is there a better way to do it? Is there an efficient way to calculate $(3)$? Many thanks.
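For what it's worth, option $(1)$ has a closed form under $H_0$: $P(X_{(k)} \le x) = \sum_{j=k}^{n} \binom{n}{j} x^j (1-x)^{n-j}$, the upper tail of a Binomial$(n,x)$ (equivalently a Beta$(k, n-k+1)$ CDF), and the joint probabilities such as $(3)$ can at least be estimated by simulation. A sketch (helper names are my own):

```python
import math
import random

def p_value_1(n, k, x):
    """P(X_(k) <= x) under H0: X ~ U(0,1), i.e. P(at least k of n uniforms <= x)."""
    return sum(math.comb(n, j) * x ** j * (1 - x) ** (n - j)
               for j in range(k, n + 1))

def p_value_3_mc(n, thresholds, reps=200_000, seed=0):
    """Monte Carlo estimate of P(X_(1) <= x_(1), ..., X_(k) <= x_(k))."""
    rng = random.Random(seed)
    k = len(thresholds)
    hits = 0
    for _ in range(reps):
        order = sorted(rng.random() for _ in range(n))
        if all(order[i] <= thresholds[i] for i in range(k)):
            hits += 1
    return hits / reps

# With k = 1 the joint event reduces to X_(1) <= x, so (1) and (3) must agree.
p_exact = p_value_1(5, 1, 0.3)
closed_form = 1 - 0.7 ** 5          # P(min of 5 uniforms <= 0.3)
p_mc = p_value_3_mc(5, [0.3])
```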
Added 1. I suppose there is a variant of $(2)$ above, namely
$(4) \space p = P(X_{(1)} \leq x_{(1)} , X_{(k)} \leq x_{(k)} )$
This now confuses me even further: how should I interpret the results of $(2)$ and $(4)$? Which one of them is more appropriate (powerful?) for testing the hypothesis? |
A horn clause is a disjunction with at most one positive literal, e.g.
\begin{align}\lnot X_1 \lor \lnot X_2 \lor \ldots \lor \lnot X_n \lor Y\end{align}
The implication $X \rightarrow Y$ can be written as disjunction $\lnot X \lor Y$ (proof by truth table). If $X = \lnot X_1 \lor \lnot X_2 \lor \ldots \lor \lnot X_n $, then $\lnot X$ is equivalent to $X_1 \land X_2 \land \ldots \land X_n$ (De Morgan's law). Therefore, the above clause is logically equivalent to
\begin{align}(X_1 \land X_2 \land \ldots \land X_n) \rightarrow Y\end{align}
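The claimed equivalence can also be verified mechanically with a truth table; a small sketch (Python rather than Prolog, purely for brevity) checks it for $n=3$ over all $2^4$ assignments:

```python
from itertools import product

def clause(x1, x2, x3, y):
    """Horn clause: not X1 or not X2 or not X3 or Y."""
    return (not x1) or (not x2) or (not x3) or y

def implication(x1, x2, x3, y):
    """(X1 and X2 and X3) -> Y."""
    return (not (x1 and x2 and x3)) or y

equivalent = all(clause(*v) == implication(*v)
                 for v in product([False, True], repeat=4))
```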
A Prolog program basically is a (large) list of horn clauses. A Prolog clause (called rule) is of the form
head :- tail., which in logic notation is $head \leftarrow tail$. Therefore, any horn clause
\begin{align}\lnot X_1 \lor \lnot X_2 \lor \ldots \lor \lnot X_n \lor Y\end{align}
is written in Prolog notation as
Y :- X1, X2, X3, ..., Xn.
A horn clause containing no positive literal, e.g. $\lnot X_1 \lor \lnot X_2 \lor \ldots \lor \lnot X_n$, can be rewritten as $(X_1 \land X_2 \land \ldots \land X_n) \rightarrow \bot$, which is
:- X1, X2, X3, ..., Xn. in Prolog notation.
Regarding your examples:
"John is beautiful and rich", is a CNF, and each clause contains at most one positive literal, hence it can be written in Prolog as
beautiful(john). and
rich(john).
$\forall X \ \exists Y \ \operatorname{Loves}(X,Y)$
That nested existential quantifier can be eliminated by Skolemization, which introduces a new (Skolem) function for the existential quantifier inside the universal quantifier: $\forall X \ \operatorname{Loves}(X,p(X))$, which in Prolog notation is
loves(X,p(X)). |
I'm reading R. Haag's famous book "Local Quantum Physics: Fields, Particles, Algebras", 2nd edition, and I'm very puzzled by the way he treats the Heisenberg picture in the Haag-Ruelle scattering theory. It begins in section "II.3 Physical Interpretation in Terms of Particles", where, on page 76, he clearly states: "Our description is in the Heisenberg picture. So $\Psi_{i\alpha}$ describes the state "sub specie aeternitatis"; we may assign to it, as in (I.3.29), a wave function in space-time obeying the Klein-Gordon equation."
Then, on page 77, he says: "Suppose the state vectors $\Psi_1$, $\Psi_2$ describe states which at some particular time $t$ are localized in separated space regions $V_1$, $V_2$." From here on the whole construction begins.
I would very much appreciate it if an expert in Haag-Ruelle scattering, or whoever knows the answer, would answer my question as to why a state vector in the Heisenberg picture like $\Psi_1$ and $\Psi_2$ above depends on time, when it is common knowledge that there is no time dependence assigned to state vectors in the Heisenberg picture? EDIT 1: Up until recently I didn't even know how a scattering process might be described in the Heisenberg picture of QM, since once the initial state is prepared at $t_i = -\infty$, this state will remain unchanged for all time and it will be the same at $t_f = +\infty$, and hence there could be no scattering (let alone particle production, 3-body scattering, rearrangement collisions, etc.). How to solve this problem? Then I discovered one of the most lucid presentations in the paper of H. Ekstein, "Scattering in field theory", http://link.springer.com/article/10.1007/BF02745471
The basic idea is the following: one prepares a state of the system at $t_i = -\infty$ by measuring a complete set of compatible observables represented by operators in the Heisenberg picture (i.e., time dependent), say $A(t_{i}), B(t_{i})$, etc. Obviously, this prepared state is a common eigenvector of these operators, say $|a,b,...; t_{i}\rangle$ corresponding to the eigenvalues (obtained in measurement) $a, b$,.... , i.e., $A(t_{i})|a,b,...; t_{i}\rangle = a|a,b,...; t_{i}\rangle, B(t_{i})|a,b,...; t_{i}\rangle = b|a,b,...;t_{i}\rangle$, etc.
Then, one lets the system evolve from $t_i = -\infty$ to $t_f = +\infty$. Obviously, the state vector of the system remains unchanged, namely $|a,b,...; t_{i}\rangle$ for any time $t$, with $t_i \leq t \leq t_f$, since we are in the Heisenberg picture, but the operators representing dynamical observables do change in time according to the Heisenberg equation of motion.
At time $t_f = +\infty$, one measures again the system choosing a complete set of compatible observables, say $C(t_{f}), D(t_{f})$,.... As a result of this measurement, the state of the system changes, at time $t = t_f$, from $|a,b,...; t_{i}\rangle$ to $|c,d,...; t_{f}\rangle$, where $|c,d,...; t_{f}\rangle$ is a common eigenvector of the operators $C(t_{f}), D(t_{f})$,..., corresponding to the eigenvalues $c, d,$... obtained in the measurement (at time $t = t_f$), i.e. $C(t_{f})|c,d,....; t_{f}\rangle = c|c,d,....; t_{f}\rangle, D(t_{f})|c,d,....; t_{f}\rangle = d|c,d,....; t_{f}\rangle$, etc.
The quantity of interest is the transition amplitude from the Heisenberg state $|a,b,...; t_{i}\rangle$ to the Heisenberg state $|c,d,...; t_{f}\rangle$, and this is given by the S-matrix element $S_{a,b,...; c,d,...} = \langle c,d,...; t_{f}| a,b,...; t_{i}\rangle$.
To summarize: the key to understanding scattering in either the Schrodinger or Heisenberg picture is to realize that it implies 2 experimental operations, namely preparation at $t = t_i$ and measurement at $t = t_f$.
A logical approach to solving a scattering problem in the Heisenberg picture (as presented by Ekstein) is the following:
H0) For any given observable, solve the Heisenberg equation of motion to find its dependence on time, i.e. the operator $A(t)$.
H1) For any Heisenberg operator (representing an observable) $A(t)$, find the asymptotic values $A_i = \lim_{t \rightarrow -\infty} A(t)$ and $A_f = \lim_{t \rightarrow +\infty} A(t)$.
H2) Solve the eigenvalue problem for the asymptotic operators $A_i$ and $A_f$. The eigenvectors are the corresponding asymptotic scattering states.
H3) Select a complete system of compatible observables (CSCO) that corresponds to state preparation at $t = t_i$, denoted generically by $A_i$. Select a CSCO that corresponds to measurement at $t = t_f$, denoted generically by $C_f$.
H4) Calculate matrix elements between eigenvectors determined in step H2), namely $\langle c, t_{f}| a, t_{i}\rangle$, where $|a, t_{i}\rangle$ is an eigenvector of $A_i = A(t_{i})$, and $|c, t_{f}\rangle$ is an eigenvector of $C_f = C(t_{f})$.
Regarding the Haag-Ruelle scattering, things are very confusing. The main argument is the same in all the books available. Instead of following the very logical steps H1)-H4) presented above, one starts by constructing a vector depending on a parameter $"t"$ and shows that this vector has limits when $|t|$ becomes infinite. I must say that this type of reasoning is reminiscent of the way one treats scattering in the Schrodinger picture (SP). In the SP, one starts with an arbitrary state vector $|\Psi (t)\rangle$ which is time dependent according to the SP and then must show that $|\Psi (t)\rangle$ has asymptotes when (the real time) $|t|$ becomes infinite.
I would be very grateful if you could help me with some answers to these questions:
1) What is the relation between the parameter $"t"$ of H-R scattering and the real time, since when $"t"$ becomes infinite they claim to have obtained the asymptotic scattering states?
2) What is the physical interpretation of the vectors $\psi_t$ in H-R scattering? Are they obtained as a result of a measurement? Are they in the Heisenberg picture or in the Schrodinger picture?
3) Is there a CSCO such that the H-R asymptotic scattering states are the eigenvectors of this CSCO? If yes, is this CSCO the asymptotic limit of a finite-time Heisenberg CSCO, as described in steps H1)-H4)?
4) Can one obtain asymptotic scattering states for an ARBITRARY CSCO using the H-R method? This should be the case since one can prepare the initial state as one wants at $t = t_i$, and then can choose to measure whatever observable one wants at $t = t_f$; hence the CSCOs corresponding to preparation and measurement must be arbitrary.
EDIT 2: @Pedro Ribeiro Your objections to Ekstein's construction are perhaps unfounded: I chose a discrete spectrum for the CSCOs in my presentation from EDIT 1 only to convey the general idea with minimum notation. In the case of a continuous spectrum one can use spectral projection operators as per von Neumann's QM. A Heisenberg operator $A(t)$ acts in the full Hilbert space, i.e. on the same Hilbert space on which the total Hamiltonian $H$ acts. The Haag theorem has to do with the fact that the free Hamiltonian $H_0$ and the full Hamiltonian $H$ act on 2 different Hilbert spaces. There is no connection between $A(t)$ and $H_0$ or its associated Hilbert space for any time $t$, finite or infinite. Hence, Haag's theorem has no bearing on $\lim_{t \rightarrow \pm\infty} A(t)$ and does not forbid the existence of this limit. Examples: If $A(t)$ commutes with $H$, then $A(t)$ is constant in time and the limit surely exists (see, e.g., the momentum operator). As a matter of fact, the whole LSZ idea is based on such limits!
There is only one way a state can depend on time $t$ in the Heisenberg picture. That time $t$ has to be a time at which some Heisenberg operator, say $A(t)$, is measured on the system, and as an effect the state becomes an eigenvector $|a,t\rangle$ of that operator. Otherwise, state vectors in the Heisenberg picture do not evolve dynamically in time! One can look at my post.
From your presentation it is still not very clear whether the parameter $"t"$ is the time at which one chooses to measure a CSCO on the system and obtains an eigenvector(?) $\psi_t$. For that, one has to construct such a Heisenberg CSCO and show that $\psi_t$ is its eigenvector (corresponding to some eigenvalue) at time $t$. Can one show that?
In the meantime I've discovered some lecture notes by Haag published in Lectures in Theoretical Physics, Volume III, edited by Brittin and Downs, Interscience Publishers. Starting on page 343 Haag discusses his theory and in his own words says very clearly that the $\psi_t$ states are manifestly in the Schrodinger picture, and $t$ is regular time. Haag considers only the asymptotic limits of $\psi_t$ to represent scattering states in the Heisenberg picture. But even that cannot work, since $\psi_t$ has 2 limits, $\psi_{\pm} = \lim_{t\rightarrow\pm\infty}\psi_t$, and hence one needs 2 different Heisenberg pictures: one that coincides with the Schrodinger picture at $t = -\infty$, and a 2nd one that coincides with the Schrodinger picture at $t = +\infty$. So he doesn't stay all the time in the Heisenberg picture, but uses the Schrodinger picture most of the time and, in the end, apparently 2 different Heisenberg pictures. However, it's well known that the Schrodinger picture does not exist in relativistic QFT due to vacuum polarization effects! What is left of the Haag-Ruelle theory, then?
The diagram shows a pendant in the shape of a sector of a circle with center A. The radius is 4 cm and the angle at A is 0.4 radians. Three small holes of radius 0.1 cm, 0.2cm and 0.3 cm are cut away. The diameters of the holes lie along the axis of symmetry and their centers are 1, 2 and 3 cm respectively from A. The pendant can be modeled as a uniform lamina. Find the distance of the center of mass of the pendant from A.
Moments about A (y = 0 due to symmetry)

[itex] x = \frac{(0.5\times4^2\times0.4)\times(\frac{2\times4\times(\sin(0.2))}{0.6}) - (0.1^2\pi\times(1)) - (0.2^2\pi\times(2))-(0.3^2\pi\times(3))}{(0.5\times4^2\times0.4) - (0.1^2\pi) - (0.2^2\pi) - (0.3^2\pi)} [/itex]

=> x = 2.66...

However the answer is 2.47 :s
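For what it's worth, the arithmetic in the attempt can be checked with a few lines of Python. This simply reproduces the formula as written (using the standard result that a sector of radius $r$ and angle $\theta$ has its centroid a distance $2r\sin(\theta/2)/(3\theta/2)$ from the apex), so it confirms the 2.66, not the book's 2.47:

```python
import math

# Check of the moment calculation above: sector minus three circular holes.
# Sector: radius r = 4, angle theta = 0.4 rad; centroid of a sector lies at
# distance 2*r*sin(theta/2) / (3*theta/2) from the apex A (standard result).
r, theta = 4.0, 0.4
area_sector = 0.5 * r**2 * theta
x_sector = 2 * r * math.sin(theta / 2) / (3 * theta / 2)

holes = [(0.1, 1.0), (0.2, 2.0), (0.3, 3.0)]  # (radius, distance of centre from A)
area_holes = sum(math.pi * a**2 for a, _ in holes)
moment_holes = sum(math.pi * a**2 * d for a, d in holes)

x_bar = (area_sector * x_sector - moment_holes) / (area_sector - area_holes)
print(round(x_bar, 2))  # 2.66, matching the working above
```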
Standard ML language
Mads Tofte (2009), Scholarpedia, 4(2):7515. doi:10.4249/scholarpedia.7515
The programming language Standard ML, also known as SML, is inspired by certain fundamental concepts of Computer Science, making them directly available to the programmer. In particular:
- Trees and other recursive datatypes may be declared and used without mention of pointers;
- Functions are values in Standard ML; functions can take functions as arguments and return functions as results;
- Type polymorphism (Milner, 1978) makes it possible to declare functions whose type depends on the context. For example, the same function may be used for reversing a list of integers and a list of strings.
Standard ML emphasises safety in programming. Some aspects of safety are ensured by the soundness of the Standard ML type discipline. Other aspects of safety are ensured by the fact that Standard ML implementations use automatic memory management, which precludes, for example, premature de-allocation of memory.
Standard ML has references, arrays, input-output, a modules system, libraries and highly sophisticated compilers, all in order to give it the convenience and efficiency required for large-scale applications.
Overview of Standard ML
Standard ML is a declarative language; a Standard ML program consists of a sequence of declarations of types, values (including functions) and exceptions, possibly embedded in modules. A basic module is called a structure in Standard ML, module interfaces are called signatures, and parameterised modules are called functors.
Standard ML is a statically typed language. The execution of a Standard ML program is factored into elaboration at compile-time and evaluation at run-time. Elaboration is concerned with, among other things, the declaration of types, the checking of type rules and the matching of structures against signatures. Evaluation is concerned with input-output, computation of values, binding of values to identifiers and the raising and handling of exceptions.
Here is an example of a complete Standard ML program:
signature TREE =
sig
  datatype 'a tree = Lf | Node of 'a * 'a tree * 'a tree
  val size: 'a tree -> int
end

structure T :> TREE =
struct
  datatype 'a tree = Lf | Node of 'a * 'a tree * 'a tree
  fun size Lf = 0
    | size (Node(_, t1, t2)) = 1 + size t1 + size t2
end

val n =
  let val t = T.Node(7, T.Node(1, T.Lf, T.Lf), T.Node(9, T.Lf, T.Lf))
  in T.size t end;
The signature TREE specifies a recursive datatype 'a tree of binary trees with node values of type 'a. Variables that start with a prime (') are type variables, i.e., range over types. An 'a tree is either a leaf Lf or it takes the form Node\((v, t_1, t_2)\ ,\) where \(v\) is a value of type 'a and the left and right sub-trees \(t_1\) and \(t_2\) both are of type 'a tree. Node is an example of a value constructor. Next, signature TREE specifies a function, size; the specification requires size to be applicable to all binary trees, irrespective of the type of the node values.
The declaration of the structure T starts by a signature constraint, T :> TREE. This constraint indicates that what comes next is an implementation of the signature TREE. The structure is delimited by the keywords struct and end. The structure first declares the datatype that was already specified in the signature. It then declares a function, size, which, when applied to a binary tree, counts the number of nodes of the tree. Note that the vertical bar (|) separates the cases of value construction in the datatype, and also separates the cases of value analysis in a function declaration. Further, note that size is a recursive function, i.e., it calls itself. Written out in full, the type of function size is \(\forall\)'a. 'a tree \(\to\) int. As a consequence, function size may be applied in one context to an int tree, say, and in another context to a string tree.
Finally, the program contains a value declaration (val n = \(\ldots\)) which uses the components of the structure T via long identifiers, e.g., T.Node, T.Lf and T.size. The (run-time) evaluation of the expression T.Node(7, T.Node(1, T.Lf, T.Lf), T.Node(9, T.Lf, T.Lf)) results in a tree value, \(t\ ,\) of type int tree, which we may draw thus, omitting the leaves:

  7
 / \
1   9

The evaluation of T.size t subsequently traverses \(t\ .\) The net outcome of the declaration of n is that the value identifier n is bound to the integer value 3. The memory which was temporarily allocated for holding the binary tree \(t\) is automatically freed.
What is Standard ML used for?

Teaching
Standard ML is or has been taught to undergraduate and graduate Computer Science students at the University of Edinburgh, the University of Cambridge, Carnegie Mellon University, Princeton University, and the University of Copenhagen, among others.
Research
Standard ML is used as a tool in research on theorem proving, compiler technology and program analysis. For example, the HOL theorem prover from Cambridge University is written in Standard ML.
Other uses
The IT University of Copenhagen has around 100,000 lines of SML in web-based self-service systems for students and staff, including the personnel roster, a course evaluation system and work-flow systems for student project administration.
Language definition
The Definition of Standard ML by Milner, Tofte, Harper and MacQueen (1997) defines the syntax and semantics of Standard ML using operational semantics. In operational semantics, the meaning of language constructs is defined in terms of inference rules. In the case of Standard ML, there is one set of inference rules describing the static semantics of the language, and a separate set of inference rules describing the dynamic semantics of the language. The inference rules of the static semantics are called elaboration rules. Here is an example of an elaboration rule:\[\frac{C\vdash {\it exp}\Rightarrow\tau'\to\tau\qquad C\vdash {\it atexp} \Rightarrow\tau'}{C\vdash {\it exp}\;{\it atexp} \Rightarrow \tau}\]
This rule is read as follows: if expression \({\it exp}\) elaborates to function type \(\tau'\to\tau\) in the context \(C\) and atomic expression \({\it atexp}\) elaborates to type \(\tau'\) in \(C\) then the function application expression \({\it exp}\;{\it atexp}\) elaborates to type \(\tau\) in \(C\ .\)
The inference rules of the dynamic semantics are called evaluation rules. Here is an example of an evaluation rule, the evaluation rule for local declarations:\[\frac{E\vdash {\it dec}\Rightarrow E'\qquad E+E'\vdash {\it exp} \Rightarrow v}{E\vdash \mathtt{let}\;{\it dec}\;\mathtt{in}\;{\it exp}\;\mathtt{end} \Rightarrow v}\]
Here \(E\) and \(E'\) are environments, which bind values to identifiers. The rule is read thus: given environment \(E\ ,\) if the declaration \({\it dec}\) evaluates to environment \(E'\) and if in environment \(E+E'\) the expression \({\it exp}\) evaluates to value \(v\ ,\) then the expression \(\mathtt{let}\;{\it dec}\;\mathtt{in}\;{\it exp}\;\mathtt{end}\) evaluates to \(v\ .\) Here \(E+E'\) is the environment which maps an identifier \({\it id}\) to \(E'({\it id})\ ,\) if \({\it id}\) is in the domain of \(E'\ ,\) and to \(E({\it id})\ ,\) otherwise. So the \(+\) operator on environments reflects a scope rule of the language, namely that a local declaration of an identifier overrides all previous declarations of that identifier.

Compiler technology
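The \(E+E'\) operator behaves like right-biased map overriding. A toy Python sketch (illustrative only, not taken from the Definition) makes the scope rule concrete:

```python
# Toy model of environments as Python dicts: E + E' is right-biased
# overriding, matching the scope rule that a local declaration of an
# identifier shadows any earlier declaration of the same identifier.
def env_plus(E, E_prime):
    # {**E, **E_prime}: bindings in E_prime win, as E+E' requires.
    return {**E, **E_prime}

E = {"x": 1, "y": 2}
E_prime = {"y": 10}              # a local redeclaration of y
print(env_plus(E, E_prime))      # y is overridden, x is unchanged
```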
Tens of man-years of research and development have gone into developing mature compilation technology for Standard ML. The resulting compilers include Standard ML of New Jersey, Moscow ML, MLWorks, SML.NET, SML Server and the ML Kit with Regions.

Type checking
A Standard ML compiler contains a type checker, which checks whether the source program can be elaborated using the elaboration rules. The static semantics only says which inferences are legal; it is not an algorithm for deciding whether a given source program complies with the inference rules. The type checker is such an algorithm, however. It employs type unification to infer types from the source program (Damas and Milner, 1982).
Continuing our previous example, consider the subexpression

  Node(1, Lf, Lf)

where, for brevity, we have shortened T.Node to Node etc. The type checker infers this type for the operator:

  Node: 'a * 'a tree * 'a tree -> 'a tree

and this type for the argument triple:

  (1, Lf, Lf): int * 'b tree * 'c tree

Unification of the two types 'a * 'a tree * 'a tree and int * 'b tree * 'c tree results in the substitution which maps 'a, 'b and 'c to int, so the type checker concludes that Node(1, Lf, Lf) has type int tree.
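The unification step can be sketched in a few lines of Python (a deliberately minimal illustration; real ML type checkers use the Damas-Milner algorithm with occurs checks and efficient substitutions):

```python
# Minimal first-order unification over types represented as tuples:
#   ("var", name)            -- a type variable such as 'a
#   (constructor, arg, ...)  -- e.g. ("tree", t), ("*", t1, t2, t3), ("int",)
def walk(t, s):
    # Follow variable bindings in substitution s until a non-bound term.
    while t[0] == "var" and t[1] in s:
        t = s[t[1]]
    return t

def unify(t1, t2, s):
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if t1[0] == "var":
        return {**s, t1[1]: t2}
    if t2[0] == "var":
        return {**s, t2[1]: t1}
    if t1[0] == t2[0] and len(t1) == len(t2):   # same constructor, same arity
        for a, b in zip(t1[1:], t2[1:]):
            s = unify(a, b, s)
        return s
    raise TypeError("cannot unify")

var = lambda n: ("var", n)
tree = lambda t: ("tree", t)
int_t = ("int",)

# 'a * 'a tree * 'a tree   versus   int * 'b tree * 'c tree
lhs = ("*", var("a"), tree(var("a")), tree(var("a")))
rhs = ("*", int_t, tree(var("b")), tree(var("c")))
s = unify(lhs, rhs, {})
print(walk(var("a"), s), walk(var("b"), s), walk(var("c"), s))
# each of 'a, 'b and 'c ends up bound to int
```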
The type checker always terminates, either having inferred that the program complies with the static semantics, or producing a type error.
Code generation
A Standard ML compiler also generates code which, when executed, will give the result prescribed by the dynamic semantics of the language definition. Some compilers generate byte code, others native code. Most Standard ML compilers perform extensive program analysis and program transformations in order to achieve performance that can compete with what is obtained in languages like C.
All Standard ML compilers can compile source programs into stand-alone programs. The compiled programs can be invoked from a command line or as a web-service.
Run-time memory management
All Standard ML implementations provide for automatic re-cycling of memory. Standard ML of New Jersey and Moscow ML use generational garbage collection. The ML Kit with Regions and SML Server instead employ a static analysis, called region inference (Tofte and Talpin, 1994), which predicts allocation at compile-time and inserts explicit de-allocation of memory at safe points in the generated code.

Separate compilation, libraries and tools
All Standard ML Compilers allow for separate compilation, to deal with large programs.
The Standard ML Basis Library (Gansner and Reppy, 2004) consists of a comprehensive collection of Standard ML modules. Some of these give the programmer access to efficient implementations of text-book data structures and algorithms. Other modules provide support for advanced input-output. Still others give access to the operating system level, so that one can do systems programming in Standard ML.
Tools include generators of lexical analysers and parsers.
History of Standard ML
"ML" stands for meta language. ML, the predecessor of Standard ML, was devised by Robin Milner and his co-workers at Edinburgh University in the 1970s, as part of their theorem prover LCF (Milner, Gordon and Wadsworth, 1979). Other early influences were the applicative languages already in use in Artificial Intelligence, principally LISP, ISWIM, POP2 and HOPE. During the 1980s and first half of the 1990s, ML inspired much programming language research internationally. MacQueen, extending ideas from HOPE and CLEAR, proposed the Standard ML modules system (MacQueen, 1984). Other major advances were mature compilers (Appel and MacQueen, 1991), a library (Gansner and Reppy, 2004), type-theoretic insight (Harper and Lillibridge, 1994) and a formal definition of the language (Milner, Tofte, Harper and MacQueen, 1997). Further information on the history of Standard ML may be found in the language definition (Milner et al., 1997).

Related languages
Standard ML is a member of a family of programming languages that originate from ML. Other members of that family are CAML, Caml Light and OCaml from INRIA, France, as well as F# from Microsoft Research UK. Close cousins are the purely functional programming languages, such as Miranda and Haskell.

References

Milner, Robin (1978). A theory of type polymorphism in programming. Journal of Computer and System Sciences 17(3): 348-375.
Milner, Robin; Gordon, M and Wadsworth, C (1979). Edinburgh LCF: A Mechanized Logic of Computation. Lecture Notes in Computer Science, Volume 78. Springer-Verlag, Berlin.
Damas, L and Milner, R (1982). Principal type schemes for functional programs. Proceedings of the 9th Annual ACM Symposium on Principles of Programming Languages: 207-212.
MacQueen, D B (1984). Modules for Standard ML. Conference Record of the 1984 ACM Symposium on LISP and Functional Programming: 198-207.
Appel, A and MacQueen, D B (1991). Standard ML of New Jersey. Proceedings of Programming Language Implementation and Logic Programming, Lecture Notes in Computer Science 528: 1-26. doi:10.1007/3-540-54444-5_83.
Harper, R and Lillibridge, M (1994). A type-theoretic approach to higher-order modules with sharing. Conference Record of POPL 1994: 21st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages: 123-137.
Tofte, M and Talpin, J-P (1994). Implementation of the typed call-by-value \(\lambda\)-calculus using a stack of regions. Conference Record of POPL 1994: 21st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages: 188-201.
Milner, Robin; Tofte, M; Harper, R and MacQueen, D B (1997). The Definition of Standard ML (Revised). MIT Press, Cambridge.
Gansner, E R and Reppy, J H (2004). The Standard ML Basis Library. Cambridge University Press, Cambridge.

Further reading
For tutorials, books and research papers on Standard ML, please refer to www.smlnj.org.
The following Standard ML systems are available:
Standard ML of New Jersey: see www.smlnj.org
Moscow ML: see www.itu.dk/people/sestoft/mosml
SML Server: see www.smlserver.org
ML Kit with Regions: see www.itu.dk/research/mlkit
I am trying to solve the advection equation but have a strange oscillation appearing in the solution when the wave reflects from the boundaries. If anybody has seen this artefact before, I would be interested to know the cause and how to avoid it!
This is an animated gif; open it in a separate window to view the animation (it will only play once, or not at all once it has been cached!)
Notice that the propagation seems highly stable until the wave begins to reflect from the first boundary. What do you think could be happening here? I have spent a few days double-checking my code and cannot find any errors. It is strange because there seem to be two propagating solutions, one positive and one negative, after the reflection from the first boundary. The solutions seem to be travelling along adjacent mesh points.
The implementation details follow.
The advection equation,
$\frac{\partial u}{\partial t} = \boldsymbol{v}\frac{\partial u}{\partial x}$
where $\boldsymbol{v}$ is the propagation velocity.
The Crank-Nicolson scheme is an unconditionally stable (pdf link) discretization for the advection equation, provided $u(x)$ is slowly varying in space (only contains low-frequency components when Fourier transformed).
The discretization I have applied is,
$ \frac{\phi_{j}^{n+1} - \phi_{j}^{n}}{\Delta t} = \boldsymbol{v} \left[ \frac{1-\beta}{2\Delta x} \left( \phi_{j+1}^{n} - \phi_{j-1}^{n} \right) + \frac{\beta}{2\Delta x} \left( \phi_{j+1}^{n+1} - \phi_{j-1}^{n+1} \right) \right]$
Putting the unknowns on the right-hand side enables this to be written in the linear form,
$\beta r\phi_{j-1}^{n+1} + \phi_{j}^{n+1} -\beta r\phi_{j+1}^{n+1} = -(1-\beta)r\phi_{j-1}^{n} + \phi_{j}^{n} + (1-\beta)r\phi_{j+1}^{n}$
where $\beta=0.5$ (to take the time average evenly weighted between the present and future point) and $r=\boldsymbol{v}\frac{\Delta t}{2\Delta x}$.
This set of equations has the matrix form $A\cdot u^{n+1} = M\cdot u^n$, where,
$ \boldsymbol{A} = \left( \begin{matrix} 1 & -\beta r & & & 0 \\ \beta r & 1 & -\beta r & & \\ & \ddots & \ddots & \ddots & \\ & & \beta r & 1 & -\beta r \\ 0 & & & \beta r & 1 \\ \end{matrix} \right) $
$ \boldsymbol{M} = \left( \begin{matrix} 1 & (1 - \beta)r & & & 0 \\ -(1 - \beta)r & 1 & (1 - \beta)r & & \\ & \ddots & \ddots & \ddots & \\ & & -(1 - \beta)r & 1 & (1 - \beta)r \\ 0 & & &-(1 - \beta)r & 1 \\ \end{matrix} \right) $
The vectors $u^n$ and $u^{n+1}$ are the known and unknown values of the quantity we want to solve for.
I then apply closed Neumann boundary conditions on the left and right boundaries. By closed boundaries I mean $\frac{\partial u}{\partial x} = 0$ on both interfaces. For closed boundaries it turns out that (I won't show my working here) we just need to solve the above matrix equation. As pointed out by @DavidKetcheson, the above matrix equations actually describe Dirichlet boundary conditions. For Neumann boundary conditions,
$ \boldsymbol{A} = \left( \begin{matrix} 1 & 0 & & & 0 \\ \beta r & 1 & -\beta r & & \\ & \ddots & \ddots & \ddots & \\ & & \beta r & 1 & -\beta r \\ 0 & & & 0 & 1 \\ \end{matrix} \right) $
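For reference, here is a minimal Python (NumPy) sketch of the scheme exactly as written above, with the Neumann-modified first and last rows of $A$. The parameter values are illustrative, and I have left $M$ unchanged at the boundary rows, since the question only modifies $A$; that choice is an assumption:

```python
import numpy as np

# Sketch of the Crank-Nicolson advection scheme described above:
# build tridiagonal A and M, then step u^{n+1} = A^{-1} M u^n.
# Boundary treatment: first/last rows of A are the Neumann-modified ones;
# M is left exactly as in the original system (an assumption).
v, dx, dt, beta = 2.0, 0.2, 0.005, 0.5
N = 101
r = v * dt / (2 * dx)

A = np.eye(N) + np.diag(-beta * r * np.ones(N - 1), 1) \
              + np.diag(beta * r * np.ones(N - 1), -1)
M = np.eye(N) + np.diag((1 - beta) * r * np.ones(N - 1), 1) \
              + np.diag(-(1 - beta) * r * np.ones(N - 1), -1)
A[0, 1] = 0.0      # Neumann modification of the boundary rows of A
A[-1, -2] = 0.0

x = np.arange(N) * dx
u = np.exp(-((x - x.mean()) ** 2) / (2 * 2.0 ** 2))  # Gaussian initial pulse

for _ in range(200):
    u = np.linalg.solve(A, M @ u)
# On this short run the pulse advects without blowing up; the oscillation
# in the question appears later, once the pulse reaches a boundary.
```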
Update
The behaviour seems fairly independent of the choice of constants I use, but these are the values for the plot you see above:
$\boldsymbol{v}=2$, $dx=0.2$, $dt=0.005$, $\sigma=2$ (Gaussian hwhm), $\beta=0.5$

Update II
In a simulation with a non-zero diffusion coefficient, $D=1$ (see comments below), the oscillation goes away, but the wave no longer reflects!? I don't understand why.
The width of the cusp $\infty$ for the group $\Gamma$ is the smallest number $w$ such that $T^w=\left(\begin{matrix}1&w\\0&1\end{matrix}\right)\in\Gamma$. Furthermore, for a general $x\in\mathbb{P}^1(\mathbb{Q})$ and $\gamma\in\Gamma$ such that $\gamma\infty=x$, we define the width of $x$ for $\Gamma$ to be the width of $\infty$ for $\gamma^{-1}\Gamma\gamma$.
Note that $T=\left(\begin{matrix}1&1\\0&1\end{matrix}\right)$ is one of the generators of the modular group $\textrm{SL}_2(\mathbb{Z})$.

Last edited by David Farmer on 2019-04-29 09:40:07.
I know $K(a,b,t)$ is the probability amplitude that a particle that starts at point $a$ is found at point $b$ at a time $t$ later. There is also an expression that sometimes is called green function:
$$G(a,b,E)=(i/\hbar)\int_{-\infty}^\infty\;\exp(iEt/\hbar)\;K(a,b,t)\;dt$$
i.e., the Fourier transform of the Feynman propagator.
See: Grosche, Handbook of Feynman Path Integrals, page 149; Keller, The Feynman Integral, page 461; http://arxiv.org/abs/cond-mat/0304290v1
I want to know if $G(a,b,E)$ could be the amplitude that a particle of energy $E$ at the initial point $a$ will appear at some (arbitrary) time at $b$. It seems that Martin Schaden and Larry Spruch use this interpretation in http://arxiv.org/abs/cond-mat/0304290v1 but I have not found this in any book of quantum mechanics. |
Finally, I felt I had something about Physics that I wanted to write about. The $i\epsilon$ terms sitting in the propagator of a QFT, in the Lippmann-Schwinger equation and in Chapter 4 of Peskin and Schroeder have been bothering a couple of students, including me, at the department for a while now. I am not qualified enough in Quantum Field Theory to make any serious comments on this, but I just had some thoughts regarding the $i\epsilon$. They may be wrong, and I request readers to correct me if there are mistakes, or if they have something to add to this.
At first look, the $i\epsilon$ looked like some bizarre mathematical trick, put in by hand, to give meaning to integrals. "Oh, this integral diverges, but we want it to converge, so we just throw in an $i\epsilon$". A lot of us were pretty dissatisfied with this. Also, there was this question too — there are these $i\epsilon$ terms in (a) the propagator, (b) in the Lippmann-Schwinger equation, (c) in Peskin-Schroeder's derivation relating the interacting ground state with the free-theory ground state, and (d) in the derivation of the path integral formalism from canonical QM — are they all stuck there for the same purpose?
The first time $i\epsilon$ bothered us was in Peskin-Schroeder's derivation of a relation between the free-field ground state and the interacting-field ground state, where he says "let us take time to infinity in a slightly imaginary direction". Now, the question was, why should time become imaginary? A long argument on VoIP with Naveen Sharma was adjourned with this: "The $T \to \infty(1 + i\epsilon)$ is a mathematical trick to suppress the contribution of all other states and solve for the interacting ground state in terms of the free-field ground state."
Then came Prof. Weinberg's notes on the Lippmann-Schwinger equation. As he explained in class, and as was explained in his notes, the right choice of $\pm i\epsilon$ in the Lippmann-Schwinger Green's function fixes whether we are choosing in-states or out-states. I.e., states with the $+i\epsilon$ in the Green's function's denominator satisfy the condition that they look like free particles in the asymptotic past, while states with the $-i\epsilon$ look like free particles in the asymptotic future. A similar argument, with a bit more detail, is presented in his book "The Quantum Theory of Fields", volume 1, in Chapter 3. He also has made a reference to B. A. Lippmann and J. Schwinger, Phys. Rev. Vol 79, No. 3 (1950).
So I briefly looked at the Lippmann-Schwinger paper, where they actually derive the equation. Then they make a comment: “simulating the cessation of interaction, arising from the separation of component parts of the system, by an adiabatic decrease in the interaction strength as t → ± \infty. The latter can be represented by a factor exp(-\epsilon |t|/ħ) where \epsilon is arbitrarily small.” Aha! So that epsilon came from an adiabatic (slow) decrease of interaction strength! But why are we forced to kill that interaction “by hand”? [PS: Loophole — I still don’t know the adiabatic theorem] I don’t know enough, but I’d ordinarily expect a “factor killing the interaction” to sit in the interaction Hamiltonian rather than outside it (see eqn 1.51 of the Lippmann-Schwinger paper).
At least now, the $\epsilon$ factor had some physical meaning — it came from the adiabatic killing of the interaction, rather than being just some "pole-pushing technology".
More came today. There is the same epsilon in the Fourier transform of a $\theta$ function (Heaviside step function). One may write:

$$\int_{-\infty}^{\infty}\theta(t)\,e^{-\epsilon t}\,e^{i\omega t}\,dt = \frac{1}{\epsilon - i\omega} = \frac{i}{\omega + i\epsilon}$$
This is something that I was supposed to know from Electrical Engineering, but we used to “throw away” the epsilon — it didn’t matter much there I guess. Really, it’s just the Fourier transform of a decaying exponential (which every electrical engineer, from IIT Madras at least, would know) with the characteristic length taken to infinity. And then, today we worked out the Feynman propagator for the scalar field. I should’ve known this long back, but I learned it today, that really, the epsilon in the propagator comes from the \theta function’s Fourier transform.
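The regulated transform is easy to check numerically; a small sketch (the $\epsilon$ here is exactly the decay rate of the decaying exponential, and the parameter values are arbitrary):

```python
import numpy as np

# Numerically check that the regulated transform of the step function,
#   integral over t > 0 of exp(-eps*t) * exp(1j*w*t) dt,
# tends to i/(w + i*eps): the familiar i-epsilon denominator.
eps, w = 0.05, 1.3
T, n = 400.0, 400_001            # cutoff where exp(-eps*T) ~ 2e-9
t = np.linspace(0.0, T, n)
dt = t[1] - t[0]
f = np.exp((1j * w - eps) * t)
numeric = np.sum((f[:-1] + f[1:]) / 2) * dt   # trapezoidal rule
analytic = 1j / (w + 1j * eps)
# numeric and analytic agree to good accuracy
```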
So it seems like the epsilons — at least in (a), (b) and (c) — are present to impose causality.
And then, I learned something more today: an $i\epsilon$ is going to make the Hamiltonian non-Hermitian (e.g., see the Green's function in the Lippmann-Schwinger equation: it's effectively adding a small non-Hermitian component to the Hamiltonian). And we see that Hermiticity of the Hamiltonian is required for time-reversal symmetry: with $U(t) = e^{-iHt/\hbar}$, the evolution can be undone, $U(-t) = U(t)^{-1} = U(t)^\dagger$, exactly when $U(t)$ is unitary, i.e. when $H = H^\dagger$.
Thus, if my logic is right, the $i\epsilon$ is necessary to break the time-reversal invariance in the system so that we can talk about an "in" state and distinguish that from an "out" state. Of course, this is unphysical in most situations as far as we know, so we do away with the $\epsilon$ at the end.
Now, this brings me to a couple of questions:
1. Does that mean the weak interaction has a non-Hermitian Lagrangian density? [Need to check; sounds like a No]
2. We're always time ordering in quantum mechanics. A naïve look gives me the impression that time ordering breaks time-reversal invariance. Then why are our theories time-invariant?
Anyway, so much for an epsilon! |
Summary
From dimensional analysis I find that the dynamic viscosity of an ideal gas must depend on its pressure $p$, density $\rho$ and mean molecular free path $l$ in this way:
$$ \mu = C \sqrt{\rho p} l.\quad $$
Here, $C\geq0$ is a non-dimensional constant.
However, I find it counter intuitive that the dynamic viscosity, the 'internal friction', of the fluid increases with an increasing mean free path. My intuition tells me that the internal friction is low if the molecules are widely separated.
Have I missed some quantity that should enter the expression? Has my derivation failed in some other way? Is my intuitive picture wrong?

The derivation
In an ideal gas, molecules interact only through elastic collisions. The equation of state is:
$$ p = \rho R T. \quad (1) $$
The variables and their units are:
$p$: Pressure [kg/(m s$^2$)]
$\rho$: Density [kg/m$^3$]
$R$: Specific gas constant [m$^2$/(s$^2$ K)]
$T$: Temperature [K]
In general, these are field variables, so $p = p(\mathbf{x},t)$, $\rho = \rho(\mathbf{x},t)$ and $T = T(\mathbf{x},t)$. In fluid dynamics, a common assumption is that each infinitesimally small volume is in thermodynamic equilibrium, so that (1) holds at every point in the fluid. I make this assumption. I also assume that the fluid is 'Newtonian', so that the viscous stress tensor is proportional to the rate of strain. The constant of proportionality is the dynamic viscosity, $\mu$, whose unit is [kg/(m s)].
The dynamic viscosity is a 'material property'; it is independent of the motion of the fluid. In general, it varies over space, so that $\mu = \mu(\mathbf{x},t)$. Its value is a property of the material and depends on its thermodynamic state.
It seems impossible to find how $\mu$ depends on the thermodynamic state from (1). Pressure has 'almost' the correct units, but I need to multiply the pressure by some time scale $\tau$ [s]. This time scale must depend on the microscopic properties of the material, and the only way I find it possible to construct it is by using $l$ [m], the mean free path of the molecules in the fluid. The time scale constructed is:
$$ \tau = \sqrt{\frac{\rho}{p}} l.\quad (2) $$
Using (2) I find that the dynamic viscosity must depend on $p$, $\rho$ and $l$ in this way:
$$ \mu = C \sqrt{\rho p} l,\quad (3) $$
where $C\geq0$ is a non-dimensional constant. |
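The dimensional bookkeeping behind (3) can be verified mechanically. Here is a small sketch that tracks SI exponents of (kg, m, s) as vectors (purely illustrative):

```python
import numpy as np

# Represent a physical dimension by its exponents of (kg, m, s), and check
# that sqrt(rho * p) * l has the dimensions of dynamic viscosity.
rho = np.array([1, -3,  0])   # kg m^-3        (density)
p   = np.array([1, -1, -2])   # kg m^-1 s^-2   (pressure)
l   = np.array([0,  1,  0])   # m              (mean free path)
mu  = np.array([1, -1, -1])   # kg m^-1 s^-1   (dynamic viscosity)

# Multiplying quantities adds exponent vectors; a square root halves them.
candidate = (rho + p) / 2 + l
print(np.array_equal(candidate, mu))  # the combination is dimensionally consistent
```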
Your confusion really just comes down to understanding the notation that is widely used for partial derivatives.
For simplicity, I'll restrict the discussion to a system with one coordinate degree of freedom $x$. In this case, the Lagrangian is a real valued function of two real variables which we suggestively
label by the symbols $x$ and $\dot x$. Mathematically, we would write $L:U\to\mathbb R$ where $U\subset \mathbb R^2$. Let's consider the simple example$$ L(x, \dot x) = ax^2+b\dot x^2$$When we write the expression$$ \frac{\partial L}{\partial \dot x}(x, \dot x)$$this is an instruction to differentiate the function $L$ with respect to its second argument (because we labeled the second argument $\dot x$) and then to evaluate the resulting function on the pair $(x, \dot x)$. But we could just as well have written$$ \partial_2L(x, \dot x)$$to represent the same expression. Both of these expressions simply mean that we imagine holding the first argument of the function constant, and we take the derivative of the resulting function with respect to what remains. In the case above, this therefore means that$$ \frac{\partial L}{\partial\dot x}(x, \dot x) = 2b\dot x$$because $x$ labels the first argument, and taking a partial derivative with respect to the second argument means that we treat $x$ like a constant whose derivative is therefore $0$. It is in this sense that the partial of $x^2$ with respect to $\dot x$ is zero.
So to recap, when we are taking these derivatives, we just keep in mind that the symbols $x$ and $\dot x$ are just labels for the different arguments of the Lagrangian.
You might ask, however, "if $x$ and $\dot x$ are just labels, then what relation do they have to position and velocity?" The answer is that after we have treated them as labels for the arguments of $L$ in order to take the appropriate derivatives, we then evaluate the resulting expressions at $(x(t), \dot x(t))$, the position and velocity of a curve at time $t$, to obtain equations of motion.
For example, if you take the example of $L$ that I started with, we get$$ \frac{\partial L}{\partial x}(x, \dot x) = 2 ax, \qquad \frac{\partial L}{\partial \dot x}(x, \dot x) = 2b\dot x$$now we evaluate these expressions on $(x(t), \dot x(t))$ to obtain$$ \frac{\partial L}{\partial x}(x(t), \dot x(t)) = 2 ax(t), \qquad \frac{\partial L}{\partial \dot x}(x(t), \dot x(t)) = 2b\dot x(t)$$so that the Euler-Lagrange equations become$$ 0=\frac{d}{dt}\left[\frac{\partial L}{\partial \dot x}(x(t), \dot x(t))\right] - \frac{\partial L}{\partial x}(x(t), \dot x(t))=\frac{d}{dt}(2b\dot x(t)) - 2ax(t)$$which gives$$ b\ddot x(t) = a x(t)$$Once you understand all of this, you can (and should) dispense with the long-winded notation I used here for illustrative purposes, and you should make no error in using the abbreviated notation in your original post. |
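As a small numerical illustration of the "slots" point above (my own toy sketch, not part of the original answer), one can differentiate $L(x, v) = ax^2 + bv^2$ with respect to its second slot by finite differences and see that the $ax^2$ term contributes nothing:

```python
# The Lagrangian is just a function of two independent slots.
a, b = 3.0, 5.0

def L(x, v):
    return a * x**2 + b * v**2

def d_second_slot(f, x, v, h=1e-6):
    """Partial derivative with respect to the SECOND slot, first slot frozen."""
    return (f(x, v + h) - f(x, v - h)) / (2 * h)

# dL/d(second slot) at (x, v) = (1.0, 2.0) should be 2*b*v = 20.0,
# independent of a: the a*x**2 term is held constant and contributes nothing.
print(d_second_slot(L, 1.0, 2.0))
```

The printed value agrees with $2b\dot x = 20$ to numerical precision, regardless of the value of $a$.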
As HBR mentioned, the boundary conditions can often be immediately incorporated into $A$ and $b$. For example, suppose we wish to solve the 1D heat equation with Dirichlet boundary conditions
$$u_t = \sigma u_{xx} + h(x,t), \quad u(a,t) = f(t), \quad u(b,t) = g(t).$$
We then discretize $u_j^n \approx u(x_j, t_n)$ where $x_j = a+j \Delta x$ and $t_n = n\Delta t$. Here $j = 0,1,2,\ldots,N+1$ and $\Delta x = (b-a)/(N+1)$. Note that by our boundary conditions $u_0^n = f(t_n)$ and $u_{N+1}^n = g(t_n)$. As these values are completely determined for all time, we don't even include them in our linear system. For interior gridpoints ($2 \le j \le N-1$), we take a second-order centered difference
$$u_{xx}(x_j,t_n) \approx \frac{u_{j-1}^n - 2u_j^n + u_{j+1}^n}{\Delta x^2}. \tag{1}$$
For the boundary-adjacent gridpoints, we get
\begin{align}u_{xx}(x_1,t_n) &\approx \frac{u_{0}^n - 2u_1^n + u_{2}^n}{\Delta x^2}=\frac{f(t_n) - 2u_1^n + u_{2}^n}{\Delta x^2}, \tag{2} \\u_{xx}(x_N,t_n) &\approx \frac{u_{N-1}^n - 2u_{N}^n + u_{N+1}^n}{\Delta x^2}=\frac{u_{N-1}^n - 2u_{N}^n + g(t_n)}{\Delta x^2}. \tag{3}\end{align}
Collecting our discretization, we get
$$\frac{d}{dt} \begin{pmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \\ \vdots \\ u_{N-1} \\ u_N \end{pmatrix} = \frac{1}{\Delta x^2}\begin{pmatrix}-2 & 1 & 0 & 0 & \cdots & 0 & 0 & 0 \\1 & -2 & 1 & 0 & \cdots & 0 & 0& 0 \\0 & 1 & -2 & 1 & \cdots & 0& 0 & 0 \\0 & 0 & 1 & -2 & \cdots & 0 & 0& 0 \\\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\0 & 0 & 0 & 0 & \cdots & 1 & -2 & 1 \\0 & 0 & 0 & 0 & \cdots & 0 & 1 & -2\end{pmatrix}\begin{pmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \\ \vdots \\ u_{N-1} \\ u_N \end{pmatrix} + \begin{pmatrix} f(t)/\Delta x^2 + h(x_1,t) \\h(x_2,t) \\h(x_3,t) \\h(x_4,t) \\\vdots \\h(x_{N-1},t) \\g(t)/\Delta x^2 + h(x_N, t) \end{pmatrix}. \tag{4}$$
As you can see, we have baked our boundary conditions into the matrix $A$ and the vector $b$. (Make sure you can see how we got from the (1)-(3) to (4).) Similar approaches work for other PDEs and other discretization. Handling boundary conditions for hyperbolic problems is more tricky though, and you should always try to be very careful when treating the boundary in your discretization. |
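A minimal method-of-lines sketch of (4), with my own assumed parameters ($\sigma = 1$, $h \equiv 0$, $f(t) = 0$, $g(t) = 1$ on $[0,1]$) and simple explicit Euler time stepping; the computed solution relaxes to the exact steady state $u(x) = x$:

```python
import numpy as np

# Assumed setup (my choices): sigma = 1, h(x,t) = 0, f(t) = 0, g(t) = 1 on [0, 1].
N = 49                                   # interior gridpoints u_1 .. u_N
dx = 1.0 / (N + 1)
x = dx * np.arange(1, N + 1)

# Tridiagonal matrix A from (1)-(3)
A = (np.diag(-2.0 * np.ones(N))
     + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / dx**2

# Boundary data folded into the vector b, as in (4)
b = np.zeros(N)
b[0] = 0.0 / dx**2                       # f(t) = 0 at the left boundary
b[-1] = 1.0 / dx**2                      # g(t) = 1 at the right boundary

# Explicit Euler time stepping, dt below the stability limit dx^2 / 2
u = np.zeros(N)
dt = 0.4 * dx**2
for _ in range(20000):
    u = u + dt * (A @ u + b)

# Steady state of u_t = u_xx with u(0) = 0, u(1) = 1 is u(x) = x
print(np.max(np.abs(u - x)))             # essentially zero
```

In a production code one would of course use an implicit or higher-order time integrator, but the point here is only how the boundary data enters through $b$.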
3.4. Softmax Regression
Regression is the hammer we reach for when we want to answer how much? or how many? questions. If you want to predict the number of dollars (the price) at which a house will be sold, or the number of wins a baseball team might have, or the number of days that a patient will remain hospitalized before being discharged, then you’re probably looking for a regression model.
In practice, we’re more often interested in classification: asking not how much but which one. Does this email belong in the spam folder or the inbox? Is this customer more likely to sign up or not to sign up for a subscription service? Does this image depict a donkey, a dog, a cat, or a rooster? Which movie is the user most likely to watch next?
Colloquially, we use the word classification to describe two subtly different problems: (i) those where we are interested only in hard assignments of examples to categories, and (ii) those where we wish to make soft assignments, i.e., to assess the probability that each category applies. One reason why the distinction between these tasks gets blurred is that most often, even when we only care about hard assignments, we still use models that make soft assignments.

3.4.1. Classification Problems
To get our feet wet, let’s start off with a somewhat contrived image classification problem. Here, each input will be a grayscale 2-by-2 image. We can represent each pixel location as a single scalar, representing each image with four features \(x_1, x_2, x_3, x_4\). Further, let’s assume that each image belongs to one of the categories “cat”, “chicken” and “dog”.
First, we have to choose how to represent the labels. We have two obvious choices. Perhaps the most natural impulse would be to choose \(y \in \{1, 2, 3\}\), where the integers represent {dog, cat, chicken} respectively. This is a great way of storing such information on a computer. If the categories had some natural ordering among them, say if we were trying to predict {baby, child, adolescent, adult}, then it might even make sense to cast this problem as a regression and keep the labels in this format.
But general classification problems do not come with natural orderings among the classes. To deal with problems like this, statisticians invented an alternative way to represent categorical data: the one-hot encoding. Here we have a vector with one component for every possible category. For a given instance, we set the component corresponding to its category to 1, and set all other components to 0.
In our case, \(y\) would be a three-dimensional vector, with \((1,0,0)\) corresponding to “cat”, \((0,1,0)\) to “chicken” and \((0,0,1)\) to “dog”. It is often called the one-hot encoding.

3.4.1.1. Network Architecture
In order to estimate multiple classes, we need a model with multiple outputs, one per category. This is one of the main differences between classification and regression models. To address classification with linear models, we will need as many linear functions as we have outputs. Each output will correspond to its own linear function. In our case, since we have 4 features and 3 possible output categories, we will need 12 scalars to represent the weights (\(w\) with subscripts) and 3 scalars to represent the biases (\(b\) with subscripts). We compute these three outputs, \(o_1, o_2\), and \(o_3\), for each input:

$$\begin{aligned} o_1 &= x_1 w_{11} + x_2 w_{12} + x_3 w_{13} + x_4 w_{14} + b_1,\\ o_2 &= x_1 w_{21} + x_2 w_{22} + x_3 w_{23} + x_4 w_{24} + b_2,\\ o_3 &= x_1 w_{31} + x_2 w_{32} + x_3 w_{33} + x_4 w_{34} + b_3. \end{aligned}$$
We can depict this calculation with the neural network diagram below. Just as in linear regression, softmax regression is also a single-layer neural network. And since the calculation of each output, \(o_1, o_2\), and \(o_3\), depends on all inputs, \(x_1\), \(x_2\), \(x_3\), and \(x_4\), the output layer of softmax regression can also be described as a fully connected layer.
3.4.1.2. Softmax Operation
To express the model more compactly, we can use linear algebra notation. In vector form, we arrive at \(\mathbf{o} = \mathbf{W} \mathbf{x} + \mathbf{b}\), a form better suited both for mathematics, and for writing code. Note that we have gathered all of our weights into a \(3\times4\) matrix and that for a given example \(\mathbf{x}\) our outputs are given by a matrix vector product of our weights by our inputs plus our biases \(\mathbf{b}\).
If we are interested in hard classifications, we need to convert these outputs into a discrete prediction. One straightforward way to do this is to treat the output values \(o_i\) as the relative confidence levels that the item belongs to each category \(i\). Then we can choose the class with the largest output value as our prediction \(\operatorname*{argmax}_i o_i\). For example, if \(o_1\), \(o_2\), and \(o_3\) are 0.1, 10, and 0.1, respectively, then we predict category 2, which represents “chicken”.
However, there are a few problems with using the output from the output layer directly. First, because the range of output values from the output layer is uncertain, it is difficult to judge the meaning of these values. For instance, the output value 10 from the previous example appears to indicate that we are very confident that the image category is chicken. But just how confident? Is it 100 times more likely to be a chicken than a dog, or are we less confident?
Moreover, how do we train this model? If the argmax matches the label, then we have no error at all! And if the argmax is not equal to the label, then no infinitesimal change in our weights will decrease our error. That takes gradient-based learning off the table.
We might like for our outputs to correspond to probabilities, but then we would need a way to guarantee that on new (unseen) data the probabilities would be nonnegative and sum up to 1. Moreover, we would need a training objective that encouraged the model to actually estimate probabilities. Fortunately, statisticians have conveniently invented a model called softmax logistic regression that does precisely this.
In order to ensure that our outputs are nonnegative and sum to 1, while requiring that our model remains differentiable, we subject the outputs of the linear portion of our model to a nonlinear softmax function:

$$\hat{\mathbf{y}} = \mathrm{softmax}(\mathbf{o}), \quad \text{where} \quad \hat{y}_i = \frac{\exp(o_i)}{\sum_j \exp(o_j)}.$$
It is easy to see \(\hat{y}_1 + \hat{y}_2 + \hat{y}_3 = 1\) with \(0 \leq \hat{y}_i \leq 1\) for all \(i\). Thus, \(\hat{y}\) is a proper probability distribution and the values of \(o\) now assume an easily quantifiable meaning. Note that we can still find the most likely class by \(\operatorname*{argmax}_i \hat{y}_i = \operatorname*{argmax}_i o_i\), since exponentiation and normalization do not change the ordering of the entries.
In short, the softmax operation preserves the ordering of its inputs, and thus does not alter the predicted category vs. our simpler argmax model. However, it gives the outputs \(\mathbf{o}\) proper meaning: they are the pre-softmax values determining the probabilities assigned to each category. Summarizing it all in vector notation we get \({\mathbf{o}}^{(i)} = \mathbf{W} {\mathbf{x}}^{(i)} + {\mathbf{b}}\) where \({\hat{\mathbf{y}}}^{(i)} = \mathrm{softmax}({\mathbf{o}}^{(i)})\).

3.4.1.3. Vectorization for Minibatches
Again, to improve computational efficiency and take advantage of GPUs, we will typically carry out vector calculations for mini-batches of data. Assume that we are given a mini-batch \(\mathbf{X}\) of examples with dimensionality \(d\) and batch size \(n\). Moreover, assume that we have \(q\) categories (outputs). Then the minibatch features \(\mathbf{X}\) are in \(\mathbb{R}^{n \times d}\), weights \(\mathbf{W} \in \mathbb{R}^{d \times q}\) and the bias satisfies \(\mathbf{b} \in \mathbb{R}^q\).
This accelerates the dominant operation into a matrix-matrix product \(\mathbf{X} \mathbf{W}\) (with \(\mathbf{X} \in \mathbb{R}^{n \times d}\) and \(\mathbf{W} \in \mathbb{R}^{d \times q}\), this is the product that is defined) vs. the matrix-vector products we would be executing if we processed one example at a time. The softmax itself can be computed by exponentiating all entries in \(\mathbf{O}\) and then normalizing them by the sum appropriately.
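A short NumPy sketch of this minibatch forward pass (shapes as in the text; the random data and the row-max shift for numerical stability are my additions, not part of the chapter):

```python
import numpy as np

# Minibatch shapes as in the text: X is n x d, W is d x q, b has length q.
n, d, q = 5, 4, 3
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))
W = rng.normal(size=(d, q))
b = rng.normal(size=q)

O = X @ W + b                                  # logits, shape (n, q)

# Subtracting the row max before exponentiating avoids overflow and does not
# change the result, since softmax is invariant under shifting all logits.
O_shift = O - O.max(axis=1, keepdims=True)
Y_hat = np.exp(O_shift) / np.exp(O_shift).sum(axis=1, keepdims=True)

print(Y_hat.sum(axis=1))                       # each row sums to 1
```

The max-shift trick is standard practice; without it, large logits would overflow `exp`.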
3.4.2. Loss Function
Now that we have some mechanism for outputting probabilities, we need to transform this into a measure of how accurate things are, i.e. we need a loss function. For this, we use the same concept that we already encountered in linear regression, namely likelihood maximization.

3.4.2.1. Log-Likelihood
The softmax function maps \(\mathbf{o}\) into a vector of probabilities corresponding to various outcomes, such as \(p(y=\mathrm{cat}|\mathbf{x})\). This allows us to compare the estimates with reality, simply by checking how well it predicted what we observe.
Maximizing \(p(Y|X)\) (and thus equivalently minimizing \(-\log p(Y|X)\)) corresponds to predicting the label well. This yields the loss function (we dropped the superscript \((i)\) to avoid notation clutter):

$$l(\mathbf{y}, \hat{\mathbf{y}}) = -\sum_j y_j \log \hat{y}_j.$$
Here we used that by construction \(\hat{y} = \mathrm{softmax}(\mathbf{o})\) and moreover, that the vector \(\mathbf{y}\) consists of all zeroes but for the correct label, such as \((1, 0, 0)\). Hence the sum over all coordinates \(j\) vanishes for all but one term. Since all \(\hat{y}_j\) are probabilities, their logarithm is never larger than \(0\). Consequently, the loss function is minimized if we correctly predict \(y\) with certainty, i.e. if \(p(y|x) = 1\) for the correct label.

3.4.2.2. Softmax and Derivatives
Since the softmax and the corresponding loss are so common, it is worthwhile understanding a bit better how it is computed. Plugging \(o\) into the definition of the loss \(l\) and using the definition of the softmax we obtain:

$$l = -\sum_j y_j \log \hat{y}_j = -\sum_j y_j \log \frac{\exp(o_j)}{\sum_k \exp(o_k)} = \log \sum_k \exp(o_k) - \sum_j y_j o_j.$$
To understand a bit better what is going on, consider the derivative with respect to \(o\). We get

$$\partial_{o_j} l = \frac{\exp(o_j)}{\sum_k \exp(o_k)} - y_j = \mathrm{softmax}(\mathbf{o})_j - y_j.$$
In other words, the gradient is the difference between the probability assigned to the true class by our model, as expressed by the probability \(p(y|x)\), and what actually happened, as expressed by \(y\). In this sense, it is very similar to what we saw in regression, where the gradient was the difference between the observation \(y\) and estimate \(\hat{y}\). This is no coincidence. In any exponential family model, the gradients of the log-likelihood are given by precisely this term. This fact makes computing gradients easy in practice.
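The stated gradient is easy to verify numerically; the sketch below (my own check, reusing the example logits $(0.1, 10, 0.1)$ from earlier) compares $\mathrm{softmax}(\mathbf{o}) - \mathbf{y}$ against centered finite differences of the loss:

```python
import numpy as np

def softmax(o):
    e = np.exp(o - o.max())            # shift for numerical stability
    return e / e.sum()

def loss(o, y):
    return -np.sum(y * np.log(softmax(o)))

o = np.array([0.1, 10.0, 0.1])         # the example logits from the text
y = np.array([0.0, 1.0, 0.0])          # one-hot label: class 2 ("chicken")

analytic = softmax(o) - y              # the claimed gradient

# Centered finite differences, one coordinate at a time
numeric = np.zeros_like(o)
h = 1e-6
for j in range(len(o)):
    e_j = np.zeros_like(o)
    e_j[j] = h
    numeric[j] = (loss(o + e_j, y) - loss(o - e_j, y)) / (2 * h)

print(np.max(np.abs(analytic - numeric)))   # close to zero
```

This kind of gradient check is a standard debugging tool when implementing losses by hand.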
3.4.2.3. Cross-Entropy Loss
Now consider the case where we don’t just observe a single outcome but maybe an entire distribution over outcomes. We can use the same representation as before for \(y\). The only difference is that rather than a vector containing only binary entries, say \((0, 0, 1)\), we now have a generic probability vector, say \((0.1, 0.2, 0.7)\). The math that we used previously to define the loss \(l\) still works out fine, just that the interpretation is slightly more general. It is the expected value of the loss for a distribution over labels.
This loss is called the cross-entropy loss and it is one of the most commonly used losses for multiclass classification. To demystify its name we need some information theory. The following section can be skipped if needed.
3.4.3. Information Theory Basics
Information theory deals with the problem of encoding, decoding, transmitting and manipulating information (aka data), preferably in as concise a form as possible.
3.4.3.1. Entropy
A key concept is how many bits of information (or randomness) are contained in data. It can be measured as the entropy of a distribution \(p\) via

$$H[p] = -\sum_j p(j) \log p(j).$$
One of the fundamental theorems of information theory states that in order to encode data drawn randomly from the distribution \(p\) we need at least \(H[p]\) ‘nats’ to encode it. If you wonder what a ‘nat’ is, it is the equivalent of a bit but when using a code with base \(e\) rather than one with base 2. One nat is \(\frac{1}{\log(2)} \approx 1.44\) bit. \(H[p] / \log 2\) is often also called the binary entropy.
To make this all a bit more concrete, consider the following: \(p(1) = \frac{1}{2}\) whereas \(p(2) = p(3) = \frac{1}{4}\). In this case we can easily design an optimal code for data drawn from this distribution, by using 0 to encode 1, 10 for 2 and 11 for 3. The expected number of bits is \(1.5 = 0.5 \cdot 1 + 0.25 \cdot 2 + 0.25 \cdot 2\). It is easy to check that this is the same as the binary entropy \(H[p] / \log 2\).
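The arithmetic of this coding example is easy to reproduce (a minimal check, not part of the chapter itself):

```python
import math

# The three-symbol example above: p = (1/2, 1/4, 1/4) with code words 0, 10, 11.
p = [0.5, 0.25, 0.25]
code_lengths = [1, 2, 2]

expected_bits = sum(pi * li for pi, li in zip(p, code_lengths))
entropy_bits = -sum(pi * math.log2(pi) for pi in p)   # H[p] / log 2

print(expected_bits)   # 1.5
print(entropy_bits)    # 1.5, i.e. the code achieves the entropy bound
```

Here the code is optimal because every probability is a power of two; in general the expected code length can exceed the entropy by up to one bit per symbol.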
3.4.3.2. Kullback-Leibler Divergence
One way of measuring the difference between two distributions arises directly from the entropy. Since \(H[p]\) is the minimum number of bits that we need to encode data drawn from \(p\), we could ask how well it is encoded if we pick the ‘wrong’ distribution \(q\). The number of extra bits that we need gives us some idea of how different these two distributions are. Let us compute this directly: recall that to encode \(j\) using an optimal code for \(q\) would cost \(-\log q(j)\) nats, and we need to use this in \(p(j)\) of all cases. Hence we have

$$D(p\|q) = \sum_j p(j) \log \frac{p(j)}{q(j)} = -\sum_j p(j) \log q(j) - H[p].$$
Note that minimizing \(D(p\|q)\) with respect to \(q\) is equivalent to minimizing the cross-entropy loss. This can be seen directly by dropping \(H[p]\), which doesn’t depend on \(q\). We thus showed that softmax regression tries to minimize the surprise (and thus the number of bits) we experience when seeing the true label \(y\) rather than our prediction \(\hat{y}\).
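A small numerical check of the identity behind this (the toy distributions are my own choices): the KL divergence equals the cross-entropy minus the entropy, and it is nonnegative:

```python
import math

# Toy distributions (made-up numbers) to check D(p||q) = H(p, q) - H[p] >= 0.
p = [0.1, 0.2, 0.7]
q = [0.3, 0.3, 0.4]

H_p  = -sum(pi * math.log(pi) for pi in p)                  # entropy of p (nats)
H_pq = -sum(pi * math.log(qi) for pi, qi in zip(p, q))      # cross-entropy
D_pq = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))  # KL divergence

print(abs(D_pq - (H_pq - H_p)))   # zero up to rounding
print(D_pq >= 0.0)                # True (Gibbs' inequality)
```

Minimizing the cross-entropy `H_pq` over `q` therefore minimizes `D_pq`, since `H_p` is a constant.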
3.4.4. Model Prediction and Evaluation
After training the softmax regression model, given any example features, we can predict the probability of each output category. Normally, we use the category with the highest predicted probability as the output category. The prediction is correct if it is consistent with the actual category (label). In the next part of the experiment, we will use accuracy to evaluate the model’s performance. This is equal to the ratio between the number of correct predictions and the total number of predictions.
3.4.5. Summary

We introduced the softmax operation, which takes a vector and maps it into probabilities. Softmax regression applies to classification problems. It uses the probability distribution of the output category in the softmax operation. Cross-entropy is a good measure of the difference between two probability distributions. It measures the number of bits needed to encode the data given our model.

3.4.6. Exercises

1. Show that the Kullback-Leibler divergence \(D(p\|q)\) is nonnegative for all distributions \(p\) and \(q\). Hint: use Jensen’s inequality, i.e. use the fact that \(-\log x\) is a convex function.
2. Show that \(\log \sum_j \exp(o_j)\) is a convex function in \(o\).
3. We can explore the connection between exponential families and the softmax in some more depth. Compute the second derivative of the cross-entropy loss \(l(y,\hat{y})\) for the softmax. Compute the variance of the distribution given by \(\mathrm{softmax}(o)\) and show that it matches the second derivative computed above.
4. Assume that we have three classes which occur with equal probability, i.e. the probability vector is \((\frac{1}{3}, \frac{1}{3}, \frac{1}{3})\). What is the problem if we try to design a binary code for it? Can we match the entropy lower bound on the number of bits? Can you design a better code? Hint: what happens if we try to encode two independent observations? What if we encode \(n\) observations jointly?
5. Softmax is a misnomer for the mapping introduced above (but everyone in deep learning uses it). The real softmax is defined as \(\mathrm{RealSoftMax}(a,b) = \log (\exp(a) + \exp(b))\). Prove that \(\mathrm{RealSoftMax}(a,b) > \mathrm{max}(a,b)\). Prove that this holds for \(\lambda^{-1} \mathrm{RealSoftMax}(\lambda a, \lambda b)\), provided that \(\lambda > 0\). Show that for \(\lambda \to \infty\) we have \(\lambda^{-1} \mathrm{RealSoftMax}(\lambda a, \lambda b) \to \mathrm{max}(a,b)\). What does the soft-min look like? Extend this to more than two numbers.
I found it difficult to explain the difference between the fraction a / b and the ratio a : b. This subject is for pupils of grade 5. So is there a real difference between them, and how can I explain the difference in a simple way?
This paragraph from Adding It Up is a good overview of the context of this question. Rational numbers are complex because there are multiple interpretations (meanings) as well as multiple forms for expressing a rational number.
As we said at the beginning of the chapter, rational numbers can be expressed in various forms (e.g., common fractions, decimal fractions, percents), and each form has many common uses in daily life (e.g., a part of a region, a part of a set, a quotient, a rate, a ratio). One way of describing this complexity is to observe that, from the student’s point of view, a rational number is not a single entity but has multiple personalities. The scheme that has guided research on rational number over the past two decades identifies the following interpretations for any rational number, say 3/4: (a) a part-whole relation (3 out of 4 equal-sized shares); (b) a quotient (3 divided by 4); (c) a measure (3/4 of the way from the beginning of the unit to the end); (d) a ratio (3 red cars for every 4 green cars); and (e) an operation that enlarges or reduces the size of something (3/4 of 12). The task for students is to recognize these distinctions and, at the same time, to construct relations among them that generate a coherent concept of rational number. Clearly, this process is lengthy and multifaceted.
I think it is important for students to understand the connection as well as the distinction between the fraction and the ratio. They can both be written as 3/4.
The point to make to students is that the expression 3/4 can be interpreted in different ways. It has different meanings that are mathematically equivalent. So the students should realize that the ratio 3:4 has the same meaning as the ratio 3/4. A second point to make is that there are different forms for writing rational numbers. Using the colon and the bar are just different ways of writing the ratio. As a ratio it is a correspondence of 3 things to 4 things. The students should also realize that the expression 3/4 can be interpreted as a fraction, 3 out of 4 equal portions of some whole. The fraction is a part-to-whole ratio. There are also part-to-part ratios, like one blue dot to 2 yellow dots. Since there are three dots in total, I can also compare the blue dots to the total, giving 1 to 3 as a ratio, or as a fraction 1/3.
Give lots of examples to make it concrete. Answer their questions to clarify the connection and the difference. Then ask them questions and let them explain their thinking. For example, “There are 12 boys and 14 girls in this class. What fractions and ratios can you make from these numbers?”
I find this diagram helpful when relating the two:
It comes from a model curriculum unit on Rates and Ratios for 6th graders (you can see them all here after registering), and I have found this particular graphic very helpful with math content professional development with 6th grade math teachers.
I always get students to colour in dots: 1 red and 2 blue. Here the ratio is 1:2 red to blue. I ask, "What fraction are red?" Hopefully someone says $\frac{1}{2}$ and we can discuss the misconception. In terms of how to relate $\frac{1}{2}$ to the ratio 1:2, I do it after establishing the equivalence of ratios by scaling up both sides. I want to make green from blue and yellow in the ratio 1:2. I have 2 tins of blue; how many yellow? Students often say that's easy: just double the tins of yellow, because you have doubled the blue. Now I ask them: if I had 1000 yellow, how many blue? They say 500, but this time because they recognise there are $\frac{1}{2}$ as many blue as yellow. A subtle change, but one that recognises that the proportion of blue and yellow is the same for equivalent ratios. Hope this helps.
I want to build a factory and I go to a bank for a loan, to finance part of the investment cost.
Bankers tend to think in $a:b$: "How many euros we will lend for every euro the company will invest". And they tend to have rules of thumb on the matter, say a $3:1$ rule. From the point of view of the company, this could be written $1:3$, and here confusion may arise more easily, because "$1/3$" should be interpreted as "the company will invest one third of what the bank will lend" and not as "one third of the total cost of the factory".
To arrive at this last magnitude the relation is always $a:b \rightarrow \frac {a}{a+b}$
More generally, I think a fraction $a/b$ (which then can be written also as a number, a decimal, etc.) is meaningful only when $a$ and $b$ measure the same entities in nature (in my example, money in the same currency). But the concept represented usually by $a:b$ can bring together items that are not alike (say, "$a$ car-accident deaths per $b$ kilometers of highways"), in which case there is no meaningful interpretation of $a+b$.
The ratio $a:b$ and the fraction $a/b$ are generally not synonymous and should not be treated as such, at least without making clear their interpretations. In many contexts, $a:b$ corresponds to the fraction $a/(a+b)$ or $b/(a+b)$.
For example, if one cuts a pizza into $6$ slices, one of which has anchovies, the fraction of slices with anchovies is $1/6$, while the ratio of slices with anchovies to slices without anchovies is $1:5$.
One context, unfortunately familiar to many students, where $a/b$ and $a:b$ are not synonyms is when speaking of odds in the context of bookmaking. That the odds are given by the ratio $3:2$ means roughly that the bookmaker expects the bet-upon side to win $2/5$ of the time, and that the payout in the event of a win will be $3/5$ of the sum of the payout and the money staked. That is, when speaking of odds, the ratio $a:b$ corresponds to the fraction $a/(a+b)$ or $b/(a+b)$ (which form is relevant depends on interpretation/use). So $9:1$ odds against means that the horse is not expected to win, so the payout is big if it does: $900$ would be paid on a $100$ bet. Meanwhile $9:1$ odds in favor (usually quoted as $1:9$) means that the horse is expected to win, and the payout is small if it does: approximately $11 \simeq 100/9$ would be paid on a $100$ bet.
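The payout arithmetic in the odds example can be sketched with exact fractions (a small check of the 9:1 numbers from the paragraph above):

```python
from fractions import Fraction

# Odds a : b "against", with a 100 unit stake, as in the 9:1 example above.
a, b = 9, 1
stake = 100

payout = stake * Fraction(a, b)            # profit paid if the horse wins
implied_win_prob = Fraction(b, a + b)      # the b/(a+b) form of the ratio

print(payout)              # 900
print(implied_win_prob)    # 1/10
```

Using `Fraction` keeps the $a/(a+b)$ versus $a/b$ distinction exact, with no rounding.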
By itself, the word ratio is indeed rather opaque and should probably be avoided. Here is what "is under the hood and usually goes without saying".
Natural (aka counting) numbers are used to measure a "quantity" (= counting number) of discrete objects, such as in: the "quantity" of the set {egg, egg, egg} is 3. What do we do when we want to measure a "quantity" (= real number) of continuous stuff such as length?
In terms of proportion, the ancient Greeks would have said that the "quantity" (length) of a footstick is to the "quantity" (length) of a yardstick the same as the "quantity" (count) of the set {egg}, i.e. 1, is to the "quantity" (count) of {egg, egg, egg}, i.e. 3. In terms of ratios, they would have said that the ratio of footstick to yardstick is equal to the ratio of {egg} to {egg, egg, egg}. When we say that the "quantity" of a footstick equals $\frac{1}{3}$ the "quantity" of a yardstick, we are just writing the proportion another way.
In other words, the idea of proportion was meant to reduce the measure of "quantity" of stuff (which they couldn't do) to the measure of "quantity" of sets (which they could do).
From my (by now very remote) experience with 5th graders, I think that the above distinction, measuring sets versus measuring continuous stuff (which, by the way, requires the introduction of units), is quite within their reach.
Warning: what follows may be a bit off-topic but is tightly related to the question.
The trouble comes when we want to look at a fraction as indicative of a measure the same way as when we look at a natural number as indicative of a measure. For instance, when we look at 3084385 and 47975 we immediately see that the first is larger than the second. Not so immediately with fractions.
So, of course, we look at $\frac{1}{3}$ as code for "divide 3 into 1", but the question now is "where to stop the division?", and that of course depends on the real-world situation. That we have rules for dealing with the code, e.g. $\frac{a}{b}\times\frac{c}{d}=\frac{ac}{bd}$, is nice and fortunate but not really necessary: as engineers are wont to put it, "The real real numbers are the decimal numbers".
See Gowers' Mathematics: A Very Short Introduction. While spending a whole chapter (7) on infinite decimals, Gowers never even mentions real numbers. And, appropriately recast, the content of Gowers' Chapter 7 should be accessible to 5th graders.
After thinking a while about this question, here comes my first answer on matheducators.SE:
First of all, I would leave out ratios entirely, if possible. They aren't as expressive as fractions are, and fractions are more widely used.
Why are ratios less expressive?
With (binary) ratios, you can compare only two things: $a:b$ (scores in a game, male-to-female distribution in the class, ...), while with fractions, you can have $a/x$, $b/x$, $c/x$, the fraction of several things from a larger group $x$ (color of hair, election results, ...).
Of course you could also do $a:b:c:...$, but I think that's too complicated?
How to explain the difference?
You could emphasize the difference in operations on both: adding ratios is simple, adding fractions is not. Multiplying fractions, or a fraction with a number, makes sense; multiplying ratios doesn't. Another key is proper use of language to make clear what is being compared: the ratio of the number of one thing to the number of another thing, versus the fraction of a number of things out of a total number of things.
Cooking examples work great here.
Rice is made with 1 part rice to 2 parts water. The ratio is 1:2 rice:water. Or we can say the volume of rice to use is 1/2 the volume of water. Or, if (as is often the case) you have an odd amount of rice, use twice that amount of water. I once saw my sister fill a 1 cup measure with rice, throw away the rest, and then add 2 cups of water. I asked why she threw out what looked like another 1/2 cup of rice. It wasn't enough for a "recipe". I don't know if she was a bad cook or bad at math, but she could have just measured 1-1/2 cups of rice and doubled that to 3 cups of water.
We use 2:1 for Margaritas as well: 2 parts mix to 1 part tequila. Mix to tequila is 2:1, and the final drink is 1/3 tequila, 2/3 margarita mix.
It's key to understand that (ratio) 2:1 results in 3 parts of stuff, made up of (fraction) 1/3 one ingredient, 2/3 the other.
Ratio is just a single fraction, whereas proportion is a ratio between 2 fractions.

Example: 5 boys and 3 girls in a class. The ratio of boys is 5/8 and the ratio of girls is 3/8, but the proportion of boys to girls is 5:3.
The basic difference is that ratio is always with respect to some larger or superior quantity, whereas proportion is between the same kind of quantities.
The relation between division and fraction has to be understood in order to understand the relationship between fraction and ratio:
1. Division: finding how many times the divisor goes into the dividend, or the value of a part. 1/5 means 5 goes 0.20 times into 1; the value of one part is 0.20.
2. Fraction: how many times the denominator is in the numerator, or how many parts of the denominator are in the numerator. It means multiplying the value of one part by the numerator: 2/5 = 1/5 × 2 = 0.40.
3. Ratio: a division finding how many times one entity is in the other. It is a fraction with one in the numerator.
4. All are written in fraction form with the same value.
Division and fractions have applications in daily life, whereas ratio, being another form of division and fraction, does not find a place in daily-life applications. Therefore ratio is not an independent entity but a name referring to how many times, in a division, in context. Probably it is a good idea not to teach ratio as an independent entity. Rather, ratio is an application of fractions, like percentage, rebate, loss and profit. They all use the properties of equivalent fractions. Proportion too is an equivalent fraction.
There are several possible fractions one could associate with a given ratio.
Say that a recipe for lemonade calls for $2$ cups of lemon juice and $5$ cups of water. This could be expressed with the ratio $2:5$. Here are some different associated fractions:
$2/7$ of the lemonade is lemon juice. $5/7$ of the lemonade is water. $2/5$ of a cup of lemon juice is the amount which needs to be added to 1 cup of water. $5/2$ of a cup of water is the amount which needs to be added to 1 cup of lemon juice.
I don't think any of these fractions has a right to be called "the fraction" associated with the ratio.
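For completeness, the fractions listed above can all be derived from the ratio with exact arithmetic (a small sketch using Python's fractions module; the 2:5 numbers come from the lemonade recipe above):

```python
from fractions import Fraction

# Lemonade recipe: ratio 2 : 5 of lemon juice to water.
juice, water = 2, 5
total = juice + water

print(Fraction(juice, total))   # 2/7  of the lemonade is lemon juice
print(Fraction(water, total))   # 5/7  of the lemonade is water
print(Fraction(juice, water))   # 2/5  cup of juice per cup of water
print(Fraction(water, juice))   # 5/2  cups of water per cup of juice
```

One ratio, four distinct fractions, which is exactly the point being made.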
Some additional evidence that ratio and fraction are distinct concepts:
The ratios 0:5 and 5:0 both make sense. For instance, you could have 0 parts lemon juice to 5 parts water.
One can have ratios between more than 2 quantities. The ratio 1:2:5, as in 1 part sugar, 2 parts lemon juice, and 5 parts water makes perfect sense. This ratio would be equivalent to 2:4:10 (since the proportions are the same), but there is no single associated fraction.
For a mathematical definition one could use the following:
A ratio of $n$ quantities is an equivalence class of elements of $\mathbb{R}^n-\{\mathbf{0}\}$ under the equivalence relation $\mathbf{v} \sim \mathbf{w}$ iff there exists a $c \in \mathbb{R}-\{0\}$ such that $\mathbf{v} = c \mathbf{w}$.
In other words, ratios are just elements of real projective spaces!
Please don't tell this to your 5th graders.
I'm not sure how you would translate the following into the language of 10-year-olds, but the teacher should first understand this much:
To say that the variables x and y are in the ratio a:b means only that x/y = a/b (for non-zero values of a, b, x and y).
Similarly, to say that the variables x, y and z are in the ratio a:b:c means only that x/y = a/b and y/z = b/c (for non-zero values of a, b, c, x, y and z).
In the flow diagram above, one branch represents the ratio as part/part. But there is no such concept as part/part in fraction form: the denominator is always a whole. When comparing two items, one of them should be taken as the whole and the other as a part of that whole.
The diagram in the present form violates the fundamental principle of fraction. |
Consider a sensor that is measuring physical parameters like temperature, pressure, or velocity. This sensor introduces perturbations and noise; hence, one key problem is the optimal inference of the parameter $\boldsymbol{x}$ from the measurement $\boldsymbol{y}$ of the sensor. Such inference of parameters is used in many research areas like telecommunications, finance, medicine, or social science. When we speak about inference we have to ask: What is optimal inference? Which criterion shall we use? A natural criterion is the inference error. But how shall the error be defined? In the sequel, I address these main questions and hope to give a good overview.

The Forward Model Describes the Sensing

The forward model maps the parameter $\boldsymbol{x}$ to the measurement $\boldsymbol{y}$, namely, $\boldsymbol{y} = \boldsymbol{y}(\boldsymbol{x})$.
I compare inference approaches for three distinct models:
1. Both parameter and measurement are deterministic. 2. The parameter is deterministic, but the measurement is random; the randomness is introduced by noise or by lack of knowledge. 3. Both parameter and measurement are random. This allows us to model statistical knowledge of the parameter.
Usually, estimation refers to the inference of real- or complex-valued parameters from real- or complex-valued measurements, whereas detection uses real- or complex-valued measurements to infer parameters (symbols) in a finite alphabet.

The Loss Describes the Inference Error

A loss function defines the inference error between the estimate $\hat{\boldsymbol{x}}(\boldsymbol{y})$ and the true parameter vector $\boldsymbol{x}$. Popular choices of the loss function are the square error
\[\ell^{\mathrm{SE}}(\hat{\boldsymbol{x}},\boldsymbol{x}) = || \hat{\boldsymbol{x}}(\boldsymbol{y}) - \boldsymbol{x} ||^2~,\]
the hit-or-miss error
\[\ell^{\mathrm{HoM}}(\hat{\boldsymbol{x}},\boldsymbol{x}) =1_{|| \hat{\boldsymbol{x}}(\boldsymbol{y}) - \boldsymbol{x} || > \epsilon}~, \quad \epsilon > 0~,\]
for a continuous random vector $\boldsymbol{x} \in \mathbb{R}^N$ or
\[\ell^{\mathrm{HoM}}(\hat{\boldsymbol{x}},\boldsymbol{x}) = 1_{ \hat{\boldsymbol{x}}(\boldsymbol{y}) \neq \boldsymbol{x} } \]
for $\boldsymbol{x}$ in a finite alphabet, and the absolute error
\[\ell^{\mathrm{abs}}(\hat{\boldsymbol{x}},\boldsymbol{x}) = \sum_{n=1}^{N} | \hat{x}_n - x_n |~,\]
where the indicator function $1_{x}$ is unity if $x$ is true and zero if $x$ is false. The scalar $x_n$ is the $n$th element of $\boldsymbol{x}$.
The loss function weights the error and should be chosen carefully. The square-error loss weights large errors more than small errors, the hit-or-miss loss weights errors independently of their magnitude, and the absolute-error loss weights errors linearly. John D. Cook presented a nice example.
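To make the comparison concrete, here is a small NumPy sketch (my own code, with an arbitrary toy estimate) implementing the three losses defined above.

```python
import numpy as np

def squared_error(x_hat, x):
    # l^SE: penalizes large deviations quadratically
    return float(np.sum((x_hat - x) ** 2))

def hit_or_miss(x_hat, x, eps):
    # l^HoM: 1 if the estimate misses an eps-ball around x, else 0
    return float(np.linalg.norm(x_hat - x) > eps)

def absolute_error(x_hat, x):
    # l^abs: penalizes deviations linearly
    return float(np.sum(np.abs(x_hat - x)))

x = np.array([1.0, 2.0])
x_hat = np.array([1.5, 2.0])
print(squared_error(x_hat, x))     # 0.25
print(hit_or_miss(x_hat, x, 0.1))  # 1.0 (the error 0.5 exceeds eps)
print(absolute_error(x_hat, x))    # 0.5
```

Note how the same deviation of 0.5 is reported as 0.25 by the square error but 0.5 by the absolute error: below 1, squaring *de-emphasizes* the error, while above 1 it amplifies it.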
There are three main motivations of the square-error loss:
1. The square of a signal represents power; here it is the power of the error. 2. The square stems from the exponent of the Gaussian probability density (however, often no Gaussian assumption is made). 3. A closed-form solution often exists.

What is Optimal?
There is no unique optimality criterion for an estimator or detector. Therefore, we focus on a popular criterion: we seek an estimator $\hat{\boldsymbol{x}}(\boldsymbol{y})$ that minimizes the risk,

\[ \hat{\boldsymbol{x}} = \arg\min_{\boldsymbol{x}} R~.\]

In short, the risk is the expected loss. The word expected has different meanings for the different probabilistic descriptions of the forward model.

Algebraic Inference
First, we consider a deterministic parameter $\boldsymbol{x}$ and a deterministic forward model $\boldsymbol{y}(\boldsymbol{x})$. If the mapping $\boldsymbol{y}(\boldsymbol{x})$ is bijective, then an inverse exists and $\boldsymbol{x} = \boldsymbol{y}^{-1}(\boldsymbol{y})$.

If the mapping $\boldsymbol{y}(\boldsymbol{x})$ is not bijective, then we search for a parameter $\boldsymbol{x}$ that fulfills our criterion of a minimal risk function $R = \ell$. One prominent loss is the square error $\ell^{\mathrm{SE}} (\hat{\boldsymbol{x}},\boldsymbol{x} )$ between the estimate $\hat{\boldsymbol{x}}$ and the true parameter $\boldsymbol{x}$.
An estimator is obtained by
\[\hat{\boldsymbol{x}} = \arg\min_{\boldsymbol{x}} R = \arg\min_{\boldsymbol{x}} \ell(\hat{\boldsymbol{x}},\boldsymbol{x})~.\]
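For the square-error loss and a linear forward map $\boldsymbol{y} = A\boldsymbol{x}$, this minimization is the classical least-squares problem, solved by the pseudo-inverse. A minimal NumPy sketch (the matrix and sizes are my own toy example):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 3))          # tall forward map y = A x: not bijective
x_true = np.array([1.0, -2.0, 0.5])
y = A @ x_true                       # noiseless measurement

# least-squares estimate minimizing ||A x - y||^2, via the pseudo-inverse
x_hat = np.linalg.pinv(A) @ y
print(np.allclose(x_hat, x_true))    # consistent system, so recovery is exact
```

With noisy measurements the recovery is no longer exact, but the pseudo-inverse still returns the minimizer of the squared residual.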
The result for $\ell = \ell^{\mathrm{SE}}$ is a least-squares solution using the pseudo-inverse. Replacing the $L_2$-norm in $\ell^{\mathrm{SE}}(\hat{\boldsymbol{x}},\boldsymbol{x})$ by a weighted norm leads to weighted least-squares solutions.

Frequentist Inference
We could model the measurement $\boldsymbol{y}$ as a random vector, i.e., the forward mapping $\boldsymbol{y}(\boldsymbol{x})$ is a random function. A simple example is additive noise $\boldsymbol{v}$,
\[\boldsymbol{y} = \boldsymbol{x} + \boldsymbol{v}~,\]
where $\boldsymbol{v}$ is a random vector and this implies a random measurement vector $\boldsymbol{y}$.
We use the same approach as in the previous section and define the frequentist risk as the expected loss, i.e. $R = \mathrm{E}_{\boldsymbol{y}}(\ell)$. The estimate is
\[\hat{\boldsymbol{x}} = \arg\min_{\boldsymbol{x}} R = \arg\min_{\boldsymbol{x}} \mathrm{E}_{\boldsymbol{y}}(\ell)~.\]
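As a concrete sketch (my own toy example, not from the original post): for the additive-noise model above with many i.i.d. Gaussian noise samples, the frequentist risk under the square-error loss is minimized by the sample mean.

```python
import numpy as np

rng = np.random.default_rng(1)
x_true = 3.0
N = 100_000
# N independent measurements y_i = x + v_i with v_i ~ N(0, 2^2)
y = x_true + rng.normal(scale=2.0, size=N)

# For i.i.d. Gaussian noise the sample mean is the ML estimate of x
# (and, here, also the MVU estimator).
x_hat = y.mean()
print(abs(x_hat - x_true) < 0.05)
```

The estimation error shrinks like $1/\sqrt{N}$, which is why the check above succeeds with such a tight tolerance.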
If we use the square-error loss for a continuous random parameter $\boldsymbol{x}$ and an unbiased estimator exists, then we obtain the minimum variance unbiased (MVU) estimator. Using the hit-or-miss loss, we obtain the maximum likelihood (ML) estimator $\hat{\boldsymbol{x}} = \arg\max_{\boldsymbol{x}} v(\boldsymbol{y}|\boldsymbol{x})$. Here, the likelihood function $v(\boldsymbol{y}|\boldsymbol{x})$ is the conditional probability density function or probability mass function of $\boldsymbol{y}$ given $\boldsymbol{x}$.

Bayesian Inference
Furthermore, our belief in a distribution of $\boldsymbol{x}$ influences our inference. The Bayesian risk is defined as the expectation of the loss function with respect to both $\boldsymbol{y}$ and $\boldsymbol{x}$. That is,
\[\hat{\boldsymbol{x}} = \arg\min_{\boldsymbol{x}} R = \arg\min_{\boldsymbol{x}} \mathrm{E}_{\boldsymbol{x},\boldsymbol{y}}(\ell) ~.\]
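A minimal scalar sketch (my own example): for a Gaussian prior and Gaussian noise, the posterior is Gaussian, so the minimizers of the Bayesian risk under the square-error, hit-or-miss, and absolute-error losses (posterior mean, mode, and median) all coincide and have a closed form.

```python
# Scalar model: x ~ N(mu0, s0^2), y = x + v, v ~ N(0, s^2).
mu0, s0 = 0.0, 1.0   # prior mean and standard deviation (my choice)
s = 0.5              # noise standard deviation (my choice)
y = 1.2              # observed measurement

# Gaussian posterior: precision-weighted combination of prior and data.
post_var = 1.0 / (1.0 / s0**2 + 1.0 / s**2)
x_mmse = post_var * (mu0 / s0**2 + y / s**2)  # = MAP = median in this model
print(round(x_mmse, 6))
```

Because the noise is much less uncertain than the prior here ($s < s_0$), the estimate 0.96 is pulled strongly toward the measurement 1.2 and only slightly toward the prior mean 0.
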
The square-error loss leads to the minimum mean square-error (MMSE) solution $\hat{\boldsymbol{x}} = \mathrm{E}(\boldsymbol{x}|\boldsymbol{y})$, the hit-or-miss loss to the maximum a-posteriori (MAP) solution $\hat{\boldsymbol{x}} = \arg\max_{\boldsymbol{x}} v(\boldsymbol{x}|\boldsymbol{y})$, and the absolute-error loss to the median solution $\hat{\boldsymbol{x}} = \mathrm{median}(\boldsymbol{x}|\boldsymbol{y})$.

Performance Bounds for Estimators
In frequentist and Bayesian estimation, performance bounds are used to lower-bound the square-error loss if no analytic solution for an estimator exists. Prominent performance bounds are:
Frequentist lower bounds: Cramér-Rao, Bhattacharyya, Barankin. Bayesian lower bounds: Bayesian (van Trees) Cramér-Rao, Bayesian Bhattacharyya, Bobrovski-Zakai, Weiss-Weinstein, Ziv-Zakai bounds (Bayesian).
Bibliography

Bayesian Bounds for Parameter Estimation and Nonlinear Filtering/Tracking by Harry L. Van Trees, Kristine L. Bell, 2007, Wiley, ISBN 0-47-012095-9

Fundamentals of Statistical Signal Processing: Estimation Theory by Steven M. Kay, 1993, Prentice Hall, ISBN 0-13-345711-7

Fundamentals of Statistical Signal Processing: Detection Theory by Steven M. Kay, 1998, Prentice Hall, ISBN 0-13-504135-X

Lecture Notes on Bayesian Estimation and Classification by Mario A. T. Figueiredo (http://www.lx.it.pt/~mtf/learning/Bayes_lecture_notes.pdf)
Rubinstein’s lcalc library¶
This is a wrapper around Michael Rubinstein’s lcalc. See http://oto.math.uwaterloo.ca/~mrubinst/L_function_public/CODE/.
AUTHORS:
Rishikesh (2010): added compute_rank() and hardy_z_function()
Yann Laigle-Chapuy (2009): refactored
Rishikesh (2009): initial version

class sage.libs.lcalc.lcalc_Lfunction.Lfunction¶

Bases: object
Initialization of L-function objects. See the derived classes for details; this class is not supposed to be instantiated directly.
EXAMPLES:
sage: from sage.libs.lcalc.lcalc_Lfunction import *
sage: Lfunction_from_character(DirichletGroup(5)[1])
L-function with complex Dirichlet coefficients
compute_rank()¶
Computes the analytic rank (the order of vanishing at the center) of the L-function.
EXAMPLES:
sage: chi=DirichletGroup(5)[2] #This is a quadratic character
sage: from sage.libs.lcalc.lcalc_Lfunction import *
sage: L=Lfunction_from_character(chi, type="int")
sage: L.compute_rank()
0
sage: E=EllipticCurve([-82,0])
sage: L=Lfunction_from_elliptic_curve(E, number_of_coeffs=40000)
sage: L.compute_rank()
3
find_zeros(T1, T2, stepsize)¶

Finds zeros on the critical line between T1 and T2 using a step size of stepsize. This function might miss zeros if the step size is too large. It computes the zeros of the L-function from changes in sign of a real-valued function whose zeros coincide with the zeros of the L-function.

Use find_zeros_via_N() for a slower but more rigorous computation.
INPUT:
T1 – a real number giving the lower bound
T2 – a real number giving the upper bound
stepsize – step size to be used for the zero search
OUTPUT:
list – A list of the imaginary parts of the zeros which were found.
EXAMPLES:
sage: from sage.libs.lcalc.lcalc_Lfunction import *
sage: chi=DirichletGroup(5)[2] #This is a quadratic character
sage: L=Lfunction_from_character(chi, type="int")
sage: L.find_zeros(5,15,.1)
[6.64845334472..., 9.83144443288..., 11.9588456260...]
sage: L=Lfunction_from_character(chi, type="double")
sage: L.find_zeros(1,15,.1)
[6.64845334472..., 9.83144443288..., 11.9588456260...]
sage: chi=DirichletGroup(5)[1]
sage: L=Lfunction_from_character(chi, type="complex")
sage: L.find_zeros(-8,8,.1)
[-4.13290370521..., 6.18357819545...]
sage: L=Lfunction_Zeta()
sage: L.find_zeros(10,29.1,.1)
[14.1347251417..., 21.0220396387..., 25.0108575801...]
find_zeros_via_N(count=0, do_negative=False, max_refine=1025, rank=-1, test_explicit_formula=0)¶

Finds count zeros with positive imaginary part, starting at the real axis. This function also verifies that all the zeros have been found.
INPUT:
count – number of zeros to be found
do_negative – (default: False) False to ignore zeros below the real axis
max_refine – when some zeros are found to be missing, the step size used to find zeros is refined. max_refine gives an upper limit on when lcalc should give up. Use the default value unless you know what you are doing.
rank – integer (default: -1) analytic rank of the L-function. If -1 is passed, then we attempt to compute it. (Use the default if in doubt.)
test_explicit_formula – integer (default: 0) If nonzero, test the explicit formula for additional confidence that all the zeros have been found and are accurate. This is still being tested, so using the default is recommended.
OUTPUT:
list – A list of the imaginary parts of the zeros that have been found
EXAMPLES:
sage: from sage.libs.lcalc.lcalc_Lfunction import *
sage: chi=DirichletGroup(5)[2] #This is a quadratic character
sage: L=Lfunction_from_character(chi, type="int")
sage: L.find_zeros_via_N(3)
[6.64845334472..., 9.83144443288..., 11.9588456260...]
sage: L=Lfunction_from_character(chi, type="double")
sage: L.find_zeros_via_N(3)
[6.64845334472..., 9.83144443288..., 11.9588456260...]
sage: chi=DirichletGroup(5)[1]
sage: L=Lfunction_from_character(chi, type="complex")
sage: L.find_zeros_via_N(3)
[6.18357819545..., 8.45722917442..., 12.6749464170...]
sage: L=Lfunction_Zeta()
sage: L.find_zeros_via_N(3)
[14.1347251417..., 21.0220396387..., 25.0108575801...]
hardy_z_function(s)¶

Computes the Hardy Z-function of the L-function at s.
INPUT:
s – a complex number with imaginary part between -0.5 and 0.5
EXAMPLES:
sage: chi = DirichletGroup(5)[2] # Quadratic character
sage: from sage.libs.lcalc.lcalc_Lfunction import *
sage: L = Lfunction_from_character(chi, type="int")
sage: L.hardy_z_function(0)
0.231750947504...
sage: L.hardy_z_function(.5).imag() # abs tol 1e-15
1.17253174178320e-17
sage: L.hardy_z_function(.4+.3*I)
0.2166144222685... - 0.00408187127850...*I
sage: chi = DirichletGroup(5)[1]
sage: L = Lfunction_from_character(chi, type="complex")
sage: L.hardy_z_function(0)
0.793967590477...
sage: L.hardy_z_function(.5).imag() # abs tol 1e-15
0.000000000000000
sage: E = EllipticCurve([-82,0])
sage: L = Lfunction_from_elliptic_curve(E, number_of_coeffs=40000)
sage: L.hardy_z_function(2.1)
-0.00643179176869...
sage: L.hardy_z_function(2.1).imag() # abs tol 1e-15
-3.93833660115668e-19
value(s, derivative=0)¶

Computes the value of the L-function at s.
INPUT:
s – a complex number
derivative – integer (default: 0) the derivative to be evaluated
rotate – (default: False) If True, this returns the value of the Hardy Z-function (sometimes called the Riemann-Siegel Z-function or the Siegel Z-function).
EXAMPLES:
sage: chi=DirichletGroup(5)[2] #This is a quadratic character
sage: from sage.libs.lcalc.lcalc_Lfunction import *
sage: L=Lfunction_from_character(chi, type="int")
sage: L.value(.5) # abs tol 3e-15
0.231750947504016 + 5.75329642226136e-18*I
sage: L.value(.2+.4*I)
0.102558603193... + 0.190840777924...*I
sage: L=Lfunction_from_character(chi, type="double")
sage: L.value(.6) # abs tol 3e-15
0.274633355856345 + 6.59869267328199e-18*I
sage: L.value(.6+I)
0.362258705721... + 0.433888250620...*I
sage: chi=DirichletGroup(5)[1]
sage: L=Lfunction_from_character(chi, type="complex")
sage: L.value(.5)
0.763747880117... + 0.216964767518...*I
sage: L.value(.6+5*I)
0.702723260619... - 1.10178575243...*I
sage: L=Lfunction_Zeta()
sage: L.value(.5)
-1.46035450880...
sage: L.value(.4+.5*I)
-0.450728958517... - 0.780511403019...*I

class sage.libs.lcalc.lcalc_Lfunction.Lfunction_C¶

The Lfunction_C class is used to represent L-functions with complex Dirichlet coefficients. We assume that the L-functions satisfy the following functional equation.\[\Lambda(s) = \omega Q^s \overline{\Lambda(1-\bar s)}\]
where\[\Lambda(s) = Q^s \left( \prod_{j=1}^a \Gamma(\kappa_j s + \gamma_j) \right) L(s)\]
See (23) in arXiv math/0412181
INPUT:
what_type_L – integer, this should be set to 1 if the coefficients are periodic and 0 otherwise.
dirichlet_coefficient – list of Dirichlet coefficients of the L-function. Only the first \(M\) coefficients are needed if they are periodic.
period – if the coefficients are periodic, this should be the period of the coefficients.
Q – see above
OMEGA – see above
kappa – list of the values of \(\kappa_j\) in the functional equation
gamma – list of the values of \(\gamma_j\) in the functional equation
pole – list of the poles of the L-function
residue – list of the residues of the L-function
Note
If an L-function satisfies \(\Lambda(s) = \omega Q^s \Lambda(k-s)\), by replacing \(s\) by \(s+(k-1)/2\), one can get it in the form we need.
class sage.libs.lcalc.lcalc_Lfunction.Lfunction_D¶

The Lfunction_D class is used to represent L-functions with real Dirichlet coefficients. We assume that the L-functions satisfy the following functional equation.\[\Lambda(s) = \omega Q^s \overline{\Lambda(1-\bar s)}\]
where\[\Lambda(s) = Q^s \left( \prod_{j=1}^a \Gamma(\kappa_j s + \gamma_j) \right) L(s)\]
See (23) in arXiv math/0412181
INPUT:
what_type_L – integer, this should be set to 1 if the coefficients are periodic and 0 otherwise.
dirichlet_coefficient – list of Dirichlet coefficients of the L-function. Only the first \(M\) coefficients are needed if they are periodic.
period – if the coefficients are periodic, this should be the period of the coefficients.
Q – see above
OMEGA – see above
kappa – list of the values of \(\kappa_j\) in the functional equation
gamma – list of the values of \(\gamma_j\) in the functional equation
pole – list of the poles of the L-function
residue – list of the residues of the L-function
Note
If an L-function satisfies \(\Lambda(s) = \omega Q^s \Lambda(k-s)\), by replacing \(s\) by \(s+(k-1)/2\), one can get it in the form we need.
class sage.libs.lcalc.lcalc_Lfunction.Lfunction_I¶

The Lfunction_I class is used to represent L-functions with integer Dirichlet coefficients. We assume that the L-functions satisfy the following functional equation.\[\Lambda(s) = \omega Q^s \overline{\Lambda(1-\bar s)}\]
where\[\Lambda(s) = Q^s \left( \prod_{j=1}^a \Gamma(\kappa_j s + \gamma_j) \right) L(s)\]
See (23) in arXiv math/0412181
INPUT:
what_type_L – integer, this should be set to 1 if the coefficients are periodic and 0 otherwise.
dirichlet_coefficient – list of Dirichlet coefficients of the L-function. Only the first \(M\) coefficients are needed if they are periodic.
period – if the coefficients are periodic, this should be the period of the coefficients.
Q – see above
OMEGA – see above
kappa – list of the values of \(\kappa_j\) in the functional equation
gamma – list of the values of \(\gamma_j\) in the functional equation
pole – list of the poles of the L-function
residue – list of the residues of the L-function
Note
If an L-function satisfies \(\Lambda(s) = \omega Q^s \Lambda(k-s)\), by replacing \(s\) by \(s+(k-1)/2\), one can get it in the form we need.
class sage.libs.lcalc.lcalc_Lfunction.Lfunction_Zeta¶

The Lfunction_Zeta class is used to generate the Riemann zeta function.
sage.libs.lcalc.lcalc_Lfunction.Lfunction_from_character(chi, type='complex')¶
Given a primitive Dirichlet character, this function returns an lcalc L-function object for the L-function of the character.
INPUT:
chi – a Dirichlet character
type – string (default: "complex") type used for the Dirichlet coefficients. This can be "int", "double" or "complex".

OUTPUT:

L-function object for chi.
EXAMPLES:
sage: from sage.libs.lcalc.lcalc_Lfunction import Lfunction_from_character
sage: Lfunction_from_character(DirichletGroup(5)[1])
L-function with complex Dirichlet coefficients
sage: Lfunction_from_character(DirichletGroup(5)[2], type="int")
L-function with integer Dirichlet coefficients
sage: Lfunction_from_character(DirichletGroup(5)[2], type="double")
L-function with real Dirichlet coefficients
sage: Lfunction_from_character(DirichletGroup(5)[1], type="int")
Traceback (most recent call last):
...
ValueError: For non quadratic characters you must use type="complex"
sage.libs.lcalc.lcalc_Lfunction.Lfunction_from_elliptic_curve(E, number_of_coeffs=10000)¶
Given an elliptic curve E, return an L-function object for the function \(L(s, E)\).
INPUT:
E – an elliptic curve
number_of_coeffs – integer (default: 10000) the number of coefficients to be used when constructing the L-function object. Right now this is fixed at object creation time, and is not automatically set intelligently.

OUTPUT:

L-function object for \(L(s, E)\).
EXAMPLES:
sage: from sage.libs.lcalc.lcalc_Lfunction import Lfunction_from_elliptic_curve
sage: L = Lfunction_from_elliptic_curve(EllipticCurve('37'))
sage: L
L-function with real Dirichlet coefficients
sage: L.value(0.5).abs() < 1e-15 # "noisy" zero on some platforms (see #9615)
True
sage: L.value(0.5, derivative=1)
0.305999...
To track solutions from a start system $G$ to the target system $F$ we use by default the straight-line homotopy
$$ H(x,t) := (1-t)F+tG\;. $$
But this is in general not the best choice, since you usually leave the solution space of your problem. Therefore we support the ability to define arbitrary homotopies, where you have the full power of Julia available.

In the following we will illustrate how to set up a custom homotopy using the following example. For polynomial systems $F$ and $G$ we want to define the homotopy
$$ H(x,t) = (1 - t) F( U(t) x ) + tG( U(t) x ) $$
where $U(t)$ is a random path in the space of unitary matrices with $U(0) = U(1) = I$ and $I$ is the identity matrix. Such a random path can be constructed by
$$ U(t) = U \begin{bmatrix}\cos(2\pi t) & -\sin(2\pi t) & 0 &\cdots & 0 \\ \sin(2\pi t) & \cos(2\pi t) & 0 &\cdots & 0 \\ 0 & 0 & 1 &\cdots & 0\\ 0 & 0 & 0 &\ddots & 0\\ 0 & 0 & 0 &\cdots & 1 \end{bmatrix} U^{\mathsf{H}}. $$
with a random unitary matrix $U$.
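The tutorial below implements this in Julia; as a language-neutral sanity check, here is a NumPy sketch (my own code) of $U(t)$ verifying that $U(0) = U(1) = I$ and that $U(t)$ stays unitary along the path. Note that for complex $U$ the conjugation must use the conjugate transpose, which is what Julia's `U'` denotes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# random unitary U from the QR decomposition of a complex Gaussian matrix
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

def U_t(t):
    # rotation by 2*pi*t in the first two coordinates, conjugated by U
    R = np.eye(n, dtype=complex)
    c, s = np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)
    R[0, 0], R[0, 1] = c, -s
    R[1, 0], R[1, 1] = s, c
    return U @ R @ U.conj().T

I = np.eye(n)
print(np.allclose(U_t(0.0), I), np.allclose(U_t(1.0), I))  # endpoints are the identity
print(np.allclose(U_t(0.3) @ U_t(0.3).conj().T, I))        # unitary along the path
```

Since $R(0) = R(1) = I$, conjugation by any fixed unitary $U$ collapses to the identity at both endpoints, so the homotopy starts and ends on the original systems.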
To define a homotopy we have to know how to compute for all $x \in \mathbb{C}^n$, $t \in \mathbb{C}$
$$ H(x,t), \quad \frac{\partial H}{\partial x}(x,t) \quad \text{ and } \quad \frac{\partial H}{\partial t}(x,t)\;. $$
We denote the partial derivative of $H$ w.r.t. $x$ as the
Jacobian of $H$. For simplification (in the math as well as in the implementation) we introduce the helper homotopy
$$ \tilde{H}(y, t) := (1 - t) F( y ) + tG(y)\;. $$
Note $H(x,t) = \tilde{H}(U(t)x, t)$. Using the chain rule we get for the partial derivatives
$$ \frac{\partial H}{\partial x}(x,t) = \frac{\partial \tilde{H}}{\partial y}(U(t)x,t) U(t) $$
and
$$ \frac{\partial H}{\partial t}(x,t) = \frac{\partial \tilde{H}}{\partial y}(U(t)x,t) U'(t) x + \frac{\partial \tilde{H}}{\partial t}(U(t)x,t) $$
where
$$ U'(t)= U \begin{bmatrix}-2\pi\sin(2\pi t) & -2\pi\cos(2\pi t) & 0 &\cdots & 0 \\ 2\pi\cos(2\pi t) & -2\pi\sin(2\pi t) & 0 &\cdots & 0 \\ 0 & 0 & 0 &\cdots & 0\\ 0 & 0 & 0 &\ddots & 0\\ 0 & 0 & 0 &\cdots & 0 \end{bmatrix} U^{\mathsf{H}}. $$
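Before wiring a derivative into a path tracker it is worth checking it numerically. Here is a NumPy sketch (my own code, not part of the tutorial) comparing the $U'(t)$ formula against a central finite difference.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

def block(t, derivative=False):
    # U(t) or U'(t): the 2x2 (derivative of the) sin-cos block, conjugated by U
    B = np.zeros((n, n), dtype=complex) if derivative else np.eye(n, dtype=complex)
    c, s = np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)
    if derivative:
        B[0, 0], B[0, 1] = -2 * np.pi * s, -2 * np.pi * c
        B[1, 0], B[1, 1] = 2 * np.pi * c, -2 * np.pi * s
    else:
        B[0, 0], B[0, 1] = c, -s
        B[1, 0], B[1, 1] = s, c
    return U @ B @ U.conj().T

t, h = 0.37, 1e-6
fd = (block(t + h) - block(t - h)) / (2 * h)  # central finite difference
print(np.allclose(fd, block(t, derivative=True), atol=1e-5))
```

The same finite-difference trick applies to the full $\partial H/\partial t$ once the chain-rule implementation below is in place.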
A custom homotopy has to satisfy a certain interface. We start with the data structure for the homotopy. A homotopy is represented by a struct which is a subtype of Homotopies.AbstractHomotopy. Since $\tilde{H}$ is the standard straight-line homotopy, we can reuse its implementation to save us some work, since homotopies compose easily.
using HomotopyContinuation, LinearAlgebra

struct RandomUnitaryPath{Start,Target} <: Homotopies.AbstractHomotopy
    straightline::StraightLineHomotopy{Start, Target}
    U::Matrix{ComplexF64}
end

function RandomUnitaryPath(start::Systems.AbstractSystem, target::Systems.AbstractSystem)
    m, n = size(start)
    # construct a random unitary matrix
    U = Matrix(qr(randn(n,n) + im * randn(n,n)).Q)
    RandomUnitaryPath(Homotopies.StraightLineHomotopy(start, target), U)
end

# We have to define the size
Base.size(H::RandomUnitaryPath) = size(H.straightline)
To get good performance it is important to be careful about memory allocations. It is much better to initialize a chunk of memory once and to reuse it. To support this optimization we have the concept of a cache. This is a struct with supertype Homotopies.AbstractHomotopyCache in which we allocate all memory necessary to evaluate and differentiate our homotopy. This is an optimization and not necessary at the beginning, but for the best performance it is necessary to implement it. To illustrate how to do this, we will implement a cache here. Don't look in too much detail at the exact type definition for now; we just allocate a bunch of storage which will make much more sense later. As a constructor for the cache we have to define the Homotopies.cache method.
struct RandomUnitaryPathCache{C, T1, T2} <: Homotopies.AbstractHomotopyCache
    straightline::C
    U_t::Matrix{ComplexF64}
    y::Vector{T1}
    # More temporary storage necessary to avoid allocations
    jac::Matrix{T2} # holds a jacobian
    dt::Vector{T2} # holds a derivative w.r.t. t
    U::Matrix{ComplexF64} # holds something like U
end

# A cache is always constructed by this method.
function Homotopies.cache(H::RandomUnitaryPath, x, t)
    U_t = copy(H.U)
    y = U_t * x
    straightline = Homotopies.cache(H.straightline, y, t)
    jac = Homotopies.jacobian(H.straightline, y, t, straightline)
    dt = jac * y
    U = copy(U_t)
    RandomUnitaryPathCache(straightline, U_t, y, jac, dt, U)
end
We start by implementing subroutines to evaluate and differentiate $U(t)$, as well as to compute $U(t)x$ and $U'(t)x$. We use the U_t and y fields of the cache to store the values $U(t)$ and $U(t)x$ (resp. $U'(t)$ and $U'(t)x$).
# U(t)x
function Ut_mul_x!(cache, U, x, t)
    # We start with U * (the 2x2 sin-cos block + I)
    cache.U .= U
    s, c = sin(2π*t), cos(2π*t)
    for i=1:size(U, 1)
        cache.U[i, 1] = U[i,2] * s + U[i,1] * c
        cache.U[i, 2] = U[i,2] * c - U[i,1] * s
    end
    # U(t) = cache.U * U'
    # y = cache.y = U(t) * x
    mul!(cache.y, mul!(cache.U_t, cache.U, U'), x)
end

# U'(t)x
function U_dot_t_mul_x!(cache, U, x, t)
    # We start with U * (the derivative of the 2x2 sin-cos block + 0)
    cache.U .= zero(eltype(U))
    s, c = 2π*sin(2π*t), 2π*cos(2π*t)
    for i=1:size(U, 1)
        cache.U[i, 1] = U[i,2] * c - U[i,1] * s
        cache.U[i, 2] = -U[i,2] * s - U[i,1] * c
    end
    # U'(t) = cache.U * U'
    # y' = cache.y = U'(t) * x
    mul!(cache.y, mul!(cache.U_t, cache.U, U'), x)
end
Now we are ready to implement $H(x,t)$, its Jacobian and the derivative w.r.t. $t$.
function Homotopies.evaluate!(out, H::RandomUnitaryPath, x, t, cache)
    y = Ut_mul_x!(cache, H.U, x, t)
    Homotopies.evaluate!(out, H.straightline, y, t, cache.straightline)
end

function Homotopies.jacobian!(out, H::RandomUnitaryPath, x, t, cache)
    y = Ut_mul_x!(cache, H.U, x, t)
    Homotopies.jacobian!(cache.jac, H.straightline, y, t, cache.straightline)
    mul!(out, cache.jac, cache.U_t) # out = J_H(y, t) * U(t)
end

function Homotopies.dt!(out, H::RandomUnitaryPath, x, t, cache)
    y = Ut_mul_x!(cache, H.U, x, t)
    # chain rule
    Homotopies.jacobian_and_dt!(cache.jac, out, H.straightline, y, t, cache.straightline)
    y_dot = U_dot_t_mul_x!(cache, H.U, x, t) # y_dot = U'(t)x
    mul!(cache.dt, cache.jac, y_dot) # dt = J_H(y, t) * y_dot
    out .+= cache.dt
end
We also support computing evaluate! and jacobian! simultaneously, as well as computing jacobian! and dt! simultaneously. This can be very beneficial for performance, so let's implement this here, since it mostly involves copy-paste.
function Homotopies.evaluate_and_jacobian!(val, jac, H::RandomUnitaryPath, x, t, cache)
    y = Ut_mul_x!(cache, H.U, x, t)
    Homotopies.evaluate_and_jacobian!(val, cache.jac, H.straightline, y, t, cache.straightline)
    mul!(jac, cache.jac, cache.U_t)
end

function Homotopies.jacobian_and_dt!(jac, dt, H::RandomUnitaryPath, x, t, cache)
    y = Ut_mul_x!(cache, H.U, x, t)
    Homotopies.jacobian_and_dt!(cache.jac, dt, H.straightline, y, t, cache.straightline)
    mul!(jac, cache.jac, cache.U_t) # jac = J_H(y, t) * U(t)
    y_dot = U_dot_t_mul_x!(cache, H.U, x, t) # y_dot = U'(t)x
    mul!(cache.dt, cache.jac, y_dot) # dt = J_H(y, t) * y_dot
    dt .+= cache.dt
end
Implementing these methods without ever testing them is … not a good idea. Also, just throwing the homotopy into solve can result in confusing error messages. Therefore we provide a set of tests against which we can check our implementation. Although they do not verify that we implemented the math correctly (or that our math is correct in the first place), they will catch any runtime errors.
# Let us construct some test systems
@polyvar x y z;
F = SPSystem([x^2*y - 3x*z, z^2*x + 3y^2]);
G = SPSystem([z*x^2 - 3x*y^2, z^3*x - 2x*y*z^2]);

# Here we can test that our implementation does not produce an error
InterfaceTest.homotopy(RandomUnitaryPath(G, F))
Test Passed
solve([x^2 - y, y^3*x-x], homotopy=RandomUnitaryPath)
AffineResult with 8 tracked paths
==================================
• 6 non-singular solutions (2 real)
• 1 singular solution (1 real)
• 1 solution at infinity
• 0 failed paths
• random seed: 847463
Alternatively, we could also construct the homotopy directly and give it to solve together with start solutions. Note that in this case we have to ensure that our homotopy is already homogeneous.
While there are different "simple" proofs of the JL Lemma (the Gupta-Dasgupta proof is one such), it's not clear whether these are "undergraduate" level. So instead of answering his original question, I decided to change it to Is there a proof of the Johnson-Lindenstrauss Lemma that can be explained to an undergraduate ? It's a little odd to even ask the question, considering the intrinsic geometric nature of the lemma. But there's a reasonably straightforward way of seeing how the bound emerges without needing to worry too much about random rotations, matrices of Gaussians or the Brunn-Minkowski theorem. Is there a proof of the JL Lemma that isn't "geometric" ? Warning: what follows is a heuristic argument that helps suggest why the bound is in the form that it is: it should not be confused for an actual proof.
In its original form, the JL Lemma says that any set of $n$ points in $R^d$ can be embedded in $R^k$ with $k = O(\log n/\epsilon^2)$ such that all distances are preserved to within a $1+\epsilon$ factor. But the real result at the core of this is that there is a linear mapping taking a unit vector in $R^d$ to a vector of norm in the range $1\pm \epsilon$ in $R^k$, where $k = 1/\epsilon^2$ (the rest follows by scaling and an application of the union bound).
Trick #1: Take a set of values $a_1, \ldots, a_n$ and set $Y = \sum_i a_i r_i$, where $r_i$ is chosen (i.i.d.) to be +1 or -1 with equal probability. Then $E[Y^2] = \sum a_i^2$. This can be verified by an easy calculation.
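Trick #1 is also easy to check empirically. Here is a small NumPy sketch (my own code, with arbitrary toy values) estimating $E[Y^2]$ by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([0.3, -1.2, 0.7, 2.0])     # arbitrary values
trials = 200_000
r = rng.choice([-1.0, 1.0], size=(trials, a.size))  # i.i.d. random signs
Y = r @ a                               # one Y per trial

print(float(np.mean(Y**2)))             # Monte Carlo estimate of E[Y^2]
print(float(np.sum(a**2)))              # sum of squares, for comparison
```

The two printed numbers agree to a couple of decimal places; the easy calculation behind this is that cross terms $a_i a_j r_i r_j$ with $i \neq j$ have mean zero.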
So now consider the vector $v$. Let's assume that $v$'s "mass" is roughly equally distributed among its coordinates. Take a random sample of $d/k$ of the coordinates of $v$ and apply the above trick to the sampled values. Under the assumption, the resulting $Y^2$ will have roughly $1/k$ of the total (squared) mass of $v$. Scale up by $k$.
This is one estimator of the norm of $v$. It is unbiased and it has a bounded maximum value because of the assumption. This means that we can apply a Chernoff bound over a set of $k$ such estimators. Roughly speaking, the probability of deviation from the mean is $\exp(-\epsilon^2 k)$, giving the desired value of $k$.
But how do we enforce the assumption? By applying a random Fourier transform (or actually, a random Hadamard transform). This "spreads" the mass of the vector out among the coordinates (technically, by ensuring an upper bound on the $\ell_\infty$ norm).
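Here is a toy sketch (my own code, not from any JL library) of the preconditioning step: a random diagonal sign flip followed by a Hadamard transform takes the worst-case spiky vector and spreads its mass evenly, while preserving its norm.

```python
import numpy as np

def fwht(v):
    # fast Walsh-Hadamard transform with orthonormal scaling
    v = v.astype(float).copy()
    h, n = 1, len(v)
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = v[j], v[j + h]
                v[j], v[j + h] = x + y, x - y
        h *= 2
    return v / np.sqrt(n)

rng = np.random.default_rng(0)
d = 256
e1 = np.zeros(d)
e1[0] = 1.0                                # worst case: all mass on one coordinate
signs = rng.choice([-1.0, 1.0], size=d)    # random diagonal sign flips D
spread = fwht(signs * e1)                  # H D e1

print(float(np.abs(e1).max()))       # 1.0
print(float(np.abs(spread).max()))   # 0.0625: every coordinate now has mass 1/16
```

After preconditioning, the $\ell_\infty$ norm drops from 1 to $1/\sqrt{d}$ while the $\ell_2$ norm stays 1, which is exactly the "roughly equally distributed mass" the sampling argument needs.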
That's basically it. Almost all the papers that follow the Ailon-Chazelle work proceed in this manner, with increasing amounts of cleverness to reuse samples, or only run the Fourier transform locally, or even derandomize the process. What distinguishes this presentation of the result from the earlier approaches (which basically boil down to: populate a matrix with entries drawn from a distribution having subGaussian tails) is that it separates the "spreading" step (called the preconditioner) from the latter, more elementary step (the sampling of coordinates). It turns out that in practice the preconditioner can often be omitted without incurring too much error, yielding an extremely efficient (and sparse) linear transform. |
I'm taking an Algorithms course. This is non-graded homework. The concept of loop invariants is new to me and it's taking some time to sink in. This was my first attempt today at a proof of correctness for the iterative Fibonacci algorithm. I don't feel good about it. I know this is not the right candidate invariant, because I believe it doesn't say anything about whether or not the algorithm works. It appears something like this is the correct one, though maybe not exactly for my loop. It seems my proof only shows the number increased each iteration, not that it computed any Fibonacci numbers from the sequence at a given index. I need more practice, but are there any general tips for seeing the invariant more easily?
Also, aren't invariants supposed to be true before we start the loop, after each iteration, and after the loop completes?
Here was my first attempt: Logical Implication
If $N$ is a natural number then $F(N)$ will calculate:
$ F(N) = \begin{cases} 0, & N=0 \\ 1, & N=1 \\ F(N-1) + F(N-2), & otherwise \end{cases} $
F(n):
    l, r = 0, 1
    for i in [0, n):
        l, r = l+r, l
    return l
Note: l = left branch in recursive definition, r = right branch.

Invariant
Let's see, we're given $i,l,r,n \in \mathbb{N}$. We want to add $r$ to $l$ on each iteration and set $r$ to the original value of $l$.
Consider candidate invariant, $P = l + r \ge l$
Proof by Mathematical Induction

Basis step. Since we have base cases when $n\le1$, I'll consider both of these. When $n=0$, before and after the loop (which we don't enter), $l=0, r=1,$ and $0+1 \ge 0$. $P$ is true in this case. When $n=1$, before we enter the loop, $l=0, r=1,$ and $0+1 \ge 0$. $P$ holds true. After entering and terminating the only iteration, $i=0$; $l=1, r=0,$ and $1+0 \ge 1$. $P$ continues to hold true.

Inductive Hypothesis. $\forall k\text{ iterations}, 1 \le k \lt i$ where $k \in \mathbb{N}$, and $n\ge2$, suppose $P$ is true.

Inductive Step. Consider the $k+1$ iteration. $P$ held true for iteration $k$ according to the inductive hypothesis. Thus, by the time the $k+1$ iteration terminates it must be the case that $P$ holds true, because $l$ gets replaced with $l+r$ and $r$ gets replaced with the original value of $l$, and it follows that $(l+r) + (l) \ge l$.

And my second attempt:

Invariant
Let's see, we're given $i,l,r,n \in \mathbb{N}$. We want to compute $l$ to be the $i^{th}$ Fibonacci number. Consider candidate invariant $P$:
For $\forall i$ iterations, $ l,r = \begin{cases} f_0,1 & ,n=0 |i=0\text{ loop hasn't terminated} \\ f_{i+1}, f_{i} & ,otherwise \end{cases} $ from the sequence $\{f_i\}_{i=0}^\infty = \{0, 1, 1, 2, 3, 5, 8, 13,\dots\}$.
Proof by Mathematical Induction

Basis step. Since we have base cases from the logical implication when $n\le1$, I'll consider both of these. When $n=0$, and before entering the loop when $n=1$, $l=f_0$ and $r=1$. $P$ is true and the algorithm returns the correct answer. When $n=1$, after terminating the only loop iteration $i=0$, $l=f_{1}$ and $r=f_{0}$. $P$ continues to hold true, and the algorithm returns the correct answer.

Inductive Hypothesis. $\forall k \text{ iterations}, 1 \le k \lt i = n-1$ where $k \in \mathbb{N}$ and $n\ge2$, suppose $P$ is true if $k$ is substituted for $i$ in the invariant.

Inductive Step. Consider the $k+1$ iteration. $P$ is true for iteration $k$ according to the inductive hypothesis. Thus, by the time the $k+1$ iteration terminates, $l$ gets replaced with $l+r$, and $r$ gets replaced with the original value of $l$. Thus $P$ remains true because: \begin{align} l = l+r &= f_{k+1} + f_{k} &&\dots\text{by definition of inductive hypothesis}\\ & = f_{k+2} &&\dots\text{by definition of Fibonacci sequence} \\ \text{and} \\ r = l &= f_{k+1} &&\dots\text{by definition of inductive hypothesis} \end{align}

UPDATE

My third attempt after reading comments and answers:
F(n):
    l, r = 0, 1
    for i in [0, n):
        l, r = l+r, l
    return l
For clarity (so I can formulate the invariant clearly):
for i in [0,n) is equivalent to i=0; while (i<n){ stuff; i++;}
Note: l = left branch in recursive definition, r = right branch.

Invariant
Let's see, we're given $i,l,r,n \in \mathbb{N}$.
We want to compute $l$ to be the $i^{th}$ Fibonacci number.
Consider candidate invariant $P$ (not sure if I need $n=0$ in the first case, is it redundant?):
For $\forall i$ iterations, $l,r = \begin{cases} f_{i},1 & ,n=0 |i=0\\ f_{i}, f_{i-1} & ,otherwise \end{cases}$ from the sequence $\{f_i\}_{i=0}^\infty = \{0, 1, 1, 2, 3, 5, 8, 13,\dots\}$.

Proof by Mathematical Induction

Basis step(s). Since we have base cases from the logical implication when $n\le1$, I'll consider both of these. When $n=0$ we don't enter the loop, so $l=f_0$ and $r=1$. $P$ is true and the algorithm returns the correct answer. When $n=1$, before the first iteration terminates, $i=0$, so $l=f_0$ and $r=1$. After terminating the only loop iteration, $i=1$, and so $l=f_{1}$ and $r=f_{0}$. $P$ continues to hold true, and the algorithm returns the correct answer.

Inductive Hypothesis. $\forall k \text{ iterations}, 2 \le k \lt n$ and $k \in \mathbb{N}$, suppose $P$ is true if $k$ is substituted for $n$ in the invariant.

Inductive Step. Consider the $k+1$ iteration. $P$ is true for the end of iteration $k$ according to the inductive hypothesis. Thus, at the start of the $k+1$ iteration, $P$ holds true for the current loop variable $i$. By the time this iteration terminates, $l$ gets replaced with $l+r$, $r$ gets replaced with the original value of $l$, and the loop variable becomes $i+1$. Thus $P$ remains true because: \begin{align} l = l+r &= f_{i} + f_{i-1} &&\dots\text{by definition of inductive hypothesis}\\ & = f_{i+1} &&\dots\text{by definition of Fibonacci sequence} \\ \text{and} \\ r = l &= f_{i} &&\dots\text{by definition of inductive hypothesis} \end{align} Thus $r$ is the Fibonacci number immediately preceding $l$ as required by $P$, and $l$ is the correct answer for $F(k+1)=f_{i+1}$.
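The invariant from the third attempt can also be checked mechanically. Here is a Python sketch of mine (the reference list `fib` and the placement of the assertions are my own) that asserts the invariant at the top and bottom of every iteration:

```python
def F(n):
    """Iterative Fibonacci with the loop invariant asserted each iteration."""
    fib = [0, 1]
    for _ in range(n + 2):
        fib.append(fib[-1] + fib[-2])   # reference sequence f_0, f_1, f_2, ...

    l, r = 0, 1
    for i in range(n):                  # i = 0, 1, ..., n-1
        # invariant at the top of iteration i: l = f_i, and r = f_{i-1} (r = 1 when i = 0)
        assert l == fib[i] and (r == 1 if i == 0 else r == fib[i - 1])
        l, r = l + r, l
        # after the update: l = f_{i+1}, r = f_i
        assert l == fib[i + 1] and r == fib[i]
    # on exit the loop variable has reached n, so l = f_n
    return l
```

Running this for a range of inputs exercises the invariant at every step, which is a cheap way to catch a mis-stated invariant before attempting the induction on paper.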
I had some attempts to obtain elementary proofs of (1) or (2), but I failed. Maybe these proofs should not be easy, for instance, in the case when (1) and (2) elementarily imply that the group $G$ is topological.
Since I am a specialist in paratopological groups, not semitopological, I propose a sketch of a proof that each locally compact Hausdorff paratopological group (that is, a group with jointly continuous multiplication) with (2) is topological. It is based on simple ideas and manipulations with the neighborhoods, but needs some background. So below I shall intensively cite our paper [BR], which you can look for details.
Following my teacher I say that a paratopological group $G$ is
saturated if for any neighborhood $U\subset G$ of the unit the set $U^{-1}$ has nonempty interior in $G$.
Now let $U$ be an arbitrary neighborhood of the unit of the given group $(G,\tau)$, and let $V$ be a neighborhood of the unit such that the set $\overline{V^{-1}}$ is compact. Since the set $\overline{V^{-1}}$ is compact, there is a finite subset $F$ of $\overline{V^{-1}}$ such that $\overline{V^{-1}}\subset FU$. Then $V\subset U^{-1}F^{-1}$ and therefore the set $U^{-1}$ has a nonempty interior. Thus the group $G$ is saturated.
Given a paratopological group $G$ let $\tau_\flat$ be the strongest group topology on $G$, weaker than the topology of $G$. The topological group $G^\flat=(G,\tau_\flat$), called
the group reflexion of $G$, has the following characteristic property: the identity map $i:G\to G^\flat$ is continuous and for every continuous group homomorphism $h:G\to H$ from $G$ into a topological group $H$ the homomorphism $h\circ i^{-1}:G^\flat\to H$ is continuous.
The group reflexion $G^\flat$ of any abelian Hausdorff paratopological group $G$ is Hausdorff. Moreover, in this case the topology of $G^\flat$ has a very simple description: a base of neighborhoods at the unit in $G^\flat$ consists of the sets $UU^{-1}$ where $U$ runs over neighborhoods of the unit in the group $G$ (we call such groups 2-oscillating). A bit later it was realized that the same is true for any
paratopological SIN-group, that is a paratopological group $G$ possessing a neighborhood base $\mathcal B$ at the unit such that $gUg^{-1}=U$ for any $U\in\mathcal B$ and $g\in G$ (as expected, SIN is abbreviated from Small Invariant Neighborhoods). Unfortunately, Hausdorff paratopological SIN-groups do not exhaust all paratopological groups whose group reflexion is Hausdorff (for example any separated topological group has Hausdorff group reflexion but need not be a paratopological SIN-group).
In [BR, Pr. 3] we showed that each saturated paratopological group $G$ is 2-oscillating. For this purpose fix any neighborhood $U$ of the unit $e$ in $G$. We have to find a neighborhood $W\subset G$ of $e$ such that $W^{-1}W\subset UU^{-1}$. Find an open neighborhood $V_1\subset G$ of $e$ such that $V_1^2\subset U$. Since $G$ is saturated, there are a point $x\in V_1$ and a neighborhood $W\subset G$ of $e$ such that $x^{-1}W\subset V_1^{-1}$. Then $W^{-1}x\subset V_1$ and $W^{-1}\subset V_1x^{-1}$. We can assume that $W$ is so small that $x^{-1}W\subset V_1x^{-1}$. In this case $W^{-1}W\subset V_1x^{-1}W\subset V_1V_1x^{-1}\subset V_1V_1V_1^{-1}\subset UU^{-1}$.
Now we are able to prove that the topology of $G$ coincides with the topology of its group reflexion $G^\flat$. For this purpose it suffices to show that each sufficiently small open neighborhood $U\in\tau$ is open in $G^\flat$ too. Let $W\in\tau$ be a neighborhood of the unit with compact closure in the group $(G,\tau)$. Choose a neighborhood $W_1\in\tau$ of the unit such that $W_1W_1\subset W$. Since the group $G$ is saturated, there exist a point $x\in W_1$ and a neighborhood $W_2\in\tau$ of the unit such that $W_2\subset W_1$ and $xW_2^{-1}\subset W_1$. Then the set $A=xW_2^{-1}W_2\subset W_1W_2\subset W_1W_1\subset W$ has a compact closure $\overline A$ in the group $(G,\tau)$. So the restriction $i|\overline A$ of the identity map $i:G\to G^\flat$ is a homeomorphism. Let $U\subset W_2$ be an arbitrary neighborhood of the unit in the group $(G,\tau)$. Then $xU$ is an open subset of $A\subset \overline A$. Hence $xU$ is an open subset of $\overline A$ as a subspace of $G^\flat$. Thus $xU$ is an open subset of $A$ as a subspace of $G^\flat$. Since the group $G$ is 2-oscillating, the set $A$ is open in $G^\flat$. Therefore the set $xU$ is open in $G^\flat$ too.
Update: You won’t believe me, but just now Katya Pavlyk decided to send me the original Ellis paper [E]. :-D The claim “locally compact Hausdorff paratopological group has (2)” is the next to the last step (Lemma 4) of the original proof of Ellis Theorem. Moreover, Katya has a question concerning the proof of this claim too. :-)
Remarks to the paper. It seems that:
– in the next to the last sentence in the proof of Lemma 4 it should be “$x^{-1}\in\overline{E^{-1}_{m+1}}$” instead of “$x^{-1}\in E^{-1}_{m+1}$”;
– in the last sentence it should be “$x_n^{-1}\in x^{-1}U$” instead of “$x_n^{-1}\in x^{-1}U^2$”;
– in the proof of Theorem, “$U’$” means “$X\backslash U$”;
– in the proof of Theorem, should be “$U’$” instead of “$\cal U’$”;
– in the proof of Theorem, should be “$\{e\}=$” instead of “$e=$”.
References
[BR] Taras O. Banakh, Alex V. Ravsky.
Oscillator topologies on a paratopological group and related number invariants // Algebraical Structures and their Applications, Kyiv: Inst. Mat. NANU, 2002, 140--153.
[E] Robert Ellis,
A note on the continuity of the inverse, Proc. Amer. Math. Soc., 8 (1957), 372-373. |
This is a follow-up question to QMechanic's great answer in this question. They give a formulation of Wick's theorem as a purely combinatoric statement relating two total orders $\mathcal T$ and $\colon \cdots \colon$ on an algebra.
I have come across "Wick's theorems" in many contexts. While some of them are special cases of the theorem [1], others are -- as far as I can see -- not. I am wondering if there is an even more general framework in which Wick's theorem can be presented, showing that all of these theorems are in fact the same combinatoric statement.
Wick's theorem applies to a string of creation and annihilation operators, as described e.g. on Wikipedia: $$ ABCD = \mathopen{\colon} ABCD \mathclose{\colon} + \sum_{\text{singles}} \mathopen{\colon} A^\bullet B^\bullet CD \mathclose{\colon} + \cdots \tag{*} $$
Here, the left hand side is "unordered", and it seems to me that [1] does not apply?
The creation and annihilation operators in (*) can be either bosonic or fermionic.
This technicality is not a problem in [1] since it allows for graded algebras.
Wick's theorem can also be applied to field operators: $$ \mathcal T\, \phi_1 \cdots \phi_N = \mathopen{\colon} \phi_1 \cdots \phi_N \mathclose{\colon} + \sum_{\text{singles}} \mathopen{\colon} \phi_1^\bullet \phi_2^\bullet \cdots \phi_N \mathclose{\colon} + \cdots $$
Since the mode expansion of a field operator $\phi_k$ consists of annihilation and creation operators, normal ordering is actually not simply a total order on the algebra of field operators. Once again, we cannot apply [1]?
In a class I am taking right now, we applied Wick's theorem like this to field operators that didn't depend on time: $$ \phi_1 \cdots \phi_N = \mathopen{\colon} \phi_1 \cdots \phi_N \mathclose{\colon} + \sum_{\text{singles}} \mathopen{\colon} \phi_1^\bullet \phi_2^\bullet \cdots \phi_N \mathclose{\colon} + \cdots $$
This seems to combine the issues of points 1 and 3...
In probability theory, there is Isserlis' Theorem: $$ \mathbb E(X_1 \cdots X_{2N}) = \sum_{\text{Wick}} \prod \mathbb E(X_i X_j) $$
This looks like it should also be a consequence from one and the same theorem, but I don't even know what the algebra would be here.
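For the Gaussian case at least, Isserlis' theorem is easy to sanity-check numerically; the covariance matrix and sample size below are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
C = A @ A.T / 4                         # some positive-definite covariance matrix
X = rng.multivariate_normal(np.zeros(4), C, size=500_000)

# Monte Carlo estimate of E[X1 X2 X3 X4]
mc = np.mean(X[:, 0] * X[:, 1] * X[:, 2] * X[:, 3])

# Isserlis / Wick: sum over the three pairings of {1, 2, 3, 4}
wick = C[0, 1] * C[2, 3] + C[0, 2] * C[1, 3] + C[0, 3] * C[1, 2]
```

Here the "algebra" is just the commutative algebra of random variables and the "contraction" is the covariance $\mathbb E(X_iX_j)$, which is what makes it tempting to look for a single combinatoric statement covering both this and the operator versions.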
My string theory lectures were quite a while ago, but I vaguely remember that there we had radial ordering instead of time ordering. Also there seems to be some connection to OPEs.
This seems to not be a problem with [1]. |
8.9. Long Short Term Memory (LSTM)¶
The challenge to address long-term information preservation and short-term input skipping in latent variable models has existed for a long time. One of the earliest approaches to address this was the LSTM [Hochreiter.Schmidhuber.1997]. It shares many of the properties of the Gated Recurrent Unit (GRU) and predates it by almost two decades. Its design is slightly more complex.
Arguably it is inspired by logic gates of a computer. To control a memory cell we need a number of gates. One gate is needed to read out the entries from the cell (as opposed to reading any other cell). We will refer to this as the output gate. A second gate is needed to decide when to read data into the cell. We refer to this as the input gate. Lastly, we need a mechanism to reset the contents of the cell, governed by a forget gate. The motivation for such a design is the same as before, namely to be able to decide when to remember and when to ignore inputs into the latent state via a dedicated mechanism. Let’s see how this works in practice.

8.9.1. Gated Memory Cells¶
Three gates are introduced in LSTMs: the input gate, the forget gate, and the output gate. In addition to that we introduce memory cells that take the same shape as the hidden state. Strictly speaking this is just a fancy version of a hidden state, custom engineered to record additional information.
8.9.1.1. Input Gates, Forget Gates and Output Gates¶
Just like with GRUs, the data feeding into the LSTM gates is the input at the current time step \(\mathbf{X}_t\) and the hidden state of the previous time step \(\mathbf{H}_{t-1}\). These inputs are processed by a fully connected layer and a sigmoid activation function to compute the values of input, forget and output gates. As a result, the three gate elements all have a value range of \([0,1]\).
We assume there are \(h\) hidden units and that the minibatch is of size \(n\). Thus the input is \(\mathbf{X}_t \in \mathbb{R}^{n \times d}\) (number of examples: \(n\), number of inputs: \(d\)) and the hidden state of the last time step is \(\mathbf{H}_{t-1} \in \mathbb{R}^{n \times h}\). Correspondingly the gates are defined as follows: the input gate is \(\mathbf{I}_t \in \mathbb{R}^{n \times h}\), the forget gate is \(\mathbf{F}_t \in \mathbb{R}^{n \times h}\), and the output gate is \(\mathbf{O}_t \in \mathbb{R}^{n \times h}\). They are calculated as follows:
\[
\begin{aligned}
\mathbf{I}_t &= \sigma(\mathbf{X}_t \mathbf{W}_{xi} + \mathbf{H}_{t-1} \mathbf{W}_{hi} + \mathbf{b}_i),\\
\mathbf{F}_t &= \sigma(\mathbf{X}_t \mathbf{W}_{xf} + \mathbf{H}_{t-1} \mathbf{W}_{hf} + \mathbf{b}_f),\\
\mathbf{O}_t &= \sigma(\mathbf{X}_t \mathbf{W}_{xo} + \mathbf{H}_{t-1} \mathbf{W}_{ho} + \mathbf{b}_o),
\end{aligned}
\]

where \(\sigma\) denotes the sigmoid function.
\(\mathbf{W}_{xi}, \mathbf{W}_{xf}, \mathbf{W}_{xo} \in \mathbb{R}^{d \times h}\) and \(\mathbf{W}_{hi}, \mathbf{W}_{hf}, \mathbf{W}_{ho} \in \mathbb{R}^{h \times h}\) are weight parameters and \(\mathbf{b}_i, \mathbf{b}_f, \mathbf{b}_o \in \mathbb{R}^{1 \times h}\) are bias parameters.
8.9.1.2. Candidate Memory Cell¶
Next we design a memory cell. Since we haven’t specified the action of the various gates yet, we first introduce a candidate memory cell \(\tilde{\mathbf{C}}_t \in \mathbb{R}^{n \times h}\). Its computation is similar to that of the three gates described above, but using a \(\tanh\) function with a value range of \([-1, 1]\) as activation function. This leads to the following equation at time step \(t\).
\[
\tilde{\mathbf{C}}_t = \tanh(\mathbf{X}_t \mathbf{W}_{xc} + \mathbf{H}_{t-1} \mathbf{W}_{hc} + \mathbf{b}_c).
\]
Here \(\mathbf{W}_{xc} \in \mathbb{R}^{d \times h}\) and \(\mathbf{W}_{hc} \in \mathbb{R}^{h \times h}\) are weights and \(\mathbf{b}_c \in \mathbb{R}^{1 \times h}\) is a bias.
8.9.1.3. Memory Cell¶
In GRUs we had a single mechanism to govern input and forgetting. Here we have two parameters: \(\mathbf{I}_t\), which governs how much we take new data into account via \(\tilde{\mathbf{C}}_t\), and the forget parameter \(\mathbf{F}_t\), which governs how much of the old memory cell content \(\mathbf{C}_{t-1} \in \mathbb{R}^{n \times h}\) we retain. Using the same pointwise multiplication trick as before we arrive at the following update equation.
\[
\mathbf{C}_t = \mathbf{F}_t \odot \mathbf{C}_{t-1} + \mathbf{I}_t \odot \tilde{\mathbf{C}}_t,
\]

where \(\odot\) denotes elementwise multiplication. The hidden state is then computed from the memory cell via the output gate as \(\mathbf{H}_t = \mathbf{O}_t \odot \tanh(\mathbf{C}_t)\).
If the forget gate is always approximately 1 and the input gate is always approximately 0, the past memory cells will be saved over time and passed to the current time step. This design was introduced to alleviate the vanishing gradient problem and to better capture dependencies for time series with long range dependencies. We thus arrive at the following flow diagram.
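The gate, cell, and hidden-state updates of this section fit in a few lines of plain NumPy. This is a framework-independent sketch of mine, not the book's MXNet implementation below:

```python
import numpy as np

def lstm_step(X, H, C, params):
    """One LSTM time step following the equations in this section.
    X is (n, d); H and C are (n, h); params holds the 12 gate/cell parameters."""
    W_xi, W_hi, b_i, W_xf, W_hf, b_f, W_xo, W_ho, b_o, W_xc, W_hc, b_c = params
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    I = sigmoid(X @ W_xi + H @ W_hi + b_i)         # input gate, entries in (0, 1)
    F = sigmoid(X @ W_xf + H @ W_hf + b_f)         # forget gate
    O = sigmoid(X @ W_xo + H @ W_ho + b_o)         # output gate
    C_tilde = np.tanh(X @ W_xc + H @ W_hc + b_c)   # candidate cell, entries in (-1, 1)
    C = F * C + I * C_tilde                        # memory cell update
    H = O * np.tanh(C)                             # hidden state
    return H, C
```

Note that the hidden state is squashed through \(\tanh\) and gated by \(\mathbf{O}_t\), so its entries always lie in \((-1, 1)\) regardless of how large the memory cell grows.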
8.9.2. Implementation from Scratch¶
Now it’s time to implement an LSTM. We begin with a model built from scratch. As with the experiments in the previous sections we first need to load the data. We use The Time Machine for this.
import d2l
from mxnet import np, npx
from mxnet.gluon import rnn
npx.set_np()

batch_size, num_steps = 32, 35
train_iter, vocab = d2l.load_data_time_machine(batch_size, num_steps)
8.9.2.1. Initialize Model Parameters¶
Next we need to define and initialize the model parameters. As previously, the hyperparameter num_hiddens defines the number of hidden units. We initialize the weights with a Gaussian with standard deviation \(0.01\) and we set the biases to \(0\).
def get_lstm_params(vocab_size, num_hiddens, ctx):
    num_inputs = num_outputs = vocab_size
    normal = lambda shape: np.random.normal(scale=0.01, size=shape, ctx=ctx)
    three = lambda: (normal((num_inputs, num_hiddens)),
                     normal((num_hiddens, num_hiddens)),
                     np.zeros(num_hiddens, ctx=ctx))
    W_xi, W_hi, b_i = three()  # Input gate parameters
    W_xf, W_hf, b_f = three()  # Forget gate parameters
    W_xo, W_ho, b_o = three()  # Output gate parameters
    W_xc, W_hc, b_c = three()  # Candidate cell parameters
    # Output layer parameters
    W_hq = normal((num_hiddens, num_outputs))
    b_q = np.zeros(num_outputs, ctx=ctx)
    # Attach gradients
    params = [W_xi, W_hi, b_i, W_xf, W_hf, b_f, W_xo, W_ho, b_o,
              W_xc, W_hc, b_c, W_hq, b_q]
    for param in params:
        param.attach_grad()
    return params
8.9.2.2. Define the Model¶
In the initialization function, the hidden state of the LSTM needs to return an additional memory cell with a value of \(0\) and a shape of (batch size, number of hidden units). Hence we get the following state initialization.
def init_lstm_state(batch_size, num_hiddens, ctx):
    return (np.zeros(shape=(batch_size, num_hiddens), ctx=ctx),
            np.zeros(shape=(batch_size, num_hiddens), ctx=ctx))
The actual model is defined just like we discussed it before with three gates and an auxiliary memory cell. Note that only the hidden state is passed on to the output layer. The memory cells do not participate in the computation directly.
def lstm(inputs, state, params):
    [W_xi, W_hi, b_i, W_xf, W_hf, b_f, W_xo, W_ho, b_o,
     W_xc, W_hc, b_c, W_hq, b_q] = params
    (H, C) = state
    outputs = []
    for X in inputs:
        I = npx.sigmoid(np.dot(X, W_xi) + np.dot(H, W_hi) + b_i)
        F = npx.sigmoid(np.dot(X, W_xf) + np.dot(H, W_hf) + b_f)
        O = npx.sigmoid(np.dot(X, W_xo) + np.dot(H, W_ho) + b_o)
        C_tilda = np.tanh(np.dot(X, W_xc) + np.dot(H, W_hc) + b_c)
        C = F * C + I * C_tilda
        H = O * np.tanh(C)
        Y = np.dot(H, W_hq) + b_q
        outputs.append(Y)
    return np.concatenate(outputs, axis=0), (H, C)
8.9.2.3. Training¶
Again, we just train as before.
vocab_size, num_hiddens, ctx = len(vocab), 256, d2l.try_gpu()
num_epochs, lr = 500, 1
model = d2l.RNNModelScratch(len(vocab), num_hiddens, ctx, get_lstm_params,
                            init_lstm_state, lstm)
d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, ctx)
Perplexity 1.2, 13001 tokens/sec on gpu(0)
time traveller it s against reason said filby what reason saidtraveller canded as you say i jump back for a moment ofcour
8.9.3. Concise Implementation¶
In Gluon, we can call the LSTM class in the rnn module directly to instantiate the model.
lstm_layer = rnn.LSTM(num_hiddens)
model = d2l.RNNModel(lstm_layer, len(vocab))
d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, ctx)
Perplexity 1.1, 198564 tokens/sec on gpu(0)
time traveller it s against reason said filby what reason saidtraveller curee i thine s id i wrovy ur can four dimensio
8.9.4. Summary¶

- LSTMs have three types of gates: input, forget and output gates which control the flow of information.
- The hidden layer output of LSTM includes hidden states and memory cells. Only hidden states are passed into the output layer. Memory cells are entirely internal.
- LSTMs can help cope with vanishing and exploding gradients due to long range dependencies and short-range irrelevant data.
- In many cases LSTMs perform slightly better than GRUs but they are more costly to train and execute due to the larger latent state size.
- LSTMs are the prototypical latent variable autoregressive model with nontrivial state control. Many variants thereof have been proposed over the years, e.g. multiple layers, residual connections, different types of regularization.
- Training LSTMs and other sequence models is quite costly due to the long dependency of the sequence. Later we will encounter alternative models such as transformers that can be used in some cases.

8.9.5. Exercises¶

1. Adjust the hyperparameters. Observe and analyze the impact on runtime, perplexity, and the generated output.
2. How would you need to change the model to generate proper words as opposed to sequences of characters?
3. Compare the computational cost for GRUs, LSTMs and regular RNNs for a given hidden dimension. Pay special attention to training and inference cost.
4. Since the candidate memory cells ensure that the value range is between -1 and 1 using the tanh function, why does the hidden state need to use the tanh function again to ensure that the output value range is between -1 and 1?
5. Implement an LSTM for time series prediction rather than character sequences.
The problem can be slightly generalized as follows: given a set $I\subseteq\{1,\dots,n\}$ and a set $S$ of lists of $n$-tuples, find a permutation $\sigma:\{1,\dots,|I|\}\to I$ such that each $l\in S$ is sorted according to the lexicographical preorder that compares the $\sigma(1)^\text{th}$ component, and then the $\sigma(2)^\text{th}$ component, ..., and finally the $\sigma(|I|)^\text{th}$. I'll write $<_\sigma$ for this lexicographical order.
This new problem is easily solvable by recursion: If $I=\emptyset$, there is nothing to do. Otherwise, find some $i_0\in I$ such that in each $l\in S$, the $i_0^\text{th}$ components are non-decreasing (and if no such $i_0$ exists, return an error). Define $I':=I\setminus\{i_0\}$ and $S'$ the set of lists you get by splitting each list every time the $i_0^\text{th}$ component changes. A recursive call with $I'$ and $S'$ gives you a bijection $\sigma':\{1,\dots,|I|-1\}\to I'$ and you can then simply return $\sigma$ defined by $\sigma(1)=i_0$ and $\sigma(i+1)=\sigma'(i)$.
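The recursion can be transcribed almost literally into code. This is a sketch of mine (the function name and the representation of $\sigma$ as a Python list are arbitrary choices):

```python
def find_order(I, S):
    """Return indices sigma(1), ..., sigma(|I|) as a list such that every list
    in S is sorted by the corresponding lexicographic comparison, or raise
    ValueError if no such order exists."""
    I = set(I)
    if not I:
        return []
    # find i0 such that every list is non-decreasing in component i0
    for i0 in sorted(I):
        if all(l[k][i0] <= l[k + 1][i0] for l in S for k in range(len(l) - 1)):
            break
    else:
        raise ValueError("no consistent column order exists")
    # split each list wherever component i0 changes
    S2 = []
    for l in S:
        run = []
        for t in l:
            if run and t[i0] != run[-1][i0]:
                S2.append(run)
                run = []
            run.append(t)
        if run:
            S2.append(run)
    return [i0] + find_order(I - {i0}, S2)
```

For example, a single list of pairs sorted by the second component and then the first yields the order "component 1, then component 0".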
This algorithm runs in $O(|I|^2 |S|)$ (where $|I|$ is the cardinal of $I$, and $|S|=\sum\limits_{l\in S}\operatorname{length}(l)$). In your case and with your notations, that would be $O(n^2N)$.
I think that a slight variation of this algorithm runs in $O(|I||S|+|I|^2)$. It is described at the end of this answer. The proofs concern the $O(|I|^2 |S|)$ version but should be (relatively) easy to adapt to the faster version.
Correctness
The invariant is that if the algorithm returns a permutation $\sigma$, then all the lists in $S$ are sorted according to $<_\sigma$.
The proof is by induction on $I$. The base case is trivial because all lists are sorted according to the neutral element for the lexicographical product: equality.
In the inductive case, take any $l\in S$. We know that the sublists $l_1,\dots,l_r$ resulted from splitting $l$ at the points where the $i_0^\text{th}$ component changed (i.e. $l=l_1@\dots@l_r$ where $@$ is concatenation, for each $k\in\{1,\dots,r\}$ all elements of $l_k$ have the same $i_0^\text{th}$ component, and for any $k\in\{1,\dots,r-1\}$, $(l_k.last)[i_0]\not=(l_{k+1}.first)[i_0]$). Since we chose $i_0$ such that $l$ (and therefore $l_k$ for all $k\in\{1,\dots,r\}$) is sorted with respect to $<_{i_0}$, this induces that (A) $a<_{i_0}b$ iff $a\in l_k$ and $b\in l_{k'}$ with $k<k'$. By the induction hypothesis, (B) each $l_k$ is sorted with respect to $<_{\sigma'}$.
Let $a,b\in l$ such that $a$ appears before $b$ in $l$.
If $a$ and $b$ appear in the same $l_k$, by (A), $a\sim_{i_0}b$ (i.e. $a\le_{i_0}b$ and $b\le_{i_0}a$) and $a$ appears before $b$ in $l_k$ so that by (B) $a<_{\sigma'}b$. We therefore have $a(<_{i_0}\times_\text{lex}<_{\sigma'})b$.
If $a$ appears in $l_k$ and $b$ appears in $l_{k'}$ with $k<k'$, then by (A), $a<_{i_0}b$ and we therefore have $a(<_{i_0}\times_\text{lex}<_{\sigma'})b$.
We are done because $(<_{i_0}\times_\text{lex}<_{\sigma'})={<_\sigma}$.
Completeness
The invariant is that if every $l\in S$ is sorted according to some $<_{\tilde\sigma}$ then the algorithm will return $\sigma$ such that for any $l\in S$ and $a,b\in l$, $a<_\sigma b$ iff $a<_{\tilde\sigma}b$. This implies that a list in $S$ being sorted with respect to $<_\sigma$ is equivalent to it being sorted with respect to $<_{\tilde\sigma}$. We prove this invariant by induction on $I$. The base case is trivial.
In the inductive case, suppose that $l\in S$ is sorted according to some $<_{\tilde\sigma}$. Define $\tilde\sigma':\{1,\dots,|I'|\}\to I'$ by $\tilde\sigma'(i)=\tilde\sigma(i)$ if $i<i_0$ and $\tilde\sigma'(i)=\tilde\sigma(i-1)$ if $i>i_0$. Since each $l_k\in S'$ is sorted with respect to $\tilde\sigma$, it is also sorted with respect to $\tilde\sigma'$. By the induction hypothesis, we therefore have that for all $a,b\in l_k$, $a<_{\tilde\sigma'}b$ iff $a<_{\sigma'}b$.
Let $a,b \in l$.
If $a,b\in l_k$, $a<_{\tilde\sigma} b$ iff $a<_{\tilde\sigma'}b$ iff $a<_{\sigma'}b$ iff $a<_{\sigma}b$.
If $a\in l_k$ and $b\in l_{k'}$ with $k<k'$, by (A), $a<_{i_0} b$ and we therefore have $a<_{\sigma}b$. Since $l$ is sorted with respect to $<_{\tilde\sigma}$, $a<_{\tilde\sigma}b$. So we indeed have $a<_{\tilde\sigma}b$ iff $a<_{\sigma}b$ (because both are true).
We can boost it a bit by keeping $S$ as a list (such that $L:=\operatorname{flatten}(S)$ is an invariant) instead of a set. You also keep a list $J$ of integers such that $\forall k\in\{1,\dots,|I|\}$, the sublist $L[1\dots J[k]]$ is sorted according to $<_k$. Then, to test a candidate $i_0$, you only check that $L[J[i_0]\dots |L|]$ is sorted with respect to $<_{i_0}$ (where you would previously test that $L[1\dots |L|]$ is), and if it fails at some point, you update $J[i_0]$ to the greatest $j$ such that $L[1\dots j]$ is sorted according to $<_{i_0}$. The number of comparisons is then bounded by $O(|I||S|+|I|^2)$ because:
You only get a positive answer for a test once (because then, you remember it was positive with $J$ and never test it again), which amounts to $O(|I||S|)$ because $\sum\limits_{1\le k \le |J|}J[k]$ starts at $0$, increases by one after every positive test (if we update it after every comparison instead of just after a comparison with a negative answer), and is bounded by $\sum\limits_{1\le k \le |J|}|S|=|J||S|=|I||S|$.
There are at most $O(|I|^2)$ negative answers, because $|I|$ strictly decreases at each iteration of the outer loop, and in the worst case, the right $i_0$ is the last you try.
You could probably further optimize it by trying all possible $i_0$ in parallel, and picking one as soon as it's the only one left (instead of checking that the rest of the list is indeed sorted with respect to that index). |
The change of Gibbs energy at constant temperature and species numbers, $\Delta G$, is given by an integral $\int_{p_1}^{p_2}V\,{\mathrm d}p$. For the ideal gas law $$p\,V=n\,RT,$$ this comes down to $$\int_{p_1}^{p_2}\frac{1}{p}\,{\mathrm d}p=\ln\frac{p_2}{p_1}.$$ That logarithm is at fault for a lot of the formulas in chemistry.
I find I have a surprisingly hard time computing $\Delta G$ for gas governed by the equation of state $$\left(p + a\frac{n^2}{V^2}\right)\,(V - b\,n) = n\, R T,$$ where $a\ne 0,b$ are small constants. What is $\Delta G$, at least in low orders in $a,b$?
One might be able to compute $\Delta G$ via an integral whose integrand is not $V$.
Edit 19.8.15: My questions are mostly motivated by the desire to understand the functional dependencies of the chemical potential $\mu(T)$, that is essentially given by the Gibbs energy. For the ideal gas and any constant $c$, we see that a state change from e.g. the pressure $c\,p_1$ to another pressure $c\,p_2$ doesn't actually affect the Gibbs energy. The constant factors out in $\frac{1}{p}\,{\mathrm d}p$, resp. $\ln\frac{p_2}{p_1}$. However, this is a mere feature of the gas law with $V\propto \frac{1}{p}$, i.e. it likely comes from the ideal gas law being a model of particles without interaction with each other. |
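To first order one can let a computer algebra system do the bookkeeping: solve the van der Waals equation for $V$ perturbatively in $a$ and $b$, then integrate $V\,\mathrm dp$. A SymPy sketch of mine (the symbol $\varepsilon$ merely tags first-order smallness in $a$ and $b$):

```python
import sympy as sp

p, n, R, T, a, b, eps, V1 = sp.symbols('p n R T a b epsilon V1', positive=True)

V0 = n*R*T/p                 # ideal-gas volume, zeroth order
Vx = V0 + eps*V1             # perturbative ansatz V = V0 + eps*V1 + O(eps^2)

# van der Waals with a -> eps*a, b -> eps*b, so that eps counts the order
resid = (p + eps*a*n**2/Vx**2) * (Vx - eps*b*n) - n*R*T

# collect the O(eps) part of the residual and solve it for the correction V1
order1 = sp.series(resid, eps, 0, 2).removeO().coeff(eps)
corr = sp.solve(sp.Eq(order1, 0), V1)[0]   # works out to n*b - a*n/(R*T), p-independent

# Delta G = integral of V dp from p1 to p2, to first order in a and b
p1, p2 = sp.symbols('p1 p2', positive=True)
dG = sp.integrate(V0 + eps*corr, (p, p1, p2))
```

Since the first-order correction to $V$ is independent of $p$, the result is $\Delta G = nRT\ln\frac{p_2}{p_1} + \left(nb - \frac{an}{RT}\right)(p_2-p_1)$: the logarithm survives, and the interaction terms add a contribution that, unlike the ideal-gas part, does change under $p_i \mapsto c\,p_i$.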
This might be a naive question, but when applying an implicit discretization to a PDE with a source term, should the source be averaged in time? For example, if we take the diffusion equation with a non-linear source term,
$$ u_t = u_{xx}+s(x,t,u) $$
We can apply the following central difference implicit scheme to the differential term,
$$ \frac{u_j^{n+1} - u_j^n}{\Delta t} = \frac{1}{\Delta x^2}\left[ (1-\theta) \left(u_{j-1}^{n+1} - 2u_{j}^{n+1} + u_{j+1}^{n+1}\right) + \theta \left(u_{j-1}^{n} - 2u_{j}^{n} + u_{j+1}^{n}\right)\right] + s(x,t,u) $$
but how should $s(x,t,u)$ be treated? Should we simply take the value at the $n^{\text{th}}$ time point (this is what I have always done in the past),
$$ s(x,t) = s_j^n $$
or averaged over time,
$$ s(x,t) = (1-\theta)s_j^{n+1} + \theta s_j^{n} $$
I am not sure it is possible to implement a time average in this way because, in general, the values at time level $n+1$ are unknowns!
Is this a silly question? Or is there some way of improving the time integration of the above equation by taking averages in time? |
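One way to see what is at stake is a scalar model problem, $u' = s(u) = -u$ (a toy sketch of mine, not the PDE above): freezing the source at level $n$ gives first-order accuracy in time, while averaging it as in the $\theta=1/2$ scheme restores second order. For a linear source the implicit average can be solved for $u^{n+1}$ in closed form:

```python
import numpy as np

def step_lagged(u, dt):
    # source taken at time level n only: (u^{n+1} - u^n)/dt = s(u^n), with s(u) = -u
    return u + dt * (-u)

def step_averaged(u, dt):
    # source averaged in time: (u^{n+1} - u^n)/dt = (s(u^{n+1}) + s(u^n)) / 2;
    # s linear, so we can solve for u^{n+1} directly
    return u * (1 - dt / 2) / (1 + dt / 2)

def solve(step, dt, T=1.0):
    u = 1.0
    for _ in range(round(T / dt)):
        u = step(u, dt)
    return u

exact = np.exp(-1.0)
err = lambda step, dt: abs(solve(step, dt) - exact)

# halving dt: a first-order error roughly halves, a second-order error roughly quarters
r_lag = err(step_lagged, 0.01) / err(step_lagged, 0.005)
r_avg = err(step_averaged, 0.01) / err(step_averaged, 0.005)
```

For a nonlinear $s$, the averaged form makes $u^{n+1}$ appear inside $s$, so in practice one solves the resulting nonlinear system with a few Picard or Newton iterations, or extrapolates $s$ from earlier time levels; taking $s$ at level $n$ only is simplest but degrades the overall scheme to first order in time.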
There are a few issues here, but I fear that the interpolation error of integration may be the least of them.
To start, if this work is done in a regulatory context one must use whatever procedures are required. So if there are regulatory requirements for how to estimate integration error then those take practical precedence. What follows ignores such regulations. First we'll examine the situation when the cost estimates have little or no error at the indicated event probabilities, then discuss how uncertainties in the costs and in other aspects of the modeling affect the estimated values of defenses at specified standards of protection.
If costs are known precisely
What you have displayed are the limits of the possible cost-probability curve if you know that the cost curve is non-increasing with increasing x-axis (probability) values. In this case that seems to be a reasonable assumption, as lower-probability events (higher flood stage) should have costs at least as large as higher-probability events (low or no flood stage).
The integrals estimated by the blue and red lines are Riemann sums, with the blue line providing a left Riemann sum and the red line a right Riemann sum. Your proposal to use those as outer limits for the interpolation error itself seems quite appropriate.
In general for smooth curves, the trapezoidal rule (which you presumably are using for your integration estimate, and is the average of the left and right Riemann sums) has a defined relationship between the integration error and a value of the second derivative of the cost-probability curve somewhere within the limits of integration. So if you can assume reasonable limits for the second-derivative values, that could set limits to the integration error. For the direction of systematic error, if you can assume that the curve is convex you know that your interpolation will over-estimate the true area.
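For a non-increasing cost curve sampled at a handful of probabilities, the bracketing is a few lines of NumPy; the numbers below are illustrative values of my own, not real flood data:

```python
import numpy as np

# exceedance probabilities (x) and non-increasing cost estimates (y)
x = np.array([0.002, 0.01, 0.04, 0.1, 0.5])
y = np.array([900.0, 400.0, 150.0, 40.0, 0.0])   # cost falls as probability rises

dx = np.diff(x)
left = np.sum(y[:-1] * dx)                   # left Riemann sum: upper bound here
right = np.sum(y[1:] * dx)                   # right Riemann sum: lower bound here
trap = np.sum((y[:-1] + y[1:]) / 2 * dx)     # trapezoidal estimate

assert right <= trap <= left                 # bracketing for a non-increasing curve
```

The trapezoidal estimate is exactly the average of the two Riemann sums, so reporting `(left, right)` as the bracket and `trap` as the point estimate is internally consistent.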
Convexity and limits on second-derivative values might, however, not be good assumptions for flood costs. For example, there could be a fairly fast jump in costs as flood stage reached the level of first-story floors, and then another jump as flood stage reached the level of second-story floors. So convexity and assumptions about limits on second-derivative values would be questionable. That could also make it risky to try to fit a smooth curve to the set of data pairs and calculate the area analytically from the equation for the smooth curve.
So if the costs are known precisely then the limits you propose seem to be the best you can get in general for the limits of the area under the curve between the specified x-axis limits.
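The bracketing described above can be sketched numerically. The (probability, cost) pairs below are hypothetical, chosen only so that cost is non-increasing in exceedance probability; the left and right Riemann sums play the roles of the blue and red lines:

```python
# Sketch with made-up data: bracketing the expected annual damage integral.
# Costs are assumed non-increasing in exceedance probability, so the left
# Riemann sum over-estimates and the right Riemann sum under-estimates.
probs = [0.01, 0.02, 0.05, 0.10, 0.20]      # annual exceedance probabilities
costs = [900.0, 600.0, 300.0, 120.0, 40.0]  # hypothetical damage estimates

def riemann_left(x, y):
    # uses the left endpoint's cost on each interval (the "blue" line)
    return sum(y[i] * (x[i + 1] - x[i]) for i in range(len(x) - 1))

def riemann_right(x, y):
    # uses the right endpoint's cost on each interval (the "red" line)
    return sum(y[i + 1] * (x[i + 1] - x[i]) for i in range(len(x) - 1))

def trapezoid(x, y):
    # average of the two Riemann sums: the usual point estimate
    return sum((y[i] + y[i + 1]) / 2 * (x[i + 1] - x[i])
               for i in range(len(x) - 1))

upper = riemann_left(probs, costs)
lower = riemann_right(probs, costs)
mid = trapezoid(probs, costs)
assert lower <= mid <= upper
```

For this data the three estimates are 25, 39.5, and 54, so the interpolation uncertainty alone spans roughly ±37% of the trapezoidal value.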
With uncertainty in the costs
Estimating integration error due to interpolation does not deal with an additional source of uncertainty: the uncertainty in the y-axis cost estimates. A statistician would want to see error estimates for each of those cost values and would want you to take those error estimates into account to get a better measure of the actual error in your estimate of the value of the integral.
This is a particular problem with flood damage prediction. The x-axis probabilities typically represent the probability per year of a flood that is greater than a specified level (stage) above normal water levels. The damage associated with a particular stage may be affected by other aspects of the flood besides its level, such as its velocity or duration, adding additional uncertainty to damage estimates at any flood stage. This report compares different approaches to estimating flood risk; some simply ignore the uncertainties in stage and cost, some sample from probability distributions for certain of these estimates, and some model with event-based catastrophe models instead of continuous probability estimates.
This answer shows how errors in the estimates of y-axis values affect the estimates of integrals interpolated by the trapezoidal rule, in situations where the errors of the values at different event probabilities are uncorrelated. This simply follows the rules for variance of a weighted sum.
To get an idea of the relative contributions of interpolation and imprecise cost estimates to overall integration error, consider the integral under the curve between two event stages with cumulative probabilities (x-axis values) $p_0$ and $p_1$ ($p_0 < p_1$; $p_1-p_0=\Delta p$), having associated costs of $C_0$ and $C_1$ ($C_0 > C_1$).
Interpolation error. The trapezoidal rule gives an area $ (C_0 + C_1) \Delta p /2$. The upper limit of area given by the left Riemann sum is $C_0 \Delta p$, for an upper-limit difference for error above the trapezoidal estimate of $(C_0-C_1)\Delta p /2$.
Errors in $C_i$. If the variances of the $C_i$ values are $\sigma_i^2$ and the $C_i$ are uncorrelated, then the trapezoidal interpolation has an associated variance of $(\sigma_0^2 + \sigma_1^2)(\Delta p)^2/4$, or a standard error of $\sqrt{\sigma_0^2 + \sigma_1^2}\,\Delta p/2$.
So if the curve is relatively steep compared to the errors in the cost estimates then the interpolation error of the area will dominate. Areas over comparatively flatter parts of the curve, perhaps typical of higher-probability stages, may have integration errors dominated by the errors in the cost estimates. Similar application of the formula for the variance of a weighted sum provides the errors in the left and right Riemann sums that you propose as upper and lower estimates of integration error per se.
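The two per-interval formulas above can be combined into one small helper; the function name and the example numbers are hypothetical, and the cost errors are assumed uncorrelated:

```python
import math

# Sketch: per-interval trapezoidal estimate together with (a) the half-width
# of the interpolation bracket (distance to the left Riemann sum) and (b) the
# standard error propagated from uncorrelated cost uncertainties.
def interval_area_with_se(p0, p1, c0, c1, sigma0, sigma1):
    dp = p1 - p0
    area = (c0 + c1) * dp / 2                       # trapezoidal rule
    interp_halfwidth = (c0 - c1) * dp / 2           # (C0 - C1) * dp / 2
    se = math.sqrt(sigma0**2 + sigma1**2) * dp / 2  # cost-uncertainty term
    return area, interp_halfwidth, se

# Hypothetical interval: steep part of the curve, modest cost uncertainty.
area, hw, se = interval_area_with_se(0.01, 0.02, 900.0, 600.0, 50.0, 40.0)
```

Here the interpolation half-width (1.5) is several times the propagated standard error (about 0.32), illustrating the "steep curve" case where interpolation error dominates.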
Other uncertainties in modeling
The full cost/benefit comparison involves more than estimating the annual benefit of installing the defense. This report shows how discount rates (to translate future gains into net present values) and expected project life are also incorporated into the calculation. There is the possibility that past history of flood probabilities does not adequately represent the future. Furthermore, future development (or abandonment) of structures in the flood plain will affect the value of installing the defense. Uncertainties in any of these estimates will add to the uncertainty in the full cost/benefit comparison. |
Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2016-09)
The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ... |
A conic in the plane $\mathbb{R}^2$ is the zero set of a quadratic polynomial in two variables:
$$ \,\, a_1 x^2 \,+\, a_2 xy \,+\, a_3 y^2 \, +\, a_4 x \, + \, a_5 y \, + \, a_6 \,.$$
Geometrically, a conic can be either a circle, an ellipse, a hyperbola, a parabola or a union of two lines. The last case is a degenerate conic. Double lines are allowed.
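Degeneracy is easy to test in practice: a conic $a_1 x^2 + a_2 xy + a_3 y^2 + a_4 x + a_5 y + a_6$ is degenerate exactly when the determinant of its associated $3\times 3$ symmetric matrix vanishes. A minimal sketch (the example coefficient vectors are our own, not from the text above):

```python
# A conic a1 x^2 + a2 xy + a3 y^2 + a4 x + a5 y + a6 corresponds to the
# symmetric matrix below; the conic is degenerate (a union of lines)
# exactly when the determinant of this matrix is zero.
def conic_matrix(a1, a2, a3, a4, a5, a6):
    return [[a1,     a2 / 2, a4 / 2],
            [a2 / 2, a3,     a5 / 2],
            [a4 / 2, a5 / 2, a6    ]]

def det3(m):
    # cofactor expansion along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def is_degenerate(coeffs, tol=1e-12):
    return abs(det3(conic_matrix(*coeffs))) < tol

circle = (1, 0, 1, 0, 0, -1)      # x^2 + y^2 - 1: non-degenerate
two_lines = (1, 0, -1, 0, 0, 0)   # x^2 - y^2 = (x-y)(x+y): degenerate
```

This determinant condition is precisely what cuts out the discriminant hypersurface of degenerate conics inside $\mathbb{P}^5$.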
How many conics are tangent to five given conics?
Here is an example of Steiner’s problem:
Steiner claimed in 1848 that there are at most 7776 conics tangent to 5 given conics. He phrased his problem as that of solving five equations of degree six on the 5-dimensional projective space $\mathbb{P}^5$. Using Bézout's Theorem he argued that the equations coming from this question have $6^5 = 7776$ solutions over the complex numbers. However, this number overcounts because there is a Veronese surface of extraneous solutions, namely the conics that are squares of linear forms, i.e., double lines.
The correct count of non-degenerate conics is 3264. This was shown in 1859 by Jonquières and independently in 1864 by Chasles. The number 3264 appears prominently in the book 3264 and All That by Eisenbud and Harris.
A delightful introduction to Steiner's problem was given by Bashelor, Ksir and Traves. Ronga, Tognoli and Vust and Sottile showed how to choose 5 real conics such that all 3264 complex solutions are real. Although their proof starts with an explicit construction, it is not constructive.
Using methods from numerical algebraic geometry we adapted the proposed construction to find an explicit instance for which there are 3264 real conics. We use numerical homotopy continuation to compute the 3264 conics. Smale's $\alpha$-theory provides a way to give a mathematical proof that we found 3264 real conics; the keyword is alphaCertified. The computational proof can be downloaded here.
An illustration of the arrangement is shown below.
There are 3264 real conics, which are tangent to the 5 blue conics. The 5 blue conics are given by$$\small{\begin{array}{rcrcrcrcrcl}\frac{10124547}{662488724}x^2 &+&\frac{8554609}{755781377}xy&+& \frac{5860508}{2798943247}y^2 &-&\frac{251402893}{1016797750}x &-&\frac{25443962}{277938473} y &+& 1\\[1em]\frac{520811}{1788018449}x^2 &+&\frac{2183697}{542440933}xy &+&\frac{9030222}{652429049}y^2 &-& \frac{12680955}{370629407} x &-& \frac{24872323}{105706890} y&+& 1 \\[1em]\frac{6537193}{241535591}x^2 &-& \frac{7424602}{363844915}xy&+& \frac{6264373}{1630169777} y^2&+& \frac{13097677}{39806827} x &-& \frac{29825861}{240478169} y&+& 1 \\[1em]\frac{13173269}{2284890206}x^2 &+& \frac{4510030}{483147459}xy&+& \frac{2224435}{588965799} y^2&+& \frac{33318719}{219393000}x &+& \frac{92891037}{755709662} y&+& 1 \\[1em]\frac{8275097}{452566634}x^2 &-& \frac{19174153}{408565940}xy&+& \frac{5184916}{172253855}y^2&-& \frac{23713234}{87670601}x &+& \frac{28246737}{81404569} y &+& 1\end{array}}$$
It looks like the arrangement consists of 5 blue lines, rather than 5 blue conics. The next picture clarifies the situation. |
How can we handle \[y = x_1 \oplus x_2 \oplus \cdots \oplus x_n\] in a linear MIP model? Here \(\oplus\) indicates the xor operation.
Well, when we look at \(x_1 \oplus x_2 \oplus \cdots \oplus x_n\) we can see that this is identical to \(\sum x_i\) being
odd. A small example illustrates that:
Checking whether an integer expression is odd can be done by writing it as \(2z + y\), where \(z\) is a nonnegative integer variable and \(y \in \{0,1\}\): the expression is odd exactly when \(y = 1\). So combining this, we can do:
Linearization of \(y = x_1 \oplus x_2 \oplus \cdots \oplus x_n\) \[\begin{align} & \color{DarkRed}y = \sum_i \color{DarkRed}x_i - 2 \color{DarkRed}z \\ & \color{DarkRed}y, \color{DarkRed} x_i \in\{0,1\} \\ & \color{DarkRed}z \in \{0,1,2,\dots\} \end{align}\]
If we want, we can relax \(y\) to be continuous between 0 and 1. An upper bound on z can be
\[z \le \left \lfloor{ \frac{n}{2} }\right \rfloor \].
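Outside of any MIP solver, the linearization can be sanity-checked by brute force: for every binary input vector, the unique feasible \((y, z)\) pair must give \(y\) equal to the parity of \(\sum x_i\). The helper name below is our own:

```python
# Brute-force check of the linearization y = sum(x_i) - 2z with
# y binary and z an integer in 0..floor(n/2).
from itertools import product

def xor_via_linearization(xs):
    n = len(xs)
    s = sum(xs)
    # search for a feasible (y, z): y = s - 2z with y in {0, 1}
    for z in range(n // 2 + 1):
        y = s - 2 * z
        if y in (0, 1):
            return y
    raise ValueError("no feasible (y, z)")

# xor of binary variables equals the parity of their sum
for n in (1, 2, 3, 4):
    for xs in product((0, 1), repeat=n):
        assert xor_via_linearization(list(xs)) == sum(xs) % 2
```

The loop also confirms that the bound \(z \le \lfloor n/2 \rfloor\) never excludes a needed value of \(z\).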
References Express XOR with multiple inputs in zero-one integer linear programming (ILP), https://cs.stackexchange.com/questions/40737/express-xor-with-multiple-inputs-in-zero-one-integer-linear-programming-ilp XOR as linear inequalities, https://yetanothermathprogrammingconsultant.blogspot.com/2016/02/xor-as-linear-inequalities.html A variant of the Lights Out game, https://yetanothermathprogrammingconsultant.blogspot.com/2016/01/a-variant-of-lights-out-game.html |
I am having trouble isolating the $x$ and $y$ to separate sides in the differential equations below. Could someone give me a hint as to how to do this.
Equation 1: $$\frac{dy}{dx} - \frac{x}{y} = \frac{1}{x}$$ Equation 2: $$xy\frac{dy}{dx} = y^2$$
The first is not separable.
For the second, we have:
$$xy\frac{dy}{dx} = y^2$$
Dividing, we have:
$$\dfrac{y~dy}{y^2} = \dfrac{dx}{x}$$
This is separable and we can now integrate each side as:
$$\int \dfrac{1}{y}~ dy = \int \dfrac{1}{x}~ dx$$
I think you can take it from here.
The second one is $\dot y = \frac{y}{x}$, and then you should try to put $y=xz$ and solve for $z$.
As far as I can see, only the second one is separable and after simplifying it becomes:
$$\frac{dx}{x}=\frac{dy}{y} $$
Straightforward integration will do the trick.
As a general rule, more often than not it is the multiplicative types of ODE's that are separable.
For $\dfrac{dy}{dx}-\dfrac{x}{y}=\dfrac{1}{x}$ ,
$y\dfrac{dy}{dx}=\dfrac{y}{x}+x$
This belongs to an Abel equation of the second kind.
Let $x=e^t$ ,
Then $\dfrac{dy}{dx}=\dfrac{\dfrac{dy}{dt}}{\dfrac{dx}{dt}}=\dfrac{\dfrac{dy}{dt}}{e^t}=e^{-t}\dfrac{dy}{dt}$
$\therefore e^{-t}y\dfrac{dy}{dt}=e^{-t}y+e^t$
$y\dfrac{dy}{dt}-y=e^{2t}$
This belongs to an Abel equation of the second kind in the canonical form.
Please follow the method in https://arxiv.org/ftp/arxiv/papers/1503/1503.05929.pdf
For $xy\dfrac{dy}{dx}=y^2$ , it simply belongs to a separable ODE.
$\dfrac{dy}{y}=\dfrac{dx}{x}$
$\int\dfrac{dy}{y}=\int\dfrac{dx}{x}$
$\ln|y|=\ln|x|+c$
$y=Cx$ |
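As a quick numerical cross-check of the general solution (a sketch of our own, not part of the answers above), integrating the simplified equation $\frac{dy}{dx} = \frac{y}{x}$ with Euler's method from a point on the line $y = 3x$ should stay on that line:

```python
# Euler integration of dy/dx = y/x, the simplified form of x*y*y' = y^2.
# The general solution is y = C x, so starting at (1, 3) we expect y(2) ~ 6.
def euler(f, x0, y0, x1, steps=10000):
    h = (x1 - x0) / steps
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return y

y_end = euler(lambda x, y: y / x, 1.0, 3.0, 2.0)
# y_end is approximately 3 * 2 = 6, matching y = Cx with C = 3
```

Note that the first equation, being an Abel equation rather than a separable one, admits no such one-line check.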