I'm trying to understand the equivalence in expressive power of formal grammars whose rules take the form:
$$ \alpha \rightarrow \beta $$ where $ |\alpha| \leq |\beta| $ (called a
monotonic grammar), and grammars whose rules take the form:
$$ \alpha B \gamma \rightarrow \alpha C \gamma $$
where $\alpha$ and $\gamma$ are (possibly empty) strings of terminals and non-terminals, and $B$ and $C$ are single non-terminals. I understand that grammars of the second kind are, by definition, already grammars of the first kind, but I'd like to understand how to derive a grammar of the second kind from a given grammar of the first kind (a monotonic grammar). Can anyone suggest a good reference for this? Many thanks in advance.
I know that stable non-circular orbits in euclidean space exist only in 3 spatial dimensions but what about if the spatial dimensions are hyperbolic instead? Are there any stable non-circular orbits in 2 or 3 dimensional hyperbolic space?
The gravitational field for any space of dimension > 1 scales inversely with the area of a sphere in that space. In hyperbolic 3-space, the area of a sphere is given by $A = 4\pi\sinh(r)^2$; the same as the formula in euclidean space, but substituting $\sinh(r)$ for the euclidean $r$.
Any curved space is locally equivalent to flat euclidean space on a small enough scale. This corresponds to the fact that $\sinh(r) \approx r$ when $r \ll 1$ (if $r$ is measured in units where the curvature of the space is equal to $-1$, setting the absolute scale for the space). Thus, for orbits whose apoapsis is much less than 1 on the absolute scale of the space, everything will work pretty much identically to orbits in normal euclidean space, with only minor perturbations, and thus, yes, there are stable orbits.
Above that scale, the $\sinh$ function blows up fairly rapidly, and the potential acts like the gravitational potential in a series of ever higher-dimensional euclidean spaces, for which there are no non-circular bounded orbits. A minor perturbation from circularity will result in a satellite flying off to infinity, or being drawn to the center. That seems to suggest that there would be no stable, bounded, non-circular orbits with sizes above the absolute scale of the space,
except that perturbations resulting in infalling paths will eventually pass through the pseudo-Euclidean region before actually reaching the center, where the centrifugal pseudo-force would start to dominate over a force which does not increase as quickly towards the center as it would in a higher-dimensional euclidean space, forcing the satellite outwards again. It is not, however, immediately obvious whether that results in bounded paths, or merely converts inwardly-unbounded paths into outwardly-unbounded ones, and I'm not certain how to prove it either way.
More rigorously, in order to show that there are bounded orbits, we need to show that the effective potential parameterized by angular momentum $L$ has a minimum. If my thinking on how centrifugal force works in hyperbolic space is correct, then the effective potential should look something like $U_\text{eff}(r) = \frac{L^2}{2m\sinh^2(r)} - k\coth(r)$. Playing around with this for a while in Wolfram Alpha, the function has a global minimum in $r$ for values of $L$ less than about $0.7$. Above that, you start to get a series of multiple shallow local minima, which somewhat surprisingly seems to indicate that not only do bounded orbits exist at all scales, but that at high enough angular momenta there are multiple bounded orbits at different energies!
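The minima of this conjectured effective potential are also easy to scan for numerically; a minimal sketch in units $m = k = 1$ (my own check, not the Wolfram Alpha session):

```python
import numpy as np

# Conjectured effective potential in hyperbolic 3-space (units m = k = 1,
# curvature -1): U_eff(r) = L^2 / (2 sinh^2 r) - coth(r).
def u_eff(r, L):
    return L**2 / (2.0 * np.sinh(r)**2) - 1.0 / np.tanh(r)

# As r -> infinity, U_eff -> -1, so a bound orbit needs a well below -1.
r = np.linspace(1e-3, 20.0, 200_000)
for L in (0.3, 0.5, 0.7):
    u = u_eff(r, L)
    i = np.argmin(u)
    print(f"L = {L}: min U_eff = {u[i]:.4f} at r = {r[i]:.4f}")
```

For these values the scan finds a single well below the asymptote, consistent with bounded orbits at small $L$.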
Apparently there are no stable orbits in 2+1 dimensions either: http://arxiv.org/abs/gr-qc/9303005
As for 1+1 dimensions, there is no gravity at all: in two spacetime dimensions the Einstein tensor vanishes identically (the Einstein-Hilbert action is a topological invariant), so the field equations impose no dynamics.
Is it possible to "construct" the Hamiltonian of a system if its ground state wave function (or functional) is known? I understand one should not expect this to be generically true since the Hamiltonian contains more information (the full spectrum) than a single state vector. But are there any special cases where it's possible to obtain the Hamiltonian? Some examples would be really helpful.
If you know that your Hamiltonian is of the form$$\hat H=\frac{-\hbar^2}{2m}\nabla^2+V(\mathbf r)\tag 1$$for a single massive, spinless particle, then yes, you can reconstruct the potential, and from it the Hamiltonian, up to a few constants, given any eigenstate. To be more specific, the ground state $\Psi_0(\mathbf r)$ obeys$$\hat H\Psi_0(\mathbf r)=\frac{-\hbar^2}{2m}\nabla^2\Psi_0(\mathbf r)+V(\mathbf r)\Psi_0(\mathbf r)=E_0 \Psi_0(\mathbf r),$$which means that if you know $\Psi_0(\mathbf r)$ then you can calculate its Laplacian to get$$\frac{ \nabla^2 \Psi_0(\mathbf r) }{ \Psi_0(\mathbf r) }=\frac{2m}{\hbar^2}\left(V(\mathbf r)-E_0\right).$$If you know the particle's mass, then you can recover $V(\mathbf r)-E_0$, and this is all you really need (since adding a constant to the Hamiltonian does not change the physics).
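As a concrete sketch of this inversion (my own illustration, not from the question): take the harmonic-oscillator ground state in 1D, with $\hbar = m = 1$, and recover the potential from the ratio $\nabla^2\Psi_0/\Psi_0$:

```python
import numpy as np

# Sketch (hbar = m = 1): take the harmonic-oscillator ground state
# psi0(x) = exp(-x^2/2) (normalization cancels in the ratio) and recover
# V(x) - E0 = (1/2) * psi0''(x) / psi0(x), as described above.
x = np.linspace(-3.0, 3.0, 4001)
dx = x[1] - x[0]
psi0 = np.exp(-x**2 / 2)

lap = np.gradient(np.gradient(psi0, dx), dx)   # numerical 1D Laplacian
v_minus_e0 = 0.5 * lap / psi0

# Exact answer for this psi0: V(x) - E0 = x^2/2 - 1/2, i.e. the harmonic
# potential shifted by its ground-state energy E0 = 1/2.
exact = x**2 / 2 - 0.5
err = np.max(np.abs((v_minus_e0 - exact)[200:-200]))
print(f"max reconstruction error away from the edges: {err:.1e}")
```

(The edges are trimmed because the finite-difference Laplacian is least accurate there and $\Psi_0$ is small, which amplifies the error in the ratio.)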
However, it's important to note that this procedure guarantees that your initial $\Psi_0$ will be an eigenstate of the resulting hamiltonian, but it does not preclude the possibility that $\hat H$ will admit a separate ground state with lower energy. As a very clear example of that, if $\Psi_0$ is a 1D function with a node, then (because 1D ground states have no nodes) you are guaranteed a unique $V(x)$ such that $\Psi_0$ is an eigenstate, but it will never be the ground state.
If you don't know that your Hamiltonian has that structure, there is (in the general case) no information at all that you can extract about the Hamiltonian from just the ground state.
As a simple example, without straying too far from our initial Hamiltonian in $(1)$, consider that Hamiltonian in polar coordinates, $$\hat H=\frac{-\hbar^2}{2m}\left(\frac{1}{r^2}\frac{\partial}{\partial r} r^2\frac{\partial}{\partial r} - \frac{1}{\hbar ^2r^2}L^2\right)+V(r),$$ where I'm assuming $V(\mathbf r)=V(r)$ is spherically symmetric, and encapsulating the angular dependence into the total angular momentum operator $L^2$.
Suppose, then, that I give you its ground state, and that it is an eigenstate of $L^2$ with eigenvalue zero (like e.g. the ground state of the hydrogenic Hamiltonian). How do you tell if the Hamiltonian that created it is $\hat H$ or a similar version, $$\hat H{}'=\frac{-\hbar^2}{2m}\frac{1}{r^2}\frac{\partial}{\partial r} r^2\frac{\partial}{\partial r} +V(r),$$ with no angular momentum component? Both versions will have $\Psi_0$ as a ground state (though here $\hat H'$ will have a wild degeneracy on every eigenspace, to be fair). Carrying on with this thought, what about $$\hat H{}''=\frac{-\hbar^2}{2m}\left(\frac{1}{r^2}\frac{\partial}{\partial r} r^2\frac{\partial}{\partial r} - f(r)L^2\right)+V(r),$$ where I've introduced an arbitrary real function $f(r)$ behind the angular momentum? This won't affect the $\ell=0$ states, but it will take the rest of the spectrum to who knows where. (In fact, you can even tack on an arbitrary function of $L_x$, $L_y$ and $L_z$ while you're at it.)
A bit more generally, any self-adjoint operator which vanishes on $\left|\Psi_0\right>$ can be added to the Hamiltonian to get an operator that has $\left|\Psi_0\right>$ as an eigenstate. As a simple construction, given any self-adjoint operator $\hat A$, the combination $$\hat H {}''' = E_0 \left|\Psi_0\right>\left<\Psi_0\right| + \left(\mathbf 1 - \left|\Psi_0\right>\left<\Psi_0\right| \right) \hat A \left(\mathbf 1 - \left|\Psi_0\right>\left<\Psi_0\right| \right) $$ (where the projectors in parentheses are there to modify $\hat A$ so that it vanishes on $\left|\Psi_0\right>$ on both sides) will always have $\left|\Psi_0\right>$ as an eigenstate.
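This construction is easy to check in a finite-dimensional toy model (my own sketch, using a random Hermitian $\hat A$):

```python
import numpy as np

# Numerical check of the construction above: for an arbitrary Hermitian A,
# H''' = E0 |psi0><psi0| + (1 - |psi0><psi0|) A (1 - |psi0><psi0|)
# always has |psi0> as an eigenvector with eigenvalue E0.
rng = np.random.default_rng(0)
n, E0 = 6, -1.3

psi0 = rng.normal(size=n) + 1j * rng.normal(size=n)
psi0 /= np.linalg.norm(psi0)

A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (A + A.conj().T) / 2                      # make A Hermitian

P = np.outer(psi0, psi0.conj())               # |psi0><psi0|
Q = np.eye(n) - P
H = E0 * P + Q @ A @ Q

print(np.allclose(H @ psi0, E0 * psi0))       # True: psi0 is an eigenvector
```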
Even if you know all the eigenstates, it's still not enough information to reconstruct the Hamiltonian, because they do not allow you to distinguish between, say, $\hat H$ and $\hat{H}{}^2$. On the other hand, if you know all the eigenstates and their eigenvalues, then you can simply use the spectral decomposition to reconstruct the Hamiltonian.
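That last reconstruction is just the spectral theorem; a toy numerical check:

```python
import numpy as np

# If you know every eigenvalue E_n and eigenvector |n>, then
# H = sum_n E_n |n><n| rebuilds the Hamiltonian exactly.
rng = np.random.default_rng(1)
H = rng.normal(size=(5, 5))
H = (H + H.T) / 2                    # a random real symmetric "Hamiltonian"

E, V = np.linalg.eigh(H)             # full spectrum and orthonormal eigenbasis
H_rebuilt = sum(E[n] * np.outer(V[:, n], V[:, n]) for n in range(5))

print(np.allclose(H, H_rebuilt))     # True
```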
In general, if you really insist, there is probably a trade-off between what you know about the Hamiltonian's structure (e.g. "of the form $\nabla^2+V$" versus no information at all) and how many of the eigenstates and eigenvalues you need to fully reconstruct it (a single pair versus the whole thing), particularly if you allow for approximate reconstructions. Depending on where you put one slider, you'll get a different reading on the other one.
However, unless you have a specific problem to solve (like reconstructing a Hamiltonian of vaguely known form from a specific set of finite experimental data) then it's definitely not worth it to explore the details of this continuum of trade-offs beyond the knowledge that it exists and the extremes I noted above.
Assume for simplicity that all the operators are bounded. If you know the wave function $\psi$ associated with the ground state of the unknown Hamiltonian $H$, then $H$ has the form $$H = E_0|\psi\rangle\langle\psi| \oplus K$$ where $K$ is another Hamiltonian defined on a subspace of the original Hilbert space of co-dimension 1, and $E_0$ is the energy of the ground state, which we might as well assume to be zero (in particular, the spectrum of $K$ is bounded below by $E_0$). This shows that in general it is not possible to reconstruct $H$, since there is an infinite family of Hamiltonians $H$ parametrised by strictly positive self-adjoint operators $K$ that will solve the original problem.
There's no analytic proof, but numerical evidence suggests that if you know that the Hamiltonian is local and it satisfies the Eigenstate Thermalization Hypothesis (which most local Hamiltonians do), then you can extract the entire Hamiltonian from a single excited eigenstate, though not from the ground state: https://arxiv.org/abs/1503.00729.
If the unknown part of the Hamiltonian is the potential $V({\bf{r}})$, then you can write down the stationary Schrödinger equation and figure out what the potential should be.
The two famous theorems of Jingrun Chen, both with similar proofs, state (respectively) that all sufficiently large even numbers are the sum of a prime and an element of $P_2$, and that there are infinitely many numbers $n$ such that $\{n, n+2\}$ consists of a prime and an element of $P_2$. Here $P_2$ is the set of numbers with at most two (not necessarily distinct) prime factors.
His latter theorem can be restated as saying that the polynomial $f(n) = n(n+2)$ hits $P_3$ infinitely often. Note in particular that $3 = \deg(f) + 1$, which seems to be the best possible result of this type (any smaller and we run into the infamous parity problem in sieve theory).
Has Chen's theorem been generalized to higher degree polynomials? In particular, let $\mathcal{H}$ be an admissible set in the sense defined in Goldston-Pintz-Yildirim, i.e. for all primes $p$, $\mathcal{H}$ does not contain a complete residue system modulo $p$. Let $|\mathcal{H}| = k$ and let $f(n) = \prod_{h \in \mathcal{H}} (n + h)$. Then can one prove that $f(n)$ hits $P_{k+1}$ infinitely often?
Note: the recent series of results regarding bounded gaps between primes essentially boils down to showing that for any admissible set $\mathcal{H}$, $f(n)$ hits $P_{k+l}$ infinitely often for some small positive $l$ (say $l \leq \sqrt{k}$). My question is: are there any specific cases where we can take $l = 1$?
I will use the form of the Reynolds transport theorem below (usually derived in a fluid mechanics context) to give a relation between a control mass system (no mass in or out) and a control volume (an arbitrarily chosen, possibly time-varying region of interest). It relates the chosen control volume to the continuously changing control mass systems that pass through it at each time $t$. I will also use $\delta$ to represent differential elements in space, as contrasted with $d$ for differentials in time.
$$\dfrac{d}{dt}\int_{sys(t)} \rho \;\eta \;\ \delta V=\dfrac{d}{dt}\int_{CV(t)} \rho \;\eta \;\ \delta V+\int_{\partial CV(t)}\rho \;\eta\; (\vec{V}_{sys}-\vec{V}_{CV}).\delta\vec{A}_{out}$$
First, I let $\eta=1$ and I get the conservation of mass.
$$\dfrac{d}{dt}(M_{sys})=\dfrac{d}{dt}(M_{CV})+\int_{\partial CV(t)}\delta(\dot{m}_{rel})$$
Using $\dfrac{d}{dt}(M_{sys})=0$, I get simply
$$\dfrac{d}{dt}(M_{CV})=(\dot{M}_{rel})_{in}-(\dot{M}_{rel})_{out}$$
And I believe this expression is correct and makes intuitive sense with the notion of mass conservation.
Now I let $\eta = e = u + V^{2}/2 + gz$, the specific (per unit mass) energy content defined locally with respect to an inertial frame of reference (as is everything else to follow), where each component is a function of both space and time.
$$\dfrac{d}{dt}\int_{sys(t)} \rho \;e \;\ \delta V=\dfrac{d}{dt}\int_{CV(t)} \rho \;e \;\ \delta V+\int_{\partial CV(t)}\rho \;e (\vec{V}_{sys}-\vec{V}_{CV}).\delta\vec{A}_{out}$$
Which can be written as,
$$\dfrac{d}{dt}(E_{sys})=\dfrac{d}{dt}(E_{CV})+\int_{\partial CV(t)}\rho \;e (\vec{V}_{sys}-\vec{V}_{CV}).\delta\vec{A}_{out}$$
Now, using the first law of thermodynamics, I can write for the system:
$$\dfrac{d}{dt}(E_{sys})=(\dot{Q}_{sys})_{net-in}-(\dot{W}_{sys})_{net-out}$$
Using this, I get:
$$\dfrac{d}{dt}(E_{CV})=(\dot{Q}_{sys})_{net-in}-(\dot{W}_{sys})_{net-out}-\int_{\partial CV(t)}\;e \;\delta(\dot{m}_{rel})$$
I now attempt the following reasoning to recover the flow work term from the general work term. We can think of the system in our case as continuously being subjected to a pressure $P$ on its boundary $\partial sys(t)$ by the surrounding fluid. Therefore, boundary work is being done on the system by the surrounding fluid by virtue of pressure. Working with finite increments, we approximate over a small time interval and, for the time being, some small area $\delta A$. Then the work done on the system by this pressure $P$ is (note that we are assuming that all deviatoric stresses are zero, which is not true in general):
$$\delta(\Delta W_{flow}) = P \;\delta A . \Delta x_{n}= -P \;\delta \vec{A}_{out} . \Delta \vec{x}=-P \;\delta \vec{A}_{out} . \vec{V}_{sys} \Delta t$$
In the limit $\Delta t \to 0$, I get $\delta(\dot{W}_{flow})=-P \;\delta \vec{A}_{out} . \vec{V}_{sys}$.
Integrating over the system boundary, I get: $(\dot{W}_{flow})_{in}=- \int_{\partial sys(t)=\partial CV(t)}P \;\delta \vec{A}_{out} . \vec{V}_{sys}=-\int_{\partial CV(t)}(Pv)\; \rho \;\delta \vec{A}_{out} . \vec{V}_{sys}$ $=-\int_{\partial CV(t)}(Pv)\; \rho \;\delta \vec{A}_{out} . (\vec{V}_{sys}-\vec{V}_{CV})-\int_{\partial CV(t)}(Pv)\; \rho \;\delta \vec{A}_{out} . (\vec{V}_{CV})$ $=-\int_{\partial CV(t)}(Pv)\; \delta(\dot{m}_{rel})-\int_{\partial CV(t)}(Pv)\; \rho \;\delta \vec{A}_{out} . (\vec{V}_{CV})$
Is there any error in my reasoning? If not, what is the meaning of the additional term, given next in the energy equation after the enthalpy term?
$$\dfrac{d}{dt}(E_{CV})=(\dot{Q}_{sys})_{net-in}-(\dot{W}_{nf-sys})_{net-out}-\int_{\partial CV(t)}\;\theta \;\delta(\dot{m}_{rel})-\int_{\partial CV(t)}(Pv)\; \rho \;\delta \vec{A}_{out} . (\vec{V}_{CV})$$ $$=(\dot{Q}_{sys})_{net-in}-(\dot{W}_{nf-sys})_{net-out}-\int_{\partial CV(t)}\;e \;\delta(\dot{m}_{rel})-\int_{\partial CV(t)}(Pv)\; \rho \;\delta \vec{A}_{out} . (\vec{V}_{sys})$$ $$=(\dot{Q}_{sys})_{net-in}-(\dot{W}_{nf-sys})_{net-out}-\int_{\partial CV(t)}\;e \;\delta(\dot{m}_{rel})-\int_{\partial CV(t)}(Pv)\; \delta(\dot{m}_{sys})$$
Or is it the case that the term carries no physical meaning, and it's better to write everything just in terms of $e$ and some flow work?
I have also done the same thing for entropy balance and I think I have derived the correct result.
Can someone verify?
Now I let $\eta = s$, the specific entropy defined locally.
$$\dfrac{d}{dt}\int_{sys(t)} \rho \;s \;\ \delta V=\dfrac{d}{dt}\int_{CV(t)} \rho \;s \;\ \delta V+\int_{\partial CV(t)}\rho \;s (\vec{V}_{sys}-\vec{V}_{CV}).\delta\vec{A}_{out}$$
Which can be written as,
$$\dfrac{d}{dt}(S_{sys})=\dfrac{d}{dt}(S_{CV})+\int_{\partial CV(t)}\rho \;s (\vec{V}_{sys}-\vec{V}_{CV}).\delta\vec{A}_{out}$$
Now, using the second law of thermodynamics and the Clausius inequality (augmented with a $\dot{S}_{gen}$ term), I can write for the system:
$$\dfrac{d}{dt}(S_{sys})=\int_{\partial sys(t)=\partial CV(t)}\dfrac{\delta(\dot{Q}_{sys})_{in}}{T}+ \dot{S}_{gen-sys}$$
Using this, I get:
$$\dfrac{d}{dt}(S_{CV})=\int_{\partial CV(t)}\dfrac{\delta(\dot{Q}_{sys})_{in}}{T}+ \dot{S}_{gen-sys}-\int_{\partial CV(t)}\;s \;\delta(\dot{m}_{rel})$$
This question is a sequel of sorts to my earlier (resolved) question about a recent paper. In the paper, the authors performed molecular dynamics (MD) simulations of parallel-plate supercapacitors, in which liquid resides between the parallel-plate electrodes. The system has a "slab" geometry, so the authors are only interested in variations of the liquid structure along the $z$ direction.
In my previous question, I asked about how particle number density is computed. In this question, I would like to ask about how the electric potential is computed, given the charge density distribution.
Recall that in CGS (Gaussian) units, the Poisson equation is
$$\nabla^2 \Phi = -4\pi \rho$$
where $\Phi$ is the electric potential and $\rho$ is the charge density. So the charge density $\rho$ is proportional to the Laplacian of the potential.
Now suppose I want to find the potential $\Phi(z)$ along $z$, by integrating the Poisson equation. How can I do this?
In the paper, on page 254, the authors write down the average charge density $\bar{\rho}_{\alpha}(z)$ at $z$:
$$\bar{\rho}_{\alpha}(z) = A_0^{-1} \int_{-x_0}^{x_0} \int_{-y_0}^{y_0} dx^{\prime} \; dy^{\prime} \; \rho_{\alpha}(x^{\prime}, y^{\prime}, z)$$
where $\rho_{\alpha}(x, y, z)$ is the local charge density arising from the atomic charge distribution of ionic species $\alpha$, $\bar{\rho}_{\alpha}(z)$ is the average charge density at $z$ obtained by averaging $\rho_{\alpha}(x, y, z)$ over $x$ and $y$, and $\sum_{\alpha}$ denotes sum over ionic species.
The authors then integrate the Poisson equation to obtain $\Phi(z)$:
$$\Phi(z) = -4\pi \sum_{\alpha} \int_{-z_0}^z (z - z^{\prime}) \bar{\rho}_{\alpha}(z^{\prime}) \; dz^{\prime} \; \; \; \; \textbf{(eq. 2)}$$
My question is: how do I "integrate the Poisson equation" to obtain equation (2)? How do I go from $\nabla^2 \Phi = -4\pi \rho$ to equation (2)? In particular, where does the $(z - z^{\prime})$ factor come from?
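For what it's worth, I have checked numerically that the single integral with the $(z - z^{\prime})$ kernel agrees with integrating $\bar{\rho}$ twice (here with an arbitrary made-up charge profile, not the paper's data), so eq. (2) is at least consistent with the ODE:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid, trapezoid

# Check: solving Phi'' = -4 pi rho_bar by integrating twice from -z0
# (with Phi(-z0) = Phi'(-z0) = 0) matches the single integral with the
# (z - z') kernel, i.e. eq. (2). rho_bar is an arbitrary test profile.
z0 = 5.0
z = np.linspace(-z0, z0, 4001)
rho = np.exp(-z**2) * np.sin(3 * z)

# Route 1: integrate the ODE twice.
phi_prime = -4 * np.pi * cumulative_trapezoid(rho, z, initial=0.0)
phi_twice = cumulative_trapezoid(phi_prime, z, initial=0.0)

# Route 2: eq. (2)'s single integral with the (z - z') factor.
phi_kernel = np.array(
    [-4 * np.pi * trapezoid((zi - z[:i + 1]) * rho[:i + 1], z[:i + 1])
     for i, zi in enumerate(z)]
)

max_diff = np.max(np.abs(phi_twice - phi_kernel))
print(f"max difference between the two routes: {max_diff:.1e}")
```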
Thanks for your time.
You are asking to compute the double quotient $B\backslash G /B.$ This is the same as computing $G\backslash (G/B \times G/B)$. A point in $G/B$ is a full flag on $k^n$. So you are trying to compute the set of pairs $(F_1,F_2)$ of flags, modulo the simultaneous action of $G$.
Another way to think about $G/B$ is that it is the space of Borel subgroups (the coset of $g$ corresponds to the conjugate $g B g^{-1}$, where $B$ is the upper triangular Borel that was fixed in the statement of the question). The passage from flags to Borels is given by mapping $F$ to its stabilizer in $G$.
So you can also think that you're trying to describe pairs of Borels $(B_1,B_2)$, modulo simultaneous conjugation by $G$.
Now recall that a torus $T$ in $G$ is a conjugate of the diagonal subgroup. Choosing a torus in $G$ is the same as choosing a decomposition of $k^n$ as a direct sum of 1-dimensional subspaces (or lines, for short). (These will be the various eigenspaces of the torus acting on $k^n$.) The diagonal torus corresponds to the standard decomposition of $k^n$ as $n$ copies of $k$.
Now a torus $T$ is contained in a Borel $B$ (let me temporarily use $B$ to denote any Borel, not just the upper triangular one) if and only if the corresponding decomposition of $k^n$ into a sum of lines is compatible with the flag that $B$ fixes, i.e. if the flag is given by taking first one line, then the sum of that one with a second, then the sum of those two with a third, and so on. In particular, choosing a torus $T$ contained in a Borel $B$ determines a "labelled decomposition" of $k^n$, i.e. we may write $k^n = L_1 \oplus \ldots \oplus L_n$, where $L_i$ is the $i$th line; just to be clear, the labelling is chosen so that the corresponding flag is just $L_1 \subset L_1\oplus L_2 \subset \cdots.$ (Again, to be completely clear, if $T$ is the conjugate by $g \in G$ of the diagonal torus, then $L_i$ is the translate by $g$ of the line spanned by the $i$th standard basis vector.)
Note that this labelled decomposition depends not just on $T$ (which only gives an unlabelled decomposition) but on the Borel $B$ containing $T$ as well. (In more Lie theoretic language, this is a reflection of the fact that a torus determines a collection of weights in any representation of $G$, while a choice of a Borel containing the torus lets you order the weights as well, by determining a set of positive roots.)
Of course, $B$ will contain more than one torus; or more geometrically, $k^n$ will admit more than one decomposition into lines adapted to the filtration $F$ of which $B$ is the stabilizer. But if one thinks about the different possible lines, you see that $L_1$ is uniquely determined (it must be the first step in the flag), $L_2$ is uniquely determined modulo $L_1$ (since together with $L_1$ it spans the second step in the flag), and so on, which shows that any two tori $T$ in $B$ are necessarily conjugate by an element of $B$, and the same sort of reasoning shows that the normalizer of $T$ in $B$ is just $T$ (because if $g \in G$ is going to preserve both the flag and the collection of lines, which is the same as preserving the ordered collection of lines, all it can do is act by a scalar on each line, which is to say, it must be an element of $T$).
Now a key fact is that any two Borels, $B_1$ and $B_2$, contain a common torus. In other words, given two filtrations, we can always choose an (unordered) decomposition of $k^n$ into a direct sum of lines which is adapted to both filtrations. (This is an easy exercise.) Of course the ordering of the lines will depend on which of the two filtrations we use. In other words, we get a set of $n$ lines in $k^n$ which are ordered one way according to the filtration $F_1$ given by $B_1$, and in a second way according to the filtration $F_2$ given by $B_2$. If we let $w \in S_n$ be the permutation which takes the first ordering to the second, then we see that the pair $B_1$ and $B_2$ determines an element $w \in S_n$. This is the Bruhat decomposition.
It wouldn't be hard to continue with this point of view to completely prove the claimed decomposition, but it will be easier for me (at least notationally) to switch back to the $B\backslash G/B$ picture.
Thus consider the coset $gB$ in $G/B$, corresponding to the Borel $g B g^{-1}$. Let me use slightly nonstandard notation, and write $D$ for the diagonal torus; of course $D \subset B$. We may also find a torus $T \subset B \cap g B g^{-1}$. Now there is an element $b \in B$, determined modulo $D$, such that $T = b D b^{-1}$. (This follows from the discussion above about conjugacy properties of tori in Borel subgroups.) We also have $g D g^{-1} \subset g B g^{-1}$, and there exists $g b'g^{-1}\in g B g^{-1},$ well defined modulo $g D g^{-1}$, such that $T = (g b'g^{-1}) g D g^{-1} (g b' g^{-1})^{-1} = g b' D (b')^{-1} g^{-1}.$
We thus find that $b^{-1} g b' \in N(D)$, and thus that $g \in B w B$ for some $w$ in the Weyl group $N(D)/D$. Note that since $b$ and $b'$ are well defined modulo $D$, the map from $T$ to $w$ is well-defined.
Thus certainly $G$ is the union of the $B w B$. If you consider what I've already written carefully, you will also see that the different double cosets are disjoint. We can also prove this directly as follows: given $B$ and $g B g^{-1}$, the map $T \mapsto w$ constructed above is a map from the set of $T$ contained in $B \cap g B g^{-1}$ to the set $N(D)/D$. Now any two such $T$ are in fact conjugate by an element of $B \cap g B g^{-1}$. The latter group is connected, and hence the space of such $T$ is connected. (These assertions are perhaps most easily seen by thinking in terms of filtrations and decompositions of $k^n$ into sums of lines, as above.) Since $N(D)/D$ is discrete, we see that $T \mapsto w$ must in fact be constant, and so $w$ is uniquely determined just by $g B g^{-1}$ alone. In other words, the various double cosets $B w B$ are disjoint.
The preceding discussion is a little long, since I've tried to explain (in the particular special cases under consideration) some general facts about conjugacy of maximal tori in algebraic groups, using the translation of group theoretic facts about $G$, $B$, etc., into linear algebraic statements about $k^n$.
Nevertheless, I believe that this is the standard proof of the Bruhat decomposition, and it explains why the decomposition is true: the relative position of two flags is described by an element of the Weyl group.
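If it helps to experiment, the cell $BwB$ containing a given invertible matrix $g$ (equivalently, the relative position of the two flags) can be computed from ranks of lower-left submatrices of $g$, since those ranks are invariant under multiplying $g$ by upper triangular matrices on either side. This is a sketch of a standard recipe under my own conventions, not something from the answer above:

```python
import numpy as np

# For invertible g, find the permutation matrix w with g in BwB,
# where B is the upper triangular Borel in GL_n.  The quantity
# r(i, j) = rank of the submatrix on rows i..n-1 and columns 0..j-1
# is unchanged by row operations from the left B and column operations
# from the right B, and w is recovered by inclusion-exclusion on r.
def bruhat_permutation(g):
    n = g.shape[0]
    r = np.zeros((n + 1, n + 1), dtype=int)   # r[n, j] = r[i, 0] = 0
    for i in range(n):
        for j in range(1, n + 1):
            r[i, j] = np.linalg.matrix_rank(g[i:, :j])
    w = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(1, n + 1):
            w[i, j - 1] = r[i, j] - r[i + 1, j] - r[i, j - 1] + r[i + 1, j - 1]
    return w

n = 4
w0 = np.fliplr(np.eye(n, dtype=int))          # the longest element
print(np.array_equal(bruhat_permutation(np.eye(n)), np.eye(n, dtype=int)))  # True
print(np.array_equal(bruhat_permutation(w0.astype(float)), w0))             # True
```

Sandwiching $w_0$ between two invertible upper triangular matrices returns $w_0$ again, as it should.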
I'm trying to compute ideal class groups of various number fields, and by now I'm a little familiar with class groups of quadratic number fields. However, I can't find any non-cyclic example of a class group of a cubic number field. In Marcus' "Number Fields" book, there are some exercises that deal with $\mathbb{Q}(\sqrt[3]{m})$ for integer $m$, but every such exercise has a cyclic class group. Is there any good example of a cubic number field that has a class group isomorphic to the Klein 4-group? How about biquadratic or quartic fields? (I think I can't do quintic things...) Thanks in advance. Until now, I computed the class groups of $\mathbb{Q}(\sqrt{223}), \mathbb{Q}(\sqrt{226}), \mathbb{Q}(\sqrt{-30}), \mathbb{Q}(\sqrt{-89})$.
The following example was found via a computer search for a simple integral basis and small discriminant, so that the discriminant is easy to compute and the Minkowski bound is small.
I am not sure if this suffices as a good example though: the only method I know of computing class groups is the basic one via Minkowski's bound, and this example still involves quite a bit of computation since the bound is fairly large, at just under $38$. I wasn't able to find a cubic extension with $\mathbb Z_2\times \mathbb Z_2$ class group and a smaller bound.
The results were also checked on SageMathCell to be sure.
Let $f(x) = x^3 + 11x+21\in\mathbb Z[x]$.
1. $f(x)$ is irreducible over $\mathbb Z$.
2. Let $\alpha \in \mathbb C$ be a root of $f(x)$ and consider the number field $K=\mathbb Q(\alpha)$.
We can show that $\{1,\alpha,\alpha^2\}$ is an integral basis and $K$ has prime discriminant $-17231$.
3. Hence Minkowski bound $$M_K=\frac{8}{9\pi} \sqrt{17231} < 38,$$ and we can find the list of non-principal ideals for each prime $\leq 37$.
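For anyone who wants to double-check steps 2 and 3 by machine, here is a small sympy sketch (independent of the Sage check mentioned above):

```python
from sympy import Rational, discriminant, isprime, pi, sqrt, symbols

x = symbols('x')
f = x**3 + 11*x + 21

# disc(Z[alpha]) = -17231; since this is squarefree (in fact prime),
# {1, alpha, alpha^2} really is an integral basis and disc(K) = -17231.
d = discriminant(f, x)
print(d, isprime(-d))        # -17231 True

# Minkowski bound for a cubic field with one complex place (r2 = 1):
# M_K = (n!/n^n) (4/pi)^r2 sqrt(|d|) = (8/(9 pi)) sqrt(17231) < 38
M = Rational(8, 9) / pi * sqrt(-d)
print(float(M))              # about 37.14
```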
4. Finally we can show that the class group $H(K)$ of $K$ is $H(K)\cong \mathbb Z_2\times \mathbb Z_2$, with generators $$<3,\alpha>,<3,\alpha-1>$$
Edit 1: We check that $<3,\alpha>$ and $<3,\alpha-1>$ have order 2. Let$$\begin{align}I &:= <3,\alpha>^2 = <9,3\alpha,\alpha^2>\\J &:= <3,\alpha-1>^2 = <9,3\alpha-3,\alpha^2-2\alpha+1>\end{align}$$
We claim that $$ \begin{align} <9,3\alpha,\alpha^2> &= <2\alpha^2 - 3\alpha + 27>\\ <9,3\alpha-3,\alpha^2-2\alpha+1> &= <\alpha+2> \end{align} $$
Clearly, we have $$ 2\alpha^2 - 3\alpha + 27 \in <9,3\alpha,\alpha^2> \implies <2\alpha^2-3\alpha+27> \subseteq <9,3\alpha,\alpha^2> $$ We obtain $$ \begin{align} -(\alpha^2+3\alpha+2)(2\alpha^2 - 3\alpha + 27) &= 9\\ (\alpha^2 + 6\alpha + 7)(2\alpha^2 - 3\alpha + 27) &= \alpha^2 \end{align} $$ Therefore $9,\alpha^2\in <2\alpha^2 - 3\alpha+27>$, which in turn gives $3\alpha\in <2\alpha^2 - 3\alpha + 27>$. Hence $$ \begin{align} <9, 3\alpha,\alpha^2> &\subseteq <2\alpha^2 - 3\alpha + 27>\\ \implies <9, 3\alpha,\alpha^2> &= <2\alpha^2 - 3\alpha + 27> \end{align} $$ This shows the first equivalence. On the other hand, $$ \begin{align} (\alpha^2-2\alpha+15)(\alpha+2) &= 9\\ 3(\alpha+2)-9 &= 3\alpha-3\\ (\alpha+2)^2-2(3\alpha-3)-(9) &= \alpha^2-2\alpha+1 \end{align} $$ Therefore $9,3\alpha-3,\alpha^2-2\alpha+1\in <\alpha+2>$. For the reverse containment, $$ \begin{align} (\alpha + 2)(\alpha^2 - 2 \alpha + 1) + 5 (3 \alpha - 3) + 4 (9) &= \alpha+2 \end{align} $$ shows that $\alpha+2 \in <9,3\alpha-3,\alpha^2 - 2\alpha+1>$. Therefore $$ <9,3\alpha-3,\alpha^2-2\alpha+1> = <\alpha+2> $$
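These element identities can be verified mechanically by reducing modulo the minimal polynomial; a small sympy check (my own, not part of the original computation):

```python
from sympy import expand, rem, symbols

a = symbols('a')                 # a plays the role of alpha
f = a**3 + 11*a + 21             # so a^3 = -11a - 21

# The four element identities used in the containment arguments above:
assert rem(expand(-(a**2 + 3*a + 2) * (2*a**2 - 3*a + 27)), f, a) == 9
assert rem(expand((a**2 + 6*a + 7) * (2*a**2 - 3*a + 27)), f, a) == a**2
assert rem(expand((a**2 - 2*a + 15) * (a + 2)), f, a) == 9
assert rem(expand((a + 2) * (a**2 - 2*a + 1) + 5*(3*a - 3) + 4*9), f, a) == a + 2
print("all four identities hold")
```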
Another example might be $f(x) = x^3+8x+60$ with integral basis $\{1,\alpha,\alpha^2/2\}$.
Using a computer algebra system like PARI/GP, one can check the class groups of the fields defined by polynomials of the form $x^3-a$. For $a>0$, the first $a$ for which $\mathbb{Q}(\alpha)$ (with $\alpha$ a root of $x^3-a$) has a non-cyclic class group is $65$. Indeed, the class group of $x^3-65$ is $\mathbb{Z}/3\mathbb{Z} \times \mathbb{Z}/6\mathbb{Z}$.
On the other hand, $a=113$ is the smallest $a$ such that $\mathbb{Q}(\sqrt[3]{a})$ has the Klein 4-group as its class group. In between, the $a$'s producing non-cyclic class groups are $70, 86, 91, 110$, and for all of them $\mathbb{Q}(\sqrt[3]{a})$ has $\mathbb{Z}/3\mathbb{Z} \times \mathbb{Z}/3\mathbb{Z}$ as its class group.
However, considering the size of the discriminant and having to compute some $20$ ideals (let alone combine them later) using Minkowski's bound, it would be too tedious to do by hand.
Here is a detailed computation of the class group of $\mathbb{Q}(\alpha)$, with $\alpha^3+11\alpha +21=0$, the example given by @Yong Hao Ng. I want to illustrate the following two maxims of computational algebraic number theory:
- Computation of the ideal class group is intractable.
- Knowledge of units and of the class group often complement each other.
Denote $K=\mathbb{Q}(\alpha)$; the discriminant of $\mathbb{Z}[\alpha]$ is $-17231$, which is squarefree, so an integral basis is $\{1,\alpha,\alpha^2\}$.
1. Finding generators and relations
This is usually the first step (for a general number field) in computing the class group in almost any algorithm: factor a large number of ideals with small norm, yielding a linear system, then pinpoint possible generators.
First we factor the primes smaller than the Minkowski bound; primes not shown below are inert. $$\begin{aligned}(3)&=(3,\alpha)(3,\alpha-1)(3,\alpha+1) \\ (7)&=(7,\alpha)(7,\alpha^2-3)\\ (11)&=(11,\alpha-1)(11,\alpha^2+\alpha+1)\\ (13)&=(13,\alpha+3)(13,\alpha^2-3\alpha-6)\\ (17)&=(17,\alpha-2)(17,\alpha^2+2\alpha-2)\\ (19)&=(19,\alpha+7)(19,\alpha^2-7\alpha+3)\\ (23)&=(23,\alpha-8)(23,\alpha^2+8\alpha+6)\\ (29)&=(29,\alpha+4)(29,\alpha+6)(29,\alpha-10)\\ \end{aligned}$$ Let $$\mathfrak{p}_1=(3,\alpha),\mathfrak{p}_2=(3,\alpha-1),\cdots,\mathfrak{p}_{17}=(29,\alpha+6),\mathfrak{p}_{18}=(29,\alpha-10)$$ which are named in their order of appearance. Now we find elements with small norm: $$N(\alpha) = -3\times 7 \qquad N(\alpha+1)=-3^2 \qquad N(\alpha-1)=-3\times 11 \qquad N(\alpha+3)=-3\times 13$$ $$N(\alpha+4)=3\times 29 \qquad N(\alpha^2+\alpha-2)=-3^3\times 11 \qquad N(\alpha^2-\alpha-2)=3^3\times 17$$ $$N(\alpha^2-\alpha+1)=3\times 13\times 19 \qquad N(\alpha^2-2\alpha-2)=3\times 13\times 23 \qquad N(3\alpha-1)=-23\times 29$$ $$N(4\alpha-1)=-3^2\times 13^2 \qquad N(4\alpha+7)=3\times 7\times 11 \qquad N(4\alpha+9)=3\times 17\times 19$$
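Since $f$ is monic, each of these norms is a resultant, $N(g(\alpha)) = \operatorname{Res}(f, g) = \prod_i g(\alpha_i)$ over the conjugates of $\alpha$, so the whole list can be checked mechanically (a sketch, not how the original computation was done):

```python
from sympy import factorint, resultant, symbols

x = symbols('x')
f = x**3 + 11*x + 21

# N(g(alpha)) = Res(f, g) for monic f; e.g. N(alpha) = -21 = -3*7.
elements = [x, x + 1, x - 1, x + 3, x + 4,
            x**2 + x - 2, x**2 - x - 2, x**2 - x + 1, x**2 - 2*x - 2,
            3*x - 1, 4*x - 1, 4*x + 7, 4*x + 9]
for g in elements:
    N = resultant(f, g, x)
    print(f"N({g}) = {N} = {factorint(N)}")
```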
Now we factor them into the $\mathfrak{p}_n$. For example, to factor $(\alpha)$, we have three possibilities: $$(\alpha) = \mathfrak{p}_1\mathfrak{p}_4 \qquad \mathfrak{p}_2\mathfrak{p}_4 \qquad \mathfrak{p}_3\mathfrak{p}_4$$ I rule out the latter two. If $(\alpha) = \mathfrak{p}_2\mathfrak{p}_4 = (3,\alpha+2)(7,\alpha-7)$, then $(\alpha+2)(\alpha-7)/\alpha \in \mathcal{O}_K$, but this number is not integral. If $(\alpha) = \mathfrak{p}_3 \mathfrak{p}_4 = (3,\alpha-2)(7,\alpha-7)$, then $(\alpha-2)(\alpha-7)/\alpha \in \mathcal{O}_K$, but this is not integral either. Hence $(\alpha) = \mathfrak{p}_1\mathfrak{p}_4$. Similarly, we can factor the other ideals, giving $$(\alpha) = \mathfrak{p}_1\mathfrak{p}_4 \qquad (\alpha+1) = \mathfrak{p}_3^2 \qquad (\alpha-1) = \mathfrak{p}_2 \mathfrak{p}_6 \qquad (\alpha+3) = \mathfrak{p}_1 \mathfrak{p}_8 $$ $$(\alpha+4) = \mathfrak{p}_3 \mathfrak{p}_{16} \qquad (\alpha^2+\alpha-2) = \mathfrak{p}_2^3 \mathfrak{p}_6 \qquad (\alpha^2-\alpha-2) = \mathfrak{p}_3^3 \mathfrak{p}_{10}$$ $$(\alpha^2-\alpha+1) = \mathfrak{p}_3 \mathfrak{p}_8 \mathfrak{p}_{12} \qquad (\alpha^2-2\alpha-2) = \mathfrak{p}_2 \mathfrak{p}_8 \mathfrak{p}_{14} \qquad (3\alpha-1) = \mathfrak{p}_{14} \mathfrak{p}_{18}$$ $$(4\alpha-1) = \mathfrak{p}_2^2 \mathfrak{p}_8^2 \qquad (4\alpha+7) = \mathfrak{p}_3 \mathfrak{p}_4 \mathfrak{p}_6 \qquad (4\alpha+9) = \mathfrak{p}_1 \mathfrak{p}_{10} \mathfrak{p}_{12}$$
We now have $21$ relations and $18$ 'variables'; hence we form the corresponding $21\times 18$ matrix $A$, whose rank is $18$. Applying Smith normal form to it gives $$\begin{pmatrix} 1 & & & & & & & & & & & & & & & & 1 & 1 \\ & 1 & & & & & & & & & & & & & & & & \\ & & 1 & & & & & & & & & & & & & & 1 & 1 \\ & & & 1 & & & & & & & & & & & & & 1 & 1 \\ & & & & 1 & & & & & & & & & & & & 1 & 1 \\ & & & & & 1 & & & & & & & & & & & 1 & \\ & & & & & & 1 & & & & & & & & & & 1 & \\ & & & & & & & 1 & & & & & & & & & 1 & 1 \\ & & & & & & & & 1 & & & & & & & & 1 & 1 \\ & & & & & & & & & 1 & & & & & & & & 1 \\ & & & & & & & & & & 1 & & & & & & & 1 \\ & & & & & & & & & & & 1 & & & & & 1 & \\ & & & & & & & & & & & & 1 & & & & 1 & \\ & & & & & & & & & & & & & 1 & & & & 1 \\ & & & & & & & & & & & & & & 1 & & & 1 \\ & & & & & & & & & & & & & & & 1 & 1 & 1 \\ & & & & & & & & & & & & & & & & 2 & \\ & & & & & & & & & & & & & & & & & 2 \end{pmatrix} \vec{v} = 0$$ where $\vec{v}$ is the column vector formed by $\mathfrak{p}_{18},\mathfrak{p}_{17},\cdots,\mathfrak{p}_1$. From this, we see that the ideal class group is generated by $\mathfrak{p}_1, \mathfrak{p}_2$, and that $\mathfrak{p}_1^2, \mathfrak{p}_2^2$ are principal. Also note that $\mathfrak{p}_{17}$ is principal, a highly nonobvious fact. However, we cannot yet conclude that the class group is $C_2\times C_2$. To show this, we have to check:
- $\mathfrak{p}_1, \mathfrak{p}_2$ are not principal.
- $\mathfrak{p}_1, \mathfrak{p}_2$ belong to genuinely different ideal classes.
Both bullets are non-trivial to prove. Below I present a way which tells us with 'high confidence' that the class number is $4$, hence $C_2\times C_2$ must be the class group. I do have an ad hoc (i.e., not the same as the general class group algorithm) idea for rigorously establishing this, but the procedure takes much longer than the one presented below. Both the informal and the rigorous method require knowledge of the fundamental unit.

2. Fundamental units
Now we find a fundamental unit of $K$. For a general number field, a system of independent units can be found by computing the (left) nullspace of the above coefficient matrix $A$. In our example, $$\vec{u}=\begin{pmatrix} 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 2 & -2 & 0 & -2 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ \end{pmatrix}$$ is a left null-vector of $A$ (i.e., $\vec{u} A$ is the zero vector). In terms of ideals, this says $$(3)^2 (\alpha+1)^{-1} (\alpha-1)^2 (\alpha+3)^{-2} (\alpha^2+\alpha-2)^{-2} (4\alpha-1) = (1)$$ Hence $$v = \frac{{9{{(\alpha - 1)}^2}(4\alpha - 1)}}{{(\alpha + 1){{(\alpha + 3)}^2}{{({\alpha ^2} + \alpha - 2)}^2}}} = 215-25\alpha +16\alpha^2 $$ is a nontrivial unit. After embedding $K$ into $\mathbb{R}$ via $\alpha \to -1.56238$, we have $v = 293.116$. We show $v$ is fundamental; for a cubic field with only one real embedding, this can be done via the following lemma:
Let $u>1$ be the fundamental unit of a cubic field with exactly one real embedding. Then $$u^3 > \frac{|\delta|-27}{4}$$ where $\delta$ is the discriminant of the field.
Applying this we have $u>16.2626$; thus we have to show $\sqrt{v} = 17.1206$ is not in $K$. There is a cheap way to check this: note that $101\mid (6^3+11\times 6+21)$, so we have a homomorphism (such a map always exists for a monogenic number field): $$\mathcal{O}_K \to \mathbb{F}_{101}: \alpha \mapsto 6$$ Then $v$ is sent to $215-25\times 6+16\times 6^2$, which one can check is not a quadratic residue modulo $101$. Now we can easily compute the regulator $R$, which is $5.68057$.
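The non-residue claim can be verified in a couple of lines of Python (a sketch; the defining polynomial $x^3+11x+21$ is inferred from the displayed relation $101 \mid 6^3+11\cdot 6+21$):

```python
# Euler's criterion: a^((p-1)/2) mod p is 1 for quadratic residues, p-1 otherwise
def legendre(a, p):
    return pow(a, (p - 1) // 2, p)

p = 101
# alpha -> 6 is well defined because 101 divides 6^3 + 11*6 + 21 = 303
assert (6**3 + 11*6 + 21) % p == 0

v_mod_p = (215 - 25*6 + 16*6**2) % p   # image of v = 215 - 25a + 16a^2
print(v_mod_p, legendre(v_mod_p, p))   # 35 100  (35 is a non-residue mod 101)
```

Since the image of $v$ is not a square in $\mathbb{F}_{101}$, $v$ cannot be a square in $\mathcal{O}_K$.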
3. The class number
The analytic class number formula says $$\frac{hR}{w} = 2^{-r_1} (2\pi)^{-r_2} \sqrt{|\delta|} \prod_p \frac{1-1/p}{\prod_{\mathfrak{p}|p} (1-1/N(\mathfrak{p}))}$$ where $w$ is the number of roots of unity in $K$ and $r_1, r_2$ are the numbers of real and complex places.
Now we approximate the product by taking finitely many terms. With $r_1=r_2=1$ and $w=2$, $$h \approx \frac{\sqrt{17231}}{2\pi R} \prod_{p\leq 29} \frac{1-1/p}{\prod_{\mathfrak{p}|p} (1-1/N(\mathfrak{p}))} = 4.27026$$ If we use the first $100$ primes instead, then $h\approx 4.10765$. So we have sufficient confidence that $h=4$. This shows the class group of $K$ is $C_2 \times C_2$.
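This estimate is easy to reproduce (a Python sketch; assumptions: the defining polynomial $x^3+11x+21$ is inferred from the homomorphism $\alpha \mapsto 6 \bmod 101$ used earlier, $|\delta| = 17231$, $R = 5.68057$ as computed above, and $r_1 = r_2 = 1$, $w = 2$):

```python
import math

def splitting_norms(p):
    """Norms N(p_i) of the primes above p, read off from the factorization
    of x^3 + 11x + 21 mod p.  Valid for unramified p (p does not divide 17231):
    3 roots -> splits completely, 1 root -> (p)(p^2), 0 roots -> inert."""
    roots = sum(1 for x in range(p) if (x**3 + 11*x + 21) % p == 0)
    return {3: [p, p, p], 1: [p, p * p], 0: [p**3]}[roots]

R, disc = 5.68057, 17231
prod = 1.0
for p in [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]:
    den = 1.0
    for q in splitting_norms(p):
        den *= 1 - 1 / q
    prod *= (1 - 1 / p) / den

h = math.sqrt(disc) / (2 * math.pi * R) * prod
print(round(h, 5))   # 4.27026
```

The splitting data produced by `splitting_norms` matches the factorizations listed at the top (e.g. one root mod 7, three roots mod 29).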
Mathematical Physics

Title: On the classification of multidimensionally consistent 3D maps
(Submitted on 10 Sep 2015 (v1), last revised 24 Feb 2016 (this version, v2))
Abstract: We classify multidimensionally consistent maps given by (formal or convergent) series of the following kind: $$ T_k x_{ij}=x_{ij} + \sum_{m=2}^\infty A_{ij ; \, k}^{(m)}(x_{ij},x_{ik},x_{jk}), $$ where $A_{ij;\, k}^{(m)}$ are homogeneous polynomials of degree $m$ of their respective arguments. The result of our classification is that the only non-trivial multidimensionally consistent map in this class is given by the well known symmetric discrete Darboux system $$ T_k x_{ij}=\frac{x_{ij}+x_{ik}x_{jk}}{\sqrt{1-x_{ik}^2}\sqrt{1-x_{jk}^2}}. $$

Submission history: From: Yuri B. Suris. [v1] Thu, 10 Sep 2015 12:49:40 GMT. [v2] Wed, 24 Feb 2016 14:09:51 GMT.
It isn't true that every DFA for this language is non-planar.

Here is a language that is truly non-planar: $$\left\{ x \in \{\sigma_1,\ldots,\sigma_6\}^* \middle| \sum_{i=1}^6 i\#_{\sigma_i}(x) \equiv 0 \pmod 7 \right\}.$$ Take any planar FSA for this language. If we remove all unreachable states, we still get a planar graph. Each reachable state has six ...
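A sketch of the natural DFA for this language in Python (hypothetical encoding: the symbol $\sigma_i$ is represented by the integer $i$, $1 \le i \le 6$; state $0$ is initial and accepting):

```python
# States are the residues 0..6 of the weighted symbol count mod 7
def accepts(word, m=7):
    state = 0
    for i in word:
        state = (state + i) % m
    return state == 0

assert accepts([3, 4])      # 3 + 4 = 7 = 0 mod 7
assert not accepts([1])

# From any state s, the six symbols lead to s+1, ..., s+6 mod 7, i.e. to
# every other state, so the transition graph contains K_7 and is non-planar.
for s in range(7):
    assert {(s + i) % 7 for i in range(1, 7)} == set(range(7)) - {s}
```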
The concept has been researched before. (Once you know the answer, google for it ...) First there is old work by Book and Chandra, with the following abstract. Summary. It is shown that for every finite-state automaton there exists an equivalent nondeterministic automaton with a planar state graph. However there exist finite-state automata with no ...
The treewidth (and pathwidth) of the $k\times k$ grid is exactly $k$. (And, more generally, the treewidth and pathwidth of the $k\times\ell$ grid is exactly $\min\,\{k,\ell\}$). For the example grid$$\begin{matrix}1&-&2&-&3\\|&&|&&|\\4&-&5&-&6\\|&&|&&|\\7&-&8&-&9\end{matrix}$$...
For some graph classes $C$, the question "is there a fast algorithm for deciding whether a graph $G$ belongs to class $C$?" is perhaps only of theoretical curiosity. But it can also be argued otherwise: suppose a problem you care about is hard in general, but efficiently solvable for graphs of class $C$. Wouldn't it be nice if you could quickly test if you ...
These problems usually appear under the name "X edge deletion" and "X vertex deletion", where X is the graph class of interest, e.g., X could be "planar". There are also different variants where you allow for say edge addition and deletion (these are "editing problems"), or the operation could be edge contraction and so on.In other words, you seem to be ...
I'm not aware of any implementations of planar subgraph isomorphism algorithms, sorry. Note that "SubGemini", which is a (1993) circuit/netlist-oriented subgraph isomorphism solver, doesn't use a planar algorithm, seemingly because they did not want to make planarity assumptions.For subgraph isomorphism in general (i.e. not planar), the practical state of ...
Quoting directly from the Wikipedia article linked in the question:De Fraysseix, Pach and Pollack showed how to find in linear time a straight-line drawing in a grid with dimensions linear in the size of the graphAs I commented earlier, this answers all three questions.
One approach would be to enumerate all graphs of a given size, then test each one to see if it is planar and filter out the non-planar ones. This might work acceptably if you only want very small graphs.Brendan McKay has a collection that enumerates all non-isomorphic planar graphs of size up to 11, which you could download and use directly. There are ...
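The enumerate-and-filter approach can be sketched in a few lines (assuming `networkx` is available; on 5 labelled vertices the only non-planar graph is $K_5$ itself, since a $K_{3,3}$ minor needs 6 vertices, so 1023 of the 1024 graphs survive):

```python
from itertools import combinations
import networkx as nx

nodes = range(5)
possible_edges = list(combinations(nodes, 2))   # 10 possible edges, 2^10 graphs

count = 0
for r in range(len(possible_edges) + 1):
    for edges in combinations(possible_edges, r):
        G = nx.Graph()
        G.add_nodes_from(nodes)
        G.add_edges_from(edges)
        if nx.check_planarity(G)[0]:            # filter out non-planar graphs
            count += 1
print(count)   # 1023
```

For anything much larger than this, use a purpose-built generator such as plantri instead.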
Yes, there's a nice algorithm for this. Compute the transitive reduction, then check whether the result is planar.Why does this work? The transitive reduction is the graph with the smallest number of edges, that has the same reachability relationships as the original graph. It is also unique. Moreover, any other graph with the same reachability ...
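The algorithm described above is a two-liner with `networkx` (assumed available; the example DAG is illustrative):

```python
import networkx as nx

def reachability_planar(dag):
    """Is some graph with the same reachability relation as `dag` planar?
    Equivalent to: is the transitive reduction planar?"""
    return nx.check_planarity(nx.Graph(nx.transitive_reduction(dag)))[0]

# The 'complete' DAG on 5 vertices is K5 as an undirected graph (non-planar),
# but its transitive reduction is just the path 0 -> 1 -> 2 -> 3 -> 4.
G = nx.DiGraph((i, j) for i in range(5) for j in range(i + 1, 5))
assert not nx.check_planarity(nx.Graph(G))[0]
assert reachability_planar(G)
```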
It is $NP$-complete. Consider a graph $G$ which is modified by duplicating every vertex, and connecting every duplicate vertex to its original. Then if we constrain all the duplicate vertices to a fixed color, then the thus obtained graph is 4-colorable (with constraints) if and only if the original graph is 3-colorable.
To add to the other answer, the name of the problem you are interested in is precoloring extension: given a graph $G$ with some precolored vertices and a color bound $\ell$, can the precoloring of $G$ be extended to a proper coloring of all vertices of $G$ using not more than $\ell$ colors? This problem is NP-complete for planar bipartite graphs with fixed $\...
One general method is to perturb the node weights slightly, find a good partition, and repeat that $n$ times with $n$ different perturbations. The basic algorithm is something like this:

Repeat until you have $n$ distinct balanced partitions of $G$:
- Copy $G$ to $G'$.
- Randomly perturb the weight of each node in $G'$ by a tiny amount (a different random ...
You might try using the spring model: https://en.wikipedia.org/wiki/Force-directed_graph_drawing. Or, you could use optimization methods. Build an objective function $\Psi$ that is a sum of penalty terms. For each pair of mapped points $a^*,b^*$ that you are hoping will be at distance $c_{a^*,b^*}$, you have a penalty term $(d(a^*,b^*) - c_{a^*,b^*})^2$. ...
The edges examined never intersect because all points are first sorted in terms of angle from the starting point, and then traversed counterclockwise sequentially. By examining edges to each point sorted by their angle counterclockwise and never traversing clockwise (because you stop when you reach the starting node), you ensure that the edges won't intersect....
No. The dual graph of a Voronoi diagram is the Delaunay triangulation of its point set so, in particular, every interior face of it is a triangle. But there are plenty of planar graphs (e.g., the $4$-cycle) that have non-triangular interior faces.
The answer turns out to be no. A counterexample, kindly suggested to me by Balázs Gerencsér, is a $\sqrt{N}$ by $\sqrt{N}$ grid, with a path of length $\sqrt{N}$ glued to it. One can show that:
- the mixing time of a simple random walk on this graph is $O(N\log N)$ (by combining Cheeger's inequality and the Spielman-Teng [1] bound on the spectral gap)
- the ...
This was meant to be a comment but it was a bit too long, sorry!

There is a well known algorithm to draw a planar graph, namely Tutte's drawing algorithm. The input graph is assumed to be 3-connected and planar. The idea of the algorithm is to fix the position of vertices of a face in convex position and from those coordinates deduce the positions for the ...
As it turns out, this puzzle was solved in a competition at Oxford University in 2012. The winning implementation used backtrack search with several heuristics and pruning, and it can solve instances of the puzzle much larger than the maximum 14x14 in milliseconds. Apparently, the problem I was trying to solve is equivalent to another game called Numberlink. ...
It depends. If you have a planar embedding, and you want to find faces on it, just pick an edge, and keep walking on the face that's on the 'left' side of the edge when looked at in the direction you were walking. If you want to find a cycle of nodes for which there exists a planar embedding in which those cycles are a face, that's even easier, because in ...
First let me mention that the definition of being uniquely embeddable requires ANY graph isomorphism (e.g. just renaming symmetric vertices or any other automorphism permutation) to be (not necessarily uniquely) extendable to a topological (or combinatorial) one (see Diestel's Graph Theory chapter about planar graphs for these definitions), in contrast to usual ...
Presumably you have to show that if you triangulate an arbitrary planar graph then you get an Apollonian network. Then you have to show that the triangulated graph contains more triangles than the original one. This will complete the proof.
This is known as "graph layout" or "graph drawing" (practical methods) or "graph embedding" (theoretical interest). The relevant techniques are more closely associated with computational geometry and mathematical optimization, not machine learning. See this Wikipedia article and Recovering a point embedding from a graph with edges weighted by point ...
House of Graphs is a good resource. From the database, you can download all planar graphs up to 11 vertices.For generating planar graphs even with some additional properties (e.g. connectivity), have a look at plantri by Brinkmann and McKay. |
Take the following transfer function of a 3rd order system:
$$H(s)=\dfrac{2.302~s+0.3548}{s^3+0.739~s^2+3.223~s+0.3548}$$
with poles:
Pole                     Damping    Frequency        Time Constant
                                    (rad/seconds)    (seconds)
-1.13e-01                1.00e+00   1.13e-01         8.89e+00
-3.13e-01 + 1.75e+00i    1.76e-01   1.78e+00         3.19e+00
-3.13e-01 - 1.75e+00i    1.76e-01   1.78e+00         3.19e+00
and with the following unit step response:
If I compute the percentage overshoot (PO) based on the damping ratio $\zeta=0.176$, I get:
$$PO=100{\times}e^{\dfrac{-\zeta\pi}{\sqrt{1-\zeta^2}}}=100{\times}e^{\dfrac{-0.176\pi}{\sqrt{1-0.176^2}}}=\boxed{57.02\%}$$
However, if I compute the PO using the graphical method (comparing the peak value with the final value) I get a completely different result:
$$PO=\dfrac{v_{peak}-v_{final}}{v_{final}}{\times}100=\dfrac{1.2-1}{1}{\times}100=\boxed{20\%}$$
I don't understand the reason for such a discrepancy. Why don't my PO computations match?
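For reference, the discrepancy is easy to reproduce numerically: the $\zeta$-based formula describes the complex pole pair in isolation, while the full third-order response (with the slow real pole and the zero) peaks much lower. A pure-Python sketch (RK4 on the controllable canonical form of $H(s)$; step size and horizon are illustrative):

```python
import math

# Percentage overshoot predicted from the damping ratio alone
zeta = 0.176
po_formula = 100 * math.exp(-zeta * math.pi / math.sqrt(1 - zeta**2))

# Controllable canonical form of H(s):
# x1' = x2, x2' = x3, x3' = -0.3548 x1 - 3.223 x2 - 0.739 x3 + u,
# y = 0.3548 x1 + 2.302 x2, with u(t) = 1 (unit step)
def deriv(x):
    x1, x2, x3 = x
    return (x2, x3, 1.0 - 0.3548 * x1 - 3.223 * x2 - 0.739 * x3)

def rk4(x, h):
    k1 = deriv(x)
    k2 = deriv(tuple(xi + h / 2 * ki for xi, ki in zip(x, k1)))
    k3 = deriv(tuple(xi + h / 2 * ki for xi, ki in zip(x, k2)))
    k4 = deriv(tuple(xi + h * ki for xi, ki in zip(x, k3)))
    return tuple(xi + h / 6 * (a + 2 * b + 2 * c + d)
                 for xi, a, b, c, d in zip(x, k1, k2, k3, k4))

x, h, peak = (0.0, 0.0, 0.0), 0.01, 0.0
for _ in range(8000):                         # simulate 80 seconds
    x = rk4(x, h)
    peak = max(peak, 0.3548 * x[0] + 2.302 * x[1])

v_final = 1.0                                 # H(0) = 0.3548 / 0.3548
po_sim = (peak - v_final) / v_final * 100
print(po_formula, po_sim)                     # about 57.0 vs. roughly 19-20
```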
Suppose that the household faces the following problem:
$\underset{ \{c_t , k_{t+1}, n_t\} }{\max}\ \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \left[ \ln c_t + \ln (1 - n_t) \right]$
subject to
$ k_{t+1} = A_t k_t ^{\alpha} n_t ^{1- \alpha} - c_t $
Usually, in my macroeconomic course, we would formulate the Bellman equation as the following:
$ V(k, A) = \underset {c, k', n} \max \ln c + \ln (1- n) + \beta \mathbb{E} [V (k', A')] $
However, in his lecture notes, he formulated the Bellman equation as the following:
$ V(k, A) = \underset {c, k', n} \max \ln c + \ln (1- n) + \beta \mathbb{E} [V (k', A')] + \lambda [ A k^{\alpha} n_t ^{1- \alpha} - c - k' ]$
$ V(k, A) = \underset {c, k', n} \max \ln c + \ln (1- n) + \beta \mathbb{E} [V (k', A')] + \lambda [ A k^{\alpha} n ^{1- \alpha} - c - k' ]$

What I don't understand is where the expression with $\lambda$ comes from. The method that we used to derive the Bellman equation does not give us the Bellman equation with the $\lambda$ term, but instead gives us the one without it.
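For what it's worth, the two formulations describe the same problem: the $\lambda$ term just enforces the resource constraint inside the max operator, whereas the first formulation substitutes it out, $c = A k^{\alpha} n^{1-\alpha} - k'$. A rough value-function-iteration sketch for a deterministic version ($A = 1$, illustrative parameters and grids) that uses the substituted form, so no multiplier is needed:

```python
import math

alpha, beta = 0.3, 0.9
k_grid = [0.01 + 0.19 * i / 39 for i in range(40)]   # capital grid
n_grid = [0.1 * j for j in range(1, 10)]             # labor grid in (0, 1)

V = [0.0] * len(k_grid)
for it in range(1000):
    V_new = []
    for k in k_grid:
        best = -1e18
        for n in n_grid:
            y = k ** alpha * n ** (1 - alpha)
            leisure = math.log(1 - n)
            for j, kp in enumerate(k_grid):
                c = y - kp                 # constraint substituted out
                if c <= 0:
                    break                  # k_grid is increasing: c only shrinks
                val = math.log(c) + leisure + beta * V[j]
                if val > best:
                    best = val
        V_new.append(best)
    diff = max(abs(a - b) for a, b in zip(V_new, V))
    V = V_new
    if diff < 1e-8:
        break

print(it, diff)   # converges by the contraction mapping property
```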
15.2. Deep Convolutional Generative Adversarial Networks¶
In Section 15.1, we introduced the basic ideas behind how GANs work. We showed that they can draw samples from some simple, easy-to-sample distribution, like a uniform or normal distribution, and transform them into samples that appear to match the distribution of some data set. And while our example of matching a 2D Gaussian distribution got the point across, it’s not especially exciting.
In this section, we’ll demonstrate how you can use GANs to generate photorealistic images. We’ll base our models on the deep convolutional GANs (DCGAN) introduced in [Radford.Metz.Chintala.2015]. We’ll borrow the convolutional architectures that have proven so successful for discriminative computer vision problems and show how, via GANs, they can be leveraged to generate photorealistic images.
from mxnet import gluon, autograd, init, np, npx
from mxnet.gluon import nn
import d2l
import zipfile

npx.set_np()
15.2.1. The Pokemon Dataset¶
The dataset we will use is a collection of Pokemon sprites obtained from pokemondb. First download, extract and load this dataset.
data_dir = '../data/'
url = 'http://data.mxnet.io/data/pokemon.zip'
sha1 = 'c065c0e2593b8b161a2d7873e42418bf6a21106c'
fname = gluon.utils.download(url, data_dir, sha1_hash=sha1)
with zipfile.ZipFile(fname) as f:
    f.extractall(data_dir)
pokemon = gluon.data.vision.datasets.ImageFolderDataset(data_dir + 'pokemon')
Downloading ../data/pokemon.zip from http://data.mxnet.io/data/pokemon.zip...
We resize each image into \(64\times 64\). The ToTensor transformation will project the pixel values into \([0,1]\), while our generator will use the tanh function to obtain outputs in \([-1,1]\). Therefore we normalize the data with \(0.5\) mean and \(0.5\) standard deviation to match the value range.
batch_size = 256
transformer = gluon.data.vision.transforms.Compose([
    gluon.data.vision.transforms.Resize(64),
    gluon.data.vision.transforms.ToTensor(),
    gluon.data.vision.transforms.Normalize(0.5, 0.5)
])
data_iter = gluon.data.DataLoader(
    pokemon.transform_first(transformer),
    batch_size=batch_size, shuffle=True,
    num_workers=d2l.get_dataloader_workers())
Let’s visualize the first 20 images.
d2l.set_figsize((4, 4))
for X, y in data_iter:
    imgs = X[0:20, :, :, :].transpose(0, 2, 3, 1) / 2 + 0.5
    d2l.show_images(imgs, num_rows=4, num_cols=5)
    break
15.2.2. The Generator¶
The generator needs to map the noise variable \(\mathbf z\in\mathbb R^d\), a length-\(d\) vector, to an RGB image of width and height \(64\times 64\). In Section 12.11 we introduced the fully convolutional network, which uses transposed convolution layers (refer to Section 12.10) to enlarge the input size. The basic block of the generator contains a transposed convolution layer followed by batch normalization and a ReLU activation.
class G_block(nn.Block):
    def __init__(self, channels, kernel_size=4,
                 strides=2, padding=1, **kwargs):
        super(G_block, self).__init__(**kwargs)
        self.conv2d_trans = nn.Conv2DTranspose(
            channels, kernel_size, strides, padding, use_bias=False)
        self.batch_norm = nn.BatchNorm()
        self.activation = nn.Activation('relu')

    def forward(self, X):
        return self.activation(self.batch_norm(self.conv2d_trans(X)))
By default, the transposed convolution layer uses a \(k_h = k_w = 4\) kernel, \(s_h = s_w = 2\) strides, and \(p_h = p_w = 1\) padding. With an input shape of \(n_h^{'} \times n_w^{'} = 16 \times 16\), the generator block will double the input's width and height.
x = np.zeros((2, 3, 16, 16))
g_blk = G_block(20)
g_blk.initialize()
g_blk(x).shape
(2, 20, 32, 32)
If we change the transposed convolution layer to a \(4\times 4\) kernel with \(1\times 1\) strides and zero padding, then with an input size of \(1 \times 1\), the output will have its width and height each increased by 3.
x = np.zeros((2, 3, 1, 1))
g_blk = G_block(20, strides=1, padding=0)
g_blk.initialize()
g_blk(x).shape
(2, 20, 4, 4)
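Both output sizes follow the usual shape rule for transposed convolution, \(n_{out} = (n_{in} - 1) s - 2p + k\). A sketch (the helper name is illustrative):

```python
# Output size of a transposed convolution along one spatial dimension
def conv_transpose_out(n_in, kernel, stride, padding):
    return (n_in - 1) * stride - 2 * padding + kernel

assert conv_transpose_out(16, 4, 2, 1) == 32   # the doubling block above
assert conv_transpose_out(1, 4, 1, 0) == 4     # 1x1 -> 4x4
```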
The generator consists of four basic blocks that increase the input's width and height from 1 to 32. At the same time, it first projects the latent variable into \(64\times 8\) channels, and then halves the number of channels each time. Finally, a transposed convolution layer is used to generate the output. It further doubles the width and height to match the desired \(64\times 64\) shape, and reduces the channel size to \(3\). The tanh activation function is applied to project output values into the \((-1, 1)\) range.
n_G = 64
net_G = nn.Sequential()
net_G.add(G_block(n_G*8, strides=1, padding=0),  # output: (64*8, 4, 4)
          G_block(n_G*4),  # output: (64*4, 8, 8)
          G_block(n_G*2),  # output: (64*2, 16, 16)
          G_block(n_G),    # output: (64, 32, 32)
          nn.Conv2DTranspose(
              3, kernel_size=4, strides=2, padding=1,
              use_bias=False, activation='tanh'))  # output: (3, 64, 64)
Generate a 100-dimensional latent variable to verify the generator's output shape.
x = np.zeros((1, 100, 1, 1))
net_G.initialize()
net_G(x).shape
(1, 3, 64, 64)
15.2.3. Discriminator¶
The discriminator is a normal convolutional network except that it uses a leaky ReLU as its activation function. Given \(\alpha \in[0,1]\), its definition is

\(\textrm{leaky ReLU}(x) = \begin{cases}x & \textrm{if } x > 0\\ \alpha x & \textrm{otherwise}\end{cases}.\)
As can be seen, it is the normal ReLU if \(\alpha=0\), and the identity function if \(\alpha=1\). For \(\alpha \in (0,1)\), leaky ReLU is a nonlinear function that gives a non-zero output for a negative input. It aims to fix the "dying ReLU" problem, where a neuron's input might always be negative so that, since the gradient of ReLU is 0 there, the neuron cannot make any progress.
alphas = [0, 0.2, 0.4, .6, .8, 1]
x = np.arange(-2, 1, 0.1)
Y = [nn.LeakyReLU(alpha)(x).asnumpy() for alpha in alphas]
d2l.plot(x.asnumpy(), Y, 'x', 'y', alphas)
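The two limiting cases can also be checked with a plain NumPy sketch of the same definition, written as \(\max(x, 0) + \alpha \min(x, 0)\) (framework-independent, for illustration):

```python
import numpy as np

def leaky_relu(x, alpha):
    # max(x, 0) + alpha * min(x, 0): identity on positives, scaled on negatives
    return np.maximum(x, 0) + alpha * np.minimum(x, 0)

x = np.array([-2.0, -0.5, 0.0, 1.5])
assert np.allclose(leaky_relu(x, 0.0), np.maximum(x, 0))  # alpha=0: plain ReLU
assert np.allclose(leaky_relu(x, 1.0), x)                 # alpha=1: identity
print(leaky_relu(x, 0.2))   # [-0.4 -0.1  0.   1.5]
```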
The basic block of the discriminator is a convolution layer followed by a batch normalization layer and a leaky ReLU activation. The hyper-parameters of the convolution layer are similar to those of the transposed convolution layer in the generator block.
class D_block(nn.Block):
    def __init__(self, channels, kernel_size=4, strides=2,
                 padding=1, alpha=0.2, **kwargs):
        super(D_block, self).__init__(**kwargs)
        self.conv2d = nn.Conv2D(
            channels, kernel_size, strides, padding, use_bias=False)
        self.batch_norm = nn.BatchNorm()
        self.activation = nn.LeakyReLU(alpha)

    def forward(self, X):
        return self.activation(self.batch_norm(self.conv2d(X)))
A basic block with default settings will halve the width and height of the inputs, as we demonstrated in Section 6.3. For example, given an input shape \(n_h = n_w = 16\), with a kernel shape \(k_h = k_w = 4\), a stride shape \(s_h = s_w = 2\), and a padding shape \(p_h = p_w = 1\), the output shape will be:
x = np.zeros((2, 3, 16, 16))
d_blk = D_block(20)
d_blk.initialize()
d_blk(x).shape
(2, 20, 8, 8)
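This is the usual strided-convolution shape rule, \(n_{out} = \lfloor (n_{in} + 2p - k)/s \rfloor + 1\), which a few lines of Python confirm (a sketch; the helper name is illustrative):

```python
# Output size of an ordinary strided convolution along one spatial dimension
def conv_out(n_in, kernel, stride, padding):
    return (n_in + 2 * padding - kernel) // stride + 1

assert conv_out(16, 4, 2, 1) == 8   # the halving D_block above
assert conv_out(4, 4, 1, 0) == 1    # a 4x4 kernel collapsing 4x4 to 1x1
```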
The discriminator is a mirror of the generator.
n_D = 64
net_D = nn.Sequential()
net_D.add(D_block(n_D),    # output: (64, 32, 32)
          D_block(n_D*2),  # output: (64*2, 16, 16)
          D_block(n_D*4),  # output: (64*4, 8, 8)
          D_block(n_D*8),  # output: (64*8, 4, 4)
          nn.Conv2D(1, kernel_size=4, use_bias=False))  # output: (1, 1, 1)
It uses a convolution layer with output channel \(1\) as the last layer to obtain a single prediction value.
x = np.zeros((1, 3, 64, 64))
net_D.initialize()
net_D(x).shape
(1, 1, 1, 1)
15.2.4. Training¶
Compared to the basic GAN in Section 15.1, we use the same learning rate for both the generator and the discriminator since they are similar to each other. In addition, we change \(\beta_1\) in Adam (Section 10.10) from \(0.9\) to \(0.5\). It decreases the smoothness of the momentum, the exponentially weighted moving average of past gradients, to take care of the rapidly changing gradients caused by the generator and the discriminator fighting each other. Besides, the randomly generated noise Z is a 4-D tensor, and we use a GPU to accelerate the computation.
def train(net_D, net_G, data_iter, num_epochs, lr, latent_dim,
          ctx=d2l.try_gpu()):
    loss = gluon.loss.SigmoidBCELoss()
    net_D.initialize(init=init.Normal(0.02), force_reinit=True, ctx=ctx)
    net_G.initialize(init=init.Normal(0.02), force_reinit=True, ctx=ctx)
    trainer_hp = {'learning_rate': lr, 'beta1': 0.5}
    trainer_D = gluon.Trainer(net_D.collect_params(), 'adam', trainer_hp)
    trainer_G = gluon.Trainer(net_G.collect_params(), 'adam', trainer_hp)
    animator = d2l.Animator(xlabel='epoch', ylabel='loss',
                            xlim=[1, num_epochs], nrows=2, figsize=(5, 5),
                            legend=['discriminator', 'generator'])
    animator.fig.subplots_adjust(hspace=0.3)
    for epoch in range(1, num_epochs + 1):
        # Train one epoch
        timer = d2l.Timer()
        metric = d2l.Accumulator(3)  # loss_D, loss_G, num_examples
        for X, _ in data_iter:
            batch_size = X.shape[0]
            Z = np.random.normal(0, 1, size=(batch_size, latent_dim, 1, 1))
            X, Z = X.as_in_context(ctx), Z.as_in_context(ctx)
            metric.add(d2l.update_D(X, Z, net_D, net_G, loss, trainer_D),
                       d2l.update_G(Z, net_D, net_G, loss, trainer_G),
                       batch_size)
        # Show generated examples
        Z = np.random.normal(0, 1, size=(21, latent_dim, 1, 1), ctx=ctx)
        # Rescale the synthetic images from (-1, 1) to (0, 1) for display
        fake_x = net_G(Z).transpose(0, 2, 3, 1) / 2 + 0.5
        imgs = np.concatenate(
            [np.concatenate([fake_x[i * 7 + j] for j in range(7)], axis=1)
             for i in range(len(fake_x) // 7)], axis=0)
        animator.axes[1].cla()
        animator.axes[1].imshow(imgs.asnumpy())
        # Show the losses
        loss_D, loss_G = metric[0] / metric[2], metric[1] / metric[2]
        animator.add(epoch, (loss_D, loss_G))
    print('loss_D %.3f, loss_G %.3f, %d examples/sec on %s' % (
        loss_D, loss_G, metric[2] / timer.stop(), ctx))
Now let’s train the model.
latent_dim, lr, num_epochs = 100, 0.005, 40
train(net_D, net_G, data_iter, num_epochs, lr, latent_dim)
loss_D 0.114, loss_G 6.287, 2601 examples/sec on gpu(0)
15.2.5. Summary¶

The DCGAN architecture has four convolutional layers for the Discriminator and four "fractionally-strided" convolutional layers for the Generator. The Discriminator is a 4-layer strided convolutional network with batch normalization (except on its input layer) and leaky ReLU activations. Leaky ReLU is a nonlinear function that gives a non-zero output for a negative input. It aims to fix the "dying ReLU" problem and helps the gradients flow more easily through the architecture.

15.2.6. Exercises¶

What will happen if we use standard ReLU activation rather than leaky ReLU? Apply DCGAN on Fashion-MNIST and see which category works well and which does not.
Difference between revisions of "Hand Dynamics and Control"
Revision as of 02:49, 25 July 2009
Prev: Multifingered Hand Kinematics Chapter 6 - Hand Dynamics and Control Next: Nonholonomic Behavior
In this chapter, we study the dynamics and control of a set of robots performing a coordinated task. Our primary example will be that of a multifingered robot hand manipulating an object, but the formalism is considerably broader. It allows a unified treatment of dynamics and control of robot systems subject to a set of velocity constraints, generalizing the treatment given in Chapter 4.
Chapter Summary
The following are the key concepts covered in this chapter:
The dynamics of a mechanical system with Lagrangian <amsmath>L(q, \dot q)</amsmath>, subject to a set of Pfaffian constraints of the form <amsmath> A(q) \dot q = 0 \qquad A(q) \in {\mathbb R}^{k \times n},</amsmath>
can be written as
<amsmath> \frac{d}{dt} \frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q} + A^T(q) \lambda - \Upsilon = 0,</amsmath>
where <amsmath>\lambda \in {\mathbb R}^k</amsmath> is the vector of
Lagrange multipliers. The values of the Lagrange multipliers are given by <amsmath> \lambda = (A M^{-1} A^T)^{-1} \left( A M^{-1}(F - C\dot q - N) + \dot A \dot q \right).</amsmath> The Lagrange-d'Alembert formulation of the dynamics represents the motion of the system by projecting the equations of motion onto the subspace of allowable motions. If <amsmath>q = (q_1, q_2) \in {\mathbb R}^{n-k} \times {\mathbb R}^{k}</amsmath> and the constraints have the form <amsmath> \dot q_2 = {\cal A}(q) \dot q_1,</amsmath>
then the equations of motion can be written as
<amsmath> \left(\frac{d}{dt} \frac{\partial L}{\partial \dot q_1} - \frac{\partial L}{\partial q_1} - \Upsilon_1\right) + {\cal A}^T \left(\frac{d}{dt} \frac{\partial L}{\partial \dot q_2} - \frac{\partial L}{\partial q_2} - \Upsilon_2\right) = 0.</amsmath>
In the special case that the constraint is integrable, these equations agree with those obtained by substituting the constraint into the Lagrangian and then using the unconstrained version of Lagrange's equations.
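The multiplier formula above is straightforward to evaluate numerically. Below is a NumPy sketch for an illustrative example not taken from the text: a unit-mass particle confined to the unit circle, so the constraint is <amsmath>x \dot x + y \dot y = 0</amsmath> with <amsmath>A(q) = [x \;\; y]</amsmath>, under gravity <amsmath>N = (0, g)</amsmath>, with no Coriolis terms and no applied force:

```python
import numpy as np

M = np.eye(2)                                          # unit mass
g = 9.81
q, qdot = np.array([1.0, 0.0]), np.array([0.0, 2.0])   # speed v = 2, tangent
A = q.reshape(1, 2)                                    # A(q) = [x, y]
Adot = qdot.reshape(1, 2)                              # A is linear in q
F, C, N = np.zeros(2), np.zeros((2, 2)), np.array([0.0, g])

# lambda = (A M^-1 A^T)^-1 ( A M^-1 (F - C qdot - N) + Adot qdot )
Minv = np.linalg.inv(M)
lam = np.linalg.solve(A @ Minv @ A.T,
                      A @ Minv @ (F - C @ qdot - N) + Adot @ qdot)
# M qddot + C qdot + N + A^T lambda = F  =>  solve for the acceleration
qdd = Minv @ (F - C @ qdot - N - A.T @ lam)

print(lam)   # [4.] : the centripetal tension v^2 (gravity is tangential here)
assert np.allclose(A @ qdd + Adot @ qdot, 0)   # constraint is preserved
```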
The dynamics for a multifingered robot hand with joint variables <amsmath>\theta \in {\mathbb R}^n</amsmath> and (local) object variables <amsmath>x \in {\mathbb R}^p</amsmath>, subject to the grasp constraint <amsmath> J_h(\theta, x) \dot \theta = G^T(\theta, x) \dot x,</amsmath>
is given by
<amsmath> \tilde M(q) \ddot x + \tilde C(q, \dot q) \dot x + \tilde N(q, \dot q) = F,</amsmath>
where <amsmath>q = (\theta, x)</amsmath> and
<amsmath> \aligned \tilde{M} &= M_o + G J_h^{-T} M_f J_h^{-1} G^T \\ \tilde{C} &= C_o + G J_h^{-T} \left(C_f J_h^{-1} G^T + M_f \frac{d}{dt} \left(J_h^{-1} G^T \right)\right) \\ \tilde{N} &= N_o + G J_h^{-T} N_f \\ F &= G J_h^{-T} \tau. \endaligned</amsmath>
These same equations can be applied to a large number of other robotic systems by choosing <amsmath>G</amsmath> and <amsmath>J_h</amsmath> appropriately.
For redundant and/or nonmanipulable robot systems, the hand Jacobian is not invertible, resulting in a more complicated derivation of the equations of motion. For redundant systems, the constraints can be extended to the form <amsmath> \underbrace{\begin{bmatrix} J_h \\ K_h \end{bmatrix}}_{\bar J_h} \dot\theta = \underbrace{\begin{bmatrix} G^T & 0 \\ 0 & I \end{bmatrix}}_{\bar G^T} \begin{bmatrix} \dot x \\ v_N \end{bmatrix},</amsmath>
where the rows of <amsmath>K_h</amsmath> span the null space of <amsmath>J_h</amsmath>, and <amsmath>v_N</amsmath> represents the
internal motions of the system. For nonmanipulable systems, we choose a matrix <amsmath>H</amsmath> which spans the space of allowable object trajectories and write the constraints as <amsmath> J_h \dot\theta = \underbrace{G^T H}_{\bar G^T} w,</amsmath>
where <amsmath>\dot x = H(q) w</amsmath> represents the object velocity. In both the redundant and nonmanipulable cases, the augmented form of the constraints can be used to derive the equations of motion and put them into the standard form given above.
The kinematics of tendon-driven systems are described in terms of a set of extension functions, <amsmath>h_i:Q \to {\mathbb R}</amsmath>, which measure the displacement of the tendons as a function of the joint angles of the system. If a vector of tendon forces <amsmath>f \in {\mathbb R}^p</amsmath> is applied at the ends of the tendons, the resulting joint torques are given by <amsmath> \tau = P(\theta) f,</amsmath>
where <amsmath>P(\theta) \in {\mathbb R}^{n \times p}</amsmath> is the
coupling matrix: <amsmath> P(\theta) = \frac{\partial h}{\partial \theta}^T(\theta).</amsmath>
A tendon system is said to be force-closure at a point <amsmath>\theta</amsmath> if for every vector of joint torques, <amsmath>\tau</amsmath>, there exists a set of tendon forces which will generate those torques. The equations of motion for a constrained robot system are described in terms of the quantities <amsmath>\tilde M(q)</amsmath>, <amsmath>\tilde C(q, \dot q)</amsmath>, and <amsmath>\tilde N(q, \dot q)</amsmath>. When correctly defined, these quantities satisfy the following properties: <amsmath>\tilde M(q)</amsmath> is symmetric and positive definite, and <amsmath>\dot{\tilde M}(q) - 2 \tilde C</amsmath> is a skew-symmetric matrix. A control law for the finger torques can be chosen of the form <amsmath> \tau = J^T G^+ F + J^T f_N,</amsmath>
where <amsmath>F</amsmath> is the generalized force in object coordinates (determined by the control law) and <amsmath>f_N</amsmath> is an internal force. The internal forces must be chosen so as to ensure that all contact forces remain inside the appropriate friction cone, so that the fingers satisfy the fundamental grasp constraint at all times.
Let $k$ be a field. What is an explicit power series $f \in k[[t]]$ that is transcendental over $k[t]$?
I am looking for elementary example (so there should be a proof of transcendence that does not use any big machinery).
If $k$ has characteristic zero, then $\displaystyle e^t = \sum_{n \ge 0} \frac{t^n}{n!}$ is certainly transcendental over $k[t]$; the proof is essentially by repeated formal differentiation of any purported algebraic relation satisfied by $e^t$.
Edit: Let me fill in a few details. Given a polynomial $P$ in $e^t$ of degree $d$ where each coefficient is a polynomial in $k[t]$ of degree at most $m$, the possible terms that appear in any formal derivative of $P$ lie in a vector space of dimension $(m+1)(d+1)$, so by taking at least $(m+1)(d+1)$ formal derivatives we obtain too many linear relationships between the terms $t^k e^{nt}$. The coefficient of $e^{dt}$ in particular eventually dominates all other coefficients.
Eisenstein proved (actually, stated) in 1852 that if $f=\sum a_n z^n$ is an algebraic power series with rational coefficients, there exist positive integers $A$ and $B$ such that $A a_n B^n$ are integers for all $n$. In particular, as Eisenstein himself remarks, only finitely many prime numbers appear in the denominators of the coefficients of $f$. For example, $e^z$, $\log(1+z)$, etc., are transcendental.
How about $\sum t^{n!}$? Doesn't a "sea-of-zeroes" argument show it can't be algebraic?
Coming back to the lacunary series, I would prefer the series $f(z)=\sum_{k\ge0}z^{d^k}$, where $d>1$ is an integer, because it is the classical example in Mahler's method; this function satisfies the functional equation $f(z^d)=f(z)-z$. I simply copy Ku. Nishioka's argument from her book "Mahler functions and transcendence" (Theorem 1.1.2). Assume that $f(z)$ is algebraic over $\mathbb C(z)$, hence satisfies an *irreducible* equation $f(z)^n+a_{n-1}(z)f(z)^{n-1}+\dots+a_0(z)=0$ where the coefficients $a_j(z)\in\mathbb C(z)$. Substituting $z^d$ for $z$ and using $f(z^d)=f(z)-z$ we obtain $f(z)^n+(-nz+a_{n-1}(z^d))f(z)^{n-1}+\dots=0$. The left-hand sides of both polynomial relations for $f(z)$ must coincide because of the irreducibility. This in particular implies that $a_{n-1}(z)=-nz+a_{n-1}(z^d)$. Letting $a_{n-1}(z)=a(z)/b(z)$ where $a$ and $b$ are two coprime polynomials we see that $a(z)b(z^d)=-nzb(z)b(z^d)+a(z^d)b(z)$. Since $a(z^d)$ and $b(z^d)$ are coprime, $b(z^d)$ must divide $b(z)$. This is possible only if $\deg b(z)=0$, that is, $b(z)=b$ is a nonzero constant. Then $a(z)=-bnz+a(z^d)$ and comparing the degrees of both sides we see that $a(z)$ is constant as well, hence $nz=0$, a contradiction.
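The functional equation $f(z^d)=f(z)-z$ can be sanity-checked on truncated series; a quick sketch in Python (my own addition, not part of the original answer):

```python
N = 64   # truncation degree
d = 2

def f_coeffs(n):
    """Coefficients of f(z) = sum_{k>=0} z^(d^k), truncated below degree n."""
    c = [0] * n
    e = 1          # d^0 = 1
    while e < n:
        c[e] = 1
        e *= d
    return c

c = f_coeffs(N)

# truncation of f(z^d): the coefficient of z^(d*j) is c[j]
lhs = [0] * N
for j, cj in enumerate(c):
    if d * j < N:
        lhs[d * j] = cj

rhs = c[:]
rhs[1] -= 1    # f(z) - z removes the k = 0 term z^1

# the functional equation f(z^d) = f(z) - z holds on the truncations
assert lhs == rhs
```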
Over the rationals, every power series with integer coefficients whose coefficient sequence is not eventually periodic is transcendental. Over $\mathbb F_p$, a power series is algebraic iff the sequence of coefficients is $p$-automatic; there is an article by J.-P. Allouche on transcendence of formal power series with this information.
Introduction¶
While the current hype about Deep Learning and Neural Networks is understandable given the vast number of cases where ANNs clearly outperformed every other algorithm, a lot of people think (myself included) that Neural Networks won't be the ONE Machine Learning algorithm that will solve every problem one day. Nevertheless, the Neural Network framework offers a lot of powerful tools that have proven themselves worthy of their popularity. Today I want to present a Feedforward Network based approach that I was having on my mind for some time now and finally managed to test on Kaggle's Diamonds dataset.
Idea behind the network¶
Two of the major critique points of Neural Networks are their cumbersome training process that often won't converge to a reasonable solution plus the reputation of being a hardly interpretable black-box algorithm. To solve these two issues, I started with a plain Linear Model:
$$y = X\beta$$
If you think about it, this model could easily be represented by a Feedforward Neural Network with only the input layer and a one-neuron linear output layer. The bias neuron takes the role of the intercept and the weights connecting the input neurons with the output neuron are simply the regression coefficients - nothing spectacular so far. Linear Models are interpretable and easily trained with most common optimizers, basically the things that Neural Networks lack. However, Linear Models will have trouble dealing with non-linear data that is often encountered in application. Thus, it would be great to have a model that is interpretable, easily trained, yet able to deal with non-linear data as well.
Introducing semi-parametric Neural Networks¶
If you are familiar with Linear Regression notation, you have probably noticed that I had left out the error term $\epsilon$ which covers the deviance of $X\beta$ from the actual value $y$. The correct model would rather look like this:
$$y = X\beta + \epsilon$$
and, roughly speaking, all parts of $y$ that the linear term $X\beta$ cannot explain will fall into $\epsilon$. This includes non-linearities which could be explained by some non-linear function $f(X)$. There exists a lot of literature on so-called semi-parametric models that will try to model the non-linear parts in $\epsilon$ by including $f(X)$ as well: $$y = X\beta + f(X) + \tilde{\epsilon}$$
where $\tilde{\epsilon}$ is the error that is left after everything that could be explained through functions of $X$ has been included in the model. The only problem that has to be solved at this point is the selection of $f(\cdot)$ which will be chosen arbitrarily in many cases. This is where Neural Networks can shine, as they have been proven to be universal approximators that can approximate any function $f(\cdot)$ given enough hidden layers (one sufficiently large hidden layer is enough to cover the space of continuous functions) and a few other conditions that are almost always met in practical problems.
Knowing this it looks reasonable to model $X\beta$ by the trivial model mentioned above and the non-linear part $f(X)$ by another Neural Network with one larger hidden layer and then add the output of both Networks:
$$y = LinearNetwork(X) + LargeOneHiddenLayerNetwork(X) + \tilde{\epsilon}$$
If the output of the non-parametric Neural Network is small compared to the linear component, the weights of the linear network are highly interpretable as the actual linear effects of each input $X_i$.
Additionally, adding up two Feedforward Neural Networks in this fashion creates yet another, larger Feedforward Neural Network that can be trained as a whole. Using the keras package in Python it is really simple to create, train and use such a structure and here is how I did it:
import pandas as pd
import numpy as np
from copy import deepcopy
import keras
from keras.layers import *
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.linear_model import Ridge
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
df = pd.read_csv("diamonds.csv", index_col = 0)
df.head()
   carat      cut color clarity  depth  table  price     x     y     z
1   0.23    Ideal     E     SI2   61.5   55.0    326  3.95  3.98  2.43
2   0.21  Premium     E     SI1   59.8   61.0    326  3.89  3.84  2.31
3   0.23     Good     E     VS1   56.9   65.0    327  4.05  4.07  2.31
4   0.29  Premium     I     VS2   62.4   58.0    334  4.20  4.23  2.63
5   0.31     Good     J     SI2   63.3   58.0    335  4.34  4.35  2.75
df.isnull().any()
carat      False
cut        False
color      False
clarity    False
depth      False
table      False
price      False
x          False
y          False
z          False
dtype: bool
The dataset has about 50,000 observations and no missing values. This makes it ideal for testing algorithms as there won't be any bias through imputation. I haven't checked for outliers in this example but to show the concept of the proposed model that won't be as important.
"cut", "color" and "clarity" are categorical variables, so they have to be transformed into dummies:
dummified = []
dummy_list = ["cut", "color", "clarity"]
float_list = df.columns.tolist()
float_list.remove("price")
for col in dummy_list:
float_list.remove(col)
for var in dummy_list:
dummies = pd.get_dummies(df[var])
dummies.columns = [var + "_" + colname for colname in dummies.columns.tolist()]
dummified.append(deepcopy(dummies))
df = pd.concat([df.drop(dummy_list,1), pd.concat(dummified,1)],1)
Since there is one column for each category, an intercept in the linear model would be redundant (see some Stats books on Linear Regression). Transferring this to the linear component of the Neural Network, the bias neuron in the respective layer will be omitted.
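The redundancy is easy to see on a toy example (my own illustration, not part of the original notebook): with one dummy per category, the dummy columns already sum to the constant column that an intercept would add.

```python
import pandas as pd

# toy categorical column, one dummy per category (no category dropped)
toy = pd.DataFrame({"cut": ["Ideal", "Premium", "Good", "Ideal"]})
dummies = pd.get_dummies(toy["cut"])

# every row of the dummy block sums to 1, so the dummies span the constant
# column -- adding an intercept would make the design matrix rank-deficient
assert (dummies.sum(axis=1) == 1).all()
assert set(dummies.columns) == {"Good", "Ideal", "Premium"}
```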
"price" will be used as the target variable:
X = df.drop("price",1)
y = df["price"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=123)
Now for scaling the variables. This is important as there will be a lot of regularization in the model, which is affected by the variable scaling. To keep the model predictions inside the range of the training data, I applied a MinMaxScaler on the target and a StandardScaler (z-transformation) on the numerical input variables; 1.96 is the 97.5% quantile of the standard normal distribution (roughly two standard deviations from 0), but the exact range is probably of minor importance here.
scaler = MinMaxScaler((-1.96,1.96))
y_train = scaler.fit_transform(y_train.values.reshape(-1,1))
scaler_X = StandardScaler()
X_train.loc[:,float_list] = scaler_X.fit_transform(
X_train.loc[:,float_list])
X_test.loc[:,float_list] = scaler_X.transform(
X_test.loc[:,float_list])
Building and fitting the model¶
As mentioned before, the training process of a Neural Network might end in a fairly bad solution. This is often caused by unlucky random initial weights, and there are endless amounts of papers on how to get good starting weights. Since the linear part of this particular network is basically just a Linear Regression model in disguise, I fitted a linear model first and then initialized the respective weights in the network with the coefficients of the Linear (Ridge) Regression. It might be reasonable to consider a standard OLS model instead of Ridge here, but to generalize this approach to high-dimensional data as well I stuck with Ridge.
(again, since the relation between categories and dummy columns is 1-on-1, an intercept/bias neuron would be redundant)
ridge_model = Ridge(fit_intercept = False)
ridge_model.fit(X_train, y_train)
Ridge(alpha=1.0, copy_X=True, fit_intercept=False, max_iter=None, normalize=False, random_state=None, solver='auto', tol=0.001)
#this function transfers the coefficients of the Ridge model to the weights of the linear component of the network
#during intialization
def ridge_coef_init(shape, dtype=None):
return ridge_model.coef_.reshape(shape)
The initial weights in the non-linear part of the model will be initialized as all-zeroes. The target variable will be completely explained by the linear component at first. This and the strong regularization should increase the chance that the model will converge to a solution where the non-linear component explains as little as possible, making the interpretation of the linear weights as simple linear effects much more reliable.
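That the zero-initialized non-linear branch leaves the initial prediction equal to the pure Ridge fit can be checked with a tiny numpy forward pass (a sketch of the idea, not the notebook's keras code):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(8, 5)          # toy input batch
w_lin = rng.randn(5, 1)      # "ridge" weights transferred to the linear path

# non-linear path with all-zero weights and biases (200 hidden ReLU units)
W1, b1 = np.zeros((5, 200)), np.zeros(200)
W2, b2 = np.zeros((200, 1)), np.zeros(1)

hidden = np.maximum(X @ W1 + b1, 0.0)   # ReLU
nonlinear = hidden @ W2 + b2

output = X @ w_lin + nonlinear
# at initialization the network output equals the pure linear model
assert np.allclose(nonlinear, 0.0)
assert np.allclose(output, X @ w_lin)
```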
inputs = Input(shape = (X.shape[1],))
lin_layer_op = Dense(1, activation = "linear", kernel_regularizer = keras.regularizers.l2(1.0),
use_bias = False, kernel_initializer = ridge_coef_init)(inputs)
nonp_layer = Dense(200, activation = "relu", kernel_regularizer = keras.regularizers.l2(2.),
bias_regularizer = keras.regularizers.l2(2.),
kernel_initializer = keras.initializers.Zeros(),
bias_initializer = keras.initializers.Zeros())(inputs)
nonp_layer_op = Dense(1, activation = "linear",
kernel_initializer = keras.initializers.Zeros(),
kernel_regularizer = keras.regularizers.l2(.5),
bias_regularizer = keras.regularizers.l2(.5),
bias_initializer = keras.initializers.Zeros())(nonp_layer)
concat_layer = Add()([lin_layer_op, nonp_layer_op])
nn_model = keras.Model(inputs,concat_layer)
np.random.seed(123)
nn_model.compile(loss = "mean_squared_error", optimizer = "adam")
The model structure can be plotted with keras and matplotlib:
from keras.utils import plot_model
plot_model(nn_model, to_file="model.png")
img=mpimg.imread('model.png')
plt.figure(figsize = (6,8))
imgplot = plt.imshow(img)
plt.show()
To prevent the model from both overfitting and convergence to a rather non-parametric solution, the training process is kept short and highly sensitive to an increase in validation error:
nn_model.fit(X_train.as_matrix(), y_train, batch_size = 5, epochs = 500, shuffle = True, validation_split=0.1,
callbacks = [keras.callbacks.EarlyStopping(monitor='val_loss')])
Train on 36409 samples, validate on 4046 samples Epoch 1/500 36409/36409 [==============================] - 9s 243us/step - loss: 0.8950 - val_loss: 0.6582 Epoch 2/500 36409/36409 [==============================] - 8s 232us/step - loss: 0.6561 - val_loss: 0.6580 Epoch 3/500 36409/36409 [==============================] - 10s 267us/step - loss: 0.6562 - val_loss: 0.6581 <keras.callbacks.History at 0x7f6d58f99f50>
pred_nn = nn_model.predict(X_test)
pred_nn = scaler.inverse_transform(pred_nn)
pred_ridge = scaler.inverse_transform(ridge_model.predict(X_test))
print np.sqrt(np.mean((pred_ridge - y_test.as_matrix())**2))
5517.820110113087
Now the neural network:
print np.sqrt(np.mean((pred_nn - y_test.as_matrix())**2))
5082.426707283658
That's a good increase in accuracy. However, we almost certainly lost some interpretability to the non-linear component so it makes sense to compare the difference between the coefficients in the Ridge model and the weights in the linear part of the model:
ridge_weights = ridge_model.coef_.reshape(26,1)
neural_weights = nn_model.layers[2].get_weights()[0]
all_weights = np.concatenate([neural_weights, ridge_weights ])
plt.figure(figsize = (17,8))
plt.bar(range(11),neural_weights[:11,:])
plt.bar(range(11),ridge_weights[:11,:], alpha = 0.35)
plt.xticks(range(11), X.columns.tolist()[:11])
plt.xlim(-0.5,11.5)
plt.ylim(np.min(all_weights)-0.25, np.max(all_weights)+0.25)
plt.legend(["Neural Network", "Ridge"])
plt.plot(np.arange(-1,12,step=0.01), [0.]*len(np.arange(-1,12,step=0.01)), color = "black", linewidth = 0.75)
[<matplotlib.lines.Line2D at 0x7f6d58a47550>]
plt.figure(figsize = (17,8))
plt.bar(range(15),neural_weights[11:26,:])
plt.bar(range(15),ridge_weights[11:26,:].reshape(15,1), alpha = 0.35)
plt.xticks(range(15), X.columns.tolist()[11:26])
plt.xlim(-0.5,14.5)
plt.ylim(np.min(all_weights)-0.25, np.max(all_weights)+0.25)
plt.legend(["Neural Network", "Ridge"])
plt.plot(np.arange(-1,15,step=0.01), [0.]*len(np.arange(-1,15,step=0.01)), color = "black", linewidth = 0.75)
[<matplotlib.lines.Line2D at 0x7f6d58fdf990>]
print np.sum(np.abs(neural_weights))/np.sum(np.abs(ridge_weights))
0.16889941069140849
The linear component lost a lot of explanation power compared to the original linear-only model. What is interesting nonetheless is the strong increase of the size effect (the "x-y-z" variables) in the Neural Network. This could mean that the original Ridge model did not capture the effect of size well (it even implies a negative effect for higher x-size) - it is reasonable to assume that a larger diamond has higher value. Adding a non-linear component to the model allowed to capture this rather logical relationship correctly.
On the other hand, the relation between clarity and price, for example, was clearly distorted: clarity_I1 is the worst category according to the explanation of the dataset on Kaggle, while the linear component of the network shows other clarity values having a far more negative effect on price. The effect of clarity is probably non-linear and thus has likely been captured by the non-linear part of the model.
Again, there is a clear trade-off between accuracy and interpretability. To increase accuracy at the cost of interpretability, it could be considered to decrease the regularization factors in the non-parametric part of the network (and vice-versa).
Comparison with another non-linear model¶
I was curious to see how the neural network would perform compared to another non-linear/non-parametric model and decided to test on Random Forest:
from sklearn.ensemble import RandomForestRegressor
np.random.seed(123)
rf_model = RandomForestRegressor(n_jobs = -1)
rf_model.fit(X_train,y_train)
pred_rf = rf_model.predict(X_test)
pred_rf = scaler.inverse_transform(pred_rf.reshape(-1,1))
print np.sqrt(np.mean((pred_rf - y_test.as_matrix())**2))
5613.132134068115

Conclusion¶
Besides being somewhat overhyped in my opinion, Neural Network models have their place in the realm of predictive modeling and Machine Learning. Given the right structure, and constructed with some mindfulness and care, they show remarkable performance that can outperform other popular approaches. In this particular example the Feedforward Neural Network started as a Ridge model that, over the course of the training process, was able to capture more and more non-linear relations in the data. Finally it converged to a model that outperformed a plain linear model, at the cost of reduced interpretability.
I am quite confident this approach can be improved further by putting more effort into building a good network structure - if I find a better way that preserves more of the interpretability of the linear model, I will write another post in the future.
It is caused by the wettability of the surface of the nanoparticles. For naked particles this will be determined by the properties of gold but, as Daniel mentioned, it is likely that there is a capping agent, in which case the wettability will be governed by the properties of the capping agent. From this point on I will just call this the solid.
Collecting at the interface
Depending on the surface energies between the solid and water (sw), water and oil (wo) and solid and oil (so) you can have 2 situations both with their own extreme case. We can understand which of the three by looking at the famous Young's equation$^1$ :$$\tag{1} \gamma_{so}-\gamma_{sw}=\gamma_{wo} \cos \theta $$where $\theta$ is called the contact angle and in this case I have defined it through the water phase as shown in the picture below.
If the surface energies are such that a high contact angle occurs, then the solid particles will sit mostly on the oil side of the interface (situation 1). But given the surface energies it is still beneficial (energetically) to have some contact with the water, hence the angle. If the surface energies are such that equation (1) would require $\cos \theta < -1$, there is no longer a contact angle and it is preferable for the solid particles to be completely submerged in the oil.
Situation 2 is exactly the opposite, the solid 'likes' the water resulting in a low contact angle. Here again an extreme case can occur in which the equation requires $\cos \theta > 1$ which means that the particles will fully submerge in the water.
Because many liquid-liquid-solid combinations have at least some contact angle (i.e. not 0 and not 180) the particles tend to collect at the interface of the two fluids.
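Young's equation (1) can be turned into a small classifier of where a particle sits; a sketch with hypothetical surface-energy values (the function name and the numbers are my own, for illustration only):

```python
import math

def interface_position(gamma_so, gamma_sw, gamma_wo):
    """Classify a particle's position from Young's equation (1):
    cos(theta) = (gamma_so - gamma_sw) / gamma_wo,
    with theta measured through the water phase, as in the answer above."""
    c = (gamma_so - gamma_sw) / gamma_wo
    if c > 1:
        return "submerged in water", None   # no angle solves equation (1)
    if c < -1:
        return "submerged in oil", None
    return "at interface", math.degrees(math.acos(c))

# hypothetical surface energies (arbitrary units)
pos, theta = interface_position(50.0, 25.0, 50.0)   # cos(theta) = 0.5
assert pos == "at interface" and abs(theta - 60.0) < 1e-9
assert interface_position(10.0, 80.0, 50.0) == ("submerged in oil", None)
assert interface_position(80.0, 10.0, 50.0) == ("submerged in water", None)
```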
Agglomeration
The reasoning for whether agglomeration will occur is in fact similar. Here again you look at the surface energies and determine whether it is energetically favorable to reduce the solid-liquid contact area by making solid-solid contact, or whether it is favorable to keep the solid in contact with the liquid. Capping agents are typically designed to repel one another, thus making sure that agglomeration doesn't occur (because it is usually not desired).
1: note that the original essay doesn't contain the equation in mathematical form, only in wording
Preliminary remarks.
As Danu writes in his comment, the physics of the other four generators has to do with spacetime translations, one for each spatial direction and one for time. But how do we see this explicitly in the math behind the somewhat odd-looking presentation of the Poincare group and its Lie algebra that Hall discusses?
First, recall that any $d+1$-dimensional Lorentz transformation is a linear transformation on $\mathbb R^{d+1}$, so it can be represented by multiplication by a $(d+1)\times(d+1)$ matrix $\Lambda$.
Second, and most crucially, recall that translations of $\mathbb R^{d+1}$ are
not linear transformations; there is no way to write spacetime translation as multiplication by a matrix.
However, here's the really cool thing. If we embed a
copy of $(d+1)$-dimensional spacetime into the vector space $\mathbb R^{d+2}$, namely into a space with one higher dimension, then we can implement translations as linear transformations. Here's how it works.
The main construction.
For each $x\in \mathbb R^{d+1}$, we associate an element of $\mathbb R^{d+2}$ as follows:\begin{align} x \mapsto \begin{pmatrix} x \\ 1 \\ \end{pmatrix}\end{align}Now, for each Lorentz transformation $\Lambda\in \mathrm O(d,1)$, and for each spacetime translation characterized by a vector $a\in\mathbb R^{d+1}$, we form the matrix\begin{align} \begin{pmatrix} \Lambda & a \\ 0 & 1 \\ \end{pmatrix} \tag{$\star$}\end{align}and we notice what this matrix does to the embedded copies of points in $\mathbb R^{d+1}$;\begin{align} \begin{pmatrix} \Lambda & a \\ 0 & 1 \\ \end{pmatrix} \begin{pmatrix} x \\ 1 \\ \end{pmatrix} = \begin{pmatrix} \Lambda x+a \\ 1 \\ \end{pmatrix}\end{align}Whoah! That's really cool! What has happened here is that when we augment the dimension of spacetime by one with the embedding given above, and when we correspondingly embed Lorentz transformations and translations appropriately into square matrices of dimension $d+2$, then we actually
do get a way of representing both Lorentz transformations and translations as linear transformations on $\mathbb R^{d+2}$ that act in precisely the correct way on the copy of Minkowski space embedded in $\mathbb R^{d+2}$!
In other words, the Poincare group in $d+1$ dimensions can be thought of as the set of all $(d+2)\times(d+2)$ matrices of the form $(\star)$ where $\Lambda\in \mathrm O(d,1)$ and $a\in\mathbb R^{d+1}$.
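A quick numerical check of this matrix representation (a sketch; the particular boost and translation values are arbitrary):

```python
import numpy as np

def poincare_matrix(Lam, a):
    """Embed (Lambda, a) as the (d+2)x(d+2) block matrix of eq. (star)."""
    d1 = Lam.shape[0]                      # d+1
    M = np.zeros((d1 + 1, d1 + 1))
    M[:d1, :d1] = Lam
    M[:d1, d1] = a
    M[d1, d1] = 1.0
    return M

# a 1+1-dimensional boost with rapidity 0.3, plus a translation
phi = 0.3
Lam = np.array([[np.cosh(phi), np.sinh(phi)],
                [np.sinh(phi), np.cosh(phi)]])
a = np.array([2.0, -1.0])

M = poincare_matrix(Lam, a)
x = np.array([0.5, 1.5])
x_emb = np.append(x, 1.0)

# the embedded point transforms as x -> Lambda x + a
assert np.allclose(M @ x_emb, np.append(Lam @ x + a, 1.0))

# matrix multiplication reproduces the semidirect-product rule:
# (L1, a1)(L2, a2) = (L1 L2, L1 a2 + a1)
a2 = np.array([0.0, 4.0])
assert np.allclose(M @ poincare_matrix(Lam, a2),
                   poincare_matrix(Lam @ Lam, Lam @ a2 + a))
```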
What about the Lie algebra?
A natural question then arises: what do the Lie algebra elements look like as matrices when we represent the Lie group elements this way? Well, a quick standard computation will show you that the Lie algebra of the Poincare group can, in this representation, be regarded as all matrices of the form\begin{align} \begin{pmatrix} X & \epsilon \\ 0 & 0 \\ \end{pmatrix} \tag{$\star\star$}\end{align}where $X\in\mathfrak{so}(d,1)$ and $\epsilon\in \mathbb R^{d+1}$, precisely as Hall indicates. But from the remarks above, we see clearly that the parameter $\epsilon$ precisely corresponds to the generators that span the subspace of the Poincare algebra that yield spacetime translations.
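Since the pure-translation generator (the case $X = 0$) is nilpotent, its matrix exponential terminates after two terms and can be checked exactly (my own illustration):

```python
import numpy as np

# Lie algebra element with X = 0 and translation parameter epsilon:
# the matrix is nilpotent (A @ A = 0), so exp(A) = I + A exactly
d1 = 3                           # d+1 spacetime dimensions
eps = np.array([1.0, -2.0, 0.5])
A = np.zeros((d1 + 1, d1 + 1))
A[:d1, d1] = eps

assert np.allclose(A @ A, 0.0)   # exponential series terminates
expA = np.eye(d1 + 1) + A

# exp(A) is exactly the pure-translation group element (Lambda = identity)
x = np.array([3.0, 1.0, 2.0])
assert np.allclose(expA @ np.append(x, 1.0), np.append(x + eps, 1.0))
```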
I was wondering how Alcubierre derived the metric for the warp drive? Sources have said it's based on Einstein's field equations, but how did he go from this to the metric?
Alcubierre started with the metric and used the Einstein equation to calculate what stress energy tensor was required.
The Einstein equation tells us:
$$ R_{\mu\nu} - \tfrac{1}{2}R g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu} $$
Normally we start with a known stress-energy tensor $T_{\mu\nu}$ and we're trying to solve the equation to find the metric. This is in general exceedingly hard. However if you start with a metric it's easy to calculate the Ricci tensor and scalar so the left hand side of the equation is easy to calculate, and therefore the matching stress-energy tensor is easy to calculate.
The only trouble is that doing things this way round will usually produce an unphysical stress-energy tensor, e.g. one that involves exotic matter. And indeed this is exactly what happens for the Alcubierre metric - it requires a ring of exotic matter.
The metric for the Alcubierre warp drive was constructed by considering the properties that it should obey, and not the matter source (which is why it's fairly unphysical).
The two ingredients used in it are :
- A bump function, so that the warp drive is localized in a specific region (and that bump function moves, so that the inside may move along with it)
- A widening of the lightcone in that bump function, so that, compared to the outside, the speed of light is "larger".
Given these two characteristics, we get the properties we want for a warp bubble. It is possible to also get variants by changing them, for instance the Krasnikov tunnel does not have a travelling bump function, but still has a widening of the light cone. This is why it is "static", compared to the warp drive.
I was not quite sure what the right SE for this was, so I posted this also here on DSP. Please tell me which one to remove :)
Problem statement
I have a few hundred unrelated time series, say $P_i(t)$, with $i=1,2,...,N$. The sample times are all unequally spaced.
It is known that during certain (unknown) intervals, the trend in some of the time series is well-approximated by
$$ P_i(t) \approx \alpha - \beta \left(t_f-t\right)^\gamma\left(1 + \delta\cos\left(\omega\ln\left(t_f-t\right)+\phi\right)\right) $$
with
$$ t_0 \leq t < t_f\\ \alpha,\beta > 0\\ 0\leq\gamma,\delta\leq1\\ 0\leq\phi<2\pi $$
and $\omega>0$ usually "small", but not really restricted.
An example of what such a function looks like:
After this interval ($t > t_f$), there is nearly always a rapid decline, after which the signal continues as it did before the interval ($t<t_0$). Of course, each series also has zero-mean, normally distributed noise of varying strength superimposed on it, hence the approximate sign.
My task:
to determine the intervals where this behavior occurs, and find best-estimates for all the parameters listed above.

My first stab at it
Since each of these time series can be quite long and voluminous, and I have quite many series to analyze (and that number will likely grow over time), visual inspection is out of the question. Moreover: often, it is quite easy to miss this pattern by just looking at the signal.
The pattern consists of two main components:
- a power-law (the $(t_f-t)^\gamma$-term)
- an oscillation with logarithmically varying period (the $\cos\left(\omega\ln\left(t_f-t\right)\right)$-term)
To detect the power law, I thought of the following procedure:
1. Smooth the signal to get rid of the noise and the periodic component.
2. Take the first two derivatives numerically. Periods of consecutive positive first derivative are a necessary condition for the given power law -- continue analyzing only these periods ($t_0$ estimated).
3. The division of the first derivative by the second makes it possible to estimate $t_f$. The standard deviation and mean of each element's estimate for $t_f$, as well as the demand that $t_f > t$ and $t < t_E$ (the final time at which the first derivative is positive), give a measure of the reliability of this estimate.
4. If it is reliable enough, it is fairly straightforward to backtrack everything and come up with estimates for $\gamma$, $\beta$ and $\alpha$ (in that order).
To detect the oscillating component, I came up with the following:
1. Take an initial $t_f^{\text{trial}} > t_B$, i.e. some time after the beginning of the time series.
2. Compute $\ln(t_f^{\text{trial}}-t)$, from $t=t_B$ to just before $t_f^{\text{trial}}$.
3. Compute the Fourier transform of the transformed time series $P_T(t) = P_i(\ln(t_f^{\text{trial}}-t))$ (via Lomb/Scargle, because the new sample times are unequally and logarithmically spaced).
4. Determine all the peaks in the frequency domain and save them.
5. Repeat from the top with $t_f^{\text{trial}}\leftarrow t_f^{\text{trial}} + \Delta t$.

The progression of the peaks will roughly follow a parabola-shaped path in the frequency domain if there is a periodic component somewhere. The "right" $t_f$ will be found when the maximum power in the frequency domain has been found. For all peaks and $t_f$ thus found, find the corresponding interval that maximizes the power in the peak ($t_0$ estimated). With all this information, it is fairly straightforward to come up with initial estimates of $t_f$, $\omega$ and $\phi$.
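The log-transform and Lomb/Scargle part of the procedure above can be sketched with scipy's implementation (a toy, noise-free illustration; all parameter values are my own assumptions):

```python
import numpy as np
from scipy.signal import lombscargle

omega_true, t_f = 6.0, 10.0
rng = np.random.RandomState(2)
t = np.sort(rng.uniform(0.0, 9.9, 800))     # unequally spaced sample times
y = np.cos(omega_true * np.log(t_f - t))    # pure log-periodic component

# transform the time axis so the oscillation becomes an ordinary sinusoid,
# then run Lomb/Scargle on the (still unequally spaced) new axis
u = np.log(t_f - t)
freqs = np.linspace(0.5, 20.0, 4000)        # angular frequencies to scan
power = lombscargle(u, y - y.mean(), freqs)

omega_est = freqs[np.argmax(power)]
# the dominant peak sits near the true angular frequency
assert abs(omega_est - omega_true) < 0.5
```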
Then, these two approaches are to be combined:
- The estimates for $t_0$ and $t_f$ from both approaches will generally differ; determine the smallest overlapping interval $(t_0, t_f)$.
- Throw the data from this interval, as well as all the initial estimates, into a non-linear least squares fitter to improve the fit.
- Compute a couple of measures related to the goodness of fit, which will accompany the results.

Why I'm looking for another approach
Well, that all sure
sounds very nice, but of course I wouldn't be asking a question here if it all worked as well as it sounds :)
The method I use to detect the presence and parameters of the power law:
- seems to be extremely sensitive to noise
- I don't know how to determine automatically what a "good enough" smoothing is
- All smoothing algorithms I have tried are not very good at removing the periodic component, throwing all the estimates way off. Moreover, the (log-)periodic component can make the derivatives negative.
I could take the log of the data and detect linearity, but that suffers from the same problem -- the periodic component (as well as the unknown offset $t_f$) seems to make that unreliable.
The method I use to detect the periodic component:
- is very computationally intensive; Lomb/Scargle is certainly not as fast as an FFT would be. And as I mentioned earlier, each time series can be quite long, so the number of times the Fourier transform needs to be computed can also be quite large
- detecting which peak corresponds to which other peak from one estimate for $t_f$ to the next is rather difficult to automate
- transforming the sample times into (roughly) logarithmically-spaced sampling times makes it very hard to detect the start/end of the interval with decent accuracy
I'm kind of stuck, and I need some new inspiration. Any suggestions?
I have a table with 4 columns and 4 rows, in the last row of which I'm using
multicolumn to have cells 2 columns wide. However, since one is taller than the other, for some reason they are automatically aligned to the middle (vertically speaking) of the cell, not the top. I have tried
p instead of
l in the column designation, putting in whitespace (positive and negative), searching on
TeX.SX, but I haven't found a solution. Here is a MWE:
\documentclass{standalone}
\usepackage{amsmath, tikz}
\begin{document}
\renewcommand\arraystretch{1.5}
\begin{tabular}{l l|l l}
sheaf space & & stack space & \\
\hline
$U\to {\mathcal F}(U)$ & a group & $U\to \widehat {\mathcal F}(U)$ & a groupoid \\
$O(X)^{op} \to \text{Set}$ & open sets to groups & $O(X)^{op}\to \text{Grpd}$ & open sets to groupoids \\[5pt]
\multicolumn{2}{c|}{\parbox{7cm}{\raggedright
if $s_i\in \mathcal F(U_i)$ and $s_j\in \mathcal F(U_j)$ such that $s_i|_{U_i\cap U_j} = s_j|_{U_i\cap U_j}$, then there exists $s\in \mathcal F(U)$ such that $s|_{U_i} = s_i$ and $s|_{U_j} = s_j$.}} &
\multicolumn{2}{c}{\parbox{7cm}{\raggedright
if $s_i\in \widehat{\mathcal F}(U_i)$ and $s_j\in \widehat{\mathcal F}(U_j)$ such that there is an isomorphism $\varphi_{ij}:s_i|_{U_i\cap U_j} \to s_j|_{U_i\cap U_j}$, then there exists $s\in \mathcal F(U)$ and isomorphisms $\varphi_i:s|_{U_i}\to s_i$, $\varphi_j:s|_{U_j}\to s_j$ such that the following diagram commutes:
\[
\begin{tikzpicture}
\node (si) at (0,0) {$s_i|_{U_i\cap U_j}$};
\node (sj) at (3,0) {$s_j|_{U_i\cap U_j}$};
\node (s) at (1.5,1.5) {$s$};
\draw[->] (si) to node[above] {$\varphi_{ij}$} (sj);
\draw[->] (s) to node[auto,swap] {$\varphi_i|_{U_i\cap U_j}$} (si);
\draw[->] (s) to node[auto] {$\varphi_j|_{U_i\cap U_j}$} (sj);
\end{tikzpicture}
\]}}
\end{tabular}
\renewcommand\arraystretch{1}
\end{document}
This is the output:
I would like the text in the bottom left cell to be aligned to the top of its cell, but I can't figure out how to do it.
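For reference, one common way to get top alignment (not verified against the full MWE above) is the optional position argument of \parbox, which aligns the box on its first line:

```latex
% [t] top-aligns each \parbox within its table row
\multicolumn{2}{c|}{\parbox[t]{7cm}{\raggedright
  if $s_i\in \mathcal F(U_i)$ and $s_j\in \mathcal F(U_j)$ ... }} &
\multicolumn{2}{c}{\parbox[t]{7cm}{\raggedright
  if $s_i\in \widehat{\mathcal F}(U_i)$ ... }}
```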
I'm trying to write the following equation:
\rho = \max \left\{ \frac{|\mathbf{w} \cdot \mathbf{x^{(i)}} + b|}{\|\mathbf{w}\|} \right\}
but the PDF output renders the math operator \max in italic instead of upright. I've tried the same with \log and \cos, and they behave just the same. What am I doing wrong here?
Document class: memoir
Packages: pgf, fontspec, graphicx, microtype, unicode-math, amsmath
The following code reproduces the problem in different systems:
\documentclass{memoir}
\usepackage{unicode-math}
\usepackage{amsmath}
\begin{document}
Yay, a dual problem:
\begin{equation}
\max_{\mathbf{\alpha}} \left\{ \sum_{i=1}^m \alpha{(i)} \right\}
\end{equation}
\end{document}
$ds^2= r^2 (d\theta^2 + \sin^2{\theta}d\phi^2)$
The following is the tetrad basis
$e^{\theta}=r d{\theta} \,\,\,\,\,\,\,\,\,\, e^{\phi}=r \sin{\theta} d{\phi}$
Hence, $de^{\theta}=0 \,\,\,\,\,\, de^{\phi}=r\cos{\theta} d{\theta} \wedge d\phi = \frac{\cot{\theta}}{r} e^{\theta}\wedge e^{\phi}$
Setting the torsion tensor to zero gives: $de^a + \omega^a _b \wedge e^b =0$.
This equation for $a=\theta$ gives $\omega^{\theta}_{\phi}=0$. (I have used $\omega^{\theta}_{\theta}=\omega^{\phi}_{\phi}=0$)
The equation for $\phi$: $\omega^{\phi}_{\theta} \wedge e^{\theta}=\frac{\cot{\theta}}{r} e^{\phi} \wedge e^{\theta} \implies \omega^{\phi}_{\theta}=\frac{\cot{\theta}}{r} e^{\phi}=\cos{\theta} d{\phi}$
$d\omega^{\phi}_{\theta}=-\sin{\theta} d\theta \wedge d\phi$
Hence $R^i_j = d\omega^i_j+ \omega^i_b \wedge \omega^b_j$ gives $R^{\phi}_{\theta}=-\sin{\theta} d\theta \wedge d\phi$ and $R^{\theta}_{\phi}=0$
Writing in terms of components gives $R^{\phi}_{\theta \theta \phi}=-\sin{\theta}$, and $R^{\theta}_{\phi \phi \theta}=0$
However this is wrong. I have done the same problem using Christoffel connections, and the answer which I know to be correct is
$R^{\phi}_{\theta \theta \phi}=-1$, and $R^{\theta}_{\phi \phi \theta}=\sin^2{\theta}$
Could anyone please tell me what I am doing wrong? Any help will be appreciated.
I'm playing around with dynamic programming and need to calculate a multidimensional integral $E[V(W)]$ where we assume $W$ has a log normal distribution. I was looking at the following example in this pdf, section 9.4.3 on page 83.
To give some background (I summarize from the section): The example is from economics and is about asset allocation. Assume $R$ is a return vector of dimension $n$ and is log-normally distributed, i.e. $\log(R) = (\log(R_1),\dots, \log(R_n))$ has a multivariate normal distribution with given mean and covariance matrix, i.e. $\log(R)\sim\mathcal{N}((\mu-\frac{\sigma^2}{2})\Delta t,(\Lambda \Sigma\Lambda)\Delta t)$. The exact structure is not that important. $\Sigma$ is the correlation matrix and $\Lambda$ is simply a diagonal matrix with the standard deviations $\sigma_1,\dots,\sigma_n$ on its diagonal. Using the Cholesky decomposition we can write $\Sigma = LL^T$. Then one has $$\log(R_1) = (\mu_1-\frac{\sigma^2_1}{2})\Delta t + (L_{11}z_1)\sigma_1\sqrt{\Delta t}$$ $$\log(R_2) = (\mu_2-\frac{\sigma^2_2}{2})\Delta t + (L_{21}z_1+ L_{22}z_2)\sigma_2\sqrt{\Delta t}$$ and so on, where the $z_i$ are independent standard normal random variables. So we have
$$R_i=\exp{((\mu_i-\frac{\sigma^2_i}{2})\Delta t+\sigma_i\sqrt{\Delta t}\sum_{j=1}^iL_{ij}z_j)}$$
For simplicity let $\Delta t=1$; then we are interested in the quantity $$ W_{t+1}= W_t\Big(R_f(1-e^Tx_t) + \sum_{i=1}^n\exp{((\mu_i-\frac{\sigma_i^2}{2})+\sigma_i\sum_{j=1}^iL_{ij}z_j)}x_{ti}\Big) $$
where $e$ is an $n$-dimensional vector of ones and $x_t$ is some $n$-dimensional vector. For a given function $V$, the conditional expectation of $V(W_{t+1})$ given $W_t, x_t$ can be calculated using Gauss-Hermite quadrature:
$$\sum_{k_1,\dots,k_n=1}^m w_{k_1}\cdots w_{k_n} V\left(W_t\Big(R_f(1-e^Tx_t) + \sum_{i=1}^n\exp{((\mu_i-\frac{\sigma_i^2}{2})+\sigma_i\sum_{j=1}^iL_{ij}q_{k_j})}x_{ti}\Big)\right) $$ where the $w_{k_i}$ are the Gauss-Hermite weights and the $q_{k_i}$ the corresponding nodes. My question is how the above sum of sums can be efficiently implemented in Python. The exponential part can be precalculated using a cumulative sum if I'm not wrong.
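For what it's worth, the tensor-product rule above can be written down directly. Here is a sketch in Python/NumPy; the function name and argument layout are my own, and note that since $L$ is lower triangular, the "cumulative" inner sums $\sum_{j\le i}L_{ij}z_j$ for all $i$ are exactly the matrix-vector product $Lz$:

```python
import numpy as np
from itertools import product

def expected_V(V, W_t, x_t, mu, sigma, L, R_f, m=7):
    """Tensor-product Gauss-Hermite approximation of E[V(W_{t+1})].

    Hypothetical signature: V is a scalar function, mu and sigma are
    length-n arrays, L is the (lower-triangular) Cholesky factor of the
    correlation matrix, x_t the allocation vector, m nodes per axis."""
    n = len(mu)
    q, w = np.polynomial.hermite.hermgauss(m)  # nodes/weights for weight exp(-x^2)
    q = np.sqrt(2.0) * q                       # rescale so the nodes sample z ~ N(0, 1)
    w = w / np.sqrt(np.pi)                     # weights now sum to 1
    total = 0.0
    for idx in product(range(m), repeat=n):    # all m**n node combinations
        z = q[list(idx)]                       # z_j = q_{k_j}
        # (L @ z)[i] = sum_{j<=i} L[i,j]*z[j] because L is lower triangular
        R = np.exp(mu - 0.5 * sigma**2 + sigma * (L @ z))
        W_next = W_t * (R_f * (1.0 - x_t.sum()) + R @ x_t)
        total += np.prod(w[list(idx)]) * V(W_next)
    return total
```

For moderate $n$ the loop has $m^n$ terms, so for speed one would precompute the node grid and vectorise the body with broadcasting; the structure stays the same.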
I started digging into the history of mode detection after watching Aysylu Greenberg’s Strange Loop talk on benchmarking. She pointed out that the usual benchmarking statistics fail to capture that our timings may actually be samples from multiple distributions, commonly caused by the fact that our systems are comprised of hierarchical caches.
I thought it would be useful to add the detection of this to my favorite benchmarking tool, Hugo Duncan’s Criterium. Not surprisingly, Hugo had already considered this and there’s a note under the TODO section:
I hadn’t heard of using kernel density estimation for multimodal distribution detection so I found the original paper, Using Kernel Density Estimates to Investigate Multimodality (Silverman, 1981). The original paper is a dense 3 pages and my goal with this post is to restate Silverman’s method in a more accessible way. Please excuse anything that seems overly obvious or pedantic and feel encouraged to suggest any modifications that would make it clearer.
What is a mode?
The mode of a distribution is the value that has the highest probability of being observed. Many of us were first exposed to the concept of a mode in a discrete setting. We have a bunch of observations and the mode is just the observation value that occurs most frequently. It’s an elementary exercise in counting. Unfortunately, this method of counting doesn’t transfer well to observations sampled from a continuous distribution because we don’t expect to ever observe the exact same value twice.
What we’re really doing when we count the observations in the discrete case is estimating the probability density function (PDF) of the underlying distribution. The value that has the highest probability of being observed is the one that is the global maximum of the PDF. Looking at it this way, we can see that a necessary step for determining the mode in the continuous case is to first estimate the PDF of the underlying distribution. We’ll come back to how Silverman does this with a technique called kernel density estimation later.
What does it mean to be multimodal?
In the discrete case, we can see that there might be undeniably multiple modes because the counts for two elements might be the same. For instance, if we observe:
$$1,2,2,2,3,4,4,4,5$$
Both 2 and 4 occur thrice, so we have no choice but to say they are both modes. But perhaps we observe something like this:
$$1,1,1,2,2,2,2,3,3,3,4,9,10,10$$
The value 2 occurs more than anything else, so it’s
the mode. But let’s look at the histogram:
That pair of 10’s are out there looking awfully interesting. If these were benchmark timings, we might suspect there’s a significant fraction of calls that go down some different execution path or fall back to a slower level of the cache hierarchy. Counting alone isn’t going to reveal the 10’s because there are even more 1’s and 3’s. Since they’re nestled up right next to the 2’s, we probably will assume that they are just part of the expected variance in performance of the same path that caused all those 2’s.
What we’re really interested in is the local maxima of the PDF because they are the ones that indicate that our underlying distribution may actually be a mixture of several distributions.

Kernel density estimation
Imagine that we make 20 observations and see that they are distributed like this:
We can estimate the underlying PDF by using what is called a kernel density estimate. We replace each observation with some distribution, called the “kernel,” centered at the point. Here’s what it would look like using a normal distribution with standard deviation 1:
If we sum up all these overlapping distributions, we get a reasonable estimate for the underlying continuous PDF:
Note that we made two interesting assumptions here:
We replaced each point with a normal distribution. Silverman’s approach actually relies on some of the nice mathematical properties of the normal distribution, so that’s what we use.
We used a standard deviation of 1. Each normal distribution is wholly specified by a mean and a standard deviation. The mean is the observation we are replacing, but we had to pick some arbitrary standard deviation which defined the width of the kernel.
In the case of the normal distribution, we could just vary the standard deviation to adjust the width, but there is a more general way of stretching the kernel for arbitrary distributions. The kernel density estimate for observations $X_1,X_2,…,X_n$ using a kernel function $K$ is:
$$\hat{f}(x)=\frac{1}{n}\sum\limits_{i=1}^n K(x-X_i)$$
In our case above, $K$ is the PDF for the normal distribution with standard deviation 1. We can stretch the kernel by a factor of $h$ like this:
$$\hat{f}(x, h)=\frac{1}{nh}\sum\limits_{i=1}^n K(\frac{x-X_i}{h})$$
Note that changing $h$ has the exact same effect as changing the standard deviation: it makes the kernel wider and shorter while maintaining an area of 1 under the curve.
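The stretched estimate is straightforward to compute directly. Here is a small Python/NumPy sketch of $\hat{f}(x,h)$ with the standard normal kernel (the function name and argument names are my own):

```python
import numpy as np

def kde(x, observations, h):
    """Gaussian kernel density estimate f_hat(x, h); x is an array
    of evaluation points, observations a 1-D array of samples."""
    u = (x - observations[:, None]) / h              # scaled distances, shape (n, len(x))
    K = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)   # standard normal kernel
    return K.sum(axis=0) / (len(observations) * h)   # (1/nh) * sum_i K((x - X_i)/h)
```

Because each kernel integrates to 1, so does the estimate, for any width $h$.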
Different kernel widths result in different mode counts
The width of the kernel is effectively a smoothing factor. If we choose too large of a width, we just end up with one giant mound that is almost a perfect normal distribution. Here’s what it looks like if we use $h=5$:
Clearly, this has a single maximum.
If we choose too small of a width, we get a very spiky and over-fit estimate of the PDF. Here’s what it looks like with $h = 0.1$:
This PDF has a bunch of local maxima. If we shrink the width small enough, we’ll get $n$ maxima, where $n$ is the number of observations:
The neat thing about using the normal distribution as our kernel is that it has the property that shrinking the width will only introduce new local maxima. Silverman gives a proof of this at the end of Section 2 in the original paper. This means that for every integer $k$, where $1<k<n$, we can find the minimum width $h_k$ such that the kernel density estimate has at most $k$ maxima. Silverman calls these $h_k$ values “critical widths.”
Finding the critical widths
To actually find the critical widths, we need to look at the formula for the kernel density estimate. The PDF for a plain old normal distribution with mean $\mu$ and standard deviation $\sigma$ is:
$$f(x)=\frac{1}{\sigma\sqrt{2\pi}}\mathrm{e}^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
The kernel density estimate with standard deviation $\sigma=1$ for observations $X_1,X_2,…,X_n$ and width $h$ is:
$$\hat{f}(x,h)=\frac{1}{nh}\sum\limits_{i=1}^n \frac{1}{\sqrt{2\pi}}\mathrm{e}^{-\frac{(x-X_i)^2}{2h^2}}$$
For a given $h$, you can find all the local maxima of $\hat{f}$ using your favorite numerical methods. Now we need to find the $h_k$ where new local maxima are introduced. Because of a result that Silverman proved at the end of section 2 in the paper, we know we can use a binary search over a range of $h$ values to find the critical widths at which new maxima show up.
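A sketch of this procedure in Python: count the KDE's local maxima on a fine grid, then bisect on $h$. The function names, the grid resolution, and the initial bracket are my own choices, not Silverman's code; the bisection is justified by the monotonicity result mentioned above.

```python
import numpy as np

def count_modes(obs, h, n_grid=2048):
    """Count strict local maxima of the Gaussian KDE at width h on a grid."""
    grid = np.linspace(obs.min() - 3 * h, obs.max() + 3 * h, n_grid)
    f = np.exp(-0.5 * ((grid - obs[:, None]) / h) ** 2).sum(axis=0)
    return int(np.sum((f[1:-1] > f[:-2]) & (f[1:-1] > f[2:])))

def critical_width(obs, k, lo=1e-3, hi=None, iters=40):
    """Bisect for the smallest width h_k whose KDE has at most k modes.
    Valid because the mode count is non-increasing in h for the
    Gaussian kernel (Silverman, end of Section 2)."""
    if hi is None:
        hi = float(np.ptp(obs)) or 1.0   # heuristic upper bound for the search
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if count_modes(obs, mid) <= k:
            hi = mid                     # still few enough modes: shrink down
        else:
            lo = mid                     # too many modes: widen the kernel
    return hi
```

On a clearly bimodal sample, `critical_width(obs, 1)` returns roughly the width at which the two bumps merge into one.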
Picking which kernel width to use
This is the part of the original paper that I found to be the least clear. It’s pretty dense and makes a number of vague references to the application of techniques from other papers.
We now have a kernel density estimate of the PDF for each number of modes between $1$ and $n$. For each estimate, we’re going to use a statistical test to determine the significance. We want to be parsimonious in our claims that there are additional modes, so we pick the smallest $k$ such that the significance measure of $h_k$ meets some threshold.
Silverman used a smoothed bootstrap procedure to evaluate the significance. Smoothed bootstrapping is bootstrapping with some noise added to the resampled observations. First, we sample from the original set of observations, with replacement, to get $X_{I(i)}$. Then we add noise to get our smoothed $y_i$ values:
$$y_i=\frac{1}{\sqrt{1+h_k^2/\sigma^2}}(X_{I(i)}+h_k \epsilon_i)$$
Where $\sigma$ is the standard deviation of $X_1,X_2,…,X_n$, $h_k$ is the critical width we are testing, and $\epsilon_i$ is a random value sampled from a normal distribution with mean 0 and standard deviation 1.
Once we have these smoothed values, we compute the kernel density estimate of them using $h_k$ and count the modes. If this kernel density estimate doesn’t have more than $k$ modes, we take that as a sign that we have a good critical width. We repeat this many times and use the fraction of simulations where we didn’t find more than $k$ modes as the p-value. In the paper, Silverman does 100 rounds of simulation.
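Putting the pieces together, the significance test can be sketched as follows (the function names, seed handling, and the grid-based mode counter are my own choices, not Silverman's code):

```python
import numpy as np

def count_modes(obs, h, n_grid=2048):
    """Count strict local maxima of the Gaussian KDE at width h on a grid."""
    grid = np.linspace(obs.min() - 3 * h, obs.max() + 3 * h, n_grid)
    f = np.exp(-0.5 * ((grid - obs[:, None]) / h) ** 2).sum(axis=0)
    return int(np.sum((f[1:-1] > f[:-2]) & (f[1:-1] > f[2:])))

def silverman_pvalue(obs, k, h_k, n_boot=100, seed=0):
    """Fraction of smoothed-bootstrap resamples whose KDE at the critical
    width h_k has at most k modes -- Silverman's significance measure."""
    rng = np.random.default_rng(seed)
    n, sigma = len(obs), obs.std()
    scale = 1.0 / np.sqrt(1.0 + h_k**2 / sigma**2)   # variance-rescaling factor
    hits = 0
    for _ in range(n_boot):
        resample = rng.choice(obs, size=n, replace=True)      # X_{I(i)}
        y = scale * (resample + h_k * rng.standard_normal(n))  # smoothed values
        hits += count_modes(y, h_k) <= k
    return hits / n_boot
```

A value near 1 means the $k$-mode explanation survives resampling; a small value suggests more than $k$ modes are needed.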
Conclusion
Silverman’s technique was a really important early step in multimodality detection and it has been thoroughly investigated and improved upon since 1981. Google Scholar lists about 670 citations of this paper. If you’re interested in learning more, one paper I found particularly helpful was On the Calibration of Silverman’s Test for Multimodality (Hall & York, 2001).
One of the biggest weaknesses in Silverman’s technique is that the critical width is a global parameter, so it may run into trouble if our underlying distribution is a mixture of low and high variance component distributions. For an actual implementation of mode detection in a benchmarking package, I’d consider using something that doesn’t have this issue, like the technique described in Nonparametric Testing of the Existence of Modes (Minnotte, 1997).
I hope this is correct and helpful. If I misinterpreted anything in the original paper, please let me know. Thanks! |
IWOTA 2019
International Workshop on
Operator Theory and its Applications
We discuss how ideas from knot theory and categorical Lie theory can be used to understand the representation theory of Hecke algebras of complex reflection groups. The talk will focus mostly on the combinatorics and diagrammatics underlying these structures.
A Noetherian ring $S$ whose simple modules have the property that their finitely generated essential extensions are Artinian is said to satisfy property $(\diamond)$. In 1958 Matlis proved that commutative Noetherian rings satisfy this property. In this talk we will discuss $(\diamond)$ for skew polynomial rings $S=R[\theta; \alpha]$ where $R$ is a commutative Noetherian ring and $\alpha$ is an automorphism of $R$. When such a skew polynomial ring $S$ satisfies $(\diamond)$ turns out to be a surprisingly subtle question, not completely settled, which leads naturally to other fundamental representation-theoretic questions concerning these rings. A complete characterization is given when $R$ is an affine algebra over a field $K$ and $\alpha$ is a $K$-automorphism. This is joint work with Ken Brown and Jerzy Matczuk.
In this talk, I will show a description of the irreducible smooth representations of the unitriangular groups and upper triangular groups over a non-archimedean field (for example the $p$-adic field), and related groups (the algebra groups and the unit group of split basic algebras). In particular, I will discuss the admissibility and unitarisability of the irreducible smooth representations. In the end, if time permits I will give some examples.
In this talk, I will discuss techniques to study irreducible representations of various quantum algebras at roots of unity, focussing in particular on the example of quantum determinantal varieties. This is based on joint works with Jason Bell, Samuel Lopes and Alexandra Rogers.
In the construction of the irreducible projective characters of symmetric groups, I. Schur introduced the so called Schur $Q$-functions $Q_{\lambda}$, indexed by strict partitions. They can be scaled to define Schur $P$-functions $P_{\lambda} = 2^{-\ell(\lambda)} Q_{\lambda}$ where $\lambda$ has $\ell(\lambda)$ parts. Both are special cases of Hall-Littlewood polynomials, and are bases for the subring of the symmetric functions over $\mathbb{Q}$ generated by the odd degree power sums. We have then the linear expansions of the product $P_{\mu}P_{\nu} = \sum\limits_{\lambda} f_{\mu\nu}^{\lambda} P_{\lambda}$ and of the skew Schur $Q$-functions $Q_{\lambda/\mu} = \sum\limits_{\nu} f^{\lambda}_{\mu\nu} Q_{\nu}$, where the structure constants $f_{\mu\nu}^{\lambda}$ are non negative integers, called the shifted Littlewood-Richardson (LR) coefficients. While the symmetry $f_{\mu\nu}^{\lambda} = f_{\nu\mu}^{\lambda}$ is an immediate consequence of the $P$-product, it is a natural problem to exhibit this and other shifted LR symmetries in the combinatorial objects that they do enumerate.
Gillespie, Levinson and Purbhoo (2017) introduced a doubled type A crystal structure on shifted tableaux. This crystal graph, denoted $\mathcal{B} (\lambda/\mu, n)$, has vertices the skew shifted tableaux of shape $\lambda/\mu$ on the alphabet $ [n]' =\{1' < 1 < \ldots < n' < n\}$, and the double edges, prime and unprimed-coloured, corresponding to the action of the primed and unprimed lowering and raising operators which commute with the shifted jeu de taquin.
One can define on $\mathcal{B} (\lambda/\mu, n)$ a shifted analogue of crystal reflection operator $\sigma_i$ that coincides, after rectification, with the restriction of the shifted Schützenberger involution to the alphabet $\{i' < i < (i+1)' < i+1\}$, $i\in [n]$. Unlike type A crystals, the operators $\sigma_i$ do not define an action of the symmetric group $\mathfrak{S}_n$ on $\mathcal{B} (\lambda/\mu, n)$, as the braid relations do not hold. However, the restrictions of the Schützenberger involution to alphabets of the form $\{p' < p < \ldots < q' < q\}\subseteq [n]'$, where the crystal reflection operators are included, define an action of the cactus group $J_n$ on this crystal. In this talk we will present some recent results on this action as well as the symmetries on the shifted LR coefficients it yields.
Gentle algebras are a class of algebras of tame representation type, meaning it is often possible to get a global understanding of their representation theory. In this talk, we will show how to encode the module category of a gentle algebra using combinatorics of a surface. This is joint work with Karin Baur (Graz/Leeds).
Let $G$ be a connected, simply connected nilpotent Lie group. For $p,q\in [1,+\infty]$, the $L^p-L^q$ analogue of Morgan’s theorem was proved only for two step nilpotent Lie groups. In order to study this problem in larger subclasses, we formulate and prove a version of $L^p-L^q$ Morgan’s theorem on nilpotent Lie groups whose Lie algebra admits an ideal which is a polarization for a dense subset of generic linear forms on the Lie algebra. An analogue of Cowling-Price Theorem is also provided in the same context. Representation theory and a localized Plancherel formula play an important role in the proof.
The modular representation theory of the general linear Lie algebra over a field of positive characteristic is a subject which has been studied intensively for around seventy years. The simple modules for the enveloping algebra all factor through certain finite dimensional quotients, known as reduced enveloping algebras. In 2002 Premet demonstrated that these reduced enveloping algebras are actually matrix algebras over certain endomorphism rings, which are now known as restricted finite W-algebras. Hence these finite dimensional algebras are the fundamental unit of currency when trying to understand $\operatorname{gl}_n$-modules. In this talk I will explain a joint work with Simon Goodwin, in which we provide a presentation for the restricted finite W-algebras, by showing that they are isomorphic to quotients of a truncated shifted Yangian.
A famous theorem of Auslander states that a finite dimensional algebra is of finite representation type if and only if every module is additively equivalent to a finite dimensional one. This establishes a correlation between quantity (of indecomposable finite dimensional modules) and size (of indecomposable modules).
We will discuss the occurrence of an analogous correlation for torsion pairs. Indeed, for a finite dimensional algebra $A$ we prove that the category of $A$-modules has finitely many torsion pairs if and only if every torsion pair is generated by a finite dimensional module. Time allowing, we will also mention an analogous result for the derived category of $A$. This is based on joint work with L. Angeleri Hügel and F. Marks and on ongoing joint work with L. Angeleri Hügel and D. Pauksztello.
According to Drozd’s trichotomy theorem for every finite dimensional algebra $A$ there are three possibilities
The Borel-Schur algebras $S(B,n,r)$ form a family of finite-dimensional algebras depending on two natural numbers $n$ and $r$. Their representation theory is closely related to the representation theory of the Schur algebras $S(n,r)$, of the general linear group $\operatorname{GL}_n$ in defining characteristic, and of the symmetric group. In this talk I will describe the representation types of the algebras $S(B,n,r)$ for every $n$ and $r$.
This is joint work with Karin Erdmann and Ana Paula Santana. |
Dynamics of $\{\lambda\tanh(e^z) : \lambda\in\mathbb{R}\setminus\{0\}\}$
Department of Mathematics, Indian Institute of Technology Guwahati, Guwahati - 781039, India
$\mathcal{M} = \{ f_{\lambda}(z) = \lambda f(z) : f(z) = \tanh(e^{z}) \text{ for } z \in \mathbb{C} \text{ and } \lambda \in \mathbb{R} \setminus \{ 0 \} \}$
is studied. We prove that there exists a parameter value $\lambda^* \approx -3.2946$ such that the Fatou set of $f_{\lambda}(z)$ is a basin of attraction of a real fixed point for $\lambda > \lambda^*$ and is a parabolic basin corresponding to a real fixed point for $\lambda = \lambda^*$. It is a basin of attraction or a parabolic basin corresponding to a real periodic point of prime period $2$ for $\lambda < \lambda^*$. If $\lambda > \lambda^*$, it is proved that the Fatou set of $f_{\lambda}$ is connected and infinitely connected. Consequently, the singleton components are dense in the Julia set of $f_{\lambda}$ for $\lambda > \lambda^*$. If $\lambda \leq \lambda^*$, it is proved that the Fatou set of $f_{\lambda}$ contains infinitely many pre-periodic components and each component of the Fatou set of $f_{\lambda}$ is simply connected. Finally, it is proved that the Lebesgue measure of the Julia set of $f_{\lambda}$ for $\lambda \in \mathbb{R} \setminus \{ 0 \}$ is zero.
Mathematics Subject Classification: Primary: 37F45, 37F5. Citation: M. Guru Prem Prasad, Tarakanta Nayak. Dynamics of $\{\lambda\tanh(e^z) : \lambda\in\mathbb{R}\setminus\{0\}\}$. Discrete & Continuous Dynamical Systems - A, 2007, 19 (1) : 121-138. doi: 10.3934/dcds.2007.19.121
Let $k$ be a positive integer and let $\Gamma$ be a finite index subgroup of the modular group $\mathrm{SL}(2,\mathbb{Z})$.
A (classical) modular form $f$ of weight $k$ on $\Gamma$ is a holomorphic function defined on the upper half plane $\mathcal{H}$, which satisfies the transformation property\[f(\gamma z) = (cz+d)^k f(z)\] for all $z\in\mathcal{H}$ and $\gamma=\begin{pmatrix}a&b\\c&d\end{pmatrix}\in \Gamma$ and is holomorphic at all the cusps of $\Gamma$.
For each fixed choice of $k$ and $\Gamma$ the set of modular forms of weight $k$ on $\Gamma$ forms a finite-dimensional $\mathbb{C}$-vector space denoted $M_k(\Gamma)$.
For the congruence subgroup $\Gamma_1(N)$ the space $M_k(\Gamma_1(N))$ decomposes as a direct sum of subspaces $M_k(N,\chi)$ over the group of Dirichlet characters $\chi$ of modulus $N$, where $M_k(N,\chi)$ is the subspace of forms $f\in M_k(N)$ that satisfy \[ f(\gamma z) = \chi(d)(cz+d)^k f(z) \] for all $\gamma=\begin{pmatrix}a&b\\c&d\end{pmatrix}$ in $\Gamma_0(N)$.
Elements of $M_k(N,\chi)$ are said to be modular forms of weight $k$, level $N$, and character $\chi$.
For trivial character $\chi$ of modulus $N$ we have $M_k(N,\chi)=M_k(\Gamma_0(N))$.
Review status: reviewed. Last edited by Alex J. Best on 2018-12-19 06:32:25.
@Secret et al hows this for a video game? OE Cake! fluid dynamics simulator! have been looking for something like this for yrs! just discovered it wanna try it out! anyone heard of it? anyone else wanna do some serious research on it? think it could be used to experiment with solitons=D
OE-Cake, OE-CAKE! or OE Cake is a 2D fluid physics sandbox which was used to demonstrate the Octave Engine fluid physics simulator created by Prometech Software Inc.. It was one of the first engines with the ability to realistically process water and other materials in real-time. In the program, which acts as a physics-based paint program, users can insert objects and see them interact under the laws of physics. It has advanced fluid simulation, and support for gases, rigid objects, elastic reactions, friction, weight, pressure, textured particles, copy-and-paste, transparency, foreground a...
@NeuroFuzzy awesome what have you done with it? how long have you been using it?
it definitely could support solitons easily (because all you really need is to have some time dependence and discretized diffusion, right?) but I don't know if it's possible in either OE-cake or that dust game
As far as I recall, being a long term powder gamer myself, powder game does not really have a diffusion-like algorithm written into it. The liquids in powder game are sort of dots that move back and forth and are subjected to gravity
@Secret I mean more along the lines of the fluid dynamics in that kind of game
@Secret Like how in the dan-ball one air pressure looks continuous (I assume)
@Secret You really just need a timer for particle extinction, and something that effects adjacent cells. Like maybe a rule for a particle that says: particles of type A turn into type B after 10 steps, particles of type B turn into type A if they are adjacent to type A.
I would bet you get lots of cool reaction-diffusion-like patterns with that rule.
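That rule is easy to prototype. Below is a minimal NumPy sketch of one synchronous update on a wrap-around grid; the function name, the 4-neighbour adjacency, and the age bookkeeping are my own choices, just to illustrate the A/B rule described above:

```python
import numpy as np

def step(types, ages, lifetime=10):
    """One update of the toy two-species rule: type-A cells (1) become
    B (2) after `lifetime` steps; B cells become A when any of their four
    orthogonal neighbours is A. The grid wraps around (torus)."""
    ages = ages + 1
    a = (types == 1)
    # does any of the four neighbours hold an A particle?
    near_a = (np.roll(a, 1, 0) | np.roll(a, -1, 0) |
              np.roll(a, 1, 1) | np.roll(a, -1, 1))
    new_types = types.copy()
    expired = a & (ages >= lifetime)
    new_types[expired] = 2              # A -> B on timeout
    revived = (types == 2) & near_a
    new_types[revived] = 1              # B -> A next to an A
    ages[expired | revived] = 0         # reset the clock of any changed cell
    return new_types, ages
```

Iterating `step` from a few seeded A cells gives the kind of spreading-and-dying fronts that reaction-diffusion systems produce.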
(Those that don't understand cricket, please ignore this context, I will get to the physics...)England are playing Pakistan at Lords and a decision has once again been overturned based on evidence from the 'snickometer'. (see over 1.4 ) It's always bothered me slightly that there seems to be a ...
Abstract: Analyzing the data from the last replace-the-homework-policy question was inconclusive. So back to the drawing board, or really back to this question: what do we really mean when we vote to close questions as homework-like?As some/many/most people are aware, we are in the midst of a...
Hi I am trying to understand the concept of dex and how to use it in calculations. The usual definition is that it is the order of magnitude, so $10^{0.1}$ is $0.1$ dex.I want to do a simple exercise of calculating the value of the RHS of Eqn 4 in this paper arxiv paper, the gammas are incompl...
@ACuriousMind Guten Tag! :-) Dark Sun has also a lot of frightening characters. For example, Borys, the 30th level dragon. Or different stages of the defiler/psionicist 20/20 -> dragon 30 transformation. It is only a tip, if you start to think on your next avatar :-)
What is the maximum distance for eavesdropping pure sound waves?And what kind of device i need to use for eavesdropping?Actually a microphone with a parabolic reflector or laser reflected listening devices available on the market but is there any other devices on the planet which should allow ...
and endless whiteboards get doodled with boxes, grids circled red markers and some scribbles
The documentary then showed one of the bird's eye view of the farmlands
(which pardon my sketchy drawing skills...)
Most of the farmland is tiled into grids
Here there are two distinct columns and rows of tiled farmlands to the left and top of the main grid. They are the index arrays and they notate the range of indices of the tensor array
In some tiles, there's a swirl of dirt mount, they represent components with nonzero curl
and in others grass grew
Two blue steel bars were visible laying across the grid, holding up a triangle pool of water
Next in an interview, they mentioned that experimentally the process is quite simple. The tall guy is seen using a large crowbar to pry away a screw that held a road sign under a skyway, i.e.
occasionally, mishaps can happen, such as too much force applied and the sign snapped in the middle. The boys will then be forced to take the broken sign to the nearest roadworks workshop to mend it
At the end of the documentary, near a university lodge area
I walked towards the boys and expressed interest in joining their project. They then said that you will be spending quite a bit of time on the theoretical side and doodling on whiteboards. They also asked about my recent trip to London and Belgium. Dream ends
Reality check: I have been to London, but not Belgium
Idea extraction: The tensor array mentioned in the dream is a multiindex object where each component can be tensors of different order
Presumably one can formulate it (using an example of a 4th order tensor) as follows:
$$A^{\alpha}{}_{\beta\gamma\delta\epsilon}$$
and then allow the index $\alpha,\beta$ to run from 0 to the size of the matrix representation of the whole array
while the indices $\gamma,\delta,\epsilon$ can be taken from a subset of the values that the $\alpha,\beta$ indices range over. For example to encode a patch of nonzero curl vector field in this object, one might set $\gamma$ to be from the set $\{4,9\}$ and $\delta$ to be $\{2,3\}$
However, even if the indices are taken to have certain values only, it is unclear whether this is of any use, since most tensor expressions have indices taken from a set of consecutive numbers rather than arbitrary integers
@DavidZ in the recent meta post about the homework policy there is the following statement:
> We want to make it sure because people want those questions closed. Evidence: people are closing them. If people are closing questions that have no valid reason for closure, we have bigger problems.
This is an interesting statement.
I wonder to what extent not having a homework close reason would simply force would-be close-voters to either edit the post, down-vote, or think more carefully whether there is another more specific reason for closure, e.g. "unclear what you're asking".
I'm not saying I think simply dropping the homework close reason and doing nothing else is a good idea.
I did suggest that previously in chat, and as I recall there were good objections (which are echoed in @ACuriousMind's meta answer's comments).
@DanielSank Mostly in a (probably vain) attempt to get @peterh to recognize that it's not a particularly helpful topic.
@peterh That said, he used to be fairly active on physicsoverflow, so if you really pine for the opportunity to communicate with him, you can go on ahead there. But seriously, bringing it up, particularly in that way, is not all that constructive.
@DanielSank No, the site mods could have caged him only in the PSE, and only for a year. That he got. After that his cage was extended to a 10 year long network-wide one, it couldn't be the result of the site mods. Only the CMs can do this, typically for network-wide bad deeds.
@EmilioPisanty Yes, but I would have liked to talk to him here.
@DanielSank I am only curious what he did. Maybe he attacked the whole network? Or he took a site-level conflict to the IRL world? As far as I know, network-wide bans happen for such things.
@peterh That is pure fear-mongering. Unless you plan on going on extended campaigns to get yourself suspended, in which case I wish you speedy luck.
Seriously, suspensions are never handed out without warning, and you will not be ten-year-banned out of the blue. Ron had very clear choices and a very clear picture of the consequences of his choices, and he made his decision. There is nothing more to see here, and bringing it up again (and particularly in such a dewy-eyed manner) is far from helpful.
@EmilioPisanty Although it is already not about Ron Maimon, I can't see the meaning of "campaign" well-defined here. And yes, it is a bit of a source of fear for me that maybe my behavior could also be read as if "I were campaigning for my caging".
The $\Theta(n)$ difference-of-sums solution proposed by Tobi and Mario can in fact be generalized to any other data type for which we can define a (constant-time) binary operation $\oplus$ that is:
- total, such that for any values $a$ and $b$, $a \oplus b$ is defined and of the same type (or at least of some appropriate supertype of it, for which the operator $\oplus$ is still defined);
- associative, such that $a \oplus (b \oplus c) = (a \oplus b) \oplus c$;
- commutative, such that $a \oplus b = b \oplus a$; and
- cancellative, such that there exists an inverse operator $\ominus$ that satisfies $(a \oplus b) \ominus b = a$.

Technically, this inverse operation doesn't even necessarily have to be constant-time, as long as "subtracting" two sums of $n$ elements each doesn't take more than ${\rm O}(n)$ time.
(If the type can only take a finite number of distinct values, these properties are sufficient to make it into an Abelian group; even if not, it will at least be a commutative cancellative semigroup.)
Using such an operation $\oplus$, we can define the "sum" of an array $a = (a_1, a_2, \dots, a_n)$ as $$(\oplus\, a) = a_1 \oplus a_2 \oplus \dotsb \oplus a_n.$$ Given another array $b = (b_1, b_2, \dots, b_n, b_{n+1})$ containing all the elements of $a$ plus one extra element $x$, we thus have $(\oplus\, b) = (\oplus\, a) \oplus x$, and so we can find this extra element by computing: $$x = (\oplus\, b) \ominus (\oplus\, a).$$
For example, if the values in the arrays are integers, then integer addition (or modular addition for finite-length integers types) can be used as the operator $\oplus$, with subtraction as the inverse operation $\ominus$. Alternatively, for
any data type whose values can be represented as fixed-length bit strings, we can use bitwise XOR as both $\oplus$ and $\ominus$.
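As a quick sketch of the idea (the function name `find_extra` and the sample arrays are mine, not from the original answer), the same two lines of code work for any suitable pair of operators $\oplus$/$\ominus$:

```python
from functools import reduce
import operator

def find_extra(a, b, op, inv):
    # "sum" each array with the group operation, then cancel:
    # (sum of b) "minus" (sum of a) is the extra element of b
    return inv(reduce(op, b), reduce(op, a))

a = [3, 1, 4, 1, 5]
b = [1, 5, 9, 3, 1, 4]   # the elements of a, reordered, plus an extra 9

print(find_extra(a, b, operator.add, operator.sub))  # 9 (integer sum)
print(find_extra(a, b, operator.xor, operator.xor))  # 9 (bitwise XOR)
```

Note that XOR is its own inverse, so the same operator serves as both $\oplus$ and $\ominus$.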
More generally, we can even apply the bitwise XOR method to strings of variable length, by padding them up to the same length as necessary, as long as we have some way to reversibly remove the padding at the end.
In some cases, this is trivial. For example, C-style null-terminated byte strings implicitly encode their own length, so applying this method to them is straightforward: when XORing two strings, pad the shorter one with null bytes to make their lengths match, and trim any extra trailing nulls from the final result. Note that the intermediate XOR-sum strings
can contain null bytes, though, so you'll need to store their length explicitly (but you'll only need one or two of them at most).
More generally, one method that would work for arbitrary bit strings would be to apply one-bit padding, where each input bitstring is padded with a single $1$ bit and then with as many $0$ bits as necessary to match the (padded) length of the longest input string. (Of course, this padding does not need to be done explicitly in advance; we can just apply it as needed while computing the XOR sum.) At the end, we simply need to strip any trailing $0$ bits and the final $1$ bit from the result. Alternatively, if we knew that the strings were e.g. at most $2^{32}$ bytes long, we could encode the length of each string as a 32-bit integer and prepend it to the string. Or we could even encode arbitrary string lengths using some prefix code, and prepend those to the strings. Other possible encodings exist as well.
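A possible sketch of the one-bit-padding variant (this is my own illustrative code, not from the original answer; bit strings are represented as Python strings of `'0'`/`'1'`, and the helper names are made up):

```python
def find_extra_bits(a, b):
    # b holds all the bit strings of a, in any order, plus one extra;
    # recover the extra one via XOR with one-bit padding.
    L = max(len(s) for s in a + b) + 1            # common padded length
    def pad(s):
        # append a single 1 bit, then 0s up to the common length
        return int(s + '1' + '0' * (L - len(s) - 1), 2)
    acc = 0
    for s in a + b:                               # equal strings cancel in pairs
        acc ^= pad(s)
    r = format(acc, '0{}b'.format(L))             # keep leading zeros
    return r[:r.rindex('1')]                      # strip trailing 0s and the 1 marker

print(find_extra_bits(['101', '1', '11010'],
                      ['1', '0110', '11010', '101']))  # 0110
```

The final `rindex` step is exactly the "strip any trailing $0$ bits and the final $1$ bit" rule; it also handles the case where the extra string is empty.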
In fact, since
any data type representable on a computer can, by definition, be represented as a finite-length bit string, this method yields a generic $\Theta(n)$ solution to the problem.
The only potentially tricky part is that, for the cancellation to work, we need to choose a unique canonical bitstring representation for each value, which could be difficult (indeed, potentially even computationally undecidable) if the input values in the two arrays may be given in different equivalent representations. This is not a specific weakness of this method, however; any other method of solving this problem can also be made to fail if the input is allowed to contain values whose equivalence is undecidable. |
Content – Forms of energy
Sound is produced when a force causes an object or substance to vibrate.
Sound energy is transferred through the environment (matter) by wave motion caused by the vibrating object.
Sound energy is often measured by its pressure and intensity, in units called decibels.
Waves
A vibrating object that is connected to its environment (matter) will transfer energy to that environment. The vibrations and the resulting energy are transferred through the environment from “neighbour to neighbour”, i.e. the wave motion. Waves transfer energy through matter without changing the physical location of the matter.
Longitudinal waves
When waves transfer energy by pushing neighbours in the same direction that the energy moves, the waves are called longitudinal waves.
Sound waves are examples of longitudinal waves
Waves are called pressure waves when the particles cluster together in volumes of “high pressure”. Sound waves are an example of pressure waves that move through matter such as gases, liquids and solids.
The speed of the sound waves increases with the density of the matter they travel through.
Speed of sound through iron = 5130 m/s
Speed of sound through water (seawater) = 1531 m/s
Speed of sound through air = 344 m/s
Formula – Sound energy
The total sound energy will equal the maximum kinetic energy:
E = \dfrac{1}{2}mv^2 = \dfrac{1}{2}m(A\omega)^2
m = mass of the medium the sound waves travel through
A\omega = the maximum transverse speed of the particles
\nu = \dfrac{\omega}{2\pi} = frequency
A = amplitude
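As a quick numerical illustration of the energy formula above (all the values below are made-up assumptions, not from the text):

```python
import math

m = 1.0                   # mass of the vibrating medium, kg (assumed)
A = 1e-5                  # amplitude, m (assumed)
nu = 440.0                # frequency, Hz (assumed)
omega = 2 * math.pi * nu  # angular frequency, rad/s

# total sound energy = maximum kinetic energy, E = (1/2) m (A*omega)^2
E = 0.5 * m * (A * omega) ** 2
print(E)  # roughly 3.8e-4 J for these values
```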
IWOTA 2019
International Workshop on
Operator Theory and its Applications
We show that dual truncated Toeplitz operators on the orthogonal complement of a model space are equivalent after extension to certain paired operators, and we use this to study their kernels and their spectral properties.
Based on joint work with Kamila Kliś-Garlicka, Bartosz Łanucha and Marek Ptak.
Consider the standard weighted Fock spaces $F^p_{\alpha}$ on $\mathbb{C}^n$, where $p \in (1,\infty)$ and $\alpha > 0$. The theory of Fock spaces is in many regards very similar to the classical Bergman space theory. However, there are a few key differences, which make the theory interesting. One of them concerns Hankel operators, which are denoted by $H_f$ for bounded symbols $f$. In the Hilbert space case ($p = 2$) a well-known theorem by Berger and Coburn states that $H_f$ is compact if and only if $H_{\bar{f}}$ is compact. On the other hand, it is easily seen that this statement is wrong for Bergman spaces. Zhu comments:
> “A partial explanation for this difference is probably the lack of bounded analytic or harmonic functions on the entire complex plane.”

Using limit operator theory, I will give a new proof of the Berger-Coburn result, which also includes the Banach space cases ($p \neq 2$) and fully explains this difference between Bergman and Fock spaces. Namely, it will be apparent that the only ingredient missing for the same proof to work on Bergman spaces is Liouville's theorem. Based on joint work with Jani Virtanen.
We study certain densely defined unbounded operators on the Segal-Bargmann space. These are the annihilation and creation operators of quantum mechanics. In several complex variables we have the $\partial$-operator and its adjoint $\partial^*$ acting on $(p,0)$-forms with coefficients in the Segal-Bargmann space. We consider the corresponding $\partial$-complex and study spectral properties of the corresponding complex Laplacian $\tilde \Box = \partial \partial^* + \partial^*\partial.$ Finally we study a more general complex Laplacian $\tilde \Box_D = D D^* + D^* D,$ where $D$ is a differential operator of polynomial type, to find the canonical solutions to the inhomogeneous equations $Du=\alpha$ and $D^*v=\beta.$
We also study the $\partial$-complex on several models including the complex hyperbolic space, which turns out to have duality properties similar to the Segal-Bargmann space (which is common work with Duong Ngoc Son).
We obtain complete characterizations in terms of Carleson measures for bounded/compact differences of weighted composition operators acting on the standard weighted Bergman spaces over the unit disk. Unlike the known results, we allow the weight functions to be non-holomorphic and unbounded.
As a consequence we obtain a compactness characterization for differences of unweighted composition operators acting on the Hardy spaces in terms of Carleson measures and, as a nontrivial application of this, we show that compact differences of composition operators with univalent symbols on the Hardy spaces are exactly the same as those on the weighted Bergman spaces. As another application, we show that an earlier characterization due to Acharyya and Wu for compact differences of weighted composition operators with bounded holomorphic weights does not extend to the case of non-holomorphic weights. We also include some explicit examples related to our results.
It is known that for Cowen-Douglas operators, the curvature of the corresponding eigenvector bundles play an important role in their classification up to unitary equivalence or similarity. However, the curvature is not so easy to calculate in general. We consider a subclass of the Cowen-Douglas class in which things are more tractable. This talk is based on joint work with K. Ji, J. Sarkar, and J. Xu.
In this talk, we consider the problem: when Möbius invariant function spaces are continuously embedded in tent spaces. It is also known as Carleson measure problem. We will introduce the recent development of Carleson measure for some well-known analytic function spaces, then present our work on Carleson measure for $\operatorname{BMOA}$, Bloch space and $Q_p$ spaces. (This is a joint work with K. Zhu)
In this talk, we study the Banach algebras $\mathcal{T}(\mathbb{T}_m^q)$ generated by Toeplitz operators whose symbols are invariant under the action of the subgroup $\mathbb{T}_m^q$ of the maximal torus $\mathbb{T}^n$, acting on the Bergman space over the weakly pseudoconvex domains $\Omega^n_p$. Moreover, we prove that the commutator of the $C^*$-algebra $\mathcal{T}(\mathcal{R}_k(\Omega^n_p))$ is equal to the Toeplitz algebra $\mathcal{T}(\mathbb{T}_m^q)$, where $\mathcal{T}(\mathcal{R}_k(\Omega^n_p))$ is the $C^*$-algebra generated by Toeplitz operators whose symbols are $k$-radial. Finally, using this relationship we find some commutative Banach algebras generated by Toeplitz operators which generalize the Banach algebra generated by Toeplitz operators with quasi-homogeneous quasi-radial symbols.
This is a joint work with Mauricio Hernández-Marroquin and Luis Alfredo Dupont-García.
Determinantal varieties of images of Coxeter generators were shown by Cuckovic, Stessin, and Tchernev to determine representations of non-exceptional finite Weyl groups up to unitary equivalence. The main result established here shows that determinantal varieties of a larger set of elements (more than the generating set) determine the character of representations of the affine Weyl groups $\tilde{B}_n$, $\tilde{C}_n$, and $\tilde{D}_n$.
Carleson measures are fundamental to the study of holomorphic function spaces, as they are connected to the characterization of the corresponding multiplier algebras, existence of boundary values, interpolating sequences, Hankel-type operators, Corona problems, etc. I will discuss a description of the Carleson measures for the Dirichlet space of the bidisc, in terms of a newly developed bi-parameter potential theory which is based on kernels of tensor-product structure. There are very significant differences to classical potential theory. In particular, the maximum principle fails. Yet, perhaps surprisingly, the bi-parameter theory completely characterizes the Carleson measures, in analogy with Stegenga’s description of the Carleson measures for the Dirichlet space of the disc.
Based on joint work with Nicola Arcozzi, Pavel Mozolyako, and Giulia Sarfatti.
Let $D = G/K \subset \mathbb{C}^n$ be an irreducible bounded symmetric domain circled around the origin, where $G$ is a reductive group with an action on $D$ that realizes the biholomorphism group of $D$ with $K$ the isotropy subgroup at the origin. For any closed subgroup $H$ of $K$ let $\mathcal{A}^H$ be the essentially bounded measurable symbols on $D$ that are $H$-invariant, and let us denote by $\mathcal{T}^{(\lambda)}(\mathcal{A}^H)$ the $C^*$-algebra generated by the Toeplitz operators with symbols in $\mathcal{A}^H$ acting on the weighted Bergman space $\mathcal{H}^2_\lambda(D)$. It is well known that $\mathcal{T}^{(\lambda)}(\mathcal{A}^K)$ is commutative for $D$ as above. But $\mathcal{T}^{(\lambda)}(\mathcal{A}^T)$ is commutative for $T$ a maximal torus in $K$ if and only if $D$ is biholomorphic to a unit ball. A question posed by Nikolai Vasilevski is to find out whether, for $D$ not biholomorphic to a unit ball, there exists $H$ a closed proper subgroup of $K$ containing a maximal torus such that $\mathcal{T}^{(\lambda)}(\mathcal{A}^H)$ is commutative.
In this talk we will show that there is such a subgroup that gives commutative $C^*$-algebras, as required, in the case of the classical domain $D^I_{2,2}$ consisting of complex $2\times 2$ matrices $Z$ such that $Z Z^* < I_2$. The biholomorphism group for this domain can be realized by $\mathrm{U}(2,2)$ with isotropy at the origin given by $\mathrm{U}(2)\times\mathrm{U}(2)$ which contains the maximal torus $\mathbb{T}^2\times\mathbb{T}^2$ given by pairs of diagonal unitary $2\times 2$ matrices. For this setup, we will prove that for the group $H = \mathrm{U}(2)\times\mathbb{T}^2$ the $C^*$-algebra $\mathcal{T}^{(\lambda)}(\mathcal{A}^H)$ is commutative. The proof is based in representation theory and allows us to provide a description of the spectra of the corresponding Toeplitz operators.
This is joint work with Gestur Olafsson and Matthew Dawson.
Let $H$ be a RKHS of functions regarded as a subspace of $L^2(G\times Y)$, where $G$ is a locally compact abelian group provided with a Haar measure and $Y$ is a measure space. Suppose $H$ to be invariant under "horizontal translations" naturally associated to $G$. Under some technical assumptions, we study the W*-algebra $\mathcal{V}$ of all translation-invariant bounded linear operators acting on $H$. For this purpose, we apply the operator $F \otimes I$, i.e. the Fourier transform with respect to the first coordinate, and decompose the image $\widehat{H}$ of $H$ into the direct integral of the fibers $\widehat{H}_{\xi}$, where $\xi\in\widehat{G}$. We conclude that $\mathcal{V}$ is commutative if and only if all fibers have dimension $0$ or $1$. If this happens, we construct a unitary operator that simultaneously diagonalizes all operators in $\mathcal{V}$. Our scheme generalizes several results previously found by Vasilevski, Quiroga-Barranco, Grudsky, Karapetyants, Hutník, Loaiza, Lozano, Sánchez-Nungaray, Ramírez Ortega, Esmeral, and other authors.
As a new example, we describe the W*-algebra of "vertical" operators in the polyharmonic space over the upper half-plane.
This talk is based on a joint work with Crispin Herrera Yañez and Egor Maximenko.
In this talk we give decompositions of the W*-algebras of radial operators on the spaces $L^2(\mathbb{C},d\mu_G)$, $F_n$ and $F_{(n)}$, where $d\mu_G=\frac{1}{\pi}e^{-\vert z\vert^2}$ is the Gaussian weight on the complex plane, $F_n$ is the $n$-th polyanalytic Bargmann-Segal-Fock space and $F_{(n)}=F_n\ominus F_{n-1}$ is the $n$-th true polyanalytic Fock space.
We construct an orthonormal basis for $L^2(\mathbb{C},d\mu_G)$ using creation operators, and show its equivalence to the complex Hermite polynomials, denoted $(b_{p,q})_{p,q=0}^\infty$. Using ideas from Vasilevski (2000), we prove explicit formulas for the reproducing kernels of $F_{(n)}$ and $F_n$: $$K_z^{(n)}(w)=e^{\overline{z}w}L_{n-1}(\vert w-z\vert^2), \qquad\qquad K_z^{n}(w)=e^{\overline{z}w}L^1_{n-1}(\vert w-z\vert^2),$$ where $L^\alpha_n$ is the generalized Laguerre polynomial of degree $n$ with parameter $\alpha$.
Let $\mathcal{D}_d$ be the $d$th diagonal subspace of $L^2(\mathbb{C},d\mu_G)$, defined as the closed subspace generated by $b_{j,k}$ with $j-k=d$. We prove that these diagonal
I am going to speak about recent work with Dan Timotin and Mohamed Zarrabi showing in which ways the classical Szego theorem about eigenvalues of Toeplitz matrices can be generalized to truncated Toeplitz operators.
We review a recent approach to weighted Bergman spaces $A_v^p$, $1 \lt p \lt \infty$, or $H_v^\infty$ on the unit disc and also related spaces of entire functions: we use techniques where the Taylor series of the analytic functions are divided into an infinite number of blocks, which consist of polynomials with given fixed degrees somehow related to the given weight of the Bergman norm. This allows us to write an expression of the weighted Bergman norm, which is useful and applicable, if $p \not=2$ (in the case $p=2$ our results do not yield new information). We apply the techniques to describe solid hulls and cores of the spaces, and characterize the boundedness and compactness of some sequence space multipliers. We also give a characterization of the boundedness and compactness of Toeplitz operators with radial symbols in the space $H_v^\infty$ on the disc. The work is in co-operation with José Bonet (Valencia) and Wolfgang Lusky (Paderborn).
I'm currently reading the book
"Classical Theory of Gauge Fields" by Rubakov, therefore I will use his convention in this question. In the following we assume that: $\phi$ comprises columns consisting of complex scalar fields. $G$ is a simple compact Lie Group. The representations $\rho:G\rightarrow \mathrm{Gl}(V)$ are unitary. $\rho_*: \mathfrak{g} \rightarrow \mathfrak{gl}(V):A\mapsto \left. \frac{\mathrm{d}}{\mathrm{d}\epsilon}\right|_{\epsilon=0}\rho\left(\exp[\epsilon A] \right)$ are the corresponding Lie Algebra representations and therefore anti-hermitian.
Now, we have
global gauge symmetry of the Lagrangian
$$\begin{equation} \mathcal{L} = \partial_{\mu}\phi^{\dagger}\partial^{\mu}\phi - m^2\phi^{\dagger}\phi-\lambda\left(\phi^{\dagger}\phi\right)^2 \end{equation}$$
under the action
$$\phi(x) \mapsto \phi'(x) = \rho(\omega)\phi(x)$$
We can generalize this global symmetry to a
gauge symmetry by introducing the gauge field $A_{\mu}:\mathbb{R}^4 \to \mathfrak{g}$ transforming as
$$A_{\mu} \mapsto A'_{\mu}(x) = \omega(x)A_{\mu}(x)\omega^{-1}(x) + \omega(x)\partial_{\mu}\omega^{-1}(x),$$
where $\omega:\mathbb{R}^4 \to G$ and additionally introducing the covariant (In the sense of the gauge transformation) derivative
$$D_{\mu}\phi(x) := \left[\partial_{\mu} + \rho_*(A_{\mu}(x))\right] \phi(x) \quad\Rightarrow\quad \left( D_{\mu}\phi \right)' = \rho(\omega(x))\left( D_{\mu}\phi \right).$$
Then, the invariant Lagrangian of the Field is given by
$$\mathcal{L}_{\phi} = \left( D_{\mu}\phi \right)^{\dagger} \left( D^{\mu}\phi \right) -m^2 \phi^{\dagger} \phi -\lambda\left(\phi^{\dagger} \phi\right)^2$$
So far so good. But what if we have the case of a non-simple compact Lie Group, say $SU(2) \times U(1)$. Let's assume we have two doublets $\phi, \chi$ under $SU(2)$ (fundamental representation) and $\epsilon$ a singlet under $SU(2)$ (trivial representation):
$$ \begin{align} \phi &\mapsto \omega \phi,\\ \chi &\mapsto \omega \chi,\\ \epsilon &\mapsto \epsilon, \end{align} $$
where $\omega \in SU(2)$. The fields $\phi, \chi$ are two components complex columns, $\epsilon$ is a one component complex scalar field. Furthermore, we assume the fields to transform under $U(1)$ as
$$ \begin{align} \phi & \mapsto \exp\left[ iq_{\phi} \alpha\right] \phi, \\ \chi & \mapsto \exp\left[ iq_{\chi} \alpha\right] \chi, \\ \epsilon & \mapsto \exp\left[ iq_{\epsilon} \alpha\right] \epsilon, \\ \end{align} $$
where $q_{\phi}, q_{\chi}, q_{\epsilon} \in \mathbb{R}$. The kinetic term in the Lagrangian has the standard form and the interaction can be chosen as
$$ \begin{align} \lambda [ (\phi^{\dagger} \epsilon) \chi + \chi^{\dagger}(\epsilon^* \phi) ]. \end{align} $$
It is invariant under these global gauge transformations if $q_{\chi}+ q_{\epsilon} - q_{\phi}=0$.
Question: So, how would one approach the generalization to a gauge symmetry of this system as described above? I am really lost here, so every hint would be highly appreciated. |
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology.For continuous functions f and g, the cross-correlation is defined as:: (f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau,whe...
That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the Y_{t+\tau}? And how on earth do I load all that data at once without it taking forever?
And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time
Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"?Alternatively, where could I go in order to have such a question answered?
@tpg2114 For reducing the number of data points when calculating the time correlation, you can run two copies of exactly the same simulation in parallel, separated by the time lag dt. Then there is no need to store all snapshots and spatial points.
@DavidZ I wasn't trying to justify it's existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably an historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series
Each one is equally spaced, 1e-9 seconds apart
The black line is \frac{d T}{d t} and doesn't have an axis -- I don't care what the values are
The solid blue line is the abs(shear strain) and is valued on the right axis
The dashed blue line is the result from scipy.signal.correlate
And is valued on the left axis
So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how
Because I don't know how the result is indexed in time
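A possible sketch answering the three questions above (this is a toy reconstruction with made-up data, not the actual simulation output; `correlation_lags` is available in SciPy ≥ 1.6):

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

rng = np.random.default_rng(0)
x = rng.standard_normal(200)                # 200 samples, like the series above
y = np.concatenate([np.zeros(5), x[:-5]])   # y is x delayed by 5 samples

c = correlate(x, y, mode="full")
print(c.size)   # 399 = 2*200 - 1: that's why the result is ~400 steps long

# correlation_lags gives the lag axis matching c, so argmax indexes it directly
lags = correlation_lags(len(x), len(y), mode="full")
print(lags[np.argmax(c)])   # -5: with this convention, a negative lag at the
                            # peak means the second signal is the delayed one
```

The sign of the correlation values themselves depends on the signals' means; subtracting the mean from each series before correlating (i.e. using the cross-covariance) usually gives the intuitively expected sign.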
Related:Why don't we just ban homework altogether?Banning homework: vote and documentationWe're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th...
So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag
I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating. Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question
It'd be better if we could do it quick enough that no answers get posted until the question is clarified to satisfy the current HW policy
For the SHO, our teacher told us to scale$$p\rightarrow \sqrt{m\omega\hbar} ~p$$$$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$And then define the following$$K_1=\frac 14 (p^2-q^2)$$$$K_2=\frac 14 (pq+qp)$$$$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$The first part is to show that$$Q \...
Okay. I guess we'll have to see what people say but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics
Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay
@jinawee oh, that I don't think will happen.
In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have.
So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." Ie. is it conceptual to be stuck when deriving the distribution of microstates cause somebody doesn't know what Stirling's Approximation is
Some have argued that is on topic even though there's nothing really physical about it just because it's 'graduate level'
Others would argue it's not on topic because it's not conceptual
How can one prove that$$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$for a sufficiently well-behaved operator $\cal{A}?$How (mathematically) rigorous is the expression?I'm looking at the $d=2$ Euclidean case, as discuss...
I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed.
And what about selfies in the mirror? (I didn't try yet.)
@KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote"-- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean.
Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods.
Or maybe that can be a second step.
If we can reduce visibility of HW, then the tag becomes less of a bone of contention
@jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework
@Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter.
@Dilaton also, have a look at the topvoted answers on both.
Afternoon folks. I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no because it's really a math question, but I figured I'd ask anyway)
@DavidZ Ya I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on.
hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least.
Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes.
MO is for research-level mathematics, not "how do I compute X"
user54412
@KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube
@ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (c.f., Hawking's recent paper) |
I am feeling a little nostalgic. So I rebuilt one of the first circuits I put together when I started messing around with circuits in eighth grade. It’s a very simple beginner project, and might appeal to your child – real or inner!
Another reason I wanted to put this up is because most “blinky” circuits I see now are based on the venerable 555 IC. This is fine, but the two transistor circuit is simpler for a beginner to understand than the innards of the 555, I think.
Here is the circuit. It’s a two transistor astable multivibrator. (Read the wikipedia link for the gory details of how it works. 😉 )
For supply, I used a 3V CR 2032 coin cell. The transistors are 2N2222, but I think any NPN transistor would work in this case. The LEDs are Red, which have low turn-on voltage, and this is important, since we’re using a coin cell here. The circuit was assembled on a breadboard.
Even for this simple circuit, some calculations are involved to get it right. The total period of oscillation is given by:
$$ T = t_{1} + t_{2} = R_{2}C_{1}\ln 2 + R_{3}C_{2}\ln 2 \approx 0.693(R_{2}C_{1} + R_{3}C_{2})$$
Choosing $$ R_{2} = R_{3} = 47\ \mathrm{k\Omega}$$ and $$ C_{1} = C_{2} = 22\ \mathrm{\mu F} $$ gives us a total period of about 1.4 seconds – 0.7 seconds per LED. Good enough.
Now, to compute $$ R_{1} = R_{4}$$. Assuming a 0.6 V drop across the transistor, and a 20 mA current through the LED gives us $$ R = \frac{3 - 0.6}{20\ \mathrm{mA}} = 120\ \Omega$$. So a 100 Ohm resistor, which is more common, will do just fine.
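These calculations can be double-checked with a few lines of Python (a quick sketch using the component values above):

```python
import math

# Component values from the post
R2 = R3 = 47e3     # ohms
C1 = C2 = 22e-6    # farads

# Astable multivibrator period: T = ln(2) * (R2*C1 + R3*C2)
T = math.log(2) * (R2 * C1 + R3 * C2)
print(round(T, 2))   # 1.43 s total, about 0.72 s per LED

# LED series resistor: R = (Vsupply - Vdrop) / I_LED
R = (3.0 - 0.6) / 20e-3
print(round(R))      # 120 ohms
```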
And here’s the circuit in action:
Build it for your child! 🙂 |
In practice, the runtime of numerically solving an IVP $$ \dot{x}(t) = f(t, x(t)) \quad \text{ for } t \in [t_0, t_1] $$ $$ x(t_0) = x_0 $$ is often dominated by the duration of evaluating the right-hand side (RHS) $f$. Let us therefore assume that all other operations are instant (i.e. without computational cost). If the overall runtime for solving the IVP is limited then this is equivalent to limiting the number of evaluations of $f$ to some $N \in \mathbb{N}$.
We are only interested in the final value $x(t_1)$.
I'm looking for theoretical and practical results that help me choose the best ODE method in such a setting.
If, for example, $N = 2$ then we could solve the IVP using two explicit Euler steps of width $(t_1 - t_0)/2$ or one step of width $t_1 - t_0$ using the midpoint method. It is not immediately clear to me which one is preferable. For larger $N$, one can of course also think about multi-step methods, iterated Runge-Kutta schemes, etc.
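To make the comparison concrete, here is a small sketch (my example, not from the question) on the toy problem $\dot{x} = x$, $x(0) = 1$, over $[0, 1]$, where both methods use exactly $N = 2$ evaluations of $f$:

```python
import math

def f(t, x):
    # Test RHS with known exact solution x(t) = exp(t)
    return x

def euler_two_steps(t0, x0, t1):
    # Two explicit Euler steps, one RHS evaluation each (N = 2)
    h = (t1 - t0) / 2
    x = x0 + h * f(t0, x0)
    return x + h * f(t0 + h, x)

def midpoint_one_step(t0, x0, t1):
    # One explicit-midpoint step, two RHS evaluations (N = 2)
    h = t1 - t0
    k1 = f(t0, x0)
    k2 = f(t0 + h / 2, x0 + (h / 2) * k1)
    return x0 + h * k2

exact = math.exp(1.0)
print(abs(euler_two_steps(0.0, 1.0, 1.0) - exact))    # |2.25 - e|, about 0.468
print(abs(midpoint_one_step(0.0, 1.0, 1.0) - exact))  # |2.5 - e|, about 0.218
```

On this particular RHS the single midpoint step beats the two Euler steps, but the ranking is problem-dependent; for the stiff problems mentioned in the edits, implicit methods change the picture entirely.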
What I'm looking for are results similar to the ones that exist, for example, for quadrature rules: We can pick $n$ weights $\{w_i\}$ and associated points $\{x_i\}$ such that the quadrature rule $\sum_{i = 1}^n w_i g(x_i)$ is exact for all polynomials $g$ such that $\deg(g) \le 2n - 1$.
Hence I'm looking for upper or lower bounds on the global accuracy of ODE methods, given a limited number of allowed evaluations of the RHS $f$. It's OK if the bounds only hold for some classes of RHS or pose additional constraints on the solution $x$ (just like the result for the quadrature rule which only holds for polynomials up to a certain degree).
EDIT: Some background information: This is for hard real-time applications, i.e. the result $x(t_1)$ must be available before a known deadline. Hence the limit on the number of RHS evaluations $N$ as the dominating cost factor. Typically our problems are stiff and comparatively small.
EDIT2: Unfortunately I don't have the precise timing requirements, but it is safe to assume that $N$ will be rather small (definitely <100, probably closer to 10). Given the real-time requirement we have to find a tradeoff between the accuracy of the models (with better models leading to longer execution times of the RHS and hence to a lower $N$) and the accuracy of the ODE method (with better methods requiring higher values of $N$).
10.5. Mini-batch Stochastic Gradient Descent¶
In each iteration, gradient descent uses the entire training data set to compute the gradient, so it is sometimes referred to as batch gradient descent. Stochastic gradient descent (SGD) randomly selects only one example in each iteration to compute the gradient. Just like in the previous chapters, we can perform random uniform sampling in each iteration to form a mini-batch and then use this mini-batch to compute the gradient. Now, we are going to discuss mini-batch stochastic gradient descent.
Let the objective function be \(f(\boldsymbol{x}): \mathbb{R}^d \rightarrow \mathbb{R}\). The time step before the start of iteration is set to 0. The independent variable of this time step is \(\boldsymbol{x}_0\in \mathbb{R}^d\) and is usually obtained by random initialization. In each subsequent time step \(t>0\), mini-batch SGD uses random uniform sampling to get a mini-batch \(\mathcal{B}_t\) made of example indices from the training data set. We can use sampling with replacement or sampling without replacement to get the mini-batch examples. The former method allows duplicate examples in the same mini-batch; the latter does not and is more commonly used. We can use either of the two methods
to compute the gradient \(\boldsymbol{g}_t\) of the objective function at \(\boldsymbol{x}_{t-1}\) with mini-batch \(\mathcal{B}_t\) at time step \(t\):

\(\boldsymbol{g}_t \leftarrow \nabla f_{\mathcal{B}_t}(\boldsymbol{x}_{t-1}) = \frac{1}{|\mathcal{B}|} \sum_{i \in \mathcal{B}_t}\nabla f_i(\boldsymbol{x}_{t-1}).\)

Here, \(|\mathcal{B}|\) is the size of the batch, which is the number of examples in the mini-batch. This is a hyper-parameter. Just like the stochastic gradient, the mini-batch stochastic gradient \(\boldsymbol{g}_t\) obtained by sampling with replacement is an unbiased estimate of the gradient \(\nabla f(\boldsymbol{x}_{t-1})\). Given the learning rate \(\eta_t\) (positive), the iteration of mini-batch SGD on the independent variable is as follows:

\(\boldsymbol{x}_t \leftarrow \boldsymbol{x}_{t-1} - \eta_t \boldsymbol{g}_t.\)
The variance of the gradient based on random sampling cannot be reduced during the iterative process, so in practice, the learning rate of (mini-batch) SGD can self-decay during the iteration, for example \(\eta_t=\eta t^\alpha\) (usually \(\alpha=-1\) or \(-0.5\)), \(\eta_t = \eta \alpha^t\) (e.g. \(\alpha=0.95\)), or a learning rate decayed once per iteration or after several iterations. As a result, the variance of the product of the learning rate and the (mini-batch) stochastic gradient will decrease. Gradient descent always uses the true gradient of the objective function during the iteration, without the need to self-decay the learning rate.
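As a sketch, the decay schedules mentioned above can be written as simple Python functions (the names are mine, not from the book):

```python
# Two self-decay schedules for the learning rate at time step t.
def eta_polynomial(eta, t, alpha=-0.5):
    """eta_t = eta * t**alpha, usually with alpha = -1 or -0.5."""
    return eta * t ** alpha

def eta_exponential(eta, t, alpha=0.95):
    """eta_t = eta * alpha**t, e.g. with alpha = 0.95."""
    return eta * alpha ** t

print(eta_polynomial(0.1, 4))   # 0.05: the rate has halved after 4 steps
print(eta_exponential(0.1, 2))  # about 0.09
```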
The cost of computing each iteration is \(\mathcal{O}(|\mathcal{B}|)\). When the batch size is 1, the algorithm is SGD; when the batch size equals the number of examples in the training data, the algorithm is gradient descent. When the batch size is small, fewer examples are used in each iteration, which lowers the usage efficiency of parallel processing and RAM. This makes it more time consuming to compute the same number of examples than with larger batches. When the batch size increases, each mini-batch gradient may contain more redundant information. To get a better solution, we need to compute more examples for a larger batch size, such as by increasing the number of epochs.
10.5.1. Reading Data¶
In this chapter, we will use a data set developed by NASA to test the wing noise from different aircraft to compare these optimization algorithms. We will use the first 1500 examples of the data set, 5 features, and a normalization method to preprocess the data.
%matplotlib inline
import d2l
from mxnet import autograd, gluon, init, np, npx
from mxnet.gluon import nn
npx.set_np()

# Save to the d2l package.
def get_data_ch10(batch_size=10, n=1500):
    data = np.genfromtxt('../data/airfoil_self_noise.dat',
                         dtype=np.float32, delimiter='\t')
    data = (data - data.mean(axis=0)) / data.std(axis=0)
    data_iter = d2l.load_array((data[:n, :-1], data[:n, -1]),
                               batch_size, is_train=True)
    return data_iter, data.shape[1]-1
10.5.2. Implementation from Scratch¶
We have already implemented the mini-batch SGD algorithm in sec_linear_scratch. We have made its input parameters more generic here, so that we can conveniently use the same input for the other optimization algorithms introduced later in this chapter. Specifically, we add the status input states and place the hyper-parameters in the dictionary hyperparams. In addition, we will average the loss of each mini-batch example in the training function, so the gradient in the optimization algorithm does not need to be divided by the batch size.
def sgd(params, states, hyperparams):
    for p in params:
        p[:] -= hyperparams['lr'] * p.grad
Next, we are going to implement a generic training function to facilitate the use of the other optimization algorithms introduced later in this chapter. It initializes a linear regression model and can then be used to train the model with the mini-batch SGD and other algorithms introduced in subsequent sections.
# Save to the d2l package.
def train_ch10(trainer_fn, states, hyperparams, data_iter,
               feature_dim, num_epochs=2):
    # Initialization
    w = np.random.normal(scale=0.01, size=(feature_dim, 1))
    b = np.zeros(1)
    w.attach_grad()
    b.attach_grad()
    net, loss = lambda X: d2l.linreg(X, w, b), d2l.squared_loss
    # Train
    animator = d2l.Animator(xlabel='epoch', ylabel='loss',
                            xlim=[0, num_epochs], ylim=[0.22, 0.35])
    n, timer = 0, d2l.Timer()
    for _ in range(num_epochs):
        for X, y in data_iter:
            with autograd.record():
                l = loss(net(X), y).mean()
            l.backward()
            trainer_fn([w, b], states, hyperparams)
            n += X.shape[0]
            if n % 200 == 0:
                timer.stop()
                animator.add(n/X.shape[0]/len(data_iter),
                             (d2l.evaluate_loss(net, data_iter, loss),))
                timer.start()
    print('loss: %.3f, %.3f sec/epoch' % (animator.Y[0][-1], timer.avg()))
    return timer.cumsum(), animator.Y[0]
When the batch size equals 1500 (the total number of examples), we use gradient descent for optimization. The model parameters will be iterated only once for each epoch of the gradient descent. As we can see, the downward trend of the value of the objective function (training loss) flattened out after 6 iterations.
def train_sgd(lr, batch_size, num_epochs=2):
    data_iter, feature_dim = get_data_ch10(batch_size)
    return train_ch10(
        sgd, None, {'lr': lr}, data_iter, feature_dim, num_epochs)

gd_res = train_sgd(1, 1500, 6)
loss: 0.246, 0.066 sec/epoch
When the batch size equals 1, we use SGD for optimization. In order to simplify the implementation, we did not self-decay the learning rate. Instead, we simply used a small constant for the learning rate in the (mini-batch) SGD experiment. In SGD, the independent variable (model parameter) is updated whenever an example is processed. Thus it is updated 1500 times in one epoch. As we can see, the decline in the value of the objective function slows down after one epoch.
Although both procedures processed 1500 examples within one epoch, SGD consumes more time than gradient descent in our experiment. This is because SGD performed more iterations on the independent variable within one epoch, and it is harder for single-example gradient computation to use parallel computing effectively.
sgd_res = train_sgd(0.005, 1)
loss: 0.243, 0.337 sec/epoch
When the batch size equals 100, we use mini-batch SGD for optimization. The time required for one epoch is between the time needed for gradient descent and SGD to complete the same epoch.
mini1_res = train_sgd(.4, 100)
loss: 0.245, 0.008 sec/epoch
When the batch size is reduced to 10, the time for each epoch increases because the workload for each batch is less efficient to execute.
mini2_res = train_sgd(.05, 10)
loss: 0.260, 0.042 sec/epoch
Finally, we compare the time versus loss for the previous four experiments. As can be seen, although SGD converges faster than GD in terms of the number of examples processed, it uses more time than GD to reach the same loss, because computing the gradient example by example is not efficient. Mini-batch SGD is able to trade off convergence speed and computation efficiency. Here, a batch size of 10 improves on SGD, and a batch size of 100 even outperforms GD.
d2l.set_figsize([6, 3])
d2l.plot(*list(map(list, zip(gd_res, sgd_res, mini1_res, mini2_res))),
         'time (sec)', 'loss', xlim=[1e-2, 10],
         legend=['gd', 'sgd', 'batch size=100', 'batch size=10'])
d2l.plt.gca().set_xscale('log')
10.5.3. Concise Implementation¶
In Gluon, we can use the Trainer class to call optimization algorithms. Next, we are going to implement a generic training function that uses the optimization name trainer_name and hyper-parameters trainer_hyperparams to create the instance Trainer.
# Save to the d2l package.
def train_gluon_ch10(trainer_name, trainer_hyperparams, data_iter,
                     num_epochs=2):
    # Initialization
    net = nn.Sequential()
    net.add(nn.Dense(1))
    net.initialize(init.Normal(sigma=0.01))
    trainer = gluon.Trainer(
        net.collect_params(), trainer_name, trainer_hyperparams)
    loss = gluon.loss.L2Loss()
    animator = d2l.Animator(xlabel='epoch', ylabel='loss',
                            xlim=[0, num_epochs], ylim=[0.22, 0.35])
    n, timer = 0, d2l.Timer()
    for _ in range(num_epochs):
        for X, y in data_iter:
            with autograd.record():
                l = loss(net(X), y)
            l.backward()
            trainer.step(X.shape[0])
            n += X.shape[0]
            if n % 200 == 0:
                timer.stop()
                animator.add(n/X.shape[0]/len(data_iter),
                             (d2l.evaluate_loss(net, data_iter, loss),))
                timer.start()
    print('loss: %.3f, %.3f sec/epoch' % (animator.Y[0][-1], timer.avg()))
Use Gluon to repeat the last experiment.
data_iter, _ = get_data_ch10(10)
train_gluon_ch10('sgd', {'learning_rate': 0.05}, data_iter)
loss: 0.243, 0.039 sec/epoch
10.5.4. Summary¶

Mini-batch stochastic gradient descent uses random uniform sampling to get a mini-batch training example for gradient computation.
In practice, learning rates of the (mini-batch) SGD can self-decay during iteration.
In general, the time consumption per epoch for mini-batch SGD is between what it takes for gradient descent and SGD to complete the same epoch.

10.5.5. Exercises¶

Modify the batch size and learning rate and observe the rate of decline for the value of the objective function and the time consumed in each epoch.
Read the MXNet documentation and use the Trainer class set_learning_rate function to reduce the learning rate of the mini-batch SGD to 1/10 of its previous value after each epoch.
In my AP Chem class we are working on testing Hess's Law and conducted three reactions. The enthalpy changes of 1) $\ce{NaOH + HCl}$ and 2) $\ce{NaOH + NH4Cl}$ are used to predict the enthalpy change of 3) $\ce{HCl + NH3}$.
I used the equation $q = C_pm\Delta T$. We determined the $q$-values using a method given to us by our teacher. She told us to assume a density of $1.03\ \mathrm{g/mL}$ for all solutions, we then multiplied the density by $50\ \mathrm{mL}$ to determine the grams of each reactant present. That value is the $m$ term. We used $4.18\ \mathrm{J/(g\ ^\circ C)}$ for the specific heat of water. The temperature change for the first reaction was $14.7\ \mathrm{^\circ C}$, the second was $1.2\ \mathrm{^\circ C}$, and the third was $9.8\ \mathrm{^\circ C}$.
The first reaction yielded a q value of $6328.938\ \mathrm J$, the second yielded $516.648\ \mathrm J$, and the third yielded $4219.292\ \mathrm J$.
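As a quick sketch (assuming, as described above, that the two 50 mL solutions are combined, so the heated mass is $2 \times 50\ \mathrm{mL} \times 1.03\ \mathrm{g/mL} = 103\ \mathrm{g}$), the reported q values can be reproduced:

```python
# Reproduce the reported q values with q = m * c * dT.
density = 1.03            # g/mL, assumed for all solutions
volume = 50.0             # mL of each of the two reactant solutions
m = 2 * density * volume  # total grams of mixture absorbing the heat
c = 4.18                  # J/(g °C), specific heat of water
for dT in (14.7, 1.2, 9.8):
    print(round(m * c * dT, 3))   # 6328.938, 516.648, 4219.292 J
```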
The next step would be to convert this $q$ values into $\mathrm{kJ/mol}$. I consulted my lab partner who did the following operation:
$$\mathrm{kJ/mol} = 6328.938\ \mathrm J \times 1\ \mathrm{kJ}/1000\ \mathrm J \times x/0.2\ \mathrm{mol}$$
In his math, there was no numerator on the $0.2\ \mathrm{mol}$ term. The mol value he used came from there being 0.2 moles of the reactants present. I believe his math to be incorrect, but as my teacher is out sick and there are no other chemistry teachers present, I cannot determine the correct conversion. I believe it to be:
$$\mathrm{kJ/mol} = 6328.938\ \mathrm J \times 1\ \mathrm{kJ}/1000\ \mathrm J \times 6.022\times10^{23}/0.2\ \mathrm{mol}$$
Both of the conversions yield values with significant % error. Any assistance is appreciated. |
This is College Physics Answers with Shaun Dychko. We have <i>β</i>-decay of this nuclide <i>X</i> with atomic number <i>Z</i>, mass number <i>A</i> and number of neutrons <i>N</i>. <i>β</i>-decay means that a neutron turns into a proton and so there is an additional proton here in this daughter nuclide and in order to conserve charge, an electron is also produced. So a neutron turns into a proton and an electron; it splits up into these two. And in order to conserve electron family number, an electron anti-neutrino is produced. Ok but let's go step-by-step through each of those conservation rules. We have conservation of charge so on the left side, we have a charge of <i>Z</i> and on the right hand side, we have a charge of <i>Z</i> plus 1 from this daughter nuclide and we have a charge of negative 1 from the electron produced for a total of <i>Z</i> so that checks out. Considering electron family number on the left side, there are no electrons or other particles that have an electron family number and so that's zero. On the right hand side, we have an electron family number of negative 1 for this <i>β</i>-particle and then we have a compensating positive 1 electron family number for the electron anti-neutrino and so this total is zero also. And the number of nucleons on the left side is <i>A</i> and on the right hand side, it's also <i>A</i> and then there are no nucleons in a <i>β</i>-particle or in a neutrino and so it's <i>A</i> on each side and so that conservation rule checks out also.
Question
Confirm that charge, electron family number, and the total number of nucleons are all conserved by the rule for $\beta^-$ decay given in the equation $^A_Z\textrm{X}_N \to ^A_{Z+1}\textrm{Y}_{N-1} + \beta^- + \bar{\nu_\textrm{e}}$. To do this, identify the values of each before and after the decay.
Final Answer
Please see the solution video.
Video Transcript |
Use Parseval's identity to evaluate the integral
\begin{equation} \int_{-\pi}^{\pi}(\sin x)^4dx\end{equation}
I'm familiar with Parseval's identity, which states that for each piecewise continuous complex function $f$ we have the equality \begin{equation} \frac{1}{\pi}\int_{-\pi}^{\pi}\left|f(x) \right|^{2}dx=\frac{|a_{0}|^{2}}{2}+\sum_{n=1}^{\infty}\left(|a_{n}|^{2}+|b_{n}|^{2} \right) \end{equation} where $a_{n}$ and $b_{n}$ are the Fourier coefficients of $f$.
I'm confused about how to evaluate the integral of $\sin^4 x$ using it.
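One possible route (my sketch, not from the post, using the $1/\pi$-normalized form of the identity): apply Parseval to $f(x) = \sin^2 x$, so that $|f(x)|^2 = \sin^4 x$ and the Fourier series of $f$ is finite:

```latex
% Take f(x) = \sin^2 x, so that |f(x)|^2 = \sin^4 x.
\begin{align*}
f(x) &= \sin^2 x = \frac{1}{2} - \frac{1}{2}\cos 2x,
  & a_0 &= 1,\quad a_2 = -\frac{1}{2},\quad \text{all other } a_n, b_n = 0,\\
\frac{1}{\pi}\int_{-\pi}^{\pi} \sin^4 x\,\mathrm{d}x
  &= \frac{a_0^2}{2} + a_2^2 = \frac{1}{2} + \frac{1}{4} = \frac{3}{4},
  & \text{so}\quad \int_{-\pi}^{\pi} \sin^4 x\,\mathrm{d}x &= \frac{3\pi}{4}.
\end{align*}
```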
UPDATE (18/04/18): The old answer still proved to be useful on my model. The trick is to model the partition function and the distribution separately, thus exploiting the power of softmax.
Consider your observation vector $y$ to contain $m$ labels, with $y_{im}=\delta_{im}$ (1 if sample $i$ contains label $m$, 0 otherwise). So the objective would be to model the matrix in a per-sample manner. Hence the model evaluates $F(y_i,x_i)=-\log P(y_i|x_i)$. Consider expanding $y_{im}=Z\cdot P(y_m)$ to achieve two properties:
Distribution function: $\sum_m P(y_m) = 1$
Partition function: $Z$ estimates the number of labels
Then it's a matter of modeling the two separately. The distribution function is best modeled with a
softmax layer, and the partition function can be modeled with a linear unit (in practice I clipped it as $max(0.01,output)$. More sophisticated modeling like Poisson unit would probably work better). Then you can choose to apply distributed loss (KL on distribution and MSE on partition), or you can try the following loss on their product.
In practice, the choice of optimiser also makes a huge difference. My experience with the factorisation approach is that it works best under
Adadelta (Adagrad did not work for me, I have not tried RMSprop yet, and the performance of SGD is sensitive to its parameters).
Side comment on sigmoid: I have certainly tried sigmoid + crossentropy and it did not work out. The model inclined to predict the $Z$ only, and failed to capture the variation in distribution function. (aka, it's somehow quite useful for modelling the partition and there may be math reason behind it)
UPDATE: (Random thought) It seems using Dirichlet process would allow incorporation of some prior on the number of labels?
UPDATE: By experiment, the modified KL-divergence is still inclined to give multi-class output rather than multi-label output.
(Old answer)
My experience with sigmoid cross-entropy was not very pleasant. At the moment I am using a modified KL-divergence. It takes the form
$$\begin{aligned}Loss(P,Q)&=\sum_x{|P(x)-Q(x)| \cdot \left|\log\frac{P(x)}{Q(x)}\right| } \\ &= \sum_x{\left| (P(x)-Q(x)) \cdot \log\frac{P(x)}{Q(x)}\right| }\end{aligned}$$Where $P(x)$ is the target pseudo-distribution and $Q(x)$ is the predicted pseudo-distribution (but the function is actually symmetrical so it does not actually matter)
They are called pseudo-distributions for not being normalised. So you can have $\sum_x{P(x)}=2$ if you have 2 labels for a particular sample.
Keras implementation

from keras import backend as K  # added import; K is the Keras backend

def abs_KL_div(y_true, y_pred):
    # Clip to avoid log(0) and division by zero
    y_true = K.clip(y_true, K.epsilon(), None)
    y_pred = K.clip(y_pred, K.epsilon(), None)
    return K.sum(K.abs((y_true - y_pred) * K.log(y_true / y_pred)), axis=-1)
If an automorphism fails (C), then obviously it satisfies (B). An (A)+(B) example easily extends from rank 1 to rank 2: take a root of unity $\zeta$ in $\mathbb Z_p^\times$ that is not $\pm1$ (requiring $p\geqslant 5$). If the generators of the free group are $x$ and $y$, then there is an automorphism of the $p$-adic completion given by $x\mapsto \zeta x$ and $y\mapsto y$. It obviously has finite order, but it has nontrivial determinant, so it is not in the closure of $Out(F)$.
Define $SOut$ as the subgroup of $Out$ with determinant $1$. I believe that $SOut(F)$ is dense in $SOut(\hat F)$, so there is nothing satisfying (B)+(C), let alone all three conditions. I will not attempt to show that, but only that all torsion is conjugate into the closure, which is awfully close. Specifically, I will show that an $\ell$-Sylow subgroup is contained in the closure; and I will suggest that there is no $p$-torsion ($p\geqslant 5$). (Added at last minute: that only covers prime-power torsion and does not show that torsion of order with multiple prime factors lifts from $SL_2(\mathbb Z_p)$ to $SOut(\hat F)$, let alone to the closure of $SOut(F)$.) The key point is that $\hat F$ is a pro-p-group and thus pro-nilpotent. That makes it much easier to analyze than if it were, say, the 2,3-completion and pro-soluble, let alone the full completion with simple composition factors. The length $n$ nilpotent quotient of a group is a characteristic quotient, so an automorphism of a group induces a automorphism of its length $n$ nilpotent quotient. The group of automorphisms of a pro-nilpotent group is the inverse limit of the automorphism groups of the length $n$ quotients. (The transition maps in this inverse limit need not be surjective, but are in the free case, as indicated below.)
The automorphism group of a nilpotent group is easy to understand. If $G_n$ is a length $n$ nilpotent group, $G_{n-1}$ its maximal length $n-1$ quotient and $Z$ the kernel, then the kernel of $Aut(G_n)\to Aut(G_{n-1})$ consists of automorphisms that differ from the identity by shearing into the kernel: $\phi$ so that $\phi(g)g^{-1}\in Z$. Using the centrality of $Z$, the map $g\mapsto \phi(g)g^{-1}$ is a homomorphism $G_n\to Z$. A useful observation is $\phi(g)g^{-1}=g^{-1}\phi(g)$. The kernel is isomorphic to the group of homomorphisms $G_n^{ab}\to Z$. $Aut(G_n)$ need not surject to $Aut(G_{n-1})$, but a similar analysis shows that if an automorphism of $G_{n-1}$ lifts to an endomorphism of $G_n$, it has an inverse. By induction that yields a nice clean statement: an endomorphism of a nilpotent group is an automorphism if and only if the induced endomorphism of the abelianization is an automorphism.
Applied to $\hat F$, this shows that the kernel $Aut(\hat F)\to GL_2(\mathbb Z_p)$ is a torsion-free pro-$p$ group. Also, the map is surjective because freeness makes it easy to lift endomorphisms, which are then automorphisms. (Similarly, it is easy to lift automorphisms of the length $n$ nilpotent quotient to endomorphisms of the free group, but it takes work to lift them to automorphisms, which is why I do not do it. That would imply that $SOut(F)$ is dense in $SOut(\hat F)$. That would answer your question, but the points about Sylow subgroups would still be interesting.)
Thus the kernel $Aut(\hat F)\to GL_2(\mathbb Z_p)$ is built out of composition factors of the form $Hom(A,B)$, where $A$ and $B$ are composition factors of $\hat F$, so the kernel is a pro-$p$ group. More careful consideration shows that $A$ and $B$ are torsion-free, hence so is the kernel. The kernel $Out(\hat F)\to GL_2(\mathbb Z_p)$ is a quotient of the prior kernel by $\hat F$, so also a pro-$p$ group. I believe that directly applying an analogous stage-by-stage analysis shows that it is torsion-free. The kernel of $GL_2(\mathbb Z_p)\to GL_2(\mathbb F_p)$ is also a pro-$p$-group, torsion-free unless $p=2$. Extension by a $p$-group cannot change the prime-to-$p$ Sylow subgroups. For each $\ell$ prime to $p$, the $\ell$-Sylow subgroups of $Aut(\hat F)$, $Out(\hat F)$, $GL_2(\mathbb Z_p)$, and $GL_2(\mathbb F_p)$ are isomorphic by the quotient map. Similarly, the $\ell$-Sylow subgroups of $SAut(\hat F)$, $SOut(\hat F)$, $SL_2(\mathbb Z_p)$, and $SL_2(\mathbb F_p)$ are isomorphic. Since $SOut(F)=SL_2(\mathbb Z)$ surjects to $SL_2(\mathbb F_p)$, its closures in $SAut(\hat F)$, $SOut(\hat F)$, and $SL_2(\mathbb Z_p)$ must contain full $\ell$-Sylow subgroups. In particular, all $\ell$-power torsion is conjugate into Sylow subgroups and thus into the closure of $SOut(F)$. And I claim that there is no $p$-power torsion (except for $p=2,3$, where all the torsion in $GL_2(\mathbb Z_p)$ is conjugate into $GL_2(\mathbb Z)=Out(F)$).
All that applies to any rank free group ($Out(F)$ always surjects to $GL(\mathbb Z)$, but is no longer an isomorphism), except that there are more possibilities for $p$-power torsion in $GL_n(\mathbb Z_p)$. I think it is all conjugate into $GL_n(\mathbb Z)$ (though I’m more sure about the fields $\mathbb Q_p$ and $\mathbb Q$). I don’t know how to tell if it lifts to $Out(\hat F)$, but if it does, it probably lifts to $Out(F)$. Anyhow, I claim without proof that $SOut(F)$ is dense in $SOut(\hat F)$.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
When hydrogen peroxide is mixed with potassium permanganate, oxygen gas and water vapour are formed, according to the reaction (source):
$$\ce{2MnO4- + 3H2O2 -> 2MnO2 + 2H2O + 3O2 + 2OH-}$$
This reaction is spontaneous, and exothermic. It is an example of a redox reaction, with the following half reactions occurring (data from Vanýsek):
$$ \begin{align} \ce{MnO4- + 2 H2O + 3 e- &-> MnO2 + 4 OH-} &\quad E^\circ_\mathrm{red} &= 0.595~\mathrm{V} \\ \ce{H2O2 &-> O2 + 2 H+ + 2 e-} &\quad E^\circ_\mathrm{ox} &= -0.695~\mathrm{V} \end{align} $$
$E^\circ_\mathrm{cell}$ is equal to the sum of the oxidation potential and the reduction potential of the two half reactions; in this case, it would be $-0.1~\mathrm{V}$. A redox reaction is spontaneous if $E^\circ_\mathrm{cell}$ is positive — how can it be, then, that hydrogen peroxide spontaneously reacts with permanganate ions?
Using thermodynamical data (from NIST), I have calculated that the $\Delta G^\circ_\mathrm{m}$ of the reaction is $-463.576~\mathrm{kJ}$. The reaction should indeed be spontaneous. How can it be, then, that the results of the thermodynamical approach and the electrochemical one differ drastically? |
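As a numeric sketch (my addition, not part of the original question), combining the quoted potentials with $\Delta G^\circ = -nFE^\circ_\mathrm{cell}$, assuming $n = 6$ electrons for the balanced equation ($2 \times 3\,\mathrm{e^-}$ from the permanganate half reaction against $3 \times 2\,\mathrm{e^-}$ from the peroxide one), makes the conflict explicit:

```python
# Sanity check of the numbers quoted in the question.
F = 96485.0                  # Faraday constant, C/mol
E_red, E_ox = 0.595, -0.695  # V, half-reaction potentials from the question
E_cell = E_red + E_ox        # = -0.100 V
n = 6                        # assumed electrons transferred per reaction
dG = -n * F * E_cell         # Delta G = -nFE, in J/mol of reaction
print(round(E_cell, 3), round(dG / 1000, 1))   # -0.1 57.9
```

So the electrochemical estimate is about $+58\ \mathrm{kJ}$ (non-spontaneous), while the tabulated thermodynamic value quoted above is $-463.576\ \mathrm{kJ}$, which is exactly the discrepancy being asked about.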
Stepping backwards during debugging as a motivation for non-determinism
The notion of a non-deterministic machine suggests itself when you wish to step backward (in time) through a program while debugging. In a typical computer, each step modifies only a finite amount of memory. If you always save this information for the previous 10000 steps, then you can nicely step both forward and backward in the program, and this possibility is not limited to toy programs. If you try to remove the asymmetry between forward steps and backward steps, then you end up with the notion of a non-deterministic machine.
Similarities and differences between non-determinism and randomness
While probabilistic machines share some characteristics with non-deterministic machines, this symmetry between forward steps and backward steps is not shared. To see this, let's model the steps or transitions of a deterministic machine by (total or partial) functions, the transitions of a non-deterministic machine by (finite) relations, and the transitions of a probabilistic machine by (sub)stochastic matrices. For example, here are corresponding definitions for finite automata
a finite set of states $Q$
a finite set of input symbols $\Sigma$
deterministic: a transition function $\delta:Q\times \Sigma \to Q$
non-deterministic: a transition function $\Delta:Q\times \Sigma \to P(Q)$
non-deterministic: a transition relation $\Delta\subset Q\times \Sigma \times Q$
non-deterministic: a function $\Delta: \Sigma \to P(Q \times Q)$
probabilistic: a function $\delta: \Sigma \to ssM(Q)$
Here $P(Q)$ is the power set of $Q$ and $ssM(Q)$ is the space of substochastic matrices on $Q$. A right substochastic matrix is a nonnegative real matrix, with each row summing to at most 1.
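A small numpy sketch (my addition, not from the original answer): transposing a substochastic matrix plays the role of reversing the arrows, and the $k$-step path probabilities satisfy $((P^\top)^k)_{BA} = (P^k)_{AB}$:

```python
import numpy as np

# A right-substochastic transition matrix on 3 states:
# nonnegative entries, each row summing to at most 1.
P = np.array([[0.5, 0.3, 0.0],
              [0.2, 0.0, 0.7],
              [0.0, 0.4, 0.4]])
assert np.all(P >= 0) and np.all(P.sum(axis=1) <= 1.0)

# Probability of reaching state B from state A in k forward steps
# equals the probability of reaching A from B in k backward steps.
k = 3
forward = np.linalg.matrix_power(P, k)[0, 2]     # A=0 -> B=2, forward
backward = np.linalg.matrix_power(P.T, k)[2, 0]  # B=2 -> A=0, backward
assert np.isclose(forward, backward)
print(forward)
```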
There are many different reasonable acceptance conditions
The transitions are only one part of a machine; initial and final states, possible output and acceptance conditions are also important. However, there are only very few non-equivalent acceptance conditions for deterministic machines, a number of reasonable acceptance conditions for non-deterministic machines (NP, coNP, #P, ...), and many possible acceptance conditions for probabilistic machines. Hence this answer focuses primarily on the transitions.
Reversibility is non-trivial for probabilistic machines
A partial function is reversible iff it is injective. A relation is always reversible in a certain sense, by taking the opposite relation (i.e. reversing the direction of the arrows). For a substochastic matrix, taking the transposed matrix is analogous to taking the opposite relation. In general, the transposed matrix is not a substochastic matrix. If it is, then the matrix is said to be
doubly substochastic. In general $P P^T P\neq P$, even for a doubly substochastic matrix $P$, so one can wonder whether this is a reasonable notion of reversibility at all. It is reasonable, because the probability to reach state $B$ from state $A$ in $k$ forward steps is identical to the probability to reach state $A$ from state $B$ in $k$ backward steps. Each path from A to B has the same probability forward and backward. If suitable acceptance conditions (and other boundary conditions) are selected, then doubly substochastic matrices are an appropriate notion of reversibility for probabilistic machines.

Reversibility is tricky even for non-deterministic machines
Just like in general $P P^T P\neq P$, in general $R\circ R^{op}\circ R \neq R$ for a binary relation $R$. If $R$ describes a partial function, then $R\circ R^{op}\circ R = R$ and $R^{op}\circ R\circ R^{op} = R^{op}$. Even if relations $P$ and $Q$ should be strictly reversible in this sense, this doesn't imply that $P\circ Q$ will be strictly reversible too. So let's ignore strict reversibility now (even though it feels interesting), and focus on reversal by taking the opposite relation. A similar explanation as for the probabilistic case shows that this reversal works fine if suitable acceptance conditions are used.
These considerations also make sense for pushdown automata
This post suggests that one motivation for non-determinism is to remove that asymmetry between forward steps and backward steps. Is this symmetry of non-determinism limited to finite automata? Here are corresponding symmetric definitions for pushdown automata
a finite set of states $Q$
a finite set of input symbols $\Sigma$
a finite set of stack symbols $\Gamma$
deterministic: a partial transition function $\delta:Q\times\Gamma\times (\Sigma\cup\{\epsilon\}) \to Q\times\Gamma^{\{0,2\}}$ such that $\delta(q,\gamma,\epsilon)\neq\epsilon$ only if $\delta(q,\gamma,\sigma)=\epsilon$ for all $\sigma\in\Sigma$
non-deterministic: a transition function $\Delta:Q\times\Gamma^{\{0,1\}}\times (\Sigma\cup\{\epsilon\}) \to P(Q\times\Gamma^{\{0,1\}})$
non-deterministic: a transition relation $\Delta\subset Q\times\Gamma^{\{0,1\}}\times (\Sigma\cup\{\epsilon\}) \times Q\times\Gamma^{\{0,1\}}$
non-deterministic: a function $\Delta: \Sigma\cup\{\epsilon\} \to P(Q\times\Gamma^{\{0,1\}}\ \times\ Q\times\Gamma^{\{0,1\}})$
probabilistic: a function $\delta: \Sigma\cup\{\epsilon\} \to ssM(Q\times\Gamma^{\{0,1\}})$ such that $\delta(\epsilon)+\delta(\sigma)\in ssM(Q\times\Gamma^{\{0,1\}})$ for all $\sigma\in\Sigma$
Here $\epsilon$ is the empty string, $\Gamma^{\{0,2\}}=\{\epsilon\}\cup\Gamma\cup(\Gamma\times\Gamma)$ and $\Gamma^{\{0,1\}}=\{\epsilon\}\cup\Gamma$. This notation is used because it is similar to $\Gamma^*$, which is used in many definitions for pushdown automata.
Diagramed verification of reversal for (non)advancing input and stack operations
An advancing input operation with $b\in\Sigma\subset\Sigma\cup\{\epsilon\}$ gets reversed as follows
$a|bc \to a|\boxed{b}c \to ab|c$
$a|bc \leftarrow a\boxed{b}|c \leftarrow ab|c$
$c|ba \to c|\boxed{b}a \to cb|a$
A non-advancing input operation with $\epsilon\in\Sigma\cup\{\epsilon\}$ that doesn't read any input gets reversed as follows
$a|bc \to a|bc \to a|bc$
$a|bc \leftarrow a|bc \leftarrow a|bc$
$cb|a \to cb|a \to cb|a$
Here is a diagram of an advancing input operation whose reversal would look bad
$\require{cancel}\xcancel{\begin{matrix}
a|bc \to \boxed{a}|bc \to ab|c\\
a|bc \leftarrow \boxed{a}b|c \leftarrow ab|c\\
c|ba \to c|b\boxed{a} \to cb|a
\end{matrix}}$
For a stack operation $(s,t) \in \Gamma^{\{0,1\}}\times\Gamma^{\{0,1\}}$, there are the three cases $(s,t)=(a,\epsilon)$, $(s,t)=(\epsilon,a)$, and $(s,t)=(a,b)$. The stack operation $(a,\epsilon)$ gets reversed to $(\epsilon,a)$ as follows
$ab\ldots \to \boxed{a}b\ldots \to |b\ldots$
$\boxed{a}b\ldots \leftarrow |b\ldots \leftarrow b\ldots$
$b\ldots \to |b\ldots \to \boxed{a}b\ldots$
The stack operation $(a,b)$ gets reversed to $(b,a)$ as follows
$ac\ldots \to \boxed{a}c\ldots \to \boxed{b}c\ldots$
$\boxed{a}c\ldots \leftarrow \boxed{b}c\ldots \leftarrow bc\ldots$
$bc\ldots \to \boxed{b}c\ldots \to \boxed{a}c\ldots$
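The stack-operation reversals in these diagrams can also be checked mechanically. A small Python sketch (stacks are written top-first as strings, matching the diagrams; the helper names are my own), covering the basic cases $(a,\epsilon)$, $(\epsilon,a)$, $(a,b)$ and a generalized operation:

```python
def apply_op(stack, s, t):
    """Apply the stack operation (s, t): replace the prefix s (the top of
    the stack, written leftmost) by t. Returns None if the operation is
    blocked because s is not on top of the stack."""
    if not stack.startswith(s):
        return None
    return t + stack[len(s):]

def reverse_op(s, t):
    # reversal simply swaps the two components of the operation
    return (t, s)

# pop, push, replace, and a generalized operation
for s, t in [('a', ''), ('', 'a'), ('a', 'b'), ('ab', 'cde')]:
    stack = s + 'f'                       # some stack on which (s, t) applies
    forward = apply_op(stack, s, t)
    # applying the reversed operation undoes the forward step
    assert apply_op(forward, *reverse_op(s, t)) == stack
```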
A generalized stack operation $(ab,cde)\in\Gamma^*\times\Gamma^*$ would be reversed to $(cde,ab)$
$abf\ldots \to \boxed{ab}f\ldots \to \boxed{cde}f\ldots$
$\boxed{ab}f\ldots \leftarrow \boxed{cde}f\ldots \leftarrow cdef\ldots$
$cdef\ldots \to \boxed{cde}f\ldots \to \boxed{ab}f\ldots$

Reversibility for Turing machines
A machine with more than one stack is equivalent to a Turing machine, and stack operations can easily be reversed. The motivation at the beginning also suggests that reversal (of a Turing machine) should not be difficult. A Turing machine with a typical instruction set is not so great for reversal, because the symbol under the head can influence whether the tape will move left or right. But if the instruction set is modified appropriately (without reducing the computational power of the machine), then reversal is nearly trivial again.
A reversal can also be constructed without modifying the instruction set, but it is not canonical and a bit ugly. It might seem that the existence of a reversal is just as difficult to decide as many other questions pertaining to Turing machines, but a reversal is a local construction and the difficult questions often have a global flavor, so pessimism would probably be unjustified here.
The urge to switch to equivalent instruction sets (easier to reverse) shows that these questions are less obvious than they first appear. A more subtle switch happened in this post before, when total functions and stochastic matrices were replaced by partial functions and substochastic matrices. This switch is not strictly necessary, but the reversal is ugly otherwise. The switch to the substochastic matrices was actually the point where it became obvious that reversibility is not so trivial after all, and that one should write down details (as done above) instead of taking just a high level perspective (as presented in the motivation at the beginning). The questions raised by Niel de Beaudrap also contributed to the realization that the high level perspective is slightly shaky.
Conclusion
Non-deterministic machines allow a finite number of deterministic transitions at each step. For probabilistic machines, these transitions additionally have a probability. This post conveys a different perspective on non-determinism and randomness. Ignoring global acceptance conditions, it focuses on local reversibility (as a local symmetry) instead. Because randomness preserves some local symmetries which are not preserved by determinism, this perspective reveals non-trivial differences between non-deterministic and probabilistic machines. |
One more assumption must be made regarding the $^{222}Rn$ concentration at time zero (a week ago). If you assume radioactive equilibrium between $^{222}Rn$ and the parent $^{238}U$, you can proceed like you did. But instead of taking one wall, you should take into account all the walls made of the $^{238}U$-contaminated material.
On the other hand, if you assume that the room was well ventilated until a week ago, i.e. the $^{222}Rn$ concentration in the air was zero at the beginning, you must take into account the activity build-up. Starting with zero activity, $^{222}Rn$ will gradually grow until the equilibrium with the parent radionuclide is reached (it takes approximately one month in this case). Let's assume that radioactive equilibrium between $^{238}U$ and its decay products exists down to $^{226}Ra$ (which decays to $^{222}Rn$). The decay-growth equations describing such a situation are:
$\frac{dN_1}{dt}=-\lambda_1N_1$
$\frac{dN_2}{dt}=-\lambda_2N_2+\lambda_1N_1$
Where $N_1$, $N_2$ are numbers of atoms of $^{238}U$ and $^{222}Rn$ respectively (as functions of time) and $\lambda_1$, $\lambda_2$ are decay constants of $^{238}U$ and $^{222}Rn$ respectively.
Solution to this system of equations is
$N_2=\frac{N_0 \lambda_1}{\lambda_2-\lambda_1} (e^{-\lambda_1 t}-e^{-\lambda_2 t}) \Rightarrow A_2=A_1 \frac{\lambda_2}{\lambda_2-\lambda_1} (e^{-\lambda_1 t}-e^{-\lambda_2 t})$
Since $\lambda_2= 2.1 \times 10^{-6} \ s^{-1} \gg \lambda_1= 4.9 \times 10^{-18} \ s^{-1}$ and $1 \ week \doteq 6 \times 10^{5} \ s$ we can approximate this solution as
$A_2 \doteq A_1(1-e^{-\lambda_2 t})$
where $t$ is the time of $^{222}Rn$ "building up" in seconds, $A_1$, $A_2$ are activities of $^{238}U$ and $^{222}Rn$ respectively and $N_0$ is the initial number of $^{238}U$ atoms. This means that after a week, the $^{222}Rn$ will grow to ~ 70 % of its equilibrium activity, see the figure:
Assuming, that $^{222}Rn$ emanates from all of the walls including the floor and the ceiling, we obtain the activity $A_1$ of $^{238}U$ per unit volume of a wall material as
$A_1 =\frac{A_2 V_{air}}{V_{material}(1-e^{-\lambda_2 t})}=\frac{50 \times 10^3}{6 \times 10^2 \times 0.03 \times (1-e^{-2.1 \times 10^{-6} \times 6 \times 10^5})} \doteq 3900 \ Bq/m^3$
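The build-up factor and the final activity can be reproduced numerically with a short Python check, using the same rounded inputs as the final formula:

```python
import math

lam2 = 2.1e-6                        # 222Rn decay constant (1/s)
t = 6e5                              # one week in seconds (approximately)

buildup = 1 - math.exp(-lam2 * t)    # fraction of equilibrium activity reached
assert abs(buildup - 0.716) < 0.01   # ~ 70 % after a week, as stated

# the worked numbers from the final formula (Bq/m^3)
A1 = 50e3 / (6e2 * 0.03 * buildup)
assert 3800 < A1 < 3950              # rounds to about 3900 Bq/m^3
```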
Question:
Two identical, non-interacting spin-$1/2$ particles are in a 1D Harmonic Oscillator Potential. Their Hamiltonian is given by
$$H=\frac{p_{1x}^2}{2m}+\frac{1}{2}m\omega^2 x_1^2+\frac{p^2_{2x}}{2m}+\frac{1}{2}m\omega^2x_2^2 $$
What is the ground state wave function for the two particles in the singlet and triplet states; i.e., when $S=0$ and $S=1$, respectively.
Attempt: I believe that, for non-interacting indistinguishable particles, we have
$$\psi=\frac{1}{\sqrt{2}}\left \{\psi_1 (x_1)\psi_2(x_2)+\psi_1(x_2)\psi_2(x_1)\right \}$$
As well, the ground state of a single particle in a 1D Harmonic Oscillator Potential is
$$\psi_0(x)=\left ( \frac{m\omega}{\pi \hbar}\right )^{1/4}\exp \left \{-\frac{m\omega}{2\hbar}x^2 \right\}$$
Therefore, would our $\psi$ for the two particle system just be
$$\psi=\frac{2}{\sqrt{2}}\left ( \frac{m\omega}{\pi \hbar}\right )^{1/2} \left (\exp \left \{-\frac{m\omega}{2\hbar}(x_1^2+x_2^2) \right\} \right )$$ I feel like I'm missing something. Also, how do I account for the $S=0$ and $S=1$ cases? I'm very confused how I incorporate them into my general case above, which does not consider spin. Any help would be appreciated.
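As a sanity check on the normalization (a sympy sketch of my own, with the shorthand $a = m\omega/\hbar$): the $\psi$ I wrote above integrates to 2 rather than 1, which suggests the prefactor $2/\sqrt{2}$ is too large:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
a = sp.symbols('a', positive=True)        # shorthand for m*omega/hbar

# the proposed two-particle wave function, with prefactor 2/sqrt(2) = sqrt(2)
psi = sp.sqrt(2) * sp.sqrt(a / sp.pi) * sp.exp(-a * (x1**2 + x2**2) / 2)

# squared norm over the whole plane
norm2 = sp.integrate(psi**2, (x1, -sp.oo, sp.oo), (x2, -sp.oo, sp.oo))
assert sp.simplify(norm2) == 2            # not 1, so psi is not normalized
```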
This is an interesting question that I have asked myself. Below is my take.
Let us consider an economy $(\Omega,\mathcal{F},P)$ equipped with a filtration $(\mathcal{F}_t)_{t \geq 0}$, consisting of a traded asset $S_t$ and a
numéraire $N_t$ specified by the following stochastic differential equations:
$$\begin{align}\text{d}S_t&=\alpha(t,S_t)\text{d}t+\beta(t,S_t)\text{d}W_t\\[3pt]\text{d}N_t&=a(t,N_t)\text{d}t+b(t,N_t)\text{d}\tilde{W}_t\end{align}$$
Our economy has a derivative contract written on the asset $S_t$ with payoff function $h(\cdot)$ at maturity $T$. By derivative pricing theory, the price $V_t$ of the derivative is given by the following expectation under the measure $P^N$ associated to the numéraire $N_t$, conditional on the available information:
$$\tag{1}V_t=N_tE^N\left(\frac{h(S_T)}{N_T}\bigg|\mathcal{F}_t\right)$$
Defining the function $g(\cdot)$ for $(s,n) \in \mathbb{R}_+^2$:
$$g(s,n)=\frac{h(s)}{n}$$
By the Markov Property $-$ see e.g. theorem 6.3.1. in Stochastic Calculus for Finance II by Shreve $-$ we have for $0\leq t\leq T$:
$$\tag{2} V_t=v(t,S_t,N_t)$$
Thus by Itô's lemma:
$$\begin{align}\tag{3}\text{d}V_t=& \ \frac{\partial v}{\partial t}\text{d}t+\left(\frac{\partial v}{\partial S}\text{d}S_t+\frac{1}{2}\frac{\partial^2 v}{\partial S^2}(\text{d}S_t)^2\right)+\left(\frac{\partial v}{\partial N}\text{d}N_t+\frac{1}{2}\frac{\partial^2 v}{\partial N^2}(\text{d}N_t)^2\right)\\&+\left(\frac{\partial^2v}{\partial S\partial N}\text{d}S_t\text{d}N_t\right)\end{align}$$
We note two things:
Observability: by equation $(2)$ the value today of a derivative depends upon the value today of the underlying asset and the numéraire $N_t$, therefore the numéraire needs to be at least observable, i.e. it cannot be some latent state variable. If the numéraire is unobservable we cannot compute the price.
Tradability: most importantly, by equation $(3)$ we observe that the variations in value of the derivative also depend upon the variations of the value of the underlying asset and the numéraire. If we are to set up a hedging portfolio, we need to be able to trade the numéraire $N_t$ in order to offset the fluctuations in the value of the derivative due to fluctuations of the numéraire.
References
Shreve, S. (2004).
Stochastic Calculus for Finance II, Springer.
@AFK (2016). "Feynman Kac and choice of measure", Quant Stack Exchange.
@Quantuple (2016). "Other numeraire choices when applying Feynman Kac", Quant Stack Exchange.
Given the two jump-diffusions: \begin{equation} \begin{aligned} dX_{1,t} &= a_1 dt + b_1 dW_t + c_1 dN_t(\lambda) \\ dX_{2,t} &= a_2 dt + b_2 dW'_t + c_2 dN_t(\lambda) \\ corr(dW,dW') &= \rho \\ dN & \mbox{: Poisson process, of intensity } \lambda \end{aligned} \end{equation} what SDE $$df_t = \, ? $$ does the function $$ f_t = f(X_{1,t}, X_{2,t}) $$ satisfy? Thanks in advance for help and/or references.
Answer
Assuming the Poisson process $N_t$ is independent from the Brownian motions $(W_{1,t},W_{2,t})$, you'll have \begin{align} df(X_{1,t},X_{2,t}) &= \frac{\partial f}{\partial X_{1,t}} dX_{1,t}^c + \frac{\partial f}{\partial X_{2,t}} dX_{2,t}^c + \dots \\ &+ \frac{1}{2} \frac{\partial^2 f}{\partial X_{1,t}^2 } d\langle X_{1,t} \rangle_t^c + \frac{1}{2} \frac{\partial^2 f}{\partial X_{2,t}^2} d\langle X_{2,t} \rangle_t^c + \frac{\partial^2 f}{\partial X_{1,t} \partial X_{2,t}} d\langle X_{1,t} X_{2,t} \rangle_t^c + \dots \\ &+ \left( f(X_{1,t},X_{2,t}) - f(X_{1,t^-},X_{2,t^-}) \right) dN_t \end{align} where the superscript $c$ denotes the continuous part of the semi-martingales $(X_{i,t})_{t \geq 0}$ i.e. $$ d X_{i,t}^c = a_i dt + b_i dW_{i,t},\,\,\, i=1,2$$ such that $$ d\langle X_{i,t} \rangle_t^c = b_i^2 dt,\,\,\, i=1,2 $$ $$ d\langle X_{1,t}, X_{2,t} \rangle_t^c = \rho b_1 b_2 dt $$
Remark 1: All the derivatives above should be evaluated at $t^-$.
Remark 2: I've used the notation $\langle X \rangle_t$ to refer to the optional quadratic variation (sometimes denoted by $[ X ]_t$ in the literature) rather than the previsible quadratic variation (the previsible quadratic variation is the compensator of the optional quadratic variation). In case the process has continuous paths the two concepts coincide, but here it's not the case, so I hope this clarifies things.

Background
Consider a non-continuous semi-martingale $(X_t)_{t \geq 0}$ solution of$$ dX_t = a dt + b dW_t + c dN_t $$Anticipating on what comes next, we also introduce its
continuous counterpart $(X_t^c)_{t \geq 0}$ verifying$$ dX_t^c = a dt + bdW_t $$
In differential form, the generalised Itô formula for non-continuous semi-martingales reads (cf. equation (2) in this great blog + demonstration), \begin{align} df(X_t) &= \frac{\partial f}{\partial X_t} dX_t + \frac{1}{2} \frac{\partial^2 f}{\partial X_t^2} d\langle X \rangle_t \dots \\ &+ \left( \underbrace{\left( f(X_t)-f(X_{t^-}) \right)}_{\Delta f(X_t)} - \frac{\partial f}{\partial X_t} \underbrace{c}_{\Delta X_t} - \frac{1}{2} \frac{\partial^2 f}{\partial X_t^2} \underbrace{c^2}_{\Delta X_t^2} \right) dN_t \tag{1} \end{align}
The quadratic variation of the non-continuous semi-martingale $(X_t)$ computes as $$ d\langle X \rangle_t = b^2 dt + c^2 dN_t = d\langle X \rangle_t^c + c^2 dN_t $$ assuming the Poisson process is independent from the Brownian motion under our working probability space (cf. section 15.4). Along with the definition of the SDE satisfied by $(X_t)_{t \geq 0}$ this result allows us to rewrite $(1)$ as \begin{align} \require{cancel} df(X_t) &= \frac{\partial f}{\partial X_t} \left(a dt + b dW_t + \cancel{c dN_t} \right) + \dots \\ &\frac{1}{2} \frac{\partial^2 f}{\partial X_t^2} \left( b^2 dt + \cancel{c^2 dN_t} \right) + \dots \\ &+ \left( (f(X_t)-f(X_{t^-}) \cancel{- \frac{\partial f}{\partial X_t} c} \cancel{- \frac{1}{2} \frac{\partial^2 f}{\partial X_t^2} c^2} \right) dN_t \end{align}
$$ df(X_t) = \frac{\partial f}{\partial X_t} dX_t^c + \frac{1}{2} \frac{\partial^2 f}{\partial X_t^2} d\langle X_t\rangle_t^c + \left( f(X_t)-f(X_{t^-}) \right) dN_t $$
Now you can repeat the experiment starting from the multivariate counterpart of $(1)$ i.e. \begin{align} df(X_t) &= \frac{\partial f}{\partial X_t} dX_t + \frac{\partial f}{\partial Y_t} dY_t + \dots \\ &\frac{1}{2} \frac{\partial^2 f}{\partial X_t^2} d\langle X \rangle_t + \frac{1}{2} \frac{\partial^2 f}{\partial Y_t^2} d\langle Y \rangle_t + \frac{1}{2} \frac{\partial^2 f}{\partial X_t Y_t} d\langle X, Y \rangle_t \dots \\ &+ \left( \Delta f(X_t,Y_t) - \frac{\partial f}{\partial X_t} \Delta X_t - \frac{\partial f}{\partial Y_t} \Delta Y_t \dots \\ - \frac{1}{2} \frac{\partial^2 f}{\partial X_t^2} \Delta X_t^2 - \frac{1}{2} \frac{\partial^2 f}{\partial Y_t^2} \Delta Y_t^2 - \frac{\partial^2 f}{\partial X_t \partial Y_t } \Delta X_t \Delta Y_t \right) dN_t \tag{2} \end{align} to end up on the above mentioned answer. |
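The quadratic (co)variation bookkeeping above follows from the usual Itô multiplication table ($dt$ times anything vanishes at this order, $dW_i\,dW_j = \rho_{ij}\,dt$, $dN\,dN = dN$, $dW\,dN = 0$ for an independent Poisson process). A small symbolic sketch (sympy; the helper names are my own) that mechanizes the table for the two jump-diffusions of the question:

```python
import sympy as sp

a1, b1, c1, a2, b2, c2, rho = sp.symbols('a1 b1 c1 a2 b2 c2 rho')

def ito_product(dX, dY):
    # Itô multiplication table; every pair not listed (dt with anything,
    # dW with dN, ...) contributes nothing at first order in dt.
    table = {
        ('dW1', 'dW1'): ('dt', 1),
        ('dW2', 'dW2'): ('dt', 1),
        ('dW1', 'dW2'): ('dt', rho),
        ('dW2', 'dW1'): ('dt', rho),
        ('dN', 'dN'): ('dN', 1),
    }
    out = {'dt': 0, 'dW1': 0, 'dW2': 0, 'dN': 0}
    for u, cu in dX.items():
        for v, cv in dY.items():
            if (u, v) in table:
                basis, weight = table[(u, v)]
                out[basis] += cu * cv * weight
    return out

# dX1 = a1 dt + b1 dW1 + c1 dN,  dX2 = a2 dt + b2 dW2 + c2 dN
dX1 = {'dt': a1, 'dW1': b1, 'dW2': 0, 'dN': c1}
dX2 = {'dt': a2, 'dW1': 0, 'dW2': b2, 'dN': c2}

# d<X1>_t = b1^2 dt + c1^2 dN   (continuous part b1^2 dt, as in the text)
assert ito_product(dX1, dX1)['dt'] == b1**2
assert ito_product(dX1, dX1)['dN'] == c1**2
# d<X1, X2>_t = rho b1 b2 dt + c1 c2 dN
assert ito_product(dX1, dX2)['dt'] == rho * b1 * b2
assert ito_product(dX1, dX2)['dN'] == c1 * c2
```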
Which is the best way to put function plots into a LaTeX document?
To extend the answer from Mica,
pgfplots can do calculations in TeX:
\documentclass{standalone}
\usepackage{pgfplots}
\begin{document}
\begin{tikzpicture}
  \begin{axis}[
    xlabel=$x$,
    ylabel={$f(x) = x^2 - x +4$}
  ]
    \addplot {x^2 - x +4};
  \end{axis}
\end{tikzpicture}
\end{document}
or using GNUplot (requires
--shell-escape):
\documentclass{standalone}
\usepackage{pgfplots}
\begin{document}
\begin{tikzpicture}
  \begin{axis}[
    xlabel=$x$,
    ylabel=$\sin(x)$
  ]
    \addplot gnuplot[id=sin]{sin(x)};
  \end{axis}
\end{tikzpicture}
\end{document}
You can also pre-calculate values using another program, for example a spreadsheet, and import the data. This is all detailed in the manual.
With version 3 of PGF/TikZ the
datavisualization library is available for plotting data or functions. Here are a couple of examples adapted from the manual (see part VI,
Data Visualization).
\documentclass[border=2mm,tikz]{standalone}
\usepackage{tikz}
\usetikzlibrary{datavisualization}
\usetikzlibrary{datavisualization.formats.functions}
\begin{document}
\begin{tikzpicture}
\datavisualization [school book axes, visualize as smooth line,
    y axis={label={$y=x^2$}},
    x axis={label} ]
  data [format=function] {
    var x : interval [-1.5:1.5] samples 7;
    func y = \value x*\value x;
  };
\end{tikzpicture}
\begin{tikzpicture}
\datavisualization [scientific axes=clean,
    y axis=grid,
    visualize as smooth line/.list={sin,cos,tan},
    style sheet=strong colors,
    style sheet=vary dashing,
    sin={label in legend={text=$\sin x$}},
    cos={label in legend={text=$\cos x$}},
    tan={label in legend={text=$\tan x$}},
    data/format=function ]
  data [set=sin] {
    var x : interval [-0.5*pi:4];
    func y = sin(\value x r);
  }
  data [set=cos] {
    var x : interval [-0.5*pi:4];
    func y = cos(\value x r);
  }
  data [set=tan] {
    var x : interval [-0.3*pi:.3*pi];
    func y = tan(\value x r);
  };
\end{tikzpicture}
\end{document}
tikz + gnuplot (see the manual for details). Here's a "live" example used in a lecture (using beamer) to illustrate the convergence of a series of square-integrable functions.
\begin{tikzpicture}[domain=-1:1,yscale=2,xscale=4,smooth]
\fill[gray] (-1.2,-1.2) rectangle (1.2,2.5);
\draw[very thin] (-1.1,-1.1) grid[step=.5] (1.1,2.4);
\draw[thick,->] (-1.2,0) -- (1.2,0);
\draw[thick,->] (0,-1.2) -- (0,2.5);
\draw[color=red] plot[id=1] function{cos(pi*x)};
\draw<2->[color=blue,thick] plot[id=2] function{cos(pi*x)+cos(2*pi*x)/2};
\draw<3->[color=green!50!black,thick] plot[id=3] function{cos(pi*x) + cos(2*pi*x)/2 + cos(3*pi*x)/3};
\draw<4->[color=yellow,thick] plot[id=4] function{cos(pi*x) + cos(2*pi*x)/2 + cos(3*pi*x)/3 + cos(4*pi*x)/4};
\draw<5->[color=cyan,thick] plot[id=5] function{cos(pi*x) + cos(2*pi*x)/2 + cos(3*pi*x)/3 + cos(4*pi*x)/4 + cos(5*pi*x)/5};
\end{tikzpicture}
OK, here's a non-TikZ answer for balance (you'd think TikZ is the second coming on SE!)
\documentclass{minimal}
\usepackage{pstricks-add}
\begin{document}
\psset{xunit=7cm,yunit=0.6cm}
\def\xlim{1}
\def\ylim{16}
\begin{pspicture*}(-\xlim,-\ylim)(\xlim,\ylim)
\psaxes[Dx=0.5,Dy=5]{<->}(0,0)(-\xlim,-\ylim)(\xlim,\ylim)
\psplot[plotpoints=500,showpoints=false,algebraic]{-1}{1}{sin(1/x)/x}
\end{pspicture*}
\end{document}
Vincent Zoonekynd gives an example for this, from his long list of Metapost examples:
beginfig(166)
  ux:=2mm; uy:=5mm;
  numeric xmin, xmax, ymin, ymax, M;
  xmin := -6.3; xmax := 12.6;
  ymin := -2; ymax := 2;
  M := 100;
  draw (ux*xmin,0) -- (ux*xmax,0);
  draw (0,uy*ymin) -- (0,uy*ymax);
  pair a[];
  for i=0 upto M:
    a[i] := ( xmin + (i/M)*(xmax-xmin), sind(180/3.14*( xmin + (i/M)*(xmax-xmin) )) ) xscaled ux yscaled uy;
  endfor;
  draw a[0] for i=1 upto M: --a[i] endfor;
endfig;
gives
This is much longer than the other examples, because it does everything from scratch, but it would be easy to put some functions for creating axes and scaling the graph, so that specifying the plot was some boilerplate plus the function definition. I might do that later...
Is there a specific reason you need to graph the function within LaTeX? Wouldn't it be better to use something like R or MATLAB to generate a PDF that you can then
\includegraphics? This will generally speed up compilation, and graphs thus generated are probably more customisable and so on.
If you absolutely have to generate the graph inside LaTeX then consider using the standalone package: this will save some time when compiling big documents...
Then of course, there is sweave...
R and sweave were already mentioned but I couldn't pass up the opportunity to mention
tikzDevice (yes, again
tikz). I have successfully been using it to generate
.tex documents with R, for example
options(tikzLatex='/path to TeX distribution on computer')
require(tikzDevice)
tikz("~/some destination/rgraph.tex", width = 5, height = 5.5)
# Some R code
dev.off()
Usually I point it to the same folder as the working LaTeX document, and put it in the document
\input{rgraph}
I feel this gives me much needed control over my graphs, although I'll have to try some other solutions here before I decide which solution is the most comfortable for me. Just thought I'd add something (hopefully) of value.
The latest version of gnuplot itself also has a tikz output terminal
xyplot is nice.
edit: Oops — I thought you meant graph-theory graphs, not plots of ƒ(x) versus x. I would use
R and
Sweave to make the graphs in
LaTeX. |
Since I spend a lot of time solving sparse linear equation systems, I am also a user of sparse matrix reordering methods. My claim to fame is that I have implemented approximate minimum degree myself and it is used in MOSEK.
Below I summarize some interesting links to graph partitioning software:
It is very common to use a BLAS library to perform linear algebra operations such as dense matrix times dense matrix multiplication, which can be performed using the dgemm function. The advantages of BLAS are that
it is a well-defined standard,
and that hardware vendors such as Intel supply tuned versions.
Now at MOSEK, my employer, we use the Intel MKL library that includes a BLAS implementation. It really helps us deliver good floating point performance. Indeed we use a sequential version of Intel MKL but call it from potentially many threads using Cilk Plus. This works well due to the well designed BLAS interface. However, there is one rotten apple in the basket and that is error handling.
Here I will summarize why the error handling in the BLAS standard is awful from my perspective.
First of all, why can errors occur when you do the dgemm operation, if we assume the dimensions of the matrices are correct and ignore issues with NaNs and the like? Well, in order to obtain good performance the dgemm function may allocate additional memory to store smallish matrices that fit into the cache. I.e. the library uses a blocked version to improve the performance.
Oh wait, that means it can run out of memory, and then what? The BLAS standard error handling is to print a message to stderr or something along that line.
Recall that dgemm is embedded deep inside MOSEK, which might be embedded deep inside a third party program. This implies an error message printed to stderr does not make sense to the user. Also, the user would NOT like us to terminate the application with a fatal error. Rather we want to know that an out-of-space situation happened and terminate gracefully, or do something to lower the space requirement, e.g. use fewer threads.
What is the solution to this problem? The only solution offered is to replace a function named xerbla that gets called when an error happens. The idea is that the function can set a global flag indicating an error happened. This might be a reasonable solution if the program is single threaded. Now instead assume you use a single-threaded dgemm (from say Intel MKL) but call it from many threads. Then first of all you have to introduce a lock (a mutex) around the global error flag, leading to performance issues. Next, it is hard to figure out which of all the dgemm calls failed. Hence, you have to fail them all. What a pain.
Why is the error handling so primitive in BLAS libraries? I think the reasons are:
BLAS is an old Fortran based standard.
For many years BLAS routines would not allocate storage. Hence, dgemm would never fail unless the dimensions were wrong.
BLAS was proposed by academics, who do not care so much about error handling. I mean, if you run out of memory you just buy a bigger supercomputer and rerun your computations.
If BLAS had been invented today, it would most likely have been designed in C, and then all functions would have returned an error code. I know dealing with error codes is a pain too, but that would have made error reporting much easier for those who wanted to do it properly.
I found the talk Plain Threads are the GOTO of today's computing by Hartmut Kaiser very interesting, because I have been working on improving the multithreaded code in MOSEK recently and am also thinking about how MOSEK should deal with all the cores in the CPUs in the future. I agree with Hartmut that something other than plain threads is needed.
First a clarification: conic quadratic optimization and second-order cone optimization are the same thing. I prefer the name conic quadratic optimization though.
Frequently it is asked on the internet what the computational complexity of solving conic quadratic problems is, or the related question: what is the complexity of the algorithms implemented in MOSEK, SeDuMi or SDPT3?
To the best of my knowledge almost all open source and commercial software employ a primal-dual interior-point algorithm using for instance the so-called Nesterov-Todd scaling.
A conic quadratic problem can be stated in the form
\[\begin{array}{lccl}\mbox{min} & \sum_{j=1}^d (c^j)^T x^j & \\\mbox{st} & \sum_{j=1}^d A^j x^j & = & b \\& x^j \in K^j & \\\end{array}\]where \(K^j\) is an \(n^j\) dimensional quadratic cone. Moreover, I will use \(A = [A^1,\ldots, A^d ]\) and \(n=\sum_j n^j\). Note that \(d \leq n\). First observe that the problem cannot be solved exactly on a computer using floating-point numbers, since the solution might be irrational. This is in contrast to linear problems, which always have a rational solution if the data is rational.
Using for instance the primal-dual interior-point algorithm, the problem can be solved to \(\varepsilon\) accuracy in \(O(\sqrt{d} \ln(\varepsilon^{-1}))\) interior-point iterations, where \(\varepsilon\) is the accepted duality gap. The most famous variant having that iteration complexity is based on Nesterov and Todd's beautiful work on symmetric cones.
Each iteration requires the solution of a linear system with the coefficient matrix\[ \label{neweq}\left [ \begin{array}{cc}H & A^T \\A & 0 \\\end{array}\right ] \mbox{ (*)}\]This is the most expensive operation; it can be done in \(O(n^3)\) complexity using Gaussian elimination, so we end up with the complexity \(O(n^{3.5}\ln(\varepsilon^{-1}))\).
That is the theoretical result. In practice the algorithms usually work much better, because they normally finish in something like 10 to 100 iterations and rarely employ more than 200 iterations. In fact, if the algorithm requires more than 200 iterations, then typically numerical issues prevent the software from solving the problem.
Finally, a conic quadratic problem is typically sparse, which implies the linear system mentioned above can be solved much faster when the sparsity is exploited. Figuring out how to solve the linear equation system (*) with the lowest complexity when exploiting sparsity is NP-hard, and therefore optimization software only employs various heuristics, such as minimum degree ordering, that help cut the cost per iteration. If you want to know more, then read my Mathematical Programming publication mentioned below. One important fact is that it is impossible to predict the iteration complexity without knowing the problem structure and then doing a complicated analysis of it. I.e. the iteration complexity is not a simple function of the number of constraints and variables unless A is completely dense.
To summarize, primal-dual interior-point algorithms solve a conic quadratic problem in less than 200 times the cost of solving the linear equation system (*) in practice.
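For concreteness, the per-iteration cost can be illustrated by assembling and solving a system with the coefficient matrix (*). This is a hedged NumPy sketch with random data; here H is just some symmetric positive definite stand-in for the actual Nesterov-Todd scaling matrix, so it only shows the shape of the computation:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 6                          # rows of A, total variable dimension
A = rng.standard_normal((m, n))
G = rng.standard_normal((n, n))
H = G @ G.T + n * np.eye(n)          # stand-in symmetric positive definite scaling

# the coefficient matrix (*) from the text
K = np.block([[H, A.T],
              [A, np.zeros((m, m))]])
rhs = rng.standard_normal(m + n)

# dense Gaussian elimination: O((m + n)^3) work per interior-point iteration
sol = np.linalg.solve(K, rhs)
assert np.allclose(K @ sol, rhs)
```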
So can the best proven polynomial complexity bound be proven for software like MOSEK? In general the answer is no, because the software employs a bunch of tricks that speed up the practical performance but unfortunately destroy the theoretical complexity proof. In fact, it is commonly accepted that if the algorithm is implemented strictly as theory suggests, then it will be hopelessly slow.
I have spent a lot of time implementing interior-point methods, as documented by the Mathematical Programming publication, and my view is that the practical implementations are very close to theory.
Here are the bond angles for each molecule (data from wikipedia):
\begin{array}{|c|c|}\hline\mathrm{Molecule} & \mathrm{Bond \space Angle \space (^\circ)} \\ \hline\ce{H2S} & 92.1 \\ \hline\ce{H2O} & 104.5 \\ \hline\ce{NH3} & 107.8 \\ \hline\ce{SO2} & 119 \\ \hline\end{array}
So $L \propto \frac{1}{BA}$ where $L$ is the number of lone pairs and $BA$ is bond angle.
This is true but only in very specific situations: when dealing with molecules that have a central atom in the same period and outer atoms of the same element (e.g. $\ce{CH4}$, $\ce{NH3}$, $\ce{H2O}$). It breaks down as soon as you start comparing molecules with central atoms from different periods (e.g. $\ce{PH3}$ has fewer lone pairs than $\ce{H2O}$ but a smaller bond angle) or when you compare molecules with different outer atoms (e.g. $\ce{NF3}$ has fewer lone pairs than $\ce{H2O}$ but a smaller bond angle). For the reasons behind this I direct you to an excellent previous answer from @ron.
Additionally, I think you need to reconsider the number of lone pairs in $\ce{SO2}$, which can be described by these two resonance structures.
As you can see there is only one lone pair, but unfortunately this doesn't help us very much as we have no other molecule to compare it to.
Also $BA \propto ENC$ where $ENC$ is the electronegativity of the central atom.
This is incorrect. In fact the reverse is true - that $BA \propto \frac{1}{ENC}$ - but
only in the same situations as mentioned before. In this case the trend is only really pronounced in the second period; it is very slight in the third period and virtually non-existent in the fourth. In fact, this 'rule' doesn't really have anything to do with electronegativity (that's just a coincidence) and it's essentially just a result of the first rule.
Also $BA \propto \frac{1}{ENS}$ where $ENS$ is the electronegativity of the surrounding atom.
This is sort of true - it's known as Bent's rule and it can be very useful but it's not really applicable here. It's been discussed many times on this site but here and here are some good introductions.
So how should you answer this question?
The first thing to note is that $\ce{SO2}$ only has three 'groups' on the central atom (sometimes called 'effective electron pairs' in VSEPR theory) - two bonds intermediate between a double and a single bond and a lone pair - whereas all the other molecules have four. Therefore we expect $\ce{SO2}$ to have the largest bond angle of the four molecules, and this is indeed the case. $\ce{H2O}$ and $\ce{NH3}$ are hydrides of the same period so we can use the first rule to determine that $\ce{H2O}$ has a smaller bond angle. Now we just have to decide whether $\ce{H2O}$ or $\ce{H2S}$ has a smaller bond angle. We can apply the hybridisation arguments given by @ron in the answer I linked earlier to determine that $\ce{H2S}$ has the smallest bond angle, and indeed we find that it is almost unhybridised with a bond angle very close to $\mathrm{90~^\circ}$.
My question is in the title, but here is a more detailed formulation:
Let Top be the category of all topological spaces and continuous maps, and let CGTop be the subcategory of compactly generated spaces, as defined in Strickland's notes. We have a functor $k: Top \rightarrow CGTop$ that replaces the topology on a space by the associated compactly generated topology.
If X is a simplicial topological space (a simplicial object in the category of all topological spaces) is the identity map $|kX|\rightarrow k|X|$ a homeomorphism?
Here $|-|$ denotes geometric realization. I'm actually most interested in the "thick" realization (where the defining equivalence relation uses only the face maps), but the question makes sense for both the thick and thin versions.
It follows from the results in the above notes that $|kX|$ is compactly generated (being a quotient of a disjoint union of compactly generated spaces), and that the identity map $|kX|\rightarrow k|X|$ is continuous.
Here is a more general question: if $X$ is a topological space and $\sim$ is an equivalence relation on $X$, then $(kX)/{\sim}$ is compactly generated and the identity map $(kX)/{\sim} \to k(X/{\sim})$ is continuous. Is this map always a homeomorphism? |
This question already has an answer here:
Show that the language L = {ww^Rw: w in {a,b}*} is not a context-free language.
Let's call the first $w$ of a word $ww^Rw \in L$ "part A", the middle part $w^R$ "part B" and the remaining $w$ "part C".
Use the word $$z=\underbrace{a^nba^n}_{A} \underbrace{a^nba^n}_{B} \underbrace{a^nba^n}_{C} \in L$$ with $|z|\geq n$.
The pumping lemma states that if $L$ is context-free, $z$ can be written as $z=uvwxy$ and
(1) $|vwx| \leq n$,
(2) $|vx| \geq 1$,
(3) $\forall i \in \mathbb{N}: uv^iwx^iy \in L$.
However, when setting $i=0$, the resulting word $z'=uwy$ cannot be an element of $L$: since $|vwx| \leq n$, the deleted symbols $v$ and $x$ lie in a window of length at most $n$, which overlaps at most two of the three parts. A short case analysis (depending on whether the window deletes a middle symbol of a part, or only symbols from the surrounding blocks) shows that the three parts of $z'$ can no longer all have the required common form.
Therefore we have to give up the assumption that $L$ is a context-free language.
Let $p \geq 1$ be the pumping length and take the word $ww^Rw \in L$ with $|w| \geq p$. By the pumping lemma there is a decomposition $ww^Rw = uvsxy$ with $|vsx| \leq p$, $|vx| > 0$, and $uv^nsx^ny \in L$ for all $n$. Since $|vsx| \leq p$, either $v$ and $x$ both lie within one of the three parts, or they lie in two adjacent parts. Taking $n=0$ then gives a contradiction in every case, so $L$ is not context-free.
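As a small sanity check (a hypothetical helper, not part of either proof), membership in $L$ is easy to test programmatically:

```python
def in_L(z: str) -> bool:
    """Check whether z = w + reversed(w) + w for some word w."""
    if len(z) % 3 != 0:
        return False
    n = len(z) // 3
    w = z[:n]
    return z == w + w[::-1] + w

# A word of the shape used in the proofs: w = a^n b a^n is a palindrome,
# so w + w^R + w is just three copies of w.
w = "a" * 3 + "b" + "a" * 3
assert in_L(w + w[::-1] + w)
# Removing a single symbol breaks membership:
assert not in_L((w + w[::-1] + w)[1:])
```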
Knapsack Problems¶
This module implements a number of solutions to various knapsack problems, otherwise known as linear integer programming problems. Solutions to the following knapsack problems are implemented:
- Solving the subset sum problem for super-increasing sequences.
- General case using Linear Programming.
AUTHORS:

- Minh Van Nguyen (2009-04): initial version
- Nathann Cohen (2009-08): Linear Programming version

Definition of Knapsack problems¶
If you have already encountered a knapsack problem you should know, but in case you have not: a knapsack problem is what happens when you have hundreds of items to put into a bag which is too small, and you want to pack the most useful of them.
When you formally write it, here is your problem:
- Your bag can contain a weight of at most \(W\).
- Each item \(i\) has a weight \(w_i\).
- Each item \(i\) has a usefulness \(u_i\).
You then want to maximize the total usefulness of the items you will store in your bag, while making sure the total weight of the bag does not exceed \(W\).
As a linear program, this problem can be represented this way (if you define \(b_i\) as the binary variable indicating whether the item \(i\) is to be included in your bag):

\[\text{Maximize } \sum_i u_i b_i \quad \text{such that } \sum_i w_i b_i \leq W \text{ and } b_i \in \{0,1\} \text{ for all } i\]
(For more information, see the Wikipedia article Knapsack_problem)
Examples¶
If your knapsack problem is composed of three items (weight, value) defined by (1,2), (1.5,1), (0.5,3), and a bag of maximum weight 2, you can easily solve it this way:
sage: from sage.numerical.knapsack import knapsack
sage: knapsack( [(1,2), (1.5,1), (0.5,3)], max=2)
[5.0, [(1, 2), (0.500000000000000, 3)]]
Super-increasing sequences¶
We can test for whether or not a sequence is super-increasing:
sage: from sage.numerical.knapsack import Superincreasing
sage: L = [1, 2, 5, 21, 69, 189, 376, 919]
sage: seq = Superincreasing(L)
sage: seq
Super-increasing sequence of length 8
sage: seq.is_superincreasing()
True
sage: Superincreasing().is_superincreasing([1,3,5,7])
False
Solving the subset sum problem for a super-increasing sequence and target sum:
sage: L = [1, 2, 5, 21, 69, 189, 376, 919]
sage: Superincreasing(L).subset_sum(98)
[69, 21, 5, 2, 1]
class sage.numerical.knapsack.Superincreasing(seq=None)¶
A class for super-increasing sequences.
Let \(L = (a_1, a_2, a_3, \dots, a_n)\) be a non-empty sequence of non-negative integers. Then \(L\) is said to be super-increasing if each \(a_i\) is strictly greater than the sum of all previous values. That is, for each \(a_i \in L\) the sequence \(L\) must satisfy the property\[a_i > \sum_{k=1}^{i-1} a_k\]
in order to be called a super-increasing sequence, where \(|L| \geq 2\). If \(L\) has only one element, it is also defined to be a super-increasing sequence.
If seq is None, then construct an empty sequence. By definition, this empty sequence is not super-increasing.

INPUT:

seq – (default: None) a non-empty sequence.
EXAMPLES:
sage: from sage.numerical.knapsack import Superincreasing sage: L = [1, 2, 5, 21, 69, 189, 376, 919] sage: Superincreasing(L).is_superincreasing() True sage: Superincreasing().is_superincreasing([1,3,5,7]) False sage: seq = Superincreasing(); seq An empty sequence. sage: seq = Superincreasing([1, 3, 6]); seq Super-increasing sequence of length 3 sage: seq = Superincreasing(list([1, 2, 5, 21, 69, 189, 376, 919])); seq Super-increasing sequence of length 8
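The defining check is a single linear scan over the sequence; here is a plain-Python sketch of the idea (illustrative, not the Sage implementation):

```python
def is_superincreasing(seq):
    """True if each element strictly exceeds the sum of all previous ones.

    Conventions as above: a length-1 sequence is super-increasing,
    an empty sequence is not.
    """
    if not seq:
        return False
    total = seq[0]          # running sum of all values seen so far
    for a in seq[1:]:
        if a <= total:      # a_i must strictly exceed the sum of previous values
            return False
        total += a
    return True

assert is_superincreasing([1, 2, 5, 21, 69, 189, 376, 919])
assert not is_superincreasing([1, 3, 5, 7])
assert is_superincreasing([0, 1, 2, 4])   # zero as first element is allowed
assert not is_superincreasing([])
```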
is_superincreasing(seq=None)¶

Determine whether or not seq is super-increasing.

If seq=None then determine whether or not self is super-increasing.
Let \(L = (a_1, a_2, a_3, \dots, a_n)\) be a non-empty sequence of non-negative integers. Then \(L\) is said to be super-increasing if each \(a_i\) is strictly greater than the sum of all previous values. That is, for each \(a_i \in L\) the sequence \(L\) must satisfy the property\[a_i > \sum_{k=1}^{i-1} a_k\]
in order to be called a super-increasing sequence, where \(|L| \geq 2\). If \(L\) has exactly one element, then it is also defined to be a super-increasing sequence.
INPUT:
seq – (default: None) a sequence to test
OUTPUT:
If seq is None, then test self to determine whether or not it is super-increasing. In that case, return True if self is super-increasing; False otherwise.

If seq is not None, then test seq to determine whether or not it is super-increasing. Return True if seq is super-increasing; False otherwise.
EXAMPLES:
By definition, an empty sequence is not super-increasing:
sage: from sage.numerical.knapsack import Superincreasing sage: Superincreasing().is_superincreasing([]) False sage: Superincreasing().is_superincreasing() False sage: Superincreasing().is_superincreasing(tuple()) False sage: Superincreasing().is_superincreasing(()) False
But here is an example of a super-increasing sequence:
sage: L = [1, 2, 5, 21, 69, 189, 376, 919] sage: Superincreasing(L).is_superincreasing() True sage: L = (1, 2, 5, 21, 69, 189, 376, 919) sage: Superincreasing(L).is_superincreasing() True
A super-increasing sequence can have zero as one of its elements:
sage: L = [0, 1, 2, 4] sage: Superincreasing(L).is_superincreasing() True
A super-increasing sequence can be of length 1:
sage: Superincreasing([randint(0, 100)]).is_superincreasing() True
largest_less_than(N)¶

Return the largest integer in the sequence self that is less than or equal to N.
This function narrows down the candidate solution using a binary trim, similar to the way binary search halves the sequence at each iteration.
INPUT:

N – integer; the target value to search for.

OUTPUT:

The largest integer in self that is less than or equal to N. If no solution exists, then return None.
EXAMPLES:
When a solution is found, return it:
sage: from sage.numerical.knapsack import Superincreasing sage: L = [2, 3, 7, 25, 67, 179, 356, 819] sage: Superincreasing(L).largest_less_than(207) 179 sage: L = (2, 3, 7, 25, 67, 179, 356, 819) sage: Superincreasing(L).largest_less_than(2) 2
But if no solution exists, return None:
sage: L = [2, 3, 7, 25, 67, 179, 356, 819] sage: Superincreasing(L).largest_less_than(-1) is None True
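Since a super-increasing sequence of non-negative integers is sorted, the "binary trim" described above is just a binary search; a plain-Python sketch using the standard library (again illustrative, not the Sage source):

```python
import bisect

def largest_less_than(seq, N):
    """Largest element of the sorted sequence seq that is <= N, or None.

    bisect_right halves the candidate range at each step, mirroring the
    binary-trim strategy described above.
    """
    i = bisect.bisect_right(seq, N)
    return seq[i - 1] if i > 0 else None

L = [2, 3, 7, 25, 67, 179, 356, 819]
assert largest_less_than(L, 207) == 179
assert largest_less_than(L, 2) == 2
assert largest_less_than(L, -1) is None
```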
subset_sum(N)¶
Solving the subset sum problem for a super-increasing sequence.
Let \(S = (s_1, s_2, s_3, \dots, s_n)\) be a non-empty sequence of non-negative integers, and let \(N \in \ZZ\) be non-negative. The subset sum problem asks for a subset \(A \subseteq S\) all of whose elements sum to \(N\). This method specializes the subset sum problem to the case of super-increasing sequences. If a solution exists, then it is also a super-increasing sequence.
Note
This method only solves the subset sum problem for super-increasing sequences. In general, solving the subset sum problem for an arbitrary sequence is known to be computationally hard.
INPUT:

N – a non-negative integer.

OUTPUT:

A non-empty subset of self whose elements sum to N. This subset is also a super-increasing sequence. If no such subset exists, then return the empty list.
ALGORITHMS:
The algorithm used is adapted from page 355 of [HPS2008].
EXAMPLES:
Solving the subset sum problem for a super-increasing sequence and target sum:
sage: from sage.numerical.knapsack import Superincreasing sage: L = [1, 2, 5, 21, 69, 189, 376, 919] sage: Superincreasing(L).subset_sum(98) [69, 21, 5, 2, 1]
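The greedy algorithm behind this special case can be sketched in plain Python (an illustrative reimplementation, not the Sage source): scan from the largest element down, taking each element that still fits. Super-increasingness is what makes greed safe, since all elements smaller than the largest feasible one sum to less than it.

```python
def subset_sum_superincreasing(seq, N):
    """Subset of a super-increasing sequence summing to N (greedy, largest first).

    Returns the chosen elements in decreasing order, or [] if no subset works.
    """
    chosen = []
    remaining = N
    for a in sorted(seq, reverse=True):
        if a <= remaining:      # always safe for a super-increasing sequence
            chosen.append(a)
            remaining -= a
    return chosen if remaining == 0 else []

assert subset_sum_superincreasing([1, 2, 5, 21, 69, 189, 376, 919], 98) == [69, 21, 5, 2, 1]
assert subset_sum_superincreasing([1, 2, 5], 4) == []
```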
sage.numerical.knapsack.knapsack(seq, binary=True, max=1, value_only=False, solver=None, verbose=0)¶
Solves the knapsack problem
INPUT:
seq – Two different possible types:

- A sequence of tuples (weight, value, something1, something2, ...). Note that only the first two coordinates (weight and value) will be taken into account. The rest (if any) will be ignored. This can be useful if you need to attach some information to the items.
- A sequence of reals (a value of 1 is assumed).

binary – When set to True, an item can be taken 0 or 1 time. When set to False, an item can be taken any amount of times (while staying integer and positive).
max– Maximum admissible weight.
value_only – When set to True, only the maximum useful value is returned. When set to False, both the maximum useful value and an assignment are returned.

solver – (default: None) Specify a Linear Program (LP) solver to be used. If set to None, the default one is used. For more information on LP solvers and which default solver is used, see the documentation of class MixedIntegerLinearProgram.

verbose – integer (default: 0). Sets the level of verbosity. Set to 0 by default, which means quiet.
OUTPUT:
If value_only is set to True, only the maximum useful value is returned. Else (the default), the function returns a pair [value, list], where list can be of two types according to the type of seq:

- The list of tuples \((w_i, u_i, ...)\) occurring in the solution.
- A list of reals where each real is repeated the number of times it is taken into the solution.
EXAMPLES:
If your knapsack problem is composed of three items (weight, value) defined by (1,2), (1.5,1), (0.5,3), and a bag of maximum weight \(2\), you can easily solve it this way:
sage: from sage.numerical.knapsack import knapsack sage: knapsack( [(1,2), (1.5,1), (0.5,3)], max=2) [5.0, [(1, 2), (0.500000000000000, 3)]] sage: knapsack( [(1,2), (1.5,1), (0.5,3)], max=2, value_only=True) 5.0
Besides weight and value, you may attach any data to the items:
sage: from sage.numerical.knapsack import knapsack sage: knapsack( [(1, 2, 'spam'), (0.5, 3, 'a', 'lot')]) [3.0, [(0.500000000000000, 3, 'a', 'lot')]]
In the case where all the values (usefulness) of the items are equal to one, you do not need to bother with the second coordinate, and for the items \((1,1), (1.5,1), (0.5,1)\) you can just type:
sage: from sage.numerical.knapsack import knapsack sage: knapsack([1,1.5,0.5], max=2, value_only=True) 2.0 |
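For tiny instances like the ones above, the 0/1 knapsack can also be solved by brute force over all subsets, without any LP machinery; a plain-Python sketch (illustrative only, exponential in the number of items):

```python
from itertools import combinations

def knapsack_bruteforce(items, max_weight):
    """items: list of (weight, value) tuples. Returns (best_value, best_subset)."""
    best_value, best_subset = 0.0, []
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            weight = sum(w for w, v in subset)
            value = sum(v for w, v in subset)
            if weight <= max_weight and value > best_value:
                best_value, best_subset = value, list(subset)
    return best_value, best_subset

# Same instance as the Sage example above:
value, chosen = knapsack_bruteforce([(1, 2), (1.5, 1), (0.5, 3)], max_weight=2)
assert value == 5
assert chosen == [(1, 2), (0.5, 3)]
```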
If two time series follow a GARCH process, and a third is a linear combination of them, is the third also GARCH process?
I think there are a lot of different ways to specify this problem. For simplicity, consider independent Garch processes $$ r_{1,t} \sim N\left(0,\sigma_{1,t}^{2}\right) $$ $$ \sigma_{1,t}^{2} = \beta_{1,1}+\beta_{1,2}\varepsilon_{1,t-1}^{2}+\beta_{1,3}\sigma_{1,t-1}^{2} $$ and $$ r_{2,t} \sim N\left(0,\sigma_{2,t}^{2}\right) $$ $$ \sigma_{2,t}^{2} = \beta_{2,1}+\beta_{2,2}\varepsilon_{2,t-1}^{2}+\beta_{2,3}\sigma_{2,t-1}^{2} $$ where $\left[\begin{array}{cc} \varepsilon_{1,t} & \varepsilon_{2,t}\end{array}\right]\sim N\left(0,\left[\begin{array}{cc} 1 & 0\\ 0 & 1 \end{array}\right]\right)$.
In this case, the linear combination equals $$ r_{3,t} = \alpha_{1}r_{1,t}+\alpha_{2}r_{2,t} \sim N\left(0,\alpha_{1}^{2}\sigma_{1,t}^{2}+\alpha_{2}^{2}\sigma_{2,t}^{2}\right) $$
Assuming the coefficients in the Garch equations are constrained to be positive and sum to less than or equal to one on the lagged values, then $r_{3,t}$ will also follow a Garch process as a result of inheriting the Garch variances of the other variables.
No, a sum of two GARCH processes is generally not a GARCH process.
(I am not even sure whether there exists a nontrivial special case where the opposite holds.)
By GARCH I mean the classic definition of GARCH due to Bollerslev (1986), not an arbitrary variation like EGARCH, IGARCH, FIGARCH or whatever else.
Let me provide an example. Take two independent zero-conditional-mean processes $e_{1,t}$ and $e_{2,t}$. Let their conditional variances follow GARCH(1,1). Then the conditional variance equations of $e_{1,t}$ and $e_{2,t}$ are
$$ \begin{aligned} \sigma_{1,t}^2 = \omega_1 + a_1 e_{1,t-1}^2 + b_1 \sigma_{1,t-1}^2; \\ \sigma_{2,t}^2 = \omega_2 + a_2 e_{2,t-1}^2 + b_2 \sigma_{2,t-1}^2. \\ \end{aligned} $$
Take $e_t$ to be the simplest possible linear combination of $e_{1,t}$ and $e_{2,t}$, namely, their sum:
$$ e_t := e_{1,t} + e_{2,t}. $$
Will its conditional variance follow a GARCH process? If it would, we could express the conditional variance of $e_t$ as
$$ \sigma_t^2 = \omega + \sum_{i=1}^s \alpha_i e_{t-i}^2 + \sum_{i=1}^r \beta_i \sigma_{t-i}^2 $$
(a GARCH($s$,$r$) equation). To show the conditional variance of $e_t$ follows GARCH($s$,$r$) we need to find the appropriate $\omega$, $\alpha$s, $\beta$s, $s$ and $r$. Can this be done?
Let us start by writing the conditional variance of $e_t$ explicitly based on the fact that $e_t = e_{1,t} + e_{2,t}$ and the properties of $e_{1,t}$ and $e_{2,t}$. The conditional variance of $e_t$ will be the sum of the conditional variances of $e_{1,t}$ and $e_{2,t}$ (there are no covariances due to the assumed independence):
$$ \begin{aligned} \sigma_t^2 &= \sigma_{1,t}^2 + \sigma_{2,t}^2 \\ &= \omega_1 + a_1 e_{1,t-1}^2 + b_1 \sigma_{1,t-1}^2 \\ &+ \omega_2 + a_2 e_{2,t-1}^2 + b_2 \sigma_{2,t-1}^2 \\ &= (\omega_1+\omega_2) + (a_1 e_{1,t-1}^2+a_2 e_{2,t-1}^2) + (b_1 \sigma_{1,t-1}^2+b_2 \sigma_{2,t-1}^2). \\ \end{aligned} $$
It does not seem possible to express this in terms of $\sigma_t^2 = \omega + \sum_{i=1}^s \alpha_i e_{t-i}^2 + \sum_{i=1}^r \beta_i \sigma_{t-i}^2$ (but how to prove it formally?). And this is the simple example where $e_{1,t}$ and $e_{2,t}$ are independent (so we spare any covariances that would otherwise appear in the above expressions) and the lag orders of their respective GARCH processes coincide.
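To make the setup concrete, the GARCH(1,1) recursions above are easy to simulate; a minimal plain-Python sketch (the parameter values are illustrative assumptions, not taken from the question):

```python
import random

def garch11_path(omega, a, b, n, seed=0):
    """Simulate e_t with conditional variance
    sigma_t^2 = omega + a*e_{t-1}^2 + b*sigma_{t-1}^2 (requires a + b < 1)."""
    rng = random.Random(seed)
    sigma2 = omega / (1 - a - b)        # start at the unconditional variance
    es, sig2s = [], []
    for _ in range(n):
        e = rng.gauss(0.0, sigma2 ** 0.5)
        es.append(e)
        sig2s.append(sigma2)
        sigma2 = omega + a * e * e + b * sigma2
    return es, sig2s

e1, s1 = garch11_path(0.1, 0.10, 0.80, 500, seed=1)
e2, s2 = garch11_path(0.2, 0.05, 0.90, 500, seed=2)
# The sum e1[t] + e2[t] has conditional variance s1[t] + s2[t] (independence),
# but that sum does not itself obey a single recursion of the form
# omega + alpha*e^2 + beta*sigma^2, which is the point made above.
assert all(v > 0 for v in s1 + s2)
```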
Why do I arrive at a different conclusion than @John? His claim
Assuming the coefficients in the Garch equations are constrained to be positive and sum to less than or equal to one on the lagged values, then $r_{3,t}$ will also follow a Garch process as a result of inheriting the Garch variances of the other variables
is unfounded, i.e. there is no proof or derivation supporting it. On the contrary, the above expressions illustrate (admittedly, without a formal proof) that the inheritance from the two component processes does not add up to fit the form of a GARCH model.
References: Bollerslev, Tim. "Generalized autoregressive conditional heteroskedasticity." Journal of Econometrics 31.3 (1986): 307-327.
I googled it and couldn't find it, and I am really sure this is a question a lot of people ask.

Is it possible to use an "invisible equals sign" in the align environment? Sometimes I would like align to align equations differently, instead of at the equals sign. Some formulas don't have an equals sign at all. So is it possible to use an invisible equals sign, or some trick to align at something other than an equals sign?

Here is a minimal example. I would like to have the last two "equations" aligned (I just wrote some random numbers to make an example). The first two environments show how I normally use align/equation.
\documentclass[12pt,a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\begin{document}
%Normal use of align
\begin{align}
   x &= 0.999\ldots \\
 10x &= 9.999\ldots \\
 10x &= 9+0.999\ldots \\
 10x &= 9 + x\\
  9x &= 9\\
   x &= 1
\end{align}
%Normal use of equation
\begin{equation}
0.999\ldots = 9\left(\tfrac{1}{10}\right) + 9\left({\tfrac{1}{10}}\right)^2 + 9\left({\tfrac{1}{10}}\right)^3 + \cdots = \frac{9\left({\tfrac{1}{10}}\right)}{1-{\tfrac{1}{10}}} = 1.\,
\end{equation}
%What I would like to have aligned
\begin{align}
bci-ax+i > n\\
\sin(\theta_B) \rightarrow 9x
\end{align}
\end{document}
Kind regards! |
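One possible trick (a sketch of an answer, not taken from the original thread): the alignment marker & produces no output by itself, so you can simply place it directly before whatever symbol you want the rows aligned at; no equals sign is needed:

```latex
\begin{align}
bci-ax+i       &> n \\
\sin(\theta_B) &\rightarrow 9x
\end{align}
```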
Consider the following example:
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath, amsthm, amssymb, mathtools, thmtools, unicode-math}
\begin{document}
  \begin{equation*}
    \begin{split}
      |\mu|(B):=\sup\left\{&\sum_{i=1}^k|\mu(B_i)|:k\in\mathbb N\text{ and}\right.\\
      &\left.B_1,\ldots,B_k\in\mathcal E\text{ are disjoint with }\biguplus_{i=1}^kB_i\subseteq B\right\}
    \end{split}
  \end{equation*}
\end{document}
This is how the output looks like:
I need to break the definition into two lines, since it takes too much horizontal space. Obviously, there is a problem with the braces. Actually, I think I know what's going wrong: the opening \left\{ sits before the first ampersand while its closing \right. sits after it, so the two end up in different alignment cells and cannot match. However, I don't know how I can fix this.
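One common workaround (a sketch, assuming fixed-size delimiters are acceptable here): replace \left\{ ... \right\} by manually sized \biggl\{ ... \biggr\}, which do not need to be paired within a single alignment cell:

```latex
\begin{equation*}
  \begin{split}
    |\mu|(B):=\sup\biggl\{&\sum_{i=1}^k|\mu(B_i)|:k\in\mathbb N\text{ and}\\
    &B_1,\ldots,B_k\in\mathcal E\text{ are disjoint with }\biguplus_{i=1}^kB_i\subseteq B\biggr\}
  \end{split}
\end{equation*}
```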
Seeing that in the Chomsky Hierarchy Type 3 languages can be recognised by a DFA (which has no stacks), Type 2 by a DFA with one stack (i.e. a push-down automaton) and Type 0 by a DFA with two stacks (i.e. with one queue, i.e. with a tape, i.e. by a Turing Machine), how do Type 1 languages fit in...
Considering this pseudo-code of a bubblesort:

FOR i := 0 TO arraylength(list) STEP 1
  switched := false
  FOR j := 0 TO arraylength(list)-(i+1) STEP 1
    IF list[j] > list[j + 1] THEN
      switch(list,j,j+1)
      switched := true
    ENDIF
  NEXT
  IF switch...
Let's consider a memory segment (whose size can grow or shrink, like a file, when needed) on which you can perform two basic memory allocation operations involving fixed size blocks:allocation of one blockfreeing a previously allocated block which is not used anymore.Also, as a requiremen...
Rice's theorem tell us that the only semantic properties of Turing Machines (i.e. the properties of the function computed by the machine) that we can decide are the two trivial properties (i.e. always true and always false).But there are other properties of Turing Machines that are not decidabl...
People often say that LR(k) parsers are more powerful than LL(k) parsers. These statements are vague most of the time; in particular, should we compare the classes for a fixed $k$ or the union over all $k$? So how is the situation really? In particular, I am interested in how LL(*) fits in.As f...
Since the current FAQs say this site is for students as well as professionals, what will the policy on homework be?What are the guidelines that a homework question should follow if it is to be asked? I know on math.se they loosely require that the student make an attempt to solve the question a...
I really like the new beta theme, I guess it is much more attractive to newcomers than the sketchy one (which I also liked). Thanks a lot!However I'm slightly embarrassed because I can't read what I type, both in the title and in the body of a post. I never encountered the problem on other Stac...
This discussion started in my other question "Will Homework Questions Be Allowed?".Should we allow the tag? It seems that some of our sister sites (Programmers, stackoverflow) have not allowed the tag as it isn't constructive to their sites. But other sites (Physics, Mathematics) do allow the s...
There have been many questions on CST that were either closed, or just not answered because they weren't considered research level. May those questions (as long as they are of good quality) be reposted or moved here?I have a particular example question in mind: http://cstheory.stackexchange.com...
Ok, so in most introductory Algorithm classes, either BigO or BigTheta notation are introduced, and a student would typically learn to use one of these to find the time complexity.However, there are other notations, such as BigOmega and SmallOmega. Are there any specific scenarios where one not...
Many textbooks cover intersection types in the lambda-calculus. The typing rules for intersection can be defined as follows (on top of the simply typed lambda-calculus with subtyping):$$\dfrac{\Gamma \vdash M : T_1 \quad \Gamma \vdash M : T_2}{\Gamma \vdash M : T_1 \wedge T_2}...
I expect to see pseudo code and maybe even HPL code on regular basis. I think syntax highlighting would be a great thing to have.On Stackoverflow, code is highlighted nicely; the schema used is inferred from the respective question's tags. This won't work for us, I think, because we probably wo...
Sudoku generation is hard enough. It is much harder when you have to make an application that makes a completely random Sudoku.The goal is to make a completely random Sudoku in Objective-C (C is welcome). This Sudoku generator must be easily modified, and must support the standard 9x9 Sudoku, a...
I have observed that there are two different types of states in branch prediction.In superscalar execution, where the branch prediction is very important, and it is mainly in execution delay rather than fetch delay.In the instruction pipeline, where the fetching is more problem since the inst...
Is there any evidence suggesting that time spent on writing up, or thinking about the requirements will have any effect on the development time? Study done by Standish (1995) suggests that incomplete requirements partially (13.1%) contributed to the failure of the projects. Are there any studies ...
NPI is the class of NP problems without any polynomial time algorithms and not known to be NP-hard. I'm interested in problems such that a candidate problem in NPI is reducible to it but it is not known to be NP-hard and there is no known reduction from it to the NPI problem. Are there any known ...
I am reading Mining Significant Graph Patterns by Leap Search (Yan et al., 2008), and I am unclear on how their technique translates to the unlabeled setting, since $p$ and $q$ (the frequency functions for positive and negative examples, respectively) are omnipresent.On page 436 however, the au...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject.On StackOverflow (and possibly Math.SE), questions on introductory formal language and automata theory pop up... questions along the lines of "How do I show language L is/isn't...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject.Certainly, there are many kinds of questions which we could expect to (eventually) be asked on CS.SE; lots of candidates were proposed during the lead-up to the Beta, and a few...
This is somewhat related to this discussion, but different enough to deserve its own thread, I think.What would be the site policy regarding questions that are generally considered "easy", but may be asked during the first semester of studying computer science. Example:"How do I get the symme...
EPAL, the language of even palindromes, is defined as the language generated by the following unambiguous context-free grammar: $S \rightarrow aa$, $S \rightarrow bb$, $S \rightarrow aSa$, $S \rightarrow bSb$. EPAL is the 'bane' of many parsing algorithms: I have yet to enc...
Assume a computer has a precise clock which is not initialized. That is, the time on the computer's clock is the real time plus some constant offset. The computer has a network connection and we want to use that connection to determine the constant offset $B$.The simple method is that the compu...
Consider an inductive type which has some recursive occurrences in a nested, but strictly positive location. For example, trees with finite branching with nodes using a generic list data structure to store the children: Inductive LTree : Set := Node : list LTree -> LTree. The naive way of d...
I was editing a question and I was about to tag it bubblesort, but it occurred to me that tag might be too specific. I almost tagged it sorting but its only connection to sorting is that the algorithm happens to be a type of sort, it's not about sorting per se.So should we tag questions on a pa...
To what extent are questions about proof assistants on-topic?I see four main classes of questions:Modeling a problem in a formal setting; going from the object of study to the definitions and theorems.Proving theorems in a way that can be automated in the chosen formal setting.Writing a co...
Should topics in applied CS be on topic? These are not really considered part of TCS, examples include:Computer architecture (Operating system, Compiler design, Programming language design)Software engineeringArtificial intelligenceComputer graphicsComputer securitySource: http://en.wik...
I asked one of my current homework questions as a test to see what the site as a whole is looking for in a homework question. It's not a difficult question, but I imagine this is what some of our homework questions will look like.
I have an assignment for my data structures class. I need to create an algorithm to see if a binary tree is a binary search tree as well as count how many complete branches are there (a parent node with both left and right children nodes) with an assumed global counting variable.So far I have...
It's a known fact that every LTL formula can be expressed by a Buchi $\omega$-automata. But, apparently, Buchi automata is a more powerful, expressive model. I've heard somewhere that Buchi automata are equivalent to linear-time $\mu$-calculus (that is, $\mu$-calculus with usual fixpoints and onl...
Let us call a context-free language deterministic if and only if it can be accepted by a deterministic push-down automaton, and nondeterministic otherwise.Let us call a context-free language inherently ambiguous if and only if all context-free grammars which generate the language are ambiguous,...
One can imagine using a variety of data structures for storing information for use by state machines. For instance, push-down automata store information in a stack, and Turing machines use a tape. State machines using queues, and ones using two multiple stacks or tapes, have been shown to be equi...
Though in the future it would probably a good idea to more thoroughly explain your thinking behind your algorithm and where exactly you're stuck. Because as you can probably tell from the answers, people seem to be unsure on where exactly you need directions in this case.
What will the policy on providing code be?In my question it was commented that it might not be on topic as it seemes like I was asking for working code. I wrote my algorithm in pseudo-code because my problem didnt ask for working C++ or whatever language.Should we only allow pseudo-code here?... |
Formal approach of the problem
We’ve seen before that the classical estimation technique used to estimate the parameters of a parametric model was to use the maximum likelihood approach. More specifically,

\widehat{\mathbf{\beta}}=\text{argmax}\lbrace \log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})\rbrace

The objective function here focuses (only) on the goodness of fit. But usually, in econometrics, we believe something like
non sunt multiplicanda entia sine necessitate (“entities are not to be multiplied without necessity”), the parsimony principle, simpler theories are preferable to more complex ones. So we want to penalize for too complex models.
This is not a bad idea. It is mentioned here and there in econometrics textbooks, but usually for model choice, not for inference. Usually, we estimate parameters using maximum likelihood techniques, and then we use AIC or BIC to compare two models. Recall that the Akaike (AIC) criterion is based on

-2\log\mathcal{L}(\widehat{\mathbf{\beta}}|\mathbf{x},\mathbf{y})+2\,\text{dim}(\widehat{\mathbf{\beta}})

We have on the left a measure of the goodness of fit, and on the right a penalty increasing with the “complexity” of the model.
Very quickly, here, the complexity is the number of variables used. I will not enter into details about the concept of sparsity (and the true dimension of the problem); I recommend the book by Martin Wainwright, Robert Tibshirani and Trevor Hastie on that issue. But assume that we do not do any variable selection, and consider the regression on all covariates. Define

\Vert\mathbf{a} \Vert_{\ell_0}=\sum_{i=1}^d \mathbf{1}(a_i\neq 0), \quad \Vert\mathbf{a} \Vert_{\ell_1}=\sum_{i=1}^d |a_i|, \quad \Vert\mathbf{a} \Vert_{\ell_2}=\left(\sum_{i=1}^d a_i^2\right)^{1/2}

for any \mathbf{a}\in\mathbb{R}^d. One might say that the AIC could be written

-2\log\mathcal{L}(\widehat{\mathbf{\beta}}|\mathbf{x},\mathbf{y})+2\|\widehat{\mathbf{\beta}}\|_{\ell_0}

And actually, this will be our objective function. More specifically, we will consider
\widehat{\mathbf{\beta}}_{\lambda}=\text{argmin}\lbrace -\log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})+\lambda\|\mathbf{\beta}\|\rbrace

for some norm \|\cdot\|. I will not get back here on the motivation and the (theoretical) properties of those estimates (that will actually be discussed in the Summer School in Barcelona, in July), but in this post, I want to discuss the numerical algorithm to solve such optimization problems, for \|\cdot\|_{\ell_2} (the Ridge regression) and for \|\cdot\|_{\ell_1} (the LASSO regression).

Normalization of the covariates
The problem with \|\mathbf{\beta}\| is that the norm should make sense, somehow: whether a \mathbf{\beta}_j is small depends on the scale of the x_j‘s. So, the first step will be to apply a linear transformation to all covariates x_j to get centered and scaled variables (with unit variance):
y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)

Ridge Regression (from scratch)
Before running some code, recall that we want to solve

\widehat{\mathbf{\beta}}_{\lambda}=\text{argmin}\lbrace -\log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})+\lambda\|\mathbf{\beta}\|_{\ell_2}^2\rbrace

In the case where we consider the log-likelihood of some Gaussian variable, we get the sum of the squares of the residuals, and we can obtain an explicit solution. But not in the context of a logistic regression.
The heuristics about Ridge regression is the following graph. In the background, we can visualize the (two-dimensional) log-likelihood of the logistic regression, and the blue circle is the constraint we have, if we rewrite the optimization problem as a constrained optimization problem:

\min_{\mathbf{\beta}:\|\mathbf{\beta}\|^2_{\ell_2}\leq s} \lbrace \sum_{i=1}^n -\log\mathcal{L}(y_i,\beta_0+\mathbf{x}^T\mathbf{\beta}) \rbrace

can be written equivalently (it is a strictly convex problem)

\min_{\mathbf{\beta},\lambda} \lbrace -\sum_{i=1}^n \log\mathcal{L}(y_i,\beta_0+\mathbf{x}^T\mathbf{\beta}) +\lambda \|\mathbf{\beta}\|_{\ell_2}^2 \rbrace

Thus, the constrained maximum should lie in the blue disk.
LogLik = function(bbeta){
  b0 = bbeta[1]
  beta = bbeta[-1]
  sum(-y*log(1 + exp(-(b0+X%*%beta))) - (1-y)*log(1 + exp(b0+X%*%beta)))}
u = seq(-4,4,length=251)
v = outer(u,u,function(x,y) LogLik(c(1,x,y)))
image(u,u,v,col=rev(heat.colors(25)))
contour(u,u,v,add=TRUE)
u = seq(-1,1,length=251)
lines(u,sqrt(1-u^2),type="l",lwd=2,col="blue")
lines(u,-sqrt(1-u^2),type="l",lwd=2,col="blue")
Let us consider the objective function, with the following code
PennegLogLik = function(bbeta,lambda=0){
  b0 = bbeta[1]
  beta = bbeta[-1]
  -sum(-y*log(1 + exp(-(b0+X%*%beta))) - (1-y)*log(1 + exp(b0+X%*%beta))) + lambda*sum(beta^2)
}
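The same objective can be written in plain Python; a sketch mirroring the R function above, on a hypothetical toy dataset (the intercept b0 is left unpenalized, as in the R code):

```python
import math

def penneg_loglik(b0, beta, X, y, lam=0.0):
    """Penalized negative log-likelihood of a logistic model (ridge penalty).

    X is a list of rows, beta a list of slope coefficients.
    """
    nll = 0.0
    for xi, yi in zip(X, y):
        eta = b0 + sum(b * v for b, v in zip(beta, xi))
        # -log L contribution: log(1+exp(-eta)) if y=1, log(1+exp(eta)) if y=0
        nll += yi * math.log(1 + math.exp(-eta)) + (1 - yi) * math.log(1 + math.exp(eta))
    return nll + lam * sum(b * b for b in beta)

# Toy check: with all coefficients zero, each observation contributes log(2).
X = [[1.0], [-1.0]]
y = [1, 0]
assert penneg_loglik(0.0, [0.0], X, y) == 2 * math.log(2)
```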
Why not try a standard optimisation routine? In the very first post of this series, we mentioned that using optimization routines was not clever, since they rely strongly on the starting point. But here, it is not the case:
lambda = 1
beta_init = lm(PRONO~.,data=myocarde)$coefficients
vpar = matrix(NA,1000,8)
for(i in 1:1000){
  vpar[i,] = optim(par = beta_init*rnorm(8,1,2),
    function(x) PennegLogLik(x,lambda),
    method = "BFGS", control = list(abstol=1e-9))$par}
par(mfrow=c(1,2))
plot(density(vpar[,2]),ylab="",xlab=names(myocarde)[1])
plot(density(vpar[,3]),ylab="",xlab=names(myocarde)[2])

Clearly, even if we change the starting point, it looks like we converge towards the same value. That could be considered the optimum.
The code to compute \widehat{\mathbf{\beta}}_{\lambda} would then be
opt_ridge = function(lambda){
  beta_init = lm(PRONO~.,data=myocarde)$coefficients
  logistic_opt = optim(par = beta_init*0,
    function(x) PennegLogLik(x,lambda),
    method = "BFGS", control=list(abstol=1e-9))
  logistic_opt$par[-1]}
and we can visualize the evolution of \widehat{\mathbf{\beta}}_{\lambda} as a function of {\lambda}
v_lambda = c(exp(seq(-2,5,length=61)))
est_ridge = Vectorize(opt_ridge)(v_lambda)
library("RColorBrewer")
colrs = brewer.pal(7,"Set1")
plot(v_lambda,est_ridge[1,],col=colrs[1])
for(i in 2:7) lines(v_lambda,est_ridge[i,],col=colrs[i])
At least it seems to make sense: we can observe the shrinkage as \lambda increases (we’ll get back to that later on).
Ridge, using the Newton-Raphson algorithm
We’ve seen that we can also use Newton-Raphson to solve this problem. Without the penalty term, the algorithm was
$$\mathbf{\beta}_{new} = \mathbf{\beta}_{old} - \left(\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}\right)^{-1}\cdot \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}$$
where
$$\frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}=\mathbf{X}^T(\mathbf{y}-\mathbf{p}_{old})$$
and
$$\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=-\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}$$
where \mathbf{\Delta}_{old} is the diagonal matrix with terms \mathbf{p}_{old}(1-\mathbf{p}_{old}) on the diagonal.

Thus
$$\mathbf{\beta}_{new} = \mathbf{\beta}_{old} + (\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T[\mathbf{y}-\mathbf{p}_{old}]$$
that we can also write
$$\mathbf{\beta}_{new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z}$$
where \mathbf{z}=\mathbf{X}\mathbf{\beta}_{old}+\mathbf{\Delta}_{old}^{-1}[\mathbf{y}-\mathbf{p}_{old}]. Here, on the penalized problem, we can easily prove that
$$\frac{\partial\log\mathcal{L}_p(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}}=\frac{\partial\log\mathcal{L}(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}}-2\lambda\mathbf{\beta}_{old}$$
while
$$\frac{\partial^2\log\mathcal{L}_p(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}-2\lambda\mathbb{I}$$
Hence
$$\mathbf{\beta}_{\lambda,new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}+2\lambda\mathbb{I})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z}$$
The code is then
Y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)
X = cbind(1,X)
colnames(X) = c("Inter",names(myocarde[,1:7]))
beta = as.matrix(lm(Y~0+X)$coefficients,ncol=1)
for(s in 1:9){
  pi = exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
  Delta = matrix(0,nrow(X),nrow(X)); diag(Delta) = (pi*(1-pi))
  z = X%*%beta[,s] + solve(Delta)%*%(Y-pi)
  B = solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X))) %*% (t(X)%*%Delta%*%z)
  beta = cbind(beta,B)}
beta[,8:10]
              [,1]        [,2]        [,3]
XInter  0.59619654  0.59619654  0.59619654
XFRCAR  0.09217848  0.09217848  0.09217848
XINCAR  0.77165707  0.77165707  0.77165707
XINSYS  0.69678521  0.69678521  0.69678521
XPRDIA -0.29575642 -0.29575642 -0.29575642
XPAPUL -0.23921101 -0.23921101 -0.23921101
XPVENT -0.33120792 -0.33120792 -0.33120792
XREPUL -0.84308972 -0.84308972 -0.84308972
Again, it seems that convergence is very fast.
And interestingly, with that algorithm, we can also derive the variance of the estimator
$$\text{Var}[\widehat{\mathbf{\beta}}_{\lambda}]=[\mathbf{X}^T\mathbf{\Delta}\mathbf{X}+2\lambda\mathbb{I}]^{-1}\mathbf{X}^T\mathbf{\Delta}\text{Var}[\mathbf{z}]\mathbf{\Delta}\mathbf{X}[\mathbf{X}^T\mathbf{\Delta}\mathbf{X}+2\lambda\mathbb{I}]^{-1}$$
where
$$\text{Var}[\mathbf{z}]=\mathbf{\Delta}^{-1}$$
The code to compute \widehat{\mathbf{\beta}}_{\lambda} as a function of \lambda is then
newton_ridge = function(lambda=1){
  beta = as.matrix(lm(Y~0+X)$coefficients,ncol=1)*runif(8)
  for(s in 1:20){
    pi = exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
    Delta = matrix(0,nrow(X),nrow(X)); diag(Delta) = (pi*(1-pi))
    z = X%*%beta[,s] + solve(Delta)%*%(Y-pi)
    B = solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X))) %*% (t(X)%*%Delta%*%z)
    beta = cbind(beta,B)}
  Varz = solve(Delta)
  Varb = solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X))) %*% t(X) %*% Delta %*% Varz %*% Delta %*% X %*% solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X)))
  return(list(beta=beta[,ncol(beta)],sd=sqrt(diag(Varb))))}
We can visualize the evolution of \widehat{\mathbf{\beta}}_{\lambda} (as a function of \lambda)
v_lambda = c(exp(seq(-2,5,length=61)))
est_ridge = Vectorize(function(x) newton_ridge(x)$beta)(v_lambda)
library("RColorBrewer")
colrs = brewer.pal(7,"Set1")
plot(v_lambda,est_ridge[1,],col=colrs[1],type="l")
for(i in 2:7) lines(v_lambda,est_ridge[i,],col=colrs[i])

and to get the evolution of the variance
v_lambda = c(exp(seq(-2,5,length=61)))
est_ridge = Vectorize(function(x) newton_ridge(x)$sd)(v_lambda)
library("RColorBrewer")
colrs = brewer.pal(7,"Set1")
plot(v_lambda,est_ridge[1,],col=colrs[1],type="l")
for(i in 2:7) lines(v_lambda,est_ridge[i,],col=colrs[i],lwd=2)

Recall that when \lambda=0 (on the left of the graphs), \widehat{\mathbf{\beta}}_{0}=\widehat{\mathbf{\beta}}^{mco} (no penalty). Thus, as \lambda increases, (i) the bias increases (estimates tend to 0) and (ii) the variances decrease.

Ridge, using glmnet
As always, there are R functions available to run a ridge regression. Let us use the glmnet function, with \alpha=0
y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)
library(glmnet)
glm_ridge = glmnet(X, y, alpha=0)
plot(glm_ridge,xvar="lambda",col=colrs,lwd=2)
as a function of the norm (the \ell_1 norm here, I don’t know why). I don’t know either why all graphs obtained with different optimisation routines are so different… Maybe that will be for another post…
Ridge with orthogonal covariates
An interesting case is obtained when covariates are orthogonal. This can be obtained using a PCA of the covariates.
library(factoextra)
pca = princomp(X)
pca_X = get_pca_ind(pca)$coord
Let us run a ridge regression on those (orthogonal) covariates
library(glmnet)
glm_ridge = glmnet(pca_X, y, alpha=0)
plot(glm_ridge,xvar="lambda",col=colrs,lwd=2)
plot(glm_ridge,col=colrs,lwd=2)
We clearly observe the shrinkage of the parameters, in the sense that \widehat{\mathbf{\beta}}_{\lambda}^{\perp}=\frac{\widehat{\mathbf{\beta}}^{mco}}{1+\lambda}
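The shrinkage factor \frac{1}{1+\lambda} can be checked numerically for the *linear* ridge estimator, where the closed form is exact. Below is a minimal sketch (in Python/NumPy rather than the post's R, with made-up data): with an orthonormal design Q, the ridge estimate (Q^TQ+\lambda I)^{-1}Q^Ty reduces to the OLS estimate divided by 1+\lambda.

```python
import numpy as np

# Sketch, not from the post: verify beta_ridge = beta_ols / (1 + lambda)
# when the design matrix has orthonormal columns (Q^T Q = I).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(50, 3)))  # orthonormal columns
y = rng.normal(size=50)
lam = 2.0

beta_ols = Q.T @ y                             # OLS, since Q^T Q = I
beta_ridge = np.linalg.solve(Q.T @ Q + lam * np.eye(3), Q.T @ y)
```

Here `np.allclose(beta_ridge, beta_ols / (1 + lam))` holds, which is exactly the shrinkage pattern shown in the plots.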
Application
Let us try with our second set of data
df0 = df
df0$y = as.numeric(df$y)-1
plot_lambda = function(lambda){
  m = apply(df0,2,mean)
  s = apply(df0,2,sd)
  for(j in 1:2) df0[,j] = (df0[,j]-m[j])/s[j]
  reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=0, lambda=lambda)
  u = seq(0,1,length=101)
  p = function(x,y){
    xt = (x-m[1])/s[1]
    yt = (y-m[2])/s[2]
    predict(reg,newx=cbind(x1=xt,x2=yt),type='response')}
  v = outer(u,u,p)
  image(u,u,v,col=clr10,breaks=(0:10)/10)
  points(df$x1,df$x2,pch=c(1,19)[1+z],cex=1.5)
  contour(u,u,v,levels=.5,add=TRUE)
}
We can try various values of \lambda
reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=0)
par(mfrow=c(1,2))
plot(reg,xvar="lambda",col=c("blue","red"),lwd=2)
abline(v=log(.2))
plot_lambda(.2)

or
reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=0)
par(mfrow=c(1,2))
plot(reg,xvar="lambda",col=c("blue","red"),lwd=2)
abline(v=log(1.2))
plot_lambda(1.2)

The next step is to change the norm of the penalty, to the \ell_1 norm (to be continued…)
Let's assume that we are simply trying to maximize the expected value of our roll. (As discussed, this might not be a realistic representation of actual gameplay, but we can work it out anyway.) Then the rolls on any of the dice don't affect our decisions to reroll for other dice - that is, any one die's rolls and strategy are independent of the other dice, and we can work out the expected value as the expected value for one die times the number of dice.
Now for the expected value of an N-sided die with R rerolls, we can establish a recurrence relation.
Starting with 0 rerolls, this is the normal expected value for a single die:
$$ E = \dfrac{1+N}{2} $$
Given \$E\$ as the expected value for R rerolls, we calculate \$E'\$ for R+1 rerolls:
$$ E' = P(reroll) \cdot E + P(keep) \cdot (average keep) $$
Now the decision to reroll is based on whether our expected value with R rerolls is higher than our current roll. Let \$\lfloor{E}\rfloor\$ be the floor of \$E\$ (i.e. \$E\$ rounded down to the nearest whole number - the highest number we will want to reroll), then:
$$P(reroll) = \dfrac{\lfloor{E}\rfloor}{N} \\P(keep) = \dfrac{N-\lfloor{E}\rfloor}{N} \\\text{Average keep} = \dfrac{\lfloor{E}\rfloor+1 + N}{2}$$
This gives us a formula for R+1 rerolls:
$$\begin{align}E' &= \dfrac{E\lfloor{E}\rfloor}{N} + \dfrac{(N-\lfloor{E}\rfloor)(N+\lfloor{E}\rfloor+1)}{2N} \\ &= \dfrac{2E\lfloor{E}\rfloor + (N-\lfloor{E}\rfloor)(N+\lfloor{E}\rfloor+1)}{2N}\end{align}$$
With an \$\lfloor{E}\rfloor\$ in our final formula, we can't get a nice closed form for any number of rerolls, but we can just calculate the values from the recurrence relation. For example, for your example with \$N=10\$:
$$\begin{align}E[\text{0 rerolls}] &= 5.5 & (\lfloor{E}\rfloor=5) \\E[\text{1 reroll}] &= \dfrac{55 + 80}{20} = 6.75 & (\lfloor{E}\rfloor=6) \\E[\text{2 rerolls}] &= \dfrac{81 + 68}{20} = 7.45\end{align}$$
For 3D10 with 2 rerolls our expected value is \$(3 \times 7.45) = 22.35\$. Our strategy is to reroll all values 1-6 on our first roll, reroll all values 1-5 on our second roll. |
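The recurrence above is easy to evaluate programmatically. A minimal Python sketch (the function name is mine, not from the answer) that reproduces the values computed by hand:

```python
import math

def expected_value(n, rerolls):
    """Expected value of one n-sided die with the given number of optional
    rerolls, keeping a roll iff it beats the expected value of rerolling."""
    e = (1 + n) / 2  # base case: 0 rerolls, plain average
    for _ in range(rerolls):
        k = math.floor(e)  # highest face we still want to reroll
        e = (2 * e * k + (n - k) * (n + k + 1)) / (2 * n)
    return e

# For N = 10: 5.5 with no rerolls, 6.75 with one, 7.45 with two
print(expected_value(10, 2))  # → 7.45
```

Multiplying by the number of dice gives the 3D10 figure of 22.35 quoted above.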
We have the Laplacian matrix $G=A^TA$ which has a set of eigenvalues $\lambda_0\leq\lambda_1\leq\ldots\leq \lambda_n$ for $G\in\mathbb{R}^{n\times n}$ where we always know $\lambda_0 = 0$. Thus the Laplacian matrix is always symmetric positive semi-definite. Because the matrix $G$ is not symmetric positive definite we have to be careful when we discuss the Cholesky decomposition. The Cholesky decomposition exists for a positive semi-definite matrix but it is no longer unique. For example, the positive semi-definite matrix $$ A=\left[\!\!\begin{array}{cc} 0 & 0 \\ 0 & 1\end{array}\!\!\right],$$has infinitely many Cholesky decompositions$$ A=\left[\!\begin{array}{cc} 0 & 0 \\ 0 & 1\end{array}\!\right]= \left[\!\begin{array}{cc} 0 & 0 \\ \sin\theta & \cos\theta\end{array}\!\right] \left[\!\begin{array}{cc} 0 & \sin\theta \\ 0 & \cos\theta\end{array}\!\right]=LL^T.$$
However, because we have a matrix $G$ that is known to be a Laplacian matrix we can actually avoid the more sophisticated linear algebra tools like Cholesky decompositions or finding the square root of the positive semi-definite matrix $G$ such that we recover $A$. For example, if we have the Laplace matrix $G\in\mathbb{R}^{4\times 4}$,$$G=\left[\!\begin{array}{cccc} 3 & -1 & -1 & -1\\-1 & 1 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ -1 & 0 & 0 & 1 \\\end{array}\!\right]$$we can use graph theory to recover the desired matrix $A$. We do so by formulating the oriented incidence matrix. If we define the number of edges in the graph to be $m$ and the number of vertices to be $n$ then the oriented incidence matrix $A$ will be an $m\times n$ matrix given by $$A_{ev} = \left\{\begin{array}{lc} 1 & \textrm{if }e=(v,w)\textrm{ and }v<w \\ -1 & \textrm{if }e=(v,w)\textrm{ and }v>w \\ 0 & \textrm{otherwise},\end{array}\right.$$where $e=(v,w)$ denotes the edge which connects the vertices $v$ and $w$. If we take a graph for $G$ with four vertices and three edges,then we have the oriented incidence matrix $$A = \left[\!\begin{array}{cccc} 1 & -1 & 0 & 0\\ 1 & 0 & -1 & 0 \\ 1 & 0 & 0 & -1 \\\end{array}\!\right],$$and we can find that $G=A^TA$. For the matrix problem you describe you would construct a graph for $G$ with the same number of edges as vertices, then you should have the ability to reconstruct the matrix $A$ when you are only given the Laplacian matrix $G$.
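As a quick numerical sanity check (not part of the original answer), the star-graph example can be verified with NumPy:

```python
import numpy as np

# Oriented incidence matrix of the star graph on vertices v1..v4
# with edges (v1,v2), (v1,v3), (v1,v4), as written above.
A = np.array([[1, -1,  0,  0],
              [1,  0, -1,  0],
              [1,  0,  0, -1]])

# The Laplacian from the example.
G = np.array([[ 3, -1, -1, -1],
              [-1,  1,  0,  0],
              [-1,  0,  1,  0],
              [-1,  0,  0,  1]])

print((A.T @ A == G).all())  # → True
```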
Update:
If we define the diagonal matrix of vertex degrees of a graph as $N$ and the adjacency matrix of the graph as $M$, then the Laplacian matrix $G$ of the graph is defined by $G=N-M$. For example, in the following graph
we find the Laplacian matrix is$$G=\left[\!\begin{array}{cccc} 3 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\\end{array}\!\right] - \left[\!\begin{array}{cccc} 0 & 1 & 1 & 1\\ 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\\end{array}\!\right].$$Now we relate the $G$ to the oriented incidence matrix $A$ using the edges and nodes given in the pictured graph. Again we find the entries of $A$ from $$A_{ev} = \left\{\begin{array}{lc} 1 & \textrm{if }e=(v,w)\textrm{ and }v<w \\ -1 & \textrm{if }e=(v,w)\textrm{ and }v>w \\ 0 & \textrm{otherwise},\end{array}\right..$$ For example, edge $e_1$ connects the nodes $v_1$ and $v_2$. So to determine $A_{e_1,v_1}$ we note that the index of $v_1$ is less than the index of $v_2$ (or we have the case $v<w$ in the definition of $A_{ev}$). Thus, $A_{e_1,v_1} = 1$. Similarly by the way of comparing indices we can find $A_{e_1,v_2} = -1$. We give $A$ below in a more explicit way referencing the edges and vertices pictured.$$A = \begin{array}{c|cccc} & v_1 & v_2 & v_3 & v_4 \\ \hline e_1 & 1 & -1 & 0 & 0\\ e_2 & 1 & 0 & -1 & 0 \\ e_3 & 1 & 0 & 0 & -1 \\\end{array}.$$
Next, we generalize the concept of the Laplacian matrix to a weighted undirected graph. Let $Gr$ be an undirected finite graph defined by $V$ and $E$ its vertex and edge set respectively. To consider a weighted graph we define a weight function $$w: V\times V\rightarrow \mathbb{R}^+,$$which assigns a non-negative real weight to each edge of the graph. We will denote the weight attached to edge connecting vertices $u$ and $v$ by $w(u,v)$. In the case of a weighted graph we define the degree of each vertex $u\in V$ as the sum of all the weighted edges connected to $u$, i.e.,$$d_u = \sum_{v\in V}w(u,v).$$From the given graph $Gr$ we can define the weighted adjacency matrix $Ad(Gr)$ as an $n\times n$ with rows and columns indexed by $V$ whose entries are given by $w(u,v)$. Let $D(Gr)$ be the diagonal matrix indexed by $V$ with the vertex degrees on the diagonal then we can find the weighted Laplacian matrix $G$ just as before$$G = D(Gr) - Ad(Gr).$$
In the problem from the original post we know $$G=\left[\!\begin{array}{ccc} \tfrac{3}{4} & -\tfrac{1}{3} & -\tfrac{5}{12} \\-\tfrac{1}{3} & \tfrac{2}{3} & -\tfrac{1}{3} \\ -\tfrac{5}{12} & -\tfrac{1}{3} & \tfrac{3}{4} \\\end{array}\!\right].$$ From the comments we know we seek a factorization for $G$ where $G=A^TA$ and specify $A$ is of the form $A=I-1_nw^T$ where $w^T1_n=1$. For full generality assume the matrix $A$ has no zero entries. Thus if we formulate the weighted oriented incidence matrix to find $A$ we want the weighted adjacency matrix $Ad(Gr)$ to have no zero entries as well, i.e., the weighted graph will have loops. Actually calculating the weighted oriented incidence matrix seems difficult (although it may simply be a result of my inexperience with weighted graphs). However, we can find a factorization of the form we seek in an ad hoc way if we assume we know something about the loops in our graph. We split the weighted Laplacian matrix $G$ into the degree and adjacency matrices as follows$$G=\left[\!\begin{array}{ccc} \tfrac{5}{4} & 0 & 0 \\0 & 1 & 0 \\ 0 & 0 & \tfrac{11}{12} \\\end{array}\!\right]-\left[\!\begin{array}{ccc} \tfrac{1}{2} & \tfrac{1}{3} & \tfrac{5}{12} \\\tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{3} \\ \tfrac{5}{12} & \tfrac{1}{3} & \tfrac{1}{6} \\\end{array}\!\right] = D(Gr)-Ad(Gr).$$
Thus we know the loops on $v_1$, $v_2$ and $v_3$ have weights $1/2$, $1/3$, and $1/6$ respectively. If we put the weights on the loops into a vector $w$ = $[\frac{1}{2}$ $\frac{1}{3}$ $\frac{1}{6}]^T$ then we can recover the matrix $A$ we want in the desired form$$A = I-1_nw^T = \left[\!\begin{array}{ccc} \tfrac{1}{2} & -\tfrac{1}{3} & -\tfrac{1}{6} \\-\tfrac{1}{2} & \tfrac{2}{3} & -\tfrac{1}{6} \\ -\tfrac{1}{2} & -\tfrac{1}{3} & \tfrac{5}{6} \\\end{array}\!\right].$$
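This reconstruction can also be checked numerically (a sketch of mine, not part of the original answer): build $A = I - 1_nw^T$ from the loop weights and confirm that $A^TA$ reproduces $G$.

```python
import numpy as np

# Loop weights read off the diagonal of the weighted adjacency matrix.
w = np.array([1/2, 1/3, 1/6])
A = np.eye(3) - np.outer(np.ones(3), w)  # A = I - 1_n w^T

# The weighted Laplacian from the original post.
G = np.array([[ 3/4,  -1/3, -5/12],
              [-1/3,   2/3, -1/3 ],
              [-5/12, -1/3,  3/4 ]])

print(np.allclose(A.T @ A, G))  # → True
print(np.isclose(w.sum(), 1))   # w^T 1_n = 1, as required → True
```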
It appears if we have knowledge of the loops in our weighted graph we can find the matrix $A$ in the desired form. Again, this was done in an ad hoc manner (as I am not a graph theorist) so it may be a hack that worked just for this simple problem. |
In the solution to Making more easy the itemized of item with tabulation system, I am counting up the number of spaces in order to determine the type of leading character to insert. However, if I replace the leading spaces with a tab character, the solution does not work.
If I could detect the tab character in the literate, then I could have \ProcessSpace increment the counter NumOfContigousSpaces appropriately, but I don't know how to test for it.
I thought adding tabsize=4, keepspaces=true would do the job, but this is not quite enough. So I attempted to use lstag@tabulator from How to automatically skip leading white spaces in listings, but was not able to get that to work.
The code below has 4 leading spaces before the W in the first line and a tab as the leading character before the W in the second line. This produces no bullet for the line with a tab:

The correct output can be seen by using 4 spaces before the W in both lines:

Note: It appears that posting a code snippet here replaces a tab with 4 spaces. So to use the MWE below you will need to replace the four leading spaces before the Wxxx with a tab character.
Code:
\documentclass{article}
\usepackage{pgf}
\usepackage{xstring}
\usepackage{listings}

\newcounter{NumOfContigousSpaces}%
\setcounter{NumOfContigousSpaces}{0}%
\newcommand{\Width}{1}%

\newcommand*{\AddApproriateBulletIfFirstChar}[1]{%
  \pgfmathtruncatemacro{\BulletType}{\arabic{NumOfContigousSpaces}/4}%
  \IfEqCase{\BulletType}{%
    {0}{\gdef\Width{1}}
    {1}{\gdef\Width{3}$\bullet$ }
    {2}{\gdef\Width{3}$\circ$ }
    {3}{\gdef\Width{3}$\times$ }
    {4}{\gdef\Width{3}$\star$ }
    {5}{\gdef\Width{3}$-$ }
  }[\gdef\Width{3}$\bullet$ ]%
  #1%
  \setcounter{NumOfContigousSpaces}{0}%
}%

\newcommand*{\ProcessSpace}{%
  \addtocounter{NumOfContigousSpaces}{1}%
  \space%
}%

\newcommand*{\ProcessTab}{%
  \addtocounter{NumOfContigousSpaces}{4}%
  \space\space\space\space%
}%

\makeatletter
\lstdefinestyle{MyItemize}{%
  basicstyle=\ttfamily,
  columns=flexible,
  tabsize=4,
  keepspaces=true,
  literate=%
    {\ }{{{\ProcessSpace}}}1% Count contigous spaces
    {lstag@tabulator}{{{\ProcessTab}}}4% ??? how detect a tab?
    %
    %--- much code removed here (See https://tex.stackexchange.com/questions/57939/making-more-easy-the-itemized-of-item-with-tabulation-system for full code)
    {W}{{{\AddApproriateBulletIfFirstChar{W}}}}\Width
    {x}{{{\AddApproriateBulletIfFirstChar{x}}}}\Width
}%
\makeatother

\begin{document}
\begin{lstlisting}[style=MyItemize]
    Wxxx xxx
    Wxxx xx xx
\end{lstlisting}
\end{document}
Introduction to Quasisymmetric Functions
In this document we briefly explain the quasisymmetric function bases and related functionality in Sage. We assume the reader is familiar with the package SymmetricFunctions.
Quasisymmetric functions, denoted \(QSym\), form a subring of the power series ring in countably many variables. \(QSym\) contains the symmetric functions. These functions first arose in the theory of \(P\)-partitions. The initial ideas in this field are attributed to MacMahon, Knuth, Kreweras, Glânffrwd Thomas, and Stanley. In 1984, Gessel formalized the study of quasisymmetric functions and introduced the basis of fundamental quasisymmetric functions [Ges]. In 1995, Gelfand, Krob, Lascoux, Leclerc, Retakh, and Thibon showed that the ring of quasisymmetric functions is Hopf dual to the noncommutative symmetric functions [NCSF]. Many results have built on these.
One advantage of working in \(QSym\) is that many interesting families of symmetric functions have explicit expansions in fundamental quasisymmetric functions such as Schur functions [Ges], Macdonald polynomials [HHL05], and plethysm of Schur functions [LW12].
For more background see Wikipedia article Quasisymmetric_function.
To begin, initialize the ring. Below we chose to use the rational numbers \(\QQ\). Other options include the integers \(\ZZ\) and \(\CC\):
sage: QSym = QuasiSymmetricFunctions(QQ)
sage: QSym
Quasisymmetric functions over the Rational Field
sage: QSym = QuasiSymmetricFunctions(CC); QSym
Quasisymmetric functions over the Complex Field with 53 bits of precision
sage: QSym = QuasiSymmetricFunctions(ZZ); QSym
Quasisymmetric functions over the Integer Ring
All bases of \(QSym\) are indexed by compositions e.g. \([3,1,1,4]\). The convention is to use capital letters for bases of \(QSym\) and lowercase letters for bases of the symmetric functions \(Sym\). Next set up names for the known bases by running inject_shorthands(). As with symmetric functions, you do not need to run this command and you could assign these bases other names.
sage: QSym = QuasiSymmetricFunctions(QQ)
sage: QSym.inject_shorthands()
Defining M as shorthand for Quasisymmetric functions over the Rational Field in the Monomial basis
Defining F as shorthand for Quasisymmetric functions over the Rational Field in the Fundamental basis
Defining E as shorthand for Quasisymmetric functions over the Rational Field in the Essential basis
Defining dI as shorthand for Quasisymmetric functions over the Rational Field in the dualImmaculate basis
Defining QS as shorthand for Quasisymmetric functions over the Rational Field in the Quasisymmetric Schur basis
Defining YQS as shorthand for Quasisymmetric functions over the Rational Field in the Young Quasisymmetric Schur basis
Defining phi as shorthand for Quasisymmetric functions over the Rational Field in the phi basis
Defining psi as shorthand for Quasisymmetric functions over the Rational Field in the psi basis
Now one can start constructing quasisymmetric functions.
Note
It is best to use variables other than M and F.
sage: x = M[2,1] + M[1,2]
sage: x
M[1, 2] + M[2, 1]
sage: y = 3*M[1,2] + M[3]^2; y
3*M[1, 2] + 2*M[3, 3] + M[6]
sage: F[3,1,3] + 7*F[2,1]
7*F[2, 1] + F[3, 1, 3]
sage: 3*F[2,1,2] + F[3]^2
F[1, 2, 2, 1] + F[1, 2, 3] + 2*F[1, 3, 2] + F[1, 4, 1] + F[1, 5] + 3*F[2, 1, 2] + 2*F[2, 2, 2] + 2*F[2, 3, 1] + 2*F[2, 4] + F[3, 2, 1] + 3*F[3, 3] + 2*F[4, 2] + F[5, 1] + F[6]
To convert from one basis to another is easy:
sage: z = M[1,2,1]
sage: z
M[1, 2, 1]
sage: F(z)
-F[1, 1, 1, 1] + F[1, 2, 1]
sage: M(F(z))
M[1, 2, 1]
To expand in variables, one can specify a finite size alphabet \(x_1, x_2, \ldots, x_m\):
sage: y = M[1,2,1]
sage: y.expand(4)
x0*x1^2*x2 + x0*x1^2*x3 + x0*x2^2*x3 + x1*x2^2*x3
The usual methods on free modules are available such as coefficients, degrees, and the support:
sage: z = 3*M[1,2] + M[3]^2; z
3*M[1, 2] + 2*M[3, 3] + M[6]
sage: z.coefficient([1,2])
3
sage: z.degree()
6
sage: sorted(z.coefficients())
[1, 2, 3]
sage: sorted(z.monomials(), key=lambda x: x.support())
[M[1, 2], M[3, 3], M[6]]
sage: z.monomial_coefficients()
{[1, 2]: 3, [3, 3]: 2, [6]: 1}
As with the symmetric functions package, the quasisymmetric function 1 has several instantiations. However, the most obvious way to write 1 leads to an error (this is due to the semantics of Python):
sage: M[[]]
M[]
sage: M.one()
M[]
sage: M(1)
M[]
sage: M[[]] == 1
True
sage: M[]
Traceback (most recent call last):
...
SyntaxError: invalid syntax
Working with symmetric functions
The quasisymmetric functions are a ring which contains the symmetric functions as a subring. The Monomial quasisymmetric functions are related to the monomial symmetric functions by \(m_\lambda = \sum_{\mathrm{sort}(c) = \lambda} M_c\), where \(\mathrm{sort}(c)\) means the partition obtained by sorting the composition \(c\):
sage: SymmetricFunctions(QQ).inject_shorthands()
Defining e as shorthand for Symmetric Functions over Rational Field in the elementary basis
Defining f as shorthand for Symmetric Functions over Rational Field in the forgotten basis
Defining h as shorthand for Symmetric Functions over Rational Field in the homogeneous basis
Defining m as shorthand for Symmetric Functions over Rational Field in the monomial basis
Defining p as shorthand for Symmetric Functions over Rational Field in the powersum basis
Defining s as shorthand for Symmetric Functions over Rational Field in the Schur basis
sage: m[2,1]
m[2, 1]
sage: M(m[2,1])
M[1, 2] + M[2, 1]
sage: M(s[2,1])
2*M[1, 1, 1] + M[1, 2] + M[2, 1]
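The relation \(m_\lambda = \sum_{\mathrm{sort}(c) = \lambda} M_c\) can also be checked in plain Python, outside Sage, by expanding each \(M_c\) into the exponent vectors of its monomials (here `monomial_qsym` is a made-up helper, not part of Sage):

```python
from itertools import combinations

def monomial_qsym(comp, m):
    """Exponent vectors of M_comp in m variables: each term is
    x_{i1}^{c1} ... x_{ik}^{ck} over increasing indices i1 < ... < ik,
    always with coefficient 1."""
    terms = set()
    for idx in combinations(range(m), len(comp)):
        expo = [0] * m
        for i, c in zip(idx, comp):
            expo[i] = c
        terms.add(tuple(expo))
    return terms

# m_{(2,1)} = M_{(1,2)} + M_{(2,1)}: together the two terms cover
# every monomial x_i^2 x_j with i != j (in 3 variables, 6 monomials).
union = monomial_qsym((1, 2), 3) | monomial_qsym((2, 1), 3)
print(len(union))  # → 6
```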
There are methods to test if an expression \(f\) in the quasisymmetric functions is a symmetric function:
sage: f = M[1,1,2] + M[1,2,1]
sage: f.is_symmetric()
False
sage: f = M[3,1] + M[1,3]
sage: f.is_symmetric()
True
If \(f\) is symmetric, there are methods to convert \(f\) to an expression in the symmetric functions:
sage: f.to_symmetric_function()m[3, 1]
The expansion of the Schur function in terms of the Fundamental quasisymmetric functions is due to [Ges]. There is one term in the expansion for each standard tableau of shape equal to the partition indexing the Schur function.
sage: f = F[3,2] + F[2,2,1] + F[2,3] + F[1,3,1] + F[1,2,2]
sage: f.is_symmetric()
True
sage: f.to_symmetric_function()
5*m[1, 1, 1, 1, 1] + 3*m[2, 1, 1, 1] + 2*m[2, 2, 1] + m[3, 1, 1] + m[3, 2]
sage: s(f.to_symmetric_function())
s[3, 2]
It is also possible to convert any symmetric function to the quasisymmetric function expansion in any known basis. The converse is not true:
sage: M( m[3,1,1] )
M[1, 1, 3] + M[1, 3, 1] + M[3, 1, 1]
sage: F( s[2,2,1] )
F[1, 1, 2, 1] + F[1, 2, 1, 1] + F[1, 2, 2] + F[2, 1, 2] + F[2, 2, 1]
sage: s(M[2,1])
Traceback (most recent call last):
...
TypeError: do not know how to make x (= M[2, 1]) an element of self
It is possible to experiment with the quasisymmetric function expansion of other bases, but it is important that the base ring be the same for both algebras.
sage: R = QQ['t']
sage: Qp = SymmetricFunctions(R).hall_littlewood().Qp()
sage: QSymt = QuasiSymmetricFunctions(R)
sage: Ft = QSymt.F()
sage: Ft( Qp[2,2] )
F[1, 2, 1] + t*F[1, 3] + (t+1)*F[2, 2] + t*F[3, 1] + t^2*F[4]
sage: K = QQ['q','t'].fraction_field()
sage: Ht = SymmetricFunctions(K).macdonald().Ht()
sage: Fqt = QuasiSymmetricFunctions(Ht.base_ring()).F()
sage: Fqt(Ht[2,1])
q*t*F[1, 1, 1] + (q+t)*F[1, 2] + (q+t)*F[2, 1] + F[3]
The following will raise an error because the base ring of F is not equal to the base ring of Ht:
sage: F(Ht[2,1])
Traceback (most recent call last):
...
TypeError: do not know how to make x (= McdHt[2, 1]) an element of self (=Quasisymmetric functions over the Rational Field in the Fundamental basis)
QSym is a Hopf algebra
The product on \(QSym\) is commutative and is inherited from the product by the realization within the polynomial ring:
sage: M[3]*M[1,1] == M[1,1]*M[3]
True
sage: M[3]*M[1,1]
M[1, 1, 3] + M[1, 3, 1] + M[1, 4] + M[3, 1, 1] + M[4, 1]
sage: F[3]*F[1,1]
F[1, 1, 3] + F[1, 2, 2] + F[1, 3, 1] + F[1, 4] + F[2, 1, 2] + F[2, 2, 1] + F[2, 3] + F[3, 1, 1] + F[3, 2] + F[4, 1]
sage: M[3]*F[2]
M[1, 1, 3] + M[1, 3, 1] + M[1, 4] + M[2, 3] + M[3, 1, 1] + M[3, 2] + M[4, 1] + M[5]
sage: F[2]*M[3]
F[1, 1, 1, 2] - F[1, 2, 2] + F[2, 1, 1, 1] - F[2, 1, 2] - F[2, 2, 1] + F[5]
There is a coproduct on this ring as well, which in the Monomial basis acts by cutting the composition into a left half and a right half. The co-product is non-co-commutative:
sage: M[1,3,1].coproduct()
M[] # M[1, 3, 1] + M[1] # M[3, 1] + M[1, 3] # M[1] + M[1, 3, 1] # M[]
sage: F[1,3,1].coproduct()
F[] # F[1, 3, 1] + F[1] # F[3, 1] + F[1, 1] # F[2, 1] + F[1, 2] # F[1, 1] + F[1, 3] # F[1] + F[1, 3, 1] # F[]
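On the Monomial basis, the cutting rule is plain deconcatenation, which is simple enough to sketch in ordinary Python (not Sage; `monomial_coproduct` is a made-up name). Each cut position yields one tensor term, matching the four terms of M[1,3,1].coproduct() above:

```python
def monomial_coproduct(comp):
    """Coproduct of M_comp on the Monomial basis: deconcatenation,
    i.e. one (left, right) pair per cut position of the composition."""
    comp = tuple(comp)
    return [(comp[:i], comp[i:]) for i in range(len(comp) + 1)]

print(monomial_coproduct((1, 3, 1)))
# → [((), (1, 3, 1)), ((1,), (3, 1)), ((1, 3), (1,)), ((1, 3, 1), ())]
```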
The Duality Pairing with Non-Commutative Symmetric Functions
These two operations endow \(QSym\) with the structure of a Hopf algebra. It is the dual Hopf algebra of the non-commutative symmetric functions \(NCSF\). Under this duality, the Monomial basis of \(QSym\) is dual to the Complete basis of \(NCSF\), and the Fundamental basis of \(QSym\) is dual to the Ribbon basis of \(NCSF\) (see [MR]):
sage: S = M.dual(); S
Non-Commutative Symmetric Functions over the Rational Field in the Complete basis
sage: M[1,3,1].duality_pairing( S[1,3,1] )
1
sage: M.duality_pairing_matrix( S, degree=4 )
[1 0 0 0 0 0 0 0]
[0 1 0 0 0 0 0 0]
[0 0 1 0 0 0 0 0]
[0 0 0 1 0 0 0 0]
[0 0 0 0 1 0 0 0]
[0 0 0 0 0 1 0 0]
[0 0 0 0 0 0 1 0]
[0 0 0 0 0 0 0 1]
sage: F.duality_pairing_matrix( S, degree=4 )
[1 0 0 0 0 0 0 0]
[1 1 0 0 0 0 0 0]
[1 0 1 0 0 0 0 0]
[1 1 1 1 0 0 0 0]
[1 0 0 0 1 0 0 0]
[1 1 0 0 1 1 0 0]
[1 0 1 0 1 0 1 0]
[1 1 1 1 1 1 1 1]
sage: NCSF = M.realization_of().dual()
sage: R = NCSF.Ribbon()
sage: F.duality_pairing_matrix( R, degree=4 )
[1 0 0 0 0 0 0 0]
[0 1 0 0 0 0 0 0]
[0 0 1 0 0 0 0 0]
[0 0 0 1 0 0 0 0]
[0 0 0 0 1 0 0 0]
[0 0 0 0 0 1 0 0]
[0 0 0 0 0 0 1 0]
[0 0 0 0 0 0 0 1]
sage: M.duality_pairing_matrix( R, degree=4 )
[ 1  0  0  0  0  0  0  0]
[-1  1  0  0  0  0  0  0]
[-1  0  1  0  0  0  0  0]
[ 1 -1 -1  1  0  0  0  0]
[-1  0  0  0  1  0  0  0]
[ 1 -1  0  0 -1  1  0  0]
[ 1  0 -1  0 -1  0  1  0]
[-1  1  1 -1  1 -1 -1  1]
Let \(H\) and \(G\) be elements of \(QSym\) and \(h\) an element of \(NCSF\). Then if we represent the duality pairing with the mathematical notation \([ \cdot, \cdot ]\), we have \([H\cdot G, h] = [H \otimes G, \Delta(h)]\).
For example, the coefficient of M[2,1,4,1] in M[1,3]*M[2,1,1] may be computed with the duality pairing:
sage: I, J = Composition([1,3]), Composition([2,1,1])
sage: (M[I]*M[J]).duality_pairing(S[2,1,4,1])
1
And the coefficient of S[1,3] # S[2,1,1] in S[2,1,4,1].coproduct() is equal to this result:
sage: S[2,1,4,1].coproduct()
S[] # S[2, 1, 4, 1] + ... + S[1, 3] # S[2, 1, 1] + ... + S[4, 1] # S[2, 1]
The duality pairing on the tensor space is another way of getting this coefficient, but currently the method duality_pairing() is not defined on the tensor squared space. However, we can extend this functionality by applying a linear morphism to the terms in the coproduct, as follows:
sage: X = S[2,1,4,1].coproduct()
sage: def linear_morphism(x, y):
....:     return x.duality_pairing(M[1,3]) * y.duality_pairing(M[2,1,1])
sage: X.apply_multilinear_morphism(linear_morphism, codomain=ZZ)
1
Similarly, if \(H\) is an element of \(QSym\) and \(g\) and \(h\) are elements of \(NCSF\), then \([H, g\cdot h] = [\Delta(H), g \otimes h]\).
For example, the coefficient of R[2,3,1] in R[2,1]*R[2,1] is computed with the duality pairing by the following command:
sage: (R[2,1]*R[2,1]).duality_pairing(F[2,3,1])
1
sage: R[2,1]*R[2,1]
R[2, 1, 2, 1] + R[2, 3, 1]
This coefficient should then be equal to the coefficient of F[2,1] # F[2,1] in F[2,3,1].coproduct():
sage: F[2,3,1].coproduct()
F[] # F[2, 3, 1] + ... + F[2, 1] # F[2, 1] + ... + F[2, 3, 1] # F[]
This can also be computed by the duality pairing on the tensor space, as above:
sage: X = F[2,3,1].coproduct()
sage: def linear_morphism(x, y):
....:     return x.duality_pairing(R[2,1]) * y.duality_pairing(R[2,1])
sage: X.apply_multilinear_morphism(linear_morphism, codomain=ZZ)
1
The Operation Adjoint to Multiplication by a Non-Commutative Symmetric Function
Let \(g \in NCSF\) and consider the linear endomorphism of \(NCSF\) defined by left (respectively, right) multiplication by \(g\). Since there is a duality between \(QSym\) and \(NCSF\), this linear transformation induces an operator \(g^\perp\) on \(QSym\) satisfying
for any non-commutative symmetric function \(h\).
This is implemented by the method skew_by(). Explicitly, if H is a quasisymmetric function and g a non-commutative symmetric function, then H.skew_by(g) and H.skew_by(g, side='right') are expressions that satisfy, for any non-commutative symmetric function h, the following identities:
H.skew_by(g).duality_pairing(h) == H.duality_pairing(g*h)
H.skew_by(g, side='right').duality_pairing(h) == H.duality_pairing(h*g)
For example, M[J].skew_by(S[I]) is \(0\) unless the composition \(J\) begins with \(I\), and M[J].skew_by(S[I], side='right') is \(0\) unless the composition \(J\) ends with \(I\):
sage: M[3,2,2].skew_by(S[3])
M[2, 2]
sage: M[3,2,2].skew_by(S[2])
0
sage: M[3,2,2].coproduct().apply_multilinear_morphism( lambda x,y: x.duality_pairing(S[3])*y )
M[2, 2]
sage: M[3,2,2].skew_by(S[3], side='right')
0
sage: M[3,2,2].skew_by(S[2], side='right')
M[3, 2]
The antipode
The antipode sends the Fundamental basis element indexed by the composition \(I\) to \((-1)\) raised to the size of \(I\), times the Fundamental basis element indexed by the conjugate composition of \(I\):
sage: F[3,2,2].antipode()
-F[1, 2, 2, 1, 1]
sage: Composition([3,2,2]).conjugate()
[1, 2, 2, 1, 1]
sage: M[3,2,2].antipode()
-M[2, 2, 3] - M[2, 5] - M[4, 3] - M[7]
We demonstrate here the defining relation of the antipode:
sage: X = F[3,2,2].coproduct()
sage: X.apply_multilinear_morphism(lambda x,y: x*y.antipode())
0
sage: X.apply_multilinear_morphism(lambda x,y: x.antipode()*y)
0
I want to optimize the vertex positions in a mesh with a given cost function on the associated triangles. The paper gives a cost function that evaluates to a real number as a sum over the triangles of the mesh, which connect the vertices into a valid simplicial complex. The authors suggest using an L-BFGS solver, and I want to use PETSc for the calculations.
The solver interface for L-BFGS (and some other algorithms) in PETSc takes a vector with the current values and a pointer to an output vector for the residual values, with the same cardinality.
How do I design the cost function and the residual vector so as to evaluate the cost of the vertex positions based on the resulting triangles?
I filled the f vector like this: $[v_1^x, v_1^y, v_1^z, \dots, v_N^x, v_N^y, v_N^z]$.
What do I put in the returned vector to get a good solution? I tried:

all entries the same: $\text{cost}(v) \equiv \text{cost} \in \mathbb{R}\ \forall v$ (via VecSet(r, cost)), and
$\text{cost}(v_i^x) = \text{cost}(v_i^y) = \text{cost}(v_i^z) = \sum_{t \in T,\, v_i\in t} \text{cost}(t)\quad \forall i=1\dots N$, with $T$ the set of all triangles, writing $v \in t$ when $v$ is a vertex of the triangle $t$.
Neither yields a good solution. Further, I guess the residual may need to differ in x, y, z to give useful gradients for moving the vertices.
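For concreteness, here is a hedged pure-Python sketch of what I believe the output vector should hold: not constant copies of the total cost, but the gradient of the total cost with respect to each coordinate (so the entries do differ in x, y, z). The helper names and the squared-area triangle cost are stand-ins of my own, not the paper's; central finite differences stand in for an analytic per-triangle derivative.

```python
def triangle_cost(a, b, c):
    # stand-in cost (assumption): squared area of the triangle abc
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    return 0.25 * sum(w * w for w in n)

def total_cost(flat, tris):
    # flat = [v1x, v1y, v1z, ..., vNx, vNy, vNz]; tris = list of index triples
    verts = [flat[3*i:3*i + 3] for i in range(len(flat) // 3)]
    return sum(triangle_cost(verts[i], verts[j], verts[k]) for i, j, k in tris)

def gradient(flat, tris, h=1e-6):
    # one entry per coordinate: dE/d(v_i^x), dE/d(v_i^y), dE/d(v_i^z), ...
    # (each entry implicitly sums over the triangles incident to that vertex,
    # because total_cost does)
    g = [0.0] * len(flat)
    for idx in range(len(flat)):
        plus, minus = list(flat), list(flat)
        plus[idx] += h
        minus[idx] -= h
        g[idx] = (total_cost(plus, tris) - total_cost(minus, tris)) / (2 * h)
    return g
```

This is just to illustrate the shape of the data handed back to the solver; a real implementation would use the paper's cost and its analytic derivative rather than finite differences.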
A) \[\frac{599}{311}\]
B) \[\frac{680}{216}\]
C) \[\frac{642}{133}\]
D) \[\frac{501}{301}\]
A) \[\frac{24}{5}\]
B) \[-\frac{24}{5}\]
C) \[25\]
D) \[-25\]
A) \[\frac{2}{5}\]
B) \[\frac{8}{17}\]
C) \[-\frac{2}{3}\]
D) \[-\frac{4}{3}\]
A) \[-\frac{13}{2}\]
B) \[-\frac{15}{2}\]
C) \[\frac{13}{2}\]
D) \[\frac{15}{2}\]
A) Every point on the number line represents a rational number.
B) The product of a rational number and its reciprocal is 0.
C) \[{{(17\times 12)}^{-1}}={{17}^{-1}}\times 12\]
D) Reciprocal of \[\frac{1}{a},a\ne 0\] is a.
A) \[\frac{a}{b}\]
B) \[\frac{b}{a}\]
C) \[-\frac{b}{a}\]
D) None of these
Question 7) Which of the following properties of rational numbers is illustrated below? \[\frac{7}{4}\times \left( \frac{-8}{3}+\frac{-13}{12} \right)=\frac{7}{4}\times \frac{-8}{3}+\frac{7}{4}\times \frac{-13}{12}\]
A) Commutativity of addition
B) Associativity of multiplication
C) Distributivity of multiplication over addition
D) Distributivity of addition over multiplication
A) \[\frac{8}{5}\]
B) \[-\frac{8}{5}\]
C) \[0\]
D) \[1\]
A) \[\frac{5}{7}<\frac{7}{9}<\frac{9}{11}<\frac{11}{13}\]
B) \[\frac{11}{13}<\frac{9}{11}<\frac{7}{9}<\frac{5}{7}\]
C) \[\frac{5}{7}<\frac{11}{13}<\frac{7}{9}<\frac{9}{11}\]
D) \[\frac{5}{7}<\frac{9}{11}<\frac{11}{13}<\frac{7}{9}\]
A) \[\frac{3}{8}\]
B) \[\frac{7}{16}\]
C) \[\frac{1}{4}\]
D) \[\frac{13}{32}\]
A) \[-\frac{177}{286}\]
B) \[-\frac{303}{40}\]
C) \[\frac{289}{492}\]
D) \[\frac{17}{24}\]
A) \[-\frac{6}{13}\]
B) \[\frac{1}{4}\]
C) \[\frac{2}{7}\]
D) \[-\frac{1}{8}\]
A) \[\frac{-2}{3}\]
B) \[\frac{-41}{10}\]
C) \[\frac{39}{5}\]
D) \[\frac{41}{10}\]
A) \[\frac{15}{2}\]
B) \[-\frac{13}{5}\]
C) \[\frac{17}{6}\]
D) \[-\frac{11}{6}\]
A) \[\frac{7}{13}\]
B) \[-\frac{11}{15}\]
C) \[-\frac{2}{11}\]
D) \[\frac{5}{8}\]
Question 16) There are 42 students in a class. Out of these, \[\frac{3}{4}\] of the boys and \[\frac{2}{3}\] of the girls come to school by bus. The total number of boys and girls of the same class who come to school by bus is 30. How many boys are there in the class?
A) 20
B) 24
C) 26
D) 16
A) Rs.7120
B) Rs.5250
C) Rs.5520
D) Rs.6562.50
Question 18) One fruit salad recipe requires \[\frac{1}{2}\] cup of sugar. Another recipe for the same fruit salad requires 2 tablespoons of sugar. If 1 tablespoon is equivalent to \[\frac{1}{16}\] cup, how much more sugar does the first recipe require?
A) \[\frac{\text{4}}{\text{5}}\text{cup}\]
B) \[\frac{6}{\text{5}}\text{cup}\]
C) \[\frac{3}{8}\text{cup}\]
D) \[\frac{\text{5}}{\text{8}}\text{cup}\]
Species of birds and their wingspans: Blue jay \[\frac{41}{100}m\]; Golden eagle \[2\frac{1}{2}m\]; Seagull \[1\frac{7}{10}m\]; Albatross \[3\frac{3}{5}m\].
A) \[\frac{209}{100}cm\]
B) \[\frac{209}{100}m\]
C) \[\frac{9}{100}m\]
D) \[\frac{215}{100}cm\]
Question 20) There are a few adults and children in a restaurant. If \[\frac{3}{8}\] of the people in the restaurant are adults and there are 90 more children than adults, then how many children are there in the restaurant?
A) 180
B) 200
C) 225
D) 230
A) The rational number 0 is the additive identity for rational numbers.
B) The additive inverse of the rational number a/b is - a/b and vice-versa.
C) Rational numbers are closed under the operations of subtraction, multiplication and division.
D) There are infinite rational numbers between any two rational numbers.
Column - I:
(P) Product of a rational number and its reciprocal is
(Q) If \[\frac{12}{30}\] and \[\frac{x}{5}\] are equivalent, then \[x=\]
(R) \[\left[ \frac{8}{21}\div \left( \frac{-32}{39}\div \frac{16}{13} \right) \right]\times \frac{7}{4}=\]
(S) Sum of a rational number and its additive inverse is
Column - II:
(i) -1
(ii) 0
(iii) 2
(iv) 1
A) (P)\[\to \](iv): (Q)\[\to \](iii); (R)\[\to \](i); (S)\[\to \](ii)
B) (P)\[\to \](i); (Q)\[\to \](iii): (R)\[\to \](iv); (S)\[\to \](ii)
C) (P)\[\to \](iv): (Q)\[\to \](iii); (R)\[\to \](ii): (S)\[\to \](i)
D) (P)\[\to \](i); (Q)\[\to \](iv); (R)\[\to \](iii); (S)\[\to \](ii)
(i) O is neither P nor Q . (ii) R has/have no reciprocal. (iii) The rational numbers S and T are equal to their reciprocal.
A)
P: Positive, Q: Negative, R: 1, S: 1/2, T: -1/2
B)
P: Integer, Q: Rational, R: 0, S: -1, T: 0
C)
P: Positive, Q: Negative, R: 0, S: 1, T: -1
D)
P: Natural, Q: Integer, R: -1, S: 1, T: -1
Statement 1: Rational numbers are closed under division. Statement 2: The value of \[\left( \frac{-7}{18}\times \frac{15}{-7} \right)-\left( 1\times \frac{1}{4} \right)+\left( \frac{1}{2}\times \frac{1}{4} \right)\] is \[\frac{17}{24}\].
A) Both Statement -1 and Statement - 2 are true.
B) Statement-1 is true and Statement - 2 is false.
C) Statement -1 is false but Statement - 2 is true.
D) Both Statement - 1 and Statement - 2 are false.
(i) The rational number \[\frac{-8}{-3}\] lies neither to the right nor to the left of zero on the number line. (ii) The rational numbers \[\frac{1}{2}\] and \[-\frac{5}{2}\] are on the opposite sides of 0 on the number line. (iii) 0 is the smallest rational number. (iv) For every rational number\[x,\text{ }x+1=x\].
A)
(i) F, (ii) T, (iii) T, (iv) F
B)
(i) T, (ii) F, (iii) F, (iv) F
C)
(i) F, (ii) T, (iii) F, (iv) F
D)
(i) T, (ii) T, (iii) F, (iv) F
What you have done is correct! Note that whenever you have inverse trigonometric expressions you can express your answer in more than one way! Your answer can be expressed in a different way (without the trigonometric and inverse trigonometric functions) as shown below.
We will prove that $$\sec \left( \arctan \left( \dfrac{x}4 \right) \right) = \sqrt{1 + \left(\dfrac{x}{4} \right)^2}$$
Hence, your answer $$\ln \left \lvert \dfrac{x}4 + \sec \left(\arctan \left( \dfrac{x} 4\right) \right) \right \rvert + c$$ can be rewritten as $$\ln \left \lvert \dfrac{x}4 + \sqrt{1+ \left(\dfrac{x}{4} \right)^2} \right \rvert + c$$Note that $$\theta = \arctan\left( \dfrac{x}4 \right) \implies \tan( \theta) = \dfrac{x}4 \implies \tan^2(\theta) = \dfrac{x^2}{16} \implies 1 + \tan^2(\theta) = 1+\dfrac{x^2}{16}$$Hence, we get that $$\sec^2(\theta) = 1+ \left(\dfrac{x}{4} \right)^2 \implies \sec (\theta) = \sqrt{1+ \left(\dfrac{x}{4} \right)^2} \implies \sec \left(\arctan\left( \dfrac{x}4 \right) \right) = \sqrt{1+ \left(\dfrac{x}{4} \right)^2}$$Hence, you can rewrite your answer as$$\ln \left \lvert \dfrac{x}4 + \sqrt{1+ \left(\dfrac{x}{4} \right)^2} \right \rvert + c$$
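The identity used above can also be spot-checked numerically; here is a quick standard-library-only sketch (the sample points are arbitrary choices of mine):

```python
import math

# Spot-check of sec(arctan(x/4)) = sqrt(1 + (x/4)^2) at a few points
def sec_arctan(x):
    return 1.0 / math.cos(math.atan(x / 4))

for x in (-7.0, -1.0, 0.0, 2.5, 10.0):
    assert abs(sec_arctan(x) - math.sqrt(1 + (x / 4) ** 2)) < 1e-12
```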
Also, you have been a bit sloppy with some notations in your argument.
For instance, when you substitute $x = 4 \tan (\theta)$,$$\dfrac{dx}{\sqrt{x^2+16}} \text{ should immediately become }\dfrac{4 \sec^2(\theta)}{\sqrt{16 \sec^2(\theta)}} d \theta$$
Also, you need to carry the $d \theta$ throughout the answer under the integral.
Writing just $\displaystyle \int\sec(\theta)$ or $\displaystyle \int\dfrac{4 \sec^2(\theta)}{\sqrt{16 \sec^2(\theta)}}$ without the $d \theta$ is notationally incorrect.
Anyway, I am happy that you are slowly getting a hang of these! |
Answer
$\frac{2i}{-1 - i\sqrt{3}}$ = $\frac{-\sqrt{3}}{2} - \frac{i}{2}$
Work Step by Step
$\frac{2i}{-1 - i\sqrt{3}}$ = $\frac{2cis90^\circ}{2cis240^\circ}$ = $cis(90-240)^\circ$ = $cis(-150)^\circ$ = $\frac{-\sqrt{3}}{2} - \frac{i}{2}$
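For what it's worth, the arithmetic can be spot-checked with Python's complex numbers (a sketch; cmath.phase returns the argument in radians):

```python
import cmath
import math

# Direct check of the quotient, plus the cis bookkeeping:
# moduli 2/2 = 1, argument 90 - 240 = -150 degrees
z = 2j / (-1 - 1j * math.sqrt(3))
assert abs(z - (-math.sqrt(3) / 2 - 0.5j)) < 1e-12
assert abs(abs(z) - 1.0) < 1e-12
assert abs(math.degrees(cmath.phase(z)) + 150.0) < 1e-9
```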
Let $a \in \Bbb{R}$. Let $\Bbb{Z}$ act on $S^1$ via $(n,z) \mapsto ze^{2 \pi i \cdot an}$. Claim: This action is not free if and only if $a \in \Bbb{Q}$. Here's an attempt at the forward direction: If the action is not free, there is some nonzero $n$ and $z \in S^1$ such that $ze^{2 \pi i \cdot an} = 1$. Note $z = e^{2 \pi i \theta}$ for some $\theta \in [0,1)$.
Then the equation becomes $e^{2 \pi i(\theta + an)} = 1$, which holds if and only if $2\pi (\theta + an) = 2 \pi k$ for some $k \in \Bbb{Z}$. Solving for $a$ gives $a = \frac{k-\theta}{n}$...
What if $\theta$ is irrational...what did I do wrong?
'cause I understand that second one but I'm having a hard time explaining it in words
(Re: the first one: a matrix transpose "looks" like the equation $Ax\cdot y=x\cdot A^\top y$. Which implies several things, like how $A^\top x$ is perpendicular to $A^{-1}x^\top$ where $x^\top$ is the vector space perpendicular to $x$.)
DogAteMy: I looked at the link. You're writing garbage with regard to the transpose stuff. Why should a linear map from $\Bbb R^n$ to $\Bbb R^m$ have an inverse in the first place? And for goodness sake don't use $x^\top$ to mean the orthogonal complement when it already means something.
he based much of his success on principles like this, I can't believe I've forgotten it
it's basically saying that it's a waste of time to throw a parade for a scholar or win him or her over with compliments and awards etc, but this is the biggest source of a sense of purpose in the non-scholar
yeah there is this thing called the internet, and well yes, there are better books than others you can study from, provided they are not stolen from you by drug dealers; you should buy a textbook that they base university courses on, if you can save for one
I was working from "Problems in Analytic Number Theory" Second Edition, by M.Ram Murty prior to the idiots robbing me and taking that with them which was a fantastic book to self learn from one of the best ive had actually
Yeah I wasn't happy about it either, it was more than $200 usd actually. well look, if you want my honest opinion, self study doesn't exist; you are still being taught something by Euclid if you read his works, despite him having died a few thousand years ago, and he is as much a teacher as you'll get. and if you don't plan on reading the works of others, to maintain some sort of purity in the word self study, well, no, you have failed in life and should give up entirely. but that is a very good book
regardless of you attending Princeton university or not
yeah me neither you are the only one I remember talking to on it but I have been well and truly banned from this IP address for that forum now, which, which was as you might have guessed for being too polite and sensitive to delicate religious sensibilities
but no it's not my forum I just remembered it was one of the first I started talking math on, and it was a long road for someone like me being receptive to constructive criticism, especially from a kid a third my age which according to your profile at the time you were
i have a chronological disability that prevents me from accurately recalling exactly when this was, don't worry about it
well yeah it said you were 10, so it was a troubling thought to be getting advice from a ten year old. at the time i think i was still holding on to some sort of hope of a career in non-stupidity-related fields, which was at some point abandoned
@TedShifrin thanks for that in bookmarking all of these under 3500, is there a 101 i should start with and find my way into four digits? what level of expertise is required for all of these is a more clear way of asking
Well, there are various math sources all over the web, including Khan Academy, etc. My particular course was intended for people seriously interested in mathematics (i.e., proofs as well as computations and applications). The students in there were about half first-year students who had taken BC AP calculus in high school and gotten the top score, about half second-year students who'd taken various first-year calculus paths in college.
long time ago tho even the credits have expired not the student debt though so i think they are trying to hint i should go back a start from first year and double said debt but im a terrible student it really wasn't worth while the first time round considering my rate of attendance then and how unlikely that would be different going back now
@BalarkaSen yeah from the number theory i got into in my most recent years it's bizarre how i almost became allergic to calculus i loved it back then and for some reason not quite so when i began focusing on prime numbers
What do you all think of this theorem: The number of ways to write $n$ as a sum of four squares is equal to $8$ times the sum of divisors of $n$ if $n$ is odd and $24$ times sum of odd divisors of $n$ if $n$ is even
A proof of this uses (basically) Fourier analysis
Even though it looks rather innocuous albeit surprising result in pure number theory
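For small $n$ the statement is easy to brute-force (a quick pure-Python sketch, not from the chat):

```python
from itertools import product

def r4(n):
    # ordered quadruples (a, b, c, d) of integers with a^2+b^2+c^2+d^2 = n
    m = int(n ** 0.5)
    return sum(1 for q in product(range(-m, m + 1), repeat=4)
               if q[0]*q[0] + q[1]*q[1] + q[2]*q[2] + q[3]*q[3] == n)

def jacobi_r4(n):
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    if n % 2:                                      # n odd: 8 times sum of divisors
        return 8 * sum(divisors)
    return 24 * sum(d for d in divisors if d % 2)  # n even: 24 times sum of odd divisors

assert all(r4(n) == jacobi_r4(n) for n in range(1, 21))
```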
@BalarkaSen well because it was what Wikipedia deemed my interests to be categorized as i have simply told myself that is what i am studying, it really starting with me horsing around not even knowing what category of math you call it. actually, ill show you the exact subject you and i discussed on mmf that reminds me you were actually right, i don't know if i would have taken it well at the time tho
yeah looks like i deleted the stack exchange question on it anyway i had found a discrete Fourier transform for $\lfloor \frac{n}{m} \rfloor$ and you attempted to explain to me that is what it was that's all i remember lol @BalarkaSen
oh and when it comes to transcripts involving me on the internet, don't worry, the younger version of you most definitely will be seen in a positive light, and just contemplating all the possibilities of things said by someone as insane as me, agree that pulling up said past conversations isn't productive
absolutely me too but would we have it any other way? i mean i know im like a dog chasing a car as far as any real "purpose" in learning is concerned i think id be terrified if something didnt unfold into a myriad of new things I'm clueless about
@Daminark The key thing if I remember correctly was that if you look at the subgroup $\Gamma$ of $\text{PSL}_2(\Bbb Z)$ generated by (1, 2|0, 1) and (0, -1|1, 0), then any holomorphic function $f : \Bbb H^2 \to \Bbb C$ invariant under $\Gamma$ (in the sense that $f(z + 2) = f(z)$ and $f(-1/z) = z^{2k} f(z)$, $2k$ is called the weight) such that the Fourier expansion of $f$ at infinity and $-1$ having no constant coefficients is called a cusp form (on $\Bbb H^2/\Gamma$).
The $r_4(n)$ thing follows as an immediate corollary of the fact that the only weight $2$ cusp form is identically zero.
I can try to recall more if you're interested.
It's insightful to look at the picture of $\Bbb H^2/\Gamma$... it's like, take the line $\Re[z] = 1$, the semicircle $|z| = 1, z > 0$, and the line $\Re[z] = -1$. This gives a certain region in the upper half plane
Paste those two lines, and paste half of the semicircle (from -1 to i, and then from i to 1) to the other half by folding along i
Yup, that $E_4$ and $E_6$ generates the space of modular forms, that type of things
I think in general if you start thinking about modular forms as eigenfunctions of a Laplacian, the space generated by the Eisenstein series are orthogonal to the space of cusp forms - there's a general story I don't quite know
Cusp forms vanish at the cusp (those are the $-1$ and $\infty$ points in the quotient $\Bbb H^2/\Gamma$ picture I described above, where the hyperbolic metric gets coned off), whereas given any values on the cusps you can make a linear combination of Eisenstein series which takes those specific values on the cusps
So it sort of makes sense
Regarding that particular result, saying it's a weight 2 cusp form is like specifying a strong decay rate of the cusp form towards the cusp. Indeed, one basically argues like the maximum value theorem in complex analysis
@BalarkaSen no you didn't come across as pretentious at all; i can only imagine being so young and having the mind you have would have resulted in many accusing you of such, but really, my experience in life is diverse to say the least, and I've met know-it-all types that are in every way detestable; you shouldn't be so hard on your character, you are very humble considering your calibre
You probably don't realise how low the bar drops when it comes to integrity of character is concerned, trust me, you wouldn't have come as far as you clearly have if you were a know it all
it was actually the best thing for me, to have met a 10 year old at the age of 30 who was well beyond what i'll ever realistically become as far as math is concerned; someone like you is going to be accused of arrogance simply because you intimidate many, ignore the good majority of that mate
Let $X$ be a noetherian scheme over $\mathbb{C}$, and let $E$ be a locally free sheaf of finite rank over $X$. Then we have the projective bundle $f: \mathbb{P}(E)\rightarrow X$.
Now $f$ is a flat morphism and we have $f_{*}\mathcal{O}_{\mathbb{P}(E)}=\mathcal{O}_X$ and $R^if_{*}\mathcal{O}_{\mathbb{P}(E)}=0$ for $i\geq 1$.
So according to this question and the following answers we should get for every coherent sheaf $H\in Coh(X)$ an isomorphism $f_{*}f^{*}H\cong H$ and for $i\geq 1$ we should have $R^if_{*}f^{*}H=0$. $(*)$
Is there a direct way to see that we have the two facts in $(*)$ for any noetherian $X$ without using the derived category? Or is this wrong in this generality?
I'm asking because in the answers to my question here, it is suggested to use the projection formula for $f$ to see the vanishing of higher direct images of sheaves $G$ with the property $f^{*}f_{*}G\cong G$. But the usual projection formula $R^if_{*}(G\otimes f^{*}H)\cong (R^if_{*}G)\otimes H$, e.g. Hartshorne Exercise III.$8.3$, needs the sheaf $H$ on $X$ to be locally free. In this case we don't have an arbitrary morphism but a flat one, and we have the facts $f_{*}\mathcal{O}_{\mathbb{P}(E)}=\mathcal{O}_X$ resp. $R^if_{*}\mathcal{O}_{\mathbb{P}(E)}=0$ for $i\geq 1$, so maybe it works also for coherent sheaves on $X$.
It might be better to write them in a bit plainer English: the first is
"Every
ordinal is in bijection with some initial ordinal,"
while the second is
"Every
set is in bijection with some initial ordinal."
Since there are lots of sets which aren't ordinals, in principle there's no reason for the first statement to imply the second. Indeed, to get from the first statement to the second statement we would need to prove
"Every
set is in bijection with some ordinal,"
and this is the Well-Ordering Theorem, which is equivalent to the axiom of choice.
You also ask why the first statement is provable in ZF, since you need to pick the least ordinal with a certain property. Well, the point here is
not every choice requires the axiom of choice! In ZF alone, we can prove that every set of ordinals has an $\in$-least element; in particular, every ordinal $\alpha$ is in bijection with the least ordinal in the set $\{\beta : \text{there is a bijection from } \alpha \text{ to } \beta\}$, which exists by the fact above, and this ordinal is clearly an initial ordinal.
If you haven't yet proved "Every set of ordinals has an $\in$-least element" in ZF, you should try to do that.
Interesting side note: the well-ordering theorem says "every set is in bijection with some well-ordered set." To get from this to "every set is in bijection with some ordinal," we need to prove "every well-ordered set is in bijection with some ordinal." The proof of this fact uses transfinite recursion, which requires the axiom scheme of Replacement. |
Kyle Kanos's answer looks to be very full, but I thought I'd add my own experience. The split-step Fourier method (SSFM) is extremely easy to get running and fiddle with; you can prototype it in a few lines of Mathematica, and it is extremely stable numerically. It involves imparting only unitary operators on your dataset, so it automatically conserves probability / power (the latter if you're solving Maxwell's equations with it, which is where my experience lies). For a one-dimensional Schrödinger equation (i.e. $x$ and $t$ variation only) it is extremely fast, even as Mathematica code. And if you need to speed it up, you really only need a good FFT code in your target language (my experience lies with C++).
What you'd be doing is a disguised version of the Beam Propagation Method for optical propagation through a waveguide of varying cross section (analogous to time varying potentials), so it would be helpful to look this up too.
The way I look at the SSFM/BPM is as follows. Its grounding is the Trotter product formula of Lie theory:
$$\tag{1}\lim\limits_{m\to\infty}\left(\exp\left(\mathcal{D}\,\frac{t}{m}\right)\,\exp\left(\mathcal{V}\,\frac{t}{m}\right)\right)^m = \exp((\mathcal{D+V}) t)$$
which is sometimes called the operator splitting equation in this context. Your dataset is an $x-y$ or $x-y-z$ discretised grid of complex values representing $\psi(x,y,z)$ at a given time $t$. So you imagine this (you don't have to
do this; I'm still talking conceptually) whopping grid written as an $N$-element column vector $\Psi$ (for a $1024\times1024$ grid we have $N=1024^2=1\,048\,576$) and then your Schrödinger equation is of the form:
$$\tag{2}\mathrm{d}_t \Psi = K\Psi = (\mathcal{D+V}(t)) \Psi$$
where $K = \mathcal{D+V}$ is an $N\times N$ skew-Hermitian matrix, an element of $\mathfrak{u}(N)$, and $\Psi$ is going to be mapped with increasing time by an element of the one parameter group $\exp(K\,t)$. (I've sucked the $i\hbar$ factor into the $K = \mathcal{D+V}$ on the RHS so I can more readily talk in Lie theoretic terms). Given the size of $N$, the operators' natural habitat $U(N)$ is a thoroughly colossal Lie group, so PHEW! Yes, I am still talking in wholly theoretical terms. Now, what does $\mathcal{D+V}$ look like? Still imagining for now, it could be thought of as a finite difference version of $i\,\hbar\,\nabla^2/(2\,m) - i\hbar^{-1}V_0 + i\hbar^{-1}(V_0-V(x,y,z,t_0))$, where $V_0$ is some convenient "mean" potential for the problem at hand.
We let:
$$\tag{3}\begin{array}{lcl}\mathcal{D} &=& i\frac{\hbar}{2\,m} \nabla^2 - i\hbar^{-1}V_0\\\mathcal{V}&=&i\hbar^{-1}(V_0-V(x,y,z,t))\end{array}$$
Why I have split them up like this will become clear below.
The point about $\mathcal{D}$ is that it can be worked out analytically for a plane wave: it is a simple multiplication operator in momentum co-ordinates. So, to work out $\Psi\mapsto\exp(\Delta t\,\mathcal{D}) \Psi$, here are the first three steps of a SSFM/BPM cycle:
1. Impart an FFT to the dataset $\Psi$ to transform it into a set $\tilde{\Psi}$ of superposition weights of plane waves: now the grid co-ordinates have been changed from $x,\,y,\,z$ to $k_x,\,k_y,\,k_z$;
2. Impart $\tilde{\Psi}\mapsto\exp(\Delta t\,\mathcal{D}) \tilde{\Psi}$ by simply multiplying each point on the grid by $\exp\left(-i\,\Delta t\,\left(\frac{\hbar\,(k_x^2+k_y^2+k_z^2)}{2\,m}+\frac{V_0}{\hbar}\right)\right)$;
3. Impart an inverse FFT to map our grid back to $\exp(\Delta t\,\mathcal{D}) \Psi$.

Now we're back in the position domain. This is the better domain to impart the operator $\mathcal{V}$, of course: here $\mathcal{V}$ is a simple multiplication operator. So here is the last step of your algorithmic cycle:

4. Impart the operator $\Psi\mapsto\exp(\Delta t\,\mathcal{V}) \Psi$ by simply multiplying each point on the grid by the phase factor $\exp(i\,\Delta t\,(V_0-V(x,y,z,t))/\hbar)$
....and then you begin your next $\Delta t$ step and cycle over and over. Clearly it is very easy to put time-varying potentials $V(x,y,z,t)$ into the code.
So you see you simply choose $\Delta t$ small enough that the Trotter formula (1) kicks in: you're simply approximating the action of the operator $\exp(\mathcal{D+V}\,\Delta t)\approx\exp(\mathcal{D}\,\Delta t)\,\exp(\mathcal{V}\,\Delta t)$ and you flit back and forth with your FFT between position and momentum co-ordinates, i.e. the domains where $\mathcal{V}$ and $\mathcal{D}$ are simple multiplication operators.
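To make the cycle concrete, here is a minimal 1D sketch in NumPy; the grid sizes, the harmonic potential, the units $\hbar = m = 1$, and the symmetrised (Strang) ordering with half kinetic steps are all my own illustrative choices, not anything prescribed above:

```python
import numpy as np

# 1D split-step cycle for i psi_t = -(1/2) psi_xx + V(x) psi   (hbar = m = 1)
N, L, dt, steps = 256, 40.0, 1e-3, 500
dx = L / N
x = np.arange(N) * dx - L / 2
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)          # plane-wave numbers of the FFT grid
V = 0.5 * x**2                                   # illustrative harmonic trap

psi = np.exp(-((x - 1.0) ** 2)).astype(complex)  # displaced Gaussian start
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)    # normalise

half_kin = np.exp(-0.25j * dt * k**2)            # exp(-i (dt/2) k^2/2): half kinetic step
full_pot = np.exp(-1j * dt * V)                  # exp(-i dt V(x)): full potential step

for _ in range(steps):
    psi = np.fft.ifft(half_kin * np.fft.fft(psi))  # kinetic half-step in k-space
    psi = full_pot * psi                           # potential step in x-space
    psi = np.fft.ifft(half_kin * np.fft.fft(psi))  # second kinetic half-step

norm = np.sum(np.abs(psi) ** 2) * dx             # stays at 1: every factor is unit-modulus
```

Every operation is an FFT or a pointwise unit-modulus phase, so the discrete norm is conserved to machine precision, whatever $\Delta t$ you pick.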
Notice that you are only ever imparting, even in the discretised world, unitary operators: FFTs and pure phase factors.
One point you do need to be careful of is that as your $\Delta t$ becomes small, you must make sure that the spatial grid spacing shrinks as well. Otherwise, suppose the spatial grid spacing is $\Delta x$. Then the physical meaning of the one discrete step is that the diffraction effects are travelling at a velocity $\Delta x/\Delta t$; when simulating Maxwell's equations and waveguides, you need to make sure that this velocity is much smaller than $c$. I daresay like limits apply to the Schrödinger equation: I don't have direct experience here but it does sound fun and maybe you could post your results sometime!
A second "experience" point with this kind of thing - I'd be almost willing to bet this is how you'll wind up following your ideas. We often have ideas for simple, quick-and-dirty simulations, but it never quite works out that way! I'd begin with the SSFM as I've described above, as it is very easy to get running and you'll quickly see whether or not its results are physical. Later on you can use your (say) Mathematica SSFM code to check the results of more sophisticated code you might end up building, say a Crank-Nicolson code along the lines of Kyle Kanos's answer.
Error Bounds
The Dynkin formula realisation of the Baker-Campbell-Hausdorff Theorem:
$$\exp(\mathcal{D}\,\Delta t)\,\exp(\mathcal{V}\,\Delta t) = \exp\left((\mathcal{D}+\mathcal{V})\,\Delta t + \frac{1}{2} [\mathcal{D},\,\mathcal{V}]\,\Delta t^2 + \cdots\right)$$ converging for some $\Delta t>0$ shows that the method is accurate to second order, and one can show that:
$$\exp(\mathcal{D}\,\Delta t)\,\exp(\mathcal{V}\,\Delta t)\,\exp\left(-\frac{1}{2} [\mathcal{D},\,\mathcal{V}]\,\Delta t^2\right) = \exp\left((\mathcal{D}+\mathcal{V})\,\Delta t + \mathcal{O}(\Delta t^3)\right)$$
You can, in theory, therefore use the correction term $\exp\left(-\frac{1}{2} [\mathcal{D},\,\mathcal{V}]\,\Delta t^2\right)$ to estimate the error and set your $\Delta t$ accordingly. This is not as easy as it looks, and in practice the bounds end up being rough estimates of the error instead. The problem is that:
$$\frac{\Delta t^2}{2}[\mathcal{D},\,\mathcal{V}] = -\frac{i\,\Delta t^2}{2\,m}\,\left(\partial_x^2 V(x,\,t) + 2 \partial_x V(x,\,t)\,\partial_x\right)$$
and there are no readily found co-ordinates wherein $[\mathcal{D},\,\mathcal{V}]$ is a simple multiplication operator. So you have to be content with $\exp\left(-\frac{1}{2} [\mathcal{D},\,\mathcal{V}]\,\Delta t^2\right) \approx e^{-i\,\varphi\,\Delta t^2}\left(\mathrm{id} -\left(\frac{1}{2} [\mathcal{D},\,\mathcal{V}]-i\,\varphi(t)\right)\,\Delta t^2\right)$ and use this to estimate your error, by working out $\left(\mathrm{id} -\left(\frac{1}{2} [\mathcal{D},\,\mathcal{V}]-i\,\varphi(t)\right)\,\Delta t^2\right)\,\psi$ for your currently evolving solution $\psi(x,\,t)$ and using this to set your $\Delta t$ on-the-fly after each cycle of the algorithm. You can of course make these ideas the basis for an adaptive stepsize controller for your simulation. Here $\varphi$ is a global phase pulled out of the dataset to minimise the norm of $\left(\frac{1}{2} [\mathcal{D},\,\mathcal{V}]-i\,\varphi(t)\right)\,\Delta t^2$; you can of course often throw such a global phase out: depending on what you're doing with the simulation results, often we're not bothered by a constant global phase $\exp\left(\int \varphi\,\mathrm{d}t\right)$.
A relevant paper about errors in the SSFM/BPM is:
Lars Thylén. "The Beam Propagation Method: An Analysis of its Applicability",
Optical and Quantum Electronics 15 (1983), pp. 433-439.
Lars Thylén thinks about the errors in non-Lie theoretic terms (Lie groups are my bent, so I like to look for interpretations of them) but his ideas are essentially the same as the above. |
[tex]\int_0^{\sqrt{6}} e^{-x^2}\,\frac{x^2}{2}\,dx[/tex]
should i use a u-substitution or integration by parts?
RadiationX said: [tex]\int_0^{\sqrt{6}} e^{-x^2}\,\frac{x^2}{2}\,dx[/tex] should i use a u-substitution or integration by parts?

An answer in terms of erf is no worse than one in terms of sin or exp. There are tables and computer programs to find values. Infinite series are helpful for some purposes, but unless one is going to compute an approximation by hand, an expression in terms of erf looks nicer and is more informative. Were you also disgusted by integrals like these?

kant said: These a-hole integrals disgusted me greatly when i worked on calculus. It is better if you just express e^t as an infinite series. Substitute t = -x^2 into the series. After that, multiply the entire series by x^2/2, integrate it term by term, and plug in numbers. This function can only be tamed, not solved.

Well in that case my new function is called easyanswer(t); easyanswer(t) is defined such that, where t is some real number of my choice, it is the solution to the given numerical integral in front of me. Much easier exams now.

lurflurf said: What good is an answer like log(2) or sin(exp(sqrt(2))) anyway. End special treatment for elementary functions. Equality for special functions. Equal rights for all functions.
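For what it's worth, the erf form of this particular integral can be checked with nothing but the standard library. The closed form below comes from one integration by parts (my derivation, worth re-deriving yourself):

```python
import math

# One integration by parts gives
#   int_0^sqrt(6) (x^2/2) e^{-x^2} dx = (sqrt(pi)/8) erf(sqrt(6)) - (sqrt(6)/4) e^{-6}
closed = math.sqrt(math.pi) / 8 * math.erf(math.sqrt(6)) \
         - math.sqrt(6) / 4 * math.exp(-6)

def f(t):
    return t * t / 2 * math.exp(-t * t)

# crude midpoint rule as an independent numeric check
n = 100000
h = math.sqrt(6) / n
numeric = sum(f((i + 0.5) * h) for i in range(n)) * h

assert abs(closed - numeric) < 1e-9
```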
Prove that $$\sum_{n=1}^\infty\frac1{n^6}=\frac{\pi^6}{945}$$ by the Fourier series of $x^2$.
By Parseval's identity, I can only show $\sum_{n=1}^\infty\frac1{n^4}=\frac{\pi^4}{90}$. Could you please give me some hints?
We may start from$$ f_1(x) = \sum_{n\geq 1}\frac{\sin(nx)}{n} \tag{1}$$that is the Fourier series of a sawtooth wave, equal to $\frac{\pi-x}{2}$ on the interval $I=(0,2\pi)$.
By termwise integration, we get that $$ \forall x\in I,\quad \sum_{n\geq 1}\frac{1-\cos(nx)}{n^2} = \frac{2\pi x-x^2}{4}$$ hence: $$ \forall x\in I,\quad f_2(x) = \sum_{n\geq 1}\frac{\cos(nx)}{n^2}=\frac{\pi^2}{6}-\frac{\pi x}{2}+\frac{x^2}{4}\tag{2} $$ $$ \forall x\in I,\quad f_3(x) = \sum_{n\geq 1}\frac{\sin(nx)}{n^3}=\frac{\pi^2 x}{6}-\frac{\pi x^2}{4}+\frac{x^3}{12}\tag{3} $$ (the integration constants are computed from the fact that $f_j(x)$ has to have mean zero over $I$) and by Parseval's theorem $$ \zeta(6) = \frac{1}{\pi}\int_{0}^{2\pi}f_3(x)^2\,dx = \frac{2\pi^6}{9}\int_{0}^{1}\left[x(x-1)(2x-1)\right]^2\,dx=\color{red}{\frac{\pi^6}{945}}.\tag{4}$$ With the same approach it is not difficult to prove that for any $n\geq 1$, $\zeta(2n)$ is a rational multiple of $\pi^{2n}\int_{0}^{1}B_n(x)^2\,dx$, where $B_n(x)$ is a Bernoulli polynomial.
Considering $f(x)=x^3$, by Parseval's identity we can prove that $\sum_{n=1}^{\infty}\frac{1}{n^6}=\frac{\pi^6}{945}$. We have $$a_0=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)dx=0,\qquad a_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\cos(nx)dx=0$$ and \begin{align}b_n&=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\sin(nx)dx\\&=(-1)^{n+1}\frac{2\pi^2}{n}+(-1)^{n}\frac{12}{n^3}.\end{align} From the relation $$\frac{1}{\pi}\int_{-\pi}^{\pi}|f|^2dx=\frac{a_0^2}{2}+\sum_{n=1}^\infty(a_n^2+b_n^2)$$ we get \begin{align}\sum_{n=1}^\infty\left(\frac{144}{n^6}+\frac{4\pi^4}{n^2}-\frac{48\pi^2}{n^4}\right)=\frac{2\pi^6}{7},\end{align} hence, using $\zeta(2)=\frac{\pi^2}{6}$ and $\zeta(4)=\frac{\pi^4}{90}$, $$\sum_{n=1}^\infty\frac{144}{n^6}=\frac{16\pi^6}{105}$$ and therefore $$\sum_{n=1}^\infty\frac{1}{n^6}=\frac{\pi^6}{945}.$$
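As a numerical sanity check of the last two displays (my own addition, pure Python): compare the partial sums with their closed forms. The $4\pi^4/n^2$ piece of the Parseval sum converges slowly, which is why its residual is much larger.

```python
import math

N = 200_000
# partial sum of the target series vs pi^6/945
s6 = sum(1.0 / n ** 6 for n in range(1, N + 1))
# partial sum of b_n^2 vs (1/pi) * integral of x^6 over [-pi, pi] = 2 pi^6 / 7
parseval = sum(144 / n ** 6 + 4 * math.pi ** 4 / n ** 2 - 48 * math.pi ** 2 / n ** 4
               for n in range(1, N + 1))
print(s6 - math.pi ** 6 / 945)          # essentially zero
print(parseval - 2 * math.pi ** 6 / 7)  # ~ -0.002: the 1/n^2 tail converges slowly
```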
I have the following equation:
Notice how the last two terms are not centered? If I use the code specified below, I would get that effect. Now, what I would like to have is the effect shown in the first term (centered the term "Bias"). You probably also see that since I kind of fail with paint, the first terms are not centered correctly.
Now I was wondering if it is possible to do that in Latex? Notice that it should be centered between the equal sign and the plus sign.
In code form:
\begin{equation}\begin{aligned}Err(x_{0}) &=\left(E\left[\hat{f}(x_{0})\right]-f(x_{0})\right)^{2}&+&E\left[\hat{f}(x_{0})-E\left[\hat{f}(x_{0})\right]\right]^{2}&+&\sigma^{2}_{\epsilon} \\&=\text{Bias}^{2}&+&\text{Var}(\hat{f}(x_{0}))&+&\text{Var}(\epsilon)\end{aligned}\end{equation}
Now unfortunately, the terms in the first equation and the second equations are not centered. I was wondering how I can center each term.
Thanks |
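One possible approach (a sketch, not from the original post): replace the aligned environment with an array, since each {c} column of an array centers its cell, so "Bias" sits centered under the first term, between the = and + signs:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Each {c} column centers its cell between the surrounding = and + signs.
\begin{equation}
\begin{array}{r c c c c c c}
Err(x_{0}) &=& \left(E\left[\hat{f}(x_{0})\right]-f(x_{0})\right)^{2}
           &+& E\left[\hat{f}(x_{0})-E\left[\hat{f}(x_{0})\right]\right]^{2}
           &+& \sigma^{2}_{\epsilon} \\
           &=& \text{Bias}^{2}
           &+& \text{Var}(\hat{f}(x_{0}))
           &+& \text{Var}(\epsilon)
\end{array}
\end{equation}
\end{document}
```

The array columns are wide enough to hold the long first-row terms, so the short second-row terms get centered in the same columns automatically.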
Fully Truncated Simplices
Barry Monson (University of New Brunswick), Leah Berman (U. Alaska - Fairbanks), Deborah Oliveros (UNAM - Quer\'{e}taro), Gordon Williams (U. Alaska - Fairbanks)
Minisymposium: POLYTOPES AND GRAPHS
Content: If you truncate an equilateral triangle $\{3\}$ to its
edge midpoints you get another, smaller $\{3\}$. If you truncate
a regular tetrahedron $\{3, 3\}$ to its edge midpoints, you get (a little
unexpectedly) a regular octahedron $\{3, 4\}$. In higher ranks,
the fully truncated n-simplex $\mathcal{T}_n$ is no longer regular, though it is
uniform with facets of two types.
We want to understand the minimal regular cover $\mathcal{R}_n$
of $\mathcal{T}_n$, which in turn means
we need to understand its monodromy group $M_n := \mathrm{Mon}(\mathcal{T}_n)$.
For $n\geq 4$, we `know' that this group has Schl\"{a}fli type $\{3,12,3,\ldots,3\}$
and the impressive order
$$ \frac{[(n+1)!]^{n-1} \cdot (n-1)!}{2^{n-2}}$$
%
I will discuss all this and report on the current state of my confusion
amongst all this fun. (How can one call mathematics `work'?) |
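For a feel of how fast the stated order grows, one can simply evaluate the quoted formula for the first few ranks (a quick sketch; the formula is the one from the abstract, valid for $n\geq 4$):

```python
from math import factorial

def order_Mn(n):
    # [(n+1)!]^(n-1) * (n-1)! / 2^(n-2), the order quoted in the abstract
    return factorial(n + 1) ** (n - 1) * factorial(n - 1) // 2 ** (n - 2)

for n in range(4, 8):
    print(n, order_Mn(n))
```

Already at $n=4$ the order is 2,592,000.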
Let $M$ be a Riemannian manifold. Assume $\gamma$ is a path in $M$ such that its image is a totally geodesic submanifold of $M$.
I am trying to prove (the seemingly trivial result) that $\gamma$ must be a reparametrization of a geodesic in $M$.
I came up with two possible explanations (described below). I would like to find a simpler argument.
First explanation:
Lemma(1): On a one-dimensional Riemannian manifold, from any point there is only one path with a given constant speed $c$ (up to direction). Proof:
Since any one-dimensional manifold $M$ is locally diffeomorphic to $\mathbb{R}$ and the question is local in nature, we can assume $M = (0,1)$, with some arbitrary metric $g = f(x)\,dx^2$, $f > 0$.
Assume $\beta,\alpha:I \to M=(0,1) \, , \, \alpha(0)=\beta(0)=p, \dot\alpha(0)=\dot\beta(0) \, , \, \forall t \, \|\dot\alpha(t)\|=\|\dot\beta(t)\|=1$.
Since $\|\dot\alpha(t)\|=1$, we have $\dot\alpha(t) = \pm 1/\sqrt{f(\alpha(t))}$, and by continuity the sign does not change. Since we assumed $\dot\alpha(0)=\dot\beta(0)$, both curves solve the same first-order ODE $\dot x = \pm 1/\sqrt{f(x)}$ with the same sign and initial value, so uniqueness of solutions of ODEs gives $\forall t \, \alpha(t)=\beta(t)$.
Corollary(1): On a one-dimensional Riemannian manifold, any path with constant speed is a geodesic. Proof:
Take the geodesic with the same starting point and initial velocity as the given path; it also has constant speed. By Lemma (1), from any point there is only one path with a given constant speed and direction, so the given path must coincide with that geodesic.
Proof of the claim:
Assume $S=\operatorname{Image} \gamma$ is a totally geodesic submanifold of $M$.
By corollary (1), if $\alpha$ is a constant speed reparametrization of $\gamma$, it's a geodesic in $S$. Hence it's a geodesic in $M$.
Second explanation:
Lemma(1):$S=\operatorname{Image} \gamma$ is totally geodesic iff (*) $\nabla_{\dot{\gamma}} \dot{\gamma} - \frac{g(\nabla_{\dot{\gamma}}\dot{\gamma}, \dot{\gamma} )}{g(\dot{\gamma}, \dot{\gamma})}\dot{\gamma} = 0.$
where $\nabla_{\dot{\gamma}} \dot{\gamma}$ is the usual covariant derivative (in M) along the path $\gamma$.
Proof:
Since the question is local, we may assume $\gamma$ is injective, hence invertible onto its image. So any path $\alpha$ in $S$ is of the form $\alpha(s)=\gamma(\phi(s))$.
So, $\dot \alpha(s)=\dot \gamma(\phi(s)) \cdot \phi'(s)$
$\nabla_{\dot{\alpha}}^M \dot{\alpha}=\phi''(s)\cdot \dot \gamma(\phi(s)) + \big(\phi'(s)\big)^2 \cdot \nabla_{\dot{\gamma}}^M \dot{\gamma} (\phi(s))$
The covariant derivative along a path in a Riemannian submanifold $S \subseteq M$ is the projection on $TS$ of the covariant derivative in $M$. In our case this just amounts to projecting on $sp\{\dot\gamma(t)\} \subseteq T_{\gamma(t)}M$:
$\nabla_{\dot{\alpha}}^S \dot{\alpha}=\phi''(s)\cdot \dot \gamma(\phi(s)) + \big(\phi'(s)\big)^2 \cdot Pr\nabla_{\dot{\gamma}}^M \dot{\gamma} (\phi(s))$
From these two formulas it is easy to see that:
the implication "$\alpha$ is a geodesic in $S$ (i.e. $\nabla_{\dot{\alpha}}^S \dot{\alpha}=0$) $\Rightarrow$ $\alpha$ is a geodesic in $M$ (i.e. $\nabla_{\dot{\alpha}}^M \dot{\alpha}=0$)" holds iff $\nabla_{\dot{\gamma}} \dot{\gamma} - \frac{g(\nabla_{\dot{\gamma}}\dot{\gamma}, \dot{\gamma} )}{g(\dot{\gamma}, \dot{\gamma})}\dot{\gamma} = 0.$
Proof of the claim:
Let $\alpha(s)=\gamma(\phi(s))$ be a constant speed reparametrization of $\gamma$, so $|\dot \alpha(s)|=|\dot \gamma(\phi(s))| \cdot |\phi'(s)|=1$.
This implies (assuming $\phi'(s) >0$) $\phi'(s) =\frac{1}{\|\dot \gamma(\phi(s)) \|}=\frac{1}{\sqrt{g(\dot \gamma,\dot \gamma) \circ \phi}}$
Hence:
\begin{align} & \phi''(s)=-\frac{1}{\|\dot \gamma(\phi(s)) \|^2} \cdot \frac{1}{2\sqrt{g(\dot \gamma,\dot \gamma) \circ \phi}} \cdot \frac{d}{ds}\big( g(\dot \gamma,\dot \gamma) \circ \phi \big)= \\ & -\frac{1}{\|\dot \gamma(\phi(s)) \|^2} \cdot \frac{1}{2\sqrt{g(\dot \gamma,\dot \gamma) \circ \phi}} \cdot \Big( 2g(\nabla_{\dot{\gamma}} \dot{\gamma} ,\dot{\gamma}) \circ \phi \Big) \cdot \phi'(s) = \\ & - \big(\phi'(s)\big)^2 \cdot \frac{g(\nabla_{\dot{\gamma}}\dot{\gamma}, \dot{\gamma} )}{g(\dot{\gamma}, \dot{\gamma})} \circ \phi \end{align}
So finally,
$\nabla_{\dot{\alpha}} \dot{\alpha}=\phi''(s)\cdot \dot \gamma(\phi(s)) + \big(\phi'(s)\big)^2 \cdot \nabla_{\dot{\gamma}} \dot{\gamma} (\phi(s))= \big(\phi'(s)\big)^2 \cdot \Big( \big(\nabla_{\dot{\gamma}} \dot{\gamma} - \frac{g(\nabla_{\dot{\gamma}}\dot{\gamma}, \dot{\gamma} )}{g(\dot{\gamma}, \dot{\gamma})}\dot{\gamma} \big) \circ \phi \Big) =0$
As required. |
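As a concrete sanity check of the tangential condition (*) (my own sketch, not part of the original argument): in the Euclidean plane, the image of $\gamma(t)=(t^3,t^3)$ is the line $y=x$, a totally geodesic submanifold. The curve has nonzero acceleration, yet the acceleration is purely tangential, so (*) holds and only a constant-speed reparametrization separates it from a geodesic.

```python
import math

def gamma(t):
    # non-affine parametrization of the totally geodesic line y = x in R^2
    return (t ** 3, t ** 3)

def d1(f, t, h=1e-5):
    # central first difference
    return tuple((p - m) / (2 * h) for p, m in zip(f(t + h), f(t - h)))

def d2(f, t, h=1e-5):
    # central second difference
    return tuple((p - 2 * c + m) / h ** 2
                 for p, c, m in zip(f(t + h), f(t), f(t - h)))

def dot(u, w):
    return sum(x * y for x, y in zip(u, w))

t = 0.7
v = d1(gamma, t)     # velocity, (3t^2, 3t^2)
a = d2(gamma, t)     # Euclidean covariant acceleration, (6t, 6t) != 0
# condition (*): acceleration minus its component along the velocity
star = tuple(ai - dot(a, v) / dot(v, v) * vi for ai, vi in zip(a, v))
print(star)          # ~ (0, 0)
```

So $\gamma$ itself is not a geodesic ($\nabla_{\dot\gamma}\dot\gamma\neq 0$), but it satisfies (*), in line with Lemma (1) of the second explanation.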
The speed of sound is given by:
$$v = \sqrt{\gamma\frac{P}{\rho}} \tag{1} $$
where $P$ is the pressure and $\rho$ is the density of the gas. $\gamma$ is a constant called the adiabatic index.
The equation should make intuitive sense. The density is a measure of how heavy the gas is, and heavy things oscillate slower. The pressure is a measure of how stiff the gas is, and stiff things oscillate faster.
Now let's consider the effect of temperature. When you're heating the gas you need to decide if you're going to keep the volume constant and let the pressure rise, or keep the pressure constant and let the volume rise, or something in between. Let's consider the possibilities.
Suppose we keep the volume constant, in which case the pressure will rise as we heat the gas. That means in equation (1) $P$ increases while $\rho$ stays constant, so the speed of the sound goes up. The speed of sound is increasing because we're effectively making the gas stiffer.
Now suppose we keep the pressure constant and let the gas expand as it's heated. That means in equation (1) $\rho$ decreases while $P$ stays constant and again the speed of sound increases. The speed of sound is increasing because we're making the gas lighter so it oscillates faster.
And if we take a middle course and let the pressure and the volume increase then $P$ increases and $\rho$ decreases and again the speed of sound goes up.
So whatever we do, increasing the temperature increases the speed of sound, but it does it in different ways depending on how we let the gas expand as it's heated.
Just as a footnote, an ideal gas obeys the equation of state:
$$ PV = nRT \tag{2} $$
where $n$ is the number of moles of the gas. The molar density is just the number of moles per unit volume, $\rho = n/V$, which means $n = \rho V$. (Strictly, the $\rho$ in equation (1) is the mass density; using it instead only introduces a constant factor, the molar mass $M$, giving $v=\sqrt{\gamma RT/M}$, and does not change the temperature dependence.) If we substitute for $n$ in equation (2) we get:
$$ PV = \rho VRT $$
which rearranges to:
$$ \frac{P}{\rho} = RT $$
Substitute this into equation (1) and we get:
$$ v = \sqrt{\gamma RT} $$
so:
$$ v \propto \sqrt{T} $$
which is where we came in. However in this form the equation conceals what is really going on, hence your confusion. |
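A small numerical illustration of the final proportionality (my own sketch; $\gamma$, $R$ and the molar mass of air are assumed round values, and with the mass density the formula reads $v=\sqrt{\gamma RT/M}$):

```python
import math

R = 8.314          # gas constant, J/(mol K)
gamma_air = 1.4    # adiabatic index of a diatomic gas
M_air = 0.02897    # molar mass of air in kg/mol (assumed value)

def speed_of_sound(T):
    # v = sqrt(gamma * R * T / M), so v grows like sqrt(T)
    return math.sqrt(gamma_air * R * T / M_air)

for T in (273.15, 293.15, 313.15):
    print(T, speed_of_sound(T))   # roughly 331, 343, 355 m/s
```

The ratio of any two speeds is exactly the square root of the temperature ratio, whatever constants sit in front.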
First, existence: there is a primitive root modulo $n$ if and only if $n$ is $1$ or $2$ or $4$ or $p^\alpha$ or $2p^\alpha$, where $p$ is prime, $p\ne2$, and $\alpha\ge1$.
Second, a systematic way of finding all primitive roots modulo $n$. Begin by finding one primitive root, say $g$. Then all the units modulo $n$ are $g^\alpha$ for $\alpha=0,1,2,\ldots,\phi(n)-1$, and the primitive roots are those in which the exponent $\alpha$ is relatively prime to $\phi(n)$.
Saving the difficult one for last... how to find a primitive root $g$ to begin with. Sadly, there is no straightforward way that is much better than trial and error: though as usual, intelligent trial and error is better than mindless trial and error.
Let's illustrate with an example. Suppose that we want to find a primitive root $g$ modulo $43$: since $43$ is prime, such a root does exist. For every $g\not\equiv0\pmod{43}$ we have$$g^\alpha=1$$when $\alpha=\phi(43)=42$: to find a primitive root we need $g$ for which this is
not true when $\alpha=1,2,\ldots,41$. However we don't need to check all of these: the order of any element modulo $43$ must be a factor of $42$, so we only have to rule out the possible orders$$1,2,3,6,7,14,21.$$And we can do even better than this. Suppose we have checked that $g^{21}\not\equiv1$: then we can automatically say that $g^1,g^3,g^7\not\equiv1$ and we don't actually need to check them (see if you can explain why). So we only need to rule out$$2,6,14,21.$$And for similar reasons, we don't need to check $2$. So what it comes down to is that we need to find by trial and error a value of $g$ such that$$g^6,\,g^{14},\,g^{21}\not\equiv1\pmod{43}\ .$$
Try $g=2$: we can save work by using repeated squaring to calculate powers. We have$$\eqalign{ 2^6&=64\equiv21\not\equiv1\cr 2^7&\equiv2\times21=42\equiv-1\cr 2^{14}&=(2^7)^2\equiv(-1)^2=1\cr}$$and so $g=2$ fails. Try $g=3$: we have$$\eqalign{ 3^4&=81\equiv-5\cr 3^6&\equiv9(-5)=-45\equiv-2\not\equiv1\cr 3^7&\equiv-6\cr 3^{14}&=(3^7)^2\equiv36\not\equiv1\cr 3^{21}&=3^{14}3^7\equiv(-7)(-6)=42\equiv-1\not\equiv1\ .\cr}$$Therefore $3$ is a primitive root modulo $43$, and all primitive roots are $3^\alpha$ where$$\alpha=1,5,11,13,17,19,23,25,29,31,37,41.$$By generalising this example you can prove the following: if $n$ has a primitive root, then the condition for a unit $g$ to be a primitive root is:$$\hbox{for every prime factor $q$ of $\phi(n)$ we have $g^{\phi(n)/q}\not\equiv1\pmod n$}.$$ |
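The closing criterion translates directly into a short search program (my own sketch, not part of the original answer): test $g^{\phi(n)/q}\not\equiv1\pmod n$ for every prime $q$ dividing $\phi(n)$.

```python
from math import gcd

def prime_factors(m):
    # distinct prime factors by trial division
    fs, p = set(), 2
    while p * p <= m:
        while m % p == 0:
            fs.add(p)
            m //= p
        p += 1
    if m > 1:
        fs.add(m)
    return fs

def is_primitive_root(g, n, phi):
    # g is a primitive root mod n iff g^(phi/q) != 1 (mod n)
    # for every prime q dividing phi
    if gcd(g, n) != 1:
        return False
    return all(pow(g, phi // q, n) != 1 for q in prime_factors(phi))

n, phi = 43, 42
roots = [g for g in range(1, n) if is_primitive_root(g, n, phi)]
print(roots)   # 2 is absent, 3 is the smallest
```

For $n=43$ this reports exactly the twelve primitive roots $3^\alpha$ with $\gcd(\alpha,42)=1$.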
We can see the value \(y_2=3\) is incorrect: \(|x_2-a_2|=|2-2|=0\). The underlying reason is of course that we are not minimizing the second term \(y_2\), so nothing pushes it down to \(|x_2-a_2|\).
A correct formulation will need extra binary variables (or something related such as a SOS1 construct):\[\begin{align}\min\>&z\\&-z\le y_1-y_2 \le z\\& y_i \ge x_i - a_i \\& y_i \ge -(x_i - a_i)\\ & y_i \le x_i - a_i + M\delta_i\\& y_i \le -(x_i - a_i) + M(1-\delta_i)\\&\delta_i \in \{0,1\} \end{align}\]
The example model now will show:
I think we need binary variables for both terms \(y_i\). Although for this numerical example it seems that we could use one only for the second term \(y_2\) and handle \(y_1\) as before, I believe that is just luck. A correct formulation will use binary variables for both inner absolute values.
References Linear Programming: Converting nested absolute value, https://math.stackexchange.com/questions/2625516/linear-programming-converting-nested-absolute-value |
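To see why the big-M constraints pin each \(y_i\) to \(|x_i-a_i|\), here is a brute-force check (my own sketch, with an assumed \(M=100\)): for each binary value of \(\delta\) the feasible interval for \(y\) is either empty or collapses to the single point \(|x-a|\).

```python
# For a single (x, a) pair and a binary delta, intersect the four
# constraints y >= x-a, y >= -(x-a), y <= x-a + M*delta,
# y <= -(x-a) + M*(1-delta); return the feasible interval or None.
def feasible_y(x, a, delta, M=100.0):
    lo = max(x - a, -(x - a))
    hi = min(x - a + M * delta, -(x - a) + M * (1 - delta))
    return (lo, hi) if lo <= hi + 1e-9 else None

for x, a in [(2.0, 2.0), (5.0, 2.0), (1.0, 4.0)]:
    print(x, a, [feasible_y(x, a, d) for d in (0, 1)])
```

At \(x=a\) both branches are feasible but both force \(y=0\); away from it exactly one branch survives and it forces \(y=|x-a|\).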
Is there an example of an eigenfunction of a linear time invariant (LTI) system that is
not a complex exponential? Justin Romberg's Eigenfunctions of LTI Systems says such eigenfunctions do exist, but I am not able to find one.
All eigenfunctions of an LTI system can be described in terms of complex exponentials, and complex exponentials form a complete basis of the signal space. However, if you have a system that is
degenerate, meaning you have eigensubspaces of dimension >1, then the eigenvectors for the corresponding eigenvalue are all linear combinations of vectors from the subspace. And linear combinations of complex exponentials of different frequencies are not complex exponentials anymore.
Very simple example: The identity operator 1 as an LTI system has the whole signal space as eigensubspace with eigenvalue 1. That implies ALL functions are eigenfunctions.
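The defining eigen-relation is easy to check numerically (my own sketch, not from the thread): push a sampled complex exponential through an arbitrary FIR impulse response via circular convolution and verify that the output is the input times the constant $H(\omega)$.

```python
import cmath
import math

N = 64
# arbitrary FIR impulse response, zero-padded to one period
h = [0.5, 0.3, -0.2, 0.1] + [0.0] * (N - 4)
k = 5                             # frequency bin: omega = 2*pi*k/N
w = 2 * math.pi * k / N
x = [cmath.exp(1j * w * n) for n in range(N)]

# circular convolution y = h (*) x
y = [sum(h[m] * x[(n - m) % N] for m in range(N)) for n in range(N)]

# frequency response at omega: the eigenvalue
H = sum(h[m] * cmath.exp(-1j * w * m) for m in range(N))
ratios = [y[n] / x[n] for n in range(N)]
print(H)   # every ratio y[n]/x[n] equals this constant
```

A sum of two such exponentials with different frequencies is no longer an eigenfunction unless the system is degenerate as described above.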
I thought I had worded my response clearly---apparently not :-). The original question was, "Are there eigensignals besides the complex exponential for an LTI system?". The answer is, if one is given the fact that the system is LTI but nothing else is known, then the only confirmed eigensignal is the complex exponential. In specific cases, the system may have additional eigensignals as well. The example I gave was the ideal LPF with sinc being such an eigensignal. Note that the sinc function is not an eigensignal of an arbitrary LTI system. I gave the LPF and the sinc as an example to point out a non-trivial case---x(t) = y(t) will satisfy a mathematician but not an engineer :->. I am sure one can come up with other specific non-trivial examples that have other signals as eigensignals besides the complex exponential. But these other eigensignals will work for those specific examples only.
Also, cos and sin are not, in general, eigensignals. If cos(wt) is applied and the output is A cos(wt + theta), then this output cannot be expressed as a constant times the input (except when theta is 0 or pi, or A=0), which is the condition needed for a signal to be an eigensignal. There may be conditions under which cos and sin are eigensignals, but they are special cases and not general.
CSR
For any arbitrary LTI sytem, the complex exponential is, to the best of my knowledge, the only known eigensignal. On the other hand, consider the ideal LPF. The $\operatorname{sinc}$ function: $$\operatorname{sinc}(t) \triangleq \frac{\sin(\pi t)}{\pi t}$$ can easily be seen to be an eigen signal. This points to the existence of LTI systems (such as the ideal LPF) having signals other than complex exponentials as eigen signals ($\frac{\sin(\pi t)}{\pi t}$ in this case).
Maybe spatially invariant multidimensional systems, like lenses with circular symmetry; this is called the Fourier-Bessel expansion. There is no t for time, but the convolution and frequency-domain relations still hold.
(HP65) Factorial and Gamma Function
10-21-2017, 08:32 AM (This post was last modified: 10-26-2017 05:10 AM by Gamo.)
Post: #1
(HP65) Factorial and Gamma Function
Just noticed that HP 65 cannot calculate decimal Factorial and HP67 app for Android also cannot do it.
Here is a handy program using Stirling's approximation for Factorial and Gamma Function.
This approximation is good for x<70
Program:
Code:
Instruction:
1. Press [A] Initialize
2. Press [E] for x!
Example: [A] Initialize then 4.25 [B]
result approximation 35.21
10-21-2017, 01:21 PM (This post was last modified: 10-21-2017 01:26 PM by Dieter.)
Post: #2
RE: (HP65) Factorial and Gamma Function
(10-21-2017 08:32 AM)Gamo Wrote: Just noticed that HP 65 cannot calculate decimal Factorial and HP67 app for Android also cannot do it.
What you are missing is a Gamma function. One or another HP67 simulator app may have such a function, for instance the one from CuVee Software for iPhone. Which app are you referring to? Maybe yours has such a function as well?
BTW, the first HP with Gamma (by means of the x! key) was the HP-34C from 1979. Maybe this was even the first pocket calculator with an accurate Gamma function at all (does anyone know?). Earlier models did not feature this, and even the 41-series (introduced about the same time as the 34C) had no Gamma. Possibly to keep its factorial function compatible with the 67/97's. But a separate Gamma function (like on the 42s) would have been nice.
(10-21-2017 08:32 AM)Gamo Wrote: Here is a handy program using Stirling's approximation for Factorial and Gamma Function.
What can you say about this approximation's accuracy? It looks good for large arguments but less so for small x, e.g. x=1 results in 0,9995. If you omit the first constant –571/2488320 the average accuracy actually seems to increase.
The approximation is good for even larger arguments; the accuracy even increases. But due to the limitation of the HP65/67's working range the max. x is near 69,9575 where the result approaches 1E100, so larger x will cause an overflow error.
BTW, the ENTER after LBL E can and should be omitted and instead of [1/x] [x] you may use a simple division.
That's why I prefer a CLX or CLST at the end of such initialization routines. ;-)
Here the approximation has an absolute error of ~ –2E–5. Without the first constant it is only ~ +5E–6. ;-)
Dieter
10-21-2017, 08:01 PM
Post: #3
RE: (HP65) Factorial and Gamma Function
(10-21-2017 01:21 PM)Dieter Wrote: What can you say about this approximation's accuracy? It looks good for large arguments but less so for small x, e.g. x=1 results in 0,9995. If you omit the first constant –571/2488320 the average accuracy actually seems to increase.
As already mentioned, the error is larger for small arguments and smaller for large ones. With a little bit of tweaking the coefficients this can be changed to a more evenly distributed error. And finally there is the shift-and-divide method: the approximation is only used for sufficiently large x, say x>6. For smaller x, e.g. 4.25, the approximation is calculated for 6.25 and finally the result divided by (5.25*6.25).
Here is a quick and dirty version of this idea, with modified coefficients:
Code:
LBL A
If evaluated exactly (!) the largest error should be about 1...2 units in the 9th significant digit. Due to the numeric limitations of a 10-digit calculator the error can and will be slightly higher here and there.
The result for x=4,25 now is 35,21161186. The true result is ...1185.
Dieter
10-22-2017, 02:40 AM
Post: #4
RE: (HP65) Factorial and Gamma Function
Dieter Thank You
Your quick modification is very good with more accurate approximation.
The HP67 app for Android is simply called HP67 in the Play Store. If you have an Android device this app is highly recommended and free, except that this particular app cannot do decimal factorials.
I do have the HP67 iOS app from CuVee Soft and noticed that this version can do this no problem.
RPN-65 SD for iOS from CuVee Soft, which emulates the HP-65, cannot do decimal factorials.
Gamo
10-22-2017, 08:49 AM
Post: #5
RE: (HP65) Factorial and Gamma Function
Hello Gamo,
which formula you use for your little program?
10-22-2017, 05:28 PM (This post was last modified: 10-22-2017 05:28 PM by Dieter.)
Post: #6
RE: (HP65) Factorial and Gamma Function
(10-22-2017 02:40 AM)Gamo Wrote: Your quick modification is very good with more accurate approximation.
Here is an improved version. It uses a different technique to prevent overflow during the calculation of x^x · e^(-x). In your program and my first version this is done with two consecutive multiplications of x^(x/2), while now this term has been rearranged to (x/e)^x:
Code:
LBL A
The program also no longer requires R5 and R6, most of the calculation is done on the stack.
Dieter
10-24-2017, 06:34 PM
Post: #7
RE: (HP65) Factorial and Gamma Function
Since this has not been answered yet: it's Stirling's approximation. The program uses the first terms of the series given in the section "Speed of convergence and error estimates". In my modified version the n² and n³ coefficients have been replaced by optimized values.
BTW I just realize that the linked Wikipedia article has some nice other approximations that may be worth a try, e.g. the one by Nemes (2007).
Dieter
10-25-2017, 12:26 AM
Post: #8
RE: (HP65) Factorial and Gamma Function
Hello peacecalc
It is using Stirling's approximation. This formula can fit into a limited 99-step programmable calculator.
Gamo
10-25-2017, 04:06 PM
Post: #9
RE: (HP65) Factorial and Gamma Function
Hello Dieter, hello Gamo,
thank you for your answers. Twenty-five years ago I wrote a Turbo Pascal program for the gamma function with real arguments. I remember that I also used the Stirling approximation for large arguments (x>10), as an example of coprocessor programming. But for smaller arguments I used the method described above (division by integer values). For negative arguments I used the formula:
\[ \Gamma(x) =\frac{\pi}{\sin(\pi x)\cdot\Gamma(1-x)}\] for example:
\[ \Gamma(-3.6) =\frac{\pi}{\sin(\pi (-3.6))\cdot\Gamma(4.6)}\].
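For comparison, the same overall strategy (Stirling series, shift-and-divide for small arguments, reflection formula for negative ones) can be sketched in Python rather than HP-67 keystrokes. The cutoffs and the number of series terms below are my own choices, not the coefficients from the programs above:

```python
import math

def stirling_gamma(x):
    """Gamma(x) via the Stirling series, with shift-and-divide for
    small positive arguments and the reflection formula for negative
    ones (a sketch of the approach discussed in this thread)."""
    if x < 0.5:
        # reflection: Gamma(x) = pi / (sin(pi x) * Gamma(1 - x))
        return math.pi / (math.sin(math.pi * x) * stirling_gamma(1.0 - x))
    shift = 1.0
    while x < 8.0:
        # Gamma(x) = Gamma(x + n) / (x (x+1) ... (x+n-1))
        shift *= x
        x += 1.0
    # ln Gamma(x) ~ (x - 1/2) ln x - x + ln(2 pi)/2
    #               + 1/(12 x) - 1/(360 x^3) + 1/(1260 x^5)
    lg = ((x - 0.5) * math.log(x) - x + 0.5 * math.log(2 * math.pi)
          + 1 / (12 * x) - 1 / (360 * x ** 3) + 1 / (1260 * x ** 5))
    return math.exp(lg) / shift

print(stirling_gamma(5.25))   # 4.25! ~ 35.2116
```

The value for 4.25! matches the result quoted earlier in the thread to about nine digits.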
10-26-2017, 06:59 AM (This post was last modified: 10-26-2017 05:42 PM by Dieter.)
Post: #10
RE: (HP65) Factorial and Gamma Function
(10-25-2017 04:06 PM)peacecalc Wrote: thank you for your answers. Twenty-five years ago I wrote a "turbo-pascal" program for the gamma-fct with real arguments.
Ah, yes, Turbo Pascal – I loved it.
(10-25-2017 04:06 PM)peacecalc Wrote: I remember this, I also used for large arguments the stirling approx (x>10) as a example for coprozesser programming. But for smaller arguments I used the method described above (divsion by integer values). For negative number I used the formula: (...)
Great. Here is an HP67/97 version that applies the same formula, modified for x! instead of Gamma. Also the sin(pi*x) part is calculated in a special way to avoid roundoff errors for multiples of pi, especially if x is large.
Edit: code has been replaced with a slightly improved version
Code:
LBL e
Initialize with f [e].
–3,6 [E] => –0,888685714
–4,6 [E] => 0,246857143
Edit:
If you don't mind one more second execution time, here is a version with the constants directly in the code. Except R0 no other data registers are used, and an initialisation routine is not required either.
Code:
LBL E
Dieter
10-26-2017, 07:11 AM
Post: #11
RE: (HP65) Factorial and Gamma Function
(10-26-2017 06:59 AM)Dieter Wrote:(10-25-2017 04:06 PM)peacecalc Wrote: thank you for your answers. Twenty-five years ago I wrote a "turbo-pascal" program for the gamma-fct with real arguments.
Yep! What about BCD math, for instance?
Greetings,
Massimo
-+×÷ ↔ left is right and right is wrong
10-26-2017, 03:41 PM
Post: #12
RE: (HP65) Factorial and Gamma Function
Hello friends,
an interesting remark: the coprocessor worked stack-oriented, and for calculating the sum with the Bernoulli numbers I used the Horner scheme.
10-26-2017, 05:34 PM
Post: #13
RE: (HP65) Factorial and Gamma Function
AFAIK this was only available in version 3.0. I preferred the later versions with a decent IDE, especially from 4.0 to 6.0.
But BCD math indeed is a great plus. I wish it was available in more classic programming languages. BTW, what about the HP85/86's BASIC in this regard?
Dieter
10-26-2017, 08:11 PM
Post: #14
RE: (HP65) Factorial and Gamma Function
(10-26-2017 05:34 PM)Dieter Wrote:
Oh well, I had COMP[UTATIONAL]-3 type in COBOL! :)
Greetings,
Massimo
-+×÷ ↔ left is right and right is wrong
10-27-2017, 08:06 AM
Post: #15
RE: (HP65) Factorial and Gamma Function
OT for the Pascal lovers (hi!): so do you also love HPPL?
I mean, the syntax is pretty similar.
Wikis are great, Contribute :)
10-27-2017, 01:57 PM
Post: #16
RE: (HP65) Factorial and Gamma Function
Here is the formula for the Stirling series.
Gamo
10-27-2017, 02:26 PM
Post: #17
RE: (HP65) Factorial and Gamma Function
(10-27-2017 08:06 AM)pier4r Wrote: OT for the Pascal lovers (hi!): so do you love also the HPPL?
Yes, it is similar and yes, I love it! I just wish they'd add a few things like enumeration and user-defined records. Pointers I could live without.
Tom L
People may say I'm inept but I consider myself to be totally ept.
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say:This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.'Google Translate says this means something ...
@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who knows how German is used in talking about quantum mechanics.
Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others.They...
@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;)
I think u can get a rough estimate, COVFEFE is 7 characters, probability of a 7-character length string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$, so I guess you would have to type roughly $26^7 \approx 8$ billion characters to start getting a good chance that COVFEFE appears.
@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$
@BalarkaSen sorry if you were in our discord you would know
@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.
@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication.
@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist.
Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union.
since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap)
I think I liked Madvillainy because it had nonstandard rhyming styles and Madlib's composition
Why is the graviton spin 2, beyond hand-waving? The sense is: you do the gravitational-waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
The following two facts concerning totally disconnected spaces should (separately) help you demonstrate that your space is totally disconnected.
Fact 1: Every T$_1$-space with a basis consisting of clopen sets ( i.e., every zero-dimensional space) is totally disconnected.
proof. Suppose that $A \subseteq X$ contains at least two points, and let $x , y \in A$ be distinct. Since $X$ is T$_1$ there is an open set containing $x$ but not $y$, and since the clopen sets form a basis it contains a clopen $U \subseteq X$ with $x \in U$ and $y \notin U$. But then $U$ and $X \setminus U$ witness that $A$ is not a connected subset of $X$. $\quad\Box$
Fact 2: Every product of (nonempty) totally disconnected spaces is totally disconnected.
proof. Suppose that $X_i$ is totally disconnected for all $i \in I$, and let $A \subseteq \prod_{i \in I} X_i$ contain at least two points. Then there must be a $j \in I$ such that $A_j = \{ x_j : x = ( x_i )_{i \in I} \in A \}$ contains at least two points. As $X_j$ is totally disconnected, there are open $U_j , V_j \subseteq X_j$ such that $U_j \cap A_j \neq \emptyset \neq V_j \cap A_j$ and $U_j \cap V_j \cap A_j = \emptyset$ and $A_j \subseteq U_j \cup V_j$. Let $$U = {\textstyle \prod_{i \neq j}} X_i \times U_j; \quad V = {\textstyle \prod_{i \neq j}} X_i \times V_j.$$Then $U , V$ are open subsets of $\prod_{i \in I} X_i$, $U \cap A \neq \emptyset \neq V \cap A$, $U \cap V \cap A = \emptyset$ and $A \subseteq U \cup V$. Thus $A$ is not a connected subset of $\prod_{i \in I} X_i$. $\quad\Box$
Either of these should be useful (but especially the second) because your space appears to be a subspace of $\{ 1 , \ldots , k \}^{\mathbb{N}}$ taking $\{ 1 , \ldots , k \}$ to be discrete, and then taking the product topology. (Also, total disconnectedness is a hereditary property of topological spaces.) |
Low-energy limit of SMEFT applied to tau to pi pi nu_tau decays
Pre-published on: 2019 August 08
Published on: —
Abstract
We perform an effective field theory analysis of the $\tau^-\to \pi^- \pi^0 \nu_\tau$ decays, that includes the most general interactions between Standard Model fields up to dimension six, assuming left-handed neutrinos. This approach corresponds to the low-energy limit of the SMEFT, which is the EFT of the SM in absence of New Physics up to few TeV. We constrain as much as possible the necessary Standard Model hadronic input using chiral symmetry, dispersion relations, data and asymptotic QCD properties. As a result, we set precise (competitive with low-energy and LHC measurements) bounds on (non-standard) charged current tensor interactions, finding a very small preference for their presence, according to Belle data. Belle-II near future measurements can thus be very useful in either confirming or further restricting new physics tensor current contributions to these decays. |
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say:This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.'Google Translate says this means something ...
@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who know how German is used in talking about quantum mechanics.
Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others.They...
@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;)
I think u can get a rough estimate, COVFEFE is 7 characters, probability of a 7-character length string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$ so I guess you would have to type approx a billion characters to start getting a good chance that COVFEFE appears.
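A scaled-down sanity check of this back-of-the-envelope reasoning (the helper name, the 4-letter alphabet, and the 3-letter pattern are my choices): for a pattern with no self-overlap, like "abc" here or COVFEFE itself, the expected number of keystrokes until it first appears is exactly alphabet_size**length.

```python
import random

# Simulate random typing and measure the average wait until `pattern` appears.
# For "abc" over a 4-letter alphabet the expectation is 4**3 = 64, matching
# the (1/alphabet_size)**length reasoning in the message above.
def mean_wait(pattern, alphabet, trials=10_000, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        tail, n = "", 0
        while not tail.endswith(pattern):
            # keep only the last len(pattern) keystrokes
            tail = (tail + rng.choice(alphabet))[-len(pattern):]
            n += 1
        total += n
    return total / trials
```

For patterns that do overlap themselves (e.g. "aaa") the expected wait is longer than this naive estimate, which is why the hedging in the message ("rough estimate") is appropriate.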
@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$
@BalarkaSen sorry if you were in our discord you would know
@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.
@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication.
@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist.
Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and Red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union.
since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap)
I think I liked Madvillany because it had nonstandard rhyming styles and Madlib's composition
Why is the graviton spin 2, beyond hand-waving? My sense is, you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
First, note that hybridisation is a mathematical concept which can be applied to interpret a bonding situation. It has no physical meaning whatsoever. Instead, it helps us to understand the direction of bonds better.
Second, note that the second period usually behaves quite differently from the remaining elements in a group. So in a way, ammonia behaves unnaturally or anomalously.
If you compare nitrogen with phosphorus, you will note that the former is much smaller than the latter, i.e. van der Waals radii $r(\ce{N})=155~\mathrm{pm};\ r(\ce{P})=180~\mathrm{pm}$ (ref. wikipedia), covalent radii $r(\ce{N})=71~\mathrm{pm};\ r(\ce{P})=107~\mathrm{pm}$ (ref. wikipedia). Therefore the orbitals in nitrogen are also smaller, and $\ce{s}$ and $\ce{p}$ orbitals will occupy more of the same space than in phosphorus. As a result the $\ce{N-H}$ bond distance will naturally also be shorter.
A lone pair is usually most stable in an orbital that has high $\ce{s}$ character. Bonds will most likely be formed with the higher lying $\ce{p}$ orbitals. The orientation of these towards each other is exactly $90^\circ$.
In ammonia this would lead to very close $\ce{H\cdots{}H}$ contacts, which are repulsive and therefore the hydrogen atoms are pushed away from each other. This is possible since in the second period the $\ce{s-p}$ splitting is still very small and the nitrogen $\ce{s}$ orbital is accessible for the hydrogen atoms. This will ultimately result in mixing $\ce{s}$ and $\ce{p}$ orbitals for nitrogen in the respective molecular orbitals. This phenomenon can be referred to as hybridisation - the linear combination of orbitals from the same atom. This term is therefore somewhat independent from its most common usage.
It is also very important to know, that the molecular wavefunction of a molecule has to reflect its overall symmetry. In this case it is $C_{3v}$, which means there is a threefold rotational axis and three vertical mirror planes (the axis is element of these planes). This gives also rise to degenerate orbitals. A canonical orbital picture has to reflect this property (BP86/cc-pVDZ; valence orbitals are ordered with increasing energy from left to right).
Note that the lowest lying valence molecular orbital is formed only from $\ce{s}$ orbitals (there is one additional $\ce{1s^2-N}$ core orbital). Now Natural Bond Orbital (NBO) theory can be used to transform these delocalised molecular orbitals into a more common and familiar bonding picture, making use of atomic hybrid orbitals. This method is called localising orbitals, but it comes at the expense of losing the energy eigenvalues that can be assigned to canonical orbitals (NBO@BP86/cc-pVDZ; valence NBO cannot be ordered by energy levels). In this theory you will find three equivalent $\ce{N-H}$ bonds, which are composed of $32\%~\ce{1s-H}$ and $68\%~\ce{s^{$0.87$}p^3-N}\approx\ce{sp^3-N}$ orbitals. Note that the lone pair orbital at nitrogen has a slightly higher $\ce{s}$ orbital contribution, i.e. $\ce{s^{1.42}p^3-N}\approx\ce{sp^3-N}$.
So the thermodynamically most favoured angle is found to be $107^\circ$ due to a compromise between optimal orbital overlap and least internuclear repulsion.
The canonical bonding picture in phosphine is very similar to ammonia, only the orbitals are larger. Even in this case it would be wrong to assume, that there is no hybridisation present at all. However, the biggest contribution to the molecular orbitals stems from the $\ce{p}$ orbitals at phosphorus.
Applying the localisation scheme, one ends up with a different bonding picture. There are three equivalent $\ce{P-H}$ bonds that are composed of $48\%~\ce{1s-H}$ and $52\%~\ce{s^{$0.5$}p^3-P}$ orbitals. The lone pair at phosphorus is composed of $57\%~\ce{s} + 43\%~\ce{p}$ orbitals.
One can also see the difference between the molecules in their inversion barriers: while for ammonia the inversion is readily accessible at room temperature, $\Delta E \approx 6~\mathrm{kcal/mol}$, it is very slow for phosphine, $\Delta E \approx 40~\mathrm{kcal/mol}$.
This is mostly due to the fact that the nitrogen-hydrogen bonds already have a significant $\ce{s}$ orbital contribution, which can easily be increased to form the planar molecule with formally $\ce{sp^2}$ hybrids.
Answer
The solution set of the equation is $$\{90^\circ, 210^\circ,330^\circ\}$$
Work Step by Step
$$\sin3\theta=-1$$ over the interval $[0^\circ,360^\circ)$

1) Find the corresponding interval for $3\theta$.
The interval for $\theta$ is $[0^\circ,360^\circ)$, which can also be written as the inequality $$0^\circ\le\theta\lt360^\circ$$ Therefore, for $3\theta$, the inequality would be $$0^\circ\le3\theta\lt1080^\circ$$ Thus, the corresponding interval for $3\theta$ is $[0^\circ,1080^\circ)$.

2) Now we examine the equation $$\sin3\theta=-1$$ Over the interval $[0^\circ,1080^\circ)$, there are 3 values whose sine equals $-1$, which are $\{270^\circ, 630^\circ,990^\circ\}$. Therefore, $$3\theta=\{270^\circ, 630^\circ,990^\circ\}$$ It follows that $$\theta=\{90^\circ, 210^\circ,330^\circ\}$$ This is the solution set of the equation.
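A quick numerical spot check of the solution set (a sketch using Python's math module):

```python
import math

# Each theta in the solution set should satisfy sin(3*theta) = -1,
# and 3*theta should lie inside the corresponding interval [0, 1080) degrees.
solutions_deg = [90, 210, 330]
values = [math.sin(math.radians(3 * t)) for t in solutions_deg]
```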
HI!We were given a lot of description this time :D and here they are:
Alice and Bob went a long way in crypto. They designed a super secure crypto system to encrypt their messages. We managed to steal the source and some other information. Find the FLAG! We also got this: Download / Mirror File
In the first file we had an example of the algorithm and the cipher we needed to decrypt, and from the second file, which is an archive, we get 3 files: 2 keys and the encryption script. But the script was obfuscated and the names in it were not human-readable, so let's start with de-obfuscating it (we did it by hand) and here is a slightly more readable script:
http://www.codesend.com/view/addce89369568510c33ff3154bf6dd89/
Basically it was reading four keys, and processing an asymmetric encryption on its first argument according to those keys.
The argument string is converted into a base256 number by those lines:
a, bb = argv[1], 0
for ccc in a:
    bb = (bb*256) + ord(ccc)

So what the encryption is doing mathematically is something like this:
$$T \equiv C^{A} \pmod{B}\\
P \equiv C^{S} \pmod{B}\\
Q \equiv M \cdot T^{S} \pmod{B}$$
where A,B,C and D are values in the keyfiles respectively and S is the random seed generated from D.
We know the values of B and C from the zip archive, and P and Q from the message text that contains an example plus the cipher; we need to find a way to get the value of M with those guys. Let's examine the $T^{S}$: it is actually equal to $(C^{A})^{S}$, which is $P^{A}$ (we just swapped the exponents), and the last line became:
$$Q \equiv M \cdot P^{A}\pmod{B}$$
And besides those things we actually got one more very important piece of data: the EXAMPLE! The example's P value and the P value of the cipher we need to decrypt are equal, and we also know that A is equal for both of them, so we can get the value of $P^{A}$ from the example (as we know Q and M for it), and then just multiply both sides of our equation by the inverse of $P^{A}$ modulo B to get the decrypted cipher. Let's do it now!
There's our python code for getting the flag:
http://www.codesend.com/view/7d97576bb77a0cc1ca33a717b6c8ed4d/ |
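For readers without the links, here is a minimal sketch of the recovery step described above (the function name and the toy numbers are mine, not the original script's; `pow(x, -1, B)` needs Python 3.8+):

```python
# ElGamal-style unmasking: B is the public modulus, (Q_example, M_example)
# come from the known example, and Q_cipher is the value to strip the mask from.
def recover_message(B, Q_example, M_example, Q_cipher):
    # From the example, Q_ex = M_ex * P^A (mod B), so P^A = Q_ex * M_ex^-1
    mask = (Q_example * pow(M_example, -1, B)) % B
    # The cipher shares the same P and A, so divide out the same mask
    return (Q_cipher * pow(mask, -1, B)) % B
```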
I have a general question in partial differential equations.
Can we say that when an even function is expressed as a Fourier series, the Fourier cosine series is also the Fourier series?
My thinking is that a Fourier series has the form,
$$f(x) = \frac{a_0}{2}+\sum^{\infty}_{n=1} a_n \cos(nx) + \sum^\infty_{n=1}b_n\sin(nx)$$
where $$a_0 = \frac{1}{\pi}\int ^\pi _{-\pi}f(x)\,dx$$, $$a_n = \frac{1}{\pi}\int ^\pi _{-\pi}f(x)\cos(nx)\,dx$$, $$b_n = \frac{1}{\pi}\int ^\pi _{-\pi}f(x)\sin(nx)\,dx$$
where $\cos$ is an even function and $\sin$ is an odd function. Then if $f(x)$ is even, multiplying an even and an odd function together gives an odd function, whose integral over the symmetric interval $[-\pi,\pi]$ is $0$. This eliminates the $b_n$ terms, leaving just $a_0$ and $a_n$, which is the Fourier cosine series. Therefore, yes, this is true.
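A numerical sanity check of this claim (a sketch; the grid size and the test function $f(x)=x^2$ are my choices): for an even function every $b_n$ should vanish, while the $a_n$ match the known cosine series $x^2 = \pi^2/3 + \sum_n 4(-1)^n\cos(nx)/n^2$.

```python
import numpy as np

# Compute (a_n, b_n) over [-pi, pi] by the composite trapezoidal rule.
def fourier_coeffs(f, n, grid=20001):
    x = np.linspace(-np.pi, np.pi, grid)

    def trap(y):
        # composite trapezoidal rule on the nonuniform-safe grid
        return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

    a_n = trap(f(x) * np.cos(n * x)) / np.pi
    b_n = trap(f(x) * np.sin(n * x)) / np.pi
    return a_n, b_n
```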
(Update. Added a self-contained proof that under ZF+DC+BP there is no norm.)
Martín-Blas Pérez Pinilla is on the right track. You can't even put a norm on $C(\mathbb{R})$ without using the axiom of choice in an essential way. Dependent choice is not enough.
Claim. It is consistent with ZF+DC that there does not exist any norm on $C(\mathbb{R})$.
Recall that a subset $E$ of a topological space is said to have the Baire property if it can be written as a symmetric difference $E = U \triangle M$ where $U$ is open and $M$ is meager (a countable union of nowhere dense sets).
A celebrated theorem of Shelah says that consistent with ZF+DC is the statement BP: "Every subset of $\mathbb{R}$ has the Baire property." From BP it follows that in fact every subset of any Polish space has the Baire property.
Let $\tau$ be the usual topology on $C(\mathbb{R})$ (uniform convergence on compact sets). It is induced by a translation-invariant metric:
$$d(f,g) := \sum_{n=1}^\infty 2^{-n} \min\left(1, \sup_{[-n,n]} |f-g|\right) $$
The metric $d$ is complete (this comes from the fact that a uniform limit of continuous functions is continuous). And it's not hard to see that $\tau$ is separable (the polynomials with rational coefficients are $\tau$-dense, by the Weierstrass approximation theorem). So $(C(\mathbb{R}),\tau)$ is a Polish space.
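A finitely truncated version of $d$ can even be evaluated numerically (a sketch; truncating after `terms` summands changes the value by at most $2^{-\mathrm{terms}}$, and the sup over $[-n,n]$ is approximated on a finite grid):

```python
# Truncated version of d(f, g) = sum_n 2^-n min(1, sup_{[-n,n]} |f - g|).
def d(f, g, terms=20, grid=1001):
    total = 0.0
    for n in range(1, terms + 1):
        xs = [-n + 2 * n * i / (grid - 1) for i in range(grid)]
        sup = max(abs(f(x) - g(x)) for x in xs)
        total += 2.0 ** -n * min(1.0, sup)
    return total
```

Note how the cap at 1 in each summand makes $d$ bounded even though $C(\mathbb{R})$ has no bounded neighborhoods in any norm, which is exactly the tension the proof exploits.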
Suppose now that $\|\cdot\|$ is a norm on $C(\mathbb{R})$. Let $B$ be the closed unit $\|\cdot\|$-ball. We will show that $B$ does not have the Baire property with respect to $\tau$. Specifically, let $U$ be any $\tau$-open set; we will show that $B \triangle U$ is $\tau$-nonmeager.
Suppose first that $U = \emptyset$ so that $B \triangle U = B$. Since $\bigcup_{n=1}^\infty nB = C(\mathbb{R})$, by the Baire category theorem $B$ is $\tau$-nonmeager.
Now suppose that $U$ is nonempty. We will show $U \setminus B$ is $\tau$-nonmeager. Let us begin by showing $U \setminus B$ is nonempty. Let $f \in U$ and let $g \in C(\mathbb{R})$ be your favorite nonzero continuous function which is supported in $[0,1]$. Let $g_n(x) = g(x-n)$ be translates of $g$. For each $n$, let $a_n$ be a real number sufficiently large that $\|f + a_n g_n\| > 1$, so that $f + a_n g_n \notin B$. Then $a_n g_n \to 0$ uniformly on compact sets (i.e. in the $\tau$ topology), so for some $N$ we have $h := f + a_N g_N \in U$. Thus $h \in U \setminus B$. (To say this another way, every nonempty $\tau$-open set is unbounded, but $B$ is bounded, so $U$ cannot be a subset of $B$.)
Now for any $u \in C(\mathbb{R})$, we have $h + \frac{1}{n} u \to h$ in both the $\tau$ and $\|\cdot\|$ topologies. So for sufficiently large $n$, we have $h + \frac{1}{n} u \in U$ and $h + \frac{1}{n} u \in B^c$ (since $B$ is $\|\cdot\|$-closed). That is, $u \in n((U \setminus B)-h)$. Since $u$ was arbitrary we have shown $\bigcup_{n=1}^\infty n((U \setminus B)-h) = C(\mathbb{R})$. By the Baire category theorem, $U \setminus B$ is $\tau$-nonmeager, hence so is $B \triangle U$.
So we have shown that if $C(\mathbb{R})$ has a norm, then it has a set that lacks the Baire property with respect to $\tau$. Under ZF+DC+BP there is no such set and hence no norm.
(Credit where credit is due: This proof is loosely based on the idea of the proof of the Garnir-Wright closed graph theorem from Theorem 27.45 of Eric Schechter's
Handbook of Analysis and its Foundations.)
Incidentally, the only property of the vector space $C(\mathbb{R})$ we used is that it admits a Polish topology in which every nonempty open set is unbounded. So the same argument would apply to other vector spaces with this property, such as $\mathbb{R}^{\mathbb{N}}$, $C^\infty(\mathbb{R}^d)$, etc. |
A) The current through NP is 0.5 A
B) The value of \[R_{1}=20\,\Omega\]
C) The value of \[R=14\,\Omega\]
D) The potential difference across R = 49 V
Correct Answer:
C
Potential difference across MP = p.d. across NO = p.d. across NP (see figure). \[\therefore I_{NP}\times 10=20\times 1\] or \[I_{NP}=2\,A\] Across MP: \[0.5R_{1}=20\] or \[R_{1}=40\,\Omega\] Total current \[=2+0.5+1.0=3.5\,A\] \[3.5=\frac{69}{R+40/7}\] yields \[R=14\,\Omega\] Hence, the correct choice is [c].
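The arithmetic can be spot-checked directly (values taken from the solution text; the variable names are mine):

```python
# Branch currents through NP (10 ohm), MP (R1 = 40 ohm), NO (20 ohm) share the
# same potential difference, so the branches are in parallel with resistance
# 40/7 ohm; R then follows from EMF = total_current * (R + parallel).
emf = 69.0
total_current = 2.0 + 0.5 + 1.0
parallel = 1.0 / (1 / 10 + 1 / 40 + 1 / 20)
R = emf / total_current - parallel
```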
I happen to have revised our calculus syllabus for first year biology majors about one year ago (in a French university, for that matter). I benefited a lot from my wife's experience as a math-friendly biologist.
The main point of the course is to get students able to deal with
quantitative models. For example, my wife studied the movement of cells under various circumstances.
A common model postulates that the average distance $d$ between two
positions of a cell at times $t_0$ and $t_0+T$ is given by
$$d = \alpha T^\beta$$ where $\alpha>0$ is a speed parameter and
$\beta\in[\frac12,1]$ is a parameter that measures how the movement
fits between a Brownian motion ($\beta=\frac12$)
and a purely ballistic motion ($\beta=1$).
This simple model is a great example to show how calculus can be relevant to biology.
My first point might be specific to recent French students: first-year students are often not even proficient enough with basic algebraic manipulations to be able to do anything relevant with such a model. For example, even asking to compute how $d$ changes when $T$ is multiplied by a constant requires knowing how to deal with exponents. In fact, we even had serious issues with the mere use of percentages.
One of the main point of our new calculus course is to be able to
estimate uncertainties: in particular, given that $T=T_0\pm \delta T$, $\alpha=\alpha_0\pm\delta\alpha$ and $\beta=\beta_0\pm\delta\beta$, we ask them to estimate $d$ up to order one (i.e. using first-order Taylor series). This already involves derivatives of multivariable functions, and is an important computation when you want to draw conclusions from experiments.
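That first-order estimate can be sketched in code (the function name and interface are mine; this is the usual linearised propagation $|\delta d| \lesssim |\partial d/\partial\alpha|\,\delta\alpha + |\partial d/\partial\beta|\,\delta\beta + |\partial d/\partial T|\,\delta T$):

```python
import math

# First-order uncertainty propagation for d = alpha * T**beta.
def d_uncertainty(alpha, dalpha, beta, dbeta, T, dT):
    d = alpha * T ** beta
    dd = (T ** beta * dalpha                         # |dd/dalpha| * dalpha
          + abs(d * math.log(T)) * dbeta             # |dd/dbeta|  * dbeta
          + abs(alpha * beta * T ** (beta - 1)) * dT)  # |dd/dT|   * dT
    return d, dd
```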
Another important point of the course is the
use of logarithms and exponentials, in particular to interpret log or log-log graphs. For example, in the above model, it takes a (very) little habit to see that taking logs is a good thing to do: $\log d = \beta\log T+\log \alpha$ so that plotting your data in log-log chart should give you a line (if the models accurately represent your experiments).
This then interacts with
statistics: one can find the linear regression in log-log charts to find estimates for $\alpha$ and $\beta$. But then one really gets an estimate of $\beta$ and... $\log\alpha$, so one should have a sense of how badly this uncertainty propagates to $\alpha$ ( one variable first-order Taylor series: easy peasy).
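The log-log regression itself is essentially a one-liner (a sketch; `fit_power_law` is my name for it):

```python
import numpy as np

# Recover alpha and beta from (T, d) data by linear regression in log-log
# coordinates, since log d = beta * log T + log alpha.
def fit_power_law(T, d):
    beta, log_alpha = np.polyfit(np.log(T), np.log(d), 1)
    return np.exp(log_alpha), beta
```

Note that the fit estimates $\log\alpha$, not $\alpha$ itself, which is exactly why the uncertainty-propagation point above matters.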
The other main goal of the course is to get them able to deal with some (ordinary) differential equations. The motivating example I chose was offered to me by the chemist of our syllabus meeting.
A common model for the kinetics of a chemical reaction $$A + B \to C$$ is the second-order model: one assumes that the speed of the reaction is proportional to the product of the concentrations of the species A and B. This leads to a not-so-easy differential equation of the form $$ y'(t) = (a-y(t))(b-y(t)).$$ This is a
first-order ODE with separable variables. One can solve it explicitly (a luxury!) by dividing by the right-hand side, integrating in $t$, doing a change of variable $u=y(t)$ on the left-hand side, resolving the resulting rational fraction into partial fractions, and remembering that log is an antiderivative of the inverse function (and how to adjust for the various constants that appeared in the process). Then, you need some algebraic manipulations to transform the resulting equation into the form $y(t) = \dots$. Unfortunately and of course, we are far from being able to properly cover all this material, but we try to get the students able to follow this road later on, with their chemistry teachers.
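Even without the closed form, the qualitative behaviour of this equation is easy to check numerically (a sketch using classical RK4; the helper and the parameter values are mine): starting from $y(0)=0$ with $a<b$, the solution increases monotonically toward the equilibrium $y=a$.

```python
# Classical fourth-order Runge-Kutta, returning only the final value.
def rk4_final(f, y0, t0, t1, steps):
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y
```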
In fact, I would love to be able to do more quantitative analysis of differential equations, but it is difficult to teach since it quickly goes beyond a few recipes. For example, I would like them to become able to tell at a glance the
variations of solutions to $$y'(t)=a\cdot y(t)-b \sqrt{y(t)}$$ (a model of population growth for colonies of small living entities organized in circles, where death occurs mostly on the edge - note how basic geometry makes an appearance here to explain the model) in terms of the initial value. Or to be able to realize that solutions to $$y'(t)=\sqrt{y(t)}$$ must be sub-exponential (and what that even means...). For this kind of goal, one must first aim at basic proficiency in calculus.
To sum up,
dealing with any quantitative model needs a fair bit of calculus, in order to have a sense of what the model says, to use it with actual data, to analyze experimental data, to interpret it, etc.
To finish with a controversial point, it seems to me that, at least in my environment, biologists tend to underestimate the usefulness of calculus (and statistics, and more generally mathematics) and that improving the basic understanding of mathematics among biologists-to-be can only be beneficial. |
The soft-photon theorem is the following statement due to Weinberg:
Consider an amplitude ${\cal M}$ involving some incoming and some outgoing particles. Now, consider the same amplitude with an additional soft-photon ($\omega_{\text{photon}} \to 0$) coupled to one of the particles. Call this amplitude ${\cal M}'$. The two amplitudes are related by $$ {\cal M}' = {\cal M} \frac{\eta q p \cdot \epsilon}{p \cdot p_\gamma - i \eta \varepsilon} $$ where $p$ is the momentum of the particle that the photon couples to, $\epsilon$ is the polarization of the photon and $p_\gamma$ is the momentum of the soft-photon. $\eta = 1$ for outgoing particles and $\eta = -1$ for incoming ones. Finally, $q$ is the charge of the particle.
The most striking thing about this theorem (to me) is the fact that the proportionality factor relating ${\cal M}$ and ${\cal M}'$ is independent of the type of particle that the photon couples to. It seems quite amazing to me that even though the coupling of photons to scalars, spinors, etc. takes such a different form, you still end up getting the same coupling above.
While I can show that this is indeed true for all the special cases of interest, my question is:
Is there a general proof (or understanding) that describes this universal coupling of soft-photons? |
It's been a long time since my last exam on QM, so now I'm struggling with some basic concept that clearly I didn't understand very well.
1) The Sch. Eq. for a free particle is ##-\frac {\hbar^2}{2m} \frac {\partial ^2 \psi}{\partial x^2} = E \psi## and the solutions are plane waves of the form ##\psi(x) = Ae^{ikx} + Be^{-ikx}##. These functions cannot be normalized, thus they do not represent a physical phenomenon, but if I superimpose all of them with an integral over ##k## I get the "true" solution (the wave packet). This implies that a free particle with definite energy does not exist (only superpositions of states with different energies can exist). This bugs me a lot. For example, think about an atom hit by ionizing radiation: at some point an electron will be kicked out of the shell and now, if I wait some time, I have a free electron (so a free particle) and what about its energy? It should be defined by the law of conservation of energy... 2) I'm reading some lecture notes about scattering. Why does everyone take the incoming particle to be described by the state ##\psi_i = e^{i \mathbf k \cdot \mathbf r}## if it is not normalizable? It seems to me they all assume the particle to be inside a box of length ##L## and forget about the normalization constant. But why?
For a given scalar field $\phi$, the stress energy tensor is \begin{equation} T_{\mu\nu}=\partial_{\mu}\phi\partial_{\nu}\phi - \frac{1}{2} g_{\mu\nu}(\partial_{\alpha}\phi\partial^{\alpha}\phi). \end{equation} How can I get a second order differential equation for $\phi$, assuming that the metric $g_{\mu\nu}$ satisfies Einstein's equation? I tried to use the identity $\nabla^{\mu}G_{\mu\nu}=0$, getting the following equation \begin{equation} 0=\nabla^{\mu}T_{\mu\nu}=g^{\mu\rho}\nabla_{\rho}T_{\mu\nu}=g^{\mu\rho}(\partial_{\rho}T_{\mu\nu}-\Gamma^{\alpha}_{\rho\mu}T_{\alpha\nu}-\Gamma^{\alpha}_{\nu\rho}T_{\mu\alpha}). \end{equation} But it turns out that this is not a second-order differential equation. How can I get such an equation?
Note that what you did does not provide a second order differential equation for $\phi$, since $T_{\mu\nu}$ is already second order in the derivatives.
The stress-energy tensor of a matter sector is given by: $$ T_{\mu\nu}=\frac{1}{\sqrt{g}}\frac{\delta S_m}{\delta g^{\mu\nu}} $$
so we need to find an action $S_m=\int d^4x\,\sqrt{g}\,\mathcal{L}[\phi,\partial_\mu\phi]$ such that $$ T_{\mu\nu}=\partial_{\mu}\phi\partial_{\nu}\phi - \frac{1}{2} g_{\mu\nu}(\partial_{\alpha}\phi\partial^{\alpha}\phi) $$ and since $\phi$ is scalar, and $T_{\mu\nu}$ is second order in the derivatives, we can easily conclude that: $$ \mathcal{L}=g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi $$
Now, you calculate the equations of motion for $\phi$:
$$ \frac{\delta S_m}{\delta \phi}=0 $$ that is: $$ \partial_{\mu}\left(\sqrt{g}\,\frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)}\right)=0 $$ which is a second-order differential equation for $\phi$, for a given metric $g_{\mu\nu}$.
HINT: It'll be easiest to just write the stress-energy tensor in terms of the covariant derivative (note that these are equivalent for scalar fields):$$T_{\mu \nu} = \nabla_\mu \phi \nabla_\nu \phi - \frac{1}{2} g_{\mu \nu} \nabla^\rho \phi \nabla_\rho \phi$$Now take the divergence of $T_{\mu \nu}$
without writing down any Christoffel symbols; leave all your derivatives in terms of $\nabla_\mu$, rather than $\partial_\mu$. Use the facts that $\nabla_\mu$ obeys the product rule, and that by definition $\nabla_\mu g_{\rho \sigma} = 0$.
BTW, the assumption that $g_{\mu \nu}$ satisfies Einstein's equations is completely unnecessary for this derivation. |
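For completeness, the hinted computation goes through in one line (a sketch, using only the product rule, $\nabla_\mu g_{\rho\sigma}=0$, and the symmetry $\nabla_\mu\nabla_\nu\phi=\nabla_\nu\nabla_\mu\phi$, which holds for scalars):
$$\nabla^\mu T_{\mu\nu} = (\Box\phi)\,\nabla_\nu\phi + \nabla^\mu\phi\,\nabla_\mu\nabla_\nu\phi - \nabla^\rho\phi\,\nabla_\nu\nabla_\rho\phi = (\Box\phi)\,\nabla_\nu\phi,$$
so $\nabla^\mu T_{\mu\nu}=0$ forces the second-order equation $\Box\phi = \frac{1}{\sqrt{g}}\partial_\mu(\sqrt{g}\,g^{\mu\nu}\partial_\nu\phi)=0$ wherever $\nabla_\nu\phi\neq0$.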
We have $X \sim \mathrm{Unif}[0,2]$ and $Y \sim \mathrm{Unif}[3,4]$. The random variables $X,Y$ are independent. We define a random variable $Z = X + Y$ and want to find the PDF of $Z$ using convolution. Here is my work so far:
The definition of convolution is:
$f_Z(z) = \int_{-\infty}^{\infty}f_X(x)f_Y(z-x)\mathrm{d} x$
We know the PDF's of $X$ and $Y$ because they are just uniform distributions. The hard part for me is finding the limits of integration. We have to solve for the constraints.
The integrand is nonzero when $3 \leq z-x \leq 4$ and when $0 \leq x \leq 2$. Together these constraints imply that $\max \{0, z-4\} \leq x \leq \min \{2, z-3 \}$.
These constraints imply that there are three cases:
Case 1 - $3 \leq z \leq 4 \implies f_Z(z) = \int_0^{z-3}$ Case 2 - $4 \leq z \leq 5 \implies f_Z(z) = \int_{z-4}^{z-3}$ Case 3 - $5 \leq z \leq 6 \implies f_Z(z) = \int_{z-4}^{2}$
My question is how to find the bounds of $Z$ i.e. what are the possible values of $Z$? Does $Z$ run from $0 \to 6$ since it is the sum of $X+Y$ and this sum will have some value for every value $\in [0,6]$? |
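A sketch that encodes the three cases (with $f_X = 1/2$ on $[0,2]$ and $f_Y = 1$ on $[3,4]$, each integral is just $1/2$ times the overlap length) and spot-checks the middle case by Monte Carlo; the function names are mine, and note the support this produces is $[3,6]$:

```python
import random

# Piecewise PDF of Z = X + Y implied by the three cases above.
def pdf_z(z):
    if 3.0 <= z <= 4.0:
        return (z - 3.0) / 2.0   # overlap [0, z-3] has length z - 3
    if 4.0 < z <= 5.0:
        return 0.5               # overlap [z-4, z-3] has length 1
    if 5.0 < z <= 6.0:
        return (6.0 - z) / 2.0   # overlap [z-4, 2] has length 6 - z
    return 0.0

# Monte Carlo estimate of P(4 <= Z <= 5), which should be near 1/2.
def p_middle(n=200_000, seed=0):
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if 4.0 <= rng.uniform(0, 2) + rng.uniform(3, 4) <= 5.0)
    return hits / n
```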
First off, ${\bf R}^2$ is not a field. Let me build up the idea you want. If $L/K$ is a field extension, then in particular $L$ is a vector space over $K$, and the degree of the extension $[L:K]$ is defined to be the dimension $\dim_KL$ of $L$ over $K$. So perhaps you want to ask
Question: What kinds of degree $2$ extensions of the reals are there?
Answer: The complex numbers are the only such extension.
This requires only the most basic field theory; if $K/{\bf R}$ is a degree two extension, then $K={\bf R}(a)$ for any $a\in K\setminus{\bf R}$, and such an $a$ has minimal polynomial of degree two over $\bf R$. The only irreducible quadratics over $\bf R$ are of the form $(x+s)^2+b$ with $b>0$, so ${\bf R}(a)\cong {\bf R}(\sqrt{-b})={\bf R}(i)={\bf C}$.
Note: the notation $F(a)$, where $F$ is some field, means "the smallest field containing $F$ and $a$," where $a$ lives in some field containing $F$.
There are many fields containing $\bf R$ other than just $\bf C$. Field theory gives ways to
construct field extensions of any field, in fact. Here is one way: if $F$ is a field that is not algebraically closed, then there is some polynomial $p(x)\in F[x]$ that does not have a root in $F$, and we can adjoin a root of the polynomial to $F$ via the quotient ring $F[x]/(p(x))$.
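As a toy illustration of the quotient construction (the helper names are mine): taking $F={\bf R}$ and $p(x)=x^2+1$, and representing cosets in $F[x]/(p(x))$ as pairs $(a,b)\sim a+bx$, the reduction rule $x^2=-1$ reproduces complex arithmetic.

```python
# Elements of R[x]/(x^2 + 1) as pairs (a, b) standing for a + b*x.
def add(p, q):
    return (p[0] + q[0], p[1] + q[1])

def mul(p, q):
    (a, b), (c, d) = p, q
    # (a + bx)(c + dx) = ac + (ad + bc)x + bd*x^2; reduce using x^2 = -1
    return (a * c - b * d, a * d + b * c)
```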
(However, it does make sense to adjoin
different roots and obtain distinct extensions, though the extensions are isomorphic as fields, from the perspective of some overlying field $L/F$. For instance, adjoining the real cube root of $2$ to $\bf Q$ and adjoining the complex cube root of $2$ to $\bf Q$ yield two distinct number fields, finite field extensions of $\bf Q$.)
Another way is to adjoin a
transcendental element. If $F$ is a field and we want $T$ to be a transcendental element over $F$, then $F(T)$ can be thought of as the field of all rational functions in the variable $T$ with coefficients from $F$. (In the case of positive characteristic, we should be quick to note that two rational functions, in fact two polynomials, can act as the same function even though they are different as abstract expressions.)
This makes sense: if there were two distinct rational functions $a(\cdot)/b(\cdot)$ and $c(\cdot)/d(\cdot)$ such that $a(T)/b(T)=c(T)/d(T)$ in $F(T)$, then $a(T)d(T)-b(T)c(T)=0$ would make $T$ algebraic, a contradiction. Thus $F(T)$ contains all rational functions in $T$ as distinct elements, and by definition of minimality this means $F(T)$ need contain no further elements.
There is an
analytically inspired way of creating fields that I will also note. If a field $F$ is also a metric space, then the metric space completion of the field $F$ will also be a field. This can be constructed abstractly as the ring of all sequences of elements of $F$ that are Cauchy with respect to the metric, modulo the maximal ideal comprised of null sequences (those that converge to $0$ in the metric).
Question: What kinds of ways are there of constructing new fields from old ones?
Answer: Adjoining algebraic elements, adjoining transcendental elements, completing with respect to a metric, taking certain types of closures.
An example of a closure is an
algebraic closure of a field. We say $\bar{F}$ is an algebraic closure of $F$ if it is (i) algebraically closed, (ii) contains $F$, and (iii) there is no field lying between $\bar{F}$ and $F$ that is also algebraically closed. In fact any two algebraic closures of a field $F$ will be isomorphic (a fact that requires some amount of choice). We can restrict ourselves to other closures too, such as maximal abelian extensions, maximal (totally/tamely) ramified extensions, etc.
There is more to say about
uniqueness of field extensions and its relation to $\bf C$ than in the first section above. In fact with the axiom of choice,
Question How many algebraically closed fields of cardinal size $\kappa\ge{\frak c}:=|{\bf R}|$ are there?
Answer: There is always only one, up to isomorphism.
I don't actually know how to prove such a fact. |
Erratum to: On optimal designs for censored data
Erratum to: Metrika, DOI 10.1007/s00184-014-0500-1
In the original publication, Theorem 3.5 was incorrectly published as:
Theorem 3.5
Let \(\fancyscript{X} = \{x_{1}, x_{2}, x_{3}\}\) be the design region. For \(i, j \in \{1, 2, 3\}, i \ne j\), let \(d_{ij}=Q(\beta _0+\beta _1x_i)\) and \(l \in \{1, 2, 3\} \backslash \{i, j \}\).
The correct theorem should read as:
Theorem 3.5
Let \(\fancyscript{X} = \{x_{1}, x_{2}, x_{3}\}\) be the design region. For \(i, j \in \{1, 2, 3\}, i \ne j\), let \(d_{ij}=Q(\beta _0+\beta _1x_i)\,Q(\beta _0+\beta _1x_j)\,(x_j-x_i)^2\) and \(l \in \{1, 2, 3\} \backslash \{i, j \}\).
Unfortunately, this error was not noticed during the subsequent stages of publication. The publisher regrets this mistake.
statsmodels.tsa.statespace.structural.UnobservedComponents

class statsmodels.tsa.statespace.structural.UnobservedComponents(endog, level=False, trend=False, seasonal=None, freq_seasonal=None, cycle=False, autoregressive=None, exog=None, irregular=False, stochastic_level=False, stochastic_trend=False, stochastic_seasonal=True, stochastic_freq_seasonal=None, stochastic_cycle=False, damped_cycle=False, cycle_period_bounds=None, mle_regression=True, use_exact_diffuse=False, **kwargs)
Univariate unobserved components time series model
These are also known as structural time series models, and decompose a (univariate) time series into trend, seasonal, cyclical, and irregular components.
Parameters

level : {bool, str}, optional
    Whether or not to include a level component. Default is False. Can also be a string specification of the level / trend component; see Notes for available model specification strings.
trend : bool, optional
    Whether or not to include a trend component. Default is False. If True, level must also be True.
seasonal : {int, None}, optional
    The period of the seasonal component, if any. Default is None.
freq_seasonal : {list[dict], None}, optional
    Whether (and how) to model seasonal component(s) with trigonometric functions. If specified, there is one dictionary for each frequency-domain seasonal component. Each dictionary must have the key, value pair for 'period' (an integer) and may have a key, value pair for 'harmonics' (an integer). If 'harmonics' is not specified in any of the dictionaries, it defaults to the floor of period/2.
cycle : bool, optional
    Whether or not to include a cycle component. Default is False.
autoregressive : {int, None}, optional
    The order of the autoregressive component. Default is None.
exog : {array_like, None}, optional
    Exogenous variables.
irregular : bool, optional
    Whether or not to include an irregular component. Default is False.
stochastic_level : bool, optional
    Whether or not any level component is stochastic. Default is False.
stochastic_trend : bool, optional
    Whether or not any trend component is stochastic. Default is False.
stochastic_seasonal : bool, optional
    Whether or not any seasonal component is stochastic. Default is True.
stochastic_freq_seasonal : list[bool], optional
    Whether or not each frequency-domain seasonal component is stochastic. Default is True for each component. The list should be of the same length as freq_seasonal.
stochastic_cycle : bool, optional
    Whether or not any cycle component is stochastic. Default is False.
damped_cycle : bool, optional
    Whether or not the cycle component is damped. Default is False.
cycle_period_bounds : tuple, optional
    A tuple with lower and upper allowed bounds for the period of the cycle. If not provided, the following default bounds are used: (1) if no date / time information is provided, the frequency is constrained to be between zero and \(\pi\), so the period is constrained to be in [0.5, infinity]. (2) If date / time information is provided, the default bounds allow the cyclical component to be between 1.5 and 12 years; depending on the frequency of the endogenous variable, this will imply different specific bounds.
use_exact_diffuse : bool, optional
    Whether or not to use exact diffuse initialization for non-stationary states. Default is False (in which case approximate diffuse initialization is used).
Notes
These models take the general form (see [R0058a7c6fc36-1], Chapter 3.2, for all details)

\[y_t = \mu_t + \gamma_t + c_t + \varepsilon_t\]
where \(y_t\) refers to the observation vector at time \(t\), \(\mu_t\) refers to the trend component, \(\gamma_t\) refers to the seasonal component, \(c_t\) refers to the cycle, and \(\varepsilon_t\) is the irregular. The modeling details of these components are given below.
Trend
The trend component is a dynamic extension of a regression model that includes an intercept and linear time-trend. It can be written:\[\begin{split}\mu_t = \mu_{t-1} + \beta_{t-1} + \eta_{t-1} \\ \beta_t = \beta_{t-1} + \zeta_{t-1}\end{split}\]
where the level is a generalization of the intercept term that can dynamically vary across time, and the trend is a generalization of the time-trend such that the slope can dynamically vary across time.
Here \(\eta_t \sim N(0, \sigma_\eta^2)\) and \(\zeta_t \sim N(0, \sigma_\zeta^2)\).
For both elements (level and trend), we can consider models in which:
The element is included vs excluded (if the trend is included, there must also be a level included).
The element is deterministic vs stochastic (i.e. whether or not the variance on the error term is confined to be zero or not)
The only additional parameters to be estimated via MLE are the variances of any included stochastic components.
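The level/trend recursion can be simulated directly. The following is a pure-Python sketch (the starting values are made up for illustration; this is not the statsmodels implementation): setting both noise variances to zero recovers a deterministic linear time-trend.

```python
import random

def simulate_trend(n, sigma_eta, sigma_zeta, mu0=0.0, beta0=0.5, seed=0):
    """Simulate mu_t = mu_{t-1} + beta_{t-1} + eta_{t-1},
                beta_t = beta_{t-1} + zeta_{t-1}."""
    rng = random.Random(seed)
    mu, beta = mu0, beta0
    path = []
    for _ in range(n):
        path.append(mu)
        mu = mu + beta + rng.gauss(0.0, sigma_eta)
        beta = beta + rng.gauss(0.0, sigma_zeta)
    return path

# With both variances zero the recursion collapses to the deterministic
# linear time-trend mu_t = mu_0 + beta_0 * t.
det = simulate_trend(5, 0.0, 0.0)
print(det)  # → [0.0, 0.5, 1.0, 1.5, 2.0]
```

With nonzero sigma_eta the level wanders (local level behavior), and with nonzero sigma_zeta the slope itself drifts (local linear trend behavior).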
The level/trend components can be specified using the boolean keyword arguments level, stochastic_level, trend, etc., or all at once as a string argument to level. The available model specifications (model name, full string syntax, abbreviated syntax, and model equations) are:

No trend ('irregular', 'ntrend'):
\[y_t = \varepsilon_t\]
Fixed intercept ('fixed intercept'):
\[y_t = \mu\]
Deterministic constant ('deterministic constant', 'dconstant'):
\[y_t = \mu + \varepsilon_t\]
Local level ('local level', 'llevel'):
\[\begin{split}y_t &= \mu_t + \varepsilon_t \\ \mu_t &= \mu_{t-1} + \eta_t\end{split}\]
Random walk ('random walk', 'rwalk'):
\[\begin{split}y_t &= \mu_t \\ \mu_t &= \mu_{t-1} + \eta_t\end{split}\]
Fixed slope ('fixed slope'):
\[\begin{split}y_t &= \mu_t \\ \mu_t &= \mu_{t-1} + \beta\end{split}\]
Deterministic trend ('deterministic trend', 'dtrend'):
\[\begin{split}y_t &= \mu_t + \varepsilon_t \\ \mu_t &= \mu_{t-1} + \beta\end{split}\]
Local linear deterministic trend ('local linear deterministic trend', 'lldtrend'):
\[\begin{split}y_t &= \mu_t + \varepsilon_t \\ \mu_t &= \mu_{t-1} + \beta + \eta_t\end{split}\]
Random walk with drift ('random walk with drift', 'rwdrift'):
\[\begin{split}y_t &= \mu_t \\ \mu_t &= \mu_{t-1} + \beta + \eta_t\end{split}\]
Local linear trend ('local linear trend', 'lltrend'):
\[\begin{split}y_t &= \mu_t + \varepsilon_t \\ \mu_t &= \mu_{t-1} + \beta_{t-1} + \eta_t \\ \beta_t &= \beta_{t-1} + \zeta_t\end{split}\]
Smooth trend ('smooth trend', 'strend'):
\[\begin{split}y_t &= \mu_t + \varepsilon_t \\ \mu_t &= \mu_{t-1} + \beta_{t-1} \\ \beta_t &= \beta_{t-1} + \zeta_t\end{split}\]
Random trend ('random trend', 'rtrend'):
\[\begin{split}y_t &= \mu_t \\ \mu_t &= \mu_{t-1} + \beta_{t-1} \\ \beta_t &= \beta_{t-1} + \zeta_t\end{split}\]
Following the fitting of the model, the unobserved level and trend component time series are available in the results class in the level and trend attributes, respectively.
Seasonal (Time-domain)
The seasonal component is modeled as:\[\begin{split}\gamma_t = - \sum_{j=1}^{s-1} \gamma_{t+1-j} + \omega_t \\ \omega_t \sim N(0, \sigma_\omega^2)\end{split}\]
The periodicity (number of seasons) is s, and the defining character is that (without the error term), the seasonal components sum to zero across one complete cycle. The inclusion of an error term allows the seasonal effects to vary over time (if this is not desired, \(\sigma_\omega^2\) can be set to zero using the stochastic_seasonal=False keyword argument).
This component results in one parameter to be selected via maximum likelihood: \(\sigma_\omega^2\), and one parameter to be chosen, the number of seasons s.
Following the fitting of the model, the unobserved seasonal component time series is available in the results class in the seasonal attribute.
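The zero-sum property can be checked directly. A pure-Python sketch (arbitrary starting values, error variance set to zero; not the statsmodels code):

```python
def seasonal_step(history, s):
    """gamma_t = -(sum of the previous s-1 seasonal values), noise-free."""
    return -sum(history[-(s - 1):])

s = 4
gamma = [1.0, -2.0, 0.5]          # any s-1 starting values
for _ in range(12):
    gamma.append(seasonal_step(gamma, s))

# Without the error term, the effects sum to zero over each full cycle.
sums = [sum(gamma[i:i + s]) for i in range(len(gamma) - s + 1)]
print(sums)  # every window of s consecutive values sums to 0.0
```

Adding a draw \(\omega_t \sim N(0, \sigma_\omega^2)\) at each step perturbs this identity, which is exactly what lets the seasonal pattern evolve over time.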
Frequency-domain Seasonal
Each frequency-domain seasonal component is modeled as:\[\begin{split}\gamma_t & = \sum_{j=1}^h \gamma_{j, t} \\ \gamma_{j, t+1} & = \gamma_{j, t}\cos(\lambda_j) + \gamma^{*}_{j, t}\sin(\lambda_j) + \omega_{j,t} \\ \gamma^{*}_{j, t+1} & = -\gamma_{j, t}\sin(\lambda_j) + \gamma^{*}_{j, t}\cos(\lambda_j) + \omega^{*}_{j, t}, \\ \omega_{j, t}, \omega^{*}_{j, t} & \sim N(0, \sigma_{\omega}^2) \\ \lambda_j & = \frac{2 \pi j}{s}\end{split}\]
where j ranges from 1 to h.
The periodicity (number of “seasons” in a “year”) is s and the number of harmonics is h. Note that h is configurable to be less than s/2, but s/2 harmonics is sufficient to fully model all seasonal variations of periodicity s. Like the time domain seasonal term (cf. Seasonal section, above), the inclusion of the error terms allows for the seasonal effects to vary over time. The argument stochastic_freq_seasonal can be used to set one or more of the seasonal components of this type to be non-random, meaning they will not vary over time.
This component results in one parameter to be fitted using maximum likelihood: \(\sigma_{\omega}^2\), and up to two parameters to be chosen, the number of seasons s and optionally the number of harmonics h, with \(1 \leq h \leq \lfloor s/2 \rfloor\).
After fitting the model, each unobserved seasonal component modeled in the frequency domain is available in the results class in the freq_seasonal attribute.
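The noise-free recursion is a rotation by the angle \(\lambda_j\), so each harmonic returns to its starting value after a full period. A pure-Python sketch (assumed s = 12 and j = 1 for illustration):

```python
import math

def rotate(gamma, gamma_star, lam):
    """One noise-free step of the frequency-domain seasonal recursion."""
    g = gamma * math.cos(lam) + gamma_star * math.sin(lam)
    g_star = -gamma * math.sin(lam) + gamma_star * math.cos(lam)
    return g, g_star

s, j = 12, 1
lam = 2 * math.pi * j / s
g, g_star = 1.0, 0.0
for _ in range(s):                 # one full period of s steps
    g, g_star = rotate(g, g_star, lam)

# After s steps the rotation has completed a full circle, so the
# state returns (up to floating-point error) to its starting value.
print(round(g, 9), round(g_star, 9))
```

Higher harmonics (j > 1) rotate faster and complete j full circles in the same s steps, which is how a sum of h harmonics can represent an arbitrary seasonal pattern of period s.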
Cycle
The cyclical component is intended to capture cyclical effects at time frames much longer than captured by the seasonal component. For example, in economics the cyclical term is often intended to capture the business cycle, and is then expected to have a period between "1.5 and 12 years" (see Durbin and Koopman).\[\begin{split}c_{t+1} & = \rho_c (\tilde c_t \cos \lambda_c + \tilde c_t^* \sin \lambda_c) + \tilde \omega_t \\ c_{t+1}^* & = \rho_c (- \tilde c_t \sin \lambda_c + \tilde c_t^* \cos \lambda_c) + \tilde \omega_t^* \\\end{split}\]
where \(\tilde \omega_t, \tilde \omega_t^* \overset{iid}{\sim} N(0, \sigma_{\tilde \omega}^2)\)
The parameter \(\lambda_c\) (the frequency of the cycle) is an additional parameter to be estimated by MLE.
If the cyclical effect is stochastic (stochastic_cycle=True), then there is another parameter to estimate (the variance of the error term - note that both of the error terms here share the same variance, but are assumed to have independent draws).
If the cycle is damped (damped_cycle=True), then there is a third parameter to estimate, \(\rho_c\).
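The damped, noise-free recursion scales a rotation by \(\rho_c\) at each step, so the cycle's amplitude \(\sqrt{c_t^2 + (c_t^*)^2}\) decays geometrically when \(\rho_c < 1\). A pure-Python sketch with assumed values \(\rho_c = 0.9\) and period 40:

```python
import math

rho, lam = 0.9, 2 * math.pi / 40   # assumed damping and frequency
c, c_star = 1.0, 0.0
amplitudes = []
for _ in range(5):
    amplitudes.append(math.hypot(c, c_star))
    c, c_star = (rho * (c * math.cos(lam) + c_star * math.sin(lam)),
                 rho * (-c * math.sin(lam) + c_star * math.cos(lam)))

# A rotation preserves the norm, so the amplitude shrinks by exactly
# the factor rho at every step: 1, 0.9, 0.81, ...
print([round(a, 6) for a in amplitudes])
```

With \(\rho_c = 1\) (no damping) the deterministic cycle would oscillate forever at constant amplitude; the stochastic terms then keep it excited at the frequency \(\lambda_c\).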
In order to achieve cycles with the appropriate frequencies, bounds are imposed on the parameter \(\lambda_c\) in estimation. These can be controlled via the keyword argument cycle_period_bounds, which, if specified, must be a tuple (lower, upper) of bounds on the period. The bounds on the frequency are then calculated from those bounds.
The default bounds, if none are provided, are selected in the following way:
If no date / time information is provided, the frequency is constrained to be between zero and \(\pi\), so the period is constrained to be in \([0.5, \infty]\).
If the date / time information is provided, the default bounds allow the cyclical component to be between 1.5 and 12 years; depending on the frequency of the endogenous variable, this will imply different specific bounds.
Following the fitting of the model, the unobserved cyclical component time series is available in the results class in the cycle attribute.
Irregular
The irregular components are independent and identically distributed (iid):\[\varepsilon_t \sim N(0, \sigma_\varepsilon^2)\]
Autoregressive Irregular
An autoregressive component (often used as a replacement for the white noise irregular term) can be specified as:\[\begin{split}\varepsilon_t = \rho(L) \varepsilon_{t-1} + \epsilon_t \\ \epsilon_t \sim N(0, \sigma_\epsilon^2)\end{split}\]
In this case, the AR order is specified via the autoregressive keyword, and the autoregressive coefficients are estimated.
Following the fitting of the model, the unobserved autoregressive component time series is available in the results class in the autoregressive attribute.
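One practical difference from a random-walk level: with \(|\rho| < 1\) and no shocks, the autoregressive component mean-reverts to zero rather than wandering. A pure-Python sketch with an assumed AR(1) coefficient:

```python
# Noise-free AR(1) sketch: with |rho| < 1 the component decays
# geometrically toward zero, unlike a random-walk level.
rho = 0.8
eps = 10.0
path = []
for _ in range(5):
    path.append(eps)
    eps = rho * eps

print([round(x, 6) for x in path])  # → [10.0, 8.0, 6.4, 5.12, 4.096]
```

This mean reversion is why an AR component is often used in place of (or alongside) the white-noise irregular term for serially correlated disturbances.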
Regression effects
Exogenous regressors can be passed to the exog argument. The regression coefficients will be estimated by maximum likelihood unless mle_regression=False, in which case the regression coefficients will be included in the state vector, where they are essentially estimated via recursive OLS.
If the regression_coefficients are included in the state vector, the recursive estimates are available in the results class in the regression_coefficients attribute.
References
R0058a7c6fc36-1
Durbin, James, and Siem Jan Koopman. 2012. Time Series Analysis by State Space Methods: Second Edition. Oxford University Press.
Methods
clone(endog[, exog])
filter(params[, transformed, …])
Kalman filtering
fit([start_params, transformed, …])
Fits the model by maximum likelihood via Kalman filter.
fit_constrained(constraints[, start_params])
Fit the model with some parameters subject to equality constraints.
fix_params(params)
Fix parameters to specific values (context manager)
from_formula(formula, data[, subset])
Not implemented for state space models
handle_params(params[, transformed, …])
hessian(params, *args, **kwargs)
Hessian matrix of the likelihood function, evaluated at the given parameters
impulse_responses(params[, steps, impulse, …])
Impulse response function
information(params)
Fisher information matrix of model.
initialize()
Initialize (possibly re-initialize) a Model instance.
initialize_approximate_diffuse([variance])
Initialize approximate diffuse
initialize_default([…])
initialize_known(initial_state, …)
Initialize known
initialize_statespace(**kwargs)
Initialize the state space representation
initialize_stationary()
Initialize stationary
loglike(params, *args, **kwargs)
Loglikelihood evaluation
loglikeobs(params[, transformed, …])
Loglikelihood evaluation
observed_information_matrix(params[, …])
Observed information matrix
opg_information_matrix(params[, …])
Outer product of gradients information matrix
predict(params[, exog])
After a model has been fit predict returns the fitted values.
prepare_data()
Prepare data for use in the state space representation
score(params, *args, **kwargs)
Compute the score function at params.
score_obs(params[, method, transformed, …])
Compute the score per observation, evaluated at params
set_conserve_memory([conserve_memory])
Set the memory conservation method
set_filter_method([filter_method])
Set the filtering method
set_inversion_method([inversion_method])
Set the inversion method
set_smoother_output([smoother_output])
Set the smoother output
set_stability_method([stability_method])
Set the numerical stability method
setup()
Setup the structural time series representation
simulate(params, nsimulations[, …])
Simulate a new time series following the state space model
simulation_smoother([simulation_output])
Retrieve a simulation smoother for the state space model.
smooth(params[, transformed, …])
Kalman smoothing
transform_jacobian(unconstrained[, …])
Jacobian matrix for the parameter transformation function
transform_params(unconstrained)
Transform unconstrained parameters used by the optimizer to constrained parameters used in likelihood evaluation
untransform_params(constrained)
Reverse the transformation
update(params[, transformed, …])
Update the parameters of the model
Properties
endog_names
Names of endogenous variables.
exog_names
The names of the exogenous variables.
param_names
(list of str) List of human readable parameter names (for parameters actually included in the model).
start_params
(array) Starting parameters for maximum likelihood estimation.
state_names
(list of str) List of human readable names for unobserved states.